If you’re driving your car from Portland to Merced, you probably rely on GPS to see where you are. But what if you’re driving your Moon rover from Oceanus Procellarum to the Sea of Tranquility? Actually, GPS should be fine — if this NASA research pans out.
Knowing exactly where you are in space, relative to other bodies anyway, is definitely a non-trivial problem. Fortunately the stars are fixed and by triangulating with them and other known landmarks, a spacecraft can figure out its location quite precisely.
But that’s so much work! Here on Earth we gave that up years ago, and now rely (perhaps too much) on GPS to tell us where we are within a few meters.
By creating our own fixed stars — satellites in precisely known medium Earth orbits — constantly emitting known signals, we made it possible for our devices to quickly sample those signals and immediately locate themselves.
That sure would be handy on the Moon, but a quarter of a million miles makes a lot of difference to a system that relies on ultra-precise timing and signal measurement. Yet there’s nothing theoretically barring GPS signals from being measured out there — and in fact, NASA has already done it at nearly half that distance with the MMS mission a few years ago.
“NASA has been pushing high-altitude GPS technology for years,” said MMS system architect Luke Winternitz in a NASA news release. “GPS around the Moon is the next frontier.”
Astronauts can’t just take their phones up there, of course. Our devices are calibrated for catching and calculating signals from satellites known to be in orbit above us and within a certain range of distances. The time for the signal to reach us from orbit is a fraction of a second, while on or near the Moon it would take perhaps a full second and a half. That may not sound like much, but it fundamentally affects how the receiving and processing systems have to be built.
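The scale of that timing gap is easy to sketch. Here is a minimal calculation assuming straight-line propagation at the speed of light (the ~20,200 km GPS orbital altitude and ~384,400 km mean Earth-Moon distance are standard public figures, not numbers from this article):

```python
# One-way signal travel time: t = d / c
C = 299_792_458  # speed of light in vacuum, m/s

def one_way_delay_s(distance_m: float) -> float:
    """Seconds for a radio signal to cover distance_m in vacuum."""
    return distance_m / C

GPS_ALTITUDE_M = 20_200e3   # approximate GPS orbital altitude
EARTH_MOON_M = 384_400e3    # approximate mean Earth-Moon distance

print(f"GPS satellite to ground: {one_way_delay_s(GPS_ALTITUDE_M) * 1e3:.0f} ms")
print(f"Earth to Moon:           {one_way_delay_s(EARTH_MOON_M):.2f} s")
```

The jump from tens of milliseconds to well over a second is the difference that forces receivers built around Earth-orbit timing assumptions to be redesigned for lunar distances.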
That’s precisely what the team at NASA Goddard has been working on addressing with a new navigation computer that uses a special high-gain antenna, a super-precise clock and other improvements over the earlier NavCube space GPS system and, of course, the terrestrial ones we all have in our phones.
The idea is to use GPS instead of relying on NASA’s network of ground and satellite measurement systems, which must exchange data with the spacecraft, eating up valuable bandwidth and power. Freeing up those systems could empower them to work on other missions and let more of the GPS-capable satellite’s communications be dedicated to science and other high-priority transmissions.
The team hopes to complete the lunar NavCube hardware by the end of the year and then find a flight to the Moon on which to test it as soon as possible. Fortunately, with Artemis gaining traction, it looks as if there will be no shortage of those.
Spanish soccer’s premier league, LaLiga, has netted itself a €250,000 (~$280k) fine for privacy violations of Europe’s General Data Protection Regulation (GDPR) related to its official app.
As we reported a year ago, users of the LaLiga app were outraged to discover the smartphone software does rather more than show minute-by-minute commentary of football matches — but can use the microphone and GPS of fans’ phones to record their surroundings in a bid to identify bars which are unofficially streaming games instead of coughing up for broadcasting rights.
Unwitting fans who hadn’t read the tea leaves of opaque app permissions took to social media to vent their anger at finding they’d been co-opted into an unofficial LaLiga piracy police force as the app repurposed their smartphone sensors to rat out their favorite local bars.
El Diario reports that the fine was issued by Spain’s data protection watchdog, the AEPD. A spokesperson for the watchdog confirmed the penalty but told us the full decision has not yet been published.
Per El Diario’s report, the AEPD found LaLiga failed to be adequately clear about how the app recorded audio, violating Article 5.1 of the GDPR — which requires that personal data be processed lawfully, fairly and in a transparent manner. It said LaLiga should have indicated to app users every time the app remotely switched on the microphone to record their surroundings.
Had LaLiga done so, that would have required some form of in-app notification once per minute whenever a football match was in play, since — once granted permission to record audio — the app does so for five seconds every minute while a league game is happening.
Instead the app only asks for permission to use the microphone twice per user (per LaLiga’s explanation).
The AEPD found the level of notification the app provides to users inadequate — pointing out, per El Diario’s report, that users are unlikely to remember what they have previously consented to each time they use the app.
It suggests active notification could be provided to users each time the app is recording, such as by displaying an icon that indicates the microphone is listening in, according to the newspaper.
The watchdog also found LaLiga to have violated Article 7.3 of the GDPR, which stipulates that when consent is the legal basis for processing personal data, users should have the right to withdraw that consent at any time. The LaLiga app, again, does not offer users an ongoing chance to withdraw consent to its spy-mode recording after the initial permission requests.
LaLiga has been given a month to correct the violations with the app. However, in a statement responding to the AEPD’s decision, the association has denied any wrongdoing — and said it plans to appeal the fine.
“LaLiga disagrees deeply with the interpretation of the AEPD and believes that it has not made the effort to understand how the technology [functions],” it writes. “For the microphone functionality to be active, the user has to expressly, proactively and on two occasions grant consent, so a lack of transparency or information about this functionality cannot be attributed to LaLiga.”
“LaLiga will appeal the decision in court to prove that it has acted in accordance with data protection regulations,” it adds.
A video produced by LaLiga to try to sell the spy mode function to fans following last year’s social media backlash claims it does not capture any personal data — and describes the dual permission requests to use the microphone as “an exercise in transparency”.
Clearly, the AEPD takes a very different view.
LaLiga’s argument against the AEPD’s decision that it violated the GDPR appears to rest on its suggestion that the watchdog does not understand the technology it’s using — which it claims “neither records, stores, nor listens to conversations”.
So it looks to be trying to push its own self-serving interpretation of what is and isn’t personal data. (Nor is it the only commercial entity attempting that, of course.)
In the response statement, which we’ve translated from Spanish, LaLiga writes:
The technology used is designed exclusively to generate a specific sound footprint (acoustic fingerprint). This fingerprint contains only 0.75% of the information, discarding the remaining 99.25%, so it is technically impossible to interpret voices or human conversations.
This fingerprint is transformed into an alphanumeric code (hash) that cannot be reversed to recreate the original sound. The technology’s operation is backed by an independent expert report which, among other arguments that favor our position, concludes that it “does not allow LaLiga to know the contents of any conversation or identify potential speakers”. Furthermore, it adds that this fraud control mechanism “does not store the information captured from the microphone of the mobile” and that “the information captured by the microphone of the mobile is subjected to a complex transformation process that is irreversible”.
In comments to El Diario, LaLiga also likens its technology to the Shazam app — which records audio in real time via the phone’s microphone and compares an acoustic fingerprint against a database to identify a song.
However, Shazam users manually activate its listening feature and are shown a visual ‘listening’ icon during the process, whereas LaLiga created an embedded spy mode that systematically switches itself on after a couple of initial permissions. So it’s perhaps not the best comparison.
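To make the distinction concrete, acoustic fingerprinting of the general kind both companies describe can be caricatured in a few lines. This is a toy sketch, not LaLiga’s or Shazam’s actual algorithm: an audio snippet is collapsed into a few coarse energy measurements, and those measurements are hashed, so the stored value cannot be inverted back into speech.

```python
import hashlib

def toy_fingerprint(samples: list[float], chunks: int = 8) -> str:
    """Collapse an audio snippet into a tiny, irreversible fingerprint.

    The snippet is split into `chunks` windows; only the average energy of
    each window survives (quantized to one of 16 levels), then the result
    is hashed. Almost all of the original information is discarded.
    """
    window = max(1, len(samples) // chunks)
    energies = []
    for i in range(0, window * chunks, window):
        block = samples[i:i + window]
        energies.append(sum(s * s for s in block) / len(block))
    top = max(energies) or 1.0
    levels = bytes(int(15 * e / top) for e in energies)  # 4 bits per chunk
    return hashlib.sha256(levels).hexdigest()

# The hash is deterministic for the same coarse energy profile,
# but cannot be reversed to reconstruct the waveform.
print(toy_fingerprint([0.1, -0.2, 0.3, 0.05] * 100))
```

Whether such a reduced, hashed representation still counts as personal data under the GDPR is, of course, exactly what LaLiga and the AEPD disagree about.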
LaLiga’s statement adds that the audio eavesdropping on fans’ surroundings is intended to “achieve a legitimate goal” of fighting piracy.
“LaLiga would not be acting diligently if it did not use all means and technologies at its fingertips to fight against piracy,” it writes. “It is a particularly relevant task taking into account the enormous magnitude of fraud in the marketing system, which is estimated at approximately 400 million euros per year.”
LaLiga also says it will not be making any changes to how the app functions because it already intends to remove what it describes to El Diario as “experimental” functionality at the end of the current football season, which ends June 30.
SpaceX’s next mission for its Falcon Heavy high-capacity rocket is set for June 24, when it’ll take off from NASA’s Kennedy Space Center in Florida with 20 satellites on board that comprise the Department of Defense’s Space Test Program-2. That’s not all it’ll carry, however: There will also be cargo pertaining to four NASA missions aboard the private launch vehicle, including materials that will support the Deep Space Atomic Clock, the Green Propellant Infusion Mission, and two payloads that will serve scientific missions.
NASA detailed all of these missions in a press conference today, going into more detail about what each will involve and why NASA is even pursuing this research to begin with.
Deep Space Atomic Clock
NASA’s Deep Space Atomic Clock mission, run from the Jet Propulsion Laboratory, will send a demonstration super-precise atomic clock into low-Earth orbit, where it will act as a proof of concept for using such clocks to deliver far better (orders of magnitude better) accuracy and precision compared with ground-based atomic clocks. This is a key ingredient for future deep space exploration, including crewed missions to both the Moon and Mars, since space-based atomic clocks should help greatly improve outer space navigation.
Jill Seubert, Deep Space Navigator for NASA, explained that this is the world’s first ion-based atomic space clock. “It’s about 50 times more stable than the GPS atomic clocks we use,” she said, adding that we currently have to navigate from Earth because the clocks on board spacecraft are really not very good at maintaining time accuracy.
Seubert noted that her job – Deep Space Navigator – is essentially a spacecraft pilot. “To put my job in context,” she said, “It’s like me standing here in LA today and shooting an arrow, and hitting a target the size of a quarter, and that quarter is sitting in Times Square in New York.”
The problem with piloting today, she noted, is fundamentally one of time – we currently need to measure the echo of a signal back from spacecraft in flight. To navigate space safely, Seubert and her peers effectively listen for the echo using instrumentation here, and measure to within 1 billionth of a second. The clocks needed to measure that accurately have been the size of a refrigerator, she noted. The new Deep Space Atomic Clock shrinks that to the size of a gallon of milk, making it feasible to include on board spacecraft.
That will enable one-way tracking, when paired with data gathered by an onboard camera, using a signal from Earth to spacecraft, or from spacecraft to Earth, with no round trip needed. This allows for more efficient tracking across all flights, because spacecraft do less time-sharing with the existing Deep Space Network. It also enables “self-driving spacecraft,” as Seubert put it, which require no direction at all from navigators on Earth.
That could even enable astronauts working on other planets to take advantage of something like a “Google Maps, Mars edition,” Seubert said, with the confidence to rely on the accuracy of the information and automated navigation systems that make use of this tech.
Use of Deep Space Atomic Clock-based navigation can also enable travel to locations so far away that two-way communication just isn’t feasible.
This research mission is the first space test of the technology; flying it in low-Earth orbit is a key step in proving its viability.
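The stakes of that one-billionth-of-a-second figure are easy to quantify: a timing error becomes a ranging error once multiplied by the speed of light. A generic back-of-the-envelope sketch (the nanosecond precision is the article’s; the code is an illustration, not NASA’s navigation software):

```python
C = 299_792_458  # speed of light in vacuum, m/s

def range_error_m(clock_error_s: float) -> float:
    """Position uncertainty contributed by a timing error of clock_error_s."""
    return C * clock_error_s

# A 1-nanosecond timing error corresponds to roughly 30 cm of range error,
# which is why deep-space navigation demands such stable clocks.
print(f"{range_error_m(1e-9) * 100:.0f} cm per nanosecond of clock error")
```

At interplanetary distances, errors compound over long flight times, so a clock 50 times more stable translates directly into a tighter arrival corridor.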
Green Propellant Infusion Mission
The Green Propellant Infusion Mission, or GPIM for short, will demonstrate a ‘green’ alternative to the usual rocket fuel used in launch vehicles and spacecraft. It’s being run in tandem with Ball Aerospace, and will see a small satellite loaded with this alternative fuel (a hydroxylammonium nitrate blend, for the chemists in the crowd) use it to demonstrate its viability as a space-based propellant. This is the first time the green fuel alternative will have been tested in space.
“Most people that work on spacecraft systems these days realize that when you’re flying spacecraft we’re relying heavily on heritage [technologies],” explained Christopher McClean, Principal Investigator for NASA’s GPIM at Ball Aerospace. The goal here is to help overcome industry biases that tend to favor these methods with proof of the viability of alternatives. This fuel is also non-toxic, as opposed to typical, highly toxic spacecraft propellants like hydrazine.
This is the third flight of this specific design of spacecraft (called the BCP-100), which is roughly the size of a refrigerator and has room for an experimental payload – this time around it’s going to be driven by the new green propulsion subsystem. This spacecraft will have five thrusters on board that will help test this propellant through various maneuvers to be performed. The combined capabilities of the propellant can also return the craft to Earth’s atmosphere at the end of its mission.
“We’re not leaving any orbital debris up there, which is part of the ‘green’ of this experiment, in my opinion” noted McClean. Debate has renewed of late about the responsibilities of launch and spacecraft companies regarding orbital debris, sparked in part by SpaceX’s recent launch of part of its Starlink satellite constellation.
This new fuel is not only better performing, but actually easier to work with because of its non-toxic nature, and it can be transported inside spacecraft, which opens up the possibility of shipping fueled craft and of using the propellant safely in research and academic environments — huge for unlocking work and study potential.
Space Environment Testbed
The Space Environment Testbed (SET – NASA loves acronyms) project will fly through medium Earth orbit to help determine whether this region of space (called the ‘slot’ because it slots between two radiation belts) has less radiation than lower orbits, which could make it a prime locale for navigation and communication satellites that are negatively affected by the radiation present in low-Earth orbit.
NASA Heliophysics Division Director Nicky Fox explained how the SET payloads will be hosted on the Air Force Defence Science and Technology Group mission also going up aboard Falcon Heavy. She said that there will be four different kinds of hardware, designed to demonstrate how they perform under exposure to radiation.
It’s “very important for us to demonstrate how we can harden these,” and work with problems they encounter under these conditions, Fox explained. “We don’t want to be launching a battleship when a dinghy will do – these will help us look at the right kind of materials” and how best to configure them when designing tools and instruments for space-based use.
The experiment will also help with more research into the medium-orbit space itself, and why it behaves the way that it does. “Why is there a slot region, why does it behave like it does, and why does it occasionally get completely filled with particle activity,” Fox offered as the kinds of questions it’ll help provide answers for.
A render of the Space Environment Testbed.
Enhanced Tandem Beacon Experiments
The fourth and final experiment aboard the upcoming Falcon Heavy launch is the Enhanced Tandem Beacon Experiment, built around two dedicated NASA-operated CubeSats. The ‘enhanced’ part comes from work being done jointly with COSMIC-2 (Constellation Observing System for Meteorology, Ionosphere and Climate-2) – six micro satellites that will act as a constellation and monitor Earth continually to gather atmospheric data that can be used to forecast weather, monitor the climate, and observe and research space weather – yes, space has weather.
Rick Doe, a Senior Research Physicist at SRI International, explained that radio signals can be corrupted when they cross the ionosphere, and that “radio waves are particularly susceptible to distortions” when they encounter disruptions to ions. We depend on these radio signals for navigation on Earth, including for commercial aircraft, so this distortion can be a key determinant for things like autonomously navigating aircraft. Being able to determine when signals are particularly distorted by concentrated activity in the ionosphere can help ensure autonomous navigation accounts for and forecasts these disruptions in order to mitigate their impact.
It’s not about countering the effect of this activity – Doe notes that it’s like a tornado in terms of terrestrial weather: you don’t try to counter the tornado, you plan around it and its impact when you’re able to predict its occurrence. The TBECs program will provide similar prediction and mitigation abilities for space weather.
SpaceX’s mission is currently set for launch on June 24 at 11:30 PM ET, and it’ll carry all of the above on behalf of client NASA. We’ll have coverage of the launch, so check back later this month for more.
MIT researchers have created a new autonomous robot boat prototype — which they have named “roboats” to my everlasting glee — that can target and combine with one another Voltron-style to create new structures. Said structures could be bigger boats, but MIT is thinking a bit more creatively — it envisions a fleet of these being able to join up to form on-demand urban infrastructure, including stages for concerts, walking bridges or even entire outdoor markets.
The roboats would of course be able to act as autonomous water taxis and ferries, which could be particularly useful in a setting like Amsterdam, which is why MIT teamed up with Amsterdam’s Institute for Advanced Metropolitan Solutions on this. Equipped with sensors, sub-aquatic thrusters, GPS, cameras and tiny computer brains, the roboats can currently follow a pre-determined path, but testing on newer 3D-printed prototypes introduced a level of autonomy that can accomplish a lot more.
New tests focused on a custom latching system that can connect to specific points with millimeter accuracy, using a trial-and-error algorithm to make sure the boats connect to their target correctly. The initial use case in Amsterdam that MIT identified is overnight garbage collection, where these could act as mini barges working the canals to quickly and easily clear refuse left out by residents and store owners.
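That latching behavior can be caricatured as a measure-and-correct retry loop that nudges the boat until the residual offset is within tolerance. This is a toy sketch: the gain, tolerance, and simple proportional correction are all illustrative assumptions, not MIT’s published controller.

```python
def dock(offset_mm: float, tolerance_mm: float = 1.0, gain: float = 0.5,
         max_attempts: int = 50) -> tuple[bool, int]:
    """Iteratively reduce the offset to the latch pin; abort after max_attempts.

    Each attempt corrects a fraction (`gain`) of the measured offset,
    mimicking a measure-correct-remeasure trial-and-error loop.
    Returns (latched, number of corrections applied)."""
    for attempt in range(1, max_attempts + 1):
        if abs(offset_mm) <= tolerance_mm:
            return True, attempt - 1
        offset_mm -= gain * offset_mm  # apply a partial correction
    return False, max_attempts

latched, corrections = dock(offset_mm=120.0)
print(f"latched={latched} after {corrections} corrections")
```

The real system closes this loop with camera and GPS feedback rather than a known offset, but the shape of the control problem is the same: shrink the error each cycle until the pin engages.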
Longer-term, the vision is to see what kind of additional configurations might be possible, including larger platforms that can support people on board, and “tentacle-like rubber grippers that tighten around the pin — like a squid grasping its prey” to improve the latching mechanism in a way inspired by a somewhat terrifying visual.
For the first time, Amazon today showed off its newest fully electric delivery drone at its first re:Mars conference in Las Vegas. Chances are, it neither looks nor flies like what you’d expect from a drone. It’s an ingenious hexagonal hybrid design, though, that has very few moving parts and uses the shroud that protects its blades as its wings when it transitions from vertical, helicopter-like flight at takeoff to its airplane-like mode.
These drones, Amazon says, will start making deliveries in the coming months, though it’s not yet clear where exactly that will happen.
What’s maybe even more important, though, is that the drone is chock-full of sensors and a suite of compute modules that run a variety of machine learning models to keep the drone safe. Today’s announcement marks the first time Amazon is publicly talking about those visual, thermal and ultrasonic sensors, which it designed in-house, and how the drone’s autonomous flight systems maneuver it to its landing spot. The focus here was on building a drone that is as safe as possible and able to be independently safe. Even when it’s not connected to a network and it encounters a new situation, it’ll be able to react appropriately and safely.
When you see it fly in airplane mode, it looks a little bit like a TIE fighter, where the core holds all the sensors and navigation technology, as well as the package. The new drone can fly up to 15 miles and carry packages that weigh up to five pounds.
This new design is quite a departure from earlier models. I got a chance to see it ahead of today’s announcement and I admit that I expected a far more conventional design — more like a refined version of the last, almost sled-like, design.
Amazon’s last generation of drones looked very different.
Besides the cool factor of the drone, though, which is probably a bit larger than you may expect, what Amazon is really emphasizing this week is the sensor suite and safety features it developed for the drone.
Ahead of today’s announcement, I sat down with Gur Kimchi, Amazon’s VP for its Prime Air program, to talk about the progress the company has made in recent years and what makes this new drone special.
“Our sense and avoid technology is what makes the drone independently safe,” he told me. “I say independently safe because that’s in contrast to other approaches where some of the safety features are off the aircraft. In our case, they are on the aircraft.”
Kimchi also stressed that Amazon designed virtually all of the drone’s software and hardware stack in-house. “We control the aircraft technologies from the raw materials to the hardware, to software, to the structures, to the factory to the supply chain and eventually to the delivery,” he said. “And finally the aircraft itself has controls and capabilities to react to the world that are unique.”
(JORDAN STEAD / Amazon)
What’s clear is that the team tried to keep the actual flight surfaces as simple as possible. There are four traditional airplane control surfaces and six rotors. That’s it. The autopilot, which evaluates all of the sensor data and which Amazon also developed in-house, gives the drone six degrees of freedom to maneuver to its destination. The angled box at the center of the drone, which houses most of the drone’s smarts and the package it delivers, doesn’t pivot. It sits rigidly within the aircraft.
It’s unclear how loud the drone will be. Kimchi would only say that it’s well within established safety standards and that the profile of the noise also matters. He likened it to the difference between hearing a dentist’s drill and classical music. Either way, though, the drone is likely loud enough that it’s hard to miss when it approaches your backyard.
To see what’s happening around it, the new drone uses a number of sensors and machine learning models — all running independently — that constantly monitor the drone’s flight envelope (which, thanks to its unique shape and controls, is far more flexible than that of a regular drone) and environment. These include regular camera images and infrared cameras to get a view of its surroundings. There are multiple sensors on all sides of the aircraft so that it can spot things that are far away, like an oncoming aircraft, as well as objects that are close, when the drone is landing, for example.
The drone also uses various machine learning models to, for example, detect other air traffic around it and react accordingly, or to detect people in the landing zone or to see a line over it (which is a really hard problem to solve, given that lines tend to be rather hard to detect). To do this, the team uses photogrammetrical models, segmentation models and neural networks. “We probably have the state of the art algorithms in all of these domains,” Kimchi argued.
Whenever the drone detects an object or a person in the landing zone, it obviously aborts — or at least delays — the delivery attempt.
“The most important thing the aircraft can do is make the correct safe decision when it’s exposed to an event that isn’t in the planning — that it has never been programmed for,” Kimchi said.
The team also uses a technique known as Visual Simultaneous Localization and Mapping (VSLAM), which helps the drone build a map of its current environment, even when it doesn’t have any other previous information about a location or any GPS information.
“That combination of perception and algorithmic diversity is what we think makes our system uniquely safe,” said Kimchi. As the drone makes its way to the delivery location or back to the warehouse, all of the sensors and algorithms always have to be in agreement. When one fails or detects an issue, the drone will abort the mission. “Every part of the system has to agree that it’s okay to proceed,” Kimchi said.
What Kimchi stressed throughout our conversation is that Amazon’s approach goes beyond redundancy, which is a pretty obvious concept in aviation and involves having multiple instances of the same hardware on board. Kimchi argues that having a diversity of sensors that are completely independent of each other is also important. The drone only has one angle of attack sensor, for example, but it also has a number of other ways to measure the same value.
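That unanimity-plus-diversity principle can be caricatured as a veto gate across independent checks. This is a hypothetical sketch: the check names and logic are illustrative, since Amazon has not published its actual decision code.

```python
from typing import Callable

# Each check is an independent source of evidence; names are illustrative.
SensorCheck = Callable[[], bool]

def clear_to_land(checks: dict[str, SensorCheck]) -> tuple[bool, list[str]]:
    """Proceed only if every independent check agrees; any veto aborts.

    Returns (decision, names of the checks that vetoed)."""
    vetoes = [name for name, check in checks.items() if not check()]
    return (len(vetoes) == 0, vetoes)

ok, vetoes = clear_to_land({
    "camera_zone_empty": lambda: True,
    "infrared_no_heat_signature": lambda: True,
    "sonar_no_obstacle": lambda: False,   # one dissent is enough to abort
})
print("land" if ok else f"abort: {vetoes}")
```

The point of the diversity argument is that the checks fail in different ways — a camera fooled by lighting is unlikely to fail in the same way as sonar — so requiring unanimous agreement buys more safety than simply duplicating one sensor.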
Amazon isn’t quite ready to delve into all the details of what the actual on-board hardware looks like. Kimchi did tell me, though, that the system uses more than one operating system and CPU architecture.
It’s the integration of all of those sensors, AI smarts and the actual design of the drone that makes the whole unit work. At some point, though, things will go wrong. The drone can easily handle a rotor that stops working, which is pretty standard these days. In some circumstances, it can even handle two failed units. And unlike most other drones, it can glide if necessary, just like any other airplane. But when it needs to find a place to land, its AI smarts kick in and the drone will try to find a safe place to land, away from people and objects — and it has to do so without having any prior knowledge of its surroundings.
To get to this point, the team actually used an AI system to evaluate more than 50,000 different configurations. Just the computational fluid dynamics simulations took up 30 million hours of AWS compute time (it’s good to own a large cloud when you want to build a novel, highly optimized drone, it seems). The team also ran millions of simulations, of course, with all of the sensors, and looked at all of the possible positions and sensor ranges — and even different lenses for the cameras — to find an optimal solution. “The optimization is what is the right, diverse set of sensors and how they are configured on the aircraft,” Kimchi noted. “You always have both redundancy and diversity, both from the physical domain — sonar versus photons — and the algorithmic domain.”
The team also ran thousands of hardware-in-the-loop simulations where all the flight surfaces are actuating and all the sensors are perceiving the simulated environment. Here, too, Kimchi wasn’t quite ready to give away the secret sauce the team uses to make that work.
And the team obviously tested the drones in the real world to validate its models. “The analytical models, the computational models are very rich and are very deep, but they are not calibrated against the real world. The real world is the ultimate random event generator,” he said.
It remains to be seen where the new drone will make its first deliveries. That’s a secret Amazon also isn’t quite ready to reveal yet. That will happen within the next few months, though. Amazon started drone deliveries in England a while back, so that’s an obvious choice, but there’s no reason the company couldn’t opt for another country as well. The U.S. seems like an unlikely candidate, given that the regulations there are still in flux, but maybe that’s a problem that will be solved by then, too. Either way, what once looked like a bit of a Black Friday stunt may just land in your backyard sooner than you think.
Apple will soon let you grant apps access to your iPhone’s location just once.
Until now, there were three options — “always,” “never,” or “while using,” meaning an app could be collecting your real-time location as you’re using it.
Apple said the “just once” location access is a small change — granted — but one that’s likely to appeal to the more privacy-minded folk.
“For the first time, you can share your location to an app — just once — and then require it to ask you again next time it wants,” said Apple software engineering chief Craig Federighi at its annual developer conference on Monday.
That’s going to be helpful for those who download an app that requires your immediate location, but you don’t want to give it persistent or ongoing access to your whereabouts.
On top of that, Apple said that the apps that you do grant location access to will also have that information recorded on your iPhone in a report style, “so you’ll know what they are up to,” said Federighi.
Apps don’t always use your GPS to figure out where you are. All too often, apps use your Wi-Fi network information, IP address, or even Bluetooth beacon data to figure out where you physically are in the world so they can better target you with ads. Federighi said Apple will be “shutting the door on that abuse” as well.
The new, more granular location-access option will arrive in iOS 13, expected out later this year.
In many large cities across Africa, motorcycle taxis are as common as yellow cabs in New York.
That includes Lagos, Nigeria, where ride-hail startup Gokada has raised a $5.3 million Series A round to grow its two-wheel transit business.
Gokada has trained and on-boarded over 1,000 motorcycles and their pilots onto its app, which connects commuters to moto-taxis and the company’s signature green, DOT-approved helmets.
The startup has completed nearly 1 million rides since it was co-founded in 2018 by Fahim Saleh—a Bangladeshi entrepreneur who previously founded and exited Pathao, a motorcycle, bicycle, and car transportation company.
Gokada will use the financing to increase its fleet and ride volume, while developing a network to offer goods and services to its drivers. “We’re going to start a Gokada club in each of the cities with a restaurant where drivers can relax, and we’ll experiment with a Gokada Shop, where drivers can get things they need on a regular basis, such as plantains, yams, and rice,” Saleh told TechCrunch.
The startup differs from other ride-hail ventures in that it doesn’t split fare revenue with drivers. Gokada charges drivers a flat fee of 3,000 Nigerian Naira a day (around $8) to work on its platform. The company is looking to generate a larger share of its revenue from building a commercial network around its rider community.
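The flat-fee model is simple arithmetic: drivers keep every fare, so at the figures cited in this article (a roughly $8 daily fee and a $1.85 average fare) the break-even point is only a handful of rides. A quick sketch of that back-of-the-envelope math:

```python
import math

DAILY_FEE_USD = 8.00      # ~3,000 Naira flat fee, per the article
AVG_FARE_USD = 1.85       # Gokada's average fare, per the article

def breakeven_rides(fee: float = DAILY_FEE_USD, fare: float = AVG_FARE_USD) -> int:
    """Rides needed before a driver's fares cover the flat daily fee."""
    return math.ceil(fee / fare)

def daily_takehome(rides: int, fee: float = DAILY_FEE_USD,
                   fare: float = AVG_FARE_USD) -> float:
    """Driver keeps every fare; Gokada takes only the flat fee."""
    return rides * fare - fee

print(breakeven_rides())            # rides needed to cover the daily fee
print(round(daily_takehome(20), 2))  # a busy day's take-home, in USD
```

Beyond the break-even point, every additional ride is pure driver revenue, which is the stickiness Gokada is betting on versus commission-based rivals.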
“We don’t do anything with the fares. We want to create an Amazon prime type membership…and ecosystem around the driver where we’re going to provide them more and more services, such as motorcycle insurance, maintenance, personal life-insurance, micro-finance loans,” Saleh said.
“We’re trying to provide a network of great services for our drivers that makes them stick with us, and not necessarily see a reason to switch to other platforms,” said Saleh.
Competition among those platforms is heating up, as global players enter Africa’s motorcycle taxi market and local startups raise VC and expand to new countries.
Uber began offering a two-wheel transit option in East Africa in 2018, around the same time Bolt (previously Taxify) started motorcycle taxi service in Kenya.
Safeboda, for its part, will use its latest round to further expand in East Africa and into Nigeria in the near future, the startup’s CEO Maxime Diedonne confirmed to TechCrunch.
In Nigeria, Gokada faces a competitor in local startup MAX.ng, which offers mobile-based passenger and logistics delivery services.
Overall, Africa’s motorcycle taxi market is becoming a significant sub-sector in the continent’s e-transport startup landscape. Two-wheel transit startups are vying to digitize a share of Africa’s boda boda and okada markets (the names for motorcycle taxis in East and West Africa, respectively), which represent a collective revenue pool of $4 billion, expected to double to $9 billion by 2021, according to a TechSci study.
“There is a formalization of an informal sector play here…to make it safer and higher quality,” Gokada investor Nazar Yasin of Rise Capital told TechCrunch.
The appeal to passengers is the lower cost of motorbike transit compared to buses or cabs ($1.85 is Gokada’s average fare) and the ability of two-wheelers to cut through the heavy congestion in cities such as Lagos and Nairobi.
A notable facet of motorcycle ride-hail companies in Africa is their push to better organize a space with a reputation for being somewhat chaotic and downright dangerous (see Nigeria’s past bans on the sector entirely, due to safety).
For Gokada that includes training courses and certification of riders, the ability to track trips and safety stats from the app, and quality control for motorcycles—something that’s been lacking in East and West Africa’s non-digital moto-taxi space.
The company’s rider program offers a way for drivers to buy, own, and maintain their motorcycles as they earn. Gokada has partnered with Indian motorcycle maker TVS Motors to create a custom version of the company’s TVS Apache motorcycles for Gokada drivers.
Gokada is also experimenting with adding sensors to its fleet to better track safety standards. “We’re looking at seat sensors and another GPS sensor to track things like ‘did this driver add more than one passenger on the bike’ and all that data will feed back into our servers,” Saleh said.
The company won’t enter any new countries in Africa in the near future. “We plan to expand all over Nigeria. We think it’s a large enough market for now,” said Gokada CEO Fahim Saleh. Nigeria is Africa’s most populous nation (190 million people) and largest economy.
Thousands of TP-Link routers are vulnerable to a bug that can be used to remotely take control of the device, but it took over a year for the company to publish the patches on its website.
The vulnerability allows any low-skilled attacker to remotely gain full access to an affected router. The exploit relies on the router’s default password, which many users never change.
In the worst-case scenario, an attacker could target vulnerable devices on a massive scale, using a mechanism similar to how botnets like Mirai worked: scouring the web and hijacking routers that still use default passwords like “admin” and “pass”.
Andrew Mabbitt, founder of U.K. cybersecurity firm Fidus Information Security, first discovered and disclosed the remote code execution bug to TP-Link in October 2017. TP-Link released a patch a few weeks later for the vulnerable WR940N router, but Mabbitt warned TP-Link again in January 2018 that another router, TP-Link’s WR740N, was also vulnerable to the same bug because the company reused vulnerable code between devices.
TP-Link said the vulnerability was quickly patched in both routers. But when we checked, the patched firmware for the WR740N wasn’t available on the website.
When asked, a TP-Link spokesperson said the update was “currently available when requested from tech support,” but wouldn’t explain why. Only after TechCrunch reached out did TP-Link update the firmware page to include the latest security update.
Top countries with vulnerable WR740N routers. (Image: Shodan)
Routers have long been notorious for security problems. Because a router sits at the heart of any network, a flaw affecting it can have disastrous effects on every connected device. By gaining complete control over the router, Mabbitt said, an attacker could wreak havoc on a network. Modifying the router’s settings affects everyone connected to it: altering the DNS settings, for example, could trick users into visiting a fake page that steals their login credentials.
TP-Link declined to disclose how many potentially vulnerable routers it had sold, but said that the WR740N had been discontinued a year earlier in 2017. When we checked two search engines for exposed devices and databases, Shodan and Binary Edge, each suggested there are anywhere between 129,000 and 149,000 devices on the internet — though the number of vulnerable devices is likely far lower.
Mabbitt said he believed TP-Link still had a duty of care to alert customers of the update if thousands of devices are still vulnerable, rather than hoping they will contact the company’s tech support.
Both the U.K. and the U.S. state of California are set to soon require companies to sell devices with unique default passwords to prevent botnets from hijacking internet-connected devices at scale and using their collective internet bandwidth to knock websites offline.
In 2016, the Mirai botnet downed Dyn, a domain name service giant, which knocked dozens of major sites offline for hours — including Twitter, Spotify and SoundCloud.
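The unique-default-password requirement could work along these lines: each unit derives its factory default from its serial number and a secret manufacturing key, so no two devices ship with the same credential. The function and names below are a hypothetical sketch, not any vendor's actual scheme.

```python
import hashlib
import hmac
import secrets

def default_password(serial: str, factory_key: bytes) -> str:
    """Derive a unique, reproducible default password for one device.

    Hypothetical sketch: HMAC the device serial with a key held only
    by the manufacturer, then encode part of the digest using an
    unambiguous alphabet. Unlike a shared "admin"/"pass" default, a
    Mirai-style scanner can't guess this without the factory key.
    """
    digest = hmac.new(factory_key, serial.encode(), hashlib.sha256).digest()
    alphabet = "abcdefghjkmnpqrstuvwxyz23456789"  # no 0/O or 1/l lookalikes
    return "".join(alphabet[b % len(alphabet)] for b in digest[:10])

factory_key = secrets.token_bytes(32)  # kept secret at the factory
print(default_password("WR740N-SN0001", factory_key))
```

Because the derivation is deterministic, support staff holding the key can recompute a device's default from its serial without storing a password database.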
U.K. space accelerator Seraphim Space Camp launched last year as the first ever U.K. SpaceTech accelerator. Being, frankly, the only accelerator of its type, it has quickly shored up a number of partnerships and hoovered up many of the startups in the… space.
As is traditional with accelerators, it’s unveiled its latest cohort of startups.
The “Mission 3” accelerator programme consists of seven new startups and will run for nine weeks, culminating in an Investor Day. The first two cohorts (16 companies) have, says Seraphim, collectively raised or had offers of around £20m in private investment and grants over the last nine months, and have created 58 jobs since completing the programme.
The initiative has a number of high-profile partners, such as the MoD’s Defence Science and Technology Lab (Dstl), unmanned aircraft systems company AeroVironment, ground station services provider Kongsberg Satellite Services and leading satellite provider Eutelsat. They join returning partners Airbus, Rolls-Royce, Dentons, Cyient, Inmarsat, the European Space Agency and SA Catapult.
Here’s a run-down of the companies in their own words:
aXenic is a global leader in the design, development and production of optical modulators for communications and sensing. It has developed a revolutionary optical modulator which is a vital component to transfer high quantities of data through satellite links. Its strongly patented solution is a fraction of the weight and size of existing solutions whilst still providing greater bandwidth.
ConstelIR (Fellowship) is a Fraunhofer spin-out. It aims to be the first company to map precise on-Earth temperatures from a constellation of satellites it plans to launch. This orbital temperature-monitoring system will cost 3% of existing solutions without losing measurement accuracy.
Hawa Dawa combines its proprietary IoT smart-sensor data with other sources of data (including satellite data) to deliver highly accurate air-quality data. Two years of traction with Siemens, IBM, Swisscom and Swiss Post, as well as eight cities in Germany, demonstrates that the company’s end-to-end solution is genuinely needed by the market.
Methera Global plans to launch a constellation of flexible, dynamic satellites that focus all of their bandwidth from multiple satellites onto small target regions, providing a tenfold increase in capacity over planned systems. Initial customers include emergency response services helping communities who don’t currently have access to the internet: some 4 billion of the world’s population.
Trik is enterprise drone 3D-mapping software for structural inspection. Its platform transforms drone data into a customisable 3D model that can be updated on a regular basis.
Xonaspace (Fellowship) uses an XPS and LEO satellite constellation for extremely precise GPS systems, with early end applications including autonomous vehicles.
Veoware’s vision is to industrialise the space sector so that hardware, software and data become easily accessible and rapidly available for the benefit of humankind. This is made possible by its team of world-renowned experts with 100+ years of combined experience.
British science fiction writer Sir Arthur C. Clarke once said, “Any sufficiently advanced technology is indistinguishable from magic.”
Augmented reality has the potential to instill awe and wonder in us just as magic would. For the very first time in the history of computing, we now have the ability to blur the line between the physical world and the virtual world. AR promises to bring forth the dawn of a new creative economy, where digital media can be brought to life and given the ability to interact with the real world.
AR experiences can seem magical but what exactly is happening behind the curtain? To answer this, we must look at the three basic foundations of a camera-based AR system like our smartphone.
How do computers know where they are in the world? (Localization + Mapping)
How do computers understand what the world looks like? (Geometry)
How do computers understand the world as we do? (Semantics)
Part 1: How do computers know where they are in the world? (Localization)
Mars Rover Curiosity taking a selfie on Mars. Source: https://www.nasa.gov/jpl/msl/pia19808/looking-up-at-mars-rover-curiosity-in-buckskin-selfie/
When NASA scientists put a rover on Mars, they needed a way for the robot to navigate itself on a different planet without the use of a global positioning system (GPS). They came up with a technique called Visual Inertial Odometry (VIO) to track the rover’s movement over time without GPS. This is the same technique that our smartphones use to track their spatial position and orientation.
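At its core, the inertial half of VIO is dead reckoning: integrate turn rates and accelerations over time to propagate a pose estimate. The toy 2D sketch below (names and numbers are illustrative, not NASA's or Apple's actual code) shows just that integration step; a real VIO system fuses it with camera feature tracks, typically in a Kalman filter, to cancel the drift that pure integration accumulates.

```python
from math import cos, sin

def propagate(state, accel_body, gyro_z, dt):
    """One dead-reckoning step for a 2D pose (x, y, heading, vx, vy).

    accel_body is measured in the body frame; we rotate it into the
    world frame using the current heading, then integrate twice.
    """
    x, y, theta, vx, vy = state
    theta += gyro_z * dt                               # integrate turn rate
    ax = cos(theta) * accel_body[0] - sin(theta) * accel_body[1]
    ay = sin(theta) * accel_body[0] + cos(theta) * accel_body[1]
    vx += ax * dt                                      # acceleration -> velocity
    vy += ay * dt
    x += vx * dt                                       # velocity -> position
    y += vy * dt
    return (x, y, theta, vx, vy)

# Constant 1 m/s^2 forward acceleration, no rotation, for 1 second:
state = (0.0, 0.0, 0.0, 0.0, 0.0)
for _ in range(100):
    state = propagate(state, accel_body=(1.0, 0.0), gyro_z=0.0, dt=0.01)
print(state[0])  # ~0.5 m, matching x = a*t^2/2 up to discretization error
```

Tiny sensor biases compound through this double integration, which is precisely why VIO continually corrects the inertial estimate against what the camera sees.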