These robo-ants can work together in swarms to navigate tricky terrain

While the agility of a Spot or Atlas robot is something to behold, there’s a special merit reserved for tiny, simple robots that work not as a versatile individual but as an adaptable group. These “tribots” are built on the model of ants, and like them can work together to overcome obstacles with teamwork.

Developed by EPFL and Osaka University, tribots are tiny, light and simple, moving more like inchworms than ants, but able to fling themselves up and forward if necessary. The bots themselves and the system they make up are modeled on trap-jaw ants, which alternate between crawling and jumping, and work (as do most other ants) in fluid roles like explorer, worker and leader. Each robot is not itself very intelligent, but they are controlled as a collective that deploys their abilities intelligently.

In this case a team of tribots might be expected to get from one end of a piece of complex terrain to another. An explorer could move ahead, sensing obstacles and relaying their locations and dimensions to the rest of the team. The leader can then assign worker units to head over to try to push the obstacles out of the way. If that doesn’t work, an explorer can try hopping over it — and if successful, it can relay its telemetry to the others so they can do the same thing.
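The coordination loop described above (explorer senses and relays an obstacle, the leader assigns workers to push, and the fallback is hopping over and sharing telemetry) can be sketched in a few lines. This is purely illustrative: the roles, messages and `Obstacle` model here are invented for the sketch, not taken from the actual tribot control software.

```python
# Toy model of the leader/explorer/worker coordination described above.
# All names and the obstacle model are hypothetical illustrations.

from dataclasses import dataclass, field

@dataclass
class Obstacle:
    position: int       # where the explorer found it
    height_mm: float    # relayed dimensions
    pushable: bool      # can workers shove it aside?

@dataclass
class Team:
    log: list = field(default_factory=list)

    def explore(self, obstacle: Obstacle) -> None:
        # Explorer senses the obstacle and relays its location and size.
        self.log.append(
            f"explorer: obstacle at {obstacle.position}, {obstacle.height_mm} mm"
        )

    def resolve(self, obstacle: Obstacle) -> str:
        # Leader first sends workers to push; if that won't work, an
        # explorer hops over and relays telemetry so the rest can follow.
        if obstacle.pushable:
            self.log.append("leader: workers pushed obstacle aside")
            return "pushed"
        self.log.append("leader: explorer hopped; telemetry relayed to team")
        return "hopped"

team = Team()
rock = Obstacle(position=3, height_mm=20.0, pushable=False)
team.explore(rock)
outcome = team.resolve(rock)
```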

Fly, tribot, fly!

It’s all done quite slowly at this point — you’ll notice that in the video, much of the action is happening at 16x speed. But rapidity isn’t the idea here; similar to Squishy Robotics’ creations, it’s more about adaptability and simplicity of deployment.

The little bots weigh only 10 grams each, and are easily mass-produced, as they’re basically PCBs with some mechanical bits and grip points attached — “a quasi-two-dimensional metamaterial sandwich,” according to the paper. If they only cost (say) a buck each, you could drop dozens or hundreds on a target area and over an hour or two they could characterize it, take measurements and look for radiation or heat hot spots, and so on.

If they moved a little faster, the same logic and a modified design could let a set of robots emerge in a kitchen or dining room to find and collect crumbs or scoot plates into place. (Ray Bradbury called them “electric mice” or something in “There Will Come Soft Rains,” one of my favorite stories of his. I’m always on the lookout for them.)

Swarm-based bots have the advantage of not failing catastrophically when something goes wrong — when a robot fails, the collective persists, and it can be replaced as easily as a part.

“Since they can be manufactured and deployed in large numbers, having some ‘casualties’ would not affect the success of the mission,” noted EPFL’s Jamie Paik, who co-designed the robots. “With their unique collective intelligence, our tiny robots can demonstrate better adaptability to unknown environments; therefore, for certain missions, they would outperform larger, more powerful robots.”

It raises the question, in fact, of whether the sub-robots themselves constitute a sort of uber-robot. (This is more of a philosophical question, first raised in the case of the Constructicons and Devastator. Transformers was ahead of its time in many ways.)

The robots are still in prototype form, but even as they are, they constitute a major advance over other “collective”-type robot systems. The team documents its advances in a paper published in the journal Nature.

MIT’s human-mirroring robot nails the Bottle Cap Challenge

MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) has a robot that can mirror the actions of a human just by watching their bicep. This has a number of practical applications, including potentially assisting a person to lift large or awkward objects, but it can also be applied to… more topical pursuits.

The robot, which CSAIL calls RoboRaise, managed to successfully take its cue from its human pal and perform the Bottle Cap Challenge, the stunt going around right now in which people (including tons of celebrities) try, with varying success, to demonstrate extreme motor control by knocking the threaded cap off a plastic bottle with a well-placed kick.

RoboRaise can’t kick because, well, it’s just an arm. But watching it spin the cap off the bottle without disturbing the bottle itself is still an impressive demonstration of how well its mimicry and soft robotic hand appendage work. The robot itself seems pretty pleased with its own prowess, too, judging by the smug smile on the face on its display.

This new autonomous startup has designed its delivery robot to conquer winter

Refraction, a new autonomous delivery robot company that came out of stealth Wednesday at TC Sessions: Mobility, sees opportunity in areas most AV startups are avoiding: regions with the worst weather.

The company, founded by University of Michigan professors Matthew Johnson-Roberson and Ram Vasudevan, calls its REV-1 delivery robot the “Goldilocks of autonomous vehicles.”

The pair have a long history with autonomous vehicles. Johnson-Roberson got his start by participating in the DARPA Grand Challenge in 2004 and stayed in academia, researching and then teaching robotics. Vasudevan’s career included a stint at Ford working on control algorithms for autonomous operations on snow and ice. Both work together at the University of Michigan’s Robotics Program.

The REV-1 is lightweight and low cost — there are no expensive lidar sensors on the vehicle — and it operates in a bike lane and is designed to travel in rain or snow, Johnson-Roberson, co-founder and CEO of Refraction, told TechCrunch.

The robot, which debuted onstage at the California Theater in San Jose during the event, is about the size of an electric bicycle. The REV-1 weighs about 100 pounds, stands about 5 feet tall and is 4.5 feet long. Inside the robot is 16 cubic feet of space, enough room to fit four or five grocery bags.

It’s not particularly fast — top speed is 15 miles per hour. But because it’s designed for a bike lane, it doesn’t need to be. That slower speed and lightweight design allows the vehicle to have a short stopping distance of about five feet.
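Those two figures imply a fairly hard stop, which is easy to sanity-check with the standard constant-deceleration formula. The implied braking rate below is our own back-of-the-envelope inference, not a Refraction specification.

```python
# Back-of-the-envelope check of the quoted figures: the constant-deceleration
# model d = v^2 / (2a) gives the braking rate needed to stop a 15 mph
# vehicle in about five feet. (Standard kinematics; the implied deceleration
# is an inference, not a Refraction spec.)

MPH_TO_MPS = 0.44704
FT_TO_M = 0.3048

v = 15 * MPH_TO_MPS        # top speed in m/s (~6.7 m/s)
d = 5 * FT_TO_M            # stopping distance in m (~1.5 m)

a = v ** 2 / (2 * d)       # required deceleration, m/s^2
g_force = a / 9.81

print(f"deceleration: {a:.1f} m/s^2 (~{g_force:.1f} g)")
```

That works out to roughly 1.5 g of braking, plausible for a light, low-speed vehicle but well beyond what a passenger car could comfortably do.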

Refraction has backing from eLab Ventures and Trucks Venture Capital.

Consumers have an appetite and an expectation for on-demand goods that are delivered quickly. But companies are struggling to find consistent, reliable and economical ways to address that need, said Bob Stefanski, managing director of eLab Ventures.

Stefanski believes Refraction’s sturdy, smaller-sized delivery robots will allow for faster technology development and will be able to cover a larger service area than competitors operating on the sidewalk.

“Their vehicles are also lightweight enough to deploy more safely than a self-driving car or large robot,” Stefanski noted. “The market is huge, especially in densely populated areas.”

The REV-1 uses a system of 12 cameras as its primary sensor system, along with radar and ultrasound sensors for additional safety.

“It doesn’t make sense economically speaking to use a $10,000 lidar to deliver $10 of food,” Johnson-Roberson said. By skipping the more expensive lidar sensor, they’re able to keep the total cost of the vehicle to $5,000.

The company’s first test application is with local restaurant partners. The company hopes to lock in bigger national partnerships in the next six months. But don’t expect those to be in the southwest or California, where so many other autonomous vehicle companies are testing.

“Other companies are not trying to run in the winter here,” Johnson-Roberson said. “It’s a different problem than the one that others are trying to solve, so we hope that gives us some space to breathe and some chance to carve out some opportunity.”

Luminar eyes production vehicles with $100M round and new Iris lidar platform

Luminar is one of the major players in the new crop of lidar companies that have sprung up all over the world, and it’s moving fast to outpace its peers. Today the company announced a new $100M funding round, bringing its total raised to over $250M — as well as a perception platform and a new, compact lidar unit aimed at inclusion in actual cars. Big day!

The new hardware, called Iris, looks to be about a third of the size of the test unit Luminar has been sticking on vehicles thus far. That one was about the size of a couple hardbacks stacked up, and Iris is more like a really thick sandwich.

Size is very important, of course, since few cars just have caverns of unused space hidden away in prime surfaces like the corners and windshield area. Other lidar makers have lowered the profiles of their hardware in various ways; Luminar seems to have compactified in a fairly straightforward fashion, getting everything into a package smaller in every dimension.

Test model, left, Iris on the right.

Photos of Iris put it in various positions: below the headlights on one car, attached to the rear-view mirror in another, and high up atop the cabin on a semi truck. It’s small enough that it won’t have to displace other components too much, although of course competitors are aiming to make theirs even more easy to integrate. That won’t matter, Luminar founder and CEO Austin Russell told me recently, if they can’t get it out of the lab.

“The development stage is a huge undertaking — to actually move it towards real-world adoption and into true series production vehicles,” he said (among many other things). The company that gets there first will lead the industry, and naturally he plans to make Luminar that company.

Part of that is of course the production process, which has been vastly improved over the last couple years. These units can be made quickly enough that they can be supplied by the thousands rather than dozens, and the cost has dropped precipitously — by design.

Iris will cost under $1,000 per unit for production vehicles seeking serious autonomy, and for $500 you can get a more limited version for more limited purposes like driver assistance, or ADAS. Luminar says Iris is “slated to launch commercially on production vehicles beginning in 2022,” but that doesn’t necessarily mean they’re shipping to customers right now. The company is negotiating more than a billion dollars in contracts at present, a representative told me, and 2022 would be the earliest that vehicles with Iris could be made available.

The Iris units are about a foot below the center of the headlight units here. Note that this is not a production vehicle, just a test one.

Another part of integration is software. The signal from the sensor has to go somewhere, and while some lidar companies have indicated they plan to let the carmaker or whoever deal with it their own way, others have opted to build up the tech stack and create “perception” software on top of the lidar. Perception software can be a range of things: something as simple as drawing boxes around objects identified as people would count, as would a much richer process that flags intentions, gaze directions, characterizes motions and suspected next actions, and so on.
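That spectrum is easy to make concrete. Below is a minimal sketch of both ends: a bare detection (a labeled box around an object) versus the same detection enriched with intent-style attributes. The field names are hypothetical and are not Luminar’s API.

```python
# Illustrative sketch of the "perception" spectrum described above:
# simple detections (a labeled box) vs. richer per-object attributes
# such as heading and predicted next action. Field names are invented.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    label: str                              # e.g. "pedestrian", "vehicle"
    box: tuple                              # (x_min, y_min, x_max, y_max), meters
    # Richer perception layers add fields like these:
    heading_deg: Optional[float] = None     # which way the object faces
    predicted_action: Optional[str] = None  # suspected next move

simple = Detection("pedestrian", (4.0, 1.0, 4.6, 2.8))
rich = Detection("pedestrian", (4.0, 1.0, 4.6, 2.8),
                 heading_deg=90.0, predicted_action="crossing")
```

The design question for a lidar maker is how many of those extra fields to own before it starts competing with its carmaker customers’ own stacks.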

Luminar has opted to build into perception, or rather has revealed that it has been working on it for some time. It now has 60 people on the task, split between Palo Alto and Orlando, and has hired a new VP of software: Daimler’s former robo-taxi head, Christoph Schroder.

What exactly will be the nature and limitations of Luminar’s perception stack? There are dangers waiting if you decide to take it too far, since at some point you begin to compete with your customers, carmakers who have their own perception and control stacks that may or may not overlap with yours. The company gave very few details as to what specifically would be covered by its platform, but no doubt that will become clearer as the product itself matures.

Last and certainly not least is the matter of the $100 million in additional funding. This brings Luminar to a total of over a quarter of a billion dollars in the last few years, matching its competitor Innoviz, which has made similar decisions regarding commercialization and development.

The list of investors has gotten quite long, so I’ll just quote Luminar here:

G2VP, Moore Strategic Ventures, LLC, Nick Woodman, The Westly Group, 1517 Fund / Peter Thiel, Canvas Ventures, along with strategic investors Corning Inc, Cornes, and Volvo Cars Tech Fund.

The board has also grown, with former Broadcom exec Scott McGregor and G2VP’s Ben Kortlang joining the table.

We may have already passed “peak lidar” as far as sheer number of deals and startups in the space, but that doesn’t mean things are going to cool down. If anything the opposite, as established companies battle over lucrative partnerships and begin eating one another to stay competitive. Seems like Luminar has no plans to become a meal.

Bankrupt Maker Faire revives, reduced to Make Community

Maker Faire and Maker Media are getting a second chance after suddenly going bankrupt, but they’ll return in a weakened capacity. Sadly, their flagship crafting festivals remain in jeopardy, and it’s unclear how long the reformed company can survive.

Maker Media suddenly laid off all 22 employees and shut down last month, as first reported by TechCrunch. Now its founder and CEO Dale Dougherty tells me he’s bought back the brands, domains, and content from creditors and rehired 15 of the 22 laid-off staffers with his own money. Next week, he’ll announce the relaunch of the company with the new name “Make Community”.

Read our story about how Maker Faire fell apart

The company is already working on a new issue of Make Magazine that it hopes to publish quarterly (down from six times per year), and the online archives of its do-it-yourself project guides will remain available. It hopes to keep publishing books. And it will continue to license the Maker Faire name to event organizers, who’ve thrown over 200 of the festivals full of science-art and workshops in 40 countries. But Dougherty doesn’t have the funding to commit to producing the company-owned flagship Bay Area and New York Maker Faires anymore.

“We’ve succeeded in just getting the transition to happen and getting Community set up,” Dougherty tells me. But, sounding shaky, he asks, “Can I devise a better model to do what we’ve been doing the past 15 years? I don’t know if I have the answer yet.” Print publishing has proved tougher and tougher in recent years, and combined with declining corporate sponsorships of the main events, Maker Media was losing too much money to stay afloat.

“On June 3rd, we basically stopped doing business. And, you know, the bank froze our accounts,” Dougherty said at a meetup he held in Oakland to take feedback on his plan, according to a recording made by attendee Brian Benchoff. Grasping for a way to make the numbers work, he told the small crowd gathered, “I’d be happy if someone wanted to take this off my hands.”

Maker Faire [Image via Maker Faire Instagram]

For now, Dougherty is financing the revival himself “with the goal that we can get back up to speed as a business, and start generating revenue and a magazine again. This is where the community support needs to come in because I can’t fund it for very long.”

Maker Faire founder and Make Community CEO Dale Dougherty

The immediate plan is to announce a new membership model next week at Make.co where hobbyists and craft-lovers can pay a monthly or annual fee to become patrons of Make Community. Dougherty was cagey about what they’ll get in return beyond a sense of keeping alive the organization that’s held the maker community together since 2005. He does hope to get the next Make Magazine issue out by the end of summer or early fall, and existing subscribers should get it in the mail.

The company is still determining whether to move forward as a non-profit or co-op instead of as a venture-backed for-profit as before. “The one thing I don’t like about non-profit is that you end up working for the source you got the money from. You dance to their tune to get their funding,” he told the meetup.

Last time, he burned through $10 million in venture funding from Obvious Ventures, Raine Ventures, and Floodgate. That could make VCs wary of putting more cash into a questionable business model. But if enough of the 80,000 remaining Make Magazine subscribers, 1 million YouTube followers, and millions who’ve attended Maker Faire events step up, perhaps the company can find surer footing.

“I hope this is actually an opportunity not just to revive what we do but maybe take it to a new level,” Dougherty tells me. After all, plenty of today’s budding inventors and engineers grew up reading Make Magazine and being awestruck by the massive animatronic creations featured at its festivals.

Audibly perturbed, the founder exclaimed at his community meetup, “It frustrates the heck out of me thinking that I’m the one backing up Maker Faire when there’s all these billionaires in the valley.”

Maker Faire lives

May Mobility reveals prototype of a wheelchair-accessible autonomous vehicle

Autonomous transportation startup May Mobility is doing more than just talking about accessibility when it comes to self-driving transportation tech development. The company recently began developing a wheelchair-accessible prototype version of its autonomous shuttle vehicle, and just concluded an initial round of gathering feedback from the community of people in Columbus, Ohio, who would actually be using the shuttle.

May Mobility’s design includes accommodations for entry and exit, as well as for securing the passenger’s wheelchair once it’s on board for the duration of the trip. The company learned from the first round of feedback that its design needs improvement: the ramp should be longer to allow more gradual onboarding and disembarking, and pick-up and drop-off points need optimizing.

It still plans to implement some of those improvements before deploying its vehicles, but we can expect to see accessible May Mobility shuttles in operation across its pilots in Columbus, Providence and Grand Rapids soon, according to the company.

Ultimately, though, the company says that it feels its solution is perceived as at least on par with existing accessible transit options currently in service in the area.

May Mobility Chief Operating Officer and co-founder Alisyn Malek speaking at TechCrunch Sessions: Mobility on July 10, 2019.

“For us, our focus is how we can transform cities, making them safer, greener and more accessible for everybody,” said May Mobility co-founder and COO Alisyn Malek on stage at TechCrunch Sessions: Mobility. “How can we make transportation easier for everybody? And part of that is we really have to think about ‘everybody.'”

May Mobility’s vehicles are specifically low-speed electric vehicles, for which there aren’t yet clear guidelines or regulations around design and safety features, so the company thinks it makes sense to work directly with community members to get a head start on accessible design. One of the constant refrains from autonomous vehicle companies is that their technology will bring access to people who otherwise wouldn’t be able to make use of cars, but few have shown concrete steps toward addressing the practical realities of true accessibility.

Some others in the industry are taking action, however, including Lyft, which is working with its autonomous technology partner Aptiv and the National Federation of the Blind on designing self-driving service that works for blind and low-vision passengers. But May Mobility’s service has the advantage of operating commercially for the public in defined, manageable engagements that provide value for the community now, which means the actions it’s taking toward accessibility will have real benefit where it’s already in service.

Udelv partners with HEB on Texas autonomous grocery delivery pilot

Autonomous delivery company Udelv has signed yet another partner to launch a new pilot of its self-driving goods delivery service: Texas-based supermarket chain HEB Group. The pilot will provide service to customers in Olmos Park, just outside of downtown San Antonio where the grocery retailer is based.

California-based Udelv will provide HEB with one of its Newton second-generation autonomous delivery vehicles, which are already in service in trials in the Bay Area, Arizona and Houston providing deliveries on behalf of some of Udelv’s other clients, which include Walmart among others.

Udelv CEO and founder Daniel Laury explained in an interview that they’re very excited to be partnering with HEB, because of the company’s reach in Texas, where it’s the largest grocery chain with approximately 400 stores. This initial phase only covers one car and one store, and during this part of the pilot the vehicle will have a safety driver on board. But the plan includes the option to expand the partnership to cover more vehicles and eventually achieve full driverless operation.

“They’re really at the forefront of technology, in the areas where they need to be,” Laury said. “It’s a very impressive company.”

For its part, HEB Group has been in discussions with a number of potential partners for autonomous delivery trials, according to Paul Tepfenhart, SVP of omnichannel and emerging technologies at HEB. It liked Udelv specifically because of the company’s safety record, and because Udelv didn’t just come in with a set plan and a fully formed off-the-shelf offering – it truly partnered with HEB on what the final deployment of the pilot would look like.

Both Tepfenhart and Laury emphasized the importance of customer experience in providing autonomous solutions, and Laury noted that he thinks Udelv’s unique advantage in the increasingly competitive autonomous curbside delivery business is its attention to the robotics of the actual delivery and storage components of its custom vehicle.

“The reason I think we’ve been so successful is because we focused a lot on the delivery robotics,” Laury explained. “If you think about it, there’s no autonomous delivery business that works if you don’t have the robotics aspect of it figured out also. You can have an autonomous vehicle, but if you don’t have an automated cargo space where merchants can load [their goods] and consumers can unload the vehicle by themselves, you have no business.”

Udelv also thinks it has an advantage when it comes to its business model, which aims to generate revenue now, in exchange for providing actual value to paying customers, rather than counting on being supported entirely through funding from a wealthy investor or deep-pocketed corporate partners. Laury likens it to Tesla’s approach: the company has over 500,000 vehicles on the road helping it build its autonomous technology, but all of those are operated by paying customers who get all the benefits of owning their cars today.

“We want to be the Tesla of autonomous delivery,” Laury said. “If you think about it, Tesla has got 500,000 vehicles on the road […] of all the cars in the world that have some level of automated driver assistance (ADAS) or autonomy, I think Tesla’s 90% of them – and they get the customers to pay a ridiculous amount of money for that. Everybody else in the business is getting funding from something else. Waymo is getting funding from search; Cruise is getting funding from GM and SoftBank and others; Nuro is getting funding from SoftBank. So, pretty much everybody else is getting funding from a source that’s different from the actual business they’re supposed to be in.”

Laury says Udelv’s unique strength is its ability to provide value to partners like HEB today. By focusing on robotics and solving problems like the engineering of the loading and customer pick-up experience, the company can fund its own research through revenue-generating services offered in-market now, rather than ten years from now.

Where May Mobility’s self-driving shuttles might show up next

May Mobility might be operating low-speed self-driving shuttles in three U.S. cities, but its founders don’t view this as just another startup racing to deploy autonomous vehicle technology.

They describe the Ann Arbor-based company as a transportation service provider. As May Mobility’s co-founder and COO Alisyn Malek told TechCrunch, they’re in the “business of moving people.” Autonomous vehicle technology is just the “killer feature” to help them do that.

TechCrunch recently spent the day with May Mobility in Detroit, where it first launched, to get a closer look at its operations, learn where it might be headed next and why companies in the industry are starting to back off previously ambitious timelines.

Malek will elaborate on what markets are most appealing to May Mobility while on stage at TC Sessions: Mobility on July 10 in San Jose. Malek will join Lia Theodosiou-Pisanelli, head of partner product and programs at Aurora, to talk about what product makes the most sense for autonomous vehicle technology.

Watch a plane land itself truly autonomously for the first time

A team of German researchers has created an automatic landing system for small aircraft that lets them touch down not only without a pilot, but without any of the tech on the ground that lets other planes do it. It could open up a new era of autonomous flight — and make ordinary landings safer to boot.

Now it would be natural to think that with the sophisticated autopilot systems that we have today, a plane could land itself quite easily. And that’s kind of true — but the autoland systems on full-size aircraft aren’t really autonomous. They rely on a set of radio signals emitted by stations only found at major airports: the Instrument Landing System, or ILS.

These signals tell the plane exactly where the runway is even in poor visibility, but even so an “automatic” landing is rarely done. Instead, the pilots — as they do elsewhere — use the autopilot system as an assist, in this case to help them locate the runway and descend properly. A plane can land automatically using ILS and other systems, but it’s rare and even when they do it, it isn’t truly autonomous — it’s more like the airport is flying the plane by wire.

But researchers at Technische Universität München (TUM, or think of it as Munich Tech) have created a system that can land a plane without relying on ground systems at all, and demonstrated it with a pilot on board — or rather, passenger, since he kept his hands in his lap the whole time.

The automated plane comes in for a landing.

A plane making an autonomous landing needs to know exactly where the runway is, naturally, but it can’t rely on GPS — too imprecise — and if it can’t use ILS and other ground systems, what’s left? Well, the computer can find the runway the way pilots do: with its eyes. In this case, both visible-light and infrared cameras on the nose of the plane.

TUM’s tests used a single-passenger plane, a Diamond DA42, which the team outfitted with a custom-designed automatic control system and a computer vision processor, both built for the purpose and together called C2Land. The computer, trained to recognize and characterize a runway using the cameras, put its know-how to work in May, taking the plane in for a flawless landing.

As test pilot Thomas Wimmer put it in a TUM news release: “The cameras already recognize the runway at a great distance from the airport. The system then guides the aircraft through the landing approach on a completely automatic basis and lands it precisely on the runway’s centerline.”

You can see the full flight in the video below.

This is a major milestone in automated flight, since until now planes have had to rely on extensive ground-based systems to perform a landing like this one — which means automated landings aren’t currently possible at smaller airports or should something go wrong with the ILS. A small plane like this one is more likely to be at a small airport with no such system, and should a heavy fog roll in, an autoland system like this might be preferable to a pilot who can’t see in infrared.

Right now the tech is very much still experimental, not even at the level where it could be distributed and tested widely, let alone certified by aviation authorities. But the safety benefits are obvious and even as a backup or augmentation to the existing, rarely used autoland systems it would likely be a welcome addition.

These humanoid robots can autonomously navigate cinder block mazes thanks to IHMC

Programming robots to walk on flat, even ground is difficult enough, but Florida’s Institute for Human and Machine Cognition (IHMC) is tackling the grander challenge of making sure bipedal robots can successfully navigate rough terrain. The research organization has been demonstrating its work in this area since 2016, but its latest video (via Engadget) shows the progress it has made.

In the new video, IHMC’s autonomous footstep planning program is at work on both Boston Dynamics’ Atlas robot and the NASA-developed Valkyrie robot (humanoid robots have the coolest names). The video shows off navigation of a heaping pile of cinder blocks, as well as narrower paths, which are trickier because they leave far fewer options for foot placement.

Basically, IHMC manages these complex navigation operations by specifying a beginning and end point for the robot, and then mapping all possible paths on a footstep-by-footstep basis, evaluating the cost of each and ultimately arriving at a best possible path — all of which can occur relatively quickly on modern hardware.
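That is, at heart, a graph search over candidate footholds. Purely as an illustration (IHMC’s actual planner reasons about 3D footholds and whole-body kinematics, which this omits entirely), here is the plan-by-expanding-candidate-steps-and-costs idea reduced to a Dijkstra-style search over a grid of per-foothold costs:

```python
# Toy footstep planner: expand candidate steps from a start cell toward a
# goal cell, accumulating per-foothold costs, and keep the cheapest path.
# Real footstep planners work in 3D with kinematic constraints; this grid
# version only demonstrates the search-with-step-costs idea.

import heapq

def plan(costs, start, goal):
    """Return the cheapest footstep sequence from start to goal."""
    rows, cols = len(costs), len(costs[0])
    frontier = [(0, start, [start])]   # (total cost, position, path so far)
    best = {start: 0}
    while frontier:
        total, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        r, c = pos
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                step = total + costs[nr][nc]
                if step < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = step
                    heapq.heappush(frontier, (step, (nr, nc), path + [(nr, nc)]))
    return None  # no path to goal

# A 9 marks a costly foothold (say, a tipped cinder block) to route around.
terrain = [
    [1, 9, 1],
    [1, 9, 1],
    [1, 1, 1],
]
path = plan(terrain, (0, 0), (0, 2))
```

Run on the sample terrain, the planner detours through the cheap bottom row rather than stepping across the two high-cost cells, exactly the "evaluate the cost of each and pick the best path" behavior described above.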

These robots can also quickly adapt to changes in the environment and path blockage thanks to IHMC’s work, and can even manage single-path tightrope-style walking (albeit on a narrow row of cinder blocks, not on an actual rope).

There’s still work to be done: the team at IHMC says it’s seeing about a 50% success rate on narrow paths, but its success rate on rough terrain is a much higher 90%, and its track record on flat ground is nearly perfect.