Stanford’s Doggo is a petite robotic quadruped you can (maybe) build yourself

Got a few thousand bucks and a good deal of engineering expertise? You’re in luck: Stanford students have created a quadrupedal robot platform called Doggo that you can build with off-the-shelf parts and a considerable amount of elbow grease. That’s better than the alternatives, which generally require a hundred grand and a government-sponsored lab.

Due to be presented (paper on arXiv here) at the IEEE International Conference on Robotics and Automation, Doggo is the result of research by the Stanford Robotics Club, specifically the Extreme Mobility team. The idea was to make a modern quadrupedal platform that others could build and test on, but keep costs and custom parts to a minimum.

The result is a cute little bot with rigid-looking but surprisingly compliant polygonal legs that has a jaunty, bouncy little walk and can leap more than three feet in the air. There are no physical springs or shocks involved; instead, by sampling the forces on the legs 8,000 times per second and responding just as quickly, the motors can act like virtual springs.
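For the curious, here's a minimal sketch of how a motor can impersonate a spring, assuming a simple spring-damper torque law computed at the loop rate described above. The constants, names and toy simulation are illustrative, not Doggo's actual control code.

```python
# Illustrative virtual-spring (impedance control) loop -- not Doggo's real firmware.
# A real robot reads encoders and commands torque over a motor bus; here we
# simulate a single joint to show why fast force feedback can replace a physical spring.

K_SPRING = 40.0      # virtual stiffness (N*m/rad), assumed value
B_DAMPER = 0.5       # virtual damping (N*m*s/rad), assumed value
REST_ANGLE = 0.0     # equilibrium joint angle (rad)
DT = 1.0 / 8000.0    # one update per force sample, 8,000 times per second

def virtual_spring_torque(angle, velocity):
    """Torque that makes a rigid joint behave like a spring plus damper."""
    return -K_SPRING * (angle - REST_ANGLE) - B_DAMPER * velocity

# Toy simulation: deflect the joint, release it, and watch it spring back.
inertia, angle, velocity = 0.01, 0.3, 0.0
for _ in range(8000):  # one simulated second
    torque = virtual_spring_torque(angle, velocity)
    velocity += (torque / inertia) * DT
    angle += velocity * DT
print(f"angle after 1s: {angle:+.4f} rad")  # has settled back toward rest
```

The faster the loop runs, the stiffer and more faithful the virtual spring can be, which is why that sampling rate matters.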

It’s limited in its autonomy, but that’s because it’s built to move, not to see and understand the world around it. That is, however, something you, dear reader, could work on. Because it’s relatively cheap and doesn’t involve some exotic motor or proprietary parts, it could be a good basis for research at other robotics departments. You can see the designs and parts necessary to build your own Doggo right here.

“We had seen these other quadruped robots used in research, but they weren’t something that you could bring into your own lab and use for your own projects,” said Doggo lead Nathan Kau in a Stanford news post. “We wanted Stanford Doggo to be this open source robot that you could build yourself on a relatively small budget.”

In the meantime the Extreme Mobility team will be improving Doggo's capabilities in collaboration with the university's Robotic Exploration Lab, while also working on a similar robot at twice the size — Woofer.

Why is Facebook doing robotics research?

It’s a bit strange to hear that the world’s leading social network is pursuing research in robotics rather than, say, making search useful, but Facebook is a big organization with many competing priorities. And while these robots aren’t directly going to affect your Facebook experience, what the company learns from them could be impactful in surprising ways.

Though robotics is a new area of research for Facebook, its reliance on and bleeding-edge work in AI are well known. Mechanisms that could be called AI (the definition is quite hazy) govern all sorts of things, from camera effects to automated moderation of restricted content.

AI and robotics are naturally overlapping magisteria — it’s why we have an event covering both — and advances in one often carry over to, or open new areas of inquiry in, the other. So really it’s no surprise that Facebook, with its strong interest in using AI for a variety of tasks in the real and social media worlds, might want to dabble in robotics to mine for insights.

What then could be the possible wider applications of the robotics projects it announced today? Let’s take a look.

Learning to walk from scratch

“Daisy” the hexapod robot.

Walking is a surprisingly complex action, or series of actions, especially when you’ve got six legs, like the robot used in this experiment. You can program in how it should move its legs to go forward, turn around, and so on, but doesn’t that feel a bit like cheating? After all, we had to learn on our own, with no instruction manual or settings to import. So the team looked into having the robot teach itself to walk.

This isn’t a new type of research — lots of roboticists and AI researchers are into it. Evolutionary algorithms (different but related) go back a long way, and we’ve already seen interesting papers like this one.

By giving their robot some basic priorities like being “rewarded” for moving forward, but no real clue how to work its legs, the team let it experiment and try out different things, slowly learning and refining the model by which it moves. The goal is to reduce the amount of time it takes for the robot to go from zero to reliable locomotion from weeks to hours.
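In code, being "rewarded" for moving forward is often just a weighted sum of terms like the sketch below. The particular weights and penalties are my own assumptions for illustration, not the team's actual reward function.

```python
# A minimal locomotion reward in the style of learn-to-walk experiments.
# Weights and terms are illustrative assumptions, not Facebook's actual setup.

def locomotion_reward(forward_velocity, energy_used, fell_over):
    """Reward forward progress; penalize wasted energy and falling."""
    reward = 1.0 * forward_velocity   # the basic "move forward" priority
    reward -= 0.01 * energy_used      # discourage thrashing the legs pointlessly
    if fell_over:
        reward -= 10.0                # falling is strongly penalized
    return reward

print(locomotion_reward(forward_velocity=0.3, energy_used=5.0, fell_over=False))
# -> 0.25: a small positive reward for modest, efficient progress
```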

What could this be used for? Facebook is a vast wilderness of data, complex and dubiously structured. Learning to navigate a network of data is of course very different from learning to navigate an office — but the idea of a system teaching itself the basics on a short timescale given some simple rules and goals is shared.

Learning how AI systems teach themselves, and how to remove roadblocks like mistaken priorities, cheating the rules, weird data-hoarding habits and other stuff is important for agents meant to be set loose in both real and virtual worlds. Perhaps the next time there is a humanitarian crisis that Facebook needs to monitor on its platform, the AI model that helps do so will be informed by the autodidactic efficiencies that turn up here.

Leveraging “curiosity”

Researcher Akshara Rai adjusts a robot arm in the robotics AI lab in Menlo Park. (Facebook)

This work is a little less visual, but more relatable. After all, everyone feels curiosity to a certain degree, and while we understand that sometimes it kills the cat, most times it’s a drive that leads us to learn more effectively. Facebook applied the concept of curiosity to a robot arm being asked to perform various ordinary tasks.

Now, it may seem odd that they could imbue a robot arm with “curiosity,” but what’s meant by that term in this context is simply that the AI in charge of the arm — whether it’s seeing or deciding how to grip, or how fast to move — is given motivation to reduce uncertainty about that action.

That could mean lots of things — perhaps twisting the camera a little while identifying an object gives it a little bit of a better view, improving its confidence in identifying it. Maybe it looks at the target area first to double check the distance and make sure there’s no obstacle. Whatever the case, giving the AI latitude to find actions that increase confidence could eventually let it complete tasks faster, even though at the beginning it may be slowed by the “curious” acts.
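One common way to formalize this in the intrinsic-motivation literature is to pay the agent a bonus proportional to how wrong its own predictions are. The sketch below shows that idea under my own assumptions; it shouldn't be read as Facebook's specific method.

```python
import numpy as np

def curiosity_bonus(predicted_next_obs, actual_next_obs, scale=0.1):
    """Bonus reward for visiting states the agent's world model predicts poorly,
    i.e. where its uncertainty is highest."""
    prediction_error = np.mean((predicted_next_obs - actual_next_obs) ** 2)
    return scale * prediction_error

# During training, the bonus is simply added to the task reward:
# total_reward = task_reward + curiosity_bonus(world_model(obs, action), next_obs)
```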

What could this be used for? Facebook is big on computer vision, as we’ve seen both in its camera and image work and in devices like Portal, which (some would say creepily) follows you around the room with its “face.” Learning about the environment is critical for both these applications and for any others that require context about what they’re seeing or sensing in order to function.

Any camera operating in an app or device like those from Facebook is constantly analyzing the images it sees for usable information. When a face enters the frame, that’s the cue for a dozen new algorithms to spin up and start working. If someone holds up an object, does it have text? Does it need to be translated? Is there a QR code? What about the background, how far away is it? If the user is applying AR effects or filters, where does the face or hair stop and the trees behind begin?

If the camera, or gadget, or robot, left these tasks to be accomplished “just in time,” they would produce CPU usage spikes, visible latency in the image, and all kinds of stuff the user or system engineer doesn’t want. But if it’s doing it all the time, that’s just as bad. If instead the AI agent is exerting curiosity to check these things when it senses too much uncertainty about the scene, that’s a happy medium. This is just one way it could be used, but given Facebook’s priorities it seems like an important one.

Seeing by touching

Although vision is important, it’s not the only way that we, or robots, perceive the world. Many robots are equipped with sensors for motion, sound, and other modalities, but actual touch is relatively rare. Chalk it up to a lack of good tactile interfaces (though we’re getting there). Nevertheless, Facebook’s researchers wanted to look into the possibility of using tactile data as a surrogate for visual data.

If you think about it, that’s perfectly normal — people with visual impairments use touch to navigate their surroundings or acquire fine details about objects. It’s not exactly that they’re “seeing” via touch, but there’s a meaningful overlap between the concepts. So Facebook’s researchers deployed an AI model that decides what actions to take based on video, but instead of actual video data, fed it high-resolution touch data.

Turns out the algorithm doesn’t really care whether it’s looking at an image of the world as we’d see it or not — as long as the data is presented visually, for instance as a map of pressure on a tactile sensor, it can be analyzed for patterns just like a photographic image.
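In practice that can be as simple as packing the pressure readings into the same tensor shape a vision model expects. A minimal sketch, with assumed shapes and names:

```python
import numpy as np

# A high-resolution touch sensor returns a 2D grid of pressures -- structurally
# the same as a grayscale photo. Shapes here are illustrative assumptions.

pressure_map = np.random.rand(64, 64).astype(np.float32)  # stand-in sensor frame

def to_image_tensor(frame_2d):
    """Normalize any single-channel 'frame' (photo or pressure map) for a vision model."""
    frame = (frame_2d - frame_2d.mean()) / (frame_2d.std() + 1e-8)
    return frame[None, None, :, :]  # add batch and channel dims, grayscale-style

x = to_image_tensor(pressure_map)
print(x.shape)  # (1, 1, 64, 64): the model can't tell camera from fingertip
```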

What could this be used for? It’s doubtful Facebook is super interested in reaching out and touching its users. But this isn’t just about touch — it’s about applying learning across modalities.

Think about how, if you were presented with two distinct objects for the first time, it would be trivial to tell them apart with your eyes closed, by touch alone. Why can you do that? Because when you see something, you don’t just understand what it looks like, you develop an internal model representing it that encompasses multiple senses and perspectives.

Similarly, an AI agent may need to transfer its learning from one domain to another — auditory data telling a grip sensor how hard to hold an object, or visual data telling the microphone how to separate voices. The real world is a complicated place and data is noisier here — but voluminous. Being able to leverage that data regardless of its type is important to reliably being able to understand and interact with reality.

So you see that while this research is interesting in its own right, and can in fact be explained on that simpler premise, it is also important to recognize the context in which it is being conducted. As the blog post describing the research concludes:

We are focused on using robotics work that will not only lead to more capable robots but will also push the limits of AI over the years and decades to come. If we want to move closer to machines that can think, plan, and reason the way people do, then we need to build AI systems that can learn for themselves in a multitude of scenarios — beyond the digital world.

As Facebook continually works on expanding its influence from its walled garden of apps and services into the rich but unstructured world of your living room, kitchen, and office, its AI agents require more and more sophistication. Sure, you won’t see a “Facebook robot” any time soon… unless you count the one they already sell, or the one in your pocket right now.

This clever transforming robot flies and rolls on its rotating arms

There’s great potential in using both drones and ground-based robots for situations like disaster response, but generally these platforms either fly or creep along the ground. Not the “Flying STAR,” which does both quite well, and through a mechanism so clever and simple you’ll wish you’d thought of it.

Conceived of by researchers at Ben-Gurion University in Israel, the “flying sprawl-tuned autonomous robot” is based on the elementary observation that both rotors and wheels spin. So why shouldn’t a vehicle have both?

Well, there are lots of good reasons why it’s difficult to create such a hybrid, but the team, led by David Zarrouk, overcame them with the help of today’s high-powered, lightweight drone components. The result is a robot that can easily fly when it needs to, then land softly and, by tilting the rotor arms downwards, direct that same motive force into four wheels.

Of course you could have a drone that simply has a couple wheels on the bottom that let it roll along. But this improves on that idea in several ways. In the first place, it’s mechanically more efficient since the same motor drives the rotors and wheels at the same time — though when rolling the RPMs are of course considerably lower. But the rotating arms also give the robot a flexible stance, large wheelbase, and high clearance that make it much more capable on rough terrain.

You can watch FSTAR fly, roll, transform, flatten, and so on in the following video, prepared for presentation at the IEEE International Conference on Robotics and Automation in Montreal:

The ability to roll along at up to 8 feet per second using comparatively little energy, while also being able to leap over obstacles, scale stairs, or simply ascend and fly to a new location, gives FSTAR considerable adaptability.

“We plan to develop larger and smaller versions to expand this family of sprawling robots for different applications, as well as algorithms that will help exploit speed and cost of transport for these flying/driving robots,” said Zarrouk in a press release.

Obviously at present this is a mere prototype, and will need further work to bring it to a state where it could be useful for rescue teams, commercial operations, and the military.

Ford CTO Ken Washington at TC Sessions: Mobility on July 10

A conference dedicated to transportation and mobility wouldn’t be complete without hearing from Ford, the U.S. automaker with a storied 116-year history.

We’re excited to announce that Ford CTO Ken Washington will participate in TechCrunch’s inaugural TC Sessions: Mobility, a one-day event on July 10, 2019 in San Jose, Calif., that’s bringing together the best and brightest minds: founders, investors and technologists who are determined to invent a future Henry Ford might never have imagined. Or maybe he did.

If there’s a person at Ford who can provide insight into where the company is headed, it’s Washington.

As CTO and vice president of Research and Advanced Engineering, Washington leads Ford’s worldwide research organization, oversees the development and implementation of the company’s technology strategy and plans, and plays a key role in its expansion into emerging mobility opportunities.

Prior to joining Ford, he was vice president of the Advanced Technology Center at Lockheed Martin Space Systems Company, where he led a team of scientists and engineers performing research and development in space science and related fields.

TC Sessions: Mobility has a jam-packed agenda, overflowing with some of the biggest names and most exciting startups in the transportation industry. With Early-Bird ticket sales ending soon, you’ll want to be sure to grab your tickets after checking out this agenda.

Throughout the day, you can expect to hear from and partake in discussions about the future of transportation, the promise and problems of autonomous vehicles, the potential for bikes and scooters, investing in early-stage startups and more.

We’ll be joined by some of the most esteemed and prescient people in the space, including Dmitri Dolgov of Waymo, Argo AI Chief Safety Officer Summer Craze Fowler, Nuro co-founder Dave Ferguson, Karl Iagnemma of Aptiv, Voyage CEO Oliver Cameron and Seleta Reynolds of the Los Angeles Department of Transportation.

Early-Bird tickets are now on sale — save $100 on tickets before prices go up.

Students, you can grab your tickets for just $45.

Cat vs best and worst robot vacuum cleaners 

If you’ve flirted with the idea of buying a robot vacuum you may also have stepped back from the brink in unfolding horror at the alphabetic soup of branded discs popping into view. Consumer choice sounds like a great idea until you’ve tried to get a handle on the handle-less vacuum space.

Amazon offers an A to Z linklist of “top brands” that’s only a handful of letters short of a full alphabetic set. The horror.

What awaits the unseasoned robot vacuum buyer as they resign themselves to hours of online research to try to inform — or, well, form — a purchase decision is a seemingly endless permutation of robot vac reviews and round-ups.

Unfortunately there are just so many brands in play that all these reviews tend to act as fuel, feeding a growing black hole of indecision that sucks away at your precious spare time, demanding you spend more and more of it reading about robots that suck (when you could, let’s be frank, be getting on with the vacuuming task yourself) — only to come up for air each time even less convinced that buying a robot dirtbag is at all a good idea.

Reader, I know, because I fell into this hole. And it was hellish. So in the spirit of trying to prevent anyone else falling prey to convenience-based indecision I am — apologies in advance — adding to the pile of existing literature about robot vacuums with a short comparative account that (hopefully) helps cut through some of the chaff to the dirt-pulling chase.

Here’s the bottom line: Budget robot vacuums that lack navigational smarts are simply not worth your money, or indeed your time.

Yes, that’s despite the fact they are still actually expensive vacuum cleaners.

Basically these models entail overpaying for a vacuum cleaner that’s so poor you’ll still have to do most of the job yourself (i.e. with a non-robotic vacuum cleaner).

It’s the very worst kind of badly applied robotics.

Abandon hope of getting anything worth your money at the bottom end of the heap. I know this because, alas, I tried — opting, finally and foolishly (but, in my defence, at a point of near desperation after sifting so much virtual chaff the whole enterprise seemed to have gained lottery odds of success and I frankly just wanted my spare time back), for a model sold by a well-known local retailer.

It was a budget option but I assumed — or, well, hoped — the retailer had done its homework and picked a better-than-average choice. Or at least something that, y’know, could suck dust.

The brand in question (Rowenta) sat alongside the better known (and a bit more expensive) iRobot on the shop shelf. Surely that must count for something? I imagined wildly. Reader, that logic is a trap.

I can’t comment on the comparative performance of iRobot’s bots, which I have not personally tested, but I do not hesitate to compare a €180 (~$200) Rowenta-branded robot vacuum to a very expensive cat toy.

This robot vacuum was spectacularly successful at entertaining the cat — presumably on account of its dumb disposition, bouncing stupidly off of furniture owing to a total lack of navigational smarts. (Headbutting is a pretty big clue to how stupid a robot it is, as it’s never a stand-in for intelligence even when encountered in human form.)

Even more tantalizingly, from the cat’s point of view, the bot featured two white and whisker-like side brushes that protrude and spin at paw-tempting distance. In short: Pure robotic catnip.

The cat did not stop attacking the bot’s whiskers the whole time it was in operation. That certainly added to the obstacles getting in its way. But the more existential problem was it wasn’t sucking very much at all.

At the end of its first concluded ‘clean’, after it somehow managed to lurch its way back to first bump and finally hump its charging hub, I extracted the bin and had to laugh at the modest-sized furball within. I’ve found larger clumps of dust gathering themselves in corners. So: Full marks for cat-based entertainment but as a vacuum cleaner it was horrible.

At this point I did what every sensible customer does when confronted with an abject lemon: Returned it for a full refund. And that, reader, might have been that for me and the cat and robot vacs. Who can be bothered to waste so much money and time for what appeared laughably incremental convenience? Even with a steady supply of cat fur to contend with.

But as luck would have it a Roborock representative emailed to ask if I would like to review their latest top-of-the-range model — which, at €549, does clock in at the opposite end of the price scale; ~3x the pitiful Rowenta. So of course I jumped at the chance to give the category a second spin — to see if a smarter device could impress me and not just tickle the cat’s fancy.

Clearly the price difference here, at the top vs the bottom of the range, is substantial. And yet, if you bought a car that was 3x cheaper than a Ferrari you’d still expect not just that the wheels stay on but that it can actually get you somewhere, in good time, and do so without making you horribly car sick.

Turns out buyers of robot vacuums need to tread far more carefully.

Here comes the bookending top-line conclusion: Robot vacuums are amazing. A modern convenience marvel. But — and it’s a big one — only if you’re willing to shell out serious cash to get a device that actually does the job intended.

Roborock S6: It’s a beast at gobbling your furry friend’s dander

Comparing the Roborock S6 and the Rowenta Smart Force Essential Aqua RR6971WH (to give it its full and equally terrible name) is like comparing a high-end electric car with a wind-up kid’s toy.

Where the latter product was so penny-pinching the company hadn’t even paid to include in the box a user manual that contained actual words — opting, we must assume, to save on translation costs by producing a comic packed with inscrutable graphics and bizarro ‘don’t do’ diagrams, which only served to cement the fast-cooling buyer’s conviction they’d been sold a total lemon — the Roborock’s box contains a well written paper manual that contains words and clearly labeled diagrams. What a luxury!

At the same time there’s not really that much you need to grok to get your head around operating the Roborock. After a first pass to familiarize yourself with its various functions it’s delightfully easy to use. It will even produce periodic vocal updates — such as telling you it’s done cleaning and is going back to base. (Presumably in case you start to worry it’s gone astray under the bed. Or that quiet industry is a front for brewing robotic rebellion against indentured human servitude.)

One button starts a full clean — and this does mean full thanks to on-board laser navigation that allows the bot to map the rooms in real-time. This means you get methodical passes, minimal headbutting and only occasional spots missed. (Another button will do a spot clean if the S6 does miss something or there’s a fresh spill that needs tidying — you just lift the bot to where you want it and hit the appropriate spot.)

There is an app too, if you want to access extra features like being able to tell it to go clean a specific room, schedule cleans or set no-go zones. But, equally delightfully, there’s no absolute need to hook the bot to your wi-fi just to get it to do its primary job. All core features work without the faff of having to connect it to the Internet — nor indeed the worry of who might get access to your room-mapping data. From a privacy point of view this wi-fi-less app-free operation is a major plus.

In a small apartment with hard flooring the only necessary prep is a quick check to clear stuff like charging cables and stray socks off the floor. You can of course park dining chairs on the table to offer the bot a cleaner sweep. Though I found the navigation pretty adept at circling chair legs. Sadly the unit is a little too tall to make it under the sofa.

The S6 includes an integrated mopping function, which works incredibly well on lino-style hard flooring (but won’t be any use if you only have carpets). To mop you fill the water tank attachment; velcro-fix a dampened mop cloth to the bottom; and slide-clip the whole unit under the bot’s rear. Then you hit the go button and it’ll vacuum and mop in the same pass.

In my small apartment the S6 had no trouble doing a full floor clean in under an hour, without needing to return to base to recharge in the middle. (Roborock says the S6 will drive for up to three hours on a single charge.)

It also did not seem to get confused by relatively dark flooring in my apartment — which some reviews had suggested can cause headaches for robot vacuums by confusing their cliff sensors.

After that first clean I popped the lid to check on the contents of the S6’s transparent lint bin — finding an impressive quantity of dusty fuzz neatly wadded therein. This was really just robot vacuum porn, though; the gleaming floors spoke for themselves on the quality of the clean.

The level of dust gobbled by the S6 vs the Rowenta underlines the quality difference between the bottom and top end of the robot vacuum category.

So where the latter’s plastic carapace immediately became a magnet for all the room dust it had kicked up but spectacularly failed to suck, the S6’s gleaming white shell has stayed remarkably lint-free, acquiring only a minimal smattering of cat hairs over several days of operation — while the floors it’s worked have been left visibly dust- and fur-free. (At least until the cat got to work dirtying them again.)

Higher suction power, better brushes and a higher quality integrated filter appear to make all the difference. The S6 also does a much better cleaning job a lot more quietly. Roborock claims it’s 50% quieter than the prior model (the S5) and touts it as its quietest robot vacuum yet.

It’s not super silent but is quiet enough when cleaning hard floors not to cause a major disturbance if you’re working or watching something in the same room. Though the novelty can certainly be distracting.

Even the look of the S6 exudes robotic smarts — with its raised laser-housing bump resembling a glowing orange cylonic eye-slot.

I was surprised, at first glance, by the single, rather feeble-looking side brush vs the firm pair the Rowenta had fixed to its undercarriage. But again the S6’s tool is smartly applied — stepping up and down speed depending on what the bot’s tackling. I found it could miss the odd bit of lint or debris such as cat litter, but when it did these specks stood out as the exception on an otherwise clean floor.

It’s also true that the cat did stick its paw in again to try attacking the S6’s single spinning brush. But these attacks were fewer and a lot less fervent than those on the Rowenta, as if the bot’s more deliberate navigation commanded greater respect and/or a more considered ambush. So it appears that even to a feline eye the premium S6 looks a lot less like a dumb toy.

Cat plots another ambush while the S6 works the floor

On a practical front, the S6’s lint bin has a capacity of 480ml. Roborock suggests cleaning it out weekly (assuming you’re using the bot every week), as well as washing the integrated dust filter (it supplies a spare in the box so you can switch one out to clean it and have enough time for it to fully dry before rotating it back into use).

If you use the mopping function the supplied reusable mop cloths do need washing afterwards too (Roborock also includes a few disposable alternatives in the box but that seems a pretty wasteful option when it’s easy enough to stick a reusable cloth in with a load of laundry or give it a quick wash yourself). So if you’re chasing a fully automated, robot-powered, end-to-cleaning-chores dream be warned there’s still a little human elbow grease required to keep everything running smoothly.

Still, there’s no doubt a top-of-the-range robot vacuum like the S6 will save you time cleaning.

If you can justify the not inconsiderable cost involved in buying this extra time by shelling out for a premium robot vacuum that’s smart enough to clean effectively, all that’s left to figure out is how to spend your time windfall wisely — resisting the temptation to just put your feet up and watch the clever little robot at work.

Drone sighting at Germany’s busiest airport grounds flights for about an hour

A drone sighting caused all flights to be suspended at Frankfurt Airport for around an hour this morning. The airport is Germany’s busiest by passenger numbers, serving almost 14.8 million passengers in the first three months of this year.

In a tweet sent after flights had resumed, the airport reported that operations were suspended at 07:27, before the suspension was lifted at 08:15, with flights resuming at 08:18.

It added that security authorities were investigating the incident.

A report in local press suggests more than 100 takeoffs and landings were cancelled as a result of the disruption caused by the drone sighting.

It’s the second such incident at the airport after a drone sighting at the end of March also caused flights to be suspended for around half an hour.

Drone sightings near airports have been on the increase for years as drones have landed in the market at increasingly affordable prices, as have reports of drone near misses with aircraft.

The Frankfurt suspension follows far more major disruption caused by repeat drone sightings at the UK’s second largest airport, Gatwick Airport, late last year — which caused a series of flight shutdowns and travel misery for hundreds of thousands of people right before the holiday period.

The UK government came in for trenchant criticism immediately afterwards, with experts saying it had failed to listen to warnings about the risks posed by drone misuse. A planned drone bill has also been long delayed, meaning new legislation to comprehensively regulate drones has slipped.

In response to the Gatwick debacle, and after criticism from aviation experts, the UK government quickly pushed through an expansion of drone no-fly zones around airports — beefing up the existing 1km exclusion zone to 5km. It also said police would get new powers to tackle drone misuse.

In Germany an amendment to air traffic regulations entered into force in 2017 that prohibits drones being flown within 1.5km of an airport. Drones are also banned from being flown in controlled airspace.

However, with local press reporting rising drone sightings near German airports (the country’s Air Traffic Control registered 125 last year, 31 of which were around Frankfurt), the 1.5km limit looks similarly inadequate.

Waymo and Lyft partner to scale self-driving robotaxi service in Phoenix

Waymo is partnering with Lyft to bring self-driving vehicles onto the ride-hailing network in Phoenix as the company ramps up its commercial robotaxi service.

Waymo will add 10 of its self-driving vehicles onto the Lyft platform over the next few months, according to CEO John Krafcik. Once Waymo vehicles are on the platform, Lyft users in the area will have the option to select a Waymo directly from the Lyft app for eligible rides.

“This first step in our partnership will allow us to introduce the Waymo Driver to Lyft users, enabling them to take what for many will be their first ride in a self-driving vehicle,” Krafcik said in a blog post published Tuesday.

The companies didn’t provide further details about the partnership, but it appears to be similar to Lyft’s relationship with Aptiv, the U.S. auto supplier and self-driving software company. Under that partnership, Aptiv’s self-driving vehicles operate on Lyft’s ride-hailing platform in Las Vegas. As of April 2019, the vehicles had provided more than 40,000 paid autonomous rides in Las Vegas via the Lyft app.

Waymo has been ramping up its autonomous ride-hailing network in Phoenix for months now. In April, the company made its ride-hailing service and accompanying app, Waymo One, more widely available by putting it on the Google Play store.

The company, which spun out to become a business under Alphabet, launched Waymo One in the Phoenix area in December. The Waymo One self-driving car service, and accompanying app, was only available to Phoenix residents who were part of its early rider program, which aimed to bring vetted regular folks into its self-driving minivans.

Life-size robo-dinosaur and ostrich backpack hint at how first birds got off the ground

Everyone knows birds descended from dinosaurs, but exactly how that happened is the subject of much study and debate. To help clear things up, these researchers went all out and just straight up built a robotic dinosaur to test their theory: that these proto-birds flapped their “wings” well before they ever flew.

Now, this isn’t some hyper-controversial position or anything. It’s pretty reasonable when you think about it: natural selection tends to emphasize existing features rather than invent them from scratch. If these critters had, say, moved from being quadrupedal to being bipedal and had some extra limbs up front, it would make sense that over a few million years those limbs would evolve into something useful.

But when did it start, and how? To investigate, Jing-Shan Zhao of Tsinghua University in Beijing looked into an animal called Caudipteryx, a ground-dwelling creature with feathered forelimbs that could be considered “proto-wings.”

Based on the well-preserved fossil record of this bird-dino crossover, the researchers estimated a number of physiological metrics, such as the creature’s top speed and the rhythm with which it would run. From this they could estimate forces on other parts of the body — just as someone studying a human jogger would be able to say that such and such a joint is under this or that amount of stress.

What they found was that, in theory, these “natural frequencies” and biophysics of the Caudipteryx’s body would cause its little baby wings to flap up and down in a way suggestive of actual flight. Of course they wouldn’t provide any lift, but this natural rhythm and movement may have been the seed which grew over generations into something greater.
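The underlying physics is the familiar driven, damped oscillator: drive it near its natural frequency and the response peaks. Here's a toy version of that argument with made-up parameters, not the paper's actual values:

```python
import numpy as np

# Model a proto-wing as a damped oscillator driven at the animal's stride frequency.
# Stiffness, mass and damping below are invented for illustration only.

K, M, C = 50.0, 0.1, 0.2                     # stiffness, mass, damping
natural_hz = np.sqrt(K / M) / (2 * np.pi)    # ~3.6 Hz for these numbers

def flap_amplitude(drive_hz, force=1.0):
    """Steady-state amplitude of a sinusoidally driven damped oscillator."""
    w = 2 * np.pi * drive_hz
    return force / np.sqrt((K - M * w**2) ** 2 + (C * w) ** 2)

for hz in [1.0, 2.0, natural_hz, 5.0, 8.0]:
    print(f"drive {hz:5.2f} Hz -> amplitude {flap_amplitude(hz):.3f}")
# The peak sits at the natural frequency: running at the "right" speed
# shakes the wings hardest, with no flight muscles required.
```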

To give this theory a bit of practical punch, the researchers then constructed a pair of unusual mechanical items: a pair of replica Caudipteryx wings for a juvenile ostrich to wear, and a robotic dinosaur that imitated the original’s gait. A bit fanciful, sure — but why shouldn’t science get a little crazy now and then?

In the case of the ostrich backpack, they literally just built a replica of the dino-wings and attached it to the bird, then had the bird run. Sensors on board the device verified what the researchers observed: that the wings flapped naturally as a result of the body’s motion and vibrations from the feet impacting the ground.

The robot is a life-size reconstruction based on a complete fossil of the animal, made of 3D-printed parts, to which the ostrich’s fantasy wings could also be affixed. The researchers’ theoretical model predicted that the flapping would be most pronounced as the speed of the bird approached 2.31 meters per second — and that’s just what they observed in the stationary model imitating gaits corresponding to various running speeds.

You can see another gif over at the Nature blog. As the researchers summarize:

These analyses suggest that the impetus of the evolution of powered flight in the theropod lineage that lead to Aves may have been an entirely natural phenomenon produced by bipedal motion in the presence of feathered forelimbs.

Just how legit is this? Well, I’m not a paleontologist. And an ostrich isn’t a Caudipteryx. And the robot isn’t exactly convincing to look at. We’ll let the scholarly community pass judgment on this paper and its evidence (don’t worry, it’s been peer reviewed), but I think it’s fantastic that the researchers took this route to test their theory. A few years ago this kind of thing would have been far more difficult to do, and although it seems a little silly when you watch it (especially in gif form), there’s a lot to be said for this kind of real-life tinkering when so much of science is occurring in computer simulations.

The paper was published today in the journal PLOS Computational Biology.

CMU uses knitting machines to make soft robots that hug

Perhaps people fear robotics and automation simply because they’re not cuddly enough. It’s certainly an under-explored aspect in the growing field of soft robotics. A team at Carnegie Mellon is taking on the challenge by creating soft robots with knitting machines.

Cuddliness aside, the real goal here is to design robotic form factors that are lower cost, less dangerous and in some cases even wearable. The team is designing an automated process that adds tendons, which can connect to harder motors, in order to create movement. Examples include, no joke, “stuffed figures that give hugs when poked in the stomach and even a sweater with a sleeve that moves on its own.”

Down the road, the research could lead to more serious soft robotics created with commercial knitting machines designed to produce garments.

“We have so many soft objects in our lives and many of them could be made interactive with this technology,” CMU PhD student Lea Albaugh says in a release tied to the news. “A garment could be part of your personal information system. Your sweater, for example, might tap you on your shoulder to get your attention. The fabric of a chair might serve as a haptic interface. Backpacks might open themselves.”

In a sense, it’s a kind of old-school take on 3D printing and other additive manufacturing. Potential materials for tendons include polyester-wrapped quilting thread, pure silk yarn and nylon monofilament. Conductive yarn, meanwhile, could give the robot a much better sense of its own movements.

Inspection robots are climbing the walls to monitor safety conditions in hazardous locations

Down in Christchurch, New Zealand, a team of roboticists at Invert Robotics has commercialized an inspection robot that uses tiny suction cups on a series of treads, plus a specialty chemical, to create a technology that has robots literally climbing the walls.

Meanwhile, a world away in Pittsburgh, Gecko Robotics is tackling much the same problem with high-powered magnets and an inspection robot of its own.

Both companies have recently closed on new financing, with Invert raising $8.8 million from investors including Finistere Ventures and Yamaha Motor Ventures & Laboratory Silicon Valley, and Gecko Robotics wrapping up a $9 million round it began raising last June, according to a filing with the Securities and Exchange Commission.

For the food-focused investment fund, Finistere Ventures, the benefit of a wall-climbing robot is apparent in looking at supply chain issues, according to co-founder and partner Arama Kukutai.

“The immediate value of Invert Robotics across the global food supply chain – from ensuring food and beverages are stored and transported in safe, pathogen-free environments, to avoiding catastrophic failures in agrichemical-industry containers and plants – is undeniably impressive,” Kukutai said in a statement. “However, we see the potential applications as almost limitless.”

Plant inspections in the food, chemicals and aviation industry are dangerous endeavors, and automation can make a significant improvement in how companies address the critical function of quality assurance, according to investors and entrepreneurs.

“There has been virtually no innovation in industrial services technology for decades,” Founders Fund partner Trae Stephens told TechCrunch in a statement. “Gecko’s robots massively reduce facility shutdown time while gathering critical performance data and preventing potentially fatal accidents. The demand for what they are building is huge.”

While Gecko uses powerful magnets to secure its robots to surfaces, Invert Robotics uses powerful suction to enable its robots to climb the walls.

“If you think of a plunger and how a plunger adheres to a surface… it creates a perfect seal with the surface, you find it very hard to lift the plunger off the surface,” said managing director Neil Fletcher. “We’ve taken that concept and we’ve made it able to slide along the surface without losing the vacuum. It’s a fine balance between maintaining the vacuum that we’ve created and leaking enough air into the vacuum to allow the unit to slide along, and we coat the suction cups with a special chemical that reduces the friction.”

Both agriculture and chemicals represent billion-dollar markets for non-destructive testing, Fletcher said, and the company is already working with companies like Dow Chemical and BASF to assess their processing assets and ensure that they’re fit for use.

Yamaha has a strategic interest in developing these types of robotics systems, which prompted the investment from the firm’s skunkworks and investment shop out of Silicon Valley.

“As part of Yamaha’s long-term vision supporting the development of advanced robots to improve workplace efficiency and safety, Invert Robotics’ technology and its value proposition made a positive impression on our investment committee,” added Craig Boshier, partner and general manager for Yamaha Motor Ventures in Australia and New Zealand. “Importantly, the robotic technology’s adaptability to different environments and industries is well supported by an engaged team. That combination, with proper capitalization, positions Invert Robotics for success in its global market expansion.”

Pittsburgh’s own Gecko Robotics has similar aspirations, and an investor base including Mark Cuban, Founders Fund, The Westly Group, Justin Kan and Y Combinator.

Since 2012, the company has been working on its technology, using ultrasound transducers and a high-def camera to scan boiler walls as its robot scales them.

Given the billions of dollars in demand, and the potential life-saving applications, it’s no wonder investors are clambering to get a piece of the market.