This robotics museum in Korea will construct itself (in theory)

The planned Robot Science Museum in Seoul will have a humdinger of a first exhibition: its own robotic construction. It’s very much a publicity stunt, though a fun one — but who knows? Perhaps robots putting buildings together won’t be so uncommon in the next few years, in which case Korea will just be an early adopter.

The idea for robotic construction comes from Melike Altinisik Architects, the Turkish firm that won a competition to design the museum. Their proposal is an egg-like structure covered in panels that can be lifted into place by robotic arms.

“From design, manufacturing to construction and services robots will be in charge,” wrote the firm in the announcement that they had won the competition. Now, let’s be honest: this is obviously an exaggeration. The building has clearly been designed by the talented humans at MAA, albeit with a great deal of help from computers. But it has been designed with robots in mind, and they will be integral to its creation.

The parts will all be designed digitally, and robots will “mold, assemble, weld and polish” the plates for the exterior, according to World Architecture, after which they will, of course, also be put in place by robots. The base and surrounds will be produced by an immense 3D printer laying down concrete.

So while much of the project will unfortunately have to be done by people, it will certainly serve as a demonstration of those processes that can be accomplished by robots and computers.

Construction is set to begin in 2020, with the building opening its (likely human-installed) doors in 2022 as a branch of the Seoul Metropolitan Museum. My instincts tell me, though, that this kind of unprecedented combination of processes is more likely than not to produce significant delays. Here’s hoping the robots cooperate.

Watch the ANYmal quadrupedal robot go for an adventure in the sewers of Zurich

There’s a lot of talk about the many potential uses of multi-legged robots like Cheetahbot and Spot — but in order for those to come to fruition, the robots actually have to go out and do stuff. And to train for a glorious future of sewer inspection (and helping rescue people, probably), this Swiss quadrupedal bot is going deep underground.

ETH Zurich / Daniel Winkler

The robot is called ANYmal, and it’s a long-term collaboration between the Swiss Federal Institute of Technology (ETH Zurich) and ANYbotics, a spin-off from the university. Its latest escapade was a trip into the sewers beneath the city, where it could eventually aid or replace the manual inspection process.

ANYmal isn’t brand new — like most robot platforms, it’s been under constant revision for years. But it’s only recently that cameras and sensors like lidar have gotten good enough and small enough that real-world testing in a dark, slimy place like sewer pipes could be considered.

Most cities have miles and miles of underground infrastructure that can only be checked by expert inspectors. This is dangerous and tedious work — perfect for automation. Imagine if, instead of yearly inspections by people, a robot swung by once a week, calling in the humans only when something looked off. It could also enter areas rendered inaccessible by disasters, or ones simply too small for people to navigate safely.

But of course, before an army of robots can inhabit our sewers (where have I encountered this concept before? Oh yeah…) the robot needs to experience and learn about that environment. First outings will be only minimally autonomous, with more independence added as the robot and team gain confidence.

“Just because something works in the lab doesn’t always mean it will in the real world,” explained ANYbotics co-founder Peter Fankhauser in the ETHZ story.

Testing the robot’s sensors and skills in a real-world scenario provides new insights and tons of data for the engineers to work with. For instance, when the environment is completely dark, laser-based imaging may work, but what if there’s a lot of water, steam or smoke? ANYmal should also be able to feel its surroundings, its creators decided.

ETH Zurich / Daniel Winkler

So they tested both sensor-equipped feet (with mixed success) and the possibility of ANYmal raising its “paw” to touch a wall, to find a button or determine temperature or texture. This latter action had to be manually improvised by the pilots, but clearly it’s something it should be able to do on its own. Add it to the list!

You can watch “Inspector ANYmal’s” trip beneath Zurich in the video below.

See Spot dance: Watch a Boston Dynamics robot get a little funky

In this fun video, Boston Dynamics’ Spot dances, wiggles and shimmies right into our hearts. The little four-legged robot – a smaller sibling to the massive BigDog – is surprisingly agile, and the team at Boston Dynamics has taught it to dance to Bruno Mars, which means robots could soon replace us on the factory floor and on the dance floor alike. Good luck, meatbags!

As one YouTube commenter noted: if you think Spot is happy now just imagine how it will dance when we’re all gone!

The Salto-1P now does amazing targeted jumps

When we last met Salto the jumping robot, it was bopping around like a crazed grasshopper. Now researchers have added targeting systems to the little creature, allowing it to maintain a constant hop while controlling exactly when and where it lands.

Using a technique called “deadbeat foot placement hopping control,” Salto can now watch a surface for a target and essentially fly over to exactly where it needs to land, steering itself with its built-in propellers.

Researchers Duncan Haldane, Justin Yim and Ronald Fearing created Salto with support from the Army Research Office, and they will be exhibiting the little guy at the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems.

The team upgraded Salto’s controller to make it far more precise on landing, a feat that was almost impossible with the decades-old control scheme it had been using. “The robot behaves more or less like a spring-loaded inverted pendulum, a simplified dynamic model that shows up often enough in both biology and robotics that it has its own acronym: SLIP,” wrote Evan Ackerman at IEEE Spectrum. “Way back in the 1980s, Marc Raibert developed a controller for SLIP-like robots, and people are still using it today, including Salto-1P up until just recently.”
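
For the curious, the classic Raibert approach that quote refers to boils down to a one-line foot-placement rule. Here is a minimal Python sketch of that heuristic; the gain and timing values are invented for illustration and are not Salto-1P’s actual parameters:

```python
# Minimal sketch of a Raibert-style foot-placement rule for a
# SLIP-like hopper. Gains and timings are made up for illustration.

def raibert_foot_placement(v, v_des, stance_duration, k_gain=0.05):
    """Horizontal foot offset (m) for the next touchdown.

    v               -- current horizontal velocity (m/s)
    v_des           -- desired horizontal velocity (m/s)
    stance_duration -- expected time on the ground (s)
    k_gain          -- feedback gain on the velocity error
    """
    # "Neutral point": placing the foot here roughly maintains the
    # current velocity through the stance phase.
    neutral = v * stance_duration / 2.0
    # Shift the foot forward or back to slow down or speed up.
    correction = k_gain * (v - v_des)
    return neutral + correction

# Hopping at 1.2 m/s while trying to slow to 1.0 m/s:
print(raibert_foot_placement(v=1.2, v_des=1.0, stance_duration=0.1))
```

Deadbeat control goes a step further: rather than nudging the velocity error toward zero over several hops, it solves for the foot placement that reaches the target state in a single hop, which is what lets Salto put its landings exactly where they’re wanted.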

This robotic hose-dragon could jet its way into burning buildings

While hose-toting drones may be a fantasy, hose-powered robo-dragons (or robotic hose-dragons — however you like it) are very much a reality. This strange but potentially useful robot from Japanese researchers could snake into the windows of burning buildings, blasting everything around it with the powerful jets of water it uses to maneuver itself.

Yes, it’s a real thing: created by Tohoku University and Hachinohe College, the DragonFireFighter was presented last month at the International Conference on Robotics and Automation.

It works on the same principle your hose does when you turn it on and it starts flapping around everywhere. Essentially your hose is acting as a simple jet: the force of the water being blasted out pushes the hose itself in the opposite direction. So what if the hose had several nozzles, pointing in several directions, that could be opened and closed independently?
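
To put rough numbers on that principle: the reaction force of a jet is simply the mass flow rate times the exit velocity. Here is a back-of-the-envelope Python sketch; the nozzle size and exit speed are assumptions for illustration, not DragonFireFighter’s actual specs:

```python
import math

# Thrust of a water jet: F = (mass flow rate) * (exit velocity)
#                          = rho * A * v * v
# All numbers are illustrative assumptions, not DragonFireFighter specs.

RHO_WATER = 1000.0  # density of water, kg/m^3

def jet_thrust(diameter_m, exit_velocity_ms):
    """Reaction force in newtons from a single round nozzle."""
    area = math.pi * (diameter_m / 2.0) ** 2         # nozzle cross-section, m^2
    mass_flow = RHO_WATER * area * exit_velocity_ms  # kg/s
    return mass_flow * exit_velocity_ms              # N

# A 10 mm nozzle at 30 m/s yields roughly 70 N of thrust:
print(f"{jet_thrust(0.010, 30.0):.1f} N")
```

A few tens of newtons per nozzle can be enough to lift and steer a hose segment, which is why opening and closing a handful of independently aimed jets suffices to control the whole “dragon.”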

Well, you’d have a robotic hose-dragon. And we do.

The DragonFireFighter has a nozzle-covered sort of “head” and what can only be called a “neck.” The water pressure from the hose is diverted into numerous outlets on both, in order to hold a stable position that can be adjusted more or less at will.

It requires a bit of human intervention to move forward, but as you can see, several jets are already pushing it in that direction, presumably at this point for stability and rigidity. If the operators had a little more line to give it, it seems it could zoom out quite a bit further than it was permitted to in the video.

For now it may be more effective to just direct all that water pressure into the window, but one can certainly imagine situations where something like this would be useful.

DragonFireFighter was also displayed at the International Fire and Disaster Prevention Exhibition in Tokyo.

One last thing. I really have to give credit where credit’s due: I couldn’t possibly outdo IEEE Spectrum’s headline, “Firefighting Robot Snake Flies on Jets of Water.”

Nvidia’s researchers teach a robot to perform simple tasks by observing a human

Industrial robots are typically all about repeating a well-defined task over and over again, usually at a safe distance from the fragile humans that programmed them. More and more, however, researchers are thinking about how robots can work in close proximity to humans and even learn from them. That’s part of what Nvidia’s new robotics lab in Seattle focuses on, and the company’s research team today presented some of its most recent work on teaching robots by observing humans at the International Conference on Robotics and Automation (ICRA) in Brisbane, Australia.

Nvidia’s director of robotics research Dieter Fox.

As Dieter Fox, the senior director of robotics research at Nvidia (and a professor at the University of Washington), told me, the team wants to enable this next generation of robots that can safely work in close proximity to humans. But to do that, those robots need to be able to detect people, track their activities and learn how they can help them, whether in a small-scale industrial setting or in somebody’s home.

While it’s possible to train an algorithm to successfully play a video game through rote repetition, teaching it to learn from its mistakes, Fox argues that the decision space for training robots that way is far too large to do so efficiently. Instead, a team of Nvidia researchers led by Stan Birchfield and Jonathan Tremblay developed a system that allows them to teach a robot to perform new tasks by simply observing a human.

The tasks in this example are pretty straightforward and involve nothing more than stacking a few colored cubes. But it’s also an important step on the longer journey toward quickly teaching robots new tasks.

The researchers first trained a sequence of neural networks to detect objects, infer the relationship between them and then generate a program to repeat the steps it witnessed the human perform. The researchers say this new system allowed them to train their robot to perform this stacking task with a single demonstration in the real world.

One nifty aspect of this system is that it generates a human-readable description of the steps it’s performing. That way, it’s easier for the researchers to figure out what happened when things go wrong.
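
To make the shape of that pipeline concrete, here is a minimal Python sketch of the perceive, infer, generate-program flow. The function names and data formats are hypothetical stand-ins, not Nvidia’s actual code:

```python
# Hypothetical sketch of the three-stage pipeline described above.
# The real system uses trained neural networks for the first two stages.

def perceive(image):
    """Stage 1 (object detection): image -> list of detected cubes."""
    ...  # stand-in for the detection network

def infer_relations(cubes):
    """Stage 2: cubes -> spatial relations, e.g. ('red', 'on', 'blue')."""
    ...  # stand-in for the relationship network

def generate_program(relations):
    """Stage 3: emit a human-readable plan reproducing the observed state."""
    steps = []
    for top, _, bottom in relations:
        steps.append(f"pick up the {top} cube and place it on the {bottom} cube")
    return steps

# One observed demonstration, already reduced to relations:
demo = [("red", "on", "blue"), ("blue", "on", "green")]
for step in generate_program(demo):
    print(step)
```

That human-readable middle layer is the debugging win: when a stack goes wrong, the engineers can read the generated steps and tell whether perception, inference or execution is to blame.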

Nvidia’s Stan Birchfield tells me that the team aimed to make training the robot easy for a non-expert — and few things are easier to do than to demonstrate a basic task like stacking blocks. In the example the team presented in Brisbane, a camera watches the scene and the human simply walks up, picks up the blocks and stacks them. Then the robot repeats the task. Sounds easy enough, but it’s a massively difficult task for a robot.

To train the core models, the team mostly used synthetic data from a simulated environment. As both Birchfield and Fox stressed, it’s these simulations that allow for quickly training robots. Training in the real world would take far longer, after all, and can also be far more dangerous. And for most of these tasks, there is no labeled training data available to begin with.
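
A toy example of why simulation sidesteps the labeled-data problem: a synthetic scene generator knows the ground truth by construction, so every sample arrives pre-labeled. Everything below is invented for illustration:

```python
import random

# Toy synthetic-data generator: labels are free because the generator
# built the scene itself. Purely illustrative, not Nvidia's tooling.

COLORS = ["red", "green", "blue", "yellow"]

def synthetic_scene():
    """Randomly stack 2-4 cubes; return the scene plus perfect labels."""
    stack = random.sample(COLORS, k=random.randint(2, 4))  # bottom to top
    labels = [(upper, "on", lower) for lower, upper in zip(stack, stack[1:])]
    scene = {"cubes": stack}  # stand-in for a rendered image
    return scene, labels

# Thousands of labeled examples in milliseconds, with no human
# annotation and no robot hardware at risk:
dataset = [synthetic_scene() for _ in range(1000)]
print(dataset[0])
```

In the real system the “scene” is of course a rendered image rather than a list of names, but the free-labels property is the same.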

“We think using simulation is a powerful paradigm going forward to train robots to do things that weren’t possible before,” Birchfield noted. Fox echoed this and noted that this need for simulation is one of the reasons why Nvidia thinks its hardware and software are ideally suited for this kind of research. There is a very strong visual aspect to this training process, after all, and Nvidia’s background in graphics hardware surely helps.

Fox admitted that there’s still a lot of research left to be done here (most of the simulations aren’t photorealistic yet, after all), but the core foundations are now in place.

Going forward, the team plans to expand the range of tasks that the robots can learn and the vocabulary necessary to describe those tasks.

Android co-creator isn’t sure whether robots will adopt a single platform

Android co-creator Andy Rubin isn’t so sure there will be one software platform to rule all robots. The former Google exec and Playground Global CEO talked at length about the role of platforms for automation at TechCrunch’s TC Sessions: Robotics event at UC Berkeley.

“The business model of platformization is something that is near and dear to my heart,” Rubin said. “For robotics and automatization, the idea of there being one cohesive platform that everyone ends up adopting? I’m not sure.”

Rubin did speak at length about the eventual need for companies to create systems for sharing machine learning data, letting these machines communicate what they’ve learned to one another so that obstacles only have to be overcome once across different devices.

Also in the panel was a quick look at the latest iteration of Cassie, a bipedal robot from Agility Robotics.

Agility Robotics Cassie

Robot posture and movement style affect how humans interact with them

It seems obvious that the way a robot moves would affect how people interact with it, and whether they consider it easy or safe to be near. But what poses and movement types specifically are reassuring or alarming? Disney Research looked into a few of the ways a robot might approach a simple interaction with a nearby human.

The study had people picking up a baton with a magnet at one end and passing it to a robotic arm, which would automatically move to collect the baton with its own magnet.

But the researchers threw variations into the mix to see how they affected the forces involved, how people moved and what they felt about the interaction. The interaction had three phases, each tested in two variants: movement into position, grasping the object, and removing it from the person’s hand.

For movement, it either started hanging down inertly and sprung up to move into position, or it began already partly raised. The latter condition was found to make people accommodate the robot more, putting the baton into a more natural position for it to grab. Makes sense — when you pass something to a friend, it helps if they already have their hand out.

Grasping was done either quickly or more deliberately. In the first condition, the robot’s arm attaches the magnet as soon as it’s in position; in the second, it pushes up against the baton and repositions it for a more natural withdrawal. There wasn’t a big emotional difference here, but the opposing forces were much lower with the second grasp type, perhaps meaning it was easier.

Once attached, the robot retracted the baton either slowly or more quickly. Humans preferred the former, saying that the latter felt as if the object was being yanked out of their hands.

The results won’t blow anyone’s mind, but they’re an important contribution to the fast-growing field of human-robot interaction. Once there are best practices for this kind of thing, robots that, say, clear your table at a restaurant or hand workers items in a factory can operate with the knowledge that they won’t be producing any extra anxiety in nearby humans.

A side effect of all this was that the people in the experiment gradually seemed to learn to predict the robot’s movements and accommodate them — as you might expect. But it’s a good sign that even over a handful of interactions a person can start building a rapport with a machine they’ve never worked with before.

These robotic skiers hit the slopes in style

Researchers took part in the Ski Robot Challenge last month, and the resulting videos are essentially quick cuts of robots in ski jackets totally whanging off the gates and spinning out in the powder. The Challenge, run by the Korea Institute for Robot Industry Advancement, is sort of a Winter Olympics for wonky androids. The rules are pretty complex. According to Spectrum: Each robot must…

Robot assistants and a marijuana incubator

We’ve had plenty of time to get used to our robot overlords, and Boston Dynamics is helping us get there. This week we talk about the company’s addition of a door-opening arm to its SpotMini robot. It’s not spooky at all. We then switch gears and discuss Facebook’s Messenger for Kids. Is it good, bad or the company’s master plan to get every last human being with…