Ekasbo’s Matebot may be the cutest cat robot yet created

If Shrek saw Matebot, no amount of sad-eyes could win him back to Puss in Boots’ side. Created by Shenzhen-based robotics company Ekasbo, Matebot looks like a black and white cartoon cat and responds to your touch by wiggling its ears, changing the expression in its big LED eyes and tilting its head.

Ekasbo's Matebot in a sad mood

Built with voice recognition, infrared technology and seven moving parts, the Matebot is designed to serve as an interactive companion, including for people who can’t keep pets, creator Zhang Meng told TechCrunch at Computex in Taiwan.

The Matebot is controlled with a smartphone app and can be integrated with Android voice control systems. Its price starts at 4,999 yen, or about $45.

This robot learns its two-handed moves from human dexterity

If robots are really to help us out around the house or care for our injured and elderly, they’re going to want two hands… at least. But using two hands is harder than we make it look — so this robotic control system learns from humans before attempting to do the same.

The idea behind the research, from the University of Wisconsin-Madison, isn’t to build a two-handed robot from scratch, but simply to create a system that understands and executes the same type of manipulations that we humans do without thinking about them.

For instance, when you need to open a jar, you grip it with one hand and move it into position, then tighten that grip as the other hand takes hold of the lid and twists or pops it off. There’s so much going on in this elementary two-handed action that it would be hopeless to ask a robot to do it autonomously right now. But that robot could still have a general idea of why this type of manipulation is done on this occasion, and do what it can to pursue it.

The researchers first had humans wearing motion capture equipment perform a variety of simulated everyday tasks, like stacking cups, opening containers and pouring out the contents, and picking up items with other things balanced on top. All this data — where the hands go, how they interact, and so on — was chewed up and ruminated on by a machine learning system, which found that people tended to do one of four things with their hands:

  • Self-handover: This is where you pick up an object and put it in the other hand so it’s easier to put it where it’s going, or to free up the first hand to do something else.
  • One hand fixed: An object is held steady by one hand providing a strong, rigid grip, while the other performs an operation on it like removing a lid or stirring the contents.
  • Fixed offset: Both hands work together to pick something up and rotate or move it.
  • One hand seeking: Not actually a two-handed action, but the principle of deliberately keeping one hand out of action while the other finds the object required or performs its own task.

The robot put this knowledge to work not in doing the actions itself — again, these are extremely complex motions that current AIs are incapable of executing — but in its interpretations of movements made by a human controller.

You would think that when a person is remotely controlling a robot, it would just mirror the person’s movements exactly. And in the tests, the robot does exactly that, providing a baseline that shows how, without knowledge of these “bimanual actions,” many of them are simply impossible.

Think of the jar-opening example. We know that when we’re opening the jar, we have to hold one side steady with a stronger grip and may even have to push back with the jar hand against the movement of the opening hand. If you tried to do this remotely with robotic arms, that information is no longer present, and one hand will likely knock the jar out of the other’s grip, or fail to grip it properly because the other isn’t helping out.

The system created by the researchers recognizes when one of the four actions above is happening, and takes measures to make sure that they’re a success. That means, for instance, being aware of the pressures exerted on each arm by the other when they pick up a bucket together. Or providing extra rigidity to the arm holding an object while the other interacts with the lid. Even when only one hand is being used (“seeking”), the system knows that it can deprioritize the movements of the unused hand and dedicate more resources (be it body movements or computational power) to the working hand.
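To make that concrete, here is a minimal, purely illustrative sketch of how a teleoperation layer might map a recognized bimanual mode to different control settings for each arm. The mode names follow the list above, but the gains, numbers and API are invented for the example, not taken from the paper.

```python
# Hypothetical sketch: adjusting per-arm control once a bimanual mode is
# recognized. The gains and the "which arm holds" assumption are made up.
from dataclasses import dataclass
from enum import Enum, auto


class BimanualMode(Enum):
    SELF_HANDOVER = auto()     # pass an object from one hand to the other
    ONE_HAND_FIXED = auto()    # one hand holds rigid, the other manipulates
    FIXED_OFFSET = auto()      # both hands move an object together
    ONE_HAND_SEEKING = auto()  # only one hand is actively working


@dataclass
class ArmGains:
    stiffness: float  # how rigidly the arm resists disturbance
    priority: float   # share of the motion/compute budget, 0..1


def gains_for_mode(mode: BimanualMode) -> tuple[ArmGains, ArmGains]:
    """Return (left, right) control gains for a detected mode."""
    if mode is BimanualMode.ONE_HAND_FIXED:
        # Assume the left arm is the holder: it gets high stiffness so the
        # working hand can't knock the object loose; the right stays compliant.
        return ArmGains(stiffness=1.0, priority=0.3), ArmGains(stiffness=0.4, priority=0.7)
    if mode is BimanualMode.FIXED_OFFSET:
        # Both arms share the load and must keep a constant relative pose.
        return ArmGains(stiffness=0.8, priority=0.5), ArmGains(stiffness=0.8, priority=0.5)
    if mode is BimanualMode.ONE_HAND_SEEKING:
        # Idle arm is deprioritized; resources go to the seeking arm.
        return ArmGains(stiffness=0.2, priority=0.1), ArmGains(stiffness=0.5, priority=0.9)
    # SELF_HANDOVER: both hands matter equally during the transfer.
    return ArmGains(stiffness=0.6, priority=0.5), ArmGains(stiffness=0.6, priority=0.5)


if __name__ == "__main__":
    left, right = gains_for_mode(BimanualMode.ONE_HAND_FIXED)
    print("jar-opening:", left, right)
```

The point is only the dispatch: once the system knows which of the four modes it is in, it can stiffen, soften or deprioritize each arm accordingly.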

In videos of demonstrations, it seems clear that this knowledge greatly improves the success rate of the attempts by remote operators to perform a set of tasks meant to simulate preparing a breakfast: cracking (fake) eggs, stirring and shifting things, picking up a tray with glasses on it and keeping it level.

Of course this is all still being done by a human, more or less — but the human’s actions are being augmented and re-interpreted into something more than simple mechanical reproduction.

Doing these tasks autonomously is still a long way off, but research like this forms the foundation for that work. Before a robot can attempt to move like a human, it has to understand not just how humans move, but why they do certain things in certain circumstances, and furthermore what important processes may be hidden from obvious observation — things like planning the hand’s route, choosing a grip location, and so on.

The Madison team was led by Daniel Rakita; their paper describing the system is published in the journal Science Robotics.

Why is Facebook doing robotics research?

It’s a bit strange to hear that the world’s leading social network is pursuing research in robotics rather than, say, making search useful, but Facebook is a big organization with many competing priorities. And while these robots aren’t directly going to affect your Facebook experience, what the company learns from them could be impactful in surprising ways.

Though robotics is a new area of research for Facebook, its reliance on and bleeding-edge work in AI are well known. Mechanisms that could be called AI (the definition is quite hazy) govern all sorts of things, from camera effects to automated moderation of restricted content.

AI and robotics are naturally overlapping magisteria — it’s why we have an event covering both — and advances in one often carry over to, or open new areas of inquiry in, the other. So really it’s no surprise that Facebook, with its strong interest in using AI for a variety of tasks in the real and social media worlds, might want to dabble in robotics to mine for insights.

What then could be the possible wider applications of the robotics projects it announced today? Let’s take a look.

Learning to walk from scratch

“Daisy” the hexapod robot.

Walking is a surprisingly complex action, or series of actions, especially when you’ve got six legs, like the robot used in this experiment. You can program in how it should move its legs to go forward, turn around, and so on, but doesn’t that feel a bit like cheating? After all, we had to learn on our own, with no instruction manual or settings to import. So the team looked into having the robot teach itself to walk.

This isn’t a new type of research — lots of roboticists and AI researchers are into it. Evolutionary algorithms (different but related) go back a long way, and we’ve already seen interesting papers along these lines.

By giving their robot some basic priorities like being “rewarded” for moving forward, but no real clue how to work its legs, the team let it experiment and try out different things, slowly learning and refining the model by which it moves. The goal is to reduce the amount of time it takes for the robot to go from zero to reliable locomotion from weeks to hours.
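As a toy illustration of that “reward it for moving forward and let it experiment” loop, here is a sketch in which the “robot” is a one-number simulator and the learner is the crudest possible random search. None of this is Facebook’s actual code; it only shows the shape of the idea.

```python
# Toy example: reward forward progress, penalize effort, keep whatever
# gait parameter scores best. The "simulator" is deliberately trivial.
import random


def rollout(step_size: float, steps: int = 100) -> float:
    """Score a candidate gait parameter: reward = forward progress - effort."""
    total_reward = 0.0
    for _ in range(steps):
        progress = step_size + random.gauss(0, 0.05)  # noisy forward motion
        effort = step_size ** 2                       # bigger steps cost more energy
        total_reward += progress - 0.5 * effort
    return total_reward


def random_search(candidates: int = 200) -> float:
    """Crudest possible learner: try random gaits, keep the best one."""
    best_param, best_reward = 0.0, float("-inf")
    for _ in range(candidates):
        param = random.uniform(0.0, 2.0)
        r = rollout(param)
        if r > best_reward:
            best_param, best_reward = param, r
    return best_param


if __name__ == "__main__":
    print("best step size found:", round(random_search(), 3))
```

A real system would swap the random search for something far more sample-efficient, but the shape of the loop is the same: propose a gait, score it by forward progress, keep what works.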

What could this be used for? Facebook is a vast wilderness of data, complex and dubiously structured. Learning to navigate a network of data is of course very different from learning to navigate an office — but the idea of a system teaching itself the basics on a short timescale given some simple rules and goals is shared.

Learning how AI systems teach themselves, and how to remove roadblocks like mistaken priorities, cheating the rules, weird data-hoarding habits and other stuff is important for agents meant to be set loose in both real and virtual worlds. Perhaps the next time there is a humanitarian crisis that Facebook needs to monitor on its platform, the AI model that helps do so will be informed by the autodidactic efficiencies that turn up here.

Leveraging “curiosity”

Researcher Akshara Rai adjusts a robot arm in the robotics AI lab in Menlo Park. (Facebook)

This work is a little less visual, but more relatable. After all, everyone feels curiosity to a certain degree, and while we understand that sometimes it kills the cat, most times it’s a drive that leads us to learn more effectively. Facebook applied the concept of curiosity to a robot arm being asked to perform various ordinary tasks.

Now, it may seem odd that they could imbue a robot arm with “curiosity,” but what’s meant by that term in this context is simply that the AI in charge of the arm — whether it’s seeing or deciding how to grip, or how fast to move — is given motivation to reduce uncertainty about that action.

That could mean lots of things — perhaps twisting the camera a little while identifying an object gives it a little bit of a better view, improving its confidence in identifying it. Maybe it looks at the target area first to double check the distance and make sure there’s no obstacle. Whatever the case, giving the AI latitude to find actions that increase confidence could eventually let it complete tasks faster, even though at the beginning it may be slowed by the “curious” acts.
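One simple way to cash out that idea, offered here as a hypothetical sketch rather than Facebook’s actual method, is to score candidate actions with an uncertainty-reduction bonus whose weight grows with how unsure the agent currently is:

```python
# Illustrative only: "curiosity" as an uncertainty bonus when scoring
# candidate actions. The numbers, names and weighting are made up.
from dataclasses import dataclass


@dataclass
class Action:
    name: str
    task_progress: float          # how much this advances the actual task
    uncertainty_reduction: float  # how much confidence it is expected to add
    time_cost: float


def score(action: Action, curiosity_weight: float) -> float:
    """Task value plus a curiosity bonus, discounted by how long it takes."""
    return (action.task_progress
            + curiosity_weight * action.uncertainty_reduction
            - 0.1 * action.time_cost)


def pick_action(actions: list[Action], scene_uncertainty: float) -> Action:
    # The less sure the agent is about the scene, the more the curiosity
    # bonus counts; when confident, it just gets on with the task.
    weight = min(1.0, scene_uncertainty)
    return max(actions, key=lambda a: score(a, weight))


if __name__ == "__main__":
    options = [
        Action("grasp object now", task_progress=1.0, uncertainty_reduction=0.0, time_cost=1.0),
        Action("nudge camera for a better view", task_progress=0.0, uncertainty_reduction=1.5, time_cost=0.5),
    ]
    print(pick_action(options, scene_uncertainty=0.9).name)  # unsure: look first
    print(pick_action(options, scene_uncertainty=0.1).name)  # confident: just grasp
```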

What could this be used for? Facebook is big on computer vision, as we’ve seen both in its camera and image work and in devices like Portal, which (some would say creepily) follows you around the room with its “face.” Learning about the environment is critical for both these applications and for any others that require context about what they’re seeing or sensing in order to function.

Any camera operating in an app or device like those from Facebook is constantly analyzing the images it sees for usable information. When a face enters the frame, that’s the cue for a dozen new algorithms to spin up and start working. If someone holds up an object, does it have text? Does it need to be translated? Is there a QR code? What about the background, how far away is it? If the user is applying AR effects or filters, where does the face or hair stop and the trees behind begin?

If the camera, gadget or robot left these tasks to be accomplished “just in time,” it would produce CPU usage spikes, visible latency in the image and all kinds of things the user or system engineer doesn’t want. But if it’s doing them all the time, that’s just as bad. If instead the AI agent exerts curiosity to check these things when it senses too much uncertainty about the scene, that’s a happy medium. This is just one way it could be used, but given Facebook’s priorities it seems like an important one.

Seeing by touching

Although vision is important, it’s not the only way that we, or robots, perceive the world. Many robots are equipped with sensors for motion, sound, and other modalities, but actual touch is relatively rare. Chalk it up to a lack of good tactile interfaces (though we’re getting there). Nevertheless, Facebook’s researchers wanted to look into the possibility of using tactile data as a surrogate for visual data.

If you think about it, that’s perfectly normal — people with visual impairments use touch to navigate their surroundings or acquire fine details about objects. It’s not exactly that they’re “seeing” via touch, but there’s a meaningful overlap between the concepts. So Facebook’s researchers deployed an AI model that decides what actions to take based on video, but instead of actual video data, fed it high-resolution touch data.

Turns out the algorithm doesn’t really care whether it’s looking at an image of the world as we’d see it or not — as long as the data is presented visually, for instance as a map of pressure on a tactile sensor, it can be analyzed for patterns just like a photographic image.
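As a back-of-the-envelope illustration, here is how a pressure grid can be normalized and run through the same primitive a vision model is built on, an ordinary 2D convolution. The sensor shape, the values and the Sobel kernel are invented for the example, not drawn from Facebook’s pipeline.

```python
# Treat a tactile sensor's pressure grid exactly like a grayscale image
# and run a standard image operation (edge detection) over it.
import numpy as np


def pressure_to_image(pressure: np.ndarray) -> np.ndarray:
    """Normalize a raw pressure grid to 0..1, just like pixel intensities."""
    p = pressure.astype(np.float32)
    return (p - p.min()) / (p.max() - p.min() + 1e-8)


def convolve2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Plain 2D convolution, the same primitive a vision CNN is built on."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out


if __name__ == "__main__":
    # Fake 8x8 tactile reading: an object pressing on the sensor's center.
    touch = np.zeros((8, 8))
    touch[2:6, 2:6] = 5.0
    img = pressure_to_image(touch)
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
    edges = convolve2d(img, sobel_x)
    print("object edges found at columns:", np.unique(np.nonzero(edges)[1]))
```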

What could this be used for? It’s doubtful Facebook is super interested in reaching out and touching its users. But this isn’t just about touch — it’s about applying learning across modalities.

Think about how, if you were presented with two distinct objects for the first time, it would be trivial to tell them apart with your eyes closed, by touch alone. Why can you do that? Because when you see something, you don’t just understand what it looks like, you develop an internal model representing it that encompasses multiple senses and perspectives.

Similarly, an AI agent may need to transfer its learning from one domain to another — auditory data telling a grip sensor how hard to hold an object, or visual data telling the microphone how to separate voices. The real world is a complicated place and data is noisier here — but voluminous. Being able to leverage that data regardless of its type is important to reliably being able to understand and interact with reality.

So you see that while this research is interesting in its own right, and can in fact be explained on that simpler premise, it is also important to recognize the context in which it is being conducted. As the blog post describing the research concludes:

We are focused on using robotics work that will not only lead to more capable robots but will also push the limits of AI over the years and decades to come. If we want to move closer to machines that can think, plan, and reason the way people do, then we need to build AI systems that can learn for themselves in a multitude of scenarios — beyond the digital world.

As Facebook continually works on expanding its influence from its walled garden of apps and services into the rich but unstructured world of your living room, kitchen, and office, its AI agents require more and more sophistication. Sure, you won’t see a “Facebook robot” any time soon… unless you count the one they already sell, or the one in your pocket right now.

Kiwi’s food delivery bots are rolling out to 12 new colleges

If you’re a student at UC Berkeley, the diminutive rolling robots from Kiwi are probably a familiar sight by now, trundling along with a burrito inside to deliver to a dorm or apartment building. Now students at a dozen more campuses will be able to join this great, lazy future of robotic delivery as Kiwi expands to them with a clever student-run model.

Speaking at TechCrunch’s Robotics/AI Session at the Berkeley campus, Kiwi’s Felipe Chavez and Sasha Iatsenia discussed the success of their burgeoning business and the way they planned to take it national.

In case you’re not aware of the Kiwi model, it’s basically this: when you place an order online with a participating restaurant, you have the option of delivery via Kiwi. If you so choose, one of the company’s fleet of knee-high robots with insulated, locking storage compartments will swing by the restaurant, your order is placed inside, and the robot brings it to your front door (or as close as it can reasonably get). You can even watch the last bit live from the robot’s perspective as it rolls up to your place.

The robots are what Kiwi calls “semi-autonomous.” This means that although they can navigate most sidewalks and avoid pedestrians, each has a human monitoring it and setting waypoints for it to follow, on average every five seconds. Iatsenia told me that they’d tried going full autonomous and that it worked… most of the time. But most of the time isn’t good enough for a commercial service, so they’ve got humans in the loop. They’re working on improving autonomy but for now this is how it is.
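For a sense of what that human-in-the-loop split might look like, here is a hypothetical sketch, not Kiwi’s software, in which an operator issues a waypoint every few seconds and the robot handles local navigation between them on its own:

```python
# Hypothetical "semi-autonomous" loop: a remote operator drops waypoints
# roughly every five seconds; the robot drives toward the latest one.
import math


def drive_toward(position, waypoint, speed=1.0, dt=0.1):
    """One local-navigation step: move a little toward the current waypoint."""
    dx, dy = waypoint[0] - position[0], waypoint[1] - position[1]
    dist = math.hypot(dx, dy)
    if dist < 1e-6:
        return position
    step = min(speed * dt, dist)
    return (position[0] + step * dx / dist, position[1] + step * dy / dist)


def run(operator_waypoints, seconds_per_waypoint=5.0, dt=0.1):
    """Robot loop: follow the latest human-issued waypoint autonomously."""
    position = (0.0, 0.0)
    for waypoint in operator_waypoints:      # human updates ~every 5 seconds
        t = 0.0
        while t < seconds_per_waypoint:
            position = drive_toward(position, waypoint, dt=dt)
            # obstacle avoidance / pedestrian checks would run here
            t += dt
    return position


if __name__ == "__main__":
    print(run([(3, 0), (3, 4), (6, 4)]))
```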

That the robots are being controlled in some fashion by a team of people in Colombia (where the co-founders hail from) does take a considerable amount of the futurism out of this endeavor, but on reflection it’s kind of a natural evolution of the existing delivery infrastructure. After all, someone has to drive the car that brings you your food as well. And in reality most AI is operated or informed directly or indirectly by actual people.

That those drivers are in South America operating multiple vehicles at a time is a technological advance over your average delivery vehicle — though it must be said that there is an unsavory air of offshoring labor to save money on wages. That said, few people shed tears over the wages earned by the Chinese assemblers who put together our smartphones and laptops, or the garbage pickers who separate your poorly sorted recycling. The global labor economy is a complicated one, and the company is making jobs in the place it was at least partly born.

Whatever the method, Kiwi has traction: it’s done more than 50,000 deliveries and the model seems to have proven itself. Customers are happy, they get stuff delivered more than ever once they get the app, and there are fewer and fewer incidents where a robot is kicked over or, you know, catches on fire. Notably, the founders said on stage, the community has really adopted the little vehicles, and should one overturn or be otherwise interfered with, it’s often set on its way soon after by a passerby.

Iatsenia and Chavez think the model is ready to push out to other campuses, where a similar effort will have to take place — but rather than do it themselves by raising millions and hiring staff all over the country, they’re trusting the robotics-loving student groups at other universities to help out.

For a small and low-cash startup like Kiwi, it would be risky to overextend by taking on a major round and using that to scale up. They started as robotics enthusiasts looking to bring something like this to their campus, so why can’t they help others do the same?

So the team looked at dozens of universities, narrowing them down by factors important to robotic delivery: layout, density, commercial corridors, demographics, and so on. Ultimately they arrived at the following list:

  • Northern Illinois University
  • University of Oklahoma
  • Purdue University
  • Texas A&M
  • Parsons
  • Cornell
  • East Tennessee State University
  • University of Nebraska-Lincoln
  • Stanford
  • Harvard
  • NYU
  • Rutgers

What they’re doing is reaching out to robotics clubs and student groups at those colleges to see who wants to take partial ownership of Kiwi administration out there. Maintenance and deployment would still be handled by Berkeley students, but the student clubs would go through a certification process and then handle the local work, like righting a capsized bot or smoothing over on-site issues with customers and restaurants.

“We are exploring several options to work with students down the road including rev share,” Iatsenia told me. “It depends on the campus.”

So far they’ve sent out 40 robots to the 12 campuses listed and will be rolling out operations as the programs move forward on their own time. If you’re not one of the unis listed, don’t worry — if this goes the way Kiwi plans, it sounds like you can expect further expansion soon.

Digging into key takeaways from our 2019 Robotics+AI Sessions Event

Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. This week, TechCrunch’s Brian Heater and Lucas Matney shared their key takeaways from our Robotics+AI Sessions event at UC Berkeley last week.

The event was filled with panels, demos and intimate discussions with key robotics and deep learning founders, executives and technologists. Brian and Lucas discuss which companies excited them most, as well as which verticals have the most exciting growth prospects in the robotics world.

“This is the second [robotics event] in a row that was done at Berkeley where people really know the events; they respect it, they trust it and we’re able to get really, I would say far and away the top names in robotics. It was honestly a room full of all-stars.

I think our Disrupt events are definitely skewed towards investors and entrepreneurs that may be fresh off getting some seed or Series A cash so they can drop some money on a big ticket item. But here it’s cool because there are so many students, robotics founders and a lot of wide-eyed people wandering over from the student union, grabbing a pass and coming in. So it’s a cool, different level of energy than I think we’re used to.

And I’ll say that this is the key way in which we’ve been able to recruit some of the really big people like why we keep getting Boston Dynamics back to the event, who generally are very secretive.”

Brian and Lucas dive deeper into how several of the major robotics companies and technologies have evolved over time, and also dig into the key patterns and best practices seen in successful robotics startups.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 

 

FarmWise turns to Roush to build autonomous vegetable weeders

FarmWise wants robots to do the dirty part of farming: weeding. With that thought, the San Francisco-based startup enlisted the help of Michigan-based manufacturing and automotive company Roush to build prototypes of the self-driving robots. An early prototype is pictured above.

Financial details of the collaboration were not released.

The idea is that these autonomous weeders will replace herbicides and save the grower on labor. By weeding with high precision, the robotic farm hands can increase crop yields by working day and night to remove unwanted plants. After all, herbicides are used in part because weeding is a terrible job.

With Roush, FarmWise will build a dozen prototypes in 2019, with the intention of scaling to additional units in 2020. But why Michigan?

“Michigan is well-known throughout the world for its manufacturing and automotive industries, the advanced technology expertise and state-of-the-art manufacturing practices,” Thomas Palomares, FarmWise co-founder and CTO said. “These are many of the key ingredients we need to manufacture and test our machines. We were connected to Roush through support from PlanetM, and as a technology startup, joining forces with a large and well-respected legacy automaker is critical to support the scale of our manufacturing plan.”

Roush has a long history in Michigan as a leading manufacturer of high-performance auto parts. More recently, the company has expanded its focus to applying its manufacturing expertise elsewhere, including robotics and alternative fuel system design.

“This collaboration showcases the opportunities that result from connecting startups like FarmWise with Michigan-based companies like Roush that bring their manufacturing know-how to making these concepts a reality,” said Trevor Pawl, group vice president of PlanetM, Pure Michigan Business Connect and International Trade at the Michigan Economic Development Corporation. “We are excited to see this collaboration come to fruition. It is a great example of how Michigan can bring together emerging companies globally seeking prototype and production support with our qualified manufacturing base in the state.”

FarmWise was founded in 2016 and has raised $5.7 million through a seed-stage investment including an investment from Playground Global. TechCrunch first saw FarmWise during Alchemist Accelerator’s batch 15 demo day.

This robotics museum in Korea will construct itself (in theory)

The planned Robot Science Museum in Seoul will have a humdinger of a first exhibition: its own robotic construction. It’s very much a publicity stunt, though a fun one — but who knows? Perhaps robots putting buildings together won’t be so uncommon in the next few years, in which case Korea will just be an early adopter.

The idea for robotic construction comes from Melike Altinisik Architects, the Turkish firm that won a competition to design the museum. Their proposal took the form of an egg-like shape covered in panels that can be lifted into place by robotic arms.

“From design, manufacturing to construction and services robots will be in charge,” wrote the firm in the announcement that they had won the competition. Now, let’s be honest: this is obviously an exaggeration. The building has clearly been designed by the talented humans at MAA, albeit with a great deal of help from computers. But it has been designed with robots in mind, and they will be integral to its creation.

The parts will all be designed digitally, and robots will “mold, assemble, weld and polish” the plates for the outside, according to World Architecture, after which of course they will also be put in place by robots. The base and surrounds will be produced by an immense 3D printer laying down concrete.

So while much of the project will unfortunately have to be done by people, it will certainly serve as a demonstration of those processes that can be accomplished by robots and computers.

Construction is set to begin in 2020, with the building opening its (likely human-installed) doors in 2022 as a branch of the Seoul Metropolitan Museum. Though my instincts tell me that this kind of unprecedented combination of processes is more likely than not to produce significant delays. Here’s hoping the robots cooperate.

Watch the ANYmal quadrupedal robot go for an adventure in the sewers of Zurich

There’s a lot of talk about the many potential uses of multi-legged robots like Cheetahbot and Spot — but in order for those to come to fruition, the robots actually have to go out and do stuff. And to train for a glorious future of sewer inspection (and helping rescue people, probably), this Swiss quadrupedal bot is going deep underground.

ETH Zurich / Daniel Winkler

The robot is called ANYmal, and it’s a long-term collaboration between the Swiss Federal Institute of Technology, abbreviated there as ETH Zurich, and a spin-off from the university called ANYbotics. Its latest escapade was a trip to the sewers below that city, where it could eventually aid or replace the manual inspection process.

ANYmal isn’t brand new — like most robot platforms, it’s been under constant revision for years. But it’s only recently that cameras and sensors like lidar have gotten good enough and small enough that real-world testing in a dark, slimy place like sewer pipes could be considered.

Most cities have miles and miles of underground infrastructure that can only be checked by expert inspectors. This is dangerous and tedious work — perfect for automation. Imagine if, instead of yearly inspections by people, robots were swinging by once a week. If anything looks off, they call in the humans. The robots could also enter areas rendered inaccessible by disasters or simply too small for people to navigate safely.

But of course, before an army of robots can inhabit our sewers (where have I encountered this concept before? Oh yeah…) the robot needs to experience and learn about that environment. First outings will be only minimally autonomous, with more independence added as the robot and team gain confidence.

“Just because something works in the lab doesn’t always mean it will in the real world,” explained ANYbotics co-founder Peter Fankhauser in the ETHZ story.

Testing the robot’s sensors and skills in a real-world scenario provides new insights and tons of data for the engineers to work with. For instance, when the environment is completely dark, laser-based imaging may work, but what if there’s a lot of water, steam or smoke? ANYmal should also be able to feel its surroundings, its creators decided.

ETH Zurich / Daniel Winkler

So they tested both sensor-equipped feet (with mixed success) and the possibility of ANYmal raising its “paw” to touch a wall, to find a button or determine temperature or texture. This latter action had to be manually improvised by the pilots, but clearly it’s something it should be able to do on its own. Add it to the list!

You can watch “Inspector ANYmal’s” trip beneath Zurich in the video below.

See Spot dance: Watch a Boston Dynamics robot get a little funky

In this fun video, the Boston Dynamics Spot dances, wiggles and shimmies right into our hearts. This little four-legged robot – a smaller sibling to the massive BigDog – is surprisingly agile, and the team at Boston Dynamics has taught it to dance to Bruno Mars, which means that robots could soon replace us on the factory floor and on the dance floor. Good luck, meatbags!

As one YouTube commenter noted: if you think Spot is happy now just imagine how it will dance when we’re all gone!