Early-bird savings end next Friday on tickets to Robotics+AI 2020

TechCrunch Sessions: Robotics+AI 2020 is gearing up to be one amazing show. This annual day-long event draws the brightest minds and makers from these two industries — 1,500 attendees last year alone. And if you really want to make 2020 a game-changing year, grab an early-bird ticket and save $150 before prices go up after January 31.

Not convinced yet? Check out these agenda highlights featuring some of today’s top minds in robotics and AI:

  • Saving Humanity from AI with Stuart Russell (UC Berkeley)
    The UC Berkeley professor and AI authority argues in his acclaimed new book, “Human Compatible,” that AI will doom humanity unless technologists fundamentally reform how they build AI algorithms.
  • Automating Amazon with Tye Brady (Amazon Robotics)
    Amazon Robotics’ chief technology officer will discuss how the company is using the latest in robotics and AI to optimize its massive logistics operation. He’ll also touch on the future of warehouse automation and how humans and robots share a workspace.
  • Engineering for the Red Planet with Lucy Condakchian (Maxar Technologies)
    Maxar Technologies has been involved with U.S. space efforts for decades, and is about to send its sixth (!) robotic arm to Mars aboard NASA’s Mars 2020 rover. Lucy Condakchian is general manager of robotics at Maxar and will speak to the difficulty and exhilaration of designing robotics for use in the harsh environments of space and other planets.
  • Toward a Driverless Future with Anca Dragan (Waymo/UC Berkeley) and Jur van den Berg (Ike)
    Autonomous driving is set to be one of the biggest categories for robotics and AI. But there are plenty of roadblocks standing in its way. Experts will discuss how we get there from here. 

See the full agenda here.

If you’re a startup, nab one of the five demo tables left and showcase your company to new customers, press, and potential investors. Demo tables run $2,200 and come with four attendee tickets so you can divide and conquer the networking scene at the conference.

Students, get your super-reduced $50 ticket here to learn from some of the biggest names in the biz and maybe meet your future employer or land an internship.

Don’t forget, the early-bird ticket sale ends on January 31. After that, prices go up by $150. Purchase your tickets here and save an additional 18% when you book a group of four or more.

This ultrasonic gripper could let robots hold things without touching them

If robots are to help out in places like hospitals and phone repair shops, they’re going to need a light touch. And what’s lighter than not touching at all? Researchers have created a gripper that uses ultrasonics to suspend an object in midair, potentially making it suitable for the most delicate tasks.

It’s done with an array of tiny speakers that emit sound at very carefully controlled frequencies and volumes. These produce a sort of standing pressure wave that can hold an object up or, if the pressure is coming from multiple directions, hold it in place or move it around.

This kind of “acoustic levitation,” as it’s called, is not exactly new — we see it being used as a trick here and there, but so far there have been no obvious practical applications. Marcel Schuck and his team at ETH Zürich, however, show that a portable version of such a device could easily find a place in processes where tiny objects must be very lightly held.

A small electric component, or a tiny oiled gear or bearing for a watch or micro-robot, for instance, would ideally be held without physical contact, since that contact could impart static or dirt to it. So even when robotic grippers are up to the task, they must be kept clean or isolated. Acoustic manipulation, however, carries far less risk of contamination.

Another, more sinister-looking prototype.

The problem is that it isn’t obvious exactly which combination of frequencies and amplitudes is needed to suspend a given object in the air. So a large part of this work was developing software that can easily be configured to work with a new object, or programmed to move it in a specific way — rotating, flipping or otherwise moving it at the user’s behest.
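
For the curious, here’s a toy sketch (in Python, and emphatically not the team’s actual software) of the kind of calculation involved: choose a phase for each emitter so the waves arrive at a target point in phase, then shift half the array by half a cycle to carve out a pressure null there — the standard “twin trap” recipe from the acoustic-levitation literature. The array geometry and frequency below are illustrative assumptions, not details of the ETH Zürich device.

```python
# Toy sketch of per-emitter phase computation for an ultrasonic phased
# array: focus all waves on a target point, then offset half the array
# by pi to form a "twin trap" pressure null that can hold a small bead.
# Geometry, frequency and trap recipe are illustrative assumptions.
import numpy as np

SPEED_OF_SOUND = 343.0             # m/s in air
FREQ = 40_000.0                    # 40 kHz, a common ultrasonic choice
WAVELENGTH = SPEED_OF_SOUND / FREQ
K = 2 * np.pi / WAVELENGTH         # wavenumber

# An 8x8 grid of emitters in the z=0 plane, half a wavelength apart.
coords = (np.arange(8) - 3.5) * WAVELENGTH / 2
emitters = np.array([[x, y, 0.0] for x in coords for y in coords])

def twin_trap_phases(target):
    """Phases that make every emitter's wave arrive at `target` in
    phase, with a pi offset on half the array to create the null."""
    dists = np.linalg.norm(emitters - target, axis=1)
    phases = -K * dists                     # arrive in phase at target
    phases[emitters[:, 0] > 0] += np.pi     # split array -> twin trap
    return np.mod(phases, 2 * np.pi)

# Hold a bead 5 cm above the center of the array.
print(twin_trap_phases(np.array([0.0, 0.0, 0.05])).reshape(8, 8).round(2))
```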

A working prototype is complete, but Schuck plans to poll various industries to see whether and how such a device could be useful to them. Watchmaking is of course important in Switzerland, and the parts are both small and sensitive to touch. “Toothed gearwheels, for example, are first coated with lubricant, and then the thickness of this lubricant layer is measured. Even the faintest touch could damage the thin film of lubricant,” he points out in the ETHZ news release.

How would a watchmaker use such a robotic arm? How would a designer of microscopic robots, or a biochemist? The potential is clear, but the specific applications are not yet obvious. Fortunately, he has a bit of fellowship cash to spend on the question and hopes to spin it off as a startup next year if his early inquiries bear fruit.

Unearth the future of agriculture at TC Sessions: Robotics+AI with the CEOs of Traptic, Farmwise and Pyka

Farming is one of the oldest professions, but today those amber waves of grain (and soy) are a test bed for sophisticated robotic solutions to problems farmers have had for millennia. Learn about the cutting edge (sometimes literally) of agricultural robots at TC Sessions: Robotics+AI on March 3 with the founders of Traptic, Pyka, and Farmwise.

You may remember Traptic and its co-founder and CEO Lewis Anderson from Disrupt SF 2019, where the company was a finalist in the Startup Battlefield. It has developed a robotic berry picker that identifies ripe strawberries and plucks them off the plants with a gentle grip. It could be the beginning of a new automated era for the fruit industry, which is decades behind grains and other crops when it comes to machine-based harvesting.

Farmwise has a job that’s equally delicate yet involves rough treatment of the plants — weeding. Its towering machine trundles along rows of crops, using computer vision to locate and remove invasive plants, working 24/7, 365 days a year. CEO Sebastian Boyer will speak to the difficulty of this task and how he plans to evolve the machines to become “doctors” for crops, monitoring health and spontaneously removing pests like aphids.

Pyka’s robot is considerably less earthbound than those: an autonomous, all-electric crop-spraying aircraft — with wings! This is a much different challenge from that of the more stable farming and spraying drones like those of DroneSeed and SkyX, but the choice gives the craft more power and range, hugely important for today’s vast fields. Co-founder Michael Norcia can speak to that scale and his company’s methods of meeting it.

These three companies and founders are at the very frontier of what’s possible at the intersection of agriculture and technology, so expect a fruitful conversation.

$150 Early Bird savings end on Feb. 14! Book your $275 Early Bird Ticket today and put that extra money in your pocket.

Students, grab your super discounted $50 tickets right here. You might just meet your future employer or land an internship at this event.

Startups, we only have five demo tables left for the event. Book your $2,200 demo table here and get in front of some of today’s leading names in the biz. Each table comes with four tickets to attend the show.

Facebook speeds up AI training by culling the weak

Training an artificial intelligence agent to do something like navigate a complex 3D world is computationally expensive and time-consuming. In order to better create these potentially useful systems, Facebook engineers derived huge efficiency benefits from, essentially, leaving the slowest of the pack behind.

It’s part of the company’s new focus on “embodied AI,” meaning machine learning systems that interact intelligently with their surroundings. That could mean lots of things — responding to a voice command using conversational context, for instance, but also more subtle things like a robot knowing it has entered the wrong room of a house. Exactly why Facebook is so interested in that I’ll leave to your own speculation, but the fact is they’ve recruited and funded serious researchers to look into this and related domains of AI work.

To create such “embodied” systems, you need to train them using a reasonable facsimile of the real world. One can’t expect an AI that’s never seen an actual hallway to know what walls and doors are. And given how slowly real robots actually move, you can’t expect them to learn those lessons in the real world in any reasonable amount of time. That’s what led Facebook to create Habitat, a set of simulated real-world environments meant to be photorealistic enough that what an AI learns by navigating them could also be applied to the real world.

Such simulators, which are common in robotics and AI training, are also useful because, being simulators, you can run many instances of them at the same time — for simple ones, thousands simultaneously, each one with an agent in it attempting to solve a problem and eventually reporting back its findings to the central system that dispatched it.

Unfortunately, photorealistic 3D environments use a lot of computation compared to simpler virtual ones, meaning that researchers are limited to a handful of simultaneous instances, slowing learning to a comparative crawl.

The Facebook researchers, led by Dhruv Batra and Eric Wijmans, the former a professor and the latter a PhD student at Georgia Tech, found a way to speed up this process by an order of magnitude or more. And the result is an AI system that can navigate a 3D environment from a starting point to goal with a 99.9 percent success rate and few mistakes.

Simple navigation is foundational to a working “embodied AI” or robot, which is why the team chose to pursue it without adding any extra difficulties.

“It’s the first task. Forget the question answering, forget the context — can you just get from point A to point B? When the agent has a map this is easy, but with no map it’s an open problem,” said Batra. “Failing at navigation means whatever stack is built on top of it is going to come tumbling down.”

The problem, they found, was that the training systems were spending too much time waiting on slowpokes. Perhaps it’s unfair to call them that — these are AI agents that for whatever reason are simply unable to complete their task quickly.

“It’s not necessarily that they’re learning slowly,” explained Wijmans. “But if you’re simulating navigating a one bedroom apartment, it’s much easier to do that than navigate a ten bedroom mansion.”

The central system is designed to wait for all its dispatched agents to complete their virtual tasks and report back. If a single agent takes 10 times longer than the rest, that means there’s a huge amount of wasted time while the system sits around waiting so it can update its information and send out a new batch.

This little explanatory gif shows how one stuck agent delays the others from learning from its experience.

The innovation of the Facebook team is to intelligently cut off these unfortunate laggards before they finish. After a certain amount of time in simulation, they’re done and whatever data they’ve collected gets added to the hoard.

“You have all these workers running, and they’re all doing their thing, and they all talk to each other,” said Wijmans. “One will tell the others, ‘okay, I’m almost done,’ and they’ll all report in on their progress. Any ones that see they’re lagging behind the rest will reduce the amount of work that they do before the big synchronization that happens.”

In this case you can see that each worker stops at the same time and shares simultaneously.

If a machine learning agent could feel bad, I’m sure it would at this point, and indeed that agent does get “punished” by the system in that it doesn’t get as much virtual “reinforcement” as the others. The anthropomorphic terms make this out to be more human than it is — essentially inefficient algorithms or ones placed in difficult circumstances get downgraded in importance. But their contributions are still valuable.

“We leverage all the experience that the workers accumulate, no matter how much, whether it’s a success or failure — we still learn from it,” Wijmans explained.

What this means is that there are no wasted cycles where some workers are waiting for others to finish. Bringing in more experience on the task at hand sooner means the next batch of slightly better workers goes out that much earlier, a self-reinforcing cycle that produces serious gains.
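
Here’s a minimal sketch of that preemption idea, with plain Python processes standing in for GPU workers and a random per-step delay standing in for environments of varying difficulty. The thresholds and structure are illustrative guesses, not Facebook’s implementation, which coordinates distributed GPU workers rather than OS processes.

```python
# Toy illustration (not Facebook's code) of straggler preemption: each
# worker collects experience until it either finishes its rollout or
# sees that enough peers are already done, then stops early and
# contributes whatever it has gathered so far.
import multiprocessing as mp
import random
import time

NUM_WORKERS = 8
STEPS_PER_ROLLOUT = 100        # target experience per worker per sync
PREEMPT_FRACTION = 0.6         # cut off stragglers once 60% are done

def worker(rank, done_count):
    experience = []
    # Simulate environments of very different difficulty (a one-bedroom
    # apartment vs. a ten-bedroom mansion).
    step_cost = random.uniform(0.0005, 0.005)
    for step in range(STEPS_PER_ROLLOUT):
        time.sleep(step_cost)            # one simulated environment step
        experience.append(step)
        # A lagging worker cuts itself off rather than making everyone
        # wait for it before the synchronized update.
        if done_count.value >= PREEMPT_FRACTION * NUM_WORKERS:
            break
    with done_count.get_lock():
        done_count.value += 1
    # All collected experience, full rollout or not, still feeds the
    # next synchronized policy update.
    print(f"worker {rank} contributed {len(experience)} steps")

if __name__ == "__main__":
    done_count = mp.Value("i", 0)
    procs = [mp.Process(target=worker, args=(r, done_count))
             for r in range(NUM_WORKERS)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```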

In the experiments they ran, the researchers found that the system, catchily named Decentralized Distributed Proximal Policy Optimization, or DD-PPO, scaled almost ideally, with performance increasing nearly linearly with the computing power dedicated to the task. That is to say, increasing the computing power 10x resulted in nearly 10x the results. Standard algorithms, on the other hand, scaled very poorly: 10x or 100x the computing power yielded only a small boost, because of the way stragglers hamstring the whole system in these sophisticated simulators.

These efficient methods let the Facebook researchers produce agents that could solve a point-to-point navigation task in a virtual environment within their allotted time with 99.9 percent reliability. They even demonstrated robustness to mistakes, quickly recognizing when they’d taken a wrong turn and going back the other way.

The researchers speculated that the agents had learned to “exploit the structural regularities,” a phrase that in some circumstances means the AI figured out how to cheat. But Wijmans clarified that it’s more likely that the environments they used have some real-world layout rules.

“These are real houses that we digitized, so they’re learning things about how western-style houses tend to be laid out,” he said. Just as you wouldn’t expect a kitchen to open directly into a bedroom, the AI has learned to recognize other patterns and make other “assumptions.”

The next goal is to find a way to let these agents accomplish their task with fewer resources. Each agent navigated with a virtual camera that provided ordinary and depth imagery, but it also had an infallible coordinate system to tell where it had traveled and a compass that always pointed toward the goal. If only it were always so easy! Before this work, even with those resources and far more training time, success rates were considerably lower.

Habitat itself is also getting a fresh coat of paint with some interactivity and customizability.

Habitat as seen through a variety of virtualized vision systems.

“Before these improvements, Habitat was a static universe,” explained Wijmans. “The agent can move and bump into walls, but it can’t open a drawer or knock over a table. We built it this way because we wanted fast, large-scale simulation — but if you want to solve tasks like ‘go pick up my laptop from my desk,’ you’d better be able to actually pick up that laptop.”

So Habitat now lets users add objects to rooms, apply forces to those objects, check for collisions, and so on. After all, there’s more to real life than disembodied gliding around a frictionless 3D construct.

The improvements should make Habitat a more robust platform for experimentation, and will also make it possible for agents trained in it to directly transfer their learning to the real world — something the team has already begun work on and will publish a paper on soon.

Diligent’s Vivian Chu and Labrador’s Mike Dooley will discuss assistive robotics at TC Sessions: Robotics+AI

Too often the world of robotics seems to be a solution in search of a problem. Assistive robotics, on the other hand, is one of the primary real-world tasks existing technology can seemingly address almost immediately.

The concept for the technology has been around for some time now and has caught on particularly well in places like Japan, where human help simply can’t keep up with the needs of an aging population. At TC Sessions: Robotics+AI at U.C. Berkeley on March 3, we’ll be speaking with a pair of founders developing offerings for precisely these needs.

Vivian Chu is the co-founder and CEO of Diligent Robotics. The company has developed the Moxi robot to assist with chores and other non-patient tasks, giving caregivers more time to interact with patients. Prior to Diligent, Chu worked at both Google[X] and Honda Research Institute.

Mike Dooley is the co-founder and CEO of Labrador Systems. The Los Angeles-based company recently closed a $2 million seed round to develop assistive robots for the home. Dooley has worked at a number of robotics companies, most recently serving as VP of Product and Business Development at iRobot.

Early Bird tickets are now on sale for $275, but you’d better hurry: prices go up by $100 in less than a month. Students can book a super discounted ticket for just $50 right here.

Soft Robotics raises $23 million from investors including industrial robot giant FANUC

Robotics startup Soft Robotics has closed its Series B round of funding, raising $23 million led by Calibrate Ventures and Material Impact, with participation from existing investors including Honeywell, Yamaha, Hyperplane and more. The round also brings in FANUC, the world’s largest maker of industrial robots and a recently announced strategic partner for Soft Robotics.

The company said in a press release announcing this latest round that it was oversubscribed, which suggests Soft Robotics isn’t looking to glut itself on capital: this $23 million follows a similarly sized $20 million round, closed in 2018, that it also referred to as “oversubscribed.” Prior to that, Soft Robotics had raised $5 million in a Series A round closed in 2015. It has plenty of large, global clients already, so it’s probably not hurting for revenue.

Soft Robotics is focused on developing robotic grippers that, as you might’ve guessed from the name, use soft material endpoints that can more easily grip a range of different objects, without the kind of extremely specific, variation-intolerant programming that’s required for most traditional industrial robotic claws.

With its 2018 funding raise, Soft Robotics said that it was expanding further into food and beverage, as well as doubling down on its presence in the retail and logistics industries. This round and its new partnership with FANUC (which involves a new integrated system that pairs its mGrip robotic gripper with a new Mini-P controller, all with simple integration into FANUC’s existing lineup of industrial robots) will give it strategic and functional access to the most influential industrial robotics company in the world.

This round will specifically help Soft Robotics spend on growth: increasing its versatility even further, expanding its food packaging and consumer goods applications, and diving into e-commerce and logistics, particularly to help automate and improve the returns process, a costly and ever-growing challenge as more retail moves online.

‘PigeonBot’ brings flying robots closer to real birds

Try as they might, even the most advanced roboticists on Earth struggle to recreate the effortless elegance and efficiency with which birds fly through the air. The “PigeonBot” from Stanford researchers takes a step towards changing that by investigating and demonstrating the unique qualities of feathered flight.

On a superficial level, PigeonBot looks a bit, shall we say, like a school project. But a lot of thought went into this rather haphazard-looking contraption. It turns out the way birds fly is really not very well understood, as the relationship between the dynamic wing shape and the positions of individual feathers is super complex.

Mechanical engineering professor David Lentink challenged some of his graduate students to “dissect the biomechanics of the avian wing morphing mechanism and embody these insights in a morphing biohybrid robot that features real flight feathers,” taking as their model the common pigeon — the resilience of which Lentink admires.

As he explains in an interview with the journal Science:

The first Ph.D. student, Amanda Stowers, analyzed the skeletal motion and determined we only needed to emulate the wrist and finger motion in our robot to actuate all 20 primary and 20 secondary flight feathers. The second student, Laura Matloff, uncovered how the feathers moved via a simple linear response to skeletal movement. The robotic insight here is that a bird wing is a gigantic underactuated system in which a bird doesn’t have to constantly actuate each feather individually. Instead, all the feathers follow wrist and finger motion automatically via the elastic ligament that connects the feathers to the skeleton. It’s an ingenious system that greatly simplifies feather position control.
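
To make the underactuation concrete, here’s a toy sketch in which all 40 flight feathers follow just two actuated joints through fixed linear responses, in the spirit of the elastic coupling Lentink describes. The gains are invented for illustration, not measured from pigeons.

```python
# Toy sketch of an underactuated wing: 40 feather angles driven by only
# two actuated degrees of freedom (wrist and finger), each via a fixed
# linear response, the way elastic ligaments couple feathers to the
# skeleton. All coefficients here are hypothetical.
import numpy as np

NUM_FEATHERS = 40  # 20 primary + 20 secondary

# Hypothetical per-feather sensitivities: feathers near the body follow
# the "wrist" more, feathers near the wingtip follow the "finger" more.
wrist_gain = np.linspace(1.0, 0.2, NUM_FEATHERS)
finger_gain = np.linspace(0.2, 1.0, NUM_FEATHERS)

def feather_angles(wrist_deg, finger_deg):
    """Every feather follows the two joints automatically; no feather
    is individually actuated."""
    return wrist_gain * wrist_deg + finger_gain * finger_deg

# Two inputs set all 40 feather positions at once.
print(feather_angles(wrist_deg=10.0, finger_deg=5.0).round(1))
```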

In addition to finding that the individual control of feathers is more automatic than manual, the team found that tiny microstructures on the feathers form a sort of one-way Velcro-type material that keeps them forming a continuous surface rather than a bunch of disconnected ones. These and other findings were published in Science, while the robot itself, devised by “the third student,” Eric Chang, is described in Science Robotics.

Using 40 actual pigeon feathers and a super-light frame, Chang and the team made a simple flying machine that doesn’t derive lift from its feathers — it has a propeller on the front — but uses them to steer and maneuver using the same type of flexion and morphing as the birds themselves do when gliding.

Studying the biology of the wing itself, then observing and adjusting the PigeonBot systems, the team found that the bird (and bot) used its “wrist” when the wing was partly retracted, and “fingers” when extended, to control flight. But it’s done in a highly elegant fashion that minimizes the thought and the mechanisms required.

PigeonBot’s wing. You can see that the feathers are joined by elastic connections so moving one moves others.

It’s the kind of thing that could inform improved wing design for aircraft, which currently rely in many ways on principles established more than a century ago. Passenger jets, of course, don’t need to dive or roll on short notice, but drones and other small craft might find the ability extremely useful.

“The underactuated morphing wing principles presented here may inspire more economical and simpler morphing wing designs for aircraft and robots with more degrees of freedom than previously considered,” write the researchers in the Science Robotics paper.

Up next for the team is observation of more bird species to see if these techniques are shared with others. Lentink is working on a tail to match the wings, and separately on a new bio-inspired robot inspired by falcons, which could potentially have legs and claws as well. “I have many ideas,” he admitted.

TRI-AD’s James Kuffner and Max Bajracharya are coming to TC Sessions: Robotics+AI 2020

With the Tokyo Summer Olympics rapidly approaching, 2020 is shaping up to be a big year for TRI-AD (Toyota Research Institute – Advanced Development). Opened in 2018, the research wing is devoted to bringing some of TRI’s work into practice, and it is heavily invested in autonomous driving and other key robotics projects.

TRI-AD’s CEO James Kuffner and VP of Robotics Max Bajracharya will be joining us onstage at TC Sessions: Robotics+AI on March 3 at UC Berkeley to discuss their work in the field. The company has been working to promote accessibility, both through its automotive and smart city efforts and through robotics aimed at assisting Japan’s aging population.

The Summer Olympics will serve as an opportunity for TRI-AD to showcase those technologies in practice. Kuffner and Bajracharya will discuss why companies like Toyota are investing in robotics and working to make everyday robotics a reality.

Early-Bird tickets are now on sale for $275. Book your tickets now and save $150 before prices go up!

Student Tickets are just $50 — grab yours here.

Startups, book a demo exhibitor table and get four tickets to the show and a demo area to showcase your company. Packages run $2,200.

Companies take baby steps toward home robots at CES

“I think there are fewer fake robots this year.” I spoke to a lot of roboticists and robot-adjacent folks at this year’s CES, but that comment from Labrador Systems co-founder/CEO Mike Dooley summed up the situation nicely. The show is slowly, but steadily, starting to take robotics more seriously.

It’s true that words like “fake” and “seriously” are quite subjective; surely any company one of us classified as the former would take great issue with the tag. It’s also true that there are still many devices that fit firmly within the realm of the novel and the hypothetical, both on the show floor and in press conferences. But after a week at CES — including several behind-the-scenes conversations with investors and startups — the consensus seems to be that the show is slowly embracing the more serious side of robotics.

I believe the reason for this shift is two-fold. First, the world of consumer robotics hasn’t caught on as quickly as many had planned/hoped. Second, enterprise and industrial robotics actually have. Let’s tackle those points in order.

As my colleague Darrell pointed out in a recent piece, consumer robotics showed signs of life at this year’s event. However, those who predicted a watershed for the industry after the Roomba’s arrival on the scene some 18 years ago have no doubt been largely disappointed in the nearly two decades since.

Save over $200 with discounted student tickets to Robotics + AI 2020

If you’re a current student and you love robots — and the AI that drives them — you do not want to miss out on TC Sessions: Robotics + AI 2020. Our day-long deep dive into these two life-altering technologies takes place on March 3 at UC Berkeley and features the best and brightest minds, makers and influencers.

We’ve set aside a limited number of deeply discounted tickets for students because, let’s face it, the future of robotics and AI can’t happen without cultivating the next generation. Tickets cost $50, which means you save more than $200. Reserve your student ticket now.

Not a student? No problem, we have a savings deal for you, too. If you register now, you’ll save $150 when you book an early-bird ticket by Feb. 14.

More than 1,000 robotics and AI enthusiasts, experts and visionaries attended last year’s event, and we expect even more this year. Talk about a targeted audience and the perfect place for students to network for an internship, employment or even a future co-founder.

What can you expect this year? For starters, we have an outstanding lineup of speakers and demos — more than 20 presentations — on tap. Let’s take a quick look at just some of the offerings you don’t want to miss.

  • Saving Humanity from AI: Stuart Russell, UC Berkeley professor and AI authority, argues in his acclaimed new book, “Human Compatible,” that AI will doom humanity unless technologists fundamentally reform how they build AI algorithms.
  • Opening the Black Box with Explainable AI: Machine learning and AI models can be found in nearly every aspect of society today, but their inner workings are often as much a mystery to their creators as to those who use them. UC Berkeley’s Trevor Darrell, Krishna Gade of Fiddler Labs and Karen Myers from SRI International will discuss what we’re doing about it and what still needs to be done.
  • Engineering for the Red Planet: Maxar Technologies has been involved with U.S. space efforts for decades and is about to send its fifth robotic arm to Mars aboard NASA’s Mars 2020 rover. Lucy Condakchian, general manager of robotics at Maxar, will speak to the difficulty and exhilaration of designing robotics for use in the harsh environments of space and other planets.

That’s just a sample — take a gander at the event agenda to help you plan your time accordingly. We’ll add even more speakers in the coming weeks, so keep checking back.

TC Sessions: Robotics + AI 2020 takes place on March 3 at UC Berkeley. It’s a full day focused on exploring the future of robotics and a great opportunity for students to connect with leading technologists, founders, researchers and investors. Join us in Berkeley. Buy your student ticket today and get ready to build the future.

Is your company interested in sponsoring or exhibiting at TC Sessions: Robotics + AI 2020? Contact our sponsorship sales team by filling out this form.