Anki’s Vector is getting Alexa access next week

Vector’s already got the adorable bit down, courtesy of a team staffed by former Pixar and Dreamworks animators. But Anki’s grownup version of Cozmo can use a bit of help when it comes to smarts. Starting next week, the desktop robot will be able to tap into Amazon Alexa’s deep base of answers.

The new feature will arrive as a software update on December 17. As the preview video below shows, it’s not a direct integration into Vector’s AI personality: “Hey Vector” will get you an answer from the familiar robot face, while “Alexa” will pop up a blue ring.

From there, you can access the standard array of Alexa features, from setting reminders to controlling smart home devices. Though, as the company notes, a handful of functions won’t be available at launch, including music streaming, Drop In, calling, and Kindle and Audible features.

Just in time for the holidays, the new additions should still make Vector a bit more useful day to day, expanding functionality beyond an adorable rolling robotic paperweight.

Wandelbots raises $6.8M to make programming a robot as easy as putting on a jacket

Industrial robotics is on track to be worth around $20 billion by 2020, and while it may have something in common with other categories of cutting-edge tech — innovative use of artificial intelligence, autonomous machines disrupting pre-existing technology — there is one key area where it differs: each robotics firm uses its own proprietary software and operating systems to run its machines, making programming the robots complicated, time-consuming and expensive.

A startup out of Germany called Wandelbots (a portmanteau of the German for “change” and “robots”) has come up with an innovative way around that challenge: it has built a bridge between the operating systems of the 12 most popular industrial robotics makers and what a business wants the machines to do, so that the robots can be trained by a person wearing a jacket kitted out with dozens of sensors.

“We are providing a universal language to teach those robots in the same way, independent of the technology stack,” CEO Christian Piechnick said in an interview. Essentially reverse-engineering the way a lot of software is built, Wandelbots is creating a Linux-like underpinning for all of it.
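Wandelbots hasn’t published its interfaces, but the “universal language” idea maps onto a familiar software pattern: a vendor-neutral interface with a thin adapter per manufacturer’s controller, so motions recorded once (say, from the sensor jacket) can replay on any supported robot. A minimal Python sketch (every class, method and command string below is illustrative, not Wandelbots’ actual API):

```python
from abc import ABC, abstractmethod

class RobotAdapter(ABC):
    """Vendor-neutral interface; one subclass per manufacturer's controller."""

    @abstractmethod
    def move_to(self, pose):
        """Translate a generic (x, y, z) pose into the vendor's native command."""

class KukaAdapter(RobotAdapter):
    def move_to(self, pose):
        # Illustrative KRL-style motion command, not the real vendor protocol.
        return f"PTP {{X {pose[0]}, Y {pose[1]}, Z {pose[2]}}}"

class UniversalRobotsAdapter(RobotAdapter):
    def move_to(self, pose):
        # Illustrative URScript-style call.
        return f"movej(p[{pose[0]}, {pose[1]}, {pose[2]}, 0, 0, 0])"

def replay_demonstration(adapter, recorded_poses):
    """Poses captured once (e.g. from the sensor jacket) replay on any robot."""
    return [adapter.move_to(p) for p in recorded_poses]

poses = [(0.1, 0.2, 0.3), (0.4, 0.5, 0.6)]
print(replay_demonstration(KukaAdapter(), poses)[0])
# → PTP {X 0.1, Y 0.2, Z 0.3}
```

Supporting a thirteenth vendor then means writing one more adapter, not re-recording the demonstrations.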

With some very big deals under its belt with the likes of Volkswagen, Infineon and Midea, the Dresden-based startup has now raised €6 million ($6.8 million), a Series A to take it to its next level of growth and, specifically, to open an office in China. The funding comes from Paua Ventures, EQT Ventures and other unnamed previous investors. (It had previously raised a seed round around the time it was a finalist in our Disrupt Battlefield last year, pre-launch.)

Notably, Paua has a bit of a history of backing transformational software companies (it also invests in Stripe), and EQT, being connected to a private equity firm, is treating this as a strategic investment that might be deployed across its own assets.

Piechnick — who co-founded Wandelbots with Georg Püschel, Maria Piechnick, Sebastian Werner, Jan Falkenberg and Giang Nguyen on the back of research they did at university — said that programming an industrial robot to perform a task has typically taken three months, required the employment of specialist systems integrators and, of course, added an extra cost on top of the machines themselves.

Someone with no technical knowledge, wearing one of Wandelbots’ jackets, can bring that process down to 10 minutes, with costs reduced by a factor of ten.

“In order to offer competitive products in the face of the rapid changes within the automotive industry, we need more cost savings and greater speed in the areas of production and automation of manufacturing processes,” said Marco Weiß, Head of New Mobility & Innovations at Volkswagen Sachsen GmbH, in a statement. “Wandelbots’ technology opens up significant opportunities for automation. Using Wandelbots’ offering, the installation and setup of robotic solutions can be implemented incredibly quickly by teams with limited programming skills.”

Wandelbots’ focus at the moment is on programming robotic arms rather than the mobile machines that you may have seen Amazon and others using to move goods around warehouses. For now, this means that there is not a strong crossover in terms of competition between these two branches of enterprise robotics.

However, Amazon has been expanding into new areas beyond warehouse movement: it has, for example, been working on ways of using computer vision and robotic arms to identify and pick the optimal fruits and vegetables out of boxes to put into grocery orders.

Innovations like that from Amazon and others could put more pressure on robotics makers to innovate, although Piechnick notes that up to now there has been very little movement, and there may never be (creating more opportunity for companies like his that build usability on top).

“Attempts to build robotics operating systems have been tried over and over again, and each time it’s failed,” he said. “But robotics has completely different requirements, such as real-time computing, safety issues and many other different factors. A robot in operation is much more complicated than a phone.” He added that Wandelbots has a number of innovations of its own currently going through the patent process, which will widen what its software can train a robot to do, and how. (This may see more than jackets enter the mix.)

As with companies in the area of robotic process automation — which uses AI to take over more mundane back-office tasks — Piechnick maintains that what he has built, and the rise of robotics overall, is not going to replace workers but move them into other roles, while allowing businesses to expand into work that a human might never have been able to execute.

“No company we work with has ever replaced a human worker with a robot,” he said, explaining that generally the upgrade is from machine to better machine. “It makes you more efficient and cost reductive, and it allows you to put your good people on more complicated tasks.”

Currently, Wandelbots is working with large-scale enterprises, although ultimately, it’s smaller businesses that are its target customer, he said.

“Previously the ROI on robots was too difficult for SMEs,” he said. “With our tech this changes.”

“Wandelbots will be one of the key companies enabling the mass-adoption of industrial robotics by revolutionizing how robots are trained and used,” said Georg Stockinger, Partner at Paua Ventures, in a statement. “Over the last few years, we’ve seen a steep decline in robotic hardware costs. Now, Wandelbots resolves the remaining hurdle to disruptive growth in industrial automation – the ease and speed of implementation and teaching. Both factors together will create a perfect storm, driving the next wave of industrial revolution.”


Robot couriers scoop up early-stage cash

Much of the last couple of decades of innovation has centered around finding ways to get what we want without leaving the sofa.

So far, online ordering and on-demand delivery have allowed us to largely accomplish this goal. Just point, click and wait. But there’s one catch: Delivery people. We can never all lie around ordering pizzas if someone still has to deliver them.

Enter robots. In tech-futurist circles, it’s pretty commonplace to hear predictions about how some medley of autonomous vehicles and AI-enabled bots will take over doorstep deliveries in the coming years. They’ll bring us takeout, drop off our packages and displace lots of humans who currently make a living doing these things.

If this vision does become reality, there’s a strong chance it’ll largely be due to a handful of early-stage startups currently working to roboticize last-mile delivery. Below, we take a look at who they are, what they’re doing, who’s backing them and where they’re setting up shop.

The players

Crunchbase data unearthed at least eight companies in the robot delivery space with headquarters or operations in North America that have secured seed or early-stage funding in the past couple of years.

They range from heavily funded startups to lean seed-stage operations. Silicon Valley-based Nuro, an autonomous delivery startup founded by former engineers at Alphabet’s Waymo, is the most heavily funded, having raised $92 million to date. Others have raised a few million.

In the chart below, we look at key players, ranked by funding to date, along with their locations and key investors.

Who’s your backer?

While startups may be paving the way for robot delivery, they’re not doing so alone. One of the ways larger enterprises are keeping a toehold in the space is through backing and partnering with early-stage startups. They’re joining a long list of prominent seed and venture investors also eagerly eyeing the sector.

The list of larger corporate investors includes Germany’s Daimler, the lead investor in Starship Technologies. China’s Tencent, meanwhile, is backing San Francisco-based Marble, while Toyota AI Ventures has invested in Boxbot.

As for partnering, takeout food delivery services seem to be the most active users of robot couriers.

Starship, whose bot has been described as a slow-moving, medium-sized cooler on six wheels, is making particularly strong inroads in takeout. The San Francisco- and Estonia-based company, launched by Skype founders Janus Friis and Ahti Heinla, is teaming up with DoorDash and Postmates in parts of California and Washington, DC. It’s also working with the Domino’s pizza chain in Germany and the Netherlands.

Robby Technologies, another maker of cute, six-wheeled bots, has also been partnering with Postmates in parts of Los Angeles. And Marble, which is branding its boxy bots as “your friendly neighborhood robot,” teamed up last year for a trial with Yelp in San Francisco.

San Francisco Bay Area dominates

While their visions of world domination are necessarily global, the robot delivery talent pool remains rather local.

Six of the eight seed- and early-stage startups tracked by Crunchbase are based in the San Francisco Bay Area, and the remaining two have some operations in the region.

Why is this? Partly, there’s a concentration of talent in the area, with key engineering staff coming from larger local companies like Uber, Tesla and Waymo. Plus, of course, there’s a ready supply of investor capital, which bot startups presumably will need as they scale.

Silicon Valley and San Francisco, known for scarce and astronomically expensive housing, are also geographies in which employers struggle to find people to deliver stuff at prevailing wages to the hordes of tech workers toiling at projects like designing robots to replace them.

That said, the region isn’t entirely friendly territory for slow-moving sidewalk robots. In San Francisco, already home to absurdly steep streets and sidewalks crowded with humans and discarded scooters, city legislators voted to ban delivery robots from most places and severely restrict them in areas where permitted.

The rise of the pizza delivery robot manager

But while San Francisco may be wary of a delivery robot invasion, other geographies, including nearby Berkeley, Calif., where startup Kiwi Campus operates, have been more welcoming.

In the process, they’re creating an interesting new set of robot overseer jobs that could shed some light on the future of last-mile delivery employment.

For some startups in early trial mode, robot-wrangling jobs involve shadowing bots and making sure they carry out their assigned duties without mishap.

Remote robot management is also a thing and will likely see the sharpest growth. Starship, for instance, relies on operators in Estonia to track and manage bots as they make their deliveries in faraway countries.

For now, it’s too early to tell whether monitoring and controlling hordes of delivery bots will provide better pay and working conditions than old-fashioned human delivery jobs.

At least, however, much of it could theoretically be done while lying on the sofa.

The International Space Station’s new robot is a freaky floating space Alexa

Meet Cimon. The 3D-printed floating robot head was developed by Airbus for the German Space Agency. He’s been a crew member of the International Space Station since June, though as Gizmodo notes, this is the first time we’re seeing him in action.

Really, the floating, Watson-powered robot face is like an extremely expensive Amazon Echo designed to study human-machine interactions in space. This video highlights an early interaction between Cimon and European Space Agency astronaut Alexander Gerst.

Gerst requests his “favorite song,” leading Cimon to play Kraftwerk’s “Man Machine,” only to be shaken by the astronaut, who then demands the robot shoot some video. Once again Cimon complies, though this time he’s clearly a bit annoyed that the music has stopped. Kind of a rough first encounter for the two new co-workers.

“Happy with his initial outing, both Cimon’s developers and Alexander hope to see Cimon back in action again soon,” the ESA says. “While no further sessions are planned during the Horizons mission at this stage, it could mark the beginning of exciting collaboration between astronauts, robotic assistants and possible future artificial intelligence in space.”

Hopefully things go a bit more smoothly next time. Lord knows the last thing you want to do is piss off a space robot.

Toyota taps Docomo 5G to remotely control its humanoid robot

Toyota introduced T-HR3 to the world right around this time last year. The humanoid robot is capable of mimicking the motions of a plugged-in human, à la “Pacific Rim” and countless other sci-fi franchises. The ‘bot has learned a few new tricks in the intervening year, including, notably, untethered control via 5G.

Using the next-gen wireless tech, a pilot is able to remotely control the robot from a distance of up to 10 kilometers (~6 miles). Toyota notes in a press release tied to the news that, in spite of earlier images, demos have been performed with a tethered robot. Using 5G tech from Japanese carrier Docomo, however, the robot can be controlled from a distance with low latency.

As for what such a robot might actually be good for (beyond knocking the snot out of pint-sized kaiju), Toyota sees potential in homes and healthcare, with an eye on “a prosperous society of mobility.”

At the very least, it’s a nice little bit of press for the promise of 5G connectivity, which networking companies aim to frame as being a relevant technology well beyond just smartphones and computers. The tech will be demoed at a Docomo event in Tokyo early next year. 

Mars Lander InSight sends the first of many selfies after a successful touchdown

Last night’s 10 minutes of terror as the InSight Mars Lander descended to the Martian surface at 12,300 MPH were a nail-biter for sure, but now the robotic science platform is safe and sound — and has sent pics back to prove it.

The first thing it sent was a couple of pictures of its surroundings: Elysium Planitia, a rather boring-looking, featureless plain that is nevertheless perfect for InSight’s drilling and seismic activity work.

The images, taken with its Instrument Context Camera, are hardly exciting on their own merits — a dirty landscape viewed through a dusty tube. But when you consider that it’s of an unexplored territory on a distant planet, and that it’s Martian dust and rubble occluding the lens, it suddenly seems pretty amazing!

Decelerating from interplanetary velocity and making a perfect landing was definitely the hard part, but it was by no means InSight’s last challenge. After touching down, it still needs to set itself up and make sure that none of its many components and instruments were damaged during the long flight and short descent to Mars.

And the first good news arrived shortly after landing, relayed via NASA’s Odyssey spacecraft in orbit: a partial selfie showing that it was intact and ready to roll. The image shows, among other things, the large mobile arm folded up on top of the lander, and a big copper dome covering some other components.

Telemetry data sent around the same time show that InSight has also successfully deployed its solar panels and is collecting the power it needs to continue operating. These fragile fans are crucial to the lander, of course, and it’s a great relief to hear they’re working properly.

These are just the first of many images the lander will send, though unlike Curiosity and the other rovers, it won’t be traveling around taking snapshots of everything it sees. Its data will be collected from deep inside the planet, offering us insight into the planet’s — and our solar system’s — origins.

That night, a forest flew

Wildfires are consuming our forests and grasslands faster than we can replace them. It’s a vicious cycle of destruction and inadequate restoration rooted, so to speak, in decades of neglect of the institutions and technologies needed to keep these environments healthy.

DroneSeed is a Seattle-based startup that aims to combat this growing problem with a modern toolkit that scales: drones, artificial intelligence and biological engineering. And it’s even more complicated than it sounds.

Trees in decline

A bit of background first. The problem of disappearing forests is a complex one, but it boils down to a few major factors: climate change, outdated methods and shrinking budgets (and as you can imagine, all three are related).

Forest fires are a natural occurrence, of course. And they’re necessary, as you’ve likely read, to sort of clear the deck for new growth to take hold. But climate change, monoculture growth, population increases, lack of control burns and other factors have led to these events taking place not just more often, but more extensively and to more permanent effect.

On average, the U.S. is losing 7 million acres a year. That’s not easy to replace to begin with — and as budgets for the likes of national and state forest upkeep have shrunk continually over the last half century, there have been fewer and fewer resources with which to combat this trend.

The most effective and common reforestation technique for a recently burned woodland is human planters carrying sacks of seedlings and manually selecting and placing them across miles of landscape. This back-breaking work is rarely done by anyone for more than a year or two, so labor is scarce and turnover is intense.

Even if the labor were available on tap, the trees might not be. Seedlings take time to grow in nurseries and a major wildfire might necessitate the purchase and planting of millions of new trees. It’s impossible for nurseries to anticipate this demand, and the risk associated with growing such numbers on speculation is more than many can afford. One missed guess could put the whole operation underwater.

Meanwhile, if nothing gets planted, invasive weeds move in with a vengeance, claiming huge areas that were once old growth forests. Lacking the labor and tree inventory to stem this possibility, forest keepers resort to a stopgap measure: use helicopters to drench the area in herbicides to kill weeds, then saturate it with fast-growing cheatgrass or the like. (The alternative to spraying is, again, the manual approach: machetes.)

At least then, in a year, instead of a weedy wasteland, you have a grassy monoculture — not a forest, but it’ll do until the forest gets here.

One final complication: helicopter spraying is horrendously dangerous work. Pilots fly at sub-100-foot elevations, performing high-speed maneuvers so that their sprays reach the very edge of burn zones without crashing head-on into the trees; 80 to 100 crashes occur every year in the U.S. alone.

In short, there are more and worse fires and we have fewer resources — and dated ones at that — with which to restore forests after them.

These are facts anyone in forest ecology or logging is familiar with, but they are perhaps not as well known among technologists. We do tend to stay in areas with cell coverage. But it turns out that a boost from the cloistered knowledge workers of the tech world — specifically those in the Emerald City — may be exactly what the industry and ecosystem require.

Simple idea, complex solution

So what’s the solution to all this? Automation, right?

Automation, especially via robotics, is proverbially suited for jobs that are “dull, dirty, and dangerous.” Restoring a forest is dirty and dangerous to be sure. But dull isn’t quite right. It turns out that the process requires far more intelligence than anyone was willing, it seems, to apply to the problem — with the exception of those planters. That’s changing.

Earlier this year, DroneSeed was awarded the first multi-craft, over-55-pound unmanned aerial vehicle license ever issued by the FAA. Its custom UAV platforms, equipped with multispectral camera arrays, high-end lidar, six-gallon tanks of herbicide and proprietary seed dispersal mechanisms, have been hired by several major forest management companies, with government entities eyeing the service as well.

Ryan Warner/DroneSeed

These drones scout a burned area, mapping it down to centimeter accuracy, including objects and plant species; fumigate it efficiently and autonomously; identify where trees would grow best; then deploy painstakingly designed seed-nutrient packages to those locations. It’s cheaper than people, less wasteful and dangerous than helicopters and smart enough to scale to national forests currently at risk of permanent damage.

I met with the company’s team at their headquarters near Ballard, where complete and half-finished drones sat on top of their cases and the air was thick with capsaicin (we’ll get to that).

The idea for the company began when founder and CEO Grant Canary burned through a few sustainable startup ideas after his last company was acquired, and was told, in his despondency, that he might have to just go plant trees. Canary took his friend’s suggestion literally.

“I started looking into how it’s done today,” he told me. “It’s incredibly outdated. Even at the most sophisticated companies in the world, planters are superheroes that use bags and a shovel to plant trees. They’re being paid to move material over mountainous terrain and be a simple AI and determine where to plant trees where they will grow — microsites. We are now able to do both these functions with drones. This allows those same workers to address much larger areas faster without the caloric wear and tear.”

(Video: Ryan Warner/DroneSeed)

It may not surprise you to hear that investors are not especially hot on forest restoration (I joked that it was a “growth industry,” but for the reasons above it is really in dire straits).

But investors are interested in automation, machine learning, drones and especially government contracts. So the pitch took that form. With the money DroneSeed secured, it has built its modestly sized but highly accomplished team and produced the prototype drones with which it has captured several significant contracts before even announcing that it exists.

“We definitely don’t fit the mold or metrics most startups are judged on. The nice thing about not fitting the mold is people double take and then get curious,” Canary said. “Once they see we can actually execute and have been with 3 of the 5 largest timber companies in the U.S. for years, they get excited and really start advocating hard for us.”

The company went through Techstars, and Social Capital helped them get on their feet, with Spero Ventures joining up after the company got some groundwork done.

If things go as DroneSeed hopes, these drones could be deployed all over the world by trained teams, allowing spraying and planting efforts in nurseries and natural forests to take place exponentially faster and more efficiently than they are today. It’s genuine change-the-world-from-your-garage stuff, which is why this article is so long.

Hunter (weed) killers

The job at hand isn’t simple or even straightforward. Every landscape differs from every other, not just in the shape and size of the area to be treated but the ecology, native species, soil type and acidity, type of fire or logging that cleared it and so on. So the first and most important task is to gather information.

For this, DroneSeed has a special craft equipped with a sophisticated imaging stack. This first pass is done using waypoints set on satellite imagery.

The information collected at this point is far more detailed than what’s strictly needed. The lidar, for instance, collects spatial information at a resolution well beyond what’s required to understand the shape of the terrain and major obstacles. It produces a 3D map of the vegetation as well as the terrain, allowing the system to identify stumps, roots, bushes, new trees, erosion and other important features.

This works hand in hand with the multispectral camera, which collects imagery not just in the visible bands — useful for identifying things — but also in those outside the human range, which allows for in-depth analysis of the soil and plant life.
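The article doesn’t specify which vegetation indices DroneSeed computes, but a standard example of what those out-of-visible-range bands enable is the normalized difference vegetation index (NDVI), which contrasts near-infrared and red reflectance to separate healthy vegetation from bare or burned ground. A generic illustration with invented reflectance values:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index, in [-1, 1].

    Healthy vegetation reflects near-infrared strongly and absorbs red,
    so it scores close to +1; bare soil and burned ground sit near 0.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 tile: top row healthy plants, bottom row scorched soil.
nir_band = np.array([[0.80, 0.70], [0.30, 0.20]])
red_band = np.array([[0.10, 0.10], [0.25, 0.18]])
print(np.round(ndvi(nir_band, red_band), 2))
# → [[0.78 0.75]
#    [0.09 0.05]]
```

Indices like this, computed per pixel over the whole mapped site, are one way raw multispectral imagery becomes a plant-health layer the rest of the pipeline can act on.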

The resulting map of the area is not just useful for drone navigation, but for the surgical strikes that are necessary to make this kind of drone-based operation worth doing in the first place. No doubt there are researchers who would love to have this data as well.

Ryan Warner/DroneSeed

Now, spraying and planting are very different tasks. The first tends to be done indiscriminately using helicopters, and the second by laborers who burn out after a couple of years — as mentioned above, it’s incredibly difficult work. The challenge in the first case is to improve efficiency and efficacy, while in the second it is to automate something that requires considerable intelligence.

Spraying is in many ways simpler. Identifying invasive plants isn’t easy, exactly, but it can be done with imagery like that which the drones are collecting. Having identified patches of a plant to be eliminated, the drones can calculate a path and expend only as much herbicide as is necessary to kill them, instead of dumping hundreds of gallons indiscriminately on the entire area. It’s cheaper and more environmentally friendly. Naturally, the opposite approach could be used for distributing fertilizer or some other agent.
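A toy calculation (every number here is invented) shows why per-patch targeting beats blanket spraying:

```python
# Toy classification map: 1 marks a pixel the imagery pipeline classified as
# invasive weed, 0 is everything else. Say each pixel covers 1 square meter.
weed_map = [
    [0, 0, 1, 1, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0],
    [1, 1, 0, 0, 0],
]

DOSE_PER_M2 = 0.02  # liters of herbicide per square meter (invented figure)

total_area = sum(len(row) for row in weed_map)              # 20 m^2
weed_area = sum(cell for row in weed_map for cell in row)   # 6 m^2

blanket_spray = total_area * DOSE_PER_M2   # helicopter-style: the whole site
targeted_spray = weed_area * DOSE_PER_M2   # drone-style: weed pixels only

print(f"blanket: {blanket_spray:.2f} L, targeted: {targeted_spray:.2f} L")
# Here targeting uses weed_area / total_area = 30% of the herbicide.
```

Real sites are measured in acres rather than square meters, but the ratio is the point: herbicide spend scales with weed cover, not with site size.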

I’m making it sound easy again. This isn’t a plug-and-play situation — you can’t buy a DJI drone and hit the “weedkiller” option in its control software. A big part of this operation was the creation not only of the drones themselves, but of the infrastructure with which to deploy them.

Conservation convoy

The drones themselves are unique, but not alarmingly so. They’re heavy-duty craft, capable of lifting well over the 57 pounds of payload they carry (the FAA limits them to 115 pounds).

“We buy and gut aircraft, then retrofit them,” Canary explained simply. Their head of hardware would probably like to think there’s a bit more to it than that, but really the problem they’re solving isn’t “make a drone” but “make drones plant trees.” To that end, Canary explained, “the most unique engineering challenge was building a planting module for the drone that functions with the software.” We’ll get to that later.

DroneSeed deploys drones in swarms, which means as many as five drones in the air at once — which in turn means they need two trucks and trailers with their boxes, power supplies, ground stations and so on. The company’s VP of operations comes from a military background where managing multiple aircraft onsite was part of the job, and she’s brought her rigorous command of multi-aircraft environments to the company.

Ryan Warner/DroneSeed

The drones take off and fly autonomously, but always under direct observation by the crew. If anything goes wrong, they’re there to take over, though of course there are plenty of autonomous behaviors for what to do in case of, say, a lost positioning signal or bird strike.

They fly in patterns calculated ahead of time to be the most efficient, spraying at problem areas when they’re over them, and returning to the ground stations to have power supplies swapped out before returning to the pattern. It’s key to get this process down pat, since efficiency is a major selling point. If a helicopter does it in a day, why shouldn’t a drone swarm? It would be sad if they had to truck the craft back to a hangar and recharge them every hour or two. It also increases logistics costs like gas and lodging if it takes more time and driving.
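The company doesn’t describe its flight planner, but the classic shape for efficient area coverage is a boustrophedon (lawnmower) sweep spaced by the spray swath width. A minimal sketch, assuming a simple rectangular site:

```python
def lawnmower_waypoints(width_m, height_m, swath_m):
    """Back-and-forth sweep covering a width x height rectangle.

    Rows are spaced one spray swath apart, and direction alternates each
    row so the craft never dead-heads back to the start of a line.
    """
    waypoints = []
    y, row = 0.0, 0
    while y <= height_m:
        xs = (0.0, width_m) if row % 2 == 0 else (width_m, 0.0)
        waypoints.extend([(xs[0], y), (xs[1], y)])
        y += swath_m
        row += 1
    return waypoints

# A 100 m x 20 m strip with a 5 m swath takes 5 passes (10 waypoints).
path = lawnmower_waypoints(100.0, 20.0, 5.0)
print(len(path), path[:4])
# → 10 [(0.0, 0.0), (100.0, 0.0), (100.0, 5.0), (0.0, 5.0)]
```

A real planner for irregular burn-zone boundaries, and one that segments the route around battery-swap returns to the ground station, is considerably more involved, but the underlying sweep is the same.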

This means the team involves several people, as well as several drones. Qualified pilots and observers are needed, as well as people familiar with the hardware and software that can maintain and troubleshoot on site — usually with no cell signal or other support. Like many other forms of automation, this one brings its own new job opportunities to the table.

AI plays Mother Nature

The actual planting process is deceptively complex.

The idea of loading up a drone with seeds and setting it free on a blasted landscape is easy enough to picture. Hell, it’s been done. There are efforts going back decades to essentially load seeds or seedlings into guns and fire them out into the landscape at speeds high enough to bury them in the dirt: in theory this combines the benefits of manual planting with the scale of carpeting the place with seeds.

But whether it was slapdash placement or the shock of being fired out of a seed gun, this approach never seemed to work.

Forestry researchers have shown the effectiveness of finding the right “microsite” for a seed or seedling; in fact, it’s why manual planting works as well as it does. Trained humans find perfect spots to put seedlings: in the lee of a log; near but not too near the edge of a stream; on the flattest part of a slope, and so on. If you really want a forest to grow, you need optimal placement, perfect conditions and preventative surgical strikes with pesticides.

Ryan Warner/DroneSeed

Although it’s difficult, it’s also the kind of thing that a machine learning model can become good at. Sorting through messy, complex imagery and finding local minima and maxima is a specialty of today’s ML systems, and the aerial imagery from the drones is rich in relevant data.

The company’s CTO led the creation of an ML model that determines the best locations to put trees at a site — though this task can be highly variable depending on the needs of the forest. A logging company might want a tree every couple of feet, even if that means putting them in sub-optimal conditions — but a few inches to the left or right may make all the difference. On the other hand, national forests may want more sparse deployments or specific species in certain locations to curb erosion or establish sustainable firebreaks.
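DroneSeed hasn’t disclosed its model, but the selection step described here (score every candidate spot, then take high scorers subject to a spacing constraint) can be sketched as a greedy pass over model outputs. Everything below, from the scores to the threshold, is hypothetical:

```python
import math

def pick_microsites(scores, min_spacing, threshold=0.5):
    """Greedy microsite selection over model outputs.

    `scores` maps (x, y) coordinates in meters to a suitability score,
    e.g. from a model fed lidar and multispectral features. Sites are
    taken best-first, skipping any spot closer than `min_spacing` meters
    to one already chosen; that spacing knob is how a logging company's
    tight grid or a national forest's sparse layout would be dialed in.
    """
    chosen = []
    for (x, y), score in sorted(scores.items(), key=lambda kv: -kv[1]):
        if score < threshold:
            break  # scores are sorted, so everything after this is worse
        if all(math.dist((x, y), c) >= min_spacing for c in chosen):
            chosen.append((x, y))
    return chosen

# Hypothetical scores: two good spots close together, one farther out,
# and one spot too poor to plant at all.
scores = {(0, 0): 0.9, (1, 0): 0.8, (10, 0): 0.7, (20, 20): 0.2}
print(pick_microsites(scores, min_spacing=3.0))
# → [(0, 0), (10, 0)]  ((1, 0) scores well but sits within 3 m of (0, 0))
```

The hard part, of course, is producing trustworthy scores in the first place; the selection on top of them is comparatively mechanical.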

Once the data has been crunched, the map is loaded into the drones’ hive mind and the convoy goes to the location, where the craft are loaded with seeds instead of herbicides.

But not just any old seeds! You see, that’s one more wrinkle. If you just throw a sagebrush seed on the ground, even if it’s in the best spot in the world, it could easily be snatched up by an animal, roll or wash down to a nearby crevasse, or simply fail to find the right nutrients in time despite the planter’s best efforts.

That’s why DroneSeed’s head of planting and his team have been working on a proprietary seed packet that they were remarkably reluctant to detail.

From what I could gather, they’ve put a ton of work into packaging the seeds into nutrient-packed little pucks held together with a biodegradable fiber. The outside is dusted with capsaicin, the chemical that makes spicy food spicy (and also what makes bear spray do what it does). If they hadn’t told me, I might have guessed, since the workshop area was hazy with it, leading us all to cough and tear up a little. If I were a marmot, I’d learn to avoid these things real fast.

The pucks, or “seed vessels,” can and must be customized for the location and purpose — you have to match the content and acidity of the soil, things like that. DroneSeed will have to make millions of these things, but it doesn’t plan to be the manufacturer.

Finally, these pucks are loaded into a special puck dispenser which, in close coordination with the drone, spits one out at the exact moment and speed needed to land it within a few centimeters of the microsite.
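The company didn’t share the dispenser’s specs, but the timing problem is basic ballistics: a puck released from a moving drone keeps the drone’s horizontal velocity as it falls, so it has to be released early by the distance it will travel during the drop. A back-of-the-envelope sketch, with made-up altitude and speed, ignoring drag:

```python
from math import sqrt

G = 9.81  # gravitational acceleration, m/s^2

def release_lead(altitude_m, ground_speed_ms):
    """Horizontal distance before the target at which to release a puck:
    fall time t = sqrt(2h/g), so the lead is v * t."""
    fall_time = sqrt(2 * altitude_m / G)
    return ground_speed_ms * fall_time

# Hypothetical pass: 3 m altitude at 5 m/s ground speed.
lead = release_lead(3.0, 5.0)
print(f"release {lead:.2f} m before the microsite")  # ~3.91 m
```

At 5 m/s of ground speed, a 10 ms timing error already shifts the drop by 5 cm — which is why the dispenser has to coordinate so closely with the drone’s position and velocity.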

All these factors should improve the survival rate of seedlings substantially. That means that the company’s methods will not only be more efficient, but more effective. Reforestation is a numbers game played at scale, and even slight improvements — and DroneSeed is promising more than that — are measured in square miles and millions of tons of biomass.

Proof of life

DroneSeed has already signed several big contracts for spraying, and planting is next. Unfortunately, the timing meant they missed this year’s planting season, though by planting a few small sites and showing off the results, they’ll be in pole position for next year.

After demonstrating the effectiveness of the planting technique, the company expects to expand its business substantially. That’s the scaling part — again, not easy, but easier than hiring another couple thousand planters every year.

Ryan Warner/DroneSeed

Ideally the hardware can be assigned to local teams that do the on-site work, producing loci of activity around major forests from which jobs can be deployed at large or small scales. A set of five or six drones does the work of one helicopter, roughly speaking, so depending on the volume requested by a company or forestry organization, you may need dozens on demand.

That’s all yet to be explored, but DroneSeed is confident that the industry will see the writing on the wall when it comes to the old methods, and identify the company as a solution that fits the future.

If it sounds like I’m cheerleading for this company, that’s because I am. It’s not often in the world of tech startups that you find a group of people not just attempting to solve a serious problem — plenty of companies take a swing at this or that issue — but who have spent the time, gathered the expertise and really done the dirty, boots-on-the-ground work that needs to happen to go from great idea to real company.

That’s what I felt was the case with DroneSeed, and here’s hoping their work pays off — for their sake, sure, but mainly for ours.

Amazon launches a cloud-based robotics testing platform

Amazon is kicking off re:Invent week with the launch of AWS RoboMaker. The cloud-based service utilizes the widely deployed open-source software Robot Operating System (ROS) to offer developers a place to develop and test robotics applications.

RoboMaker essentially serves as a platform to help speed up the time-consuming robotics development process. Among the tools offered by the service are Amazon’s machine learning technologies and analytics, which help create simulations for real-world robotics development.

The system can also be used to help manage fleet deployment for warehouse-style robots designed to work in tandem.

“AWS RoboMaker automatically provisions the underlying infrastructure and it downloads, compiles, and configures the operating system, development software, and ROS,” the company writes. “AWS RoboMaker’s robotics simulation makes it easy to set up large-scale and parallel simulations with pre-built worlds, such as indoor rooms, retail stores, and racing tracks, so developers can test their applications on-demand and run multiple simulations in parallel.”

The feature arrives as Amazon is taking a more serious look at robotics. The company has long deployed warehouse robotics, which will be in full force this holiday season. It’s also reportedly been looking at pick-and-place robots to help speed up fulfillment, along with a rumored home robot said to be on track for 2019.

China’s Geek+ raises $150M to build robots for warehouses and logistics

One of the most immediate — and already live — applications for robotics and artificial intelligence has been the use of autonomous robots in warehouses and other environments, where they replace humans in repetitive jobs such as sorting and moving objects from A to B. Now, Beijing-based robotics startup Geek Plus (aka Geek+) says that it has raised $150 million to seize that growing opportunity by investing in product development, sales and customer service. Geek Plus says that this is the largest-ever funding round for a logistics robotics startup.

The round, a Series B, was led by Warburg Pincus, with participation from other shareholders including Volcanics Venture and Vertex Ventures. The company is not disclosing its valuation but we have asked and will update as we learn more. Warburg Pincus led the startup’s previous round of $60 million in 2017, and Geek Plus has raised around $217 million since being founded in 2015.

Part of the reason for the large round is that the company says it’s on track for a big year, projecting five-fold business growth, and it wants to capitalise on that growth both inside its giant home market of Mainland China and further afield.

To date, Geek Plus says that it has delivered more than 5,000 robots across some 100 robotics warehouse projects in China, Hong Kong, Taiwan, Japan, Australia, Singapore, Europe, and the United States. Current customers include Alibaba’s Tmall and VIP.com out of Asia.

“The investment by Warburg Pincus and other shareholders fully demonstrates their recognition of Geek Plus’s development and their confidence in Geek Plus’s future prospects,” said Yong Zheng, founder and CEO of the startup, in a statement. “We will continue to focus on empowering various industries through AI & Robotics technologies. This year, we expect to grow our business by more than five times…We will continue to be customer-centric, and create values for customers through a seamless integration of AI & Robotics technologies and customers’ supply chain needs.”

There are a number of other companies that have been delving into logistics robotics, both to improve their own logistics operations, and to build as a standalone business to be used by others. These represent competitors and potential partners, or even acquirers, to Geek Plus.

Top of the list is perhaps Amazon, which acquired robotics startup Kiva for $775 million in 2012 to build out its warehouse robots. Since then, it has expanded its robotics operations into a wider unit that, in the grand tradition of other services like AWS and fulfillment, may well productise its robots to fill the needs of other companies that want to inject some AI and robots into their own warehouse operations.

InVia and Fetch, meanwhile, are two companies building technology to sell to third parties from the start — robotics-as-a-service and robots themselves, respectively.

Given that some of these robots are also making their way out of the warehouse and into last-mile delivery scenarios, one estimate now puts the value of the wider market at $10 billion.

Geek Plus has been building products to date that cover a range of scenarios where you might use a robot: they include “cargo-to-man” picking systems, sorting and movement systems, and unmanned forklifts.

“Since our first investment in Geek+ last year, we have been very impressed with Geek+’s rapid growth, especially in business expansion and internationalization,” Jericho Zhang, executive director of Warburg Pincus, said in a statement. “Technology is revolutionizing supply chain. Geek+ is one of the leading technology companies that is able to combine robotics, big data, AI and other cutting-edge technologies to solve the pain points of the traditional supply chain. As it accumulates more data, and continues to optimize algorithms and expand into other industries, we are confident that Geek+ will continue to lead the revolution and innovation in the space.”

They’re making a real HAL 9000, and it’s called CASE

Don’t panic! Life imitates art, to be sure, but hopefully the researchers in charge of the Cognitive Architecture for Space Exploration, or CASE, have taken the right lessons from 2001: A Space Odyssey, and their AI won’t kill us all and/or expose us to alien artifacts so we enter a state of cosmic nirvana. (I think that’s what happened.)

CASE is primarily the work of Pete Bonasso, who has been working in AI and robotics for decades — since well before the current vogue of virtual assistants and natural language processing. It’s easy to forget these days that research in this area goes back to the middle of the last century, with a boom in the ’80s and ’90s as computing and robotics began to proliferate.

The question is how to intelligently monitor and administrate a complicated environment like that of a space station, crewed spaceship, or a colony on the surface of the Moon or Mars. A simple question with an answer that has been evolving for decades; the International Space Station (which just turned 20) has complex systems governing it and has grown more complex over time — but it’s far from the HAL 9000 that we all think of, and which inspired Bonasso to begin with.

“When people ask me what I am working on, the easiest thing to say is, ‘I am building HAL 9000,’ ” he wrote in a piece published today in the journal Science Robotics. Currently that work is being done under the auspices of TRACLabs, a research outfit in Houston.

One of the many challenges of this project is marrying the various layers of awareness and activity together. It may be, for example, that a robot arm needs to move something on the outside of the habitat. Meanwhile, someone may also want to initiate a video call with another part of the colony. There’s no reason for one single system to encompass command and control methods for robotics and a VOIP stack — yet at some point these responsibilities should be known and understood by some overarching agent.

CASE, therefore, isn’t some kind of mega-intelligent know-it-all AI, but an architecture for organizing systems and agents that is itself an intelligent agent. As Bonasso describes in his piece, and as is documented more thoroughly elsewhere, CASE is composed of several “layers” that govern control, routine activities, and planning. A voice interaction system translates human-language queries or commands into tasks those layers can carry out. But it’s the “ontology” system that’s the most important.

Any AI expected to manage a spaceship or colony has to have an intuitive understanding of the people, objects, and processes that make it up. At a basic level, for instance, that might mean knowing that if there’s no one in a room, the lights can turn off to save power but it can’t be depressurized. Or if someone moves a rover from its bay to park it by a solar panel, the AI has to understand that it’s gone, how to describe where it is, and how to plan around its absence.
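Bonasso’s piece doesn’t include code, but the flavor of those constraints can be sketched as simple rule checks over an explicit world model. Everything below — room names, fields, the approval flag — is invented for illustration; CASE’s actual ontology is far richer than this toy:

```python
# Toy stand-in for an ontology rule check (all names hypothetical).
rooms = {
    "lab":    {"occupants": 0, "lights": True},
    "galley": {"occupants": 2, "lights": True},
}

def can_turn_off_lights(room):
    """An empty room's lights can be switched off to save power."""
    return rooms[room]["occupants"] == 0

def can_depressurize(room, maintenance_approved=False):
    """Depressurizing is never automatic: the room must be empty AND
    explicitly approved, since the consequences are hard to reverse."""
    return rooms[room]["occupants"] == 0 and maintenance_approved

print(can_turn_off_lights("lab"), can_depressurize("lab"))  # True False
```

The point of the explicit model is that every layer — control, routine activity, planning — can consult the same facts about who and what is where.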

This type of common sense logic is deceptively difficult and is one of the major problems being tackled in AI today. We have years to learn cause and effect, to gather and put together visual clues to create a map of the world, and so on — for robots and AI, it has to be created from scratch (and they’re not good at improvising). But CASE is working on fitting the pieces together.

Screen showing another ontology system from TRACLabs, PRONTOE.

“For example,” Bonasso writes, “the user could say, ‘Send the rover to the vehicle bay,’ and CASE would respond, ‘There are two rovers. Rover1 is charging a battery. Shall I send Rover2?’ Alas, if you say, ‘Open the pod bay doors, CASE’ (assuming there are pod bay doors in the habitat), unlike HAL, it will respond, ‘Certainly, Dave,’ because we have no plans to program paranoia into the system.”
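That disambiguation behavior is easy to caricature in code — this is not CASE’s implementation, just an illustration of the pattern, with all names and statuses invented:

```python
def resolve_reference(fleet):
    """fleet maps rover name -> current activity ('idle' means free).
    With one rover the reference is unambiguous; otherwise report status
    and propose a free rover rather than guessing."""
    names = sorted(fleet)
    if len(names) == 1:
        return ("dispatch", names[0])
    free = [n for n in names if fleet[n] == "idle"]
    busy = "; ".join(f"{n} is {fleet[n]}" for n in names if fleet[n] != "idle")
    if free:
        return ("confirm", f"There are {len(names)} rovers. {busy}. Shall I send {free[0]}?")
    return ("refuse", f"No rover is free: {busy}.")

print(resolve_reference({"Rover1": "charging a battery", "Rover2": "idle"}))
```

Asking instead of acting is the safety-critical part: in a habitat, a wrong guess about which rover (or which door) the human meant can be expensive.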

I’m not sure why he had to write “alas” — our love of cinema is exceeded by our will to live, surely.

That won’t be a problem for some time to come, of course — CASE is still very much a work in progress.

“We have demonstrated it to manage a simulated base for about 4 hours, but much needs to be done for it to run an actual base,” Bonasso writes. “We are working with what NASA calls analogs, places where humans get together and pretend they are living on a distant planet or the moon. We hope to slowly, piece by piece, work CASE into one or more analogs to determine its value for future space expeditions.”

I’ve asked Bonasso for some more details and will update this post if I hear back.

Whether a CASE- or HAL-like AI will ever be in charge of a base is almost no longer a question — in a way, it’s the only reasonable way to manage what will certainly be an immensely complex system of systems. But for obvious reasons it needs to be developed from scratch with an emphasis on safety, reliability… and sanity.