Microsoft makes getting started with Java and VS Code easier

After only a few years, Microsoft’s free Visual Studio Code has become one of the most popular code editors on the market. One of VS Code’s advantages is its flexibility. This flexibility does come with some complexity when it comes to getting everything set up. Today, the company launched a new project that makes it significantly easier to get started with writing Java on VS Code.

A Microsoft spokesperson recently told us that the VS Code team noticed it was still difficult for some developers, including students and novice programmers, to set up their Java development environments. Typically, this is a pretty involved process that includes installing a number of binaries and VS Code extensions.

To help these developers, Microsoft today launched an installer that handles all of this for them. It first checks whether a JDK is already installed. If not, it installs a binary from AdoptOpenJDK (which Microsoft sponsors), then sets up VS Code, if needed, and the Java Extension Pack. AdoptOpenJDK, which is essentially a vendor-neutral alternative to the Oracle JDK, is now Microsoft’s recommended Java distribution for users who install the VS Code Java extension.
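
Microsoft hasn’t detailed the installer’s internals, but the detection step it describes is simple to picture. Here’s a minimal sketch of a JDK check, assuming only the Python standard library and a `java` launcher on the PATH; the function name and fallback behavior are illustrative, not taken from Microsoft’s installer:

```python
import os
import shutil
import subprocess

def find_jdk():
    """Return a best guess at an installed JDK location, or None."""
    # 1. Honor JAVA_HOME if it points at a real directory.
    java_home = os.environ.get("JAVA_HOME")
    if java_home and os.path.isdir(java_home):
        return java_home

    # 2. Otherwise, see if a `java` launcher is on the PATH.
    java_bin = shutil.which("java")
    if java_bin:
        # `java -version` prints to stderr; capture it to report the version.
        result = subprocess.run(["java", "-version"], capture_output=True, text=True)
        print(result.stderr.strip())
        return os.path.dirname(os.path.dirname(java_bin))

    return None

if __name__ == "__main__":
    jdk = find_jdk()
    if jdk is None:
        print("No JDK found; an installer would fetch one (e.g. from AdoptOpenJDK).")
    else:
        print(f"JDK found at {jdk}")
```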

Currently, the installer is only available for Windows, but the team plans to expand its availability once it sees interest from the community.

UK government invests $194M to commercialize quantum computing

The UK government today announced a £153 million investment into efforts to commercialize quantum computing. That’s about $194 million, and with additional commitments from numerous industry players, the total goes up to over $440 million. With this, the UK’s National Quantum Technologies Programme has now passed £1 billion (or about $1.27 billion) in investments since its inception in 2014.

In the US, President Trump last year signed into law a $1.2 billion investment into quantum computing, and the European Union, which the UK is infamously trying to leave, also launched a similarly sized plan. Indeed, it’s hard not to look at this announcement in the context of Brexit, which would cut the UK off from these European efforts, though it’s worth noting that the UK obviously has a long history of fundamental computer science research, something that is surely also motivating these efforts.

“This milestone shows that Quantum is no longer an experimental science for the UK,” UK Science Minister Chris Skidmore said in today’s announcement. “Investment by government and businesses is paying off, as we become one of the world’s leading nations for quantum science and technologies. Now industry is turning what was once a futuristic pipedream into life-changing products.”

Specifically, the UK program is looking into research that can grow its local quantum industry. To do so, the £153 million Industrial Strategy Challenge Fund will invest in new products and innovations through research and development competitions, but also in industry-led projects. It will also function as an investment accelerator, with the hope of encouraging venture capitalists to invest in early-stage, spin-out and startup quantum companies.

“The announcement of this significant public funding for the industrialization of quantum technologies exemplifies the benefits of the Industrial Strategy, both in terms of improved coordination across government departments and also the creation of long-term partnerships between government, academia and businesses,” said Roger McKinlay, Challenge Director for Quantum Technologies at UK Research and Innovation. “Five years of investment in the UK National Quantum Technologies Programme has given the UK a technological lead which businesses are now ready to turn into a significant commercial advantage.”

For governments, quantum computing obviously opens up a number of economic opportunities, but there are also national security interests at play here. Once it becomes a reality, a general quantum computer with long coherence times will easily be able to defeat today’s encryption schemes, for example. That’s not what today’s announcement is about, but it is surely something that all of the world’s governments are thinking about.

Google’s Game Builder turns building multiplayer games into a game

Google’s Area 120 team, the company’s in-house incubator for some of its more experimental projects, today launched Game Builder, a free and easy-to-use tool for PC and macOS users who want to build their own 3D games without having to know how to code. Game Builder is currently only available through Valve’s Steam platform, so you’ll need an account there to try it.

After a quick download, Game Builder asks you what screen size you want to work with and then drops you right into the experience after you tell it whether you want to start a new project, work on an existing one or try out some sample projects. These sample projects include a first-person shooter, a platformer and a demo of the tool’s card system for programming more complex interactions.

The menu system and building experience take some getting used to and aren’t immediately intuitive, but after a while, you’ll get the hang of it. By default, the overall design aesthetic clearly draws some inspiration from Minecraft, but you’re pretty free in what kind of game you want to create. It does not strike me as a tool for getting younger children into game programming, though, since we’re talking about a relatively text-heavy and complex experience.

To build more complex interactions, you use Game Builder’s card-based visual programming system. That’s pretty straightforward, too, but also takes some getting used to. Google says building a 3D level is like playing a game. There’s some truth to that, since you are building inside the game environment, but it’s not necessarily an easy game either.

One cool feature here is that you can also build multiplayer games and even create games in real-time with your friends.

Traditionally, drag-and-drop game builders feel pretty limited. The Area 120 team is trying to overcome this by also letting you use JavaScript to go beyond some of the pre-programmed features. Google is also betting on Poly, its library of 3D objects, to give users lots of options for creating and designing their levels.

It’s no secret that Google is taking games pretty seriously these days, now that it is getting ready to launch its Stadia game streaming service later this year. There doesn’t seem to be a connection between the two just yet, but I wouldn’t be surprised if we saw Game Builder on Stadia, too.

RealityEngines.AI raises $5.25M seed round to make ML easier for enterprises

RealityEngines.AI, a research startup that wants to help enterprises make better use of AI, even when they only have incomplete data, today announced that it has raised a $5.25 million seed funding round. The round was led by former Google CEO and Chairman Eric Schmidt and Google founding board member Ram Shriram. Khosla Ventures, Paul Buchheit, Deepchand Nishar, Elad Gil, Keval Desai, Don Burnette and others also participated in this round.

The fact that the service was able to raise funding from this rather prominent group of investors clearly shows that its overall thesis resonates. The company, which doesn’t have a product yet, tells me that it specifically wants to help enterprises make better use of the smaller and noisier datasets they have and provide them with state-of-the-art machine learning and AI systems that they can quickly take into production. It also aims to provide its customers with systems that can explain their predictions and are free of various forms of bias, something that’s hard to do when the system is essentially a black box.

As RealityEngines CEO Bindu Reddy, who was previously the head of products for Google Apps, told me, the company plans to use the funding to build out its research and development team. The company, after all, is tackling some of the most fundamental and hardest problems in machine learning right now, and that costs money. Some of those problems, like working with smaller datasets, already have partial solutions, such as generative adversarial networks that can augment existing datasets, and RealityEngines expects to innovate on top of them.

Reddy is also betting on reinforcement learning as one of the core machine learning techniques for the platform.
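
RealityEngines hasn’t said how it will apply reinforcement learning, so any code here is necessarily generic. As a refresher on what the technique looks like at its simplest, here is a tabular Q-learning sketch on a toy problem (an agent walking along a line to reach a goal state); the environment, hyperparameters and update loop are textbook material, not anything from the startup:

```python
import random

# Toy environment: states 0..4 on a line, goal is state 4.
# Actions: 0 = move left, 1 = move right. Reward 1.0 only at the goal.
N_STATES, GOAL, ACTIONS = 5, 4, [0, 1]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-value table

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[state][a])
        nxt, reward, done = step(state, action)
        # Q-learning update rule.
        q[state][action] += ALPHA * (reward + GAMMA * max(q[nxt]) - q[state][action])
        state = nxt

print("Learned policy:", ["right" if q[s][1] >= q[s][0] else "left" for s in range(N_STATES)])
```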

Once it has its product in place, the plan is to make it available as a pay-as-you-go managed service that will make machine learning more accessible to large enterprises, but also to small and medium businesses, which also increasingly need access to these tools to remain competitive.

Apple joins the open-source Cloud Native Computing Foundation

The Cloud Native Computing Foundation (CNCF), the home of open-source projects like Kubernetes, today announced that Apple is joining it as a top-level Platinum End User Member. With this, Apple is joining 89 existing CNCF end-user members like Adidas, Atlassian, Box, GitHub, the New York Times, Reddit, Spotify and Walmart.

Apple, in typical fashion, isn’t commenting on the announcement, but the CNCF notes that end-user memberships are meant for organizations that are “heavy users of open source cloud native technologies” and that are looking to give back to the community. By becoming a CNCF end-user member, companies also join the Linux Foundation.

As part of its membership, Apple also gets a seat on the CNCF’s Governing Board. Tomer Doron, a senior engineering manager at Apple, will take this seat.

“Having a company with the experience and scale of Apple as an end user member is a huge testament to the vitality of cloud native computing for the future of infrastructure and application development,” said Chris Aniszczyk, CTO of the Cloud Native Computing Foundation. “We’re thrilled to have the support of Apple, and look forward to the future contributions to the broader cloud native project community.”

While you may not necessarily think of Apple as a major open-source company, the company has open sourced everything from the XNU kernel that’s part of the Darwin operating system to its Swift programming language. The company has not typically participated all that much in the open-source cloud infrastructure community, though, and today’s move may signal that this is changing. Apple obviously runs its own data centers, so chances are it is indeed a heavy user of open-source infrastructure projects, though the company doesn’t typically talk about these.

Qubole launches Quantum, its serverless database engine

Qubole, the data platform founded by Apache Hive creator and former head of Facebook’s Data Infrastructure team Ashish Thusoo, today announced the launch of Quantum, its first serverless offering.

Qubole may not necessarily be a household name, but its customers include the likes of Autodesk, Comcast, Lyft, Nextdoor and Zillow. For these users, Qubole has long offered a self-service platform that allowed their data scientists and engineers to build their AI, machine learning and analytics workflows on the public cloud of their choice. The platform sits on top of open-source technologies like Apache Spark, Presto and Kafka, for example.

Typically, enterprises have to provision a considerable amount of resources to give these platforms the resources they need. These resources often go unused and the infrastructure can quickly become complex.

Qubole already abstracts most of this away, offering what is essentially a serverless platform. With Quantum, however, it is going a step further by launching a high-performance serverless SQL engine that allows users to query petabytes of data with nothing but ANSI SQL, giving them the choice between using a Presto cluster or a serverless SQL engine to run their queries, for example.

The data can be stored on AWS, Azure, Google Cloud or Oracle Cloud, and users won’t have to set up a second data lake or move their data to another platform to use the SQL engine. Quantum automatically scales up or down as needed, of course, and users can still work with the same metastore for their data, no matter whether they choose the clustered or serverless option. Indeed, Quantum is essentially just another SQL engine within Qubole’s overall suite of engines.

Typically, Qubole charges enterprises by compute minutes. When using Quantum, the company uses the same metric, but enterprises pay for the execution time of the query. “So instead of the Qubole compute units being associated with the number of minutes the cluster was up and running, it is associated with the Qubole compute units consumed by that particular query or that particular workload, which is even more fine-grained,” Thusoo explained. “This works really well when you have to do interactive workloads.”
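
To make the difference between the two billing models concrete, here is a back-of-the-envelope comparison. All of the numbers (the rate, the workload, the query times) are made up for illustration and are not Qubole’s actual pricing:

```python
# Hypothetical workload: an analyst runs 40 ad-hoc queries over an 8-hour day,
# each taking 30 seconds of execution time on average.
queries_per_day = 40
seconds_per_query = 30
cluster_hours_up = 8            # a conventional cluster stays up all day
cost_per_compute_minute = 0.05  # made-up rate, applied to both models

# Cluster-based billing: pay for every minute the cluster is running.
cluster_cost = cluster_hours_up * 60 * cost_per_compute_minute

# Per-query billing (the Quantum model): pay only for execution time.
query_minutes = queries_per_day * seconds_per_query / 60
serverless_cost = query_minutes * cost_per_compute_minute

print(f"Cluster billing:   ${cluster_cost:.2f}/day")     # $24.00
print(f"Per-query billing: ${serverless_cost:.2f}/day")   # $1.00
```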

Thusoo notes that Quantum is targeted at analysts who often need to perform interactive queries on data stored in object stores. Qubole integrates with services like Tableau and Looker (which Google is now in the process of acquiring). “They suddenly get access to very elastic compute capacity, but they are able to come through a very familiar user interface,” Thusoo noted.

How Amazon’s delivery robots will navigate your sidewalk

Earlier this year, Amazon announced its Scout sidewalk delivery robot. At the time, details were sparse, except for the fact that the company had started to make deliveries in a neighborhood in Washington State. Today, at Amazon’s re:Mars conference, I sat down with Sean Scott, the VP in charge of Scout, to talk about how his team built the robot, how it finds its way around and what its future looks like.

These relatively small blue robots could be roaming a sidewalk near you soon, though as of now, Amazon isn’t quite ready to talk about when and where it will expand its network from its single neighborhood to other areas.

“For the last decade, we’ve invested billions of dollars in cargo planes and delivery vans, fulfillment center robots, and last holiday period, we shipped over a billion products with Prime free shipping,” Scott told me. “So it’s my job as VP of Amazon Scout to bring another new, innovative, safe and sustainable solution to this delivery network to help us really grow quickly and efficiently to meet customer demand.”

Currently, in Amazon’s trial, the robots are always accompanied by human assistants. Those assistants (who probably look a bit like robot dog walkers as they trot through the neighborhood) are the ones who take the packages out of the robot when it arrives at its destination and put them on the customer’s doorstep. For now, that also means the customers don’t have to be home, though chances are they will have to be once this project rolls out to more users.

As of now, when it’s ready to make deliveries, Amazon drives a large van to the neighborhood and the Scout robots leave from there and return when they are done. Scott wouldn’t say how far the robots can travel, but it seems reasonable to assume that they could easily go for a mile or two.

As we learned earlier this year, Amazon did make a small acquisition to kickstart the program, but it’s worth stressing that it now does virtually all of the work in-house, including building and assembling the robots and writing the software for them.

“For Scout we’re actually owning the entire development from the industrial design to the actual hardware, mechanical, electrical, the software, the systems, manufacturing and operations,” said Scott. “That really helps us control everything we’re doing.” Having that end-to-end control enables the team to iterate significantly faster.

The team even built a rig to test the Scout’s wheels and in the process, learned that the wheels’ material was actually too soft to survive the rigors of daily sidewalk driving for long.

Inside its labs, the team also built a sidewalk environment for real-world testing, but as is so often the case these days, the team did most of the machine learning training in a simulation. Indeed, since there are basically no maps for navigating sidewalks, the team has to build its own maps of every neighborhood it goes into and it then uses this highly detailed map in its simulation.

That’s important, Scott noted, because simply using a game engine with repeating textures just wouldn’t be good enough to train the algorithms that keep the robot on track. To do that, you need real-world textures, for example.

“We thought about building a synthetic world, but it turns out building a synthetic world is much harder than copying the real world,” Scott said. “So we decided to copy the real world.” He showed me a video of the simulated robot moving through the simulation, using a map that looks a bit like a highly zoomed-in Google Maps 3D view. Not perfect, but perfectly reasonable, down to the gutters on the street and the small bumps where two concrete plates on the sidewalk line up.

This simulation allows Amazon to make thousands of simulated deliveries before the team ever goes out to test the robot on the street. In the demo I saw, the robot had no issues navigating around obstacles, pausing for crossing cats and getting to its destination. That’s possible thanks to a combination of detailed maps and high-resolution imagery of its surroundings, combined with GPS data (when available) and cutting-edge machine-learning techniques.

Once it is out and about, though, the robot will have to face the elements. It’s watertight, something you’d expect from a company that is based in Seattle, and it’s got sensors all around to ensure it can find its way on sidewalks that are often littered with obstacles (think trash day) and full of curious cats and dogs. Around the robot is an array of cameras and ultrasonic sensors, all of which are then evaluated by a set of machine learning algorithms that help it plot its path.

“We jokingly refer to the sidewalk as the Wild West,” said Scott. “Every sidewalk is a snowflake and every neighborhood is a collection of snowflakes.”

At times, the robot also has to deviate from the sidewalk, simply because it is blocked. In those cases, it will opt for driving on the street. That’s something local laws in many states now allow for, though Scott tells me that the team only considers it when it’s a street where a pedestrian would also feel comfortable. “If you feel safe walking on that road, that’s where we want to be. We want to be viewed as a pedestrian and treated as a pedestrian,” he said. And that’s how the law in Washington State looks at these robots, which means, for example, that they have to be given the right of way.

Scott also noted that the team designed the robot so it would be visible when necessary, with blinking lights when it crosses a street, for example, but also a bit boring, so that it would blend into the environment. “We really want this to blend into the background and be part of the environment and not be this loud and obnoxious thing that’s always rolling through the neighborhood,” said Scott. So it has the bright blue Amazon Prime color on top to be seen, but is otherwise relatively bland and without any anthropomorphic features. It’s just your average neighborhood delivery robot, in other words.

As it moves along, it makes very deliberate movements, which Scott believes will make people feel more comfortable around it. Unlike a drone, there’s no major risk if any part of the robot breaks during a mission; somebody can simply come and pick it up. Still, the team says it did design the robot with safety at the front and center of its process.

One thing that’s currently not clear — and that Amazon didn’t want to talk about yet — is how it will solve the actual handover of the package. Right now, the assistant handles this part, but in Amazon’s photos, the customer walks up to the robot and takes the package out of it. That’s a reasonable scenario, I think. In the long run, Amazon could also outfit the robot with multiple compartments to make multiple deliveries in one go. Right now, the Scout can only handle a single package.

One advantage the robot has over human delivery people is that if you’re not home, it can just wait for a while, Scott said. So it’s conceivable that you’ll come home one day and there’s a Scout, standing patiently in front of your door, waiting to deliver your latest impulse order. That’s likely still a while away, though; Amazon won’t commit to any timetable or a wider rollout.

Jeff Bezos wants to build the infrastructure for space startups

At its re:Mars conference, Amazon’s CEO Jeff Bezos took the stage today to be “interviewed” by Jenny Freshwater, Amazon’s director of forecasting. As any AWS machine learning tool could have forecasted, having an employee interview her boss didn’t lead to any challenging questions or especially illuminating answers, but Bezos did get a chance to talk about a variety of topics, ranging from business advice to his plans for Blue Origin.

We can safely ignore the business advice, given that Amazon’s principle of “disagree and commit” is about as well known as it could be, but his comments about Blue Origin, his plans for moon exploration and its relationship to startups were quite interesting.

He noted that we now know so much more about the moon than ever before, including that it does provide a number of resources that make it a good base for further space exploration. “The reason we need to go to space is to save the Earth,” he said. “We are going to grow this civilization — and I’m talking about something that our grandchildren will work on — and their grandchildren. This isn’t something that this generation is able to accomplish. But we need to move heavy industry off Earth.”

Building up the infrastructure for this is obviously expensive, though. “Infrastructure is always expensive,” he said. “Amazon was easy to start in 1994 with a small amount of capital because the transportation system already existed.” Similarly, the payment system, in the form of credit cards, was already in place, as was the telecom network.

“You cannot start an interesting space company today from your dorm room. The price of admission is too high and the reason for that is that the infrastructure doesn’t exist,” Bezos noted. “So my mission with Blue Origin is to help build that infrastructure, that heavy lifting infrastructure that future generations will be able to stand on top of the same way I stood on top of the U.S. Postal Service and so on.”

The obvious follow-ups here would have been about how Amazon is now building its own logistics network and replacing the U.S. Postal Service with its own delivery services.

Once the Amazon space station opens, Bezos expects that the first deliveries will be of liquid hydrogen and liquid oxygen. “It’s going to be a small selection but a very important one,” he joked.

Either way, though, it’s clear that Bezos does see Blue Origin as having a vital mission for the future of mankind. In that, he shares his passion with Elon Musk and other space entrepreneurs.

It’s worth noting that Amazon already offers satellite ground stations as a service and is looking to offer space-based internet access with Project Kuiper.

Bezos’s fireside chat was briefly interrupted by a protestor, who urged the billionaire to “save the animals.” As far as conference protests go, this one was pretty mild, though the fact that the protestor made it onto the stage probably means that Amazon will step up security at its next events and that somebody on the security team is going to have to disagree and commit.

Amazon will soon make having a chat with Alexa feel more natural

At its re:Mars conference, Amazon today announced that it is working on making interacting with its Alexa personal assistant more natural by enabling more fluid conversations that can move from topic to topic — and without having to constantly say “Alexa.”

At re:Mars, the company showed a brief demo of how this would work to order movie tickets, for example, taking you from asking “Alexa, what movies are playing nearby?” to actually selecting the movie, buying the tickets and making a restaurant reservation nearby — and then watching the trailer and ordering an Uber.

In many ways, this was a demo that I would have expected to see from Google at I/O, but Amazon has clearly stepped up its Alexa game in recent months.

The way the company is doing this is by relying on a new dialogue system that can predict next actions and easily switch between different Alexa skills. “Now, we have advanced our machine learning capabilities such that Alexa can predict customer’s true goal from the direction of the dialogue, and proactively enable the conversation flow across skills,” the company explained.

This new experience, which Amazon demoed on an Echo Show, with the appropriate visual responses, will go live for users in the coming months.

The company also announced today that, over the last few months, Alexa has become 20% more accurate in understanding your requests.

In addition, developers will be able to make use of some of these technologies that Amazon is using for this new dialogue tool. This new tool, Alexa Conversations, allows developers to build similar flows. Traditionally, this involved writing lots of code, but Alexa Conversations can bring this down by about a third. The developer simply has to declare a set of actions and a few example interactions. Then, the service will run an automated dialogue simulator so that the developers don’t actually have to think of all the different ways that a customer will interact with their skills. Over time, it will also learn from how real-world users interact with the system.
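
Amazon hasn’t published what the Alexa Conversations developer interface looks like in detail, so the sketch below is purely illustrative: it shows roughly what “declaring a set of actions and a few example interactions” could look like as data, using invented names rather than the actual API:

```python
# Purely illustrative sketch of declaring actions and sample dialogues for a
# movie-night skill; none of these names come from the real Alexa Conversations API.
actions = {
    "FindMovies":     {"slots": ["location", "date"]},
    "BuyTickets":     {"slots": ["movie", "showtime", "seats"]},
    "BookRestaurant": {"slots": ["cuisine", "time", "party_size"]},
}

example_interactions = [
    # Each example pairs a customer utterance with the action it should trigger.
    ("What movies are playing nearby?",         "FindMovies"),
    ("Get me two tickets for the 8pm showing.", "BuyTickets"),
    ("Find us somewhere to eat afterwards.",    "BookRestaurant"),
]

# A dialogue simulator would expand these few examples into many phrasing and
# ordering variations, so the developer doesn't have to enumerate them by hand.
for utterance, action in example_interactions:
    print(f"{utterance!r:50} -> {action}")
```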

A first look at Amazon’s new delivery drone

For the first time, Amazon today showed off its newest fully electric delivery drone at its first re:Mars conference in Las Vegas. Chances are, it neither looks nor flies like what you’d expect from a drone. It’s an ingenious hexagonal hybrid design, though, that has very few moving parts and uses the shroud that protects its blades as its wings when it transitions from vertical, helicopter-like flight at takeoff to its airplane-like mode.

These drones, Amazon says, will start making deliveries in the coming months, though it’s not yet clear where exactly that will happen.

What’s maybe even more important, though, is that the drone is chock-full of sensors and a suite of compute modules that run a variety of machine learning models to keep the drone safe. Today’s announcement marks the first time Amazon is publicly talking about those visual, thermal and ultrasonic sensors, which it designed in-house, and how the drone’s autonomous flight systems maneuver it to its landing spot. The focus here was on building a drone that is as safe as possible and independently safe: even when it’s not connected to a network and encounters a new situation, it’ll be able to react appropriately and safely.

When you see it fly in airplane mode, it looks a little bit like a TIE fighter, where the core holds all the sensors and navigation technology, as well as the package. The new drone can fly up to 15 miles and carry packages that weigh up to five pounds.

This new design is quite a departure from earlier models. I got a chance to see it ahead of today’s announcement and I admit that I expected a far more conventional design — more like a refined version of the last, almost sled-like, design.

Amazon’s last generation of drones looked very different.

Besides the cool factor of the drone, though, which is probably a bit larger than you may expect, what Amazon is really emphasizing this week is the sensor suite and safety features it developed for the drone.

Ahead of today’s announcement, I sat down with Gur Kimchi, Amazon’s VP for its Prime Air program, to talk about the progress the company has made in recent years and what makes this new drone special.

“Our sense and avoid technology is what makes the drone independently safe,” he told me. “I say independently safe because that’s in contrast to other approaches where some of the safety features are off the aircraft. In our case, they are on the aircraft.”

Kimchi also stressed that Amazon designed virtually all of the drone’s software and hardware stack in-house. “We control the aircraft technologies from the raw materials to the hardware, to software, to the structures, to the factory to the supply chain and eventually to the delivery,” he said. “And finally the aircraft itself has controls and capabilities to react to the world that are unique.”


What’s clear is that the team tried to keep the actual flight surfaces as simple as possible. There are four traditional airplane control surfaces and six rotors. That’s it. The autopilot, which evaluates all of the sensor data and which Amazon also developed in-house, gives the drone six degrees of freedom to maneuver to its destination. The angled box at the center of the drone, which houses most of the drone’s smarts and the package it delivers, doesn’t pivot. It sits rigidly within the aircraft.

It’s unclear how loud the drone will be. Kimchi would only say that it’s well within established safety standards and that the profile of the noise also matters. He likened it to the difference between hearing a dentist’s drill and classical music. Either way, though, the drone is likely loud enough that it’s hard to miss when it approaches your backyard.

To see what’s happening around it, the new drone uses a number of sensors and machine learning models, all running independently, that constantly monitor the drone’s flight envelope (which, thanks to its unique shape and controls, is far more flexible than that of a regular drone) and environment. These include regular and infrared cameras that give it a view of its surroundings. There are multiple sensors on all sides of the aircraft so that it can spot things that are far away, like an oncoming aircraft, as well as objects that are close, when the drone is landing, for example.

The drone also uses various machine learning models to, for example, detect other air traffic around it and react accordingly, to detect people in the landing zone, or to spot a line hanging over it (which is a really hard problem to solve, given that lines tend to be rather hard to detect). To do this, the team uses photogrammetric models, segmentation models and neural networks. “We probably have the state of the art algorithms in all of these domains,” Kimchi argued.

Whenever the drone detects an object or a person in the landing zone, it obviously aborts — or at least delays — the delivery attempt.

“The most important thing the aircraft can do is make the correct safe decision when it’s exposed to an event that isn’t in the planning — that it has never been programmed for,” Kimchi said.

The team also uses a technique known as Visual Simultaneous Localization and Mapping (VSLAM), which helps the drone build a map of its current environment, even when it doesn’t have any other previous information about a location or any GPS information.

“That combination of perception and algorithmic diversity is what we think makes our system uniquely safe,” said Kimchi. As the drone makes its way to the delivery location or back to the warehouse, all of the sensors and algorithms always have to be in agreement. When one fails or detects an issue, the drone will abort the mission. “Every part of the system has to agree that it’s okay to proceed,” Kimchi said.
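
That “every part of the system has to agree” rule is easy to illustrate. The sketch below is my own simplification of the idea rather than anything from Amazon’s flight software: a handful of independent checks each inspect their own inputs, and the mission proceeds only if all of them agree:

```python
from typing import Callable, Dict

# Each check is an independent function that inspects its own sensor data and
# returns True only if it sees no reason to abort. These are stand-ins, not
# Amazon's actual checks.
def camera_sees_clear_landing_zone() -> bool: return True
def ultrasonic_reports_no_close_obstacle() -> bool: return True
def no_other_air_traffic_detected() -> bool: return True

checks: Dict[str, Callable[[], bool]] = {
    "camera": camera_sees_clear_landing_zone,
    "ultrasonic": ultrasonic_reports_no_close_obstacle,
    "air_traffic": no_other_air_traffic_detected,
}

def okay_to_proceed() -> bool:
    """Proceed only if every independent check agrees; any dissent aborts."""
    for name, check in checks.items():
        if not check():
            print(f"Aborting: {name} check did not agree it is safe.")
            return False
    return True

if __name__ == "__main__":
    print("Proceed with delivery" if okay_to_proceed() else "Mission aborted")
```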

What Kimchi stressed throughout our conversation is that Amazon’s approach goes beyond redundancy, which is a pretty obvious concept in aviation and involves having multiple instances of the same hardware on board. Kimchi argues that having a diversity of sensors that are completely independent of each other is also important. The drone only has one angle of attack sensor, for example, but it also has a number of other ways to measure the same value.

Amazon isn’t quite ready to delve into all the details of what the actual on-board hardware looks like. Kimchi did tell me, though, that the system uses more than one operating system and CPU architecture.

It’s the integration of all of those sensors, AI smarts and the actual design of the drone that makes the whole unit work. At some point, though, things will go wrong. The drone can easily handle a rotor that stops working, which is pretty standard these days. In some circumstances, it can even handle two failed units. And unlike most other drones, it can glide if necessary, just like any other airplane. But when it needs to land unexpectedly, its AI smarts kick in and the drone will try to find a safe spot, away from people and objects, and it has to do so without any prior knowledge of its surroundings.

Amazon Prime Air drone

To get to this point, the team actually used an AI system to evaluate more than 50,000 different configurations. Just the computational fluid dynamics simulations took up 30 million hours of AWS compute time (it’s good to own a large cloud when you want to build a novel, highly optimized drone, it seems). The team also ran millions of simulations, of course, with all of the sensors, and looked at all of the possible positions and sensor ranges — and even different lenses for the cameras — to find an optimal solution. “The optimization is what is the right, diverse set of sensors and how they are configured on the aircraft,” Kimchi noted. “You always have both redundancy and diversity, both from the physical domain — sonar versus photons — and the algorithmic domain.”

The team also ran thousands of hardware-in-the-loop simulations where all the flight surfaces are actuating and all the sensors are perceiving the simulated environment. Here, too, Kimchi wasn’t quite ready to give away the secret sauce the team uses to make that work.

And the team obviously tested the drones in the real world to validate its models. “The analytical models, the computational models are very rich and are very deep, but they are not calibrated against the real world. The real world is the ultimate random event generator,” he said.

It remains to be seen where the new drone will make its first deliveries. That’s a secret Amazon also isn’t quite ready to reveal yet, though it will happen within the next few months. Amazon started drone deliveries in England a while back, so that’s an obvious choice, but there’s no reason the company couldn’t opt for another country as well. The U.S. seems like an unlikely candidate, given that the regulations there are still in flux, but maybe that’s a problem that will be solved by then, too. Either way, what once looked like a bit of a Black Friday stunt may just land in your backyard sooner than you think.