This ultrasonic gripper could let robots hold things without touching them

If robots are to help out in places like hospitals and phone repair shops, they’re going to need a light touch. And what’s lighter than not touching at all? Researchers have created a gripper that uses ultrasonics to suspend an object in midair, potentially making it suitable for the most delicate tasks.

It’s done with an array of tiny speakers that emit sound at very carefully controlled frequencies and volumes. These produce a sort of standing pressure wave that can hold an object up or, if the pressure is coming from multiple directions, hold it in place or move it around.
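For the curious, here is a rough back-of-the-envelope sketch of the standing-wave idea. The 40 kHz transducer frequency is my assumption (it's typical of ultrasonic arrays), not a figure from Schuck's team, and the math covers only the simplest two-emitter case:

```python
# Back-of-the-envelope for acoustic levitation (not Schuck's actual device):
# opposed emitters set up a standing wave whose pressure nodes, spaced half a
# wavelength apart, are the points where a tiny object can be suspended.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def node_spacing_mm(frequency_hz: float) -> float:
    """Distance between adjacent pressure nodes (potential trapping points)."""
    wavelength_m = SPEED_OF_SOUND / frequency_hz
    return wavelength_m / 2 * 1000  # millimeters

# 40 kHz is a common ultrasonic transducer frequency (my assumption, not from the article).
print(f"{node_spacing_mm(40_000):.2f} mm between trapping nodes")  # ~4.29 mm
```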

This kind of “acoustic levitation,” as it’s called, is not exactly new — we see it being used as a trick here and there, but so far there have been no obvious practical applications. Marcel Schuck and his team at ETH Zürich, however, show that such a portable device could easily find a place in processes where tiny objects must be very lightly held.

A small electric component, or a tiny oiled gear or bearing for a watch or micro-robot, for instance, would ideally be held without physical contact, since that contact could impart static or dirt to it. So even when robotic grippers are up to the task, they must be kept clean or isolated. Acoustic manipulation, however, would have significantly less possibility of contamination.

Another, more sinister-looking prototype.

The problem is that it isn’t obvious exactly which combination of frequencies and amplitudes is necessary to suspend a given object in the air. So a large part of this work was developing software that can easily be configured to work with a new object, or programmed to move it in a specific way — rotating, flipping or otherwise moving it at the user’s behest.

A working prototype is complete, but Schuck plans to poll various industries to see whether and how such a device could be useful to them. Watchmaking is of course important in Switzerland, and the parts are both small and sensitive to touch. “Toothed gearwheels, for example, are first coated with lubricant, and then the thickness of this lubricant layer is measured. Even the faintest touch could damage the thin film of lubricant,” he points out in the ETHZ news release.

How would a watchmaker use such a robotic arm? How would a designer of microscopic robots, or a biochemist? The potential is there, but the specific applications aren’t necessarily obvious. Fortunately, he has a bit of fellowship cash to spend on the question and hopes to spin it off as a startup next year if his early inquiries bear fruit.

Unearth the future of agriculture at TC Sessions: Robotics+AI with the CEOs of Traptic, Farmwise and Pyka

Farming is one of the oldest professions, but today those amber waves of grain (and soy) are a test bed for sophisticated robotic solutions to problems farmers have had for millennia. Learn about the cutting edge (sometimes literally) of agricultural robots at TC Sessions: Robotics+AI on March 3 with the founders of Traptic, Pyka, and Farmwise.

Traptic, and its co-founder and CEO Lewis Anderson, you may remember from Disrupt SF 2019, where it was a finalist in the Startup Battlefield. The company has developed a robotic berry picker that identifies ripe strawberries and plucks them off the plants with a gentle grip. It could be the beginning of a new automated era for the fruit industry, which is decades behind grains and other crops when it comes to machine-based harvesting.

Farmwise has a job that’s equally delicate yet involves rough treatment of the plants — weeding. Its towering machine trundles along rows of crops, using computer vision to locate and remove invasive plants, working 24/7, 365 days a year. CEO Sebastian Boyer will speak to the difficulty of this task and how he plans to evolve the machines to become “doctors” for crops, monitoring health and spontaneously removing pests like aphids.

Pyka’s robot is considerably less earthbound than those: an autonomous, all-electric crop-spraying aircraft — with wings! This is a much different challenge from the more stable farming and spraying drones like those of DroneSeed and SkyX, but the choice gives the craft more power and range, hugely important for today’s vast fields. Co-founder Michael Norcia can speak to that scale and his company’s methods of meeting it.

These three companies and founders are at the very frontier of what’s possible at the intersection of agriculture and technology, so expect a fruitful conversation.

$150 Early Bird savings end on Feb. 14! Book your $275 Early Bird Ticket today and put that extra money in your pocket.

Students, grab your super discounted $50 tickets right here. You might just meet your future employer/internship opportunity at this event.

Startups, we only have 5 demo tables left for the event. Book your $2200 demo table here and get in front of some of today’s leading names in the biz. Each table comes with 4 tickets to attend the show.

Female Founders Alliance absorbs Monarq accelerator to better promote women and non-binary founders

Seattle’s Female Founders Alliance, which runs the Ready Set Raise accelerator for women and non-binary founders, has acquired New York’s Monarq, an incubator with similar goals and origins. The latter will be integrated into the former, but it seems to be a happy collaboration rather than a consolidation of necessity.

Monarq was founded three years ago by Irene Ryabaya and Diana Murakhovskaya, and 32 companies have gone through its process. FFA has accepted half that number into its program as of its second cohort, with a third underway for 2020. I covered FFA graduate GiveInKind in November, when it raised a $1.5M seed round.

“Monarq and FFA share a common sponsor that introduced us years ago, and we’ve been connected and supportive of each other since,” explained FFA CEO Leslie Feinzaig to TechCrunch. “This year, Diana and Irena’s side gigs started to take off — Diana raised a $20M VC fund, and Irena’s startup, WarmIntro, started signing up substantial customers. It made strategic sense for FFA to solidify our national expansion and strengthen our network of investors and mentors that are East Coast based.”

Ryabaya and Murakhovskaya will be focusing on The Artemis Fund and WarmIntro, respectively, and Monarq’s accelerator will be tucked into the Ready Set Raise brand. The merger will create what FFA claims is the country’s largest network of female and non-binary industry folks, which should prove an asset for those in the program.

It’s possible to see this as consolidation within a specialized branch of the startup industry, but Feinzaig said business is booming.

“The market for women’s leadership is absolutely growing, and creating a lot of opportunities in the process,” she said. “What’s different now is that there is a recognition that this is good business, not a charitable cause.”

The FFA’s stated goal of gender parity among founders only grows more achievable with increased reach. It may be that the increased scale also improves results in an already impressive portfolio.

Facebook speeds up AI training by culling the weak

Training an artificial intelligence agent to do something like navigate a complex 3D world is computationally expensive and time-consuming. In order to better create these potentially useful systems, Facebook engineers derived huge efficiency benefits from, essentially, leaving the slowest of the pack behind.

It’s part of the company’s new focus on “embodied AI,” meaning machine learning systems that interact intelligently with their surroundings. That could mean lots of things — responding to a voice command using conversational context, for instance, but also more subtle things like a robot knowing it has entered the wrong room of a house. Exactly why Facebook is so interested in that I’ll leave to your own speculation, but the fact is they’ve recruited and funded serious researchers to look into this and related domains of AI work.

To create such “embodied” systems, you need to train them using a reasonable facsimile of the real world. One can’t expect an AI that’s never seen an actual hallway to know what walls and doors are. And given how slowly real robots move in real life, you can’t expect them to learn those lessons there in any reasonable amount of time. That’s what led Facebook to create Habitat, a set of simulated real-world environments meant to be photorealistic enough that what an AI learns by navigating them could also be applied to the real world.

Such simulators, which are common in robotics and AI training, are also useful because, being simulators, you can run many instances of them at the same time — for simple ones, thousands simultaneously, each one with an agent in it attempting to solve a problem and eventually reporting back its findings to the central system that dispatched it.

Unfortunately, photorealistic 3D environments use a lot of computation compared to simpler virtual ones, meaning that researchers are limited to a handful of simultaneous instances, slowing learning to a comparative crawl.

The Facebook researchers, led by Dhruv Batra and Eric Wijmans, the former a professor and the latter a PhD student at Georgia Tech, found a way to speed up this process by an order of magnitude or more. The result is an AI system that can navigate a 3D environment from a starting point to a goal with a 99.9 percent success rate and few mistakes.

Simple navigation is foundational to a working “embodied AI” or robot, which is why the team chose to pursue it without adding any extra difficulties.

“It’s the first task. Forget the question answering, forget the context — can you just get from point A to point B? When the agent has a map this is easy, but with no map it’s an open problem,” said Batra. “Failing at navigation means whatever stack is built on top of it is going to come tumbling down.”

The problem, they found, was that the training systems were spending too much time waiting on slowpokes. Perhaps it’s unfair to call them that — these are AI agents that for whatever reason are simply unable to complete their task quickly.

“It’s not necessarily that they’re learning slowly,” explained Wijmans. “But if you’re simulating navigating a one-bedroom apartment, it’s much easier to do that than navigate a ten-bedroom mansion.”

The central system is designed to wait for all its dispatched agents to complete their virtual tasks and report back. If a single agent takes 10 times longer than the rest, that means there’s a huge amount of wasted time while the system sits around waiting so it can update its information and send out a new batch.

This little explanatory gif shows how when one agent gets stuck, it delays others learning from its experience.

The innovation of the Facebook team is to intelligently cut off these unfortunate laggards before they finish. After a certain amount of time in simulation, they’re done and whatever data they’ve collected gets added to the hoard.

“You have all these workers running, and they’re all doing their thing, and they all talk to each other,” said Wijmans. “One will tell the others, ‘okay, I’m almost done,’ and they’ll all report in on their progress. Any ones that see they’re lagging behind the rest will reduce the amount of work that they do before the big synchronization that happens.”

In this case you can see that each worker stops at the same time and shares simultaneously.

If a machine learning agent could feel bad, I’m sure it would at this point, and indeed that agent does get “punished” by the system in that it doesn’t get as much virtual “reinforcement” as the others. The anthropomorphic terms make this out to be more human than it is — essentially inefficient algorithms or ones placed in difficult circumstances get downgraded in importance. But their contributions are still valuable.

“We leverage all the experience that the workers accumulate, no matter how much, whether it’s a success or failure — we still learn from it,” Wijmans explained.

What this means is that there are no wasted cycles where some workers are waiting for others to finish. Bringing in more experience on the task at hand sooner means the next batch of slightly better workers goes out that much earlier, a self-reinforcing cycle that produces serious gains.
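To make the idea concrete, here is a minimal sketch of that preemption logic in Python. It is my own illustration, not Facebook's actual implementation; the worker structure, the 60 percent preemption threshold and the step counts are all invented for the example:

```python
import random

class Worker:
    """One simulated environment collecting experience for a shared policy update."""
    def __init__(self, steps_target: int, speed: float):
        self.steps_target = steps_target
        self.speed = speed        # steps per tick; a sprawling "mansion" environment is slow
        self.steps_done = 0.0
        self.experience = []

    def tick(self) -> None:
        if not self.done:
            self.steps_done += self.speed
            self.experience.append(("observation", self.steps_done))

    @property
    def done(self) -> bool:
        return self.steps_done >= self.steps_target

def collect_rollouts(workers, preempt_fraction=0.6, max_ticks=100_000):
    """Run all workers until enough finish, then cut the stragglers off early."""
    for _ in range(max_ticks):
        for w in workers:
            w.tick()
        if sum(w.done for w in workers) >= preempt_fraction * len(workers):
            break  # laggards stop here, keeping whatever partial experience they have
    # Every worker's experience, complete or not, feeds the synchronized update.
    return [step for w in workers for step in w.experience]

workers = [Worker(steps_target=128, speed=random.choice([1.0, 1.0, 1.0, 0.1]))
           for _ in range(8)]
batch = collect_rollouts(workers)
print(f"collected {len(batch)} steps from {len(workers)} workers")
```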

In the experiments they ran, the researchers found that the system, catchily named Decentralized Distributed Proximal Policy Optimization, or DD-PPO, appeared to scale almost ideally, with performance increasing nearly linearly with the computing power dedicated to the task. That is to say, increasing the computing power 10x resulted in nearly 10x the results. Standard algorithms, on the other hand, showed very limited scaling, where 10x or 100x the computing power produced only a small boost in results because of how these sophisticated simulators hamstring themselves.

These efficient methods let the Facebook researchers produce agents that could solve a point-to-point navigation task in a virtual environment within their allotted time with 99.9 percent reliability. The agents even demonstrated robustness to mistakes, quickly recognizing when they’d taken a wrong turn and going back the other way.

The researchers speculated that the agents had learned to “exploit the structural regularities,” a phrase that in some circumstances means the AI figured out how to cheat. But Wijmans clarified that it’s more likely that the environments they used have some real-world layout rules.

“These are real houses that we digitized, so they’re learning things about how western-style houses tend to be laid out,” he said. Just as you wouldn’t expect a kitchen to open directly into a bedroom, the AI has learned to recognize other patterns and make other “assumptions.”

The next goal is to find a way to let these agents accomplish their task with fewer resources. Each agent navigated with a virtual camera that provided it ordinary and depth imagery, but also an infallible coordinate system to tell it where it had traveled and a compass that always pointed toward the goal. If only it were always so easy! Before this work, even with those resources, the success rate was considerably lower despite far more training time.

Habitat itself is also getting a fresh coat of paint with some interactivity and customizability.

Habitat as seen through a variety of virtualized vision systems.

“Before these improvements, Habitat was a static universe,” explained Wijmans. “The agent can move and bump into walls, but it can’t open a drawer or knock over a table. We built it this way because we wanted fast, large-scale simulation — but if you want to solve tasks like ‘go pick up my laptop from my desk,’ you’d better be able to actually pick up that laptop.”

Habitat now lets users add objects to rooms, apply forces to those objects, check for collisions, and so on. After all, there’s more to real life than disembodied gliding around a frictionless 3D construct.

The improvements should make Habitat a more robust platform for experimentation, and will also make it possible for agents trained in it to directly transfer their learning to the real world — something the team has already begun work on and will publish a paper on soon.

Baraja’s unique and ingenious take on lidar shines in a crowded industry

It seems like every company making lidar has a new and clever approach, but Baraja takes the cake. Its method is not only elegant and powerful, but fundamentally avoids many issues that nag other lidar technologies. But it’ll need more than smart tech to make headway in this complex and evolving industry.

To understand how lidar works in general, consult my handy introduction to the topic. Essentially a laser emitted by a device skims across or otherwise very quickly illuminates the scene, and the time it takes for that laser’s photons to return allows it to quite precisely determine the distance of every spot it points at.
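As a quick worked example of that round-trip timing (the numbers are illustrative, not taken from any particular device):

```python
# Time-of-flight basics: distance = speed of light * round-trip time / 2.
C = 299_792_458  # speed of light, m/s

def distance_m(round_trip_seconds: float) -> float:
    # The pulse travels out and back, so halve the round trip.
    return C * round_trip_seconds / 2

print(distance_m(200e-9))  # a 200-nanosecond round trip works out to roughly 30 meters
```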

But to picture how Baraja’s lidar works, you need to picture the cover of Pink Floyd’s “Dark Side of the Moon.”

GIFs kind of choke on rainbows, but you get the idea.

Imagine a flashlight shooting through a prism like that, illuminating the scene in front of it — now imagine you could focus that flashlight by selecting which color came out of the prism, sending more light to the top part of the scene (red and orange) or middle (yellow and green). That’s what Baraja’s lidar does, except naturally it’s a bit more complicated than that.

The company has been developing its tech for years with the backing of Sequoia and Australian VC outfit Blackbird, which led a $32 million round late in 2018 — Baraja only revealed its tech the next year and was exhibiting it at CES, where I met with co-founder and CEO Federico Collarte.

“We’ve stayed in stealth for a long, long time,” he told me. “The people who needed to know already knew about us.”

The idea for the tech came out of the telecommunications industry, where Collarte and co-founder Cibby Pulikkaseril thought of a novel use for a fiber optic laser that could reconfigure itself extremely quickly.

“We thought if we could set the light free, send it through prism-like optics, then we could steer a laser beam without moving parts. The idea seemed too simple — we thought, ‘if it worked, then everybody would be doing it this way,’” he told me, but they quit their jobs and worked on it for a few months with a friends and family round anyway. “It turns out it does work, and the invention is very novel and hence we’ve been successful in patenting it.”

Rather than send a coherent laser at a single wavelength (1550 nanometers, well into the infrared, is the lidar standard), Baraja uses a set of fixed lenses to refract that beam into a spectrum spread vertically over its field of view. Yet it isn’t one single beam being split but a series of coded pulses, each at a slightly different wavelength that travels ever so slightly differently through the lenses. It returns the same way, the lenses bending it in the opposite direction to return to its origin for detection.

It’s a bit difficult to grasp this concept, but once one does it’s hard to see it as anything but astonishingly clever. Not just because of the fascinating optics (something I’m partial to, if it isn’t obvious), but because it obviates a number of serious problems other lidars are facing or about to face.

First, there are next to no moving parts whatsoever in the entire Baraja system. Spinning lidars like the popular early devices from Velodyne are largely being replaced by ones using metamaterials, MEMS and other methods that don’t have bearings or hinges that can wear out.

Baraja’s “head” unit, connected by fiber optic to the brain.

In Baraja’s system, there are two units, a “dumb” head and an “engine.” The head has no moving parts and no electronics; it’s all glass, just a set of lenses. The engine, which can be located nearby or a foot or two away, produces the laser and sends it to the head via a fiber-optic cable (and some kind of proprietary mechanism that rotates slowly enough that it could theoretically work for years continuously). This means it’s not only very robust physically, but its volume can be spread out wherever is convenient in the car’s body. The head itself also can be resized more or less arbitrarily without significantly altering the optical design, Collarte said.

Second, the method of diffracting the beam gives the system considerable leeway in how it covers the scene. Different wavelengths are sent out at different vertical angles; a shorter wavelength goes out toward the top of the scene and a slightly longer one goes a little lower. But the band of 1550 +/- 20 nanometers allows for millions of fractional wavelengths that the system can choose between, giving it the ability to set its own vertical resolution.

It could for instance (these numbers are imaginary) send out a beam every quarter of a nanometer in wavelength, corresponding to a beam going out every quarter of a degree vertically, and by going from the bottom to the top of its frequency range cover the top to the bottom of the scene with equally spaced beams at reasonable intervals.

But why waste a bunch of beams on the sky, say, when you know most of the action is taking place in the middle part of the scene, where the street and roads are? In that case you can send out a few high-frequency beams to check up there, then skip down to the middle frequencies, where you can then send out beams with intervals of a thousandth of a nanometer, emerging correspondingly close together to create a denser picture of that central region.

If this is making your brain hurt a little, don’t worry. Just think of Dark Side of the Moon and imagine if you could skip red, orange and purple, and send out more beams in green and blue — and because you’re only using those colors, you can send out more shades of green-blue and deep blue than before.
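If you prefer code to album covers, here is a toy sketch of the wavelength-steering idea using the article's imaginary numbers. The linear wavelength-to-angle mapping, the band edges and the field of view below are assumptions for illustration only, not Baraja's actual optics or calibration:

```python
# Toy model of wavelength steering, using the article's imaginary numbers.
# The linear mapping and the 40-degree vertical field of view are illustrative only.
BAND_LOW_NM, BAND_HIGH_NM = 1530.0, 1570.0   # 1550 +/- 20 nm
FOV_TOP_DEG, FOV_BOTTOM_DEG = 20.0, -20.0    # shorter wavelengths exit toward the top

def angle_for_wavelength(nm: float) -> float:
    """Map a wavelength within the band to a vertical beam angle."""
    frac = (nm - BAND_LOW_NM) / (BAND_HIGH_NM - BAND_LOW_NM)
    return FOV_TOP_DEG + frac * (FOV_BOTTOM_DEG - FOV_TOP_DEG)

def scan_plan():
    """Sparse beams up high (sky), dense beams through the middle of the scene."""
    sparse = [BAND_LOW_NM + i * 0.25 for i in range(8)]     # every 0.25 nm up top
    dense = [1545.0 + i * 0.001 for i in range(10_000)]     # every 0.001 nm mid-scene
    return [(nm, angle_for_wavelength(nm)) for nm in sparse + dense]

plan = scan_plan()
print(plan[0], plan[-1])  # first beam near the top of the scene, last in the dense middle band
```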

Third, the method of creating the spectrum beam guards against interference from other lidar systems. It is an emerging concern that lidar systems could inadvertently send or reflect beams into one another, producing noise and hindering normal operation. Most companies are attempting to mitigate this by some means or another, but Baraja’s method avoids the possibility altogether.

“The interference problem — they’re living with it. We solved it,” said Collarte.

The spectrum system means that for a beam to interfere with the sensor it would have to be both a perfect frequency match and come in at the precise angle at which that frequency emerges from and returns to the lens. That’s already vanishingly unlikely, but to make it astronomically so, each beam from the Baraja device is not a single pulse but a coded set of pulses that can be individually identified. The company’s core technology and secret sauce is the ability to modulate and pulse the laser millions of times per second, and it puts this to good use here.
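A toy illustration of that filtering, with an invented pulse code (Baraja's real modulation scheme is proprietary and far more sophisticated than an exact match):

```python
# Reject returns that don't carry the pulse code we transmitted.
def matches_code(received: list[int], transmitted: list[int]) -> bool:
    """Accept a return only if it reproduces the exact pulse pattern we sent."""
    return received == transmitted

our_code = [1, 0, 1, 1, 0, 0, 1, 0]   # an invented 8-pulse code
print(matches_code([1, 0, 1, 1, 0, 0, 1, 0], our_code))  # True: our own return
print(matches_code([1, 1, 0, 0, 1, 0, 1, 1], our_code))  # False: someone else's light
```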

Collarte acknowledged that competition is fierce in the lidar space, but not necessarily competition for customers. “They have not solved the autonomy problem,” he points out, “so the volumes are too small. Many are running out of money. So if you don’t differentiate, you die.” And some have.

Instead companies are competing for partners and investors, and must show that their solution is not merely a good idea technically, but that it is a sound investment and reasonable to deploy at volume. Collarte praised his investors, Sequoia and Blackbird, but also said that the company will be announcing significant partnerships soon, both in automotive and beyond.

SpinLaunch spins up a $35M round to continue building its space catapult

SpinLaunch, a company that aims to turn the launch industry on its head with a wild new concept for getting to orbit, has raised a $35M Series B to continue its quest. The team has yet to demonstrate its kinetic launch system, but this year, the team claims, will be the year that changes.

TechCrunch first reported on SpinLaunch’s ambitious plans in 2018, when the company raised its previous $35 million, which, combined with the $10M it raised prior to that and today’s round, comes to a total of $80M. With that kind of money you might actually be able to build a space catapult.

The basic idea behind SpinLaunch’s approach is to get a craft out of the atmosphere using a “rotational acceleration method” that brings a craft to escape velocity without any rockets. While the company has been extremely tight-lipped about the details, one imagines a sort of giant rail gun curled into a spiral, from which payloads will emerge into the atmosphere at several thousand miles per hour — weather be damned.

Naturally there is no shortage of objections to this method, the most obvious of which is that going from an evacuated tube into the atmosphere at those speeds might be like firing the payload into a brick wall. It’s doubtful that SpinLaunch would have proceeded this far if it did not have a mitigation for this (such as the needle-like appearance of the concept craft) and other potential problems, but the secretive company has revealed little.

The time for broader publicity may soon be at hand, however: the funds will be used to build out its new headquarters and R&D facility in Long Beach, but also to complete its flight test facility at Spaceport America in New Mexico.

“Later this year, we aim to change the history of space launch with the completion of our first flight test mass accelerator at Spaceport America,” said founder and CEO Jonathan Yaney in a press release announcing the funding.

Lowering the cost of launch has been the focus of some of the most successful space startups out there, and SpinLaunch aims to leapfrog their cost savings by offering orbital access for under $500,000. First commercial launch is targeted for 2022, assuming the upcoming tests go well.

The funding round was led by previous investors Airbus Ventures, GV, and KPCB, as well as Catapult Ventures, Lauder Partners, John Doerr and Byers Family.

‘PigeonBot’ brings flying robots closer to real birds

Try as they might, even the most advanced roboticists on Earth struggle to recreate the effortless elegance and efficiency with which birds fly through the air. The “PigeonBot” from Stanford researchers takes a step towards changing that by investigating and demonstrating the unique qualities of feathered flight.

On a superficial level, PigeonBot looks a bit, shall we say, like a school project. But a lot of thought went into this rather haphazard-looking contraption. It turns out the way birds fly is really not very well understood, as the relationship between the dynamic wing shape and the positions of individual feathers is super complex.

Mechanical engineering professor David Lentink challenged some of his graduate students to “dissect the biomechanics of the avian wing morphing mechanism and embody these insights in a morphing biohybrid robot that features real flight feathers,” taking as their model the common pigeon — the resilience of which Lentink admires.

As he explains in an interview with the journal Science:

The first Ph.D. student, Amanda Stowers, analyzed the skeletal motion and determined we only needed to emulate the wrist and finger motion in our robot to actuate all 20 primary and 20 secondary flight feathers. The second student, Laura Matloff, uncovered how the feathers moved via a simple linear response to skeletal movement. The robotic insight here is that a bird wing is a gigantic underactuated system in which a bird doesn’t have to constantly actuate each feather individually. Instead, all the feathers follow wrist and finger motion automatically via the elastic ligament that connects the feathers to the skeleton. It’s an ingenious system that greatly simplifies feather position control.

In addition to finding that the individual control of feathers is more automatic than manual, the team found that tiny microstructures on the feathers form a sort of one-way Velcro-type material that keeps them forming a continuous surface rather than a bunch of disconnected ones. These and other findings were published in Science, while the robot itself, devised by “the third student,” Eric Chang, is described in Science Robotics.

Using 40 actual pigeon feathers and a super-light frame, Chang and the team made a simple flying machine that doesn’t derive lift from its feathers — it has a propeller on the front — but uses them to steer and maneuver using the same type of flexion and morphing as the birds themselves do when gliding.

Studying the biology of the wing itself, then observing and adjusting the PigeonBot systems, the team found that the bird (and bot) used its “wrist” when the wing was partly retracted, and “fingers” when extended, to control flight. But it’s done in a highly elegant fashion that minimizes the thought and the mechanisms required.

PigeonBot’s wing. You can see that the feathers are joined by elastic connections so moving one moves others.

It’s the kind of thing that could inform improved wing design for aircraft, which currently rely in many ways on principles established more than a century ago. Passenger jets, of course, don’t need to dive or roll on short notice, but drones and other small craft might find the ability extremely useful.

“The underactuated morphing wing principles presented here may inspire more economical and simpler morphing wing designs for aircraft and robots with more degrees of freedom than previously considered,” write the researchers in the Science Robotics paper.

Up next for the team is observation of more bird species to see if these techniques are shared with others. Lentink is working on a tail to match the wings, and separately on a new bio-inspired robot inspired by falcons, which could potentially have legs and claws as well. “I have many ideas,” he admitted.

Loliware’s kelp-based plastic alternatives snag $6M seed round from eco-conscious investors

The last few years have seen many cities ban plastic bags, plastic straws and other common forms of waste, giving environmentally conscious alternatives a huge boost — among them Loliware, purveyor of fine disposable goods created from kelp. Huge demand and smart sourcing have attracted a big first funding round.

I covered Loliware early on when it was one of the first companies to be invested in by the Ocean Solutions Accelerator, a program started in 2017 by the nonprofit Sustainable Ocean Alliance. Founder Chelsea “Sea” Briganti told me about the new funding during the SOA’s strange yet quite successful “Accelerator at Sea” program late last year.

The company makes straws primarily, with other products planned, out of kelp matter. Kelp, if you’re not familiar, is a common type of aquatic algae (also called seaweed) that can grow quite large and is known for its robustness. It also grows in vast, vast quantities in many coastal locations, creating “kelp forests” that sustain entire ecosystems. Intelligent stewardship of these fast-growing kelp stocks could make them a significantly better source than corn or paper, which are currently used to create most biodegradable straws.

A proprietary process turns the kelp into straws that feel plastic-like but degrade simply (and not in your hot drink — it can stand considerably more exposure than corn and paper-based straws). Naturally the taste, desirable in some circumstances but not when drinking a seltzer, is also removed.

It took a lot of R&D and fine-tuning, Briganti told me:

“None of this has ever been done before. We led all development from material technology to new-to-world engineering of machinery and manufacturing practices. This way we ensure all aspects of the product’s development are truly scalable.”

They’ve gone through more than a thousand prototypes and are continuing to iterate as advances make possible things like higher flexibility or different shapes.

“Ultimately our material is a massive departure from the paradigms with which other companies are approaching the development of biodegradable materials,” she said. “They start with a problematic, last-forever, fossil fuel-derived paradigm and try to make it not so bad — this is step-change development and too slow and frumpy to truly make an impact.”

Of course it doesn’t matter how good your process is if no one is buying it, a fact that plagues many ethics-first operations, but in fact demand has grown so fast that Loliware’s biggest challenge has been scaling to meet it. The company has gone from shipping a few million straws to a hundred million in recent years, with a projected billion straws shipping in 2020.

“It takes us about 12 months to get to full automation [from the lab],” she said. “Once we get to full automation, we license the tech to a strategic plastic or paper manufacturer. Meaning, we do not manufacture billions of straws, or anything, in-house.”

It makes sense, of course, just as contracting out your PCB or plastic mold or what have you. Briganti wanted to have global impact, and that requires taking advantage of global infrastructure that’s already there.

Lastly, the consideration of a sustainable ecosystem was always important to Briganti, as the whole company is founded on the idea of reducing waste and using fundamentally ethical processes.

“Our products utilize a super-sustainable supply of seaweed, a supply that is overseen and regulated by local governments,” Briganti said. “In 2020, Loliware will launch the first-ever Algae Sustainability Council (ASC), which allows us to be at the helm of the design of these new global seaweed supply chain systems as well as establishing the oversight, ensuring sustainable practices and equitability. We are also pioneering what we have coined the ‘Zero Waste Circular Extraction Methodology,’ which will be a new paradigm in seaweed processing, utilizing every component of the biomass as it suggests.”

The $5.9 million “super seed” round has many investors, including several who were on board the ship in Alaska for the Accelerator at Sea this past October (as SOA Seabird Ventures). The CEO of Blue Bottle Coffee has invested, as have New York Ventures, Magic Hour, For Good VC, Hatzimemos/Libby, Geekdom Fund, HUmanCo VC, CityRock and Closed Loop Partners.

The money will be used for scaling and further R&D; Loliware plans to launch several new straw types (like a bent straw for juice boxes), a cup and a new utensil. 2020 may be the year you start seeing the company’s straws in your favorite coffee shop rather than just at a few early adopters here and there. You can keep track of where they can be found at the company’s website.

Apple buys edge-based AI startup Xnor.ai for a reported $200M

Xnor.ai, spun off in 2017 from the nonprofit Allen Institute for AI (AI2), has been acquired by Apple for about $200 million. A source close to the company corroborated a report this morning from GeekWire to that effect.

Apple confirmed the reports with its standard statement for this sort of quiet acquisition: “Apple buys smaller technology companies from time to time and we generally do not discuss our purpose or plans.” (I’ve asked for clarification just in case.)

Xnor.ai began as a process for making machine learning algorithms highly efficient — so efficient that they could run on even the lowest tier of hardware out there, things like embedded electronics in security cameras that use only a modicum of power. Yet using Xnor’s algorithms they could accomplish tasks like object recognition, which in other circumstances might require a powerful processor or connection to the cloud.

CEO Ali Farhadi and his founding team put the company together at AI2 and spun it out just before the organization formally launched its incubator program. It raised $2.7M in early 2017 and $12M in 2018, both rounds led by Seattle’s Madrona Venture Group, and has steadily grown its local operations and areas of business.

The $200M acquisition price is only approximate, the source indicated, but even if the final number were half that, it would be a big return for Madrona and other investors.

The company will likely move to Apple’s Seattle offices; GeekWire, visiting the Xnor.ai offices (in inclement weather, no less), reported that a move was clearly underway. AI2 confirmed that Farhadi is no longer working there, but he will retain his faculty position at the University of Washington.

An acquisition by Apple makes perfect sense when one thinks of how that company has been directing its efforts toward edge computing. With a chip dedicated to executing machine learning workflows in a variety of situations, Apple clearly intends for its devices to operate independently of the cloud for such tasks as facial recognition, natural language processing and augmented reality. It’s as much for performance as for privacy purposes.

Its camera software especially makes extensive use of machine learning algorithms for both capturing and processing images, a compute-heavy task that could potentially be made much lighter with the inclusion of Xnor’s economizing techniques. The future of photography is code, after all — so the more of it you can execute, and the less time and power it takes to do so, the better.

 

It could also indicate new forays in the smart home, toward which Apple has made some tentative steps with the HomePod. But Xnor’s technology is highly adaptable and, as such, rather difficult to predict as far as what it enables for such a vast company as Apple.

Kids with lazy eye can be treated just by letting them watch TV on this special screen

Amblyopia, commonly called lazy eye, is a medical condition that adversely affects the eyesight of millions, but if caught early can be cured altogether — unfortunately this usually means months of wearing an eyepatch. NovaSight claims successful treatment with nothing more than an hour a day in front of its special display.

The condition arises when the two eyes aren’t synced up in their movements. Normally both eyes focus the detail-oriented fovea part of the retina on whatever object the person is attending to; in those with amblyopia, one eye doesn’t target the fovea correctly, so the eyes don’t converge properly and vision suffers. Left untreated, it can lead to serious vision loss.

It can be detected early on in children, and treatment can be as simple as covering the good eye with a patch for most of the day, which forces the other eye to adjust and align itself properly. The problem, of course, is that this is uncomfortable and embarrassing for the kid, and only using one eye isn’t ideal for playing schoolyard games and other everyday activities.

And you look cool doing it!

NovaSight’s innovation with CureSight is to let this alignment process happen without the eyepatch, instead selectively blurring content the child watches so that the affected eye has to do the work while the other takes a rest.

It accomplishes this with the same technology that, ironically, gave many of us double vision back in the early days of 3D: glasses with blue and red lenses.

Blue-red stereoscopy presents two slightly different versions of the same image, one tinted red and one tinted blue. Normally it would be used with slightly different parallax to produce a binocular 3D image — that’s what many of us saw in theaters or amusement park rides.

In this case, however, one of the two tinted images just has a blurry circle right where the kid is looking. The screen uses a built-in Tobii eye-tracking sensor so it knows where the circle should be; I got to test it out briefly, and the circle quickly caught up with my gaze. This means the other eye, the one affected by the condition but also the only one with access to the details of the image, has to be relied on to aim where the kid needs it to.
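Here is a minimal sketch of that rendering trick. The gaze source, blur radius and which channel gets the blurred disc are assumptions for illustration, not NovaSight's implementation; it uses the Pillow imaging library:

```python
from PIL import Image, ImageDraw, ImageFilter

def blur_circle_at_gaze(frame: Image.Image, gaze_xy: tuple[int, int], radius: int = 60) -> Image.Image:
    """Return a copy of the frame with a blurred disc centered on the gaze point."""
    blurred = frame.filter(ImageFilter.GaussianBlur(8))
    mask = Image.new("L", frame.size, 0)
    x, y = gaze_xy
    ImageDraw.Draw(mask).ellipse([x - radius, y - radius, x + radius, y + radius], fill=255)
    out = frame.copy()
    out.paste(blurred, (0, 0), mask)  # hard-edged disc; a real system would likely feather it
    return out

# Hypothetical usage: show this frame to the stronger eye; the unmodified frame goes to the weaker one.
# frame_for_strong_eye = blur_circle_at_gaze(frame, eye_tracker.current_gaze())  # eye_tracker is invented
```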

The best part is that there’s no special treatment regimen or testing — kids can literally just watch YouTube or a movie using the special setup, and they get better, NovaSight claims. And it can be done at home on the kid’s schedule — always a plus.

Graphs from NovaSight website.

The company has already done some limited clinical trials that showed “significant improvement” over a 12-week period. Whether it can be relied on to completely cure the condition or if it should be paired with other established treatments will come out in further trials the company has planned.

In the meantime, however, it’s nice to see a technology like 3D displays applied to improving vision rather than promoting bad films. NovaSight has been developing and promoting its tech over the last year; it also has a product that helps diagnose vision problems using a similar application of 3D display tech. You can learn more or request additional info at its website.