Don’t just stir; Stircle

Although I do my best to minimize the trash produced by my lifestyle (blog posts notwithstanding), one thing I can’t really control, at least without carrying a spoon on my person at all times, is the disposable stick needed to stir my coffee. That could all change with the Stircle, a little platform that spins your drink around to mix it.

Now, of course this is ridiculous. And there are other things to worry about. But honestly, the scale of waste here is pretty amazing. Design house Amron Experimental says that 400 million stir sticks are used every day, and I have no reason to doubt that. My native Seattle probably accounts for a quarter of that.

So you need to get the sugar (or agave nectar) and cream (or almond milk) mixed in your iced americano. Instead of reaching for a stick and stirring vigorously for 10 or 15 seconds, you could instead place your cup in the Stircle (first noticed by New Atlas and a few other design blogs), which would presumably be built into the fixins table at your coffee shop.

Around and around and around she goes, where she stops, nobody… oh. There.

Once you put your cup on the Stircle, it starts spinning — first one way, then the other, and so on, agitating your drink and achieving the goal of an evenly mixed beverage without using a wood or plastic stirrer. It’s electric, but I can imagine one being powered by a lever or button that compresses a spring. That would make it even greener.

The video shows that it probably gets that sugar and other low-lying mixers up into the upper strata of the drink, so I think we’re set there. And it looks as though it will accommodate a lot of different sizes, including reusable tumblers. It clearly needs a cup with a lid, since otherwise the circling liquid will fly out in every direction, which means you have to be taking your coffee to go. That leaves out pretty much every time I go out for coffee in my neighborhood, where it’s served (to stay) in a mug or tall glass.

But a solution doesn’t have to fix everything to be clever or useful. This would be great at an airport, for instance, where I imagine every order is to go. Maybe they’ll put it in a bar, too, for extra smooth stirring of martinis.

Actually, I know that people in labs use automatic magnetic stirrers for their coffee. This would be a way to get the same effect without appropriating lab property. Those things are pretty cool too, though.

You might remember Amron from one of their many previous clever designs; I happen to remember the Keybrid and Split Ring Key, both of which I used for a while. I’ll be honest, I don’t expect to see a Stircle in my neighborhood cafe any time soon, but I sure hope they show up in Starbucks stores around the world. We’re going to run out of those stirrer things sooner or later.

Technique to beam HD video with 99 percent less power could sharpen the eyes of smart homes

Everyone seems to be insisting on installing cameras all over their homes these days, which seems incongruous with the ongoing privacy crisis — but that’s a post for another time. Today, we’re talking about enabling those cameras to send high-definition video signals wirelessly without killing their little batteries. A new technique makes beaming video out more than 99 percent more efficient, possibly making batteries unnecessary altogether.

Cameras found in smart homes or wearables need to transmit HD video, but it takes a lot of power to process that video and then transmit the encoded data over Wi-Fi. Small devices leave little room for batteries, and they’ll have to be recharged frequently if they’re constantly streaming. Who’s got time for that?

The idea behind this new system, created by a University of Washington team led by prolific researcher Shyam Gollakota, isn’t fundamentally different from some others that are out there right now. Devices with low data rates, like a digital thermometer or motion sensor, can use something called backscatter to send a low-power signal consisting of a couple of bytes.

Backscatter is a way of sending a signal that requires very little power, because the device transmitting the data isn’t the one generating the signal. A signal is sent out from one source, say a router or phone, and another antenna essentially reflects that signal, but modifies it. By having the reflection blink on and off, for instance, you could indicate 1s and 0s.

UW’s system ties the camera’s pixel output directly to the antenna, so the brightness of a pixel directly correlates to the length of the reflected pulse. A short pulse means a dark pixel, a longer one means a lighter one, and the longest length indicates white.
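To make that mapping concrete, here is a minimal sketch in Python. The microsecond range and 8-bit brightness scale are assumptions for illustration; the actual device does this with analog timing circuitry, not software:

```python
def pixel_to_pulse_length(brightness, max_len_us=50):
    """Map an 8-bit brightness (0 = dark, 255 = white) to a reflected
    pulse duration in microseconds: short pulse = dark, long = light.
    The 1-50 microsecond range is an illustrative assumption."""
    return 1 + (brightness / 255) * (max_len_us - 1)

# One row of pixel brightnesses becomes a train of pulse durations.
row = [0, 64, 128, 255]
pulses = [pixel_to_pulse_length(p) for p in row]
```

The key property is that brighter pixels always produce longer pulses, so the receiver can recover brightness by timing each reflection.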

Some clever manipulation of the video data by the team reduced the number of pulses necessary to send a full video frame, from sharing some data between neighboring pixels to using a “zigzag” scan pattern (left to right, then right to left). To get color, each pixel needs to have its color channels sent in succession, but this too can be optimized.
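The zigzag traversal itself is easy to sketch. This toy Python version (the frame values are a made-up example) shows how alternating row direction keeps consecutive pixels spatially adjacent, which is what makes sharing data between neighbors effective:

```python
def zigzag_scan(frame):
    """Return the pixels of a 2D frame (a list of rows) in zigzag order:
    even rows left to right, odd rows right to left."""
    out = []
    for i, row in enumerate(frame):
        out.extend(row if i % 2 == 0 else reversed(row))
    return out

frame = [[1, 2, 3],
         [4, 5, 6]]
print(zigzag_scan(frame))  # → [1, 2, 3, 6, 5, 4]
```

Note how the scan ends row one at pixel 3 and begins row two at the pixel directly beneath it, rather than jumping back to the left edge.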

Assembly and rendering of the video is accomplished on the receiving end, for example on a phone or monitor, where power is more plentiful.

In the end, a full-color HD signal at 60FPS can be sent with less than a watt of power, and a more modest but still very useful signal — say, 720p at 10FPS — can be sent for under 80 microwatts. That’s a huge reduction in power draw, mainly achieved by eliminating the entire analog-to-digital converter and on-chip compression. At those levels, you can essentially pull all the power you need straight out of the air.

They put together a demonstration device with off-the-shelf components, though without custom chips it won’t reach those microwatt power levels; still, the technique works as described. The prototype helped them determine what type of sensor and chip package would be necessary in a dedicated device.

A frame sent during one of the tests. This transmission was going at about 10FPS.

Of course, it would be a bad idea to just blast video frames into the ether without any protection; luckily, the way the data is coded and transmitted can easily be modified to be meaningless to an observer. Essentially you’d just add an interfering signal known to both devices before transmission, and the receiver can subtract it.
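That add-then-subtract idea can be sketched as simple additive masking with a shared pseudorandom sequence. The seeded keystream below is an illustrative assumption, not the paper’s actual scheme:

```python
import random

def keystream(seed, n):
    """Pseudorandom byte sequence reproducible by anyone with the seed."""
    rng = random.Random(seed)
    return [rng.randrange(256) for _ in range(n)]

def mask(pixels, seed):
    """Sender: add the shared keystream to each value (mod 256)."""
    return [(p + k) % 256 for p, k in zip(pixels, keystream(seed, len(pixels)))]

def unmask(masked, seed):
    """Receiver: subtract the same keystream to recover the pixels."""
    return [(m - k) % 256 for m, k in zip(masked, keystream(seed, len(masked)))]

pixels = [12, 200, 37, 255]
assert unmask(mask(pixels, seed=42), seed=42) == pixels
```

An observer without the seed sees only the masked values, which look like noise; the intended receiver recovers the frame exactly.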

Video is the first application the team thought of, but there’s no reason their technique for efficient, quick backscatter transmission couldn’t be used for non-video data.

The tech is already licensed to Jeeva Wireless, a startup founded by UW researchers (including Gollakota) a while back that’s already working on commercializing another low-power wireless device. You can read the details about the new system in their paper, presented last week at the Symposium on Networked Systems Design and Implementation.

The Skagen Falster is a high fashion Android wearable

Skagen is a well-known maker of thin and uniquely Danish watches. Founded in 1989, the company is now part of the Fossil group and, as such, has begun dabbling in both analog smartwatches with the Hagen and now Android Wear with the Falster. The Falster is unique in that it stuffs all of the power of a standard Android Wear device into a watch that mimics the chromed aesthetic of Skagen’s austere design while offering just enough features to make you a fashionable smartwatch wearer.

The Falster, which costs $275 and is available now, has a fully round digital OLED face, which means you can read the time at all times. When the watch wakes up you see an ultra-bright white-on-black time-telling color scheme, and you can then tap the crown to jump into the various features, including Google Fit and the always clever Translate feature that lets you record a sentence and then show it to the person in front of you.

You can buy it with a leather or metal band and the mesh steel model costs $20 extra.

Sadly, in order to stuff the electronics into such a small case, Skagen did away with GPS, LTE connectivity and even a heart-rate monitor. In other words, if you were expecting a workout companion, the Falster isn’t the Android you’re looking for. However, if you’re looking for a bare-bones fashion smartwatch, Skagen ticks all the boxes.


What you do get from the Falster, however, is a low-cost, high-style Android Wear watch with most of the trimmings. I’ve worn this watch off and on for a few weeks now and, although I definitely miss the heart-rate monitor for workouts, the fact that this thing looks and acts like a normal watch 99% of the time makes it quite interesting. If obvious brand recognition, nay ostentation, is your goal, the Apple Watch or any of the Samsung Gear line are more your style. This watch, made by a company famous for its Danish understatement, offers the opposite of that.

Skagen offers a few very basic watch faces with the Skagen branding at various points on the dial. I particularly like the list face, which includes world time or temperature in various spots around the world, offering you an at-a-glance view of time zones. Like most Android Wear systems you can change the display by pressing and holding on the face.

It lasts about a day on one charge although busy days may run down the battery sooner as notifications flood the screen. The notification system – essentially a little icon that appears over the watch face – sometimes fails and instead shows a baffling grey square. This is the single annoyance I noticed, UI-wise, when it came to the Falster. It works with both Android smartphones and iOS.

What this watch boils down to is an improved fitness tracker and notification system. If you’re wearing, say, a Fitbit, something like the Skagen Falster offers a superior experience in a very chic package. Because the watch is fairly compact (at 42mm I won’t say it’s small, but it would work on a thinner wrist) it takes away a lot of the bulk of other smartwatches and, more important, doesn’t look like a smartwatch. Those of us who don’t want to look like we’re wearing robotic egg sacs on our wrists will enjoy that aspect of Skagen’s effort, even without all the trimmings we expect from a modern smartwatch.

Skagen, like so many other watch manufacturers, decided that if it couldn’t beat the digital revolution it would join it. The result is the Falster and, to a lesser degree, their analog collections. Whether or not traditional watchmakers will survive the 21st century is still up in the air but, as evidenced by this handsome and well-made watch, they’re at least giving it the old Danish try.

Waymo reportedly applies to put autonomous cars on California roads with no safety drivers

Waymo has become the second company to apply for the newly-available permit to deploy autonomous vehicles without safety drivers on some California roads, the San Francisco Chronicle reports. It would be putting its cars — well, minivans — on streets around Mountain View, where it already has an abundance of data.

The company already has driverless cars in play over in Phoenix, as it showed in a few promotional videos last month. So this isn’t the first public demonstration of its confidence.

California only just made it possible to grant permits allowing autonomous vehicles without safety drivers on April 2; one other company has applied for it in addition to Waymo, but it’s unclear which. The new permit type also allows for vehicles lacking any kind of traditional manual controls, but for now the company is sticking with its modified Chrysler Pacificas. Hey, they’re practical.

The recent fatal collision of an Uber self-driving car with a pedestrian, plus another fatality in a Tesla operating in semi-autonomous mode, make this something of an awkward time to introduce vehicles to the road minus safety drivers. Of course, it must be said that both of those cars had people behind the wheel at the time of their crashes.

Assuming the permit is granted, Waymo’s vehicles will be limited to the Mountain View area, which makes sense — the company has been operating there essentially since its genesis as a research project within Google. So there should be no shortage of detail in the data, and the local authorities will already know whom to contact should any issues arise: accidents, permit problems and so on.

No details yet on what exactly the cars will be doing, or whether you’ll be able to ride in one. Be patient.

DroneShield is keeping hostile UAVs away from NASCAR events

If you were hoping to get some sweet drone footage of a NASCAR race in progress, you may find your quadcopter grounded unceremoniously by a mysterious force: DroneShield is bringing its anti-drone tech to NASCAR events at the Texas Motor Speedway.

The company makes a handful of products, all aimed at detecting and safely intercepting drones that are flying where they shouldn’t. That’s a growing problem, of course, and not just at airports or Area 51. A stray drone at a major sporting event could fall and interrupt the game, or strike someone, or at a race it may even cause a major accident.

Most recently it introduced a new version of its handheld “DroneGun,” which scrambles the UAV’s signal so that it has no choice but to safely put itself down, as these devices are generally programmed to do. You can’t buy one — technically, they’re illegal — but the police sure can.

Recently DroneShield’s tech was deployed at the Commonwealth Games in Brisbane and at the Olympics in PyeongChang, and now the company has announced that it was tapped by a number of Texas authorities for the protection of stock car races.

DroneShield’s systems in place in PyeongChang

“We are proud to be able to assist a high-profile event like this,” said Oleg Vornik, DroneShield’s CEO, in an email announcing the news. “We also believe that this is significant for DroneShield in that this is the first known live operational use of all three of our key products – DroneSentinel, DroneSentry and DroneGun – by U.S. law enforcement.”

It’s a big get for a company that clearly saw an opportunity in the growing drone market (in combating it, really) and executed well on it.

Who’s a good AI? Dog-based data creates a canine machine learning system

We’ve trained machine learning systems to identify objects, navigate streets and recognize facial expressions, but as difficult as those tasks may be, they don’t even touch the level of sophistication required to simulate, for example, a dog. Well, this project aims to do just that — in a very limited way, of course. By observing the behavior of A Very Good Girl, this AI learned the rudiments of how to act like a dog.

It’s a collaboration between the University of Washington and the Allen Institute for AI, and the resulting paper will be presented at CVPR in June.

Why do this? Well, although much work has been done to simulate the sub-tasks of perception like identifying an object and picking it up, little has been done in terms of “understanding visual data to the extent that an agent can take actions and perform tasks in the visual world.” In other words, act not as the eye, but as the thing controlling the eye.

And why dogs? Because they’re intelligent agents of sufficient complexity, “yet their goals and motivations are often unknown a priori.” In other words, dogs are clearly smart, but we have no idea what they’re thinking.

As an initial foray into this line of research, the team wanted to see if by monitoring the dog closely and mapping its movements and actions to the environment it sees, they could create a system that accurately predicted those movements.

In order to do so, they loaded up a Malamute named Kelp M. Redmon with a basic suite of sensors. There’s a GoPro camera on Kelp’s head, six inertial measurement units (on the legs, tail and trunk) to tell where everything is, a microphone and an Arduino that tied the data together.

They recorded many hours of activities — walking in various environments, fetching things, playing at a dog park, eating — syncing the dog’s movements to what it saw. The result is the Dataset of Ego-Centric Actions in a Dog Environment, or DECADE, which they used to train a new AI agent.

This agent, given certain sensory input — say a view of a room or street, or a ball flying past it — was to predict what a dog would do in that situation. Not to any serious level of detail, of course — but even just figuring out how to move its body and to where is a pretty major task.

“It learns how to move the joints to walk, learns how to avoid obstacles when walking or running,” explained Hessam Bagherinezhad, one of the researchers, in an email. “It learns to run for the squirrels, follow the owner, track the flying dog toys (when playing fetch). These are some of the basic AI tasks in both computer vision and robotics that we’ve been trying to solve by collecting separate data for each task (e.g. motion planning, walkable surface, object detection, object tracking, person recognition).”

That can produce some rather complex data: For example, the dog model must know, just as the dog itself does, where it can walk when it needs to get from here to there. It can’t walk on trees, or cars, or (depending on the house) couches. So the model learns that as well, and this can be deployed separately as a computer vision model for finding out where a pet (or small legged robot) can get to in a given image.

This was just an initial experiment, the researchers say, with success but limited results. Others may consider bringing in more senses (smell is an obvious one) or seeing how a model produced from one dog (or many) generalizes to other dogs. They conclude: “We hope this work paves the way towards better understanding of visual intelligence and of the other intelligent beings that inhabit our world.”

Google Home and Google Home Mini smart speakers go on sale in India

Google’s two smart speaker products — the Google Home and Google Home Mini — and its Pixel 2 and Pixel 2 XL smartphones are now available in India following a launch event in the country.

The devices are priced at Rs 9,999 ($154) and Rs 4,499 ($69), respectively, and Google confirmed that they are available for purchase online via Flipkart and offline through over 750 retail stores, including Reliance Digital, Croma and Bajaj Electronics.

The Google smart speakers don’t cater to India’s multitude of local languages at this point, but the U.S. company said that they do understand distinctly Indian voices and “will respond to you with uniquely Indian contexts,” such as answering questions about local sports, cooking or TV shows.

For a limited time, Google is incentivizing early customers, who will get six months of Google Play Music alongside offers for local streaming services Saavn and Gaana when they buy the Home or Home Mini.

Google Home and Home Mini were first announced at Google I/O in 2016. The company said recently that it has sold “tens of millions” of speakers, with more than seven million sales between October 2017 and January 2018.

Still, it’s been a long time coming to India, which has allowed others to get into the market first. Amazon, which is pouring considerable resources into its India-based business to battle Flipkart, brought its rival Echo smart devices to India last October.

Conserve the Sound is an archive of noises from old tape players, projectors and other dying tech

All of us grew up around tech different from what we have today, and many of us look back on those devices with fondness. But can you recall the exact sound your first Casio keyboard made, or the cadence of a rotary phone’s clicks? Conserve the Sound aims to, well, conserve the sound of gadgets like these so that future generations will know what it sounded like to put a cartridge in the NES.

It’s actually quite an old project at this point, having been funded first in 2013, but its collection has grown to a considerable size. The money came from German art institution Film & Medienstiftung NRW; the site was created (and is maintained) by creative house Chunderksen.

The whole thing is suitably minimal, much like an actual museum: You find objects either by browsing randomly or by finding a corresponding tag, and are presented with some straightforward imagery and a player loaded with the carefully captured sound of the device being operated.

Though the items themselves are banal, listening to these sounds of a bygone age is strangely addictive. They trigger memories or curiosity — was my Nintendo that squeaky? Didn’t my rotary phone click more? What kind was it anyway? I wonder if they have my old boombox… oh! A View-Master!

The collection has grown over the years and continues to grow; it now includes interviews with experts in various subjects on the importance of saving these sounds. You can even submit your own, if you like. “We welcome suggestions in general, sound suggestions, stories, anecdotes and of course collaborations,” write the creators.

I for one would love to revisit all the different modems and sounds I grew up using: 2400, 9600, 14.4, 28.8, all the way up to 56.6. Not exactly pleasant noises, admittedly, but I anticipate they will bring back a flood of memories, Proust-style, of BBSes, hours-long download times and pirated screen savers.

Massterly aims to be the first full-service autonomous marine shipping company

Logistics may not be the most exciting application of autonomous vehicles, but it’s definitely one of the most important. And the marine shipping industry — one of the oldest industries in the world, as you can imagine — is ready for it. Or at least two major Norwegian shipping companies are: they’re building an autonomous shipping venture called Massterly from the ground up.

“Massterly” isn’t just a pun on mass; “Maritime Autonomous Surface Ship” is the term Wilhelmsen and Kongsberg coined to describe the self-captaining boats that will ply the seas of tomorrow.

These companies, with “a combined 360 years of experience” as their video puts it, are trying to get the jump on the next phase of shipping, starting with creating the world’s first fully electric and autonomous container ship, the Yara Birkeland. It’s a modest vessel by shipping standards — 250 feet long and capable of carrying 120 containers, according to the concept — but it will be capable of loading, navigating and unloading without a crew.

(One assumes there will be some people on board or nearby to intervene if anything goes wrong, of course. Why else would there be railings up front?)

Each ship will have major radar and lidar units, visible-light and IR cameras, satellite connectivity and so on.

Control centers will be on land, where the ships will be administered much like air traffic, and ships can be taken over for manual intervention if necessary.

At first there will be limited trials, naturally: the Yara Birkeland will stay within 12 nautical miles of the Norwegian coast, shuttling between Larvik, Brevik, and Herøya. It’ll only be going 6 knots — so don’t expect it to make any overnight deliveries.

“As a world-leading maritime nation, Norway has taken a position at the forefront in developing autonomous ships,” said Wilhelmsen group CEO Thomas Wilhelmsen in a press release. “We take the next step on this journey by establishing infrastructure and services to design and operate vessels, as well as advanced logistics solutions associated with maritime autonomous operations. Massterly will reduce costs at all levels and be applicable to all companies that have a transport need.”

The Yara Birkeland is expected to be seaworthy by 2020, though Massterly should be operating as a company by the end of the year.

Under a millimeter wide and powered by light, these tiny cameras could hide almost anywhere

As if there weren’t already cameras enough in this world, researchers created a new type that is both microscopic and self-powered, making it possible to embed just about anywhere and have it work perpetually. It’s undoubtedly cool technology, but it’s probably also going to cause a spike in tinfoil sales.

Engineers have previously investigated the possibility of having a camera sensor power itself with the same light that falls on it. After all, it’s basically just two different functions of a photovoltaic cell — one stores the energy that falls on it while the other records how much energy fell on it.

The problem is that if you have a cell doing one thing, it can’t do the other. So if you want to have a sensor of a certain size, you have to dedicate a certain amount of that real estate to collecting power, or else swap the cells rapidly between performing the two tasks.

Euisik Yoon and postdoc Sung-Yun Park at the University of Michigan came up with a solution that avoids both these problems. It turns out that photosensitive diodes aren’t totally opaque — in fact, quite a lot of light passes right through them. So putting the solar cell under the image sensor doesn’t actually deprive it of light.

That breakthrough led to the creation of this “simultaneous imaging and energy harvesting” sensor, which does what it says on the tin.

The prototype sensor they built is less than a square millimeter, and fully self-powered in sunlight. It captured images at up to 15 frames per second of pretty reasonable quality:

The Benjamin on the left is at 7 frames per second, and on the right is 15.

In the paper, the researchers point out that they could easily produce better images with a few tweaks to the sensor, and Park tells IEEE Spectrum that the power consumption of the chip is also not optimized — so it could also operate at higher framerates or lower lighting levels.

Ultimately the sensor could be essentially a nearly invisible camera that operates forever with no need for a battery or even wireless power. Sounds great!

In order for this to be a successful spy camera, of course, it needs more than just an imaging component — a storage and transmission medium are necessary for any camera to be useful. But microscopic versions of those are also in development, so putting them together is just a matter of time and effort.

The team published their work this week in the journal IEEE Electron Device Letters.