Gift Guide: Indie games for players worn out on AAA titles

2018 has been a big year for big games, with new titles from the Assassin’s Creed, Red Dead Redemption, Call of Duty, and Battlefield franchises all competing for attention… it’s enough to make a gamer want to just quit and play something a little more low-key. Here are some of the smaller, independent games we liked from this year and who they might appeal to.

Bonus: many of these can be had for less than $30, making them solid, easy gifts. They aren’t for any particular platform or in any particular order, except that I’ve been playing the heck out of Ashen for the last couple days, so it’s first.

Ashen – for “Souls” lovers

Available on: Xbox One, Windows

(To be fair, this is less of an “indie” than the others on this list, some of which were made by one person, but it’s just off the beaten path enough to qualify.)

If you’ve ever heard your loved one talk about “builds,” really hard bosses, or which helmet completes their outfit best, they probably play games of the Dark Souls type. Ashen is a new action-adventure-RPG in the same vein but with a few notable twists. It has a lovely art style, a streamlined (but still byzantine) progression system, and an interesting multiplayer style where other players drop into your game, and you drop into theirs, with no real warning or interaction. It works better than you’d think, and I’ve already had some great experiences with it.

Yoku’s Island Express – for people who like both pinball and Metroidvanias

Available on: Switch, PS4, Xbox One, Windows

Don’t be fooled by the cuteness of Yoku’s Island Express. This game is both unique and well-crafted, a fusion of (believe it or not) pinball mechanics and gradual exploration of an enormous map. It’s definitely weird, but it immediately clicks in a way you wouldn’t expect. It’s a great break from the grim environments of… well, lots of the games on this list.

Dead Cells – for action fans who won’t mind “roguelike” repetition

Available on: PS4, Xbox One, Switch, Windows, Linux, macOS

The “roguelike” genre has you traversing procedurally-generated variations on a series of levels and progressing farther by improving your own skills — and sometimes getting a couple shiny new weapons or abilities. Dead Cells takes this genre and combines it with incredibly tight side-scrolling action and platforming that never gets old even when you’re going through the sewers for the 20th time. The developers were very responsive during Early Access; the game was great when I bought it early in the year, and now it’s even better.

Below – for atmosphere fans who won’t mind “roguelike” repetition

Available on: Xbox One, Windows

In some ways, Below is the opposite of Dead Cells, though they share a bit of DNA. This game, Capy’s long-awaited follow-up to Superbrothers: Sword & Sworcery EP, is a slow, dark, tense descent into a mysterious cave; it’s almost totally wordless and shown from a pulled-back perspective that makes things feel both twee and terrifying. The less said about the particulars of the game, the better (the player should discover them on their own), but it may be fairly noted that this is a title that requires some patience and experimentation — and yes, you’re going to die on a spike trap.

Cultist Simulator – for the curious

Available on: Windows, macOS, Linux

It’s very hard to explain Cultist Simulator. It’s an interactive story, different every time, told through cards that you draw and play, and which interact with each other in strange and wonderful ways. One card might be a place, another an action, another a person, all of which can be used, investigated, or sacrificed to other cards: ideas, drives, gods… it’s really quite amazing, even if you rarely have any idea what’s happening. But the curious and driven will derive great satisfaction from learning the way this strange, beautifully made machine works.

Return of the Obra Dinn – for the observant (and dedicated)

Available on: macOS, Windows

This game absorbed me completely for a few days earlier this year. Like the above, it’s a bit hard to explain: you’re given the task of determining the identities and fates of the entire crew of the titular ghost ship by using a magic watch to witness their last words and the moment of their death. That task, and the story it reveals as you accomplish it, grows increasingly disturbing and complex. The beautiful 1-bit art, great music and voice acting, and extremely clever construction make this game — essentially made by one person, Lucas Pope — one of my favorites of the year. But it’s only for people who don’t mind banging their head against things a bit.

Dusk – for connoisseurs of old-school shooters

Available on: Windows, Switch

If your loved one ever talks about the good old days of Quake, Half-Life, Unreal and other classic shooters, Dusk will be right up their alley. The chunky graphics are straight out of the ’90s, but the game brings a level of self-awareness and fun, not to mention some gameplay improvements, that makes it a joy to play.

CrossCode – for anyone who spent more time playing SNES Classic than AAA games this year

Available on: Windows, Linux, macOS

This crowd-funded RPG was long in the making, and it shows. It’s huge! A fusion of SNES and PSX-era pixel art, smooth but furious top-down action a la Secret of Mana, and a whole lot of skills and equipment. I’ve played nearly 20 hours so far and I’m only now starting to fill out the second branch of four skill trees; the overarching story is still just getting rolling. I told you it was huge! But it’s also fabulous.

Celeste – for the dexterous and those not inclined to anger

Available on: PS4, Xbox One, Switch, macOS, Windows, Linux

Celeste is one of those games they call “Nintendo Hard,” that elusive combination of difficulty and control that causes you to be more disappointed in yourself than in the game when you die. And you will die in Celeste — over and over. Hundreds of times. It gleefully tracks the number of deaths on each set of stages, and you should expect well into three figures. The platforming is that hard — but the game is also that good. Not only is its pixel art style cute and its environments lovingly and carefully crafted, but it tells a touching story and the dialogue is actually pretty fun.

Overcooked! 2 – for friendships strong enough to survive it

Available on: PS4, Xbox One, Switch, Windows, macOS

Much like the first Overcooked, the sequel has you and your friends attempting to navigate chaotic kitchens, hazards, and each other as you try to put together simple dishes like salads and hamburgers for never-sated patrons. The simple controls belie the emergent complexity of the gameplay, and while it can be frustrating at first, it’s immensely satisfying when you get into the zone and blast through a target number of dishes. But only do it with friends you think you can tolerate screaming and bossing each other around.

Into the Breach – for the tactically minded

Available on: Switch, Windows, macOS, Linux

The follow-up to the addictive starship simulator roguelike Faster Than Light (FTL), Into the Breach is a game of tactics taking place on tiny boards loaded with monsters and mechs — but don’t let the small size fool you. The solutions to these little tableaux require serious thinking as you position, attack, and (hopefully) repel the alien invaders. Matt says it’s “perfect for Switch.”

How Russia’s online influence campaign engaged with millions for years

Russian efforts to influence U.S. politics and sway public opinion were consistent and, in terms of engaging target audiences, largely successful, according to a report from Oxford’s Computational Propaganda Project published today. Based on data provided to Congress by Facebook, Instagram, Google, and Twitter, the study paints a portrait of the years-long campaign that’s less than flattering to the companies.

The report, which you can read here, was published today after being given to some outlets over the weekend; it summarizes the work of the Internet Research Agency, Moscow’s online influence factory and troll farm. The data cover various periods for different companies, but 2016 and 2017 showed by far the most activity.

A clearer picture

If you’ve only checked into this narrative occasionally during the last couple years, the Comprop report is a great way to get a bird’s-eye view of the whole thing, with no “we take this very seriously” palaver interrupting the facts.

If you’ve been following the story closely, the value of the report is mostly in deriving specifics and some new statistics from the data, which Oxford researchers were provided some seven months ago for analysis. The numbers, predictably, all seem to be a bit higher or more damning than those provided by the companies themselves in their voluntary reports and carefully practiced testimony.

Previous estimates have focused on the rather nebulous metric of “encountering” or “seeing” IRA content on these social platforms. This had a dual effect: it inflated the affected number — to over a hundred million on Facebook alone — but “seeing” could easily be downplayed in importance; after all, how many things do you “see” on the internet every day?

The Oxford researchers better quantify the engagement, on Facebook first, with more specific and consequential numbers. For instance, in 2016 and 2017, nearly 30 million people on Facebook actually shared Russian propaganda content, with similar numbers of likes garnered, and millions of comments generated.

Note that these aren’t ads that Russian shell companies were paying to shove into your timeline — these were pages and groups with thousands of users on board who actively engaged with and spread posts, memes, and disinformation on captive news sites linked to by the propaganda accounts.

The content itself was, of course, carefully curated to touch on a number of divisive issues: immigration, gun control, race relations, and so on. Many different groups (e.g. black Americans, conservatives, Muslims, LGBT communities) were targeted, and all generated significant engagement, as the report’s breakdown of the stats shows.

Although the targeted communities were surprisingly diverse, the intent was highly focused: stoke partisan divisions, suppress left-leaning voters, and activate right-leaning ones.

Black voters in particular were a popular target across all platforms, and a great deal of content was posted both to keep racial tensions high and to interfere with their actual voting. Memes were posted suggesting followers withhold their votes, or giving deliberately incorrect instructions on how to vote. These efforts were among the most numerous and popular of the IRA’s campaign; it’s difficult to judge their effectiveness, but they certainly had reach.

Examples of posts targeting black Americans.

In a statement, Facebook said that it was cooperating with officials and that “Congress and the intelligence community are best placed to use the information we and others provide to determine the political motivations of actors like the Internet Research Agency.” It also noted that it has “made progress in helping prevent interference on our platforms during elections, strengthened our policies against voter suppression ahead of the 2018 midterms, and funded independent research on the impact of social media on democracy.”

Instagram on the rise

Based on the narrative thus far, one might expect that Facebook — being the focus for much of it — was the biggest platform for this propaganda, and that it would have peaked around the 2016 election, when the evident goal of helping Donald Trump get elected had been accomplished.

In fact, Instagram was receiving as much content as Facebook, or more, and it was being engaged with on a similar scale. Previous reports disclosed that around 120,000 IRA-related posts on Instagram had reached several million people in the run-up to the election. The Oxford researchers conclude, however, that 40 accounts received in total some 185 million likes and 4 million comments during the period covered by the data (2015-2017).

A partial explanation for these rather high numbers may be that, also counter to the most obvious narrative, IRA posting in fact increased following the election — for all platforms, but particularly on Instagram.

IRA-related Instagram posts jumped from an average of 2,611 per month in 2016 to 5,956 per month in 2017, more than double; note that these numbers don’t match the totals above exactly because the time periods differ slightly.

Twitter posts, while extremely numerous, held quite steady at just under 60,000 per month, totaling around 73 million engagements over the period studied. To be perfectly frank, this kind of voluminous bot and sock puppet activity is so commonplace on Twitter, and the company seems to have done so little to thwart it, that it hardly bears mentioning. But it was certainly there, and it often reused existing botnets that had previously chimed in on politics elsewhere and in other languages.

In a statement, Twitter said that it has “made significant strides since 2016 to counter manipulation of our service, including our release of additional data in October related to previously disclosed activities to enable further independent academic research and investigation.”

Google, too, is somewhat hard to find in the report, though not necessarily because it has a handle on Russian influence on its platforms. Oxford’s researchers complain that Google and YouTube were not just stingy with their data, but appear to have actively attempted to stymie analysis:

Google chose to supply the Senate committee with data in a non-machine-readable format. The evidence that the IRA had bought ads on Google was provided as images of ad text and in PDF format whose pages displayed copies of information previously organized in spreadsheets. This means that Google could have provided the useable ad text and spreadsheets—in a standard machine-readable file format, such as CSV or JSON, that would be useful to data scientists—but chose to turn them into images and PDFs as if the material would all be printed out on paper.
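For the non-data-scientist, the difference is not academic: records delivered as CSV or JSON can be loaded programmatically in a couple of lines, while the same records flattened into images and PDFs have to be transcribed or OCR’d by hand. Here is a minimal sketch of what machine-readable delivery makes possible (the file and field names are hypothetical):

```python
import csv
import json

# A CSV export parses into a list of records in one call.
with open("ira_ads.csv", newline="") as f:
    ads = list(csv.DictReader(f))  # e.g. [{"ad_text": "...", "spend": "..."}, ...]

# So does a JSON export.
with open("ira_ads.json") as f:
    ads = json.load(f)
```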

This forced the researchers to collect their own data via citations and mentions of YouTube content, and as a consequence their conclusions are limited. Generally speaking, when a tech company does this, it means the data it could provide would tell a story it doesn’t want heard.

For instance, one interesting point brought up by a second report published today, by New Knowledge, concerns the 1,108 videos uploaded by IRA-linked accounts on YouTube. These videos, a Google statement explained, “were not targeted to the U.S. or to any particular sector of the U.S. population.”

In fact, all but a few dozen of these videos concerned police brutality and Black Lives Matter, which, as you’ll recall, were among the most popular topics on the other platforms. It seems reasonable to expect that this extremely narrow targeting would have been mentioned by YouTube in some way. Unfortunately it was left to be discovered by a third party, which gives one an idea of just how far a statement from the company can be trusted.

Desperately seeking transparency

In its conclusion, the Oxford researchers — Philip N. Howard, Bharath Ganesh, and Dimitra Liotsiou — point out that although the Russian propaganda efforts were (and remain) disturbingly effective and well organized, the country is not alone in this.

“During 2016 and 2017 we saw significant efforts made by Russia to disrupt elections around the world, but also political parties in these countries spreading disinformation domestically,” they write. “In many democracies it is not even clear that spreading computational propaganda contravenes election laws.”

“It is, however, quite clear that the strategies and techniques used by government cyber troops have an impact,” the report continues, “and that their activities violate the norms of democratic practice… Social media have gone from being the natural infrastructure for sharing collective grievances and coordinating civic engagement, to being a computational tool for social control, manipulated by canny political consultants, and available to politicians in democracies and dictatorships alike.”

Predictably, even social networks’ moderation policies became targets for propagandizing.

Waiting on politicians is, as usual, something of a long shot, and the onus is squarely on the providers of social media and internet services to create an environment in which malicious actors are less likely to thrive.

Specifically, this means that these companies need to embrace researchers and watchdogs in good faith instead of freezing them out in order to protect some internal process or embarrassing misstep.

“Twitter used to provide researchers at major universities with access to several APIs, but has withdrawn this and provides so little information on the sampling of existing APIs that researchers increasingly question its utility for even basic social science,” the researchers point out. “Facebook provides an extremely limited API for the analysis of public pages, but no API for Instagram.” (And we’ve already heard what they think of Google’s submissions.)

If the companies exposed in this report truly take these issues seriously, as they tell us time and again, perhaps they should implement some of these suggestions.

These face-generating systems are getting rather too creepily good for my liking

Machine learning models are getting quite good at generating realistic human faces — so good that I may never trust a machine, or human, to be real ever again. The new approach, from researchers at Nvidia, leapfrogs others by separating levels of detail in the faces and allowing them to be tweaked separately. The results are eerily realistic.

The paper, published on the preprint repository arXiv (PDF), describes a new architecture for generating and blending images, particularly human faces, that “leads to better interpolation properties, and also better disentangles the latent factors of variation.”

What that means, basically, is that the system is more aware of meaningful variation between images, and at a variety of scales to boot. The researchers’ older system might, for example, produce two “distinct” faces that were mostly the same except that the ears of one are erased and the shirt is a different color. That’s not really distinctiveness — but the system doesn’t know that those aren’t important pieces of the image to focus on.

It’s inspired by what’s called style transfer, in which the important stylistic aspects of, say, a painting, are extracted and applied to the creation of another image, which (if all goes well) ends up having a similar look. In this case, the “style” isn’t so much the brush strokes or color space, but the composition of the image (centered, looking left or right, etc.) and the physical characteristics of the face (skin tone, freckles, hair).

These features can have different scales, as well — at the fine side, it’s things like individual facial features; in the middle, it’s the general composition of the shot; at the largest scale, it’s things like overall coloration. Allowing the system to adjust all of them changes the whole image, while only adjusting a few might just change the color of someone’s hair, or just the presence of freckles or facial hair.
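To make the mechanics concrete, here is a deliberately toy sketch of scale-separated “style” control. Every name, dimension and operation below is illustrative only, not Nvidia’s actual architecture; the point is just that each layer of the generator consumes its own style vector, so swapping vectors at early layers changes coarse traits while swapping them at later layers changes fine ones.

```python
import numpy as np

rng = np.random.default_rng(0)
N_LAYERS = 6  # early layers = coarse traits, late layers = fine traits

def mapping(z):
    """Stand-in for the learned network that maps a random latent
    vector z to an intermediate 'style' vector w."""
    return np.tanh(z)  # placeholder for several learned layers

def synthesize(styles):
    """Pretend-generator: each layer consumes one style vector.
    Early layers would set pose and face shape; later layers,
    things like hair texture and skin detail."""
    out = np.zeros(512)
    for depth, w in enumerate(styles):
        out += (0.5 ** depth) * w  # coarse layers dominate the output
    return out

w_a = mapping(rng.normal(size=512))  # the "source" face
w_b = mapping(rng.normal(size=512))  # the "style" face

# Style mixing: face A controls the coarse layers, face B the fine ones.
mixed = synthesize([w_a] * 3 + [w_b] * (N_LAYERS - 3))
```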

In the image at the top, notice how completely the faces change, yet clear markers of both the “source” and “style” faces are still present, for instance the blue shirts in the bottom row. In other cases things are made up out of whole cloth, like the kimono the kid in the very center seems to be wearing. Where’d that come from? Note that all this is totally variable, not just A + B = C, but with all aspects of A and B present or absent depending on how the settings are tweaked.

None of these are real people. But I wouldn’t look twice at most of these images if they were someone’s profile picture or the like. It’s kind of scary to think that we now have basically a face generator that can spit out perfectly normal looking humans all day long. Here are a few dozen:

It’s not perfect, but it works. And not just for people. Cars, cats, landscapes — all this stuff more or less fits the same paradigm of small, medium and large features that can be isolated and reproduced individually. An infinite cat generator sounds like a lot more fun to me, personally.

The researchers have also published a new data set of face data: 70,000 images of faces collected (with permission) from Flickr, aligned and cropped. They used Mechanical Turk to weed out statues, paintings and other outliers. Given that the standard data sets used by these types of projects are mostly red-carpet photos of celebrities, this should provide a much more varied set of faces to work with. The data set will be available for others to download here soon.

Watch Rocket Lab launch 10 cubesats into orbit tonight for NASA

It’s been just over a month since Rocket Lab’s inaugural (and long-delayed) commercial launch, “It’s Business Time,” and the company is about to take another customer to space: NASA. Tonight’s launch, scheduled for 8 PM, will take 10 small satellites to orbit as part of NASA’s Educational Launch of Nanosatellites (ELaNa) XIX mission.

This is not only Rocket Lab’s first all-NASA launch, but also NASA’s first under its “Venture Class Launch Services” initiative, which takes advantage of the new generation of smaller, quick-turnaround launch vehicles.

“The NASA Venture Class Launch Service contract was designed from the ground up to be an innovative way for NASA to work and encourage new launch companies to come to the market and enable a future class of rockets for the growing small satellite market,” said Justin Treptow, ELaNa XIX’s mission manager, in a Rocket Lab press release.

On board tonight’s launch are four satellites from NASA researchers, plus six from various universities and institutions around the country. NASA Spaceflight has a good roundup of the projects, as well as some technical details about the rocket, if you’re curious. They’ll all go their separate ways once the Electron rocket takes them up to the appropriate altitude.

The launch vehicle is named “This One’s For Pickering,” after former JPL head Sir William Pickering, who led the team that created Explorer I, the first U.S. satellite in space. Pickering was born in New Zealand, where Rocket Lab is based and where the launch will take place.

Liftoff will take place no sooner than about 8 PM Pacific time, and payload deployment should be just short of an hour after T-0; you can watch the live stream at Rocket Lab’s website.

Listen to the soothing sounds of Martian wind collected by NASA’s InSight lander

The InSight Mars lander accomplished a perfect landing last week in the Elysium Planitia region of the planet, where it is hard at work preparing to drill into the surface (and taking selfies, of course). But one “unplanned treat” is a recording of the wind rolling across the Martian plains — which you can listen to right here.

Technically the lander isn’t rigged to detect sound, at least in the way you’d do it if you were deliberately trying to record it. But the robotic platform’s air pressure sensor and seismometer are both capable of detecting the minute variations as the wind rolls over it. The air pressure sensor, inside that silver dome you see above, produced the most normal-sounding signal, though it still had to be adjusted to be more like what you’d hear if you were there.
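The general technique for making such a signal audible is simple: play it back much faster than it was recorded, which shifts everything up in pitch. A toy illustration (the sample rates, speed-up factor and synthetic “pressure trace” below are all invented, not NASA’s actual processing):

```python
import numpy as np
from scipy.io import wavfile

FS_SENSOR = 100  # hypothetical sensor sample rate, Hz
t = np.arange(0, 60, 1 / FS_SENSOR)

# Stand-in for a wind-driven pressure trace: a slow, noisy rumble.
rng = np.random.default_rng(0)
trace = np.sin(2 * np.pi * 5 * t) + 0.5 * rng.normal(size=t.size)

# Writing the file with a 100x higher sample rate replays it 100x
# faster, so a 5 Hz oscillation comes out as a 500 Hz audible tone.
SPEEDUP = 100
audio = np.int16(trace / np.abs(trace).max() * 32767)
wavfile.write("mars_wind_demo.wav", FS_SENSOR * SPEEDUP, audio)
```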

“The InSight lander acts like a giant ear,” explained InSight science team member Tom Pike in a NASA news release. “The solar panels on the lander’s sides respond to pressure fluctuations of the wind. It’s like InSight is cupping its ears and hearing the Mars wind beating on it.”

Curious what it sounds like? The resulting recording can be listened to on Soundcloud or below:

Sounds a lot like regular wind, right? Well, what were you expecting? Like so many aspects of space exploration, the prosaic nature of the thing itself — a rock, a landscape feature, a breath of wind — is offset by the fact that it’s occurring millions of miles away on an alien world and relayed here by a high-tech robot. Wind on Mars might not sound much different than wind on Earth — but surely that’s not the point!

We’ll have more recordings soon, I’m sure, so you can use it as noise to fall asleep to. But even better sounds are forthcoming: the Mars 2020 rover will have actual high-quality microphones on board, and will record the sounds of its landing as well as the Martian ambience.

AI desperately needs regulation and public accountability, experts say

Artificial intelligence systems and creators are in dire need of direct intervention by governments and human rights watchdogs, according to a new report from researchers at Google, Microsoft and others at AI Now. Surprisingly, it looks like the tech industry just isn’t that good at regulating itself.

In the 40-page report (PDF) published this week, the New York University-based organization (which counts Microsoft Research and Google-affiliated members among its ranks) shows that AI-based tools have been deployed with little regard for potential ill effects, or even documentation of good ones. This would be one thing if it were happening in controlled trials here and there; instead, these untested, undocumented AI systems are being put to work in places where they can deeply affect thousands or millions of people.

I won’t go into the examples here, but think border patrol, entire school districts and police departments, and so on. These systems are causing real harm, and not only are there no systems in place to stop them, but few to even track and quantify that harm.

“The frameworks presently governing AI are not capable of ensuring accountability,” the researchers write in the paper. “As the pervasiveness, complexity, and scale of these systems grow, the lack of meaningful accountability and oversight – including basic safeguards of responsibility, liability, and due process – is an increasingly urgent concern.”

Right now companies are creating AI-based solutions to everything from grading students to assessing immigrants for criminality. And the companies creating these programs are bound by little more than a few ethical statements they decided on themselves.

Google, for instance, recently made a big deal about setting some “AI principles” after the uproar about its work for the Defense Department. It said its AI tools would be socially beneficial, accountable and wouldn’t contravene widely accepted principles of human rights.

Naturally, it turned out the company had been working the whole time on a prototype censored search engine for China. Great job!

So now we know exactly how far that company can be trusted to set its own boundaries. We may as well assume that’s the case for the likes of Facebook, which is using AI-based tools to moderate content; Amazon, which is openly pursuing AI for surveillance purposes; and Microsoft, which yesterday published a good piece on AI ethics — but as good as its intentions seem to be, a “code of ethics” is nothing but promises a company is free to break at any time.

The AI Now report makes a number of recommendations, which I’ve summarized below but which really are worth reading in their entirety. The report is quite readable and a good review, as well as smart analysis.

  • Regulation is desperately needed. But a “national AI safety body” or something like that is impractical. Instead, AI experts within industries like health or transportation should be looking at modernizing domain-specific rules to include provisions limiting and defining the role of machine learning tools. We don’t need a Department of AI, but the FAA should be ready to assess the legality of, say, a machine learning-assisted air traffic control system.
  • Facial recognition, in particular questionable applications of it like emotion and criminality detection, needs to be closely examined and subjected to the kind of restrictions applied to false advertising and fraudulent medicine.
  • Public accountability and documentation need to be the rule, including a system’s internal operations, from data sets to decision-making processes. These are necessary not just for basic auditing and justification for using a given system, but for legal purposes should such a decision be challenged by a person that system has classified or affected. Companies need to swallow their pride and document these things even if they’d rather keep them as trade secrets — which seems to me the biggest ask in the report.
  • More funding and more precedents need to be established in the process of AI accountability; it’s not enough for the ACLU to write a post about a municipal “automated decision-making system” that deprives certain classes of people of their rights. These things need to be taken to court and the people affected need mechanisms of feedback.
  • The entire industry of AI needs to escape its engineering and computer science cradle — the new tools and capabilities cut across boundaries and disciplines and should be considered in research not just by the technical side. “Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations,” write the researchers.

They’re good recommendations, but not the kind that can be made on short notice, so expect 2019 to be another morass of missteps and misrepresentations. And as usual, never trust what a company says, only what it does — and even then, don’t trust it to say what it does.

Loro’s mounted wheelchair assistant puts high tech to work for people with disabilities

A person with physical disabilities can’t interact with the world the same way the able-bodied can, but there’s no reason we can’t use tech to close that gap. Loro is a device that mounts to a wheelchair and offers its occupant the ability to see and interact with the people and things around them in powerful ways.

Loro’s camera and app work together to let the user see farther, read or translate writing, identify people, gesture with a laser pointer and more. They demonstrated their tech onstage today during Startup Battlefield at TechCrunch Disrupt Berlin.

Invented by a team of mostly students who gathered at Harvard’s Innovation Lab, Loro began as a simple camera for disabled people to more easily view their surroundings.

“We started this project for our friend Steve,” said Loro co-founder and creative director Johae Song. Steve, a designer like Song and others in their friend group, was diagnosed with amyotrophic lateral sclerosis, or ALS, a degenerative neural disease that paralyzes the muscles of the afflicted. “So we decided to come up with ideas of how to help people with mobility challenges.”

“We started with just the idea of a camera attached to the wheelchair, to give people a panoramic view so they can navigate easily,” explained co-founder David Hojah. “We developed from that idea after talking with mentors and experts; we did a lot of iterations, and came up with the idea to be smarter, and now it’s this platform that can do all these things.”

It’s not simple to design responsibly for a population like ALS sufferers and others with motor impairments. The problems they encounter in everyday life aren’t necessarily what one would think, nor are the solutions always obvious. So the Loro team resolved to consult many sources and spend a great deal of time in simple observation.

“Very basic observation — just sit and watch,” Hojah said. “From that you can get ideas of what people need without even asking them specific questions.”

Others would voice specific concerns without suggesting solutions; one such concern led to a flashlight the user can direct through the camera interface.

“People didn’t say, ‘I want a flashlight,’ they said ‘I can’t get around in the dark.’ So we brainstormed and came up with the flashlight,” he said. An obvious solution in some ways, but only through observation and understanding can it be implemented well.

The focus is always on communication and independence, Song said, and users are the ones who determine what gets included.

“We brainstorm together and then go out and user test. We realize some features work, others don’t. We try to just let them play with it and see what features people use the most.”

There are assistive devices for motor-impaired people out there already, Song and Hojah acknowledged, but they’re generally expensive, unwieldy and poorly designed. Hojah’s background is in medical device design, so he knows of what he speaks.

Consequently, Loro has been designed to be as accessible as possible, with a tablet interface that can be navigated using gaze tracking (via a Tobii camera setup) or other inputs like joysticks and sip-and-puff tubes.

The camera can be directed to, for example, look behind the wheelchair so the user can safely back up. Or it can zoom in on a menu that’s difficult to see from the user’s perspective and read the items off. The laser pointer allows a user with no ability to point or gesture to signal in ways we take for granted, such as choosing a pastry from a case. Text-to-speech is built right in, so users don’t have to use a separate app to speak out loud.
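That menu-reading feature is, at least in principle, a camera-to-OCR-to-speech pipeline, and the building blocks are all off the shelf. Here’s a hypothetical sketch using common open-source libraries (Loro’s actual stack isn’t public, and these library choices are mine):

```python
from PIL import Image
import pytesseract  # OCR; requires the Tesseract binary installed
import pyttsx3      # offline text-to-speech

def read_aloud(image_path: str) -> None:
    """Extract any text from a camera frame and speak it."""
    text = pytesseract.image_to_string(Image.open(image_path)).strip()
    engine = pyttsx3.init()
    engine.say(text if text else "I couldn't find any text to read.")
    engine.runAndWait()

# e.g. a frame captured after zooming in on a menu
read_aloud("menu_closeup.jpg")
```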

The camera also tracks faces and can recognize them from a personal (though for now, cloud-hosted) database for people who need help tracking those with whom they interact. The best of us can lose a name or fail to place a face — honestly, I wouldn’t mind having a Loro on my shoulder during some of our events.

Right now the team is focused on finalizing the hardware; the app and capabilities are mostly finalized but the enclosure and so on need to be made production-ready. The company itself is very early-stage — they just incorporated a few months ago and worked with $100,000 in pre-seed funding to create the prototype. Next up is doing a seed round to get ready to manufacture.

“The whole team, we’re really passionate about empowering these people to be really independent, not just waiting for help from others,” Hojah said. Their driving force, he made clear, is compassion.


Spaceflight’s 64-satellite rideshare launch takes off tomorrow on a Falcon 9

Seattle-based launch coordinator Spaceflight is gearing up for its biggest operation yet: Smallsat Express, deploying a staggering 64 separate satellites from 34 different clients — all from a single Falcon 9 rocket. It’s quite an endeavor, but the company believes that this kind of jam-packed “space bus” is the best way to make satellite deployment cheap and easy — relatively speaking, anyway.

Spaceflight, started in 2011, has plenty of launches from a variety of providers under its belt. But demand has been so intense that after taking up a handful of slots on this or that rocket, they finally decided to take the next logical step: “Why not buy our own Falcon?”

That’s how founder Curt Blake explained it to me when I visited the company’s modest office in Westlake, a mile or so from downtown Seattle. Unfortunately, he said, they happened to make that investment just before another SpaceX rocket exploded on the launch pad. That rattled everyone, but ultimately the cost-benefit equation for wholesale rideshare like this makes too much sense.

“There have been lots of shared launches before, but not on this scale,” he said. Dozens deployed, but not 64. The number was actually even higher originally, but some clients had to back out relatively late in the game. That’s one of the downsides of a major shared launch: an inflexible timeline. If 9 out of 10 of the passengers are ready to go, they can’t sit and wait while the last one gets their ducks in a row; the next favorable launch time might be months off.

Spaceflight, like other launch coordinators, does a bunch of things for its clients: helping navigate the red tape and scheduling things, of course — but perhaps most importantly for a launch of this scale, working with everyone to create a payload that can launch scores of satellites ranging in size from breadbox to cooler.

That payload, Blake said, is known at SpaceX as the “FrankenStack.” A “stack” comprises the components in the rocket’s payload that actually do stuff, and Spaceflight had to make this one from scratch. They learned a lot, Blake noted, and had to invent a lot in order to cram all those satellites in there.

The FrankenStack is rather like a giant wedding cake, with layers of different satellite deployment hardware. After all, these satellites are all going to different places, different orbits, different directions. You can’t just get up there and hit the “release” button.

At the very bottom, or rather above the cone that attaches to the rocket stage, is the MPC, or multi-payload carrier, which has a variety of large items on four shelves, including ones that need to be launched along the FrankenStack’s path, as opposed to perpendicular to it. Above that is the hub and cubesat portion, also called the upper free flier, because it will detach from the MPC and go its own way.

If all goes well, there will be 64 more little stars in the sky by the end of tomorrow. Watch the live stream of the launch on SpaceX’s site starting at about 10 AM.

Mars Lander InSight sends the first of many selfies after a successful touchdown

Last night’s 10 minutes of terror as the InSight Mars Lander descended to the Martian surface at 12,300 MPH were a nail-biter for sure, but now the robotic science platform is safe and sound — and has sent pics back to prove it.

The first thing it sent was a couple of pictures of its surroundings: Elysium Planitia, a rather boring-looking, featureless plain that is nevertheless perfect for InSight’s drilling and seismic activity work.

The images, taken with its Instrument Context Camera, are hardly exciting on their own merits — a dirty landscape viewed through a dusty tube. But when you consider that it’s of an unexplored territory on a distant planet, and that it’s Martian dust and rubble occluding the lens, it suddenly seems pretty amazing!

Decelerating from interplanetary velocity and making a perfect landing was definitely the hard part, but it was by no means InSight’s last challenge. After touching down, it still needs to set itself up and make sure that none of its many components and instruments were damaged during the long flight and short descent to Mars.

And the first good news arrived shortly after landing, relayed via NASA’s Odyssey spacecraft in orbit: a partial selfie showing that it was intact and ready to roll. The image shows, among other things, the large mobile arm folded up on top of the lander, and a big copper dome covering some other components.

Telemetry data sent around the same time show that InSight has also successfully deployed its solar panels and is collecting the power with which to continue operating. These fragile fans are crucial to the lander, of course, and it’s a great relief to hear they’re working properly.

These are just the first of many images the lander will send, though unlike Curiosity and the other rovers, it won’t be traveling around taking snapshots of everything it sees. Its data will be collected from deep inside the planet, offering us insight into the planet’s — and our solar system’s — origins.

That night, a forest flew

Wildfires are consuming our forests and grasslands faster than we can replace them. It’s a vicious cycle of destruction and inadequate restoration rooted, so to speak, in decades of neglect of the institutions and technologies needed to keep these environments healthy.

DroneSeed is a Seattle-based startup that aims to combat this growing problem with a modern toolkit that scales: drones, artificial intelligence and biological engineering. And it’s even more complicated than it sounds.

Trees in decline

A bit of background first. The problem of disappearing forests is a complex one, but it boils down to a few major factors: climate change, outdated methods and shrinking budgets (and as you can imagine, all three are related).

Forest fires are a natural occurrence, of course. And they’re necessary, as you’ve likely read, to sort of clear the deck for new growth to take hold. But climate change, monoculture growth, population increases, a lack of controlled burns and other factors have led to these events taking place not just more often, but more extensively and to more permanent effect.

On average, the U.S. is losing 7 million acres a year. That’s not easy to replace to begin with — and as budgets for the likes of national and state forest upkeep have shrunk continually over the last half century, there have been fewer and fewer resources with which to combat this trend.

The most effective and common reforestation technique for a recently burned woodland is human planters carrying sacks of seedlings and manually selecting and placing them across miles of landscapes. This back-breaking work is rarely done by anyone for more than a year or two, so labor is scarce and turnover is intense.

Even if the labor were available on tap, the trees might not be. Seedlings take time to grow in nurseries, and a major wildfire might necessitate the purchase and planting of millions of new trees. It’s impossible for nurseries to anticipate this demand, and the risk associated with growing such numbers on speculation is more than many can afford. One missed guess could put the whole operation underwater.

Meanwhile, if nothing gets planted, invasive weeds move in with a vengeance, claiming huge areas that were once old growth forests. Lacking the labor and tree inventory to stem this possibility, forest keepers resort to a stopgap measure: use helicopters to drench the area in herbicides to kill weeds, then saturate it with fast-growing cheatgrass or the like. (The alternative to spraying is, again, the manual approach: machetes.)

At least then, in a year, instead of a weedy wasteland, you have a grassy monoculture — not a forest, but it’ll do until the forest gets here.

One final complication: helicopter spraying is a horrendously dangerous profession. These pilots fly at sub-100-foot elevations, performing high-speed maneuvers so that their sprays reach the very edge of burn zones without crashing head-on into the trees. The toll: 80 to 100 crashes occur every year in the U.S. alone.

In short, there are more and worse fires and we have fewer resources — and dated ones at that — with which to restore forests after them.

These are facts anyone in forest ecology and logging is familiar with, but perhaps not as well known among technologists. We do tend to stay in areas with cell coverage. But it turns out that a boost from the cloistered knowledge workers of the tech world — specifically those in the Emerald City — may be exactly what the industry and ecosystem require.

Simple idea, complex solution

So what’s the solution to all this? Automation, right?

Automation, especially via robotics, is proverbially suited for jobs that are “dull, dirty, and dangerous.” Restoring a forest is dirty and dangerous to be sure. But dull isn’t quite right. It turns out that the process requires far more intelligence than anyone was willing, it seems, to apply to the problem — with the exception of those planters. That’s changing.

Earlier this year, DroneSeed was awarded the first multi-craft, over-55-pound unmanned aerial vehicle license ever issued by the FAA. Its custom UAV platforms, equipped with multispectral camera arrays, high-end lidar, six-gallon tanks of herbicide and proprietary seed dispersal mechanisms, have been hired by several major forest management companies, with government entities eyeing the service as well.

These drones scout a burned area, mapping it at up to centimeter accuracy, including objects and plant species; fumigate it efficiently and autonomously; identify where trees would grow best; then deploy painstakingly designed seed-nutrient packages to those locations. It’s cheaper than people, less wasteful and dangerous than helicopters and smart enough to scale to national forests currently at risk of permanent damage.

I met with the company’s team at their headquarters near Ballard, where complete and half-finished drones sat on top of their cases and the air was thick with capsaicin (we’ll get to that).

The idea for the company began when founder and CEO Grant Canary burned through a few sustainable startup ideas after his last company was acquired, and was told, in his despondency, that he might have to just go plant trees. Canary took his friend’s suggestion literally.

“I started looking into how it’s done today,” he told me. “It’s incredibly outdated. Even at the most sophisticated companies in the world, planters are superheroes that use bags and a shovel to plant trees. They’re being paid to move material over mountainous terrain and be a simple AI and determine where to plant trees where they will grow — microsites. We are now able to do both these functions with drones. This allows those same workers to address much larger areas faster without the caloric wear and tear.”

It may not surprise you to hear that investors are not especially hot on forest restoration (I joked that it was a “growth industry,” but really, for the reasons above, it’s in dire straits).

But investors are interested in automation, machine learning, drones and especially government contracts. So the pitch took that form. With the money DroneSeed secured, it has built its modestly sized but highly accomplished team and produced the prototype drones with which it has captured several significant contracts before even announcing that it exists.

“We definitely don’t fit the mold or metrics most startups are judged on. The nice thing about not fitting the mold is people double take and then get curious,” Canary said. “Once they see we can actually execute and have been with 3 of the 5 largest timber companies in the U.S. for years, they get excited and really start advocating hard for us.”

The company went through Techstars, and Social Capital helped them get on their feet, with Spero Ventures joining up after the company got some groundwork done.

If things go as DroneSeed hopes, these drones could be deployed all over the world by trained teams, allowing spraying and planting efforts in nurseries and natural forests to take place exponentially faster and more efficiently than they are today. It’s genuine change-the-world-from-your-garage stuff, which is why this article is so long.

Hunter (weed) killers

The job at hand isn’t simple or even straightforward. Every landscape differs from every other, not just in the shape and size of the area to be treated but the ecology, native species, soil type and acidity, type of fire or logging that cleared it and so on. So the first and most important task is to gather information.

For this, DroneSeed has a special craft equipped with a sophisticated imaging stack. This first pass is done using waypoints set on satellite imagery.

The information collected at this point is far more detailed than what’s strictly needed. The lidar, for instance, collects spatial information at a resolution well beyond what’s required to understand the shape of the terrain and major obstacles. It produces a 3D map of the vegetation as well as the terrain, allowing the system to identify stumps, roots, bushes, new trees, erosion and other important features.

This works hand in hand with the multispectral camera, which collects imagery not just in the visible bands — useful for identifying things — but also in those outside the human range, which allows for in-depth analysis of the soil and plant life.
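The textbook example of what those extra bands buy you is NDVI, the normalized difference vegetation index, computed from red and near-infrared reflectance: healthy vegetation reflects near-infrared strongly and absorbs red, so it scores high. A minimal version follows (DroneSeed’s actual feature set isn’t public; this is just the standard formula):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index, in [-1, 1].
    High values indicate healthy plant cover; bare soil and
    dead material land near zero."""
    return (nir - red) / (nir + red + 1e-8)  # epsilon avoids divide-by-zero
```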

The resulting map of the area is not just useful for drone navigation, but for the surgical strikes that are necessary to make this kind of drone-based operation worth doing in the first place. No doubt there are researchers who would love to have this data as well.

Now, spraying and planting are very different tasks. The first tends to be done indiscriminately using helicopters, and the second by laborers who burn out after a couple of years — as mentioned above, it’s incredibly difficult work. The challenge in the first case is to improve efficiency and efficacy, while in the second it is to automate something that requires considerable intelligence.

Spraying is in many ways simpler. Identifying invasive plants isn’t easy, exactly, but it can be done with imagery like that the drones are collecting. Having identified patches of a plant to be eliminated, the drones can calculate a path and expend only as much herbicide as is necessary to kill them, instead of dumping hundreds of gallons indiscriminately on the entire area. It’s cheaper and more environmentally friendly. Naturally, the opposite approach could be used for distributing fertilizer or some other agent.
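The savings are easy to see on the back of an envelope: given a per-cell mask of invasive plants derived from the imagery, spraying only the flagged cells cuts herbicide use roughly in proportion to the infestation rate. Every number below is invented for illustration:

```python
import numpy as np

CELL_AREA_M2 = 1.0    # hypothetical 1 m grid from the aerial map
DOSE_L_PER_M2 = 0.02  # hypothetical application rate

# Pretend roughly 7% of a 500 x 500 m burn area is infested.
rng = np.random.default_rng(1)
weed_mask = rng.random((500, 500)) < 0.07

targeted = weed_mask.sum() * CELL_AREA_M2 * DOSE_L_PER_M2
blanket = weed_mask.size * CELL_AREA_M2 * DOSE_L_PER_M2
print(f"targeted: {targeted:.0f} L vs. blanket: {blanket:.0f} L")
# -> on the order of 350 L versus 5,000 L
```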

I’m making it sound easy again. This isn’t a plug and play situation — you can’t buy a DJI drone and hit the “weedkiller” option in its control software. A big part of this operation was the creation not only of the drones themselves, but the infrastructure with which to deploy them.

Conservation convoy

The drones themselves are unique, but not alarmingly so. They’re heavy-duty craft, capable of lifting well over the 57 pounds of payload they carry (the FAA limits them to 115 pounds).

“We buy and gut aircraft, then retrofit them,” Canary explained simply. Their head of hardware would probably like to think there’s a bit more to it than that, but really the problem they’re solving isn’t “make a drone” but “make drones plant trees.” To that end, Canary explained, “the most unique engineering challenge was building a planting module for the drone that functions with the software.” We’ll get to that later.

DroneSeed deploys drones in swarms, which means as many as five drones in the air at once — which in turn means they need two trucks and trailers with their boxes, power supplies, ground stations and so on. The company’s VP of operations comes from a military background where managing multiple aircraft onsite was part of the job, and she’s brought her rigorous command of multi-aircraft environments to the company.

The drones take off and fly autonomously, but always under direct observation by the crew. If anything goes wrong, they’re there to take over, though of course there are plenty of autonomous behaviors for what to do in case of, say, a lost positioning signal or bird strike.

They fly in patterns calculated ahead of time to be the most efficient, spraying at problem areas when they’re over them, and returning to the ground stations to have power supplies swapped out before returning to the pattern. It’s key to get this process down pat, since efficiency is a major selling point. If a helicopter does it in a day, why shouldn’t a drone swarm? It would be sad if they had to truck the craft back to a hangar and recharge them every hour or two. It also increases logistics costs like gas and lodging if it takes more time and driving.

This means the team involves several people, as well as several drones. Qualified pilots and observers are needed, as well as people familiar with the hardware and software that can maintain and troubleshoot on site — usually with no cell signal or other support. Like many other forms of automation, this one brings its own new job opportunities to the table.

AI plays Mother Nature

The actual planting process is deceptively complex.

The idea of loading up a drone with seeds and setting it free on a blasted landscape is easy enough to picture. Hell, it’s been done. There are efforts going back decades to essentially load seeds or seedlings into guns and fire them out into the landscape at speeds high enough to bury them in the dirt: in theory this combines the benefits of manual planting with the scale of carpeting the place with seeds.

But whether it was slapdash placement or the shock of being fired out of a seed gun, this approach never seemed to work.

Forestry researchers have shown the effectiveness of finding the right “microsite” for a seed or seedling; in fact, it’s why manual planting works as well as it does. Trained humans find perfect spots to put seedlings: in the lee of a log; near but not too near the edge of a stream; on the flattest part of a slope, and so on. If you really want a forest to grow, you need optimal placement, perfect conditions and preventative surgical strikes with pesticides.

Although it’s difficult, it’s also the kind of thing that a machine learning model can become good at. Sorting through messy, complex imagery and finding local minima and maxima is a specialty of today’s ML systems, and the aerial imagery from the drones is rich in relevant data.

The company’s CTO led the creation of an ML model that determines the best locations to put trees at a site — though this task can be highly variable depending on the needs of the forest. A logging company might want a tree every couple of feet, even if that means putting them in sub-optimal conditions — but a few inches to the left or right may make all the difference. On the other hand, national forests may want more sparse deployments or specific species in certain locations to curb erosion or establish sustainable firebreaks.
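The model itself is proprietary, but the shape of the problem is easy to sketch: score every cell of the fused lidar/multispectral map, then greedily pick high-scoring cells subject to a spacing constraint. The features and weights below are invented for illustration:

```python
import numpy as np

def microsite_scores(slope, moisture, shelter, obstacle):
    """Score each grid cell; higher means a better planting spot.
    Inputs are per-cell arrays derived from the aerial map."""
    score = (
        -2.0 * slope      # flatter ground is better
        + 1.5 * moisture  # e.g. from multispectral indices
        + 0.5 * shelter   # e.g. the lee of a log, from the lidar map
    )
    score[obstacle] = -np.inf  # stumps, rocks, existing trees
    return score

def pick_sites(score, spacing, n):
    """Greedily take the n best cells at least `spacing` cells apart."""
    sites = []
    for idx in np.argsort(score, axis=None)[::-1]:
        cell = np.array(np.unravel_index(idx, score.shape))
        if all(np.abs(cell - s).max() >= spacing for s in sites):
            sites.append(cell)
        if len(sites) == n:
            break
    return sites
```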

Once the data has been crunched, the map is loaded into the drones’ hive mind and the convoy goes to the location, where the craft are loaded with seeds instead of herbicides.

But not just any old seeds! You see, that’s one more wrinkle. If you just throw a sagebrush seed on the ground, even if it’s in the best spot in the world, it could easily be snatched up by an animal, roll or wash down to a nearby crevasse, or simply fail to find the right nutrients in time despite the planter’s best efforts.

That’s why DroneSeed’s head of Planting and his team have been working on a proprietary seed packet that they were unbelievably reticent to detail.

From what I could gather, they’ve put a ton of work into packaging the seeds into nutrient-packed little pucks held together with a biodegradable fiber. The outside is dusted with capsaicin, the chemical that makes spicy food spicy (and also what makes bear spray do what it does). If they hadn’t told me, I might have guessed, since the workshop area was hazy with it, leading us all to cough and tear up a little. If I were a marmot, I’d learn to avoid these things real fast.

The pucks, or “seed vessels,” can and must be customized for the location and purpose — you have to match the content and acidity of the soil, things like that. DroneSeed will have to make millions of these things, but it doesn’t plan to be the manufacturer.

Finally, these pucks are loaded into a special puck dispenser which, closely coordinating with the drone, spits one out at the exact moment and speed needed to put it within a few centimeters of the microsite.
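At its most basic, that timing math is projectile kinematics: the puck keeps the drone’s forward speed when released, so it has to be dropped early by the distance it will drift during the fall. A real system would also have to account for drag, wind and the dispenser’s ejection velocity, but the core calculation looks like this:

```python
import math

def release_lead(altitude_m: float, speed_ms: float, g: float = 9.81) -> float:
    """Horizontal distance before the target at which to release the puck."""
    fall_time = math.sqrt(2 * altitude_m / g)  # time to fall, ignoring drag
    return speed_ms * fall_time

# e.g. from 10 m up at 5 m/s, release about 7.1 m before the microsite
print(f"{release_lead(10, 5):.1f} m")
```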

All these factors should improve the survival rate of seedlings substantially. That means that the company’s methods will not only be more efficient, but more effective. Reforestation is a numbers game played at scale, and even slight improvements — and DroneSeed is promising more than that — are measured in square miles and millions of tons of biomass.

Proof of life

DroneSeed has already signed several big contracts for spraying, and planting is next. Unfortunately, the timing meant the company missed this year’s planting season, though by doing a few small sites and showing off the results, it will be in pole position for next year.

After demonstrating the effectiveness of the planting technique, the company expects to expand its business substantially. That’s the scaling part — again, not easy, but easier than hiring another couple thousand planters every year.

Ideally the hardware can be assigned to local teams that do the on-site work, producing loci of activity around major forests from which jobs can be deployed at large or small scales. A set of five or six drones does the work of one helicopter, roughly speaking, so depending on the volume requested by a company or forestry organization, you may need dozens on demand.

That’s all yet to be explored, but DroneSeed is confident that the industry will see the writing on the wall when it comes to the old methods, and will identify its approach as a solution that fits the future.

If it sounds like I’m cheerleading for this company, that’s because I am. It’s not often in the world of tech startups that you find a group of people not just attempting to solve a serious problem — it’s common enough to find companies hitting this or that issue — but who have spent the time, gathered the expertise and really done the dirty, boots-on-the-ground work that needs to happen so it goes from great idea to real company.

That’s what I felt was the case with DroneSeed, and here’s hoping their work pays off — for their sake, sure, but mainly for ours.