This neural network detects whether faces have been Photoshopped

Using Photoshop and other image manipulation software to tweak faces in photos has become common practice, but it’s not always made clear when it’s been done. Berkeley and Adobe researchers have created a tool that not only can tell when a face has been Photoshopped, but can suggest how to undo it.

Right off the bat it must be noted that this project applies only to Photoshop manipulations, and in particular those made with the “Face Aware Liquify” feature, which allows for both subtle and major adjustments to many facial features. A universal detection tool is a long way off, but this is a start.

The researchers (among them Alexei Efros, who just appeared at our AI+Robotics event) began from the assumption that a great deal of image manipulation is performed with popular tools like Adobe’s, and as such a good place to start would be looking specifically at the manipulations possible in those tools.

They set up a script to take portrait photos and manipulate them slightly in various ways: move the eyes a bit and emphasize the smile, narrow the cheeks and nose, things like that. They then fed the originals and warped versions to a machine learning model en masse, in the hope that it would learn to tell them apart.
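
To make the idea concrete, here is a toy version of that setup (not the researchers’ actual code, which lives at the project page) in which each portrait is paired with a slightly warped copy and a stock classifier learns to tell the two apart. The warp_face function below is a stand-in for Photoshop’s Face Aware Liquify:

```python
# Toy sketch: train a binary classifier on original vs. warped portraits.
import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms.functional as TF

def warp_face(img):
    # Placeholder for a Face-Aware-Liquify-style edit; here, a mild affine nudge.
    return TF.affine(img, angle=0.0, translate=[2, 0], scale=1.02, shear=[1.0])

model = models.resnet18(num_classes=2)   # class 0 = original, 1 = manipulated
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(originals):
    # originals: an (N, 3, H, W) float tensor batch of untouched portraits
    warped = torch.stack([warp_face(im) for im in originals])
    x = torch.cat([originals, warped])
    y = torch.cat([torch.zeros(len(originals)), torch.ones(len(warped))]).long()
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    return loss.item()
```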

Learn it did, and quite well. When humans were presented with images and asked which had been manipulated, they performed only slightly better than chance. But the trained neural network identified the manipulated images 99 percent of the time.

What is it seeing? Probably tiny patterns in the optical flow of the image that humans can’t really perceive. And those same little patterns also suggest to it what exact manipulations have been made, letting it suggest an “undo” of the manipulations even though it has never seen the original.

Since it’s limited to just faces tweaked by this Photoshop tool, don’t expect this research to form any significant barrier against the forces of evil lawlessly tweaking faces left and right out there. But this is just one of many small starts in the growing field of digital forensics.

“We live in a world where it’s becoming harder to trust the digital information we consume,” said Adobe’s Richard Zhang, who worked on the project, “and I look forward to further exploring this area of research.”

You can read the paper describing the project and inspect the team’s code at the project page.

Price tag to return to the Moon could be $30 billion

NASA’s ambitious plan to return to the Moon may cost as much as $30 billion over the next five years, the agency’s administrator, Jim Bridenstine, indicated in an interview this week. This is only a ballpark figure, but it’s the first all-inclusive one we’ve seen and, despite being a large amount of money, is lower than some might have guessed.

Bridenstine floated the figure in an interview with CNN, suggesting that the agency would need somewhere between $20 billion and $30 billion for the purpose of returning to the surface of the Moon. Anything beyond that, such as fleshing out the Lunar Gateway or establishing a persistent presence, would incur additional costs.

To put this figure in perspective, NASA’s annual budget is about $20 billion, very little compared to many other agencies and budget items in the federal government. The speculated additional costs would average $4-6 billion per year, though spending may not be so consistent. NASA only asked for an additional $1.6 billion for the upcoming year, for instance.

The idea that this return to the Moon could cost the same in 2019 dollars as Apollo cost in 1960s dollars (about $30 billion) may be surprising to some. But of course we are not inventing crewed travel beyond Earth orbit from scratch this time around. Billions have already been invested in the technologies and infrastructure underpinning the Artemis mission, both flight-proven and recently developed.

In addition to that, Bridenstine is likely counting on the cost savings NASA will see by partnering with commercial aerospace concerns far more extensively than in previous missions of this scale. Cost-sharing, co-development and use of commercial services rather than internal ones will likely save billions.

A secondary goal, Bridenstine told CNN, was “to make sure that we’re not cannibalizing parts of NASA to fund the Artemis program.” So sucking money out of other missions, or co-opting tech or parts from other projects, isn’t an option.

Whether Congress will approve the money is an open question. More concerning is the fundamental timeline of technology development and deployment over the next five years. NASA may find that, even with billions at its disposal and everything going according to plan, a mission to the lunar surface simply isn’t feasible within that window. The SLS and Orion projects are over budget and have been repeatedly delayed, for instance.

Ambition and aggressive timelines are part of NASA’s DNA, however, and although it can plan for the best, you’d better believe its engineers and program managers are preparing for the worst as well. We’ll get there when we get there.

DEEPFAKES Accountability Act would impose unenforceable rules — but it’s a start

The new DEEPFAKES Accountability Act in the House — and yes, that’s an acronym — would take steps to criminalize the synthetic media referred to in its name, but its provisions seem too optimistic in the face of the reality of this threat. On the other hand, it also proposes some changes that will help bring the law up to date with the tech.

The bill, proposed by Representative Yvette Clarke (D-NY), it must be said, has the most ridiculous name I’ve encountered: the Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act. Amazingly, that acronym (backronym, really) actually makes sense.

It’s intended to stem the potential damage of synthetic media purporting to be authentic, which is rare enough now but soon may be commonplace. With just a few minutes (or even a single frame) of video and voice, a fake version of a person, perhaps a public figure or celebrity, can be created that is convincing enough to fool anyone not looking too closely. And the quality is only getting better.

DEEPFAKES would require anyone creating a piece of synthetic media imitating a person to disclose that the video is altered or generated, using “irremovable digital watermarks, as well as textual descriptions.” Failing to do so will be a crime.

The act also establishes a right on the part of victims of synthetic media to sue the creators and/or otherwise “vindicate their reputations” in court.

Many of our readers will have already spotted the enormous loopholes gaping in this proposed legislation.

First, if a creator of a piece of media is willing to put their name to it and document that it is fake, those are almost certainly not the creators or the media we need to worry about. Jordan Peele is the least of our worries (and in fact the subject of many of our hopes). Requiring satirists and YouTubers to document their modified or generated media seems only to assign paperwork to people already acting legally and with no harmful intentions.

Second, watermark and metadata-based markers are usually trivial to remove. Text can be cropped, logos removed (via more smart algorithms), and even a sophisticated whole-frame watermark might be eliminated simply by being re-encoded for distribution on Instagram or YouTube. Metadata and documentation are often stripped or otherwise made inaccessible. And the inevitable reposters seem to have no responsibility to keep that data intact, either — so as soon as this piece of media leaves the home of its creator, it is out of their control and very soon will no longer be in compliance with the law.
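
For a sense of how little friction is involved, consider that merely opening and re-saving an image with a stock imaging library discards its metadata by default. The filenames here are hypothetical, but the behavior is standard:

```python
# Re-encoding an image silently drops the EXIF block where a disclosure
# tag or watermark record might live.
from PIL import Image

im = Image.open("disclosed_deepfake.jpg")
print(dict(im.getexif()))          # whatever disclosure metadata was attached

im.save("repost.jpg")              # re-encode, as a social platform might
print(dict(Image.open("repost.jpg").getexif()))  # typically: {}
```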

Third, it’s far more likely that truly damaging synthetic media will be created with an eye to anonymity and distributed by secondary methods. The law here is akin to asking bootleggers to mark their barrels with their contact information. No malicious actor will even attempt to mark their work as an “official” fake.

That said, just because these rules are unlikely to prevent people from creating and distributing damaging synthetic media — what the bill calls “advanced technological false personation records” — that doesn’t mean the law serves no purpose here.

One of the problems with the pace of technology is that it frequently is some distance ahead of the law, not just in spirit but in letter. With something like revenge porn or cyberbullying, there’s often literally no legal recourse because these are unprecedented behaviors that may not fit neatly under any specific criminal code. A law like this, flawed as it is, defines the criminal behavior and puts it on the books, so it’s clear what is and isn’t against the law. So while someone faking a Senator’s face may not voluntarily identify themselves, if they are identified, they can be charged.

To that end a later portion of the law is more relevant and realistic: It seeks to place unauthorized digital recreations of people under the umbrella of unlawful impersonation statutes. Just as it’s variously illegal to pretend you’re someone you’re not, to steal someone’s ID, to pretend you’re a cop, and so on, it would be illegal to nefariously misrepresent someone digitally.

That gives police and the court system a handhold when cases concerning synthetic media begin pouring in. They can say “ah, this falls under statute so and so” rather than arguing about jurisdiction or law and wasting everyone’s time — an incredibly common (and costly) occurrence.

The bill puts someone at the U.S. Attorney’s Office in charge of handling cases like revenge porn (“false intimate depictions”), coordinating prosecution, and so on. Again, these issues are so new that it’s often not even clear who you or your lawyer or your local police are supposed to call.

Lastly, the act would create a task force at the Department of Homeland Security that would form the core of government involvement with the practice of creating deepfakes, and with any countermeasures created to combat them. The task force would collaborate with private sector companies working on their own to prevent synthetic media from gumming up their gears (Facebook has just had a taste), and report regularly on the state of things.

It’s a start, anyway — rare it is that the government acknowledges something is a problem and attempts to mitigate it before that thing is truly a problem. Such attempts are usually put down as nanny state policies, alas, so we wait for a few people to have their lives ruined then get to work with hindsight. So while the DEEPFAKES Accountability Act would not, I feel, create much in the way of accountability for the malicious actors most likely to cause problems, it does begin to set a legal foundation for victims and law enforcement to fight against those actors.

You can track the progress of the bill (H.R. 3230 in the 116th Congress) here.

Every secure messaging app needs a self-destruct button

The growing presence of encrypted communications apps makes a lot of communities safer and stronger. But the possibility of physical device seizure and government coercion is growing as well, which is why every such app should have some kind of self-destruct mode to protect its user and their contacts.

End-to-end encryption like that in Signal and (if you opt into it) WhatsApp is great at preventing governments and other malicious actors from accessing your messages while they are in transit. But as with nearly all cybersecurity matters, physical access to either device or user or both changes things considerably.

For example, take this Hong Kong citizen who was forced to unlock their phone and reveal their followers and other messaging data to police. It’s one thing to do this with a court order to see if, say, a person was secretly cyberstalking someone in violation of a restraining order. It’s quite another to use as a dragnet for political dissidents.

This particular protestor ran a Telegram channel that had a number of followers. But it could just as easily be a Slack room for organizing a protest, or a Facebook group, or anything else. For groups under threat from oppressive government regimes it could be a disaster if the contents or contacts from any of these were revealed to the police.

Just as you should be able to choose exactly what you say to police, you should be able to choose how much your phone can say as well. Secure messaging apps should be the vanguard of this capability.

There are already some dedicated “panic button” type apps, and Apple has thoughtfully developed an “emergency mode” (activated by hitting the power button five times quickly) that disables biometric unlock until the passcode is entered and can, with the right settings, wipe the phone after repeated failed attempts. That’s effective against “Apple pickers” trying to steal a phone or during border or police stops where you don’t want to show ownership by unlocking the phone with your face.

Those are useful and we need more like them — but secure messaging apps are a special case. So what should they do?

The best-case scenario, where you have all the time in the world and internet access, isn’t really an important one. You can always delete your account and data voluntarily. What needs work is deleting your account under pressure.

The next best-case scenario is that you have perhaps a few seconds or at most a minute to delete or otherwise protect your account. Signal is very good about this: The deletion option is front and center in the options screen, and you don’t have to input any data. WhatsApp and Telegram require you to put in your phone number, which is not ideal — fail to do this correctly and your data is retained.

Signal, left, lets you get on with it. You’ll need to enter your number in WhatsApp (right) and Telegram.

Obviously it’s also important that these apps don’t let users accidentally and irreversibly delete their account. But perhaps there’s a middle road whereby you can temporarily lock it for a preset time period, after which it deletes itself if not unlocked manually. Telegram does have self-destructing accounts, but the shortest inactivity period you can set is a month.

What really needs improvement is emergency deletion when your phone is no longer in your control. This could be a case of device seizure by police, or perhaps being forced to unlock the phone after you have been arrested. Whatever the case, there need to be options for a user to delete their account outside the ordinary means.

Here are a couple options that could work:

  • Trusted remote deletion: Selected contacts are given the ability via a one-time code or other method to wipe each other’s accounts or chats remotely, no questions asked and no notification created. This would let, for instance, a friend who knows you’ve been arrested remotely remove any sensitive data from your device.
  • Self-destruct timer: Like Telegram’s feature, but better. If you’re going to a protest, or have been “randomly” selected for additional screening or questioning, you can just tell the app to delete itself after a certain duration (as little as a minute perhaps) or at a certain time of the day. Deactivate any time you like, or stall for the five required minutes for it to trigger.
  • Poison PIN: In addition to a normal unlock PIN, users can set a poison PIN that when entered has a variety of user-selectable effects. Delete certain apps, clear contacts, send prewritten messages, unlock or temporarily hard-lock the device, etc. (A minimal sketch of this idea follows the list.)
  • Customizable panic button: Apple’s emergency mode is great, but it would be nice to be able to attach conditions like the poison PIN’s. Sometimes all someone can do is smash that button.
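
None of these exists as a real app’s feature yet, but the poison PIN in particular is simple to sketch. Here is a minimal, hypothetical version, where wipe_chats and hard_lock are placeholders for whatever effects the user has selected:

```python
# A hypothetical poison-PIN check: the poison PIN appears to unlock normally
# while silently firing the user's chosen effects first.
import hmac, hashlib

SALT = b"per-device-random-salt"  # would be random and stored securely

def digest(pin: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), SALT, 100_000)

NORMAL_PIN = digest("1234")  # example enrollment values
POISON_PIN = digest("9999")

def handle_pin(entered: str, app) -> bool:
    d = digest(entered)
    if hmac.compare_digest(d, POISON_PIN):
        app.wipe_chats()   # user-selected effects: wipe, send messages, etc.
        app.hard_lock()
        return True        # then behave exactly like a successful unlock
    return hmac.compare_digest(d, NORMAL_PIN)
```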

Obviously these open new avenues for calamity and abuse as well, which is why they will need to be explained carefully and perhaps initially hidden in “advanced options” and the like. But overall I think we’ll be safer with them available.

Eventually these roles may be filled by dedicated apps or by the developers of the operating systems on which they run, but it makes sense for the most security-forward app class out there to be the first in the field.

Krisp’s smart noise-cancelling gets official release and pricing

Background noise on calls could be a thing of the past if Krisp has anything to do with it. The app, now available on Windows and Macs after a long beta, uses machine learning to silence the bustle of a home, shared office, or coffee shop so your voice and the voices of others come through clearly.

I first encountered Krisp in prototype form when we were visiting UC Berkeley’s Skydeck accelerator, which ended up plugging $500,000 into the startup alongside a $1.5M round from Sierra Ventures and Shanda Group.

Like so many apps and services these days, Krisp uses machine learning. But unlike many of them, it uses the technology in a fairly straightforward, easily understandable way.

The machine learning model the company has created is trained to recognize the voice of a person talking into a microphone. By definition pretty much everything else is just noise — so the model just sort of subtracts it from the waveform, leaving your audio clean even if there’s a middle school soccer team invading the cafe where you’re running the call from.
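
In rough outline, the technique looks something like the following sketch of spectral masking, with predict_voice_mask standing in for Krisp’s proprietary model:

```python
# Sketch: mask the spectrogram so only voice-dominated cells survive.
import numpy as np
from scipy.signal import stft, istft

def predict_voice_mask(mag):
    # Placeholder: a trained model would emit per-cell probabilities that a
    # time-frequency bin belongs to speech rather than background noise.
    return (mag > np.median(mag)).astype(float)

def denoise(audio, fs=16000):
    f, t, Z = stft(audio, fs=fs)            # complex spectrogram
    mask = predict_voice_mask(np.abs(Z))    # keep speech, zero out noise
    _, clean = istft(Z * mask, fs=fs)       # back to a waveform
    return clean
```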

It can also mute sound coming from the other direction — that is, the noise on your friend’s side. So if they’re in a noisy street and you’re safe at home, you can apply the smart noise reduction to them as well.

Because it changes the audio signal before it gets to any apps or services, it’s compatible with pretty much everything: Skype, Messenger, Slack, whatever. You could even use it to record podcasts when there’s a leaf blower outside. A mobile version is on the way for release later this year.

It works — I’ve tested it, as have thousands of other users during the beta. But now comes the moment of truth: will anyone pay for it?

The new, official release of the app will let you mute the noise you hear on the line — that is, the noise coming from the microphones of people you talk to — for free, forever. But clearing the noise on your own line, like the baby crying next to you, will cost you $5 per month or $50 per year after a two-week trial period. You can collect free time by referring people to the app, but eventually you’ll probably have to shell out.

Not that there’s anything wrong with that: a straightforward pay-as-you-go business model is refreshing in an age of intrusive data collection, pushy “freemium” platforms, and services that lack any way to make money whatsoever.

Sense Photonics flashes onto the lidar scene with a new approach and $26M

Lidar is a critical part of many autonomous cars and robotic systems, but the technology is also evolving quickly. A new company called Sense Photonics just emerged from stealth mode today with a $26M A round, touting a whole new approach that allows for an ultra-wide field of view and (literally) flexible installation.

Still in the prototype phase but clearly far enough along to attract eight figures of investment, Sense Photonics’ lidar doesn’t look dramatically different from others at first, but the changes are both under the hood and, in a way, on both sides of it.

Early popular lidar systems like those from Velodyne use a spinning module that emits and detects infrared laser pulses, finding the range of the surroundings by measuring the light’s time of flight. Subsequent ones have replaced the spinning unit with something less mechanical, like a DLP-type mirror or even metamaterials-based beam steering.
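
The time-of-flight arithmetic itself is simple: the pulse makes a round trip, so range is half the elapsed time multiplied by the speed of light.

```python
# Range from a lidar pulse's round-trip time.
C = 299_792_458.0  # speed of light, m/s

def range_from_tof(round_trip_s: float) -> float:
    return C * round_trip_s / 2.0  # halve it: the light goes out and back

print(range_from_tof(66.7e-9))  # a ~67 ns round trip puts the target ~10 m away
```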

All these systems are “scanning” systems in that they sweep a beam, column, or spot of light across the scene in some structured fashion — faster than we can perceive, but still piece by piece. Few companies, however, have managed to implement what’s called “flash” lidar, which illuminates the whole scene with one giant, well, flash.

That’s what Sense has created, and it claims to have avoided the usual shortcomings of such systems — namely limited resolution and range. Not only that, but by separating the laser emitting part and the sensor that measures the pulses, Sense’s lidar could be simpler to install without redesigning the whole car around it.

I talked with CEO and co-founder Scott Burroughs, a veteran engineer of laser systems, about what makes Sense’s lidar a different animal from the competition.

“It starts with the laser emitter,” he said. “We have some secret sauce that lets us build a massive array of lasers — literally thousands and thousands, spread apart for better thermal performance and eye safety.”

These tiny laser elements are stuck on a flexible backing, meaning the array can be curved — providing a vastly improved field of view. Lidar units (except for the 360-degree ones) tend to be around 120 degrees horizontally, since that’s what you can reliably get from a sensor and emitter on a flat plane, and perhaps 50 or 60 degrees vertically.

“We can go as high as 90 degrees for vert, which I think is unprecedented, and as high as 180 degrees for horizontal,” said Burroughs proudly. “And that’s something auto makers we’ve talked to have been very excited about.”

Here it is worth mentioning that lidar systems have also begun to bifurcate into long-range, forward-facing lidar (like those from Luminar and Lumotive) for detecting things like obstacles or people 200 meters down the road, and more short-range, wider-field lidar for more immediate situational awareness — a dog behind the vehicle as it backs up, or a car pulling out of a parking spot just a few meters away. Sense’s devices are very much geared toward the second use case.

These are just prototype units, but they work and you can see they’re more than just renders.

Particularly because of the second interesting innovation they’ve included: the sensor, normally part and parcel with the lidar unit, can exist totally separately from the emitter, and is little more than a specialized camera. That means that while the emitter can be integrated into a curved surface like the headlight assembly, the tiny detectors can be stuck in places where there are already traditional cameras: side mirrors, bumpers, and so on.

The camera-like architecture is more than convenient for placement; it also fundamentally affects the way the system reconstructs the image of its surroundings. Because the sensor they use is so close to an ordinary RGB camera’s, images from the former can be matched to the latter very easily.

The depth data and traditional camera image correspond pixel-to-pixel right out of the system.

Most lidars output a 3D point cloud, the result of the beam finding millions of points with different ranges. This is a very different form of “image” than a traditional camera, and it can take some work to convert or compare the depths and shapes of a point cloud to a 2D RGB image. Sense’s unit not only outputs a 2D depth map natively, but that data can be synced with a twin camera so the visible light image matches pixel for pixel to the depth map. It saves on computing time and therefore on delay — always a good thing for autonomous platforms.
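
For the curious, converting such a depth map into 3D points is the standard pinhole-camera calculation. A sketch, assuming known camera intrinsics (focal lengths fx, fy and principal point cx, cy):

```python
# Standard pinhole-camera back-projection: per-pixel depth to a point cloud.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # (H, W, 3), aligned pixel-for-pixel
```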

Sense Photonics’ unit also can output a point cloud, as you see here.

The benefits of Sense’s system are manifest, but of course right now the company is still working on getting the first units to production. To that end it has of course raised the $26 million A round, “co-led by Acadia Woods and Congruent Ventures, with participation from a number of other investors, including Prelude Ventures, Samsung Ventures and Shell Ventures,” as the press release puts it.

Cash on hand is always good. But it has also partnered with Infineon and others, including an unnamed tier-1 automotive company, which is no doubt helping shape the first commercial Sense Photonics product. The details will have to wait until later this year when that offering solidifies, and production should start a few months after that — no hard timeline yet, but expect this all before the end of the year.

“We are very appreciative of this strong vote of investor confidence in our team and our technology,” Burroughs said in the press release. “The demand we’ve encountered – even while operating in stealth mode – has been extraordinary.”

Nintendo teases ‘Breath of the Wild’ sequel, raising Zelda hype to new levels

The capstone to an eventful Nintendo E3 Direct was an unexpected joy: A sequel to the modern classic in the Zelda series, Breath of the Wild. Of course, all they said was that it’s “in development,” but that’s enough for me.

Concluding the video that Nintendo has opted for instead of a press conference in recent years, the company’s Shinya Takahashi said, confidently: “We have more games in development than what we’ve shown you today. I’m looking forward to the day we can introduce them to you. Speaking of… before we end this Direct, I actually have one more thing to show you.”

With that he threw to a final trailer that instantly identified itself to the trained eye as Breath of the Wild-related — superfans will have recognized the green magical trails and corruption slime from that game immediately. But any doubt was cleared away when we got a closeup of Zelda herself (sporting a stylish new short hairdo), who, accompanied by Link, appears to be leading an expedition into a dungeon of some kind.

The two encounter a mysterious figure that appears to be mummified, gripped by a magical hand, and deeply evil — you can tell from the streams of horrible goo coming from it, and from how its eyes glow red when it detects the presence of our heroes. A few flashes of desperate action and it cuts to the overworld, where Hyrule Castle appears to sink into the ground and set off an earthquake with who knows what effects.

“The sequel to The Legend of Zelda: Breath of the Wild is now in development,” the trailer concluded.

If the trailer is any indication, the tone of this game is much darker and more dangerous than the last one, which in its promotional materials emphasized freedom, nature, and openness. In this one, however, all is dark, cramped, and dangerous.

Clearly Zelda and Link have awakened an ancient evil, perhaps that which first corrupted Ganondorf and which the Sheikah carefully sealed away.

What could it mean? Here’s hoping the next Zelda focuses more on intricate, dangerous dungeons the way previous titles did — everyone loved Breath of the Wild, but the most common criticism was the brevity and scarcity of its big dungeons. (Scores of smaller shrines helped offset this complaint, but it’s still valid.)

I’m hoping for a huge “underworld” to mirror the vast overworld that was such a joy to explore. Caves, temples, secrets, darkness and survival elements galore!

It seems likely that Nintendo has listened to critics while also playing to its strengths, and the core gameplay systems of the last game will be married to more structured gameplay and narrative. At any rate we don’t know anything for sure other than that the game is being developed — which anyone might have guessed. But it’s nice to see that confirmed, and be given a glimpse of the next game’s darker finery.

To detect fake news, this AI first learned to write it

One of the biggest problems in media today is so-called “fake news,” which is so highly pernicious in part because it superficially resembles the real thing. AI tools promise to help identify it, but researchers have found that the best way to do so is for the AI to learn to create fake news itself — a double-edged sword, though perhaps not as dangerous as it sounds.

Grover is a new system created by University of Washington and Allen Institute for AI (AI2) computer scientists that is extremely adept at writing convincing fake news on myriad topics and in as many styles — and as a direct consequence is also no slouch at spotting it. The paper describing the model is available here.

The idea of a fake news generator isn’t new — in fact, OpenAI made a splash recently by announcing that its own text-generating AI was too dangerous to release publicly. But Grover’s creators believe we’ll only get better at fighting generated fake news by putting the tools to create it out there to be studied.

“These models are not capable, we think right now, of inflicting serious harm. Maybe in a few years they will be, but not yet,” the lead on the project, Rowan Zellers, told me. “I don’t think it’s too dangerous to release — really, we need to release it, specifically to researchers who are studying this problem, so we can build better defenses. We need all these communities, security, machine learning, natural language processing, to talk to each other — we can’t just hide the model, or delete it and pretend it never happened.”

Therefore and to that end, you can try Grover yourself right here. (Though you might want to read the rest of this article first so you know what’s going on.)

Voracious reader

The AI was trained on an enormous corpus of real news articles, a dataset called RealNews that is being introduced alongside Grover. The 120-gigabyte library contains articles from the end of 2016 through March of this year, from the top 5,000 publications tracked by Google News.

By studying the style and content of millions of real news articles, Grover builds a complex model of how certain phrases or styles are used, what topics and features follow one another in an article, how they’re associated with different outlets, ideas, and so on.

This is done using an “adversarial” system, wherein one aspect of the model generates content and another rates how convincing it is — if it doesn’t meet a threshold, the generator tries again, and eventually it learns what is convincing and what isn’t. Adversarial setups are a powerful force in AI research right now, often being used to create photorealistic imagery from scratch.

It isn’t just spitting out random articles, either. Grover is highly parameterized, meaning its output depends closely on its input. So if you tell it to create a fake article about a study linking vaccines and autism spectrum disorders, you are also free to specify that the article should seem as if it appeared on CNN, Fox News, or even TechCrunch.

I generated a few articles, which I’ve pasted at the bottom of this one, but here’s the first bit of an example:

Serial entrepreneur Dennis Mangler raises 6M to create blockchain-based drone delivery

May 29, 2019 – Devin Coldewarg

Drone delivery — not so new, and that raises a host of questions: How reliable is the technology? Will service and interference issues flare up?

Drone technology is changing a lot, but its most obvious use — package delivery — has never been perfected on a large scale, much less by a third party. But perhaps that is about to change.

Serial entrepreneur Dennis Mangler has amassed an impressive — by the cybernetic standards of this short-lived and crazy industry — constellation of companies ranging from a top-tier Korean VC to a wholly owned subsidiary of Amazon, ranging from a functional drone repair shop to a developer of commercial drone fleets.

But while his last company (Amazon’s Prime Air) folded, he has decided to try his hand at delivery by drone again with Tripperell, a San Francisco-based venture that makes sense of the cryptocurrency token space to create a bridge from blockchain to delivery.

The system they’re building is sound — as described in a new Medium post, it will first use Yaman Yasmine’s current simple crowdsourced drone repair platform, SAA, to create a drone organization that taps into a mix of overseas networks and domestic industry.

From there the founders will form Tripperell, with commercialized drones running on their own smart contracts to make deliveries.

Not bad considering it only took about ten seconds to appear after I gave it the date, domain, my name (ish), and the headline. (I’d probably tweak that lede, but if you think about it, it does sort of make sense.)

Note that it doesn’t actually know who I am, or what TechCrunch is. But it associates certain data with other data. For instance, one example the team offered was an editorial “in the style of,” to co-opt cover bands’ lingo, Paul Krugman’s New York Times editorials.

“There’s nothing hard-coded — we haven’t told the model who Paul Krugman is. But it learns from reading a lot,” Zellers told me. The system is just trying to make sure that the generated article is sufficiently like the other data it associates with that domain and author. “And it’s going to learn things like, ‘Paul Krugman’ tends to talk about ‘economics,’ without us telling it that he’s an economist.”

It’s hard to say how much it will attempt to affect a given author’s style — that may or may not be something it “noticed,” and AI models are notoriously opaque to analysis. Its style aping goes beyond the author; it even went so far as to create the inter-paragraph “Read more” links in a “Fox News” article I generated.

But this facility in creating articles rests on the ability to tell when an article is not convincing — that’s the “discriminator” that evaluates whether the output of the “generator” is any good. So what happens if you feed the discriminator other stuff? Turns out it’s better than any other AI system right now, at least within the limits of the tasks they tested it on, at determining what’s fake and what’s real.

Natural language limitations

Naturally Grover is best at detecting its own fake articles, since in a way the agent knows its own processes. But it can also detect those made by other models, such as OpenAI’s GPT-2, with high accuracy. This is because current text-generation systems share certain weaknesses, and with a few examples those weaknesses become even more obvious to the discriminator.

“These models have to make one of two bad choices. The first bad option is you just trust the model,” Zellers said. In this case, you get a sort of error-compounding issue where a single bad choice, which is inevitable given the number of choices it has to make, leads to another bad one, and another, and so on. “Without supervision they often just go off the rails.”

“The other choice is to play it a bit safer,” Zellers explained, citing OpenAI’s decision to have the generator create dozens of options and pick the most likely one. This conservative approach avoids unlikely word combinations or phrases — but as Zellers points out, “human speech is a mix of high probability and low probability words. If I knew what you were going to tell me, you wouldn’t be speaking. So there have to be some things that are hard to anticipate.”

These and other habits in text generation algorithms make it possible for Grover to identify generated articles with 92 percent accuracy.
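
You can get a crude feel for this effect with an off-the-shelf language model: text that is uniformly “too likely,” token after token, is a telltale of a machine playing it safe. A minimal sketch using the public GPT-2 (not Grover’s actual discriminator):

```python
# Score text by its average per-token log-probability under GPT-2.
# Machine-generated text tends to score suspiciously high (less negative).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def mean_logprob(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean cross-entropy per token
    return -loss.item()

print(mean_logprob("The quick brown fox jumps over the lazy dog."))
```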

And no, you’re very clever, but you can’t just take the ones it doesn’t detect and sort of breed them together to make more convincing ones. As it turns out, this type of strategy doesn’t actually help a lot — the resulting “super-algorithms” still stumble in similar ways.

Self-extinguishing danger

On the face of it, Grover seems like a pretty dangerous tool. With a bit of tweaking the articles it created for me could easily pass the smell test of a casual reader unfamiliar with the topic. So why is the team releasing it and the dataset it’s based on?

First of all, it’s not just going to be an app you download — “We want researchers to easily be able to use the model, but we’re not making it completely public,” Zellers clarified. But even so, the likelihood of it being used for evil is counterintuitively low.

“If you just wanted to write ten fake news articles, you could just write them yourself,” he points out — and indeed, it’s hard to imagine some mastermind going to all this trouble just to generate a handful. “But if you want to write a hundred thousand, you could use our tool — but the more articles we have from an adversary, the easier it is to detect that adversary.” So it would be a self-defeating plot resulting in a sort of “known fake news” that’s easy to flag.

That assumes, however, that there’s a method for applying algorithms like Grover’s to news at large, or that individuals are motivated to question or verify articles they read in the first place. Unfortunately that’s not anywhere near the case.

“This is a totally non machine learning related question,” Zellers admitted. “How do we make this useful for people? How do we make it so when people read news online, they can check if it’s fake, and want to?”

No AI can generate a good answer to that prompt. Here’s hoping we humans are up to the task.


Appendix

I generated several articles to study, and some were better than others. I didn’t save all of them, but I’ve put a few below for anyone curious to see what Grover’s output looks like without going and generating their own. The results are entertaining and far from nonsense, though anyone who reads closely at all would see some red flags. I’ve only slightly formatted for readability and haven’t changed any words.

The first is a redo of the one I quoted above, with a slightly tweaked headline (that’s misinterpreted in an unexpected yet logical way):

Serial founder Dennis Mangler raises 6M to create blockchain-based drone delivery

May 30, 2019 – Devin Coldewey, techcrunch.com

Dennis Mangler of Serial, one of the first open source blockchain-based systems to launch and now a key part of the Ethereum ecosystem, has raised a couple of million dollars in venture funding in order to bring drones and cryptocurrencies together.

Armed with $1.4 million of seed funding, they’re now offering new features for their system to seed investors, including a marketplace for goods sold using the platform. In some ways this could be analogous to the early days of Bitcoin — it had a primary market in petabytes of processing power, and all it took to mine something was an individual machine with the right electricity and gigabytes of RAM. Drones could eventually deliver that stuff just as easily, but they’d also need to run some kind of distributed ledger.

“There’s a tremendous opportunity for companies to seamlessly connect the logistics side of the industry and the blockchain side of the industry,” Mangler told me. “A lot of companies are already already doing it. But we want to supercharge their efforts and help them change how they operate.”

If they can plug the crypto in, where it comes from and why would people care about it as a means of exchanging value or keeping track of it, and what could be done to legitimize or add interest to the platform.

The fundraiser is still going — Mangler sent me some emails, asking that I not make too much of it — but the raise has now reached 6 million. You can find information on the project here.

First, take note of the speed with which this started to sprout. You’d figure in this day and age that looking at how much money was being raised, accredited investors and large corporations would surpass crowdsourced funding — but Mangler says not so fast.

“The coin exchange is going to be enabled in a couple of months,” he told me. “And I believe the push-ups are going to become a new industry before the cryptocurrency market itself is.”

To do that, some smart marketplaces are going to have to be created; however, these might have to function with information and transactions distributed far across the network rather than in clusters running the decentralized network. An air-traffic control system would theoretically be in place as well — a little like Microsoft’s Azure, or Facebook’s Open Graph, but an open blockchain-based variant.

And finally, he says the buzz is you should look at Amazon as a model; they invented the space, and just through focus and sharp execution have pretty much changed it. They need a little time to build it out but they’re getting there.

This one was done in the style of Fox News. There’s no such person as Dr Nicholas Colvin — I checked. Bobby Scott is indeed a Member of Congress – but in Virginia, not Florida.

Multi-year study links vaccines to higher incidence of Autism spectrum disorders

May 29, 2019 – Sofia Ojeda, foxnews.com

Dr. Nicholas Colvin, lead author on a new multi-year study published by the National Institutes of Health, says as a vaccine advocate, he understands the risks and benefits of vaccines in the United States.

“At the core of it, it’s about safety. You know, we have options for our children, and parents have choices in making those choices. And vaccines provide, you know, safety for all those kids, all those families,” Dr. Colvin said.

READ MORE: Autism experts call California vaccine study ‘shaky science’

Colvin and colleagues looked at all medical records of nearly 3 million kids in the first decade of this century. They found girls tend to be more sensitive to the harmful side effects of vaccines than boys.

“Specifically in autism, and other neurodevelopmental disorders in kids, our analyses show that there is higher prevalence of autism in kids that have been vaccinated than in kids that haven’t been vaccinated,” he said.

In fact, people born around 2000 were more likely to have autism or similar neurodevelopmental disorders than kids born in earlier decades.

“And then we also found that younger kids in the United States, those born between 2000 and 2011, had a somewhat higher rate of autism — and that higher rate was driven by girls,” Colvin said.

READ MORE: Trump endorses FL Rep. Bobby Scott’s position on vaccine bill

Dr. Colvin points out the findings aren’t meant to scare parents from vaccines.

“I don’t want to scare people. I just want to make sure parents are aware that there is a risk. We don’t think this is a big risk. It is certainly real and is, you know, consistent with other research, but not in a way that I would say is something that’s life-threatening,” Colvin said.

He also points out that there is no known cause of autism, which is why he says anyone with suspicion or concern should talk to their doctor.

The National Institutes of Health says there is no vaccine for autism at this time. Colvin says that uncertainty has caused some misconceptions and a decrease in vaccine uptake

Lastly, I was interested to see what would happen if I threw a monkey wrench in there:

Founder Daenarys Targaryen raises 17M Series A round for new AI startup offering “self-driving blockchain”

May 29, 2019 – Kenneth Turan, techcrunch.com

One thing about “Game of Thrones” is that the characters are an active group of entrepreneurs, all with new enterprises at the ready when the show’s storylines take off. And it looks like the show’s creators, David Benioff and D.B. Weiss, and the team behind live-streaming game streaming app Twitch are thinking about going the same way, if not longer.

Good behavior indeed. First, the Lannisters get their Hand: Haylie Duff is on board as an executive producer. Today, we learn that Rene Oberyn Martell, one of the “impossible sons” we saw in season six (the name was borrowed from a line in Robert’s Rebellion) has established himself as the new face and voice of a new company called Margaery One.

We learn that Margaery is a decentralized data machine; indeed, she’s acting as the network’s self-appointed captain of the board, wielding primary command authority. Through an AI-powered network of blockchain token dubbed REDL (or “red gold”), she controls an operation that enables her team to develop and collect decentralized data in the real world, secure from the needs of tyrannical governments such as that of King Robert.

It’s a cool little concept, and part of a litany of “Blockchain”-based product launches the team behind the firm is demonstrating and introducing this week at the inaugural Game of Money. As of this writing, the firm has achieved 27 million REDLs (which are tokens comprised of “real” money in the Bitcoin form), which amount to more than $16 million. This meant that by the end of today’s conference, Omo and his team had raised $17 million for its existence, according to the firm’s CEO, Rene Oberyn Martell.

As of today, one of Rene’s institutions, dubbed the Economics Research Centre, has already created value of $3.5 million on the back of crowd-funding. (On each ROSE token, you can purchase a service)

The real-world business side is provided by Glitrex Logistics, which Martell co-founded along with Jon Anderson, an engineer, and the firm’s COO, Lucas Pirkis. They have developed a blockchain-based freight logistics platform that allows shippers to specify “valued goods in your portfolio,” and get information along with prices on things like goods with a certain quality, or untraditional goods such as food and pharmaceuticals.

How will the firm use ROSE tokens? For starters, the aim is to break down the areas where it can have an effect, including distribution and how goods get to market, and build a community for self-improvement and growth.

This echoes comments from Neal Baer, chairman of NBC Entertainment, about the future of distribution. In a recent blog post, he said he hopes that the Internet of Things and artificial intelligence will become integrated to create the new economic system that will follow the loss of “the earnings power of traditional media and entertainment content,” telling readers that the next round of innovation and disruption will be “powered by the Internet of Things.”

If so, this has the whiff of the future of entertainment — not just new revenue sources, but realms of competence, naturally distinct from the impact of algorithm-based algorithms. And while it can be argued that entertainment and fashion are separate, the result could be a complex world where characters rise to the occasion based not on the smarts of the writer but of the cast.

As noted above, you can create your own fake articles at Grover.

FCC passes measure urging carriers to block robocalls by default

The FCC voted at its open meeting this week to adopt an anti-robocall measure, but it may or may not lead to any abatement of this maddening practice — and it might not be free, either. That said, it’s a start towards addressing a problem that’s far from simple and enormously irritating to consumers.

The last two years have seen the robocall problem grow and grow, and although there are steps you can take right now to improve things, they may not totally eliminate the issue or perhaps won’t be available on your plan or carrier.

Under fire for not acting quickly enough in the face of a nationwide epidemic of scam calls, the FCC has taken action about as fast as a federal regulator can be expected to, and there are two main parts to its plan to fight robocalls, one of which was approved today at the Commission’s open meeting.

The first item was proposed formally last month by Chairman Ajit Pai, and although it amounts to little more than nudging carriers, it could be helpful.

Carriers have the ability to apply whatever tools they have to detect and block robocalls before they even reach users’ phones. But it’s possible, if unlikely, that a user may prefer not to have that service active. And carriers have complained that they are afraid blocking calls by default may in fact be prohibited by existing FCC regulations.

The FCC has said before that this is not the case and that carriers should go ahead and opt everyone into these blocking services (one can always opt out), but carriers have balked. The rulemaking approved today basically just makes it crystal clear that carriers are permitted, and indeed encouraged, to opt consumers into call-blocking schemes.

That’s good, but to be clear, Wednesday’s resolution does not require carriers to do anything, nor does it prohibit carriers from charging for such a service — as indeed Sprint, AT&T, and Verizon already do in some form or another. (TechCrunch is owned by Verizon Media, but this does not affect our coverage.)

Commissioner Starks noted in his approving statement that the FCC will be watching the implementation of this policy carefully for the possibility of abuse by carriers.

At my request, the item [i.e. his addition to the proposal] will give us critical feedback on how our tools are performing. It will now study the availability of call blocking solutions; the fees charged, if any, for these services; the effectiveness of various categories of call blocking tools; and an assessment of the number of subscribers availing themselves of available call blocking tools.

A second rule is still gestating, existing right now more or less only as a threat from the FCC should carriers fail to step up their game. The industry has put together a sort of universal caller ID system called STIR/SHAKEN (Secure Telephony Identity Revisited / Secure Handling of Asserted information using toKENs), but has been slow to roll it out. Pai said late last year that if carriers didn’t put it in place by the end of 2019, the FCC would be forced to take regulatory action.
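
Under the hood, STIR/SHAKEN has the originating carrier sign a small token (a “PASSporT,” per RFCs 8225 and 8588) attesting to the caller ID it is passing along, which downstream carriers then verify. Roughly, with illustrative values rather than a real token:

```python
# The rough shape of a SHAKEN PASSporT: a signed JWT attached to a call.
import json, time

header = {
    "alg": "ES256",               # signed with the carrier's certificate key
    "ppt": "shaken",
    "typ": "passport",
    "x5u": "https://cert.example-carrier.com/cert.pem",
}
payload = {
    "attest": "A",                # A = carrier fully attests to the caller's identity
    "orig": {"tn": "12025551000"},
    "dest": {"tn": ["12025552000"]},
    "iat": int(time.time()),
    "origid": "de305d54-75b4-431b-adb2-eb6b9e546014",
}
print(json.dumps({"header": header, "payload": payload}, indent=2))
```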

Why the Commission didn’t simply take regulatory action in the first place is a valid question, and one some Commissioners and others have asked. Be that as it may, the threat is there and seems to have spurred carriers to action. There have been tests, but as yet no carrier has rolled out a working anti-robocall system based on STIR/SHAKEN.

Pai has said regarding these systems that “we [i.e. the FCC] do not anticipate that there would be costs passed on to the consumer,” and it does seem unlikely that your carrier will opt you into a call-blocking scheme that costs you money. But never underestimate the underhandedness and avarice of a telecommunications company. I would not be surprised if new subscribers get this added as a line item or something; watch your bills carefully.

Apple Store designer proposes restoring Notre-Dame as… basically an Apple Store

Eight Inc, the design firm best known for conceptualizing the Apple Store and the now-iconic giant glass cube on 5th Ave in New York, has proposed to restore Notre-Dame’s sadly destroyed roof and spire — with a giant glass roof and spire. I don’t think the French will go for it.

The idea is to recreate the top of the building entirely out of structural glass, which is stronger than normal glass and thus could support itself without any internal framework.

It’s hard to know what to make of the proposal. It seems to me so inappropriate that it borders on parody. Leaving aside the practical concerns of keeping the glass clean and replacing any portion that’s cracked or something, the very idea of capping a gothic cathedral made almost entirely of stone with a giant sunroof seems like the exact opposite of what the church’s creators would have wanted.


Tim Kobe, founder of Eight, disagrees.

“I believe this definitive example of French gothic architecture requires a deep respect and appreciation of the history and intent of the original design,” he told Dezeen. “It should not be about the ego of a new architectural expression but a solution to honor this historic structure.”

I find that statement, especially the part about ego of new architectural expression, a little difficult to swallow when the proposal is to rebuild a nearly thousand-year-old cathedral in the style of an Apple Store.

He called the glass roof and spire “spiritual and luminous,” saying they evoked “the impermanence of architecture and the impermanence of life.”

That seems an odd thing to strive for. I’m not a religious person, but as I understand it the entire idea of a cathedral is to create a permanent, solid representation of the very permanent presence of God and His everlasting kingdom of heaven. Life is fleeting, sure, but giant stone cathedrals that have outlasted empires seem a poor mascot for that fact.

Of course, it must be said that this wouldn’t be the only garish glass structure in the city that traditionalists would hate: The pyramid at the Louvre has attracted great ire for many years now. And it’s much smaller.

The French Senate (like many others) has expressed a desire for the cathedral to be restored to as close to its original state as possible — preferably with something better than centuries-old dry tinder holding up the roof. But President Macron has called for something more than simple reconstruction, and Prime Minister Philippe backs him, especially concerning the spire, which was a relatively late addition and as such isn’t quite as historic as the rest.

A design competition is to be held to create a new spire “adapted to the techniques and the challenges of our era,” which certainly could mean many things and inspire many interesting ideas. Here’s hoping they’re a little better than this one.