Siri will now answer your election questions

Apple’s built-in voice assistant won’t help you figure out who to vote for, but it will be able to update you on different races around the U.S. during election season, as well as deliver live results as votes are counted. The new feature, announced today, is part of Apple News’ 2020 election coverage, which also includes curated news, resources and data from a variety of sources, with the goal of serving readers across the political spectrum.

With the added Siri integration, you’ll be able to ask the assistant both informational queries and those requiring real-time information.

For example, you may ask Siri something like “When are the California primaries?”, which is a more straightforward question, or “Who’s winning the New Hampshire primaries?”, which requires updated information.

Siri will speak its answers aloud in addition to presenting the information visually, which makes the feature useful from an accessibility standpoint, too.

The live results are being delivered via the Associated Press, Apple says. The company is also leveraging the AP’s real-time results in its Apple News app in order to give county-by-county results and a national map tracking candidate wins by each state primary, among other things.

As it has done in previous years, Apple’s news editorial team has added special coverage of the U.S. election to its app by working with news partners. This year, Apple’s coverage comes from news organizations including ABC News, CBS News, CNN, FiveThirtyEight, Fox News, NBC News, ProPublica, Reuters, The Los Angeles Times, The New York Times, The Wall Street Journal, The Washington Post, TIME, USA Today and others.

In Apple News, readers are able to learn about candidates and their positions, track major election moments — like the debates, conventions and Super Tuesday — and stay on top of election news and analysis all the way through election night in the U.S. and the subsequent presidential inauguration. A partnership with ABC News announced in December will also bring video coverage, including real-time streams, into the app.

The Siri feature draws on Apple News for its answers and offers a link to “Full Coverage” in the Apple News app, if you want to learn more.

The feature appears to still be rolling out. In tests, Siri was able to answer some questions but defaulted to web results for others, as before. A staggered rollout is standard for Apple launches, however, as new features take time to reach all users.

Spotify gains Siri support on iOS 13, arrives on Apple TV

In a long-awaited move, Spotify announced this morning that its iOS 13 app would now offer Siri support and that its streaming music service would also become available on Apple TV. That means you can now request your favorite music or podcasts using Siri voice commands, prefacing the command with “Hey Siri, play…,” followed by the audio you want, and concluding with “on Spotify.”

The Siri support had been spotted earlier while in beta testing, but the company hadn’t confirmed when it would be publicly available.

According to Spotify, the Siri support will also work over Apple AirPods, on CarPlay, and via AirPlay on Apple HomePod.

In addition, the Spotify iOS app update includes support for the iPhone’s new data-saver mode, which helps when bandwidth is limited.

Spotify is also today launching on Apple TV, joining other Spotify apps for TV platforms, including Roku, Android TV, Samsung Tizen, and Amazon Fire TV.

The app updates are still rolling out, so you may need to wait to take advantage of the Apple TV support and other new features.

The lack of Siri support for Spotify was not the streaming music service’s fault — it wasn’t until iOS 13 that such support even became an option. With the new mobile operating system launched in September, Apple finally opened up its SiriKit framework to third-party apps, allowing end-users to better control their apps using voice commands. That includes audio playback on music services like Spotify, as well as the ability to like and dislike tracks, skip or go to the next song, and get track information.
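
Under the hood, that support comes via SiriKit’s media intents: an app ships an Intents extension whose handler receives the parsed request and hands playback back to the app. The sketch below is a minimal illustration of the pattern, not Spotify’s actual code; the class name is hypothetical, and a real handler would also resolve the requested song or podcast against the app’s own catalog.

```swift
import Intents

// Minimal sketch of an iOS 13 SiriKit media-intents handler (hypothetical
// class name, not Spotify's real implementation). A request like
// "Hey Siri, play jazz on Spotify" arrives here as an INPlayMediaIntent
// once the system has routed it to the app's Intents extension.
class PlayMediaIntentHandler: NSObject, INPlayMediaIntentHandling {
    func handle(intent: INPlayMediaIntent,
                completion: @escaping (INPlayMediaIntentResponse) -> Void) {
        // Responding with .handleInApp asks the system to launch the app in
        // the background, where playback of the resolved media items begins.
        completion(INPlayMediaIntentResponse(code: .handleInApp, userActivity: nil))
    }
}
```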

Pandora, Google Maps and Waze were among the first to adopt Siri integration when it became available in iOS 13 — a clear indication that some of Apple’s chief rivals have been ready and willing to launch Siri support as soon as it was possible.

Though the integration with Siri will be useful for end-users and beneficial to Spotify’s business, it may also weaken the streaming company’s antitrust claims against Apple.

Spotify has long argued that Apple engages in anti-competitive business practices on its app platform, which it says is designed to favor Apple’s own apps and services, like Apple Music. Among its chief complaints was the inability of third-party apps to work with Siri, which gave Apple’s own apps a favored position. Spotify also argues that the 30% revenue share required by the App Store hampers its growth potential.

The streamer filed an antitrust complaint against Apple in the European Union in March. And now, U.S. lawmakers have reached out to Spotify to request information as a part of an antitrust probe here in the states, reports claim. 

Despite its new ability to integrate with Siri in iOS 13, Spotify could argue that it’s still not enough. Users will have to say “on Spotify” to take advantage of the new functionality, instead of being able to set their default music app to Spotify, which would be easier. It could also point out that the support is only available to iOS 13 devices, not the entire iOS market.

Along with the Apple-related news, Spotify today also announced support for the Google Nest Hub Max, Sonos Move, Sonos One SL and Samsung Galaxy Fold, as well as preinstallation on Michael Kors Access, Diesel and Emporio Armani Wear OS smartwatches.

 

Apple still has work to do on privacy

There’s no doubt that Apple’s self-polished reputation for privacy and security has taken a bit of a battering recently.

On the security front, Google researchers just disclosed a major flaw in the iPhone, finding a number of malicious websites that could hack into a victim’s device by exploiting a set of previously undisclosed software bugs. When visited, the sites infected iPhones with an implant designed to harvest personal data — such as location, contacts and messages.

As flaws go, it looks like a very bad one. And when security fails so spectacularly, all those shiny privacy promises naturally go straight out the window.

And while that particular cold-sweat-inducing iPhone security snafu has now been patched, it does raise questions about what else might be lurking out there. More broadly, it also tests the generally held assumption that iPhones are superior to Android devices when it comes to security.

Are we really so sure that thesis holds?

But imagine for a second you could unlink security considerations and purely focus on privacy. Wouldn’t Apple have a robust claim there?

On the surface, the notion that Apple has a stronger claim to privacy than Google — an adtech giant that makes its money by pervasively profiling internet users, while Apple sells premium hardware and services (now including, essentially, ‘privacy as a service‘) — seems a safe (or, well, safer) assumption. Or at least it does until iOS security fails spectacularly and leaks users’ privacy anyway. Then, of course, affected iOS users can just kiss their privacy goodbye. That’s why this is a thought experiment.

But Apple is running into problems on privacy directly, too.

 

To wit: Siri, its nearly decade-old voice assistant technology, now sits under a penetrating spotlight — having been revealed to contain a not-so-private ‘mechanical turk’ layer of actual humans paid to listen to the stuff people tell it. (Or indeed the personal stuff Siri accidentally records.)

Daily Crunch: Apple changes audio review program

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. Apple is turning Siri audio clip review off by default and bringing it in house

Following reports that contractors were reviewing customers’ Siri audio samples for quality control, Apple says it has revamped the process. Moving forward, users have to opt-in to participate, and the audio samples will only be reviewed by Apple employees.

“As a result of our review, we realize we haven’t been fully living up to our high ideals, and for that we apologize,” the company said.

2. Mozilla CEO Chris Beard will step down at the end of the year

Mozilla is currently seeking a replacement for Beard, though he’s agreed to stay on through year’s end. Executive chairwoman Mitchell Baker announced in her own post that she’s agreed to step into an interim role if needed.

3. Federal grand jury indicts Paige Thompson on two counts related to the Capital One data breach

Thompson allegedly created software that allowed her to see which customers of a cloud computing company (although the indictment does not name the company, it has been identified as Amazon Web Services) had misconfigured their firewalls, and as a result accessed data from Capital One and more than 30 others.

A woman holding a Juul e-cigarette in Montreal. (Photo: Josie_Desmarais/Getty Images)

4. Juul introduces new POS standards to restrict sales to minors

The Retail Access Control Standards program, or RACS for short, automatically locks the point-of-sale system each time a Juul product is scanned, until a valid adult ID is scanned as well.

5. Apple expands access to official repair parts for third-party shops

Until today, if you were a non-authorized repair shop, you couldn’t get official parts. This could result in mixed experiences for customers.

6. Spotify aims to turn podcast fans into podcast creators with ‘Create podcast’ test

The streaming music service is testing a new ‘Create podcast’ feature that shows up above a user’s list of subscribed podcasts. It directs them to download Anchor, the podcast creation app that Spotify acquired in February.

7. How UK VCs are managing the risk of a ‘no deal’ Brexit

The prevailing view among investors is that, for founders, Brexit means business as usual, albeit with added uncertainty. One response: “Resilience is the mother of entrepreneurship!” (Extra Crunch membership required.)

Nike Huaraches get updated for the smartphone age

Ever since self-lacing sneakers went from Back to the Future fantasy to real-world wearable tech, Nike has promised that the Adapt line is more than just a one-off gimmick. Slowly but surely, the company has made its self-lacing motor technology more accessible, most notably through its long-awaited Adapt BB sneakers, which arrived earlier this year.

The company announced today that it will be bringing the tech to its Huarache line next month, with the release of the Adapt Huaraches. Introduced in 1991, the line was built around a neoprene bootie derived from water skis. The new shoes feature a similar structure, updated for 2019 styling and paired with smartphone integration.

Like the Adapt BB, the new Huaraches feature a pair of LED lights in the sole that change color based on their connection to the device. The mobile app, meanwhile, is used to adjust the lacing fit. FitAdapt features a bunch of different tension levels, based on different situations. The shoes also, notably, can be used with Apple Watch and Siri, meaning you can ask Apple’s assistant to tighten up your laces.


“This makes the Nike Adapt Huarache a double-barreled revolution,” Nike writes in a release. “First, it brings a storied franchise into the future. Second, and most significant, it propels Nike FitAdapt into the fast-paced, quick-shifting world of the everyday athlete — offering the personalized comfort needed in, say, the sprint to catch the bus, before seamlessly shifting fit as you settle into an empty seat with a sigh of quiet relief.”

The shoes are due out September 13. No pricing yet, but it seems likely they’ll be in the same ballpark as the $350 BBs.

Apple is turning Siri audio clip review off by default and bringing it in house

The top-line news is that Apple is making changes to the way Siri audio review, or ‘grading’, works across all of its devices. First, it is making audio review an explicitly opt-in process in an upcoming software update. This will apply to every current and future user of Siri.

Second, only Apple employees, not contractors, will review any of this opt-in audio in an effort to bring any process that uses private data closer to the company’s core processes.

Apple has released a blog post outlining some Siri privacy details that may not have been common knowledge, as they were previously described only in security white papers.

Apple apologizes for the issue.

“As a result of our review, we realize we haven’t been fully living up to our high ideals, and for that we apologize. As we previously announced, we halted the Siri grading program. We plan to resume later this fall when software updates are released to our users — but only after making the following changes…”

It then outlines three changes being made to the way Siri grading works.

  • First, by default, we will no longer retain audio recordings of Siri interactions. We will continue to use computer-generated transcripts to help Siri improve.
  • Second, users will be able to opt in to help Siri improve by learning from the audio samples of their requests. We hope that many people will choose to help Siri get better, knowing that Apple respects their data and has strong privacy controls in place. Those who choose to participate will be able to opt out at any time.
  • Third, when customers opt in, only Apple employees will be allowed to listen to audio samples of the Siri interactions. Our team will work to delete any recording which is determined to be an inadvertent trigger of Siri.

Apple is not implementing any of these changes, nor lifting the suspension on the Siri grading process it halted, until the software update that will allow users to opt in becomes available for its operating systems. Once people update to the new versions of its OS, they will have the chance to say yes to the grading process that uses audio recordings to help verify requests that users make of Siri. This effectively means that every user of Siri will be opted out of this process once the update goes live and is installed.

Apple says that it will continue using anonymized, computer-generated transcripts of your requests to feed its machine learning engines with data, in a fashion similar to other voice assistants. These transcripts may be subject to Apple employee review.

Amazon and Google have faced similar revelations that their assistants were being helped along by human review of audio, and they have begun putting opt-ins in place as well.

Apple is making changes to the grading process itself as well, noting that, for example, “the names of the devices and rooms you setup in the Home app will only be accessible by the reviewer if the request being graded involves controlling devices in the home.”

A story in The Guardian in early August outlined how Siri audio samples were sent to contractors Apple had hired to evaluate the quality of responses and transcription that Siri produced for its machine learning engines to work on. The practice is not unprecedented, but it certainly was not made as clear as it should have been in Apple’s privacy policies that humans were involved in the process. There was also the matter that contractors, rather than employees, were being used to evaluate these samples. One contractor described the samples as containing sensitive and private information that, in some cases, could be tied to a user, even with Apple’s anonymizing processes in place.

In response, Apple halted the grading process worldwide while it reviewed the process. This post and updates to its process are the result of that review.

Apple says that only around 0.2% of all Siri requests got this audio treatment in the first place, but given that there are some 15 billion requests per month, quick math tells us that even a statistically small share adds up to quite a high raw number.
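
For a rough sense of that scale, here is a back-of-the-envelope calculation using the figures above (15 billion monthly requests, a 0.2% sampling rate):

```swift
// Back-of-the-envelope math using the figures cited above.
let monthlyRequests = 15_000_000_000.0   // ~15B Siri requests per month
let gradedShare = 0.002                  // ~0.2% sampled for human grading
let gradedPerMonth = monthlyRequests * gradedShare
print(gradedPerMonth)                    // 30000000.0, i.e. roughly 30 million clips a month
```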

The move away from contractors was signaled by Apple releasing contractors in Europe, as noted by Alex Hern earlier on Wednesday.

Apple is also publishing an FAQ on how Siri’s privacy controls fit in with its grading process; you can read that in full here.

The blog post from Apple and the FAQ provide consumers with some detail about how Apple handles the grading process, how it minimizes the data given to reviewers and how Siri privacy is preserved.

Apple’s work on Siri has, from the beginning, focused enormously on on-device processing whenever possible. This has led a lot of experts to say that Apple was trading raw capability for privacy by eschewing the data-center-heavy processes of assistants from companies like Amazon or Google in favor of keeping a ‘personal cloud’ of data on device. Sadly, the lack of transparency around human review processes and the use of contractors undercut all of that foundational work. So it’s good that Apple is cranking its privacy policies on grading and improvement back up past the industry standard. That is where it needs to be.

The fact is that no other assistant product is nearly as privacy focused as Siri — as I said above, some would say to the point of hampering its ability to advance as quickly. Hopefully this episode leads to better transparency on the part of Apple when humans get involved in processes that are presumed to be fully automated.

Most people assume that ‘AI’ or ‘machine learning’ means computers only, but the sad fact is that most of those processes are still intensely human-driven, because AI (which doesn’t really exist) and ML are still pretty crap. Humans will be involved in making them seem smarter for a very long time yet.

The BBC is developing a voice assistant, code named ‘Beeb’

The BBC — aka, the British Broadcasting Corporation, aka the Beeb, aka Auntie — is getting into the voice assistant game.

The Guardian reports that the plan is to launch an Alexa rival, which has been given the working title ‘Beeb’ and will apparently be light on features, given the Corp’s relatively slender developer resources versus the major global tech giants.

The BBC’s own news site says the digital voice assistant will launch next year without any proprietary hardware to house it. Instead the corporation is designing the software to work on “all smart speakers, TVs and mobiles”.

Why is a publicly funded broadcaster ploughing money into developing an AI when the market is replete with commercial offerings — from Amazon’s Alexa to Google’s Assistant, Apple’s Siri and Samsung’s Bixby, to name a few? The intent is to “experiment with new programmes, features and experiences without someone else’s permission to build it in a certain way”, a BBC spokesperson told BBC News.

The corporation is apparently asking its own staff to contribute voice data to help train the AI to understand the country’s smorgasbord of regional accents.

“Much like we did with BBC iPlayer, we want to make sure everyone can benefit from this new technology, and bring people exciting new content, programmes and services — in a trusted, easy-to-use way,” the spokesperson added. “This marks another step in ensuring public service values can be protected in a voice-enabled future.”

While at first glance the move looks reactionary and defensive, set against the years of dev already ploughed into cutting edge commercial voice AIs, the BBC has something those tech giant rivals lack: Not just regional British accents on tap — but easy access to a massive news and entertainment archive to draw on to design voice assistants that could serve up beloved personalities as a service.

Imagine being able to summon the voice of Tom Baker, aka Doctor Who, to tell you what the (cosmic) weather’s like — or have the Dad’s Army cast of characters chip in to read out your to-do list. Or get a summary of the last episode of The Archers from a familiar Ambridge resident.

Or what about being able to instruct ‘Beeb’ to play some suitably soothing or dramatic sound effects to entertain your kids?

On one level a voice AI is just a novel delivery mechanism. The BBC looks to have spotted that — and certainly does not lack for rich audio content that could be repackaged to reach its audience on verbal command and extend its power to entertain and delight.

When it comes to rich content, the same cannot be said of the tech giants who have pioneered voice AIs.

There have been some attempts to force humor (AIs that crack bad jokes) and/or shoehorn in character — largely flat-footed. As well as some ethically dubious attempts to pass off robot voices as real. All of which is to be expected, given they’re tech companies not entertainers. Dev not media is their DNA.

The BBC is coming at the voice assistant concept from the other way round: Viewing it as a modern mouthpiece for piping out more of its programming.

So while Beeb can’t hope to compete at the same technology feature level as Alexa and all the rest, the BBC could nonetheless show the tech giants a trick or two about how to win friends and influence people.

At the very least it should give their robotic voices some much needed creative competition.

It’s just a shame the Beeb didn’t tickle us further by christening its proto AI ‘Auntie’. A crisper two syllable trigger word would be hard to utter…

Amazon’s lead EU data regulator is asking questions about Alexa privacy

Amazon’s lead data regulator in Europe, Luxembourg’s National Commission for Data Protection, has raised privacy concerns about its use of manual human reviews of Alexa AI voice assistant recordings.

A spokesman for the regulator confirmed in an email to TechCrunch it is discussing the matter with Amazon, adding: “At this stage, we cannot comment further about this case as we are bound by the obligation of professional secrecy.” The development was reported earlier by Reuters.

We’ve reached out to Amazon for comment.

Amazon’s Alexa voice AI, which is embedded in a wide array of hardware — from the company’s own brand Echo smart speaker line to an assortment of third party devices (such as this talkative refrigerator or this oddball table lamp) — listens pervasively for a trigger word which activates a recording function, enabling it to stream audio data to the cloud for processing and storage.

However, trigger-word activated voice AIs have been shown to be prone to accidental activation, and a device may well be in use in a multi-person household. So there’s always a risk of these devices recording any audio in their vicinity, not just intentional voice queries…

In a nutshell, the AIs’ inability to distinguish between intentional interactions and stuff they overhear means they are natively prone to eavesdropping — hence the major privacy concerns.
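
To make that failure mode concrete, the sketch below is a deliberately simplified model of a trigger-word loop. It is not any vendor’s actual implementation; the detector, threshold and function names are all illustrative. The point is that activation hinges on a confidence score, so a false positive opens the microphone to the cloud just as readily as a real request.

```swift
import Foundation

// Toy model of a trigger-word assistant (illustrative only, not Amazon's or
// anyone else's real code). Audio frames are scored continuously; anything
// above the threshold, intentional or not, starts streaming to the cloud.
struct WakeWordDetector {
    let threshold: Double

    // Stand-in for a small on-device keyword-spotting model.
    func score(_ frame: [Float]) -> Double {
        return Double(frame.map { abs($0) }.max() ?? 0)
    }
}

func listen(detector: WakeWordDetector,
            nextFrame: () -> [Float],
            streamToCloud: ([Float]) -> Void) {
    while true {
        let frame = nextFrame()
        if detector.score(frame) >= detector.threshold {
            // From here on, whatever the microphone picks up is captured,
            // whether the activation was intentional or accidental.
            streamToCloud(frame)
        }
    }
}
```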

These concerns have been dialled up by recent revelations that tech giants — including Amazon, Apple and Google — use human workers to manually review a proportion of audio snippets captured by their voice AIs, typically for quality purposes, such as trying to improve the performance of voice recognition across different accents or environments. But that means actual humans are listening to what might be highly sensitive personal data.

Earlier this week Amazon quietly added an option to the settings of the Alexa smartphone app to allow users to opt out of their audio snippets being added to a pool that may be manually reviewed by people doing quality control work for Amazon — having not previously informed Alexa users of its human review program.

The policy shift followed rising attention on the privacy of voice AI users — especially in Europe.

Last month thousands of recordings of users of Google’s AI assistant were leaked to the Belgian media which was able to identify some of the people in the clips.

A data protection watchdog in Germany subsequently ordered Google to halt manual reviews of audio snippets.

Google responded by suspending human reviews across Europe, while its lead data watchdog in Europe, the Irish DPC, told us it’s “examining” the issue.

Separately, in recent days, Apple has also suspended human reviews of Siri snippets — doing so globally, in its case — after a contractor raised privacy concerns in the UK press over what Apple contractors are privy to when reviewing Siri audio.

The Hamburg data protection agency which intervened to halt human reviews of Google Assistant snippets urged its fellow EU privacy watchdogs to prioritize checks on other providers of language assistance systems — and “implement appropriate measures” — naming both Apple and Amazon.

In the case of Amazon, scrutiny from European watchdogs looks to be fast dialling up.

At the time of writing it is the only one of the three tech giants not to have suspended human reviews of voice AI snippets, either regionally or globally.

In a statement provided to the press at the time it changed Alexa settings to offer users an opt-out from the chance of their audio being manually reviewed, Amazon said:

We take customer privacy seriously and continuously review our practices and procedures. For Alexa, we already offer customers the ability to opt-out of having their voice recordings used to help develop new Alexa features. The voice recordings from customers who use this opt-out are also excluded from our supervised learning workflows that involve manual review of an extremely small sample of Alexa requests. We’ll also be updating information we provide to customers to make our practices more clear.

Daily Crunch: Apple responds to Siri privacy concerns

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. Apple suspends Siri response grading after privacy concerns

After The Guardian ran a story last week about how Siri recordings are used for quality control, Apple says it’s suspending the program worldwide while it reviews the process.

The practice, known as grading, involves sharing audio snippets with contractors, who determine whether Siri is hearing the requests accurately. Apple says that in the future, users will be able to choose whether or not they participate in the grading.

2. DoorDash is buying Caviar from Square in a deal worth $410 million

Square bought Caviar about five years ago in a deal worth about $90 million. Now, Caviar has found a new home with DoorDash.

3. President throws latest wrench in $10B JEDI cloud contract selection process

Throughout the months-long selection process, the Pentagon repeatedly denied accusations that the contract was somehow written to make Amazon a favored vendor, but The Washington Post reports President Trump has asked his newly appointed defense secretary to examine the process.

Twitch streamer and professional gamer Tyler “Ninja” Blevins streams during Ninja Vegas ’18 at Esports Arena Las Vegas at Luxor Hotel and Casino on April 21, 2018 in Las Vegas, Nevada. (Photo by Ethan Miller/Getty Images)

4. Following Ninja’s news, Mixer pops to top of the App Store’s free charts

Yesterday, Tyler “Ninja” Blevins announced that he’s leaving Twitch, moving his streaming career over to Microsoft’s Mixer platform. This morning, Mixer shot to the top of the App Store’s free app charts.

5. Google ordered to halt human review of voice AI recordings over privacy risks

Apple isn’t the only company to face scrutiny over its handling of user audio recordings.

6. UrbanClap, India’s largest home services startup, raises $75M

Through its platform, UrbanClap matches service people such as cleaners, repair staff and beauticians with customers across 10 cities in India, as well as Dubai and Abu Dhabi.

7. Why AWS gains big storage efficiencies with E8 acquisition

The team at Amazon Web Services is always looking to find an edge and reduce the costs of operations in its data centers. (Extra Crunch membership required.)

Google ordered to halt human review of voice AI recordings over privacy risks

A German privacy watchdog has ordered Google to cease manual reviews of audio snippets generated by its voice AI. 

This follows a leak last month of scores of audio snippets from the Google Assistant service. A contractor working as a Dutch language reviewer handed more than 1,000 recordings to the Belgian news site VRT which was then able to identify some of the people in the clips. It reported being able to hear people’s addresses, discussion of medical conditions, and recordings of a woman in distress.

The Hamburg data protection authority told Google last month of its intention to use Article 66 powers of the General Data Protection Regulation (GDPR) to begin an “urgency procedure”.

Article 66 allows a DPA to order data processing to stop if it believes there is “an urgent need to act in order to protect the rights and freedoms of data subjects”.

This appears to be the first use of the power since GDPR came into force across the bloc in May last year.

Google says it responded to the DPA on July 26 to say it had already ceased the practice — taking the decision to manually suspend audio reviews of Google Assistant across the whole of Europe, and doing so on July 10, after learning of the data leak.

Last month Google also informed its lead privacy regulator in Europe, the Irish Data Protection Commission (DPC), of the breach; the DPC told us it is now “examining” the issue that’s been highlighted by Hamburg’s order.

The Irish DPC’s head of communications, Graham Doyle, said Google Ireland filed an Article 33 breach notification for the Google Assistant data “a couple of weeks ago”, adding: “We note that as of 10 July Google Ireland ceased the processing in question and that they have committed to the continued suspension of processing for a period of at least three months starting today (1 August). In the meantime we are currently examining the matter.”

It’s not clear whether Google will be able to reinstate manual reviews in Europe in a way that’s compliant with the bloc’s privacy rules. The Hamburg DPA writes in a statement [in German] on its website that it has “significant doubts” about whether Google Assistant complies with EU data-protection law.

“We are in touch with the Hamburg data protection authority and are assessing how we conduct audio reviews and help our users understand how data is used,” Google’s spokesperson also told us.

In a blog post published last month after the leak, Google product manager for search, David Monsees, claimed manual reviews of Google Assistant queries are “a critical part of the process of building speech technology”, couching them as “necessary” to creating such products.

“These reviews help make voice recognition systems more inclusive of different accents and dialects across languages. We don’t associate audio clips with user accounts during the review process, and only perform reviews for around 0.2% of all clips,” Google’s spokesperson added now.

But it’s far from clear whether human review of audio recordings captured by any of the myriad always-on voice AI products and services now on the market can be made compatible with Europeans’ fundamental privacy rights.

These AIs typically have trigger words for activating the recording function which streams audio data to the cloud. But the technology can easily be accidentally triggered — and leaks have shown they are able to hoover up sensitive and intimate personal data not just of their owner but anyone in their vicinity (which of course includes people who never got within sniffing distance of any T&Cs).

On its website the Hamburg DPA says the proceedings against Google are intended to protect the privacy rights of affected users in the immediate term, noting that GDPR allows concerned authorities in EU Member States to issue orders lasting up to three months.

In a statement Johannes Caspar, the Hamburg commissioner for data protection, added: “The use of language assistance systems in the EU must comply with the data protection requirements of the GDPR. In the case of the Google Assistant, there are currently significant doubts. The use of language assistance systems must be done in a transparent way, so that an informed consent of the users is possible. In particular, this involves providing sufficient information and transparently informing those concerned about the processing of voice commands, but also about the frequency and risks of mal-activation. Finally, due regard must be given to the need to protect third parties affected by the recordings. First of all, further questions about the functioning of the speech analysis system have to be clarified. The data protection authorities will then have to decide on definitive measures that are necessary for a privacy-compliant operation. ”

The DPA also urges other regional privacy watchdogs to prioritize checks on other providers of language assistance systems — and “implement appropriate measures” — name-checking rival providers of voice AIs, Apple and Amazon.

This suggests there could be wider ramifications for other tech giants operating voice AIs in Europe flowing from this single notification of an Article 66 order.

The real enforcement punch packed by GDPR is not the headline-grabbing fines, which can scale as high as 4% of a company’s global annual turnover — it’s the power that Europe’s DPAs now have in their regulatory toolbox to order that data stops flowing.

“This is just the beginning,” one expert on European data protection legislation told us, speaking on condition of anonymity. “The Article 66 chest is open and it has a lot on offer.”

In a sign of the potential scale of the looming privacy problems for voice AIs, Apple also said earlier today that it’s suspending a similar human review ‘quality control program’ for its Siri voice assistant.

The move, which does not appear to be linked to any regulatory order, follows a Guardian report last week detailing claims by a whistleblower that contractors working for Apple ‘regularly hear confidential details’ on Siri recordings, such as audio of people having sex and identifiable financial details, regardless of the processes Apple uses to anonymize the records.

Apple’s suspension of manual reviews of Siri snippets applies worldwide.