Apple still has work to do on privacy

There’s no doubt that Apple’s self-polished reputation for privacy and security has taken a bit of a battering recently.

On the security front, Google researchers just disclosed a major flaw in the iPhone, finding a number of malicious websites that could hack into a victim’s device by exploiting a set of previously undisclosed software bugs. When visited, the sites infected iPhones with an implant designed to harvest personal data — such as location, contacts and messages.

As flaws go, it looks like a very bad one. And when security fails so spectacularly, all those shiny privacy promises naturally go straight out the window.

And while that particular cold-sweat-inducing iPhone security snafu has now been patched, it does raise questions about what else might be lurking out there. More broadly, it also tests the generally held assumption that iPhones are superior to Android devices when it comes to security.

Are we really so sure that thesis holds?

But imagine for a second you could unlink security considerations and purely focus on privacy. Wouldn’t Apple have a robust claim there?

On the surface, the notion that Apple has a stronger claim to privacy than Google seems a safe (or, well, safer) assumption: Google is an adtech giant that makes its money by pervasively profiling internet users, whereas Apple sells premium hardware and services (including, essentially, ‘privacy as a service’ these days). Or at least it seems safe until iOS security fails spectacularly and leaks users’ privacy anyway, at which point affected iOS users can just kiss their privacy goodbye. That’s why this is a thought experiment.

But even on privacy alone, Apple is running into problems.

To wit: Siri, its nearly decade-old voice assistant technology, now sits under a penetrating spotlight — having been revealed to contain a not-so-private ‘mechanical turk’ layer of actual humans paid to listen to the stuff people tell it. (Or indeed the personal stuff Siri accidentally records.)

Daily Crunch: Apple changes audio review program

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories.

1. Apple is turning Siri audio clip review off by default and bringing it in house

Following reports that contractors were reviewing customers’ Siri audio samples for quality control, Apple says it has revamped the process. Moving forward, users have to opt in to participate, and the audio samples will only be reviewed by Apple employees.

“As a result of our review, we realize we haven’t been fully living up to our high ideals, and for that we apologize,” the company said.

2. Mozilla CEO Chris Beard will step down at the end of the year

Mozilla is currently seeking a replacement for Beard, though he’s agreed to stay on through year’s end. Executive chairwoman Mitchell Baker announced in her own post that she’s agreed to step into an interim role if needed.

3. Federal grand jury indicts Paige Thompson on two counts related to the Capital One data breach

Thompson allegedly created software that allowed her to see which customers of a cloud computing company (although the indictment does not name the company, it has been identified as Amazon Web Services) had misconfigured their firewalls, and as a result accessed data from Capital One and more than 30 others.

A woman holding a Juul e-cigarette in Montreal. (Photo: Josie_Desmarais/Getty Images)

4. Juul introduces new POS standards to restrict sales to minors

The Retail Access Control Standards program, or RACS for short, automatically locks the point-of-sale system each time a Juul product is scanned, until a valid adult ID is scanned as well.
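In other words, the mechanism is a lock-until-verified rule at the register. Here is a minimal, purely illustrative sketch of that flow; the class and method names are hypothetical, not taken from Juul’s actual RACS specification:

```python
# Illustrative sketch of the lock-until-ID flow described above;
# names are hypothetical, not Juul's actual RACS implementation.

class Register:
    def __init__(self):
        self.locked = False

    def scan_product(self, category: str) -> None:
        if category == "vape":
            self.locked = True   # register freezes on any vape product

    def scan_id(self, valid: bool, age: int) -> None:
        if self.locked and valid and age >= 21:
            self.locked = False  # only a valid adult ID releases the sale


pos = Register()
pos.scan_product("vape")
assert pos.locked                  # checkout blocked until ID check
pos.scan_id(valid=True, age=25)
assert not pos.locked              # sale can proceed
```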

5. Apple expands access to official repair parts for third-party shops

Until today, if you were a non-authorized repair shop, you couldn’t get official parts. This could result in mixed experiences for customers.

6. Spotify aims to turn podcast fans into podcast creators with ‘Create podcast’ test

The streaming music service is testing a new ‘Create podcast’ feature that shows up above a user’s list of subscribed podcasts. It directs them to download Anchor, the podcast creation app that Spotify acquired in February.

7. How UK VCs are managing the risk of a ‘no deal’ Brexit

The prevailing view among investors about founders is that Brexit means uncertain business as usual. One response: “Resilience is the mother of entrepreneurship!” (Extra Crunch membership required.)

Nike Huaraches get updated for the smartphone age

Ever since they went from Back to the Future fantasy to real-world wearable tech, Nike has promised that the Adapt line was more than just a one-off gimmick. Slowly but surely, the company has made its self-lacing motor technology more accessible, most notably through its long-awaited Adapt BB sneakers, which arrived earlier this year.

The company announced today that it will be bringing the tech to its Huarache line next month, with the release of the Adapt Huaraches. Introduced in 1991, the line was built around a neoprene bootie derived from water skis. The new shoes feature a similar structure, updated for 2019 style and with smartphone integration.

Like the Adapt BB, the new Huaraches feature a pair of LED lights in the sole that change color based on their connection to the device. The mobile app, meanwhile, is used to adjust the lacing fit. FitAdapt offers a range of tension levels suited to different situations. The shoes can also, notably, be used with Apple Watch and Siri, meaning you can ask Apple’s assistant to tighten up your laces.

“This makes the Nike Adapt Huarache a double-barreled revolution,” Nike writes in a release. “First, it brings a storied franchise into the future. Second, and most significant, it propels Nike FitAdapt into the fast-paced, quick-shifting world of the everyday athlete — offering the personalized comfort needed in, say, the sprint to catch the bus, before seamlessly shifting fit as you settle into an empty seat with a sigh of quiet relief.”

The shoes are due out September 13. No pricing yet, but it seems likely they’ll be in the same ballpark as the $350 BBs.

Apple is turning Siri audio clip review off by default and bringing it in house

The top-line news is that Apple is making changes to the way that Siri audio review, or ‘grading’, works across all of its devices. First, it is making audio review an explicitly opt-in process in an upcoming software update. This will apply to every current and future user of Siri.

Second, only Apple employees, not contractors, will review any of this opt-in audio in an effort to bring any process that uses private data closer to the company’s core processes.

Apple has released a blog post outlining some Siri privacy details that may not have been common knowledge, as they had previously been described only in security white papers.

Apple apologizes for the issue.

“As a result of our review, we realize we haven’t been fully living up to our high ideals, and for that we apologize. As we previously announced, we halted the Siri grading program. We plan to resume later this fall when software updates are released to our users — but only after making the following changes…”

It then outlines three changes being made to the way Siri grading works.

  • First, by default, we will no longer retain audio recordings of Siri interactions. We will continue to use computer-generated transcripts to help Siri improve.
  • Second, users will be able to opt in to help Siri improve by learning from the audio samples of their requests. We hope that many people will choose to help Siri get better, knowing that Apple respects their data and has strong privacy controls in place. Those who choose to participate will be able to opt out at any time.
  • Third, when customers opt in, only Apple employees will be allowed to listen to audio samples of the Siri interactions. Our team will work to delete any recording which is determined to be an inadvertent trigger of Siri.

None of these changes is in effect yet, nor has the suspension of the Siri grading process been lifted; both await the software update for Apple’s operating systems that will allow users to opt in. Once people update to the new versions of its OS, they will have the chance to say yes to the grading process that uses audio recordings to help verify requests that users make of Siri. This effectively means that every user of Siri will be opted out of this process once the update goes live and is installed.
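Taken together, the changes amount to a simple gating rule on what happens to any given clip. Here’s a minimal sketch of that logic as described in the post; the function and parameter names are hypothetical, not Apple’s actual pipeline:

```python
# Hypothetical sketch of the grading policy described above; names are
# illustrative, not Apple's actual pipeline.

def process_clip(audio: bytes, transcript: str, opted_in: bool,
                 inadvertent_trigger: bool) -> dict:
    retained = {"transcript": transcript}  # transcripts always feed improvement

    if not opted_in:
        return retained   # default: audio recordings are not retained

    if inadvertent_trigger:
        return retained   # accidental recordings are deleted, not graded

    # Opted-in, intentional clips may be reviewed, by Apple employees only.
    retained["audio_for_employee_review"] = audio
    return retained


# A user who never opts in: their audio is discarded by default.
print(process_clip(b"...", "set a timer for ten minutes",
                   opted_in=False, inadvertent_trigger=False))
```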

Apple says that it will continue using anonymized, computer-generated transcripts of your requests to feed its machine learning engines with data, in a fashion similar to other voice assistants. These transcripts may be subject to Apple employee review.

Amazon and Google have faced similar revelations that their assistants were being helped along by human review of audio, and they have begun putting opt-ins in place as well.

Apple is making changes to the grading process itself as well, noting that, for example, “the names of the devices and rooms you setup in the Home app will only be accessible by the reviewer if the request being graded involves controlling devices in the home.”

A story in The Guardian in early August outlined how Siri audio samples were sent to contractors Apple had hired to evaluate the quality of responses and transcription that Siri produced for its machine learning engines to work on. The practice is not unprecedented, but it certainly was not made as clear as it should have been in Apple’s privacy policies that humans were involved in the process. There was also the matter that contractors, rather than employees, were being used to evaluate these samples. One contractor described the samples as containing sensitive and private information that, in some cases, could be tied to a user, even with Apple’s anonymizing processes in place.

In response, Apple halted the grading process worldwide while it reviewed the process. This post and updates to its process are the result of that review.

Apple says that around 0.2% of all Siri requests got this audio treatment in the first place, but given that there are 15B requests per month, the quick maths tell us that while the proportion is tiny, the raw numbers are quite high.
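The quick maths, spelled out using the two figures above (a roughly 0.2% sampling rate against roughly 15B monthly requests):

```python
# Back-of-the-envelope estimate using the figures cited above.
monthly_requests = 15_000_000_000   # ~15B Siri requests per month
grading_rate = 0.002                # ~0.2% sampled for grading

clips_per_month = monthly_requests * grading_rate
print(f"{clips_per_month:,.0f} clips per month")     # 30,000,000
print(f"{clips_per_month / 30:,.0f} clips per day")  # ~1,000,000
```

That is on the order of 30 million clips a month: small as a proportion, enormous as a raw number.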

The move away from contractors was signaled by Apple letting go of contract workers in Europe, as noted by Alex Hern earlier on Wednesday.

Apple is also publishing an FAQ on how Siri’s privacy controls fit in with its grading process; you can read that in full here.

The blog post and the FAQ give consumers some detail about how Apple handles grading, how it minimizes the data given to reviewers and how Siri privacy is preserved.

Apple’s work with Siri has, from the beginning, focused enormously on on-device processing whenever possible. This has led a lot of experts to say that Apple was trading raw capability for privacy by eschewing the data-center-heavy processes of assistants from companies like Amazon or Google in favor of keeping a ‘personal cloud’ of data on device. Sadly, the lack of transparency around human review and the use of contractors undercut all of that foundational work. So it’s good that Apple is cranking its privacy policies on grading and improvement back up past the industry standard. That is where it needs to be.

The fact is that no other assistant product is nearly as privacy-focused as Siri; as I said above, some would say to the point of hampering its ability to advance as quickly. Hopefully this episode leads to better transparency on Apple’s part when humans get involved in processes that are presumed to be fully automated.

Most people assume that ‘AI’ or ‘machine learning’ means computers only, but the sad fact is that most of those processes are still intensely human-driven, because AI (which doesn’t really exist) and ML are still pretty crap. Humans will be involved in making them seem smarter for a very long time yet.

The BBC is developing a voice assistant, code named ‘Beeb’

The BBC — aka, the British Broadcasting Corporation, aka the Beeb, aka Auntie — is getting into the voice assistant game.

The Guardian reports that the plan is to launch an Alexa rival, which has been given the working title ‘Beeb’ and will apparently be light on features, given the corporation’s relatively slender developer resources versus the major global tech giants.

The BBC’s own news site says the digital voice assistant will launch next year without any proprietary hardware to house it. Instead the corporation is designing the software to work on “all smart speakers, TVs and mobiles”.

Why is a publicly funded broadcaster ploughing money into developing an AI when the market is replete with commercial offerings — from Amazon’s Alexa to Google’s Assistant, Apple’s Siri and Samsung’s Bixby to name a few? The intent is to “experiment with new programmes, features and experiences without someone else’s permission to build it in a certain way”, a BBC spokesperson told BBC news.

The corporation is apparently asking its own staff to contribute voice data to help train the AI to understand the country’s smorgasbord of regional accents.

“Much like we did with BBC iPlayer, we want to make sure everyone can benefit from this new technology, and bring people exciting new content, programmes and services — in a trusted, easy-to-use way,” the spokesperson added. “This marks another step in ensuring public service values can be protected in a voice-enabled future.”

While at first glance the move looks reactionary and defensive, set against the years of dev already ploughed into cutting edge commercial voice AIs, the BBC has something those tech giant rivals lack: Not just regional British accents on tap — but easy access to a massive news and entertainment archive to draw on to design voice assistants that could serve up beloved personalities as a service.

Imagine being able to summon the voice of Tom Baker, aka Doctor Who, to tell you what the (cosmic) weather’s like — or have the Dad’s Army cast of characters chip in to read out your to-do list. Or get a summary of the last episode of The Archers from a familiar Ambridge resident.

Or what about being able to instruct ‘Beeb’ to play some suitably soothing or dramatic sound effects to entertain your kids?

On one level a voice AI is just a novel delivery mechanism. The BBC looks to have spotted that — and certainly does not lack for rich audio content that could be repackaged to reach its audience on verbal command and extend its power to entertain and delight.

When it comes to rich content, the same cannot be said of the tech giants who have pioneered voice AIs.

There have been some attempts to force humor (AIs that crack bad jokes) and/or shoehorn in character — largely flat-footed. As well as some ethically dubious attempts to pass off robot voices as real. All of which is to be expected, given they’re tech companies not entertainers. Dev not media is their DNA.

The BBC is coming at the voice assistant concept from the other way round: Viewing it as a modern mouthpiece for piping out more of its programming.

So while Beeb can’t hope to compete at the same technology feature level as Alexa and all the rest, the BBC could nonetheless show the tech giants a trick or two about how to win friends and influence people.

At the very least it should give their robotic voices some much needed creative competition.

It’s just a shame the Beeb didn’t tickle us further by christening its proto AI ‘Auntie’. A crisper two-syllable trigger word would be hard to utter…

Amazon’s lead EU data regulator is asking questions about Alexa privacy

Amazon’s lead data regulator in Europe, Luxembourg’s National Commission for Data Protection, has raised privacy concerns about its use of manual human reviews of Alexa AI voice assistant recordings.

A spokesman for the regulator confirmed in an email to TechCrunch it is discussing the matter with Amazon, adding: “At this stage, we cannot comment further about this case as we are bound by the obligation of professional secrecy.” The development was reported earlier by Reuters.

We’ve reached out to Amazon for comment.

Amazon’s Alexa voice AI, which is embedded in a wide array of hardware — from the company’s own brand Echo smart speaker line to an assortment of third party devices (such as this talkative refrigerator or this oddball table lamp) — listens pervasively for a trigger word which activates a recording function, enabling it to stream audio data to the cloud for processing and storage.

However, trigger-word activated voice AIs have been shown to be prone to accidental activation, and a device may well be in use in a multi-person household. So there’s always a risk of these devices recording any audio in their vicinity, not just intentional voice queries…

In a nutshell, the AIs’ inability to distinguish between intentional interactions and stuff they overhear means they are natively prone to eavesdropping — hence the major privacy concerns.
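To make that failure mode concrete, here’s a minimal sketch of how a trigger-word capture loop typically works. Every name and threshold here is hypothetical, not Amazon’s actual implementation; the privacy problem lives in the threshold test, since wake-word detection is probabilistic and anything that merely sounds like the trigger can start a recording that leaves the device:

```python
# Hypothetical sketch of a wake-word capture loop (not Amazon's code).

def capture_loop(microphone, wake_word_model, cloud, threshold=0.8):
    while True:
        frame = microphone.read()              # always listening, locally
        score = wake_word_model.score(frame)   # on-device acoustic match
        if score > threshold:
            # False positives land here too: TV dialogue, similar-sounding
            # words, other people talking in the room.
            clip = microphone.record(seconds=8)
            cloud.stream(clip)                 # audio leaves the device
```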

These concerns have been dialled up by recent revelations that tech giants — including Amazon, Apple and Google — use human workers to manually review a proportion of audio snippets captured by their voice AIs, typically for quality purposes such as improving the performance of voice recognition across different accents or environments. But that means actual humans are listening to what might be highly sensitive personal data.

Earlier this week Amazon quietly added an option to the settings of the Alexa smartphone app to allow users to opt out of their audio snippets being added to a pool that may be manually reviewed by people doing quality control work for Amazon — having not previously informed Alexa users of its human review program.

The policy shift followed rising attention on the privacy of voice AI users — especially in Europe.

Last month thousands of recordings of users of Google’s AI assistant were leaked to the Belgian media which was able to identify some of the people in the clips.

A data protection watchdog in Germany subsequently ordered Google to halt manual reviews of audio snippets.

Google responded by suspending human reviews across Europe. While its lead data watchdog in Europe, the Irish DPC, told us it’s “examining” the issue.

Separately, in recent days, Apple has also suspended human reviews of Siri snippets — doing so globally, in its case — after a contractor raised privacy concerns in the UK press over what Apple contractors are privy to when reviewing Siri audio.

The Hamburg data protection agency which intervened to halt human reviews of Google Assistant snippets urged its fellow EU privacy watchdogs to prioritize checks on other providers of language assistance systems — and “implement appropriate measures” — naming both Apple and Amazon.

In the case of Amazon, scrutiny from European watchdogs looks to be fast dialling up.

At the time of writing it is the only one of the three tech giants not to have suspended human reviews of voice AI snippets, either regionally or globally.

In a statement provided to the press at the time it changed Alexa settings to offer users an opt-out from the chance of their audio being manually reviewed, Amazon said:

We take customer privacy seriously and continuously review our practices and procedures. For Alexa, we already offer customers the ability to opt-out of having their voice recordings used to help develop new Alexa features. The voice recordings from customers who use this opt-out are also excluded from our supervised learning workflows that involve manual review of an extremely small sample of Alexa requests. We’ll also be updating information we provide to customers to make our practices more clear.

Daily Crunch: Apple responds to Siri privacy concerns

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories.

1. Apple suspends Siri response grading after privacy concerns

After The Guardian ran a story last week about how Siri recordings are used for quality control, Apple says it’s suspending the program worldwide while it reviews the process.

The practice, known as grading, involves sharing audio snippets with contractors, who determine whether Siri is hearing the requests accurately. Apple says that in the future, users will be able to choose whether or not they participate in the grading.

2. DoorDash is buying Caviar from Square in a deal worth $410 million

Square bought Caviar about five years ago in a deal worth about $90 million. Now, Caviar has found a new home with DoorDash.

3. President throws latest wrench in $10B JEDI cloud contract selection process

Throughout the months-long selection process, the Pentagon repeatedly denied accusations that the contract was somehow written to make Amazon a favored vendor, but The Washington Post reports President Trump has asked his newly appointed defense secretary to examine the process.

LAS VEGAS, NV – APRIL 21: Twitch streamer and professional gamer Tyler “Ninja” Blevins streams during Ninja Vegas ’18 at Esports Arena Las Vegas at Luxor Hotel and Casino on April 21, 2018 in Las Vegas, Nevada. (Photo by Ethan Miller/Getty Images)

4. Following Ninja’s news, Mixer pops to top of the App Store’s free charts

Yesterday, Tyler “Ninja” Blevins announced that he’s leaving Twitch, moving his streaming career over to Microsoft’s Mixer platform. This morning, Mixer shot to the top of the App Store’s free app charts.

5. Google ordered to halt human review of voice AI recordings over privacy risks

Apple isn’t the only company to face scrutiny over its handling of user audio recordings.

6. UrbanClap, India’s largest home services startup, raises $75M

Through its platform, UrbanClap matches service people such as cleaners, repair staff and beauticians with customers across 10 cities in India, as well as Dubai and Abu Dhabi.

7. Why AWS gains big storage efficiencies with E8 acquisition

The team at Amazon Web Services is always looking to find an edge and reduce the costs of operations in its data centers. (Extra Crunch membership required.)

Google ordered to halt human review of voice AI recordings over privacy risks

A German privacy watchdog has ordered Google to cease manual reviews of audio snippets generated by its voice AI. 

This follows a leak last month of scores of audio snippets from the Google Assistant service. A contractor working as a Dutch language reviewer handed more than 1,000 recordings to the Belgian news site VRT which was then able to identify some of the people in the clips. It reported being able to hear people’s addresses, discussion of medical conditions, and recordings of a woman in distress.

The Hamburg data protection authority told Google last month of its intention to use its powers under Article 66 of the General Data Protection Regulation (GDPR) to begin an “urgency procedure”.

Article 66 allows a DPA to order data processing to stop if it believes there is “an urgent need to act in order to protect the rights and freedoms of data subjects”.

This appears to be the first use of the power since GDPR came into force across the bloc in May last year.

Google says it responded to the DPA on July 26 to say it had already ceased the practice, having decided on July 10, after learning of the data leak, to suspend manual audio reviews of Google Assistant across the whole of Europe.

Last month it also informed its lead privacy regulator in Europe, the Irish Data Protection Commission (DPC), of the breach. The DPC told us it is now “examining” the issue that’s been highlighted by Hamburg’s order.

The Irish DPC’s head of communications, Graham Doyle, said Google Ireland filed an Article 33 breach notification for the Google Assistant data “a couple of weeks ago”, adding: “We note that as of 10 July Google Ireland ceased the processing in question and that they have committed to the continued suspension of processing for a period of at least three months starting today (1 August). In the meantime we are currently examining the matter.”

It’s not clear whether Google will be able to reinstate manual reviews in Europe in a way that’s compliant with the bloc’s privacy rules. The Hamburg DPA writes in a statement [in German] on its website that it has “significant doubts” about whether Google Assistant complies with EU data-protection law.

“We are in touch with the Hamburg data protection authority and are assessing how we conduct audio reviews and help our users understand how data is used,” Google’s spokesperson also told us.

In a blog post published last month after the leak, Google product manager for search, David Monsees, claimed manual reviews of Google Assistant queries are “a critical part of the process of building speech technology”, couching them as “necessary” to creating such products.

“These reviews help make voice recognition systems more inclusive of different accents and dialects across languages. We don’t associate audio clips with user accounts during the review process, and only perform reviews for around 0.2% of all clips,” Google’s spokesperson added now.

But it’s far from clear whether human review of audio recordings captured by any of the myriad always-on voice AI products and services now on the market can be made compatible with Europeans’ fundamental privacy rights.

These AIs typically have trigger words for activating the recording function which streams audio data to the cloud. But the technology can easily be accidentally triggered — and leaks have shown they are able to hoover up sensitive and intimate personal data not just of their owner but anyone in their vicinity (which of course includes people who never got within sniffing distance of any T&Cs).

On its website, the Hamburg DPA says the proceedings against Google are intended to protect the privacy rights of affected users in the immediate term, noting that GDPR allows concerned authorities in EU Member States to issue orders lasting up to three months.

In a statement Johannes Caspar, the Hamburg commissioner for data protection, added: “The use of language assistance systems in the EU must comply with the data protection requirements of the GDPR. In the case of the Google Assistant, there are currently significant doubts. The use of language assistance systems must be done in a transparent way, so that an informed consent of the users is possible. In particular, this involves providing sufficient information and transparently informing those concerned about the processing of voice commands, but also about the frequency and risks of mal-activation. Finally, due regard must be given to the need to protect third parties affected by the recordings. First of all, further questions about the functioning of the speech analysis system have to be clarified. The data protection authorities will then have to decide on definitive measures that are necessary for a privacy-compliant operation. ”

The DPA also urges other regional privacy watchdogs to prioritize checks on other providers of language assistance systems — and “implement appropriate measures” — name-checking rival providers of voice AIs, Apple and Amazon.

This suggests there could be wider ramifications for other tech giants operating voice AIs in Europe flowing from this single notification of an Article 66 order.

The real enforcement punch packed by GDPR is not the headline-grabbing fines, which can scale as high as 4% of a company’s global annual turnover — it’s the power that Europe’s DPAs now have in their regulatory toolbox to order that data stops flowing.

“This is just the beginning,” one expert on European data protection legislation told us, speaking on condition of anonymity. “The Article 66 chest is open and it has a lot on offer.”

In a sign of the potential scale of the looming privacy problems for voice AIs, Apple also said earlier today that it’s suspending a similar human review ‘quality control program’ for its Siri voice assistant.

The move, which does not appear to be linked to any regulatory order, follows a Guardian report last week detailing claims by a whistleblower that contractors working for Apple ‘regularly hear confidential details’ on Siri recordings, such as audio of people having sex and identifiable financial details, regardless of the processes Apple uses to anonymize the records.

Apple’s suspension of manual reviews of Siri snippets applies worldwide.

Apple suspends Siri response grading in response to privacy concerns

In response to concerns raised by a Guardian story last week over how recordings of Siri queries are used for quality control, Apple is suspending the program worldwide. Apple says it will review the process that it uses, called grading, to determine whether Siri is hearing queries correctly, or being invoked by mistake.

In addition, it will be issuing a software update in the future that will let Siri users choose whether they participate in the grading process or not. 

The Guardian story from Alex Hern quoted extensively from a contractor at a firm hired by Apple to perform part of a Siri quality control process it calls grading. This takes snippets of audio, which are not connected to names or IDs of individuals, and has contractors listen to them to judge whether Siri is accurately hearing them — and whether Siri may have been invoked by mistake.

“We are committed to delivering a great Siri experience while protecting user privacy,” Apple said in a statement to TechCrunch. “While we conduct a thorough review, we are suspending Siri grading globally. Additionally, as part of a future software update, users will have the ability to choose to participate in grading.”

The contractor claimed that the audio snippets could contain personal information, audio of people having sex and other details like finances that could be identifiable, regardless of the process Apple uses to anonymize the records. 

They also questioned how clear it was to users that their raw audio snippets may be sent to contractors to evaluate in order to help make Siri work better. When this story broke, I dipped into Apple’s terms of service myself and, though there are mentions of quality control for Siri and data being shared, I found that they fall short of explicitly and plainly making clear that live recordings, even short ones, are used in the process and may be transmitted and listened to.

The figures Apple has cited put the number of queries that may be selected for grading at under 1 percent of daily requests.

The process of taking a snippet of audio a few seconds long and sending it to either internal personnel or contractors to evaluate is, essentially, industry standard. Audio recordings of requests made to Amazon and Google assistants are also reviewed by humans. 

An explicit way for users to agree to the audio being used this way is table stakes in this kind of business. I’m glad Apple says it will be adding one. 

It also aligns better with the way that Apple handles other data like app performance data that can be used by developers to identify and fix bugs in their software. Currently, when you set up your iPhone, you must give Apple permission to transmit that data. 

Apple has embarked on a long campaign of positioning itself as the most privacy-conscious of the major mobile firms, and it therefore carries a heavier burden when it comes to standards. Doing only as much as the other major companies do on things like using user data for quality control and service improvements cannot be enough if it wants to maintain that stance and the market edge it brings.

Siri recordings ‘regularly’ sent to Apple contractors for analysis, claims whistleblower

Apple has joined the dubious company of Google and Amazon in secretly sharing with contractors audio recordings of its users, confirming the practice to The Guardian after a whistleblower brought it to the outlet. The person said that Siri queries are routinely sent to human listeners for closer analysis, something not disclosed in Apple’s privacy policy.

The recordings are reportedly not associated with an Apple ID, but can be several seconds long, include content of a personal nature and are paired with other revealing data, like location, app data and contact details.

Like the other companies, Apple says this data is collected and analyzed by humans to improve its services, and that all analysis is done in a secure facility by workers bound by confidentiality agreements. And like the other companies, Apple failed to say that it does this until forced to.

Apple told The Guardian that less than 1% of daily queries are sent, which is cold comfort when the company is also constantly talking up the volume of Siri queries. Hundreds of millions of devices use the feature regularly, so even a conservative fraction of 1% quickly rises into the hundreds of thousands of clips.

This “small portion” of Siri requests is apparently randomly chosen, and as the whistleblower notes, it includes “countless instances of recordings featuring private discussions between doctors and patients, business deals, seemingly criminal dealings, sexual encounters and so on.”

Some of these activations of Siri will have been accidental, which is one of the things listeners are trained to listen for and identify. Accidentally recorded queries can be many seconds long and contain a great deal of personal information, even if it is not directly tied to a digital identity.

Only in the last month has it come out that Google likewise sends clips to be analyzed, and that Amazon, which we knew recorded Alexa queries, retains that audio indefinitely.

Apple’s privacy policy states regarding non-personal information (under which Siri queries would fall):

We may collect and store details of how you use our services, including search queries. This information may be used to improve the relevancy of results provided by our services. Except in limited instances to ensure quality of our services over the Internet, such information will not be associated with your IP address.

It’s conceivable that the phrase “search queries” is inclusive of recordings of search queries. And it does say that it shares some data with third parties. But nowhere is it stated simply that questions you ask your phone may be recorded and shared with a stranger. Nor is there any way for users to opt out of this practice.

Given Apple’s focus on privacy and transparency, this seems like a major, and obviously a deliberate, oversight. I’ve contacted Apple for more details and will update this post when I hear back.