Alexa’s voice apps for kids can now offer purchases that parents approve

Amazon will now allow developers to offer premium content for purchase in Alexa skills aimed at children. The company on Friday introduced new tools for building skills with in-skill purchases that require the Amazon account holder — typically mom or dad — to approve or decline the requested purchase via a text or email.

In-skill purchasing was first introduced to all U.S. Alexa developers last year, and more recently became available to international developers. But as with any app aimed at children, Alexa skills in the kids’ category needed a purchase approval workflow, or they would risk unapproved purchases initiated by younger users.

That’s where these new developer tools come in.

Now, developers can create premium kid skills using either the Alexa Skills Kit Command-Line Interface (ASK CLI) or the Alexa Developer Console. Other tools allow the skills to route purchase requests to the account holder over SMS or email. The account holder then has 24 hours to act on the request, or the request is automatically canceled.

The premium content can come in the form of either one-time purchases or subscriptions, says Amazon.
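
For developers, the mechanics are Amazon's existing in-skill purchasing flow. As a rough sketch, a skill's backend hands the transaction off to Alexa with the documented Connections.SendRequest directive; the product ID below is a placeholder, and for kid skills the charge now only completes once the account holder approves it.

```python
# Sketch of a skill backend handing a purchase off to Alexa's in-skill
# purchasing flow via the documented Connections.SendRequest directive.
# The product ID is a placeholder; values here are illustrative only.
def build_buy_response(product_id: str) -> dict:
    return {
        "version": "1.0",
        "response": {
            "directives": [{
                "type": "Connections.SendRequest",
                "name": "Buy",
                "payload": {"InSkillProduct": {"productId": product_id}},
                "token": "correlationToken",  # echoed back with the result
            }],
            # Alexa takes over the session to run the purchase (and, for kid
            # skills, the parental-approval step) before resuming the skill.
            "shouldEndSession": True,
        },
    }

print(build_buy_response("amzn1.adg.product.EXAMPLE"))
```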

A group of developers had early access to the tools and already added premium content to their own kid skills. This includes the grand prize winner from one of Amazon’s developer contests, Kids Court; plus You Choose Superman Adventures; Travel Quest; Animal Sounds; and Master Swords.

Parents who don’t want their kids asking to buy anything have two options to opt out of all this.

They can disable the feature in the Alexa app under Settings -> Alexa Account -> Voice Purchasing -> Kid Skills Purchasing. Meanwhile, customers using FreeTime on Alexa, which comes with the Echo Dot Kids Edition, won’t receive offers to purchase premium content. And those who upgrade to FreeTime Unlimited will get much of this premium content included with their subscription.

The addition of premium purchases to kid skills comes at a challenging time for Amazon.

Amazon updated its Echo Dot for kids this week with new designs and other under-the-hood features, just as new lawsuits alleging children’s privacy violations by Alexa were filed. The suits say Amazon recorded children’s voices without consent.

As a part of its updated Echo Dot for kids experience, Amazon said it worked with the Family Online Safety Institute (FOSI) and various industry groups to rebuild FreeTime on Alexa so that it adheres to U.S. children’s privacy law, COPPA (the Children’s Online Privacy Protection Act).

Amazon now restricts Alexa skills from accessing or collecting personal information from children and offers ways for parents to delete children’s voice recordings, it says.

But its changes to the Kids Edition Echo smart speaker and related feature set don’t fully address the plaintiffs’ allegations.

According to Amazon’s announcement this week, parents can now review and delete recordings through the Alexa app or the Alexa Privacy Hub, and they can contact Customer Service to request deletion of their child’s profile. However, the lawsuits argue that the way Amazon manages recordings — by asking parents to take manual action — is not ideal. They point out that Apple’s Siri stores recordings only for a short period of time and then automatically deletes them.

In addition, CNET found that Amazon may retain the text transcripts even when people delete the recordings themselves.

Privacy regulations take time to catch up to the pace of technology, and today’s issues around how smart speakers should operate in family homes where children are present are another example of that problem. While parents are the ones buying and installing these devices, many weren’t aware that Alexa’s intelligence is aided not only by algorithms and AI, but by human beings on the other end who listen to recordings, check them for errors, then use this data to improve how Alexa works.

Of course, there are people who are less concerned about this sort of thing and just enjoy using the device regardless of its potential invasiveness. They may appreciate the ability to upgrade their skills and support favorite developers’ efforts, especially if the family enjoys the skills together or they feel they add value.

Amazon is not offering all developers the ability to sell through their kid skills at present. Instead, interested developers who want to build kid skills with purchases can fill out a form telling Amazon about their plans, and the company will reach out if the application is selected.

Thousands of medical injury claim records exposed by ad agency

An internet advertising company specializing in helping law firms sign up potential clients has exposed close to 150,000 records from a database that was left unsecured.

The database contained submissions as part of a lead-generation effort by X Social Media, a Florida-based ad firm that largely uses Facebook to advertise various campaigns for its law firm customers. Law firms pay the ad company to set up individual websites that aim to sign up victims from specific categories of harm and injuries — from medical implants, malpractice, sexual abuse and more — who submit their information in the hope of receiving legal relief.

But the database was left unprotected and without a password, allowing anyone to look inside.
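
To illustrate how low that bar is, here is a hypothetical sketch (with a placeholder address) of reading an unauthenticated MongoDB instance, the database type the company's own statement below identifies, using the pymongo client library:

```python
# Hypothetical sketch: reading a MongoDB instance that has no authentication
# configured. The address is a placeholder, not the real server.
from pymongo import MongoClient

client = MongoClient("mongodb://203.0.113.10:27017/", serverSelectionTimeoutMS=5000)
for db_name in client.list_database_names():          # no credentials required
    db = client[db_name]
    for coll_name in db.list_collection_names():
        # Every submission (names, contact details, injury descriptions)
        # would be readable as plain documents.
        print(db_name, coll_name, db[coll_name].estimated_document_count())
```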

Security researchers Noam Rotem and Ran Locar found the database and reported it to the company, which pulled the database offline. The researchers also shared their discovery exclusively with TechCrunch and posted their findings on vpnMentor.

The database contained names, addresses, phone numbers, the date and time of a person’s submission and the circumstances and explanation of their accident, injury or illness. Often this included personal health information, sensitive medical information, details of procedures or the consumption of certain medications or specifics of traumatic events.

Several records seen by TechCrunch came from campaigns targeting combat veterans who were injured on duty. Other campaigns sought to sign up those who suffered illnesses from pesticides or medications.

Other campaigns solicited claims for sexual abuse. We found names, postal and email addresses and phone numbers of victims, many of whom also described their abuse as part of filling out the website form.

One of the records in the database. (Image: supplied)

The researchers said the exposed data could be “easily traced” back to the individuals who filled out the website forms.

The exposed database also contained a list of more than 300 law firms that paid X Social Media to set up the lead-generation operation, along with records of how much each firm paid the ad company — in some cases amounting to tens of thousands of dollars. It also held the ad company’s own bank routing and account numbers, which law firms used to pay for its services.

In reporting this story, we found a second, smaller database. In an effort to get the database secured, we provided the IP address to Jacob Malherbe, founder of X Social Media, in an email. Within an hour, the database had been pulled offline.

Despite this, Malherbe denied that the company stored medical data, described the findings as “inaccurate” and asked we “direct all other emails to our company lawyers.”

When presented with several files containing the data, Malherbe responded:

After being notified by TechCrunch about a security problems in MongoDB the X Social Media developer team immediately shut down the vulnerability create [sic] by a MongoDB database and did a night long log file review and we only found the two IP addresses, associated with TechCrunch accessing our database. Our log files show that nobody else accesses the database while in transit. We will continue to investigating this incident and work closely with state and Federal agencies as more information becomes available.

When asked, Malherbe declined to provide the logs to verify his claims. The company also wouldn’t say how long the database was exposed.

This is the latest exposed database found by the researchers in recent months.

The researchers have previously found data leaking on Fortune 500 firm Tech Data, exposed user records and private messages of Jewish dating app JCrush and leaking data from Canadian cell network Freedom Mobile and online retailer Gearbest.

Privacy policies are still too horrible to read in full

A year on from Europe’s flagship update to the pan-EU data protection framework, the Commission has warned that too many privacy policies are still too hard to read and has urged tech companies to declutter and clarify their T&Cs. (So full marks to Twitter for the timing of this announcement.)

Announcing the results of a survey of the attitudes of 27,000 Europeans vis-a-vis data protection, the Commission said a large majority (73%) of EU citizens have heard of at least one of the six tested rights guaranteed by the General Data Protection Regulation (GDPR), which came into force at the end of May last year. But only a minority (30%) are aware of all their rights under the framework.

The Commission said it will launch a campaign to boost awareness of privacy rights and encourage EU citizens to optimise their privacy settings — “so that they only share the data they are willing to share”.

In instances of consent-based data processing, the rights guaranteed by the GDPR include the right to access personal data and get a copy of it without charge; the right to request rectification of incomplete or inaccurate personal data; the right to have data deleted; the right to restrict processing; and the right to data portability.

The highest levels of awareness recorded by the survey were for the right to access one’s own data (65%); the right to correct that data if it is wrong (61%); the right to object to receiving direct marketing (59%); and the right to have one’s own data deleted (57%).

Commenting in a statement, Andrus Ansip, VP for the Digital Single Market, said: “European citizens have become more aware of their digital rights and this is encouraging news. However, only three in ten Europeans have heard of all their new data rights. For companies, their customers’ trust is hard currency and this trust starts with the customers’ understanding of, and confidence in, privacy settings. Being aware is a precondition to being able to exercise your rights. Both sides can only win from clearer and simpler application of data protection rules.”

“Helping Europeans regain control over their personal data is one of our biggest priorities,” added Věra Jourová, commissioner for justice, consumers and gender equality, in another supporting statement. “But, of the 60% Europeans who read their privacy statements, only 13% read them fully. This is because the statements are too long or too difficult to understand. I once again urge all online companies to provide privacy statements that are concise, transparent and easily understandable by all users. I also encourage all Europeans to use their data protection rights and to optimise their privacy settings.”

Speaking at a Commission event to mark the one-year anniversary of the GDPR, Jourová couched the regulation as “growing fast” and “doing well” but said it needs continued nurturing to deliver on its promise — warning against fragmentation, or so-called ‘gold-plating’, by national agencies adding additional conditions or taking an expansive interpretation of the rules.

She also said “strong and coherent” enforcement is essential — but claimed that fears national watchdogs would become “sanctioning machines” have not materialised.

Though she made a point of emphasizing: “National data protection authorities are the key for success.”

And it’s fair to say that enforcement remains a rare sight one year on from the regulation being applied — certainly in complaints attached to tech giants (Google is an exception) — which has fuelled a narrative in some media outlets that tries to brand the entire update a failure. But it was never likely data watchdogs would rush to judgement on a sharply increased workload at the same time as they were bedding into a new way of working for cross-border complaints, under GDPR’s one-stop-shop mechanism.

Regulators have also been conscious that data handlers are finding their feet under the new framework, and have allowed time for their compliance. But from here on in it’s fair to say there will be growing expectation from EU citizens for enforcement to uphold their rights.

The EU data protection agency with the biggest bunch of strategic keys where GDPR is concerned is the Irish Data Protection Commission — which has seen complaints filed since the regulation came into force more than double, thanks to the country being a (low tax) favorite for tech giants to base their European HQs.

The Irish DPC has around 18 open investigations into tech giants at this stage — including, most recently, a formal probe of Google’s adtech, which is in response to a number of complaints filed across Europe about how real-time bidding systems handle personal data.

Adtech veteran Quantcast‘s processing and aggregating of personal data is also being formally probed.

Other open investigations on the Irish DPC’s plate include a large number into various aspects of multiple Facebook-owned businesses, as well as a smaller number of probes into Apple, LinkedIn and Twitter’s data handling. So it is certainly one to watch.

In comments at today’s event to mark the one-year anniversary of the GDPR, Ireland’s data protection commissioner indicated that some of these investigations will result in judgements this summer.

“We prioritise fair and high quality judgements. We keep our focus on the job. We have a big quantity of large scale investigations on the way and some of them will be finalised this summer,” said Helen Dixon.

Also speaking at the event, Qwant’s founder Eric Leandri said GDPR has been a boon to his pro-privacy search engine business — suggesting it has increased its growth rate to 30% per week.

“People who understand what data privacy means are inclined to protect their privacy,” he added.

Every secure messaging app needs a self-destruct button

The growing presence of encrypted communications apps makes a lot of communities safer and stronger. But the possibility of physical device seizure and government coercion is growing as well, which is why every such app should have some kind of self-destruct mode to protect its user and their contacts.

End-to-end encryption like that used by Signal and (if you opt into it) WhatsApp is great at preventing governments and other malicious actors from accessing your messages while they are in transit. But as with nearly all cybersecurity matters, physical access to the device or the user — or both — changes things considerably.

For example, take this Hong Kong citizen who was forced to unlock their phone and reveal their followers and other messaging data to police. It’s one thing to do this with a court order to see if, say, a person was secretly cyberstalking someone in violation of a restraining order. It’s quite another to use as a dragnet for political dissidents.

This particular protestor ran a Telegram channel that had a number of followers. But it could just as easily be a Slack room for organizing a protest, or a Facebook group, or anything else. For groups under threat from oppressive government regimes it could be a disaster if the contents or contacts from any of these were revealed to the police.

Just as you should be able to choose exactly what you say to police, you should be able to choose how much your phone can say as well. Secure messaging apps should be the vanguard of this capability.

There are already some dedicated “panic button” type apps, and Apple has thoughtfully developed an “emergency mode” (activated by hitting the power button five times quickly) that locks the phone against biometric unlock and will wipe it if it is not unlocked within a certain period of time. That’s effective against “Apple pickers” trying to steal a phone, or during border or police stops where you don’t want to show ownership by unlocking the phone with your face.

Those are useful and we need more like them — but secure messaging apps are a special case. So what should they do?

The best-case scenario, where you have all the time in the world and internet access, isn’t really an important one. You can always delete your account and data voluntarily. What needs work is deleting your account under pressure.

The next best-case scenario is that you have perhaps a few seconds or at most a minute to delete or otherwise protect your account. Signal is very good about this: The deletion option is front and center in the options screen, and you don’t have to input any data. WhatsApp and Telegram require you to put in your phone number, which is not ideal — fail to do this correctly and your data is retained.

Signal, left, lets you get on with it. You’ll need to enter your number in WhatsApp (right) and Telegram.

Obviously it’s also important that these apps don’t let users accidentally and irreversibly delete their account. But perhaps there’s a middle road whereby you can temporarily lock it for a preset time period, after which it deletes itself if not unlocked manually. Telegram does have self-destructing accounts, but the shortest self-destruct period you can set is a month.

What really needs improvement is emergency deletion when your phone is no longer in your control. This could be a case of device seizure by police, or perhaps being forced to unlock the phone after you have been arrested. Whatever the case, there need to be options for a user to delete their account outside the ordinary means.

Here are a few options that could work:

  • Trusted remote deletion: Selected contacts are given the ability via a one-time code or other method to wipe each other’s accounts or chats remotely, no questions asked and no notification created. This would let, for instance, a friend who knows you’ve been arrested remotely remove any sensitive data from your device.
  • Self-destruct timer: Like Telegram’s feature, but better. If you’re going to a protest, or have been “randomly” selected for additional screening or questioning, you can just tell the app to delete itself after a certain duration (as little as a minute perhaps) or at a certain time of the day. Deactivate any time you like, or stall for the five required minutes for it to trigger.
  • Poison PIN: In addition to a normal unlock PIN, users can set a poison PIN that, when entered, triggers a variety of user-selectable effects — delete certain apps, clear contacts, send prewritten messages, unlock or temporarily hard-lock the device, and so on (a minimal sketch follows this list).
  • Customizable panic button: Apple’s emergency mode is great, but it would be nice to be able to attach conditions like the poison PIN’s. Sometimes all someone can do is smash that button.
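
To make the poison-PIN idea concrete, here is a minimal sketch of what the unlock path might look like. It is purely hypothetical; no shipping messaging app exposes this flow, and every name in it is illustrative.

```python
# Hypothetical poison-PIN check in an app's unlock path. Both PINs are stored
# only as salted PBKDF2 hashes; comparisons are constant-time so an observer
# can't tell which PIN was entered.
import hashlib
import hmac

def hash_pin(pin: str, salt: bytes) -> bytes:
    # A production app would prefer a memory-hard KDF such as Argon2.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 200_000)

def wipe_sensitive_data() -> None:
    ...  # illustrative stub: delete message store, key material, contacts

def unlock(entered: str, normal_hash: bytes, poison_hash: bytes, salt: bytes) -> bool:
    digest = hash_pin(entered, salt)
    if hmac.compare_digest(digest, poison_hash):
        wipe_sensitive_data()  # quietly destroy data first...
        return True            # ...then unlock so nothing looks amiss
    return hmac.compare_digest(digest, normal_hash)
```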

Obviously these open new avenues for calamity and abuse as well, which is why they will need to be explained carefully and perhaps initially hidden in “advanced options” and the like. But overall I think we’ll be safer with them available.

Eventually these roles may be filled by dedicated apps or by the developers of the operating systems on which they run, but it makes sense for the most security-forward app class out there to be the first in the field.

Facebook collected device data on 187,000 users using banned snooping app

Facebook obtained personal and sensitive device data on about 187,000 users of its now-defunct Research app, which Apple banned earlier this year after the app violated its rules.

The social media giant said in a letter to Sen. Richard Blumenthal’s office — which TechCrunch obtained — that it collected data on 31,000 users in the U.S., including 4,300 teenagers. The rest of the collected data came from users in India.

Earlier this year, a TechCrunch investigation found both Facebook and Google were abusing their Apple-issued enterprise developer certificates, which are designed to let employees run iPhone and iPad apps used only inside the company. The investigation found the companies were building and providing apps for consumers outside Apple’s App Store, in violation of Apple’s rules. The apps paid users in return for access to all of the network data flowing in and out of their devices, which the companies used to understand how participants used their devices and apps.

Apple banned the apps by revoking Facebook’s enterprise developer certificate — and later Google’s. In doing so, the revocation knocked offline both companies’ fleets of internal iPhone and iPad apps that relied on the same certificates.

But in response to lawmakers’ questions, Apple said it didn’t know how many devices installed Facebook’s rule-violating app.

“We know that the provisioning profile for the Facebook Research app was created on April 19, 2017, but this does not necessarily correlate to the date that Facebook distributed the provisioning profile to end users,” said Timothy Powderly, Apple’s director of federal affairs, in his letter.

Facebook said the app dated back to 2016.

TechCrunch also obtained the letters sent by Apple and Google to lawmakers in early March, which were never made public.

These “research” apps relied on willing participants to download the app from outside the App Store and use the Apple-issued developer certificates to install it. The apps would then install a root network certificate, allowing them to collect all the data flowing out of the device — like web browsing histories, encrypted messages and mobile app activity — potentially including data from participants’ friends — for competitive analysis.
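
The mechanism itself is standard TLS interception. As a stand-in for Facebook's unpublished tooling, here is roughly what the decrypting side looks like with the open-source mitmproxy tool: once a device trusts the proxy's root certificate, an addon this short sees every request in the clear.

```python
# log_traffic.py -- run with: mitmproxy -s log_traffic.py
# Stand-in for Facebook's unpublished tooling: once the device trusts the
# proxy's root certificate, TLS traffic is decrypted, visible here, then
# re-encrypted on its way to the real server.
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    # Every HTTPS request from the device arrives here in plaintext.
    print(flow.request.method, flow.request.pretty_url)
```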

A response by Facebook about the number of users involved in Project Atlas (Image: TechCrunch)

In Facebook’s case, the research app — dubbed Project Atlas — was a repackaged version of its Onavo VPN app, which Facebook was forced to remove from Apple’s App Store last year for gathering too much device data.

Just this week, Facebook relaunched its research app as Study, only available on Google Play and for users who have been approved through Facebook’s research partner, Applause. Facebook said it would be more transparent about how it collects user data.

Facebook’s vice president of public policy Kevin Martin defended the company’s use of enterprise certificates, saying it “was a relatively well-known industry practice.” When asked, a Facebook spokesperson didn’t quantify this further. Later, TechCrunch found dozens of apps that used enterprise certificates to evade the app store.

Facebook previously said it “specifically ignores information shared via financial or health apps.” In its letter to lawmakers, Facebook stuck to its guns, saying its data collection was focused on “analytics,” but confirmed “in some isolated circumstances the app received some limited non-targeted content.”

“We did not review all of the data to determine whether it contained health or financial data,” said a Facebook spokesperson. “We have deleted all user-level market insights data that was collected from the Facebook Research app, which would include any health or financial data that may have existed.”

But Facebook didn’t say what kind of data that was, only that the app didn’t decrypt “the vast majority” of data sent by a device.

Facebook describing the type of data it collected — including “limited, non-targeted content” (Image: TechCrunch)

Google’s letter, penned by public policy vice president Karan Bhatia, did not provide a number of devices or users, saying only that its app was a “small scale” program. When reached, a Google spokesperson did not comment by our deadline.

Google also said it found “no other apps that were distributed to consumer end users,” but confirmed several other apps were used by the company’s partners and contractors, which no longer rely on enterprise certificates.

Google explaining which of its apps were improperly using Apple-issued enterprise certificates (Image: TechCrunch)

Apple told TechCrunch that both Facebook and Google “are in compliance” with its rules as of the time of publication. At its annual developer conference last week, the company said it now “reserves the right to review and approve or reject any internal use application.”

Facebook’s willingness to collect this data from teenagers — despite constant scrutiny from press and regulators — demonstrates how valuable the company considers market research on its competitors. With its paid research program restarted, now with greater transparency, the company continues to leverage its data collection to keep ahead of its rivals.

Facebook and Google came off worse in the enterprise app abuse scandal, but critics said that, in revoking enterprise certificates, Apple retains too much control over what content customers have on their devices.

The Justice Department and the Federal Trade Commission are said to be examining the big four tech giants — Apple, Amazon, Facebook and Google-owner Alphabet — for potentially falling afoul of U.S. antitrust laws.

LaLiga fined $280k for soccer app’s privacy violating spy mode

Spanish soccer’s premier league, LaLiga, has netted itself a €250,000 (~$280k) fine for violations of Europe’s General Data Protection Regulation (GDPR) related to its official app.

As we reported a year ago, users of the LaLiga app were outraged to discover the smartphone software does rather more than show minute-by-minute commentary of football matches: it can use the microphone and GPS of fans’ phones to record their surroundings in a bid to identify bars that are unofficially streaming games instead of coughing up for broadcasting rights.

Unwitting fans who hadn’t read the tea leaves of opaque app permissions took to social media to vent their anger at finding they’d been co-opted into an unofficial LaLiga piracy police force as the app repurposed their smartphone sensors to rat out their favorite local bars.

The spy mode function is not mentioned in the app’s description.

El Diario reports the fine was issued by Spain’s data protection watchdog, the AEPD. A spokesperson for the watchdog confirmed the penalty but told us the full decision has not yet been published.

Per El Diario’s report, the AEPD found LaLiga failed to be adequately clear about how the app recorded audio, violating Article 5.1 of the GDPR — which requires that personal data be processed lawfully, fairly and in a transparent manner. It said LaLiga should have indicated to app users every time the app remotely switched on the microphone to record their surroundings.

Had LaLiga done so, that would have required some form of in-app notification once per minute whenever a football match was in play, since — once granted permission to record audio — the app does so for five seconds every minute during league games.

Instead the app only asks for permission to use the microphone twice per user (per LaLiga’s explanation).

The AEPD found the level of notification the app provides to users inadequate — pointing out, per El Diario’s report, that users are unlikely to remember what they have previously consented to each time they use the app.

It suggests active notification could be provided to users each time the app is recording, such as by displaying an icon that indicates the microphone is listening in, according to the newspaper. 

The watchdog also found LaLiga violated Article 7.3 of the GDPR, which stipulates that when consent is the legal basis for processing personal data, users should have the right to withdraw that consent at any time. Yet, again, the LaLiga app does not offer users an ongoing chance to withdraw consent to its spy mode recording after the initial permission requests.

LaLiga has been given a month to correct the violations in the app. However, in a statement responding to the AEPD’s decision, the association has denied any wrongdoing — and said it plans to appeal the fine.

“LaLiga disagrees deeply with the interpretation of the AEPD and believes that it has not made the effort to understand how the technology [functions],” it writes. “For the microphone functionality to be active, the user has to expressly, proactively and on two occasions grant consent, so a lack of transparency or information about this functionality cannot be attributed to LaLiga.”

“LaLiga will appeal the decision in court to prove that it has acted in accordance with data protection regulations,” it adds.

A video produced by LaLiga to try to sell the spy mode function to fans following last year’s social media backlash claims it does not capture any personal data — and describes the dual permission requests to use the microphone as “an exercise in transparency”.

Clearly, the AEPD takes a very different view.

LaLiga’s argument against the AEPD’s decision that it violated the GDPR appears to rest on its suggestion that the watchdog does not understand the technology it’s using — which it claims “neither records, stores, nor listens to conversations”.

So it looks to be trying to push its own self-serving interpretation of what is and isn’t personal data. (Nor is it the only commercial entity attempting that, of course.)

In the response statement, which we’ve translated from Spanish, LaLiga writes:

The technology used is designed to generate exclusively a specific sound footprint (fingerprint acoustic). This fingerprint only contains 0.75% of the information, discarding the remaining 99.25%, so it is technically impossible to interpret the voice or human conversations.

This fingerprint is transformed into an alphanumeric code (hash) that cannot be reversed to recreate the original sound. The technology’s operation is backed by an independent expert report, that among other arguments that favor our position, concludes that it “does not allow LaLiga to know the contents of any conversation or identify potential speakers”. Furthermore, it adds that this fraud control mechanism “does not store the information captured from the microphone of the mobile” and “the information captured by the microphone of the mobile is subjected to a complex transformation process that is irreversible”.

In comments to El Diario, LaLiga also likens its technology to the Shazam app — which fingerprints audio recorded in real time via the phone’s microphone and compares it against a database to identify a song.

However, Shazam users manually activate its listening feature, and are shown a visual ‘listening’ icon during the process. Whereas LaLiga has created an embedded spy mode that systematically switches itself on after a couple of initial permissions. So it’s perhaps not the best comparison to draw.
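
For a sense of what such fingerprinting involves, here is an illustrative sketch, not LaLiga's unpublished algorithm: reducing each audio window to its loudest frequency bin and hashing the sequence preserves enough to match a broadcast while making any speech in the room unrecoverable.

```python
# Illustrative Shazam-style acoustic fingerprint -- not LaLiga's actual,
# unpublished algorithm. Only one spectral peak per window survives, and the
# peak sequence is hashed, so neither voices nor conversations can be
# reconstructed from the output.
import hashlib
import numpy as np

def fingerprint(samples: np.ndarray, window: int = 1024) -> str:
    peaks = []
    for start in range(0, len(samples) - window, window // 2):
        frame = samples[start:start + window] * np.hanning(window)
        spectrum = np.abs(np.fft.rfft(frame))
        peaks.append(int(spectrum.argmax()) % 256)   # loudest bin, one byte
    return hashlib.sha256(bytes(peaks)).hexdigest()  # irreversible digest

# Five seconds of synthetic noise at 16 kHz stands in for a mic capture.
print(fingerprint(np.random.default_rng(0).standard_normal(16_000 * 5)))
```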

LaLiga’s statement adds that the audio eavesdropping on fans’ surroundings is intended to “achieve a legitimate goal” of fighting piracy. 

“LaLiga would not be acting diligently if it did not use all means and technologies at its fingertips to fight against piracy,” it writes. “It is a particularly relevant task taking into account the enormous magnitude of fraud in the marketing system, which is estimated at approximately 400 million euros per year.”

LaLiga also says it will not be making any changes to how the app functions because it already intends to remove what it describes to El Diario as “experimental” functionality at the end of the current football season, which ends June 30.

What can cities learn from Amazon HQ?

With the backlash from Amazon HQ fresh in our minds, it’s time to think strategically about how lessons from corporate innovations and digital technology services can improve and inform urban life in a way that puts people front and center. Doing so properly, however, will require an investment in structured engagement processes from the outset to ensure community buy-in, legitimacy and genuine co-creation with the private sector.

The move toward urban life

Increasingly, people are living in cities — with 55% living in cities today and the UN estimating that more than two-thirds of the globe’s population will live in cities by 2050. Moreover, cities are becoming hubs of technological innovation. Metropolitan statistical area data shows that cities are home to more and more STEM and high-tech workers.

And in 2018, New York City raised almost $11.5 billion in venture capital (VC) funding, second only to Silicon Valley as one of the highest-performing innovation ecosystems. Global real estate firm Savills UK and many others are even referring to New York and similar cities as “Silicon Alley.” The original Silicon Valley now has a lot of competition when it comes to VC funding, a more diverse and skilled talent pool and opportunities.

But companies aren’t the only change in urban areas.

Census data, per the analysis of William Frey, shows that American cities are becoming home to a younger, more skilled and more diverse population, with more individuals born outside the city or even outside the country. These demographic changes are going to have a major impact: Today’s shifting populations come with their own cultures, needs and, more importantly, expectations of what governance and service delivery look like.

The role of technology companies

Enter tech companies. Companies from Amazon to automakers are now also data-collection companies. A 2016 McKinsey report estimates that the data car companies collect on users will be a $750 billion industry by 2030. This includes location data, driving patterns and behavior, and vehicle-use data — like sensor readings of speed and road markings — all of which is transmitted directly to automakers.

There are more worrying indicators too — like newer cars recording drivers’ eye movements, the weight of people in the front seats and whether the driver’s smartphone was connected to the car — pointing to targeted uses of data. What’s more pernicious is that this data is tenuously held, or worse, could be used against the driver.

A lawsuit against General Motors found that warrantless tracking was not permitted, and the question made its way into a 2012 Supreme Court decision on the same. While the information gathered can help driving performance and safety, it still constitutes a huge infringement of privacy when it comes to losing control over your own data to massive monopolies. Moreover, the consumer gives up the right to advocate for themselves if the only account of an accident or a defect a company is receptive to is the vehicle’s.

As these companies continue to amass large quantities of data on people, they are able to deliver tailored experiences and services to a population growing increasingly used to receiving them. Try using Google Maps with privacy settings enabled and see what happens. Cities and their residents have become used to navigating with the help of data that knows where you’re going and where you’ve been. Regardless of how the data is used, people have fundamentally gotten used to a personalized and tailored system of services — whether it is Google Maps knowing how far locations are from their home, a Nest cam telling them when someone enters the baby’s room or a Lyft car coming directly to their door on a rainy night.

Tech companies’ new powers pose two challenges to government: While their services raise privacy concerns that demand government involvement and regulation, these corporations also change how these new urban populations expect to receive basic services.

Amazon HQ2 may be out of New York City, but Amazon continues to set the standard for what New Yorkers expect from their companies. For example, Amazon’s recent push for next-day shipping creates an industry standard that puts pressure on other companies. But there are a lot of lessons to learn from Amazon leaving.

First, tailored service delivery needs to benefit all, not just the few. And as The New York Times’ recent privacy series shows us, the disadvantages of data collection cannot fall disproportionately on the few and the most vulnerable. All companies have access to an unprecedented level of data on their consumer bases, but there is now an opportunity to use this to expand an audience base so that all city residents are beneficiaries of tailored tech services rather than only the few. Economies of scale will allow companies to serve residents outside of the downtown core.

How will we ensure this data is not used perniciously? That’s where the public sector steps in. If we’ve learned anything from Amazon and the rise of ridesharing apps, it’s that residents are seeking tailored service delivery, but not at the expense of their own privacy. The public sector can use multiple tools: enforcement of guidelines to protect residents, punitive measures against organizations that seek to harm and expanding digital access so the benefits of innovation can be shared.

Second, the public sector can leverage some of the same innovations and digital technologies that their private counterparts are using. No, not CompStat, but moving on from disparately sourced Excel files and analog notes, it’s high time for government to adopt CRMs that enable quick and efficient service delivery. At a time when city residents can get a car and groceries delivered to their house at any hour of the day, governments, too, should meet their constituents where they are.

Third, the question then arises: How do you create a structured engagement process that enables co-creation from the outset, sets realistic expectations, and moves beyond public affairs toward genuine community empowerment? How do you get residents and governments to come together? Moreover, how is this structured engagement process going to co-create with all communities, rather than some? This must include traditionally marginalized communities and communities of color.

The “middleware” of the future

Companies are moving faster than governments on questions around the future of people’s privacy with large implications for governance.

How do we create “middleware,” as Ari Wallach, founder of Longpath, describes the space, for new forms of understanding to arise?

The idea of encouraging “middleware” comes out of a common challenge: a lack of realistic expectations set on behalf of both companies and communities themselves. Currently, real, structural limitations prevent dialog and co-production. Too often, it’s public affairs shops or removed experts without true experience on the ground running community engagement on behalf of technology companies. On the other side, NGOs need a nuanced understanding of the changing nature of society and the opportunity for technology companies to be productive community members. What arises, if successful, is a space for structured dialog, deliberation and engagement that leads to productive, co-produced outcomes.

This middleware of the future will enable participatory mechanisms to ensure mutual respect and cooperation between communities and the companies that will increasingly shape the urban landscape, be it in the built environment, the data-sphere or some combination of both.

We need to create third-party spaces and processes that have transparency and accountability, and that actively engage and empower communities. These spaces can meet communities where they are now. If done well, technology companies can work with communities to help them grow, adapt and become more responsive and better equipped for the changing societal trends facing the future.

What would these convenings look like in practice? They will create transparent, open processes that bring together community leaders, academia, industry and experts in facilitation to foster genuine dialog and understanding. On the one hand, they will require community groups to gain deeper expertise about the vast quantities of data being collected on them. On the other hand, the public also needs awareness of the opportunities for leveraging that data to improve their communities and public services. And grassroots groups need government support to make sure that data collection is fair, reasonable and regulated.

Through structured and facilitated engagement, communities will make road maps, share their expectations, air their frustrations, outline the opportunities and work toward actionable solutions. These engagements will enable opportunities for weighing realistic trade-offs, identifying barriers to implementation and addressing the very real concerns around equity and structural inequities.

The future is already here. Community organizations bring deep know-how of residents and neighborhoods. Technology companies not only possess vast amounts of data on people but are also intricately linked to the way people live their lives today and in the future. Both would benefit by speaking to one another and co-creating this “middleware.”

There is no putting the genie back in the bottle.

There is, however, an opportunity for new dialog and process. Companies will continue to outpace the public sector on important governance decisions. Whether or not Amazon HQ left Long Island City, there is a need for better processes and understanding of these companies’ roles and responsibilities: a participatory business model that is not based on conflict, but rather empowers people to be active participants in shaping their future.

Wire collaborates with EY for on-premise end-to-end encrypted messaging app

End-to-end encrypted messaging app and service Wire announced a partnership with accounting and consulting company EY. Essentially, Wire is providing an on-premise version of its messaging service so that EY can control the servers and use it for its communication needs.

Both companies announced the deal a few weeks ago, and I talked with Wire and EY executives about the thinking behind this implementation.

“It’s very hard to monetize [Wire] on the consumer market,” Wire CEO Morten Brøgger told me. The company thinks it’ll never become a big messaging app with hundreds of millions of users — that ship has sailed. That’s why the company launched a team messaging product a couple of years ago.

Teams can sign up to Wire and use it as a sort of Slack replacement with end-to-end encryption on messages, files, calls, etc. The company uses a software-as-a-service approach and charges €4 to €6 per user per month.

Around 600 companies are using this solution across a wide range of industries, from cybersecurity to M&A firms. They share confidential data, so Slack is not an option.

Wire is now going one step further by providing on-premise deployment and custom integrations. EY wanted an end-to-end encrypted messaging service to share messages and files with clients. And the company wanted to control the servers in house.

“Compared with some other solutions, when you dig into the technology, sometimes you discover that messages are encrypted but not attachments. With Wire, everything has the same level of encryption,” EY France Chief Digital Officer Yannick de Kerhor told me.

Even though EY manages the servers, everything is still encrypted on devices, meaning that neither EY nor potential hackers who access the servers can decrypt messages. EY conducted a pilot with 150 people and roughly 20 clients, and now plans to roll it out to more teams across the company.
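
That guarantee is straightforward to sketch. Wire's actual protocol is Proteus, its double-ratchet implementation, but the core property, that the relay server holds no keys, can be illustrated with PyNaCl's public-key boxes:

```python
# Sketch of the end-to-end property using PyNaCl -- not Wire's actual Proteus
# protocol. Private keys never leave the two devices, so a relay server
# (even one EY hosts itself) only ever stores and forwards ciphertext.
from nacl.public import Box, PrivateKey

consultant_key = PrivateKey.generate()  # lives only on the consultant's device
client_key = PrivateKey.generate()      # lives only on the client's device

# Sender encrypts with its own private key and the recipient's public key.
ciphertext = Box(consultant_key, client_key.public_key).encrypt(b"draft findings")

# The recipient decrypts; the server in between never could.
plaintext = Box(client_key, consultant_key.public_key).decrypt(ciphertext)
assert plaintext == b"draft findings"
```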

There are currently 70 people working for Wire and the company is already working on more on-premise implementations. Let’s see if this enterprise strategy creates a reliable business model to make end-to-end encryption more ubiquitous.