OneTrust raises $200M at a $1.3B valuation to help organizations navigate online privacy rules

GDPR, and the newer California Consumer Privacy Act, have given a legal bite to ongoing developments in online privacy and data protection: it’s always good practice for companies with an online presence to take measures to safeguard people’s data, but now failing to do so can land them in some serious hot water.

Now — to underscore the urgency and demand in the market — one of the bigger companies helping organizations navigate those rules is announcing a huge round of funding. OneTrust, which builds tools to help companies navigate data protection and privacy policies both internally and with their customers, has raised $200 million in a Series A round of funding led by Insight that values the company at $1.3 billion.

It’s an outsized round for a Series A, made at an equally outsized valuation — especially considering that the company is only three years old — but that, according to CEO Kabir Barday, is because of the wide-ranging nature of the issue, and OneTrust’s early moves and subsequent pole position in tackling it.

“We’re talking about an operational overhaul in a company’s practices,” Barday said in an interview. “That requires the right technology and reach to be able to deliver that at a low cost.” Notably, he said that OneTrust wasn’t actually in search of funding — it’s already revenue generating and could have grown off its own balance sheet — although he noted that having the capitalization and backing sends a signal to the market and in particular to larger organizations of its stability and staying power.

Currently, OneTrust has around 3,000 customers across 100 countries (and 1,000 employees), and the plan will be to continue to expand its reach geographically and to more businesses. Funding will also go towards the company’s technology: it already has 50 patents filed and another 50 applications in progress securing its own IP in the area of privacy protection.

OneTrust offers technology and services covering three different aspects of data protection and privacy management.

Its Privacy Management Software helps organizations manage how they collect data, as well as generate compliance reports on how a site is operating relative to different jurisdictions. Then there is the famous (or infamous) service that lets internet users set their preferences for how they want their data to be handled on different sites. The third is a larger database and risk management platform that assesses how various third-party services (for example, advertising providers) work on a site and where they might pose data protection risks.

These are all provided either as a cloud-based software as a service, or an on-premises solution, depending on the customer in question.

The startup also has an interesting backstory that sheds some light on how it was founded and how it identified the gap in the market relatively early.

Alan Dabbiere, who is the co-chairman of OneTrust, had been the chairman of Airwatch — the mobile device management company acquired by VMware in 2014 (Airwatch’s CEO and founder, John Marshall, is OneTrust’s other co-chairman). In an interview, he told me that it was while they were at Airwatch — where Barday had worked across consulting, integration, engineering and product management — that they began to see just how a smartphone “could be a quagmire of information.”

“We could capture apps that an employee was using so that we could show them to IT to mitigate security risks,” he said, “but that actually presented a big privacy issue. If you have dyslexia or if you use a dating app, you’ve now shown things to IT that you shouldn’t have.”

He admitted that in the first version of the software, “we weren’t even thinking about whether that was inappropriate, but then we quickly realised that we needed to be thinking about privacy.”

Dabbiere said that it was Barday who first brought that sensibility to light, and “that is something that we have evolved from.” After that, and after the VMware sale, it seemed a no-brainer that he and Marshall would come on to help the new startup grow.

Airwatch made a relatively quick exit, I pointed out. His response: the plan is to stay the course at OneTrust, with a lot more room for expansion in this market. He describes the issues of data protection and privacy as “death by 1,000 cuts.” I guess when you think about it from an entrepreneurial point of view, that essentially presents 1,000 business opportunities.

Indeed, there is obvious growth potential to expand not just its funnel of customers, but to add in more services, such as proactive detection of malware that might leak customers’ data (such as in the recently-fined breach at British Airways), as well as tools to help stop that once identified.

While there are a million other companies also looking to fix those problems today, what’s interesting is the point from which OneTrust is starting: by providing tools to organizations simply to help them operate in the current regulatory climate as good citizens of the online world.

This is what caught Insight’s eye with this investment.

“OneTrust has truly established themselves as leaders in this space in a very short timeframe, and are quickly becoming for privacy professionals what Salesforce became for salespeople,” said Richard Wells of Insight. “They offer such a vast range of modules and tools to help customers keep their businesses compliant with varying regulatory laws, and the tailwinds around GDPR and the upcoming CCPA make this an opportune time for growth. Their leadership team is unparalleled in their ambition and has proven their ability to convert those ambitions into reality.”

He added that while this is a big round for a Series A, it’s something of an outlier — not a mark of how Series A rounds will trend going forward.

“Investors will always be interested in and keen to partner with companies that are providing real solutions, are already established and are led by a strong group of entrepreneurs,” he said in an interview. “This is a company that has the expertise to help solve for what could be one of the greatest challenges of the next decade. That’s the company investors want to partner with and grow, regardless of fund timing.”

What everyone at a startup needs to know about immigration

The immigration process in the U.S. has become a high-stakes undertaking for employers, workers, and entrepreneurs. Predictability has eroded. Processing times have soared. And any mistake or misstep now has dire consequences.

Over the past three years, immigration policies and procedures have been in a state of flux and the process has become more unforgiving for even the smallest mistakes. Putting your best foot forward is crucial. Employers and individuals need to formulate a long-term strategy and backup options to stay protected.

The increase in Requests for Evidence and the backlog for many visa and green card categories has meant longer waiting times. What’s more, the Trump administration’s recent decision to close all USCIS’s international offices—and shift that workload back to the U.S.—is expected to compound the backlogs and delays.

We are seeing these issues affect startups every day. My law firm works with hundreds of startups every year to help them and their employees figure out their immigration paperwork. The overall piece of advice we give is to decide on a specific goal based on a deep understanding of the company and the individual, and by examining the options strategically.

Then, you can figure out the right approach for a visa, green card, or citizenship application. Regardless of my personal interest in the matter, now more than ever, I recommend consulting with an experienced immigration attorney who can handle the process with integrity, creativity, compassion, and rigor.

What employers should know

The new normal for immigration means increased employee recruiting and retention costs for employers. However, hiring immigrants remains possible.

With a fresh $10 million in the bank, DotLab hopes to bring endometriosis test to market

Heather Bowerman, the 33-year-old founder of personalized medicine company DotLab, wants to shake up the women’s health industry with what she believes is a better, cheaper, less painful test for endometriosis.

Her company has just completed a Yale University-led validation study and raised $10 million in Series A funding from CooperSurgical, Tiger Global Management, Luxor Capital Group and the law firm Wilson Sonsini Goodrich & Rosati to bring a new, non-invasive diagnostic test to market.

DotEndo

Endometriosis is an often painful disorder in which tissue begins to grow outside of the uterus and into a woman’s ovaries, fallopian tubes and pelvis. The disease may affect up to one in 10 women of childbearing years and about half of all women who experience infertility, according to the U.S. Department of Health and Human Services.

However, even with clear symptoms of the disease, doctors often try to test for endometriosis as a last resort. The only way currently to test for it is through an invasive laparoscopic procedure, which comes with risks like internal bleeding, infections and hernia.

Called DotEndo, the new DotLab test eliminates that risk with a simple diagnostic test. “The rationale for using our test is to test as early as possible and also to use it non-invasively,” Bowerman told TechCrunch.

The CEO was also quick to point out DotEndo is not a genetic test, as there are plenty of tests out on the market helping women discover possible genetic markers around fertility. Rather, it’s a physician-ordered diagnostic test you would take through a lab to find out if you have this specific disease.

“The revolutionary technology behind DotLab’s endometriosis test could improve the lives of the hundreds of millions of women affected by this debilitating disease which has been under-researched and deprioritized for too long,” Bowerman said in a statement.

While there has been some innovation in the space lately — U.S. regulators just approved a new pill to treat endometriosis pain — Bowerman is right that we still have a long way to go in diagnosing and curing the disease, and that will take a lot more capital from investors in the future.

Meanwhile, the next step for DotLab will be to get its test into the hands of physicians, with the hope they recommend DotEndo right off the bat to patients exhibiting symptoms.

U.S. Senator and consumer advocacy groups urge FTC to take action on YouTube’s alleged COPPA violations

The groups behind a push to get the U.S. Federal Trade Commission to investigate YouTube’s alleged violation of children’s privacy law, COPPA, have today submitted a new letter to the FTC that lays out the appropriate sanctions the groups want the FTC to now take. The letter comes shortly after news broke that the FTC was in the final stages of its probe into YouTube’s business practices regarding this matter.

They’re joined in pressing the FTC to act by COPPA co-author, Senator Ed Markey, who penned a letter of his own, which was also submitted today.

The groups’ formal complaint with the FTC was filed back in April 2018. The coalition, which then included 20 child advocacy, consumer and privacy groups, had claimed YouTube doesn’t get parental consent before collecting the data from children under the age of 13 — as is required by the Children’s Online Privacy Protection Act, also known as COPPA.

The organizations said, effectively, that YouTube was hiding behind its terms of service, which state that YouTube is “not intended for children under 13.”

This simply isn’t true, as any YouTube user knows. YouTube is filled with videos that explicitly cater to children, from cartoons to nursery rhymes to toy ads — the latter of which often come about by way of undisclosed sponsorships between toy makers and YouTube stars. The video creators will excitedly unbox or demo toys they received for free or were paid to feature, and kids just eat it all up.

In addition, YouTube curates much of its kid-friendly content into a separate YouTube Kids app that’s designed for the under-13 crowd — even preschoolers.

Meanwhile, YouTube treats children’s content like any other. That means targeted advertising and commercial data collection are taking place, the groups’ complaint states. YouTube’s algorithms also recommend videos and autoplay its suggestions — a practice that led to kids being exposed to inappropriate content in the past.

Today, two of the leading groups behind the original complaint — the Campaign for a Commercial-Free Childhood (CCFC) and Center for Digital Democracy (CDD) — are asking the FTC to impose the maximum civil penalties on YouTube because, as they’ve said:

Google had actual knowledge of both the large number of child-directed channels on YouTube and the large numbers of children using YouTube. Yet, Google collected personal information from nearly 25 million children in the U.S. over a period of years, and used this data to engage in very sophisticated digital marketing techniques. Google’s wrongdoing allowed it to profit in two different ways: Google has not only made a vast amount of money by using children’s personal information as part of its ad networks to target advertising, but has also profited from advertising revenues from ads on its YouTube channels that are watched by children.

The groups are asking the FTC to impose a 20-year consent decree on YouTube.

They want the FTC to order YouTube to destroy all data collected from children under 13, including any inferences drawn from that data, in Google’s possession. YouTube should also stop collecting data from anyone under 13, including anyone viewing a channel or video directed at children. Kids’ ages would also need to be identified so they could be prevented from accessing YouTube.

Meanwhile, the groups suggest that all the channels in the Parenting and Family lineup, plus any other channels or videos directed at children, be removed from YouTube and placed on a separate platform for children (e.g. the YouTube Kids app).

This is something YouTube is already considering, according to a report from The Wall Street Journal last week.

This separate kids’ platform would have a variety of restrictions, including no commercial data collection; no links out to other sites or online services; no targeted marketing; no product or brand integration; no influencer marketing; and even no recommendations or autoplay.

The removal of autoplaying videos and recommendations, in particular, would be a radical change to how YouTube operates, but one that could protect kids from inappropriate content that slips in. It’s also a change that some employees inside YouTube itself were vying for, according to The WSJ’s report. 

The groups also urge the FTC to require Google to fund educational campaigns around the true nature of Google’s data-driven marketing systems, admit publicly that it violated the law, and submit to annual audits to ensure its ongoing compliance. They want Google to commit $100 million to establish a fund that supports the production of noncommercial, high-quality and diverse content for kids.

Finally, the groups are asking that Google face the maximum possible civil penalties — $42,530 per violation, which could be counted either per child or per day. The monetary relief needs to be severe, the groups argue, so that Google and YouTube are deterred from ever violating COPPA again.
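To give a rough sense of scale (a back-of-the-envelope calculation of my own, not a figure from the complaint): if each of the roughly 25 million children the complaint cites counted as a single violation, the theoretical ceiling would run past a trillion dollars.

```python
# Statutory maximum per COPPA violation at the time of the complaint.
per_violation = 42_530

# The complaint says Google collected data from nearly 25 million
# children in the U.S. (an approximate figure, used here only for scale).
children = 25_000_000

# Theoretical maximum if each affected child counted as one violation.
print(f"${per_violation * children:,}")  # → $1,063,250,000,000
```

No one expects a fine of that size; the point is that a per-child count gives the FTC enormous leverage in settlement talks.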

This laundry list of suggestions reads more like a wish list for the ideal resolution, and that doesn’t mean the FTC will follow through on all of them.

However, it seems likely that the Commission would at least require YouTube to delete the improperly collected data and isolate the kids’ YouTube experience in some way. After all, that’s precisely what it just did with TikTok (previously Musical.ly), which earlier this year paid a record $5.7 million fine for its own COPPA violations. It also had to implement an age gate where under-13 kids were restricted from publishing content.

The advocacy groups aren’t the only ones making suggestions to the FTC.

Senator Ed Markey (D-Mass.) also sent the FTC a letter today about YouTube’s violations of COPPA — a piece of legislation that he co-authored.

In his letter, he urges the FTC to take a similar set of actions, saying:

“I am concerned that YouTube has failed to comply with COPPA. I therefore, urge the Commission to use all necessary resources to investigate YouTube, demand that YouTube pay all monetary penalties it owes as a result of any legal violations, and instruct YouTube to institute policy changes that put children’s well-being first.”

His suggestions are similar to those being pushed by the advocacy groups. They include demands for YouTube to delete the children’s data and cease data collection on those under 13; implement an age gate on YouTube to come into compliance with COPPA; prohibit targeted and influencer marketing; offer detailed explanations of what data is collected if used for “internal purposes”; undergo a yearly audit; provide documentation of compliance upon request; and establish a fund for noncommercial content.

He also wants Google to sponsor a consumer education campaign warning parents that no one under 13 should use YouTube, and wants Google to be prohibited from launching any new child-directed product until it has been reviewed by an independent panel of experts.

The FTC’s policy doesn’t allow it to confirm or deny nonpublic investigations. YouTube hasn’t yet commented on the letters.

Privacy policies are still too horrible to read in full

A year on from Europe’s flagship update to the pan-EU data protection framework the Commission has warned that too many privacy policies are still too hard to read and has urged tech companies to declutter and clarify their T&Cs. (So full marks to Twitter for the timing of this announcement.)

Announcing the results of a survey of the attitudes of 27,000 Europeans vis-a-vis data protection, the Commission said a large majority (73%) of EU citizens have heard of at least one of the six tested rights guaranteed by the General Data Protection Regulation (GDPR), which came into force at the end of May last year. But only a minority (30%) are aware of all their rights under the framework.

The Commission said it will launch a campaign to boost awareness of privacy rights and encourage EU citizens to optimise their privacy settings — “so that they only share the data they are willing to share”.

In instances of consent-based data processing, the GDPR guaranteed rights include the right to access personal data and get a copy of it without charge; the right to request rectification of incomplete or inaccurate personal data; the right to have data deleted; the right to restrict processing; and the right to data portability.

The highest levels of awareness recorded by the survey were for the right to access one’s own data (65%); the right to correct data that is wrong (61%); the right to object to receiving direct marketing (59%); and the right to have one’s own data deleted (57%).


Commenting in a statement, Andrus Ansip, VP for the Digital Single Market, said: “European citizens have become more aware of their digital rights and this is encouraging news. However, only three in ten Europeans have heard of all their new data rights. For companies, their customers’ trust is hard currency and this trust starts with the customers’ understanding of, and confidence in, privacy settings. Being aware is a precondition to being able to exercise your rights. Both sides can only win from clearer and simpler application of data protection rules.”

“Helping Europeans regain control over their personal data is one of our biggest priorities,” added Věra Jourová, commissioner for justice, consumers and gender equality, in another supporting statement. “But, of the 60% of Europeans who read their privacy statements, only 13% read them fully. This is because the statements are too long or too difficult to understand. I once again urge all online companies to provide privacy statements that are concise, transparent and easily understandable by all users. I also encourage all Europeans to use their data protection rights and to optimise their privacy settings.”

Speaking at a Commission event to mark the one-year anniversary of the GDPR, Jourová couched the regulation as “growing fast” and “doing well” but said it needs continued nurturing to deliver on its promise — warning against fragmentation, or so-called ‘gold-plating’, by national agencies adding additional conditions or taking an expansive interpretation of the rules.

She also said “strong and coherent” enforcement is essential — but claimed that fears national watchdogs would become “sanctioning machines” have not materialised.

Though she made a point of emphasizing that: “National data protection authorities are the key for success.”

And it’s fair to say that enforcement remains a rare sight one year on from the regulation being applied — certainly in complaints attached to tech giants (Google is an exception) — which has fuelled a narrative in some media outlets that tries to brand the entire update a failure. But it was never likely data watchdogs would rush to judgement on a sharply increased workload at the same time as they were bedding into a new way of working for cross-border complaints, under GDPR’s one-stop-shop mechanism.

Regulators have also been conscious that data handlers are finding their feet under the new framework, and have allowed time for their compliance. But from here on in it’s fair to say there will be growing expectation from EU citizens for enforcement to uphold their rights.

The EU data protection agency with the biggest bunch of strategic keys where GDPR is concerned is the Irish Data Protection Commission — which has seen complaints filed since the regulation came into force more than double, thanks to the country being a (low tax) favorite for tech giants to base their European HQs.

The Irish DPC has around 18 open investigations into tech giants at this stage — including, most recently, a formal probe of Google’s adtech, which is in response to a number of complaints filed across Europe about how real-time bidding systems handle personal data.

Adtech veteran Quantcast’s processing and aggregating of personal data is also being formally probed.

Other open investigations on the Irish DPC’s plate include a large number into various aspects of multiple Facebook-owned businesses, as well as a smaller number of probes into Apple, LinkedIn and Twitter’s data handling. So it is certainly one to watch.

In comments at today’s event to mark the one-year anniversary of the GDPR, Ireland’s data protection commissioner indicated that some of these investigations will result in judgements this summer.

“We prioritise fair and high quality judgements. We keep our focus on the job. We have a big quantity of large scale investigations on the way and some of them will be finalised this summer,” said Helen Dixon.

Also speaking at the event, Qwant’s founder Eric Leandri said GDPR has been a boon to his pro-privacy search engine business — suggesting it has increased its growth rate to 30% per week.

“People who understand what data privacy means are inclined to protect their privacy,” he added.

LaLiga fined $280k for soccer app’s privacy violating spy mode

Spanish soccer’s premier league, LaLiga, has netted itself a €250,000 (~$280k) fine for privacy violations of Europe’s General Data Protection Regulation (GDPR) related to its official app.

As we reported a year ago, users of the LaLiga app were outraged to discover the smartphone software does rather more than show minute-by-minute commentary of football matches — but can use the microphone and GPS of fans’ phones to record their surroundings in a bid to identify bars which are unofficially streaming games instead of coughing up for broadcasting rights.

Unwitting fans who hadn’t read the tea leaves of opaque app permissions took to social media to vent their anger at finding they’d been co-opted into an unofficial LaLiga piracy police force as the app repurposed their smartphone sensors to rat out their favorite local bars.

The spy mode function is not mentioned in the app’s description.

El Diario reports that the fine was issued by Spain’s data protection watchdog, the AEPD. A spokesperson for the watchdog confirmed the penalty but told us the full decision has not yet been published.

Per El Diario’s report, the AEPD found LaLiga failed to be adequately clear about how the app recorded audio, violating Article 5.1 of the GDPR — which requires that personal data be processed lawfully, fairly and in a transparent manner. It said LaLiga should have indicated to app users every time the app remotely switched on the microphone to record their surroundings.

If LaLiga had done so, that would have required some form of in-app notification once per minute whenever a football match is in play, since — once granted permission to record audio — the app does so for five seconds every minute while a league game is happening.

Instead the app only asks for permission to use the microphone twice per user (per LaLiga’s explanation).

The AEPD found the level of notification the app provides to users inadequate — pointing out, per El Diario’s report, that users are unlikely to remember what they have previously consented to each time they use the app.

It suggests active notification could be provided to users each time the app is recording, such as by displaying an icon that indicates the microphone is listening in, according to the newspaper. 

The watchdog also found LaLiga to have violated Article 7.3 of the GDPR, which stipulates that when consent is the legal basis for processing personal data, users have the right to withdraw that consent at any time. Again, though, the LaLiga app does not offer users an ongoing chance to withdraw consent to its spy mode recording after the initial permission requests.

LaLiga has been given a month to correct the violations with the app. However, in a statement responding to the AEPD’s decision, the association has denied any wrongdoing and said it plans to appeal the fine.

“LaLiga disagrees deeply with the interpretation of the AEPD and believes that it has not made the effort to understand how the technology [functions],” it writes. “For the microphone functionality to be active, the user has to expressly, proactively and on two occasions grant consent, so a lack of transparency or information about this functionality cannot be attributed to LaLiga.”

“LaLiga will appeal the decision in court to prove that it has acted in accordance with data protection regulations,” it adds.

A video produced by LaLiga to try to sell the spy mode function to fans following last year’s social media backlash claims it does not capture any personal data — and describes the dual permission requests to use the microphone as “an exercise in transparency”.

Clearly, the AEPD takes a very different view.

LaLiga’s argument against the AEPD’s decision that it violated the GDPR appears to rest on its suggestion that the watchdog does not understand the technology it’s using, which it claims “neither records, stores nor listens to conversations”.

So it looks to be trying to push its own self-serving interpretation of what is and isn’t personal data. (Nor is it the only commercial entity attempting that, of course.)

In the response statement, which we’ve translated from Spanish, LaLiga writes:

The technology used is designed to generate exclusively a specific sound footprint (an acoustic fingerprint). This fingerprint contains only 0.75% of the information, discarding the remaining 99.25%, so it is technically impossible to interpret human voices or conversations.

This fingerprint is transformed into an alphanumeric code (hash) that cannot be reversed to recreate the original sound. The technology’s operation is backed by an independent expert report which, among other arguments that favor our position, concludes that it “does not allow LaLiga to know the contents of any conversation or identify potential speakers”. Furthermore, it adds that this fraud control mechanism “does not store the information captured from the microphone of the mobile” and that “the information captured by the microphone of the mobile is subjected to a complex transformation process that is irreversible”.
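The process LaLiga describes — reducing audio to a sparse acoustic fingerprint, then hashing it so the waveform cannot be recovered — can be sketched in toy form. This is emphatically not LaLiga’s actual algorithm; it is a minimal illustration of the general technique, using a naive DFT to keep one peak frequency bin per window and a SHA-256 digest of the peak sequence.

```python
import hashlib
import math

CHUNK = 64  # samples per analysis window
BINS = 8    # coarse frequency bins examined per window

def dominant_bin(chunk):
    """Naive DFT over a handful of coarse bins; return the loudest one.
    Keeping a single bin index per window discards almost all of the
    signal -- the kind of reduction LaLiga's statement describes."""
    best_bin, best_mag = 0, -1.0
    n = len(chunk)
    for k in range(1, BINS + 1):
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(chunk))
        im = sum(-x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(chunk))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_bin, best_mag = k, mag
    return best_bin

def fingerprint(samples):
    """Reduce audio to a sequence of peak bins, then hash the sequence.
    The SHA-256 digest cannot be inverted to recreate the original sound."""
    peaks = bytes(
        dominant_bin(samples[i:i + CHUNK])
        for i in range(0, len(samples) - CHUNK + 1, CHUNK)
    )
    return hashlib.sha256(peaks).hexdigest()
```

Two recordings of the same broadcast audio would tend to produce matching peak sequences, and hence matching hashes, which is what makes server-side matching against a reference feed possible without shipping raw audio.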

In comments to El Diario, LaLiga also likens its technology to the Shazam app, which identifies songs by comparing an acoustic fingerprint of audio recorded in real time via the phone’s microphone against a reference database.

However, Shazam users manually activate its listening feature and are shown a visual ‘listening’ icon during the process, whereas LaLiga has created an embedded spy mode that systematically switches itself on after a couple of initial permissions. So it’s perhaps not the best comparison.

LaLiga’s statement adds that the audio eavesdropping on fans’ surroundings is intended to “achieve a legitimate goal” of fighting piracy. 

“LaLiga would not be acting diligently if it did not use all means and technologies at its disposal to fight against piracy,” it writes. “It is a particularly relevant task taking into account the enormous magnitude of fraud in the marketing system, which is estimated at approximately 400 million euros per year.”

LaLiga also says it will not be making any changes to how the app functions because it already intends to remove what it describes to El Diario as “experimental” functionality at the end of the current football season, which ends June 30.

China says apps should get user consent before tracking

Chinese regulators might follow the European Union’s lead and make life harder for internet companies such as TikTok that closely track the behavior of their users, in a move that could significantly hurt their revenue.

Last week, Beijing proposed a new set of measures to enforce data security for individuals and the nation overall. According to Article 23 of the draft (see translation from China Law Translate), companies that are “using user data and algorithms to deliver news information or commercial advertisements shall conspicuously label them with the words ‘targeted’ and provide users with functionality to stop receiving information from targeted delivery.”

This is good news for users in China, who could potentially take more control over what they are shown and what tech companies collect about them.
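In practice, Article 23’s requirement could come down to a label plus an opt-out switch at the feed-rendering layer. A hedged sketch follows; the names are illustrative and do not come from any real Chinese platform’s SDK.

```python
from dataclasses import dataclass

@dataclass
class FeedItem:
    text: str
    targeted: bool  # delivered via user-data/algorithmic targeting?

@dataclass
class UserPrefs:
    allow_targeted: bool = True  # the draft requires a way to switch this off

def render_feed(items, prefs):
    """Conspicuously label algorithmically targeted items and honor the
    opt-out, as Article 23 of the draft measures would require."""
    out = []
    for item in items:
        if item.targeted:
            if not prefs.allow_targeted:
                continue  # user has stopped receiving targeted delivery
            out.append(f"[targeted] {item.text}")
        else:
            out.append(item.text)
    return out
```

The notable design point is that the opt-out filters at delivery time rather than at collection time, which is why commentators expect the measures to bite on ad revenue rather than on data gathering itself.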

On the flip side of the coin, stepped-up data protection will “definitely have an impact” on companies that rely heavily on data-crunching businesses, Michael Tan, a partner at law firm Taylor Wessing specializing in data policies, told TechCrunch.

Advances in artificial intelligence have helped adtech players get better at predicting people’s clicks and, in turn, boost their income. Few have done it better in the Chinese mobile age than Bytedance, the startup that operates TikTok and the popular Chinese news app Jinri Toutiao. In between viral videos and news are customized ads that help the eight-year-old company, which was last valued at a whopping $75 billion, make money.

Bytedance’s success with programmatic ads prompted more entrenched tech giants to follow suit. Baidu, which is China’s answer to Google with a lucrative ad business, added a personalized news feed to its search app in 2016 as Toutiao hit the mainstream. Tencent and Alibaba also incorporated customized feeds into their main products.

“Data is too important for internet companies,” a product manager at a Shenzhen-based tech firm told TechCrunch. A lot of businesses, he said, including Bytedance, are well-prepared for regulatory scrutiny so they have plenty of backup plans and have explored alternative revenue streams.

“For instance, the apps might trick you into giving them access to your data,” the person added. “Even if you consent, you still don’t know how your data is being used.”

Traffic control

In mid-2017, China introduced a sweeping Cybersecurity Law as Beijing sought more control over how data flows within its online borders. A lot of the clauses are broad and vague, but the government has taken incremental steps to solidify them over time, including efforts like the proposed measures for data protection.

“So far there is no unified data protection legal framework in place, though the topic is addressed by various laws and regulations including the PRC Cybersecurity Law,” explained Tan. “This is quite different from many other jurisdictions like that of the E.U. where there is a unified protection framework in place with primary focus on personal data and privacy protection.”

While the set of data regulations touches on individual privacy, Tan noted that the laws’ real focus is on topics “relating to national security protection.”

For example, Article 29 of the proposed data policies stipulates that “where mainland users visit the mainland internet, their traffic must not be routed outside the mainland.” The authority does not elaborate on what counts as “routing,” though some speculate that it might be targeting people accessing overseas websites through a VPN, the tool that allows them to get around China’s censorship apparatus.

Tan suggested otherwise, arguing that the clause might be introduced “with good intention to prevent fraudulent cases including conscious or unconscious visits to overseas websites which promote illegal business under Chinese law, for example, gambling sites,” although doing so may “inadvertently hurt China-based multinational companies that have their I.T. facilities deployed globally.”

The draft measures for data protection, which were published by the Cyberspace Administration of China, the country’s top internet authority, are currently soliciting public comment until June 28.

Facebook fails to stop Europe’s top court weighing in on EU-US data transfers

Facebook has failed in its last-ditch attempt to block Ireland’s High Court from referring questions over the legality of EU-US data transfer mechanisms to the region’s top court.

Ireland’s Supreme Court unanimously refused its request to appeal the decision on Friday, Reuters reported.

Irish law does not include a provision to appeal against referrals to the CJEU but Facebook sought to both stay and appeal the decision anyway.

It was denied the stay but granted leave to appeal last year — only for the Supreme Court to extinguish its hope of preventing Europe’s top judges from weighing in on privacy concerns attached to key data transfer mechanisms which are used by thousands of companies to authorize flows of EU citizens’ personal data to the US.

The case originates in a complaint against Facebook’s use of another data transfer mechanism, Standard Contractual Clauses (SCCs), by lawyer and EU privacy campaigner Max Schrems.

He famously brought down the prior EU-US data transfer arrangement, Safe Harbor — after successfully challenging its legality in the wake of NSA whistleblower Edward Snowden’s 2013 disclosures about US government mass surveillance programs. (Hence this follow-on challenge being referred to as ‘Schrems II‘.)

“Facebook likely again invested millions to stop this case from progressing. It is good to see that the Supreme Court has not followed Facebook’s arguments that were in total denial of all existing findings so far,” said Schrems in a statement responding to the Supreme Court’s rejection of Facebook’s appeal. “We are now looking forward to the hearing at the Court of Justice in Luxembourg next month.”

Also commenting in a statement, a Facebook spokesperson said: “We are grateful for the consideration of the Irish Court and look ahead to the Court of Justice of the European Union to now decide on these complex questions. Standard Contract Clauses provide important safeguards to ensure that Europeans’ data are protected once transferred overseas. SCCs have been designed and endorsed by the European Commission and are used by thousands of companies across Europe to do business.”

Schrems’ complaint to the Irish data protection regulator led, rather unusually, to the watchdog itself referring privacy concerns to the courts, which then widened the complaint, asking for the CJEU’s opinion on a range of fine-grained points around whether EU citizens’ rights are being adequately protected by both Privacy Shield and SCCs.

It’s shaping up to be a replay of the close CJEU scrutiny that skewered Safe Harbor: a definitive strike-down that instantly left thousands of companies scrambling to put alternative legal arrangements in place to avoid illegally processing EU citizens’ data.

At the time of writing there are 4,756 organizations signed up to the replacement Privacy Shield framework.

The European data protection landscape has also evolved since 2015 — with the General Data Protection Regulation (GDPR) ramping up the size of potential fines for privacy violations.

SCCs were one of the alternative mechanisms the European Commission suggested companies use in the interim between Safe Harbor’s demise in fall 2015 and Privacy Shield getting up and running in mid-2016, although they too are now facing legal questions, per the Schrems II case.

During renegotiations and since, the European Commission has always maintained that the Privacy Shield framework — which bakes in an annual review, and makes provision for an ombudsperson to handle any EU citizens’ complaints about how US companies handle their data — is more robust than its predecessor mechanism, claiming too that it is confident it will survive legal testing.

That confidence will soon be tested at Europe’s highest legal level.

Meanwhile, neither Privacy Shield nor SCCs are short of critics, including from within the EU’s institutions, with both the parliament and an influential body representing national data protection watchdogs expressing ongoing concerns.

The Trump administration’s entrenchment of privacy-hostile surveillance laws targeting non-US citizens has not helped Privacy Shield’s cause.

The US under Trump has also been tardy in filling key posts the Commission has said are required for Privacy Shield to function fully, prompting the Commission to set a compliance deadline earlier this year.

To a degree, though, all that is just fiddling around the edges versus the core contention at the heart of the complaints driving the challenges to Privacy Shield and SCCs: that there is a fundamental incompatibility between US law, which prioritizes national security, and EU law, which privileges personal privacy. That is what Europe’s top judges will soon be weighing in on again.

It’s already been more than a year since Ireland’s High Court referred eleven questions to the CJEU. And while the court can take years to deliberate, it’s worth noting that it did not do so with the original Schrems challenge. In that case judges took only a little over a year to return their landmark verdict striking down Safe Harbor, demonstrating they are willing to move quickly to defend EU privacy rights against the threat of mass surveillance.

Now, with Facebook’s last-ditch attempt to derail the CJEU referral kicked into touch, it’s quite possible they could move just as quickly towards a verdict on Schrems II. Certainly if they feel EU citizens’ fundamental rights are being infringed.

Privacy Shield is also facing a legal challenge brought by French digital rights groups — who similarly argue that it breaches fundamental EU rights. That complaint will be heard by the General Court of the EU on July 1 and 2.