Another US court says police cannot force suspects to turn over their passwords

The highest court in Pennsylvania has ruled that the state’s law enforcement cannot force suspects to turn over the passwords that would unlock their devices.

The state’s Supreme Court said compelling a password from a suspect violates the Fifth Amendment, the constitutional protection against self-incrimination.

It’s not a surprising ruling, given other state and federal courts have almost always come to the same conclusion. The Fifth Amendment grants anyone in the U.S. the right to remain silent, which includes the right not to turn over information that could incriminate them in a crime. These days, those protections extend to the passcodes that only a device owner knows.

But the ruling is not expected to affect the ability of police to force suspects to use their biometrics — like their face or fingerprints — to unlock their phone or computer.

Because your passcode is stored in your head and your biometrics are not, prosecutors have long argued that police can compel a suspect to unlock a device with their biometrics, which they say are not constitutionally protected. The court also did not address biometrics. In a footnote of the ruling, the court said it “need not address” the issue, blaming the U.S. Supreme Court for creating “the dichotomy between physical and mental communication.”

Peter Goldberger, president of the ACLU of Pennsylvania, who presented the arguments before the court, said it was “fundamental” that suspects have the right “to avoid self-incrimination.”

Despite the spate of rulings in recent years, law enforcement has still tried to find ways around compelling passwords from suspects. The now-infamous Apple-FBI case saw the federal agency try to force the tech giant to rewrite its iPhone software in an effort to beat the password on the handset of the terrorist Syed Rizwan Farook, who with his wife killed 14 people in his San Bernardino workplace in 2015. Apple said the FBI’s use of the 200-year-old All Writs Act would be “unduly burdensome” by putting potentially every other iPhone at risk if the rewritten software leaked or was stolen.

The FBI eventually dropped the case without Apple’s help after the agency paid hackers to break into the phone.

Brett Max Kaufman, a senior staff attorney at the ACLU’s Center for Democracy, said the Pennsylvania case ruling sends a message to other courts to follow in its footsteps.

“The court rightly rejects the government’s effort to create a giant, digital-age loophole undermining our time-tested Fifth Amendment right against self-incrimination,” he said. “The government has never been permitted to force a person to assist in their own prosecution, and the courts should not start permitting it to do so now simply because encrypted passwords have replaced the combination lock.”

“We applaud the court’s decision and look forward to more courts to follow in the many pending cases to be decided next,” he added.

Amnesty International latest to slam surveillance giants Facebook and Google as “incompatible” with human rights

Human rights charity Amnesty International is the latest to call for reform of surveillance capitalism — blasting the business models of “surveillance giants” Facebook and Google in a new report that warns the pair’s market-dominating platforms are “enabling human rights harm at a population scale”.

“[D]espite the real value of the services they provide, Google and Facebook’s platforms come at a systemic cost,” Amnesty warns. “The companies’ surveillance-based business model forces people to make a Faustian bargain, whereby they are only able to enjoy their human rights online by submitting to a system predicated on human rights abuse. Firstly, an assault on the right to privacy on an unprecedented scale, and then a series of knock-on effects that pose a serious risk to a range of other rights, from freedom of expression and opinion, to freedom of thought and the right to non-discrimination.”

“This isn’t the internet people signed up for,” it adds.

What’s most striking about the report is the familiarity of the arguments. There is now a huge weight of consensus criticism around surveillance-based decision-making — from Apple’s own Tim Cook through scholars such as Shoshana Zuboff and Zeynep Tufekci to the United Nations — that’s itself been fed by a steady stream of reportage of the individual and societal harms flowing from platforms’ pervasive and consentless capturing and hijacking of people’s information for ad-based manipulation and profit.

This core power asymmetry is maintained and topped off by self-serving policy positions which at best fiddle around the edges of an inherently anti-humanitarian system. Platforms, meanwhile, have become practiced in the dark arts of PR — offering, at best, a pantomime ear to the latest data-enabled outrage making headlines, without ever actually changing the underlying system. That surveillance capitalism’s abusive modus operandi is now inspiring governments to follow suit — aping the approach by developing their own data-driven control systems to straitjacket citizens — is exceptionally chilling.

But while the arguments against digital surveillance are now very familiar, what’s still sorely lacking is an effective regulatory response to force reform of what is at base a moral failure — one that’s been allowed to scale so big it’s attacking the democratic underpinnings of Western society.

“Google and Facebook have established policies and processes to address their impacts on privacy and freedom of expression – but evidently, given that their surveillance-based business model undermines the very essence of the right to privacy and poses a serious risk to a range of other rights, the companies are not taking a holistic approach, nor are they questioning whether their current business models themselves can be compliant with their responsibility to respect human rights,” Amnesty writes.

“The abuse of privacy that is core to Facebook and Google’s surveillance-based business model is starkly demonstrated by the companies’ long history of privacy scandals. Despite the companies’ assurances over their commitment to privacy, it is difficult not to see these numerous privacy infringements as part of the normal functioning of their business, rather than aberrations.”

Needless to say, Facebook and Google do not agree with Amnesty’s assessment. But, well, they would say that, wouldn’t they?

Amnesty’s report notes there is now a whole surveillance industry feeding this beast — from adtech players to data brokers — while pointing out that the dominance of Facebook and Google, aka the adtech duopoly, over “the primary channels that most of the world relies on to engage with the internet” is itself another harm, as it lends the pair of surveillance giants “unparalleled power over people’s lives online”.

“The power of Google and Facebook over the core platforms of the internet poses unique risks for human rights,” it warns. “For most people it is simply not feasible to use the internet while avoiding all Google and Facebook services. The dominant internet platforms are no longer ‘optional’ in many societies, and using them is a necessary part of participating in modern life.”

Amnesty concludes that it is “now evident that the era of self-regulation in the tech sector is coming to an end” — saying further state-based regulation will be necessary. Its call there is for legislators to follow a human rights-based approach to rein in surveillance giants.

You can read the report in full here (PDF).

A 10-point plan to reboot the data industrial complex for the common good

A posthumous manifesto by Giovanni Buttarelli, who until his death this summer was Europe’s chief data protection regulator, seeks to join the dots of surveillance capitalism’s rapacious colonization of human spaces, via increasingly pervasive and intrusive mapping and modelling of our data, with the existential threat posed to life on earth by manmade climate change.

In a dense document rich with insights and ideas around the notion that “data means power” — and therefore that the unequally distributed data-capture capabilities currently enjoyed by a handful of tech platforms sums to power asymmetries and drastic social inequalities — Buttarelli argues there is potential for AI and machine learning to “help monitor degradation and pollution, reduce waste and develop new low-carbon materials”. But only with the right regulatory steerage in place.

“Big data, AI and the internet of things should focus on enabling sustainable development, not on an endless quest to decode and recode the human mind,” he warns. “These technologies should — in a way that can be verified — pursue goals that have a democratic mandate. European champions can be supported to help the EU achieve digital strategic autonomy.”

“The EU’s core values are solidarity, democracy and freedom,” he goes on. “Its conception of data protection has always been the promotion of responsible technological development for the common good. With the growing realisation of the environmental and climatic emergency facing humanity, it is time to focus data processing on pressing social needs. Europe must be at the forefront of this endeavour, just as it has been with regard to individual rights.”

One of his key calls is for regulators to enforce transparency of dominant tech companies — so that “production processes and data flows are traceable and visible for independent scrutiny”.

“Use enforcement powers to prohibit harmful practices, including profiling and behavioural targeting of children and young people and for political purposes,” he also suggests.

Another point in the manifesto urges a moratorium on “dangerous technologies”, citing facial recognition and killer drones as examples, and calling generally for a pivot away from technologies designed for “human manipulation” and toward “European digital champions for sustainable development and the promotion of human rights”.

In an afterword penned by Shoshana Zuboff, the US author and scholar writes in support of the manifesto’s central tenet, warning pithily that: “Global warming is to the planet what surveillance capitalism is to society.”

There’s plenty of overlap between Buttarelli’s ideas and Zuboff’s — who has literally written the book on surveillance capitalism. Data concentration by powerful technology platforms is also resulting in algorithmic control structures that give rise to “a digital underclass… comprising low-wage workers, the unemployed, children, the sick, migrants and refugees who are required to follow the instructions of the machines”, he warns.

“This new instrumentarian power deprives us not only of the right to consent, but also of the right to combat, building a world of no exit in which ignorance is our only alternative to resigned helplessness, rebellion or madness,” she agrees.

There are no fewer than six afterwords attached to the manifesto — a testament to the esteem in which Buttarelli’s ideas are held among privacy, digital and human rights campaigners.

The manifesto “goes far beyond data protection”, says writer Maria Farrell in another contribution. “It connects the dots to show how data maximisation exploits power asymmetries to drive global inequality. It spells out how relentless data-processing actually drives climate change. Giovanni’s manifesto calls for us to connect the dots in how we respond, to start from the understanding that sociopathic data-extraction and mindless computation are the acts of a machine that needs to be radically reprogrammed.”

At the core of the document is a 10-point plan for what’s described as “sustainable privacy”, which includes the call for a dovetailing of the EU’s digital priorities with a Green New Deal — to “support a programme for green digital transformation, with explicit common objectives of reducing inequality and safeguarding human rights for all, especially displaced persons in an era of climate emergency”.

Buttarelli also suggests creating a forum for civil liberties advocates, environmental scientists and machine learning experts who can advise on EU funding for R&D to put the focus on technology that “empowers individuals and safeguards the environment”.

Another call is to build a “European digital commons” to support “open-source tools and interoperability between platforms, a right to one’s own identity or identities, unlimited use of digital infrastructure in the EU, encrypted communications, and prohibition of behaviour tracking and censorship by dominant platforms”.

“Digital technology and privacy regulation must become part of a coherent solution for both combating and adapting to climate change,” he suggests in a section dedicated to a digital Green New Deal — even while warning that current applications of powerful AI technologies appear to be contributing to the problem.

“AI’s carbon footprint is growing,” he points out, underlining the environmental wastage of surveillance capitalism. “Industry is investing based on the (flawed) assumption that AI models must be based on mass computation.

“Carbon released into the atmosphere by the accelerating increase in data processing and fossil fuel burning makes climatic events more likely. This will lead to further displacement of peoples and intensification of calls for ‘technological solutions’ of surveillance and border controls, through biometrics and AI systems, thus generating yet more data. Instead, we need to ‘greenjacket’ digital technologies and integrate them into the circular economy.”

Another key call — and one Buttarelli had been making presciently in recent years — is for more joint working between EU regulators towards common sustainable goals.

“All regulators will need to converge in their policy goals — for instance, collusion in safeguarding the environment should be viewed more as an ethical necessity than as a technical breach of cartel rules. In a crisis, we need to double down on our values, not compromise on them,” he argues, going on to voice support for antitrust and privacy regulators to co-operate to effectively tackle data-based power asymmetries.

“Antitrust, democracies’ tool for restraining excessive market power, therefore is becoming again critical. Competition and data protection authorities are realising the need to share information about their investigations and even cooperate in anticipating harmful behaviour and addressing ‘imbalances of power rather than efficiency and consent’.”

On the General Data Protection Regulation (GDPR) specifically — Europe’s current framework for data protection — Buttarelli gives a measured assessment, saying “first impressions indicate big investments in legal compliance but little visible change to data practices”.

He says Europe’s data protection authorities will need to use all the tools at their disposal — and find the necessary courage — to take on the dominant tracking and targeting digital business models fuelling so much exploitation and inequality.

He also warns that GDPR alone “will not change the structure of concentrated markets or in itself provide market incentives that will disrupt or overhaul the standard business model”.

“True privacy by design will not happen spontaneously without incentives in the market,” he adds. “The EU still has the chance to entrench the right to confidentiality of communications in the ePrivacy Regulation under negotiation, but more action will be necessary to prevent further concentration of control of the infrastructure of manipulation.”

Looking ahead, the manifesto paints a bleak picture of where market forces could be headed without regulatory intervention focused on defending human rights. “The next frontier is biometric data, DNA and brainwaves — our thoughts,” he suggests. “Data is routinely gathered in excess of what is needed to provide the service; standard tropes, like ‘improving our service’ and ‘enhancing your user experience’ serve as decoys for the extraction of monopoly rents.”

There is optimism too, though — that technology in service of society can be part of the solution to existential crises like climate change; and that data, lawfully collected, can support public good and individual self-realization.

“Interference with the right to privacy and personal data can be lawful if it serves ‘pressing social needs’,” he suggests. “These objectives should have a clear basis in law, not in the marketing literature of large companies. There is no more pressing social need than combating environmental degradation” — adding that: “The EU should promote existing and future trusted institutions, professional bodies and ethical codes to govern this exercise.”

In instances where platforms are found to have systematically gathered personal data unlawfully, Buttarelli trails the interesting idea of an amnesty for those responsible “to hand over their optimisation assets” — as a means of not only resetting power asymmetries and rebalancing the competitive playing field but enabling societies to reclaim these stolen assets and reapply them for a common good.

His hope for Europe’s Data Protection Board — the body which offers guidance and coordinates interactions between EU Member States’ data watchdogs — is for it to be “the driving force supporting the Global Privacy Assembly in developing a common vision and agenda for sustainable privacy”.

The manifesto also calls for European regulators to better reflect the diversity of people whose rights they’re being tasked with safeguarding.

The document, which is entitled Privacy 2030: A vision for Europe, has been published on the website of the International Association of Privacy Professionals ahead of its annual conference this week.

Buttarelli had intended — but was ultimately unable — to publish his thoughts on the future of privacy this year, hoping to inspire discussion in Europe and beyond. In the event, the manifesto was compiled posthumously by Christian D’Cunha, head of his private office, who writes that he has drawn on discussions with the data protection supervisor in his final months — with the aim of plotting “a plausible trajectory of his most passionate convictions”.

TriNet sent remote workers an email that some thought was a phishing attack

It was one of the best phishing emails we’ve seen… that wasn’t.

Phishing remains one of the most popular attack choices for scammers. Phishing emails are designed to impersonate companies or executives to trick users into turning over sensitive information, typically usernames and passwords, so that scammers can log into online services and steal money or data. But detecting and preventing phishing isn’t just a user problem — it’s a corporate problem too, especially when companies don’t take basic cybersecurity precautions and best practices to hinder scammers from ever getting into a user’s inbox.

Enter TriNet, a human resources giant, which this week became the poster child for how to inadvertently make a genuine email to customers look as suspicious as it gets.

Remote employees at companies across the U.S. who rely on TriNet for access to outsourced human resources, like their healthcare benefits and workplace policies, were sent an email this week as part of an effort to keep employees “informed and up-to-date on the labor and employment laws that affect you.”

Workers at one Los Angeles-based health startup that manages its employee benefits through TriNet all got the email at the same time. But one employee wasn’t convinced it was a real email, and forwarded it — and its source code — to TechCrunch.

TriNet is one of the largest outsourced human resources providers in the United States, primarily for small-to-medium-sized businesses that may not have the funding to hire dedicated human resources staff. And this time of year is critical for companies that rely on TriNet, since health insurance plans are entering open enrollment and tax season is only a few weeks away. With benefit changes to consider, it’s not unusual for employees to receive a rash of TriNet-related emails towards the end of the year.

But this email didn’t look right. In fact, when we looked under the hood of the email, everything about it looked suspicious.

This is the email that remote workers received. TriNet said an Imgur-hosted image in the email was “mistakenly” used. (Image: TechCrunch/supplied)

We looked at the source code of the email, including its headers. These email headers are like an envelope — they say where an email came from, who it’s addressed to, how it was routed, and if there were any complications along the way, such as being marked as spam.
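
For readers who want to inspect an email the same way, here is a minimal sketch using Python’s standard library; the filename is hypothetical, and the specific headers present will vary by message.

```python
# A minimal sketch: parse a saved raw email (.eml) and print the routing
# metadata described above. The filename is hypothetical.
from email import policy
from email.parser import BytesParser

with open("suspicious_message.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

print(msg["From"])           # who the message claims to be from
print(msg["Return-Path"])    # where bounces go; often differs in spoofed mail
for hop in msg.get_all("Received", []):
    print(hop)               # one entry per mail server the message passed through
print(msg["Authentication-Results"])  # SPF/DKIM/DMARC verdicts, if recorded
```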

There were more red flags than we could count.

Chief among the issues was that the TriNet logo in the email was hosted on Imgur, a free image-hosting and meme-sharing site, and not the company’s own website. That’s a common technique among phishing attackers — they use Imgur to host images in their spam emails to avoid detection. The logo had been viewed more than 70,000 times since being uploaded in July — suggesting thousands of TriNet customers had received one of these emails — until we reached out to TriNet, which removed the image. And although the email contained a link to a TriNet website, the page that loaded had an entirely different domain, with nothing on it to suggest it was a real TriNet-authorized site besides a logo — which, had it been a phishing site, could have been easily spoofed.
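
As an illustration of that kind of check, the short sketch below pulls every image and link host out of an email’s HTML body and flags anything outside the domains you would expect the sender to use. The sample HTML and the expected-domain list are made up for the example.

```python
# A rough sketch of the red-flag check: collect the hosts of images and
# links in an email body and flag unexpected ones. Sample data is invented.
from html.parser import HTMLParser
from urllib.parse import urlparse

html_body = """<p><img src="https://i.imgur.com/abc123.png">
<a href="https://posters.trinet.com/notices">View required notices</a></p>"""

EXPECTED_SUFFIXES = (".trinet.com",)

class HostCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if (tag, name) in {("img", "src"), ("a", "href")} and value:
                host = urlparse(value).netloc
                if host:
                    self.hosts.add(host)

collector = HostCollector()
collector.feed(html_body)
for host in collector.hosts:
    if not host.endswith(EXPECTED_SUFFIXES):
        print("red flag, unexpected host:", host)  # prints i.imgur.com here
```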

Fearing that somehow scammers had sent out a phishing email to potentially thousands of TriNet customers, we reached out to security researcher John Wethington, founder of security firm Condition:Black, to examine the email.

It turns out he was just as convinced as we were that the email may have been fake.

“As hackers and self-proclaimed social engineers, we often think that spotting a phishing email is ‘easy’,” said Wethington. “The truth is it’s hard.”

“When we first examined the email every alarm bell was going off. The deeper we dug into it the more confusing things became. We looked at the domain name records, the site’s source code, and even the webpage hashes,” he said.

There was nothing, he said, that gave us “100% confidence” that the site was genuine until we contacted TriNet.

TriNet spokesperson Renee Brotherton confirmed to TechCrunch that the email campaign was legitimate, and that it uses the third-party site “for our compliance ePoster service offering.” She added: “The Imgur image you reference is an image of the TriNet logo that Poster Elite mistakenly pointed to and it has since been removed.”

“The email you referenced was sent to all employees who do not go into an employer’s physical workspace to ensure their access to required notices,” said TriNet’s spokesperson.

When reached, Poster Elite also confirmed the email was legitimate.

This is not a phishing site, but it sure looks like one. (Image: TechCrunch)

How did TriNet get this so wrong? This accumulation of errors had some who received the email worried that their information might have been breached.

“When companies communicate with customers in ways that are similar to the way scammers communicate, it can weaken their customer’s ability over time to spot and shut down security threats in future communications,” said Rachel Tobac, a hacker, social engineer, and founder of SocialProof Security.

Tobac pointed to two examples of where TriNet got it wrong. First, it’s easy for hackers to send spoofed emails to TriNet’s workers because TriNet’s DMARC policy on its domain name is not enforced.
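
DMARC is a DNS-published policy that tells receiving mail servers what to do with messages that fail sender authentication, and checking a domain’s policy is a single DNS lookup. Here is a sketch using the dnspython package; the interpretation in the comments is the standard one, not anything TriNet-specific.

```python
# Sketch: look up a domain's DMARC policy. A missing record, or one with
# p=none, asks receivers to take no action against failing mail, which is
# what makes spoofing the domain in emails easy.
import dns.resolver  # pip install dnspython

def dmarc_record(domain):
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None  # no DMARC record published at all
    for rdata in answers:
        record = b"".join(rdata.strings).decode()
        if record.startswith("v=DMARC1"):
            return record
    return None

print(dmarc_record("trinet.com"))  # look for p=quarantine or p=reject
```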

Second, the inconsistent use of domain names is confusing for the user. TriNet confirmed that it pointed the link in the email — posters.trinet.com — to eposterservice.com, which hosts the company’s compliance posters for remote workers. TriNet thought that forwarding the domain would suffice, but instead we thought someone had hijacked TriNet’s domain name settings — a type of attack that’s on the increase, though primarily carried out by state actors. TriNet is a huge target — it stores workers’ benefits, pay details, tax information and more. We had assumed the worst.

“This is similar to an issue we see with banking fraud phone communications,” said Tobac. “Spammers call bank customers, spoof the bank’s number, and pose as the bank to get customers to give account details to ‘verify their account’ before ‘hearing about the fraud the bank noticed on their account’ — which, of course, is an attack,” she said.

“This is surprisingly exactly what the legitimate phone call sounds like when the bank is truly calling to verify fraudulent transactions,” Tobac said.

Wethington noted that other suspicious indicators were all techniques used by scammers in phishing attacks. The posters.trinet.com subdomain used in the email was only set up a few weeks ago, and the eposterservice.com domain it pointed to used an HTTPS certificate that wasn’t associated with either TriNet or Poster Elite.
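
Checking who a certificate was actually issued to takes only a few lines; this generic sketch fetches the certificate a host presents and prints the names it covers.

```python
# Sketch: retrieve the TLS certificate a server presents and inspect the
# names it covers, using only the standard library.
import socket
import ssl

def peer_cert(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

cert = peer_cert("eposterservice.com")
print(cert["subject"])          # who the certificate was issued to
print(cert["subjectAltName"])   # every hostname the certificate is valid for
```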

These all point to one overarching problem. TriNet may have sent out a legitimate email, but everything about it looked problematic.

On one hand, being vigilant about incoming emails is a good thing. And while it’s a cat-and-mouse game to evade phishing attacks, there are things companies can do to proactively protect themselves and their customers from scams and phishing attacks. Yet TriNet failed in almost every way, opening itself up to attack by not employing these basic security measures.

“It’s hard to distinguish the good from the bad even with proper training, and when in doubt I recommend you throw it out,” said Wethington.

California’s new data privacy law brings U.S. closer to GDPR

Data privacy has become one of the defining business and cultural issues of our time.

Companies around the world are scrambling to properly protect their customers’ personal information (PI). However, new regulations have actually shifted the definition of the term, making everything more complicated. With the California Consumer Privacy Act (CCPA) taking effect in January 2020, companies have limited time to get a handle on the customer information they have and how they need to care for it. If they don’t, they risk not only fines, but also loss of brand reputation and consumer trust — which are immeasurable.

California was one of the first states to provide an express right of privacy in its constitution and the first to pass a data breach notification law, so it was not surprising when state lawmakers in June 2018 passed the CCPA, the nation’s first statewide data privacy law. The CCPA isn’t just a state law — it will become the de facto national standard for the foreseeable future, because the sheer number of Californians means most businesses in the country will have to comply. The requirements aren’t insignificant. Companies will have to disclose to California customers what data of theirs has been collected, delete it and stop selling it if the customer requests. The fines could easily add up — $7,500 per violation if intentional, $2,500 for those lacking intent and $750 per affected user in civil damages.
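
To make the scale concrete, here is a back-of-the-envelope calculation with purely hypothetical numbers; a single incident touching a large user base compounds quickly.

```python
# Hypothetical illustration of how CCPA exposure adds up for one incident.
intentional_violations = 1_000   # records mishandled intentionally
affected_users = 100_000         # users eligible for civil damages

statutory = intentional_violations * 7_500  # $7,500 per intentional violation
civil = affected_users * 750                # $750 per affected user

print(f"statutory fines: ${statutory:,}")   # statutory fines: $7,500,000
print(f"civil damages:   ${civil:,}")       # civil damages:   $75,000,000
```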

Evolution of personal information

It used to be that the meaning of personally identifiable information (PII) from a legal standpoint was clear — data that can distinguish the identity of an individual. By contrast, the standard for mere PI was lower because there was so much more of it; if PI is a galaxy, PII is the solar system. However, the CCPA and the EU’s General Data Protection Regulation (GDPR), which went into effect in 2018, have shifted the definition to include additional types of data that were once fairly benign. The CCPA enshrines personal data rights for consumers, a concept that the GDPR first brought into play.

The GDPR states: “Personal data should be as broadly interpreted as possible,” which includes all data associated with an individual, which we call “contextual” information. This includes any information that can “directly or indirectly” identify a person, including real names and screen names, identification numbers, birth date, location data, network addresses, device IDs, and even characteristics that describe the “physical, physiological, genetic, mental, commercial, cultural, or social identity of a person.” This conceivably could include any piece of information about a person that isn’t anonymized.

With the CCPA, the United States is playing catch-up to the GDPR and similarly expanding the scope of the definition of personal data. Under the CCPA, personal information is “information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.” This includes a host of information that typically doesn’t raise red flags but which, when combined with other data, can triangulate to a specific individual — like biometric data, browsing history, employment and education data, as well as inferences drawn from any of the relevant information to create a profile “reflecting the consumer’s preferences, characteristics, psychological trends, preferences, predispositions, behavior, attitudes, intelligence, abilities and aptitudes.”
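
A toy example (with invented records) shows why “benign” fields count: a combination of fields that is unique within a dataset effectively identifies one person, which is exactly the triangulation the definition is written to cover.

```python
# Toy illustration: fields that look harmless alone can pin down one person
# once combined. The records here are invented.
from collections import Counter

records = [
    {"zip": "94105", "birth_year": 1985, "employer": "Acme"},
    {"zip": "94105", "birth_year": 1985, "employer": "Acme"},
    {"zip": "94105", "birth_year": 1990, "employer": "Initech"},
]

quasi_identifiers = ("zip", "birth_year", "employer")
groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)

for combo, size in groups.items():
    if size == 1:
        # Only one record matches this combination: effectively identifying.
        print("uniquely identifying combination:", combo)
```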

Know the rules, know the data

These regulations aren’t checklist rules; they require big changes to technology and processes, and a rethinking of what data is and how it should be treated. Businesses need to understand what rules apply to them and how to manage their data. Information management has become a business imperative, but most companies lack a clear road map to do it properly. Here are some tips companies can follow to ensure they are meeting the letter and the spirit of the new regulations.

  • Figure out which regulations apply to you

The regulatory landscape is constantly changing, with new rules being adopted at a rapid rate. Every organization needs to know which regulations it must comply with and understand the distinctions between them. Some core aspects the CCPA and GDPR share include data subject rights fulfillment and automated deletion. But there will be differences, so having a platform that allows you to handle a heterogeneous environment at scale is important.

  • Create a privacy compliance team that works well with others

Messaging app Wire confirms $8.2M raise, responds to privacy concerns after moving holding company to the US

Big changes are afoot for Wire, an enterprise-focused end-to-end encrypted messaging app and service that advertises itself as “the most secure collaboration platform”. In February, Wire quietly raised $8.2 million from Morpheus Ventures and others, we’ve confirmed — the first funding amount it has ever disclosed — and alongside that external financing, it moved its holding company from Luxembourg to the US, a switch that Wire’s CEO Morten Brogger described in an interview as “simple and pragmatic.”

He also said that Wire is planning to introduce a freemium tier to its existing consumer service — which itself has half a million users — while working on a larger round of funding to fuel more growth of its enterprise business. A key reason for moving to the US, he added: there is more money to be raised there.

“We knew we needed this funding, and additional [funding], to support continued growth. We made the decision that at some point in time it will be easier to get funding in North America, where there’s six times the amount of venture capital,” he said.

While Wire has moved its holding company to the US, it is keeping the rest of its operations as is. Customers are licensed and serviced from Wire Switzerland; the software development team is in Berlin, Germany; and hosting remains in Europe.

The news of Wire’s US move and the basics of its February funding — sans value, date or backers — came out this week via a blog post that raises questions about whether a company that trades on the idea of data privacy should itself be more transparent about its activities.

Specifically, the changes to Wire’s financing and legal structure were only communicated to users when news started to leak out, which brings up questions not just about transparency, but about the state of Wire’s privacy policy, given the company’s holding company now being on US soil.

It was an issue picked up and amplified by NSA whistleblower Edward Snowden. Via Twitter, he described the move to the US as “not appropriate for a company claiming to provide a secure messenger — claims a large number of human rights defenders relied on.”

The key question is whether Wire’s shift to the US puts users’ data at risk — a question that Brogger claims is straightforward to answer: “We are in Switzerland, which has the best privacy laws in the world” — it’s subject to Europe’s General Data Protection Regulation (GDPR) on top of its own local laws — “and Wire now belongs to a new group holding, but there [is] no change in control.”

In its blog post published in the wake of blowback from privacy advocates, Wire also claims it “stands by its mission to best protect communication data with state-of-the-art technology and practice” — listing several items in its defence:

  • All source code has been and will be available for inspection on GitHub (github.com/wireapp).
  • All communication through Wire is secured with end-to-end encryption — messages, conference calls, files. The decryption keys are only stored on user devices, not on our servers. It also gives companies the option to deploy their own instances of Wire in their own data centers. (A generic sketch of this keys-stay-on-device principle follows this list.)
  • Wire has started working on a federated protocol to connect on-premise installations and make messaging and collaboration more ubiquitous.
  • Wire believes that data protection is best achieved through state-of-the-art encryption and continues to innovate in that space with Messaging Layer Security (MLS).
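
For readers unfamiliar with the model Wire is describing, here is a generic sketch of the keys-stay-on-device principle, using an X25519 key agreement plus an AEAD cipher via the cryptography package. It is illustrative only: it is not Wire’s actual protocol, which the company says is evolving toward MLS.

```python
# Generic end-to-end sketch: the server relays only ciphertext; private keys
# never leave the two devices. Not Wire's real protocol.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each device generates its own keypair; only public keys are ever exchanged.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

def session_key(own_priv, peer_pub):
    shared = own_priv.exchange(peer_pub)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"demo session").derive(shared)

# Alice encrypts; the relay server only ever sees (nonce, ciphertext).
key = session_key(alice_priv, bob_priv.public_key())
nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(key).encrypt(nonce, b"meet at noon", None)

# Bob derives the same key from his private key and Alice's public key.
bob_key = session_key(bob_priv, alice_priv.public_key())
assert ChaCha20Poly1305(bob_key).decrypt(nonce, ciphertext, None) == b"meet at noon"
```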

But where data privacy and US law are concerned, it’s complicated. Snowden famously leaked scores of classified documents disclosing the extent of US government mass surveillance programs in 2013, including how data-harvesting was embedded in US-based messaging and technology platforms.

Six years on, the political and legal ramifications of that disclosure are still playing out — with a key judgement pending from Europe’s top court which could yet unseat the current data transfer arrangement between the EU and the US.

Privacy versus security

Wire launched at a time when interest in messaging apps was at a high-water mark. The company made its debut in the middle of February 2014, and it was only one week later that Facebook acquired WhatsApp for the princely sum of $19 billion.

We described Wire’s primary selling point at the time as a “reimagining of how a communications tool like Skype should operate had it been built today” rather than in 2003. That meant encryption and privacy protection, but also better audio tools, file compression and more.

It was a pitch that seemed especially compelling considering the background of the company. Skype co-founder Janus Friis and funds connected to him were the startup’s first backers (and they remain the largest shareholders); Wire was co-founded by Skype alums Jonathan Christensen and Alan Duric (no longer with the company); and even new investor Morpheus has Skype roots.

Yet even with that Skype pedigree, the strategy faced a big challenge.

“The consumer messaging market is lost to the Facebooks of the world, which dominate it,” Brogger said today. “However, we made a clear insight, which is the core strength of Wire: security and privacy.”

That, combined with the trend around the consumerization of IT that’s brought new tools to business users, is what led Wire to the enterprise market in 2017 — a shift that’s seen it pick up a number of big names among its 700 enterprise customers, including Fortum, Aon, EY and SoftBank Robotics.

But fast-forward to today, and it seems that even if security and privacy are two sides of the same coin, deciding which to optimise for — in features and in future development — may not be so simple. That tension is the question now, and what critics are concerned with.

“Wire was always for profit and planned to follow the typical venture backed route of raising rounds to accelerate growth,” one source familiar with the company told us. “However, it took time to find its niche (B2B, enterprise secure comms).

“It needed money to keep the operations going and growing. [But] the new CEO, who joined late 2017, didn’t really care about the free users, and the way I read it now, the transformation is complete: ‘If Wire works for you, fine, but we don’t really care about what you think about our ownership or funding structure as our corporate clients care about security, not about privacy.'”

And that is the message you get from Brogger, too, who describes individual consumers as “not part of our strategy”, but also not entirely removed from it, either, as the focus shifts to enterprises and their security needs.

Brogger said there are still half a million individuals on the platform, and the company will come up with ways to continue to serve them under the same privacy policies and with the same kind of service as the enterprise users. “We want to give them all the same features with no limits,” he added. “We are looking to switch it into a freemium model.”

On the other side, “We are having a lot of inbound requests on how Wire can replace Skype for Business,” he said. “We are the only one who can do that with our level of security. It’s become a very interesting journey and we are super excited.”

Part of the company’s push into enterprise has also seen it make a number of hires. This has included bringing in two former Huddle C-suite execs, Brogger as CEO and Rasmus Holst as chief revenue officer — a bench that Wire expanded this week with three new hires from three other B2B businesses: a VP of EMEA sales from New Relic, a VP of finance from Contentful, and a VP of Americas sales from Xeebi.

Such growth comes with a price-tag attached to it, clearly. Which is why Wire is opening itself to more funding and more exposure in the US, but also more scrutiny and questions from those who counted on its services before the change.

Brogger said inbound interest has been strong and he expects the startup’s next round to close in the next two to three months.

A US federal court finds suspicionless searches of phones at the border are illegal

A federal court in Boston has ruled that the government is not allowed to search travelers’ phones and devices at the U.S. border without first having reasonable suspicion of a crime.

That’s a significant victory for civil liberties advocates who have said that the government’s own rules that allow its border agents to search electronic devices at the border are unconstitutional.

The court said that the government’s policies on warrantless searches of devices without reasonable suspicion “violate the Fourth Amendment,” which provides constitutional protections against warrantless searches and seizures.

The case was brought by 11 travelers — ten of whom are U.S. citizens — with support from the American Civil Liberties Union and the Electronic Frontier Foundation. The travelers said border agents searched their smartphones and laptops without a warrant, or any suspicion of wrongdoing or criminal activity, and argued the government was overreaching its powers.

The border remains a bizarre legal space, where the government asserts powers that it cannot claim against citizens or residents within the United States. The government has long said it doesn’t need a warrant to search devices at the border.

Any data collected by Customs & Border Protection without a warrant can still be shared with federal, state, local and foreign law enforcement.

Esha Bhandari, staff attorney with the ACLU’s Speech, Privacy, and Technology Project, said the ruling “significantly advances” protections under the Fourth Amendment.

“This is a great day for travelers who now can cross the international border without fear that the government will, in the absence of any suspicion, ransack the extraordinarily sensitive information we all carry in our electronic devices,” said Sophia Cope, a senior staff attorney at the EFF.

Millions of travelers arrive in the U.S. every day. Last year, border officials searched 33,000 travelers’ devices — a fourfold increase since 2015 — without any need for reasonable suspicion. In recent months, travelers have been told to inform the government of any social media handles they have, all of which are subject to inspection. Some have even been denied entry to the U.S. for content on their phones shared by other people.

Earlier this year, a federal appeals court found that traffic enforcement officers’ practice of using chalk to mark car tires was unconstitutional.

A spokesperson for Customs & Border Protection did not immediately comment.

Facebook says a bug caused its iPhone app’s inadvertent camera access

Facebook has faced a barrage of concern over an apparent bug that resulted in the social media giant’s iPhone app exposing the camera as users scrolled through their feeds.

The issue blew up over the weekend after Joshua Maddux tweeted a screen recording of the Facebook app on his iPhone, showing that the camera would appear behind the Facebook app as he scrolled through his social media feed.

Several users had already spotted the bug earlier in the month. One person called it “a little worrying.”

Some immediately assumed the worst — as you might expect, given the long history of security vulnerabilities, data breaches and inadvertent exposures at Facebook over the past year. Just last week, the company confirmed that some developers had improperly retained access to some Facebook user data for more than a year.

Will Strafach, chief executive at Guardian Firewall, said it looked like a “harmless but creepy looking bug.”

The bug appears to only affect iPhone users running the latest iOS 13 software, and those who have already granted the app access to the camera and microphone. It’s believed the bug relates to the “story” view in the app, which opens the camera for users to take photos.

One workaround is to simply revoke the Facebook app’s camera and microphone access in iOS settings.

Facebook vice president of integrity Guy Rosen tweeted this morning that it “sounds like a bug” and the company was investigating. Only after we published did a spokesperson confirm to TechCrunch that the issue was in fact a bug.

“We recently discovered that version 244 of the Facebook iOS app would incorrectly launch in landscape mode,” said the spokesperson. “In fixing that issue last week in v246 — launched on November 8th — we inadvertently introduced a bug that caused the app to partially navigate to the camera screen adjacent to News Feed when users tapped on photos.”

“We have seen no evidence of photos or videos being uploaded due to this bug,” the spokesperson added. The bug fix was submitted for Apple’s approval today.

“I guess it does say something when Facebook trust has eroded so badly that it will not get the benefit of the doubt when people see such a bug,” said Strafach.

Updated with Facebook comment.

Dutch court orders Facebook to ban celebrity crypto scam ads after another lawsuit

A Dutch court has ruled that Facebook can be required to use filter technologies to identify and pre-emptively take down fake ads linked to cryptocurrency scams that carry the image of media personality John de Mol and other well-known celebrities.

The Dutch celebrity filed a lawsuit against Facebook in April over the misappropriation of his and other celebrities’ likenesses to shill Bitcoin scams via fake ads run on its platform.

In an immediately enforceable preliminary judgement today, the court ordered Facebook to remove all offending ads within five days, and provide data on the accounts running them within a week.

Per the judgement, victims of the crypto scams had reported a total of €1.7 million (~$1.8M) in damages to the Dutch government at the time of the court summons.

The case is similar to a legal action instigated by UK consumer advice personality, Martin Lewis, last year, when he announced defamation proceedings against Facebook — also for misuse of his image in fake ads for crypto scams.

Lewis withdrew the suit at the start of this year after Facebook agreed to apply new measures to tackle the problem: Namely a scam ads report button. It also agreed to provide funding to a UK consumer advice organization to set up a scam advice service.

In the de Mol case the lawsuit was allowed to run its course — resulting in today’s preliminary judgement against Facebook. It’s not yet clear whether the company will appeal but in the wake of the ruling Facebook has said it will bring the scam ads report button to the Dutch market early next month.

In court, the platform giant sought to argue that it could not more proactively remove the Bitcoin scam ads containing celebrity images on the grounds that doing so would breach EU law against general monitoring conditions being placed on Internet platforms.

However the court rejected that argument, citing a recent ruling by Europe’s top court related to platform obligations to remove hate speech, also concluding that the specificity of the requested measures could not be classified as ‘general obligations of supervision’.

It also rejected arguments by Facebook’s lawyers that restricting the fake scam ads would be restricting the freedom of expression of a natural person, or the right to be freely informed — pointing out that the ‘expressions’ involved are aimed at commercial gain, as well as including fraudulent practices.

Facebook also sought to argue it is already doing all it can to identify and take down the fake scam ads — while conceding that its screening processes are not perfect. But the court said there is no requirement for 100% effectiveness for additional proactive measures to be ordered. Its ruling further notes a striking reduction in fake scam ads using de Mol’s image since the lawsuit was announced.
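
The ruling doesn’t prescribe a technique, but one plausible (purely illustrative) approach to this kind of proactive filtering is perceptual image hashing: comparing newly submitted ad creatives against hashes of known scam images, so that recompressed or lightly edited copies still match. A sketch using the Pillow and imagehash packages, with hypothetical filenames:

```python
# Illustrative only: flag submitted ad images that perceptually match known
# scam creatives. Filenames are hypothetical.
from PIL import Image  # pip install Pillow imagehash
import imagehash

known_scam_hashes = {imagehash.phash(Image.open("known_scam_ad.png"))}

def looks_like_known_scam(ad_image_path, max_distance=8):
    candidate = imagehash.phash(Image.open(ad_image_path))
    # Hamming distance between perceptual hashes tolerates the resizing and
    # recompression scammers use to evade exact matching.
    return any(candidate - known <= max_distance for known in known_scam_hashes)

if looks_like_known_scam("submitted_ad.png"):
    print("hold ad for human review before it runs")
```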

Facebook’s argument that it’s just a neutral platform was also rejected, with the court pointing out that its core business is advertising.

It also took the view that requiring Facebook to apply technically complicated measures and extra effort, including in terms of manpower and costs, to more effectively remove offending scam ads is not unreasonable in this context.

The judgement orders Facebook to remove fake scam ads containing celebrity likenesses from Facebook and Instagram within five days of the order — with a penalty of €10k for each day it fails to comply, up to a maximum of €1M (~$1.1M).

The court order also requires Facebook to provide data to the affected celebrity on the accounts that had been misusing his likeness within seven days of the judgement, with a further penalty of €1k per day for failure to comply, up to a maximum of €100k.

Facebook has also been ordered to pay the case costs.

Responding to the judgement in a statement, a Facebook spokesperson told us:

We have just received the ruling and will now look at its implications. We will consider all legal actions, including appeal. Importantly, this ruling does not change our commitment to fighting these types of ads. We cannot stress enough that these types of ads have absolutely no place on Facebook and we remove them when we find them. We take this very seriously and will therefore make our scam ads reporting form available in the Netherlands in early December. This is an additional way to get feedback from people, which in turn helps train our machine learning models. It is in our interest to protect our users from fraudsters and when we find violators we will take action to stop their activity, up to and including taking legal action against them in court.

One legal expert describes the judgement as “pivotal”. Law professor Mireille Hildebrandt told us that it provides an alternative legal route for Facebook users to litigate and pursue collective enforcement of European personal data rights, rather than suing for damages — which entails a high burden of proof.

Injunctions are faster and more effective, Hildebrandt added.

The judgement also raises questions around the burden of proof for demonstrating Facebook has removed scam ads with sufficient (increased) accuracy; and what specific additional measures it might deploy to improve its takedown rate.

The introduction of the ‘report scam ad’ button does, though, provide one clear avenue for measuring takedown performance.

The button was finally rolled out to the UK market in July. And while Facebook has talked since the start of this year about ‘envisaging’ introducing it in other markets, it hasn’t exactly been proactive in doing so — until now, with this court order.