The ethics of internet culture: a conversation with Taylor Lorenz

Taylor Lorenz was in high demand this week. As a prolific journalist at The Atlantic and a soon-to-be member of Harvard’s prestigious Nieman Fellowship for journalism, that’s perhaps not surprising. Nor was this the first time she’s had a bit of a moment: Lorenz has already served as an in-house expert on social media and the internet for several major companies, and has written and edited for publications as diverse as The Daily Beast, The Hill, People, The Daily Mail, and Business Insider, all while remaining hip and in touch enough to serve as a kind of youth zeitgeist translator on her beat as a technology writer for The Atlantic.

Lorenz is in fact publicly busy enough that she’s one of only two people I personally know to have openly ‘quit email,’ the other being my friend Russ, an 82-year-old retired engineer and MIT alum who literally spends all day, most days, working on a plan to reinvent the bicycle.

I wonder if any of Lorenz’s previous professional experiences, however, could have matched the weight of the events she encountered these past several days, when the nightmarish massacre in Christchurch, New Zealand brought together two of her greatest areas of expertise: political extremism (which she covered for The Hill), and internet culture. As her first Atlantic piece after the shootings said, the Christchurch killer’s manifesto was “designed to troll.” Indeed, his entire heinous act was a calculated effort to manipulate our current norms of Internet communication and connection, for fanatical ends.


Lorenz responded with characteristic insight, focusing on the ways in which the stylized insider subcultures the Internet supports can be used to confuse, distract, and mobilize millions of people for good and for truly evil ends:

“Before people can even begin to grasp the nuances of today’s internet, they can be radicalized by it. Platforms such as YouTube and Facebook can send users barreling into fringe communities where extremist views are normalized and advanced. Because these communities have so successfully adopted irony as a cloaking device for promoting extremism, outsiders are left confused as to what is a real threat and what’s just trolling. The darker corners of the internet are so fragmented that even when they spawn a mass shooting, as in New Zealand, the shooter’s words can be nearly impossible to parse, even for those who are Extremely Online.”

Such insights are among the many reasons I was so grateful to be able to speak with Taylor Lorenz for this week’s installment of my TechCrunch series interrogating the ethics of technology.

As I’ve written in my previous interviews with author and inequality critic Anand Giridharadas, and with award-winning Google exec turned award-winning tech critic James Williams, I come to tech ethics from 25 years of studying religion. My personal approach to religion, however, has essentially always been that it plays a central role in human civilization not only or even primarily because of its theistic beliefs and “faith,” but because of its culture — its traditions, literature, rituals, history, and the content of its communities.

And because I don’t mind comparing technology to religion (not saying they are one and the same, but that there is something to be learned from the comparison), I’d argue that if we really want to understand the ethics of the technologies we are creating, particularly the Internet, we need to explore, as Taylor and I did in our conversation below, “the ethics of internet culture.”

What resulted was, like Lorenz’s work in general, at times whimsical, at times cool enough to fly right over my head, but at all times fascinating and important.

Editor’s Note: we ungated the first of 11 sections of this interview. Reading time: 22 minutes / 5,500 words.

Joking with the Pope

Greg Epstein: Taylor, thanks so much for speaking with me. As you know, I’m writing for TechCrunch about religion, ethics, and technology, and I recently discovered your work when you brought all those together in an unusual way. You subtweeted the Pope, and it went viral.

Taylor Lorenz: I know. [People] were freaking out.

Greg: What was that experience like?

Taylor: The Pope tweeted some insane tweet about how Mary, Jesus’ mother, was the first influencer. He tweeted it out, and everyone was spamming that tweet to me because I write so much about influencers, and I was just laughing. There’s a meme on Instagram about Jesus being the first influencer and how he killed himself or faked his death for more followers.


I just tweeted it out. I think a lot of people didn’t know the joke, the meme, and I think they just thought that it was new & funny. Also [some people] were saying, “how can you joke about Jesus wanting more followers?” I’m like, the Pope literally compared Mary to a social media influencer, so calm down. My whole family is Irish Catholic.

A bunch of people were sharing my tweet. I was like, oh, god. I’m not trying to lead into some religious controversy, but I did think whether my Irish Catholic mother would laugh. She has a really good sense of humor. I thought, I think she would laugh at this joke. I think it’s fine.

Greg: I loved it because it was a real Rorschach test for me. Sitting there looking at that tweet, I was one of the people who didn’t know that particular meme. I’d like to think I love my memes but …

Taylor: I can’t claim credit.

Greg: No, no, but anyway most of the memes I know are the ones my students happen to tell me about. The point is I’ve spent 15-plus years being a professional atheist. I’ve had my share of religious debates, but I also have had all these debates with others I’ll call Professional Strident Atheists… who are more aggressive in their anti-religion than I am. And I’m thinking, “Okay, this is clearly a tweet that Richard Dawkins would love. Do I love it? I don’t know. Wait, I think I do!”

Taylor: I treated it with the greatest respect for all faiths. I thought it was funny to drag the Pope on Twitter.

The influence of Instagram


Extra Crunch: What’s the cost of buying users from Facebook and the other ad networks?

This post reveals the cost of acquiring a customer on every ad channel my agency has tested.

The ad channels include Facebook, Instagram, YouTube, Quora, Google Search, Google Shopping, Snapchat, LinkedIn and others.

Using this data, you can reduce your costs by identifying which channels are a likely fit for your own product. Then you can focus on testing just those channels to start.

I’m pulling data from my agency’s experience testing 15+ ad channels and running thousands of ads for dozens of Y Combinator startups.

This post leaves you with a prioritized to-do list of which channels might work for your product, and reference points for how much you can expect to pay if you get those channels to work.

Which ad channels should I use?

Daily Crunch: Facebook admits password security lapse

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. Facebook admits it stored ‘hundreds of millions’ of account passwords in plaintext

Prompted by a report by cybersecurity reporter Brian Krebs, Facebook confirmed that it stored “hundreds of millions” of account passwords in plaintext for years.

The discovery was made in January, said Facebook’s Pedro Canahuati, as part of a routine security review. None of the passwords were visible to anyone outside Facebook, he said. Facebook admitted the security lapse months later, after Krebs said logs were accessible to some 2,000 engineers and developers.

2. To fund Y Combinator’s top startups, VCs scoop them before Demo Day

What many don’t realize about the Demo Day tradition is that pitching isn’t a requirement; in fact, some YC graduates skip out on their stage opportunity altogether. Why? Because they’ve already raised capital or are in the final stages of closing a deal.

3. MoviePass parent’s CEO says its rebooted subscription service is already (sort of) profitable

We interviewed the CEO of Helios and Matheson Analytics to discuss the service’s tumultuous year and future plans.

4. Robotics process automation startup UiPath raising $400M at more than $7B valuation

UiPath develops automated software workflows meant to facilitate the tedious, everyday tasks within business operations.

5. Microsoft Defender comes to the Mac

Previously, this was a Windows solution for protecting the machines of Microsoft 365 subscribers, and the assets of the IT admins that try to keep them safe. It was previously called Windows Defender ATP, but launching on the Mac has prompted a name change.

6. Homeland Security warns of critical flaws in Medtronic defibrillators

The government-issued alert warned that Medtronic’s proprietary radio communications protocol, known as Conexus, wasn’t encrypted and did not require authentication, allowing a nearby attacker with radio-intercepting hardware to modify data on an affected defibrillator.

7. Nintendo’s Labo: VR Kit is not Virtual Boy 2.0

Like the first Labo kits, the VR Kit is a friendly reminder that Nintendo’s chief job is to surprise and delight, and it delivers on both fronts. But just as the Labo piano shouldn’t be mistaken for a real musical instrument, Labo VR should not be viewed as “real” virtual reality.

Facebook staff raised concerns about Cambridge Analytica in September 2015, per court filing

Further details have emerged about when and how much Facebook knew about data-scraping by the disgraced and now defunct Cambridge Analytica political data firm.

Last year a major privacy scandal hit Facebook after it emerged CA had paid GSR, a developer with access to Facebook’s platform, to extract personal data on as many as 87M Facebook users without proper consents.

Cambridge Analytica’s intention was to use the data to build psychographic profiles of American voters to target political messages — with the company working initially for the Ted Cruz campaign and later for the Donald Trump presidential campaign.

But employees at Facebook appear to have raised internal concerns about CA scraping user data in September 2015 — i.e. months earlier than Facebook previously told lawmakers it became aware of the GSR/CA breach (December 2015).

The latest twist in the privacy scandal has emerged via a redacted court filing in the U.S. — where the District of Columbia is suing Facebook in a consumer protection enforcement case.

Facebook is seeking to have documents pertaining to the case sealed, while the District argues there is nothing commercially sensitive to require that.

In its opposition to Facebook’s motion to seal the document, the District includes a redacted summary (screengrabbed below) of the “jurisdictional facts” it says are contained in the papers Facebook is seeking to keep secret.

According to the District’s account a Washington D.C.-based Facebook employee warned others in the company about Cambridge Analytica’s data-scraping practices as early as September 2015.

Under questioning in Congress last April, Mark Zuckerberg was asked directly by congressman Mike Doyle when Facebook had first learned about Cambridge Analytica using Facebook data — and whether specifically it had learned about it as a result of the December 2015 Guardian article (which broke the story).

Zuckerberg responded with a “yes” to Doyle’s question.

Facebook repeated the same line to the UK’s Digital, Culture, Media and Sport (DCMS) committee last year, over a series of hearings with less senior staffers.

Damian Collins, the chair of the DCMS committee — which made repeat requests for Zuckerberg himself to testify in front of its enquiry into online disinformation, only to be repeatedly rebuffed — tweeted yesterday that the new detail could suggest Facebook “consistently mislead” the British parliament.

The DCMS committee has previously accused Facebook of deliberately misleading its enquiry on other aspects of the CA saga, with Collins taking the company to task for displaying a pattern of evasive behavior.

The earlier charge that it misled the committee refers to a hearing in Washington in February 2018 — when Facebook sent its UK head of policy, Simon Milner, and its head of global policy management, Monika Bickert, to field DCMS’ questions — where the pair failed to inform the committee about a legal agreement Facebook had made with Cambridge Analytica in December 2015.

The committee’s final report was also damning of Facebook, calling for regulators to instigate antitrust and privacy probes of the tech giant.

Meanwhile, questions have continued to be raised about Facebook’s decision to hire GSR co-founder Joseph Chancellor, who reportedly joined the company around November 2015.

The question now is: if Facebook knew there were concerns about CA data-scraping prior to hiring the co-founder of the company that sold scraped Facebook user data to CA, why did it go ahead and hire Chancellor?

The GSR co-founder has never been made available by Facebook to answer questions from politicians (or press) on either side of the pond.

Last fall he was reported to have quietly left Facebook, with no comment from Facebook on the reasons behind his departure — just as it had never explained why it hired him in the first place.

But the new timeline that’s emerged of what Facebook knew when makes those questions more pressing than ever.

Reached for a response to the details contained in the District of Columbia’s court filing, a Facebook spokeswoman sent us this statement:

Facebook was not aware of the transfer of data from Kogan/GSR to Cambridge Analytica until December 2015, as we have testified under oath

In September 2015 employees heard speculation that Cambridge Analytica was scraping data, something that is unfortunately common for any internet service. In December 2015, we first learned through media reports that Kogan sold data to Cambridge Analytica, and we took action. Those were two different things.

Facebook did not engage with questions about any of the details and allegations in the court filing.

A little later in the court filing, the District of Columbia writes that the documents Facebook is seeking to seal are “consistent” with its allegations that “Facebook has employees embedded within multiple presidential candidate campaigns who… knew, or should have known… [that] Cambridge Analytica [was] using the Facebook consumer data harvested by [GSR’s] [Aleksandr] Kogan throughout the 2016 [United States presidential] election.”

It goes on to suggest that Facebook’s concern to seal the document is “reputational”, suggesting — in another redacted segment (below) — that it might “reflect poorly” on Facebook that a DC-based employee had flagged Cambridge Analytica months prior to news reports of its improper access to user data.

“The company may also seek to avoid publishing its employees’ candid assessments of how multiple third-parties violated Facebook’s policies,” it adds, chiming with arguments made last year by GSR’s Kogan who suggested the company failed to enforce the terms of its developer policy, telling the DCMS committee it therefore didn’t have a “valid” policy.

As we’ve reported previously, the UK’s data protection watchdog — which has an ongoing investigation into CA’s use of Facebook data — was passed information by Facebook as part of that probe which showed that three “senior managers” had been involved in email exchanges, prior to December 2015, concerning the CA breach.

It’s not clear whether these exchanges are the same correspondence the District of Columbia has obtained and which Facebook is seeking to seal. Or whether there were multiple email threads raising concerns about the company.

The ICO passed the correspondence it obtained from Facebook to the DCMS committee — which last month said it had agreed at the request of the watchdog to keep the names of the managers confidential. (The ICO also declined to disclose the names or the correspondence when we made a Freedom of Information request last month — citing rules against disclosing personal data and its ongoing investigation into CA meaning the risk of release might be prejudicial to its investigation.)

In its final report the committee said this internal correspondence indicated “profound failure of governance within Facebook” — writing:

[I]t would seem that this important information was not shared with the most senior executives at Facebook, leading us to ask why this was the case. The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests.

We reached out to the ICO for comment on the information to emerge via the Columbia suit, and also to the Irish Data Protection Commission, the lead DPA for Facebook’s international business, which currently has 15 open investigations into Facebook or Facebook-owned businesses related to various security, privacy and data protection issues.

An ICO spokesperson told us: “We are aware of these reports and will be considering the points made as part of our ongoing investigation.”

Last year the ICO issued Facebook with the maximum possible fine under UK law for the CA data breach.

Shortly afterwards Facebook announced it would appeal, saying the watchdog had not found evidence that any UK users’ data was misused by CA.

A date for the hearing of the appeal set for earlier this week was canceled without explanation. A spokeswoman for the tribunal court told us a new date would appear on its website in due course.

This report was updated with comment from the ICO

Facebook’s AI couldn’t spot mass murder

Facebook has given another update on measures it took and what more it’s doing in the wake of the livestreamed video of a gun massacre by a far right terrorist who killed 50 people in two mosques in Christchurch, New Zealand.

Earlier this week the company said the video of the slayings had been viewed less than 200 times during the livestream broadcast itself, and about 4,000 times before it was removed from Facebook — with the stream not reported to Facebook until 12 minutes after it had ended.

None of the users who watched the killings unfold on the platform in real time reported the stream, according to the company.

It also previously said it removed 1.5 million versions of the video from its site in the first 24 hours after the livestream, with 1.2M of those caught at the point of upload — meaning it failed to stop 300,000 uploads at that point. Though as we pointed out in our earlier report those stats are cherrypicked — and only represent the videos Facebook identified. We found other versions of the video still circulating on its platform 12 hours later.

In the wake of the livestreamed terror attack, Facebook has continued to face calls from world leaders to do more to make sure such content cannot be distributed by its platform.

The prime minister of New Zealand, Jacinda Ardern told media yesterday that the video “should not be distributed, available, able to be viewed”, dubbing it: “Horrendous.”

She confirmed Facebook had been in contact with her government but emphasized that in her view the company has not done enough.

She also later told the New Zealand parliament: “We cannot simply sit back and accept that these platforms just exist and that what is said on them is not the responsibility of the place where they are published. They are the publisher. Not just the postman.”

We asked Facebook for a response to Ardern’s call for online content platforms to accept publisher-level responsibility for the content they distribute. Its spokesman avoided the question — pointing instead to its latest piece of crisis PR which it titles: “A Further Update on New Zealand Terrorist Attack”.

Here it writes that “people are looking to understand how online platforms such as Facebook were used to circulate horrific videos of the terrorist attack”, saying it therefore “wanted to provide additional information from our review into how our products were used and how we can improve going forward”, before going on to reiterate many of the details it has previously put out.

Including that the massacre video was quickly shared to the 8chan message board by a user posting a link to a copy of the video on a file-sharing site. This was prior to Facebook itself being alerted to the video being broadcast on its platform.

It goes on to imply 8chan was a hub for broader sharing of the video — claiming that: “Forensic identifiers on many of the videos later circulated, such as a bookmarks toolbar visible in a screen recording, match the content posted to 8chan.”

So it’s clearly trying to make sure it’s not singled out by political leaders seeking policy responses to the challenge posed by online hate and terrorist content.

One detail it chooses to dwell on in the update is how the AI it uses to aid the human review of flagged Facebook Live streams is in fact tuned to “detect and prioritize videos that are likely to contain suicidal or harmful acts” — with the AI pushing such videos to the top of human moderators’ content heaps, above all the other stuff they also need to look at.

Clearly “harmful acts” were involved in the New Zealand terrorist attack. Yet Facebook’s AI was unable to detect a massacre unfolding in real time. A mass killing involving an automatic weapon slipped right under the robot’s radar.

Facebook explains this by saying it’s because it does not have the training data to create an algorithm that understands it’s looking at mass murder unfolding in real time.

It also implies the task of training an AI to catch such a horrific scenario is exacerbated by the proliferation of videos of first person shooter videogames on online content platforms.

It writes: “[T]his particular video did not trigger our automatic detection systems. To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare. Another challenge is to automatically discern this content from visually similar, innocuous content – for example if thousands of videos from live-streamed video games are flagged by our systems, our reviewers could miss the important real-world videos where we could alert first responders to get help on the ground.”

The videogame element is a chilling detail to consider.

It suggests that a harmful real-life act that mimics a violent video game might just blend into the background, as far as AI moderation systems are concerned; invisible in a sea of innocuous, virtually violent content churned out by gamers. (Which in turn makes you wonder whether the Internet-steeped killer in Christchurch knew — or suspected — that filming the attack from a videogame-esque first person shooter perspective might offer a workaround to dupe Facebook’s imperfect AI watchdogs.)

Facebook’s post is doubly emphatic that AI is “not perfect” and is “never going to be perfect”.

“People will continue to be part of the equation, whether it’s the people on our team who review content, or people who use our services and report content to us,” it writes, reiterating yet again that it has ~30,000 people working in “safety and security”, about half of whom are doing the sweating hideous toil of content review.

This is, as we’ve said many times before, a fantastically tiny number of human moderators given the vast scale of content continually uploaded to Facebook’s 2.2BN+ user platform.

Moderating Facebook remains a hopeless task because so few humans are doing it.

Moreover AI can’t really help. (Later in the blog post Facebook also writes vaguely that there are “millions” of livestreams broadcast on its platform every day, saying that’s why adding a short broadcast delay — such as TV stations do — wouldn’t at all help catch inappropriate real-time content.)

At the same time Facebook’s update makes it clear how much its ‘safety and security’ systems rely on unpaid humans too: Aka Facebook users taking the time and mind to report harmful content.

Some might say that’s an excellent argument for a social media tax.

The fact Facebook did not get a single report of the Christchurch massacre livestream while the terrorist attack unfolded meant the content was not prioritized for “accelerated review” by its systems, which it explains prioritize reports attached to videos that are still being streamed — because “if there is real-world harm we have a better chance to alert first responders and try to get help on the ground”.

Though it also says it expanded its acceleration logic last year to “also cover videos that were very recently live, in the past few hours”.

But again it did so with a focus on suicide prevention — meaning the Christchurch video would only have been flagged for acceleration review in the hours after the stream ended if it had been reported as suicide content.
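
To make that routing concrete, here is a minimal sketch, in TypeScript, of the kind of acceleration logic Facebook describes. The type names, category labels and the “recently live” window are all assumptions for illustration rather than Facebook’s actual code; the point is simply that a report only jumps the review queue if the stream is live or recently live and the report falls into a prioritized category.

```typescript
// Hypothetical sketch of report-acceleration logic; not Facebook's implementation.
type ReportCategory = "suicide_or_self_harm" | "violence" | "hate_speech" | "spam" | "other";

interface StreamReport {
  streamId: string;
  category: ReportCategory;
  reportedAt: Date;      // when the user filed the report
  streamEndedAt?: Date;  // undefined while the stream is still live
}

// Categories that (per the blog post's description) trigger accelerated review.
const ACCELERATED_CATEGORIES: ReportCategory[] = ["suicide_or_self_harm"];

// "Recently live" window is an assumption; Facebook only says "in the past few hours".
const RECENTLY_LIVE_WINDOW_MS = 4 * 60 * 60 * 1000;

function shouldAccelerate(report: StreamReport, now: Date = new Date()): boolean {
  const isLive = report.streamEndedAt === undefined;
  const recentlyLive =
    report.streamEndedAt !== undefined &&
    now.getTime() - report.streamEndedAt.getTime() <= RECENTLY_LIVE_WINDOW_MS;

  // A report only jumps the queue if the video is (recently) live AND the report
  // category is one the system is tuned to prioritize.
  return (isLive || recentlyLive) && ACCELERATED_CATEGORIES.includes(report.category);
}

// The Christchurch reports came in after the stream ended and not as suicide content,
// so under logic like this they would not have been accelerated:
console.log(shouldAccelerate({
  streamId: "example",
  category: "violence",
  reportedAt: new Date(),
  streamEndedAt: new Date(Date.now() - 12 * 60 * 1000),
})); // false
```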

So the ‘problem’ is that Facebook’s systems don’t prioritize mass murder.

“In [the first] report, and a number of subsequent reports, the video was reported for reasons other than suicide and as such it was handled according to different procedures,” it writes, adding it’s “learning from this” and “re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review”.

No shit.

Facebook also discusses its failure to stop versions of the massacre video from resurfacing on its platform, having been — as it tells it — “so effective” at preventing the spread of propaganda from terrorist organizations like ISIS with the use of image and video matching tech.

It claims its tech was outfoxed in this case by “bad actors” creating many different edited versions of the video to try to thwart filters, as well as by the various ways “a broader set of people distributed the video and unintentionally made it harder to match copies”.

So, essentially, the ‘virality’ of the awful event created too many versions of the video for Facebook’s matching tech to cope.

“Some people may have seen the video on a computer or TV, filmed that with a phone and sent it to a friend. Still others may have watched the video on their computer, recorded their screen and passed that on. Websites and pages, eager to get attention from people seeking out the video, re-cut and re-recorded the video into various formats,” it writes, in what reads like another attempt to spread blame for the amplification role that its 2.2BN+ user platform plays.

In all Facebook says it found and blocked more than 800 visually-distinct variants of the video that were circulating on its platform.

It reveals it resorted to using audio matching technology to try to detect videos that had been visually altered but had the same soundtrack. And again claims it’s trying to learn and come up with better techniques for blocking content that’s being re-shared widely by individuals as well as being rebroadcast by mainstream media. So any kind of major news event, basically.
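
To see why re-cut copies slip through, here is a minimal, hypothetical sketch of hash-based matching in TypeScript. It assumes some upstream system has already computed a 64-bit perceptual hash of the frames and a 64-bit audio fingerprint for each upload (both stand-ins for real fingerprinting tech, which Facebook has not detailed); matching is then a distance comparison against a blocklist, so a copy that has been re-filmed, cropped and re-dubbed can land outside both thresholds and be treated as new content.

```typescript
// Illustrative sketch only, not Facebook's implementation. The fingerprints are
// assumed to be precomputed 64-bit values from hypothetical upstream systems.
interface UploadFingerprint {
  videoHash: bigint;   // perceptual hash of the visual content
  audioPrint: bigint;  // fingerprint of the soundtrack
}

// Fingerprints of the variants already identified and blocked.
const blocked: UploadFingerprint[] = [];

// Hamming distance between two 64-bit values: how many bits differ.
function hammingDistance(a: bigint, b: bigint): number {
  let x = a ^ b;
  let bits = 0;
  while (x > 0n) {
    bits += Number(x & 1n);
    x >>= 1n;
  }
  return bits;
}

// Assumed threshold: small distances count as "the same content, lightly edited".
const MATCH_THRESHOLD = 10;

function isKnownVariant(upload: UploadFingerprint): boolean {
  return blocked.some(known =>
    // Visual match first; audio matching is the fallback Facebook describes for
    // copies that were visually altered but kept the original soundtrack.
    hammingDistance(known.videoHash, upload.videoHash) <= MATCH_THRESHOLD ||
    hammingDistance(known.audioPrint, upload.audioPrint) <= MATCH_THRESHOLD
  );
}

// A re-filmed, re-cropped and re-dubbed copy can exceed both thresholds and
// sail through as apparently new content.
```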

In a section on next steps Facebook says improving its matching technology to prevent the spread of inappropriate viral videos is its priority.

But audio matching clearly won’t help if malicious re-sharers simply re-edit the visuals and switch the soundtrack too in future.

It also concedes it needs to be able to react faster “to this kind of content on a live streamed video” — though it has no firm fixes to offer there either, saying only that it will explore “whether and how AI can be used for these cases, and how to get to user reports faster”.

Another priority it claims among its “next steps” is fighting “hate speech of all kinds on our platform”, saying this includes more than 200 white supremacist organizations globally “whose content we are removing through proactive detection technology”.

It’s glossing over plenty of criticism on that front too though — including research that suggests banned far right hate preachers are easily able to evade detection on its platform. Plus its own foot-dragging on shutting down far right extremists. (Facebook only finally banned one infamous UK far right activist last month, for example.)

In its last PR sop, Facebook says it’s committed to expanding its industry collaboration to tackle hate speech via the Global Internet Forum to Counter Terrorism (GIFCT), which formed in 2017 as platforms were being squeezed by politicians to scrub ISIS content — in a collective attempt to stave off tighter regulation.

“We are experimenting with sharing URLs systematically rather than just content hashes, are working to address the range of terrorists and violent extremists operating online, and intend to refine and improve our ability to collaborate in a crisis,” Facebook writes now, offering more vague experiments as politicians call for content responsibility.

Why a top antitrust lawmaker thinks it’s time to break up Facebook

When the newly-minted chair of a congressional antitrust committee calls you out, it’s probably time to start worrying.

In an op-ed for the New York Times, Rhode Island Representative David N. Cicilline has called on the Federal Trade Commission to look into Facebook’s behavior for potential antitrust violations, citing TechCrunch’s own reporting that the company collected data on teens through a secret paid program among many other scandals.

“After each misdeed becomes public, Facebook alternates between denial, hollow promises and apology campaigns,” Cicilline wrote. “But nothing changes. That’s why, as chairman of the House Subcommittee on Antitrust, Commercial and Administrative Law, I am calling for an investigation into whether Facebook’s conduct has violated antitrust laws.”

Cicilline’s op-ed intends to put pressure on the FTC, a useful regulatory arm that he accuses of “facing a massive credibility crisis” due to its inaction to date against Facebook. And while the FTC is the focus of Cicilline’s call to action, the op-ed provides an insightful glimpse into what Facebook actions are salient for the lawmaker that Bloomberg called “the most powerful person in tech” when he became the ranking member of the House Judiciary’s Subcommittee on Antitrust, Commercial and Administrative Law this year.

That committee, now led by a Democratic party increasingly interested in breaking up big tech as a platform pillar, is a potentially powerful mechanism for antitrust action against the monopolistic power brokers that dominate the Silicon Valley we’ve come to know.

“For years, privacy advocates have alerted the commission that Facebook was likely violating its commitments under the agreement. Not only did the commission fail to enforce its order, but by failing to block Facebook’s acquisition of WhatsApp and Instagram, it enabled Facebook to extend its dominance,” Cicilline wrote, noting that any fine would need to run to multiple billions of dollars to have a real impact on the massive company. As we reported last month, the FTC is reportedly looking at a potentially multi-billion dollar fine but such a costly reprimand has yet to materialize.

The lawmaker also cites Facebook’s “predatory acquisition strategy” in which it buys up potential competitors before they can pose a threat, stifling innovation in the process. Cicilline also views the company’s decision to restrict API access for competing products as “evidence of anticompetitive conduct” from the social giant.

Cicilline also takes a familiar cynical view of Mark Zuckerberg’s recent announcement that Facebook would weave its products together in a move toward private messaging, calling it “a dangerous power grab to head off antitrust action.” That perspective gives us a clear glimpse of what lies ahead for Facebook as the antitrust headwinds pick up around the 2020 presidential race.

“American antitrust agencies have not pursued a significant monopoly case in more than two decades, even as corporate concentration and monopoly power have reached historic levels,” Cicilline wrote.

“It’s clear that serious enforcement is long overdue.”

Facebook settles ACLU job advertisement discrimination suit

Facebook and the ACLU issued a joint statement this morning, noting that they have settled a class action job discrimination suit. The ACLU filed the suit in September, along with Outten & Golden LLC and the Communications Workers of America, alleging that Facebook allowed employers to target ads based on categories like race, national origin, age and gender.

The initial charges were filed on behalf of female workers who alleged they were not served up employment opportunities based on gender. Obviously all of that’s against all sorts of federal, state and local laws, including, notably, Title VII of the Civil Rights Act of 1964.

Today’s announcement finds Facebook implementing “sweeping changes” to its advertising platform in order to address these substantial concerns. The company outlined a laundry list of “far-reaching changes and steps,” including the development of a separate ad portal to handle topics like housing, employment and credit (HEC) for Facebook, Instagram and Facebook Messenger.

Targeting based on gender, age and race will not be allowed within the confines of the new system. Ditto for the company’s Lookalike Audience tool, which is similarly designed to target customers based on things like gender, age, religious views and the like.

“Civil rights leaders and experts – including members of the Congressional Black Caucus, the Congressional Hispanic Caucus, the Congressional Asian Pacific American Caucus, and Laura Murphy, the highly respected civil rights leader who is overseeing the Facebook civil rights audit – have also raised valid concerns about this issue,” Sheryl Sandberg wrote in a blog post tied to the announcement. “We take their concerns seriously and, as part of our civil rights audit, engaged the noted civil rights law firm Relman, Dane & Colfax to review our ads tools and help us understand what more we could do to guard against misuse.”

In addition to the above portal, Facebook will be creating a one-stop site where users can search amongst all job listings, independent of how ads are served up. The company has also promised to offer up “educational materials to advertisers about these new anti-discrimination measures.” Facebook will also be meeting regularly with the suit’s plaintiffs to ensure that it is continuing to meet all of the parameters of the settlement.

“As the internet — and platforms like Facebook — play an increasing role in connecting us all to information related to economic opportunities, it’s crucial that micro-targeting not be used to exclude groups that already face discrimination,” ACLU senior staff attorney Galen Sherwin said in the joint statement. “We are pleased Facebook has agreed to take meaningful steps to ensure that discriminatory advertising practices are not given new life in the digital era, and we expect other tech companies to follow Facebook’s lead.”

Further details of the settlement haven’t been disclosed by either party, but the update is clearly a bit of a conciliatory move from a company that’s landed itself on the wrong side of a large lawsuit. Even so, it ought to be regarded as a positive outcome for a problematic product offering.

Snap is under NDA with UK Home Office discussing how to centralize age checks online

Snap is under NDA with the UK’s Home Office as part of a working group tasked with coming up with more robust age verification technology that’s able to reliably identify children online.

The detail emerged during a parliamentary committee hearing as MPs on the Digital, Culture, Media and Sport (DCMS) committee questioned Stephen Collins, Snap’s senior director for public policy international, and Will Scougal, director of creative strategy EMEA.

A spokesman in the Home Office press office hadn’t immediately heard of any discussions with the messaging company on the topic of age verification. But we’ll update this story with any additional context on the department’s plans if more info is forthcoming.

Under questioning by the committee Snap conceded its current age verification systems are not able to prevent children under 13 from signing up to use its messaging platform.

The DCMS committee’s interest here stems from its ongoing enquiry into immersive and addictive technologies.

Snap admitted that the most popular means of signing up to its app (i.e. on mobile) is where its age verification system is weakest, with Collins saying it had no ability to drop a cookie to keep track of mobile users to try to prevent repeat attempts to get around its age gate.
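
For context on why the missing cookie matters: on the web, a failed age-gate attempt can leave a marker in the browser so that an immediate retry with a different birthday is also refused. Below is a minimal, hypothetical sketch of that pattern in TypeScript (not Snap’s code; the cookie name, minimum age and retention period are invented). In a native mobile sign-up flow there is no equivalent browser cookie for the service to drop, which is the gap Collins described.

```typescript
// Hypothetical browser-side age gate, for illustration only (not Snap's code).
const AGE_GATE_COOKIE = "age_gate_failed";
const MIN_AGE = 13;

function yearsBetween(birthDate: Date, now: Date): number {
  return (now.getTime() - birthDate.getTime()) / (365.25 * 24 * 60 * 60 * 1000);
}

function hasFailedBefore(): boolean {
  return document.cookie.split("; ").some(c => c.startsWith(`${AGE_GATE_COOKIE}=1`));
}

function checkAgeGate(birthDate: Date): boolean {
  // If this browser already failed the gate, a retried birthday doesn't get through.
  if (hasFailedBefore()) return false;

  if (yearsBetween(birthDate, new Date()) < MIN_AGE) {
    // Remember the failure for a year so an immediate retry is also blocked.
    document.cookie = `${AGE_GATE_COOKIE}=1; max-age=${60 * 60 * 24 * 365}; path=/`;
    return false;
  }
  return true;
}
```

Even in the browser this is trivially defeated by clearing cookies or switching browsers, which is partly why Collins points toward a centralized verification system instead.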

But he emphasized Snap does not want underage users on its platform.

“That brings us no advantage, that brings us no commercial benefit at all,” he said. “We want to make it an enjoyable place for everybody using the platform.”

He also said Snap analyzes patterns of user behavior to try to identify underage users — investigating accounts and banning those which are “clearly” determined not to be old enough to use the service.

But he conceded there’s currently “no foolproof way” to prevent under 13s from signing up.

Discussing alternative approaches to verifying kids’ age online the Snap policy staffer agreed parental consent approaches are trivially easy for children to circumvent — such as by setting up spoof email accounts or taking a photo of a parent’s passport or credit card to use for verification.

Facebook is one such company, relying on a ‘parental consent’ system to ‘verify’ the age of teen users — though, as we’ve previously reported, it’s trivially easy for kids to work around.

“I think the most sustainable solution will be some kind of central verification system,” Collins suggested, adding that such a system is “already being discussed” by government ministers.

“The home secretary has tasked the Home Office and related agencies to look into this — we’re part of that working group,” he continued.

“We actually met just yesterday. I can’t give you the details here because I’m under an NDA,” Collins added, suggesting Snap could send the committee details in writing.

“I think it’s a serious attempt to really come to a proper conclusion — a fitting conclusion to this kind of conundrum that’s been there, actually, for a long time.”

“There needs to be a robust age verification system that we can all get behind,” he added.

The UK government is expected to publish a White Paper setting out its policy ideas for regulating social media and safety before the end of the winter.

The details of its policy plans remain under wraps so it’s unclear whether the Home Office intends to include setting up a centralized system of online age verification for robustly identifying kids on social media platforms as part of its safety-focused regulation. But much of the debate driving the planned legislation has fixed on content risks for kids online.

Such a step would also not be the first time UK ministers have pushed the envelope around online age verification.

A controversial system of age checks for viewing adult content is due to come into force shortly in the UK under the Digital Economy Act — albeit, after a lengthy delay. (And ignoring all the hand-wringing about privacy and security risks; not to mention the fact age checks will likely be trivially easy to dodge by those who know how to use a VPN etc, or via accessing adult content on social media.)

But a centralized database of children for age verification purposes — if that is indeed the lines along which the Home Office is thinking — sounds rather closer to Chinese government Internet controls.

In recent years, after all, the Chinese state has been pushing games companies to age-verify users to enforce limits on play time for kids (also apparently in response to health concerns around video gaming addiction).

The UK has also pushed to create centralized databases of web browsers’ activity for law enforcement purposes, under the 2016 Investigatory Powers Act. (Parts of which it’s had to rethink following legal challenges, with other legal challenges ongoing.)

In recent years it has also emerged that UK spy agencies maintain bulk databases of citizens — known as ‘bulk personal datasets‘ — regardless of whether a particular individual is suspected of a crime.

So building yet another database to contain children’s ages isn’t perhaps as off piste as you might imagine for the country.

Returning to the DCMS committee’s enquiry, other questions for Snap from MPs included several critical ones related to its ‘streaks’ feature — whereby users who have been messaging each other regularly are encouraged not to stop the back and forth.

The parliamentarians raised constituent and industry concerns about the risk of peer pressure being piled on kids to keep the virtual streaks going.

Snap’s reps told the committee the feature is intended to be a “celebration” of close friendship, rather than being intentionally designed to make the platform sticky and so encourage stress.

Though they conceded users have no way to opt out of streak emoji appearing.

They also noted they have previously reduced the size of the streak emoji to make it less prominent.

But they added they would take concerns back to product teams and re-examine the feature in light of the criticism.

You can watch the full committee hearing with Snap here.

Facebook says the original New Zealand shooter video was viewed about 4,000 times before removal

Facebook released new figures about its attempts to stop the spread of videos after a shooter livestreamed his attacks on two Christchurch, New Zealand mosques last Friday, killing 50 people.

In a blog post, Facebook vice president and deputy general counsel Chris Sonderby said that the video was viewed less than 200 times during the live broadcast, during which no users reported the video. Including views during the live broadcast, the video was viewed about 4,000 times before it was removed from Facebook. It was first reported 29 minutes after it started streaming, or 12 minutes after it had ended. Sonderby said a link to a copy was posted onto 8chan, the message board that played a major role in the video’s propagation online, before Facebook was alerted to it.

Before the shootings the suspect, a 28-year-old white man, posted an anti-Muslim and pro-fascism manifesto. Sonderby said the shooter’s personal accounts had been removed from Facebook and Instagram, and that it is “actively identifying and removing” imposter accounts.

Facebook’s new numbers come one day after the company said it had removed about 1.5 million videos of the shooting in the first 24 hours after the attack, including 1.2 million that were blocked at upload, and therefore not available for viewing. But that means it failed to block 20 percent of those videos, or 300,000, which were uploaded to the platform and therefore could be watched.

Both sets of figures, while meant to provide transparency, seem unlikely to quell criticism of the social media platform’s role in spreading violent videos and dangerous ideologies, especially since Facebook Live launched three years ago. They call into question why the platform is still heavily reliant on user reports, despite its AI and machine learning-based moderation tools, and why removals don’t happen more quickly, especially during a crisis (and even routine moderation takes a deep psychological toll on the human monitors tasked with filling in the gaps left by AI). The challenges of moderation are only compounded by Facebook’s scale (it now claims more than 2 billion monthly users).

Sonderby also said that the company has hashed the original Facebook Live video to help detect and remove other visually similar videos from Facebook and Instagram. It has also shared more than 800 visually-distinct videos related to the attack through a database it shares with members of the Global Internet Forum to Counter Terrorism (GIFCT). “This incident highlights the importance of industry cooperation regarding the range of terrorists and violent extremists operating online,” he wrote.

Other online platforms, however, have also struggled to stop the video’s spread. For example, uploaders were able to use minor modifications, like watermarks or altering the size of clips, to stymie YouTube’s content moderation tools.

EU gov’t and public health sites lousy with adtech, study finds

A study of tracking cookies running on government and public sector health websites in the European Union has found commercial adtech to be operating pervasively even in what should be core not-for-profit corners of the Internet.

The researchers used searches including queries related to HIV, mental health, pregnancy, alcoholism and cancer to examine how frequently European Internet users are tracked when accessing national health service webpages to look for publicly funded information about sensitive concerns.

The study also found that most EU government websites have commercial trackers embedded on them, with 89 per cent of official government websites found to contain third party ad tracking technology.

The research was carried out by Cookiebot using its own cookie scanning technology to examine trackers on public sector websites, scanning 184,683 pages on all 28 EU main government websites.
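
Cookiebot has not published its scanner, but the general approach — load each page in a real browser and record which third-party domains receive requests — can be sketched in a few lines of TypeScript using Puppeteer. Everything below is an assumed illustration of that methodology, not Cookiebot’s code, and the example URL is hypothetical.

```typescript
// Rough sketch of a tracker scan in the spirit of the study (not Cookiebot's code).
import puppeteer from "puppeteer";

async function thirdPartyDomains(pageUrl: string): Promise<Set<string>> {
  const firstParty = new URL(pageUrl).hostname;
  const domains = new Set<string>();

  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Record the hostname of every request the page triggers.
  page.on("request", req => {
    const host = new URL(req.url()).hostname;
    // Crude first-party test, good enough for illustration.
    if (!host.endsWith(firstParty)) {
      domains.add(host);
    }
  });

  await page.goto(pageUrl, { waitUntil: "networkidle2" });
  await browser.close();
  return domains;
}

// Example: list third-party hosts contacted by a (hypothetical) health landing page.
thirdPartyDomains("https://example.gov/health/hiv-advice")
  .then(domains => console.log([...domains].sort()));
```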

Only the Spanish, German and the Dutch websites were found not to contain any commercial trackers.

The highest number of tracking companies were present on the websites of the French (52), Latvian (27), Belgian (19) and Greek (18) governments.

The researchers also ran a sub-set of 15 health-related queries across six EU countries (UK, Ireland, Spain, France, Italy and Germany) to identify relevant landing pages hosted on the websites of the corresponding national health service — going on to count and identify tracking domains operating on the landing pages.

Overall, they found a majority (52 per cent) of landing pages on the national health services of the six EU countries contained third party trackers.

Broken down by market, the Irish health service ranked worst — with 73 per cent of landing pages containing trackers.

While the UK, Spain, France and Italy had trackers on 60 per cent, 53 per cent, 47 per cent and 47 per cent of landing pages, respectively.

Germany ranked lowest of the six, yet the researchers still found that a third of its health service landing pages contained trackers.

The presence of adtech on publicly funded health service sites suggests highly sensitive inferences could be made about web users by the commercial companies behind the trackers.

Cookiebot found a very long list of companies involved — flagging for example how 63 companies were monitoring a single German webpage about maternity leave; and 21 different companies were monitoring a single French webpage about abortion.

“Vulnerable citizens who seek official health advice are shown to be suffering sensitive personal data leakage,” it writes in the report. “Their behaviour on these sites can be used to infer sensitive facts about their health condition and life situation. This data will be processed and often resold by the ad tech industry, and is likely to be used to target ads, and potentially affect economic outcomes, such as insurance risk scores.”

“These citizens have no clear way to prevent this leakage, understand where their data is sent, or to correct or delete the data,” it warns. 

It’s worth noting that Cookiebot and its parent company Cybot’s core business is related to selling EU data protection compliance services. So it’s not without its own commercial interests here. Though there’s no doubting the underlying adtech sprawl the report flags.

Where there’s some fuzziness is around exactly what these trackers are doing, as some could be used for benign site functions like website analytics.

Albeit, if/when the owner of the freebie analytics services in question is also adtech giant Google that still may not feel reassuring, from a privacy point of view.

100+ firms tracking EU public sector site users

Across both government and health service websites, Cookiebot says it identified a total of 112 companies using trackers that send data to a total of 131 third party tracking domains.

It also found 10 companies which actively masked their identity — with no website hosted at their tracking domains, and domain ownership (WHOIS) records hidden by domain privacy services, meaning they could not be identified. That’s obviously of concern. 

Here’s the table of identified tracking companies — which, disclosure alert, includes AOL and Yahoo which are owned by TechCrunch’s parent company, Verizon.

Adtech giants Google and Facebook are also among adtech companies tracking users across government and health service websites, along with a few other well known tech names — such as Oracle, Microsoft and Twitter.

Cookiebot’s study names Google “the kingpin of tracking” — finding the company performed more than twice as much tracking as any other, seemingly as a result of Google owning several of the most dominant ad tracking domains.

Google-owned YouTube.com, DoubleClick.net and Google.com were the top three tracking domains IDed by the study. 

“Through the combination of these domains, Google tracks website visits to 82% of the EU’s main government websites,” Cookiebot writes. “On each of the 22 main government websites on which YouTube videos have been installed, YouTube has automatically loaded a tracker from DoubleClick.net (Google’s primary ad serving domain). Using DoubleClick.net and Google.com, Google tracks visits to 43% of the scanned health service landing pages.”


“Given its control of many of the Internet’s top platforms (Google Analytics, Maps, YouTube, etc.), it is no surprise that Google has greater success at gaining tracking access to more webpages than anyone else,” it continues. “It is of special concern that Google is capable of cross-referencing its trackers with its 1st party account details from popular consumer-oriented services such as Google Mail, Search, and Android apps (to name a few) to easily associate web activity with the identities of real people.”

Under European data protection law “subjective” information that’s associated with an individual — such as opinions or assessments — is absolutely considered personal data.

So tracker-fuelled inferences being made about site visitors are subject to EU data protection law — which has even more strict rules around the processing of sensitive categories of information like health data.

That in turn suggests that any adtech companies doing third-party-tracking of Internet users and linking sensitive health queries to individual identities would need explicit user consent to do so.

The presence of adtech trackers on sensitive health data pages certainly raises plenty of questions.

We asked Google for a response to the Cookiebot report, and a spokesperson sent us the following statement regarding sensitive category data specifically — in which it claims: “We do not permit publishers to use our technology to collect or build targeting lists based on users’ sensitive information, including health conditions like pregnancy or HIV.”

Google also claims it does not itself infer sensitive user interest categories.

Furthermore it said its policies for personalized ads prohibit its advertisers from collecting or using sensitive interest categories to target users. (Though telling someone not to do something is not the same as that thing not being done. That would depend on the enforcement.)

Google’s spokesperson was also keen to point to its EU user consent policy — where it says it requires site owners that use its services to ensure they have correct disclosures and consents for personalised ads and cookies from European end users.

The company warns it may suspend or terminate a site’s use of its services if they have not obtained the right disclosures and consents. It adds there’s no exception for government sites.

On tags and disclosure generally, the Google spokesperson provided the following comment: “Our policies are clear: If website publishers choose to use Google web or advertising products, they must obtain consent for cookies associated with those products.”

Where Google Analytics cookies are concerned, Google said traffic data is only collected and processed per instructions it receives from site owners and publishers — further emphasizing that such data would not be used for ads or Google purposes without authorization from the website owner or publisher.

Albeit sloppy implementations of freebie Google tools by resource-strapped public sector site administrators might make such authorizations all too easy to unintentionally enable.

So, tl;dr — as Google tells it — the onus for privacy compliance is on the public sector websites themselves.

Though given the complex and opaque mesh of technology that’s grown up sheltering under the modern ‘adtech’ umbrella, opting out of this network’s clutches entirely may be rather easier said than done.

Cookiebot’s founder, Daniel Johannsen, makes a similar point to Google’s in the report intro, writing: “Although the governments presumably do not control or benefit from the documented data collection, they still allow the safety and privacy of their citizens to be compromised within the confines of their digital domains — in violation of the laws that they have themselves put in place.”

“More than nine months into the GDPR [General Data Protection Regulation], a trillion-dollar industry is continuing to systematically monitor the online activity of EU citizens, often with the unintentional assistance of the very governments that should be regulating it,” he adds, calling for public sector bodies to “lead by example – at a minimum by shutting down any digital rights infringements that they are facilitating on their own websites”.

“The fact that so many public sector websites have failed to protect themselves and their visitors against the inventive methods of the tracking industry clearly demonstrates the educational challenge that the wider web faces: How can any organisation live up to its GDPR and ePrivacy obligations if it does not control unauthorised tracking actors accessing their website?”

Trackers creeping in by the backdoor

On the “inventive methods” front, the report flags how third party javascript technologies — used by websites for functions like video players, social sharing widgets, web analytics, galleries and comments sections — can offer a particularly sneaky route for trackers to be smuggled into sites and apps by the ‘backdoor’.

Cookiebot gives the example of social sharing tool, ShareThis, which automatically adds buttons to each webpage to make it easy for visitors to share information across social media platforms.

The ShareThis social plugin is used by Ireland’s public health service, the Health Service Executive (HSE). And there Cookiebot found it releases trackers from more than 20 ad tech companies into every webpage it is installed on.

“By analysing web pages on HSE.ie, we found that ShareThis loads 25 other trackers, which track users without permission,” it writes. “This result was confirmed on pages linked from search queries for “mortality rates of cancer patients” and “symptoms of postpartum depression”.”

“Although website operators like the HSE do control which 3rd parties (like ShareThis) they add to their websites, they have no direct control over what additional “4th parties” those 3rd parties might smuggle in,” it warns.
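
The “4th party” problem follows from how script embeds work: the site owner adds one script tag from one vendor, and that vendor’s script is free to append further script tags pointing at domains the site owner never vetted. A minimal, hypothetical illustration in TypeScript (this is not ShareThis’s code; the domains are invented):

```typescript
// What the site owner writes: one widget from one vendor.
// <script src="https://widget.example-vendor.com/buttons.js"></script>

// What buttons.js itself might do once it runs on the page (hypothetical vendor code):
const fourthParties = [
  "https://tracker-one.example.com/collect.js",
  "https://tracker-two.example.com/pixel.js",
];

for (const src of fourthParties) {
  const s = document.createElement("script");
  s.src = src;                  // domains the embedding site never vetted
  s.async = true;
  document.head.appendChild(s); // and now they run with the same page context
}
```

The embedding site’s only real lever is whether to include the widget at all, which is exactly the lack of “direct control” Cookiebot flags.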

We’ve reached out to ShareThis for a response.

Another example flagged by the report is what Cookiebot dubs “YouTube’s Tracking Cover-Up”.

Here it says it found that even when a website has enabled YouTube’s so-called “Privacy-enhanced Mode”, in a bid to limit its ability to track site users, the mode “currently stores an identifier named ‘yt-remote-device-id’ in the web browser’s Local Storage”, which Cookiebot found “allows tracking to continue regardless of whether users click, watch, or in any other way interact with a video – contrary to Google’s claims”.

“Rather than disabling tracking, “privacy-enhanced mode” seems to cover it up,” it claims.
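For readers who want to look for the identifier themselves, a minimal check (again browser-side TypeScript) is below. One caveat: Local Storage is partitioned by origin, so on a page that merely embeds the player the value sits under the YouTube iframe’s own origin; inspect it from that frame’s context (for example via the dev tools frame selector or Storage panel) rather than from the host page’s console.

```typescript
// Minimal check for the identifier Cookiebot flags. Run it in the context of
// the embedded player's origin, since Local Storage is partitioned per origin.
const ytDeviceId: string | null = window.localStorage.getItem("yt-remote-device-id");
console.log(ytDeviceId ?? "no yt-remote-device-id found in this origin's storage");
```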

Google did not provide an on the record comment regarding that portion of the report.

Instead the company sent some background information about “privacy-enhanced mode” — though its points did not engage at all with Cookiebot’s claim that tracking continues regardless of whether a user watches or interacts with a video in any way.

Overall, Google’s main point of rebuttal vis-a-vis the report’s conclusion — i.e. that even on public sector sites surveillance capitalism is carrying on business as usual — is that not all cookies and pixels are ad trackers. So its claim is that a cookie ‘signal’ might just be harmless background ‘noise’.

(In additional background comments Google suggested that if a website is running an advertising campaign using its services — which presumably might be possible in a public sector scenario if an embedded YouTube video contains an ad (for example) — then an advertising cookie could be a conversion pixel used (only) to measure the effectiveness of the ad, rather than to track a user for ad targeting.

For DoubleClick cookies on websites in general, Google told us this type of cookie would only appear if the website specifically signed up with its ad services or another vendor which uses its ad services.

It further claimed it does not embed tracking pixels on random pages or via Google Analytics with Doubleclick cookies.)

The problem here is the opacity of the adtech industry, which requires users to take ad targeters at their word. They must trust that an adtech giant like Google, which makes pots of money off of tracking web users to target them with ads, has nonetheless built perfectly privacy-respecting, non-leaky infrastructure that operates 100% as separately and cleanly as claimed, even as the entire adtech industry’s business incentives push in the opposite direction.

Also a problem: certain adtech giants have a long and storied history of bundling purposes for user data and manipulating consent in privacy-hostile ways.

And with trust in adtech at such a historic low — plus regulation having been rebooted in Europe to put the focus on enforcement (which is encouraging a cottage industry of GDPR ‘compliance’ services to wade in) — the industry’s preferred cloak of complex opacity is under attack on multiple fronts (including from policymakers) and does look to be on borrowed time.

And as more light shines in and risk steps up, sensitive public sector websites could just decide to nix using any of these freebie plugins.

In another “inventive” case study highlighted by the report, Cookiebot writes that it documented instances of Facebook using a first party cookie to work around Safari’s Intelligent Tracking Prevention system and harvest user data on two Irish and UK health landing pages.

So even though Apple’s browser natively purges third party cookies by default to enhance user privacy, Facebook’s engineers appear to have managed to create a workaround.

Cookiebot says this works by Facebook’s new first party cookie — “_fbp” — storing a unique user ID that’s then forwarded as a URL parameter in the pixel tracker “tr” to Facebook.com — “thus allowing Facebook to track users after all”, i.e. despite Safari’s best efforts to prevent pervasive third party tracking.

“In our study, this combined tracking practice was documented on 2 Irish and UK landing pages featuring health information about HIV and mental illness,” it writes. “These types of workarounds of browser tracking prevention are highly intrusive as they undermine users’ attempts to protect their personal data – even when using browsers and extensions with the most advanced protection settings.”
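As a rough, generic illustration of that workaround pattern (not Facebook’s actual code: the “/tr” pixel path and “_fbp” cookie name come from Cookiebot’s report, while the other parameter names and the ID format are assumptions for the sketch), a script running in the host page’s first-party context can set the cookie itself and then ship its value to the tracker’s server as a URL parameter on an ordinary image request, which third-party cookie purging never touches.

```typescript
// Generic sketch of the first-party-cookie-plus-pixel pattern, not Facebook's
// actual code. The "/tr" pixel path and "_fbp" cookie name come from the
// Cookiebot report; other parameter names, the ID format and PIXEL_ID are
// placeholders/assumptions.
function readOrCreateFirstPartyId(name: string): string {
  const match = document.cookie.match(new RegExp(`(?:^|; )${name}=([^;]*)`));
  if (match) return match[1];
  const id = `fb.1.${Date.now()}.${Math.floor(Math.random() * 1e10)}`;
  // A first-party cookie: set under the host site's own domain, so browser
  // defenses aimed at third-party cookies do not remove it.
  document.cookie = `${name}=${id}; path=/; max-age=${60 * 60 * 24 * 90}`;
  return id;
}

const fbp = readOrCreateFirstPartyId("_fbp");
const pixel = new Image();
// The unique ID travels as a URL parameter on an ordinary image request,
// so the receiving server can re-identify the user without any third-party cookie.
pixel.src = `https://www.facebook.com/tr?id=PIXEL_ID&ev=PageView&fbp=${encodeURIComponent(fbp)}`;
```

Because the cookie lives on the health site’s own domain, Safari treats it as first party, and the identifier simply travels back to the tracker in the URL instead.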

Reached for a response to the Cookiebot report, Facebook also did not engage with the case study of its Safari third party cookie workaround.

Instead, a spokesman sent us the following line: “[Cookiebot’s] investigation highlights websites that have chosen to use Facebook’s Business Tools — for example, the Like and Share buttons, or the Facebook pixel. Our Business Tools help websites and apps grow their communities or better understand how people use their services. For example, we could tell them that their site is most popular among people aged 20-25.”

In further information provided to us on background the company confirmed that data it receives from websites can be used for enhancing ad targeting on Facebook. (It said Facebook users can switch off ad personalization based on such signals — via the “Ads Based on Data from Partners” setting in Ad Preferences.)

It also said organizations that make use of its tools are subject to its Business Tools terms — which Facebook said require them to provide users with notice and obtain any required legal consent, including being clear with users about any information they share with it. 

Facebook further claimed it prohibits apps and websites from sending it sensitive data — saying it takes steps to detect and remove data that should not be shared with it.

ePrivacy Regulation needed to raise the bar

Commenting on the report in a statement, Diego Naranjo, senior policy advisor at digital rights group EDRi, called for European regulators to step up to defend citizens’ privacy.

“For the last 20 years, Europe has fought to regulate the sprawling chaos of data tracking. The GDPR is a historical attempt to bring the information economy in line with our core civil liberties, securing the same level of democratic control and trust online as we take for granted in our offline world. Yet, as this study has provided evidence of, nine months into the new regulation, online tracking remains as hidden, uncontrollable, and plentiful as ever,” he writes in the report. “We stress that it is the duty of regulators to ensure their citizens’ privacy.”

Naranjo also warned that another EU privacy regulation, the ePrivacy Regulation — which is intended to deal directly with tracking technologies — risks being watered down.

In the wake of GDPR it’s become the focus of major lobbying efforts, as we’ve reported before.

“One of the great added values of the ePrivacy Regulation is that it is meant to raise the bar for companies and other actors who want to track citizens’ behaviour on the Internet. Regrettably, now we are seeing signs of the ePrivacy Regulation becoming watered out, specifically in areas concerning “legitimate interest” and “consent”,” he warns.

“A watering down of the ePrivacy Regulation will open a Pandora’s box of more and more sharing, merging and reselling of personal data in huge online commercial surveillance networks, in which citizens are being unwittingly tracked and micro-targeted with commercial and political manipulation. Instead, the ePrivacy Regulation must set the bar high in line with the wishes of the European Parliament, securing that the privacy of our fellow citizens does not succumb to the dominion of the ad tech industry.”