How ‘ghost work’ in Silicon Valley pressures the workforce, with Mary Gray

The phrase “pull yourself up by your own bootstraps” was originally meant sarcastically.

It’s not actually physically possible to do — especially while wearing Allbirds, having just fallen off a Bird scooter in downtown San Francisco — but I should get to my point.

This week, Ken Cuccinelli, the acting director of United States Citizenship and Immigration Services, repeatedly referred to the notion of bootstraps in announcing shifts in immigration policy, even going so far as to change the words to Emma Lazarus’s famous poem “The New Colossus:” no longer “give me your tired, your poor, your huddled masses yearning to breathe free,” but “give me your tired and your poor who can stand on their own two feet, and who will not become a public charge.”

We’ve come to expect “alternative facts” from this administration, but who could have foreseen alternative poems?

Still, the concept of ‘bootstrapping’ is far from limited to the rhetorical territory of the welfare state and social safety net. It’s also a favorite term of art in Silicon Valley tech and venture capital circles: see for example this excellent (and scary) recent piece by my editor Danny Crichton, in which young VC firms attempt to overcome a lack of the startup capital that is essential to their business model by creating, as perhaps an even more essential feature of their model, impossible working conditions for most everyone involved. Often with predictably disastrous results.

It is in this context of unrealistic expectations about people’s labor that I want to introduce my most recent interviewee in this series of in-depth conversations about ethics and technology.

Mary L. Gray is a Fellow at Harvard University’s Berkman Klein Center for Internet and Society and a Senior Researcher at Microsoft Research. One of the world’s leading experts in the emerging field of ethics in AI, Mary is also an anthropologist who maintains a faculty position at Indiana University. With her co-author Siddharth Suri (a computer scientist), Gray coined the term “ghost work,” as in the title of their extraordinarily important 2019 book, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. 


Image via Mary L. Gray / Ghostwork / Adrianne Mathiowetz Photography

Ghost Work is a name for a rising new category of employment that involves people scheduling, managing, shipping, billing, etc. “through some combination of an application programming interface, APIs, the internet and maybe a sprinkle of artificial intelligence,” Gray told me earlier this summer. But what really distinguishes ghost work (and makes Mary’s scholarship around it so important) is the way it is presented and sold to the end consumer as artificial intelligence and the magic of computation.
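To make that concrete, here is a deliberately simplified sketch of how “a sprinkle of artificial intelligence” often works in practice — every name in it is invented for illustration, not drawn from any real service. The caller sees one clean “AI” endpoint, while the cases the model can’t handle confidently are quietly routed to on-demand human workers.

```typescript
// Hypothetical sketch only — all names here are invented for illustration.
// The consumer calls what looks like pure AI; behind the API, tasks the
// model can't handle confidently become ghost work for on-demand humans.

type Prediction = { label: string; confidence: number };

async function classifyImage(
  imageUrl: string,
  model: (url: string) => Promise<Prediction>,
  humanTaskQueue: (url: string) => Promise<string>,
): Promise<string> {
  const prediction = await model(imageUrl);

  // The "sprinkle of AI" handles the easy cases...
  if (prediction.confidence >= 0.9) {
    return prediction.label;
  }

  // ...and everything else is quietly completed by a person, paid per
  // task and invisible to the end consumer, who sees only "the AI".
  return humanTaskQueue(imageUrl);
}
```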

In other words, just as we have long enjoyed telling ourselves that it’s possible to hoist ourselves up in life without help from anyone else (I like to think anyone who talks seriously about “bootstrapping” should be legally required to rephrase as “raising oneself from infancy”), we now attempt to convince ourselves and others that it’s possible, at scale, to get computers and robots to do work that only humans can actually do.

The purpose of ghost work, as I understand it, is to elevate the value of what the computers are doing (a minority of the work) and make us forget, as much as possible, about the actual messy human beings contributing to the services we use. Well, except for the founders, and maybe the occasional COO.

Facebook now has far more employees than Harvard has students, but many of us still talk about it as if it were little more than Mark Zuckerberg, Sheryl Sandberg, and a bunch of circuit boards.

But if working people are supposed to be ghosts, then when they speak up or otherwise make themselves visible, they are “haunting” us. And maybe it can be haunting to be reminded that you didn’t “bootstrap” yourself to billions or even to hundreds of thousands of dollars of net worth.

Sure, you worked hard. Sure, your circumstances may well have stunk. Most people’s do.

But none of us rise without help, without cooperation, without goodwill, both from those who look and think like us and those who do not. Not to mention dumb luck, even if only our incredible good fortune of being born with a relatively healthy mind and body, in a position to learn and grow, here on this planet, fourteen billion years or so after the Big Bang.

I’ll now turn to the conversation I recently had with Gray, which turned out to be more hopeful than this introduction might suggest.

Greg Epstein: One of the most central and least understood features of ghost work is the way it revolves around people constantly making themselves available to do it.

Mary Gray: Yes, [What Siddharth Suri and I call ghost work] values having a supply of people available, literally on demand. Their contributions are collective contributions.

It’s not one person you’re hiring to take you to the airport every day, or to confirm the identity of the driver, or to clean that data set. Unless we’re valuing that availability of a person, to participate in the moment of need, it can quickly slip into ghost work conditions.

US legislator David Cicilline joins international push to interrogate platform power

US legislator David Cicilline will be joining the next meeting of the International Grand Committee on Disinformation and ‘Fake News’, it has been announced. The meeting will be held in Dublin on November 7.

Chair of the committee, the Irish Fine Gael politician Hildegarde Naughton, announced Cicilline’s inclusion today.

The congressman — who is chairman of the US House Judiciary Committee’s Antitrust, Commercial, and Administrative Law Subcommittee — will attend as an “ex officio member” which will allow him to question witnesses, she added.

Exactly who the witnesses in front of the grand committee will be is yet to be confirmed. But the inclusion of a US legislator in the ranks of a non-US committee that’s been seeking answers about reining in online disinformation will certainly make any invitations that get extended to senior executives at US-based tech giants much harder to ignore.

Naughton points out that the addition of American legislators also means the International Grand Committee represents ~730 million citizens — and “their right to online privacy and security”.

“The Dublin meeting will be really significant in that it will be the first time that US legislators will participate,” she said in a statement. “As all the major social media/tech giants were founded and are headquartered in the United States it is very welcome that Congressman Cicilline has agreed to participate. His own Committee is presently conducting investigations into Facebook, Google, Amazon and Apple and so his attendance will greatly enhance our deliberations.”

“Greater regulation of social media and tech giants is fast becoming a priority for many countries throughout the world,” she added. “The International Grand Committee is a gathering of international parliamentarians who have a particular responsibility in this area. We will coordinate actions to tackle online election interference, ‘fake news’, and harmful online communications, amongst other issues while at the same time respecting freedom of speech.”

The international committee met for its first session in London last November — when it was forced to empty-chair Facebook founder Mark Zuckerberg who had declined to attend in person, sending UK policy VP Richard Allan in his stead.

Lawmakers from nine countries spent several hours taking Allan to task over Facebook’s lack of accountability for problems generated by the content it distributes and amplifies, raising myriad examples of ongoing failure to tackle the democracy-denting, society-damaging disinformation — from election interference to hate speech whipping up genocide.

A second meeting of the grand committee was held earlier this year in Canada — taking place over three days in May.

Again Zuckerberg failed to show. Facebook COO Sheryl Sandberg also gave international legislators zero facetime, with the company opting to send local head of policy Kevin Chan and global head of policy Neil Potts as stand-ins.

Lawmakers were not amused. Canadian MPs voted to serve Zuckerberg and Sandberg with an open summons — meaning they’ll be required to appear before the committee the next time they set foot in the country.

Parliamentarians in the UK also issued a summons for Zuckerberg last year after repeat snubs to testify to the Digital, Culture, Media and Sport committee’s enquiry into fake news — a decision that essentially gave birth to the international grand committee, as legislators in multiple jurisdictions united around a common cause of trying to find ways to hold social media giants to account.

While it’s not clear who the grand committee will invite to the next session, Facebook’s founder seems highly unlikely to have dropped off their list. And this time Zuckerberg and Sandberg may find it harder to turn down an invite to Dublin, given the committee’s ranks will include a homegrown lawmaker.

In a statement on joining the next meeting, Cicilline said: “We are living in a critical moment for privacy rights and competition online, both in the United States and around the world.  As people become increasingly connected by what seem to be free technology platforms, many remain unaware of the costs they are actually paying.

“The Internet has also become concentrated, less open, and growingly hostile to innovation. This is a problem that transcends borders, and it requires multinational cooperation to craft solutions that foster competition and safeguard privacy online. I look forward to joining the International Grand Committee as part of its historic effort to identify problems in digital markets and chart a path forward that leads to a better online experience for everyone.”

Multiple tech giants (including Facebook) have their international headquarters in Ireland — making the committee’s choice of location for their next meeting a strategic one. Should any tech CEOs thus choose to snub an invite to testify to the committee they might find themselves being served with an open summons to testify by Irish parliamentarians — and not being able to set foot in a country where their international HQ is located would be more than a reputational irritant.

Ireland’s privacy regulator, the Data Protection Commission (DPC), is also sitting on a stack of open investigations against tech giants — again with Facebook and Facebook-owned companies producing the fattest file (some 11 investigations). But there are plenty of privacy and security concerns to go around, with the DPC’s current case file also touching tech giants including Apple, Google, LinkedIn and Twitter.

Facebook denies making contradictory claims on Cambridge Analytica and other ‘sketchy’ apps

Facebook has denied contradicting itself in evidence to the UK parliament and a US public prosecutor.

Last month the Department for Digital, Culture, Media and Sport (DCMS) committee wrote to the company to raise what it said were discrepancies in evidence Facebook has given to international parliamentarians vs evidence submitted in response to the Washington, DC Attorney General — which is suing Facebook on its home turf, over the Cambridge Analytica data misuse scandal.

Yesterday Bloomberg obtained Facebook’s response to the committee.

In the letter Rebecca Stimson, the company’s head of U.K. public policy, denies any inconsistency in evidence submitted on both sides of the Atlantic, writing:

The evidence given to the Committees by Mike Schroepfer (Chief Technology Officer), Lord Allan (Vice President for Policy Solutions), and other Facebook representatives is entirely consistent with the allegations in the SEC Complaint filed 24 July 2019. In their evidence, Facebook representatives truthfully answered questions about when the company first learned of Aleksandr Kogan / GSR’s improper transfer of data to Cambridge Analytica, which was in December 2015 through The Guardian’s reporting. We are aware of no evidence to suggest that Facebook learned any earlier of that improper transfer.

As we have told regulators, and many media stories have since reported, we heard speculation about data scraping by Cambridge Analytica in September 2015. We have also testified publicly that we first learned Kogan sold data to Cambridge Analytica in December 2015. These are two different things and this is not new information.

Stimson goes on to claim that Facebook merely heard “rumours in September 2015 that Cambridge Analytica was promoting its ability to scrape user data from public Facebook pages”. (In statements made earlier this year to the press on this same point Facebook has also used the word “speculation” to refer to the internal concerns raised by its staff, writing that “employees heard speculation that Cambridge Analytica was scraping data”.)

In the latest letter, Stimson repeats Facebook’s earlier line about data scraping being common for public pages (which may be true, but plenty of Facebook users’ pages aren’t public to anyone other than their hand-picked friends so… ), before claiming it’s not the same as the process by which Cambridge Analytica obtained Facebook data (i.e. by paying a developer on Facebook’s platform to build an app that harvested users’ and users’ friends’ data).

“The scraping of data from public pages (which is unfortunately common for any internet service) is different from, and has no relationship to, the illicit transfer to third parties of data obtained by an app developer (which was the subject of the December 2015 Guardian article and of Facebook representatives’ evidence),” she writes, suggesting a ‘sketchy’ data modeling company with deep Facebook platform penetration looked like ‘business as usual’ for Facebook management back in 2015.

As we’ve reported before, it has emerged this year — via submissions to other US legal proceedings against Facebook — that staff working for its political advertising division raised internal concerns about what Cambridge Analytica was up to in September 2015, months prior to The Guardian article which Facebook founder Mark Zuckerberg has claimed is the point when he personally learned what Cambridge Analytica was doing on his platform.

These Facebook staff described Cambridge Analytica as a “sketchy (to say the least) data modeling company that has penetrated our market deeply” — months before the newspaper published its scoop on the story, per an SEC complaint which netted Facebook a $100M fine, in addition to the FTC’s $5BN privacy penalty.

Nonetheless, Facebook is once again claiming there’s nothing but ‘rumors’ to see here.

The DCMS committee also queried Facebook’s flat denial of the Washington, DC Attorney General’s allegations that the company knew of other apps misusing user data; that it failed to take proper measures to secure user data by failing to enforce its own platform policy; and that it failed to disclose to users when their data was misused — pointing out that Facebook reps told the committee on multiple occasions that Facebook knew of other apps violating its policies and had taken action against them.

Again, Facebook denies any contradiction whatsoever here.

“The particular allegation you cite asserts that Facebook knew of third party applications that violated its policies and failed to take reasonable measures to enforce against them,” writes Stimson. “As we have consistently stated to the Committee and elsewhere, we regularly take action against apps and developers who violate our policies. We therefore appropriately, and consistently with what we told the Committee, denied the allegation.”

So, turns out, Facebook was only flat denying some of the allegations in para 43 of the Washington, DC Attorney General’s complaint. But the company doesn’t see bundling responses to multiple allegations under one blanket denial as in any way misleading…

In a tweet responding to Facebook’s latest denial, DCMS committee chair Damian Collins dubbed the company’s response “typically disingenuous” — before pointing out: “They didn’t previously disclose to us concerns about Cambridge Analytica prior to Dec 2015, or say what they did about it & haven’t shared results of investigations into other Apps.”

On the app audit issue, Stimson’s letter justifies Facebook’s failure to provide the DCMS committee with the requested information on the other ‘sketchy’ apps it’s investigating by saying the investigation is, er, “ongoing”. That’s the investigation CEO Mark Zuckerberg announced in a Facebook blog post on March 21, 2018 — saying then that Facebook would “investigate all apps that had access to large amounts of information”; “conduct a full audit of any app with suspicious activity”; “ban any developer from our platform that does not agree to a thorough audit”; ban any developers found to have misused user data; and “tell everyone affected by those apps”.

More than a year ago Facebook did reveal that it had suspended around 200 suspicious apps out of “thousands” reviewed. However, updates on Zuckerberg’s great app audit have been thin on the ground since then, to say the least.

“We will update the Committee as we publicly share additional information about that extensive effort,” says Stimson now.

Facebook to admit ownership of Instagram, WhatsApp in hard-to-read small print

For the first time in more than half a decade, Facebook wants to inform you that it owns Instagram, the hyper-popular rival social networking app it acquired for a $1BN steal back in 2012.

Ditto messaging platform WhatsApp — which Mark Zuckerberg splurged $19BN on a couple of years later to keep feeding eyeballs into his growth engine.

Facebook is adding its own brand name alongside the other two — in the following format: ‘Instagram from Facebook’; ‘WhatsApp from Facebook.’

The cheap-perfume-style rebranding was first reported by The Information, which cites three people familiar with the matter who said employees of the two apps were recently notified internally of the plan to rebrand.

“The move to add Facebook’s name to the apps has been met with surprise and confusion internally, reflecting the autonomy that the units have operated under,” it said. It also reported that CEO Mark Zuckerberg has been frustrated that Facebook doesn’t get more credit for the growth of Instagram and WhatsApp.

So it sounds like Facebook may be hoping for a little reverse osmosis brand-washing — aka leveraging the popularity of its cleaner social apps to detoxify the scandal-hit mothership.

Not that Facebook is saying anything like that publicly, of course.

In a statement to The Information confirming the rebranding it explained it thus: “We want to be clearer about the products and services that are part of Facebook.”

The rebranding also comes at a time when Facebook is facing at least two antitrust investigations on its home turf — where calls for Facebook and other big tech giants to be broken up are now a regular feature of the campaign trail…

We can only surmise the legal advice Facebook must be receiving vis-a-vis what it should do to try to close down break-up arguments that could deprive it of its pair of golden growth geese.

Arguments such as the fact most Instagram (and WhatsApp) users don’t even know they’re using a Facebook-owned app. Hence, as things stand, it would be pretty difficult for Facebook’s lawyers to successfully argue Instagram and WhatsApp users would be harmed if the apps were cut free by a break-up order.

But now — with the clumsy ‘from Facebook’ construction — Facebook can at least try to make a case that users are in a knowing relationship with Facebook in which they willingly, even if not lovingly, place their eyeballs in Zuckerberg’s bucket.

In which case Facebook is not telling you, the Instagram user, that it owns Instagram for your benefit. Not even slightly.

Note, for example, the use of the comparative adjective “clearer” in Facebook’s statement to explain its intent for the rebranding — rather than a simple statement: ‘we want to be clear’.

It’s definitely not saying it’s going to individually broadcast its ownership of Instagram and WhatsApp to each and every user on those networks. More like it’s going to try to creep the Facebook brand in. Which is far more in corporate character.

At the time of writing, a five-day-old update of Instagram’s iOS app already features the new construction — although it looks far more dark pattern than splashy rebrand, with just the faintest whisker of grey text at the base of the screen to disclose that you’re about to be sucked into the Facebook empire (vs a giant big blue ‘Create new account’ button winking to be tapped up top… )

Here’s the landing screen — with the new branding. Blink and you’ll miss it…


So not full disclosure then. More like just an easily overlooked dab of the legal stuff — to try to manage antitrust risk vs the risk of Facebook brand toxicity poisoning the (cleaner) wells of Instagram and WhatsApp.

There are signs the company is experimenting in some extremely dilute cross-brand-washing too.

The iOS app description for Instagram includes the new branding — tagged to an ad-style slogan that gushes: “Bringing you closer to the people and things you love.” But, frankly, who reads app descriptions?


Until pretty recently, both Instagram and WhatsApp operated with a degree of independence from their rapacious corporate parent — granted brand and operational autonomy under the original acquisition terms and the leadership of their original founders.

Not any more, though. Instagram’s founders cleared out last year, while WhatsApp’s jumped ship between 2017 and 2018.

Zuckerberg lieutenants and/or long-time Facebookers are now running both app businesses. The takeover is complete.

Facebook is also busy working on entangling the backends of its three networks — under a claimed ‘pivot to privacy’ which it announced earlier this year.

This also appears intended to try to put regulators off by making breaking up Facebook much harder than it would be if you could just split it along existing app lines. Theories of user harm potentially get more complicated if you can demonstrate cross-platform chatter.

The accompanying 3,000+ word screed from Zuckerberg introduced the singular notion of “the Facebook network”; aka one pool for users to splash in, three differently colored slides to funnel you in there.

“In a few years, I expect future versions of Messenger and WhatsApp to become the main ways people communicate on the Facebook network,” he wrote. “If this evolution is successful, interacting with your friends and family across the Facebook network will become a fundamentally more private experience.”

The ‘from Facebook’ rebranding thus looks like just a little light covering fire for the really grand dodge Facebook is hoping to pull off as the break-up bullet speeds down the pipe: entangling its core businesses at the infrastructure level.

From three networks to one massive Facebook-owned user data pool. 

One network to rule them all, one network to find them,
One network to bring them all, and in the regulatory darkness bind them.

Europe’s top court sharpens guidance for sites using leaky social plug-ins

Europe’s top court has made a ruling that could affect scores of websites that embed the Facebook ‘Like’ button and receive visitors from the region.

The ruling by the Court of Justice of the EU states such sites are jointly responsible for the initial data processing — and must either obtain informed consent from site visitors prior to data being transferred to Facebook, or be able to demonstrate a legitimate interest legal basis for processing this data.

The ruling is significant because, as currently seems to be the case, Facebook’s Like buttons transfer personal data automatically when a webpage loads — without the user even needing to interact with the plug-in. That means websites relying on visitors ‘consenting’ to their data being shared with Facebook will likely need to change how the plug-in functions, to ensure no data is sent to Facebook before visitors have been asked whether they want their browsing tracked by the adtech giant.
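For a sense of what such a change might look like in practice, here is a minimal sketch of the “two-click” consent pattern some sites already use: render an inert local placeholder, and only inject Facebook’s plug-in markup and SDK after the visitor opts in. The element ID, button text, and SDK URL here are illustrative assumptions, not a prescription from the court or from Facebook.

```typescript
// Minimal consent-gating sketch ("two-click" pattern). No request reaches
// Facebook's servers until the visitor explicitly opts in. The element ID
// and SDK URL are illustrative.

function loadLikeButton(container: HTMLElement, pageUrl: string): void {
  // Create the plug-in markup the Facebook SDK looks for...
  const plugin = document.createElement("div");
  plugin.className = "fb-like";
  plugin.dataset.href = pageUrl;
  container.textContent = ""; // clear the placeholder
  container.appendChild(plugin);

  // ...and only now pull in the SDK, which triggers the data transfer.
  const sdk = document.createElement("script");
  sdk.src = "https://connect.facebook.net/en_US/sdk.js#xfbml=1";
  sdk.async = true;
  document.body.appendChild(sdk);
}

// Until consent is given, the visitor sees only a local placeholder.
const slot = document.getElementById("like-slot") as HTMLElement;
const placeholder = document.createElement("button");
placeholder.textContent = "Enable the Like button (shares data with Facebook)";
placeholder.addEventListener("click", () => {
  loadLikeButton(slot, window.location.href);
});
slot.appendChild(placeholder);
```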

The background to the case is a complaint against online clothes retailer Fashion ID by a German consumer protection association, Verbraucherzentrale NRW, which took legal action in 2015 seeking an injunction against Fashion ID’s use of the plug-in, claiming it breached European data protection law.

Like ’em or loathe ’em, Facebook’s ‘Like’ buttons are an impossible-to-miss component of the mainstream web. Though most Internet users are likely unaware that the social plug-ins are used by Facebook to track what other websites they’re visiting for ad-targeting purposes.

Last year the company told the UK parliament that between April 9 and April 16 the button had appeared on 8.4M websites, while its Share button social plug-in appeared on 931K sites. (Facebook also admitted to 2.2M instances of another tracking tool it uses to harvest non-Facebook browsing activity — called a Facebook Pixel — being invisibly embedded on third party websites.)

The Fashion ID case predates the introduction of the EU’s updated privacy framework, GDPR, which further toughens the rules around obtaining consent — meaning it must be purpose specific, informed and freely given.

Today’s CJEU decision also follows another ruling a year ago, in a case related to Facebook fan pages, when the court took a broad view of privacy responsibilities around platforms — saying both fan page administrators and host platforms could be data controllers. Though it also said joint controllership does not necessarily imply equal responsibility for each party.

In the latest decision the CJEU has sought to draw some limits on the scope of joint responsibility, finding that a website where the Facebook Like button is embedded cannot be considered a data controller for any subsequent processing, i.e. after the data has been transmitted to Facebook Ireland (the data controller for Facebook’s European users).

The joint responsibility specifically covers the collection and transmission of Facebook Like data to Facebook Ireland.

Of that subsequent processing, the court writes in a press release announcing the decision: “It seems, at the outset, impossible that Fashion ID determines the purposes and means of those operations.”

“By contrast, Fashion ID can be considered to be a controller jointly with Facebook Ireland in respect of the operations involving the collection and disclosure by transmission to Facebook Ireland of the data at issue, since it can be concluded (subject to the investigations that it is for the Oberlandesgericht Düsseldorf [German regional court] to carry out) that Fashion ID and Facebook Ireland determine jointly the means and purposes of those operations.”

Responding to the judgement in a statement attributed to its associate general counsel, Jack Gilbert, Facebook told us:

Website plugins are common and important features of the modern Internet. We welcome the clarity that today’s decision brings to both websites and providers of plugins and similar tools. We are carefully reviewing the court’s decision and will work closely with our partners to ensure they can continue to benefit from our social plugins and other business tools in full compliance with the law.

The company said it may make changes to the Like button to ensure websites that use it are able to comply with Europe’s GDPR.

Though it’s not clear what specific changes these could be — for example, whether Facebook will change the code of its social plug-ins to ensure no data is transferred at the point a page loads. (We’ve asked Facebook and will update this report with any response.)

Facebook also points out that other tech giants, such as Twitter and LinkedIn, deploy similar social plug-ins — suggesting the CJEU ruling will apply to other social platforms, as well as to thousands of websites across the EU where these sorts of plug-ins crop up.

“Sites with the button should make sure that they are sufficiently transparent to site visitors, and must make sure that they have a lawful basis for the transfer of the user’s personal data (e.g. if just the user’s IP address and other data stored on the user’s device by Facebook cookies) to Facebook,” Neil Brown, a telecoms, tech and internet lawyer at U.K. law firm Decoded Legal, told TechCrunch.

“If their lawful basis is consent, then they’ll need to get consent before deploying the button for it to be valid — otherwise, they’ll have done the transfer before the visitor has consented.

“If relying on legitimate interests — which might scrape by — then they’ll need to have done a legitimate interests assessment, and kept it on file (against the (admittedly unlikely) day that a regulator asks to see it), and they’ll need to have a mechanism by which a site visitor can object to the transfer.”

“Basically, if organisations are taking on board the recent guidance from the ICO and CNIL on cookie compliance, wrapping in Facebook ‘Like’ and other similar things in with that work would be sensible,” Brown added.

Also commenting on the judgement, Michael Veale, a UK-based researcher in tech and privacy law/policy, said it raises questions about how Facebook will comply with Europe’s data protection framework for any further processing it carries out of the social plug-in data.

“The whole judgement to me leaves open the question ‘on what grounds can Facebook justify further processing of data from their web tracking code?'” he told us. “If they have to provide transparency for this further processing, which would take them out of joint controllership into sole controllership, to whom and when is it provided?

“If they have to demonstrate they would win a legitimate interests test, how will that be affected by the difficulty in delivering that transparency to data subjects?

“Can Facebook do a backflip and say that for users of their service, their terms of service on their platform justifies the further use of data for which individuals must have separately been made aware of by the website where it was collected?

“The question then quite clearly boils down to non-users, or to users who are effectively non-users to Facebook through effective use of technologies such as Mozilla’s browser tab isolation.”

How far a tracking pixel could be considered a ‘similar device’ to a cookie is another question to consider, he said.

The tracking of non-Facebook users via social plug-ins certainly continues to be a hot-button legal issue for Facebook in Europe — where the company has twice lost in court to Belgium’s privacy watchdog on this issue. (Facebook has continued to appeal.)

Facebook founder Mark Zuckerberg also faced questions about tracking non-users last year from MEPs in the European Parliament — who pressed him on whether Facebook uses data on non-users for any purpose beyond the security purpose of “keeping bad content out”, which he claimed requires Facebook to track everyone on the mainstream Internet.

MEPs also wanted to know how non-users can stop their data being transferred to Facebook. Zuckerberg gave no answer, likely because there’s currently no way for non-users to stop their data being sucked up by Facebook’s servers — short of staying off the mainstream Internet.

Facebook ignored staff warnings about “sketchy” Cambridge Analytica in September 2015

Facebook employees tried to alert the company about the activity of Cambridge Analytica as early as September 2015, per the SEC’s complaint against the company which was published yesterday.

This chimes with a court filing that emerged earlier this year — which also suggested Facebook knew of concerns about the controversial data company earlier than it had publicly said, including in repeat testimony to a UK parliamentary committee last year.

Facebook only finally kicked the controversial data firm off its ad platform in March 2018 when investigative journalists had blown the lid off the story.

In a section on “red flags” raised about scandal-hit Cambridge Analytica’s potential misuse of Facebook user data, the SEC complaint reveals that Facebook already knew of concerns raised by staffers in its political advertising unit — who described CA as a “sketchy (to say the least) data modeling company that has penetrated our market deeply”.


Amid a flurry of major headlines for the company yesterday, including a $5BN FTC fine — all of which was selectively dumped on the same day media attention was focused on Mueller’s testimony before Congress — Facebook quietly disclosed it had also agreed to pay $100M to the SEC to settle a complaint over failures to properly disclose data abuse risks to its investors.

This tidbit was slipped out towards the end of a lengthy blog post by Facebook general counsel Colin Stretch which focused on responding to the FTC order with promises to turn over a new leaf on privacy.

CEO Mark Zuckerberg also made no mention of the SEC settlement in his own Facebook note about what he dubbed a “historic fine”.

As my TC colleague Devin Coldewey wrote yesterday, the FTC settlement amounts to a ‘get out of jail’ card for the company’s senior execs by granting them blanket immunity from known and unknown past data crimes.

‘Historic fine’ is therefore quite the spin to put on being rich enough and powerful enough to own the rule of law.

And by nesting its disclosure of the SEC settlement inside effusive privacy-washing discussion of the FTC’s “historic” action, Facebook looks to be hoping to deflect attention from some really awkward details in its narrative about the Cambridge Analytica scandal which highlight ongoing inconsistencies and contradictions, to put it politely.

The SEC complaint underlines that Facebook staff were aware of the dubious activity of Cambridge Analytica on its platform prior to the December 2015 Guardian story — which CEO Mark Zuckerberg has repeatedly claimed was when he personally became aware of the problem.

Asked about the details in the SEC document, a Facebook spokesman pointed us to comments it made earlier this year when court filings emerged that also suggested staff knew in September 2015. In this statement, from March, it says “employees heard speculation that Cambridge Analytica was scraping data, something that is unfortunately common for any internet service”, and further claims it was “not aware of the transfer of data from Kogan/GSR to Cambridge Analytica until December 2015”, adding: “When Facebook learned about Kogan’s breach of Facebook’s data use policies, we took action.”

Facebook staffers were also aware of concerns about Cambridge Analytica’s “sketchy” business when, around November 2015, Facebook employed psychology researcher Joseph Chancellor — aka the co-founder of app developer GSR — which, as Facebook has sought to paint it, is the ‘rogue’ developer that breached its platform policies by selling Facebook user data to Cambridge Analytica.

This means Facebook employed a man who had breached its own platform policies by selling user data to a data company which Facebook’s own staff had urged, months prior, be investigated for policy-violating scraping of Facebook data, per the SEC complaint.

Fast forward to March 2018 and press reports revealing the scale and intent of the Cambridge Analytica data heist blew up into a global data scandal for Facebook, wiping billions off its share price.

The really awkward question that Facebook has continued not to answer — and which every lawmaker, journalist and investor should therefore be putting to the company at every available opportunity — is why it employed GSR co-founder Chancellor in the first place.

Chancellor has never been made available by Facebook to the media for questions. He also quietly left Facebook last fall — we must assume with a generous exit package in exchange for his continued silence. (Assume because neither Facebook nor Chancellor have explained how he came to be hired.)

At the time of his departure, Facebook also made no comment on the reasons for Chancellor leaving — beyond confirming he had left.

Facebook has never given a straight answer on why it hired Chancellor. See, for example, its written response to a Senate Commerce Committee’s question — which is pure, textbook misdirection, responding with irrelevant details that do not explain how Facebook came to identify him for a role at the company in the first place (“Mr. Chancellor is a quantitative researcher on the User Experience Research team at Facebook, whose work focuses on aspects of virtual reality. We are investigating Mr. Chancellor’s prior work with Kogan through counsel”).


What was the outcome of Facebook’s internal investigation of Chancellor’s prior work? We don’t know, because once again Facebook isn’t saying anything.

More importantly, the company has continued to stonewall on why it hired someone intimately linked to a massive political data scandal that’s now just landed it an “historic fine”.

We asked Facebook to explain why it hired Chancellor — given what the SEC complaint shows it knew of Cambridge Analytica’s “sketchy” dealings — and got the same non-answer in response: “Mr Chancellor was a quantitative researcher on the User Experience Research team at Facebook, whose work focused on aspects of virtual reality. He is no longer employed by Facebook.”

We’ve asked Facebook to clarify why Chancellor was hired despite internal staff concerns about Cambridge Analytica, the company his own company was set up to sell Facebook data to; and how, of all the professionals it could have hired, Facebook came to identify Chancellor in the first place — and will update this post with any response. (A search for ‘quantitative researcher’ on LinkedIn’s platform returns more than 177,000 results for professionals using the descriptor in their profiles.)

Earlier this month a UK parliamentary committee accused the company of contradicting itself in separate testimonies on both sides of the Atlantic over knowledge of improper data access by third-party apps.

The committee grilled multiple Facebook and Cambridge Analytica employees (and/or former employees) last year as part of a wide-ranging enquiry into online disinformation and the use of social media data for political campaigning — calling in its final report for Facebook to face privacy and antitrust probes.

A spokeswoman for the DCMS committee told us it will be writing to Facebook next week to ask for further clarification of testimonies given last year in light of the timeline contained in the SEC complaint.

Under questioning in Congress last year, Facebook founder Zuckerberg also personally told congressman Mike Doyle that Facebook had first learned about Cambridge Analytica using Facebook data as a result of the December 2015 Guardian article.

Yet, as the SEC complaint underlines, Facebook staff had raised concerns months earlier. So, er, awkward.

There are more awkward details in the SEC complaint that Facebook seems keen to bury too — including that as part of a signed settlement agreement, GSR’s other co-founder Aleksandr Kogan told it in June 2016 that he had, in addition to transferring modelled personality profile data on 30M Facebook users to Cambridge Analytica, sold the latter “a substantial quantity of the underlying Facebook data” on the same set of individuals he’d profiled.

This US Facebook user data included personal information such as names, location, birthdays, gender and a sub-set of page likes.

Raw Facebook data being grabbed and sold does add some rather colorful shading around the standard Facebook line — i.e. that its business is nothing to do with selling user data. Colorful because while Facebook itself might not sell user data — it just rents access to your data and thereby sells your attention — the company has built a platform that others have repurposed as a marketplace for exactly that, and done so right under its nose…


The SEC complaint also reveals that more than 30 Facebook employees across different corporate groups learned of Kogan’s platform policy violations — including senior managers in its comms, legal, ops, policy and privacy divisions.

The UK’s data watchdog previously identified three senior managers at Facebook who it said were involved in email exchanges prior to December 2015 regarding the GSR/Cambridge Analytica breach of Facebook user data, though it has not made public the names of the staff in question.

The SEC complaint suggests a far larger number of Facebook staffers knew of concerns about Cambridge Analytica earlier than the company narrative has implied up to now. Although the exact timeline of when all the staffers knew is not clear from the document — with the discussed period being September 2015 to April 2017.

Despite 30+ Facebook employees being aware of GSR’s policy violation and misuse of Facebook data — by April 2017 at the latest — the company’s leaders had put no reporting structures in place for them to be able to pass the information to regulators.

“Facebook had no specific policies or procedures in place to assess or analyze this information for the purposes of making accurate disclosures in Facebook’s periodic filings,” the SEC notes.

The complaint goes on to document various additional “red flags” it says were raised to Facebook throughout 2016 suggesting Cambridge Analytica was misusing user data — including various press reports on the company’s use of personality profiles to target ads; and staff in Facebook’s own political ads unit being aware that the company was naming Facebook and Instagram ad audiences by personality trait to certain clients, including advocacy groups, a commercial enterprise and a political action committee.

“Despite Facebook’s suspicions about Cambridge and the red flags raised after the Guardian article, Facebook did not consider how this information should have informed the risk disclosures in its periodic filings about the possible misuse of user data,” the SEC adds.

Facebook’s regulation dodge: Let us, or China will

Facebook is leaning on fears of China exporting its authoritarian social values to counter arguments that it should be broken up or slowed down. Its top executives have each claimed that if the U.S. limits its size, blocks its acquisitions, or bans its cryptocurrency, Chinese companies, absent these restrictions, will win abroad, bringing more power and data to their government. CEO Mark Zuckerberg, COO Sheryl Sandberg, and VP of communications Nick Clegg have all expressed this position.

The latest incarnation of this talking point came during yesterday’s and today’s congressional hearings over Libra, the Facebook-spearheaded digital currency it hopes to launch in the first half of 2020. David Marcus, the head of Facebook’s blockchain subsidiary Calibra, wrote in his prepared remarks to the House Financial Services Committee today that:

“I believe that if America does not lead innovation in the digital currency and payments area, others will. If we fail to act, we could soon see a digital currency controlled by others whose values are dramatically different.”


WASHINGTON, DC – JULY 16: Head of Facebook’s Calibra David Marcus testifies during a hearing before Senate Banking, Housing and Urban Affairs Committee July 16, 2019 on Capitol Hill in Washington, DC. The committee held the hearing on “Examining Facebook’s Proposed Digital Currency and Data Privacy Considerations.” (Photo by Alex Wong/Getty Images)

Marcus also told the Senate Banking Subcommittee yesterday that “I believe if we stay put we’re going to be in a situation in 10, 15 years where half the world is on a blockchain technology that is out of reach of our national-security apparatus”.

This argument is designed to counter House-drafted “Keep Big Tech Out Of Finance” legislation that Reuters reports would declare that companies like Facebook that earn over $25 billion in annual revenue “may not establish, maintain, or operate a digital asset . . .  that is intended to be widely used as medium of exchange, unit of account, store of value, or any other similar function.”

The message Facebook is trying to deliver is that cryptocurrencies are inevitable. Blocking Libra would just open the door to even less scrupulous actors controlling the technology. Facebook’s position here isn’t limited to cryptocurrencies, though.

The concept crystallized exactly a year ago when Zuckerberg said “I think you have this question from a policy perspective, which is, do we want American companies to be exporting across the world?” in an interview with Recode’s Kara Swisher.

“We grew up here, I think we share a lot of values that I think people hold very dear here, and I think it’s generally very good that we’re doing this, both for security reasons and from a values perspective. Because I think that the alternative, frankly, is going to be the Chinese companies. If we adopt a stance which is that, ‘Okay, we’re gonna, as a country, decide that we wanna clip the wings of these companies and make it so that it’s harder for them to operate in different places, where they have to be smaller,’ then there are plenty of other companies out that are willing and able to take the place of the work that we’re doing.”

When asked if he specifically meant Chinese companies, Zuckerberg doubled down, saying:

“Yeah. And they do not share the values that we have. I think you can bet that if the government hears word that it’s election interference or terrorism, I don’t think Chinese companies are going to wanna cooperate as much and try to aid the national interest there.”

WASHINGTON, DC – APRIL 10: Facebook co-founder, Chairman and CEO Mark Zuckerberg testifies before a combined Senate Judiciary and Commerce committee hearing in the Hart Senate Office Building on Capitol Hill April 10, 2018 in Washington, DC. Zuckerberg, 33, was called to testify after it was reported that 87 million Facebook users had their personal information harvested by Cambridge Analytica, a British political consulting firm linked to the Trump campaign. (Photo by Chip Somodevilla/Getty Images)

This April, Zuckerberg went deeper when he described how Facebook would refuse to comply with data localization laws in countries with poor track records on human rights. The CEO explained the risk of data being stored in other countries, which is precisely what might happen if regulators hamper Facebook and innovation happens elsewhere. Zuckerberg told philosopher Yuval Harari that:

“When I look towards the future, one of the things that I just get very worried about is the values that I just laid out [for the internet and data] are not values that all countries share. And when you get into some of the more authoritarian countries and their data policies, they’re very different from the kind of regulatory frameworks that across Europe and across a lot of other places, people are talking about or put into place . . . And the most likely alternative to each country adopting something that encodes the freedoms and rights of something like GDPR, in my mind, is the authoritarian model, which is currently being spread, which says every company needs to store everyone’s data locally in data centers and then, if I’m a government, I can send my military there and get access to whatever data I want and take that for surveillance or military.

I just think that that’s a really bad future. And that’s not the direction, as someone who’s building one of these internet services, or just as a citizen of the world, I want to see the world going. If a government can get access to your data, then it can identify who you are and go lock you up and hurt you and your family and cause real physical harm in ways that are just really deep.”


Facebook’s newly hired head of communications Nick Clegg told reporters back in January that:

“These are of course legitimate questions, but we don’t hear so much about China, which combines astonishing ingenuity with the ability to process data on a vast scale without the legal and regulatory constraints on privacy and data protection that we require on both sides of the Atlantic . . .  [and this data could be] put to more sinister surveillance ends, as we’ve seen with the Chinese government’s controversial social credit system.”

In response to Facebook co-founder Chris Hughes’ call that Facebook should be broken up, Clegg wrote in May that “Facebook shouldn’t be broken up — but it does need to be held to account. Anyone worried about the challenges we face in an online world should look at getting the rules of the internet right, not dismantling successful American companies.”

He hammered home the alternative the next month during a speech in Berlin:

“If we in Europe and America don’t turn off the white noise and begin to work together, we will sleepwalk into a new era where the internet is no longer a universal space but a series of silos where different countries set their own rules and authoritarian regimes soak up their citizens’ data while restricting their freedom . . . If the West doesn’t engage with this question quickly and emphatically, it may be that it isn’t ours to answer. The common rules created in our hemisphere can become the example the rest of the world follows.”

COO Sheryl Sandberg made the point most directly in an interview with CNBC in May:

“You could break us up, you could break other tech companies up, but you actually don’t address the underlying issues people are concerned about . . . While people are concerned with the size and power of tech companies, there’s also a concern in the United States about the size and power of Chinese tech companies and the … realization that those companies are not going to be broken up.”

WASHINGTON, DC – SEPTEMBER 5: Facebook chief operating officer Sheryl Sandberg testifies during a Senate Intelligence Committee hearing concerning foreign influence operations’ use of social media platforms, on Capitol Hill, September 5, 2018 in Washington, DC. Twitter CEO Jack Dorsey and Facebook chief operating officer Sheryl Sandberg faced questions about how foreign operatives use their platforms in attempts to influence and manipulate public opinion. (Photo by Drew Angerer/Getty Images)

Scared Tactics

Indeed, China does not share the United States’ values on individual freedoms and privacy. And yes, breaking up Facebook could weaken its products like WhatsApp, providing more opportunities for apps like Chinese tech giant Tencent’s WeChat to proliferate.

But letting Facebook off the hook won’t solve the problems China’s influence poses to an open and just internet. Framing the issue as ‘strong regulation lets China win’ creates a false dichotomy. There are more constructive approaches if Zuckerberg seriously wants to work with the government on exporting freedom via the web. And the distrust Facebook has accrued through the mistakes it’s made in the absence of proper regulation arguably does plenty to hurt the perception of how American ideals are spread through its tech companies.

Breaking up Facebook may not be the answer, especially if it’s done in retaliation for its wrong-doings instead of as a coherent way to prevent more in the future. To that end, a better approach might be stopping future acquisitions of large or rapidly growing social networks, forcing it to offer true data portability so existing users have the freedom to switch to competitors, applying proper oversight of its privacy policies, and requiring a slow rollout of Libra with testing in each phase to ensure it doesn’t screw consumers, enable terrorists, or jeopardize the world economy.

Resorting to scare tactics shows that it’s Facebook that’s scared. Years of its growth-over-safety strategy might finally be catching up with it. The $5 billion FTC fine is a slap on the wrist for a company that profits more than that per quarter, but a break-up would do real damage. Instead of fear-mongering, Facebook would be better served by working with regulators in good faith while focusing more on preempting abuse. Perhaps it’s politically savvy to invoke the threat of China to stoke the worries of government officials, and it might even be effective. That doesn’t make it right.

How Roblox avoided the gaming graveyard and grew into a $2.5B company

There are successful companies that grow fast and garner tons of press. Then there’s Roblox, a company which took at least a decade to hit its stride and has, relative to its current level of success, barely gotten any recognition or attention.

Why has Roblox’s story gone mostly untold? One reason is that it emerged from a whole generation of gaming portals and platforms. Some, like King.com, got lucky or pivoted their business. Others by and large failed.

Once companies like Facebook, Apple and Google arrived on the gaming scene, building your own platform just looked like a bad idea — and thus not worth talking about. Added to that, founder and CEO Dave Baszucki seems uninterested in press.

But overall, the problem has been that Roblox just seemed like an insignificant story for many, many years. The company had millions of users, sure. So did any number of popular games. In its early days, Roblox even looked like Minecraft, a game that was released long after Roblox went live, but that grew much, much faster.

Yet here we are today: Roblox now claims that half of all American children aged 9-12 are on its platform. It has jumped to 90 million monthly unique users and is poised to go international, potentially multiplying that number. And it’s unique. Essentially all other distribution services offering games through a portal have eventually fizzled, aside from some distant cousins like Steam.

This is the story of how Roblox not only survived, but built a thriving platform.

Seeds of an idea


(Photo by Steve Jennings/Getty Images for TechCrunch)

Before Roblox, there was Knowledge Revolution, a company that made teaching software. While designed to allow students to simulate physics experiments, perhaps predictably, they also treated it like a game.

“The fun seemed to be in building your own experiment,” says Baszucki. “When people were playing it and we went into schools and labs, they were all making car crashes and buildings fall down, making really funny stuff.” Provided with a sandbox, kids didn’t just make dry experiments about mass or velocity — they made games, or experiences they could show off to friends for a laugh.

Knowledge Revolution was founded in 1989 by Dave Baszucki and his brother Greg (who did not go on to co-found Roblox, but now sits on its board). Nearly a decade later, it was acquired for $20 million by MSC Software, which made professional simulation tools. Dave continued there for another four years before leaving to become an angel investor.

Baszucki put money into Friendster, a company that pre-dated Facebook and MySpace in the social networking category. That investment seeded another piece of the idea for Roblox. Taken together, the legacies of Knowledge Revolution and Friendster supplied the two key components undergirding Roblox: a physics sandbox with strong creation tools, and a social graph.

Baszucki himself is the third piece of the puzzle. Part of an older generation of entrepreneurs (call it the Steve Jobs generation), Baszucki hews closer to the archetype of Mr. Rogers than of Jobs himself: unfailingly polite and enthusiastic, never claiming superior insight, and preferring to pass credit for his accomplishments on to others. In conversation, he shows interests both central and tangential to Roblox, like virtual environments, games, education, digital identity and the future of tech. Somewhere in this heady mix, the idea of Roblox came about.


Sam Lessin and Andrew Kortina on their voice assistant’s workplace pivot

Sam Lessin, a former product management executive at Facebook and old friend to Mark Zuckerberg, incorporated his latest startup under the name “Fin Exploration Company.”

Why? Well, because he wanted to explore. The company — co-founded alongside Andrew Kortina, best known for launching the successful payments app Venmo — was conceived as a consumer voice assistant in 2015 after the two entrepreneurs realized the impact 24/7 access to a virtual assistant would have on their digital to-do lists.

The thing is, developing an AI assistant capable of booking flights, arranging trips, teaching users how to play poker, identifying places to purchase specific items for a birthday party and answering wide-ranging zany questions like “can you look up a place where I can milk a goat?” requires a whole lot more human power than one might think. Capital-intensive and hard-to-scale, an app for “instantly offloading” chores wasn’t the best business. Neither Lessin nor Kortina will admit to failure, but Fin‘s excursion into B2B enterprise software eight months ago suggests the assistant technology wasn’t a billion-dollar idea.

Staying true to its name, the Fin Exploration Company is exploring again.

Adopting a ratings system for social media like the ones used for film and TV won’t work

Internet platforms like Google, Facebook, and Twitter are under incredible pressure to reduce the proliferation of illegal and abhorrent content on their services.

Interestingly, Facebook’s Mark Zuckerberg recently called for the establishment of “third-party bodies to set standards governing the distribution of harmful content and to measure companies against those standards.” In a follow-up conversation with Axios, Kevin Martin of Facebook “compared the proposed standard-setting body to the Motion Picture Association of America’s system for rating movies.”

The ratings group, whose official name is the Classification and Rating Administration (CARA), was established in 1968 to stave off government censorship by educating parents about the contents of films. It has been in place ever since – and as longtime filmmakers, we’ve interacted with the MPAA’s ratings system hundreds of times – working closely with them to maintain our filmmakers’ creative vision, while, at the same time, keeping parents informed so that they can decide if those movies are appropriate for their children.  

CARA is not a perfect system. Filmmakers do not always agree with the ratings given to their films, but the board strives to be transparent as to why each film receives the rating it does. The system allows filmmakers to determine if they want to make certain cuts in order to attract a wider audience. Additionally, there are occasions where parents may not agree with the ratings given to certain films based on their content. CARA strives to consistently strike the delicate balance between protecting a creative vision and informing people and families about the contents of a film.

CARA’s effectiveness is reflected in the fact that other creative industries, including television, video games, and music, have also adopted their own voluntary ratings systems.

While the MPAA’s ratings system works very well for pre-release review of content from a professionally-produced and curated industry, including the MPAA member companies and independent distributors, we do not believe that the MPAA model can work for dominant internet platforms like Google, Facebook, and Twitter that rely primarily on post hoc review of user-generated content (UGC).

Image: Bryce Durbin / TechCrunch

 Here’s why: CARA is staffed by parents whose judgment is informed by their experiences raising families – and, most importantly, they rate most movies before they appear in theaters. Once rated by CARA, a movie’s rating will carry over to subsequent formats, such as DVD, cable, broadcast, or online streaming, assuming no other edits are made.

By contrast, large internet platforms like Facebook and Google’s YouTube primarily rely on user-generated content (UGC), which becomes available almost instantaneously to each platform’s billions of users with no prior review. UGC platforms generally do not pre-screen content – instead they typically rely on users and content moderators, sometimes complemented by AI tools, to flag potentially problematic content after it is posted online.

The numbers are also revealing. CARA rates about 600-900 feature films each year, which translates to approximately 1,500 hours of content annually. That’s the equivalent of the amount of new content made available on YouTube every three minutes. Each day, uploads to YouTube total about 720,000 hours – that is equivalent to the amount of content CARA would review in 480 years!
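Those equivalences follow directly from the two figures cited above (roughly 1,500 hours reviewed per year versus roughly 720,000 hours uploaded per day). A quick sketch of the arithmetic, using only those cited numbers:

```typescript
// Sanity-checking the scale comparison using the figures cited above.
const caraHoursPerYear = 1_500;     // ~600-900 films/year ≈ 1,500 hours
const youtubeHoursPerDay = 720_000; // cited daily upload volume

// YouTube receives 720,000 / (24 * 60) = 500 hours per minute, so CARA's
// entire annual workload arrives roughly every three minutes:
const minutesPerCaraYear = caraHoursPerYear / (youtubeHoursPerDay / (24 * 60)); // 3

// And reviewing a single day of uploads at CARA's pace would take:
const yearsPerYoutubeDay = youtubeHoursPerDay / caraHoursPerYear; // 480

console.log(minutesPerCaraYear, yearsPerYoutubeDay);
```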

 Another key distinction: premium video companies are legally accountable for all the content they make available, and it is not uncommon for them to have to defend themselves against claims based on the content of material they disseminate.

By contrast, as CreativeFuture said in an April 2018 letter to Congress: “the failure of Facebook and others to take responsibility [for their content] is rooted in decades-old policies, including legal immunities and safe harbors, that actually absolve internet platforms of accountability [for the content they host.]”

In short, internet platforms whose offerings consist mostly of unscreened user-generated content are very different businesses from media outlets that deliver professionally-produced, heavily-vetted, and curated content for which they are legally accountable.

Given these realities, the creative content industries’ approach to self-regulation does not provide a useful model for UGC-reliant platforms, and it would be a mistake to describe any post hoc review process as being “like MPAA’s ratings system.” It can never play that role.

This doesn’t mean there are not areas where we can collaborate. Facebook and Google could work with us to address rampant piracy. Interestingly, the challenge of controlling illegal and abhorrent content on internet platforms is very similar to the challenge of controlling piracy on those platforms. In both cases, bad things happen – the platforms’ current review systems are too slow to stop them, and harm occurs before mitigation efforts are triggered. 

Also, as CreativeFuture has previously said, “unlike the complicated work of actually moderating people’s ‘harmful’ [content], this is cut and dried – it’s against the law. These companies could work with creatives like never before, fostering a new, global community of advocates who could speak to their good will.”

Be that as it may, as Congress and the current Administration continue to consider ways to address online harms, it is important that those discussions be informed by an understanding of the dramatic differences between UGC-reliant internet platforms and creative content industries. A content-reviewing body like the MPAA’s CARA is likely a non-starter for the reasons mentioned above – and policymakers should not be distracted from getting to work on meaningful solutions.