Facebook has a new job posting calling for chip designers

Facebook has posted a job opening looking for an expert in ASIC and FPGA design, two approaches to custom silicon that companies can gear toward specific use cases — particularly machine learning and artificial intelligence.

There’s been a lot of speculation in the valley as to what Facebook’s interpretation of custom silicon might be, especially as it looks to optimize its machine learning tools — something that CEO Mark Zuckerberg referred to as a potential solution for identifying misinformation on Facebook using AI. Rumors about Facebook’s customized hardware vary depending on whom you talk to, but generally center on operating on the massive graph of personal data Facebook possesses. Most in the industry speculate that the hardware is being optimized for Caffe2, an AI framework deployed at Facebook, which would help it tackle those kinds of complex problems.

The FPGA is the more flexible and modular of the two designs, and is being championed by Intel as a way to adapt to a changing machine learning-driven landscape. The downside commonly cited for FPGAs is that they are niche hardware that is complex to calibrate and modify, as well as expensive, making them less of a catch-all solution for machine learning projects. An ASIC, similarly, is a customized piece of silicon that a company can gear toward something specific, like mining cryptocurrency.

Facebook’s director of AI research tweeted about the job posting this morning, noting that he previously worked in chip design:

While the whispers grow louder and louder about Facebook’s potential hardware efforts, this does seem to serve as at least another partial data point that the company is looking to dive deep into custom hardware to deal with its AI problems. That would mostly exist on the server side, though Facebook is looking into other devices like a smart speaker. Given the immense amount of data Facebook has, it would make sense that the company would look into customized hardware rather than use off-the-shelf components like those from Nvidia.

(The wildest rumor we’ve heard about Facebook’s approach is that it’s a diurnal system, flipping between machine training and inference depending on the time of day and whether people are, well, asleep in that region.)

Most of the other large players have found themselves looking into their own customized hardware. Google has its TPU for its own operations, while Amazon is also reportedly working on chips for both training and inference. Apple, too, is reportedly working on its own silicon, which could potentially rip Intel out of its line of computers. Microsoft is also diving into FPGA as a potential approach for machine learning problems.

Still, looking into ASIC and FPGA does seem to be just that: dipping toes into the water. Nvidia has a lot of control over the AI space with its GPU technology, which it can optimize for popular AI frameworks like TensorFlow. And a large number of very well-funded startups are exploring customized AI hardware, including Cerebras Systems, SambaNova Systems, Mythic and Graphcore (and that isn’t even getting into the large amount of activity coming out of China). So there are, to be sure, a lot of different interpretations as to what this looks like.

One significant problem Facebook may face is that this job opening may just sit open in perpetuity. Another common criticism of FPGA as a solution is that it is hard to find developers who specialize in it. And while these kinds of problems are becoming much more interesting, it’s not clear whether this is an experiment or Facebook going all-in on custom hardware for its operations.

But nonetheless, this seems like more confirmation of Facebook’s custom hardware ambitions, and another piece of validation that Facebook’s data set is becoming so increasingly large that if it hopes to tackle complex AI problems like misinformation, it’s going to have to figure out how to create some kind of specialized hardware to actually deal with it.

A representative from Facebook did not immediately return a request for comment.

Can data science save social media?

The unfettered internet is too often used for malicious purposes and is frequently woefully inaccurate. Social media — especially Facebook — has failed miserably at protecting user privacy and blocking miscreants from sowing discord.

That’s why CEO Mark Zuckerberg was just forced to testify about user privacy before both houses of Congress. And now governmental regulation of Facebook and other social media appears to be a fait accompli.

At this key juncture, the crucial question is whether regulation — in concert with Facebook’s promises to aggressively mitigate its weaknesses — can correct the privacy abuses while still fulfilling Facebook’s goal of giving people the power to build transparent communities and bring the world closer together.

The answer is maybe.

What has not been said is that Facebook must embrace data science methodologies initially created in the bowels of the federal government to help protect its two billion users. Simultaneously, Facebook must still enable advertisers — its sole source of revenue — to get the user data required to justify their expenditures.

Specifically, Facebook must promulgate and embrace what is known in high-level security circles as homomorphic encryption (HE), often considered the “Holy Grail” of cryptography, and data provenance (DP). HE would enable Facebook, for example, to generate aggregated reports about its user psychographic profiles so that advertisers could still accurately target groups of prospective customers without knowing their actual identities.

Meanwhile, data provenance — the process of tracing and recording true identities and the origins of data and its movement between databases — could unearth the true identities of Russian perpetrators and other malefactors, or at least flag data of unknown provenance, adding much needed transparency in cyberspace.

Both methodologies are extraordinarily complex. IBM and Microsoft, in addition to the National Security Agency, have been working on HE for years but the technology has suffered from significant performance challenges. Progress is being made, however. IBM, for example, has been granted a patent on a particular HE method – a strong hint it’s seeking a practical solution – and last month proudly announced that its rewritten HE encryption library now works up to 75 times faster. Maryland-based ENVEIL, a startup staffed by the former NSA HE team, has broken the performance barriers required to produce a commercially viable version of HE, benchmarking millions of times faster than IBM in tested use cases.

How Homomorphic Encryption Would Help Facebook

HE is a technique used to operate on and draw useful conclusions from encrypted data without decrypting it, simultaneously protecting the source of the information. It is useful to Facebook because its massive inventory of personally identifiable information is the foundation of the economics underlying its business model. The more comprehensive the datasets about individuals, the more precisely advertising can be targeted.

HE could keep Facebook information safe from hackers and inappropriate disclosure, yet still extract the essence of what the data tells advertisers. It works by converting data into encrypted strings of numbers, doing math on those strings, and then decrypting the results to get the same answer it would if the data weren’t encrypted at all.
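To make that mechanics concrete, here is a minimal sketch of the Paillier cryptosystem, one well-known additively homomorphic scheme. The article doesn’t say which scheme Facebook or the vendors it mentions would use, and the toy primes below are for illustration only (real deployments use moduli thousands of bits long):

```python
import math
import random

def keygen(p=2357, q=2551):
    """Generate a toy Paillier keypair. p and q must be primes."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                      # standard simplification for g
    mu = pow(lam, -1, n)           # modular inverse of lambda mod n
    return (n, g), (lam, mu, n)

def encrypt(pub, m):
    """Encrypt integer m (0 <= m < n) under the public key."""
    n, g = pub
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:    # r must be coprime to n
            break
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c):
    lam, mu, n = priv
    x = pow(c, lam, n * n)
    return (x - 1) // n * mu % n

def add_encrypted(pub, c1, c2):
    """Multiplying ciphertexts adds the underlying plaintexts."""
    n, _ = pub
    return (c1 * c2) % (n * n)
```

The key property: `decrypt(priv, add_encrypted(pub, encrypt(pub, 5), encrypt(pub, 7)))` yields `12`, even though the party doing the addition never sees a 5 or a 7 — which is how an ad platform could, in principle, compute aggregate counts without reading individual records.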

A particularly promising sign for HE emerged last year, when Google revealed a new marketing measurement tool that relies on this technology to allow advertisers to see whether their online ads result in in-store purchases.

Unearthing this information requires analyzing datasets belonging to separate organizations, each of which has pledged to protect the privacy and personal information of its data subjects. HE skirts this constraint by generating aggregated, non-specific reports about the comparisons between these datasets.

In pilot tests, HE enabled Google to successfully analyze encrypted data about who clicked on an advertisement in combination with another encrypted, multi-company dataset of credit card purchase records. With this data in hand, Google was able to provide reports to advertisers summarizing the relationship between the two databases to conclude, for example, that five percent of the people who clicked on an ad wound up purchasing in a store.

Data Provenance

Data provenance has a markedly different core principle. It’s based on the fact that digital information is atomized into 1s and 0s with no intrinsic truth. The digits exist only to disseminate information, whether accurate or wholly fabricated. A well-crafted lie can easily be indistinguishable from the truth and distributed across the internet. What counts is the source of these 1s and 0s. In short, is it legitimate? What is the history of the 1s and 0s?

The art market, as an example, deploys DP to combat fakes and forgeries of the world’s greatest paintings, drawings and sculptures. It uses DP techniques to create a verifiable chain of custody for each piece of artwork, preserving the integrity of the market.

Much the same thing can be done in the online world. For example, a Facebook post referencing a formal statement by a politician, with an accompanying photo, would have provenance records directly linking the post to the politician’s press release and even the specifics of the photographer’s camera. The goal — again — is ensuring that data content is legitimate.

Companies such as Wal-Mart, Kroger, British-based Tesco and Swedish-based H&M, an international clothing retailer, are using or experimenting with new technologies to provide provenance data to the marketplace.

Let’s hope that Facebook and its social media brethren begin studying HE and DP thoroughly and implement them as soon as feasible. Other strong measures — such as the upcoming implementation of the European Union’s General Data Protection Regulation, which will use a big stick to secure personally identifiable information — should essentially be cloned in the U.S. Best of all, however, would be multiple avenues to enhance user privacy and security, while hopefully preventing breaches in the first place. Nothing less than the long-term viability of social media giants is at stake.

The psychological impact of an $11 Facebook subscription

Would being asked to pay Facebook to remove ads make you appreciate their value or resent them even more? As Facebook considers offering an ad-free subscription option, there are deeper questions than how much money it could earn. Facebook has the opportunity to let us decide how we compensate it for social networking. But choice doesn’t always make people happy.

In February I explored the idea of how Facebook could disarm data privacy backlash and boost well-being by letting us pay a monthly subscription fee instead of selling our attention to advertisers. The big takeaways were:

  • Mark Zuckerberg insists that Facebook will remain free to everyone, including those who can’t afford a monthly fee, so subscriptions would be an opt-in alternative to ads rather than a replacement that forces everyone to pay
  • Partially decoupling the business model from maximizing your total time spent on Facebook could let it actually prioritize time well spent because it wouldn’t have to sacrifice ad revenue
  • The monthly subscription price would need to offset Facebook’s ad earnings. In the US & Canada Facebook earned $19.9 billion in 2017 from 239 million users. That means the average user there would have to pay $7 per month
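The $7 figure in that last bullet is straightforward arithmetic — annual ad revenue per user divided by twelve — and can be checked directly:

```python
revenue_2017 = 19.9e9   # Facebook's 2017 US & Canada ad revenue, in dollars
users = 239e6           # US & Canada users

# Average revenue per user, per month
monthly_arpu = revenue_2017 / users / 12
print(round(monthly_arpu, 2))  # 6.94, i.e. roughly $7 per month
```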

However, my analysis neglected some of the psychological fallout of telling people they only get to ditch ads if they can afford it, the loss of ubiquitous reach for advertisers, and the reality of which users would actually cough up the cash. On the other hand, it also neglected the epiphany a price tag could produce for users angry about targeted advertising.

What’s Best For Everyone

This conversation is relevant because Zuckerberg was asked twice by Congress about Facebook potentially offering subscriptions. Zuckerberg endorsed the merits of ad-supported apps, but never ruled out letting users buy a premium version. “We don’t offer an option today for people to pay to not show ads,” Zuckerberg said, later elaborating that “Overall, I think that the ads experience is going to be the best one. I think in general, people like not having to pay for a service. A lot of people can’t afford to pay for a service around the world, and this aligns with our mission the best.”

But that word ‘today’ gave a glimmer of hope that we might be able to pay in the future.

Facebook CEO and founder Mark Zuckerberg testifies during a US House Committee on Energy and Commerce hearing about Facebook on Capitol Hill in Washington, DC, April 11, 2018. (Photo: SAUL LOEB/AFP/Getty Images)

What would we be paying for beyond removing ads, though? Facebook already lets users concerned about their privacy opt out of some ad targeting, just not out of seeing ads altogether. Zuckerberg’s stumping for free internet services makes it seem unlikely that Facebook would build valuable features and reserve them for subscribers.

Spotify only lets paid users play any song they want on-demand, while ad-supported users are stuck on shuffle. LinkedIn only lets paid users message anyone they want and appear as a ‘featured applicant’ to hirers, while ad-supported users can only message their connections. Netflix only lets paid users…use it at all.

But Facebook views social networking as a human right, and would likely want to give all users any extra features it developed like News Feed filters to weed out politics or baby pics. Facebook also probably wouldn’t sell features that break privacy like how LinkedIn subscribers can see who visited their profiles. In fact, I wouldn’t bet on Facebook offering any significant premium-only features beyond removing ads. That could make it a tough sell.

Meanwhile, advertisers trying to reach every member of a demographic might not want a way for people to pay to opt-out of ads. If they’re trying to promote a new movie, a restaurant chain, or an election campaign, they’d want as strong of penetration amongst their target audience as they can get. A subscription model punches holes in the ubiquity of Facebook ads that drive businesses to the app.

Resentment Vs Appreciation

But the biggest issue is that Facebook is just really good at monetizing with ads. Despite never charging users, it earns a ton of money: $40 billion in 2017. Convincing people to pay with their wallets instead of their eyeballs may be difficult. And the ones who want to pay are probably worth much more than the average.

Let’s look at the US & Canada market, where Facebook earns the most per user: people there are wealthier, with more disposable income than people in other parts of the world, and therefore command higher ad rates. On average, US and Canada users earn Facebook $7 per month from ads. But those willing and able to pay are probably richer than the average user, so luxury businesses pay more to advertise to them, and they probably spend more time browsing Facebook than the average user, so they see more of those ads.

Brace for sticker shock, because for Facebook to offset the ad revenue of these rich hardcore users, it might have to charge more like $11 to $14 per month.

With no bonus features, that price for something they can get for free could seem way too high. Many who could afford it still wouldn’t justify it, regardless of how much time they spend on Facebook compared to other media subscriptions they shell out for. Those who truly can’t afford it might suddenly feel more resentment towards the Facebook ads they’ve been scrolling past unperturbed for years. Each one would be a reminder that they don’t have the cash to escape Facebook’s data mines.

But perhaps it’s just as likely that people would feel the exact opposite — that having to see those ads really isn’t so bad when faced with the alternative of a steep subscription price.

People often don’t see worth in what they get for free. Being confronted with a price tag could make them more cognizant of the value exchange they’re voluntarily entering. Social networking costs money to operate, and they have to pay somehow. Seeing ads keeps Facebook’s lights on, its labs full of future products, and its investors happy.

That’s why it might not matter if Facebook can only get 4 percent, or 1 percent, or 0.1 percent of users to pay. It could be worth it for Facebook to build out a subscription option to empower users with a sense of choice and provide perspective on the value they already receive for free.

For more big news about Facebook, check out our recent coverage:

How Facebook gives an asymmetric advantage to negative messaging

Few Facebook critics are as credible as Roger McNamee, the managing partner at Elevation Partners. As an early investor in Facebook, McNamee was not only a mentor to Mark Zuckerberg but also introduced him to Sheryl Sandberg.

So it’s hard to overestimate the significance of McNamee’s increasingly public criticism of Facebook over the last couple of years, particularly in the light of the growing Cambridge Analytica storm.

According to McNamee, Facebook pioneered the building of a tech company on “human emotions”. Given that the social network knows all of our “emotional hot buttons”, McNamee believes, there is “something systemic” about the way that third parties can “destabilize” our democracies and economies. McNamee saw this in 2016 with both the Brexit referendum in the UK and the American Presidential election and concluded that Facebook does, indeed, give “asymmetric advantage” to negative messages.

McNamee still believes that Facebook can be fixed. But Zuckerberg and Sandberg, he insists, both have to be “honest” about what’s happened and recognize the company’s “civic responsibility” in strengthening democracy. And tech can do its part too, McNamee believes, in acknowledging and confronting what he calls its “dark side”.

McNamee is certainly doing this. He has now teamed up with ex-Google ethicist Tristan Harris to create the Center for Humane Technology — an alliance of Silicon Valley notables dedicated to “realigning technology with humanity’s best interests.”

Here are Mark Zuckerberg’s notes from today’s hearing

Facebook’s Mark Zuckerberg pulled off a smooth appearance in a joint Senate hearing today, dodging most questions while maintaining an adequately patient vibe through five hours of varied but mostly tame questioning.

The chief executive avoided admitting that Facebook is a publisher or a monopoly, refused to commit to any meaningful legislation and respectfully addressed lawmakers over a nearly five-hour marathon testimony.

Still, he did make one rookie mistake.

Zuckerberg left his hearing notes open in front of his seat for long enough for AP photographer Andrew Harnik to snap a high-resolution shot with talking points in plain view. Twitter users and journalists scanning photos from the hearing room as they hit the wire were quick to notice, the irony of the minor privacy invasion not lost on them.

Most of the notes cover points that we heard Zuckerberg repeat during the course of the hearing, but there are a few more candid statements that didn’t come up. The notes also provide a glimpse into what lines of questioning Facebook expected. For one, they expected Congress might demand his resignation.

Below we’ve listed the subheadings on his notes in bold with any interesting bullet points pulled out. Our partial transcript retains the original emphasis from the document. Though we’ve italicized what was underlined, bold lettering is retained.

Cambridge Analytica

Compensation

Reverse lookup (scraping)

Accountability

  • Do you ever fire anyone? Yes; hold people accountable all the time; not going into specifics.
  • Resign? Founded Facebook. My decisions. I made mistakes. Big challenge, but we’ve solved problems before, going to solve this one. Already taking action.
  • No accountability for MZ? Accountable to you, to employees, to people who use FB.

Data Safety

  • I use FB every day, so does my family, invest a lot in security.

Business Model (ads)

  • Want FB to be a service that everyone can use, has to be free, can only do that with ads.
  • Let’s be clear. Facebook doesn’t sell data. You own your information. We give you controls.

???/ wellbeing

  • Time spent fell 5% Q4, pivot to MSI.

Defend Facebook

  • [If attacked: Respectfully, I reject that. Not who we are.]

Tim Cook on biz model

  • At FB, we try hard to charge you less. In fact, we’re free.
  • On data, we’re similar. When you install an app on your iPhone, you give it access to some information, just like when you login with FB.
  • Lots of stories about apps misusing Apple data, never seen Apple notifying people.
  • Important you hold everyone to the same standard.

Disturbing content

Election integrity (Russia)

Diversity

  • Silicon Valley has a problem, and Facebook is part of that problem.
  • Personally care about making progress; long way to go [3% African American, 5% Hispanics]

Competition

  • Consumer choice: consumers have lots of choices over how they spend their time.
  • Small part of ad market: advertisers have choices too – $650 billion market, we have 6%
  • Break up FB? US tech companies key asset for America; break up strengthens Chinese companies.

GDPR (Don’t say we already do what GDPR requires)

Zuckerberg tells Congress Facebook is not listening to you through your phone

Facebook CEO Mark Zuckerberg officially shot down the conspiracy theory that the social network has some way of keeping tabs on its users by tapping into the mics on people’s smartphones. During Zuckerberg’s testimony before the Senate this afternoon, Senator Gary Peters had asked the CEO if the social network is mining audio from mobile devices – something his constituents have been asking him about, he said.

Zuckerberg denied this sort of audio data collection was taking place.

The fact that so many people believe that Facebook is “listening” to their private conversations is representative of how mistrustful users have grown of the company and its data privacy practices, the Senator noted.

“I think it’s safe to say very simply that Facebook is losing the trust of an awful lot of Americans as a result of this incident,” said Peters, tying his constituents’ questions about mobile data mining to their outrage over the Cambridge Analytica scandal.

Questions about Facebook’s mobile data collection practices aren’t anything new, however.

In fact, Facebook went on record back in 2016 to state – full stop – that it does not use your phone’s microphone to inform ads or News Feed stories.

Despite this, it’s something that keeps coming up, time and again. The Wall St. Journal even ran an explainer video about the conspiracy last month. And yet none of the reporting seems to quash the rumor.

People simply refuse to believe it’s not happening. They’ll tell you of very specific times when something they swear they only uttered aloud quickly appeared in their Facebook News Feed.

Perhaps their inability to believe Facebook on the matter is more of an indication of how precise – and downright creepy – Facebook’s ad targeting capabilities have become over the years.

Peters took the opportunity to ask Zuckerberg this question straight on today, during Zuckerberg’s testimony.

“Something that I’ve been hearing a lot from folks who have been coming up to me and talking about a kind of experience they’ve had where they’re having a conversation with friends – not on the phone, just talking. And then they see ads popping up fairly quickly on their Facebook,” Peters explained. “So I’ve heard constituents fear that Facebook is mining audio from their mobile devices for the purposes of ad targeting – which I think speaks to the lack of trust that we’re seeing here.”

He then asked Zuckerberg to state if this is something Facebook did.

“Yes or no: does Facebook use audio obtained from mobile devices to enrich personal information about its users?” Peters asked.

Zuckerberg responded simply: “no.”

The CEO then added that his answer meant “no” in terms of the conspiracy theory that keeps getting passed around, but noted that the social network does allow users to record videos, which have an audio component. That was a bit of an unnecessary clarification, though, given that the question was about surreptitious recording, not something users were explicitly recording media to share.

“Hopefully that will dispel a lot of what I’ve been hearing,” Peters said, after hearing Zuckerberg’s response.

We wouldn’t be too sure.

There have been a number of lengthy explanations of the technical limitations regarding a project of this scale, which have also pointed out how easy it would be to detect this practice, if it were true. But there are still those people out there who believe things to be true because they feel true.

And at the end of the day, the fact that this conspiracy refuses to die says something about how Facebook users view the company: as a stalker that creeps on their privacy, and then can’t be believed when it tells you, “no, trust me, we don’t do that.”

Facebook demands ID verification for big Pages, ‘issue’ ad buyers

Facebook is looking to self-police by implementing parts of the proposed Honest Ads Act before the government tries to regulate it. To fight fake news and election interference, Facebook will require the admins of popular Facebook Pages and advertisers buying political or “issue” ads on “debated topics of national legislative importance” like education or abortion to verify their identity and location. Those that refuse, are found to be fraudulent or are trying to influence foreign elections will have their Pages prevented from posting to the News Feed or their ads blocked.

Meanwhile, Facebook plans to use this information to append a “Political Ad” label and “Paid for by” information to all election, politics and issue ads. Users can report any ads they think are missing the label, and Facebook will show if a Page has changed its name to thwart deception. Facebook started the verification process this week; users in the U.S. will start seeing the labels and buyer info later this spring, and Facebook will expand the effort to ads around the world in the coming months.

This verification and name change disclosure process could prevent hugely popular Facebook Pages from being built up around benign content, then sold to cheats or trolls who switch to sharing scams or misinformation.

Overall, it’s a smart start that comes way too late. As soon as Facebook started heavily promoting its ability to run influential election ads, it should have voluntarily adopted similar verification and labeling rules as traditional media. Instead, it was so focused on connecting people to politics, it disregarded how the connection could be perverted to power mass disinformation and destabilization campaigns.

“These steps by themselves won’t stop all people trying to game the system. But they will make it a lot harder for anyone to do what the Russians did during the 2016 election and use fake accounts and pages to run ads,” CEO Mark Zuckerberg wrote on Facebook. “Election interference is a problem that’s bigger than any one platform, and that’s why we support the Honest Ads Act. This will help raise the bar for all political advertising online.” You can see his full post below.

The move follows Twitter’s November announcement that it too would label political ads and show who purchased them.

Twitter’s mockup for its “Political” ad labels and “paid for by” information

Facebook also gave a timeline for releasing both its tool for viewing all ads run by Pages and its Political Ad Archive. A searchable index of all ads with the “political” label, including their images, text, target demographics and how much was spent on them, will launch in June and keep ads visible for four years after they run. Meanwhile, the View Ads tool that’s been testing in Canada will roll out globally in June so users can see any ad run by a Page, not just those targeted to them.

Facebook announced in October it would require documentation from election advertisers and label their ads, but now is applying those requirements to a much wider swath of ads that deal with big issues impacted by politics. That could protect users from disinformation and divisive content not just during elections, but any time bad actors are trying to drive wedges into society. Facebook wouldn’t reveal the threshold of followers that will trigger Pages needing verification, but confirmed it will not apply to small to medium-size businesses.

By self-regulating, Facebook may be able to take the wind out of calls for new laws that apply to online ads buyer disclosure rules on TV and other traditional media ads. Zuckerberg will testify before the U.S. Senate Judiciary and Commerce committees on April 10, as well as the House Energy and Commerce Committee on April 11. Having today’s announcement to point to could give him more protection against criticism during the hearings, though Congress will surely want to know why these safeguards weren’t in place already.

With important elections coming up in the US, Mexico, Brazil, India, Pakistan and more countries in the next year, one…

Posted by Mark Zuckerberg on Friday, April 6, 2018

For more on Facebook’s recent troubles, check out our feature stories:

Facebook retracted Zuckerberg’s messages from recipients’ inboxes

You can’t remove Facebook messages from the inboxes of people you sent them to, but Facebook did that for Mark Zuckerberg and other executives. Three sources confirm to TechCrunch that old Facebook messages they received from Zuckerberg have disappeared from their Facebook inboxes, while their own replies to him conspicuously remain. An email receipt of a Facebook message from 2010 reviewed by TechCrunch proves Zuckerberg sent people messages that no longer appear in their Facebook chat logs or in the files available from Facebook’s Download Your Information tool.

When asked by TechCrunch about the situation, Facebook claimed it was done for corporate security in this statement:

“After Sony Pictures’ emails were hacked in 2014 we made a number of changes to protect our executives’ communications. These included limiting the retention period for Mark’s messages in Messenger. We did so in full compliance with our legal obligations to preserve messages.”

However, Facebook never publicly disclosed the removal of messages from users’ inboxes, nor privately informed the recipients. That raises the question of whether this was a breach of user trust. When asked that question directly over Messenger, Zuckerberg declined to provide a statement.

Tampering With Users’ Inboxes

A Facebook spokesperson confirmed to TechCrunch that users can only delete messages from their own inboxes, and that they would still show up in the recipient’s thread. There appears to be no “retention period” for normal users’ messages, as my inbox shows messages from as early as 2005. That indicates Zuckerberg and other executives received special treatment in being able to pull back previously sent messages.

Facebook chats sent by Zuckerberg several years ago or more were missing from the inboxes of both former employees and non-employees. What’s left makes it look like the recipients were talking to themselves, as only their side of back-and-forth conversations with Zuckerberg still appears. Three sources asked to remain anonymous out of fear of angering Zuckerberg or burning bridges with the company.

None of Facebook’s terms of service appear to give it the right to remove content from users’ accounts unless it violates the company’s community standards. While it’s somewhat standard for corporations to have data retention policies that see them delete emails or other messages from their own accounts that were sent by employees, they typically can’t remove the messages from the accounts of recipients outside the company. It’s rare that these companies own the communication channel itself and therefore host both sides of messages as Facebook does in this case, which potentially warrants a different course of action with more transparency than quietly retracting the messages.

Facebook’s power to tamper with users’ private message threads could alarm some. The issue is amplified by the fact that Facebook Messenger now has 1.3 billion users, making it one of the most popular communication utilities in the world.

Zuckerberg is known to have a team that helps him run his Facebook profile, with some special abilities for managing his 105 million followers and constant requests for his attention. For example, Zuckerberg’s profile doesn’t show a button to add him as a friend on desktop, and the button is grayed out and disabled on mobile. But the ability to change the messaging inboxes of other users is far more concerning.

Facebook may have sought to prevent leaks of sensitive corporate communications. Following the Sony hack, emails of Sony Pictures CEO Michael Lynton, who sat on Snap Inc.’s board, were exposed, revealing secret acquisitions and strategy.

Mark Zuckerberg during the early days of Facebook

However, Facebook may have also looked to thwart the publication of potentially embarrassing personal messages sent by Zuckerberg or other executives. In 2010, Silicon Valley Insider published now-infamous instant messages from a 19-year-old Zuckerberg to a friend shortly after starting The Facebook in 2004. “yea so if you ever need info about anyone at harvard . . . just ask . . . i have over 4000 emails, pictures, addresses, sns” Zuckerberg wrote to a friend. “what!? how’d you manage that one?” they asked. “people just submitted it . .  i don’t know why . . . they “trust me” . . . dumb fucks” Zuckerberg explained.

The New Yorker later confirmed the messages with Zuckerberg, who told the publication he “absolutely” regretted them. “If you’re going to go on to build a service that is influential and that a lot of people rely on, then you need to be mature, right? I think I’ve grown and learned a lot” said Zuckerberg.

If the goal of Facebook’s security team was to keep a hacker from accessing the accounts of executives and therefore all of their messages, the messages could have merely been deleted on the executives’ side the way any Facebook user is free to do, without them disappearing from the various recipients’ inboxes. If Facebook believed it needed to remove the messages entirely from its servers in case the company’s backend systems were breached, a disclosure of some kind seems reasonable.

Now as Facebook encounters increased scrutiny regarding how it treats users’ data in the wake of the Cambridge Analytica scandal, the retractions could become a bigger issue. Zuckerberg is slated to speak in front of the U.S. Senate Judiciary and Commerce committees on April 10 as well as the House Energy and Commerce Committee on April 11. They could request more information about Facebook removing messages or other data from users’ accounts without their consent. While Facebook is trying to convey that it understands its responsibilities, the black mark left on public opinion by past behavior may prove permanent.

For more on Facebook’s recent troubles, read our feature pieces:


Highlights and audio from Zuckerberg’s emotional Q&A on scandals

“This is going to be a never-ending battle” said Mark Zuckerberg. He just gave the most candid look yet into his thoughts about Cambridge Analytica, data privacy, and Facebook’s sweeping developer platform changes today during a conference call with reporters. Sounding alternately vulnerable about his past negligence and confident about Facebook’s strategy going forward, Zuckerberg took nearly an hour of tough questions.

You can listen to the entire on-the-record call here, which I recorded with Facebook’s consent:

The CEO started the call by giving his condolences to those affected by the shooting at YouTube yesterday. He then delivered this mea culpa on privacy:

“We’re an idealistic and optimistic company . . . but it’s clear now that we didn’t do enough. We didn’t focus enough on preventing abuse and thinking through how people could use these tools to do harm as well . . . We didn’t take a broad enough view of what our responsibility is and that was a huge mistake. That was my mistake.

It’s not enough to just connect people. We have to make sure those connections are positive and that they’re bringing people together.  It’s not enough just to give people a voice, we have to make sure that people are not using that voice to hurt people or spread misinformation. And it’s not enough to give people tools to sign into apps, we have to make sure that all those developers protect people’s information too.

It’s not enough to have rules requiring that they protect the information. It’s not enough to believe them when they’re telling us they’re protecting information. We actually have to ensure that everyone in our ecosystem protects people’s information.”

This is Zuckerberg’s strongest statement yet about his and Facebook’s failure to anticipate worst-case scenarios, which has led to a string of scandals that are now decimating the company’s morale. Spelling out how policy means nothing without enforcement, and pairing that with a massive reduction in how much data app developers can request from users makes it seem like Facebook is ready to turn over a new leaf.

Here are the highlights from the rest of the call:

On Zuckerberg calling fake news’ influence “crazy”: “I clearly made a mistake by just dismissing fake news as crazy — as not having an impact . . . it was too flippant. I never should have referred to it as crazy.”

On deleting Russian trolls: Not only did Facebook delete 135 Facebook and Instagram accounts belonging to the Internet Research Agency, the Russian government-connected election interference troll farm, as Facebook announced yesterday — Zuckerberg also said Facebook removed “a Russian news organization that we determined was controlled and operated by the IRA”.

On the 87 million number: Regarding today’s disclosure that up to 87 million people had their data improperly accessed by Cambridge Analytica, “it very well could be less but we wanted to put out the maximum that we felt it could be as soon as we had that analysis.” Zuckerberg also referred to The New York Times’ report, noting that “We never put out the 50 million number, that was other parties.”

On users having their public info scraped: Facebook announced this morning that “we believe most people on Facebook could have had their public profile scraped” via its search by phone number or email address feature and account recovery system. Scammers abused these to punch in one piece of info and then pair it to someone’s name and photo. Zuckerberg said search features are useful in languages where it’s hard to type or a lot of people have the same names. But “the methods of rate limiting this weren’t able to prevent malicious actors who cycled through hundreds of thousands of IP addresses and did a relatively small number of queries for each one, so given that and what we know today it just makes sense to shut that down.”

On when Facebook learned about the scraping and why it didn’t inform the public sooner: This was my question, and Zuckerberg dodged, merely saying Facebook had looked more closely at it in the last few days.

On implementing GDPR worldwide: Zuckerberg refuted a Reuters story from yesterday saying that Facebook wouldn’t bring GDPR privacy protections to the U.S. and elsewhere. Instead he says, “we’re going to make all the same controls and settings available everywhere, not just in Europe.”

On if board has discussed him stepping down as chairman: “Not that I’m aware of” Zuckerberg said happily.

On if he still thinks he’s the best person to run Facebook: “Yes. Life is about learning from the mistakes and figuring out what you need to do to move forward . . . I think what people should evaluate us on is learning from our mistakes . . .and if we’re building things people like and that make their lives better . . . there are billions of people who love the products we’re building.”

On the Boz memo and prioritizing business over safety: “The things that makes our product challenging to manage and operate are not the tradeoffs between people and the business. I actually think those are quite easy because over the long-term, the business will be better if you serve people. I think it would be near-sighted to focus on short-term revenue over people, and I don’t think we’re that short-sighted. All the hard decisions we have to make are tradeoffs between people. Different people who use Facebook have different needs. Some people want to share political speech that they think is valid, and other people feel like it’s hate speech . . . we don’t always get them right.”

On whether Facebook can audit all app developers: “We’re not going to be able to go out and necessarily find every bad use of data” Zuckerberg said, but confidently added “I actually do think we’re going to be able to cover a large amount of that activity.”

On whether Facebook will sue Cambridge Analytica: “We have stood down temporarily to let the [UK government] do their investigation and their audit. Once that’s done we’ll resume ours … and ultimately to make sure none of the data persists or is being used improperly. And at that point if it makes sense we will take legal action if we need to do that to get people’s information.”

On how Facebook will measure its impact on fixing privacy: Zuckerberg wants to be able to measure “the prevalence of different categories of bad content like fake news, hate speech, bullying, terrorism. . . That’s going to end up being the way we should be held accountable and measured by the public . . .  My hope is that over time the playbook and scorecard we put out will also be followed by other internet platforms so that way there can be a standard measure across the industry.”

On whether Facebook should try to earn less money by using less data for targeting: “People tell us if they’re going to see ads they want the ads to be good . . . that the ads are actually relevant to what they care about . . . On the one hand people want relevant experiences, and on the other hand I do think there’s some discomfort with how data is used in systems like ads. But I think the feedback is overwhelmingly on the side of wanting a better experience. Maybe it’s 95-5.”

On whether #DeleteFacebook has had an impact on usage or ad revenue: “I don’t think there’s been any meaningful impact that we’ve observed…but it’s not good.”

On the timeline for fixing data privacy: “This is going to be a never-ending battle. You never fully solve security. It’s an arms race” Zuckerberg said early in the call. Then to close Q&A, he said “I think this is a multi-year effort. My hope is that by the end of this year we’ll have turned the corner on a lot of these issues and that people will see that things are getting a lot better.”

Overall, this was the moment of humility, candor, and contrition Facebook desperately needed. Users, developers, regulators, and the company’s own employees have felt in the dark this last month, but Zuckerberg did his best to lay out a clear path forward for Facebook. His willingness to endure this questioning was admirable, even if he deserved the grilling.

The company’s problems won’t disappear, and its past transgressions can’t be apologized away. But Facebook and its leader have finally matured past the incredulous dismissals and paralysis that characterized its response to past scandals. It’s ready to get to work.

Cambridge Analytica denies accessing data on 87M Facebook users…claims 30M

Cambridge Analytica is refuting a report by Facebook today that said Cambridge Analytica improperly attained data on up to 87 million users. Instead, it claims it only “licensed data for no more than 30 million people” from Dr. Aleksandr Kogan’s research company Global Science Research. It also claims none of this data was used in work on the 2016 U.S. presidential election when it was hired by the Trump campaign, and that upon notice from Facebook immediately deleted all raw data and began removing derivative data.

The whole statement from Cambridge Analytica can be found below. We requested a comment from Facebook about the incongruities in the two companies’ positions, but the social network declined to comment.

The he-said-she-said of the scandal seems to be amplifying as Facebook continues to endure criticism about weak data privacy policies and enforcement that led to the Cambridge Analytica fiasco that’s seen Facebook’s market cap drop nearly $100 billion.

Mark Zuckerberg announces the Internet.org Innovation Challenge on October 9, 2014 in New Delhi, India. (Photo by Arun Sharma/Hindustan Times via Getty Images)

Today Facebook announced the 87 million figure as a maximum number of people potentially impacted and said it would notify those users with an alert atop the News Feed. It also rewrote its Terms of Service today to clarify how it collects data and works with outside developers, and announced sweeping platform API restrictions that will break many apps built on Facebook but prevent privacy abuses. Zuckerberg then held a conference call with reporters to give insight on all the news.

Cambridge Analytica has repeatedly denied assertions about interactions with Facebook data, but Facebook hasn’t backed down. Instead, Facebook has used Cambridge Analytica as an example of abuse it’s trying to combat, and as a justification for cracking down on developers both malicious and benign around the world.

Cambridge Analytica responds to announcement that GSR dataset potentially contained 87 million records
Today Facebook reported that information for up to 87 million people may have been improperly obtained by research company GSR. Cambridge Analytica licensed data for no more than 30 million people from GSR, as is clearly stated in our contract with the research company. We did not receive more data than this.

We did not use any GSR data in the work we did in the 2016 US presidential election.

Our contract with GSR stated that all data must be obtained legally, and this contract is now a matter of public record. We took legal action against GSR when we found out they had breached this contract. When Facebook contacted us to let us know the data had been improperly obtained, we immediately deleted the raw data from our file server, and began the process of searching for and removing any of its derivatives in our system.

When Facebook sought further assurances a year ago, we carried out an internal audit to make sure that all the data, all derivatives, and all backups had been deleted, and gave Facebook a certificate to this effect.

We are now undertaking an independent third-party audit to demonstrate that no GSR data remains in our systems.