To truly protect citizens, lawmakers need to restructure their regulatory oversight of big tech

If members of the European Parliament thought they could bring Mark Zuckerberg to heel with his recent appearance, they underestimated the enormous gulf between 21st century companies and their last-century regulators.

Zuckerberg himself reiterated that regulation is necessary, provided it is the “right regulation.”

But anyone who thinks that our existing regulatory tools can rein in our digital behemoths is engaging in magical thinking. Getting to “right regulation” will require us to think very differently.

The challenge goes far beyond Facebook and other social media: the use and abuse of data is going to be the defining feature of just about every company on the planet as we enter the age of machine learning and autonomous systems.

So far, Europe has taken a much more aggressive regulatory approach than anything the US was contemplating before Zuckerberg’s testimony, or has contemplated since.

The EU’s General Data Protection Regulation (GDPR) is now in force, extending data privacy rights to all European citizens regardless of whether their data is processed by companies within the EU or beyond.

But I’m not holding my breath that the GDPR will get us very far on the massive regulatory challenge we face. It is just more of the same when it comes to regulation in the modern economy: a lot of ambiguous costly-to-interpret words and procedures on paper that are outmatched by rapidly evolving digital global technologies.

Crucially, the GDPR still relies heavily on the outmoded technology of user choice and consent, the main result of which has been to inundate almost everyone in Europe (and beyond) with emails asking them to reconfirm permission to keep their data. But this is an illusion of choice, just as it is when we are ostensibly given the option to decide whether to agree to terms set by large corporations in standardized take-it-or-leave-it click-to-agree documents.

There’s also the problem of actually tracking whether companies are complying. It is likely that the regulation of online activity requires yet more technology, such as blockchain and AI-powered monitoring systems, to track data usage and implement smart contract terms.

As the EU has already discovered with the right to be forgotten, however, governments lack the technological resources needed to enforce these rights. Search engines are required to serve as their own judge and jury in the first instance; Google at last count was handling some 500 delisting requests a day.

The fundamental challenge we face, here and throughout the modern economy, is not “what should the rules for Facebook be?” but rather “how can we innovate new ways to regulate effectively in the global digital age?”

The answer is that we need to find ways to harness the same ingenuity and drive that built Facebook to build the regulatory systems of the digital age. One way to do this is with what I call “super-regulation,” which involves developing a market for licensed private regulators that serve two masters: achieving regulatory targets set by governments while also facing the market incentive to compete for business by innovating more cost-effective ways to do that.

Imagine, for example, if instead of drafting a detailed 261-page law like the EU did, a government instead settled on the principles of data protection, based on core values, such as privacy and user control.

Private entities, for-profit and non-profit, could apply to a government oversight agency for a license to provide data regulatory services to companies like Facebook, showing that their regulatory approach is effective in achieving these legislative principles.

These private regulators might use technology, big-data analysis, and machine learning to do that. They might also figure out how to communicate simple options to people, in the same way that the developers of our smartphones figured that out. They might develop effective schemes to audit and test whether their systems are working—on pain of losing their license to regulate.

There could be many such regulators among which both consumers and Facebook could choose: some could even specialize in offering packages of data management attributes that would appeal to certain demographics – from the people who want to be invisible online, to those who want their every move documented on social media.

The key here is competition: for-profit and non-profit private regulators compete to attract money and brains to the problem of how to regulate complex systems like data creation and processing.

Zuckerberg thinks there’s some kind of “right” regulation possible for the digital world. I believe him; I just don’t think governments alone can invent it. Ideally, some next generation college kid would be staying up late trying to invent it in his or her dorm room.

The challenge we face is not how to get governments to write better laws; it’s how to get them to create the right conditions for the continued innovation necessary for new and effective regulatory systems.

Here is where CEOs of heavily funded startups went to school

CEOs of funded startups tend to be a well-educated bunch, at least when it comes to university degrees.

Yes, it’s true college dropouts like Mark Zuckerberg and Bill Gates can still do well. But Crunchbase data shows that most startup chief executives have an advanced degree, commonly from a well-known and prestigious university.

Earlier this month, Crunchbase News looked at U.S. universities with strong track records for graduating future CEOs of funded companies. This unearthed some findings that, while interesting, were not especially surprising. Stanford and Harvard topped the list, and graduates of top-ranked business schools were particularly well-represented.

In this next installment of our CEO series, we narrowed the data set. Specifically, we looked at CEOs of U.S. companies funded in the past three years that have raised at least $100 million in total venture financing. Our intent was to see whether educational backgrounds of unicorn and near-unicorn leaders differ markedly from the broad startup CEO population.

Sort of, but not really

Here’s the broad takeaway of our analysis: Most CEOs of well-funded startups do have degrees from prestigious universities, and there are a lot of Harvard and Stanford grads. However, chief executives of the companies in our current data set are, educationally speaking, a pretty diverse bunch with degrees from multiple continents and all regions of the U.S.

In total, our data set includes 193 private U.S. companies that raised $100 million or more and closed a VC round in the past three years. In the chart below, we look at the universities most commonly attended by their CEOs:¹

The rankings aren’t hugely different from the broader population of funded U.S. startups. In that data set, we also found Harvard and Stanford vying for the top slots, followed mostly by Ivy League schools and major research universities.

For heavily funded startups, we also found a high proportion of business school degrees. All of the University of Pennsylvania alums on the list attended its Wharton School of Business. More than half of Harvard-affiliated grads attended its business school. MBAs were a popular credential among other schools on the list that offer the degree.

Where the most heavily funded startup CEOs studied

When it comes to the most heavily funded startups, the degree mix gets quirkier. That makes sense, given that we looked at just 20 companies.

In the chart below, we look at alumni affiliations for CEOs of these companies, all of which have raised hundreds of millions or billions in venture and growth financing:

One surprise finding from the U.S. startup data set was the prevalence of Canadian university grads. Three CEOs on the list are alums of the University of Waterloo. Others attended multiple well-known universities. The list also offers fresh proof that it’s not necessary to graduate from college to raise billions. WeWork CEO Adam Neumann just finished his degree last year, 15 years after he started. That didn’t stop the co-working giant from securing more than $7 billion in venture and growth financing.

  1. Several CEOs attended more than one university on the list.

Facebook, Google face first GDPR complaints over “forced consent”

After two years coming down the pipe at tech giants, Europe’s new privacy framework, the General Data Protection Regulation (GDPR), is now being applied — and longtime Facebook privacy critic Max Schrems has wasted no time in filing four complaints relating to (certain) companies’ ‘take it or leave it’ stance when it comes to consent.

The complaints have been filed on behalf of (unnamed) individual users — with one filed against Facebook; one against Facebook-owned Instagram; one against Facebook-owned WhatsApp; and one against Google’s Android.

Schrems argues that the companies are using a strategy of “forced consent” to continue processing the individuals’ personal data — when in fact the law requires that users be given a free choice unless consent is strictly necessary for provision of the service. (And, well, Facebook claims its core product is social networking — rather than farming people’s personal data for ad targeting.)

“It’s simple: Anything strictly necessary for a service does not need consent boxes anymore. For everything else users must have a real choice to say ‘yes’ or ‘no’,” Schrems writes in a statement.

“Facebook has even blocked accounts of users who have not given consent,” he adds. “In the end users only had the choice to delete the account or hit the “agree”-button — that’s not a free choice, it more reminds of a North Korean election process.”

We’ve reached out to all the companies involved for comment and will update this story with any response.

The European privacy campaigner most recently founded a not-for-profit digital rights organization to focus on strategic litigation around the bloc’s updated privacy framework, and the complaints have been filed via this crowdfunded NGO — which is called noyb (aka ‘none of your business’).

As we pointed out in our GDPR explainer, the provision in the regulation allowing for collective enforcement of individuals’ data rights is an important one, with the potential to strengthen the implementation of the law by enabling non-profit organizations such as noyb to file complaints on behalf of individuals — thereby helping to redress the imbalance between corporate giants and consumer rights.

That said, the GDPR’s collective redress provision is a component that Member States can choose to derogate from, which helps explain why the first four complaints have been filed with data protection agencies in Austria, Belgium, France and Hamburg in Germany — regions that also have data protection agencies with a strong record defending privacy rights.

Given that the Facebook companies involved in these complaints have their European headquarters in Ireland it’s likely the Irish data protection agency will get involved too. And it’s fair to say that, within Europe, Ireland does not have a strong reputation for defending data protection rights.

But the GDPR allows for DPAs in different jurisdictions to work together in instances where they have joint concerns and where a service crosses borders — so noyb’s action looks intended to test this element of the new framework too.

Under the penalty structure of the GDPR, major violations of the law can attract fines as large as 4% of a company’s global annual revenue — which, in the case of Facebook or Google, implies they could be on the hook for more than a billion euros apiece if they are deemed to have violated the law, as the complaints argue.
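To make the “more than a billion euros” figure concrete: Article 83(5) of the GDPR caps fines for the most serious infringements at the greater of EUR 20 million or 4% of total worldwide annual turnover for the preceding financial year. A minimal sketch in Python (the revenue figure below is illustrative, loosely based on Facebook’s reported 2017 revenue converted to euros; it is an assumption for scale, not a reported fine):

```python
def gdpr_max_fine(global_annual_revenue_eur: float) -> float:
    """Maximum GDPR fine for the most serious violations (Article 83(5)):
    the greater of EUR 20 million or 4% of total worldwide annual
    turnover for the preceding financial year."""
    return max(20_000_000.0, 0.04 * global_annual_revenue_eur)

# Illustrative revenue of ~EUR 34 billion (an assumption for scale only)
print(gdpr_max_fine(34_000_000_000))   # prints 1360000000.0, i.e. ~EUR 1.36B

# For smaller companies, the EUR 20 million floor is what bites
print(gdpr_max_fine(100_000_000))      # prints 20000000.0
```

Note that a lower tier (EUR 10 million or 2% of turnover, under Article 83(4)) applies to lesser infringements; the complaints here target the higher tier.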

That said, given how freshly fixed in place the rules are, some EU regulators may well tread softly on the enforcement front — at least in the first instances, to give companies some benefit of the doubt and/or a chance to make amends to come into compliance if they are deemed to be falling short of the new standards.

However, in instances where companies themselves appear to be attempting to deform the law with a willfully self-serving interpretation of the rules, regulators may feel they need to act swiftly to nip any disingenuousness in the bud.

“We probably will not immediately have billions of penalty payments, but the corporations have intentionally violated the GDPR, so we expect a corresponding penalty under GDPR,” writes Schrems.

Only yesterday, for example, Facebook founder Mark Zuckerberg — speaking in an on stage interview at the VivaTech conference in Paris — claimed his company hasn’t had to make any radical changes to comply with GDPR, and further claimed that a “vast majority” of Facebook users are willingly opting in to targeted advertising via its new consent flow.

“We’ve been rolling out the GDPR flows for a number of weeks now in order to make sure that we were doing this in a good way and that we could take into account everyone’s feedback before the May 25 deadline. And one of the things that I’ve found interesting is that the vast majority of people choose to opt in to make it so that we can use the data from other apps and websites that they’re using to make ads better. Because the reality is if you’re willing to see ads in a service you want them to be relevant and good ads,” said Zuckerberg.

He did not mention that the dominant social network does not offer people a free choice on accepting or declining targeted advertising. The new consent flow Facebook revealed ahead of GDPR only offers the ‘choice’ of quitting Facebook entirely if a person does not want to accept targeted advertising. Which, well, isn’t much of a choice given how powerful the network is. (Additionally, it’s worth pointing out that Facebook continues tracking non-users — so even deleting a Facebook account does not guarantee that Facebook will stop processing your personal data.)

Asked about how Facebook’s business model will be affected by the new rules, Zuckerberg essentially claimed nothing significant will change — “because giving people control of how their data is used has been a core principle of Facebook since the beginning”.

“The GDPR adds some new controls and then there’s some areas that we need to comply with but overall it isn’t such a massive departure from how we’ve approached this in the past,” he claimed. “I mean I don’t want to downplay it — there are strong new rules that we’ve needed to put a bunch of work into making sure that we complied with — but as a whole the philosophy behind this is not completely different from how we’ve approached things.

“In order to be able to give people the tools to connect in all the ways they want and build community, a lot of philosophy that is encoded in a regulation like GDPR is really how we’ve thought about all this stuff for a long time. So I don’t want to understate the areas where there are new rules that we’ve had to go and implement but I also don’t want to make it seem like this is a massive departure in how we’ve thought about this stuff.”

Zuckerberg faced a range of tough questions on these points from the EU parliament earlier this week. But he avoided answering them in any meaningful detail.

So EU regulators are essentially facing a first test of their mettle — i.e. whether they are willing to step up and defend the line of the law against big tech’s attempts to reshape it in their business model’s image.

Privacy laws are nothing new in Europe but robust enforcement of them would certainly be a breath of fresh air. And now at least, thanks to GDPR, there’s a penalties structure in place to provide incentives as well as teeth, and spin up a market around strategic litigation — with Schrems and noyb in the vanguard.

Schrems also makes the point that small startups and local companies are less likely to be able to use the kind of strong-arm ‘take it or leave it’ tactics on users that big tech is able to use to extract consent on account of the reach and power of their platforms — arguing there’s a competition concern that GDPR should also help to redress.

“The fight against forced consent ensures that the corporations cannot force users to consent,” he writes. “This is especially important so that monopolies have no advantage over small businesses.”


Zuckerberg didn’t make any friends in Europe today

Speaking in front of EU lawmakers today, Facebook’s founder Mark Zuckerberg namechecked the GDPR’s core principles of “control, transparency and accountability” — claiming his company will deliver on all that come Friday, when the new European Union data protection framework starts being applied, finally with penalties worth the enforcement.

However there was little transparency or accountability on show during the session, given the upfront questions format which saw Zuckerberg cherry-picking a few comfy themes to riff on after silently absorbing an hour of MEPs’ highly specific questions with barely a facial twitch in response.

The questions MEPs asked of Zuckerberg were wide ranging and often drilled deep into key pressure points around the ethics of Facebook’s business — ranging from how deep the app data misuse privacy scandal rabbithole goes; to whether the company is a monopoly that needs breaking up; to how users should be compensated for misuse of their data.

Is Facebook genuinely complying with GDPR, he was asked several times (unsurprisingly, given the scepticism of data protection experts on that front). Why did it choose to shift ~1.5BN users out of reach of the GDPR? Will it offer a version of its platform that lets people completely opt out of targeted advertising, as it has so far studiously avoided doing?

Why did it refuse a public meeting with the EU parliament? Why has it spent “millions” lobbying against EU privacy rules? Will the company commit to paying taxes in the markets where it operates? What’s it doing to prevent fake accounts? What’s it doing to prevent bullying? Does it regulate content or is it a neutral platform?

Zuckerberg made like a sponge and absorbed all this fine-grained flak. But when the time came for responses the data flow was not reciprocal: self-serving talking points on self-selected “themes” were all he had come prepared to serve up.

Yet — and here the irony is very rich indeed — people’s personal data flows liberally into Facebook, via all sorts of tracking technologies and techniques.

And as the Cambridge Analytica data misuse scandal has now made amply clear, people’s personal information has also very liberally leaked out of Facebook — oftentimes without their knowledge or consent.

But when it comes to Facebook’s own operations, the company maintains a highly filtered, extremely partial ‘newsfeed’ on its business empire — keeping a tight grip on the details of what data it collects and why.

Only last month Zuckerberg sat in Congress avoiding giving straight answers to basic operational questions. So if any EU parliamentarians had been hoping for actual transparency and genuine accountability from today’s session they would have been sorely disappointed.

Yes, you can download the data you’ve willingly uploaded to Facebook. Just don’t expect Facebook to give you a download of all the information it’s gathered and inferred about you.

The EU parliament’s political group leaders seemed well tuned to the myriad concerns now flocking around Facebook’s business. And were quick to seize on Zuckerberg’s dumbshow as further evidence that Facebook needs to be ruled.

Thing is, in Europe regulation is not a dirty word. And GDPR’s extraterritorial reach and weighty public profile looks to be further whetting political appetites.

So if Facebook was hoping that the mere appearance of its CEO sitting in a chair in Brussels, going through the motions of listening before reading from his usual talking points, would be enough, that looks to be a major miscalculation.

“It was a disappointing appearance by Zuckerberg. By not answering the very detailed questions by the MEPs he didn’t use the chance to restore trust of European consumers but in contrary showed to the political leaders in the European Parliament that stronger regulation and oversight is needed,” Green MEP and GDPR rapporteur Jan Philipp Albrecht told us after the meeting.

Albrecht had pressed Zuckerberg about how Facebook shares data between Facebook and WhatsApp — an issue that has raised the ire of regional data protection agencies. And while DPAs forced the company to turn off some of these data flows, Facebook continues to share other data.

The MEP had also asked Zuckerberg to commit to no exchange of data between the two apps. Zuckerberg determinedly made no such commitment.

Claude Moraes, chair of the EU parliament’s civil liberties, justice and home affairs (Libe) committee, issued a slightly more diplomatic reaction statement after the meeting — yet also with a steely undertone.

“Trust in Facebook has suffered as a result of the data breach and it is clear that Mr. Zuckerberg and Facebook will have to make serious efforts to reverse the situation and to convince individuals that Facebook fully complies with European Data Protection law. General statements like ‘We take privacy of our customers very seriously’ are not sufficient, Facebook has to comply and demonstrate it, and for the time being this is far from being the case,” he said.

“The Cambridge Analytica scandal was already in breach of the current Data Protection Directive, and would also be contrary to the GDPR, which is soon to be implemented. I expect the EU Data Protection Authorities to take appropriate action to enforce the law.”

Damian Collins, chair of the UK parliament’s DCMS committee, which has thrice tried and failed to get Zuckerberg to appear before it, did not mince his words at all. Though he has little reason to: having been so thoroughly rejected by the Facebook founder — and having accused the company of a pattern of evasive behavior to its CTO’s face — there’s clearly not much left for him to hold out for.

“What a missed opportunity for proper scrutiny on many crucial questions raised by the MEPs. Questions were blatantly dodged on shadow profiles, sharing data between WhatsApp and Facebook, the ability to opt out of political advertising and the true scale of data abuse on the platform,” said Collins in another reaction statement after the meeting. “Unfortunately the format of questioning allowed Mr Zuckerberg to cherry-pick his responses and not respond to each individual point.

“I echo the clear frustration of colleagues in the room who felt the discussion was shut down,” he added, ending with a fourth (doubtless equally forlorn) request for Zuckerberg to appear in front of the DCMS Committee to “provide Facebook users the answers they deserve”.

In the latter stages of today’s EU parliament session several MEPs — clearly very exasperated by the straitjacketed format — resorted to heckling Zuckerberg to press for answers he had not given them.

“Shadow profiles,” interjected one, seizing on a moment’s hesitation as Zuckerberg sifted his notes for the next talking point. “Compensation,” shouted another, earning a snort of laughter from the CEO and some more theatrical note flipping to buy himself time.

Then, appearing slightly flustered, Zuckerberg looked up at one of the hecklers and said he would engage with his question — about shadow profiles (though Zuckerberg dare not speak that name, of course, given he claims not to recognize it) — arguing Facebook needs to hold onto such data for security purposes.

Zuckerberg did not specify, as MEPs had asked him to, whether Facebook uses data about non-users for any purposes other than the security scenario he chose to flesh out (aka “keeping bad content out”, as he put it).

He also ignored a second follow-up pressing him on how non-users can “stop that data being transferred”.

“On the security side we think it’s important to keep it to protect people in our community,” Zuckerberg said curtly, before turning to his lawyer for a talking point prompt (couched as an ask if there are “any other themes we wanted to get through”).

His lawyer hissed to steer the conversation back to Cambridge Analytica — to Facebook’s well-trodden PR about how they’re “locking down the platform” to stop any future data heists — and the Zuckbot was immediately back in action regurgitating his now well-practiced crisis PR around the scandal.

What was very clearly demonstrated during today’s session was the Facebook founder’s preference for control — that’s to say control which he is exercising.

Hence the fixed format of the meeting, which had been negotiated prior to Facebook agreeing to meet with EU politicians, and which clearly favored the company by allowing no formal opportunity for follow ups from MEPs.

Zuckerberg also tried several times to wrap up the meeting — by insinuating and then announcing time was up. MEPs ignored these attempts, and Zuckerberg seemed most uncomfortable at not having his orders instantly carried out.

Instead he had to sit and watch a micro negotiation between the EU parliament’s president and the political groups over whether they would accept written answers to all their specific questions from Facebook — before he was publicly put on the spot by president Antonio Tajani to agree to provide the answers in writing.

Although, as Collins has already warned MEPs, Facebook has had plenty of practice at generating wordy but empty responses to politicians’ questions about its business processes — responses which evade the spirit and specifics of what’s being asked.

The self-control on show from Zuckerberg today is certainly not the kind of guardrails that European politicians increasingly believe social media needs. Self-regulation, observed several MEPs to Zuckerberg’s face, hasn’t worked out so well has it?

The first MEP to lay out his questions warned Zuckerberg that apologizing is not enough. Another pointed out he’s been on a contrition tour for about 15 years now.

Facebook needs to make a “legal and moral commitment” to the EU’s fundamental values, he was told by Moraes. “Remember that you’re here in the European Union where we created GDPR so we ask you to make a legal and moral commitment, if you can, to uphold EU data protection law, to think about ePrivacy, to protect the privacy of European users and the many millions of European citizens and non-Facebook users as well,” said the Libe committee chair.

But self-regulation — or, the next best thing in Zuckerberg’s eyes: ‘Facebook-shaped regulation’ — was what he had come to advocate for, picking up on the MEPs’ regulation “theme” to respond with the same line he fed to Congress: “I don’t think the question here is whether or not there should be regulation. I think the question is what is the right regulation.”

“The Internet is becoming increasingly important in people’s lives. Some sort of regulation is important and inevitable. And the important thing is to get this right,” he continued. “To make sure that we have regulatory frameworks that help protect people, that are flexible so that they allow for innovation, that don’t inadvertently prevent new technologies like AI from being able to develop.”

He even brought up startups — claiming ‘bad regulation’ (I paraphrase) could present a barrier to the rise of future dormroom Zuckerbergs.

Of course he failed to mention how his own dominant platform is the attention-sapping, app-gobbling elephant in the room crowding out the next generation of would-be entrepreneurs. But MEPs’ concerns about competition were clear.

Instead of making friends and influencing people in Brussels, Zuckerberg looks to have delivered less than if he’d stayed away — angering and alienating the very people whose job it will be to amend the EU legislation that’s coming down the pipe for his platform.

Ironically one of the few specific questions Zuckerberg chose to answer was a false claim by MEP Nigel Farage — who had wondered whether Facebook is still a “neutral political platform”, griping about drops in engagement for rightwing entities ever since Facebook’s algorithmic changes in January, before claiming, erroneously, that Facebook does not disclose the names of the third party fact checkers it uses to help it police fake news.

So — significantly, and as was also evident in the US Senate and Congress — Facebook was taking flak from both left and right of the political spectrum, implying broad, cross-party support for regulating these algorithmic platforms.

Actually Facebook does disclose those fact checking partnerships. But it’s pretty telling that Zuckerberg chose to expend some of his oh-so-slender speaking time to debunk something that really didn’t merit the breath.

Farage had also claimed, during his three minutes, that without “Facebook and other forms of social media there is no way that Brexit or Trump or the Italian elections could ever possibly have happened”. 

Funnily enough Zuckerberg didn’t make time to comment on that.

EU parliament pushes for Zuckerberg hearing to be live-streamed

There’s confusion about whether a meeting between Facebook founder Mark Zuckerberg and the European Union’s parliament — which is due to take place next Tuesday — will go ahead as planned or not.

The meeting was confirmed by the EU parliament’s president this week, and is the latest stop on Zuckerberg’s contrition tour, following the Cambridge Analytica data misuse story that blew up into a major public scandal in mid March.

However, the discussion with MEPs that Facebook agreed to was due to take place behind closed doors. A private format that’s not only rife with irony but was also unpalatable to a large number of MEPs. It even drew criticism from some in the EU’s unelected executive body, the European Commission, which further angered parliamentarians.

Now, as the FT reports, MEPs appear to have forced the parliament’s president, Antonio Tajani, to agree to live-streaming the event.

Guy Verhofstadt — the leader of the Alliance of Liberals and Democrats group of MEPs, who had said he would boycott the meeting if it took place in private — has also tweeted that a majority of the parliament’s groups have pushed for the event to be streamed online.

And a Green Group MEP, Sven Giegold, who posted an online petition calling for the meeting not to be held in secret — has also tweeted that there is now a majority among the groups wanting to change the format. At the time of writing, Giegold’s petition has garnered more than 25,000 signatures.

MEP Claude Moraes, chair of the EU parliament’s Civil Liberties, Justice and Home Affairs (LIBE) committee — and one of the handful of parliamentarians set to question Zuckerberg (assuming the meeting goes ahead as planned) — told TechCrunch this morning that there were efforts afoot among political group leaders to try to open up the format. Though any changes would clearly depend on Facebook agreeing to them.

After speaking to Moraes, we asked Facebook to confirm whether it’s open to Zuckerberg’s meeting being streamed online — say, via a Facebook Live. Seven hours later we’re still waiting for a response, including to a follow up email asking if it will accept the majority decision among MEPs for the hearing to be live-streamed.

The LIBE committee had been pushing for a fully open hearing with the Facebook founder — a format which would also have meant it being open to members of the public. But that was before a small majority of the parliament’s political groups accepted the Conference of Presidents’ (COP) decision on a closed meeting.

Although now that decision looks to have been rowed back, with a majority of the groups pushing the president to agree to the event being streamed — putting the ball back in Facebook’s court to accept the new format.

Of course democracy can be a messy process at times, something Zuckerberg surely has a pretty sharp appreciation of these days. And if the Facebook founder pulls out of the meeting simply because a majority of MEPs have voted to do the equivalent of “Facebook Live” the hearing, well, it’s hard to see a way for the company to salvage any face at all.

Zuckerberg has agreed to be interviewed onstage at the VivaTech conference in Paris next Thursday, and is scheduled to have lunch with French president Emmanuel Macron the same week. So pivoting to a last-minute snub of the EU parliament would be a pretty high-stakes game for the company to play. (Though it’s continued to deny a U.K. parliamentary committee any face time with Zuckerberg for months now.)

The EU Facebook agenda

The substance of the meeting between Zuckerberg and the EU parliament — should it go ahead — will include discussion about Facebook’s impact on election processes. That was the only substantive detail flagged by Tajani in the statement on Wednesday when he confirmed Zuckerberg had accepted the invitation to talk to representatives of the EU’s 500 million citizens.

Moraes says he also intends to ask Zuckerberg wider questions relating to how Facebook’s business model impacts people’s privacy. And his hope is that this discussion could help unblock negotiations around an update to the EU’s rules on online tracking technologies and the privacy of digital communications.

“One of the key things is that [Zuckerberg] gets a particular flavor of the genuine concern — not just about what Facebook is doing, but potentially other tech companies — on the interference in elections. Because I think that is a genuine, big, sort of tech versus real life and politics concern,” he says, discussing the questions he wants to ask.

“And the fact is he’s not going to go before the House of Commons. He’s not going to go before the Bundestag. And he needs to answer this question about Cambridge Analytica — in a little bit more depth, if possible, than we even saw in Congress. Because he needs to get straight from us the deepest concerns about that.

“And also this issue of processing for algorithmic targeting, and for political manipulation — some in depth questions on this.

“And we need to go more in depth and more carefully about what safeguards there are — and what he’s prepared to do beyond those safeguards.

“We’re aware of how poor US data protection law is. We know that GDPR is coming in but it doesn’t impact on the Facebook business model that much. It does a little bit but not sufficiently — I mean ePrivacy probably far more — so we need to get to a point where we understand what Facebook is willing to change about the way it’s been behaving up til now.

“And we have a real locus there — which is we have more Facebook users, and we have the clout as well because we have potential legislation, and we have regulation beyond that too. So I think for those reasons he needs to answer.”

“The other things that go beyond the obvious Cambridge Analytica questions and the impact on elections, are the consequences of the business model, data-driven advertising, and how that’s going to work, and there we need to go much more in depth,” he continues.

“Facebook on the one hand, it’s complying with GDPR [the EU’s incoming General Data Protection Regulation] which is fine — but we need to think about what the further protections are. So for example, how justified we are with the ePrivacy Regulation, for example, and its elements, and I think that’s quite important.

“I think he needs to talk to us about that. Because that legislation at the moment it’s seen as controversial, it’s blocked at the moment, but clearly would have more relevance to the problems that are currently being created.”

Negotiations between the EU parliament and the European Council to update the ePrivacy Directive — which governs the use of personal telecoms data and also regulates tracking cookies — are effectively stalled for now, as EU Member States are still trying to reach agreement. The planned replacement would be a regulation that harmonizes the rules with the incoming GDPR and expands the remit to include internet companies, covering both the content and metadata of digital comms. The directive was last updated in 2009.

“When the Cambridge Analytica case happened, I was slightly concerned about people thinking GDPR is the panacea to this — it’s not,” argues Moraes. “It only affects Facebook’s business model a little bit. ePrivacy goes far more in depth — into data-driven advertising, personal comms and privacy.

“That tool was there because people were aware that this kind of thing can happen. But because of that the Privacy directive will be seen as controversial but I think people now need to look at it carefully and say look at the problems created in the Facebook situation — and not just Facebook — and then analyze whether ePrivacy has got merits. I think that’s quite an important discussion to happen.”

While Moraes believes Facebook-Cambridge Analytica could help unblock the log jam around ePrivacy, as the scandal makes some of the risks clear and underlines what’s at stake for politicians and democracies, he concedes there are still challenging barriers to getting the right legislation in place — given the fine-grained layers of complexity involved with imposing checks and balances on what are also poorly understood technologies outside their specific industry niches.

“This Facebook situation has happened when ePrivacy is more or less blocked because its proportionality is an issue. But the essence of it — which is all the problems that happened with the Facebook case, the Cambridge Analytica case, and data-driven advertising business model — that needs checks and balances… So we need to now just review the ePrivacy situation and I think it’s better that everyone opens this discussion up a bit.

“ePrivacy, future legislation on artificial intelligence, all of which is in our committee, it will challenge people because sometimes they just won’t want to look at it. And it speaks to parliamentarians without technical knowledge which is another issue in Western countries… But these are all wider issues about the understanding of these files which are going to come up.  

“This is the discussion we need to have now. We need to get that discussion right. And I think Facebook and other big companies are aware that we are legislating in these areas — and we’re legislating for more than one countries and we have the economies of scale — we have the user base, which is bigger than the US… and we have the innovation base, and I think those companies are aware of that.”

Moraes also points out that U.S. lawmakers raised the difference between the EU and U.S. data protection regimes with Zuckerberg last month — arguing there’s a growing awareness that U.S. law in this area “desperately needs to be modernized.”

So he sees an opportunity for EU regulators to press on their counterparts over the pond.

“We have international agreements that just aren’t going to work in the future and they’re the basis of a lot of economic activity, so it is becoming critical… So the Facebook debate should, if it’s pushed in the correct direction, give us a better handle on ePrivacy, on modernizing data protection standards in the US in particular. And modernizing safeguards for consumers,” he argues.

“Our parliaments across Europe are still filled with people who don’t have tech backgrounds and knowledge but we need to ensure that we get out of this mindset and start understanding exactly what the implications here are of these cases and what the opportunities are.”

In the short term, discussions are also continuing for a full meeting between the LIBE committee and Facebook.

Though that’s unlikely to be Zuckerberg himself. Moraes says the committee is “aiming for Sheryl Sandberg,” though he says other names have been suggested. No firm date has been confirmed yet either — he’ll only say he “hopes it will take place as soon as possible.”

Threats are not on the agenda though. Moraes is unimpressed with the strategy the DCMS committee has pursued in trying (and so far failing) to get Zuckerberg to testify in front of the U.K. parliament, arguing threats of a summons were counterproductive. LIBE is clearly playing a longer game.

“Threatening him with a summons in UK law really was not the best approach. Because it would have been extremely important to have him in London. But I just don’t see why he would do that. And I’m sure there’s an element of him understanding that the European Union and parliament in particular is a better forum,” he suggests.

“We have more Facebook users than the US, we have the regulatory framework that is significant to Facebook — the UK is simply implementing GDPR and following Brexit it will have an adequacy agreement with the EU so I think there’s an understanding in Facebook where the regulation, the legislation and the audience is.”

“I think the quaint ways of the British House of Commons need to be thought through,” he adds. “Because I really don’t think that would have engendered much enthusiasm in [Zuckerberg] to come and really interact with the House of Commons, which would have been a very positive thing. Particularly on the specifics of Cambridge Analytica, given that that company is in the UK. So that locus was quite important, but the approach… was not positive at all.”

Zuckerberg will meet with European parliament in private next week

Who says privacy is dead? Facebook’s founder Mark Zuckerberg has agreed to take European parliamentarians’ questions about how his platform impacts the privacy of hundreds of millions of European citizens — but only behind closed doors. Where no one except a handful of carefully chosen MEPs will bear witness to what’s said.

The private meeting will take place on May 22 at 17:45 CET in Brussels. Afterwards, the president of the European Parliament, Antonio Tajani, will hold a press conference to furnish the media with his version of events.

It’s just a shame that journalists are being blocked from being able to report on what actually goes on in the room.

And that members of the public won’t be able to form their own opinions about how Facebook’s founder responds to pressing questions about what Zuckerberg’s platform is doing to their privacy and their fundamental rights.

Because the doors are being closed to journalists and citizens.

Even the intended contents of the meeting are being glossed over in public — with the purpose of the chat vaguely couched as “to clarify issues related to the use of personal data” in a statement by Tajani (below).

The impact of Facebook’s platform on “electoral processes in Europe” is the only discussion point that’s specifically flagged.

Given Zuckerberg has thrice denied requests from UK lawmakers to take questions about his platform in a public hearing, we can only assume the company made the CEO’s appearance in front of EU parliamentarians conditional on the meeting being closed.

Zuckerberg did agree to public sessions with US lawmakers last month, following a major global privacy scandal related to user data and political ad targeting.

But evidently the company’s sense of political accountability doesn’t travel very far. (Despite a set of ‘privacy principles’ that Facebook published with great fanfare at the start of the year — one of which reads: ‘We are accountable’. Albeit Facebook didn’t specify to whom or what exactly it feels accountable.)

We’ve reached out to Facebook to ask why Zuckerberg will not take European parliamentarians’ questions in a public hearing. And indeed whether Mark can find the time to hop on a train to London afterwards to testify before the DCMS committee’s enquiry into online disinformation. We will update this story with any response.

As Vera Jourova, the European commissioner for justice and consumers, put it in a tweet, it’s a pity the Facebook founder does not believe all Europeans deserve to know how their data is handled by his company. Just a select few, holding positions of elected office.

A pity or, well, a shame.

Safe to say, not all MEPs are happy with the arrangement…

But let’s at least be thankful that Zuckerberg has shown us, once again, how very much privacy matters — to him personally.

These schools graduate the most funded startup CEOs

There is no degree required to be a CEO of a venture-backed company. But it likely helps to graduate from Harvard, Stanford or one of about a dozen other prominent universities that churn out a high number of top startup executives.

That is the central conclusion from our latest graduation season data crunch. For this exercise, Crunchbase News took a look at top U.S. university affiliations for CEOs of startups that raised $1 million or more in the past year.

In many ways, the findings weren’t too different from what we unearthed almost a year ago, looking at the university backgrounds of funded startup founders. However, there were a few twists. Here are some key findings:

Harvard fares better in its rivalry with Stanford at educating future CEOs than it does for founders. The two universities essentially tied for first place in the CEO alumni ranking. (Stanford was well ahead for founders.)

Business schools are big. While MBA programs may be seeing fewer applicants, the degree remains quite popular among startup CEOs. At Harvard and the University of Pennsylvania, more than half of the CEOs on our list are business school alumni.

University affiliation is influential but not determinative for CEOs. The 20 schools featured on our list graduated CEOs of more than 800 global startups that raised $1M or more in roughly the past year, a minority of the total.

Below, we flesh out the findings in more detail.

Where startup CEOs went to school

First, let’s start with school rankings. There aren’t many big surprises here. Harvard and Stanford far outpace any other institutions on the CEO list. Each counts close to 150 known alumni among chief executives of startups that raised $1 million or more over the past year.

MIT, University of Pennsylvania, and Columbia round out the top five. Ivy League schools and large research universities constitute most of the remaining institutions on our list of about twenty with a strong track record for graduating CEOs. The numbers are laid out in the chart below:

Traditional MBA popular with startup CEOs

Yes, Bill Gates and Mark Zuckerberg dropped out of Harvard. And Steve Jobs ditched college after a semester. But they are the exceptions in CEO-land.

The typical path for the leader of a venture-backed company is a bit more staid. Degrees from prestigious universities abound. And MBA degrees, particularly from top-ranked programs, are a pretty popular credential.

Top business schools enroll only a small percentage of students at their respective universities. However, these institutions produce a disproportionately large share of CEOs. Wharton School of Business degrees, for instance, accounted for the majority of CEO alumni from the University of Pennsylvania. Harvard Business School also graduated more than half of the Harvard-affiliated CEOs. And at Northwestern’s Kellogg School of Management, the share was nearly half.

CEO alumni background is really quite varied

While the educational backgrounds of startup CEOs do show a lot of overlap, there is also plenty of room for variance. About 3,000 U.S. startups and nearly 5,000 global startups with listed CEOs raised $1 million or more since last May. In both cases, those startups were largely led by people who didn’t attend a school on the list above.

Admittedly, the math for this is a bit fuzzy. A big chunk of CEO profiles in Crunchbase (probably more than a third) don’t include a university affiliation. Even taking this into account, however, it looks like more than half of the U.S. CEOs were not graduates of schools on the short list. Meanwhile, for non-U.S. CEOs, only a small number attended a school on the list.

So, with that, some words of inspiration for graduates: If your goal is to be a funded startup CEO, the surest path is probably to launch a startup. Degrees matter, but they’re not determinative.

Facebook’s dark ads problem is systemic

Facebook’s admission to the UK parliament this week that it had unearthed unquantified thousands of dark fake ads after investigating fakes bearing the face and name of well-known consumer advice personality, Martin Lewis, underscores the massive challenge for its platform on this front. Lewis is suing the company for defamation over its failure to stop bogus ads besmirching his reputation with their associated scams.

Lewis decided to file his campaigning lawsuit after reporting 50 fake ads himself, having been alerted to the scale of the problem by consumers contacting him to ask if the ads were genuine or not. But the revelation that there were in fact “thousands” of associated fake ads being run on Facebook as a click-driver for fraud shows the company needs to change its entire system, he has now argued.

In a response statement after Facebook’s CTO Mike Schroepfer revealed the new data-point to the DCMS committee, Lewis wrote: “It is creepy to hear that there have been 1,000s of adverts. This makes a farce of Facebook’s suggestion earlier this week that to get it to take down fake ads I have to report them to it.”

“Facebook allows advertisers to use what is called ‘dark ads’. This means they are targeted only at set individuals and are not shown in a time line. That means I have no way of knowing about them. I never get to hear about them. So how on earth could I report them? It’s not my job to police Facebook. It is Facebook’s job — it is the one being paid to publish scams.”

As Schroepfer told it to the committee, Facebook had removed the additional “thousands” of ads “proactively” — but as Lewis points out that action is essentially irrelevant given the problem is systemic. “A one off cleansing, only of ads with my name in, isn’t good enough. It needs to change its whole system,” he wrote.

In a statement on the case, a Facebook spokesperson told us: “We have also offered to meet Martin Lewis in person to discuss the issues he’s experienced, explain the actions we have taken already and discuss how we could help stop more bad ads from being placed.”

The committee raised various ‘dark ads’-related issues with Schroepfer — asking how, as with the Lewis example, a person could complain about an advert they literally can’t see?

The Facebook CTO avoided a direct answer but essentially his reply boiled down to: People can’t do anything about this right now; they have to wait until June when Facebook will be rolling out the ad transparency measures it trailed earlier this month — then he claimed: “You will basically be able to see every running ad on the platform.”

But there’s a very big difference between being able to technically see every ad running on the platform — and literally being able to see every ad running on the platform. (And, well, pity the pair of eyeballs condemned to that Dantean fate…)

In its PR about the new tools Facebook says a new feature — called “view ads” — will let users see the ads a Facebook Page is running, even if that Page’s ads haven’t appeared in an individual’s News Feed. So that’s one minor concession. However, while ‘view ads’ will apply to every advertiser Page on Facebook, a Facebook user will still have to know about the Page, navigate to it and click to ‘view ads’.

What Facebook is not launching is a public, searchable archive of all ads on its platform. It’s only doing that for a sub-set of ads — specifically those labeled “Political Ad”.

Clearly the Martin Lewis fakes wouldn’t fit into that category. So Lewis won’t be able to run searches against his name or face in future to try to identify new dark fake Facebook ads that are trying to trick consumers into scams by misappropriating his brand. Instead, he’d have to employ a massive team of people to click “view ads” on every advertiser Page on Facebook — and do so continuously, so long as his brand lasts — to try to stay ahead of the scammers.

So unless Facebook radically expands the ad transparency tools it has announced thus far it’s really not offering any kind of fix for the dark fake ads problem at all. Not for Lewis. Nor indeed for any other personality or brand that’s being quietly misused in the hidden bulk of scams we can only guess are passing across its platform.

Kremlin-backed political disinformation scams are really just the tip of the iceberg here. But even in that narrow instance Facebook estimated there had been 80,000 pieces of fake content targeted at just one election.

What’s clear is that without regulatory invention the burden of proactive policing of dark ads and fake content on Facebook will keep falling on users — who will now have to actively sift through Facebook Pages to see what ads they’re running and try to figure out if they look legit.

Yet Facebook has 2BN+ users globally. The sheer number of Pages and advertisers on its platform renders “view ads” an almost entirely meaningless addition, especially as cyberscammers and malicious actors are also going to be experts at setting up new accounts to further their scams — moving on to the next batch of burner accounts after they’ve netted each fresh catch of unsuspecting victims.

The committee asked Schroepfer whether Facebook retains money from advertisers it ejects from its platform for running ‘bad ads’ — i.e. after finding they were running an ad its terms prohibit. He said he wasn’t sure, and promised to follow up with an answer. Which rather suggests it doesn’t have an actual policy. Mostly it’s happy to collect your ad spend.

“I do think we are trying to catch all of these things pro-actively. I won’t want the onus to be put on people to go find these things,” he also said, which is essentially a twisted way of saying the exact opposite: That the onus remains on users — and Facebook is simply hoping to have a technical capacity that can accurately review content at scale at some undefined moment in the future.

“We think of people reporting things, we are trying to get to a mode over time — particularly with technical systems — that can catch this stuff up front,” he added. “We want to get to a mode where people reporting bad content of any kind is the sort of defense of last resort and that the vast majority of this stuff is caught up front by automated systems. So that’s the future that I am personally spending my time trying to get us to.”

Trying, want to, future… aka zero guarantees that the parallel universe he was describing will ever align with the reality of how Facebook’s business actually operates — right here, right now.

In truth this kind of contextual AI content review is a very hard problem, as Facebook CEO Mark Zuckerberg has himself admitted. And it’s by no means certain the company can develop robust systems to properly police this kind of stuff. Certainly not without hiring orders of magnitude more human reviewers than it’s currently committed to doing. It would need to employ literally millions more humans to manually check all the nuanced things AIs simply won’t be able to figure out.

Or else it would need to radically revise its processes — as Lewis has suggested — to make them a whole lot more conservative than they currently are: by, for example, requiring much more careful and thorough scrutiny of (and even pre-vetting) certain classes of high-risk adverts. So yes, by engineering in friction.

In the meanwhile, as Facebook continues its lucrative business as usual — raking in huge earnings thanks to its ad platform (in its Q1 earnings this week it reported a whopping $11.97BN in revenue) — Internet users are left performing unpaid moderation for a massively wealthy for-profit business while simultaneously being subject to the bogus and fraudulent content its platform is also distributing at scale.

There’s a very clear and very major asymmetry here — and one European lawmakers at least look increasingly wise to.

Facebook’s habit of falling back on its massive size as the justification for why it keeps failing on so many types of issues — be it consumer safety or indeed data protection compliance — may even have interesting competition-related implications, as some have suggested.

On the technical front, Schroepfer was asked specifically by the committee why Facebook doesn’t use the facial recognition technology it has already developed — which it applies across its user-base for features such as automatic photo tagging — to block ads that are using a person’s face without their consent.

“We are investigating ways to do that,” he replied. “It is challenging to do technically at scale. And it is one of the things I am hopeful for in the future that would catch more of these things automatically. Usually what we end up doing is a series of different features would figure out that these ads are bad. It’s not just the picture, it’s the wording. What can often catch classes — what we’ll do is catch classes of ads and say ‘we’re pretty sure this is a financial ad, and maybe financial ads we should take a little bit more scrutiny on up front because there is the risk for fraud’.

“This is why we took a hard look at the hype going around cryptocurrencies. And decided that — when we started looking at the ads being run there, the vast majority of those were not good ads. And so we just banned the entire category.”

That response is also interesting, given that many of the fake ads Lewis is complaining about (which incidentally often point to offsite crypto scams) — and indeed which he has been complaining about for months at this point — fall into a financial category.

If Facebook can easily identify classes of ads using its current AI content review systems why hasn’t it been able to proactively catch the thousands of dodgy fake ads bearing Lewis’ image?

Why did it require Lewis to make a full 50 reports — and have to complain to it for months — before Facebook did some ‘proactive’ investigating of its own?

And why isn’t it proposing to radically tighten the moderation of financial ads, period?

The risks to individual users here are stark and clear. (Lewis writes, for example, that “one lady had over £100,000 taken from her”.)

Again it comes back to the company simply not wanting to slow down its revenue engines, nor take the financial hit and business burden of employing enough humans to review all the free content it’s happy to monetize. It also doesn’t want to be regulated by governments — which is why it’s rushing out its own set of self-crafted ‘transparency’ tools, rather than waiting for rules to be imposed on it.

Committee chair Damian Collins concluded one round of dark ads questions for the Facebook CTO by remarking that his overarching concern about the company’s approach is that “a lot of the tools seem to work for the advertiser more than they do for the consumer”. And, really, it’s hard to argue with that assessment.

This is not just an advertising problem either. All sorts of other issues that Facebook had been blasted for not doing enough about can also be explained as a result of inadequate content review — from hate speech, to child protection issues, to people trafficking, to ethnic violence in Myanmar, which the UN has accused its platform of exacerbating (the committee questioned Schroepfer on that too, and he lamented that it is “awful”).

In the Lewis fake ads case, this type of ‘bad ad’ — as Facebook would call it — should really be the most trivial type of content review problem for the company to fix, because it’s an exceedingly narrow issue, involving a single named individual. (Though that might also explain why Facebook hasn’t bothered; albeit having ‘total willingness to trash individual reputations’ as your business M.O. doesn’t make for a nice PR message to sell.)

And of course it goes without saying there are far more — and far more murky and obscure — uses of dark ads that remain to be fully dragged into the light where their impact on people, societies and civilized processes can be scrutinized and better understood. (The difficulty of defining what is a “political ad” is another lurking loophole in the credibility of Facebook’s self-serving plan to ‘clean up’ its ad platform.)

Schroepfer was asked by one committee member about the use of dark ads to try to suppress African American votes in the US elections, for example, but he just reframed the question to avoid answering it — saying instead that he agrees with the principle of “transparency across all advertising”, before repeating the PR line about tools coming in June. Shame those “transparency” tools look so well designed to ensure Facebook’s platform remains as shadily opaque as possible.

Whatever the role of US targeted Facebook dark ads in African American voter suppression, Schroepfer wasn’t at all comfortable talking about it — and Facebook isn’t publicly saying. Though the CTO confirmed to the committee that Facebook employs people to work with advertisers, including political advertisers, to “help them to use our ad systems to best effect”.

“So if a political campaign were using dark advertising your people helping support their use of Facebook would be advising them on how to use dark advertising,” astutely observed one committee member. “So if somebody wanted to reach specific audiences with a specific message but didn’t want another audience to [view] that message because it would be counterproductive, your people who are supporting these campaigns by these users spending money would be advising how to do that wouldn’t they?”

“Yeah,” confirmed Schroepfer, before immediately pointing to Facebook’s ad policy — claiming “hateful, divisive ads are not allowed on the platform”. But of course bad actors will simply ignore your policy unless it’s actively enforced.

“We don’t want divisive ads on the platform. This is not good for us in the long run,” he added, without shedding so much as a chink more light on any of the bad things Facebook-distributed dark ads might have already done.

At one point he even claimed not to know what the term ‘dark advertising’ meant — leading the committee member to read out the definition from Google, before noting drily: “I’m sure you know that.”

Pressed again on why Facebook can’t use facial recognition at scale to at least fix the Lewis fake ads — given it’s already using the tech elsewhere on its platform — Schroepfer played down the value of the tech for these types of security use-cases, saying: “The larger the search space you use, so if you’re looking across a large set of people the more likely you’ll have a false positive — that two people tend to look the same — and you won’t be able to make automated decisions that said this is for sure this person.

“This is why I say that it may be one of the tools but I think usually what ends up happening is it’s a portfolio of tools — so maybe it’s something about the image, maybe the fact that it’s got ‘Lewis’ in the name, maybe the fact that it’s a financial ad, wording that is consistent with a financial ads. We tend to use a basket of features in order to detect these things.”
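Schroepfer's point about false positives in a large "search space" follows from simple probability, which a quick sketch can make concrete (a hypothetical illustration of one-to-many matching in general, not Facebook's actual system; the false-match rate `p` is an assumed figure):

```python
# Why one-to-many face search degrades as the gallery grows: assume each
# comparison against a non-matching face wrongly matches with a small,
# independent probability p. The chance that a probe image falsely matches
# *someone* in a gallery of n faces is then 1 - (1 - p)^n, which climbs
# toward certainty as n grows.

def false_match_probability(p: float, n: int) -> float:
    """Probability of at least one false match across n independent comparisons."""
    return 1 - (1 - p) ** n

# Even a very accurate matcher (assumed p = 1 in a million) becomes
# unreliable against galleries the size of Facebook's user base:
for n in (1_000, 1_000_000, 100_000_000):
    print(n, false_match_probability(1e-6, n))
```

At a thousand faces the false-match chance is negligible; at a hundred million it is a near certainty — which is the scale problem Schroepfer is gesturing at, and why he describes face matching as just one signal in a "basket of features".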

That’s also an interesting response since it was a security use-case that Facebook selected as the first of just two sample ‘benefits’ it presents to users in Europe ahead of the choice it is required (under EU law) to offer people on whether to switch facial recognition technology on or keep it turned off — claiming it “allows us to help protect you from a stranger using your photo to impersonate you”…

Yet judging by its own CTO’s analysis, Facebook’s face recognition tech would actually be pretty useless for identifying “strangers” misusing your photographs — at least without being combined with a “basket” of other unmentioned (and doubtless equally privacy-hostile) technical measures.

So this is yet another example of a manipulative message being put out by a company that is also the controller of a platform that enables all sorts of unknown third parties to experiment with and distribute their own forms of manipulative messaging at vast scale, thanks to a system designed to facilitate — nay, embrace — dark advertising.

What face recognition technology is genuinely useful for is Facebook’s own business. Because it gives the company yet another personal signal to triangulate and better understand who people on its platform are really friends with — which in turn fleshes out the user-profiles behind the eyeballs that Facebook uses to fuel its ad targeting, money-minting engines.

For profiteering use-cases the company rarely sits on its hands when it comes to engineering “challenges”. Hence its erstwhile motto to ‘move fast and break things’ — which has now, of course, morphed uncomfortably into Zuckerberg’s 2018 mission to ‘fix the platform’; thanks, in no small part, to the existential threat posed by dark ads which, up until very recently, Facebook wasn’t saying anything about at all. Except to claim it was “crazy” to think they might have any influence.

And now, despite major scandals and political pressure, Facebook is still showing zero appetite to “fix” its platform — because the issues being thrown into sharp relief are actually there by design; this is how Facebook’s business functions.

“We won’t prevent all mistakes or abuse, but we currently make too many errors enforcing our policies and preventing misuse of our tools. If we’re successful this year then we’ll end 2018 on a much better trajectory,” wrote Zuckerberg in January, underlining how much easier it is to break stuff than put things back together — or even just make a convincing show of fiddling with sticking plaster.

Facebook has a new job posting calling for chip designers

Facebook has posted a job opening looking for an expert in ASICs and FPGAs, two kinds of custom silicon that companies can gear toward specific use cases — particularly in machine learning and artificial intelligence.

There’s been a lot of speculation in the valley as to what Facebook’s take on custom silicon might be, especially as it looks to optimize its machine learning tools, something that CEO Mark Zuckerberg has referred to as a potential solution for identifying misinformation on Facebook using AI. The whispers about Facebook’s custom hardware vary depending on who you talk to, but they generally center on operating over the massive graph of personal data Facebook possesses. Most in the industry speculate that the hardware is being optimized for Caffe2, an AI framework deployed at Facebook, to help it tackle those kinds of complex problems.

An FPGA is a more flexible and modular design, which Intel is championing as a way to adapt to a changing machine-learning-driven landscape. The downside commonly cited for FPGAs is that they are niche hardware that is complex to calibrate and modify, as well as expensive, making them less of a cover-all solution for machine learning projects. An ASIC is similarly a customized piece of silicon that a company can gear toward something specific, like mining cryptocurrency.

Facebook’s director of AI research tweeted about the job posting this morning, noting that he previously worked in chip design.

While the whispers grow louder and louder about Facebook’s potential hardware efforts, this does seem to serve as at least another partial data point that the company is looking to dive deep into custom hardware to deal with its AI problems. That would mostly exist on the server side, though Facebook is looking into other devices like a smart speaker. Given the immense amount of data Facebook has, it would make sense that the company would look into customized hardware rather than use off-the-shelf components like those from Nvidia.

(The wildest rumor we’ve heard about Facebook’s approach is that it’s a diurnal system, flipping between machine training and inference depending on the time of day and whether people are, well, asleep in that region.)

Most of the other large players have found themselves looking into their own customized hardware. Google has its TPU for its own operations, while Amazon is also reportedly working on chips for both training and inference. Apple, too, is reportedly working on its own silicon, which could eventually displace Intel’s chips in its line of computers. Microsoft is also diving into FPGAs as a potential approach for machine learning problems.

Still, that it’s looking into ASICs and FPGAs does seem to be just that: a dipping of toes into the water. Nvidia has a lot of control over the AI space with its GPU technology, which it can optimize for popular AI frameworks like TensorFlow. And there is also a large number of very well-funded startups exploring customized AI hardware, including Cerebras Systems, SambaNova Systems, Mythic, and Graphcore (and that isn’t even getting into the large amount of activity coming out of China). So there are, to be sure, a lot of different interpretations of what this could look like.

One significant problem Facebook may face is that this job opening may simply sit open in perpetuity. Another common criticism of FPGAs as a solution is that developers who specialize in them are hard to find. And while these kinds of problems are becoming much more interesting to work on, it’s not clear whether this is an experiment or a full commitment by Facebook to custom hardware for its operations.

But nonetheless, this seems like more confirmation of Facebook’s custom hardware ambitions, and another piece of validation that Facebook’s data set has grown so large that, if the company hopes to tackle complex AI problems like misinformation, it’s going to have to figure out how to create some kind of specialized hardware to actually deal with it.

A representative from Facebook did not immediately return a request for comment.

Can data science save social media?

The unfettered internet is too often used for malicious purposes and is frequently woefully inaccurate. Social media — especially Facebook — has failed miserably at protecting user privacy and blocking miscreants from sowing discord.

That’s why CEO Mark Zuckerberg was just forced to testify about user privacy before both houses of Congress. And now governmental regulation of Facebook and other social media appears to be a fait accompli.

At this key juncture, the crucial question is whether regulation, in concert with Facebook’s promises to aggressively mitigate its weaknesses, can correct the privacy abuses while still fulfilling Facebook’s stated goal of giving people the power to build transparent communities and bring the world closer together.

The answer is maybe.

What has not been said is that Facebook must embrace data science methodologies initially created in the bowels of the federal government to help protect its two billion users. Simultaneously, Facebook must still enable advertisers — its sole source of revenue — to get the user data required to justify their expenditures.

Specifically, Facebook must promulgate and embrace what is known in high-level security circles as homomorphic encryption (HE), often considered the “Holy Grail” of cryptography, and data provenance (DP). HE would enable Facebook, for example, to generate aggregated reports about its user psychographic profiles so that advertisers could still accurately target groups of prospective customers without knowing their actual identities.

Meanwhile, data provenance – the process of tracing and recording true identities and the origins of data and its movement between databases – could unearth the true identities of Russian perpetrators and other malefactors, or at least flag content of unknown provenance, adding much-needed transparency in cyberspace.

Both methodologies are extraordinarily complex. IBM and Microsoft, in addition to the National Security Agency, have been working on HE for years but the technology has suffered from significant performance challenges. Progress is being made, however. IBM, for example, has been granted a patent on a particular HE method – a strong hint it’s seeking a practical solution – and last month proudly announced that its rewritten HE encryption library now works up to 75 times faster. Maryland-based ENVEIL, a startup staffed by the former NSA HE team, has broken the performance barriers required to produce a commercially viable version of HE, benchmarking millions of times faster than IBM in tested use cases.

How Homomorphic Encryption Would Help Facebook

HE is a technique used to operate on and draw useful conclusions from encrypted data without decrypting it, simultaneously protecting the source of the information. It is useful to Facebook because its massive inventory of personally identifiable information is the foundation of the economics underlying its business model. The more comprehensive the datasets about individuals, the more precisely advertising can be targeted.

HE could keep Facebook’s information safe from hackers and inappropriate disclosure while still extracting the essence of what the data tells advertisers. It performs mathematical operations directly on the encrypted values and then decrypts the result, yielding the same answer that would have been obtained had the data never been encrypted at all.
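To make the mechanics concrete, here is a minimal sketch of the Paillier cryptosystem, a classic additively homomorphic scheme: two values are encrypted, the ciphertexts are multiplied, and decrypting the product yields the sum of the plaintexts. The primes are toy-sized for readability; nothing here is production-grade cryptography, and Facebook has not said which HE scheme, if any, it might use.

```python
import math
import random

# Toy Paillier cryptosystem: the product of two ciphertexts decrypts to
# the sum of the two plaintexts. The primes below are far too small for
# real security; this is an illustration only.
p, q = 1789, 1879
n = p * q
n2 = n * n
g = n + 1                      # standard simple choice of generator
lam = math.lcm(p - 1, q - 1)   # Carmichael function of n
mu = pow(lam, -1, n)           # works because with g = n+1, L(g^lam mod n^2) = lam mod n

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    L = (pow(c, lam, n2) - 1) // n   # the "L function": L(x) = (x - 1) / n
    return (L * mu) % n

# Homomorphic addition: multiply the ciphertexts, decrypt the result.
c_sum = (encrypt(20) * encrypt(22)) % n2
print(decrypt(c_sum))  # 42, computed without ever decrypting the inputs
```

An aggregator could sum encrypted per-user counts this way and reveal only the total to an advertiser, never the individual values — the shape of the aggregated reporting described here.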

A particularly promising sign for HE emerged last year, when Google revealed a new marketing measurement tool that relies on this technology to allow advertisers to see whether their online ads result in in-store purchases.

Unearthing this information requires analyzing datasets belonging to separate organizations, even though those organizations have pledged to protect the privacy and personal information of the data subjects. HE sidesteps this conflict by generating aggregated, non-specific reports about the comparisons between these datasets.

In pilot tests, HE enabled Google to successfully analyze encrypted data about who clicked on an advertisement in combination with another encrypted multi-company dataset that recorded credit card purchase records. With this data in hand, Google was able to provide reports to advertisers summarizing the relationship between the two databases to conclude, for example, that five percent of the people who clicked on an ad wound up purchasing in a store.

Data Provenance

Data provenance has a markedly different core principle. It’s based on the fact that digital information is atomized into 1s and 0s with no intrinsic truth. The digits themselves exist only to disseminate information, whether accurate or wholly fabricated. A well-crafted lie can easily be indistinguishable from the truth and distributed across the internet. What counts is the source of these 1s and 0s. In short, is it legitimate? What is the history of the 1s and 0s?

The art market, as an example, deploys DP to combat fakes and forgeries of the world’s greatest paintings, drawings and sculptures. It uses DP techniques to create a verifiable chain of custody for each piece of artwork, preserving the integrity of the market.

Much the same thing can be done in the online world. For example, a Facebook post referencing a formal statement by a politician, with an accompanying photo, would have provenance records directly linking the post to the politician’s press release and even the specifics of the photographer’s camera. The goal – again – is ensuring that data content is legitimate.
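A minimal way to build such a provenance trail is a hash chain, where each record commits to the hash of the record before it, so altering any earlier step invalidates every link that follows. The event descriptions below are hypothetical, and this sketches only the chain-of-custody idea, not any production provenance standard.

```python
import hashlib
import json

def add_record(prev_hash: str, event: str) -> tuple[str, str]:
    """Serialize an event together with the previous record's hash;
    return (this record's hash, the serialized record)."""
    body = json.dumps({"prev": prev_hash, "event": event}, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest(), body

def verify(chain: list[str]) -> bool:
    """Check that each record's 'prev' field matches the hash of the
    record before it, starting from the all-zero genesis hash."""
    prev = "0" * 64
    for body in chain:
        if json.loads(body)["prev"] != prev:
            return False
        prev = hashlib.sha256(body.encode()).hexdigest()
    return True

# Hypothetical chain of custody for a photo attached to a political statement.
h0, r0 = add_record("0" * 64, "photo captured on press photographer's camera")
h1, r1 = add_record(h0, "photo embedded in politician's press release")
h2, r2 = add_record(h1, "press release referenced in social media post")

print(verify([r0, r1, r2]))                           # True: provenance intact
print(verify([r0, r1.replace("press", "fake"), r2]))  # False: tampering detected
```

Tampering with the middle record changes its hash, so the next record's back-link no longer matches and verification fails, which is exactly the property a chain of custody needs.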

Companies such as Wal-Mart, Kroger, British-based Tesco and Swedish-based H&M, an international clothing retailer, are using or experimenting with new technologies to provide provenance data to the marketplace.

Let’s hope that Facebook and its social media brethren begin studying HE and DP thoroughly and implement them as soon as feasible. Other strong measures – such as the upcoming implementation of the European Union’s General Data Protection Regulation, which will use a big stick to secure personally identifiable information – essentially should be cloned in the U.S. Best of all, however, would be multiple avenues that enhance user privacy and security while preventing breaches in the first place. Nothing less than the long-term viability of the social media giants is at stake.