VCs are failing diverse founders; Elizabeth Warren wants to step in

Elizabeth Warren, who earlier this year confirmed her intent to run for president in 2020, has an ambitious plan to advance entrepreneurs of color.

In a series of tweets published this morning, the Massachusetts senator proposed a $7 billion Small Business Equity Fund to provide grants to Black, Latinx, Native American and other minority entrepreneurs if she’s elected president. The initiative would be funded by her “Ultra-Millionaire Tax,” a two-cent tax on every dollar of wealth above $50 million that the presidential hopeful first outlined in January.
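The arithmetic behind that tax is simple to sketch. The following is an illustrative toy calculation only (the function name is invented, and it models only the single two-cent rate above $50 million that the article describes):

```python
def ultra_millionaire_tax(wealth: float) -> float:
    """Illustrative only: two cents per dollar of wealth above $50 million,
    as the proposal is described in this article. Wealth at or below the
    threshold owes nothing."""
    threshold = 50_000_000
    rate = 0.02  # two cents per dollar above the threshold
    return max(0.0, wealth - threshold) * rate

# A household worth $60 million would owe 2% of the $10 million above the threshold.
print(ultra_millionaire_tax(60_000_000))  # 200000.0
```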

The fund would be managed by the Department of Economic Development, a new government entity to be constructed under the Warren administration. With a goal of creating and defending American jobs, the Department of Economic Development would replace the Commerce Department and “subsume other agencies like the Small Business Administration and the Patent and Trademark Office, and include research and development programs, worker training programs, and export and trade authorities like the Office of the U.S. Trade Representative,” Warren explained.

The Small Business Equity Fund would issue grant funding exclusively to entrepreneurs who are eligible for the Small Business Administration’s existing 8(a) program and who have less than $100,000 in household wealth. The aim is to provide capital to 100,000 new minority-owned businesses, creating 1.1 million new jobs.

Founders of color receive a disproportionately small share of venture capital funding. There’s insufficient data on the topic, but research from digitalundivided published last year suggests the median amount of funding raised by Black women, for example, is $0. According to the same study, Black women have raised just 0.0006% of all tech venture funding since 2009.

Startups founded by all-female teams, despite efforts to level the playing field for female entrepreneurs, raised just 2.2% of venture capital investment in 2018.

VCs are majority white and male. Plus, they have a proven tendency to invest their capital in entrepreneurs who look like them or who resemble founders who were previously successful. In other words, VCs are continuously on the hunt for the next Mark Zuckerberg.

“Even if we fully close the startup capital gap, deep systemic issues will continue to tilt the playing field,” Warren wrote. “86% of venture capitalists are white, and studies show that investors are more likely to partner with entrepreneurs who look like them. This tilts the field against entrepreneurs of color. So I plan to address this disparity head on too. I will require states and cities administering my new Fund to work with diverse investment managers—putting $7 billion in the hands of minority-and women-owned managers.”

Warren this morning also announced plans to “direct” federal pension and retirement funds to recruit diverse investment managers and to require states and cities administering the Small Business Equity Fund to work with diverse investment managers. Finally, if elected, Warren would also triple the budget of the Minority Business Development Agency, which helps entrepreneurs of color access funding networks and business advice.

Warren, throughout her campaign for the presidency, has made a number of critiques of the tech industry.

In March, the senator announced her plan to break up big tech.

“Twenty-five years ago, Facebook, Google, and Amazon didn’t exist,” Warren wrote. “Now they are among the most valuable and well-known companies in the world. It’s a great story — but also one that highlights why the government must break up monopolies and promote competitive markets.”

Facebook will not remove deepfakes of Mark Zuckerberg, Kim Kardashian and others from Instagram

Facebook will not remove the faked videos featuring Mark Zuckerberg, Kim Kardashian and President Donald Trump from Instagram, the company said in a statement.

Earlier today, Vice News reported on the existence of videos created by the artists Bill Posters and Daniel Howe with video and audio manipulation companies including Canny AI, Respeecher and Reflect.

The work, featured in a site-specific installation in the UK as well as circulating in video online, was the first test of Facebook’s content review policies since the company’s decision not to remove a manipulated video of House Speaker Nancy Pelosi received withering criticism from Democratic political leadership.

“We have said all along, poor Facebook, they were unwittingly exploited by the Russians,” Pelosi said in an interview with radio station KQED, quoted by The New York Times. “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”

After the late May incident, Facebook’s Neil Potts testified before a smorgasbord of international regulators in Ottawa about deepfakes, saying the company would not remove a video of Mark Zuckerberg. This appears to be the first instance testing the company’s resolve.

“We will treat this content the same way we treat all misinformation on Instagram. If third-party fact-checkers mark it as false, we will filter it from Instagram’s recommendation surfaces like Explore and hashtag pages,” said an Instagram spokesperson in an email to TechCrunch.

The videos appear not to violate any Facebook policies, which means that they will be subject to the treatment any video containing misinformation gets on any of Facebook’s platforms. So the videos will be blocked from appearing in the Explore feature and hashtags won’t work with the offending material.

Facebook already uses image detection technology to find content that has been debunked by its third-party fact checking program on Instagram. When misinformation is only present on Instagram the company is testing the ability to promote links into the fact-checking product on Facebook.

“Spectre interrogates and reveals many of the common tactics and methods that are used by corporate or political actors to influence people’s behaviours and decision making,” said Posters in an artist’s statement about the project. “In response to the recent global scandals concerning data, democracy, privacy and digital surveillance, we wanted to tear open the ‘black box’ of the digital influence industry and reveal to others what it is really like.”

Facebook’s consistent decisions not to remove offending content stand in contrast with YouTube, which has taken the opposite approach in dealing with manipulated videos and other material that violates its policies.

YouTube removed the Pelosi video and recently took steps to demonetize and remove videos from the platform that violated its policies on hate speech — including a wholesale purge of content about Nazism.

These issues take on greater significance as the U.S. heads into the next Presidential election in 2020.

“In 2016 and 2017, the UK, US and Europe witnessed massive political shocks as new forms of computational propaganda employed by social media platforms, the ad industry, and political consultancies like Cambridge Analytica were exposed by journalists and digital rights advocates,” said Howe, in a statement about his Spectre project. “We wanted to provide a personalized experience that allows users to feel what is at stake when the data taken from us in countless everyday actions is used in unexpected and potentially dangerous ways.”

Perhaps, the incident will be a lesson to Facebook in what’s potentially at stake as well.

 

Indian PM Narendra Modi’s reelection spells more frustration for US tech giants

Amazon and Walmart’s problems in India look set to continue after Narendra Modi, the biggest force in the country’s politics in decades, led his Hindu nationalist Bharatiya Janata Party to a historic landslide re-election on Thursday, reaffirming his popularity in the eyes of the world’s largest democracy.

The re-election, which gives Modi’s government another five years in power, will in many ways chart the path of India’s burgeoning startup ecosystem, as well as the local play of Silicon Valley companies that have grown increasingly wary of recent policy changes.

At stake is also the future of India’s internet, the second largest in the world. With more than 550 million internet users, the nation has emerged as one of the last great growth markets for Silicon Valley companies. Google, Facebook, and Amazon count India as one of their largest and fastest growing markets. And until late 2016, they enjoyed great dynamics with the Indian government.

But in recent years, New Delhi has ordered more internet shutdowns than ever before and puzzled many over crackdowns on sometimes legitimate websites. To top that, the government recently proposed a law that would require any intermediary — telecom operators, messaging apps, and social media services among others — with more than 5 million users to introduce a number of changes to how they operate in the nation. More on this shortly.

Growing tension

Gender, race and social change in tech; Moira Weigel on the Internet of Women, Part Two

Tech ethics can mean a lot of different things, but surely one of the most critical, unavoidable, and yet somehow still controversial propositions in the emerging field of ethics in technology is that tech should promote gender equality. But does it? And to the extent it does not, what (and who) needs to change?

In this second of a two-part interview “On The Internet of Women,” Harvard fellow and Logic magazine founder and editor Moira Weigel and I discuss the future of capitalism and its relationship to sex and tech; the place of ambivalence in feminist ethics; and Moira’s personal experiences with #MeToo.

Greg E.: There’s a relationship between technology and feminism, and technology and sexism for that matter. Then there’s a relationship between all of those things and capitalism. One of the underlying themes in your essay “The Internet of Women,” that I thought made it such a kind of, I’d call it a seminal essay, but that would be a silly term to use in this case…

Moira W.: I’ll take it.

Greg E.: One of the reasons I thought your essay should be required basic reading in tech ethics is that you argue we need to examine the degree to which sexism is a part of capitalism.

Moira W.: Yes.

Greg E.: Talk about that.

Moira W.: This is a big topic! Where to begin?

Capitalism, the social and economic system that emerged in Europe around the sixteenth century and that we still live under, has a profound relationship to histories of sexism and racism. It’s really important to recognize that sexism and racism themselves are historical phenomena.

They don’t exist in the same way in all places. They take on different forms at different times. I find that very hopeful to recognize, because it means they can change.

It’s really important not to get too pulled into the view that men have always hated women and that there will always be this war of the sexes which, best case scenario, gets temporarily resolved in the depressing truce of conventional heterosexuality. The conditions we live under are not the only possible conditions—they are not inevitable.

A fundamental Marxist insight is that capitalism necessarily involves exploitation. In order to grow, a company needs to pay people less for their work than that work is worth. Race and gender help make this process of exploitation seem natural.

Image via Getty Images / gremlin

Certain people are naturally inclined to do certain kinds of lower status and lower waged work, and why should anyone be paid much to do what comes naturally? And it just so happens that the kinds of work we value less are seen as more naturally “female.” This isn’t just about caring professions that have been coded female—nursing and teaching and so on, although it does include those.

In fact, the history of computer programming provides one of the best examples. In the early decades, when writing software was seen as rote work and lower status, it was mostly done by women. As Mar Hicks and other historians have shown, as the profession became more prestigious and more lucrative, women were very actively pushed out.

You even see this with specific coding languages. As more women learn, say, JavaScript, it becomes seen as feminized—seen as less impressive or valuable than Python, a “softer” skill. This perception, that women have certain natural capacities that should be free or cheap, has a long history that overlaps with the history of capitalism. At some level, it is a byproduct of the rise of wage labor.

To a medieval farmer it would have made no sense to say that when his wife gave birth to the children who worked their farm, killed the chickens and cooked them, or did work around the house, that wasn’t “work,” [but when he] took the chickens to the market to sell them, that was. Right?

A long line of feminist thinkers has drawn attention to this in different ways. One slogan from the 70s was, ‘whose work produces the worker?’ Women do that work, but neither companies nor the state, which profit from this process, expect to pay for it.

Why am I saying all this? My point is: race and gender have been very useful historically for getting capitalism things for free—and for justifying that process. Of course, they’re also very useful for dividing exploited people against one another. So that a white male worker hates his black coworker, or his leeching wife, rather than his boss.

Greg E.: I want to ask more about this topic and technology; you are a publisher of Logic magazine which is one of the most interesting publications about technology that has come on the scene in the last few years.

Facebook introduces ‘one strike’ policy to combat abuse of its live-streaming service

Facebook is cracking down on its live streaming service after it was used to broadcast the shocking mass shootings that left 50 dead at two Christchurch mosques in New Zealand in March. The social network said today that it is implementing a ‘one strike’ rule that will prevent users who break its rules from using the Facebook Live service.

“From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time — for example 30 days — starting on their first offense. For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time,” Facebook VP of integrity Guy Rosen wrote.

The company said it plans to implement additional restrictions for these people, which will include limiting their ability to take out ads on the social network. Those who violate Facebook’s policy against “dangerous individuals and organizations” — a new introduction that it used to ban a number of right-wing figures earlier this month — will be restricted from using Live, although Facebook isn’t being specific on the duration of the bans or what it would take to trigger a permanent bar from live-streaming.

Facebook is increasingly using AI to detect and counter violent and dangerous content on its platform, but that approach simply isn’t working.

Beyond the challenge of non-English languages (Facebook’s AI detection system has failed in Myanmar, for example, despite what CEO Mark Zuckerberg had claimed), the detection system wasn’t robust in dealing with the aftermath of Christchurch.

The stream itself was not reported to Facebook until 12 minutes after it had ended, while Facebook failed to block 20 percent of the videos of the live stream that were later uploaded to its site. Indeed, TechCrunch found several videos still on Facebook more than 12 hours after the attack despite the social network’s efforts to cherry pick ‘vanity stats’ that appeared to show its AI and human teams had things under control.

Acknowledging that failure indirectly, Facebook said it will invest $7.5 million in “new research partnerships with leading academics from three universities, designed to improve image and video analysis technology.”

Early partners in this initiative include the University of Maryland, Cornell University and the University of California, Berkeley, which it said will assist with techniques to detect manipulated images, video and audio. Another objective is to use technology to identify the difference between those who deliberately manipulate media and those who do so unwittingly.

Facebook said it hopes to add other research partners to the initiative, which is also focused on combating deepfakes.

“Although we deployed a number of techniques to eventually find these variants, including video and audio matching technology, we realized that this is an area where we need to invest in further research,” Rosen conceded in the blog post.

Facebook’s announcement comes less than one day after a collection of world leaders, including New Zealand Prime Minister Jacinda Ardern, called on tech companies to sign a pledge to increase their efforts to combat toxic content.

According to people working for the French Economy Ministry, the Christchurch Call doesn’t contain any specific recommendations for new regulation. Rather, countries can decide what they mean by violent and extremist content.

“For now, it’s a focus on an event in particular that caused an issue for multiple countries,” French Digital Minister Cédric O said in a briefing with journalists.

Facebook pivots to what it wishes it was

In Facebook’s dreams, it’s a clean and private place. People spend their time having thoughtful discussions in “meaningful” Groups, planning offline meetups with Events, or laughing together in a Facebook Watch party.

In reality, Facebook is a cluttered mess of features that seem to constantly leak user data. People waste their time viewing inane News Feed posts from “friends” they never talk to, enviously stalking through photos of peers, or chowing on click-bait articles and viral videos in isolation.

That’s why Facebook is rolling out what could be called an “aspirational redesign” known as FB5. Rather than polishing what Facebook was, it tries to spotlight what it wants to be. “This is the biggest change we’ve made to the Facebook app and site in five years,” CEO Mark Zuckerberg said to open Facebook’s F8 conference yesterday.

 

The New Facebook

Most noticeably, that starts with sucking much of the blue out of the Facebook interface to make it look sparse and calming — despite a More button that unveils the social network’s bloat into dozens of rarely used features. A new logo features a brighter blue bubble around Facebook’s distinctive white f, which attempts to put a more uplifting spin on a bruised brand.

Functionally, FB5 means placing Groups near the center of a freshly tabbed interface for both Facebook’s website and app, and putting suggestions for new ones to join across the service. “Everywhere there are friends, there should be Groups,” says Fidji Simo, head of the Facebook app. Groups already has 1 billion monthly users, so Facebook is following the behavior pattern and doubling down. But Facebook’s goal is not only to have 2.38 billion people using the feature — the same number as use its whole app — but to get them all into meaningful Groups that emblemize their identity. 400 million already are. And now Groups for specific interests like gaming or health support will get special features, and power users will get a dashboard of updates across all their communities.

Groups will be flanked by Marketplace, perhaps the Facebook feature with the most latent potential. It’s a rapidly emerging use case Facebook wants to fuel. Zuckerberg took Craigslist, added real identity to thwart bad behavior, and now is bolting it to the navigation bar of the most-used app on earth. The result is a place where it’s easy to put things up for sale and get tons of viewers. I once sold a couch on Marketplace in 20 minutes. Now sellers can take payments directly in the app instead of with cash or Venmo, and they can offer to ship items anywhere at the buyer’s expense. By following Zuckerberg’s mandate that 2019 focus on commerce, Facebook has become a viable Shopify competitor.

If Groups is what’s already working about Facebook’s future, Watch is the opposite. It’s a product designed to capture the video viewing bonanza Facebook observes on Netflix and YouTube. But without tentpole content like a “Game of Thrones” or “Stranger Things,” it’s failed to impact the cultural zeitgeist. The closest thing it has to must-see video is Buffy the Vampire Slayer re-runs and a docuseries on NBA star Steph Curry. Facebook claims 75 million people now Watch for at least one minute per day, though those 60 seconds don’t have to be sequential. That’s still just 4 percent of its users. And a Diffusion study found 50 percent of adult US Facebook users had never even heard of Watch. Sticking it front and center demonstrates Facebook’s commitment to making Watch a hit even if it has to cram it down our throats.

Not The Old Facebook

The products of the past got little love on stage at F8. Nothing new for News Feed, Facebook’s mint but also the source of its misinformation woes. In the age of Snapchat and Zuckerberg’s newfound insistence on ephemerality to prevent embarrassment, the Timeline profile chronicling your whole Facebook life got nary a mention. And Pages for businesses, which were the center of its monetization strategy years ago, didn’t find space in the keynote, similar to how they’ve been butted out of the News Feed by competition and Facebook’s philosophical shift from public content to friends and family.

 

The one thing we heard a lot about but didn’t actually see much of was privacy. Zuckerberg started the conference declaring “The future is private!” He spoke about how Facebook plans to make its messaging apps encrypted, how it wants to be a living room rather than just a town hall, and how it’s following the shift in user behavior away from broadcasting. But we didn’t see any new privacy protections for the developer platform, a replacement for the chief security officer role that’s been vacant for nine months, or the Clear History feature Zuckerberg announced last year.

“I get that a lot of people aren’t sure that we’re serious about this. I know that we don’t exactly have the strongest reputation on privacy right now, to put it lightly,” Zuckerberg joked without seeming to generate a single laugh. Combined with having little to show to enhance privacy, making fun of such a dire situation doesn’t instill much confidence. When Zuckerberg does take things seriously, it quickly manifests itself in the product, as with Facebook’s 2012 shift to mobile, or in the company, as with 2018’s doubling of security headcount. He knew mobile and content moderation failures could kill his network. But does someone who told Time magazine in 2010 that “What people want isn’t complete privacy” truly see a loose stance on privacy as an existential threat?

Interoperable, encrypted messaging will boost privacy, but it’s also just good business logic given Zuckerberg’s intention to own chat — the heart of your phone. Facebook’s creepiness stems from it sucking in data to power ad targeting. Nothing new was announced to address that. Despite his words, perhaps Zuckerberg doesn’t aspire to make Facebook as private as he aspired to make it mobile and secure. 

Wired reported that Zuckerberg authored a strategy book given to all employees ahead of the IPO that noted “If we don’t create the thing that kills Facebook, someone else will.” But F8 offered a new interpretation. Maybe given the lack of direct competitors in its league, and the absence of a mass exodus over its constant privacy scandals, it was the outdated product itself that was killing Facebook. The permanent Facebook. The all-you-do-is-scroll Facebook. The bored-of-my-friends Facebook. Users were being neglected rather than pushed or stolen. By ignoring the past and emphasizing the products it aspires to have dominate tomorrow — Groups, Marketplace, Watch — Facebook can start to unchain itself from the toxic brand poisoning its potential.

Friend portability is the must-have Facebook regulation

Choice for consumers compels fair treatment by corporations. When people can easily move to a competitor, it creates a natural market dynamic coercing a business to act right. When we can’t, other regulations just leave us trapped with a pig in a fresh coat of lipstick.

That’s why as the FTC considers how many billions to fine Facebook or which executives to stick with personal liability or whether to go full-tilt and break up the company, I implore it to consider the root of how Facebook gets away with abusing user privacy: there’s no simple way to switch to an alternative.

If Facebook users are fed up with the surveillance, security breaches, false news, or hatred, there’s no western general purpose social network with scale for them to join. Twitter is for short-form public content, Snapchat is for ephemeral communication. Tumblr is neglected. Google+ is dead. Instagram is owned by Facebook. And the rest are either Chinese, single-purpose, or tiny.

No, I don’t expect the FTC to launch its own “Fedbook” social network. But what it can do is pave an escape route from Facebook so worthy alternatives become viable options. That’s why the FTC must require Facebook to offer truly interoperable data portability for the social graph.

In other words, the government should pass regulations forcing Facebook to let you export your friend list to other social networks in a privacy-safe way. This would allow you to connect with or follow those people elsewhere so you could leave Facebook without losing touch with your friends. The increased threat of people ditching Facebook for competitors would create a much stronger incentive to protect users and society.

The slate of potential regulations for Facebook currently being discussed by the FTC’s heads includes a $3 billion to $5 billion fine or greater, holding Facebook’s CEO personally liable for violations of an FTC consent decree, creating new privacy and compliance positions (including one held by an executive, a post that could be filled by Zuckerberg himself), and creating an independent oversight committee to review privacy and product decisions, according to The New York Times and The Washington Post. More extreme measures like restricting how Facebook collects and uses data for ad targeting, blocking future acquisitions, or breaking up the company are still possible but seemingly less likely.

Facebook co-founder Chris Hughes (right) recently wrote a scathing call to break up Facebook.

Breaking apart Facebook is a tantalizing punishment for the company’s wrongdoings. Still, I somewhat agree with Zuckerberg’s response to co-founder Chris Hughes’ call to split up the company: he said it “isn’t going to do anything to help” directly fix Facebook’s privacy or misinformation issues. Given Facebook likely wouldn’t try to make more acquisitions of big social networks under all this scrutiny, it’d benefit from voluntarily pledging not to attempt these buys for at least three to five years. Otherwise, regulators could impose that ban, which might be more politically attainable with fewer messy downstream effects.

Yet without this data portability regulation, Facebook can pay a fine and go back to business as usual. It can accept additional privacy oversight without fundamentally changing its product. It can become liable for upholding the bare minimum letter of the law while still breaking the spirit. And even if it was broken up, users still couldn’t switch from Facebook to Instagram, or from Instagram and WhatsApp to somewhere new.

Facebook Kills Competition With User Lock-In

When faced with competition in the past, Facebook has snapped into action improving itself. Fearing Google+ in 2011, Zuckerberg vowed “Carthage must be destroyed” and the company scrambled to launch Messenger, the Timeline profile, Graph Search, photo improvements and more. After realizing the importance of mobile in 2012, Facebook redesigned its app, reorganized its teams, and demanded employees carry Android phones for “dogfooding” testing. And when Snapchat was still rapidly growing into a rival, Facebook cloned its Stories and is now adopting the philosophy of ephemerality.

Mark Zuckerberg visualizes his social graph at a Facebook conference

Each time Facebook felt threatened, it was spurred to improve its product for consumers. But once it had defeated its competitors, muted their growth, or confined them to a niche purpose, Facebook’s privacy policies worsened. Anti-trust scholar Dina Srinivasan explains this in her summary of her paper “The Anti-Trust Case Against Facebook”:

“When dozens of companies competed in an attempt to win market share, and all competing products were priced at zero—privacy quickly emerged as a key differentiator. When Facebook entered the market it specifically promised users: “We do not and will not use cookies to collect private information from any user.” Competition didn’t only restrain Facebook’s ability to track users. It restrained every social network from trying to engage in this behavior . . .  the exit of competition greenlit a change in conduct by the sole surviving firm. By early 2014, dozens of rivals that initially competed with Facebook had effectively exited the market. In June of 2014, rival Google announced it would shut down its competitive social network, ceding the social network market to Facebook.

For Facebook, the network effects of more than a billion users on a closed-communications protocol further locked in the market in its favor. These circumstances—the exit of competition and the lock-in of consumers—finally allowed Facebook to get consumers to agree to something they had resisted from the beginning. Almost simultaneous with Google’s exit, Facebook announced (also in June of 2014) that it would begin to track users’ behavior on websites and apps across the Internet and use the data gleaned from such surveillance to target and influence consumers. Shortly thereafter, it started tracking non-users too. It uses the “like” buttons and other software licenses to do so.”

This is why the FTC must seek regulation that not only punishes Facebook for its wrongdoings, but that lets consumers do the same. Users can punch holes in Facebook by leaving, both depriving it of ad revenue and reducing its network effect for others. Empowering them with the ability to take their friend list with them gives users a taller seat at the table. I’m calling for what University of Chicago professors Luigi Zingales and Guy Rolnik termed a Social Data Portability Act.

Luckily, Facebook already has a framework for this data portability through a feature called Find Friends. You connect your Facebook account to another app, and you can find your Facebook friends who are already on that app.

But the problem is that in the past, Facebook has repeatedly blocked competitors from using Find Friends. That includes cutting off Twitter, Vine, Voxer, and MessageMe, while Phhhoto was blocked from letting you find your Instagram friends…six months before Instagram copied Phhhoto’s core back-and-forth GIF feature and named it Boomerang. Then there’s the issue that you need an active Facebook account to use Find Friends. That nullifies its utility as a way to bring your social graph with you when you leave Facebook.

Facebook’s “Find Friends” feature used to let Twitter users follow their Facebook friends, but Facebook later cut off access for competitors including Twitter and Vine seen here

The social network does offer a way to “Download Your Information,” which is helpful for exporting photos, status updates, messages, and other data about you. Yet the friend list can only be exported as a text list of names in HTML or JSON format. Names aren’t linked to their corresponding Facebook profiles or any unique identifier, so there’s no way to find your friend John Smith amongst everyone with that name on another app. And less than 5 percent of my 2,800 connections had used the little-known option to allow friends to export their email address. What about the big “Data Transfer Project” Facebook announced 10 months ago in partnership with Google, Twitter, and Microsoft to provide more portability? It’s released nothing so far, raising questions of whether it was vaporware designed to ward off regulators.
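To see why a names-only export is useless for matching, consider a minimal sketch. The data shapes here are hypothetical illustrations, not Facebook’s actual export format:

```python
# Hypothetical shape of a names-only friend export: no profile links, no unique IDs
exported_friends = ["John Smith", "Jane Doe"]

# Another app's user table, keyed by its own internal IDs
other_app_users = {"u1001": "John Smith", "u2002": "John Smith", "u3003": "Jane Doe"}

# Matching by name alone cannot tell which "John Smith" is actually your friend
matches = {name: [uid for uid, n in other_app_users.items() if n == name]
           for name in exported_friends}
print(matches)  # {'John Smith': ['u1001', 'u2002'], 'Jane Doe': ['u3003']}
```

Without a unique identifier attached to each name, every duplicate name produces multiple candidates and the receiving app has no way to disambiguate.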

Essentially, this all means that Facebook provides zero portability for your friendships. That’s what regulators need to change. There’s already precedent for this. The Telecommunications Act of 1996 saw the FCC require phone service carriers to allow customers to easily port their numbers to another carrier rather than having to be assigned a new number. If you think of a phone number as a method by which friends connect with you, it would be reasonable for regulators to declare that the modern equivalent — your social network friend connections — must be similarly portable.

How To Unchain Our Friendships

Facebook should be required to let you export a truly interoperable friend list that can be imported into other apps in a privacy-safe way.

To do that, Facebook should allow you to download a version of the list that features hashed versions of the phone numbers and email addresses friends used to sign up. You wouldn’t be able to read that contact info or freely import and spam people. But Facebook could be required to share documentation teaching developers of other apps to build a feature that safely cross-checks the hashed numbers and email addresses against those of people who had signed up for their app. That developer wouldn’t be able to read the contact info from Facebook either, or store any useful data about people who hadn’t signed up for their app. But if the phone number or email address of someone in your exported Facebook friend list matched one of their users, they could offer to let you connect with or follow them.
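The cross-checking described above could be sketched roughly as follows. This is an illustrative toy, not anything Facebook has built: the SHA-256 choice, the normalization rules, and all of the names and sample contacts are assumptions. A real scheme would also need safeguards (salting, rate limits, or private set intersection) because phone numbers have so little entropy that bare hashes can be reversed by brute force.

```python
import hashlib

def normalize_phone(phone: str) -> str:
    # Strip formatting so "+1 (555) 010-4477" and "1-555-010-4477" hash identically.
    return "".join(ch for ch in phone if ch.isdigit())

def hash_contact(value: str) -> str:
    # One-way hash: apps can compare values without ever reading them.
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

# Hypothetical Facebook export: hashed contact info only, nothing readable.
exported_friend_hashes = {
    hash_contact(normalize_phone("+1 (555) 010-4477")),
    hash_contact("jsmith@example.com"),
}

# The receiving app hashes its own users' sign-up info the same way...
app_users = {
    hash_contact(normalize_phone("1-555-010-4477")): "user_42",
    hash_contact("someone.else@example.com"): "user_99",
}

# ...and offers to reconnect you with any overlap.
matches = [uid for h, uid in app_users.items() if h in exported_friend_hashes]
print(matches)  # → ['user_42']
```

The key property is that neither side learns anything about non-matching contacts: a hash either collides with a known user's hash or it is an opaque string.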

This system would let you save your social graph, delete your Facebook account, and then find your friends on other apps without ever jeopardizing the privacy of their contact info. Users would no longer be locked into Facebook and could freely choose to move their friendships to whatever social network treats them best. And Facebook wouldn’t be able to block competitors from using it.

The result would much more closely align the goals of users, Facebook, and the regulators. Facebook wouldn’t merely be responsible to the government for technically complying with new fines, oversight, or liability. It would finally have to compete to provide the best social app rather than relying on its network effect to handcuff users to its service.

This same model of data portability regulation could be expanded to any app with over 1 billion users, or even 100 million users to ensure YouTube, Twitter, Snapchat, or Reddit couldn’t lock down users either. By only applying the rule to apps with a sufficiently large user base, the regulation wouldn’t hinder new startup entrants to the market and accidentally create a moat around well-funded incumbents like Facebook that can afford the engineering chore. Data portability regulation combined with a fine, liability, oversight, and a ban on future acquisitions of social networks could set Facebook straight without breaking it up.

Users have a lot of complaints about Facebook that go beyond strictly privacy. But their recourse is always limited because for many functions there’s nowhere else to go, and it’s too hard to go there. By fixing the latter, the FTC could stimulate the rise of Facebook alternatives so that users rather than regulators can play king-maker.

Zuckerberg says breaking up Facebook “isn’t going to help”

With the look of someone betrayed, Facebook’s CEO has fired back at co-founder Chris Hughes and his brutal NYT op-ed calling for regulators to split up Facebook, Instagram, and WhatsApp. “When I read what he wrote, my main reaction was that what he’s proposing that we do isn’t going to do anything to help solve those issues. So I think that if what you care about is democracy and elections, then you want a company like us to be able to invest billions of dollars per year like we are in building up really advanced tools to fight election interference,” Zuckerberg told France Info while in Paris to meet with French President Emmanuel Macron.

Zuckerberg’s argument boils down to the idea that Facebook’s specific problems with privacy, safety, misinformation, and speech won’t be directly addressed by breaking up the company, and instead would actually hinder its efforts to safeguard its social networks. The Facebook family of apps would theoretically have fewer economies of scale when investing in safety technology like artificial intelligence to spot bots spreading voter suppression content.

Facebook’s co-founders (from left): Dustin Moskovitz, Chris Hughes, and Mark Zuckerberg

Hughes claims that “Mark’s power is unprecedented and un-American” and that Facebook’s rampant acquisitions and copying have made it so dominant that it deters competition. The call echoes other early execs like Facebook’s first president Sean Parker and growth chief Chamath Palihapitiya who’ve raised alarms about how the social network they built impacts society.

But Zuckerberg argues that Facebook’s size benefits the public. “Our budget for safety this year is bigger than the whole revenue of our company was when we went public earlier this decade. A lot of that is because we’ve been able to build a successful business that can now support that. You know, we invest more in safety than anyone in social media,” Zuckerberg told journalist Laurent Delahousse.

The Facebook CEO’s comments were largely missed by the media, in part because the TV interview was heavily dubbed into French with no transcript. But written out here for the first time, his quotes offer a window into how deeply Zuckerberg dismisses Hughes’ claims. “Well [Hughes] was talking about a very specific idea of breaking up the company to solve some of the social issues that we face,” Zuckerberg says before trying to decouple solutions from antitrust regulation. “The way that I look at this is, there are real issues. There are real issues around harmful content and finding the right balance between expression and safety, for preventing election interference, on privacy.”

Claiming that a breakup “isn’t going to do anything to help” is a more unequivocal refutation of Hughes’ claim than that of Facebook VP of communications and former UK deputy Prime Minister Nick Clegg. He wrote in his own NYT op-ed today that “what matters is not size but rather the rights and interests of consumers, and our accountability to the governments and legislators who oversee commerce and communications . . . Big in itself isn’t bad. Success should not be penalized.”

Mark Zuckerberg and Chris Hughes

Something certainly must be done to protect consumers. Perhaps that’s a breakup of Facebook. At the least, banning it from acquiring more social networks of sufficient scale so it couldn’t snatch another Instagram from its crib would be an expedient and attainable remedy.

But the sharpest point of Hughes’ op-ed was how he identified that users are trapped on Facebook. “Competition alone wouldn’t necessarily spur privacy protection — regulation is required to ensure accountability — but Facebook’s lock on the market guarantees that users can’t protest by moving to alternative platforms,” he writes. After Cambridge Analytica “people did not leave the company’s platforms en masse. After all, where would they go?”

That’s why given critics’ call for competition and Zuckerberg’s own support for interoperability, a core tenet of regulation must be making it easier for users to switch from Facebook to another social network. As I’ll explore in an upcoming piece, until users can easily bring their friend connections or ‘social graph’ somewhere else, there’s little to compel Facebook to treat them better.

Three ‘new rules’ worth considering for the internet

In a recent commentary, Facebook’s Mark Zuckerberg argues for new internet regulation starting in four areas: harmful content, election integrity, privacy and data portability. He also advocates that government and regulators “need a more active role” in this process. This call to action should be welcome news as the importance of the internet to nearly all aspects of people’s daily lives seems indisputable. However, Zuckerberg’s new rules could be expanded, as part of the follow-on discussion he calls for, to include several other necessary areas: security-by-design, net worthiness and updated internet business models.

Security-by-design should be an equal priority with functionality for network connected devices, systems and services which comprise the Internet of Things (IoT). One estimate suggests that the number of connected devices will reach 125 billion by 2030, and will increase 50% annually in the next 15 years. Each component on the IoT represents a possible insecurity and point of entry into the system. The Department of Homeland Security has developed strategic principles for securing the IoT. The first principle is to “incorporate security at the design phase.” This seems highly prudent and very timely, given the anticipated growth of the internet.

Ensuring net worthiness — that is, that our internet systems meet appropriate and up-to-date standards — seems another essential issue, one that might be addressed under Zuckerberg’s call for enhanced privacy. Today’s internet is a hodge-podge of different generations of digital equipment, unclear standards for what constitutes internet privacy and growing awareness of the likely scenarios that could threaten networks and users’ personal information.

Recent cyber incidents and concerns have illustrated these shortfalls. One need only look at the Office of Personnel Management (OPM) hack that exposed the private information of more than 22 million government civilian employees to see how older methods for storing information, lack of network monitoring tools and insecure network credentials resulted in a massive data theft. Many networks, including some supporting government systems and hospitals, are still running Windows XP software from the early 2000s. One estimate is that 5.5% of the 1.5 billion devices running Microsoft Windows are running XP, which is now “well past its end-of-life.” In 2016, a distributed denial of service attack against the web security firm Dyn exposed critical vulnerabilities in the IoT that may also need to be addressed.

Updated business models may also be required to address internet vulnerabilities. The internet has its roots as an information-sharing platform. Over time, a vast array of information and services have been made available to internet users through companies such as Twitter, Google and Facebook. And these services have been made available for modest and, in some cases, no cost to the user.

Regulation is necessary, but normally occurs only once potential for harm becomes apparent.

This means that these companies are expending their own resources to collect data and make it available to users. To defray the costs and turn a profit, the companies have taken to selling advertisements and user information. In turn, this means that private information is being shared with third parties.

As the future of the internet unfolds, it might be worth considering what people would be willing to pay for access to traffic cameras to aid commutes, social media information concerning friends or upcoming events, streaming video entertainment and unlimited data on demand. In fact, the data that is available to users has likely been compiled using a mix of publicly available and private data. Failure to revise the current business model will likely only encourage more of the same concerns with internet security and privacy issues. Finding new business models — perhaps even a fee-for-service for some high-end services — that would support a vibrant internet, while allowing companies to be profitable, could be a worthy goal.

Finally, Zuckerberg’s call for government and regulators to have a more active role is imperative, but likely will continue to be a challenge. As seen in attempts at regulating technologies such as transportation safety, offshore oil drilling and drones, such regulation is necessary, but normally occurs only once potential for harm becomes apparent. The recent accidents involving the Boeing 737 Max 8 aircraft could be seen as one example of the importance of such government regulation and oversight.

Zuckerberg’s call to action suggests a pathway to move toward a new and improved internet. Of course, as Zuckerberg also highlights, his four areas would only be a start, and a broader discussion should be had as well. Incorporating security-by-design, net worthiness and updated business models could be part of this follow-on discussion.

Facebook co-founder, Chris Hughes, calls for Facebook to be broken up

The latest call to break up Facebook looks to be the most uncomfortably close to home yet for supreme leader, Mark Zuckerberg.

“Mark’s power is unprecedented and un-American,” writes Chris Hughes, in an explosive op-ed published in the New York Times. “It is time to break up Facebook.”

It’s a long read but worth indulging for a well articulated argument against the market-denting power of monopolies, shot through with a smattering of personal anecdotes about Hughes’ experience of Zuckerberg — who he at one point almost paints as ‘only human’, before shoulder-dropping into a straight thumbs-down that “it’s his very humanity that makes his unchecked power so problematic.”

The tl;dr of Hughes’ argument against Facebook/Zuckerberg being allowed to continue its/his reign of the Internet knits together different strands of the techlash zeitgeist, linking Zuckerberg’s absolute influence over Facebook — and therefore over the unprecedented billions of people he can reach and behaviourally reprogram via content-sorting algorithms — to the crushing of innovation and startup competition; the crushing of consumer attention, choice and privacy, all hostage to relentless growth targets and an eyeball-demanding ad business model; to the crushing control of speech that Zuckerberg — as Facebook’s absolute monarch — personally commands, with Hughes worrying it’s a power too potent for any one human to wield.

“Mark may never have a boss, but he needs to have some check on his power,” he writes. “The American government needs to do two things: break up Facebook’s monopoly and regulate the company to make it more accountable to the American people.”

His proposed solution is not just a break up of Facebook’s monopoly of online attention by re-separating Facebook, Instagram and WhatsApp — to try to reinvigorate a social arena it now inescapably owns — he also calls for US policymakers to step up to the plate and regulate, suggesting an oversight agency is also essential to hold Internet companies to account, and pointing to Europe’s recently toughened privacy framework, GDPR, as a start.

“Just breaking up Facebook is not enough. We need a new agency, empowered by Congress to regulate tech companies. Its first mandate should be to protect privacy,” he writes. “A landmark privacy bill in the United States should specify exactly what control Americans have over their digital information, require clearer disclosure to users and provide enough flexibility to the agency to exercise effective oversight over time. The agency should also be charged with guaranteeing basic interoperability across platforms.”

Once an equally fresh-faced co-founder of Facebook alongside his Harvard roommate, Hughes left Facebook in 2007, walking away with what would become eye-watering wealth, writing later that he made half a billion dollars for three years’ work, off the back of Facebook’s 2012 IPO.

It’s harder to put a value on the relief Hughes must also feel, having exited the scandal-hit behemoth so early on — getting out before early missteps hardened into a cynical parade of privacy, security and trust failures that slowly, gradually yet inexorably snowballed into world-wide scandal — with the 2016 revelations about the extent of Kremlin-backed political disinformation lighting up the dark underbelly of Facebook ads.

Soon after, the Cambridge Analytica data misuse scandal shone an equally dim light into similarly murky goings-on on Facebook’s developer platform. Some of which appeared to hit even closer to home. (Facebook had its own staff helping to target those political ads, and hired the co-founder of the company that had silently sucked out user data in order to sell manipulative political propaganda services to Cambridge Analytica.)

It’s clear now that Facebook’s privacy, security and trust failures are no accident; but rather chain-linked to Zuckerberg’s leadership; to his strategy of a never-ending sprint for relentless, bottomless growth — via what was once literally a stated policy of “domination”.

Hughes, meanwhile, dropped out — coming away from Facebook a very rich man and, if not entirely guilt-free given his own founding role in the saga, certainly lacking Zuckerberg-levels of indelible taint.

Though we can still wonder where his well-articulated concern, about how Facebook’s monopoly grip on markets and attention is massively and horribly denting the human universe, has been channelled prior to publishing this NYT op-ed — i.e. before rising alarm over Facebook’s impact on societies, democracies, human rights and people’s mental health scaled so disfiguringly into mainstream view.

Does he, perhaps, regret not penning a critical op-ed before Roger McNamee, an early Zuckerberg advisor with a far less substantial role in the whole drama, got his twenty cents in earlier this year — publishing a critical book, Zucked, which recounts his experience trying and failing to get Zuckerberg to turn the tanker and chart a less collaterally damaging course?

It’s certainly curious it’s taken Hughes so long to come out of the woodwork and join the big techlash.

The NYT review of Zucked headlined it as an “anti-Facebook manifesto” — a descriptor that could apply equally to Hughes’ op-ed. And in an interview with TC back in February, McNamee — whose more limited connection to Zuckerberg Facebook has sought to dismiss — said of speaking out: “I may be the wrong messenger, but I don’t see a lot of other volunteers at the moment.”

Facebook certainly won’t be able to be so dismissive of Hughes’ critique, as a fellow co-founder. This is one Zuckerberg gut-punch that will both hurt and be harder to dodge. (We’ve asked Facebook if it has a response and will update if so.)

At the same time, hating on Facebook and Zuckerberg is almost fashionable these days — as the company’s consumer- and market-bending power has flipped its fortunes from winning friends and influencing people to turning frenemies into out-and-out haters and politically charged enemies.

Whether it’s former mentors, former colleagues — and now of course politicians and policymakers leading the charge and calling for the company to be broken up.

Seen from that angle, it’s a shame Hughes waited so long to add his two cents. It does risk him being labelled an opportunist — or, dare we say it, a techlash populist. (Some of us have been banging on about Facebook’s intrusive influence for years, so, er, welcome to the club Chris!) 

Though, equally, he may have been trying to protect his historical friendship with Zuckerberg. (The op-ed begins with Hughes talking about the last time he saw Zuckerberg, in summer 2017, which it’s hard not to read as him tacitly acknowledging there likely won’t be any more personal visits after this bombshell.)

Hughes is also not alone in feeling he needs to bide his time to come out against Zuckerberg.

The WhatsApp founders, who jumped the Facebook mothership last year, kept their heads down and their mouths shut for years, despite a product philosophy that boiled down to ‘fuck ads’ — only finally making their lack of love for their former employer’s ad-fuelled privacy incursions into WhatsApp clear post-exit from the belly of the beast — in their own subtle and not so subtle ways.

In their case they appear to have been mostly waiting for enough shares to vest. (Brian Acton did leave a bunch on the table.) But Hughes has been sitting on his money mountain for years.

Still, at least we finally have his critical — and rarer — account to add to the pile: a Facebook co-founder, who had remained close to Zuckerberg’s orbit, finally reaching for the unfriend button.