Building a great startup requires more than genius and a great invention

Many entrepreneurs assume that an invention carries intrinsic value, but that assumption is a fallacy.

Here, the examples of the 19th- and 20th-century inventors Thomas Edison and Nikola Tesla are instructive. Even as aspiring entrepreneurs and inventors lionize Edison for his myriad inventions and business acumen, they conveniently fail to recognize Tesla, despite his far greater contributions to how we generate, move, and harness power. Edison is the exception; the legendarily penniless Tesla is the norm.

Universities are the epicenter of pure innovation research. But the reality is that academic research is supported by tax dollars. The zero-sum game of attracting government funding is mastered by selling two concepts: technical merit, and broader impact toward benefiting society as a whole. These concepts are usually at odds with building a company, which succeeds only by generating and maintaining competitive advantage through barriers to entry.

In rare cases, the transition from intellectual merit to barrier to entry is successful. In most cases, the technology, though cool, doesn’t give a fledgling company the competitive advantage it needs to exist among incumbents and inevitable copycats. Academics, having emphasized technical merit and broader impact to attract support for their research, often fail to solve for competitive advantage, thereby creating great technology in search of a business application.

Of course there are exceptions: time and time again, whether driven by hype or perceived existential threat, big incumbents will be quick to buy companies purely for their technology, as with Cruise/GM (autonomous cars), DeepMind/Google (AI), and Nervana/Intel (AI chips). But as we move from 0-1 to 1-N in a given field, success is determined by winning talent over winning technology. Technology becomes less interesting; the onus is on the startup to build a real business.

If a startup chooses to take venture capital, it not only needs to build a real business, but one that will be valued in the billions. The question becomes how a startup can create a durable, attractive business with a transient, short-lived technological advantage.

Most investors understand this stark reality. Unfortunately, while dabbling in technologies that appeared like magic to them during the cleantech boom, many investors were lured back into the innovation fallacy, believing that pure technological advancement would equal value creation. Many of them re-learned this lesson the hard way. As frontier technologies attract broader attention, I believe many are falling back into the innovation trap.

So what should aspiring frontier inventors solve for as they seek to invest capital to translate pure discovery to building billion-dollar companies?  How can the technology be cast into an unfair advantage that will yield big margins and growth that underpin billion-dollar businesses?

Talent productivity: In this age of automation, human talent is scarce, and there is incredible value in retaining and maximizing human creativity. Leading companies seek to gain an advantage by attracting the very best talent. If your technology can make your scarce talent more productive, or help your customers become more productive, then you are creating an unfair advantage internally while establishing yourself as the de facto product for your customers.

Great companies such as Tesla and Google have built tools for their own scarce talent, and have built products their customers, each in their own way, can’t do without. Microsoft mastered this with its Office products in the ’90s, through innovation and acquisition, as did Autodesk with its creativity tools and Amazon with its AWS suite. Supercharging talent yields one of the most valuable sources of competitive advantage: switchover cost. When teams are empowered with tools they love, they will loathe the notion of migrating to shiny new objects, and stick to what helps them achieve their maximum potential.

Marketing and distribution efficiency: Companies are worth the markets they serve. They are valued for their audience and reach. Even if their products in and of themselves don’t unlock the entire value of the market they serve, they will be valued for their potential, at some point in the future, to sell to the customers that have been teed up with their brands. AOL leveraged cheap CD-ROMs and the postal system to get families online, and on email.

Dollar Shave Club leveraged social media and an otherwise abandoned demographic to lock down a sales channel that was ultimately valued at a billion dollars. The inventions in these examples were in how efficiently these companies built and accessed markets, which ultimately made them incredibly valuable.

Network effects: The power of network effects has ultimately led to their abuse in startup fundraising pitches. LinkedIn, Facebook, Twitter, and Instagram generate their network effects through the internet and mobile. Most marketplace companies need to undergo the arduous, expensive process of attracting vendors and customers. Uber identified macro trends (e.g., urban living) and leveraged technology (GPS in cheap smartphones) to yield massive growth in building up supply (drivers) and demand (riders).

Our portfolio company Zoox will benefit from every vehicle learning from the edge cases any single vehicle encounters: akin to the entire driving population immediately learning from the special situations any individual driver encounters. Startups should think about how their inventions can enable network effects where none existed, so that they achieve massive scale and barriers by the time competitors inevitably get access to the same technology.

Offering an end-to-end solution: There isn’t intrinsic value in a piece of technology; the value lies in offering a complete solution that delivers on an unmet need deep-pocketed customers are begging for. Does your invention, when coupled with a few other products, yield a solution that’s worth far more than the sum of its parts? For example, are you selling a chip, along with design environments, sample neural network frameworks, and datasets, that will empower your customers to deliver magical products? Or, in contrast, does it make more sense to sell standard chips, license software, or tag data?

If the answer is to offer components of the solution, then prepare to enter a commodity, margin-eroding, race-to-the-bottom business. The former, “vertical” approach is characteristic of more nascent technologies, such as operating robo-taxis, quantum computing, and launching small payloads into space. As the technology matures and becomes more modular, vendors can sell standard components into standard supply chains, but face the pressure of commoditization.

A simple example is personal computers, where Intel and Microsoft attracted outsized margins while vendors of disk drives, motherboards, printers, and memory faced crushing downward pricing pressure. As technology matures, the earlier vertical players must differentiate with their brands, customer reach, and differentiated products, while leveraging what’s likely to be an endless number of vendors providing technology into their supply chains.

A magical new technology, on its own, does not go far beyond the résumés of the founding team.

What gets me excited is how the team will leverage the innovation and attract more amazing people to establish a dominant position in a market that doesn’t yet exist. Is this team and technology the kernel of a virtuous cycle that will punch above its weight to attract more money and more talent, and be recognized for more than its product?

YC-grad Papa raises $2.4M for its ‘grandkids-on-demand’ service

One of the latest additions to the on-demand economy is Papa, a mobile app that connects college students with adults over 60 in need of support and companionship.

The recent graduate of Y Combinator’s accelerator program has raised a $2.4 million round of funding to expand its service throughout Florida and to five additional states next year, beginning with Pennsylvania. Initialized Capital led the round, with participation from Sound Ventures.

Headquartered in Miami, the startup was founded last year by chief executive officer Andrew Parker. The idea came to him while he was juggling a full-time job at a startup and caring for his grandfather, who had early onset dementia.

“I’ve always been a connector of humans,” Parker, the former vice president of health systems at telehealth company MDLIVE, told TechCrunch. “I’ve always naturally felt comfortable with all walks of life and all age groups and have just felt human connection is really critical.”

Seniors can request a “Papa Pal” using the company’s mobile app, desktop site or by phone. The pals can pick them up and take them out for an activity or have them over to play a game, complete household chores, teach them how to use social media and other technology or simply to chat. A senior is matched with a student, who must complete a “rigorous” background check, in as little as 30 seconds.

Parker says there are 600 students working with Papa an average of 25 hours per month.

“We’ve been fortunate that this is something the students really want to be part of,” he said. “They aren’t doing this for a couple extra dollars. They are doing this to help the community.”

The service costs seniors $20 per hour, $12 of which is paid to the student and $8 of which goes to Papa. It’s not a subscription-based service, but seniors can pay for a premium option that lets them choose among three Papa Pals instead of being randomly paired with one of the several hundred options. The students do not provide any personal care, like bathing or grooming. And they are not a pick-up and drop-off service, like Uber or Lyft.

“We believe the Papa team has found a unique way to combat loneliness and depression in older adults,” said Alexis Ohanian, co-founder and managing partner of Initialized Capital, in a statement. “The experience that Papa Pals bring their members make it seem like they are part of a family.”

In addition to expanding to new markets, Papa is in the process of partnering with insurance companies with a goal of allowing seniors to pay for some of its services through their Medicare plans.

“Loneliness is a crisis. It’s a disease. It’s killing people prematurely,” Parker said. “We are providing a really massive impact to these people’s lives.”

Looking back at Google+

Google+ is shutting down at last. Google announced today it’s sunsetting its consumer-facing social network due to lack of user and developer adoption, low usage and engagement. Oh, and a data leak. It even revealed how poorly the network is performing, noting that 90 percent of Google+ user sessions are less than five seconds long. Yikes.

But things weren’t always like this. Google+ was once heralded as a serious attempt to topple Facebook’s stranglehold on social networking, and was even met with excitement in its first days.

2011

June: The unveiling

The company originally revealed its new idea for social networking in June 2011. It wasn’t Google’s first foray into social, however. Google had made numerous attempts to offer a social networking service of some sort, with Orkut, launched in 2004 and shuttered in fall 2014; Google Friend Connect in 2008 (retired in 2012); and Google Buzz in 2010 (it closed the next year).

But Google+ was the most significant attempt the company had made, proclaiming at the time: “we believe online sharing is broken.”

The once top-secret project was the subject of several leaks ahead of its launch, allowing consumer interest in the project to build.

Led by Vic Gundotra and Bradley Horowitz, Google’s big idea to fix social was to get users to create groups of contacts — called “Circles” — in order to have more control over social sharing. That is, there are things that are appropriate for sharing with family or close friends, and other things that make more sense to share with co-workers, classmates or those who share a similar interest — like biking or cooking, for example.

But getting users to create groups is difficult because the process can be tedious. Google, instead, cleverly designed a user interface that made organizing contacts feel simpler — even fun, some argued. It also was better than the system for contact organization that Facebook was offering at the time.

Next thing you know, everyone was setting up their Circles by dragging-and-dropping little profile icons into these groups, and posting updates and photos to their newly created micro-networks.

Another key feature, “Sparks,” helped users find news and content related to a user’s particular interests. This way, Google could understand what people liked and wanted to track, without having an established base of topical pages for users to “Like,” as on Facebook. But it also paved the way for a new type of search. Instead of just returning a list of blue links, a search on Google+ could return people’s profiles who were relevant to the topic at hand, matching pages and other content.

Google+ also introduced Hangouts, a way to video chat with up to 10 people in one of your Circles at once.

At the time, the implementation was described as almost magical. This was due to a number of innovative features, like the way the software focused in on the person talking, for example, and the way everyone could share content within a chat.

Early growth looked promising

Within two weeks, it seemed Google had a hit on its hands, as the network had reached 10 million users. Just over a month after launch, it had grown to 25 million. By October 2011, it reached 40 million. And by year-end, 90 million. Even if Google was only tracking sign-up numbers, it still appeared like a massive threat to Facebook.

Facebook CEO Mark Zuckerberg’s first comment about Google+, however, smartly pointed out that any Facebook competitor will have to build up a social graph to be relevant. Facebook, which had 750 million users at the time, had already done this. Google+ was getting the sign-ups, but whether users would remain active over time was still in question.

There also were early signs that Google+’s embrace of non-friends could be challenging. It had to roll out blocking mechanisms months after launch, as the network became too spammy with unwanted notifications. Over the years that followed, its inability to control the spam became a major issue.

Even as late as 2017, people were still complaining that spam made Google+ unusable.

July: Backlashes over brands and Real Names policy

In an effort to compete with Facebook, Google+ also enforced a “real names” policy. This angered many users who wanted to use pseudonyms or nicknames, especially when Google began deleting their accounts for non-compliance. This was a larger issue than merely losing social networking access, because losing a Google account meant losing Gmail, Documents, Calendar and access to other Google products, too.

The company also flubbed its handling of brands’ pages, banning all Google business profiles in an ill-conceived fashion — something it later admitted was a mistake.

It wouldn’t fix some of these problems for years, in fact. Eric Schmidt even reportedly once suggested finding another social network if you didn’t want to use your real name — a comment that came across as condescending.

August: Social search

Google+ came to Google Search in August. The company announced Google+ posts would begin appearing in “social search” results that showed when users were signed in. Google called this new toggle “search plus your world.” But its slice of “your world” was pretty limited, as it couldn’t see into the posts shared among your friends and followers on Facebook and Twitter.

2012

January: Forced Google+ account creation

If you can’t beat ’em, force ’em! Google began to require users to have a Google+ account in order to sign up for Gmail. It was not a user-friendly change, and was the start of a number of forced integrations to come.

March: Criticism mounts

TechCrunch’s Devin Coldewey argued that Google failed to play the long game in social, and was too ambitious in its attempt with Google+. All the network really should have started with was its “+1” button — the clicks would generate piles of data tied to users that could then be searchable, private by default and shareable elsewhere.

June: Event spam goes viral

Spam remained an issue on Google+. This time, event spam had emerged, thanks to all the nifty integrations between Google+ and mission-critical products like Calendar.

Users were not thrilled that other people were able to “invite” them to events that automatically showed up on their calendars, even if they had not confirmed they would be attending. It made using Google+ feel like a big mistake.

November: Hangouts evolves

A year after Google+’s launch, there was already a lot of activity around Hangouts, which, interestingly, has since become one of the big products that will outlive its original Google+ home.

Video was a tough space to get right — which is why businesses like Skype were still thriving. And while Hangouts were designed for friends and family to use in Google+, Google was already seeing companies adopt the technology for meetings, and brands like the NBA for connecting with fans.

December: Google+ adds Communities

The focus on user interests in Google+ also continued to evolve this year with the launch of Communities — a way for people to set up topic-based forums on the site. The move was made in hopes of attracting more consumer interest, as growth had slowed.

2013

It’s not a destination; it’s a “social layer!” 

Google+ wasn’t working out as a “Facebook killer.” Engagement was low, distribution was mixed and it seemed it was only being used by tech early adopters, not the mainstream. So the new plan was to double down on Google+ not being a destination website, like Facebook, but rather make it a social layer across Google products.

It had already integrated Google+ with Gmail and Google Contacts, shortly after its launch. In June 2013, it offered a way for people to follow brands’ pages in Gmail.

It then decided to unify Google Talk (aka Gchat) with Google+ Messenger into Hangouts.

It launched a Google+ commenting system for Blogger.

It replaced Google sign-ins on third-party sites with Google+ logins.

It was all a bit much.

September: Google+ infiltrates YouTube

Then, most controversially, it took over YouTube comments. Now, if you wanted to comment on YouTube, you needed a Google+ account.

In other words, Google hoped that if Gmail’s then 200+ million users could juice up Google+, maybe YouTube’s millions of commenters could, too.

People were not happy, to say the least.

It was a notable indication of how little love people had for Google+. YouTubers were downright pissed. One girl even crafted a profane music video in response, with lyrics like “You ruined our site and called it integration / I’m writing this song just to vent our frustration / Fuck you, Google Plusssssss!”

Google also started talking about Google+ as an “identity layer” with 500 million users to make it sound big.

2014

April: Vic Gundotra, Father of Google+, leaves Google

Google+ lost its founder. In April 2014, it was announced that Vic Gundotra, the father of Google+, was leaving the company. Google CEO Larry Page said at the time that the social network would still see investment, but it was a signal that a shift was coming in terms of Google’s approach.

Former TechCrunch co-editor Alexia Bonatsos (née Tsotsis) and editor Matthew Panzarino wrote at the time that Google+ was “walking dead,” having heard that Google+ was no longer going to be considered a product, but a platform.

The forced integrations of the past would be walked back, like those in Gmail and YouTube, and teams would be reshuffled.

July: Hangouts breaks free

Perhaps one of the most notable changes was letting Hangouts go free. Hangouts was a compelling product — too important to require a tie to Google+. In July 2014, Hangouts began to work without a Google+ account, rolled out to businesses and got itself an SLA.

July: Google+ drops its “real name” rule and apologizes

Another signal that Google+ was shifting following Gundotra’s exit was when it abandoned its “real name” policy, three years after the user outrage.

While Google had started rolling back on the real name policy in January of 2012 by opening rules to include maiden names and select nicknames, it still displayed your real name alongside your chosen name. It was nowhere near what people wanted.

Now, Google straight-up apologized for its decision around real names and hoped the change would bring users back. It did not. It was too late.

2015

May: Google Photos breaks free

Following Hangouts, Google realized that Google+’s photo-sharing features also deserved to become their own, standalone product.

At Google I/O 2015, the company announced its Google Photos revamp. The new product took advantage of AI and machine learning capabilities that originated on Google+. This included allowing users to search photos for persons, places and things, as well as an update on Google+’s “auto awesome” feature, which turned into the more robust Google Photos Assistant.

Later that year, Google Photos had scaled to 100 million monthly active users, after shutting down Google+ Photos in August 2015.

July: Google+ pulled from YouTube

In July 2015, Google reversed course on YouTube integrations with Google+ so YouTube comments stayed on YouTube, and not on Google+.

People were happy about this. But not happy enough to go back to Google+.

November: An all-new Google+ unveiled

Google+ got a big revamp in November 2015.

Bradley Horowitz, Google’s VP of Photos and Streams, and product director Luke Wroblewski had teamed up to redesign Google+ around what Google’s data indicated was working: Communities and Collections. Essentially, the new Google+ was focused on users and their interests. It let people network around topics, but not necessarily their personal connections.

Google also rolled out “About Me” pages as an alternative to sites like About.me.

The new site got a colorful coat of paint, too, but it never regained traction.

2016

January: Google+ pulled from Android Gaming service

Google decoupled Google+ from another core product by dropping the requirement to have an account with the social network in order to use the Google Play Games services.

August: Google+ pulled from Play Store

The unbundling continued, as Google’s Play Store stopped requiring users to have a Google+ account to write reviews.

Horowitz explained at the time that Google had heard from users “that it doesn’t make sense for your Google+ profile to be your identity in all the other Google products you use,” and it was responding accordingly.

August: Hangouts on Air moved to YouTube Live

One of the social network’s last exclusive features, Hangouts on Air — a way to broadcast a Hangout — moved to YouTube Live in 2016, as well.

2017

Google+ went fairly quiet. The site was still there, but the communities were filling with spam. Community moderators said they couldn’t keep up. Google’s inattention to the problem was a signal in and of itself that the grand Google+ experiment may be coming to a close.

January: Classic design phased out

Google+ forced the change over to the new design first previewed in late 2015.

In January 2017, it no longer allowed users to switch back to the old look. It also took the time to highlight groups that were popular on Google+ to counteract the narrative that the site was “dead.” (Even though it was.)

August: Google+ removed share count from +1 button

The once ubiquitous “+1” button, launched in spring 2011, was getting a revamp. It would no longer display the number of shares. Google said this was to make the button load more quickly. But it was really because the share counts were not worth touting anymore.

2018

October 2018: Google+ got its Cambridge Analytica moment

A security bug allowed third-party developers to access Google+ user profile data from 2015 until Google discovered it in March 2018, but the company decided not to inform users. In total, 496,951 users’ full names, email addresses, birth dates, genders, profile photos, places lived, occupations and relationship statuses were potentially exposed. Google says it doesn’t have evidence the data was misused, but it decided to shut down the consumer-facing Google+ site anyway, given its lack of use.

Data misuse scandals like Cambridge Analytica have damaged Facebook’s and Twitter’s reputations, but Google+ wasn’t similarly impacted. After all, Google was no longer claiming Google+ to be a social network. And, as its own data shows, the network that remained was largely abandoned.

But the company still had piles of user profile data on hand, which were put at risk. That may lead Google to face a fate similar to that of the more active social networks, in terms of being questioned by Congress or brought up in lawmakers’ discussions about regulations.

In hindsight, then, maybe it would have been better if Google had shut down Google+ years ago.

Facebook, are you kidding?

Facebook is making a video camera. The company wants you to take it home, gaze into its single roving-yet-unblinking eye and speak private thoughts to your loved ones into its many-eared panel.

The thing is called Portal and it wants to live on your kitchen counter or in your living room or wherever else you’d like friends and family to remotely hang out with you. Portal adjusts to keep its subject in frame as they move around to enable casual at-home video chat. The device minimizes background noise to boost voice clarity. These tricks are neat but not revelatory.

Sounds useful, though. Everyone you know is on Facebook. Or they were anyway… things are a bit different now.

Facebook, champion of bad timing

As many users are looking for ways to compartmentalize or scale back their reliance on Facebook, the company has invited itself into the home. Portal is voice activated, listening for a cue phrase (in this case “Hey Portal”), and leverages Amazon’s Alexa voice commands as well. The problem is that plenty of users are already creeped out by Alexa’s always-listening functionality and its habit of picking up snippets of conversation from the next room over. Facebook may have the best social graph in the world, but in 2018 people are looking to use Facebook for less, not more.

Facebook reportedly planned to unveil Portal at F8 this year but held the product back due to the Cambridge Analytica scandal, among other scandals. The fact that the company released the device on the tail end of a major data breach disclosure suggests that the company couldn’t really hold back the product longer without killing it altogether and didn’t see a break in the clouds coming any time soon. Facebook’s Portal is another way for Facebook to blaze a path that its users walk daily to connect to one another. Months after its original intended ship date, the timing still couldn’t be worse.

Over the last eight years Facebook insisted time and time again that it is not and never would be a hardware company. I remember sitting in the second row at a mysterious Menlo Park press event five years ago as reporters muttered that we might at last meet the mythological Facebook phone. Instead, Mark Zuckerberg introduced Graph Search.

It’s hard to overstate just how much better the market timing would have been back in 2013. For privacy advocates, the platform was already on notice, but most users still bobbed in and out of Facebook regularly without much thought. Friends who’d quit Facebook cold turkey were still anomalous. Soul-searching over social media’s inexorable impact on social behavior wasn’t quite casual conversation except among disillusioned tech reporters.

Trusting Facebook (or not)

Onion headline-worthy news timing aside, Facebook showed a glimmer of self-awareness, promising that Portal was “built with privacy and security in mind.” It makes a few more promises:

“Facebook doesn’t listen to, view, or keep the contents of your Portal video calls. Your Portal conversations stay between you and the people you’re calling. In addition, video calls on Portal are encrypted, so your calls are always secure.”

“For added security, Smart Camera and Smart Sound use AI technology that runs locally on Portal, not on Facebook servers. Portal’s camera doesn’t use facial recognition and doesn’t identify who you are.”

“Like other voice-enabled devices, Portal only sends voice commands to Facebook servers after you say, ‘Hey Portal.’ You can delete your Portal’s voice history in your Facebook Activity Log at any time.”

This stuff sounds okay, but it’s standard. And, like any Facebook product testing the waters before turning the ad hose on full-blast, it’s all subject to change. For example, Portal’s camera doesn’t identify who you are, but Facebook commands a powerful facial recognition engine and is known for blurring the boundaries between its major products, a habit that’s likely to worsen with some of the gatekeepers out of the way.

Facebook does not command a standard level of trust. To recover from recent lows, Facebook needs to establish an extraordinary level of trust with users. A fantastic level of trust. Instead, it’s charting new inroads into their lives.

Hardware is hard. Facebook isn’t a hardware maker and its handling of Oculus is the company’s only real trial with the challenges of making, marketing — and securing — something that isn’t a social app. In 2012, Zuckerberg declared that hardware has “always been the wrong strategy” for Facebook. Two years later, Facebook bought Oculus, but that was a bid to own the platform of the future after missing the boat on the early mobile boom — not a signal that Facebook wanted to be a hardware company.

Reminder: Facebook’s entire raison d’être is to extract personal data from its users. For intimate products — video chat, messaging, kitchen-friendly panopticons — it’s best to rely on companies with a business model that is not diametrically opposed to user privacy. Facebook isn’t the only one of those companies (um, hey Google) but Facebook’s products aren’t singular enough to be worth fooling yourself into a surfeit of trust.

Gut check

Right now, as consumers, we only have so much leverage. A small handful of giant tech companies — Facebook, Apple, Amazon, Google and Microsoft — make products that are ostensibly useful, and we decide how useful they are and how much privacy we’re willing to trade to get them. That’s the deal and the deal sucks.

As a consumer it’s worth really sitting with that. Which companies do you trust the least? Why?

It stands to reason that if Facebook cannot reliably secure its flagship product — Facebook itself — then the company should not be trusted with experimental forays into wildly different products, i.e. physical ones. Securing a software platform that serves 2.23 billion users is an extremely challenging task, and adding hardware to that equation just complicates existing concerns.

You don’t have to know the technical ins and outs of security to make secure choices. Trust is leverage — demand that it be earned. If a product doesn’t pass the smell test, trust that feeling. Throw it out. Better yet, don’t invite it onto your kitchen counter to begin with.

If we can’t trust Facebook to safely help us log in to websites or share news stories, why should we trust Facebook to move an always-on, counter-mounted speaker capable of collecting incredibly sensitive data into our homes? tl;dr: We shouldn’t! Of course we shouldn’t. But you knew that.

Facebook is weaponizing security to erode privacy

At a Senate hearing this week in which US lawmakers quizzed tech giants on how they should go about drawing up comprehensive Federal consumer privacy protection legislation, Apple’s VP of software technology described privacy as a “core value” for the company.

“We want your device to know everything about you but we don’t think we should,” Bud Tribble told them in his opening remarks.

Facebook was not at the commerce committee hearing which, as well as Apple, included reps from Amazon, AT&T, Charter Communications, Google and Twitter.

But the company could hardly have made such a claim had it been in the room, given that its business is based on trying to know everything about you in order to dart you with ads.

You could say Facebook has ‘hostility to privacy‘ as a core value.

Earlier this year one US senator wondered of Mark Zuckerberg how Facebook could run its service given it doesn’t charge users for access. “Senator we run ads,” was the almost startled response, as if the Facebook founder couldn’t believe his luck at the not-even-surface-level political probing his platform was getting.

But there have been tougher moments of scrutiny for Zuckerberg and his company in 2018, as public awareness about how people’s data is being ceaselessly sucked out of platforms and passed around in the background, as fuel for a certain slice of the digital economy, has grown and grown — fuelled by a steady parade of data breaches and privacy scandals which provide a glimpse behind the curtain.

On the data scandal front Facebook has reigned supreme, whether it’s as an ‘oops we just didn’t think of that’ spreader of socially divisive ads paid for by Kremlin agents (sometimes with roubles!); or as a carefree host for third party apps to party at its users’ expense by silently hoovering up info on their friends, in the multi-millions.

Facebook’s response to the Cambridge Analytica debacle was to loudly claim it was ‘locking the platform down‘. And try to paint everyone else as the rogue data sucker — to avoid the obvious and awkward fact that its own business functions in much the same way.

All this scandalabra has kept Facebook execs very busy this year, with policy staffers and execs being grilled by lawmakers on an increasing number of fronts and issues — from election interference and data misuse, to ad transparency, hate speech and abuse, and also directly, and at times closely, on consumer privacy and control.

Facebook shielded its founder from one much-sought grilling on data misuse, as UK MPs investigated online disinformation vs democracy, as well as examining wider issues around consumer control and privacy. (They’ve since recommended a social media levy to safeguard society from platform power.)

The DCMS committee wanted Zuckerberg to testify to unpick how Facebook’s platform contributes to the spread of disinformation online. The company sent various reps to face questions (including its CTO) — but never the founder (not even via video link). And committee chair Damian Collins was withering and public in his criticism of Facebook sidestepping close questioning — saying the company had displayed a “pattern” of uncooperative behaviour, and “an unwillingness to engage, and a desire to hold onto information and not disclose it.”

As a result, Zuckerberg’s tally of public appearances before lawmakers this year stands at just two domestic hearings, in the US Senate and Congress, and one at a meeting of the EU parliament’s conference of presidents (which switched from a behind closed doors format to being streamed online after a revolt by parliamentarians) — and where he was heckled by MEPs for avoiding their questions.

But three sessions in a handful of months is still a lot more political grillings than Zuckerberg has ever faced before.

He’s going to need to get used to awkward questions now that lawmakers have woken up to the power and risk of his platform.

Security, weaponized 

What has become increasingly clear from the growing sound and fury over privacy and Facebook (and Facebook and privacy), is that a key plank of the company’s strategy to fight against the rise of consumer privacy as a mainstream concern is misdirection and cynical exploitation of valid security concerns.

Simply put, Facebook is weaponizing security to shield its erosion of privacy.

Privacy legislation is perhaps the only thing that could pose an existential threat to a business that’s entirely powered by watching and recording what people do at vast scale. And relying on that scale (and its own dark pattern design) to manipulate consent flows to acquire the private data it needs to profit.

Only robust privacy laws could bring Facebook’s self-serving house of cards tumbling down. User growth on its main service isn’t what it was but the company has shown itself very adept at picking up (and picking off) potential competitors — applying its surveillance practices to crushing competition too.

In Europe lawmakers have already tightened privacy oversight on digital businesses and massively beefed up penalties for data misuse. Under the region’s new GDPR framework compliance violations can attract fines as high as 4% of a company’s global annual turnover.

Which would mean billions of dollars in Facebook’s case — vs the pinprick penalties it has been dealing with for data abuse up to now.
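The arithmetic behind that claim is straightforward. A rough, illustrative sketch, using Facebook’s reported 2017 full-year revenue of roughly $40.65 billion as a stand-in for the “global annual turnover” base the GDPR actually specifies:

```python
# Illustrative only: the GDPR's top fine tier is up to 4% of global annual turnover.
# The figure below is Facebook's reported 2017 full-year revenue, used here purely
# as a stand-in for that turnover base.
revenue_usd_billions = 40.65
max_fine_usd_billions = 0.04 * revenue_usd_billions
print(f"Theoretical maximum GDPR fine: ~${max_fine_usd_billions:.2f}B")
```

That works out to roughly $1.63 billion — versus the six-figure maximums available to European regulators under the pre-GDPR regime.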

Though fines aren’t the real point; if Facebook is forced to change its processes, i.e. how it harvests and mines people’s data, that could knock a major, major hole right through its profit-center.

Hence the existential nature of the threat.

The GDPR came into force in May and multiple investigations are already underway. This summer the EU’s data protection supervisor, Giovanni Buttarelli, told the Washington Post to expect the first results by the end of the year.

Which means 2018 could result in some very well known tech giants being hit with major fines. And — more interestingly — being forced to change how they approach privacy.

One target for GDPR complainants is so-called ‘forced consent‘ — where consumers are told by platforms leveraging powerful network effects that they must accept giving up their privacy as the ‘take it or leave it’ price of accessing the service. Which doesn’t exactly smell like the ‘free choice’ EU law actually requires.

It’s not just Europe, either. Regulators across the globe are paying greater attention than ever to the use and abuse of people’s data. And also, therefore, to Facebook’s business — which profits, so very handsomely, by exploiting privacy to build profiles on literally billions of people in order to dart them with ads.

US lawmakers are now directly asking tech firms whether they should implement GDPR style legislation at home.

Unsurprisingly, tech giants are not at all keen — arguing, as they did at this week’s hearing, for the need to “balance” individual privacy rights against “freedom to innovate”.

So a lobbying joint-front to try to water down any US privacy clampdown is in full effect. (Though also asked this week whether they would leave Europe or California as a result of tougher-than-they’d-like privacy laws none of the tech giants said they would.)

The state of California passed its own robust privacy law, the California Consumer Privacy Act, this summer, which is due to come into force in 2020. And the tech industry is not a fan. So its engagement with federal lawmakers now is a clear attempt to secure a weaker federal framework to ride over any more stringent state laws.

Europe and its GDPR obviously can’t be rolled over like that, though. Even as tech giants like Facebook have certainly been seeing how much they can get away with — to force an expensive and time-consuming legal fight.

While ‘innovation’ is one oft-trotted angle tech firms use to argue against consumer privacy protections, Facebook included, the company has another tactic too: Deploying the ‘S’ word — security — both to fend off increasingly tricky questions from lawmakers, as they finally get up to speed and start to grapple with what it’s actually doing; and — more broadly — to keep its people-mining, ad-targeting business steamrollering on by greasing the pipe that keeps the personal data flowing in.

In recent years multiple major data misuse scandals have undoubtedly raised consumer awareness about privacy, and put greater emphasis on the value of robustly securing personal data. Scandals that even seem to have begun to impact how some Facebook users use Facebook. So the risks for its business are clear.

Part of its strategic response, then, looks like an attempt to collapse the distinction between security and privacy — by using security concerns to shield privacy hostile practices from critical scrutiny, specifically by chain-linking its data-harvesting activities to some vaguely invoked “security purposes”, whether that’s security for all Facebook users against malicious non-users trying to hack them; or, wider still, for every engaged citizen who wants democracy to be protected from fake accounts spreading malicious propaganda.

So the game Facebook is here playing is to use security as a very broad brush to try to defang legislation that could radically shrink its access to people’s data.

Here, for example, is Zuckerberg responding to a question from an MEP in the EU parliament asking for answers on so-called ‘shadow profiles’ (aka the personal data the company collects on non-users) — emphasis mine:

It’s very important that we don’t have people who aren’t Facebook users that are coming to our service and trying to scrape the public data that’s available. And one of the ways that we do that is people use our service and even if they’re not signed in we need to understand how they’re using the service to prevent bad activity.

At this point in the meeting Zuckerberg also suggestively referenced MEPs’ concerns about election interference — to better play on a security fear that’s inexorably close to their hearts. (With the spectre of re-election looming next spring.) So he’s making good use of his psychology major.

“On the security side we think it’s important to keep it to protect people in our community,” he also said when pressed by MEPs to answer how a person who isn’t a Facebook user could delete its shadow profile of them.

He was also questioned about shadow profiles by the House Energy and Commerce Committee in April. And used the same security justification for harvesting data on people who aren’t Facebook users.

“Congressman, in general we collect data on people who have not signed up for Facebook for security purposes to prevent the kind of scraping you were just referring to [reverse searches based on public info like phone numbers],” he said. “In order to prevent people from scraping public information… we need to know when someone is repeatedly trying to access our services.”

He claimed not to know “off the top of my head” how many data points Facebook holds on non-users (nor even on users, which the congressman had also asked for, for comparative purposes).

These sorts of exchanges are very telling because for years Facebook has relied upon people not knowing or really understanding how its platform works to keep what are clearly ethically questionable practices from closer scrutiny.

But, as political attention has dialled up around privacy, and it’s become harder for the company to simply deny or fog what it’s actually doing, Facebook appears to be evolving its defence strategy — by defiantly arguing it simply must profile everyone, including non-users, for user security.

No matter that this is the same company which, despite maintaining all those shadow profiles on its servers, famously failed to spot Kremlin election interference going on at massive scale in its own back yard — and thus failed to protect its users from malicious propaganda.


Nor was Facebook capable of preventing its platform from being repurposed as a conduit for accelerating ethnic hate in a country such as Myanmar — with some truly tragic consequences. It must, presumably, hold shadow profiles on non-users there too, yet it was seemingly unable (or unwilling) to use that intelligence to help protect actual lives…

So when Zuckerberg invokes overarching “security purposes” as a justification for violating people’s privacy en masse it pays to ask critical questions about what kind of security it’s actually purporting to be able to deliver. Beyond, y’know, continued security for its own business model as it comes under increasing attack.

What Facebook indisputably does do with ‘shadow contact information’, acquired about people via other means than the person themselves handing it over, is to use it to target people with ads. So it uses intelligence harvested without consent to make money.

Facebook confirmed as much this week, when Gizmodo asked it to respond to a study by some US academics that showed how a piece of personal data that had never been knowingly provided to Facebook by its owner could still be used to target an ad at that person.

Responding to the study, Facebook admitted it was “likely” the academic had been shown the ad “because someone else uploaded his contact information via contact importer”.

“People own their address books. We understand that in some cases this may mean that another person may not be able to control the contact information someone else uploads about them,” it told Gizmodo.

So essentially Facebook has finally admitted that consentless scraped contact information is a core part of its ad targeting apparatus.

Safe to say, that’s not going to play at all well in Europe.

Basically Facebook is saying you own and control your personal data until it can acquire it from someone else — and then, er, nope!

Yet given the reach of its network, the chances of your data not sitting on its servers somewhere seems very, very slim. So Facebook is essentially invading the privacy of pretty much everyone in the world who has ever used a mobile phone. (Something like two-thirds of the global population then.)

In other contexts this would be called spying — or, well, ‘mass surveillance’.

It’s also how Facebook makes money.

And yet when called in front of lawmakers to answer questions about the ethics of spying on the majority of the people on the planet, the company seeks to justify this supermassive privacy intrusion by suggesting that gathering data about every phone user without their consent is necessary for some fuzzily-defined “security purposes” — even as its own record on security really isn’t looking so shiny these days.

Facebook co-founder, chairman and CEO Mark Zuckerberg prepares to testify before the House Energy and Commerce Committee on Capitol Hill, April 11, 2018, following reports that 87 million Facebook users had their personal information harvested by Cambridge Analytica. (Photo by Chip Somodevilla/Getty Images)

It’s as if Facebook is trying to lift a page out of national intelligence agency playbooks — when governments claim ‘mass surveillance’ of populations is necessary for security purposes like counterterrorism.

Except Facebook is a commercial company, not the NSA.

So it’s only fighting to keep being able to carpet-bomb the planet with ads.

Profiting from shadow profiles

Another example of Facebook weaponizing security to erode privacy was also confirmed via Gizmodo’s reportage. The same academics found that phone numbers users provided for the specific (security) purpose of enabling two-factor authentication, a technique intended to make it harder for a hacker to take over an account, are also used to target those users with ads.

In a nutshell, Facebook is exploiting its users’ valid security fears about being hacked in order to make itself more money.

Any security expert worth their salt will have spent long years encouraging web users to turn on two factor authentication for as many of their accounts as possible in order to reduce the risk of being hacked. So Facebook exploiting that security vector to boost its profits is truly awful. Because it works against those valiant infosec efforts — so risks eroding users’ security as well as trampling all over their privacy.
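None of which is an argument against two-factor authentication itself. App-based TOTP codes (RFC 6238), the kind generated by authenticator apps, never require handing the platform a phone number at all. A minimal, illustrative implementation of the standard algorithm, assuming a base32-encoded shared secret:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (HMAC-SHA1, as used by
    most authenticator apps). No phone number involved."""
    if timestamp is None:
        timestamp = int(time.time())
    counter = timestamp // step                       # 30-second time window
    key = base64.b32decode(secret_b32, casefold=True)
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

With the RFC test secret (the ASCII string “12345678901234567890”, base32-encoded) and timestamp 59, this reproduces the published test value “287082”.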

It’s just a double whammy of awful, awful behavior.

And of course, there’s more.

A third example of how Facebook seeks to play on people’s security fears to enable deeper privacy intrusion comes by way of the recent rollout of its facial recognition technology in Europe.

In this region the company had previously been forced to pull the plug on facial recognition after being leaned on by privacy conscious regulators. But after having to redesign its consent flows to come up with its version of ‘GDPR compliance’ in time for May 25, Facebook used this opportunity to revisit a rollout of the technology on Europeans — by asking users there to consent to switching it on.

Now you might think that asking for consent sounds okay on the surface. But it pays to remember that Facebook is a master of dark pattern design.

Which means it’s expert at extracting outcomes from people by applying these manipulative dark arts. (Don’t forget, it has even directly experimented in manipulating users’ emotions.)

So can it be a free consent if ‘individual choice’ is set against a powerful technology platform that’s both in charge of the consent wording, button placement and button design, and which can also data-mine the behavior of its 2BN+ users to further inform and tweak (via A/B testing) the design of the aforementioned ‘consent flow’? (Or, to put it another way, is it still ‘yes’ if the tiny greyscale ‘no’ button fades away when your cursor approaches while the big ‘YES’ button pops and blinks suggestively?)
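For a sense of how mechanical that tweaking can be: consent-flow A/B tests typically bucket users deterministically, so a given user always sees the same variant while the platform measures which wording extracts more opt-ins. A generic sketch (the experiment and variant names are hypothetical, not Facebook’s):

```python
import hashlib

def assign_variant(user_id, experiment, variants=("control", "treatment")):
    """Deterministic bucketing: hashing user id + experiment name means a
    given user always lands in the same variant, with a roughly even split."""
    h = int(hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest(), 16)
    return variants[h % len(variants)]

# The platform then compares opt-in rates per variant and ships the winner.
variant = assign_variant("user-12345", "consent_flow_copy_test")
```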

In the case of facial recognition, Facebook used a manipulative consent flow that included a couple of self-serving ‘examples’ — selling the ‘benefits’ of the technology to users before they landed on the screen where they could choose either ‘yes, switch it on’ or ‘no, leave it off’.

One of which explicitly played on people’s security fears — by suggesting that without the technology enabled users were at risk of being impersonated by strangers. Whereas, by agreeing to do what Facebook wanted you to do, Facebook said it would help “protect you from a stranger using your photo to impersonate you”…

That example shows the company is not above actively jerking on the chain of people’s security fears, as well as passively exploiting similar security worries when it jerkily repurposes 2FA digits for ad targeting.

There’s even more too; Facebook has been positioning itself to pull off what is arguably the greatest (in the ‘largest’ sense of the word) appropriation of security concerns yet to shield its behind-the-scenes trampling of user privacy — when, from next year, it will begin injecting ads into the WhatsApp messaging platform.

These will be targeted ads, because Facebook has already changed the WhatsApp T&Cs to link Facebook and WhatsApp accounts — via phone number matching and other technical means that enable it to connect distinct accounts across two otherwise entirely separate social services.

Thing is, WhatsApp got fat on its founders’ promise of 100% ad-free messaging. The founders were also privacy and security champions, pushing to roll e2e encryption right across the platform — even after selling their app to the adtech giant in 2014.

WhatsApp’s robust e2e encryption means Facebook literally cannot read the messages users are sending each other. But that does not mean Facebook is respecting WhatsApp users’ privacy.

On the contrary: the company has given itself broader rights to user data by changing the WhatsApp T&Cs and by matching accounts.

So, really, it’s all just one big Facebook profile now — whichever of its products you do (or don’t) use.

This means that even without literally reading your WhatsApps, Facebook can still know plenty about a WhatsApp user, thanks to any other Facebook Group profiles they have ever had and any shadow profiles it maintains in parallel. WhatsApp users will soon become 1.5BN+ bullseyes for yet more creepily intrusive Facebook ads to seek their target.

No private spaces, then, in Facebook’s empire as the company capitalizes on people’s fears to shift the debate away from personal privacy and onto the self-serving notion of ‘secured by Facebook spaces’ — in order that it can keep sucking up people’s personal data.

This is a very dangerous strategy, though.

Because if Facebook can’t even deliver security for its users, thereby undermining those “security purposes” it keeps banging on about, it might find it difficult to sell the world on going naked just so Facebook Inc can keep turning a profit.

What’s the best security practice of all? That’s super simple: Not holding data in the first place.

What Instagram users need to know about Facebook’s security breach

Even if you never log into Facebook itself these days, the other apps and services you use might be impacted by Facebook’s latest big, bad news.

In a follow-up call on Friday’s revelation that Facebook has suffered a security breach affecting at least 50 million accounts, the company clarified that Instagram users were not out of the woods — nor were any other third-party services that utilized Facebook Login. Facebook Login is the tool that allows users to sign in with a Facebook account instead of traditional login credentials and many users choose it as a convenient way to sign into a variety of apps and services.

Third-party apps and sites affected too

Due to the nature of the hack, Facebook cannot rule out the possibility that attackers also accessed any Instagram account linked to an affected Facebook account through Facebook Login. Still, it’s worth remembering that while Facebook can’t rule it out, the company has no evidence (yet) of this kind of activity.

“So the vulnerability was on Facebook, but these access tokens enable someone to use [a connected account] as if they were the account holder themselves — this does mean they could have access other third party apps that were using Facebook login,” Facebook Vice President of Product Management Guy Rosen explained on the call.

“Now that we have reset all of those access tokens as part of protecting the security of people’s accounts, developers who use Facebook login will be able to detect that those access tokens has been reset, identify those users and as a user, you will simply have to log in again into those third party apps.”
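For developers wondering what that reset looks like in practice: Facebook’s Graph API exposes a `/debug_token` endpoint that reports whether a given token is still valid. A minimal sketch of interpreting its documented response shape (the payload values below are illustrative, not from a real account):

```python
def token_is_valid(debug_response):
    """Interpret a Graph API /debug_token response: a token invalidated by
    Facebook's mass reset comes back with data.is_valid == False, at which
    point the app should prompt the user to log in again."""
    data = debug_response.get("data", {})
    return bool(data.get("is_valid", False))

# Illustrative payloads in the endpoint's documented response shape.
live_token = {"data": {"app_id": "1234567890", "user_id": "100001",
                       "is_valid": True}}
reset_token = {"data": {"is_valid": False,
                        "error": {"code": 190,
                                  "message": "Invalid OAuth access token."}}}
```

A third-party app that sees `is_valid` flip to false for a batch of users would treat those sessions as expired and re-run its Facebook Login flow.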

Rosen reiterated that there is plenty Facebook does not know about the hack, including the extent to which attackers manipulated the three security bugs in question to obtain access to external accounts through Facebook Login.

“The vulnerability was on Facebook itself and we’ve yet to determine, given the investigation is really early, [what was] the exact nature of misuse and whether there was any access to Instagram accounts, for example,” Rosen said.

Anyone with a Facebook account affected by the breach — you should have been automatically logged out and will receive a notification — will need to unlink and relink their Instagram account to Facebook in order to continue cross-posting content to Facebook.

How to relink your Facebook account and do a security check

To relink your Instagram account to Facebook, if you choose to, open Instagram Settings > Linked Accounts and select the checkbox next to Facebook. Click Unlink and confirm your selection. If you’d like to reconnect Instagram with Facebook, you’ll need to select Facebook in the Linked Accounts menu and log in with your credentials as normal.

If you know your Facebook account was affected by the breach, it’s wise to check for suspicious activity on your account. You can do this on Facebook through the Security and Login menu.

There, you’ll want to browse the activity listed to make sure you don’t see anything that doesn’t look like you — logins from other countries, for example. If you’re concerned or just want to play it safe, you can always find the link to “Log Out Of All Sessions” by scrolling toward the bottom of the page.

While we know a little bit more now about Facebook’s biggest security breach to date, there’s still a lot that we don’t. Expect plenty of additional information in the coming days and weeks as Facebook surveys the damage and passes that information along to its users. We’ll do the same.

Facebook says 50 million accounts affected by account takeover bug

Facebook has said 50 million user accounts may be at risk after hackers exploited a security vulnerability on the site.

The company said in a blog post Friday that it discovered the bug earlier in the week. The bug is part of the site’s “View As” feature that lets a user view their profile as someone else.

Facebook said that it has reset those access tokens, along with tokens for an additional 40 million accounts. Anyone affected may have been logged out of their account — either on their phone or computer. Facebook also said that users will be notified once they log back in.

The company has switched off the “View As” feature while it conducts a security review.

“We have yet to determine whether these accounts were misused or any information accessed,” said Guy Rosen, Facebook’s vice president of product management. “We also don’t know who’s behind these attacks or where they’re based.”

Facebook has contacted law enforcement, the blog post said. The social network has 2.2 billion monthly active users.

“If we find more affected accounts, we will immediately reset their access tokens,” said Rosen.

Facebook did not immediately respond to a request for comment.

More soon…

Facebook’s ex-CSO, Alex Stamos, defends its decision to inject ads in WhatsApp

Alex Stamos, Facebook’s former chief security officer, who left the company this summer to take up a role in academia, has made a contribution to what’s sometimes couched as a debate about how to monetize (and thus sustain) commercial end-to-end encrypted messaging platforms in order that the privacy benefits they otherwise offer can be as widely spread as possible.

Stamos made the comments via Twitter, where he said he was indirectly responding to the fallout from a Forbes interview with WhatsApp co-founder Brian Acton — in which Acton hit out at his former employer for being greedy in its approach to generating revenue off of the famously anti-ads messaging platform.

Both WhatsApp founders’ exits from Facebook have been blamed on disagreements over monetization. (Jan Koum left some months after Acton.)

In the interview, Acton said he suggested Facebook management apply a simple business model atop WhatsApp, such as metered messaging for all users after a set number of free messages. But management pushed back — with Facebook COO Sheryl Sandberg telling him they needed a monetization method that generates greater revenue “scale”.

And while Stamos has avoided making critical remarks about Acton (unlike some current Facebook staffers), he clearly wants to lend his weight to the notion that some kind of trade-off is necessary in order for end-to-end encryption to be commercially viable (and thus for the greater good (of messaging privacy) to prevail); and therefore his tacit support to Facebook and its approach to making money off of a robustly encrypted platform.

Stamos’ own departure from the fb mothership was hardly on such acrimonious terms as Acton’s, though he has had his own disagreements with the leadership team — as set out in a memo he sent earlier this year that was obtained by BuzzFeed. So his support for Facebook combining e2e and ads perhaps counts for something, though isn’t really surprising given the seat he occupied at the company for several years, and his always fierce defence of WhatsApp encryption.

(Another characteristic concern that also surfaces in Stamos’ Twitter thread is the need to keep the technology legal, in the face of government attempts to backdoor encryption, which he says will require “accepting the inevitable downsides of giving people unfettered communications”.)

This summer Facebook confirmed that, from next year, ads will be injected into WhatsApp statuses (aka the app’s Stories clone). So it is indeed bringing ads to the famously anti-ads messaging platform.

For several years the company has also been moving towards positioning WhatsApp as a business messaging platform to connect companies with potential customers — and it says it plans to meter those messages, also from next year.

So there are two strands to its revenue generating playbook atop WhatsApp’s e2e encrypted messaging platform. Both with knock-on impacts on privacy, given Facebook targets ads and marketing content by profiling users by harvesting their personal data.

This means that while WhatsApp’s e2e encryption means Facebook literally cannot read WhatsApp users’ messages, it is ‘circumventing’ the technology (for ad-targeting purposes) by linking accounts across different services it owns — using people’s digital identities across its product portfolio (and beyond) as a sort of ‘trojan horse’ to negate the messaging privacy it affords them on WhatsApp.

Facebook is using different technical methods (including the very low-tech method of phone number matching) to link WhatsApp user and Facebook accounts. Once it’s been able to match a Facebook user to a WhatsApp account it can then connect what’s very likely to be a well fleshed out Facebook profile with a WhatsApp account that nonetheless contains messages it can’t read. So it’s both respecting and eroding user privacy.
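Phone-number matching really is as low-tech as it sounds. A generic sketch of how any service could join two account databases on normalized numbers — the normalization rules and helper names here are illustrative, not Facebook’s actual pipeline:

```python
import hashlib
import re

def normalize(raw, default_country_code="+1"):
    """Reduce a phone number to a canonical E.164-style string so that
    '(415) 555-0100' and '+1 415-555-0100' compare equal."""
    digits = re.sub(r"[^\d+]", "", raw)
    if not digits.startswith("+"):
        digits = default_country_code + digits
    return digits

def match_accounts(whatsapp_numbers, facebook_numbers):
    """Return WhatsApp numbers whose normalized form also appears in the
    Facebook set; hashing stands in for comparing raw numbers directly."""
    fb_hashes = {hashlib.sha256(normalize(n).encode()).hexdigest()
                 for n in facebook_numbers}
    return [n for n in whatsapp_numbers
            if hashlib.sha256(normalize(n).encode()).hexdigest() in fb_hashes]
```

So `match_accounts(["+1 415-555-0100"], ["(415) 555-0100"])` links the two accounts, because both numbers normalize to the same canonical string.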

This approach means Facebook can carry out its ad targeting activities across both messaging platforms (as it will from next year). And do so without having to literally read messages being sent by WhatsApp users.

As trade-offs go, it’s clearly a big one — and one that’s got Facebook into regulatory trouble in Europe.

It is also, at least in Stamos’ view, a trade off that’s worth it for the ‘greater good’ of message content remaining strongly encrypted and therefore unreadable. Even if Facebook now knows pretty much everything about the sender, and can access any unencrypted messages they sent using its other social products.

In his Twitter thread Stamos argues that “if we want that right to be extended to people around the world, that means that E2E encryption needs to be deployed inside of multi-billion user platforms”, which he says means: “We need to find a sustainable business model for professionally-run E2E encrypted communication platforms.”

On the sustainable business model front he argues that two models “currently fit the bill” — either Apple’s iMessage or Facebook-owned WhatsApp. Though he doesn’t go into any detail on why he believes only those two are sustainable.

He does say he’s discounting the Acton-backed alternative, Signal, which now operates via a not-for-profit (the Signal Foundation) — suggesting that rival messaging app is “unlikely to hit 1B users”.

In passing he also throws it out there that Signal is “subsidized, indirectly, by FB ads” — i.e. because Facebook pays a licensing fee for use of the underlying Signal Protocol used to power WhatsApp’s e2e encryption. (So his slightly shade-throwing subtext is that privacy purists are still benefiting from a Facebook sugardaddy.)

Then he gets to the meat of his argument in defence of Facebook-owned (and monetized) WhatsApp — pointing out that Apple’s sustainable business model does not reach every mobile user, given its hardware is priced at a premium. Whereas WhatsApp running on a cheap Android handset ($50, or perhaps even $30 in future) can.

Other encrypted messaging apps can also of course run on Android but presumably Stamos would argue they’re not professionally run.

“I think it is easy to underestimate how radical WhatsApp’s decision to deploy E2E was,” he writes. “Acton and Koum, with Zuck’s blessing, jumped off a bridge with the goal of building a monetization parachute on the way down. FB has a lot of money, so it was a very tall bridge, but it is foolish to expect that FB shareholders are going to subsidize a free text/voice/video global communications network forever. Eventually, WhatsApp is going to need to generate revenue.

“This could come from directly charging for the service, it could come from advertising, it could come from a WeChat-like services play. The first is very hard across countries, the latter two are complicated by E2E.”

“I can’t speak to the various options that have been floated around, or the arguments between WA and FB, but those of us who care about privacy shouldn’t see WhatsApp monetization as something evil,” he adds. “In fact, we should want WA to demonstrate that E2E and revenue are compatible. That’s the only way E2E will become a sustainable feature of massive, non-niche technology platforms.”

Stamos is certainly right that Apple’s iMessage cannot reach every mobile user, given the premium cost of Apple hardware.

Though he elides the important role that secondhand Apple devices play in lowering the barrier to entry to Apple’s pro-privacy technology — a role Apple is actively encouraging via support for older devices, and via its expanding services business, which makes supporting older versions of iOS (and thus secondhand iPhones) commercially sustainable too.

Stamos’ claim that robust encryption is only possible via multi-billion user platforms essentially boils down to a usability argument — a suggestion that mainstream app users will simply not seek encryption out unless it’s plated up for them in a way they don’t even notice it’s there.

The follow-on conclusion is that only a well-resourced giant like Facebook can maintain and serve this tech up to the masses.

There’s certainly substance in that point. But the wider question is whether the privacy trade offs entailed by Facebook’s monetization of WhatsApp — linking Facebook and WhatsApp accounts, and thereby looping in the various less-than-transparent data-harvesting methods Facebook uses to gather intelligence on web users generally — substantially erode the value of the e2e encryption that is now being bundled with Facebook’s ad-targeting people surveillance, and so used as a selling aid for otherwise privacy-eroding practices.

Yes WhatsApp users’ messages will remain private, thanks to Facebook funding the necessary e2e encryption. But the price users are having to pay is very likely still their personal privacy.

And at that point the argument really becomes: how much profit should a commercial entity be able to extract from a product that’s being marketed as securely encrypted and thus ‘pro-privacy’? How much revenue “scale” is reasonable or unreasonable in that scenario?

Other business models are possible, which was Acton’s point. But likely less profitable. And therein lies the rub where Facebook is concerned.

How much money should any company be required to leave on the table, as Acton did when he left Facebook without the rest of his unvested shares, in order to be able to monetize a technology that’s bound up so tightly with notions of privacy?

Acton wanted Facebook to agree to make as much money as it could without users having to pay it with their privacy. But Facebook’s management team said no. That’s why he’s calling them greedy.

Stamos doesn’t engage with that more nuanced point. He just writes: “It is foolish to expect that FB shareholders are going to subsidize a free text/voice/video global communications network forever. Eventually, WhatsApp is going to need to generate revenue” — thereby collapsing the revenue argument into an all or nothing binary without explaining why it has to be that way.

Twitter says bug may have exposed some direct messages to third-party developers

Twitter said that a “bug” sent users’ private direct messages to third-party developers “who were not authorized to receive them.”

The social media giant began warning users Friday of the possible exposure with a message in the app.

“The issue has persisted since May 2017, but we resolved it immediately upon discovering it,” said the message, which was posted on Twitter by a Mashable reporter. “Our investigation into this issue is ongoing, but presently we have no reason to believe that any data sent to unauthorized developers was misused.”

A spokesperson told TechCrunch that it’s “highly unlikely” that any communication was sent to the incorrect developers at all, but said the company informed users out of an abundance of caution.

Twitter said in a notice that only messages sent to brand accounts — like airlines or delivery services — may be affected. In a separate blog post, Twitter said that its investigation has confirmed “only one set of technical circumstances where this issue could have occurred.”

The bug was found on September 10, but the company took almost two weeks to inform users.

“If your account was affected by this bug, we will contact you directly through an in-app notice and on twitter.com,” the notice said.

The company said that the bug affected less than 1 percent of users on Twitter. The company had 335 million users as of its latest earnings release.

“No action is required from you,” the message said.

It’s the second data-related bug this year. In May, the company said it mistakenly logged users’ passwords in plaintext in an internal log, used by Twitter staff. Twitter urged users to change their password.

Base10’s debut fund is the largest-ever for a Black-led VC firm

Adeyemi Ajao (above left), the co-founder and managing director of Base10 Partners, was surprised to hear his firm’s $137 million fund was the largest debut to date for a black-led venture capital firm.

He and his co-founder — managing director TJ Nahigian (above right) — found out from none other than their fund’s own limited partners, who told them they should seek out institutions looking to invest in diverse fund managers.

“Oh man, I was like, ‘yeah, I know I’m black but so what?'” Ajao told TechCrunch. “I can be a little bit naive about these things until they become extremely apparent.”

Ajao is African, European, Latin, and now, having spent a decade in San Francisco, American. Growing up between Spain and Nigeria, it wasn’t until landing in the Bay Area that he was forced to confront a social dynamic absent from his international upbringing: racial inequality and being black in America.

“The U.S. is pretty different about those things,” he said. “I was surprised when at Stanford I got an invitation to a dinner of the Black Business Student Association. I’m like, ‘why would there be a Black Business Student Association? That’s so weird?’ It took me a while, a good, good while, to be like ok, here there’s actually a really entrenched history of a clash and people being treated differently day-to-day.”

In the business of venture capital, the gap in funding for black founders and other underrepresented entrepreneurs is jarring. There’s not a lot of good data out there to illustrate the gap, but one recent study by digitalundivided showed the median amount of funding raised by black women founders is $0, because most companies founded by black women receive no money.  

Ajao certainly hadn’t thought the color of his skin would impact his fundraising process, and, in retrospect, he doesn’t think it did. Still, he recognizes that pattern recognition and implicit bias continue to be barriers for diverse founders and investors.

Now, he plans to leverage his unique worldview to identify the next wave of unicorns that other VCs are missing. Base10 doesn’t have a diversity thesis per se, but it plans to invest in global companies fixing problems that affect 99 percent of the world, not the Silicon Valley 1 percent.

I sat down with Ajao in Base10’s San Francisco office to discuss his background, the firm’s investment focus and the importance of looking beyond the Silicon Valley bubble.

Automation of the real economy

Base10 is writing seed and Series A checks between $500,000 and $5 million. It’s completed 10 investments so far, including in Brazilian mobility startups Grin and Yellow, which closed a $63 million Series A last week.

The firm is looking for entrepreneurs who have spent years in their industries, whether that be agriculture, logistics, waste management, construction, real estate or otherwise, and are trying to solve problems they’ve experienced first-hand.

“We are much more likely to fund someone that actually worked for eight years on a construction site and was like, ‘you know what, I think this could be done better and maybe I can make my life easier with automation,’ rather than a Ph.D. in AI out of the Stanford lab that says ‘I think construction is inefficient and it can be done without people,'” Ajao said. “[We are] kind of flipping the paradigm in that sense.”

The firm has also backed birth control delivery startup The Pill Club, on-demand staffing company Wonolo and Tokensoft, a platform for compliant token sales. 

Beyond the bubble

Ajao and Nahigian have a mix of operational and investing experience.

On the VC side, Nahigian, a Los Angeles native, spent seven years investing via Summit Partners, Accel, then Coatue Management. In 2014, he co-founded Jobr, a mobile job platform that was later acquired by Monster, where he became the VP of product and head of mobile.

Ajao was most recently a VP at Workday, where he led the launch of Workday Ventures, a VC fund focused on AI for enterprise software. He joined Workday after the company acquired his startup, Identified, in what was his second successful exit to date. Before that, he co-founded Spanish social media company Tuenti, which Telefonica paid $100 million for in 2010.

He also helped incubate and launch Cabify, a Spanish ride-hailing company based in Madrid. The Uber competitor raised $160 million at a $1.4 billion valuation earlier this year.

Ajao was Nahigian’s first investor in Jobr, which was also backed by Tim Draper, Redpoint Ventures, Eniac Ventures, Lowercase Capital and more. The pair stayed in touch, discussed startups and potential deals, ultimately deciding to go into business together. 

They agreed Base10 should support companies solving real problems and that as investors, they needed to be able to see beyond the Silicon Valley bubble.

“Do we feel a little bit of a responsibility? Like … ‘hey, you should help Silicon Valley be more aware of global issues.’ Yes,” Ajao said. “I try to spend a lot of time meeting with founders that either look different or are trying to make it here and I try to be super open about my journey and my travels.”

His piece of advice to other VCs is one that countless diverse founders and investors have been shouting at the top of their lungs: invest in underrepresented founders — it’s just good business.

“If you have the same company and one is run by a female and one is run by a male, and it’s the same stuff, you should probably invest in the female, because that person probably had a harder time getting there,” he said. “It’s actually good business. I believe that.”

“The more open and comfortable we get about talking about these things, the better it is for both parties.”