Facebook will not remove deepfakes of Mark Zuckerberg, Kim Kardashian and others from Instagram

Facebook will not remove the faked videos featuring Mark Zuckerberg, Kim Kardashian and President Donald Trump from Instagram, the company said in a statement.

Earlier today, Vice News reported on the existence of videos created by the artists Bill Posters and Daniel Howe, together with video and audio manipulation companies including CannyAI, Respeecher and Reflect.

The work, featured in a site-specific installation in the UK and circulating online as video, was the first test of Facebook’s content review policies since the company’s decision not to remove a manipulated video of House Speaker Nancy Pelosi received withering criticism from Democratic political leadership.

“We have said all along, poor Facebook, they were unwittingly exploited by the Russians,” Pelosi said in an interview with radio station KQED, quoted by The New York Times. “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”

After the late-May incident, Facebook’s Neil Potts testified before a smorgasbord of international regulators in Ottawa about deepfakes, saying the company would not remove a faked video of Mark Zuckerberg. This appears to be the first instance testing the company’s resolve.

“We will treat this content the same way we treat all misinformation on Instagram. If third-party fact-checkers mark it as false, we will filter it from Instagram’s recommendation surfaces like Explore and hashtag pages,” said an Instagram spokesperson in an email to TechCrunch.

The videos appear not to violate any Facebook policies, which means that they will be subject to the treatment any video containing misinformation gets on any of Facebook’s platforms. So the videos will be blocked from appearing in the Explore feature and hashtags won’t work with the offending material.

Facebook already uses image detection technology to find content on Instagram that has been debunked by its third-party fact-checking program. When misinformation is present only on Instagram, the company is testing the ability to promote links to the fact-checking product on Facebook.
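Instagram hasn’t detailed how that image detection works, but the general technique for re-identifying a known image across re-uploads is perceptual hashing: hash every fact-checked image, then compare new uploads by Hamming distance. Here is a minimal sketch in Python, assuming the open-source imagehash library and purely illustrative file paths (this shows the generic approach, not Facebook’s actual pipeline):

```python
# NOTE: a hypothetical sketch of perceptual-hash matching; all names and
# file paths are illustrative assumptions, not Facebook's real system.
from PIL import Image
import imagehash

# Hashes of images that third-party fact-checkers have already marked false.
# A production system would use a large indexed store, not an in-memory dict.
debunked_index = {
    imagehash.average_hash(Image.open("debunked/known_fake.png")): "claim-123",
}

def check_upload(path: str, max_distance: int = 5):
    """Return the fact-check ID of a near-duplicate debunked image, if any.

    Perceptual hashes change little under re-encoding, resizing or mild
    cropping, so a small Hamming distance suggests the same source image.
    """
    h = imagehash.average_hash(Image.open(path))
    for known_hash, claim_id in debunked_index.items():
        if h - known_hash <= max_distance:  # '-' gives the Hamming distance
            return claim_id
    return None

if __name__ == "__main__":
    match = check_upload("uploads/new_post.jpg")
    if match:
        print(f"Near-duplicate of debunked claim {match}; demote, don't delete")
```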

“Spectre interrogates and reveals many of the common tactics and methods that are used by corporate or political actors to influence people’s behaviours and decision making,” said Posters in an artist’s statement about the project. “In response to the recent global scandals concerning data, democracy, privacy and digital surveillance, we wanted to tear open the ‘black box’ of the digital influence industry and reveal to others what it is really like.”

Facebook’s consistent decisions not to remove offending content stand in contrast with YouTube, which has taken the opposite approach in dealing with manipulated videos and other material that violates its policies.

YouTube removed the Pelosi video and recently took steps to demonetize and remove videos from the platform that violated its hate speech policies — including a wholesale purge of content about Nazism.

These issues take on greater significance as the U.S. heads into the next Presidential election in 2020.

“In 2016 and 2017, the UK, US and Europe witnessed massive political shocks as new forms of computational propaganda employed by social media platforms, the ad industry, and political consultancies like Cambridge Analytica were exposed by journalists and digital rights advocates,” said Howe, in a statement about his Spectre project. “We wanted to provide a personalized experience that allows users to feel what is at stake when the data taken from us in countless everyday actions is used in unexpected and potentially dangerous ways.”

Perhaps the incident will be a lesson to Facebook in what’s potentially at stake as well.

Protecting the integrity of U.S. elections will require a massive regulatory overhaul, academics say

Ahead of the 2020 elections, former Facebook chief security officer Alex Stamos and his colleagues at Stanford University have unveiled a sweeping new plan to secure U.S. electoral infrastructure and combat foreign campaigns seeking to interfere in U.S. politics.

As the Mueller investigation into electoral interference made clear, foreign agents from Russia (and elsewhere) engaged in a strategic campaign to influence the 2016 U.S. elections. As the chief security officer of Facebook at the time, Stamos was both a witness to the influence campaign on social media and a key architect of the efforts to combat its spread.

Along with Michael McFaul, a former ambassador to Russia, and a host of other academics from Stanford, Stamos lays out a multi-pronged plan that incorporates securing U.S. voting systems, providing clearer guidelines for advertising and the operations of foreign media in the U.S. and integrating government action more closely with media and social media organizations to combat the spread of misinformation or propaganda by foreign governments.

The paper lays out a number of suggestions for securing elections, including:

  • Increase the security of the U.S. election infrastructure.
  • Explicitly prohibit foreign governments and individuals from purchasing online advertisements targeting the American electorate.
  • Require greater disclosure measures for FARA-registered foreign media organizations.
  • Create standardized guidelines for labeling content affiliated with disinformation campaign producers.
  • Mandate transparency in the use of foreign consultants and foreign companies in U.S. political campaigns.
  • Foreground free and fair elections as part of U.S. policy and identify election rights as human rights.
  • Signal a clear and credible commitment to respond to election interference.

A lot of heavy lifting by Congress, media and social media companies would be required to enact all of these policy recommendations, and many of them speak to core issues that policymakers and corporate executives are already attempting to manage.

For lawmakers that means drafting legislation that would require paper trails for all ballots and improve threat assessments of computerized election systems along with a complete overhaul of campaign laws related to advertising, financing, and press freedoms (for foreign press).

The Stanford proposals call for the strict regulation of foreign involvement in campaigns, including a ban on foreign governments and individuals buying online ads that would target the U.S. electorate with an eye toward influencing elections. The proposals also call for greater disclosure requirements for articles, opinion pieces or media produced by foreign media organizations. Furthermore, any campaign working with a foreign company or consultant, or with significant foreign business interests, should be required to disclose those connections.

Clearly, the echoes of Facebook’s Cambridge Analytica and political advertising scandals can be heard in some of the suggestions made by the paper’s authors.

Indeed, the paper leans heavily on the use and abuse of social media and tech as a critical vector for an attack on future U.S. elections. And the Stanford proposals don’t shrink from calling on legislators to demand that these companies do more to protect their platforms from being used and abused by foreign governments or individuals.

In some cases companies are already working to enact suggestions from the report. Facebook, Alphabet, and Twitter have said that they will work together to coordinate and encourage the spread of best practices. Media companies need to create (and are working to create) norms for handling stolen information. Labeling manipulated videos or propaganda (or articles and videos that come from sources known to disseminate propaganda) is another task that platforms are undertaking, but an area where there is still significant work to be done (especially when it comes to deepfakes).

As the report’s authors note:

Existing user interface features and platforms’ content delivery algorithms need to be utilized as much as possible to provide contextualization for questionable information and help users escape echo chambers. In addition, social media platforms should provide more transparency around users who are paid to promote certain content. One area ripe for innovation is the automatic labeling of synthetic content, such as videos created by a variety of techniques that are often lumped under the term “deepfakes”. While there are legitimate uses of synthetic media technologies, there is no legitimate need to mislead social media users about the authenticity of that media. Automatically labeling content, which shows technical signs of being modified in this manner, is the minimum level of due diligence required of the major video hosting sites.
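The report stops short of prescribing an implementation, but the labeling step it describes boils down to a policy gate over a detector’s output: flag, don’t remove. A minimal sketch, assuming a hypothetical classifier score as input (no platform’s real system is represented here):

```python
# Hypothetical sketch of the report's "automatic labeling" recommendation:
# attach a visible label when a video shows technical signs of manipulation,
# rather than removing it. The detector itself is out of scope; the score
# below stands in for any deepfake/forgery classifier.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Upload:
    video_id: str
    detector_score: float  # 0.0 = no signs of manipulation, 1.0 = strong signs

LABEL_THRESHOLD = 0.8  # illustrative; a real system would tune this per model

def label_if_synthetic(upload: Upload) -> Optional[str]:
    """Return a user-facing label for likely-synthetic media, else None."""
    if upload.detector_score >= LABEL_THRESHOLD:
        return "This video shows technical signs of manipulation"
    return None

print(label_if_synthetic(Upload("v123", detector_score=0.93)))
```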

There’s more work that needs to be done to limit the targeting capabilities for political advertising and to improve transparency around paid and unpaid political content as well, according to the report.

And somewhat troubling is the report’s call for the removal of barriers around sharing information relating to disinformation campaigns that would include changes to privacy laws.

Here’s the argument from the report:

At the moment, access to the content used by disinformation actors is generally restricted to analysts who archived the content before it was removed or governments with lawful request capabilities. Few organizations have been able to analyze the full paid and unpaid content created by Russian groups in 2016, and the analysis we have is limited to data from the handful of companies who investigated the use of their platforms and were able to legally provide such data to Congressional committees. Congress was able to provide that content and metadata to external researchers, an action that is otherwise proscribed by U.S. and European law. Congress needs to establish a legal framework within which the metadata of disinformation actors can be shared in real-time between social media platforms, and removed disinformation content can be shared with academic researchers under reasonable privacy protections.

Ultimately, these suggestions are meaningless without real action from Congress and the President to ensure the security of elections. As the events of 2016, documented in the Mueller report, revealed, there are a substantial number of holes in the safeguards erected to secure our elections. As the country looks for a place to build walls for security, perhaps one around election integrity would be a good place to start.

Twitter’s updated T&Cs look clearer — yet it still can’t say no to nazis

Twitter has taken a pair of shears to its user rules, shaving almost 2,000 words off of its T&Cs — with the stated aim of making it clearer for users what is not acceptable behaviour on its platform.

It says the rules have shrunk from 2,500 words to just 600 — with each of the reworded rules now encapsulated within a pithy tweet length (280 characters or less).

Though each tweet-length rule is still followed by plenty of supplementary detail — where Twitter explains the rationale behind it and provides examples of what not to do, and details of potential consequences. So the full rule-book is still way over 2,500 words.

“Everyone who uses Twitter should be able to easily understand what is and is not allowed on the service,” writes Twitter’s Del Harvey, VP of trust and safety, in a blog post announcing the changes. “As part of our continued push towards more transparency across every aspect of Twitter, we’re working to make sure every rule has its own help page with more detailed information and relevant resources, with abuse and harassment, hateful conduct, suicide or self-harm, and copyright being next on our list to update. Our focus remains on keeping everyone safe and supporting a healthier public conversation on Twitter.”

The newly reworded rules can be found at: twitter.com/rules

We’ve listed the tweet-sized rules below, without any of their qualifying clutter:

  • You may not threaten violence against an individual or a group of people. We also prohibit the glorification of violence.
  • You may not threaten or promote terrorism or violent extremism.
  • We have zero tolerance for child sexual exploitation on Twitter.
  • You may not engage in the targeted harassment of someone, or incite other people to do so. This includes wishing or hoping that someone experiences physical harm.
  • You may not promote violence against, threaten, or harass other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.
  • You may not promote or encourage suicide or self-harm.
  • You may not post media that is excessively gory or share violent or adult content within live video or in profile or header images. Media depicting sexual violence and/or assault is also not permitted.
  • You may not use our service for any unlawful purpose or in furtherance of illegal activities. This includes selling, buying, or facilitating transactions in illegal goods or services, as well as certain types of regulated goods or services.
  • You may not publish or post other people’s private information (such as home phone number and address) without their express authorization and permission. We also prohibit threatening to expose private information or incentivizing others to do so.
  • You may not post or share intimate photos or videos of someone that were produced or distributed without their consent.
  • You may not use Twitter’s services in a manner intended to artificially amplify or suppress information or engage in behavior that manipulates or disrupts people’s experience on Twitter.
  • You may not use Twitter’s services for the purpose of manipulating or interfering in elections. This includes posting or sharing content that may suppress voter turnout or mislead people about when, where, or how to vote.
  • You may not impersonate individuals, groups, or organizations in a manner that is intended to or does mislead, confuse, or deceive others.
  • You may not violate others’ intellectual property rights, including copyright and trademark.

Notably the rules make no mention of fascist ideologies being unwelcome on Twitter’s platform. Although a logical person might be forgiven for thinking such hateful stuff would naturally be prohibited — based on the core usage principles Twitter is stating here (such as a ban on threatening and/or promoting violence against groups of people including on the basis of their race, ethnicity and so on).

But for Twitter nazi-ism remains, uh, ‘complicated’.

The company recently told Vice it’s working with researchers to consider whether or not it should ban nazis. Which suggests its new ‘pithier’ rules are missing a few qualifying asterisks.

Here, we fixed one:

  • You may not threaten violence against an individual or a group of people*. We also prohibit the glorification of violence**. *unless you’re a nazi **white supremacists totally get a pass while we mull the commercial implications of actually banning racist hate

Another abuse vector that continues to look like a blindspot in Twitter’s rule-book is sex.

While the company does include both ‘gender’ and ‘gender identity’ among the many categories at which it stipulates users must not direct harassment or promote violence, it does not offer the same shield based on a user’s sex. Which appears to have resulted in instances where Twitter has deemed tweets containing violent misogyny not to be in violation of its rules.

Last month a Twitter UK public policy rep told the parliamentary human rights committee, which had raised the issue of the violent sexist tweets, that it believed the inclusion of gender should be enough to protect against instances of violent misogyny, despite the policy having demonstrably failed to do so in the selection of tweets the committee put to it.

We’ve asked Twitter about its continued decision not to prohibit harassment and threats of violence against users based on their sex, as well as its ongoing failure to ban nazis, and will update this report with any response.

In addition to editing down the wording of its rules, Twitter says it has thematically organized them under three new categories — safety, privacy, and authenticity — to make it easier for users to find what they’re looking for.

Though it’s not quite that at-a-glance clear on the rules page itself — which also includes a general preamble; a note on wider content boundaries; a section dealing with spam and security; and an addendum on content visibility restrictions that Twitter may apply in cases where it suspects an account of abuses and is investigating.

But, as ever, algorithmically driven platforms are anything but simple.

Hideously wordy T&Cs have of course been a tech staple for years so it’s good to see Twitter paying greater attention to the acceptable conduct signals it gives users — and at least trying to boil down a clearer essence of what isn’t acceptable behavior, albeit tardily.

But, equally, the refreshed wording of what’s unacceptable makes it plainer that Twitter retains stubborn blind-spots that allow its platform to be a conduit for targeted racial hatred.

Perhaps these blindspots are commercially motivated, in the case of far right ideologies. Or perhaps Twitter’s leadership is still so drunk on its own philosophical koolaid it really has fuzzed the lines between fascism and, er, humanity.

If that’s the case, no pithily written rules will save Twitter from itself.

Don’t forget, this is a company that has been promising to get a handle on its abuse problem for years. Including — just last year — taking a grand stance about wanting to champion ‘conversational health’.

Yet it still can’t screw its courage to the sticking place and say no to nazis.

Twitter’s multi-year struggles to respond to baked in hate might be farcical at this point — if the human impacts of amplifying racial and ethnic hatred weren’t a tragedy for all concerned.

And had it found a moral compass when it was first being warned about the rising tide of amplified abuse, it’s entirely possible one of its most high profile users might not be a geopolitical mega-bully known to retweet fascist propaganda.

Chew on that, Jack.

Verified Expert Growth Marketing Agency: Bell Curve

Bell Curve founder Julian Shapiro describes his team as talented growth marketers who have long-tail expertise across various channels and who aren’t afraid to play part-time therapists. As an agency, they’re comfortable grounding founder expectations by explaining “No, virality isn’t a dependable growth strategy,” but “Hey, we can come up with a better strategy together.”

Bell Curve, the agency, also runs Demand Curve, a remote growth marketing training program that teaches students (and marketing professionals) the ins and outs of performance marketing.

For a glimpse of how Bell Curve thinks about growth marketing, check out Julian’s guest posts about how startups can actually get content marketing to work and how founders can hire a great growth marketer.

What makes Bell Curve different:

“Bell Curve runs a growth bootcamp which we took in February. It radically improved our growth rate, gave us access to enough data to experiment with, and as a result we built an engine for growth that we could continue to tune.” Gil Akos, SF, CEO & Co-founder, Astra
“We run a program where we train companies to run ads on every channel. So, what makes Bell Curve unique is that we, by necessity, have a deep understanding of many more channels than the average agency. We have an archive of tactics and approaches that we’ve accumulated for how to do them just as well as the big ad channels.

In effect, companies come to us when they need expertise beyond Facebook, Google and Instagram, which we still bring to the table, but when they also need to figure out how to make Quora ads profitable, how to get Reddit working, how to get YouTube videos working, Snapchat, Pinterest, etc. These are channels people don’t specialize in enough and so we also bring that long tail of expertise.”

On common misconceptions about growth:

“A common mistake people make coming into growth is thinking that growth hacks are a meaningful thing. The ultimate growth hack is having the self-discipline to pursue growth fundamentals properly and completely. So, things like properly A/B testing, identifying your most enticing value propositions and articulating them clearly and concisely, bringing in deep channel expertise for Facebook, Instagram, Google Search, and a couple of other channels. These are the tenets of making digital growth work. Not one-off hacks.”

Below, you’ll find the rest of the founder reviews, the full interview, and more details like pricing and fee structures. This profile is part of our ongoing series covering startup growth marketing agencies with whom founders love to work, based on this survey and our own research. The survey is open indefinitely, so please fill it out if you haven’t already.


Interview with Bell Curve Founder Julian Shapiro

Yvonne Leow: Can you tell me a little bit about how you got into this game of growth?

Julian Shapiro: I actually started by running growth for friends’ companies because they had a hard time finding experienced growth marketers. After a year and a half of doing this, I realized it’d be a more stable source of income if I formed an agency. It’d also allow me to pattern match so I could exchange learnings among clients and have a better net performance.

It all came together very quickly. Once Bell Curve hit about 10 clients, we had enough strategic and customer acquisition overlap that we were able to share tactics, double our volume of A/B testing, and get better results. It also gave us the ability to hire out a full-fledged team so we could start specializing, whereas, as a contractor, I was too much of a generalist. I wasn’t able to go deep on certain channels, like Snapchat or Pinterest ads.

Twitter bags deep learning talent behind London startup, Fabula AI

Twitter has just announced it has picked up London-based Fabula AI. The deep learning startup has been developing technology to try to identify online disinformation by looking at patterns in how fake stuff vs genuine news spreads online — making it an obvious fit for the rumor-riled social network.

Social media giants remain under increasing political pressure to get a handle on online disinformation to ensure that manipulative messages don’t, for example, get a free pass to fiddle with democratic processes.

Twitter says the acquisition of Fabula will help it build out its internal machine learning capabilities — writing that the UK startup’s “world-class team of machine learning researchers” will feed an internal research group it’s building out, led by Sandeep Pandey, its head of ML/AI engineering.

This research group will focus on “a few key strategic areas such as natural language processing, reinforcement learning, ML ethics, recommendation systems, and graph deep learning” — now with Fabula co-founder and chief scientist, Michael Bronstein, as a leading light within it.

Bronstein is chair in machine learning & pattern recognition at Imperial College London — a position he will retain while leading graph deep learning research at Twitter.

Fabula’s chief technologist, Federico Monti — another co-founder, who began the collaboration that underpins the patented technology with Bronstein while at the University of Lugano, Switzerland — is also joining Twitter.

“We are really excited to join the ML research team at Twitter, and work together to grow their team and capabilities. Specifically, we are looking forward to applying our graph deep learning techniques to improving the health of the conversation across the service,” said Bronstein in a statement.

“This strategic investment in graph deep learning research, technology and talent will be a key driver as we work to help people feel safe on Twitter and help them see relevant information,” Twitter added. “Specifically, by studying and understanding the Twitter graph, comprised of the millions of Tweets, Retweets and Likes shared on Twitter every day, we will be able to improve the health of the conversation, as well as products including the timeline, recommendations, the explore tab and the onboarding experience.”

Terms of the acquisition have not been disclosed.

We covered Fabula’s technology and business plan back in February when it announced its “new class” of machine learning algorithms for detecting what it colloquially badged ‘fake news’.

Fabula has patented what it described as a “new class” of machine learning algorithms, using the emergent field of “Geometric Deep Learning” to detect online disinformation — where the datasets in question are so large and complex that traditional machine learning techniques struggle to find purchase. Which does really sound like a patent designed with big tech in mind.

Its approach to the problem of online disinformation looks at how it spreads on social networks — and therefore who is spreading it — rather than focusing on the content itself, as some other approaches do.

Fabula likens the way ‘fake news’ spreads on social media, versus real news, to “a very simplified model of how a disease spreads on the network”.

One advantage of the approach is that it looks to be language agnostic (at least barring any cultural differences which might also impact how fake news spreads).
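Fabula’s patented model isn’t public, so the following is only a toy illustration of the underlying idea: score a story by the shape of its share cascade rather than by its words. Here is a sketch using networkx, with feature choices that are assumptions on our part rather than Fabula’s algorithm:

```python
# Toy illustration of spread-based (content-agnostic) disinformation signals.
# NOT Fabula's patented algorithm; the features here are assumptions meant
# to show what "classifying by propagation pattern" can mean in practice.
import networkx as nx

def spread_features(cascade: nx.DiGraph, root: str) -> dict:
    """Summarize a share cascade whose edges point from sharer to resharer."""
    depths = nx.single_source_shortest_path_length(cascade, root)
    max_depth = max(depths.values())
    return {
        "size": cascade.number_of_nodes(),  # how many accounts shared it
        "depth": max_depth,                 # longest reshare chain
        "breadth": max(                     # widest single "generation"
            sum(1 for d in depths.values() if d == level)
            for level in range(max_depth + 1)
        ),
    }

# Toy cascade: a posted a story; b and d reshared from a; c reshared from b.
g = nx.DiGraph([("a", "b"), ("b", "c"), ("a", "d")])
print(spread_features(g, root="a"))  # {'size': 4, 'depth': 2, 'breadth': 2}
# Deep, narrow chains vs. shallow, broad bursts are the kind of structural,
# language-independent signal a graph model can learn to separate.
```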

Back in February the startup told us it was aiming to build an open, decentralised “truth-risk scoring platform” — akin to a credit referencing agency, just focused on content not cash.

It’s not clear from Twitter’s blog post whether the core technologies it will be acquiring with Fabula will now stay locked up within its internal research department — or be shared more widely, to help other platforms grappling with online disinformation challenges.

The startup had intended to offer an API for platforms and publishers later this year.

But of course building a platform is a major undertaking. And, in the meanwhile, Twitter — with its pressing need to better understand the stuff its network spreads — came calling.

A source close to the matter told us that Fabula’s founders decided to sell to Twitter, rather than pushing for momentum behind a vision of a decentralized, open platform, because the exit offered them more opportunity to have “real and deep impact, at scale”.

Though it is also still not certain what Twitter will end up doing with the technology it’s acquiring. And it at least remains possible that Twitter could choose to open the technology up across platforms.

“That’ll be for the team to figure out with Twitter down the line,” our source added.

A spokesman for Twitter did not respond directly when we asked about its plans for the patented technology but he told us: “There’s more to come on how we will integrate Fabula’s technology where it makes sense to strengthen our systems and operations in the coming months. It will likely take us some time to be able to integrate their graph deep learning algorithms into our ML platform. We’re bringing Fabula in for the team, tech and mission, which are all aligned with our top priority: Health.”

Startups net more than capital with NBA players as investors

If you’re a big basketball fan like me, you’ll be glued to the TV watching the Golden State Warriors take on the Toronto Raptors in the NBA finals. (You might be surprised who I’m rooting for.)

In honor of the big games, we took a shot at breaking down investment activities of the players off the court. Last fall, we did a story highlighting some of the sport’s more prolific investors. In this piece, we’ll take a deeper dive into just what having an NBA player as a backer can do for a startup beyond the capital involved. But first, here’s a chart of some startups funded by NBA players, both former and current.

[Chart: startups funded by NBA players, former and current]

In February, we covered how digital sports media startup Overtime had raised $23 million in a Series B round of funding led by Spark Capital. Former NBA Commissioner David Stern was an early investor and advisor in the company (putting money in the company’s seed round). Golden State Warriors player Kevin Durant invested as part of the company’s Series A in early 2018 via his busy investment vehicle, Thirty Five Ventures. And then, Carmelo Anthony invested (via his Melo7 Tech II fund) earlier this year. Other NBA-related investors include Baron Davis, Andre Iguodala and Victor Oladipo, and other non-NBA backers include Andreessen Horowitz and Greycroft.

I talked to Overtime’s CEO, 27-year-old Zack Weiner, about how the involvement of so many NBA players came about. I also wondered what they brought to the table beyond their cash. But before we get there, let me explain a little more about what Overtime does.

Founded in late 2016 by Dan Porter and Weiner, the Brooklyn company has raised a total of $35.3 million. The pair founded the company after observing “how larger, legacy media companies, such as ESPN, were struggling” with attracting the younger viewer who was tuning into the TV less and less “and consuming sports in a fundamentally different way.”

So they created Overtime, which features about 25 to 30 sports-related shows across several platforms (including YouTube, Snapchat, Instagram, Facebook, TikTok, Twitter and Twitch) aimed at millennials and Gen Z. Weiner estimates the company’s programs get more than 600 million video views every month.

In terms of attracting NBA investors, Weiner told me each situation was a little different, but with one common theme: “All of them were fans of Overtime before we even met them…They saw what we were doing as the new wave of sports media and wanted to get involved. We didn’t have to have 10 meetings for them to understand what we were doing. This is the world they live and breathe.”

So how is having NBA players as investors helping the company grow? Well, for one, they can open a lot of doors, noted Weiner.

“NBA players are very powerful people and investors,” he said. “They’ve helped us make connections in music, fashion and all things tangential to sports. Some have created content with us.”

In addition, their social clout has helped with exposure. Their posting or commenting on Instagram gives the company credibility, Weiner said.

“Also just, in general, getting their perspectives and opinions,” he added. “A lot of our content is based on working with athletes, so they understand what athletes want and are interested in being a part of.”

It’s not just sports-related startups that are attracting the interest of NBA players. I also talked with Hussein Fazal, the CEO of SnapTravel, which recently closed a $21.2 million Series A that included participation from Telstra Ventures and Golden State Warriors point guard Stephen Curry.

Founded in 2016, Toronto-based SnapTravel offers online hotel booking services over SMS, Facebook Messenger, Alexa, Google Home and Slack. It’s driven more than $100 million in sales, according to Fazal, and is seeing its revenue grow about 35% quarter over quarter.

Like Weiner, Fazal told me that Curry’s being active on social media about SnapTravel helped draw positive attention and “add a lot of legitimacy” to his company.

“If you’re an end-consumer about to spend $1,000 on a hotel booking, you might be a little hesitant about trusting a newer brand like ours,” he said. “But if they go to our home page and see our investors, that holds some weight in the eyes of the public, and helps show we’re not a fly-by-night company.”

Another way Curry’s involvement has helped SnapTravel is in the recruitment and retention of employees. Curry once spent hours at the office, meeting with employees and doing a Q&A.

“It was really cool,” Fazal said. “And it helps us stand out from other startups when hiring.”

Regardless of who wins the series, it’s clear that startups with NBA investors on their team have a competitive advantage. (Still, Go Raptors!)

UK Internet attitudes study finds public support for social media regulation

UK telecoms regulator Ofcom has published a new joint report and stat-fest on Internet attitudes and usage with the national data protection watchdog, the ICO — a quantitative study to be published annually which they’re calling the Online Nation report.

The new structure hints at the direction of travel for online regulation in the UK, following government plans set out in a recent whitepaper to regulate online harms — which will include creating a new independent regulator to ensure Internet companies meet their responsibilities.

Ministers are still consulting on whether this should be a new or existing body. But both Ofcom and the ICO have relevant interests in being involved — so it’s fitting to see joint working going into this report.

“As most of us spend more time than ever online, we’re increasingly worried about harmful content — and also more likely to come across it,” writes Yih-Choung Teh, group director of strategy and research at Ofcom, in a statement. “For most people, those risks are still outweighed by the huge benefits of the internet. And while most internet users favour tighter rules in some areas, particularly social media, people also recognise the importance of protecting free speech – which is one of the internet’s great strengths.”

While it’s not yet clear exactly what form the UK’s future Internet regulator will take, the Online Nation report does suggest a flavor of the planned focus.

The report, which is based on responses from 2,057 adult internet users and 1,001 children, flags as a top-line finding that eight in ten adults have concerns about some aspects of Internet use and further suggests the proportion of adults concerned about going online has risen from 59% to 78% since last year (though its small-print notes this result is not directly comparable with last year’s survey so “can only be interpreted as indicative”).

Another stat being highlighted is a finding that 61% of adults have had a potentially harmful online experience in the past year — rising to 79% among children (aged 12-15). (Albeit with the caveat that it’s using a “broad definition”, with experiences ranging from “mildly annoying to seriously harmful”.)

While a full 83% of polled adults are found to have expressed concern about harms to children on the Internet.

The UK government, meanwhile, has made child safety a key focus of its push to regulate online content.

At the same time the report found that most adults (59%) agree that the benefits of going online outweigh the risks, and 61% of children think the internet makes their lives better.

While Ofcom’s annual Internet reports of years past often had a fairly dry flavor, tracking usage such as time spent online on different devices and particular services, the new joint study puts more of an emphasis on attitudes to online content and how people understand (or don’t) the commercial workings of the Internet — delving into more nuanced questions, such as by asking web users whether they understand how and why their data is collected, and assessing their understanding of ad-supported business models, as well as registering relative trust in different online services’ use of personal data.

The report also assesses public support for Internet regulation — and on that front it suggests there is increased support for greater online regulation in a range of areas. Specifically it found that most adults favour tighter rules for social media sites (70% in 2019, up from 52% in 2018); video-sharing sites (64% v. 46%); and instant-messaging services (61% v. 40%).

At the same time it says nearly half (47%) of adult internet users expressed recognition that websites and social media platforms play an important role in supporting free speech — “even where some people might find content offensive”. So the subtext there is that future regulation of harmful Internet content needs to strike the right balance.

On managing personal data, the report found most Internet users (74%) say they feel confident to do so. A majority of UK adults are also happy for companies to collect their information under certain conditions — vs over a third (39%) saying they are not happy for companies to collect and use their personal information.

Those conditions look to be key, though — with only small minorities reporting they are happy for their personal data to be used to program content (17% of adult Internet users were okay with this); and to target them with ads (only 18% didn’t mind that, so most do).

Trust in online services to protect user data and/or use it responsibly also varies significantly, per the report findings — with social media definitely in the dog house on that front. “Among ten leading UK sites, trust among users of these services was highest for BBC News (67%) and Amazon (66%) and lowest for Facebook (31%) and YouTube (34%),” the report notes.

Despite low privacy trust in tech giants, more than a third (35%) of the total time spent online in the UK is on sites owned by Google or Facebook.

“This reflects the primacy of video and social media in people’s online consumption, particularly on smartphones,” it writes. “Around nine in ten internet users visit YouTube every month, spending an average of 27 minutes a day on the site. A similar number visit Facebook, spending an average of 23 minutes a day there.”

And while the report records relatively high awareness that personal data collection is happening online — finding that 71% of adults were aware of cookies being used to collect information through websites they’re browsing (falling to 60% for social media accounts; and 49% for smartphone apps) — most (69%) also reported accepting terms and conditions without reading them.

So, again, mainstream public awareness of how personal data is being used looks questionable.

The report also flags limited understanding of how search engines are funded — despite the bald fact that around half of UK online advertising revenue comes from paid-for search (£6.7BN in 2018). “[T]here is still widespread lack of understanding about how search engines are funded,” it writes. “Fifty-four per cent of adult internet users correctly said they are funded by advertising, with 18% giving an incorrect response and 28% saying they did not know.”

The report also highlights the disconnect between time spent online and digital ad revenue generated by the adtech duopoly, Google and Facebook — which it says together generated an estimated 61% of UK online advertising revenue in 2018; a share of revenue that it points out is far greater than time spent (35%) on their websites (even as those websites are the most visited by adults in the UK).

As in previous years of Ofcom ‘state of the Internet’ reports, the Online Nation study also found that Facebook use still dominates the social media landscape in the UK.

Though use of the eponymous service continues falling (from 95% of social media users in 2016 to 88% in 2018). Even as use of other Facebook-owned social properties — Instagram and WhatsApp — grew over the same period.


The report also recorded an increase in people using multiple social services — with just a fifth of social media users only using Facebook in 2018 (down from 32% in 2016). Though as noted above, Facebook still dominates time spent, clocking up way more time (~23 minutes) per user per day on average vs Snapchat (around nine minutes) and Instagram (five minutes).

A large majority (74%) of Facebook users also still check it at least once a day.

Overall, the report found that Brits have a varied online diet, though — on average spending a minute or more each day on 15 different internet sites and apps. Even as online ad revenues are not so equally distributed.

“Sites and apps that were not among the top 40 sites ranked by time spent accounted for 43% of average daily consumption,” the report notes. “Just over one in five internet users said that in the past month they had used ‘lots of websites or apps they’ve never used before’ while a third (36%) said they ‘only use websites or apps they’ve used before’.”

There is also variety in how Brits search for stuff online: while 97% of adult internet users still use search engines, the report found a range of other services also in the mix.

It found that nearly two-thirds of people (65%) go more often to specific sites to find specific things, such as a news site for news stories or a video site for videos; while 30% of respondents said they used to have a search engine as their home page but no longer do.

The high proportion of searches being registered on shopping websites/apps (61%) also looks interesting in light of the 2017 EU antitrust ruling against Google Shopping — when the European Commission found Google had demoted rival shopping comparison services in search results, while promoting its own, thereby undermining rivals’ ability to gain traffic and brand recognition.

The report findings also indicate that use of voice-based search interfaces remains relatively low in the UK, with just 10% using voice assistants on a mobile phone — and even smaller percentages tapping into smart speakers (7%) or voice AIs on connected TVs (3%).

In another finding, the report suggests recommendation engines play a major part in content discovery.

“Recommendation engines are a key way for platforms to help people discover content and products — 70% of viewing to YouTube is reportedly driven by recommendations, while 35% of what consumers purchase on Amazon comes from recommendations,” it writes. 

In overarching aggregate, the report says UK adults now spend the equivalent of almost 50 days online per year.

While, each week, 44 million Brits use the internet to send or receive email; 29 million send instant messages; 30 million bank or pay bills via the internet; 27 million shop online; and 21 million people download information for work, school or university.

The full report can be found here.

Fundraising 101: How to trigger FOMO among VCs

Let’s go beyond the high-level fundraising advice that fills VC blogs. If you have a compelling business and have educated yourself on crafting a pitch deck and getting warm intros to VCs, there are still specific questions about the strategy to follow for your fundraise.

How can you make your round “hot” and trigger a fear of missing out (FOMO) among investors? How can you fundraise faster, to reduce the distraction fundraising poses to running your business?

Unsurprisingly, I’ve noticed that experienced founders tend to be more systematic in the tactics they employ to raise capital. So I asked several who have raised tens (or hundreds) of millions in VC funding to share specific strategies for raising money on their terms. Here’s their advice.

(The three high-profile CEOs who agreed to share their specific playbooks requested anonymity so VCs don’t know which is theirs. I’ve nicknamed them Founder A, Founder B, and Founder C.)

Have additional fundraising tactics to share? Email me at [email protected].


You need to create a market for your shares

“You’re trying to make a market for your equity. In order to make a market, you need multiple people lining up at the same time.”

That advice from Atrium CEO Justin Kan (a co-founder of companies like Twitch and a former partner at Y Combinator) was reiterated by all the entrepreneurs I interviewed. Fundraising should be a sprint, not a marathon; otherwise the loss of momentum will make closing the round more difficult.

Meet Projector, collaborative design software for the Instagram age

Mark Suster of Upfront Ventures bonded with Trevor O’Brien in prison. The pair, Suster was quick to clarify, were on site at a correctional facility in 2016 to teach inmates about entrepreneurship as part of a workshop hosted by Defy Ventures, a nonprofit organization focused on addressing the issue of mass incarceration.

They hit it off, sharing perspectives on life and work, Suster recounted to TechCrunch. So when O’Brien, a former director of product management at Twitter, mentioned he was in the early days of building a startup, Suster listened.

Three years later, O’Brien is ready to talk about the idea that captured the attention of the Bird, FabFitFun and Ring investor. It’s called Projector.

It’s the brainchild of a product veteran (O’Brien) and a gaming industry engineer who became Twitter’s vice president of engineering (Projector co-founder Jeremy Gordon), a combination that has given rise to an experiential and well-designed platform. Projector is browser-based, real-time collaborative design software tailored for creative teams that feels and looks like a mix of PowerPoint, Google Docs and Instagram. Though it’s still months away from a full-scale public launch, the team recently began inviting potential users to test the product for bugs.

“We want to reimagine visual communication in the workplace by building these easier to use tools and giving creative powers to the non-designers who have great stories to tell and who want to make a difference,” O’Brien told TechCrunch. “They want change to happen and they need to be empowered with the right kinds of tools.”

Today, Projector is a lean team of 13 employees based in downtown San Francisco. They’ve kept quiet since late 2016 despite closing two rounds of venture capital funding. The first, a $4 million seed round, was led by Upfront’s Suster, as you may have guessed. The second, a $9 million Series A, was led by Mayfield in 2018. Hunter Walk of Homebrew, Jess Verrilli of #Angels and Nancy Duarte of Duarte, Inc. are also investors in the business, among others.

O’Brien leads Projector as chief executive officer alongside co-founder and chief technology officer Gordon. Years ago, O’Brien was pursuing a PhD in computer graphics and information visualization at Brown University when he was recruited to Google’s competitive associate product manager program. He dropped out of Brown and began a career in tech that would include stints at YouTube, Twitter, Coda and, finally, his very own business.

O’Brien and Gordon crossed paths at Twitter in 2013 and quickly realized a shared history in the gaming industry. O’Brien had spent one year as an engineer at a games startup called Mad Doc Software, while Gordon had served as the chief technology officer at Sega Studios. Gordon left Twitter in 2014 and joined Redpoint Ventures as an entrepreneur-in-residence before O’Brien pitched him on an idea that would become Projector.

Projector co-founders Jeremy Gordon (left), Twitter’s former vice president of engineering, and Trevor O’Brien, Twitter’s former director of product management

“We knew we wanted to create a creative platform but we didn’t want to create another creative platform for purely self-expression, we wanted to do something that was a bit more purposeful,” O’Brien said. “At the end of the day, we just wanted to see good ideas succeed. And with all of those good ideas, succeeding typically starts with them being presented well to their audience.”

Initially, Projector is targeting employees within creative organizations and marketing firms, who are frequently tasked with creating visually compelling presentations. The tool suite is free for now and will be until it’s been sufficiently tested for bugs and has fully found its footing. O’Brien says he’s not sure just yet how the team will monetize Projector, but predicts they’ll adopt Slack’s per user monthly subscription pricing model.

As original and user-friendly as it may be, Projector is up against great competition right out of the gate. In the startup landscape, it’s got Canva, a graphic design platform valued at $2.5 billion earlier this week with a $70 million financing. On the old guard side, it’s got Adobe, which sells a widely used suite of visual communication and graphic design tools. Not to mention Prezi, Figma and, of course, Microsoft’s PowerPoint, which is total crap but still used by millions of people.

“There are many tools scratching at the surface, but there’s not one visual communications tool that wins them all,” Suster said of his investment in Projector.

Projector is still in its very early days. The company currently has just two integrations: Unsplash for free stock images and Giphy for GIFs. O’Brien would eventually like to incorporate iconography, typography and sound to liven up Projector’s visual presentation capabilities.

The ultimate goal, aside from generally improving workplace storytelling, is to make crafting presentations fun, because shouldn’t a corporate slideshow or even a startup’s pitch be as entertaining as scrolling through your Instagram feed?

“We wanted to try to create something that doesn’t feel like work,” O’Brien said.

Indonesia restricts WhatsApp and Instagram usage following deadly riots

Indonesia is the latest nation to bring the hammer down on social media, after the government restricted the use of WhatsApp and Instagram following deadly riots yesterday.

Numerous Indonesia-based users are today reporting difficulties sending multimedia messages via WhatsApp, which is one of the country’s most popular chat apps, while the hashtag #instagramdown is trending among the country’s Twitter users due to problems accessing the Facebook-owned photo app.

Wiranto, a coordinating minister for political, legal and security affairs, confirmed in a press conference that the government is limiting access to social media and “deactivating certain features” to maintain calm, according to a report from Coconuts.

Rudiantara, the communications minister of Indonesia and a critic of Facebook, explained that users “will experience lag on Whatsapp if you upload videos and photos.”

Facebook — which operates both WhatsApp and Instagram — didn’t explicitly confirm the blockages, but it did say it has been in communication with the Indonesian government.

“We are aware of the ongoing security situation in Jakarta and have been responsive to the Government of Indonesia. We are committed to maintaining all of our services for people who rely on them to communicate with their loved ones and access vital information,” a spokesperson told TechCrunch.

A number of Indonesia-based WhatsApp users confirmed to TechCrunch that they are unable to send photos, videos and voice messages through the service. Those restrictions are lifted when using Wi-Fi or mobile data services through a VPN, the people confirmed.

The restrictions come as Indonesia grapples with political tension following the release of the results of its presidential election on Tuesday. Defeated candidate Prabowo Subianto said he will challenge the result in the constitutional court.

Riots broke out in the capital, Jakarta, last night, killing at least six people and leaving more than 200 injured. Following this, it is alleged that misleading information and hoaxes about the nature of the riots and the people who participated in them began to spread on social media services, according to local media reports.

Protesters hurl rocks during a clash with police in Jakarta on May 22, 2019. Indonesian police said on May 22 they were probing reports that at least one demonstrator was killed in clashes that broke out in the capital Jakarta overnight after a rally opposed to President Joko Widodo’s re-election. (Photo by ADEK BERRY / AFP)

For Facebook, seeing its services forcefully cut off in a region is no longer a rare incident. The company, which is grappling with the spread of false information in many markets, faced a similar restriction in Sri Lanka in April, when the service was completely banned for days amid terrorist strikes in the nation. India, which just this week concluded its general election, has expressed concerns over Facebook’s inability to contain the spread of false information on WhatsApp, which is its largest chat app with over 200 million monthly users.

Indonesia’s Rudiantara expressed a similar concern earlier this month.

“Facebook can tell you, ‘We are in compliance with the government’. I can tell you how much content we requested to be taken down and how much of it they took down. Facebook is the worst,” he told a House of Representatives Commission last week, according to the Jakarta Post.