Sri Lanka blocks social media sites after deadly explosions

The government of Sri Lanka has temporarily blocked access to several social media services following deadly explosions that ripped through the country, killing at least 207 people and injuring hundreds more.

Eight bombings were reported, including attacks on Easter services at three churches, during the holiest weekend of the Christian calendar.

In a brief statement, the Sri Lankan president’s secretary Udaya Seneviratne said the government has “decided to temporarily block social media sites including Facebook and Instagram,” in an effort to curb “false news reports.”

The government said the services would be restored once investigations into the attacks have concluded.

Nalaka Gunawardene, a science writer and Sri Lankan native, confirmed in a tweet that Facebook-owned WhatsApp was also blocked in the country. Others reported that YouTube was inaccessible. But some said they were still able to use WhatsApp.

Spokespeople for Facebook and Google did not immediately comment.

It’s a rare but not unprecedented step for a government to block access to widely used sites and services. Although Sri Lanka’s move is ostensibly aimed at preventing the spread of false news, it’s likely to have a chilling effect on freedom of speech and on efforts to communicate with loved ones.

Sri Lanka, like other emerging nations, has battled misinformation before. The government has complained that false news shared on Facebook helped spread hatred and violence against the country’s Muslim minority. Other countries, like India, say the encrypted messaging app WhatsApp has contributed to the spread of misinformation, prompting the company to limit how many groups a message can be forwarded to.

Iran and Turkey have also blocked access to social media sites in recent years amid protests and political unrest.

Sri Lanka’s Prime Minister Ranil Wickremesinghe has described the explosions as a terrorist incident.

Instagram hides Like counts in leaked design prototype

“We want your followers to focus on what you share, not how many likes your posts get. During this test, only the person who shares a post will see the total number of likes it gets.” That’s how Instagram describes a seemingly small design change test with massive potential impact on users’ well-being.

Hiding Like counts could reduce herd mentality, where people just Like what’s already got tons of Likes. It could reduce the sense of competition on Instagram, since users won’t compare their own counts with those of more popular friends or superstar creators. And it could encourage creators to post what feels most authentic rather than trying to rack up Likes for everyone to see.

The design change test was spotted by Jane Manchun Wong, the prolific reverse engineering expert and frequent TechCrunch tipster who’s spotted tons of Instagram features before they’re officially confirmed or launched. Wong discovered the design change test in Instagram’s Android code and was able to generate the screenshots above.

You can see on the left that the Instagram feed post lacks a Like count, but still shows a few faces and the names of other people who’ve Liked it. Users are alerted that only they will see their posts’ Like counts; no one else will. Many users delete posts that don’t immediately get ‘enough’ Likes, or post to their fake ‘Finstagram’ accounts if they don’t think they’ll be proud of the hearts they collect. Hiding Like counts might get users posting more, since they’ll be less self-conscious.

Instagram confirmed to TechCrunch that this design is an internal prototype that’s not visible to the public yet. A spokesperson told us: “We’re not testing this at the moment, but exploring ways to reduce pressure on Instagram is something we’re always thinking about.” Other features we’ve reported on in the same phase, such as video calling, soundtracks for Stories, and the app’s time well spent dashboard, all went on to receive official launches.

Instagram’s prototypes (from left): feed post reactions, Stories lyrics, and Direct stickers

Meanwhile, Wong has also recently spotted several other Instagram prototypes lurking in its Android code. Those include chat thread stickers for Direct messages, augmented reality filters for Direct Video calls, simultaneous co-watching of recommended videos through Direct, karaoke-style lyrics that appear synced to soundtracks in Stories, emoji reactions to feed posts, and a shopping bag for commerce.

It appears that there’s no plan to hide follower counts on user profiles, which are the true measure of popularity but also serve to distinguish great content creators and help marketers assess their worth. Hiding Likes could just put more of a spotlight on follower and comment counts. And even if users don’t see Like counts, Likes still massively impact the feed’s ranking algorithm, so creators will still have to battle for them to be seen.
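
To see why hidden Likes would still shape what surfaces, here's a toy sketch of engagement-weighted feed ranking. The weights and signals are invented purely for illustration; Instagram's actual algorithm is not public.

```python
# Toy sketch of engagement-weighted feed ranking. Weights and signals
# are invented for illustration; Instagram's real algorithm isn't public.
def feed_score(likes: int, comments: int, age_hours: float) -> float:
    engagement = likes + 3.0 * comments  # comments weighted heavier than Likes
    decay = 0.5 ** (age_hours / 6.0)     # score halves every six hours
    return engagement * decay

posts = [
    {"id": "sunset", "likes": 120, "comments": 8, "age_hours": 2.0},
    {"id": "recipe", "likes": 40, "comments": 30, "age_hours": 1.0},
]
ranked = sorted(posts, reverse=True,
                key=lambda p: feed_score(p["likes"], p["comments"], p["age_hours"]))
print([p["id"] for p in ranked])  # Likes still move posts up, seen or not
```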

Close-up of Instagram’s design for feed posts without Like counters

The change matches a growing belief that Like counts can be counter-productive or even harmful to users’ psyches. Instagram co-founder Kevin Systrom told me back in 2016 that getting away from the pressure of Like counts was one impetus for Instagram launching Stories. Last month, Twitter began testing a design which hides retweet counts behind an extra tap to similarly discourage inauthentic competition and herd mentality. And Snapchat has never shown Like counts or even follower counts, which has made it feel less stressful but also less useful for influencers.

Narcissism, envy spiraling, and low self-image can all stem from staring at Like counts. They’re a constant reminder of the status hierarchies that have emerged from social networks. For many users, at some point it stopped being fun and started to feel more like working in the heart mines. If Instagram rolls the feature out, it could put the emphasis back on sharing art and self-expression, not trying to win some popularity contest.

Mueller report details the evolution of Russia’s troll farm as it began targeting US politics

BRENDAN SMIALOWSKI/AFP/Getty Images

On Thursday, Attorney General William Barr released the long-anticipated Mueller report. With it comes a useful overview of how Russia leveraged U.S.-based social media platforms to achieve its political ends.

While we’ve yet to find too much in the heavily redacted text that we didn’t already know, Mueller does recap efforts undertaken by Russia’s mysterious Internet Research Agency or “IRA” to influence the outcome of the 2016 presidential election. The IRA attained infamy prior to the 2016 election after it was profiled in depth by the New York Times in 2015. (That piece is still well worth a read.)

Considering the success the shadowy group managed to achieve in infiltrating U.S. political discourse — and the degree to which those efforts have reshaped how we talk about the world’s biggest tech platforms — the events that led us here are worth revisiting.

IRA activity begins in 2014

The special counsel reports that in the spring of 2014, the IRA started to “consolidate U.S. operations within a single general department,” internally nicknamed the “translator.” The report indicates that this is when the group began to “ramp up” its U.S. operations with its sights set on the 2016 presidential election.

At this time, the IRA was already running operations across various social media platforms, including Facebook, Twitter and YouTube. Later it would expand its operations to Instagram and Tumblr as well.

Stated anti-Clinton agenda

As the report details, in the early stages of its U.S.-focused political operations the IRA mostly impersonated U.S. citizens, but into 2015 it shifted its strategy to creating larger pages and groups that pretended to represent U.S.-based interests and causes, including “anti-immigration groups, Tea Party activists, Black Lives Matter [activists]” among others.

In early 2016, the IRA offered internal guidance to its specialists to “use any opportunity to criticize Hillary [Clinton] and the rest (except Sanders and Trump – we support them).”

While much of the IRA activity that we’ve reported on directly sowed political discord on divisive domestic issues, the group also had a clearly stated agenda to aid the Trump campaign. When the mission strayed, one IRA operative was criticized for a “lower number of posts dedicated to criticizing Hillary Clinton,” with the goal of intensifying criticism of Clinton called “imperative.”

That message continued to ramp up on Facebook into late 2016, even as the group also continued its efforts in issue-based activist groups that, as we’ve learned, sometimes inspired or intersected with real-life events. The IRA bought a total of 3,500 ads on Facebook for $100,000 — a little less than $30 per ad. Some of the most successful IRA groups had hundreds of thousands of followers. As we know, Facebook shut down many of these operations in August 2017.

IRA operations on Twitter

The IRA used Twitter as well, though its strategy there produced some notably different results. The group’s biggest wins came when it managed to successfully interact with many members of the Trump campaign, as was the case with @TEN_GOP which posed as the “Unofficial Twitter of Tennessee Republicans.” That account earned mentions from a number of people linked to the Trump campaign, including Donald Trump Jr., Brad Parscale and Kellyanne Conway.

As the report describes, and has been previously reported, that account managed to get the attention of Trump himself:

“On September 19, 2017, President Trump’s personal account @realDonaldTrump responded to a tweet from the IRA-controlled account @10_gop (the backup account of @TEN_GOP, which had already been deactivated by Twitter). The tweet read: “We love you, Mr. President!”

The special counsel also notes that “Separately, the IRA operated a network of automated Twitter accounts (commonly referred to as a bot network) that enabled the IRA to amplify existing content on Twitter.”

Real life events

The IRA leveraged both Twitter and Facebook to organize real-life events, including three events in New York in 2016 and a series of pro-Trump rallies across Florida and Pennsylvania in the months leading up to the election. That activity included one event in Miami that then-candidate Trump’s campaign promoted on his Facebook page.

While we’ve been following revelations around the IRA’s activity for years now, Mueller’s report offers a useful bird’s-eye overview of how the group’s operations wrought havoc on social networks, achieving mass influence at very little cost. The entire operation exemplified the greatest weaknesses of our social networks — weaknesses that, until companies like Facebook and Twitter began to reckon with their role in facilitating Russian election interference, were widely regarded as their greatest strengths.

Facebook now says its password leak affected ‘millions’ of Instagram users

Facebook has confirmed its password-related security incident last month now affects “millions” of Instagram users, not “tens of thousands” of users as first thought.

The social media giant confirmed the new information in its updated blog post, first published on March 21.

“We discovered additional logs of Instagram passwords being stored in a readable format,” the company said. “We now estimate that this issue impacted millions of Instagram users. We will be notifying these users as we did the others.”

“Our investigation has determined that these stored passwords were not internally abused or improperly accessed,” the updated post said, but the company still has not said how it made that determination.

The company did not say exactly how many millions were affected, however.

Last month, Facebook admitted it had inadvertently stored “hundreds of millions” of user account passwords in plaintext for years, in some cases dating as far back as 2012. The company said the unencrypted passwords were stored in logs accessible to some 2,000 engineers and developers, though the data never leaked outside the company. Facebook has still not explained exactly how the bug occurred.
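
The failure pattern behind bugs like this is common across the industry: request or debug logs capture payloads that happen to contain credentials. Here's a minimal, hypothetical sketch of the pattern and the usual fix — this is not Facebook's actual code.

```python
# Hypothetical sketch of accidental plaintext-credential logging
# and the usual fix (field redaction). Not Facebook's actual code.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("auth")

SENSITIVE_FIELDS = {"password", "token", "session_cookie"}

def redact(payload: dict) -> dict:
    """Return a copy of the payload with sensitive values masked."""
    return {k: ("<redacted>" if k in SENSITIVE_FIELDS else v)
            for k, v in payload.items()}

def handle_login(payload: dict) -> None:
    # BUG pattern: log.info("login attempt: %s", payload) would write the
    # plaintext password into log files readable by engineers.
    log.info("login attempt: %s", redact(payload))

handle_login({"user": "alice", "password": "hunter2"})
```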

Facebook posted the update at 10am ET — an hour before the Special Counsel’s report into Russian election interference was published.

We asked the company when it learned of the new scale of the password leak and will update if we hear back.

How-to video maker Jumprope launches to leapfrog YouTube

Sick of pausing and rewinding YouTube tutorials to replay that tricky part? Jumprope is a new instructional social network offering a powerful how-to video slideshow creation tool. Jumprope helps people make step-by-step guides to cooking, beauty, crafts, parenting and more, using narrated looping GIFs for each phase. And creators can export their whole lesson for sharing on Instagram, YouTube, or wherever.

Jumprope officially launches its iOS app today with plenty of how-tos for making chocolate chip bars, Easter eggs, flower boxes, or fierce eyebrows. “By switching from free-form linear video to something much more structured, we can make it much easier for people to share their knowledge and hacks,” says Jumprope co-founder and CEO Jake Poses.

The rise of Snapchat Stories and Pinterest has made people comfortable jumping on camera and showing off their niche interests. By building a new medium, Jumprope could become the home for rapid-fire learning. And since viewers will have tons of purchase intent for the makeup, art supplies, or equipment they’ll need to follow along, Jumprope could make serious cash off of ads or affiliate commerce.

The opportunity to bring instruction manuals into the mobile video era has attracted a $4.5 million seed round led by Lightspeed Venture Partners and joined by strategic angels like Adobe Chief Product Officer Scott Belsky and Thumbtack co-founders Marco Zappacosta and Jonathan Swanson. People are already devouring casual education content on HGTV and the Food Network, but Jumprope democratizes its creation.

Jumprope co-founders (from left): CTO Travis Johnson and CEO Jake Poses

The idea came from a deeply personal place for Poses. “My brother has pretty severe learning differences, and so growing up with him gave me this appreciation for figuring out how to break things down and explain them to people,” Poses reveals. “I think that attached me to this problem of ‘how do you organize information so it’s simple and easy to understand?’ Lots and lots of people have this information trapped in their heads because there isn’t a way to easily share that.”

Poses was formerly the VP of product at Thumbtack, where he helped grow the company from 8 to 500 people and a $1.25 billion valuation. He teamed up with AppNexus’ VP of engineering Travis Johnson, who’d been leading a 50-person team of coders. “The product takes people who have knowledge and passion but not the skill to make video [and gives them] guard rails that make it easy to communicate,” Poses explains.

Disrupting incumbents like YouTube and breaking their grip on viewers might take years, but Jumprope sees its guide creation and export tool as a way to infiltrate them and steal their users. That strategy mirrors how TikTok’s watermarked exports colonized the web.

How To Make A Jumprope.

Jumprope lays out everything you’ll need to upload, including a cover image, introduction video, supplies list, and all your steps. For each, you’ll record a video that you can then enhance with voice over, increased speed, music, and filters.
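
To make that structure concrete, here's a minimal, hypothetical data model for such a guide. Field names are invented for illustration; this is not Jumprope's actual schema.

```python
# Hypothetical data model for a step-by-step guide, based on the
# components described above. Not Jumprope's actual schema.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Step:
    video: str                        # short looping clip for this step
    voice_over: Optional[str] = None  # optional narration track
    speed: float = 1.0                # playback-speed multiplier
    music: Optional[str] = None
    filters: list[str] = field(default_factory=list)

@dataclass
class Guide:
    title: str
    cover_image: str
    intro_video: str
    supplies: list[str]
    steps: list[Step]

guide = Guide(
    title="Chocolate chip bars",
    cover_image="cover.jpg",
    intro_video="intro.mp4",
    supplies=["flour", "butter", "chocolate chips"],
    steps=[Step("mix.mp4", voice_over="Cream the butter and sugar"),
           Step("bake.mp4", speed=2.0)],
)
```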

Creators are free to suggest their own products or enter affiliate links to monetize their videos. Once it has enough viewers, Jumprope plans to introduce advertising, but it could also add tipping, subscriptions, paid how-tos, or brand sponsorship options down the line. Creators can export their lessons with five different border themes and seven different aspect ratios for posting to Instagram’s feed, IGTV, Snapchat Stories, YouTube, or embedding on their blog.

“Like with Stories, you basically tap through at your own pace” Poses says of the viewing experience. Jumprope offers some rudimentary discovery through categories, themed collections, or what’s new and popular. The startup has done extensive legwork to sign up featured creators in all its top categories. That means Jumprope’s catalog is already extensive, with food guides ranging from cinnabuns to pot roasts to how to perfectly chop an onion. 

“You’re not constantly dealing with the frustration of cooking something and trying to start and stop the video with greasy hands. And if you don’t want all the details, you can tap through it much faster” than trying to skim a YouTube video or blog post, Poses tells me. Next the company wants to build a commenting feature where you can leave notes, substitution suggestions, and more on each step of a guide.

Poses claims no one is building a direct competitor to Jumprope’s mobile how-to video editor. But he admits it will be an uphill climb to displace viewership on Instagram and YouTube. One challenge facing Jumprope is that most people aren’t hunting down how-to videos every day. The app will have to work to remind users it exists, so they don’t just go with the lazy default of letting Google recommend the videos it hosts.

The internet has gathered communities around every conceivable interest. But greater access to creation and consumption necessitates better tools for production and curation. As we move from a material to an experiential culture, people crave skills that will help them forge memories and contribute to the world around them. Jumprope makes it a lot less work to leap into the life of a guru.

Scranos, a new rootkit malware, steals passwords and pushes YouTube clicks

Security researchers have discovered an unusual new malware that steals user passwords and account payment methods stored in a victim’s browser — and also silently pushes up YouTube subscribers and revenue.

The malware, dubbed Scranos, has rootkit capabilities, burying deep into vulnerable Windows computers to gain persistent access that survives restarts. Scranos emerged only in recent months, according to new Bitdefender research out Tuesday, but the number of infections has rocketed since the malware was first identified in November.

“The motivations are strictly commercial,” said Bogdan Botezatu, director of threat research and reporting at Bitdefender, in an email. “They seem to be interested in spreading the botnet to consolidate the business by infecting as many devices as possible to perform advertising abuse and to use it as a distribution platform for third party malware,” he said.

Bitdefender found the malware spreading through trojanized downloads that masquerade as real apps, like video players and e-book readers. The rogue apps are digitally signed — likely with a fraudulently generated certificate — to avoid being blocked. “By using this approach, the hackers are more likely to infect targets,” said Botezatu.

Once installed, the rootkit takes hold to maintain its presence and phones home to its command and control server to download additional malicious components. These second-stage droppers inject custom code libraries into common browsers — Chrome, Firefox, Edge, Baidu, and Yandex, to name a few — to target Facebook, YouTube, Amazon, and Airbnb accounts, gathering data to send back to the malware operator.

“The motivations are strictly commercial… they are looking at advertising fraud by consuming ads on their publisher channels invisibly in order to pocket the profit.” Bitdefender’s Bogdan Botezatu

Chief among those is the YouTube component, said Bitdefender. The malware opens Chrome in debugging mode and, with the payload, hides the browser window from the desktop and taskbar. The browser is then tricked into opening YouTube videos in the background, muting them, subscribing to a channel specified by the command and control server, and clicking ads.

The malware “aggressively” promoted four YouTube videos on different channels, the researchers found, turning victim computers into a de facto clickfarm to generate video revenue.

“They are looking at advertising fraud by consuming ads on their publisher channels invisibly in order to pocket the profit,” said Botezatu. “They are growing accounts that they have been paid to grow and helping inflate an audience so they can grow specific ‘influencer’ accounts.”

Another downloadable component allows the malware to spam a victim’s Facebook friend requests with phishing messages. By siphoning off a user’s session cookie, it sends a malicious link to an Android adware app over a chat message.

“If the user is logged into a Facebook account, it impersonates the user and extracts data from the account by visiting certain web pages from the user’s computer, to avoid arousing suspicion by triggering an unknown device alert,” reads the report. “It can extract the number of friends, and whether the user administrates any pages or has payment information in the account.” The malware also tries to steal Instagram session cookies and the number of followers the user has.

Other malicious components allow the malware to steal data from Steam accounts, inject adware into Internet Explorer, run rogue Chrome extensions, and collect and upload a user’s browsing history.

“This is an extremely sophisticated threat that took a lot of time and effort to set up,” said Botezatu. The researchers believe the botnet has tens of thousands of devices ensnared already — at least.

“Rootkit-based malware shows an unusual level of sophistication and dedication,” he said.

Instagram now demotes vaguely “inappropriate” content

Instagram is home to plenty of scantily clad models and edgy memes that may begin getting fewer views starting today. Now Instagram says, “We have begun reducing the spread of posts that are inappropriate but do not go against Instagram’s Community Guidelines.” That means if a post is sexually suggestive but doesn’t depict a sex act or nudity, it could still get demoted. Similarly, if a meme doesn’t constitute hate speech or harassment but is considered in bad taste, lewd, violent, or hurtful, it could get fewer views.

Specifically, Instagram says “this type of content may not appear for the broader community in Explore or hashtag pages,” which could severely hurt the ability of creators to gain new followers. The news came amidst a flood of “Integrity” announcements from Facebook to safeguard its family of apps, revealed today at a press event at the company’s Menlo Park headquarters.

“We’ve started to use machine learning to determine if the actual media posted is eligible to be recommended to our community,” Instagram’s Product Lead for Discovery Will Ruben said. Instagram is now training its content moderators to label borderline content when they’re hunting down policy violations, and it then uses those labels to train an algorithm to identify such content automatically.
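
In broad strokes, that's a standard supervised-learning pipeline: moderator-applied labels train a classifier that scores new posts. Here's a minimal sketch using scikit-learn on text captions — purely illustrative, since Instagram says it classifies the media itself, and its real models, features, and labels aren't public.

```python
# Minimal sketch of a label-then-train moderation pipeline with
# scikit-learn. Illustrative only: Instagram's real system reportedly
# classifies the media itself, and its models/labels aren't public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Captions hand-labeled by moderators: 1 = borderline, 0 = fine.
captions = ["tasteless shock meme", "sunset over the beach",
            "suggestive pose, link in bio", "my dog learned a new trick"]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(captions, labels)

# Posts scoring above some threshold could be excluded from Explore
# and hashtag pages without being removed outright.
score = model.predict_proba(["edgy meme dump"])[0][1]
print(f"borderline probability: {score:.2f}")
```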

These posts won’t be fully removed from the feed, and Instagram tells me that for now the new policy won’t affect its feed or Stories bar. But Facebook CEO Mark Zuckerberg’s November manifesto described the need to broadly reduce the reach of this “borderline content,” which on Facebook means being shown lower in News Feed. That policy could easily be extended to Instagram in the future. It would likely reduce creators’ ability to reach their existing fans, which can hurt their capacity to monetize through sponsored posts or to direct traffic to outside revenue sources like Patreon.

Today, Facebook’s Henry Silverman explained that “as content gets closer and closer to the line of our Community Standards at which point we’d remove it, it actually gets more and more engagement. It’s not something unique to Facebook but inherent in human nature.” The borderline content policy aims to counteract this incentive to toe the policy line. “Just because something is allowed on one of our apps doesn’t mean it should show up at the top of News Feed or that it should be recommended or that it should be able to be advertised,” said Facebook’s head of News Feed Integrity Tessa Lyons.

This all makes sense when it comes to clickbait, false news, and harassment, which no one wants on Facebook or Instagram. But when it comes to sexualized but not explicit content that has long been uninhibited, and in fact popular, on Instagram, or memes and jokes that might offend some people despite not being abusive, this is a significant ratcheting-up of censorship by Facebook and Instagram.

Creators currently have no guidelines about what constitutes borderline content — nothing in Instagram’s rules or terms of service even mentions non-recommendable content or what qualifies. The only information Instagram has provided is what it shared at today’s event. The company specified that violent, graphic/shocking, sexually suggestive, misinformation, and spam content can be deemed “non-recommendable” and therefore won’t appear on Explore or hashtag pages.

Instagram denied one creator’s claim that the app reduced their feed and Stories reach after a post of theirs that actually violated the content policy was taken down.

One female creator with around a half-million followers likened receiving a two-week demotion that massively reduced their content’s reach to Instagram defecating on them. “It just makes it like, ‘Hey, how about we just show your photo to like 3 of your followers? Is that good for you? . . . I know this sounds kind of tin-foil hatty but . . . when you get a post taken down or a story, you can set a timer on your phone for two weeks to the godd*mn f*cking minute and when that timer goes off you’ll see an immediate change in your engagement. They put you back on the Explore page and you start getting followers.”

As you can see, creators are pretty passionate about Instagram demoting their reach. Regarding the alleged feed/Stories reach reduction, Ruben said: “No, that’s not happening. We distinguish between feed and surfaces where you’ve taken the choice to follow somebody, and Explore and hashtag pages where Instagram is recommending content to people.”

The questions now are whether borderline content demotions are ever extended to Instagram’s feed and Stories, and how content is classified as recommendable, non-recommendable, or violating. With artificial intelligence involved, this could turn into another situation where Facebook is seen as shirking its responsibilities in favor of algorithmic efficiency — but this time in removing or demoting too much content rather than too little.

Given the lack of clear policies to point to, the subjective nature of deciding what’s offensive but not abusive, Instagram’s 1 billion user scale, and its nine years of allowing this content, there are sure to be complaints and debates about fair and consistent enforcement.

Snap is channeling Asia’s messaging giants with its move into gaming

Snap is taking a leaf out of the Asian messaging app playbook as its social messaging service enters a new era.

The company unveiled a series of new strategies aimed at breathing fresh life into the service, which has been ruthlessly cloned by Facebook across Instagram, WhatsApp, and even its primary social network. The result? Snap has consistently lost users since going public in 2017. It managed to stop the rot with a flat Q4, but resting on its laurels isn’t going to bring the good times back.

Snap has taken a three-pronged approach: extending its Stories feature (and ads) into third-party apps, building out its camera play with an AR platform, and launching social games, the last of which is the most intriguing. The other moves are logical extensions of existing Snap strategies, but games is an entirely new category for the company.

It isn’t hard to see where Snap found inspiration for social games — Asian messaging companies have long twinned games and chat — but the U.S. company is applying its own twist to the genre.

Facebook’s handling of Alex Jones is a microcosm of its content policy problem

A revealing cluster of emails leaked to Business Insider offers a glimpse of how Facebook decides what content is objectionable in high-profile cases. In this instance, a group of Facebook executives went hands-on in determining whether an Alex Jones Instagram post violated the platform’s terms of service.

As Business Insider reports, 20 Facebook and Instagram executives hashed it out over the Jones post, which depicted a mural known as “False Profits” by the artist Mear One. Facebook began debating the post after it was flagged by Business Insider on Wednesday for kicking up anti-Semitic comments.

The company removed 23 of 500 comments on the post that it interpreted to be in clear violation of Facebook policy. Later in the conversation, some of the UK-based Instagram and Facebook executives on the email provided more context for their US-based peers.

Last year, a controversy over the same painting erupted when British politician Jeremy Corbyn argued in support of the mural’s creator after the art was removed from a wall in East London due to what many believed to be anti-Semitic overtones. Because of that, the image and its context are likely better known in the UK, a fact that came up in Facebook’s discussion over how to handle the Jones post.

“This image is widely acknowledged to be anti-Semitic and is a famous image in the UK due to public controversy around it,” one executive said. “If we go back and say it does not violate we will be in for a lot [of] criticism.”

Ultimately, after some back and forth, the post was removed.

According to the emails, Alex Jones’ Instagram account “does not currently violate [the rules]” as “an IG account has to have at least 30% of content violating at any given time as per our regular guidelines.” That fact might prove puzzling once you know that Alex Jones got his main account booted off Facebook itself in 2018 — and the company did another sweep for Jones-linked pages last month.
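
Taken at face value, that rule is a simple ratio check. Here's a hypothetical sketch of the 30% threshold the emails describe; the function and field names are invented for illustration, and this is not Facebook's actual code.

```python
# Hypothetical sketch of the 30% violating-content threshold described
# in the leaked emails. Names are invented; not Facebook's actual code.
def account_violates(posts: list[dict], threshold: float = 0.30) -> bool:
    """Flag an account if the share of currently violating posts
    meets or exceeds the threshold."""
    if not posts:
        return False
    violating = sum(1 for p in posts if p["violates"])
    return violating / len(posts) >= threshold

posts = [{"violates": False}] * 8 + [{"violates": True}] * 2
print(account_violates(posts))  # 20% violating -> False, account stays up
```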

Whether you agree with Facebook’s content moderation decisions or not, it’s impossible to argue that they are consistently enforced. In the latest example, the company argued over a single depiction of a controversial image even as the same image is literally for sale by the artist elsewhere, on both Instagram and Facebook. (As any Facebook reporter can attest, these inconsistencies will probably be resolved shortly after this story goes live.)

The artist himself sells the mural’s likeness on a T-shirt on both Instagram and Facebook, and numerous depictions of the same image appear under various hashtags. And even after the post was taken down, Jones displayed it prominently in his Instagram story, declaring that the image “is just about monopoly men and the class struggle” and decrying Facebook’s “crazy-level censorship.”

It’s clear that even as Facebook attempts to make strides, its approach to content moderation remains reactive, haphazard and probably too deeply preoccupied with public perception. Some cases of controversial content are escalated all the way to the top while others languish, undetected. Where the line is drawn isn’t particularly clear. And even when high-profile violations are determined, it’s not apparent that those case studies meaningfully trickle down to clarify smaller, everyday decisions by content moderators on Facebook’s lower rungs.

As always, the squeaky wheel gets the grease — but two billion users and reactive rather than proactive policy enforcement mean there’s an endless sea of ungreased wheels drifting around. This problem isn’t unique to Facebook, but given Facebook’s scope, it makes for the biggest case study in what can go wrong when a platform scales wildly with little regard for the consequences.

Unfortunately for Facebook, it’s yet another lose-lose situation of its own making. During its intense, extended growth spurt, Facebook allowed all kinds of potentially controversial and dangerous content to flourish for years. Now, when the company abruptly cracks down on accounts that violate its longstanding policies forbidding hate speech, divisive figures like Alex Jones can cry censorship, roiling hundreds of thousands of followers in the process.

Like other tech companies, Facebook is now paying mightily for the worry-free years it enjoyed before coming under intense scrutiny for the toxic side effects of all that growth. And until Facebook develops a more uniform interpretation of its own community standards — one the company enforces from the bottom up rather than the top down — it’s going to keep taking heat on all sides.

The ethics of internet culture: a conversation with Taylor Lorenz

Taylor Lorenz was in high demand this week. As a prolific journalist at The Atlantic and a soon-to-be member of Harvard’s prestigious Nieman Fellowship for journalism, that’s perhaps not surprising. Nor was this the first time she’s had a bit of a moment: Lorenz has already served as an in-house expert on social media and the internet for several major companies, and has written and edited for publications as diverse as The Daily Beast, The Hill, People, The Daily Mail, and Business Insider, all while remaining hip and in touch enough to currently serve as a kind of youth zeitgeist translator on her beat as a technology writer for The Atlantic.

Lorenz is in fact publicly busy enough that she’s one of only two people I personally know to have openly ‘quit email,’ the other being my friend Russ, an 82-year-old retired engineer and MIT alum who literally spends all day, most days, working on a plan to reinvent the bicycle.

I wonder if any of Lorenz’s previous professional experiences, however, could have matched the weight of the events she encountered these past several days, when the nightmarish massacre in Christchurch, New Zealand brought together two of her greatest areas of expertise: political extremism (which she covered for The Hill), and internet culture. As her first Atlantic piece after the shootings said, the Christchurch killer’s manifesto was “designed to troll.” Indeed, his entire heinous act was a calculated effort to manipulate our current norms of Internet communication and connection, for fanatical ends.

Taylor Lorenz

Lorenz responded with characteristic insight, focusing on the ways in which the stylized insider subcultures the Internet supports can be used to confuse, distract, and mobilize millions of people for good and for truly evil ends:

“Before people can even begin to grasp the nuances of today’s internet, they can be radicalized by it. Platforms such as YouTube and Facebook can send users barreling into fringe communities where extremist views are normalized and advanced. Because these communities have so successfully adopted irony as a cloaking device for promoting extremism, outsiders are left confused as to what is a real threat and what’s just trolling. The darker corners of the internet are so fragmented that even when they spawn a mass shooting, as in New Zealand, the shooter’s words can be nearly impossible to parse, even for those who are Extremely Online.”

Such insights are among the many reasons I was so grateful to be able to speak with Taylor Lorenz for this week’s installment of my TechCrunch series interrogating the ethics of technology.

As I’ve written in my previous interviews with author and inequality critic Anand Giridharadas, and with award-winning Google exec turned award-winning tech critic James Williams, I come to tech ethics from 25 years of studying religion. My personal approach to religion, however, has essentially always been that it plays a central role in human civilization not only or even primarily because of its theistic beliefs and “faith,” but because of its culture — its traditions, literature, rituals, history, and the content of its communities.

And because I don’t mind comparing technology to religion (not saying they are one and the same, but that there is something to be learned from the comparison), I’d argue that if we really want to understand the ethics of the technologies we are creating, particularly the Internet, we need to explore, as Taylor and I did in our conversation below, “the ethics of internet culture.”

What resulted was, like Lorenz’s work in general, at times whimsical, at times cool enough to fly right over my head, but at all times fascinating and important.

Editor’s Note: we ungated the first of 11 sections of this interview. Reading time: 22 minutes / 5,500 words.

Joking with the Pope

Greg Epstein: Taylor, thanks so much for speaking with me. As you know, I’m writing for TechCrunch about religion, ethics, and technology, and I recently discovered your work when you brought all those together in an unusual way. You subtweeted the Pope, and it went viral.

Taylor Lorenz: I know. [People] were freaking out.

Greg: What was that experience like?

Taylor: The Pope tweeted some insane tweet about how Mary, Jesus’ mother, was the first influencer. He tweeted it out, and everyone was spamming that tweet to me because I write so much about influencers, and I was just laughing. There’s a meme on Instagram about Jesus being the first influencer and how he killed himself or faked his death for more followers.

Because it’s fluid, it’s a lifeline for so many kids. It’s where their social network lives. It’s where identity expression occurs.

I just tweeted it out. I think a lot of people didn’t know the joke, the meme, and I think they just thought that it was new & funny. Also [some people] were saying, “how can you joke about Jesus wanting more followers?” I’m like, the Pope literally compared Mary to a social media influencer, so calm down. My whole family is Irish Catholic.

A bunch of people were sharing my tweet. I was like, oh, god. I’m not trying to lead into some religious controversy, but I did think about whether my Irish Catholic mother would laugh. She has a really good sense of humor. I thought, I think she would laugh at this joke. I think it’s fine.

Greg: I loved it because it was a real Rorschach test for me. Sitting there looking at that tweet, I was one of the people who didn’t know that particular meme. I’d like to think I love my memes but …

Taylor: I can’t claim credit.

Greg: No, no, but anyway most of the memes I know are the ones my students happen to tell me about. The point is I’ve spent 15-plus years being a professional atheist. I’ve had my share of religious debates, but I also have had all these debates with others I’ll call Professional Strident Atheists, who are more aggressive in their anti-religion than I am. And I’m thinking, “Okay, this is clearly a tweet that Richard Dawkins would love. Do I love it? I don’t know. Wait, I think I do!”

Taylor: I treated it with the greatest respect for all faiths. I thought it was funny to drag the Pope on Twitter.

The influence of Instagram

Alexander Spatari via Getty Images