After data incidents, Instagram expands its bug bounty

Facebook is expanding its data abuse bug bounty to Instagram.

The social media giant, which owns Instagram, first rolled out its data abuse bounty in the wake of the Cambridge Analytica scandal, which saw tens of millions of Facebook profiles scraped to help swing undecided voters in favor of the Trump campaign during the U.S. presidential election in 2016.

The idea was that security researchers and platform users alike could report instances of third-party apps or companies that were scraping, collecting and selling Facebook data for other purposes, such as to create voter profiles or build vast marketing lists.

Even following the high-profile public relations disaster of Cambridge Analytica, Facebook still had apps illicitly collecting data on its users.

Instagram wasn’t immune either. Just this month Instagram booted a “trusted” marketing partner off its platform after it was caught scraping the stories, locations and other data points of millions of users, forcing Instagram to make product changes to prevent future scraping efforts. That came after two other incidents earlier this year: a security researcher found 14 million scraped Instagram profiles sitting on an exposed database — without a password — for anyone to access, and another company scraped the profile data — including email addresses and phone numbers — of Instagram influencers.

Last year Instagram also choked developers’ access as the company tried to rebuild its privacy image in the aftermath of the Cambridge Analytica scandal.

Dan Gurfinkel, security engineering manager at Instagram, said its new and expanded data abuse bug bounty aims to “encourage” security researchers to report potential abuse.

Instagram said it’s also inviting a select group of trusted security researchers to find flaws in its Checkout service ahead of its international rollout; they, too, will be eligible for bounty payouts.

Read more:

Week in Review: Snapchat beats a dead horse

Hey. This is Week-in-Review, where I give a heavy amount of analysis and/or rambling thoughts on one story while scouring the rest of the hundreds of stories that emerged on TechCrunch this week to surface my favorites for your reading pleasure.

Last week, I talked about how Netflix might have some rough times ahead as Disney barrels towards it.



The big story

There is plenty to be said about the potential of smart glasses. I write about them at length for TechCrunch and I’ve talked to a lot of founders doing cool stuff. That being said, I don’t have any idea what Snap is doing with the introduction of a third-generation of its Spectacles video sunglasses.

The first-gen Spectacles were a marketing smash hit, but their sales proved to be a major failure for the company, which bet big and seemingly walked away with a landfill’s worth of the glasses.

Snap’s latest version of Spectacles was announced in Vogue this week. They are much more expensive at $380, and their main feature is a pair of cameras that capture images with depth, which can lead to these cute little 3D boomerangs. On one hand, it’s nice to see the company showing perseverance with a tough market; on the other, it’s kind of funny to see them push the same rock up the hill again.

Snap is having an awesome 2019 after a laughably bad 2018; the stock has recovered from record lows and is trading in its IPO price wheelhouse. It seems like the company is ripe for something new and exciting, not beautiful yet iterative.

The $150 Spectacles 2 are still for sale, though they seem quite a bit dated-looking at this point. Spectacles 3 seem to be geared entirely towards women, and I’m sure they made that call after seeing the active users of previous generations. But given the write-down they took on the first generation, something tells me that Snap’s continued experimentation here is borne out of some stubbornness from Spiegel and the higher-ups, who want the Snap brand to live in a high-fashion world and want to be at the forefront of an AR industry that seems to have already moved on to different things.

Send me feedback
on Twitter @lucasmtny or email
[email protected]

On to the rest of the week’s news.


Trends of the week

Here are a few big news items from big companies, with green links to all the sweet, sweet added context:

  • WordPress buys Tumblr for chump change
    Tumblr, a game-changing blogging network that shifted online habits and exited for $1.1 billion, just changed hands after Verizon (which owns TechCrunch) unloaded the property for a reported $3 million. Read more about this nightmarish deal here.
  • Trump gives American hardware a holiday season pass on tariffs 
    The ongoing trade war with China generally seems to be rough news for American companies deeply intertwined with the manufacturing centers there, but Trump is giving U.S. companies a Christmas reprieve from the tariffs, allowing certain types of hardware to be exempt from the recent rate increases through December. Read more here.
  • Facebook loses one last acquisition co-founder
    This week, the final remnant of Facebook’s major acquisitions left the company. Oculus co-founder Nate Mitchell announced he was leaving. Now, Instagram, WhatsApp and Oculus are all helmed by Facebook leadership and not a single co-founder from the three companies remains onboard. Read more here.

GAFA Gaffes

How did the top tech companies screw up this week? This clearly needs its own section, in order of badness:

  1. Facebook’s turn in audio transcription debacle:
    [Facebook transcribed users’ audio messages without permission]
  2. Google’s hate speech detection algorithms get critiqued:
    [Racial bias observed in hate speech detection algorithm from Google]
  3. Amazon has a little email mishap:
    [Amazon customers say they received emails for other people’s orders]

Adam Neumann (WeWork) at TechCrunch Disrupt NY 2017

Extra Crunch

Our premium subscription service had another week of interesting deep dives. My colleague Danny Crichton wrote about the “tech” conundrum that is WeWork and the questions that are still unanswered after the company filed documents this week to go public.

WeWork’s S-1 misses these three key points

…How is margin changing at its older locations? How is margin changing as it opens up in places like India, with very different costs and revenues? How do those margins change over time as a property matures? WeWork spills serious amounts of ink saying that these numbers do get better … without seemingly being willing to actually offer up the numbers themselves…

Here are some of our other top reads this week for premium subscribers. This week, we published a major deep dive into the world’s next music unicorn and we dug deep into marketplace startups.

Sign up for more newsletters in your inbox (including this one) here.

Twitter to test a new filter for spam and abuse in the Direct Message inbox

Twitter is testing a new way to filter unwanted messages from your Direct Message inbox. Today, Twitter allows users to set their Direct Message inbox as being open to receiving messages from anyone, but this can invite a lot of unwanted messages, including abuse. While one solution is to adjust your settings so only those you follow can send you private messages, that doesn’t work for everyone. Some people — like reporters, for example — want to have an open inbox in order to have private conversations and receive tips.

This new experiment will test a filter that will move unwanted messages, including those with offensive content or spam, to a separate tab.

Instead of lumping all your messages into a single view, the Message Requests section will include the messages from people you don’t follow, and below that, you’ll find a way to access these newly filtered messages.

Users would have to click on the “Show” button to even read these, which protects them from having to face the stream of unwanted content that can pour in at times when the inbox is left open.

And even upon viewing this list of filtered messages, all the content itself isn’t immediately visible.

In the case that Twitter identifies content that’s potentially offensive, the message preview will say the message is hidden because it may contain offensive content. That way, users can decide if they want to open the message itself or just click the delete button to trash it.

The change could allow Direct Messages to become a more useful tool for those who prefer an open inbox, as well as an additional means of clamping down on online abuse.

It’s also similar to how Facebook Messenger handles requests — those from people you aren’t friends with are relocated to a separate Message Requests area. And those that are spammy or more questionable are in a hard-to-find Filtered section below that.

It’s not clear why a feature like this really requires a “test,” however — arguably, most people would want junk and abuse filtered out. And anyone who, for some reason, did not could simply toggle a setting to turn off the filter.

Instead, this feels like another example of Twitter’s slow pace when it comes to making changes to clamp down on abuse. Facebook Messenger has been filtering messages in this way since late 2017. Twitter should just launch a change like this, instead of “testing” it.

The idea of hiding — instead of entirely deleting — unwanted content is something Twitter has been testing in other areas, too. Last month, for example, it began piloting a new “Hide Replies” feature in Canada, which allows users to hide unwanted replies to their tweets so they’re not visible to everyone. The tweets aren’t deleted, but rather placed behind an extra click — similar to this Direct Message change.

Twitter is updating its Direct Message system in other ways, too.

At a press conference this week, Twitter announced several changes coming to its platform, including a way to follow topics, plus a search tool for the Direct Message inbox, as well as support for iOS Live Photos as GIFs, the ability to reorder photos and more.

8 million Android users tricked into downloading 85 adware apps from Google Play

Dozens of Android adware apps disguised as photo editing apps and games have been caught serving ads that would take over users’ screens as part of a fraudulent money-making scheme.

Security firm Trend Micro said it found 85 individual apps downloaded more than eight million times from Google Play — all of which have since been removed from the app store.

More often than not, adware apps will run on a user’s device and silently serve and click ads in the background, without the user’s knowledge, to generate ad revenue. But these apps were particularly brazen and sneaky, one of the researchers said.

“It isn’t your run-of-the-mill adware family,” said Ecular Xu, a mobile threat response engineer at Trend Micro. “Apart from displaying advertisements that are difficult to close, it employs unique techniques to evade detection through user behavior and time-based triggers.”

The researchers discovered that the apps would keep a record of when they were installed and sit dormant for around half an hour. After the delay, the app would hide its icon and create a shortcut on the user’s home screen, the security firm said. That, they say, helped protect the app from being deleted if the user decided to drag and drop the shortcut to the ‘uninstall’ section of the screen.

“These ads are shown in full screen,” said Xu. “Users are forced to view the whole duration of the ad before being able to close it or go back to app itself.”

Once the device was unlocked, the app displayed ads on the user’s home screen. The code also checks to make sure it doesn’t show the same ad too frequently, the researchers said.

Worse, the ads can be remotely configured by the fraudster, allowing ads to be displayed more frequently than the default five-minute interval.

Trend Micro provided a list of the apps — including Super Selfie Camera, Cos Camera, Pop Camera, and One Stroke Line Puzzle — all of which had a million downloads each.

Users about to install the apps had a dead giveaway: most of the apps had appalling reviews, with as many one-star ratings as five-star ones, and with users complaining about the deluge of pop-up ads.

Google does not typically comment on app removals beyond acknowledging their removal from Google Play.

Read more:

US legislator, David Cicilline, joins international push to interrogate platform power

US legislator David Cicilline will be joining the next meeting of the International Grand Committee on Disinformation and ‘Fake News’, it has been announced. The meeting will be held in Dublin on November 7.

Chair of the committee, the Irish Fine Gael politician Hildegarde Naughton, announced Cicilline’s inclusion today.

The congressman — who is chairman of the US House Judiciary Committee’s Antitrust, Commercial, and Administrative Law Subcommittee — will attend as an “ex officio member” which will allow him to question witnesses, she added.

Exactly who the witnesses in front of the grand committee will be has yet to be confirmed. But the inclusion of a US legislator in the ranks of a non-US committee that’s been seeking answers about reining in online disinformation will certainly make any invitations that get extended to senior executives at US-based tech giants much harder to ignore.

Naughton points out that the addition of American legislators also means the International Grand Committee represents ~730 million citizens — and “their right to online privacy and security”.

“The Dublin meeting will be really significant in that it will be the first time that US legislators will participate,” she said in a statement. “As all the major social media/tech giants were founded and are headquartered in the United States it is very welcome that Congressman Cicilline has agreed to participate. His own Committee is presently conducting investigations into Facebook, Google, Amazon and Apple and so his attendance will greatly enhance our deliberations.”

“Greater regulation of social media and tech giants is fast becoming a priority for many countries throughout the world,” she added. “The International Grand Committee is a gathering of international parliamentarians who have a particular responsibility in this area. We will coordinate actions to tackle online election interference, ‘fake news’, and harmful online communications, amongst other issues while at the same time respecting freedom of speech.”

The international committee met for its first session in London last November — when it was forced to empty-chair Facebook founder Mark Zuckerberg who had declined to attend in person, sending UK policy VP Richard Allan in his stead.

Lawmakers from nine countries spent several hours taking Allan to task over Facebook’s lack of accountability for problems generated by the content it distributes and amplifies, raising myriad examples of ongoing failure to tackle the democracy-denting, society-damaging disinformation — from election interference to hate speech whipping up genocide.

A second meeting of the grand committee was held earlier this year in Canada — taking place over three days in May.

Again Zuckerberg failed to show. Facebook COO Sheryl Sandberg also gave international legislators zero facetime, with the company opting to send local head of policy, Kevin Chan, and global head of policy, Neil Potts, as stand ins.

Lawmakers were not amused. Canadian MPs voted to serve Zuckerberg and Sandberg with an open summons — meaning they’ll be required to appear before the committee the next time they set foot in the country.

Parliamentarians in the UK also issued a summons for Zuckerberg last year after repeat snubs to testify to the Digital, Culture, Media and Sport committee’s enquiry into fake news — a decision that essentially gave birth to the international grand committee, as legislators in multiple jurisdictions united around a common cause of trying to find ways to hold social media giants to account.

While it’s not clear who the grand committee will invite to the next session, Facebook’s founder seems highly unlikely to have dropped off their list. And this time Zuckerberg and Sandberg may find it harder to turn down an invite to Dublin, given the committee’s ranks will include a homegrown lawmaker.

In a statement on joining the next meeting, Cicilline said: “We are living in a critical moment for privacy rights and competition online, both in the United States and around the world.  As people become increasingly connected by what seem to be free technology platforms, many remain unaware of the costs they are actually paying.

“The Internet has also become concentrated, less open, and growingly hostile to innovation. This is a problem that transcends borders, and it requires multinational cooperation to craft solutions that foster competition and safeguard privacy online. I look forward to joining the International Grand Committee as part of its historic effort to identify problems in digital markets and chart a path forward that leads to a better online experience for everyone.”

Multiple tech giants (including Facebook) have their international headquarters in Ireland — making the committee’s choice of location for their next meeting a strategic one. Should any tech CEOs thus choose to snub an invite to testify to the committee they might find themselves being served with an open summons to testify by Irish parliamentarians — and not being able to set foot in a country where their international HQ is located would be more than a reputational irritant.

Ireland’s privacy regulator is also sitting on a stack of open investigations against tech giants — again with Facebook and Facebook owned companies producing the fattest file (some 11 investigations). But there are plenty of privacy and security concerns to go around, with the DPC’s current case file also touching tech giants including Apple, Google, LinkedIn and Twitter.

Facebook contractors said to have collected and transcribed users’ audio without permission

Reset the counter on “number of days since the last Facebook privacy scandal.”

Facebook has become the latest tech giant to face scrutiny over its handling of users’ audio, following a report that said the social media giant collected audio from its users and transcribed it using third-party contractors.

The report came from Bloomberg, citing the contractors who requested anonymity for fear of losing their jobs.

According to the report, it’s not known where the audio came from or why it was transcribed, but Facebook users’ conversations were often checked against the transcripts to see whether they had been properly interpreted by the company’s artificial intelligence.

There are several ways that Facebook collects voice and audio data, including from its mobile apps, its Messenger voice and video service, and through its smart speaker. But the social media giant’s privacy policy makes no mention of voice data. Bloomberg also reported that contractors felt their work was “unethical” because Facebook “hasn’t disclosed to users that third parties may review their audio.”

We’ve asked Facebook several questions, including how the audio was collected, for what reason it was transcribed, and why users weren’t explicitly told of the third-party transcription, but did not immediately hear back.

Facebook becomes the latest tech company to face questions about its use of third-party contractors and staff to review user audio.

Amazon saw the initial round of flak for allowing contractors to manually review Alexa recordings without express user permission, forcing the company to add an opt-out to its Echo devices. Google also faced the heat for allowing human review of audio data, along with Apple which used contractors to listen to seemingly private Siri recordings. Microsoft also listened to some Skype calls made through the company’s app translation feature.

It’s been over a year since Facebook last had a chief security officer in the wake of Alex Stamos’ departure.

How even the best marketplace startups get paralyzed

Over the past 15 years, I’ve seen a pernicious disease infect a number of marketplace startups. I call it Marketplace Paralysis. The root cause of the disease is quite innocent and seemingly harmless. Smart people with good intentions fall victim to it all the time. It starts when a platform has sufficient scale — such that there is a good amount of data on things like performance, quality rankings, purchase rates, and fill ratios. What a platform implements as a result of that data, and how it’s received by their user base, is what can lead to marketplace paralysis.

In this post, I will detail what Marketplace Paralysis is and what startups can do to avoid it. Before I get into the nitty-gritty, here’s a snapshot of the lessons you’ll learn by reading this post:

  1. Segment and focus on high-value users
  2. Remember the silent majority
  3. Modify company goals to include quality components
  4. Empower small, autonomous teams

The easiest way to explain Marketplace Paralysis is with a hypothetical example. So allow me to introduce you to Labor Marketplace X (LMX).

Equipped with the aforementioned data, the well-intentioned product managers at LMX will think about policies or features to try and improve a KPI, like fill ratio or job success rate. They might craft a policy that would separate users into two tiers.

Tier 1 gets a shiny gold star next to their name, along with extra pay, bonuses, and preferred job access. Tier 2 gets standard pay and standard job access. They’ve done their homework and feel this will benefit the marketplace.

So, they build the feature. They launch it and make an announcement to their users. And then… a revolt!

Facebook denies making contradictory claims on Cambridge Analytica and other ‘sketchy’ apps

Facebook has denied contradicting itself in evidence to the UK parliament and a US public prosecutor.

Last month the Department for Digital, Culture, Media and Sport (DCMS) committee wrote to the company to raise what it said were discrepancies in evidence Facebook has given to international parliamentarians vs evidence submitted in response to the Washington, DC Attorney General — which is suing Facebook on its home turf, over the Cambridge Analytica data misuse scandal.

Yesterday Bloomberg obtained Facebook’s response to the committee.

In the letter Rebecca Stimson, the company’s head of U.K. public policy, denies any inconsistency in evidence submitted on both sides of the Atlantic, writing:

The evidence given to the Committees by Mike Schroepfer (Chief Technology Officer), Lord Allan (Vice President for Policy Solutions), and other Facebook representatives is entirely consistent with the allegations in the SEC Complaint filed 24 July 2019. In their evidence, Facebook representatives truthfully answered questions about when the company first learned of Aleksandr Kogan / GSR’s improper transfer of data to Cambridge Analytica, which was in December 2015 through The Guardian’s reporting. We are aware of no evidence to suggest that Facebook learned any earlier of that improper transfer.

As we have told regulators, and many media stories have since reported, we heard speculation about data scraping by Cambridge Analytica in September 2015. We have also testified publicly that we first learned Kogan sold data to Cambridge Analytica in December 2015. These are two different things and this is not new information.

Stimson goes on to claim that Facebook merely heard “rumours in September 2015 that Cambridge Analytica was promoting its ability to scrape user data from public Facebook pages”. (In statements made earlier this year to the press on this same point Facebook has also used the word “speculation” to refer to the internal concerns raised by its staff, writing that “employees heard speculation that Cambridge Analytica was scraping data”.)

In the latest letter, Stimson repeats Facebook’s earlier line about data scraping being common for public pages (which may be true, but plenty of Facebook users’ pages aren’t public to anyone other than their hand-picked friends so… ), before claiming it’s not the same as the process by which Cambridge Analytica obtained Facebook data (i.e. by paying a developer on Facebook’s platform to build an app that harvested users’ and users friends’ data).

“The scraping of data from public pages (which is unfortunately common for any internet service) is different from, and has no relationship to, the illicit transfer to third parties of data obtained by an app developer (which was the subject of the December 2015 Guardian article and of Facebook representatives’ evidence),” she writes, suggesting a ‘sketchy’ data modeling company with deep Facebook platform penetration looked like ‘business as usual’ for Facebook management back in 2015.

As we’ve reported before, it has emerged this year — via submissions to other US legal proceedings against Facebook — that staff working for its political advertising division raised internal concerns about what Cambridge Analytica was up to in September 2015, months prior to The Guardian article which Facebook founder Mark Zuckerberg has claimed is the point when he personally learned what Cambridge Analytica was doing on his platform.

These Facebook staff described Cambridge Analytica as a “sketchy (to say the least) data modeling company that has penetrated our market deeply” — months before the newspaper published its scoop on the story, per an SEC complaint which netted Facebook a $100M fine, in addition to the FTC’s $5BN privacy penalty.

Nonetheless, Facebook is once again claiming there’s nothing but ‘rumors’ to see here.

The DCMS committee also queried Facebook’s flat denial to the Washington, DC Attorney General that the company knew of any other apps misusing user data; failed to take proper measures to secure user data by failing to enforce its own platform policy; and failed to disclose to users when their data was misused — pointing out that Facebook reps told it on multiple occasions that Facebook knew of other apps violating its policies and had taken action against them.

Again, Facebook denies any contradiction whatsoever here.

“The particular allegation you cite asserts that Facebook knew of third party applications that violated its policies and failed to take reasonable measures to enforce against them,” writes Stimson. “As we have consistently stated to the Committee and elsewhere, we regularly take action against apps and developers who violate our policies. We therefore appropriately, and consistently with what we told the Committee, denied the allegation.”

So, turns out, Facebook was only flat denying some of the allegations in para 43 of the Washington, DC Attorney General’s complaint. But the company doesn’t see bundling responses to multiple allegations under one blanket denial as in any way misleading…

In a tweet responding to Facebook’s latest denial, DCMS committee chair Damian Collins dubbed the company’s response “typically disingenuous” — before pointing out: “They didn’t previously disclose to us concerns about Cambridge Analytica prior to Dec 2015, or say what they did about it & haven’t shared results of investigations into other Apps.”

On the app audit issue, Stimson’s letter justifies Facebook’s failure to provide the DCMS committee with the requested information on other ‘sketchy’ apps it’s investigating, writing this is because the investigation — which CEO Mark Zuckerberg announced in a Facebook blog post on March 21, 2018; saying then that it would “investigate all apps that had access to large amounts of information”; “conduct a full audit of any app with suspicious activity”; “ban any developer from our platform that does not agree to a thorough audit”; and ban any developers found to have misused user data; and “tell everyone affected by those apps” — is, er, “ongoing”.

More than a year ago Facebook did reveal that it had suspended around 200 suspicious apps out of “thousands” reviewed. However updates on Zuckerberg’s great app audit have been thin on the ground since then, to say the least.

“We will update the Committee as we publicly share additional information about that extensive effort,” says Stimson now.

Reports say White House has drafted an order putting the FCC in charge of monitoring social media

The White House is contemplating issuing an executive order that would widen its attack on the operations of social media companies.

The White House has prepared an executive order called “Protecting Americans from Online Censorship” that would give the Federal Communications Commission oversight of how Facebook, Twitter and other tech companies monitor and manage their social networks, according to a CNN report.

Under the order, which has not yet been announced and could be revised, the FCC would be tasked with developing new regulations that would determine when and how social media companies filter posts, videos or articles on their platforms.

The draft order also calls for the Federal Trade Commission to take those new policies into account when investigating or filing lawsuits against technology companies, according to the CNN report.

Social media censorship has been a perennial talking point for President Donald Trump and his administration. In May, the White House set up a tip line for people to provide evidence of social media censorship and a systemic bias against conservative media.

In the executive order, the White House says it received more than 15,000 complaints about censorship by the technology platforms. The order also includes an offer to share the complaints with the Federal Trade Commission.

As part of the order, the Federal Trade Commission would be required to open a public complaint docket and coordinate with the Federal Communications Commission on investigations of how technology companies curate their platforms — and whether that curation is politically agnostic.

Under the proposed rule, any company whose monthly user base includes more than one-eighth of the U.S. population would be subject to oversight by the regulatory agencies. A roster of companies subject to the new scrutiny would include Facebook, Google, Instagram, Twitter, Snap and Pinterest.

At issue is how broadly or narrowly companies are protected under the Communications Decency Act, which was part of the Telecommunications Act of 1996. Social media companies use the Act to shield against liability for the posts, videos or articles that are uploaded from individual users or third parties.

The Trump administration isn’t the only one in Washington focused on the laws that shield social media platforms from legal liability. House Speaker Nancy Pelosi took technology companies to task earlier this year in an interview with Recode.

The criticisms may come from different sides of the political spectrum, but their focus on the ways in which tech companies could use Section 230 of the Act is the same.

The White House’s executive order would ask the FCC to disqualify social media companies from immunity if they remove or limit the dissemination of posts without first notifying the user or third party that posted the material, or if the decision from the companies is deemed anti-competitive or unfair.

The FTC and FCC had not responded to a request for comment at the time of publication.

Why AI needs more social workers, with Columbia University’s Desmond Patton

Sometimes it does seem the entire tech industry could use someone to talk to, like a good therapist or social worker. That might sound like an insult, but I mean it mostly earnestly: I am a chaplain who has spent 15 years talking with students, faculty, and other leaders at Harvard (and more recently MIT as well), mostly nonreligious and skeptical people like me, about their struggles to figure out what it means to build a meaningful career and a satisfying life, in a world full of insecurity, instability, and divisiveness of every kind.

In related news, I recently took a year-long paid sabbatical from my work at Harvard and MIT, to spend 2019-20 investigating the ethics of technology and business (including by writing this column at TechCrunch). I doubt it will shock you to hear I’ve encountered a lot of amoral behavior in tech, thus far.

A less expected and perhaps more profound finding, however, has been what the introspective founder Priyag Narula of LeadGenius tweeted at me recently: that behind the hubris and Machiavellianism one can find in tech companies is a constant struggle with anxiety and an abiding feeling of inadequacy among tech leaders.

In tech, just like at places like Harvard and MIT, people are stressed. They’re hurting, whether or not they even realize it.

So when Harvard’s Berkman Klein Center for Internet and Society recently posted an article whose headline began, “Why AI Needs Social Workers…”… it caught my eye.

The article, it turns out, was written by Columbia University professor Desmond Patton. Patton is a public interest technologist and a pioneer in the use of social media and artificial intelligence in the study of gun violence, and he is the founding director of Columbia’s SAFElab and an associate professor of social work, sociology and data science.


Desmond Patton. Image via Desmond Patton / Stern Strategy Group

A trained social worker and decorated social work scholar, Patton has also become a big name in AI circles in recent years. If Big Tech ever decided to hire a Chief Social Work Officer, he’d be a sought-after candidate.

It further turns out that Patton’s expertise — in online violence & its relationship to violent acts in the real world — has been all too “hot” a topic this past week, with mass murderers in both El Paso, Texas and Dayton, Ohio having been deeply immersed in online worlds of hatred which seemingly helped lead to their violent acts.

Fortunately, we have Patton to help us understand all of these issues. Here is my conversation with him: on violence and trauma in tech on and offline, and how social workers could help; on deadly hip-hop beefs and “Internet Banging” (a term Patton coined); hiring formerly gang-involved youth as “domain experts” to improve AI; how to think about the likely growing phenomenon of white supremacists live-streaming barbaric acts; and on the economics of inclusion across tech.

Greg Epstein: How did you end up working in both social work and tech?

Desmond Patton: At the heart of my work is an interest in root causes of community-based violence, so I’ve always identified as a social worker that does violence-based research. [At the University of Chicago] my dissertation focused on how young African American men navigated violence in their community on the west side of the city while remaining active in their school environment.

[From that work] I learned more about the role of social media in their lives. This was around 2011, 2012, and one of the things that kept coming through in interviews with these young men was how social media was an important tool for navigating both safe and unsafe locations, but also an environment that allowed them to project a multitude of selves. To be a school self, to be a community self, to be who they really wanted to be, to try out new identities.