All Facebook users can now access a tool to port data to Google Photos

Facebook’s photo transfer tool is now available globally half a year on from an initial rollout in Europe, the company said today.

The data portability feature enables users of the social network to directly port a copy of their photos to Google’s eponymous photo storage service via encrypted transfer, rather than needing to download and manually upload photos themselves — thereby reducing the hassle involved with switching to a rival service.

Facebook users can find the option to “Transfer a copy of your photos and videos” under the Your Facebook Information settings menu.

This is the same menu where the company has long enabled users to download a copy of a range of information related to their use of its service (including photos). However, there’s little that can be done with that data dump. The direct photo transfer mechanism, by contrast, shrinks the friction involved in account switching.

Facebook debuted the feature in Ireland at the back end of last year, going on to open it up to more international markets earlier this year and grant access to users in the US and Canada come April.

Now all Facebook users can tap in — though the choice of where you can port your photos remains limited to Google Photos. So it’s not the kind of data portability that’s of any help to startup services (yet).

Facebook has said support for other services is being built out. However, this requires collaborating developers to build the necessary adapters for photo APIs, which in turn depends on wider participation in an underpinning open source effort called the Data Transfer Project (DTP).
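The adapter idea at the heart of the DTP can be sketched loosely as an exporter/importer pair per data type, with a framework piping records between any two services. The real project's framework is written in Java; the Python below is a hypothetical illustration, and every class and method name in it is ours, not the DTP's actual API.

```python
# Illustrative sketch of the Data Transfer Project's adapter model.
# Names and interfaces here are hypothetical, not the DTP's real API.
from dataclasses import dataclass
from typing import Iterable, Protocol


@dataclass
class Photo:
    title: str
    url: str
    album: str


class PhotoExporter(Protocol):
    """A source service implements this to export its photos."""
    def export_photos(self) -> Iterable[Photo]: ...


class PhotoImporter(Protocol):
    """A destination service implements this to receive photos."""
    def import_photo(self, photo: Photo) -> None: ...


def transfer(source: PhotoExporter, destination: PhotoImporter) -> int:
    """Port every photo from source to destination; returns the count."""
    count = 0
    for photo in source.export_photos():
        destination.import_photo(photo)
        count += 1
    return count
```

The point of the design is that once a service ships one exporter and one importer, it can exchange photos with every other participating service, rather than building a bespoke bridge per destination.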

The wider context around the DTP — which kicked off in 2018, backed by a number of tech giants all keen to hitch their wagon to the notion of greasing platform-to-platform data portability — is that regulators in the US and Europe are paying closer attention to the deleterious impact of platform power on competition and markets.

Putting some resources into data portability looks like a collective strategy by powerful players to try to manage and fend off antitrust action that might otherwise see dominant empires broken up in the interests of rebalancing digital markets.

If platforms can make a plausible case that their users aren’t locked into their walled gardens because network effects force them to stay but can simply push a button to move their stuff and waltz elsewhere, they will hope to shrink their antitrust risk and water down the case for sweeping reforms of digital regulations.

Europe is certainly looking closely at updating its rulebook to tackle platform power — with legislative proposals covering digital services slated before the end of the year.

EU lawmakers are also specifically consulting on whether the bloc needs a new tool in its antitrust arsenal to tackle the problem of tipping markets in the digital sphere — where a dominant player consolidates a market position to such an extent that it becomes difficult to reverse. The proposed new power would enable European antitrust regulators to speed up interventions by letting them impose behavioural and structural remedies without needing to make a finding of infringement first.

Given all that, it would be interesting to know how many Facebook users have actually made use of the photo porting tool in the half-year since it launched to a sub-set of users.

A Facebook spokesman told us he did not have “specific numbers to share at this time” — but claimed the company has seen “many” users making photo transfers via the tool.

“We’ve received some positive feedback from stakeholders who have been giving feedback on the product throughout the rollout,” the spokesman added. “We hope that will continue to increase as more people are aware of the tool and new destinations and data types are added.”

Facebook to acquire Giphy in a deal reportedly worth $400 million

Facebook will acquire Giphy, the web-based animated gif search engine and platform provider, Facebook confirmed today, in a deal worth around $400 million, according to a report by Axios. Facebook said it isn’t disclosing terms of the deal. Giphy has grown to be a central source for shareable, high-engagement content, and its animated response gifs are available across Facebook’s platforms, as well as through other social apps and services on the web.

Most notably, Giphy provides built-in search and sticker functions for Facebook’s Instagram, and it will continue to operate in that capacity, becoming a part of the Instagram team. Giphy will also be available to Facebook’s other apps through existing and additional integrations. People will still be able to upload their own GIFs, and Facebook intends to continue to operate Giphy under its own branding and offer integration to outside developers.

Facebook says it will invest in additional tech development for Giphy, as well as build out new relationships for it on both the content side and the endpoint developer side. The company says that fully 50% of the traffic Giphy receives already comes from Facebook’s apps, including Instagram, Messenger, the FB app itself and WhatsApp.

Giphy was founded in 2013, and was originally simply a search engine for gifs. The website’s first major product expansion was an extension that allowed sharing via Facebook, introduced later in its founding year, and it quickly added Twitter as a second integration. According to the most recent data from Crunchbase, Giphy had raised $150.9 million across five rounds, backed by funders including DFJ Growth, Lightspeed, Betaworks, GV, Lerer Hippeau and more.

Security lapse exposed Clearview AI source code

After it exploded onto the scene in January following a newspaper exposé, Clearview AI quickly became one of the most elusive, secretive and reviled companies in the tech startup scene.

The controversial facial recognition startup allows its law enforcement users to take a picture of a person, upload it and match it against its alleged database of 3 billion images, which the company scraped from public social media profiles.

But for a time, a misconfigured server exposed the company’s internal files, apps and source code for anyone on the internet to find.

Mossab Hussein, chief security officer at Dubai-based cybersecurity firm SpiderSilk, found the repository storing Clearview’s source code. Although the repository was protected with a password, a misconfigured setting allowed anyone to register as a new user to log in to the system storing the code.

The repository contained Clearview’s source code, which could be used to compile and run the apps from scratch. The repository also stored some of the company’s secret keys and credentials, which granted access to Clearview’s cloud storage buckets. Inside those buckets, Clearview stored copies of its finished Windows, Mac and Android apps, as well as its iOS app, which Apple recently blocked for violating its rules. The storage buckets also contained early, pre-release developer app versions that are typically only for testing, Hussein said.

The repository also exposed Clearview’s Slack tokens, according to Hussein, which, if used, could have allowed password-less access to the company’s private messages and communications.

Clearview has been dogged by privacy concerns since it was forced out of stealth following a profile in The New York Times, but its technology has gone largely untested and the accuracy of its facial recognition tech remains unproven. Clearview claims it only allows law enforcement to use its technology, but reports show that the startup courted users from private businesses like Macy’s, Walmart and the NBA. This latest security lapse is likely to invite greater scrutiny of the company’s security and privacy practices.

When reached for comment, Clearview founder Hoan Ton-That claimed his company “experienced a constant stream of cyber intrusion attempts, and have been investing heavily in augmenting our security.”

“We have set up a bug bounty program with HackerOne whereby computer security researchers can be rewarded for finding flaws in Clearview AI’s systems,” said Ton-That. “SpiderSilk, a firm that was not a part of our bug bounty program, found a flaw in Clearview AI and reached out to us. This flaw did not expose any personally identifiable information, search history or biometric identifiers,” he said.

Clearview AI’s app for iOS did not need a log-in, according to Hussein. He took several screenshots to show how the app works. In this example, Hussein used a photo of Mark Zuckerberg.

Ton-That accused the research firm of extortion, but emails between Clearview and SpiderSilk paint a different picture.

Hussein, who has previously reported security issues at several startups, including MoviePass, Remine and Blind, said he reported the exposure to Clearview but declined to accept a bounty, saying the terms attached to it would have barred him from publicly disclosing the security lapse.

It’s not uncommon for companies to use bug bounty terms and conditions or non-disclosure agreements to prevent the disclosure of security lapses once they are fixed. But experts told TechCrunch that researchers are not obligated to accept a bounty or agree to disclosure rules.

Ton-That said that Clearview has “done a full forensic audit of the host to confirm no other unauthorized access occurred.” He also confirmed that the secret keys have been changed and no longer work.

Hussein’s findings offer a rare glimpse into the operations of the secretive company. One screenshot shared by Hussein showed code and apps referencing the company’s Insight Camera, which Ton-That described as a “prototype” camera, since discontinued.

A screenshot of Clearview AI’s app for macOS. It connects to Clearview’s database through an API. The app also references Clearview’s former prototype camera hardware, Insight Camera.

According to BuzzFeed News, one of the firms that tested the cameras is New York City real estate firm Rudin Management, which trialed use of a camera at two of its city residential buildings.

Hussein said that he found some 70,000 videos in one of Clearview’s cloud storage buckets, taken from a camera installed at face-height in the lobby of a residential building. The videos show residents entering and leaving the building.

Ton-That explained that, “as part of prototyping a security camera product we collected some raw video strictly for debugging purposes, with the permission of the building management.”

TechCrunch has learned that the Rudin-owned building is on Manhattan’s east side. Several property listings with images of the building’s lobby also confirm this. A representative for the real estate company did not return our emails.

One of the videos from a camera in a lobby of a residential building, recording residents (blurred by TechCrunch) as they pass by.

Clearview has come under intense scrutiny since its January debut. It has also attracted the attention of hackers.

In February, Clearview admitted to customers that a list of its customers was stolen in a data breach — though, it claimed its servers were “never accessed.” Clearview also left unprotected several of its cloud storage buckets containing its Android app.

Vermont’s attorney general’s office has already opened an investigation into the company for allegedly violating consumer protection laws, and police departments have been told to stop using Clearview, including in New Jersey and San Diego. Several tech companies, including Facebook, Twitter and YouTube, have already sent cease-and-desist letters to Clearview AI.

In an interview with CBS News in February, Ton-That defended his company’s practices. “If it’s public and it’s out there and could be inside Google’s search engine, it can be inside ours as well,” he said.



Zoom sued by shareholder for ‘overstating’ security claims

Zoom has been served with another class action lawsuit — this time by one of its shareholders, who says he lost money after the company “overstated” its security measures, which led its share price to tank.

The video conferencing giant has seen its daily usage rocket from 10 million users to 200 million since the start of the coronavirus pandemic, which forced vast swathes of the world to stay and work from home. As its popularity rose, the company also faced a growing number of security and privacy problems, including claims that Zoom was not end-to-end encrypted as advertised.

Zoom’s later admission saw the company’s share price fall by almost 20 percent.

Shareholder Michael Drieu, who filed the suit in a California federal court on Tuesday, said he and others have “suffered significant losses and damages” as a result. According to the complaint, Drieu bought 50 shares priced at $149.50 but lost out when he sold the shares a week later at $120.50 per share.

Zoom did not respond to a request for comment.

It’s the latest class action served against Zoom in recent weeks. The company was slapped with another suit last month after its iOS app was found to have shared data with Facebook — even when users did not have a Facebook account.

Zoom has doubled down on its efforts to improve its image in the past week, including a promise to improve its encryption efforts and by changing its default settings to prevent trolls and intruders from hijacking Zoom calls without permission, a practice coined “Zoombombing.” The security problems have led to New York City schools banning Zoom in favor of Microsoft Teams. The Taiwanese government also banned its agencies from using the app.

Just today, former Facebook chief security officer Alex Stamos said he joined Zoom as an advisor. Zoom also said it has enlisted security experts and leaders to advise on the company’s security strategy.

Pinterest CEO and a team of leading scientists launch a self-reporting COVID-19 tracking app

There have been a few scattered efforts to leverage crowd-sourced self-reporting of symptoms as a way to potentially predict and chart the progress of COVID-19 across the U.S., and around the world. A new effort looks like the most comprehensive, well-organized and credibly backed yet — and it has been developed in part by Pinterest co-founder and CEO Ben Silbermann.

Silbermann and a team from Pinterest enlisted the help of high school friend, and CRISPR gene-editing pioneer / MIT and Harvard Broad Institute member, Dr. Feng Zhang to build what Silbermann termed in a press release a “bridge between citizens and scientists.” The result is the How We Feel app that Silbermann developed along with input from Zhang and a long list of well-regarded public health, computer science, therapeutics, social science and medical professors from Harvard, Stanford, MIT, Weill Cornell and more.

How We Feel is a mobile app available for both iOS and Android, which is free to download, and which is designed to make it very easy for users to self-report whether or not they feel well — and if they’re feeling unwell, what symptoms they’re experiencing. It also asks for information about whether or not you’ve been tested for COVID-19, and whether you’re in self-isolation, and for how long. The amount of interaction required is purposely streamlined to make it easy for anyone to contribute daily, and to do so in a minute or less.

The app doesn’t ask for or collect information like your name, phone number or email address. It includes an up-front request that users agree to donate their information, and the data collected will be aggregated and then shared with researchers, public health professionals and doctors, including those who are signed on as collaborators with the project, as well as others (and the project is encouraging interested collaborators to reach out). Some members of the team working on the project are experts in the field of differential privacy, and a goal of the endeavor is to ensure that people’s information is used responsibly.
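Differential privacy, for the unfamiliar, works by adding calibrated noise to aggregate statistics so that no individual's answer can be inferred from a published figure. The sketch below is a generic textbook-style Laplace-mechanism example, not How We Feel's actual implementation; the function name and parameters are ours.

```python
# A minimal, illustrative differential-privacy sketch (not the app's real code).
# Releases a count of "unwell" reports with Laplace noise of scale 1/epsilon.
import random


def dp_count(responses, epsilon=1.0):
    """Return a noisy count of True entries in `responses`.

    Changing any one person's answer shifts the true count by at most 1
    (the query's sensitivity), so Laplace noise of scale 1/epsilon gives
    epsilon-differential privacy for this single query. The difference of
    two independent exponentials with rate epsilon is exactly a
    Laplace(0, 1/epsilon) sample.
    """
    true_count = sum(1 for unwell in responses if unwell)
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise
```

With a reasonable epsilon the released number stays close to the truth in aggregate while masking any single contribution — which is why the technique suits population-level symptom mapping.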

The How We Feel app is, as mentioned, one of a number of similar efforts out there, but this approach has a number of advantages when compared to existing projects. First, it’s a mobile app, whereas some rely on web-based portals that are less convenient for the average consumer, especially when you want continued use over time. Second, the project motivates use through positive means — Silbermann and his wife Divya will provide a donated meal to the nonprofit Feeding America each time a person downloads and uses the app for the first time, up to a maximum of 10 million meals. Finally, it’s already designed in partnership with, and backed by, world-class academic institutions and researchers, and seems best-positioned to be able to get the information it gathers to the greatest number of those in a position to help.

How We Feel is organized as an entirely independent, nonprofit organization, and it’s hoping to expand its availability and scientific collaboration globally. It’s an ambitious project, but also one that could be critically important in supplementing testing efforts and other means of tracking the progress and course of the spread of SARS-CoV-2 and COVID-19. While self-reported information on its own is far from a 100% accurate or reliable source, taken in aggregate at scale, it could be a very effective leading indicator of new or emerging viral hotspots, or provide scientific researchers with other valuable insights when used in combination with other signals.

What does a pandemic say about the tech we’ve built?

There’s a joke* being reshared on chat apps that takes the form of a multiple choice question — asking who’s the leading force in workplace digital transformation? The red-lined punchline is not the CEO or CTO but: C) COVID-19.

There’s likely more than a grain of truth underpinning the quip. The novel coronavirus is pushing a lot of metaphorical buttons right now. ‘Pause’ buttons for people and industries, as large swathes of the world’s population face quarantine conditions that can resemble house arrest. The majority of offline social and economic activities are suddenly off limits.

Such major pauses in our modern lifestyle may even turn into a full reset, over time. The world as it was, where mobility of people has been all but taken for granted — regardless of the environmental costs of so much commuting and indulged wanderlust — may never return to ‘business as usual’.

If global leadership rises to the occasion, then the coronavirus crisis offers an opportunity to rethink how we structure our societies and economies — to make a shift towards lower carbon alternatives. After all, how many physical meetings do you really need when digital connectivity is accessible and reliable? As millions more office workers log onto the day job from home, that number suddenly seems vanishingly small.

COVID-19 is clearly strengthening the case for broadband to be a utility — as so much more activity is pushed online. Even social media seems to have a genuine community purpose during a moment of national crisis when many people can only connect remotely, even with their nearest neighbours.

Hence the reports of people stuck at home flocking back to Facebook to sound off in the digital town square. Now that the actual high street is off limits, the vintage social network is experiencing a late second wind.

Facebook understands this sort of higher societal purpose already, of course. Which is why it’s been so proactive about building features that nudge users to ‘mark yourself safe’ during extraordinary events like natural disasters, major accidents and terrorist attacks. (Or indeed why it encouraged politicians to get into bed with its data platform in the first place — no matter the cost to democracy.)

In less fraught times, Facebook’s ‘purpose’ can be loosely summed up as ‘killing time’. But with ever more sinkholes being drilled by the attention economy, that’s a function under ferocious and sustained attack.

Over the years the tech giant has responded by engineering ways to rise back to the top of the social heap — including spying on and buying up competition, or directly cloning rival products. It’s been pulling off this trick, by hook or by crook, for over a decade. Albeit, this time Facebook can’t take any credit for the traffic uptick; a pandemic is nature’s dark pattern design.

What’s most interesting about this virally disrupted moment is how much of the digital technology that’s been built out online over the past two decades could very well have been designed for living through just such a dystopia.

Seen through this lens, VR should be having a major moment. A face computer that swaps out the stuff your eyes can actually see with a choose-your-own-digital-adventure of virtual worlds to explore, all from the comfort of your living room? What problem are you fixing, VR? Well, the conceptual limits of human lockdown in the face of a pandemic quarantine right now, actually…

Virtual reality has never been a compelling proposition vs the rich and textured opportunity of real life, except within very narrow and niche bounds. Yet all of a sudden here we all are — with our horizons drastically narrowed and real-life news that’s ceaselessly harrowing. So it might yet end up as the wry punchline to another multiple choice joke: ‘My next vacation will be: A) Staycation, B) The spare room, C) VR escapism.’

It’s videoconferencing that’s actually having the big moment, though. Turns out even a pandemic can’t make VR go viral. Instead, long lapsed friendships are being rekindled over Zoom group chats or Google Hangouts. And Houseparty — a video chat app — has seen surging downloads as barflies seek out alternative night life with their usual watering-holes shuttered.

Bored celebs are TikToking. Impromptu concerts are being livestreamed from living rooms via Instagram and Facebook Live. All sorts of folks are managing social distancing and the stress of being stuck at home alone (or with family) by distant socializing — signing up to remote book clubs and discos; joining virtual dance parties and exercise sessions from bedrooms. Taking a few classes together. The quiet pub night with friends has morphed seamlessly into a bring-your-own-bottle group video chat.

This is not normal — but nor is it surprising. We’re living in the most extraordinary time. And it seems a very human response to mass disruption and physical separation (not to mention the trauma of an ongoing public health emergency that’s killing thousands of people a day) to reach for even a moving pixel of human comfort. Contactless human contact is better than none at all.

Yet the fact all these tools are already out there, ready and waiting for us to log on and start streaming, should send a dehumanizing chill down society’s backbone.

It underlines quite how much consumer technology is being designed to reprogram how we connect with each other, individually and in groups, in order that uninvited third parties can cut a profit.

Back in the pre-COVID-19 era, a key concern attached to social media was its ability to hook users and encourage passive feed consumption — replacing genuine human contact with voyeuristic screening of friends’ lives. Studies have linked the tech to loneliness and depression. Now that we’re literally unable to go out and meet friends, the loss of human contact is real and stark. So being popular online in a pandemic really isn’t any kind of success metric.

Houseparty, for example, self-describes as a “face to face social network” — yet it’s quite the literal opposite; you’re foregoing face-to-face contact if you’re getting virtually together in app-wrapped form.

Meanwhile, the implication of Facebook’s COVID-19 traffic bump is that the company’s business model thrives on societal disruption and mainstream misery. Which, frankly, we knew already. Data-driven adtech is another way of saying it’s been engineered to spray you with ad-flavored dissatisfaction by spying on what you get up to. The coronavirus just hammers the point home.

The fact we have so many high-tech tools on tap for forging digital connections might feel like amazing serendipity in this crisis — a freemium bonanza for coping with terrible global trauma. But such bounty points to a horrible flip side: It’s the attention economy that’s infectious and insidious. Before ‘normal life’ plunged off a cliff all this sticky tech was labelled ‘everyday use’; not ‘break out in a global emergency’.

It’s never been clearer how these attention-hogging apps and services are designed to disrupt and monetize us; to embed themselves in our friendships and relationships in a way that’s subtly dehumanizing; re-routing emotion and connections; nudging us to swap in-person socializing for virtualized fuzz that’s designed to be data-mined and monetized by the same middlemen who’ve inserted themselves unasked into our private and social lives.

Captured and recompiled in this way, human connection is reduced to a series of dilute and/or meaningless transactions. The platforms deploy armies of engineers to knob-twiddle and pull strings to maximize ad opportunities, no matter the personal cost.

It’s also no accident that we’re seeing more of the vast and intrusive underpinnings of surveillance capitalism emerge, as the COVID-19 emergency rolls back some of the obfuscation that’s used to shield these business models from mainstream view in more normal times. The trackers are rushing to seize and colonize an opportunistic purpose.

Tech and ad giants are falling over themselves to get involved with offering data or apps for COVID-19 tracking. They’re already in the mass surveillance business so there’s likely never felt like a better moment than the present pandemic for the big data lobby to press the lie that individuals don’t care about privacy, as governments cry out for tools and resources to help save lives.

First the people-tracking platforms dressed up attacks on human agency as ‘relevant ads’. Now the data industrial complex is spinning police-state levels of mass surveillance as pandemic-busting corporate social responsibility. How quick the wheel turns.

But platforms should be careful what they wish for. Populations that find themselves under house arrest with their phones playing snitch might be just as quick to round on high tech gaolers as they’ve been to sign up for a friendly video chat in these strange and unprecedented times.

Oh and Zoom (and others) — more people might actually read your ‘privacy policy’ now they’ve got so much time to mess about online. And that really is a risk.

*Source is a private Twitter account called @MBA_ish

Maybe we shouldn’t use Zoom after all

Now that we’re all stuck at home thanks to the coronavirus pandemic, video calls have gone from a novelty to a necessity. Zoom, the popular videoconferencing service, seems to be doing better than most and has quickly become one of, if not the most, popular option going.

But should it be?

Zoom’s recent popularity has also shone a spotlight on the company’s security protections and privacy promises. Just today, The Intercept reported that Zoom video calls are not end-to-end encrypted, despite the company’s claims that they are.

And Motherboard reports that Zoom is leaking the email addresses of “at least a few thousand” people because personal addresses are treated as if they belong to the same company.

These are just the latest examples of the company having to spend the last year mopping up after a barrage of headlines examining its practices and misleading marketing. To wit:

  • Apple was forced to step in to secure millions of Macs after a security researcher found Zoom failed to disclose that it installed a secret web server on users’ Macs, which Zoom failed to remove when the client was uninstalled. The researcher, Jonathan Leitschuh, said the web server meant any malicious website could activate the webcam of any Mac with Zoom installed, without the user’s permission. The researcher declined a bug bounty payout because Zoom wanted Leitschuh to sign a non-disclosure agreement, which would have prevented him from disclosing details of the bug.
  • Zoom was quietly sending data to Facebook about a user’s Zoom habits — even when the user does not have a Facebook account. Motherboard reported that the iOS app was notifying Facebook when the user opened the app, along with details such as the device model and phone carrier, and more. Zoom removed the code in response, but not fast enough to prevent a class action lawsuit or New York’s attorney general from launching an investigation.
  • Zoom came under fire again for its “attendee tracking” feature, which, when enabled, lets a host check if participants are clicking away from the main Zoom window during a call.
  • A security researcher found that Zoom uses a “shady” technique to install its Mac app without user interaction. “The same tricks that are being used by macOS malware,” the researcher said.
  • On the bright side, and to some users’ relief, we reported that it is in fact possible to join a Zoom video call without having to download or use the app. But Zoom’s “dark patterns” don’t make it easy to start a video call using just your browser.
  • Zoom has faced questions over its lack of transparency on law enforcement requests it receives. Access Now, a privacy and rights group, called on Zoom to release the number of requests it receives, just as Amazon, Google, Microsoft and many more tech giants report on a semi-annual basis.
  • Then there’s Zoombombing, where trolls take advantage of open or unprotected meetings and poor default settings to take over screen-sharing and broadcast porn or other explicit material. The FBI this week warned users to adjust their settings to avoid trolls hijacking video calls.
  • And Zoom tightened its privacy policy this week after it was criticized for allowing Zoom to collect information about users’ meetings — like videos, transcripts and shared notes — for advertising.
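
The distinction at the heart of the encryption complaint above can be shown with a deliberately toy sketch: with transport encryption, traffic is scrambled on the wire but the service's own server holds the key and can read content; with true end-to-end encryption, only the two callers hold the key. The XOR "cipher" and all names below are illustrative only — real systems use TLS and authenticated ciphers, never this.

```python
# Toy illustration of transport encryption vs end-to-end encryption.
# XOR here stands in for a real cipher purely for demonstration.

def xor(data: bytes, key: bytes) -> bytes:
    """Toy one-time-pad-style scrambler; NOT real cryptography."""
    return bytes(a ^ b for a, b in zip(data, key))


class Relay:
    """A stand-in for the service's server sitting between two callers."""
    def __init__(self, transport_key: bytes):
        self.transport_key = transport_key  # key the SERVER holds


def send_transport_encrypted(message: bytes, relay: Relay) -> tuple[bytes, bytes]:
    """'Encrypted in transit': ciphertext on the wire, but the relay
    holds the key, so it can recover the plaintext."""
    wire = xor(message, relay.transport_key)
    seen_by_server = xor(wire, relay.transport_key)  # server can decrypt
    return wire, seen_by_server


def send_end_to_end(message: bytes, caller_key: bytes) -> bytes:
    """End-to-end: only the two callers share caller_key; the relay
    forwards ciphertext it cannot decrypt."""
    return xor(message, caller_key)
```

The takeaway is that "your calls are encrypted" and "we cannot read your calls" are different claims — the first is true of almost any modern service, while the second requires the end-to-end design.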

There are several more privacy-focused alternatives to Zoom, but they all have their pitfalls. FaceTime and WhatsApp are end-to-end encrypted, but FaceTime works only on Apple devices and WhatsApp is limited to just four video callers at a time. A lesser-known video calling platform, Jitsi, is not end-to-end encrypted but it’s open source — so you can look at the code to make sure there are no backdoors — and it works across all devices and browsers. You can run Jitsi on a server you control for greater privacy.

In fairness, Zoom is not inherently bad and there are many reasons why Zoom is so popular. It’s easy to use, reliable and for the vast majority it’s incredibly convenient.

But Zoom’s misleading claims give users a false sense of security and privacy. Whether it’s hosting a virtual happy hour or a yoga class, or using Zoom for therapy or government cabinet meetings, everyone deserves privacy.

Now more than ever Zoom has a responsibility to its users. For now, Zoom at your own risk.

Is Facebook dead to Gen Z?

The writing is on the wall for Facebook — the platform is losing market share, fast, among young users.

Edison Research’s Infinite Dial study from early 2019 showed that 62% of U.S. 12–34 year-olds are Facebook users, down from 67% in 2018 and 79% in 2017. This decrease is particularly notable as 35–54 and 55+ age group usage has been constant or even increased.

There are many theories behind Facebook’s fall from grace among millennials and Gen Zers — an influx of older users that change the dynamics of the platform, competition from more mobile and visual-friendly platforms like Instagram and Snapchat, and the company’s privacy scandals are just a few.

We surveyed 115 of our Accelerated campus ambassadors to learn more about how they’re using Facebook today. It’s worth noting that this group skews older Gen Z (ages 18–24); we suspect you’d get different results if you surveyed younger teens.

Overall penetration is still high, as 99% of our respondents have Facebook accounts. And most aren’t abandoning the platform entirely: 59% are on Facebook every day, and another 32% are on it weekly. Daily Facebook usage is much lower than daily Instagram usage, however; 82% of our respondents use Instagram daily and 7% use it weekly.

Data from our scouts also confirms that the shift in usage in the last few years is particularly dramatic among younger users. 66% report using Facebook less frequently over the past two years, compared to 11% who use it more frequently (23% say their usage hasn’t changed).

What’s most interesting is what college students are using Facebook for. When we were in high school and college in the early/mid 2010s, our friends used Facebook to post (broadcast) content via their status, photos, and posts on friends’ Walls. Today, very few students use Facebook to “broadcast” content. Only 5% of our respondents say they regularly upload photos to Facebook, 4% post on friends’ Walls, and 3.5% post content to the Newsfeed (statuses). What are they doing instead?

Facebook launches a photo portability tool, starting in Ireland

It’s not friend portability, but Facebook has announced the launch today of a photo transfer tool to enable users of its social network to port their photos directly to Google’s photo storage service, via encrypted transfer.

The photo portability feature is initially being offered to Facebook users in Ireland, where the company’s international HQ is based. Facebook says it is still testing and tweaking the feature based on feedback but slates “worldwide availability” as coming in the first half of 2020.

It also suggests porting to other photo storage services will be supported in the future, in addition to Google Photos, though it has not specified which services it may seek to add.

Facebook says the tool is based on code developed via its participation in the Data Transfer Project — a collaborative effort started last year that’s currently backed by five tech giants (Apple, Facebook, Google, Microsoft and Twitter) who have committed to build “a common framework with open-source code that can connect any two online service providers, enabling a seamless, direct, user initiated portability of data between the two platforms”.
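The adapter framework the DTP describes can be sketched in miniature. The interface and class names below are hypothetical illustrations, not the project's actual (Java-based) API; the point is the shape of the design: each provider implements an exporter and an importer against a shared data model, so any two providers can be paired for a direct transfer.

```python
from dataclasses import dataclass
from typing import Iterable, List, Protocol

# A common data model all providers agree on (hypothetical).
@dataclass
class Photo:
    title: str
    fetch_url: str

class Exporter(Protocol):
    """A provider's adapter for reading photos out."""
    def export(self) -> Iterable[Photo]: ...

class Importer(Protocol):
    """A provider's adapter for writing photos in."""
    def import_photos(self, photos: Iterable[Photo]) -> int: ...

def transfer(source: Exporter, destination: Importer) -> int:
    """Port photos directly from one provider to another; returns count stored."""
    return destination.import_photos(source.export())

# Two toy providers standing in for real services.
class ToySource:
    def export(self) -> Iterable[Photo]:
        yield Photo("beach", "https://example.com/1.jpg")
        yield Photo("cat", "https://example.com/2.jpg")

class ToySink:
    def __init__(self) -> None:
        self.stored: List[Photo] = []

    def import_photos(self, photos: Iterable[Photo]) -> int:
        self.stored.extend(photos)
        return len(self.stored)
```

With one exporter and one importer per service, N services need 2N adapters rather than N² pairwise integrations, which is why the project depends on each provider building its own adapters.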

Facebook also points to a white paper it published in September — where it advocates for “clear rules” to govern the types of data that should be portable and “who is responsible for protecting that data as it moves to different providers”.

Behind all these moves is of course the looming threat of antitrust regulation, with legislators and agencies on both sides of the Atlantic now closely eyeing platforms’ grip on markets, eyeballs and data.

Hence Facebook’s white paper couching portability tools as “helping keep competition vibrant among online services”. (Albeit, if the ‘choice’ being offered is to pick another tech giant to get your data that’s not exactly going to reboot the competitive landscape.)

It’s certainly true that portability of user uploaded data can be helpful in encouraging people to feel they can move from a dominant service.

However it is also something of a smokescreen — especially when A) the platform in question is a social network like Facebook (because it’s people who keep other people stuck to these types of services); and B) the value derived from the data is retained by the platform regardless of whether the photos themselves travel elsewhere.

Facebook processes user uploaded data such as photos to gain personal insights to profile users for ad targeting purposes. So even if you send your photos elsewhere that doesn’t diminish what Facebook has already learned about you, having processed your selfies, groupies, baby photos, pet shots and so on. (It has also designed the portability tool to send a copy of the data; ergo, Facebook still retains your photos unless you take additional action — such as deleting your account.)

The company does not offer users any controls (portability tools or access rights) over the inferences it makes based on personal data such as photos.

Or indeed control over insights it derives from its analysis of how people use its platform and browse the wider Internet (Facebook tracks both users and non-users across the web via tools like social plug-ins and tracking pixels).

Given its targeted ads business is powered by a vast outgrowth of tracking (aka personal data processing), there’s little risk to Facebook to offer a portability feature buried in a sub-menu somewhere that lets a few in-the-know users click to send a copy of their photos to another tech giant.

Indeed, it may hope to benefit from similar incoming ports from other platforms in future.

“We hope this product can help advance conversations on the privacy questions we identified in our white paper,” Facebook writes. “We know we can’t do this alone, so we encourage other companies to join the Data Transfer Project to expand options for people and continue to push data portability innovation forward.”

Competition regulators looking to reboot digital markets will need to dig beneath the surface of such self-serving initiatives if they are to alight on a meaningful method of reining in platform power.

Despite bans, Giphy still hosts self-harm, hate speech and child sex abuse content

Image search engine Giphy bills itself as providing a “fun and safe way” to search and create animated GIFs. But despite its ban on illicit content, the site is littered with self-harm and child sex abuse imagery, TechCrunch has learned.

A new report from Israeli online child protection startup L1ght — previously AntiToxin Technologies — has uncovered a host of toxic content hiding within the popular GIF-sharing community, including illegal child abuse content, depictions of rape and other toxic imagery associated with topics like white supremacy and hate speech. The report, shared exclusively with TechCrunch, also showed content encouraging viewers into unhealthy weight loss and glamorizing eating disorders.

TechCrunch verified some of the company’s findings by searching the site using certain keywords. (We did not search for terms that may have returned child sex abuse content, as doing so would be illegal.) Although Giphy blocks many hashtags and search terms from returning results, search engines like Google and Bing still cache images with certain keywords.

When we tested using several words associated with illicit content, Giphy sometimes surfaced banned material in its own search results. When it didn’t, search engines often returned a stream of would-be banned results anyway.

L1ght develops advanced solutions to combat online toxicity. Through its tests, one search of illicit material returned 195 pictures on the first search page alone. L1ght’s team then followed tags from one item to the next, uncovering networks of illegal or toxic content along the way. The tags themselves were often innocuous in order to help users escape detection, but they served as a gateway to the toxic material.
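The tag-following technique L1ght describes is essentially a graph traversal: start from a seed tag, collect the items it returns, then follow each item's other tags outward to new items. Here is a minimal sketch of that idea, with hypothetical in-memory dictionaries standing in for a platform's search index:

```python
from collections import deque

def crawl_tags(
    tag_index: dict[str, set[str]],
    item_tags: dict[str, set[str]],
    seed: str,
) -> set[str]:
    """Breadth-first walk from a seed tag to every item reachable via shared tags.

    tag_index maps tag -> ids of items carrying that tag;
    item_tags maps item id -> the tags on that item.
    """
    seen_items: set[str] = set()
    seen_tags = {seed}
    queue = deque([seed])
    while queue:
        tag = queue.popleft()
        for item in tag_index.get(tag, ()):  # items returned for this tag
            if item in seen_items:
                continue
            seen_items.add(item)
            for other in item_tags.get(item, ()):  # follow the item's other tags
                if other not in seen_tags:
                    seen_tags.add(other)
                    queue.append(other)
    return seen_items
```

This is why innocuous-looking tags work as a gateway: a single benign seed tag can reach a whole connected network of material through items that carry both benign and banned tags.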

Despite a ban on self-harm content, researchers found numerous keywords and search terms to find the banned content. We have blurred this graphic image. (Image: TechCrunch)

Much of the more extreme content, including images of child sex abuse, is said to have been tagged using keywords associated with known child exploitation sites.

We are not publishing the hashtags, search terms or sites used to access the content, but we passed on the information to the National Center for Missing and Exploited Children, a national nonprofit established by Congress to fight child exploitation.

Simon Gibson, Giphy’s head of audience, told TechCrunch that content safety was of the “utmost importance” to the company and that it employs “extensive moderation protocols.” He said that when illegal content is identified, the company works with the authorities to report and remove it.

He also expressed frustration that L1ght had not contacted Giphy with the allegations first. L1ght said that Giphy is already aware of its content moderation problems.

Gibson said Giphy’s moderation system “leverages a combination of imaging technologies and human validation,” which involves users having to “apply for verification in order for their content to appear in our searchable index.” Content is “then reviewed by a crowdsourced group of human moderators,” he said. “If a consensus for rating among moderators is not met, or if there is low confidence in the moderator’s decision, the content is escalated to Giphy’s internal trust and safety team for additional review,” he said.

“Giphy also conducts proactive keyword searches, within and outside of our search index, in order to find and remove content that is against our policies,” said Gibson.

L1ght researchers used their proprietary artificial intelligence engine to uncover illegal and other offensive content. Using that platform, the researchers can find related content, allowing them to surface vast caches of illegal or banned material that would otherwise, for the most part, go unseen.

This sort of toxic content plagues online platforms, but algorithms only play a part. More tech companies are finding human moderation is critical to keeping their sites clean. But much of the focus to date has been on the larger players in the space, like Facebook, Instagram, YouTube and Twitter.

Facebook, for example, has been routinely criticized for outsourcing moderation to teams of lowly paid contractors who often struggle to cope with the sorts of things they have to watch, even experiencing post-traumatic stress-like symptoms as a result of their work. Meanwhile, Google’s YouTube this year was found to have become a haven for online sex abuse rings, where criminals had used the comments section to guide one another to other videos to watch while making predatory remarks.

Giphy and other smaller platforms have largely stayed out of the limelight during the past several years. But L1ght’s new findings indicate that no platform is immune to these sorts of problems.

L1ght says the Giphy users sharing this sort of content would make their accounts private so they wouldn’t be easily searchable by outsiders or the company itself. But even in the case of private accounts, the abusive content was being indexed by some search engines, like Google, Bing and Yandex, which made it easy to find. The firm also discovered that pedophiles were using Giphy as the means of spreading their materials online, including communicating with each other and exchanging materials. And they weren’t just using Giphy’s tagging system to communicate — they were also using more advanced techniques like tags placed on images through text overlays.

This same process was utilized in other communities, including those associated with white supremacy, bullying, child abuse and more.

This isn’t the first time Giphy has faced criticism for content on its site. Last year a report by The Verge described the company’s struggles to fend off illegal and banned content. Last year the company was booted from Instagram for letting through racist content.

Giphy is far from alone, but it is the latest example of companies not getting it right. Earlier this year and following a tip, TechCrunch commissioned then-AntiToxin to investigate the child sex abuse imagery problem on Microsoft’s search engine Bing. Under close supervision by the Israeli authorities, the company found dozens of illegal images in the results from searching certain keywords. When The New York Times followed up on TechCrunch’s report last week, its reporters found Bing had done little in the months that had passed to prevent child sex abuse content appearing in its search results.

It was a damning rebuke of the company’s efforts to combat child abuse in its search results, despite Microsoft having pioneered PhotoDNA, a photo detection tool the software giant built a decade ago to identify illegal images based on a huge database of hashes of known child abuse content.
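PhotoDNA itself is proprietary and uses a perceptual hash that survives resizing and recompression, but the matching step it enables can be illustrated with an ordinary cryptographic hash: fingerprint each uploaded file and check the fingerprint against a database of hashes of known illegal images. The sketch below uses SHA-256 purely as a stand-in; unlike a perceptual hash, it matches only byte-identical files.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Stand-in for a perceptual hash: SHA-256 of the raw bytes."""
    return hashlib.sha256(data).hexdigest()

class HashBlocklist:
    """Database of fingerprints of known banned images."""

    def __init__(self) -> None:
        self._hashes: set[str] = set()

    def add_known_image(self, data: bytes) -> None:
        """Record a known banned image's fingerprint."""
        self._hashes.add(fingerprint(data))

    def is_banned(self, data: bytes) -> bool:
        """Check an upload against the database at ingest time."""
        return fingerprint(data) in self._hashes
```

The design choice matters: matching hashes rather than images means platforms never need to hold copies of the abusive material itself, only opaque fingerprints shared across the industry.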

Giphy’s Gibson said the company was “recently approved” to use Microsoft’s PhotoDNA but did not say if it was currently in use.

Where some of the richest, largest and most-resourced tech companies are failing to preemptively limit their platforms’ exposure to illegal content, startups are filling in the content moderation gaps.

L1ght, which has a commercial interest in this space, was founded a year ago to help combat online predators, bullying, hate speech, scams and more.

The company was started by former Amobee chief executive Zohar Levkovitz and cybersecurity expert Ron Porat, previously the founder of ad-blocker Shine, after Porat’s own son experienced online abuse in the online game Minecraft. The company realized the problem with these platforms was something that had outgrown users’ own ability to protect themselves, and that technology needed to come to their aid.

L1ght’s business involves deploying its technology in much the same way as it has done here with Giphy: to identify, analyze and predict online toxicity with near real-time accuracy.