Facebook now says its password leak affected ‘millions’ of Instagram users

Facebook has confirmed its password-related security incident last month now affects “millions” of Instagram users, not “tens of thousands” of users as first thought.

The social media giant confirmed the new information in its updated blog post, first published on March 21.

“We discovered additional logs of Instagram passwords being stored in a readable format,” the company said. “We now estimate that this issue impacted millions of Instagram users. We will be notifying these users as we did the others.”

“Our investigation has determined that these stored passwords were not internally abused or improperly accessed,” the updated post said, but the company still has not said how it made that determination.

Facebook did not say exactly how many millions of users were affected, however.

Last month, Facebook admitted it had inadvertently stored “hundreds of millions” of user account passwords in plaintext for years, dating as far back as 2012. The company said the unencrypted passwords were stored in logs accessible to some 2,000 engineers and developers. The data was not leaked outside of the company, however, and Facebook still has not explained how the bug occurred.
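Facebook hasn’t detailed the root cause, but this class of bug typically arises when request or debug logging captures credential fields verbatim before they are hashed. As a purely illustrative sketch (hypothetical names, not Facebook’s actual systems), a redaction filter at the logging layer is one common safeguard:

```python
import logging
import re

# Hypothetical illustration: keys whose values should never reach log storage.
SENSITIVE_KEYS = ("password", "passwd", "secret", "token")

class RedactingFilter(logging.Filter):
    """Scrub sensitive-looking key=value pairs before a record is written."""
    PATTERN = re.compile(r"({0})=([^&\s]+)".format("|".join(SENSITIVE_KEYS)),
                         re.IGNORECASE)

    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = self.PATTERN.sub(r"\1=[REDACTED]", str(record.msg))
        return True  # keep the record, just with credentials scrubbed

logger = logging.getLogger("request_log")
handler = logging.StreamHandler()
handler.addFilter(RedactingFilter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# A naive request log line that would otherwise persist a plaintext password:
logger.info("POST /login user=alice&password=hunter2")
# Written out as: POST /login user=alice&password=[REDACTED]
```

The point of a filter like this is that scrubbing happens before any handler persists the line, so engineers browsing internal logs never see the raw credential in the first place.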

Facebook posted the update at 10am ET — an hour before the Special Counsel’s report into Russian election interference was published.

We asked the company when it learned of the new scale of the password leak and will update if we hear back.

Samsung responds to reviewer complaints about its Galaxy Fold phone

Samsung has issued a statement about its new folding phone, as early photos of tech reviewers with their shiny new toys were replaced on social media (and in numerous columns) by complaints from those same reviewers about problems with the phone’s screen.

Apparently a number of reviewers either mistakenly destroyed their phone screens or had the screens bork on them after a few days of use. It’s not a good look for Samsung.

However, our own Brian Heater has had his hands on the Samsung phone and has encountered nary an issue in his two days of use.

He wrote:

This sort of thing can happen with pre-production models. I’ve certainly had issues with review units in the past, but these reports are worth mentioning as a note of caution about a product we were concerned might not be ready for prime time only a couple of weeks ago.

At the very least, it’s as good a reason as any to wait a couple of weeks, until more of these are out in the world and it’s clearer how widespread these issues are, before dropping $2,000 on one.

All of that said, I’ve not had any technical issues with my Samsung Galaxy Fold. So far, so good. A day or so in does, however, tend to be the time when the harsh light of day starts to seep in on these things, after that initial novelty of the company’s admittedly impressive feat begins to wane.

In its response, the company is bravely forging ahead and (sort of) blaming the messenger for not using the thing correctly. The phones will go on sale in the U.S. on April 26 as planned.

No less esteemed a tech reviewer than Recode’s Walt Mossberg called the response from Samsung “Really weak.”

Here’s the statement in full:

“A limited number of early Galaxy Fold samples were provided to media for review. We have received a few reports regarding the main display on the samples provided. We will thoroughly inspect these units in person to determine the cause of the matter.

Separately, a few reviewers reported having removed the top layer of the display causing damage to the screen. The main display on the Galaxy Fold features a top protective layer, which is part of the display structure designed to protect the screen from unintended scratches. Removing the protective layer or adding adhesives to the main display may cause damage. We will ensure this information is clearly delivered to our customers.”

Facebook’s Portal will now surveil your living room for half the price

No, you’re not misremembering the details from that young adult dystopian fiction you’re reading — Facebook really does sell a video chat camera adept at tracking the faces of you and your loved ones. Now, you too can own Facebook’s poorly timed foray into social hardware for the low, low price of $99. That’s a pretty big price drop considering that the Portal, introduced less than six months ago, debuted at $199.

Unfortunately for whoever toiled away on Facebook’s hardware experiment, the device launched into an extremely Facebook-averse, notably privacy-conscious market. Those are pretty serious headwinds. Of course, plenty of regular users aren’t concerned about privacy — but they certainly should be.

As we found in our review, Facebook’s Portal is actually a pretty competent device with some thoughtful design touches. Still, that doesn’t really offset the unsettling idea of inviting a company notorious for disregarding user privacy into your home, the most intimate setting of all.

Facebook’s premium Portal+ with a larger, rotating 1080p screen is still priced at $349 when purchased individually, but if you buy a Portal+ with at least one other Portal, it looks like you can pick it up for $249. Facebook advertised the Portal discount for Mother’s Day and the sale ends on May 8. We reached out to the company to ask how sales were faring and if the holiday discounts would stick around for longer and we’ll update when we hear back.

The chat feature may soon return to Facebook’s mobile app

Facebook upset millions upon millions of users five years ago when it removed chat from its core mobile app and forced them to download Messenger to communicate privately with friends. Now it looks like it may be about to restore the option inside the Facebook app.

That’s according to researcher Jane Manchun Wong, who found an unreleased feature that brings limited chat functionality back into the core social networking app. Wong’s finding suggests that, at this point, calling, photo sharing and reactions won’t be supported inside the Facebook app’s chat feature, but it remains to be seen whether that is simply because the feature is still in development.

It is unclear whether the feature will ship to users at all, since this is a test. Messenger, which has over 1.3 billion monthly users, will likely stick around, but this change would give users another option for chatting with friends.

We’ve contacted Facebook for comment, although we’re yet to hear back from the company. We’ll update this story with any comment that the company does share.

As you’d expect, the discovery has been greeted with cheers from many users who were disgruntled when Facebook yanked chat from the app all those years ago. I can’t help but wonder, however, if there are more people today who are content using Messenger to chat without the entire Facebook service bolted on. Given all of Facebook’s missteps over the past year or two, consumer opinion of the social network has never been lower, which raises the appeal of using Messenger to connect with friends without engaging with Facebook’s advertising or news feed.

Wong’s finding comes barely a month after Facebook CEO Mark Zuckerberg sketched out a plan to pivot the company’s main focus to groups and private conversation rather than its previously public forum approach. That means messaging is about to become crucial to Facebook’s social graph, so why not bring it back to the core app? We’ll have to wait and see, but the evidence certainly shows Facebook is weighing the merits of such a move.

China’s largest stock photo provider draws fire over use of black hole image

While the world marvels at the first image ever taken of a black hole, a Chinese photo-sharing community has set off a huge public outcry over its use of the landmark photo, along with a wider debate over copyright practices in China.

As soon as the European Southern Observatory (ESO) released the black hole photo on April 10, Visual China Group (VCG), China’s leading stock image provider, often compared to Getty Images and the owner of Flickr’s one-time rival 500px, made the image available for sale in its library without attribution to the Event Horizon Telescope Collaboration (EHT), the network of radio telescopes that captured the image of the black hole.

“This is an editorial image. Please call 400-818-2525 or consult our customer service representative for commercial use,” said a note for the black hole image on VCG’s website.

Internet users took to social media to slam VCG for monetizing a photo intended for free distribution among the human race. Most of the images on ESO’s website are, according to the organization, released under a Creative Commons license.

Unless specifically noted, the images, videos, and music distributed on the public ESO website, along with the texts of press releases, announcements, pictures of the week, blog posts and captions, are licensed under a Creative Commons Attribution 4.0 International License, and may on a non-exclusive basis be reproduced without fee provided the credit is clear and visible.

VCG swiftly revised the note to say the black hole photo should not be used for commercial purposes, but Pandora’s box was already open. The incident sparked a plethora of comments on Weibo, China’s equivalent of Twitter, condemning VCG’s opportunistic business practices. The site is said to often play the victim to obtain financial compensation, seeking damages from users who inadvertently use public domain photos that VCG has preemptively copyrighted.

Shares of VCG plummeted 10 percent Friday morning in Shanghai, giving it a market cap of 17.66 billion yuan ($2.63 billion).

Assets in VCG’s massive content library range from logos of large tech companies like Baidu all the way to the Chinese national flag.

“Does your company also own copyrights to the national flag and national emblem?” remarked the Chinese Communist Youth League on its official Weibo account in a snarky response to VCG’s unscrupulous licensing practice.

The price tag of the national emblem image is, lo and behold, no less than 150 yuan ($22) for use in a newspaper article and at least 1,500 yuan ($220) on a magazine cover.

Screenshot: the image of the Chinese national emblem for sale on VCG, priced from 150 yuan to 1,500 yuan.

“Copyright protection should definitely be promoted. The question is, why is VCG allowed to price photos of the black hole and the like out of the market? Why is it able to exploit loopholes?” Du Yu, a Beijing-based freelance technology journalist, told TechCrunch.

TechCrunch has reached out to ESO for comment and will update the story once we hear back.

Government intervention followed close on the heels of the online criticism. On April 11, the cyberspace watchdog of Tianjin, where VCG’s parent company is based, ordered the photo site to end its “illegal, rule-breaking practices.”

VCG apologized on April 12 in a company statement, admitting the lack of oversight over its contracted contributors who allegedly uploaded the images in question. “We have taken down all non-compliant photos and closed down the site voluntarily for a revamp in accordance with related laws,” said VCG.

For true transparency around political advertising, U.S. tech companies must collaborate

In October 2017 online giants Twitter, Facebook, and Google announced plans to voluntarily increase transparency for political advertising on their platforms. The three plans to tackle disinformation had roughly the same structure: funder disclaimers on political ads, stricter verification measures to prevent foreign entities from posting such ads, and varying formats of ad archives.

All three announcements came just before representatives from the companies were due to testify before Congress about Russian interference in the 2016 election and reflected fears of forthcoming regulation, as well as concessions to consumer pressure.

Since then, the companies have continued to attempt to address the issue of digital deception occurring on their platforms.

Google recently released a white paper detailing how it would deal with online disinformation campaigns across many of its products. In the run-up to the 2018 midterm elections, Facebook announced it would ban false information about voting. These efforts reflect an awareness that the public is concerned about the use of social media to manipulate their votes and is pushing for tech companies to actively address the issue.

These efforts at self-regulation are a step in the right direction — but they fall far short of providing the true transparency necessary to inform voters about who is trying to influence them. The lack of consistency in disclosure across platforms, indecision over issue ads, and inaction on wider digital deception issues including fake and automated accounts, harmful micro-targeting, and the exposure of user data are major defects of this self-governing model.

For example, individuals looking at Facebook’s ad transparency platform are currently able to see information about who viewed an ad that is not currently available on Google’s platform. However, on Google the same user can see top keywords for advertisements, or search political ads by district, which cannot be done on Facebook.

With this inconsistency in disclosure across platforms, users are not able to get a full picture of who is trying to influence them, which prevents them from being able to cast an informed vote.

One hundred cardboard cutouts of Facebook founder and CEO Mark Zuckerberg stand outside the US Capitol in Washington, DC, April 10, 2018. Advocacy group Avaaz is calling attention to what the group says are hundreds of millions of fake accounts still spreading disinformation on Facebook. (Photo: SAUL LOEB/AFP/Getty Images)

Issue ads pose an additional problem. These are public communications that do not reference particular candidates, focusing instead on hot-button political issues such as gun control or immigration. Issue ads cannot currently be regulated in the same way that political communications that refer to a candidate can due to the Supreme Court’s interpretation of the First Amendment.

Moreover, as Bruce Falck, Twitter’s General Manager for Revenue Product, pointed out in a blog post addressing the platform’s impending transparency efforts, “there is currently no clear industry definition for issue-based ads.”

In the same post, Falck indicated a potential solution, writing, “We will work with our peer companies, other industry leaders, policy makers and ad partners to clearly define [issue ads] quickly and integrate them into the new approach mentioned above.” That post was written 18 months ago, but no definition has been established — possibly because tech companies are not collaborating to systemically confront digital deception.

This lack of collaboration damages the public’s right to be politically informed. If representatives from the platforms where digital deception occurs most often — Facebook, Twitter, and Google — were to form an independent advisory group that met regularly and worked with regulators and civil society to discuss solutions to digital deception, transparency and disclosure across the platforms would be more complete.

The platforms could look to the example set by the nuclear power industry, where national and international nonprofit advisory bodies facilitate cooperation among utilities to ensure nuclear safety. The World Association of Nuclear Operators (WANO) connects all 115 nuclear power plant operators in 34 countries in order to facilitate the exchange of experience and expertise. The Institute of Nuclear Power Operations (INPO) in the U.S. functions in a similar fashion but is able to institute tighter sanctions since it operates at the national level.

Similar to WANO and INPO, an independent advisory group for the technology sector could develop a consistent set of disclosure guidelines — based on policy regulations put in place by government — that would apply evenly across all social media platforms and search engines.

These guidelines would hopefully include a unified database of ads purchased by political groups as well as clear and uniform disclaimers of the source of each ad, how much it cost, and who it targeted. Beyond paid ads, the industry group could develop guidelines to increase transparency for all communications by organized political entities, address computational propaganda, and determine how best to safeguard users’ data.

Additionally, if the companies were working together, they could set up a consistent definition of what an issue ad is and determine what transparency guidelines should apply. This is particularly relevant given policymakers’ limited authority to regulate issue ads.

Importantly, working together regularly would allow platforms to identify technological advances that might catch policymakers by surprise. Deepfakes — fabricated images, audio, or video that purport to be authentic — represent one area where technology companies will almost certainly be ahead of lawmakers’ expertise. If digital corporations were working together as well as cooperating with government agencies, they could flag new technologies like these in advance and help regulators determine the best way to maintain transparency in the face of a rapidly changing technological landscape.

Would such collaboration ever happen? The extensive aversion to regulation shown by these companies indicates a worrying preference for appeasing advertisers at the expense of the American public.

However, in August 2018, in advance of the midterm elections, representatives from large tech firms did meet to discuss countering manipulation on their platforms. This followed a meeting in May with U.S. intelligence officials, also to discuss the midterm elections. Additionally, Facebook, Microsoft, Twitter, and YouTube formed the Global Internet Forum to Counter Terrorism to disrupt terrorists’ ability to promote extremist viewpoints on those platforms. This shows that when they are motivated, technology companies can work together.

It’s time for Facebook, Twitter, and Google to put their obligation to the public interest first and work together to systematically address the threat to democracy posed by digital deception.

Facebook agrees to clearer T&Cs in Europe

Facebook has agreed to amend its terms and conditions under pressure from EU lawmakers.

The new terms will make it plain that free access to its service is contingent on users’ data being used to profile them to target with ads, the European Commission said today.

“The new terms detail what services Facebook sells to third parties that are based on the use of their user’s data, how consumers can close their accounts and under what reasons accounts can be disabled,” it writes.

The exact wording of the new terms has not yet been published, though, and the company has until the end of June 2019 to comply — so it remains to be seen how clear ‘clear’ is.

Nonetheless the Commission is couching the concession as a win for consumers, trumpeting the forthcoming changes to Facebook’s T&C in a press release in which Vera Jourová, commissioner for justice, consumers and gender equality, writes:

Today Facebook finally shows commitment to more transparency and straight forward language in its terms of use. A company that wants to restore consumers trust after the Facebook/ Cambridge Analytica scandal should not hide behind complicated, legalistic jargon on how it is making billions on people’s data. Now, users will clearly understand that their data is used by the social network to sell targeted ads. By joining forces, the consumer authorities and the European Commission, stand up for the rights of EU consumers.

The change to Facebook’s T&Cs follows pressure applied to it in the wake of the Cambridge Analytica data misuse scandal, according to the Commission.

Along with national consumer protection authorities it says it asked Facebook to clearly inform consumers how the service gets financed and what revenues are derived from the use of consumer data as part of its response to the data-for-political-ads scandal.

“Facebook will introduce new text in its Terms and Services explaining that it does not charge users for its services in return for users’ agreement to share their data and to be exposed to commercial advertisements,” it writes. “Facebook’s terms will now clearly explain that their business model relies on selling targeted advertising services to traders by using the data from the profiles of its users.”

We reached out to Facebook with questions — including asking to see the wording of the new terms — but at the time of writing the company had declined to provide any response.

It’s also not clear whether the amended T&Cs will apply universally or only for Facebook users in Europe.

European commissioners have been squeezing social media platforms including Facebook over consumer rights issues since 2017 — when Facebook, Twitter and Google were warned the Commission was losing patience with their failure to comply with various consumer protection standards.

Aside from unclear language in their T&Cs, specific issues of concern for the Commission include terms that deprive consumers of their right to take a company to court in their own country or require consumers to waive mandatory rights (such as their right to withdraw from an online purchase).

Facebook has now agreed to several other T&Cs changes under pressure from the Commission, i.e. in addition to making it plainer that ‘if it’s free, you’re the product’.

Namely, the Commission says Facebook has agreed to: 1) amend its policy on limitation of liability — saying Facebook’s new T&Cs “acknowledges its responsibility in case of negligence, for instance in case data has been mishandled by third parties”; 2) amend its power to unilaterally change terms and conditions by “limiting it to cases where the changes are reasonable also taking into account the interest of the consumer”; 3) amend the rules concerning the temporary retention of content which has been deleted by consumers — with content only able to be retained in “specific cases” (such as to comply with an enforcement request by an authority), and only for a maximum of 90 days when retained for “technical reasons”; and 4) amend the language clarifying the right to appeal of users when their content has been removed.

The Commission says it expects Facebook to make all the changes by the end of June at the latest — warning that the implementation will be closely monitored.

“If Facebook does not fulfil its commitments, national consumer authorities could decide to resort to enforcement measures, including sanctions,” it adds.

Human rights activist Amira Yahyaoui is battling the US college financial aid system

Tunisian human rights activist Amira Yahyaoui couldn’t go to college.

Not because she couldn’t afford it; where she comes from, college is virtually free. She lost the opportunity to pursue higher education, to finish high school, even, when she was exiled from Tunisia at age 17, under the repressive regime of the country’s former President, Zine El Abidine Ben Ali.

As part of the Tunisian human rights diaspora, she was inspired to build Al Bawsala, a globally renowned NGO that fights for government accountability, transparency and access to information. Now, Yahyaoui has traveled thousands of miles to San Francisco to fight another battle near and dear to her heart: civic education, or in Silicon Valley terms, edtech.

“I always knew that I wouldn’t allow myself to do anything else before solving the problem in my country and today, Tunisia is the only Arab democracy in the world,” Yahyaoui told TechCrunch.

With that in mind, her focus has shifted to Mos, a tech-enabled platform for students to apply for financial aid. With backing from Uber co-founder Garrett Camp, his startup studio Expa, Kleiner Perkins chairman John Doerr, Base Ventures, Sweet Capital and others, Mos has closed a $4 million seed round and plans to take its recently-launched product to the next level.

The startup seeks to decrease American student debt, which totaled nearly $1.6 trillion in 2018, and digitize the antiquated government systems that deter students from applying for financial aid. For a one-time fee of $149 and about 20 minutes of their time, Mos helps students of all backgrounds maximize their aid awards.

“Our mission is to bridge the gap between citizens and government in a way that works with technology today,” Yahyaoui said.

Yahyaoui is applying what she’s learned building a government-fighting NGO to the startup world, and with the support of top-tier investors, she’s well on her way to proving an “uneducated” immigrant woman of color can write a Silicon Valley success story for the masses.

A face of the Arab Spring

Mos founder and chief executive officer Amira Yahyaoui.

After being forced out of her home country, Yahyaoui fled to France, where she lived as an illegal immigrant and continued to fight against Tunisia’s authoritarian leadership through her blog and an anti-censorship campaign she started online.

When social media sparked anti-government protests across the Middle East, Yahyaoui, still unable to reenter Tunisia, became a face of what was later called the Arab Spring; her digital prowess, activist reputation and persistent efforts to highlight the Tunisian administration’s human rights abuses made her one of the movement’s most visible figures.

On January 14, 2011, when the protests succeeded in making Tunisia a pioneer of Arab democracy and ended Ben Ali’s reign, Yahyaoui got her passport back and went home immediately.

Back in Tunisia with newfound freedom, she had an agenda: To hold the governing agency charged with writing a new Tunisian constitution accountable.

Yahyaoui built Al Bawsala, translated as The Compass, an NGO focused on transparency and government accountability. Al Bawsala became one of the largest NGOs in the Middle East, a bona fide success that attracted numerous awards and cemented Yahyaoui’s status as a fearless advocate for human rights, a freedom fighter and one of the most influential Arab women in the world.

“I had to work probably 10 times harder to get to be the self-educated me I am today,” she said. “I saw way too many people getting their education refused and therefore their future ruined.”

Her global standing earned her a seat on the board of the United Nations High Commissioner for Refugees’ Advisory Group on Gender, Forced Displacement, and Protection, as well as the title of Young Global Leader at the World Economic Forum and co-chair of the Davos conference in 2016, a title she shared with Microsoft’s Satya Nadella and GM’s Mary Barra.

Three years later, with a resume enviable to any dignitary, Yahyaoui is leveraging her unique experience to lure in venture capitalists and use their cash for good.

Repairing a broken financial aid system

The Mos dashboard.

Mos is like if TurboTax married Typeform and had a baby, Yahyaoui explained. Not dissimilar to the Common App, Mos lets students apply to more than 500 federal and state-based aid programs in minutes using a survey that matches them to every grant and scholarship program they qualify for, while simultaneously completing the FAFSA and state aid applications. To ensure every family is getting the most financial support possible, a Mos financial aid advisor reviews each case and negotiates with colleges for higher awards.

“Today, the biggest problem is people think they are not eligible for financial aid just because of how the thing is designed,” Yahyaoui said. “You’re supposed to just go ahead and fill a form that has 200 questions and then send it like a bottle in the sea and wait for months.”

Mos will complete a full-scale launch this summer and eventually tackle other nations’ college financial aid systems, thanks to the new infusion of capital and the high-profile relationships Yahyaoui has forged in just one year of living in the Bay Area.

Ultimately, it was Yahyaoui’s activism that granted her a ticket into the opaque world of Silicon Valley VC. As it turns out, angel investor Khaled Helioui, a fellow Tunisian immigrant who’d taken up residence in San Francisco, was familiar with Yahyaoui’s work and when he heard she had relocated to the Bay Area to launch a technology startup, he wanted to know exactly what she was building. Today, he’s a Mos investor and board member and it was his introductions that helped Yahyaoui quickly and skillfully close her seed round.

An early angel investor in Uber, Helioui connected Yahyaoui with his friend Garrett Camp, the very wealthy co-founder and chairman of the ride-hailing giant, who was sold on Mos’s mission right off the bat.

“I think because Garrett is an immigrant, he knows what it is to suffer with bureaucracy,” Yahyaoui said. “He was a huge believer. He actually made it so easy for me because he said, okay, here’s an office, just stay and work.”

Camp then introduced her to John Doerr, the chairman of the esteemed VC firm Kleiner Perkins, known for his successful bets on companies like Google and Amazon. With Camp and Doerr on board, Mos didn’t struggle to raise additional capital; in fact, Yahyaoui was in an unusual position of being able to reject investors whose values and vision for Mos clearly didn’t align with hers.

Tearing down barriers

Yahyaoui, center, with the Mos team in San Francisco.

Yahyaoui isn’t in the startup business to get rich off students trying to navigate their way through the exorbitantly expensive process of applying to and attending college. She’s part of a growing class of founders out to prove that you can pair profits with good morals and lead venture-backed, values-based businesses.

“I know if I created the same thing as an NGO, I could have already raised $100 million, but I like the accountability of business,” she said. “We can create businesses that are good for people.”

Yahyaoui’s story, from being exiled from her home country at a young age to fighting an authoritarian regime, is not one that’s ever been told in Silicon Valley before.

In addition to being a trailblazing human rights advocate, she’s a woman, an immigrant, “uneducated” by Silicon Valley standards and a first-time tech founder who was able to walk into a meeting with John Doerr and walk out with a term sheet.

If she’s successful in building a global edtech business, she’ll be emblematic of the meritocratic culture The Valley has falsely claimed to uphold. Even if she’s not successful, she’ll have torn down barriers for other underrepresented founders and written a success story fitting for this new era of accountability in tech.

To cut down on spam, Twitter cuts the number of accounts you can follow per day

Twitter just took another big step to help boot spammers off its platform: it’s cutting the number of accounts users can follow each day from 1,000 to just 400. The idea is that the new limit helps prevent spammers from rapidly growing their networks by following and then unfollowing Twitter accounts in a “bulk, aggressive or indiscriminate manner” – something that’s a violation of the Twitter Rules.

A number of services were recently banned from Twitter’s API for doing this same thing.

Several companies had been offering tools that allowed their customers to automatically follow a large number of users with little effort. This works as a growth tactic because some people will follow back out of courtesy, without realizing they’ve followed a bot.

The companies also offered tools to mass unfollow the Twitter accounts of those who didn’t return the favor by following the bot back. Other automated tools were often provided as well, like ones for creating those annoying auto-DMs.

At the beginning of the year, Twitter suspended a good handful of apps for violating its rules around “following and follow churn.” But booting those companies only addressed the ones that aimed to profit by providing spammy automation as a service for others to use.

To really take on the spammers, the limits around how many people Twitter users can follow also had to be changed at the API level.
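To make that concrete, here is a minimal, hypothetical sketch of how a per-day follow cap might be enforced server-side; it is illustrative only, not Twitter’s actual implementation:

```python
from collections import defaultdict
from datetime import datetime, timezone

DAILY_FOLLOW_CAP = 400  # the new per-day follow limit

class FollowRateLimiter:
    """Fixed-window counter: at most `cap` follow actions per user per UTC day."""

    def __init__(self, cap: int = DAILY_FOLLOW_CAP):
        self.cap = cap
        self.counts = defaultdict(int)  # (user_id, day) -> follows so far today

    def try_follow(self, user_id: str) -> bool:
        today = datetime.now(timezone.utc).date()  # window keyed to the UTC day
        key = (user_id, today)
        if self.counts[key] >= self.cap:
            return False  # over the daily limit: reject the follow request
        self.counts[key] += 1
        return True

limiter = FollowRateLimiter()
assert all(limiter.try_follow("user_1") for _ in range(400))  # first 400 pass
assert not limiter.try_follow("user_1")  # the 401st follow of the day is refused
```

A production system would evict stale day buckets and apply the check inside the follow endpoint itself, but the shape of the enforcement is the same: once the counter hits the cap, further follow calls fail until the window rolls over.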

However, some people believe Twitter hasn’t gone far enough with today’s move.

In response to Twitter’s tweet about the new limits, several users asked why the number “400” was chosen, as that is still far more than a regular Twitter user would need to follow in a single day. Some said it took them years to get to the point of following hundreds of people. Meanwhile, the business use case for following 400 people a day is somewhat debatable, since DMs can be left open and companies can tweet a special URL to send customers to their inbox to continue a conversation – no following or unfollowing needed on either side.

While smaller businesses may still employ mass following techniques to attract customers, this at least puts more of a cap on those efforts.

These new limits and the crackdown on spam-service providers aren’t the only steps Twitter has taken in recent months to tackle the spam problem on its platform.

The company also updated its reporting tools to allow users to report spam, like fake accounts; and it introduced new security measures around account verification and sign-up, alongside other changes focused on more proactively identifying spammers. Last summer, Twitter also purged accounts it had previously locked for being spammy from people’s follower metrics.

Combined, the series of actions is designed to make spamming Twitter less attractive and considerably more difficult to scale. This impacts not only those who use spam for financial gain but also the new wave of fake news peddlers looking to topple democracies and disrupt elections – something that now has the U.S. government considering increased regulations for social media.

UK sets out safety-focused plan to regulate Internet firms

The UK government has laid out proposals to regulate online and social media platforms, setting out the substance of its long-awaited White Paper on online harms today — and kicking off a public consultation.

The Online Harms White Paper is a joint proposal from the Department for Digital, Culture, Media and Sport (DCMS) and Home Office. The paper can be read in full here (PDF).

It follows the government announcement of a policy intent last May, and a string of domestic calls for greater regulation of the Internet as politicians have responded to rising concern about the mental health impacts of online content.

The government is now proposing to put a mandatory duty of care on platforms to take reasonable steps to protect their users from a range of harms — including but not limited to illegal material such as terrorist and child sexual exploitation and abuse (which will be covered by further stringent requirements under the plan).

The approach is also intended to address a range of content and activity that’s deemed harmful.

Examples provided by the government of the sorts of broader harms it’s targeting include inciting violence and violent content; encouraging suicide; disinformation; cyber bullying; and inappropriate material being accessed by children.

Content promoting suicide has been thrown into the public spotlight in the UK in recent months, following media reports about a schoolgirl whose family found out she had been viewing pro-suicide content on Instagram after she killed herself.

The Facebook-owned platform subsequently agreed to change its policies towards suicide content, saying it would start censoring graphic images of self-harm, after pressure from ministers.

Commenting on the publication of the White Paper today, digital secretary Jeremy Wright said: “The era of self-regulation for online companies is over. Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough. Tech can be an incredible force for good and we want the sector to be part of the solution in protecting their users. However those that fail to do this will face tough action.

“We want the UK to be the safest place in the world to go online, and the best place to start and grow a digital business and our proposals for new laws will help make sure everyone in our country can enjoy the Internet safely.”

In another supporting statement Home Secretary Sajid Javid added: “The tech giants and social media companies have a moral duty to protect the young people they profit from. Despite our repeated calls to action, harmful and illegal content – including child abuse and terrorism – is still too readily available online.

“That is why we are forcing these firms to clean up their act once and for all. I made it my mission to protect our young people – and we are now delivering on that promise.”

Children’s charity, the NSPCC, was among the sector bodies welcoming the proposal.

“This is a hugely significant commitment by the Government that once enacted, can make the UK a world pioneer in protecting children online,” wrote CEO Peter Wanless in a statement.

“For too long social networks have failed to prioritise children’s safety and left them exposed to grooming, abuse, and harmful content. So it’s high time they were forced to act through this legally binding duty to protect children, backed up with hefty punishments if they fail to do so.”

The Internet Watch Foundation, which works to stop the spread of child exploitation imagery online, warned against unintended consequences from badly planned legislation, however, and urged the government to take a “balanced approach”.

The proposed laws would apply to any company that allows users to share or discover user generated content or interact with each other online — meaning companies both big and small.

Nor is it just social media platforms: file hosting sites, public discussion forums, messaging services and search engines are among those falling under the planned law’s remit.

The government says a new independent regulator will be introduced to ensure Internet companies meet their responsibilities, with ministers consulting on whether this should be a new or existing body.

Telecoms regulator Ofcom has been rumored as one possible contender, though the UK’s data watchdog, the ICO, has previously suggested it should be involved in any Internet oversight given its responsibility for data protection and privacy. (According to the FT a hybrid entity combining the two is another possibility — although the newspaper reports that the government remains genuinely undecided on who the regulator will be.)

The future Internet watchdog will be funded by industry in the medium term, with the government saying it’s exploring options such as an industry levy to put it on a sustainable footing.

On the enforcement front, the watchdog will be armed with a range of tools — with the government consulting on powers for it to issue substantial fines; block access to sites; and potentially to impose liability on individual members of senior management.

So there’s at least the prospect of a high profile social media CEO being threatened with UK jail time in future if they don’t do enough to remove harmful content.

On the financial penalties front, Wright suggested in an interview on Sky News that the government is entertaining GDPR-level fines of as much as 4% of a company’s annual global turnover.

Other elements of the proposed framework include giving the regulator the power to force tech companies to publish annual transparency reports on the amount of harmful content on their platforms and what they are doing to address it; to compel companies to respond to users’ complaints and act to address them quickly; and to comply with codes of practice issued by the regulator, such as requirements to minimise the spread of misleading and harmful disinformation with dedicated fact checkers, particularly during election periods.

A long-running enquiry by a DCMS parliamentary committee into online disinformation last year, which was continuously frustrated in its attempts to get Facebook founder Mark Zuckerberg to testify before it, concluded with a laundry list of recommendations for tightening regulations around digital campaigning.

The committee also recommended clear legal liabilities for tech companies to act against “harmful or illegal content”, and suggested a levy on tech firms to support enhanced regulation.

Responding to the government’s White Paper in a statement today, DCMS committee chair Damian Collins broadly welcomed the government’s proposals — though he also pressed for the future regulator to have the power to conduct its own investigations, rather than relying on self-reporting by tech firms.

“We need a clear definition of how quickly social media companies should be required to take down harmful content, and this should include not only when it is referred to them by users, but also when it is easily within their power to discover this content for themselves,” Collins wrote.

“The regulator should also give guidance on the responsibilities of social media companies to ensure that their algorithms are not consistently directing users to harmful content.”

Another element of the government’s proposal is a “Safety by Design” framework that’s intended to help companies incorporate online safety features in new apps and platforms from the start.

The government also wants the regulator to head up a media literacy strategy that’s intended to equip people with the knowledge to recognise and deal with a range of deceptive and malicious behaviours online, such as catfishing, grooming and extremism.

It writes that the UK is committed to a free, open and secure Internet — and makes a point of noting that the watchdog will have a legal duty to pay “due regard” to innovation, and also to protect users’ rights online, being particularly mindful not to infringe privacy and freedom of expression.

It therefore suggests technology will be an integral part of any solution, saying the proposals are designed to promote a culture of continuous improvement among companies — and highlighting technologies such as Google’s “Family Link” and Apple’s Screen Time app as examples of the sorts of developments it wants the policy framework to encourage.

Such caveats are unlikely to do much to reassure those concerned the approach will chill online speech, and/or place an impossible burden on smaller firms with fewer resources to monitor what their users are doing.

“The government’s proposals would create state regulation of the speech of millions of British citizens,” warns digital and civil rights group the Open Rights Group, in a statement by its executive director Jim Killock. “We have to expect that the duty of care will end up widely drawn, with serious implications for legal content that is deemed potentially risky, whether it really is or not.

“The government refused to create a state regulator for the press because it didn’t want to be seen to be controlling free expression. We are skeptical that state regulation is the right approach.”

UK startup policy advocacy group Coadec was also quick to voice concerns — warning that the government’s plans will “entrench the tech giants, not punish them”.

“The vast scope of the proposals means they cover not just social media but virtually the entire internet – from file sharing to newspaper comment sections. Those most impacted will not be the tech giants the Government claims they are targeting, but everyone else. It will benefit the largest platforms with the resources and legal might to comply – and restrict the ability of British startups to compete fairly,” said Coadec executive director Dom Hallas in a statement. 

“There is a reason that Mark Zuckerberg has called for more regulation. It is in Facebook’s business interest.”

Tech industry association techUK also put out a response statement warning of the need to avoid disproportionate impacts.

“Some of the key pillars of the Government’s approach remain too vague,” said Vinous Ali, head of policy, techUK. “It is vital that the new framework is effective, proportionate and predictable. Clear legal definitions that allow companies in scope to understand the law and therefore act quickly and with confidence will be key to the success of the new system.

“Not all of the legitimate concerns about online harms can be addressed through regulation. The new framework must be complemented by renewed efforts to ensure children, young people and adults alike have the skills and awareness to navigate the digital world safely and securely.”

The government has launched a 12-week consultation on the proposals, ending July 1, after which it says it will set out the action it will take in developing its final proposals for legislation.

“Following the publication of the Government Response to the consultation, we will bring forward legislation when parliamentary time allows,” it adds.

Last month a House of Lords committee recommended an overarching super regulator be established to plug any legislative gaps and/or handle overlaps in rules on Internet platforms, arguing that “a new framework for regulatory action” is needed to handle the digital world.

Though the government appears confident that an Internet regulator will be able to navigate any legislative patchwork and keep tech firms in line on its own — at least, for now.

The House of Lords committee was another parliamentary body that came down in support of a statutory duty of care for online services hosting user-generated content, suggesting it should have a special focus on children and “the vulnerable in society”.

And there’s no doubt the concept of regulating Internet platforms has broad consensus among UK politicians — on both sides of the aisle. But how to do that effectively and proportionately is another matter.

We reached out to Facebook and Google for a response to the White Paper.

Commenting on the Online Harms White Paper in a statement, Rebecca Stimson, Facebook’s head of UK public policy, said: “New rules for the internet should protect society from harm while also supporting innovation, the digital economy and freedom of speech. These are complex issues to get right and we look forward to working with the Government and Parliament to ensure new regulations are effective.”

Stimson also reiterated how Facebook has expanded the number of staff it has working on trust and safety issues to 30,000 in recent years, as well as claiming it’s invested heavily in technology to help prevent abuse — while conceding that “we know there is much more to do”.

Last month the company revealed shortcomings with its safety measures around livestreaming, after it emerged that the massacre in Christchurch, New Zealand, which was livestreamed to Facebook’s platform, had not been flagged for accelerated review by moderators because it was not tagged as suicide-related content.

Facebook said it would be “learning” from the incident and “re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review”.

In its response to the UK government White Paper today, Stimson added: “The internet has transformed how billions of people live, work and connect with each other, but new forms of communication also bring huge challenges. We have responsibilities to keep people safe on our services and we share the government’s commitment to tackling harmful content online. As Mark Zuckerberg said last month, new regulations are needed so that we have a standardised approach across platforms and private companies aren’t making so many important decisions alone.”