Chipotle customers are saying their accounts have been hacked

A stream of Chipotle customers have said their accounts have been hacked and are reporting fraudulent orders charged to their credit cards — sometimes totaling hundreds of dollars.

Customers have posted on several Reddit threads complaining of account breaches and many more have tweeted at @ChipotleTweets to alert the fast food giant of the problem. In most cases, orders were put through under a victim’s account and delivered to addresses often not even in the victim’s state.

Many of the customers TechCrunch spoke to in the past two days said they used their Chipotle account password on other sites. Chipotle spokesperson Laurie Schalow told TechCrunch that credential stuffing was to blame. Hackers take lists of usernames and passwords from other breached sites and brute-force their way into other accounts.

But several customers we spoke to said their password was unique to Chipotle. Another customer said they didn’t have an account but ordered through Chipotle’s guest checkout option.

Tweets from Chipotle customers. (Screenshot: TechCrunch)

When we asked Chipotle about this, Schalow said the company is “monitoring any possible account security issues of which we’re made aware and continue to have no indication of a breach of private data of our customers,” and reiterated that the company’s data points to credential stuffing.

It’s a similar set of complaints to those made by DoorDash customers last year, who said their accounts had been improperly accessed. DoorDash also blamed the account hacks on credential stuffing, but could not explain how some accounts were breached even when users told TechCrunch that they used a unique password on the site.

If credential stuffing is to blame for the Chipotle account breaches, rolling out two-factor authentication would help defeat the automated login process and put an additional barrier between a hacker and a victim’s account.
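
For readers curious what that extra barrier looks like in practice, here is a minimal sketch of time-based one-time passwords (TOTP), the most common form of app-based two-factor authentication. It uses the third-party pyotp library and is purely illustrative; it says nothing about how Chipotle would actually implement 2FA.

```python
# A minimal sketch of time-based one-time passwords (TOTP), the most common
# form of app-based two-factor authentication. Illustrative only; nothing
# here reflects Chipotle's systems. Requires the third-party pyotp package
# (pip install pyotp).
import pyotp

# At enrollment, the service generates a per-user secret and shares it with
# the user's authenticator app, usually via a QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# At login, a password alone is no longer enough: the user must also supply
# the six-digit code currently shown by their authenticator app.
code_from_user = totp.now()  # simulate the code the user's app would show
print("Code accepted:", totp.verify(code_from_user))

# A credential-stuffing bot replaying a leaked password cannot produce a
# valid code, so its login attempt fails this second check.
print("Stolen password alone:", totp.verify("000000"))
```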

But when asked if Chipotle has plans to roll out two-factor authentication to protect its customers going forward, spokesperson Schalow declined to comment. “We don’t discuss our security strategies.”

Chipotle reported a data breach in 2017 affecting its 2,250 restaurants. Hackers infected its point-of-sale devices with malware, scraping millions of payment cards from unsuspecting restaurant goers. More than a hundred fast food and restaurant chains were also affected by the same malware infections.

In August, three suspects said to be members of the FIN7 hacking and fraud group were charged with the credit card thefts.

Facebook’s Portal will now surveil your living room for half the price

No, you’re not misremembering the details from that young adult dystopian fiction you’re reading — Facebook really does sell a video chat camera adept at tracking the faces of you and your loved ones. Now, you too can own Facebook’s poorly timed foray into social hardware for the low, low price of $99. That’s a pretty big price drop considering that the Portal, introduced less than six months ago, debuted at $199.

Unfortunately for whoever toiled away on Facebook’s hardware experiment, the device launched into an extremely Facebook-averse, notably privacy-conscious market. Those are pretty serious headwinds. Of course, plenty of regular users aren’t concerned about privacy — but they certainly should be.

As we found in our review, Facebook’s Portal is actually a pretty competent device with some thoughtful design touches. Still, that doesn’t really offset the unsettling idea of inviting a company notorious for disregarding user privacy into your home, the most intimate setting of all.

Facebook’s premium Portal+ with a larger, rotating 1080p screen is still priced at $349 when purchased individually, but if you buy a Portal+ with at least one other Portal, it looks like you can pick it up for $249. Facebook advertised the Portal discount for Mother’s Day and the sale ends on May 8. We reached out to the company to ask how sales were faring and if the holiday discounts would stick around for longer and we’ll update when we hear back.

Security flaw in EA’s Origin client exposed gamers to hackers

Electronic Arts has fixed a vulnerability in its online gaming platform Origin after security researchers found they could trick an unsuspecting gamer into remotely running malicious code on their computer.

The bug affected Windows users with the Origin app installed. Tens of millions of gamers use the Origin app to buy, access and download games. To make it easier to access an individual game’s store from the web, the client has its own URL scheme that allows gamers to open the app and load a game from a web page by clicking a link with origin:// in the address.
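
As background on how such links work: on Windows, a custom URL scheme is just a registry entry mapping the scheme to a command line, and the browser hands the full clicked URL to that command. The read-only sketch below, using Python's standard winreg module, shows how you could inspect that wiring yourself. The exact values on a machine with Origin installed may differ, so treat it as a generic illustration rather than a description of EA's registration.

```python
# A read-only look at how a custom URL scheme such as origin:// is wired up
# on Windows: the scheme lives under HKEY_CLASSES_ROOT, and its
# shell\open\command subkey holds the command line the browser invokes with
# the full clicked URL (the "%1" placeholder). Generic illustration only;
# values on a machine with Origin installed may differ.
import winreg

def describe_scheme(scheme: str) -> None:
    try:
        # The "URL Protocol" value marks the key as a clickable URL scheme.
        with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, scheme) as key:
            winreg.QueryValueEx(key, "URL Protocol")
        # The registered handler, typically something like
        # '"C:\Program Files\...\Client.exe" "%1"'.
        path = rf"{scheme}\shell\open\command"
        with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, path) as cmd_key:
            command, _ = winreg.QueryValueEx(cmd_key, "")
        print(f"{scheme}:// links are handed to: {command}")
    except FileNotFoundError:
        print(f"No {scheme}:// handler is registered on this machine.")

if __name__ == "__main__":
    describe_scheme("origin")
```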

But two security researchers, Daley Bee and Dominik Penner of Underdog Security, found that the app could be tricked into running any app on the victim’s computer.

“An attacker could’ve ran anything they wanted,” Bee told TechCrunch.

‘Popping calc’ to demonstrate a remote code execution bug in Origin. (Image: supplied)

The researchers gave TechCrunch proof-of-concept code to test the bug for ourselves. The code allowed any app to run at the same level of privileges as the logged-in user. In this case, the researchers popped open the Windows calculator — the go-to app for hackers to show they can run code remotely on an affected computer.

But worse, a hacker could send malicious PowerShell commands; PowerShell is a built-in Windows app often used by attackers to download additional malicious components and install ransomware.

Bee said a malicious link could be sent by email or listed on a webpage, but it could also be triggered if the malicious code was combined with a cross-site scripting exploit that ran automatically in the browser.

It was also possible to steal a user’s account access token using a single line of code, allowing a hacker to gain access to a user’s account without needing their password.

Origin’s macOS client wasn’t affected by the bug.

EA spokesperson John Reseburg confirmed a fix was rolled out Monday. TechCrunch confirmed the code no longer worked following the update.

TikTok downloads banned on iOS and Android in India over porn and other illegal content

TikTok, the user-generated video sharing app from Chinese publisher Bytedance that has been a global runaway success, has stumbled hard in one of the world’s biggest mobile markets, India, over illicit content in its app.

Today, the country’s main digital communications regulator, the Ministry of Electronics and Information Technology, ordered both Apple and Google to remove the app from their app stores, per a request from the Madras High Court, which investigated and determined that the app — which has hundreds of millions of users, including minors — was encouraging pornography and other illicit content.

This is the second time in two months that TikTok’s content has been dinged by regulators, after the app was fined $5.7 million by the FTC in the US for violating children’s privacy rules.

The order in India does not impact the 120 million users in the country who already have the app downloaded, or those on Android who might download it from a source outside of Google’s official Android store. But it’s a strong strike against TikTok that will impede its growth, harm its reputation, and potentially pave the way for further sanctions or fines against the app in India (and elsewhere taking India’s lead).

TikTok has issued no fewer than three different statements — each subsequently less aggressive — as it scrambles to respond to the order.

“We welcome the decision of the Madras High Court to appoint Arvind Datar as Amicus Curiae (independent counsel) to the court,” the statement from TikTok reads. “We have faith in the Indian judicial system and we are optimistic about an outcome that would be well received by over 120 million monthly active users in India, who continue using TikTok to showcase their creativity and capture moments that matter in their everyday lives.”

(A previous version of the statement from TikTok was less ‘welcoming’ of the decision and instead highlighted how TikTok was making increased efforts to police its content without outside involvement. It noted that it had removed more than 6 million videos that violated its terms of use and community guidelines, following a review of content generated by users in India. That alone speaks to the actual size of the problem.)

On top of prohibiting downloads, the High Court also directed the regulator to bar media companies from broadcasting any videos — illicit or otherwise — made with or posted on TikTok. Bytedance has been working to try to appeal the orders, but the Supreme Court, where the appeal was heard, upheld it.

This is not the first time that TikTok has faced government backlash over the content that it hosts on its platform. In the US, two months ago, the Federal Trade Commission ruled that the app violated children’s privacy laws and fined it $5.7 million; through a forced app update, it required all users to verify that they were over 13 or otherwise be redirected to a more restricted experience. Musical.ly, TikTok’s predecessor, had faced similar regulatory action.

More generally, the problems that TikTok is facing right now are not unfamiliar ones. Social media apps, relying on user-generated content as both the engine of their growth and the fuel for that engine, have long been problematic when it comes to illicit content. The companies that create and run these apps have argued that they are not responsible for what people produce on the platform, as long as it fits within their terms of use, but that has left a large gap where content is not policed as well as it should be. And because these platforms rely on growth and scale for their business models, some have argued that they are less inclined to proactively police their platforms and bar illicit content in the first place.

Additional reporting by Rita Liao

Spy on your smart home with this open source research tool

Researchers at Princeton University have built a web app that lets you (and them) spy on your smart home devices to see what they’re up to.

The open source tool, called IoT Inspector, is available for download here. (Currently it’s Mac OS only, with a wait list for Windows or Linux.)

In a blog post about the effort, the researchers write that their aim is to offer a simple tool for consumers to analyze the network traffic of their internet-connected gizmos. The basic idea is to help people see whether devices such as smart speakers or Wi-Fi-enabled robot vacuum cleaners are sharing their data with third parties. (Or indeed how much snitching their gadgets are doing.)

Testing the IoT Inspector tool in their lab the researchers say they found a Chromecast device constantly contacting Google’s servers even when not in active use.

A Geeni smart bulb was also found to be constantly communicating with the cloud — sending/receiving traffic via a URL (tuyaus.com) that’s operated by a China-based company with a platform which controls IoT devices.

There are other ways to track devices like this — such as setting up a wireless hotspot and sniffing IoT traffic with a packet analyzer like Wireshark. But the level of technical expertise required makes such methods difficult for plenty of consumers.
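
For the technically inclined, a rough sketch of that manual approach follows: watching which domains a single smart device looks up, using the third-party scapy library. The interface name and device MAC address are placeholders, it typically needs root privileges, and it is only appropriate on a network you own.

```python
# A rough sketch of the manual approach: sniff your own network and log which
# domains one smart device looks up. Uses the third-party scapy library
# (pip install scapy), typically needs root, and the interface name and
# device MAC address below are placeholders to replace with your own.
from scapy.all import DNSQR, Ether, sniff

DEVICE_MAC = "aa:bb:cc:dd:ee:ff"  # the smart bulb or speaker you want to watch

def log_dns(pkt):
    # Only report DNS queries that come from the device in question.
    if pkt.haslayer(Ether) and pkt.haslayer(DNSQR) and pkt[Ether].src.lower() == DEVICE_MAC:
        print("device looked up:", pkt[DNSQR].qname.decode(errors="replace"))

# Capture DNS traffic on the Wi-Fi interface until interrupted (Ctrl-C).
sniff(iface="wlan0", filter="udp port 53", prn=log_dns, store=False)
```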

The researchers say their web app doesn’t require any special hardware or complicated setup, so it should be easier than trying to go packet sniffing your devices yourself. (Gizmodo, which got an early look at the tool, describes it as “incredibly easy to install and use”.)

One wrinkle: the web app doesn’t work with Safari; it requires Firefox or Google Chrome (or a Chromium-based browser).

The main caveat is that the team at Princeton do want to use the gathered data to feed IoT research — so users of the tool will be contributing to efforts to study smart home devices.

The title of their research project is Identifying Privacy, Security, and Performance Risks of Consumer IoT Devices. The listed principal investigators are professor Nick Feamster and PhD student Danny Yuxing Huang at the university’s Computer Science department.

The Princeton team says it intends to study privacy and security risks and network performance risks of IoT devices. But they also note they may share the full dataset with other non-Princeton researchers after a standard research ethics approval process. So users of IoT Inspector will be participating in at least one research project. (Though the tool also lets you delete any collected data — per device or per account.)

“With IoT Inspector, we are the first in the research community to produce an open-source, anonymized dataset of actual IoT network traffic, where the identity of each device is labelled,” the researchers write. “We hope to invite any academic researchers to collaborate with us — e.g., to analyze the data or to improve the data collection — and advance our knowledge on IoT security, privacy, and other related fields (e.g., network performance).”

They have produced an extensive FAQ which anyone thinking about running the tool should definitely read before getting involved with a piece of software that’s explicitly designed to spy on your network traffic. (tl;dr, they’re using ARP-spoofing to intercept traffic data — a technique they warn may slow your network, in addition to the risk of their software being buggy.)
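
To make the FAQ’s description concrete, here is a minimal sketch of the ARP-spoofing idea (again with scapy, and again with placeholder addresses): the inspecting machine repeatedly tells a target device that the gateway’s IP address lives at its own MAC address, so the device’s traffic flows through the inspector. This illustrates the general technique, not IoT Inspector’s actual code, and as the researchers warn, it can slow or disrupt a network.

```python
# A minimal sketch of the ARP-spoofing idea described in the FAQ: the
# inspecting machine keeps telling a target device that the gateway's IP
# address is at its own MAC address, so the device routes its traffic through
# the inspector, which can then observe and forward it. Placeholder addresses
# throughout; run only against a network you own, since (as the researchers
# warn) this can slow or disrupt it. Uses the third-party scapy library.
import time

from scapy.all import ARP, send

GATEWAY_IP = "192.168.1.1"      # your router
TARGET_IP = "192.168.1.42"      # the smart device being inspected
TARGET_MAC = "aa:bb:cc:dd:ee:ff"

while True:
    # op=2 is an ARP "is-at" reply. Because psrc is the gateway's IP and the
    # frame is sent from this machine's MAC, the target updates its ARP cache
    # to point the gateway's IP at the inspector.
    send(ARP(op=2, pdst=TARGET_IP, hwdst=TARGET_MAC, psrc=GATEWAY_IP), verbose=False)
    time.sleep(2)  # refresh before the target's ARP cache entry expires
```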

The dataset being harvested by the traffic analyzer tool is anonymized, and the researchers specify they’re not gathering any public-facing IP addresses or locations. But there are still some privacy risks — such as if you have smart home devices you’ve named using your real name. So, again, do read the FAQ carefully if you want to participate.

For each IoT device on a network, the tool collects multiple data points and sends them back to servers at Princeton University — including DNS requests and responses; destination IP addresses and ports; hashed MAC addresses; aggregated traffic statistics; TLS client handshakes; and device manufacturers.
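
As a purely hypothetical illustration (the field names and values below are invented, not the project’s actual schema), a single anonymized record built from those data points might look something like this:

```python
# A purely hypothetical example of one anonymized record built from the kinds
# of fields listed above. Every name and value here is invented for
# illustration; it is not the project's actual schema.
example_record = {
    "device_id": "hashed-mac-5f2a9c",        # hashed MAC, not the real address
    "device_vendor": "Example Smart Bulb Co.",
    "dns": {"query": "api.example-iot-cloud.com", "response": ["203.0.113.7"]},
    "flow": {
        "dst_ip": "203.0.113.7",
        "dst_port": 443,
        "bytes_sent": 1840,
        "bytes_received": 5210,
    },
    "tls_client_hello": {"sni": "api.example-iot-cloud.com", "cipher_suites": 17},
}
```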

The tool has been designed not to track computers, tablets and smartphones by default, given the study focus on smart home gizmos.

Users can also manually exclude individual smart devices from being tracked, either by powering them down during setup or by specifying their MAC addresses.

Up to 50 smart devices can be tracked on the network where IoT Inspector is running. Anyone with more than 50 devices is asked to contact the researchers to ask for an increase to that limit.

The project team has produced a video showing how to install the app on a Mac.

How to stop robocalls spamming your phone

No matter what your politics, beliefs, or even your sports team, we can all agree on one thing: robocalls are the scourge of modern times.

These unsolicited auto-dialed spam calls bug you dozens of times a week — sometimes more — demanding you “pay the IRS” or pretending to be “Apple technical support.” Even the Chinese embassy scam, which the FBI recently warned about, has gained notoriety. These robocallers spoof their phone numbers to peddle scams and tricks — but the calls are real. Some 26 billion calls in 2018 were robocalls — up by close to half on the previous year. And yet there’s little the government agency in charge — the Federal Communications Commission — can do to deter robocallers, even though the practice is illegal. The FCC has fined robocallers more than $200 million in recent years but collected just $6,790, because the agency lacks the authority to enforce the fines.

So, tough luck — it’s up to you to battle the robocallers — but it doesn’t have to be a losing battle. These are the best solutions to help keep the spammers at bay.

YOUR CARRIER IS YOUR FIRST CALL

Any winds of change will come from the big four cell giants: AT&T, Sprint, T-Mobile, and Verizon (which owns TechCrunch).

Spoofing happens because the carriers don’t verify that a phone number is real before a call crosses their networks. While the networks are figuring out how to fix the problem — more on that later — each carrier has an offering to help prevent spam calls.

Here’s what they offer:

AT&T’s Call Protect app, which requires AT&T postpaid service, provides fraud warnings and spam call screening and blocking. Call Protect is free for iOS and Android. AT&T also offers Call Protect Plus for $3.99 a month, which adds enhanced caller ID services and reverse number lookups.

Sprint lets customers block or restrict calls through its Premium Caller ID service. It costs $2.99 per month and can be added to your Sprint account. You can then download the app for iOS. A Sprint spokesperson told TechCrunch that Android users should have an app preinstalled on their devices.

T-Mobile doesn’t offer an app, but provides call screening to alert customers to potentially scammy or robocalled incoming calls. (Image: Farknot_Architect/Getty Images)

T-Mobile already lets you know when an incoming call is fishy by displaying “scam likely” as the caller ID. Better yet, you can ask T-Mobile to block those calls before your phone even rings using Scam Block. Customers can get it for free by dialing #632# from their device.

Verizon’s Call Filter is an app that works on both iOS and Android — though most Android devices sold through the carrier already have the app preinstalled. The free version detects and filters spam calls, while its $2.99-a-month version gives you a few additional features, like its proprietary “risk meter” to help you know more about the caller.

There are a few caveats you should consider:

  • These apps and services won’t be a death blow to spam calls, but they’re meant to help more than they hurt. Your mileage may vary.
  • Many of the premium app features — such as call blocking — are already options on your mobile device. (You can read more about that later.) You may not need to pay even more money on top of your already expensive cellular bill if you don’t need those features.
  • You may get false positives. These apps and services won’t affect your ability to make outbound or emergency calls, but there’s a risk that by using a screening app or service you may miss important phone calls.

WHAT YOU CAN DO

You don’t have to just rely on your carrier. There’s a lot you can do to help yourself.

There are some semi-obvious things, like signing up for free with the National Do Not Call Registry, but robocallers are not marketers and do not follow the same rules. You should also forget about changing your phone number — it won’t help. Within days of setting up my work phone — nobody had my number — it was barraged with spam calls. The robocallers aren’t dialing you from a preexisting list; they’re dialing phones at random using computer-generated numbers. Often the spammers will reel off a list of numbers based on your own area code to make the call look more local and convincing. Sometimes the spoofing is done so badly that there are extra digits in the phone numbers.

Another option for the most annoying of robocalls is to use a third-party app, one that screens and manages your calls on your device.

There are, however, privacy tradeoffs with third-party apps. Firstly, you’re giving information about who calls you — and sometimes who you call — to another company that isn’t your cell carrier. That additional exposure puts your data at risk — we’ve all seen cases of cell data leaking. But the small monthly cost of these apps is worth it if it means they don’t make money off your data, such as by serving you ads. Some apps will ask you for access to your phone contacts — be extremely mindful of this.

The three apps we’ve selected balance privacy, cost and their features.

  • Nomorobo has a constantly updated database of more than 800,000 phone numbers, which lets the app proactively block spammy incoming calls while still allowing legal robocalls through, like school closures and emergency alerts. Unlike other apps, it doesn’t ask for access to your contacts, and it can also protect against spam texts. It’s $1.99 per month but comes with a 14-day free trial. Available for iOS and Android.
  • Hiya is an ad-free spam and robocall blocker that powers Samsung’s Smart Call service. Hiya pulls in caller profile information to tell you who’s calling. The app doesn’t automatically ask for access to your contacts but it’s an option for some of the enhanced features, though its privacy policy says it may upload them to its servers. The app has a premium feature set at $2.99 per month after a seven-day trial. Available for iOS and Android.
  • RoboKiller is another spam call blocker with a twist: it has the option to answer spam calls with prerecorded audio that aims to waste the bot’s time. Better yet, you can listen back to the recording for your own peace of mind. The app has more than 1.1 million numbers in its database. The app was awarded $25,000 by the Federal Trade Commission following a contest at security conference Def Con in 2015. RoboKiller’s full feature set can be found on iOS but is slowly rolling out to Android users. The app starts at $0.99 per month. Available for iOS and Android.

You may find one app better than another. It’s worth experimenting with each app one at a time, which you can do with their free trials.

WHAT YOUR PHONE CAN DO FOR YOU

There are some more drastic but necessary options at your disposal.

Both iOS and Android devices have the ability to block callers. On one hand it helps against repeat offenders, but on the other it’s like a constant game of Whac-a-Mole. Using your phone’s built-in feature to block numbers prevents audio calls, video calls and text messages from coming through. But you have to block each number as it comes in.

How to block spam calls on an iPhone (left) and filter spam calls on Android (right).

Some Android versions are different, but on most you can go to Settings > Caller ID & Spam and switch on the feature. You should be aware that incoming and outgoing call data will be sent to Google. You can also block individual numbers by going to Phone > Recents and tapping on each spam number to Block and Report call as spam, which helps improve Google’s spam-busting efforts.

iPhones don’t come with a built-in spam filter, but you can block calls nonetheless. Go to Phone > Recents and tap the information button next to each call record. Press Block this caller and that number will not be able to contact you again.

You can also use each device’s Do Not Disturb feature, a more drastic technique that blocks calls and notifications from bugging you when you’re busy. On both iOS and Android, this feature blocks calls by default unless you whitelist each number.

How to enable Do Not Disturb on an iPhone (left) and Android (right).

In Android, swipe down from the notifications area and hit the Do Not Disturb icon, a bubble with a line through it. To change its settings, long-press the icon. From here, go to Exceptions > Calls. If you want to only allow calls from your contacts, select From contacts only, or From starred contacts only for a more granular list. Your phone will only ring if a contact in your phone book calls you.

It’s almost the same in iOS. Open Control Center and hit the Do Not Disturb icon, shaped like a moon. To configure your notifications, go to Settings > Do Not Disturb and scroll down to Phone. From here you can set it so you only Allow Calls From your contacts or your favorites.

WHAT THE REGULATORS CAN DO

Robocalls aren’t going away unless they’re stamped out at the source. That requires an industry-wide effort — and the U.S. just isn’t quite there yet.

You might be surprised to learn that robocalls aren’t nearly as frequent or as common in Europe. In the U.K., the carriers and the communications regulator Ofcom worked together in recent years to pool their technical and data sharing resources to find ways to prevent misuse on the cell networks.

Collectively, more than a billion calls have been stopped in the past year. Vodafone, one of the largest networks in Europe, said it alone prevents around two million automated calls from reaching customers each day.

“In the U.K., the problem has been reduced by every major operator implementing techniques to reject nuisance calls,” said Vodafone’s Laura Hind in an email to TechCrunch. “These are generally based on evidence from customer complaints and network heuristics.”

Though collaboration and sharing spam numbers is important, technology is vital to crushing the spammers. Because most calls nowadays rely in some way on voice-over-IP, it’s easier than ever to prevent spoofed calls. Ofcom, with help from the privacy regulator, the Information Commissioner’s Office, plans to introduce technical measures this year to bring caller authentication into effect and weed out spoofed spam calls.

The reality is that there are solutions to fix the robocall and spammer problem. The downside is that it’s up to the cell carriers to act.

Federal regulators are as sick of the problem as everyone else, ramping up the pressure on the big four to take the situation more seriously. Earlier this year, the Federal Communications Commission chairman Ajit Pai threatened “regulatory intervention” if carriers don’t roll out a system that properly identifies real callers.

One authentication system, known as Secure Telephone Identity Revisited and Signature-based Handling of Asserted Information Using Tokens — or STIR/SHAKEN — would make call spoofing nearly impossible. The system relies on every phone number having a unique digital signature which, when checked against the cell networks, proves the caller is real. The carrier then approves the call and patches it through to the recipient. This happens near-instantly.
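
For the curious, the heart of STIR/SHAKEN is a signed token called a PASSporT that the originating carrier attaches to the call’s signaling. The sketch below is based on the public specifications (RFC 8225 and RFC 8588) rather than any carrier’s actual deployment; in production the JSON is signed as a JWT with the carrier’s certificate, which the terminating carrier verifies before trusting the caller ID.

```python
# A sketch of the "PASSporT" token at the heart of STIR/SHAKEN, based on the
# public specs (RFC 8225 and RFC 8588), not on any carrier's deployment.
# The originating carrier signs this JSON with its certificate and attaches
# it to the call's SIP signaling; the terminating carrier verifies the
# signature before deciding the caller ID can be trusted.
import json
import time
import uuid

header = {
    "alg": "ES256",      # signed with the carrier's elliptic-curve key
    "typ": "passport",
    "ppt": "shaken",     # the SHAKEN extension of PASSporT
    "x5u": "https://certs.example-carrier.com/shaken.pem",  # placeholder cert URL
}

payload = {
    "attest": "A",                    # "A" means the carrier fully vouches for the caller
    "orig": {"tn": "12025551234"},    # calling number the carrier asserts
    "dest": {"tn": ["12025556789"]},  # called number(s)
    "iat": int(time.time()),          # when the attestation was made
    "origid": str(uuid.uuid4()),      # opaque ID used to trace bad actors
}

print(json.dumps(header, indent=2))
print(json.dumps(payload, indent=2))
```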

The carriers have so far promised to implement the protocol, though the system isn’t expected to go into effect across the board for months — if not another year. So far only AT&T and Comcast have tested the protocol — with success. But there is still a way to go.

Until then, don’t let the spammers win.

Proposed bill would forbid big tech platforms from using dark pattern design

A new piece of bipartisan legislation aims to protect people from one of the sketchiest practices that tech companies employ to subtly influence user behavior. Known as “dark patterns,” this dodgy design strategy often pushes users toward giving up their privacy unwittingly and allowing a company deeper access to their personal data.

To fittingly celebrate the one year anniversary of Mark Zuckerberg’s appearance before Congress, Senators Mark Warner (D-VA) and Deb Fischer (R-NE) have proposed the Deceptive Experiences To Online Users Reduction (DETOUR) Act. While the acronym is a bit of a stretch, the bill would forbid online platforms with more than 100 million users from “relying on user interfaces that intentionally impair user autonomy, decision-making, or choice.”

“Any privacy policy involving consent is weakened by the presence of dark patterns,” Senator Fischer said of the proposed bipartisan bill. “These manipulative user interfaces intentionally limit understanding and undermine consumer choice.”

While this particular piece of legislation might not go on to generate much buzz in Congress, it does point toward some regulatory themes that we’ll likely hear more about as lawmakers build support for regulating big tech.

The bill would create a standards body to coordinate with the FTC on user design best practices for large online platforms. That entity would also work with platforms to outline what sort of design choices infringe on user rights, with the FTC functioning as a “regulatory backstop.”

Whether the bill gets anywhere or not, the FTC itself is probably best suited to take on the issue of dark pattern design, issuing its own guidelines and fines for violating them. Last year, after a Norwegian consumer advocacy group published a paper detailing how tech companies abuse dark pattern design, a coalition of eight U.S. watchdog groups called on the FTC to do just that.

Beyond eradicating dark pattern design, the bill also proposes prohibiting user interface designs that cultivate “compulsive usage” in children under the age of 13 as well as disallowing online platforms from conducting “behavioral experiments” without informed user consent. Under the guidelines set out by the bill, big online tech companies would have to organize their own Institutional Review Boards. These groups, more commonly called IRBs, provide powerful administrative oversight in any scientific research that uses human subjects.

“For years, social media platforms have been relying on all sorts of tricks and tools to convince users to hand over their personal data without really understanding what they are consenting to,” Senator Warner said of the proposed legislation. “Our goal is simple: to instill a little transparency in what remains a very opaque market and ensure that consumers are able to make more informed choices about how and when to share their personal information.”

New privacy assistant Jumbo fixes your Facebook & Twitter settings

Jumbo could be a nightmare for the tech giants, but a savior for the victims of their shady privacy practices.

Jumbo saves you hours as well as embarrassment by automatically adjusting 30 Facebook privacy settings to give you more protection, and by deleting your old tweets after saving them to your phone. It can even erase your Google Search and Amazon Alexa history, with clean-up features for Instagram and Tinder in the works.

The startup emerges from stealth today to launch its Jumbo privacy assistant app on iPhone (Android coming soon). What could take a ton of time and research to do manually can be properly handled by Jumbo with a few taps.

The question is whether tech’s biggest companies will allow Jumbo to operate, or squash its access. Facebook, Twitter and the rest really should have built features like Jumbo’s themselves, or at least made them easier to use, since doing so could boost people’s confidence in a way that might increase usage of their apps. But since their business models often rely on gathering and exploiting as much of your data as possible, and squeezing engagement from more widely visible content, the giants are incentivized to find excuses to block Jumbo.

“Privacy is something that people want, but at the same time it just takes too much time for you and me to act on it,” explains Jumbo founder Pierre Valade, who formerly built beloved high-design calendar app Sunrise that he sold to Microsoft in 2015. “So you’re left with two options: you can leave Facebook, or do nothing.”

Jumbo makes it easy enough for even the lazy to protect themselves. “I’ve used Jumbo to clean my full Twitter, and my personal feeling is: I feel lighter. On Facebook, Jumbo changed my privacy settings, and I feel safer.” Inspired by the Cambridge Analytica scandal, he believes the platforms have lost the right to steward so much of our data.

Valade’s Sunrise pedigree and his plan to follow Dropbox’s bottom-up freemium strategy by launching premium subscription and enterprise features have already attracted investors to Jumbo. It’s raised a $3.5 million seed round led by Thrive Capital’s Josh Miller and Nextview Ventures’ Rob Go, who “both believe that privacy is a fundamental human right,” Valade notes. Miller sold his link-sharing app Branch to Facebook in 2014, so his investment shows those with inside knowledge see a need for Jumbo. Valade’s six-person team in New York will use the money to develop new features and try to start a privacy movement.

How Jumbo works

First let’s look at Jumbo’s Facebook settings fixes. The app asks that you punch in your username and password through a mini-browser open to Facebook instead of using the traditional Facebook Connect feature. That immediately might get Jumbo blocked, and we’ve asked Facebook if it will be allowed. Then Jumbo can adjust your privacy settings to Weak, Medium, or Strong controls, though it never makes any privacy settings looser if you’ve already tightened them.

Valade details that since there are no APIs for changing Facebook settings, Jumbo will “act as ‘you’ on Facebook’s website and tap on the buttons, as a script, to make the changes you asked Jumbo to do for you.” He says he hopes Facebook makes an API for this, though it’s more likely to see his script as against policies.
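
As a purely illustrative sketch of that general approach, scripted browser automation standing in for a missing settings API, the snippet below uses Selenium. It is not Jumbo’s code: the URL and selectors are invented placeholders and would not work against Facebook’s real settings pages.

```python
# Purely illustrative: scripted browser automation standing in for a missing
# settings API, in the spirit of what's described above. This is not Jumbo's
# code; the URL and selectors are invented placeholders and will not work
# against Facebook's real settings pages. Uses the third-party selenium
# package and a local Chrome driver.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://www.facebook.com/settings?tab=privacy")  # placeholder URL

# "Act as you": find the control for a given setting and click the option the
# user chose, e.g. limit "Who can see your future posts?" to Friends.
setting_row = driver.find_element(By.XPATH, "//div[@data-setting='future_posts']")  # placeholder
setting_row.click()
friends_option = driver.find_element(By.XPATH, "//span[text()='Friends']")  # placeholder
friends_option.click()

driver.quit()
```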

For example, Jumbo can change who can look you up using your phone number to Strong – Friends only, Medium – Friends of friends, or Weak – Jumbo doesn’t change the setting. Sometimes it takes a stronger stance. For the ability to show you ads based on contact info that advertisers have uploaded, both the Strong and Medium settings hide all ads of this type, while Weak keeps the setting as is.

The full list of what Jumbo can adjust includes Who can see your future posts?, Who can see the people, Pages and lists you follow?, Who can see your friends list?, Who can see your sexual preference?, Do you want Facebook to be able to recognize you in photos and videos?, Who can post on your timeline?, and Review tags people add to your posts before the tags appear on Facebook? The full list can be found here.

For Twitter, you can choose whether to remove all tweets ever, or only those older than a day, week, month (recommended), or three months. Jumbo never sees the data, as everything is processed locally on your phone. Before deleting the tweets, it archives them to a Memories tab of its app. Unfortunately, there’s currently no way to export the tweets from there, but Jumbo is building Dropbox and iCloud connectivity soon, which will work retroactively to download your tweets. Twitter’s API limits mean it can only erase 3,200 of your tweets every few days, so prolific tweeters may require several rounds.
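
For a sense of what deleting old tweets looked like against the Twitter REST API of the era (v1.1), here is a sketch using the third-party tweepy library. It is not Jumbo’s implementation, the credentials are placeholders, and the user_timeline endpoint only reaches back roughly 3,200 tweets, which is why heavy tweeters need several passes.

```python
# A sketch of deleting old tweets against the Twitter REST API of the era
# (v1.1) using the third-party tweepy library. Not Jumbo's implementation;
# the credentials are placeholders. The user_timeline endpoint only reaches
# back roughly 3,200 tweets, which is why heavy tweeters need several passes.
import datetime

import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

cutoff = datetime.datetime.utcnow() - datetime.timedelta(days=30)  # "older than a month"

# Cursor pages through the accessible part of the timeline (about 3,200 tweets).
for status in tweepy.Cursor(api.user_timeline, count=200).items():
    if status.created_at < cutoff:
        api.destroy_status(status.id)
        print("deleted", status.id)
```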

Its other integrations are more straightforward. On Google, it deletes your search history. For Alexa, it deletes the voice recordings stored by Amazon. Next it wants to build a way to clean out your old Instagram photos and videos, and your old Tinder matches and chat threads.

Across the board, Jumbo is designed to never see any of your data. “There isn’t a server-side component that we own that processes your data in the cloud,” Valade says. Instead, everything is processed locally on your phone. That means, in theory, you don’t have to trust Jumbo with your data, just to properly alter what’s out there. The startup plans to open source some of its stack to prove it isn’t spying on you.

While there are other apps that can clean your tweets, nothing else is designed to be a full-fledged privacy assistant. Perhaps it’s a bit idealistic to think these tech giants will permit Jumbo to run as intended. Valade says he hopes that, with enough user support, the privacy backlash would be too big for the tech giants to risk blocking Jumbo. “If the social network blocks us, we will disable the integration in Jumbo until we can find a solution to make them work again.”

But even if it does get nixed by the platforms, Jumbo will have started a crucial conversation about how privacy should be handled online. We’ve left control over privacy defaults to companies that earn money when we’re less protected. Now it’s time for that control to shift into the hands of the user.

Facebook agrees to clearer T&Cs in Europe

Facebook has agreed to amend its terms and conditions under pressure from EU lawmakers.

The new terms will make it plain that free access to its service is contingent on users’ data being used to profile them to target with ads, the European Commission said today.

“The new terms detail what services, Facebook sells to third parties that are based on the use of their user’s data, how consumers can close their accounts and under what reasons accounts can be disabled,” it writes.

The exact wording of the new terms has not yet been published, though, and the company has until the end of June 2019 to comply — so it remains to be seen how clear ‘clear’ will be.

Nonetheless the Commission is couching the concession as a win for consumers, trumpeting the forthcoming changes to Facebook’s T&C in a press release in which Vera Jourová, commissioner for justice, consumers and gender equality, writes:

Today Facebook finally shows commitment to more transparency and straight forward language in its terms of use. A company that wants to restore consumers trust after the Facebook/ Cambridge Analytica scandal should not hide behind complicated, legalistic jargon on how it is making billions on people’s data. Now, users will clearly understand that their data is used by the social network to sell targeted ads. By joining forces, the consumer authorities and the European Commission, stand up for the rights of EU consumers.

The change to Facebook’s T&Cs follows pressure applied to it in the wake of the Cambridge Analytica data misuse scandal, according to the Commission.

Along with national consumer protection authorities, it says it asked Facebook to clearly inform consumers how the service gets financed and what revenues are derived from the use of consumer data, as part of its response to the data-for-political-ads scandal.

“Facebook will introduce new text in its Terms and Services explaining that it does not charge users for its services in return for users’ agreement to share their data and to be exposed to commercial advertisements,” it writes. “Facebook’s terms will now clearly explain that their business model relies on selling targeted advertising services to traders by using the data from the profiles of its users.”

We reached out to Facebook with questions — including asking to see the wording of the new terms — but at the time of writing the company had declined to provide any response.

It’s also not clear whether the amended T&Cs will apply universally or only for Facebook users in Europe.

European commissioners have been squeezing social media platforms including Facebook over consumer rights issues since 2017 — when Facebook, Twitter and Google were warned the Commission was losing patience with their failure to comply with various consumer protection standards.

Aside from unclear language in their T&Cs, specific issues of concern for the Commission include terms that deprive consumers of their right to take a company to court in their own country or require consumers to waive mandatory rights (such as their right to withdraw from an online purchase).

Facebook has now agreed to several other T&Cs changes under pressure from the Commission, i.e. in addition to making it plainer that ‘if it’s free, you’re the product’.

Namely, the Commission says Facebook has agreed to: 1) amend its policy on limitation of liability — saying Facebook’s new T&Cs “acknowledges its responsibility in case of negligence, for instance in case data has been mishandled by third parties”; 2) amend its power to unilaterally change terms and conditions by “limiting it to cases where the changes are reasonable also taking into account the interest of the consumer”; 3) amend the rules concerning the temporary retention of content which has been deleted by consumers  — with content only able to be retained in “specific cases” (such as to comply with an enforcement request by an authority), and only for a maximum of 90 days when retained for “technical reasons”; and 4) amend the language clarifying the right to appeal of users when their content has been removed.

The Commission says it expects Facebook to make all the changes by the end of June at the latest — warning that the implementation will be closely monitored.

“If Facebook does not fulfil its commitments, national consumer authorities could decide to resort to enforcement measures, including sanctions,” it adds.

UK sets out safety-focused plan to regulate Internet firms

The UK government has laid out proposals to regulate online and social media platforms, setting out the substance of its long-awaited White Paper on online harms today — and kicking off a public consultation.

The Online Harms White Paper is a joint proposal from the Department for Digital, Culture, Media and Sport (DCMS) and Home Office. The paper can be read in full here (PDF).

It follows the government’s announcement of its policy intent last May, and a string of domestic calls for greater regulation of the Internet as politicians have responded to rising concern about the mental health impacts of online content.

The government is now proposing to put a mandatory duty of care on platforms to take reasonable steps to protect their users from a range of harms — including but not limited to illegal material such as terrorist and child sexual exploitation and abuse (which will be covered by further stringent requirements under the plan).

The approach is also intended to address a range of content and activity that’s deemed harmful.

Examples provided by the government of the sorts of broader harms it’s targeting include inciting violence and violent content; encouraging suicide; disinformation; cyberbullying; and inappropriate material being accessed by children.

Content promoting suicide has been thrown into the public spotlight in the UK in recent months, following media reports about a schoolgirl whose family found out she had been viewing pro-suicide content on Instagram after she killed herself.

The Facebook-owned platform subsequently agreed to change its policies towards suicide content, saying it would start censoring graphic images of self-harm, after pressure from ministers.

Commenting on the publication of the White Paper today, digital secretary Jeremy Wright said: “The era of self-regulation for online companies is over. Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough. Tech can be an incredible force for good and we want the sector to be part of the solution in protecting their users. However those that fail to do this will face tough action.

“We want the UK to be the safest place in the world to go online, and the best place to start and grow a digital business and our proposals for new laws will help make sure everyone in our country can enjoy the Internet safely.”

In another supporting statement Home Secretary Sajid Javid added: “The tech giants and social media companies have a moral duty to protect the young people they profit from. Despite our repeated calls to action, harmful and illegal content – including child abuse and terrorism – is still too readily available online.

“That is why we are forcing these firms to clean up their act once and for all. I made it my mission to protect our young people – and we are now delivering on that promise.”

Children’s charity the NSPCC was among the sector bodies welcoming the proposal.

“This is a hugely significant commitment by the Government that once enacted, can make the UK a world pioneer in protecting children online,” wrote CEO Peter Wanless in a statement.

“For too long social networks have failed to prioritise children’s safety and left them exposed to grooming, abuse, and harmful content. So it’s high time they were forced to act through this legally binding duty to protect children, backed up with hefty punishments if they fail to do so.”

The Internet Watch Foundation, which works to stop the spread of child exploitation imagery online, warned against unintended consequences from badly planned legislation, however, and urged the government to take a “balanced approach”.

The proposed laws would apply to any company that allows users to share or discover user generated content or interact with each other online — meaning companies both big and small.

Nor is it just social media platforms: file hosting sites, public discussion forums, messaging services and search engines are among those falling under the planned law’s remit.

The government says a new independent regulator will be introduced to ensure Internet companies meet their responsibilities, with ministers consulting on whether this should be a new or existing body.

Telecoms regulator Ofcom has been rumored as one possible contender, though the UK’s data watchdog, the ICO, has previously suggested it should be involved in any Internet oversight given its responsibility for data protection and privacy. (According to the FT a hybrid entity combining the two is another possibility — although the newspaper reports that the government remains genuinely undecided on who the regulator will be.)

The future Internet watchdog will be funded by industry in the medium term, with the government saying it’s exploring options such as an industry levy to put it on a sustainable footing.

On the enforcement front, the watchdog will be armed with a range of tools — with the government consulting on powers for it to issue substantial fines; block access to sites; and potentially to impose liability on individual members of senior management.

So there’s at least the prospect of a high profile social media CEO being threatened with UK jail time in future if they don’t do enough to remove harmful content.

On the financial penalties front, Wright suggested during an interview on Sky News that the government is entertaining GDPR-level fines of as much as 4% of a company’s annual global turnover.

Other elements of the proposed framework include giving the regulator the power to force tech companies to publish annual transparency reports on the amount of harmful content on their platforms and what they are doing to address it; to compel companies to respond to users’ complaints and act to address them quickly; and to comply with codes of practice issued by the regulator, such as requirements to minimise the spread of misleading and harmful disinformation with dedicated fact checkers, particularly during election periods.

A long-running enquiry by a DCMS parliamentary committee into online disinformation last year, which was continuously frustrated in its attempts to get Facebook founder Mark Zuckerberg to testify before it, concluded with a laundry list of recommendations for tightening regulations around digital campaigning.

The committee also recommended clear legal liabilities for tech companies to act against “harmful or illegal content”, and suggested a levy on tech firms to support enhanced regulation.

Responding to the government’s White Paper in a statement today, DCMS committee chair Damian Collins broadly welcomed the government’s proposals — though he also pressed for the future regulator to have the power to conduct its own investigations, rather than relying on self-reporting by tech firms.

“We need a clear definition of how quickly social media companies should be required to take down harmful content, and this should include not only when it is referred to them by users, but also when it is easily within their power to discover this content for themselves,” Collins wrote.

“The regulator should also give guidance on the responsibilities of social media companies to ensure that their algorithms are not consistently directing users to harmful content.”

Another element of the government’s proposal is a “Safety by Design” framework that’s intended to help companies incorporate online safety features in new apps and platforms from the start.

The government also wants the regulator to head up a media literacy strategy that’s intended to equip people with the knowledge to recognise and deal with a range of deceptive and malicious behaviours online, such as catfishing, grooming and extremism.

It writes that the UK is committed to a free, open and secure Internet — and makes a point of noting that the watchdog will have a legal duty to pay “due regard” to innovation, and also to protect users’ rights online, being particularly mindful not to infringe privacy and freedom of expression.

It therefore suggests technology will be an integral part of any solution, saying the proposals are designed to promote a culture of continuous improvement among companies — and highlighting technologies such as Google’s “Family Link” and Apple’s Screen Time app as examples of the sorts of developments it wants the policy framework to encourage.

Such caveats are unlikely to do much to reassure those concerned that the approach will chill online speech, and/or place an impossible burden on smaller firms with fewer resources to monitor what their users are doing.

“The government’s proposals would create state regulation of the speech of millions of British citizens,” warns digital and civil rights group the Open Rights Group, in a statement by its executive director Jim Killock. “We have to expect that the duty of care will end up widely drawn with serious implications for legal content, that is deemed potentially risky, whether it really is or not.

“The government refused to create a state regulator for the press because it didn’t want to be seen to be controlling free expression. We are skeptical that state regulation is the right approach.”

UK startup policy advocacy group Coadec was also quick to voice concerns — warning that the government’s plans will “entrench the tech giants, not punish them”.

“The vast scope of the proposals means they cover not just social media but virtually the entire internet – from file sharing to newspaper comment sections. Those most impacted will not be the tech giants the Government claims they are targeting, but everyone else. It will benefit the largest platforms with the resources and legal might to comply – and restrict the ability of British startups to compete fairly,” said Coadec executive director Dom Hallas in a statement. 

“There is a reason that Mark Zuckerberg has called for more regulation. It is in Facebook’s business interest.”

UK tech industry association techUK also put out a response statement that warns about the need to avoid disproportionate impacts.

“Some of the key pillars of the Government’s approach remain too vague,” said Vinous Ali, head of policy, techUK. “It is vital that the new framework is effective, proportionate and predictable. Clear legal definitions that allow companies in scope to understand the law and therefore act quickly and with confidence will be key to the success of the new system.

“Not all of the legitimate concerns about online harms can be addressed through regulation. The new framework must be complemented by renewed efforts to ensure children, young people and adults alike have the skills and awareness to navigate the digital world safely and securely.”

The government has launched a 12-week consultation on the proposals, ending July 1, after which it says it will set out the action it will take in developing its final proposals for legislation.

“Following the publication of the Government Response to the consultation, we will bring forward legislation when parliamentary time allows,” it adds.

Last month a House of Lords committee recommended an overarching super regulator be established to plug any legislative gaps and/or handle overlaps in rules on Internet platforms, arguing that “a new framework for regulatory action” is needed to handle the digital world.

Though the government appears confident that an Internet regulator will be able to navigate any legislative patchwork and keep tech firms in line on its own — at least, for now.

The House of Lords committee was another parliamentary body that came down in support of a statutory duty of care for online services hosting user-generated content, suggesting it should have a special focus on children and “the vulnerable in society”.

And there’s no doubt the concept of regulating Internet platforms has broad consensus among UK politicians — on both sides of the aisle. But how to do that effectively and proportionately is another matter.

We reached out to Facebook and Google for a response to the White Paper.

Commenting on the Online Harms White Paper in a statement, Rebecca Stimson, Facebook’s head of UK public policy, said: “New rules for the internet should protect society from harm while also supporting innovation, the digital economy and freedom of speech. These are complex issues to get right and we look forward to working with the Government and Parliament to ensure new regulations are effective.”

Stimson also reiterated how Facebook has expanded the number of staff it has working on trust and safety issues to 30,000 in recent years, as well as claiming it’s invested heavily in technology to help prevent abuse — while conceding that “we know there is much more to do”.

Last month the company revealed shortcomings with its safety measures around livestreaming, after it emerged that a massacre in Christchurch, New Zealand, which was livestreamed to Facebook’s platform, had not been flagged for accelerated review by moderators because it was not tagged as suicide-related content.

Facebook said it would be “learning” from the incident and “re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review”.

In its response to the UK government White Paper today, Stimson added: “The internet has transformed how billions of people live, work and connect with each other, but new forms of communication also bring huge challenges. We have responsibilities to keep people safe on our services and we share the government’s commitment to tackling harmful content online. As Mark Zuckerberg said last month, new regulations are needed so that we have a standardised approach across platforms and private companies aren’t making so many important decisions alone.”