Box introduces Box Shield with increased security controls and threat protection

Box has always had to balance sharing content broadly with protecting it as it moves through the world, but the more you share, the more likely it is that something will go wrong, such as the misconfigured shared links that surfaced earlier this year. In an effort to make the system more secure, the company today announced Box Shield in beta, a set of tools to help employees sharing Box content better understand who they are sharing with, while helping the security team see when content is being misused.

Link sharing is a natural part of what companies do with Box, and as Chief Product and Chief Strategy Officer Jeetu Patel says, you don’t want to change the way people use Box. Instead, he says, his job is to make it easier to keep that sharing secure, and that is the goal of today’s announcement.

“We’ve introduced Box Shield, which embeds these content controls and protects the content in a way that doesn’t compromise user experience, while ensuring safety for the administrator and the company, so their intellectual property is protected,” Patel explained.

He says this involves two components. The first is about raising user awareness and helping people understand what they’re sharing. Sometimes companies deliberately use Box as a content management backend to distribute files, such as documentation, on the internet; they want them to be indexed by Google. Other times, however, content is exposed through misuse of the file-sharing controls, and Box wants to fix that with this release by making it clear to users who they are sharing with and what that means.

The company has updated the experience on its web and mobile products to make it much clearer, through messaging and interface design, what the sharing level a user has chosen actually means. Of course, some users will ignore all these messages, so there is a second component: giving administrators more control.


Box Shield access controls. Photo: Box

This involves helping customers build guardrails into the product to prevent leakage of entire categories of documents that should never be shared externally, like internal business plans, salary lists or financial documents, and even to granularly protect particular files or folders. “The second thing we’re trying to do is make sure that Box itself has some built-in security guardrails and boundary conditions that can help people reduce the risk around employee negligence or inadvertent disclosures, and then make sure that you have some very precision-based, granular security controls that can be applied to classifications that you’ve set on content,” he explained.
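Box hasn’t published the mechanics behind these controls, but the general pattern, classification labels on content driving sharing guardrails, is straightforward to sketch. Below is a minimal illustration in Python; the labels, rules and helper function are hypothetical, not Box’s actual API.

```python
# Hypothetical sketch of classification-driven sharing guardrails.
# Labels, rules and helpers are illustrative, not Box's actual API.

CLASSIFICATION_RULES = {
    "public":       {"external_links": True},
    "internal":     {"external_links": False},
    "confidential": {"external_links": False},
}

def can_create_shared_link(classification: str, audience: str) -> bool:
    """Allow a shared link only if the file's classification permits the audience."""
    # Unlabeled content falls back to the most restrictive rule.
    rules = CLASSIFICATION_RULES.get(classification, CLASSIFICATION_RULES["confidential"])
    if audience == "company_only":
        return True  # internal-only links are always allowed in this sketch
    return rules["external_links"]

# A salary list classified "confidential" can never get an open link:
assert not can_create_shared_link("confidential", "anyone_with_link")
assert can_create_shared_link("public", "anyone_with_link")
```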

In addition, the company wants to help customers detect when employees are abusing content, perhaps sharing sensitive data like customer lists with a personal account, and flag it for the security team. This involves flagging anomalous downloads, suspicious sessions or unusual locations inside Box.

The tool can also work with existing security products already in place, so that whatever classification has been applied in Box travels with a file, and anomalies or misuse can be caught by the company’s security apparatus before the file leaves the company’s boundaries.

While Patel acknowledges there is no way to prevent misuse or abuse in every case, with Box Shield the company is attempting to give customers a set of tools that reduce the chance of it going undetected. Box Shield is in private beta today and will be released in the fall.

Google, Mozilla team up to block Kazakhstan’s browser spying tactics

Google and Mozilla have taken the rare step of blocking an untrusted certificate issued by the Kazakhstan government, which critics say the government forced its citizens to install as part of an effort to monitor their internet traffic.

The two browser makers said in a joint statement Wednesday that they had deployed “technical solutions” to block the government-issued certificate.

Citizens had been told to install the government-issued certificate on their computers and devices as part of a domestic surveillance program. In doing so, they gave the government “root” access to the network traffic on those devices, allowing it to intercept and snoop on citizens’ internet browsing activities.
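From the client side, this kind of interception shows up in the certificate chain: a connection passing through the government proxy presents a certificate chained to the government root instead of the site’s real CA. Here is a minimal sketch using Python’s ssl module and the cryptography package; it is a generic illustration, not how Chrome or Firefox implement their block.

```python
# Inspect the issuer of the certificate a site actually presents.
# Traffic intercepted via an installed government root CA will show
# that CA as the issuer rather than the site's legitimate CA.
import socket
import ssl

from cryptography import x509  # pip install cryptography

def presented_issuer(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    # Don't abort on an untrusted chain; we want to inspect it either way.
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return x509.load_der_x509_certificate(der).issuer.rfc4514_string()

# On a clean connection this prints the site's real CA; behind the
# interception it would instead show the government-issued certificate.
print(presented_issuer("facebook.com"))
```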

Researchers found that only a few sites were being monitored, like Facebook, Twitter, and Google.

Although the Kazakh government is said to have stopped what it called “system testing” and allowed citizens to delete the certificate, both Google and Mozilla said their measures would stop the data-intercepting certificate from working — even if it’s still installed.

“We don’t take actions like this lightly,” said Marshall Erwin, Mozilla’s senior director of trust and security. But Google browser chief Parisa Tabriz said the company would “never tolerate any attempt, by any organization — government or otherwise — to compromise Chrome users’ data.”

The block went into effect invisibly, and no action is needed from users.

Kazakhstan has a population of 18 million. Researchers said that the Kazakh government’s efforts to intercept the country’s internet traffic only hit a “fraction” of the connections passing through the country’s largest internet provider.

The Central Asian country currently ranks as one of the least free countries on the internet freedom score, based on data collected by watchdog Freedom House, trailing just behind Russia and Iran.

A spokesperson for the Kazakhstan consulate in New York did not respond to a request for comment.

MoviePass exposed thousands of unencrypted customer card numbers

Movie ticket subscription service MoviePass has exposed tens of thousands of customer card numbers and personal credit card numbers because a critical server was not protected with a password.

Mossab Hussein, a security researcher at Dubai-based cybersecurity firm SpiderSilk, found an exposed database on one of the company’s many subdomains. The database was massive, containing 161 million records at the time of writing and growing in real time. Many of the records were normal computer-generated logging messages used to monitor the running of the service — but many also included sensitive user information, such as MoviePass customer card numbers.

These MoviePass customer cards work like normal debit cards: they’re issued by Mastercard and store a cash balance, which users who sign up to the subscription service can use to pay to watch a catalog of movies. For a monthly subscription fee, MoviePass loads the full cost of a ticket onto the debit card, which the customer then uses to pay for the movie at the cinema.

We reviewed a sample of 1,000 records and removed the duplicates. A little over half contained unique MoviePass debit card numbers. Each customer card record had the MoviePass debit card number, its expiry date, the card’s balance and when it was activated.

The database had more than 58,000 records containing card data — and was growing by the minute.

We also found records containing customers’ personal credit card numbers and expiry dates, along with billing information, including names and postal addresses. Among the records we reviewed, we found some with enough information to make fraudulent card purchases.

Some records, however, contained card numbers that had been masked except for the last four digits.

The database also contained email addresses and some password data related to failed login attempts. We found hundreds of records containing a user’s email address and a presumably mistyped password, which was logged in the database. We verified this by attempting to log into the app with a made-up email address and password that only we knew. Our dummy email address and password appeared in the database almost immediately.
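That verification trick is a classic canary test: plant a unique marker through the app, then look for it in the exposed data. The report doesn’t name the database technology or any endpoints, so the sketch below assumes an unauthenticated Elasticsearch-style HTTP interface and uses placeholder URLs.

```python
# Sketch of the canary test described above: trigger a failed login with
# unique made-up credentials, then search the exposed database for them.
# The hosts, endpoints and Elasticsearch assumption are all hypothetical.
import time
import uuid

import requests  # pip install requests

marker = f"canary-{uuid.uuid4()}@example.com"

# 1. A deliberately failed login plants the marker in the logs.
requests.post("https://app.example.com/api/login",
              json={"email": marker, "password": "wrong-on-purpose"})

time.sleep(10)  # give the logging pipeline a moment to catch up

# 2. Search the exposed, unauthenticated database for the marker.
resp = requests.get("http://exposed-db.example.com:9200/_search",
                    params={"q": marker})
print("marker appeared in exposed logs:", marker in resp.text)
```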

None of the records in the database were encrypted.

Hussein contacted MoviePass chief executive Mitch Lowe by email — which TechCrunch has seen — over the weekend but did not hear back. It was only after TechCrunch reached out on Tuesday that MoviePass took the database offline.

It’s understood that the database may have been exposed for months, according to data collected by cyberthreat intelligence firm RiskIQ, which first detected the system in late June.

We asked MoviePass several questions — including why the initial email disclosing the security lapse was ignored, for how long the server was exposed, and its plans to disclose the incident to customers and state regulators. When reached, a spokesperson did not comment by our deadline.

MoviePass has been on a rollercoaster since it hit mainstream audiences last year. The company quickly grew its customer base from 1.5 million to 2 million customers in less than a month. But MoviePass took a tumble after critics said it grew too fast, forcing the company to briefly cease operations after it ran out of money. The company later said it was profitable, but then suspended service, supposedly to work on its mobile app. It now says it has “restored [service] to a substantial number of our current subscribers.”

Leaked internal data from April showed its subscriber numbers had fallen from three million to about 225,000. And just this month MoviePass reportedly changed user passwords to hobble access for customers who use the service extensively.

Hussein said the company was negligent in leaving data unencrypted in an exposed, accessible database.

“We keep on seeing companies of all sizes using dangerous methods to maintain and process private user data,” Hussein told TechCrunch. “In the case of MoviePass, we are questioning the reason why would internal technical teams ever be allowed to see such critical data in plaintext — let alone the fact that the dataset was exposed for public access by anyone,” he said.

The security researcher said he found the exposed database using his company’s own web mapping tools, which peek into non-password-protected databases connected to the internet and identify their owners. The information is privately disclosed to companies, often in exchange for a bug bounty.

Hussein has a history of finding exposed databases. In recent months he found one of Samsung’s development labs exposed on the internet. He also found an exposed backend database belonging to Blind, an anonymity-driven workplace social network, exposing private user data.


Yubico launches its dual USB-C and Lightning two-factor security key

Almost two months after it was first announced, Yubico has launched the YubiKey 5Ci, a security key with dual connectors that supports iPhones as well as Macs and other USB-C compatible devices.

The 5Ci is the latest iteration of Yubico’s security key, built to support a newer range of devices, including Apple’s iPhones, iPads and MacBooks, with a single key. When it was announced in June, the company said the key would cater to cross-platform users — particularly Apple device owners.

These security keys may be small enough to sit on a keyring, but they contain the keys to your online life. Your Gmail, Twitter and Facebook accounts all support these plug-in devices as a second factor of authentication after your username and password — a far stronger mechanism than a simple code sent to your phone.

Security keys offer almost unbeatable security and can protect against a variety of threats, including nation-state attackers.
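That strength comes from public-key challenge-response: the key holds a private key that never leaves the hardware and signs a fresh challenge for each login, so there is no reusable code to phish. Below is a minimal sketch of the principle using the Python cryptography package; real FIDO2/WebAuthn additionally binds signatures to the website’s origin, which this toy version omits.

```python
# Minimal sketch of the challenge-response idea behind security keys:
# a private key that never leaves the device signs a server-issued
# challenge, and the server verifies it with the stored public key.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Enrollment: the key generates a key pair; the server keeps the public half.
device_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = device_key.public_key()

# Login: the server sends a fresh random challenge...
challenge = os.urandom(32)
# ...the key signs it (the physical "tap")...
signature = device_key.sign(challenge, ec.ECDSA(hashes.SHA256()))
# ...and the server verifies; verify() raises InvalidSignature on failure.
server_stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("second factor verified")
```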

Jerrod Chong, Yubico’s chief solutions officer, said the new key would fill a “critical gap in the mobile authentication ecosystem,” particularly given how users are increasingly spending their time across a multitude of mobile devices.

The new key works with a range of apps, including password managers like 1Password and LastPass, and web browsers like Brave, which support security key authentication.

Twitter says accounts linked to China tried to ‘sow political discord’ in Hong Kong

Twitter says a significant state-backed information operation involving hundreds of accounts linked to China was part of an effort to deliberately “sow political discord” in Hong Kong after weeks of protests in the region.

In a blog post, the social networking site said the 936 accounts it found tried to undermine “the legitimacy and political positions of the protest movement on the ground.”

More than a million protesters took to the streets this weekend to demonstrate peacefully against the Chinese government, which took over rule from the British government in 1997. Protests erupted months ago following a bid by Hong Kong leader Carrie Lam to push through a highly controversial bill that would allow criminal suspects to be extradited to mainland China for trial. The bill was suspended, effectively killing its chances of reaching the law books, but protests have continued amid claims that China is trying to meddle in Hong Kong’s affairs.

Although Twitter is banned in China, the social media giant says the latest onslaught of fake accounts is likely “a coordinated state-backed operation.”

“Specifically, we identified large clusters of accounts behaving in a coordinated manner to amplify messages related to the Hong Kong protests,” the statement said.


Two of the tweets supplied by Twitter.

Twitter said many of the accounts used virtual private networks — or VPNs — which can be used to tunnel through China’s vast domestic censorship system, known as the Great Firewall. The company added that the accounts it’s sharing represent the “most active” portions of a wider spam campaign of about 200,000 accounts.

“Covert, manipulative behaviors have no place on our service — they violate the fundamental principles on which our company is built,” said Twitter.

News of the fake accounts comes days after Twitter user @Pinboard warned that China was using Twitter to send and promote tweets aimed at discrediting the protest movement.

Facebook said in its own post it also took down five Facebook accounts, seven pages and three groups on its site “based on a tip shared by Twitter.” The accounts frequently posted about local political news and issues including topics like the ongoing protests in Hong Kong, said Nathaniel Gleicher, Facebook’s head of cybersecurity policy.

“Although the people behind this activity attempted to conceal their identities, our investigation found links to individuals associated with the Chinese government,” said Gleicher.

Some of the posts, Facebook said, referred to Hong Kong protesters as “cockroaches.”

Twitter said it’s adding the complete set of the accounts’ tweets to its archive of information operations.

After data incidents, Instagram expands its bug bounty

Facebook is expanding its data abuse bug bounty to Instagram.

The social media giant, which owns Instagram, first rolled out its data abuse bounty in the wake of the Cambridge Analytica scandal, which saw tens of millions of Facebook profiles scraped to help swing undecided voters in favor of the Trump campaign during the U.S. presidential election in 2016.

The idea was that security researchers and platform users alike could report instances of third-party apps or companies that were scraping, collecting and selling Facebook data for other purposes, such as to create voter profiles or build vast marketing lists.

Even following the high-profile public relations disaster of Cambridge Analytica, Facebook still had apps illicitly collecting data on its users.

Instagram wasn’t immune either. Just this month Instagram booted a “trusted” marketing partner off its platform after it was caught scraping stories, locations and other data points on millions of users, forcing Instagram to make product changes to prevent future scraping efforts. That came after two other incidents earlier this year: a security researcher found 14 million scraped Instagram profiles sitting on an exposed database — without a password — for anyone to access, and another company was caught scraping the profile data — including email addresses and phone numbers — of Instagram influencers.

Last year Instagram also choked developers’ access as the company tried to rebuild its privacy image in the aftermath of the Cambridge Analytica scandal.

Dan Gurfinkel, security engineering manager at Instagram, said its new and expanded data abuse bug bounty aims to “encourage” security researchers to report potential abuse.

Instagram said it’s also inviting a select group of trusted security researchers, who will also be eligible for bounty payouts, to find flaws in its Checkout service ahead of its international rollout.


Developers accuse Apple of anti-competitive behavior with its privacy changes in iOS 13

A group of app developers have penned a letter to Apple CEO Tim Cook, arguing that certain privacy-focused changes to Apple’s iOS 13 operating system will hurt their business. In a report by The Information, the developers were said to have accused Apple of anti-competitive behavior when it comes to how apps can access user location data.

With iOS 13, Apple aims to curtail apps’ abuse of its location-tracking features as part of its larger privacy focus as a company.

Today, many apps ask users upon first launch to give their app the “Always Allow” location-tracking permission. Users can confirm this with a tap, unwittingly giving apps far more access to their location data than is actually necessary, in many cases.

In iOS 13, however, Apple has tweaked the way apps can request location data.

There will now be a new option upon launch presented to users, “Allow Once,” which allows users to first explore the app to see if it fits their needs before granting the app developer the ability to continually access location data. This option will be presented alongside existing options, “Allow While Using App” and “Don’t Allow.”

The “Always” option is still available, but users will have to head to iOS Settings to manually enable it. (A periodic pop-up will also present the “Always” option, but not right away.)

The app developers argue that this change may confuse less technical users, who will assume the app isn’t functioning properly unless they figure out how to change their iOS Settings to ensure the app has the proper permissions.

The developers’ argument is a valid assessment of user behavior and how such a change could impact their apps. The added friction of having to go to Settings to toggle a switch for an app to function can cause users to abandon apps. It’s also, in part, why apps like Safari ad blockers and iOS replacement keyboards never really went mainstream, as they require extra steps involving the iOS Settings.

That said, the changes Apple is rolling out with iOS 13 don’t actually break these apps entirely. They just require the apps to refine their onboarding instructions to users. Instead of asking for the “Always Allow” permission upfront, they will need to point users to the iOS Settings screen, or limit the app’s functionality until the permission is granted.

In addition, the developers’ letter pointed out that Apple’s own built-in apps (like Find My) aren’t treated like this, which raises anti-competitive concerns.

The letter also noted that Apple in iOS 13 would not allow developers to use PushKit for any other purpose beyond internet voice calls — again, due to the fact that some developers abused this toolkit to collect private user data.

“We understand that there were certain developers, specifically messaging apps, that were using this as a backdoor to collect user data,” the email said, according to the report. “While we agree loopholes like this should be closed, the current Apple plan to remove [access to the internet voice feature] will have unintended consequences: it will effectively shut down apps that have a valid need for real-time location.”

The letter was signed by Tile CEO CJ Prober; Arity (Allstate) president Gary Hallgren; Life360 CEO Chris Hulls; Didier Rappaport, CEO of dating app Happn; Zenly (Snap) CEO Antoine Martin; Zendrive CEO Jonathan Matus; and Jared Allgood, chief strategy officer of social networking app Twenty.

Apple responded to The Information by saying that any changes it makes to the operating system are “in service to the user” and to their privacy. It also noted that any apps it distributes from the App Store have to abide by the same procedures.

It’s another example of how erring on the side of increased user privacy can lead to complications and friction for end users. One possible solution could be allowing apps to present their own in-app Settings screen where users could toggle the app’s full set of permissions directly — including everything from location data to push notifications to the app’s use of cellular data or Bluetooth sharing.

The news comes at a time when the U.S. Dept. of Justice is considering investigating Apple for anti-competitive behavior. Apple told The Information it was working with some of the impacted developers using PushKit on alternate solutions.

Privacy researchers devise a noise-exploitation attack that defeats dynamic anonymity

Privacy researchers in Europe believe they have the first proof that a long-theorised vulnerability in systems designed to protect privacy, by aggregating and adding noise to data to mask individual identities, is no longer just a theory.

The research has implications for the immediate field of differential privacy and beyond — raising wide-ranging questions about how privacy is regulated if anonymization only works until a determined attacker figures out how to reverse the method that’s being used to dynamically fuzz the data.

Current EU law doesn’t recognise anonymous data as personal data, although it does treat pseudonymized data as personal data because of the risk of re-identification.

Yet a growing body of research suggests the risk of de-anonymization of high-dimensional data-sets is persistent. Even — per this latest research — when a database system has been very carefully designed with privacy protection in mind.

It suggests the entire business of protecting privacy needs to get a whole lot more dynamic to respond to the risk of perpetually evolving attacks.

Academics from Imperial College London and Université Catholique de Louvain are behind the new research.

This week, at the 28th USENIX Security Symposium, they presented a paper detailing a new class of noise-exploitation attacks on a query-based database that uses aggregation and noise injection to dynamically mask personal data.

The product they were looking at is a database querying framework, called Diffix — jointly developed by a German startup called Aircloak and the Max Planck Institute for Software Systems.

On its website Aircloak bills the technology as “the first GDPR-grade anonymization” — aka Europe’s General Data Protection Regulation, which began being applied last year, raising the bar for privacy compliance by introducing a data protection regime that includes fines that can scale up to 4% of a data processor’s global annual turnover.

What Aircloak is essentially offering is to manage GDPR risk by providing anonymity as a commercial service — allowing analysts to run queries on a data-set and gain valuable insights without accessing the data itself. The promise is that it’s privacy (and GDPR) ‘safe’ because it’s designed to mask individual identities by returning anonymized results.

The problem is personal data that’s re-identifiable isn’t anonymous data. And the researchers were able to craft attacks that undo Diffix’s dynamic anonymity.

“What we did here is we studied the system and we showed that actually there is a vulnerability that exists in their system that allows us to use their system and to send carefully created queries that allow us to extract — to exfiltrate — information from the data-set that the system is supposed to protect,” explains Imperial College’s Yves-Alexandre de Montjoye, one of five co-authors of the paper.

“Differential privacy really shows that every time you answer one of my questions you’re giving me information and at some point — to the extreme — if you keep answering every single one of my questions I will ask you so many questions that at some point I will have figured out every single thing that exists in the database because every time you give me a bit more information,” he says of the premise behind the attack. “Something didn’t feel right… It was a bit too good to be true. That’s where we started.”

The researchers chose to focus on Diffix as they were responding to a bug bounty attack challenge put out by Aircloak.

“We start from one query and then we do a variation of it and by studying the differences between the queries we know that some of the noise will disappear, some of the noise will not disappear and by studying noise that does not disappear basically we figure out the sensitive information,” he explains.

“What a lot of people will do is try to cancel out the noise and recover the piece of information. What we’re doing with this attack is we’re taking it the other way round and we’re studying the noise… and by studying the noise we manage to infer the information that the noise was meant to protect.

“So instead of removing the noise we study statistically the noise sent back that we receive when we send carefully crafted queries — that’s how we attack the system.”

A vulnerability exists because the dynamically injected noise is data-dependent. Meaning it remains linked to the underlying information — and the researchers were able to show that carefully crafted queries can be devised to cross-reference responses that enable an attacker to reveal information the noise is intended to protect.

Or, to put it another way, a well designed attack can accurately infer personal data from fuzzy (‘anonymized’) responses.
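Diffix defends against the textbook version of this attack (asking the same question repeatedly and averaging away the noise) with “sticky”, data-dependent noise, and it is exactly that data dependence the researchers turned against it. The textbook version is still the clearest way to see why noisy answers leak. A toy sketch, not the paper’s actual method:

```python
# Toy illustration of a noise-exploitation attack: each answer is fuzzed,
# but if the noise doesn't fully hide the signal, statistics over many
# query variants recover the protected value. This shows the simplest
# (averaging) case, not the data-dependent attack in the paper.
import random

SECRET = 1  # a target user's binary attribute the system should hide

def noisy_answer() -> float:
    """The query system returns the true value plus masking noise."""
    return SECRET + random.gauss(0, 2)  # noise dwarfs the signal per query

# A single answer reveals almost nothing...
print("one answer:", noisy_answer())

# ...but aggregating many answers cancels the noise.
answers = [noisy_answer() for _ in range(1000)]
estimate = sum(answers) / len(answers)
print("inferred attribute:", round(estimate))  # recovers SECRET with high probability
```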

This despite the system in question being “quite good,” as de Montjoye puts it of Diffix. “It’s well designed — they really put a lot of thought into this and what they do is they add quite a bit of noise to every answer that they send back to you to prevent attacks”.

“It’s what’s supposed to be protecting the system but it does leak information because the noise depends on the data that they’re trying to protect. And that’s really the property that we use to attack the system.”

The researchers were able to demonstrate the attack working with very high accuracy across four real-world data-sets. “We tried US census data, we tried credit card data, we tried location,” he says. “What we showed for different data-sets is that this attack works very well.

“What we showed is our attack identified 93% of the people in the data-set to be at risk. And I think more importantly the method actually is very high accuracy — between 93% and 97% accuracy on a binary variable. So if it’s a true or false we would guess correctly between 93-97% of the time.”

They were also able to optimise the attack method so they could exfiltrate information with a relatively low number of queries per user — as few as 32.

“Our goal was how low can we get that number so it would not look like abnormal behaviour,” he says. “We managed to decrease it in some cases up to 32 queries — which is very very little compared to what an analyst would do.”

De Montjoye says that after the researchers disclosed the attack to Aircloak, the company developed a patch — and is describing the vulnerability as very low risk — but he points out that it has yet to publish details of the patch, so it’s not been possible to independently assess its effectiveness.

“It’s a bit unfortunate,” he adds. “Basically they acknowledge the vulnerability [but] they don’t say it’s an issue. On the website they classify it as low risk. It’s a bit disappointing on that front. I think they felt attacked and that was really not our goal.”

For the researchers the key takeaway from the work is that a change of mindset is needed around privacy protection akin to the shift the security industry underwent in moving from sitting behind a firewall waiting to be attacked to adopting a pro-active, adversarial approach that’s intended to out-smart hackers.

“As a community to really move to something closer to adversarial privacy,” he tells TechCrunch. “We need to start adopting the red team, blue team penetration testing that have become standard in security.

“At this point it’s unlikely that we’ll ever find like a perfect system so I think what we need to do is how do we find ways to see those vulnerabilities, patch those systems and really try to test those systems that are being deployed — and how do we ensure that those systems are truly secure?”

“What we take from this is really — it’s on the one hand we need the security, what can we learn from security including open systems, verification mechanism, we need a lot of pen testing that happens in security — how do we bring some of that to privacy?”

“If your system releases aggregated data and you added some noise this is not sufficient to make it anonymous and attacks probably exist,” he adds.

“This is much better than what people are doing when you take the dataset and you try to add noise directly to the data. You can see why intuitively it’s already much better. But even these systems are still likely to have vulnerabilities. So the question is how do we find a balance, what is the role of the regulator, how do we move forward, and really how do we really learn from the security community?

“We need more than some ad hoc solutions and only limiting queries. Again limiting queries would be what differential privacy would do — but then in a practical setting it’s quite difficult.

“The last bit — again in security — is defence in depth. It’s basically a layered approach — it’s like we know the system is not perfect so on top of this we will add other protection.”

The research raises questions about the role of data protection authorities too.

During Diffix’s development, Aircloak writes on its website that it worked with France’s DPA, the CNIL, and a private company that certifies data protection products and services — saying: “In both cases we were successful in so far as we received essentially the strongest endorsement that each organization offers.”

Although it also says that experience “convinced us that no certification organization or DPA is really in a position to assert with high confidence that Diffix, or for that matter any complex anonymization technology, is anonymous”, adding: “These organizations either don’t have the expertise, or they don’t have the time and resources to devote to the problem.”

The researchers’ noise exploitation attack demonstrates how even a level of regulatory “endorsement” can look problematic. Even well designed, complex privacy systems can contain vulnerabilities and cannot offer perfect protection. 

“It raises a tonne of questions,” says de Montjoye. “It is difficult. It fundamentally asks even the question of what is the role of the regulator here?

“When you look at security my feeling is it’s kind of the regulator is setting standards and then really the role of the company is to ensure that you meet those standards. That’s kind of what happens in data breaches.

“At some point it’s really a question of — when something [bad] happens — whether or not this was sufficient or not as a [privacy] defence, what is the industry standard? It is a very difficult one.”

“Anonymization is baked in the law — it is not personal data anymore so there are really a lot of implications,” he adds. “Again from security we learn a lot of things on transparency. Good security and good encryption relies on open protocol and mechanisms that everyone can go and look and try to attack so there’s really a lot at this moment we need to learn from security.

“There’s not going to be any perfect system. Vulnerabilities will keep being discovered so the question is how do we make sure things are still ok moving forward and really learning from security — how do we quickly patch them, how do we make sure there is a lot of research around the system to limit the risk, to make sure vulnerabilities are discovered by the good guys, these are patched and really [what is] the role of the regulator?

“Data can have bad applications and a lot of really good applications so I think to me it’s really about how to try to get as much of the good while limiting as much as possible the privacy risk.”

Amazon customers say they received emails for other people’s orders

Users say they have received emails from Amazon containing invoices and order updates meant for other customers, TechCrunch has learned.

Jake Williams, founder of cybersecurity firm Rendition Infosec, raised the alarm after he received an email from Amazon addressed to another customer, containing their name, postal address and order details.

Williams said he ordered something months ago which recently became available for shipping. He checked the email headers to make sure it was a genuine message.
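One quick way to check a message’s authenticity is the Authentication-Results header, which the receiving mail server stamps on each inbound message with SPF, DKIM and DMARC outcomes. A minimal sketch using Python’s standard email module; the raw message below is a fabricated stand-in.

```python
# Inspect the authentication results the receiving mail server recorded.
# In practice you'd paste the full source of the suspect email (e.g. via
# Gmail's "Show original"); this raw message is a fabricated example.
from email import message_from_string

raw = """\
Authentication-Results: mx.example.com;
 spf=pass smtp.mailfrom=amazon.com;
 dkim=pass header.d=amazon.com;
 dmarc=pass header.from=amazon.com
From: no-reply@amazon.com
Subject: Your order update

(body)
"""

msg = message_from_string(raw)
for result in msg.get_all("Authentication-Results", []):
    print(result)
# Passing spf/dkim/dmarc for amazon.com suggests the message really came
# from Amazon's servers, pointing to a back-end mix-up rather than phishing.
```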

“I think they legitimately intended to email me a notification that my item was shipping early,” he said. “I just think they screwed something up in the system and sent the updates to the wrong people.”

He said the apparent security lapse was worrying because order emails sent to the wrong place are a “serious breach of trust” that can reveal private information about a customer’s life, such as sexual orientation, proclivities or other personal information.

Several other Amazon customers also said they received emails seemingly meant for other people.

“I made an order yesterday afternoon and received her email last night,” another customer who tweeted about the mishap told TechCrunch. “Luckily I’m not a malicious person but that’s a huge security issue,” she said.

Another customer tweeted about receiving an email meant for someone else. He said he spoke to Amazon customer service, which said it would investigate additional security issues.

“Hope you didn’t send my sensitive account info to someone else,” he added.

And one other customer posted a tweet thread about the issue, saying she spoke to a supervisor who gave a “nonchalant” response. She said the supervisor told her the issue happens frequently.

A spokesperson for Amazon did not respond to a request for comment asking how many customers were affected and whether the company plans to inform customers of the breach. If we hear back, we’ll update.

It’s the second security lapse in a year. In November the company emailed customers saying a “technical error” had exposed an unknown number of their email addresses. When asked about specifics, the notoriously secretive company declined to comment further.

8 million Android users tricked into downloading 85 adware apps from Google Play

Dozens of Android adware apps disguised as photo editing apps and games have been caught serving ads that would take over users’ screens as part of a fraudulent money-making scheme.

Security firm Trend Micro said it found 85 individual apps downloaded more than eight million times from Google Play — all of which have since been removed from the app store.

More often than not, adware apps run on a user’s device and silently serve and click ads in the background, without the user’s knowledge, to generate ad revenue. But these apps were particularly brazen and sneaky, one of the researchers said.

“It isn’t your run-of-the-mill adware family,” said Ecular Xu, a mobile threat response engineer at Trend Micro. “Apart from displaying advertisements that are difficult to close, it employs unique techniques to evade detection through user behavior and time-based triggers.”

The researchers discovered that the apps would keep a record of when they were installed and sit dormant for around half an hour. After the delay, each app would hide its icon and create a shortcut on the user’s home screen, the security firm said. That, they say, helped protect the app from being deleted if the user decided to drag and drop the shortcut to the ‘uninstall’ section of the screen.
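The dormancy behavior is a simple time-based trigger: record the install time and act only once a delay has elapsed, so short-lived automated scans see a benign app. A minimal sketch of that logic in Python; the half-hour delay comes from Trend Micro’s findings, everything else is illustrative.

```python
# Sketch of a time-based trigger of the kind Trend Micro describes: the
# app records when it was first run and stays dormant until a delay has
# elapsed, evading short automated analysis. Purely illustrative.
import json
import os
import time

STATE_FILE = "install_state.json"
DELAY_SECONDS = 30 * 60  # roughly half an hour, per the report

def installed_at() -> float:
    """Record the first-run timestamp once, then keep returning it."""
    if not os.path.exists(STATE_FILE):
        with open(STATE_FILE, "w") as f:
            json.dump({"installed_at": time.time()}, f)
    with open(STATE_FILE) as f:
        return json.load(f)["installed_at"]

def should_activate() -> bool:
    return time.time() - installed_at() >= DELAY_SECONDS

if should_activate():
    print("delay elapsed: hide the icon and start serving ads")
else:
    print("still dormant: behave like a normal app")
```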

“These ads are shown in full screen,” said Xu. “Users are forced to view the whole duration of the ad before being able to close it or go back to the app itself.”

When the device was unlocked, the app displayed ads on the user’s home screen. The code also checks that it doesn’t show the same ad too frequently, the researchers said.

Worse, the ads can be remotely configured by the fraudster, allowing ads to be displayed more frequently than the default five-minute interval.

Trend Micro provided a list of the apps — including Super Selfie Camera, Cos Camera, Pop Camera, and One Stroke Line Puzzle — all of which had a million downloads each.

Users about to install the apps had a dead giveaway: most of the apps had appalling reviews, with as many one-star reviews as five-star ones, and users complaining about the deluge of pop-up ads.

Google does not typically comment on app removals beyond acknowledging their removal from Google Play.
