Equifax, Western Union, Priceline settle with New York attorney general over insecure mobile apps

New York’s attorney general has settled with five tech and financial giants, requiring each company to implement basic security measures in its mobile apps.

The settlements force Credit Sesame, Equifax (yes, that Equifax), Priceline, Spark Networks and Western Union to ensure data sent between the app and their servers are encrypted. Specifically, the attorney general said their apps “could have allowed sensitive information entered by users — such as passwords, social security numbers, credit card numbers, and bank account numbers — to be intercepted by eavesdroppers employing simple and well-publicized techniques.”

In other words, their mobile apps “all failed” to properly roll out and implement HTTPS, one of the bare-minimum security measures for any modern app.

HTTPS certificates (also known as SSL/TLS certificates) encrypt data between a device, like your phone or computer, and a website or app server, ensuring any sensitive data, like credit card numbers or passwords, can’t be intercepted as it travels over the internet — whether that’s someone on the same coffee shop Wi-Fi network or your nearest federal intelligence agency.
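To see what that protection looks like in practice, here’s a minimal Python sketch of a properly validated TLS connection. It’s illustrative only, not code from any of the companies’ apps; the failure the attorney general describes amounts to skipping or disabling the verification step below.

```python
import socket
import ssl

def fetch_over_tls(host: str, port: int = 443) -> None:
    # create_default_context() enables certificate verification and
    # hostname checking, the step the flagged apps effectively skipped.
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            # If the certificate is expired, untrusted, or doesn't match
            # the hostname, wrap_socket raises ssl.SSLCertVerificationError
            # rather than silently exposing traffic to eavesdroppers.
            print(tls.version(), tls.getpeercert()["subject"])

fetch_over_tls("example.com")
```

An app that instead disables verification (for example, by setting verify_mode to ssl.CERT_NONE) will happily complete a handshake with a man-in-the-middle on that same coffee shop Wi-Fi.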

These certificates are more common than ever, not least because when they’re not incredibly cheap, they’re completely free — and most modern browsers will bluntly tell you when a website is “not secure.” Apps are no different, but without a green padlock in a browser window to check, there’s often no way to know for sure that your data is traversing the internet securely.

At least, with financial, banking and dating apps — you’d just assume, right? Bzzt, wrong.

“Although each company represented to users that it used reasonable security measures to protect their information, the companies failed to sufficiently test whether their mobile apps had this vulnerability,” the office of attorney general Barbara Underwood said in a statement. “Today’s settlements require each company to implement comprehensive security programs to protect user information.”

The apps were singled out after an extensive round of app testing, part of an effort to find security issues before they turn into incidents. Underwood’s office follows in the footsteps of federal enforcement in recent years by the Federal Trade Commission, which brought action against several app makers — including Credit Karma and Fandango — for failing to properly implement HTTPS certificates.

In taking action, the attorney general gets to keep closer tabs on the companies going forward to make sure they’re not flouting their data security responsibilities.

How Russia’s online influence campaign engaged with millions for years

Russian efforts to influence U.S. politics and sway public opinion were consistent and, in terms of engaging target audiences, largely successful, according to a report from Oxford’s Computational Propaganda Project published today. Based on data provided to Congress by Facebook, Instagram, Google, and Twitter, the study paints a portrait of the years-long campaign that’s less than flattering to the companies.

The report, which you can read here, was published today but given to some outlets over the weekend. It summarizes the work of the Internet Research Agency, Moscow’s online influence factory and troll farm. The data cover various periods for different companies, but 2016 and 2017 showed by far the most activity.

A clearer picture

If you’ve only checked into this narrative occasionally during the last couple years, the Comprop report is a great way to get a bird’s-eye view of the whole thing, with no “we take this very seriously” palaver interrupting the facts.

If you’ve been following the story closely, the value of the report is mostly in deriving specifics and some new statistics from the data, which Oxford researchers were provided some seven months ago for analysis. The numbers, predictably, all seem to be a bit higher or more damning than those provided by the companies themselves in their voluntary reports and carefully practiced testimony.

Previous estimates have focused on the rather nebulous metric of “encountering” or “seeing” IRA content on these social networks. This had the dual effect of inflating the affected number — to over a hundred million on Facebook alone — while leaving “seeing” easy to downplay in importance; after all, how many things do you “see” on the internet every day?

The Oxford researchers better quantify the engagement, on Facebook first, with more specific and consequential numbers. For instance, in 2016 and 2017, nearly 30 million people on Facebook actually shared Russian propaganda content, with similar numbers of likes garnered, and millions of comments generated.

Note that these aren’t ads that Russian shell companies were paying to shove into your timeline — these were pages and groups with thousands of users on board who actively engaged with and spread posts, memes, and disinformation on captive news sites linked to by the propaganda accounts.

The content itself was, of course, carefully curated to touch on a number of divisive issues: immigration, gun control, race relations, and so on. Many different groups (e.g. black Americans, conservatives, Muslims, LGBT communities) were targeted, and all generated significant engagement, as the report’s breakdown of the above stats shows.

Although the targeted communities were surprisingly diverse, the intent was highly focused: stoke partisan divisions, suppress left-leaning voters, and activate right-leaning ones.

Black voters in particular were a popular target across all platforms, and a great deal of content was posted both to keep racial tensions high and to interfere with their actual voting. Memes were posted suggesting followers withhold their votes, or giving deliberately incorrect instructions on how to vote. These efforts were among the most numerous and popular of the IRA’s campaign; it’s difficult to judge their effectiveness, but they certainly had reach.

Examples of posts targeting black Americans.

In a statement, Facebook said that it was cooperating with officials and that “Congress and the intelligence community are best placed to use the information we and others provide to determine the political motivations of actors like the Internet Research Agency.” It also noted that it has “made progress in helping prevent interference on our platforms during elections, strengthened our policies against voter suppression ahead of the 2018 midterms, and funded independent research on the impact of social media on democracy.”

Instagram on the rise

Based on the narrative thus far, one might expect that Facebook — being the focus for much of it — was the biggest platform for this propaganda, and that it would have peaked around the 2016 election, when the evident goal of helping Donald Trump get elected had been accomplished.

In fact, Instagram was receiving as much or more content than Facebook, and it was being engaged with on a similar scale. Previous reports disclosed that around 120,000 IRA-related posts on Instagram had reached several million people in the run-up to the election. The Oxford researchers conclude, however, that 40 accounts received in total some 185 million likes and 4 million comments during the period covered by the data (2015-2017).

A partial explanation for these rather high numbers may be that, also counter to the most obvious narrative, IRA posting in fact increased following the election — for all platforms, but particularly on Instagram.

IRA-related Instagram posts jumped from an average of 2,611 per month in 2016 to 5,956 in 2017 (these figures don’t match the totals above exactly because the time periods covered differ slightly).

Twitter posts, while extremely numerous, held quite steady at just under 60,000 per month, totaling around 73 million engagements over the period studied. To be perfectly frank, this kind of voluminous bot and sock puppet activity is so commonplace on Twitter, and the company seems to have done so little to thwart it, that it hardly bears mentioning. But it was certainly there, and often reused existing botnets that had previously chimed in on politics elsewhere and in other languages.

In a statement, Twitter said that it has “made significant strides since 2016 to counter manipulation of our service, including our release of additional data in October related to previously disclosed activities to enable further independent academic research and investigation.”

Google too is somewhat hard to find in the report, though not necessarily because it has a handle on Russian influence on its platforms. Oxford’s researchers complain that Google and YouTube have not just been stingy with their data, but appear to have actively attempted to stymie analysis:

Google chose to supply the Senate committee with data in a non-machine-readable format. The evidence that the IRA had bought ads on Google was provided as images of ad text and in PDF format whose pages displayed copies of information previously organized in spreadsheets. This means that Google could have provided the useable ad text and spreadsheets—in a standard machine-readable file format, such as CSV or JSON, that would be useful to data scientists—but chose to turn them into images and PDFs as if the material would all be printed out on paper.
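To illustrate the difference, here’s a hypothetical sketch (the file and column names are invented; Google supplied no such export). A machine-readable format like CSV can be analyzed in a few lines of Python, whereas screenshots of spreadsheets must first be run through OCR and corrected by hand:

```python
import csv

# Hypothetical file and column names; Google provided no such CSV.
with open("ira_ads.csv", newline="", encoding="utf-8") as f:
    ads = list(csv.DictReader(f))

total_spend = sum(float(ad["spend_usd"]) for ad in ads)
print(f"{len(ads)} ads, ${total_spend:,.2f} total ad spend")
```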

This forced the researchers to collect their own data via citations and mentions of YouTube content. As a consequence, their conclusions are limited. Generally speaking, when a tech company does this, it means the data it could provide would tell a story it doesn’t want heard.

For instance, one interesting point brought up by a second report published today, by New Knowledge, concerns the 1,108 videos uploaded by IRA-linked accounts on YouTube. These videos, a Google statement explained, “were not targeted to the U.S. or to any particular sector of the U.S. population.”

In fact, all but a few dozen of these videos concerned police brutality and Black Lives Matter, which as you’ll recall were among the most popular topics on the other platforms. It seems reasonable to expect that this extremely narrow targeting would have been mentioned by YouTube in some way. Unfortunately it was left to be discovered by a third party, which gives one an idea of just how far a statement from the company can be trusted.

Desperately seeking transparency

In their conclusion, the Oxford researchers — Philip N. Howard, Bharath Ganesh, and Dimitra Liotsiou — point out that although the Russian propaganda efforts were (and remain) disturbingly effective and well organized, the country is not alone in this.

“During 2016 and 2017 we saw significant efforts made by Russia to disrupt elections around the world, but also political parties in these countries spreading disinformation domestically,” they write. “In many democracies it is not even clear that spreading computational propaganda contravenes election laws.”

“It is, however, quite clear that the strategies and techniques used by government cyber troops have an impact,” the report continues, “and that their activities violate the norms of democratic practice… Social media have gone from being the natural infrastructure for sharing collective grievances and coordinating civic engagement, to being a computational tool for social control, manipulated by canny political consultants, and available to politicians in democracies and dictatorships alike.”

Predictably, even social networks’ moderation policies became targets for propagandizing.

Waiting on politicians is, as usual, something of a long shot, and the onus is squarely on the providers of social media and internet services to create an environment in which malicious actors are less likely to thrive.

Specifically, this means that these companies need to embrace researchers and watchdogs in good faith instead of freezing them out in order to protect some internal process or embarrassing misstep.

“Twitter used to provide researchers at major universities with access to several APIs, but has withdrawn this and provides so little information on the sampling of existing APIs that researchers increasingly question its utility for even basic social science,” the researchers point out. “Facebook provides an extremely limited API for the analysis of public pages, but no API for Instagram.” (And we’ve already heard what they think of Google’s submissions.)

If the companies exposed in this report truly take these issues seriously, as they tell us time and again, perhaps they should implement some of these suggestions.

New malware pulls its instructions from code hidden in memes posted to Twitter

Security researchers said they’ve found a new kind of malware that takes its instructions from code hidden in memes posted to Twitter.

The malware itself is relatively underwhelming: like most primitive remote access trojans (RATs), it quietly infects a vulnerable computer, takes screenshots, pulls other data from the affected system, and sends it all back to the malware’s command and control server.

What’s interesting is how the malware uses Twitter as an unwilling conduit in communicating with its malicious mothership.

Trend Micro said in a blog post that the malware listens for commands from a Twitter account run by the malware operator. The researchers found two tweets that used steganography to hide “/print” commands in the meme images, which told the malware to take a screenshot of an infected computer. The malware then separately obtains the address where its command and control server is located from a Pastebin post, which directs the malware where to send the screenshots.
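Trend Micro didn’t publish the malware’s exact encoding, but a common steganographic trick is to hide a message in the least-significant bits of an image’s pixel data, leaving the meme visually unchanged. Here’s a rough Python sketch of that generic technique, not the malware’s actual code:

```python
from PIL import Image  # pip install Pillow

def extract_lsb_message(path: str) -> str:
    """Read a message hidden in the least-significant bit of each red
    channel value, terminated by a NUL byte. This is a generic LSB
    scheme; Trend Micro did not publish the malware's exact encoding."""
    img = Image.open(path).convert("RGB")
    bits = [str(r & 1) for (r, g, b) in img.getdata()]

    chars = []
    for i in range(0, len(bits) - 7, 8):
        byte = int("".join(bits[i:i + 8]), 2)
        if byte == 0:  # NUL terminator marks the end of the message
            break
        chars.append(chr(byte))
    return "".join(chars)

command = extract_lsb_message("meme.png")  # might yield e.g. "/print"
```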

10/10 points for creativity, that’s for sure.

The researchers said that memes uploaded to the Twitter page could have included other commands, like “/processos” to retrieve a list of running apps and processes, “/clip” to steal the contents of a user’s clipboard, and “/docs” to retrieve filenames from specific folders.
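Internally, a RAT like this usually boils down to a lookup table mapping command strings to handler routines. A hypothetical sketch, using the command strings Trend Micro observed (the handler names are invented):

```python
# Hypothetical dispatch table; only the command strings come from
# Trend Micro's report.
def take_screenshot(): ...
def list_processes(): ...
def read_clipboard(): ...
def list_documents(): ...

HANDLERS = {
    "/print": take_screenshot,
    "/processos": list_processes,
    "/clip": read_clipboard,
    "/docs": list_documents,
}

def dispatch(command: str) -> None:
    handler = HANDLERS.get(command.strip())
    if handler is not None:
        handler()  # results would be sent to the C2 address from Pastebin
```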

The malware appears to have first surfaced in mid-October, according to a hash analysis by VirusTotal, around the time the Pastebin post was first created.

But the researchers admit they don’t have all the answers, and more work needs to be done to fully understand the malware. It’s not clear where the malware came from, how it infects its victims, who’s behind it, or exactly what it’s for. The researchers also don’t know why the Pastebin post points to a local, non-internet address, suggesting the malware may be a proof-of-concept for future attacks.

Although Twitter didn’t host any malicious content, and the tweets themselves couldn’t have resulted in a malware infection, it’s an interesting (though not unique) example of using the social media site as a covert channel for communicating with malware.

The logic goes that in using Twitter, the malware would connect to “twitter.com,” which is far less likely to be flagged or blocked by anti-malware software than a dodgy-looking server.

After Trend Micro reported it, Twitter pulled the account offline and suspended it permanently.

It’s not the first time malware or botnet operators have used Twitter as a platform for communicating with their networks. Even as far back as 2009, Twitter was used as a way to send commands to a botnet. And, as recently as 2016, Android malware would communicate with a predefined Twitter account to receive commands.

3D-printed heads let hackers – and cops – unlock your phone

There’s a lot you can make with a 3D printer: prosthetics, corneas, firearms — even an Olympic-standard luge.

You can even 3D print a life-size replica of a human head — and not just for Hollywood. Forbes reporter Thomas Brewster commissioned a 3D printed model of his own head to test the face unlocking systems on a range of phones — four Android models and an iPhone X.

Bad news if you’re an Android user: only the iPhone X defended against the attack.

Gone, it seems, are the days of the trusty passcode, which many still find cumbersome, fiddly, and inconvenient — especially when you unlock your phone dozens of times a day. Phone makers are turning to more convenient unlock methods. Even though Google’s latest Pixel 3 shunned facial recognition, many Android models — including popular Samsung devices — are relying more on your facial biometrics. In its latest models, Apple effectively killed its fingerprint-reading Touch ID in favor of its newer Face ID.

But that poses a problem for your data if a mere 3D-printed model can trick your phone into giving up your secrets. That makes life much easier for hackers, who aren’t bound by any rulebook. But what about the police or the feds, who are?

It’s no secret that biometrics — your fingerprints and your face — aren’t protected under the Fifth Amendment. That means police can’t compel you to give up your passcode, but they can forcibly press your finger onto your phone to unlock it, or hold the device up to your face. And the police know it — it happens more often than you might realize.

But there’s also little stopping police from 3D printing or otherwise replicating a suspect’s biometrics to break into a phone.

“Legally, it’s no different from using fingerprints to unlock a device,” said Orin Kerr, professor at USC Gould School of Law, in an email. “The government needs to get the biometric unlocking information somehow,” by either the finger pattern shape or the head shape, he said.

Although a warrant “wouldn’t necessarily be a requirement” to get the biometric data, one would be needed to use the data to unlock a device, he said.

Jake Laperruque, senior counsel at the Project On Government Oversight, said it was doable but isn’t the most practical or cost-effective way for cops to get access to phone data.

“A situation where you couldn’t get the actual person but could use a 3D print model may exist,” he said. “I think the big threat is that a system where anyone — cops or criminals — can get into your phone by holding your face up to it is a system with serious security limits.”

The FBI alone has thousands of devices in its custody — even after admitting the number of encrypted devices is far lower than first reported. With the ubiquitous nature of surveillance, now even more powerful with high-resolution cameras and facial recognition software, it’s easier than ever for police to obtain our biometric data as we go about our everyday lives.

Those cheering on the “death of the password” might want to think again. Passcodes are still the only thing keeping your data safe from the law.

‘donald’ debuts at No. 23 on worst passwords of 2018 list

Almost 10 percent of people on the interwebs used at least one of the 25 worst passwords on SplashData’s annual list, which was released this week. And nearly three percent of you are still using “123456,” the worst password of the entire ranking.

The eighth annual list of worst passwords of the year is based on SplashData’s evaluation of more than 5 million passwords leaked on the internet. Most of the leaked passwords evaluated for the 2018 list were held by users in North America and Western Europe. Passwords leaked from hacks of adult websites were not included in the report, according to SplashData, which provides password management applications TeamsID, Gpass and SplashID.

This year revealed the same takeaway as previous ones: computer users continue to choose the same predictable, easily guessable passwords. For instance, 2018 was the fifth consecutive year that “123456” and “password” retained their top two spots on the list. The next five passwords on the list are simple numerical strings, the company said.
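SplashData hasn’t published its exact methodology, but ranking passwords from leaked dumps presumably boils down to frequency counting, something like this Python sketch (the input file is hypothetical):

```python
from collections import Counter

# Hypothetical input: one leaked password per line.
with open("leaked_passwords.txt", encoding="utf-8", errors="ignore") as f:
    counts = Counter(line.strip() for line in f if line.strip())

for rank, (password, n) in enumerate(counts.most_common(25), start=1):
    print(f"{rank:>2}. {password} ({n:,} occurrences)")
```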

There were a few newcomers on the list. President Donald Trump debuted on this year’s list with “donald” showing up as the 23rd most frequently used password.

“Hackers have great success using celebrity names, terms from pop culture and sports, and simple keyboard patterns to break into accounts online because they know so many people are using those easy-to-remember combinations,” according to Morgan Slain, CEO of SplashData.

SplashData does offer some tips to protect your data: use passphrases of 12 characters or more with mixed character types, use a different password for each login, and use a password manager to organize passwords, generate secure random ones, and log into websites automatically.
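If you want to follow the generation part of that advice without a password manager, Python’s standard library can do it; here’s a short sketch (the Unix wordlist path is illustrative and won’t exist on every system):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length: int = 16) -> str:
    # secrets draws from the OS's CSPRNG, unlike the predictable random module.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def random_passphrase(words_path: str = "/usr/share/dict/words", n: int = 4) -> str:
    # Illustrative wordlist path; any large wordlist will do.
    with open(words_path, encoding="utf-8") as f:
        words = [w.strip() for w in f if 4 <= len(w.strip()) <= 8]
    return "-".join(secrets.choice(words) for _ in range(n))

print(random_password())    # e.g. 'k#9Tq!vLm2@XwRz$'
print(random_passphrase())  # e.g. 'ovary-crisp-lunar-depth'
```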

Here’s what caused yesterday’s O2 and SoftBank outages

It appears that the affected mobile carriers, including O2 and SoftBank, have recovered from yesterday’s network outage, which was triggered by a shutdown of Ericsson equipment running on their networks. That shutdown, in turn, appears to have been caused by expired software certificates on the equipment itself.

While Ericsson acknowledged in its press release yesterday that expired certificates were at the root of the problem, you may be wondering why this would cause a shutdown. It turns out it’s likely due to a fail-safe system in place, says Tim Callan, senior fellow at Sectigo (formerly Comodo CA), a U.S. certificate-issuing authority. Callan has 15 years of experience in the industry.

He indicated that while he didn’t have specific information on this outage, it would be consistent with industry best practices to shut down the system when encountering expired certificates. “We don’t have specific visibility into the Ericsson systems in question, but a typical application would require valid certificates to be in place in order to keep operating. That is to protect against breach by some kind of agent that is maliciously inserted into the network,” Callan told TechCrunch.
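We have no visibility into Ericsson’s systems either, but the generic version of the fail-safe Callan describes is straightforward: inspect a certificate’s expiry date and refuse to keep operating once it has passed. A minimal Python sketch:

```python
import socket
import ssl
import time

def days_until_expiry(host: str, port: int = 443) -> float:
    # Generic check against a public TLS endpoint; Ericsson's internal
    # certificate checks are not public.
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expires - time.time()) / 86400

# A fail-safe would stop the system once this goes negative, rather than
# risk trusting a component whose identity can no longer be verified.
print(f"{days_until_expiry('example.com'):.1f} days remaining")
```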

In fact, Callan said that in 2009 a breach at Heartland Payments was directly related to such a problem. “2009’s massive data breach of Heartland Payment Systems occurred because the network in question did NOT have such a requirement. Today it’s common practice to use certificates to avoid that same vulnerability,” he explained.

Ericsson would not get into specifics about what caused the problem: “Ericsson takes full responsibility for this technical failure. The problem has been identified and resolved. After a complete analysis Ericsson will take measures to prevent such a failure from happening again.”

Among those affected yesterday were millions of O2 customers in Great Britain and SoftBank customers in Japan. SoftBank issued an apology in the form of a press release on the company website. “We deeply apologize to our customers for all inconveniences it caused. We will strive to take all measures to prevent the same network outage.”

As for O2, the company also apologized this morning in a tweet after restoring service.

Australia rushes its ‘dangerous’ anti-encryption bill into parliament, despite massive opposition

Australia’s controversial anti-encryption bill is one step closer to becoming law, after the country’s two major political parties, normally sparring rivals, struck a deal to pass the legislation.

The bill, in short, grants Australian police greater powers to issue “technical notices” — a nice way of saying that companies, and even websites, operating in Australia can be forced to help the government hack, implant malware, undermine encryption or insert backdoors.

If companies refuse, they could face financial penalties.

Lawmakers say that the law is only meant to target serious crimes — sex offenses, terrorism, homicide and drug offenses. Critics have pointed out that the law could allow mission creep into less serious offenses, such as copyright infringement, despite promises that compelled assistance requests will be signed off by two senior government officials.

In all, the proposed provisions have been widely panned by experts, who argue that the bill is vague and contradictory, but powerful, and still contains “dangerous loopholes.” And, critics warn (as they have for years) that any technical backdoors that allow the government to access end-to-end encrypted messages could be exploited by hackers.

But that’s unlikely to get in the way of the bill’s near-inevitable passing.

Australia’s ruling coalition government and its opposition Labor party agreed to have the bill put before parliament this week before its summer break.

Several lawmakers look set to reject the bill, criticizing the government’s effort to rush it through before the holiday.

“Far from being a ‘national security measure’ this bill will have the unintended consequence of diminishing the online safety, security and privacy of every single Australian,” said Jordon Steele-John, a Greens senator, in a tweet.

Tim Watts, a Labor member of Parliament for Gellibrand, tweeted a long thread slamming the government’s push to get the legislation passed before Christmas, despite more than 15,000 submissions to a public consultation, largely decrying the bill’s content.

The tech community — arguably the most affected by the bill’s passing — has also slammed the bill. Apple called it “dangerously ambiguous”, while Cisco and Mozilla joined a chorus of other tech firms calling for the government to dial back the provisions.

But the rhetoric isn’t likely to dampen the push by the global surveillance pact — the U.S., U.K., Canada, Australia and New Zealand, known as the “Five Eyes” group of nations — for greater access to encrypted data. Only earlier this year, the governmental coalition said in no uncertain terms that it would force backdoors if companies weren’t willing to help their governments spy.

Australia’s likely to pass the bill — but when exactly remains a mystery. The coalition government has to call an election in less than six months, putting the anti-encryption law on a timer.

Lawmakers say Amazon’s facial recognition software may be racially biased and harm free expression

Amazon has “failed to provide sufficient answers” about its controversial facial recognition software, Rekognition — and lawmakers won’t take the company’s usual silent treatment for an answer.

A letter, signed by eight lawmakers — including Sen. Edward Markey and Reps. John Lewis and Judy Chu — called on Amazon chief executive Jeff Bezos to explain how the company’s technology works and where it will be used.

It comes after the cloud and retail giant secured several high-profile surveillance contracts with the U.S. government and at least one major city — Orlando, Florida.

The lawmakers expressed “heightened concern given recent reports that Amazon is actively marketing its biometric technology to U.S. Immigration and Customs Enforcement, as well as other reports of pilot programs lacking any hands-on training from Amazon for participating law enforcement officers.”

They also said that the system suffers from accuracy issues, which could lead to racial bias and could harm citizens’ constitutional right to free expression.

“However, at this time, we have serious concerns that this type of product has significant accuracy issues, places disproportionate burdens on communities of color, and could stifle Americans’ willingness to exercise their First Amendment rights in public,” the letter said.

The lawmakers want Amazon to explain how it tests for accuracy and bias, and whether those tests have been independently verified.

It comes after the ACLU found that the software incorrectly matched 28 members of Congress to mugshot photos, with a higher error rate for people of color.

The facial recognition software has been controversial from the start. Even after concerns from its own employees, Amazon said it would push ahead and sell the technology regardless.

Amazon has a little over two weeks to respond to the lawmakers. A spokesperson for Amazon did not respond to a request for comment.

Marriott says 500 million Starwood guest records stolen in massive data breach

Starwood Hotels has confirmed its hotel guest database of about 500 million customers has been stolen in a data breach.

The hotel and resorts giant said in a statement filed with U.S. regulators that the “unauthorized access” to its guest database was detected on or before September 10 — but may have dated back as far as 2014.

“Marriott learned during the investigation that there had been unauthorized access to the Starwood network since 2014,” said the statement. “Marriott recently discovered that an unauthorized party had copied and encrypted information, and took steps towards removing it.”

Specific details of the breach remain unknown. We’ve contacted Starwood for more and will update when we hear back.

The company said that it obtained and decrypted the database on November 19 and “determined that the contents were from the Starwood guest reservation database.”

Some 327 million records contained a guest’s name, postal address, phone number, date of birth, gender, email address, passport number, Starwood’s rewards information (including points and balance), arrival and departure information, reservation date, and their communication preferences.

Starwood said an unknown number of records contained encrypted credit card data, but it has “not been able to rule out” that the components needed to decrypt the data were also taken.

“Marriott reported this incident to law enforcement and continues to support their investigation,” said the statement.

Marriott-owned Starwood is the largest hotel chain in the world, with more than 11 brands covering 1,200 properties, including W Hotels, St. Regis, Sheraton, Westin, Element and more. Starwood-branded timeshare properties are also included.

The company said that its Marriott hotels are not believed to be affected as its reservation system is “on a different network,” following Marriott’s acquisition of Starwood in 2016.

The company has begun informing customers of the breach — including in the U.S., Canada, and the U.K.

Given that the breach falls under the Europe-wide GDPR rules, Starwood may face significant financial penalties of up to four percent of its global annual revenue if found to be in breach of the rules.

Facebook denies report that election war room was disbanded

Facebook’s election war room monitors and dashboards remain, since so does the threat of election interference. Facebook has confirmed to TechCrunch that the election war room it paraded reporters through in October has not been disbanded and will be used again for future elections. That directly contradicts a report from Bloomberg today claiming the war room has “been disbanded,” citing confirmation from a Facebook spokesperson. That article has not received a formal correction or update, despite Facebook’s VP of product for election security Guy Rosen tweeting at Bloomberg’s Sarah Frier that “The war room was effective and we’re not disbanding it, we’re going to do more things like this.”

“Our war room effort is focused specifically on elections-related issues and is designed to rapidly respond to threats such as voter suppression efforts and civic-related misinformation. It was an effective effort during the recent U.S. and Brazil elections, and we are planning to expand the effort going forward for elections around the globe,” a Facebook spokesperson tells TechCrunch. It seems there was a miscommunication between Facebook PR and Bloomberg.

Facebook created the war room at its Menlo Park HQ to monitor for election-related violations of its policies ahead of the Brazilian presidential race and the U.S. midterms. The room features screens visualizing the volume of foreign political content and voter suppression efforts for a team of high-ranking staffers from Facebook as well as Instagram and WhatsApp. The goal was to speed up response times to sudden spikes in misinformation about candidates or about how to vote, to prevent the company from being caught flat-footed as it was in the 2016 presidential election, when Russian agents pumped propaganda into the social network.

Facebook tells me the war room works like this: a few weeks before key elections, it’s staffed up, and interdisciplinary teams work through election day to identify and respond to threats. After an election concludes, staffers return to their teams, where they continue 24/7 monitoring for policy-violating activity across the board. That’s because there are typically far fewer voter suppression attempts and other surges of propaganda when elections are still many months or years away.

But when future key elections arise, the war room will buzz with activity again. The company plans to invest more in the effort since it succeeded in enhancing coordination between Facebook’s security teams. A spokesperson tells me that while the room might move locations to allow more space or be closer to a specific product group, the war room strategy remains.


“The war room will be operational ahead of major events, and it still stands. It was effective for our work in both the Brazil and US elections which is why it’s going to be expanded, not disbanded” Rosen tweeted. “Bottom line is the war room we built originally for the US midterms and for Brazil was effective. Going forward we’re expanding not disbanding the effort.”

Bloomberg had reported that “Facebook says [the war room] was never intended to be permanent, and the company is still assessing what is needed for future elections. The strategic response team is a more-permanent solution to crisis problems, a Facebook spokesperson said.” Rosen’s comment that “the headline is incorrect” referred to news aggregator Techmeme’s manually re-written headline “Facebook disbands its widely publicized “War Room”, says it wasn’t a permanent solution, touts Strategic Response Team as its way to handle future crises”, not Bloomberg’s headline “Facebook’s Sheryl Sandberg Is Tainted by Crisis After Crisis”.

The original Bloomberg story caused such a stir because it came merely five weeks after Facebook had lured scores of reporters to “tour” the war room, shoot video, and report on it. The company was eager to impress on the public that it was taking election security seriously and fighting hard against misinformation. The PR campaign succeeded, with the “war room” name proving especially tantalizing. The words appeared in headlines from many outlets, including TechCrunch.

The whole situation has made the Facebook press corps more cynical and skeptical about how the company tries to manipulate their coverage. The idea that Facebook might have just made the war room for show and since shut it down left many with a sour taste, even if that didn’t end up being true. That feeling was only fueled by the New York Times’ report about how Facebook had hired opposition research firm Definers, whose employees tried to seed stories with journalists that defamed the social network’s critics, and wrote their own biased takes for Definers-affiliated publication NTK Network.

It’s clear that Facebook’s relationship with the press remains contentious. Some believe Facebook sucked ad dollars away from news sites before dialing down its referral traffic to those sites, possibly leaving outlets with a grudge. The Verge currently sells an anti-Facebook t-shirt in its merchandise store, showing protestors toppling its logo like a dictator’s statue. But Facebook does plenty to deserve the tough criticism, from failing to protect the 2016 elections, to its ruthless PR strategies, to how it’s allowed polarizing and sensational content to flourish, to how its growth hacking seeks to devour our attention.

As long as the “days since the last Facebook scandal” counter keeps getting reset to zero, it will remain in the hot seat. The systemic change necessary to put society’s well-being above its own growth may take years of rehiring, training, and a fundamental rethinking of its engagement-seeking business model.