In letter to Congress, Apple sends strongest denial over ‘spy chip’ story

Apple has doubled down on its repudiation of Bloomberg’s report last week that claimed its systems had been compromised by Chinese spies.

The blockbuster story cited more than a dozen sources in claiming that China installed tiny chips on motherboards built by Supermicro, which companies across the U.S. tech industry — including Amazon and Apple — have used to power servers in their datacenters. The report further claimed that the chips could compromise data on those servers, allowing China to spy on some of the world’s most powerful tech companies.

Now, in a letter to Congress, Apple’s vice president of information security George Stathakopoulos sent the company’s strongest denial to date.

“Apple has never found malicious chips, ‘hardware manipulations’ or vulnerabilities purposely planted in any server,” he said. “We never alerted the FBI to any security concerns like those described in the article, nor has the FBI ever contacted us about such an investigation.”

It follows statements by both the U.K. National Cyber Security Centre and the U.S. Department of Homeland Security saying they had “no reason to doubt” the denials issued by Apple, Amazon and Supermicro.

Stathakopoulos added that when Apple “repeatedly asked them to share specific details about the alleged malicious chips that they seemed certain existed, they were unwilling or unable to provide anything more than vague secondhand accounts.”

Apple’s statement is far stronger than its earlier remarks. A key detail missing from the Bloomberg story is that none of its many sources, albeit anonymous, provided the reporters with a first-hand account of the alleged spy chips.

Without any evidence that the chips exist beyond secondhand accounts and anonymous sources, Bloomberg’s story remains on shaky ground.

Facebook is weaponizing security to erode privacy

At a Senate hearing this week in which US lawmakers quizzed tech giants on how they should go about drawing up comprehensive Federal consumer privacy protection legislation, Apple’s VP of software technology described privacy as a “core value” for the company.

“We want your device to know everything about you but we don’t think we should,” Bud Tribble told them in his opening remarks.

Facebook was not at the commerce committee hearing which, as well as Apple, included reps from Amazon, AT&T, Charter Communications, Google and Twitter.

But the company could hardly have made such a claim had it been in the room, given that its business is based on trying to know everything about you in order to dart you with ads.

You could say Facebook has ‘hostility to privacy’ as a core value.

Earlier this year one US senator asked Mark Zuckerberg how Facebook could run its service given that it doesn’t charge users for access. “Senator, we run ads,” was the almost startled response, as if the Facebook founder couldn’t believe his luck at the not-even-surface-level political probing his platform was getting.

But there have been tougher moments of scrutiny for Zuckerberg and his company in 2018, as public awareness about how people’s data is being ceaselessly sucked out of platforms and passed around in the background, as fuel for a certain slice of the digital economy, has grown and grown — fuelled by a steady parade of data breaches and privacy scandals which provide a glimpse behind the curtain.

On the data scandal front Facebook has reigned supreme, whether it’s as an ‘oops we just didn’t think of that’ spreader of socially divisive ads paid for by Kremlin agents (sometimes with roubles!); or as a carefree host for third party apps to party at its users’ expense by silently hoovering up info on their friends, in the multi-millions.

Facebook’s response to the Cambridge Analytica debacle was to loudly claim it was ‘locking the platform down’, and to try to paint everyone else as the rogue data sucker — to avoid the obvious and awkward fact that its own business functions in much the same way.

All this scandalabra has kept Facebook execs very busy this year, with policy staffers and execs being grilled by lawmakers on an increasing number of fronts and issues — from election interference and data misuse, to ad transparency, hate speech and abuse, and also directly, and at times closely, on consumer privacy and control.

Facebook shielded its founder from one much sought-after grilling on data misuse, as UK MPs investigated online disinformation vs democracy, as well as examining wider issues around consumer control and privacy. (They’ve since recommended a social media levy to safeguard society from platform power.)

The DCMS committee wanted Zuckerberg to testify to unpick how Facebook’s platform contributes to the spread of disinformation online. The company sent various reps to face questions (including its CTO) — but never the founder (not even via video link). And committee chair Damian Collins was withering and public in his criticism of Facebook sidestepping close questioning — saying the company had displayed a “pattern” of uncooperative behaviour, and “an unwillingness to engage, and a desire to hold onto information and not disclose it.”

As a result, Zuckerberg’s tally of public appearances before lawmakers this year stands at just two domestic hearings, in the US Senate and Congress, and one at a meeting of the EU parliament’s conference of presidents (which switched from a behind closed doors format to being streamed online after a revolt by parliamentarians) — and where he was heckled by MEPs for avoiding their questions.

But three sessions in a handful of months is still a lot more political grillings than Zuckerberg has ever faced before.

He’s going to need to get used to awkward questions now that lawmakers have woken up to the power and risk of his platform.

Security, weaponized 

What has become increasingly clear from the growing sound and fury over privacy and Facebook (and Facebook and privacy), is that a key plank of the company’s strategy to fight against the rise of consumer privacy as a mainstream concern is misdirection and cynical exploitation of valid security concerns.

Simply put, Facebook is weaponizing security to shield its erosion of privacy.

Privacy legislation is perhaps the only thing that could pose an existential threat to a business that’s entirely powered by watching and recording what people do at vast scale. And relying on that scale (and its own dark pattern design) to manipulate consent flows to acquire the private data it needs to profit.

Only robust privacy laws could bring Facebook’s self-serving house of cards tumbling down. User growth on its main service isn’t what it was but the company has shown itself very adept at picking up (and picking off) potential competitors — applying its surveillance practices to crushing competition too.

In Europe lawmakers have already tightened privacy oversight on digital businesses and massively beefed up penalties for data misuse. Under the region’s new GDPR framework compliance violations can attract fines as high as 4% of a company’s global annual turnover.

Which would mean billions of dollars in Facebook’s case — vs the pinprick penalties it has been dealing with for data abuse up to now.
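
For a sense of scale, here is a quick back-of-the-envelope sketch. The revenue figure is an assumption for illustration only (roughly in line with Facebook’s reported 2017 revenue), not an official number:

```python
# Sketch of GDPR's Article 83(5) ceiling: the greater of EUR 20M or 4% of
# global annual turnover. The EUR 20M floor is treated as ~$23M for
# simplicity; the turnover below is an assumed, illustrative figure.
def max_gdpr_fine(annual_turnover_usd):
    """Return the larger of the two fine ceilings, in USD."""
    return max(23_000_000, 0.04 * annual_turnover_usd)

assumed_turnover = 40_000_000_000  # ~$40B, hypothetical round number
print(f"${max_gdpr_fine(assumed_turnover):,.0f}")  # $1,600,000,000
```

The point is less the exact figure than the shape of the rule: the ceiling scales with turnover, so it bites hardest for the very biggest platforms.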

Though fines aren’t the real point; if Facebook is forced to change its processes, meaning how it harvests and mines people’s data, that could knock a major, major hole right through its profit center.

Hence the existential nature of the threat.

The GDPR came into force in May and multiple investigations are already underway. This summer the EU’s data protection supervisor, Giovanni Buttarelli, told the Washington Post to expect the first results by the end of the year.

Which means 2018 could result in some very well known tech giants being hit with major fines. And — more interestingly — being forced to change how they approach privacy.

One target for GDPR complainants is so-called ‘forced consent’ — where consumers are told by platforms leveraging powerful network effects that they must accept giving up their privacy as the ‘take it or leave it’ price of accessing the service. Which doesn’t exactly smell like the ‘free choice’ EU law actually requires.

It’s not just Europe, either. Regulators across the globe are paying greater attention than ever to the use and abuse of people’s data. And also, therefore, to Facebook’s business — which profits, so very handsomely, by exploiting privacy to build profiles on literally billions of people in order to dart them with ads.

US lawmakers are now directly asking tech firms whether they should implement GDPR-style legislation at home.

Unsurprisingly, tech giants are not at all keen — arguing, as they did at this week’s hearing, for the need to “balance” individual privacy rights against “freedom to innovate”.

So a lobbying joint-front to try to water down any US privacy clampdown is in full effect. (Though also asked this week whether they would leave Europe or California as a result of tougher-than-they’d-like privacy laws none of the tech giants said they would.)

The state of California passed its own robust privacy law, the California Consumer Privacy Act, this summer, and it is due to come into force in 2020. The tech industry is not a fan. So its engagement with federal lawmakers now is a clear attempt to secure a weaker federal framework that would override more stringent state laws.

Europe and its GDPR obviously can’t be rolled over like that, though. Even as tech giants like Facebook have certainly been seeing how much they can get away with, forcing an expensive and time-consuming legal fight.

While ‘innovation’ is one oft-trotted angle tech firms use to argue against consumer privacy protections, Facebook included, the company has another tactic too: Deploying the ‘S’ word — security — both to fend off increasingly tricky questions from lawmakers, as they finally get up to speed and start to grapple with what it’s actually doing; and — more broadly — to keep its people-mining, ad-targeting business steamrollering on by greasing the pipe that keeps the personal data flowing in.

In recent years multiple major data misuse scandals have undoubtedly raised consumer awareness about privacy, and put greater emphasis on the value of robustly securing personal data. Scandals that even seem to have begun to impact how some Facebook users use Facebook. So the risks for its business are clear.

Part of its strategic response, then, looks like an attempt to collapse the distinction between security and privacy — by using security concerns to shield privacy hostile practices from critical scrutiny, specifically by chain-linking its data-harvesting activities to some vaguely invoked “security purposes”, whether that’s security for all Facebook users against malicious non-users trying to hack them; or, wider still, for every engaged citizen who wants democracy to be protected from fake accounts spreading malicious propaganda.

So the game Facebook is here playing is to use security as a very broad brush to try to defang legislation that could radically shrink its access to people’s data.

Here, for example, is Zuckerberg responding to a question from an MEP in the EU parliament asking for answers on so-called ‘shadow profiles’ (aka the personal data the company collects on non-users) — emphasis mine:

It’s very important that we don’t have people who aren’t Facebook users that are coming to our service and trying to scrape the public data that’s available. And one of the ways that we do that is people use our service and even if they’re not signed in we need to understand how they’re using the service to prevent bad activity.

At this point in the meeting Zuckerberg also suggestively referenced MEPs’ concerns about election interference — to better play on a security fear that’s inexorably close to their hearts. (With the spectre of re-election looming next spring.) So he’s making good use of his psychology major.

“On the security side we think it’s important to keep it to protect people in our community,” he also said when pressed by MEPs to answer how a person who isn’t a Facebook user could delete its shadow profile of them.

He was also questioned about shadow profiles by the House Energy and Commerce Committee in April. And used the same security justification for harvesting data on people who aren’t Facebook users.

“Congressman, in general we collect data on people who have not signed up for Facebook for security purposes to prevent the kind of scraping you were just referring to [reverse searches based on public info like phone numbers],” he said. “In order to prevent people from scraping public information… we need to know when someone is repeatedly trying to access our services.”

He claimed not to know “off the top of my head” how many data points Facebook holds on non-users (nor even on users, which the congressman had also asked for, for comparative purposes).

These sorts of exchanges are very telling because for years Facebook has relied upon people not knowing or really understanding how its platform works to keep what are clearly ethically questionable practices from closer scrutiny.

But, as political attention has dialled up around privacy, and it’s become harder for the company to simply deny or fog what it’s actually doing, Facebook appears to be evolving its defence strategy — by defiantly arguing it simply must profile everyone, including non-users, for user security.

No matter that this is the same company which, despite maintaining all those shadow profiles on its servers, famously failed to spot Kremlin election interference going on at massive scale in its own back yard — and thus failed to protect its users from malicious propaganda.

Nor was Facebook capable of preventing its platform from being repurposed as a conduit for accelerating ethnic hate in a country such as Myanmar — with some truly tragic consequences. It must, presumably, hold shadow profiles on non-users there too, yet it was seemingly unable (or unwilling) to use that intelligence to help protect actual lives…

So when Zuckerberg invokes overarching “security purposes” as a justification for violating people’s privacy en masse it pays to ask critical questions about what kind of security it’s actually purporting to be able to deliver. Beyond, y’know, continued security for its own business model as it comes under increasing attack.

What Facebook indisputably does do with ‘shadow contact information’, acquired about people via other means than the person themselves handing it over, is to use it to target people with ads. So it uses intelligence harvested without consent to make money.

Facebook confirmed as much this week, when Gizmodo asked it to respond to a study by some US academics that showed how a piece of personal data that had never been knowingly provided to Facebook by its owner could still be used to target an ad at that person.

Responding to the study, Facebook admitted it was “likely” the academic had been shown the ad “because someone else uploaded his contact information via contact importer”.

“People own their address books. We understand that in some cases this may mean that another person may not be able to control the contact information someone else uploads about them,” it told Gizmodo.

So essentially Facebook has finally admitted that consentless scraped contact information is a core part of its ad targeting apparatus.

Safe to say, that’s not going to play at all well in Europe.

Basically Facebook is saying you own and control your personal data until it can acquire it from someone else — and then, er, nope!

Yet given the reach of its network, the chances of your data not sitting on its servers somewhere seem very, very slim. So Facebook is essentially invading the privacy of pretty much everyone in the world who has ever used a mobile phone. (Something like two-thirds of the global population, then.)

In other contexts this would be called spying — or, well, ‘mass surveillance’.

It’s also how Facebook makes money.

And yet when called in front of lawmakers to answer for the ethics of spying on the majority of the people on the planet, the company seeks to justify this supermassive privacy intrusion by suggesting that gathering data about every phone user without their consent is necessary for some fuzzily-defined “security purposes” — even as its own record on security really isn’t looking so shiny these days.

Mark Zuckerberg prepares to testify before the House Energy and Commerce Committee on Capitol Hill, April 11, 2018, his second day of testimony before Congress after it was reported that 87 million Facebook users had their personal information harvested by Cambridge Analytica, a British political consulting firm linked to the Trump campaign. (Photo by Chip Somodevilla/Getty Images)

It’s as if Facebook is trying to lift a page out of national intelligence agency playbooks — when governments claim ‘mass surveillance’ of populations is necessary for security purposes like counterterrorism.

Except Facebook is a commercial company, not the NSA.

So it’s only fighting to keep being able to carpet-bomb the planet with ads.

Profiting from shadow profiles

Another example of Facebook weaponizing security to erode privacy was also confirmed via Gizmodo’s reportage. The same academics found that the company takes phone numbers users provided for the specific (security) purpose of enabling two-factor authentication, a technique intended to make it harder for a hacker to take over an account, and uses them to target those users with ads.

In a nutshell, Facebook is exploiting its users’ valid security fears about being hacked in order to make itself more money.

Any security expert worth their salt will have spent long years encouraging web users to turn on two-factor authentication for as many of their accounts as possible in order to reduce the risk of being hacked. So Facebook exploiting that security vector to boost its profits is truly awful: it works against those valiant infosec efforts, and so risks eroding users’ security as well as trampling all over their privacy.
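
For context on what users actually signed up for: the value of two-factor authentication comes from a short-lived second secret tied to something the user holds. As a generic illustration (this is standard RFC 6238 TOTP, not Facebook’s SMS-based system, and purely a sketch):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at_time=None, digits=6, step=30):
    """Generic RFC 6238 time-based one-time password (illustrative only)."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time() if at_time is None else at_time) // step
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59s
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at_time=59, digits=8))  # 94287082
```

The verifier holds the same secret and accepts a code only within a narrow time window; the phone or authenticator exists solely to prove possession, which is why repurposing 2FA contact details for ad targeting cuts against the whole bargain.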

It’s just a double whammy of awful, awful behavior.

And of course, there’s more.

A third example of how Facebook seeks to play on people’s security fears to enable deeper privacy intrusion comes by way of the recent rollout of its facial recognition technology in Europe.

In this region the company had previously been forced to pull the plug on facial recognition after being leaned on by privacy conscious regulators. But after having to redesign its consent flows to come up with its version of ‘GDPR compliance’ in time for May 25, Facebook used this opportunity to revisit a rollout of the technology on Europeans — by asking users there to consent to switching it on.

Now you might think that asking for consent sounds okay on the surface. But it pays to remember that Facebook is a master of dark pattern design.

Which means it’s expert at extracting outcomes from people by applying these manipulative dark arts. (Don’t forget, it has even directly experimented in manipulating users’ emotions.)

So can it be a free consent if ‘individual choice’ is set against a powerful technology platform that’s both in charge of the consent wording, button placement and button design, and which can also data-mine the behavior of its 2BN+ users to further inform and tweak (via A/B testing) the design of the aforementioned ‘consent flow’? (Or, to put it another way, is it still ‘yes’ if the tiny greyscale ‘no’ button fades away when your cursor approaches while the big ‘YES’ button pops and blinks suggestively?)

In the case of facial recognition, Facebook used a manipulative consent flow that included a couple of self-serving ‘examples’ — selling the ‘benefits’ of the technology to users before they landed on the screen where they could choose either ‘yes, switch it on’ or ‘no, leave it off’.

One of which explicitly played on people’s security fears — by suggesting that without the technology enabled users were at risk of being impersonated by strangers. Whereas, by agreeing to do what Facebook wanted you to do, Facebook said it would help “protect you from a stranger using your photo to impersonate you”…

That example shows the company is not above actively jerking on the chain of people’s security fears, as well as passively exploiting similar security worries when it jerkily repurposes 2FA digits for ad targeting.

There’s more still: Facebook has been positioning itself to pull off what is arguably the greatest (in the ‘largest’ sense of the word) appropriation of security concerns yet to shield its behind-the-scenes trampling of user privacy — when, from next year, it will begin injecting ads into the WhatsApp messaging platform.

These will be targeted ads, because Facebook has already changed the WhatsApp T&Cs to link Facebook and WhatsApp accounts — via phone number matching and other technical means that enable it to connect distinct accounts across two otherwise entirely separate social services.
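
Mechanically, joining accounts across two services on a shared phone number is straightforward once numbers are normalized. The sketch below illustrates the general technique only; it is not Facebook’s implementation, and the function names and naive normalization are hypothetical:

```python
import re

def normalize_phone(raw, default_country="1"):
    """Strip formatting and (naively) prepend a country code.
    Illustrative only; real systems use full E.164 parsing."""
    digits = re.sub(r"\D", "", raw)
    if len(digits) == 10:  # assume a bare national number
        digits = default_country + digits
    return "+" + digits

def match_accounts(service_a, service_b):
    """Join two {account_id: phone} maps on normalized phone number."""
    by_phone = {normalize_phone(p): aid for aid, p in service_a.items()}
    return {aid_b: by_phone[normalize_phone(p)]
            for aid_b, p in service_b.items()
            if normalize_phone(p) in by_phone}

# Hypothetical data: the same person on two separately-branded services
facebook = {"fb_123": "(415) 555-0100"}
whatsapp = {"wa_987": "+1 415-555-0100"}
print(match_accounts(facebook, whatsapp))  # {'wa_987': 'fb_123'}
```

Once two account IDs resolve to the same normalized number, everything attached to either profile can be treated as one record — which is exactly why a phone number handed over in one context is so valuable in another.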

Thing is, WhatsApp got fat on its founders’ promise of 100% ad-free messaging. The founders were also privacy and security champions, pushing to roll e2e encryption right across the platform — even after selling their app to the adtech giant in 2014.

WhatsApp’s robust e2e encryption means Facebook literally cannot read the messages users are sending each other. But that does not mean Facebook is respecting WhatsApp users’ privacy.

On the contrary: the company has given itself broader rights to user data by changing the WhatsApp T&Cs and by matching accounts.

So, really, it’s all just one big Facebook profile now — whichever of its products you do (or don’t) use.

This means that even without literally reading your WhatsApps, Facebook can still know plenty about a WhatsApp user, thanks to any other profiles they have ever had on its products and any shadow profiles it maintains in parallel. WhatsApp users will soon become 1.5BN+ bullseyes for yet more creepily intrusive Facebook ads to seek out.

No private spaces, then, in Facebook’s empire as the company capitalizes on people’s fears to shift the debate away from personal privacy and onto the self-serving notion of ‘secured by Facebook spaces’ — in order that it can keep sucking up people’s personal data.

This is a very dangerous strategy, though.

Because if Facebook can’t even deliver security for its users, thereby undermining those “security purposes” it keeps banging on about, it might find it difficult to sell the world on going naked just so Facebook Inc can keep turning a profit.

What’s the best security practice of all? That’s super simple: Not holding data in the first place.

White House says a draft executive order reviewing social media companies is not “official”

A draft executive order circulating around the White House “is not the result of an official White House policymaking process,” according to deputy White House press secretary, Lindsay Walters.

According to a report in The Washington Post, Walters denied that White House staff had worked on a draft executive order that would require every federal agency to study how social media platforms moderate user behavior and refer any instances of perceived bias to the Justice Department for further study and potential legal action.

Bloomberg first reported the draft executive order and a copy of the document was acquired and published by Business Insider.

Here’s the relevant text of the draft (from Business Insider):

Section 2. Agency Responsibilities. (a) Executive departments and agencies with authorities that could be used to enhance competition among online platforms (agencies) shall, where consistent with other laws, use those authorities to promote competition and ensure that no online platform exercises market power in a way that harms consumers, including through the exercise of bias.

(b) Agencies with authority to investigate anticompetitive conduct shall thoroughly investigate whether any online platform has acted in violation of the antitrust laws, as defined in subsection (a) of the first section of the Clayton Act, 15 U.S.C. § 12, or any other law intended to protect competition.

(c) Should an agency learn of possible or actual anticompetitive conduct by a platform that the agency lacks the authority to investigate and/or prosecute, the matter should be referred to the Antitrust Division of the Department of Justice and the Bureau of Competition of the Federal Trade Commission.

While there are several reasonable arguments to be made for and against the regulation of social media platforms, “bias” is probably the least among them.

That hasn’t stopped the steady drumbeat of accusations of bias under the guise of “anticompetitive regulation” against platforms like Facebook, Google, YouTube, and Twitter from increasing in volume and tempo in recent months.

Bias was the key concern Republican lawmakers brought up when Mark Zuckerberg was called to testify before Congress earlier this year. And bias was front and center in Republican lawmakers’ questioning of Jack Dorsey, Sheryl Sandberg, and Google’s empty chair when they were called before Congress earlier this month to testify in front of the Senate Intelligence Committee.

The Justice Department has even called in the attorneys general of several states to review the legality of the moderation policies of social media platforms later this month (spoiler alert: they’re totally legal).

With all of this activity focused on tech companies, it’s no surprise that the administration would turn to the executive order — a weapon of choice for presidents who find their agenda stalled in the face of an uncooperative legislature (or prevailing rule of law).

However, as the Post reported, aides in the White House said there’s little chance of this becoming actual policy.

… three White House aides soon insisted they didn’t write the draft order, didn’t know where it came from, and generally found it to be unworkable policy anyway. One senior White House official confirmed the document had been floating around the White House but had not gone through the formal process, which is controlled by the staff secretary.

Washington hit China hard on tech influence this week

After months of back-and-forth negotiations, Washington moved rapidly this past week to counter the growing ascendancy of China’s tech industry, with Congress passing expanded national security controls over M&A transactions and the Trump administration heaping more pressure on China with threats of increased tariffs.

We’ve been following the reforms to CFIUS — the Committee on Foreign Investment in the United States — since the proposal was first floated late last year. The committee is charged with protecting America’s economic interests by preventing takeovers of companies by foreign entities where the transaction could have deleterious national security consequences. The committee and its antecedents have slowly gained powers in the decades since the Korean War, but this week, it suddenly gained a whole lot more.

Through the Foreign Investment Risk Review Modernization Act of 2018, which was rolled into the must-pass National Defense Authorization Act and passed by Congress this week, CFIUS is gaining a number of new powers, more resources and staff, more oversight, and a charge to massively expand its influence in any M&A process involving foreign entities.

Lawfare has a great summary of the final text of the bill and its ramifications, but I want to highlight a few of the changes that I think are going to have an outsized effect on Silicon Valley and the tech industry more widely.

One of the top priorities of this legislation was to make it more difficult for Chinese venture capital firms to invest in American startups and pilfer intellectual property or acquire confidential user data.

Congress fulfilled that goal in two ways. First, the definition of a “covered transaction” has been massively expanded, with a focus on “critical technology” industries. In the past, there was an expectation that a foreign entity had to essentially buy out a company in order to trigger a CFIUS review. That jurisdiction has now been expanded to include such actions as adding a member to a company’s board of directors, even in cases where an investment is essentially passive.

That means that the typical VC round could now trigger a review in Washington — and in the fast timelines of startup fundraising, that might be enough friction to keep Chinese venture capital out of the American ecosystem. Given that Chinese venture capital (at least by some measures) has outpaced U.S. venture capital in the first half of this year, this provision will have huge ramifications for startups and their valuations.

The second element Congress added was requiring that CFIUS receive all partnership agreements that a company has signed with a foreign investor. Often in a transaction, there is a main agreement spelling out the overall structure of a deal, and then side agreements with individual investors with special terms not shared with the wider syndicate, such as the right to access internal company data or intellectual property. By requiring further disclosure, CFIUS will have a more holistic picture of a deal and any national security risks it might pose.

It’s important to note that Congress was keen to balance the need for investment against the demands of national security. Through oversight provisions, including allowing CFIUS decisions to be contested in the U.S. Court of Appeals for the D.C. Circuit, Congress has designed the reform to be fairer, even as it takes a harder line on certain transactions.

It will take many months for the provisions to come into full force, so some of the effects of this bill won’t be felt until the end of next year. Nonetheless, Congress has sent a clear message of its intent.

Congress’ national security concerns in financial transactions are also crossing the Atlantic. British Prime Minister Theresa May and her government are spearheading new controls over foreign investment transactions, and the EU has also launched more screenings to ensure that transactions are in the best interests of the continent. All of these legislative moves are a response to Chinese foreign direct investment, which has skyrocketed in Europe while almost disappearing in North America.

President Trump signed tariffs on China earlier this year. Now, the administration wants to more than double them.

That disappearance is a function of the ongoing trade dispute between the U.S. and China, which came to a head this past week. The Trump administration said it is considering increasing tariffs from 10% to 25% on $200 billion worth of Chinese goods, significantly heightening the tariffs it had put in place earlier this year.

That threat got a swift response from China overnight, with the Chinese Commerce Ministry saying that it would put tariffs on $60 billion worth of American goods in retaliation if the U.S. followed through with its threat.

So far, the tech industry appears to have been more insulated from the back-and-forth than expected, although the increasing scope and intensity of tariffs could change that calculus. Apple updated its quarterly filing this week to include a new risk around trade disputes, saying that “Tariffs could also make the Company’s products more expensive for customers, which could make the Company’s products less competitive and reduce consumer demand.” Legal boilerplate for sure, but it is the first time the company has included such a provision in its filing.

The tariffs drama is going to continue in the weeks and months ahead. But this week in particular was a watershed for U.S.-China technology relations, and a busy week for tech lobbyists and policy officials.

For startups, most of this news basically boils down to the following: the U.S. is one market, and China is another. Cross-investing and cross-distribution just aren’t going to be as easy as they were even a few months ago. Pick a market — one market — and focus your energies there. Clearly, it’s going to be tough times for anyone caught in the middle between the two.

Russian hackers already targeted a Missouri senator up for reelection in 2018

A Democratic senator seeking reelection this fall appears to be the first identifiable target of Russian hacking in the 2018 midterm race. In a new story on the Daily Beast, Andrew Desiderio and Kevin Poulsen reported that Democratic Missouri Senator Claire McCaskill was targeted in a campaign-related phishing attack. That clears up one unspecified target from last week’s statement by Microsoft’s Tom Burt that three midterm election candidates had been targeted by Russian phishing campaigns.


The report cites its own forensic research in determining the attacker is likely Fancy Bear, a hacking group believed to be affiliated with Russian military intelligence.

“We did discover that a fake Microsoft domain had been established as the landing page for phishing attacks, and we saw metadata that suggested those phishing attacks were being directed at three candidates who are all standing for elections in the midterm elections,” Burt said during the Aspen Security Forum. Microsoft removed the domain and noted that the attack was unsuccessful.

Sen. McCaskill confirmed in a press release that she was targeted by the attack, which appears to have taken place in August 2017:

Russia continues to engage in cyber warfare against our democracy. I will continue to speak out and press to hold them accountable. While this attack was not successful, it is outrageous that they think they can get away with this. I will not be intimidated. I’ve said it before and I will say it again, Putin is a thug and a bully.

TechCrunch has reached out to Sen. McCaskill’s office for additional details on the incident. McCaskill, a vocal Russia critic, will likely face Republican frontrunner and Trump pick Josh Hawley this fall.

Twitter’s efforts to suspend fake accounts have doubled since last year

Bots, your days of tweeting politically divisive nonsense might be numbered. The Washington Post reported Friday that in the last few months Twitter has aggressively suspended accounts in an effort to stem the spread of disinformation running rampant on its platform.

The Washington Post reports that Twitter suspended as many as 70 million accounts between May and June of this year, with no signs of slowing down in July. According to data obtained by the Post, the platform suspended 13 million accounts during a weeklong spike of bot banning activity in mid-May.

Sources tell the Post that the uptick in suspensions is tied to the company’s efforts to respond to scrutiny from the congressional investigation into Russian disinformation on social platforms. The report adds that Twitter investigates bots and other fake accounts through an internal project known as “Operation Megaphone,” through which it buys suspicious accounts and then investigates their connections.

Twitter declined to provide additional information about the Washington Post report but pointed us to a blog post from last week in which it disclosed other numbers related to its bot hunting efforts. In May of 2018, Twitter identified more than 9.9 million suspicious accounts — triple its efforts in late 2017.


When Twitter identifies an account that it deems suspicious it then “challenges” that account, giving legitimate Twitter users an opportunity to prove their sentience by confirming a phone number. When an account fails this test it gets the boot, while accounts that pass are reinstated.

As Twitter noted in its recent blog post, bots can make users look good by artificially inflating follower counts.

“As a result of these improvements, some people may notice their own account metrics change more regularly,” Twitter warned. The company noted that cracking down on fake accounts means that “malicious actors” won’t be able to promote their own content and accounts as easily by inflating their own numbers. Kicking users off a platform, fake or not, is a risk for a company that regularly reports its monthly active users, though only a temporary one.

As the report notes, at least one insider expects Twitter’s Q2 active user numbers to dip, reflecting its shift in enforcement. Still, any temporary user number setback would prove nominal for a platform that should focus on healthy user growth. Facebook is facing a similar reckoning as a result of the Russian bot scandal, as the company anticipates user engagement stats to dip as it moves to emphasize quality user experiences over juiced up quarterly numbers. In both cases, it’s a worthy tradeoff.

Tinder bolsters its security to ward off hacks and blackmail

This week, Tinder responded to a letter from Oregon Senator Ron Wyden calling for the company to seal up security loopholes in its app that could lead to blackmail and other privacy incursions.

In a letter to Sen. Wyden, Match Group General Counsel Jared Sine describes recent changes to the app, noting that as of June 19, “swipe data has been padded such that all actions are now the same size.” Sine added that images on the mobile app are fully encrypted as of February 6, while images on the web version of Tinder were already encrypted.

The Tinder issues were first called out in a report by a research team at Checkmarx describing the app’s “disturbing vulnerabilities” and their propensity for blackmail:

The vulnerabilities, found in both the app’s Android and iOS versions, allow an attacker using the same network as the user to monitor the user’s every move on the app. It is also possible for an attacker to take control over the profile pictures the user sees, swapping them for inappropriate content, rogue advertising or other type of malicious content (as demonstrated in the research).

While no credential theft and no immediate financial impact are involved in this process, an attacker targeting a vulnerable user can blackmail the victim, threatening to expose highly private information from the user’s Tinder profile and actions in the app.

In February, Wyden called for Tinder to address the vulnerability by encrypting all data that moves between its servers and the app and by padding data to obscure it from hackers. In a statement to TechCrunch at the time, Tinder said it had heard Sen. Wyden’s concerns and had recently implemented encryption for profile photos as part of a broader effort to deepen its privacy practices.
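The padding fix is worth unpacking: even when traffic is encrypted, different actions can produce requests of different byte lengths, so an on-path observer can still infer what a user did from size alone. Below is a minimal sketch of the idea — not Tinder’s actual code; the payload format, field names and block size are all made up for illustration:

```python
import os

BLOCK = 2048  # hypothetical fixed wire size, in bytes


def pad(payload: bytes, block: int = BLOCK) -> bytes:
    """Pad a payload to a fixed size so an on-path observer can't
    distinguish actions by length alone. (Real protocols also encode
    the original length so the receiver can strip the padding.)"""
    if len(payload) > block:
        raise ValueError("payload exceeds block size")
    # Random filler; once the channel is encrypted, the filler's
    # content doesn't matter, only the uniform total length.
    return payload + os.urandom(block - len(payload))


# Unpadded, these hypothetical request bodies differ in length,
# leaking which action the user took...
like = b'{"action":"like","id":12345}'
superlike = b'{"action":"superlike","id":12345}'

# ...but after padding, every action occupies exactly BLOCK bytes.
assert len(pad(like)) == len(pad(superlike)) == BLOCK
```

This is the same reasoning behind Sine’s line that “swipe data has been padded such that all actions are now the same size”: the defense isn’t hiding the content (encryption already does that) but hiding the metadata that lengths give away.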

“Like every technology company, we are constantly working to improve our defenses in the battle against malicious hackers and cyber criminals,” Sine said in the letter. “… Our goal is to have protocols and systems that not only meet, but exceed industry best practices.”

Facebook shared data with Chinese telecom Huawei, raising US government security concerns

Concerns around Facebook’s recently revealed data sharing relationship with some device makers just took a turn for the worse. The practice, first revealed over the weekend, is now confirmed to have included relationships with Chinese companies Huawei, Lenovo, Oppo and TCL, according to The New York Times. Given that the U.S. government has longstanding national security concerns over Huawei, Facebook’s newly revealed data deal with the Chinese company has raised some eyebrows in Congress.

“Concerns about Huawei aren’t new – they were widely publicized beginning in 2012, when the House Permanent Select Committee on Intelligence released a well-read report on the close relationships between the Chinese Communist Party and equipment makers like Huawei,” U.S. Senator Mark Warner said of the revelation. Warner serves as the Vice Chairman of the Senate Select Committee on Intelligence.

“The news that Facebook provided privileged access to Facebook’s API to Chinese device makers like Huawei and TCL raises legitimate concerns, and I look forward to learning more about how Facebook ensured that information about their users was not sent to Chinese servers.”

In that report, the House Intelligence Committee wrote that “Huawei did not fully cooperate with the investigation and was unwilling to explain its relationship with the Chinese government or Chinese Communist Party, while credible evidence exists that it fails to comply with U.S. laws” and that Huawei’s history indicated that it likely had ties to the Chinese military.

Earlier in the day, the Senate Commerce Committee addressed a letter to Facebook over the broader issue of these manufacturer relationships, questioning Facebook’s assertion that the shared data was not abused. As The New York Times reports, these relationships date back to “at least 2010” — the relative dark ages of Facebook’s mobile strategy. It does not appear that ZTE had a similar agreement with Facebook.

Facebook has disputed the characterization of these relationships as a privacy scandal, emphasizing that it imposed tight restrictions on this class of device integration.

Facebook told the New York Times that while the partnerships have been ongoing for years, the company would end its relationship with Huawei by the week’s end.

Pro-Trump social media duo accuses Facebook of anti-conservative censorship

Following up on a recurring thread from Mark Zuckerberg’s congressional appearance earlier this month, the House held a hearing today on perceived bias against conservatives on Facebook and other social platforms. The hearing, ostensibly about “how social media companies filter content on their platforms,” focused on the anecdotal accounts of social media stars Diamond and Silk (Lynnette Hardaway and Rochelle Richardson), a pro-Trump viral web duo that rose to prominence during Trump’s presidential campaign.

“Facebook used one mechanism at a time to diminish reach by restricting our page so that our 1.2 million followers would not see our content, thus silencing our conservative voices,” Diamond and Silk said in their testimony.

“It’s not fair for these Giant Techs [sic] like Facebook and YouTube get to pull the rug from underneath our platform and our feet and put their foot on our neck to silence our voices; it’s not fair for them to put a strong hold on our finances.”

During the course of their testimony, Diamond and Silk repeated their unfounded assertions that Facebook targeted their content as a deliberate act of political censorship.

What followed was mostly a partisan back-and-forth. Republicans who supported the hearing’s mission asked the duo to elaborate on their claims, while Democrats pointed out their lack of substantiating evidence and their willingness to dismiss documented facts as “fake news.”

Controversially, they also denied that they had accepted payment from the Trump campaign, in spite of public evidence to the contrary. On November 22, 2016, the pair received $1,274.94 for “field consulting,” as documented by the FEC.

Earlier in April, Zuckerberg faced a question about the pair’s Facebook page from Republican Rep. Joe Barton:

Why is Facebook censoring conservative bloggers such as Diamond and Silk? Facebook called them “unsafe” to the community. That is ludicrous. They hold conservative views. That isn’t unsafe.

At the time, Zuckerberg replied that the perceived censorship was an “enforcement error” and said Facebook had been in contact with Diamond and Silk to reverse its mistake. Senator Ted Cruz also asked Zuckerberg about what he deemed a “pervasive pattern of bias and political censorship” against conservative voices on the platform.

Today’s hearing, which California Rep. Ted Lieu dismissed as “stupid and ridiculous,” was little more than an exercise in idle hyper-partisanship, but it’s notable for a few reasons. For one, Diamond and Silk are two high-profile creators who managed to take their monetization grievances with tech companies, however misguided, all the way to Capitol Hill. Beyond that, and the day’s strange role-reversal of regulatory stances, the hearing was the natural escalation of censorship claims made by some Republicans during the Zuckerberg hearings. Remarkably, those accusations only comprised a sliver of the two days’ worth of testimony; in a rare display of bipartisanship, Democrats and Republicans mostly cooperated in grilling the Facebook CEO on his company’s myriad failures.

Congressional hearing or not, the truth of Facebook’s platform screw-ups is far more universal than political claims on the right or left might suggest. As Zuckerberg’s testimony made clear, Facebook’s moderation tools don’t exactly work as intended and the company doesn’t even really know the half of it. Facebook users have been manipulating the platform’s content reporting tools for years, and unfortunately that phenomenon coupled with Facebook’s algorithmic and moderation blind spots punishes voices on both sides of the U.S. political spectrum — and everyone in between.