The New York Times sues the FCC to investigate Russian interference in Net Neutrality decision

The ongoing saga over the FCC’s handling of public comments on its net neutrality proposal continues: The New York Times has sued the agency for withholding information that the paper believes could prove there was Russian interference.

The Times has filed multiple Freedom of Information Act requests for data on the comments since July 2017, and now, after even a significantly reduced request was rejected, it is taking the FCC to court in a bid to get the information.

The FCC’s comment system keeled over in May 2017 during the public feedback period, as more than 22 million comments were posted. Plenty of those were suspected of using repeated phrases, fake email addresses and even the names of deceased New Yorkers. The FCC initially claimed, falsely, that the outage happened because it was hacked — it wasn’t, and the agency has only just made that clear. Instead, its system appears to have been unable to handle the volume of comments, with a John Oliver segment thought to have accounted for a surge in interest.

The New York Times, meanwhile, has been looking into whether Russia was involved. An op-ed in the Washington Post from FCC Commissioner Jessica Rosenworcel, published earlier this year, suggested that as many as 500,000 comments came from Russian email addresses, with an estimated eight million comments sent from throw-away email accounts created via FakeMailGenerator.com. In addition, a report found links between email addresses cited in the Mueller investigation and those used to submit comments on net neutrality.
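The kind of tallying Rosenworcel describes can be approximated from the public comment dump alone. As a rough sketch — the field names and sample records below are illustrative, not the FCC’s actual export schema — one could count comment email domains against Russian country-code domains and against a list of domains known to be served by FakeMailGenerator.com:

```python
from collections import Counter

# Illustrative comment records; NOT the FCC's actual export schema.
comments = [
    {"email": "ivan@mail.ru", "text": "Repeal net neutrality"},
    {"email": "jane@example.com", "text": "Keep net neutrality"},
    {"email": "x1@armyspy.com", "text": "Repeal net neutrality"},
]

# A small subset of the throw-away domains FakeMailGenerator.com uses.
DISPOSABLE_DOMAINS = {"armyspy.com", "cuvox.de", "dayrep.com", "jourrapide.com"}

def domain_of(email):
    """Lower-cased domain part of an email address."""
    return email.rsplit("@", 1)[-1].lower()

by_domain = Counter(domain_of(c["email"]) for c in comments)
russian_count = sum(n for d, n in by_domain.items() if d.endswith(".ru"))
disposable_count = sum(n for d, n in by_domain.items() if d in DISPOSABLE_DOMAINS)
```

Email domains are trivially spoofable, of course, which is exactly why the Times wants the server-side logs rather than just the comment metadata.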

Since the actual events are unclear — for more than a year the FCC allowed people to incorrectly believe it was hacked — an FOIA request could provide a clearer insight into whether there was overseas interference.

Problem: the FCC itself won’t budge, as the suit (which you can find here) explains:

The request at issue in this litigation involves records that will shed light on the extent to which Russian nationals and agents of the Russian government have interfered with the agency notice-and-comment process about a topic of extensive public interest: the government’s decision to abandon “net neutrality.” Release of these records will help broaden the public’s understanding of the scope of Russian interference in the American democratic system.

Despite the clear public importance of the requested records, the FCC has thrown up a series of roadblocks, preventing The Times from obtaining the documents.

Repeatedly, The Times has narrowed its request in the hopes of expediting release of the records so it could explore whether the FCC and the American public had been the victim of an orchestrated campaign by the Russians to corrupt the notice-and-comment process and undermine an important step in the democratic process of rule-making.

The Times’ original FOIA request, lodged in June 2017, sought “IP addresses, timestamps, and comments, among other data,” which included web server data. The FCC initially balked and declined on the basis that complying would compromise its IT systems and security (that sounds familiar!), while it also cited privacy concerns for the commenters.

Over the succeeding months, which included dialogue between the two parties, the Times pared back the scope of its request considerably. By 31 August 2018, it was seeking only a list of originating IP addresses and timestamps for comments, and a list of user-agent headers (which show a user’s browser type and other diagnostic details) and timestamps. The requested lists were separated to address security concerns.
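To see why the separation matters, consider how such lists might be produced from ordinary web server logs. The sketch below assumes Apache-style combined log lines — a format assumption on my part, not a detail from the filing — and emits IP/timestamp pairs and user-agent/timestamp pairs as two independent lists, so the two data sets cannot be trivially re-joined into per-visitor records:

```python
import re

# Apache "combined" log format (an assumption; the FCC's actual log
# format is not public): ip, identd, user, [timestamp], "request",
# status, bytes, "referer", "user-agent".
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "[^"]*" \d+ \d+ "[^"]*" "(?P<ua>[^"]*)"'
)

def split_request_fields(log_lines):
    """Return two separate lists: (ip, timestamp) pairs and
    (user_agent, timestamp) pairs, per the narrowed FOIA request."""
    ip_list, ua_list = [], []
    for line in log_lines:
        m = LOG_PATTERN.match(line)
        if not m:
            continue  # skip malformed lines
        ip_list.append((m.group("ip"), m.group("ts")))
        ua_list.append((m.group("ua"), m.group("ts")))
    return ip_list, ua_list

sample = [
    '203.0.113.7 - - [31/Aug/2018:12:00:01 +0000] "POST /comment HTTP/1.1" 200 512 "-" "Mozilla/5.0"',
]
ips, uas = split_request_fields(sample)
```

Even split this way, shared timestamps give an analyst a statistical handle on bot-like bursts of comments, which is presumably the point of the request.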

However, the FCC declined again, and now the Times believes it has “exhausted all administrative remedies.”

“The FCC has no lawful basis for declining to release the records requested,” it added.

Not so, according to the FCC, which released a statement to Ars Technica.

“We are disappointed that The New York Times has filed suit to collect the Commission’s internal Web server logs, logs whose disclosure would put at jeopardy the Commission’s IT security practices for its Electronic Comment Filing System,” a spokesperson said.

The organization cited a District of Columbia case earlier this month which it claimed found that “the FCC need not turn over these same web server logs under the Freedom of Information Act.”

But that is a simplistic read of the case. While the judge did rule against turning over server logs, he ordered the FCC to provide email addresses for those who had submitted comments via its .CSV file template, along with the files themselves. That’s a decent precedent for the New York Times, whose request has a far narrower scope.

Singapore is the crypto sandbox that Asia needs

Singapore Blockchain Week happened this past week. While there have been a few announcements from companies, some of the most interesting updates have come from regulators, and specifically, the Monetary Authority of Singapore (MAS). The financial regulator openly discussed its views on cryptocurrency and plans to develop blockchain technology locally.

For those who are unfamiliar, Singapore has historically been a financial hub in Southeast Asia, but it has now gradually become the crypto hub of Asia as well. Compared to the rest of Asia and the rest of the world, regulators in Singapore are well-informed and more transparent about their views on blockchain and cryptocurrency. While regulatory uncertainties still loom over Korea and Japan, the MAS has already released “A Guide to Digital Token Offerings,” which illustrates the application of securities laws to digital token offerings and issuances. Singaporean regulators have arguably been pioneering economic and regulatory standards in Asia since the country’s founding under Lee Kuan Yew in 1965.

Singapore is the first stop for foreign companies in crypto

In the past, I’ve said that Thailand is one of the most interesting countries in crypto in Southeast Asia. Nonetheless, for any Western or foreign company looking to establish a footing in Asia, or even for any local company in any Asian country looking to establish a presence outside of their own country, Singapore should be the first stop. It has become the go-to crypto sandbox of Asia.

There are a number of companies all over Asia, as well as in the West, that have already made moves into the country. And the types of cryptocurrency projects and exchanges that go to Singapore vary widely.

A few months ago, a Korean team called MVL introduced Tada, or the equivalent of “Uber” on the blockchain, in Singapore. Tada is an on-demand car sharing service that utilizes MVL’s technology. The Tada app is built on MVL’s blockchain ecosystem, which is specifically designed to serve the automotive industry, adjacent service industries, and their customers. In this case, MVL was looking to test out its blockchain projects in a progressive, friendly jurisdiction outside of Korea, but still close enough to its headquarters. Singapore fulfilled most of these requirements.

Relatedly, Didi, China’s ride-sharing company, has also looked to build out its own blockchain-based ride-sharing program, called VVgo. VVgo’s launch is pending, and its home is intended to be in Toronto, Singapore, Hong Kong or San Francisco. Given Singapore’s geographic proximity and the transparency of its regulators, it would likely be a good testing ground for Didi as well.

This week, exchanges such as Binance and Korea’s Upbit also announced plans to enter the Singaporean market. A few days ago, Changpeng Zhao, CEO of Binance, the world’s largest cryptocurrency exchange, announced the launch of a fiat currency exchange that will be based in Singapore. He also mentioned his company’s plan to launch five to ten fiat-to-crypto exchanges in the next year, ideally two per continent. Dunamu, the parent company of South Korea’s largest crypto exchange, Upbit, also just announced the launch of Upbit Singapore, which will be fully operational by October.

The team at Dunamu says it is encouraged by MAS’s attitude toward cryptocurrency regulation and the government’s vision of establishing a strong crypto and blockchain sector. It also believes Singapore could be a bridge between Korea and the global cryptocurrency exchange market.

From a high level, the supply of crypto projects and trading volume in Singapore is certainly strong, and the demand also appears abundant. Following China’s ICO ban in late 2017, Singapore has become home to many financial institutions that can serve as potential investors for ICOs.

In a piece recently featured on China Money Network, Li Dongmei wrote:

What is supporting such optimism is the quiet preparation of capital on a massive scale, getting ready to act on the “All In Crypto” mantra. “In recent months, there have been over a thousand foundations established in Singapore by Chinese nationals,” said Chen Xianhui, an agent who specializes in helping Chinese clients register foundations in Singapore. Most of these newly established foundations are used for setting up various token investment funds.

Singapore has become the first choice when crypto companies from both the West and the East are initially scoping out their market strategies in Asia, and companies want an overarching idea of what’s going on in the cryptocurrency world in the region.

In fact, it’s often the case that Southeast Asian crypto companies and leaders gather in Singapore before they go off and do crypto business in their own countries. It’s the place for anyone who wants to tap all of the Asian crypto markets from a single physical location. The proof is in the data: in 2017, Singapore ascended to become the number-three market for ICO issuance based on funds raised, trailing only the United States and Switzerland.

Crypto is thriving due to regulator openness

The Monetary Authority of Singapore (MAS) takes a very practical approach to crypto. Currently, MAS divides digital tokens into utility tokens, payment tokens, and securities. In Asia, only Singapore and Thailand currently have such detailed classifications.

While speaking at Consensus Singapore this week, Damien Pang of Singapore’s Technology Infrastructure Office, under the FinTech & Innovation Group (FTIG), said that “[MAS does] not regulate technology itself but purpose,” in a conversation about ICOs in Singapore. “The MAS takes a close look at the characteristics of the tokens, in the past, at the present, and in the future, instead of just the technology built on.”

Additionally, Pang mentioned that MAS does not intend to regulate utility tokens. It is, however, looking to regulate payment tokens that have store-of-value and payment properties by passing a payment services bill by the end of the year. It is also paying attention to any utility or payment tokens with security features (i.e., a promise of future earnings), which will be regulated as such.
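Read together, the three buckets behave like a short decision cascade: security-like features dominate, then payment properties, and everything else defaults to utility. A toy sketch of that reading — the predicates and their names are illustrative, not MAS’s actual legal test:

```python
def classify_token(promises_future_earnings: bool, is_payment_medium: bool) -> str:
    """Toy classifier mirroring the three MAS buckets described above.

    The ordering encodes Pang's point: a utility or payment token
    with security features is regulated as a security.
    """
    if promises_future_earnings:   # security features, e.g. future earnings
        return "security"
    if is_payment_medium:          # store of value + payment properties
        return "payment"
    return "utility"
```

In this reading, `classify_token(True, True)` still returns `"security"`, which matches the point that payment tokens with security features would be regulated as securities.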

On the technology front, since 2017, Singapore authorities have been looking to use distributed ledger technology to boost the efficiency of settling cross-bank financial transactions. They believe that blockchain technology offers the potential to make trade finance safer and more efficient.

When compared to other Asian crypto hubs like Hong Kong, Seoul, or Shanghai, Singapore offers significantly more exposure to the Southeast Asian market. I believe market activity will likely continue to thrive in the region as the country continues to act as the springboard for cryptocurrency companies and investors, at least until countries like Korea and Japan establish a clear regulatory stance.

Trump’s new cyber strategy eases rules on use of government cyberweapons

The Trump administration’s new cyber strategy out this week isn’t much more than a stringing together of previously considered ideas.

In the 40-page document, the government set out its plans to improve cybersecurity, incentivize change, and reform computer hacking laws. Election security gets about a quarter of a page, second only to “space cybersecurity.”

The difference was the tone. Although the document made no mention of “offensive” action against actors and states that attack the US, the imposition of “consequences” was a repeated theme.

“Our presidential directive effectively reversed those restraints, effectively enabling offensive cyber-operations through the relevant departments,” said John Bolton, national security advisor, to reporters.

“Our hands are not tied as they were in the Obama administration,” said Bolton, throwing shade on the previous government.

The big change, beyond the rehashing of old policies and principles, was the tearing up of an Obama-era presidential directive, known as PPD-20, which put restrictions on the government’s cyberweapons. Those classified rules were removed a month ago, the Wall Street Journal reported, described at the time as an “offensive step forward” by an administration official briefed on the plan.

In other words, it’ll give the government greater authority to hit back at targets seen as active cyberattackers — like Russia, North Korea, and Iran — all of which have been implicated in cyberattacks against the US in the recent past.

Any rhetoric that ramps up the threat of military action or considers the use of force — whether in the real world or in cyberspace — is all too often met with criticism, amid concerns of rising tensions. This time, not everyone hated it. Even ardent critics of the Trump administration, like Sen. Mark Warner, said the new cyber strategy contained “important and well-established cyber priorities.”

The Obama administration was long criticized for being too slow and timid in responding to threats — like North Korea’s WannaCry attack and Russian disinformation campaigns. Some former officials pushed back, saying the obstacle to responding aggressively to a foreign cyberattack was not the policy, but the inability of agencies to deliver a forceful response.

Kate Charlet, a former government cyber policy chief, said the policy’s “chest-thumping” rhetoric is forgivable so long as it doesn’t mark an escalation in tactics.

“I felt keenly the Department’s frustration over the challenges in taking even reasonable actions to defend itself and the United States in cyberspace,” she said. “I have since worried that the pendulum would swing too far in the other direction, increasing the risk of ill-considered operations, borne more of frustration than sensibility.”

Trump’s new cyber strategy ratchets up the rhetoric, but it doesn’t mean the government will suddenly become trigger-happy overnight. While the government now has greater powers to strike back, it may not have to use them if the policy serves as the deterrent it’s meant to be.

Sen. Harris tells federal agencies to get serious about facial recognition risks

Facial recognition technology presents myriad opportunities as well as risks, but it seems like the government tends to only consider the former when deploying it for law enforcement and clerical purposes. Senator Kamala Harris (D-CA) has written the Federal Bureau of Investigation, Federal Trade Commission, and Equal Employment Opportunity Commission telling them they need to get with the program and face up to the very real biases and risks attending the controversial tech.

In three letters provided to TechCrunch (and embedded at the bottom of this post), Sen. Harris, along with several other notable legislators, pointed out recent research showing how facial recognition can produce or reinforce bias, or otherwise misfire. This must be considered and accommodated in the rules, guidance, and applications of federal agencies.

Other lawmakers and authorities have sent letters to various companies and CEOs or held hearings, but representatives for Sen. Harris explained that there is a need to advance the issue within the government as well.

Sen. Harris at a recent hearing.

Attention paid to agencies like the FTC and EEOC that are “responsible for enforcing fairness” is “a signal to companies that the cop on the beat is paying attention, and an indirect signal that they need to be paying attention too. What we’re interested in is the fairness outcome rather than one particular company’s practices.”

If this research and the possibility of poorly controlled AI systems aren’t considered in the creation of rules and laws, or in the applications and deployments of the technology, serious harm could ensue. Not just positive harm, such as the misidentification of a suspect in a crime, but negative harm, such as calcifying biases in data and business practices in algorithmic form and depriving those affected by the biases of employment or services.

“While some have expressed hope that facial analysis can help reduce human biases, a growing body of evidence indicates that it may actually amplify those biases,” the letter to the EEOC reads.

Here Sen. Harris, joined by Senators Patty Murray (D-WA) and Elizabeth Warren (D-MA), expresses concern over the growing automation of the employment process. Recruitment is a complex process, and AI-based tools are being brought in at every stage, so this is not a theoretical problem. As the letter reads:

Suppose, for example, that an African American woman seeks a job at a company that uses facial analysis to assess how well a candidate’s mannerisms are similar to those of its top managers.

First, the technology may interpret her mannerisms less accurately than a white male candidate.

Second, if the company’s top managers are homogeneous, e.g., white and male, the very characteristics being sought may have nothing to do with job performance but are instead artifacts of belonging to this group. She may be as qualified for the job as a white male candidate, but facial analysis may not rate her as highly because her cues naturally differ.

Third, if a particular history of biased promotions led to homogeneity in top managers, then the facial recognition analysis technology could encode and then hide this bias behind a scientific veneer of objectivity.

If that sounds like a fantasy use of facial recognition, you probably haven’t been paying close enough attention. Besides, even if it’s still rare, it makes sense to consider these things before they become widespread problems, right? The idea is to identify issues inherent to the technology.

“We request that the EEOC develop guidelines for employers on the fair use of facial analysis technologies and how this technology may violate anti-discrimination law,” the Senators ask.

A set of questions also follows (as it does in each of the letters): have there been any complaints along these lines, or are there any obvious problems with the tech under current laws? If facial technology were to become mainstream, how should it be tested, and how would the EEOC validate that testing? Sen. Harris and the others request a timeline of how the Commission plans to look into this by September 28.

Next on the list is the FTC. This agency is tasked with identifying and punishing unfair and deceptive practices in commerce and advertising; Sen. Harris asserts that the purveyors of facial recognition technology may be considered in violation of FTC rules if they fail to test or account for serious biases in their systems.

“Developers rarely if ever test and then disclose biases in their technology,” the letter reads. “Without information about the biases in a technology or the legal and ethical risks attendant to using it, good faith users may be unintentionally and unfairly engaging in discrimination. Moreover, failure to disclose these biases to purchasers may be deceptive under the FTC Act.”

Another example is offered:

Consider, for example, a situation in which an African American female in a retail store is misidentified as a shoplifter by a biased facial recognition technology and is falsely arrested based on this information. Such a false arrest can cause trauma and substantially injure her future housing, employment, credit, and other opportunities.

Or, consider a scenario in which a young man with a dark complexion is unable to withdraw money from his own bank account because his bank’s ATM uses facial recognition technology that does not identify him as their customer.

Again, this is very far from fantasy. On stage at Disrupt just a couple of weeks ago, Chris Ategeka of UCOT and Timnit Gebru of Microsoft Research discussed several very real problems faced by people of color interacting with AI-powered devices and processes.

The FTC actually had a workshop on the topic back in 2012. But, amazing as it sounds, this workshop did not consider the potential biases on the basis of race, gender, age, or other metrics. The agency certainly deserves credit for addressing the issue early, but clearly the industry and topic have advanced and it is in the interest of the agency and the people it serves to catch up.

The letter ends with questions and a deadline rather like those for the EEOC: have there been any complaints? How will the agency assess and address potential biases? Will it issue “a set of best practices on the lawful, fair, and transparent use of facial analysis?” The letter is cosigned by Senators Richard Blumenthal (D-CT), Cory Booker (D-NJ), and Ron Wyden (D-OR).

Last is the FBI, over which Sen. Harris has something of an advantage: the Government Accountability Office issued a report on the very topic of facial recognition tech that had concrete recommendations for the Bureau to implement. What Harris wants to know is, what have they done about these, if anything?

“Although the GAO made its recommendations to the FBI over two years ago, there is no evidence that the agency has acted on those recommendations,” the letter reads.

The GAO had three major recommendations. Briefly summarized: do some serious testing of the Next Generation Identification-Interstate Photo System (NGI-IPS) to make sure it does what they think it does, follow that with annual testing to make sure it’s meeting needs and operating as intended, and audit external facial recognition programs for accuracy as well.

“We are also eager to ensure that the FBI responds to the latest research, particularly research that confirms that face recognition technology underperforms when analyzing the faces of women and African Americans,” the letter continues.

The list of questions here is largely in line with the GAO’s recommendations, merely asking the FBI to indicate whether and how it has complied with them. Has it tested NGI-IPS for accuracy in realistic conditions? Has it tested for performance across races, skin tones, genders, and ages? If not, why not, and when will it? And in the meantime, how can it justify usage of a system that hasn’t been adequately tested, and in fact performs poorest on the targets it is most frequently loosed upon?
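The per-demographic testing the GAO recommends is straightforward to express: score identification results separately for each group and report the gap between the best- and worst-served groups. A minimal sketch with made-up records — the group labels and match data below are hypothetical, not NGI-IPS output:

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic_group, predicted_id, true_id).
results = [
    ("group_a", "id1", "id1"), ("group_a", "id2", "id2"),
    ("group_a", "id3", "id9"), ("group_b", "id4", "id4"),
    ("group_b", "id5", "id8"), ("group_b", "id6", "id7"),
]

def accuracy_by_group(records):
    """Identification accuracy per demographic group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += int(predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}

scores = accuracy_by_group(results)
# The headline number for an audit: the spread between the best- and
# worst-served groups.
gap = max(scores.values()) - min(scores.values())
```

An annual test of this shape, run on realistic conditions, is essentially what the senators are asking the FBI to demonstrate it has done.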

The FBI letter, which has a deadline for response of October 1, is cosigned by Sen. Booker and Cedric Richmond, Chair of the Congressional Black Caucus.

These letters are just a part of what certainly ought to be a government-wide plan to inspect and understand new technology and how it is being integrated with existing systems and agencies. The federal government moves slowly, even at its best, and if it is to avoid or help mitigate real harm resulting from technologies that would otherwise go unregulated it must start early and update often.


You can find the letters in full below.

EEOC:

SenHarris – EEOC Facial Rec… on Scribd

FTC:

SenHarris – FTC Facial Reco… on Scribd

FBI:

SenHarris – FBI Facial Reco… on Scribd

Facebook is hiring a director of human rights policy to work on “conflict prevention” and “peace-building”

Facebook is advertising for a human rights policy director to join its business, located either at its Menlo Park HQ or in Washington DC — with “conflict prevention” and “peace-building” among the listed responsibilities.

In the job ad, Facebook writes that as the reach and impact of its various products continues to grow “so does the responsibility we have to respect the individual and human rights of the members of our diverse global community”, saying it’s:

… looking for a Director of Human Rights Policy to coordinate our company-wide effort to address human rights abuses, including by both state and non-state actors. This role will be responsible for: (1) Working with product teams to ensure that Facebook is a positive force for human rights and apply the lessons we learn from our investigations, (2) representing Facebook with key stakeholders in civil society, government, international institutions, and industry, (3) driving our investigations into and disruptions of human rights abusers on our platforms, and (4) crafting policies to counteract bad actors and help us ensure that we continue to operate our platforms consistent with human rights principles.

Among the minimum requirements for the role, Facebook lists experience “working in developing nations and with governments and civil society organizations around the world”.

It adds that “global travel to support our international teams is expected”.

The company has faced fierce criticism in recent years over its failure to take greater responsibility for the spread of disinformation and hate speech on its platform, especially in international markets it has targeted for business growth via its Internet.org initiative, which seeks to get more people ‘connected’ to the Internet (and thus to Facebook).

More connections means more users for Facebook’s business and growth for its shareholders. But the costs of that growth have been cast into sharp relief over the past several years as the human impact of handing millions of people lacking in digital literacy some very powerful social sharing tools — without a commensurately large investment in local education programs (or even in moderating and policing Facebook’s own platform) — has become all too clear.

In Myanmar, Facebook’s tools have been used to spread hate and accelerate ethnic cleansing and the targeting of political critics of authoritarian governments — earning the company widespread condemnation, including a rebuke from the UN earlier this year which blamed the platform for accelerating ethnic violence against Myanmar’s Muslim minority.

In the Philippines Facebook also played a pivotal role in the election of president Rodrigo Duterte — who now stands accused of plunging the country into its worst human rights crisis since the dictatorship of Ferdinand Marcos in the 1970s and 80s.

While in India the popularity of the Facebook-owned WhatsApp messaging platform has been blamed for accelerating the spread of misinformation — leading to mob violence and the deaths of several people.

Facebook famously failed even to spot mass manipulation campaigns going on in its own backyard — when in 2016 Kremlin-backed disinformation agents injected masses of anti-Clinton, pro-Trump propaganda into its platform and garnered hundreds of millions of American voters’ eyeballs at a bargain basement price.

So it’s hardly surprising the company has been equally naive in markets it understands far less. Though also hardly excusable — given all the signals it has access to.

In Myanmar, for example, local organizations that are sensitive to the cultural context repeatedly complained to Facebook that it lacked Burmese-speaking staff — complaints that apparently fell on deaf ears for the longest time.

The cost to American society of social media-enabled political manipulation and increased social division is certainly very high. The costs of the weaponization of digital information in markets such as Myanmar look incalculable.

In the Philippines Facebook also indirectly has blood on its hands — having provided services to the Duterte government to help it make more effective use of its tools. This same government is now waging a bloody ‘war on drugs’ that Human Rights Watch says has claimed the lives of around 12,000 people, including children.

Facebook’s job ad for a human rights policy director includes the pledge that “we’re just getting started” — referring to its stated mission of helping people “build stronger communities”.

But when you consider the impact its business decisions have already had in certain corners of the world it’s hard not to read that line with a shudder.

Citing the UN Guiding Principles on Business and Human Rights (and “our commitments as a member of the Global Network Initiative”), Facebook writes that its product policy team is dedicated to “understanding the human rights impacts of our platform and to crafting policies that allow us both to act against those who would use Facebook to enable harm, stifle expression, and undermine human rights, and to support those who seek to advance rights, promote peace, and build strong communities”.

Clearly it has an awful lot of “understanding” to do on this front. And hopefully it will now move fast to understand the impact of its own platform, circa fifteen years into its great ‘society reshaping experience’, and prevent Facebook from being repeatedly used to trash human rights.

As well as representing the company in meetings with politicians, policymakers, NGOs and civil society groups, Facebook says the new human rights director will work on formulating internal policies governing user, advertiser, and developer behavior on Facebook. “This includes policies to encourage responsible online activity as well as policies that deter or mitigate the risk of human rights violations or the escalation of targeted violence,” it notes. 

The director will also work with internal public policy, community ops and security teams to try to spot and disrupt “actors that seek to misuse our platforms and target our users” — while also working to support “those using our platforms to foster peace-building and enable transitional justice”.

So you have to wonder how, for example, Holocaust denial continuing to be protected speech on Facebook will square with the stated mission of the human rights policy director.

At the same time, Facebook is currently hiring for a public policy manager in Francophone Africa — who it writes can “combine a passion for technology’s potential to create opportunity and to make Africa more open and connected, with deep knowledge of the political and regulatory dynamics across key Francophone countries in Africa”.

That job ad does not explicitly reference human rights — talking only about “interesting public policy challenges… including privacy, safety and security, freedom of expression, Internet shutdowns, the impact of the Internet on economic growth, and new opportunities for democratic engagement”.

As well as “new opportunities for democratic engagement”, among the role’s other listed responsibilities is working with Facebook’s Politics & Government team to “promote the use of Facebook as a platform for citizen and voter engagement to policymakers and NGOs and other political influencers”.

So here, in a second policy job, Facebook looks to be continuing its ‘business as usual’ strategy of pushing for more political activity to take place on Facebook.

And if Facebook wants an accelerated understanding of human rights issues around the world, it might be better advised to take a more joined-up approach to human rights across its own policy staff, and at least include human rights among the listed responsibilities of all the policy shapers it’s looking to hire.

Why the Pentagon’s $10 billion JEDI deal has cloud companies going nuts

By now you’ve probably heard of the Defense Department’s massive winner-take-all $10 billion cloud contract dubbed the Joint Enterprise Defense Infrastructure (or JEDI for short).
Star Wars references aside, this contract is huge, even by government standards. The Pentagon would like a single cloud vendor to build out its enterprise cloud, believing, rightly or wrongly, that this is the best approach to maintain focus and control of its cloud strategy.

Department of Defense (DOD) spokesperson Heather Babb tells TechCrunch the department sees a lot of upside by going this route. “Single award is advantageous because, among other things, it improves security, improves data accessibility and simplifies the Department’s ability to adopt and use cloud services,” she said.

Whichever company is chosen to fill this contract, the deal is about modernizing the department’s computing infrastructure and its combat forces for a world of IoT, artificial intelligence and big data analysis, while consolidating some of its older infrastructure. “The DOD Cloud Initiative is part of a much larger effort to modernize the Department’s information technology enterprise. The foundation of this effort is rationalizing the number of networks, data centers and clouds that currently exist in the Department,” Babb said.

Setting the stage

It’s possible that whoever wins this DOD contract could have a leg up on other similar projects in the government. After all, it’s not easy to pass muster on security and reliability with the military, and if one company can prove itself capable in this regard, it could be set up well beyond this one deal.

As Babb explains it though, it’s really about figuring out the cloud long-term. “JEDI Cloud is a pathfinder effort to help DOD learn how to put in place an enterprise cloud solution and a critical first step that enables data-driven decision making and allows DOD to take full advantage of applications and data resources,” she said.

Photo: Mischa Keijser for Getty Images

The single-vendor component, however, could explain why the various cloud vendors who are bidding have lost their minds a bit over it — everyone except Amazon, that is, which has been mostly silent, apparently happy to let the process play out.

The belief among the various other players is that Amazon is in the driver’s seat for this bid, possibly because it delivered a $600 million cloud contract for the government in 2013, standing up a private cloud for the CIA. It was a big deal back in the day on a couple of levels. First of all, it was the first large-scale example of an intelligence agency using a public cloud provider. And of course the amount of money was pretty impressive for the time — not $10 billion impressive, but a nice contract.

For what it’s worth, Babb dismisses such talk, saying that the process is open and no vendor has an advantage. “The JEDI Cloud final RFP reflects the unique and critical needs of DOD, employing the best practices of competitive pricing and security. No vendors have been pre-selected,” she said.

Complaining loudly

As the Pentagon moves toward selecting its primary cloud vendor for the next decade, Oracle in particular has been complaining to anyone who will listen that Amazon has an unfair advantage in the deal, going so far as to file a formal complaint last month, even before bids were in and long before the Pentagon made its choice.

Photo: mrdoomits for Getty Images (cropped)

Somewhat ironically, given its own past business model, Oracle complained, among other things, that the deal would lock the department into a single platform over the long term. It also questioned whether the bidding process adhered to procurement regulations for this kind of deal, according to a report in the Washington Post. In April, Bloomberg reported that co-CEO Safra Catz complained directly to the president that the deal was tailor-made for Amazon.

Microsoft hasn’t been happy about the one-vendor idea either, pointing out that by limiting itself to a single vendor, the Pentagon could be missing out on innovation from the other companies in the back-and-forth world of the cloud market, especially when we’re talking about a contract that stretches out for so long.

As Microsoft’s Leigh Madden told TechCrunch in April, the company is prepared to compete, but doesn’t necessarily see a single vendor approach as the best way to go. “If the DOD goes with a single award path, we are in it to win, but having said that, it’s counter to what we are seeing across the globe where 80 percent of customers are adopting a multi-cloud solution,” he said at the time.

He has a valid point, but the Pentagon seems hell-bent on going forward with the single-vendor idea, even though the cloud offers much greater interoperability than the proprietary stacks of the 1990s (of which Oracle and Microsoft were prime examples at the time).

Microsoft has its own large DOD contract in place for almost a billion dollars, although this deal from 2016 was for Windows 10 and related hardware for DOD employees, rather than a pure cloud contract like Amazon has with the CIA.

It also recently released Azure Stack for government, a product that lets government customers install a private version of Azure with all the same tools and technologies found in the public version, and one that could prove attractive as part of its JEDI bid.

Cloud market dynamics

It’s also possible that Amazon’s control of the largest chunk of the cloud infrastructure market might play here at some level. While Microsoft has been coming on fast, it’s still about a third of Amazon’s size in terms of market share, as Synergy Research’s Q4 2017 data clearly shows.

The market hasn’t shifted dramatically since this data came out. While market share alone wouldn’t be a deciding factor, Amazon came to market first and is much bigger in terms of market share than the next four providers combined, according to Synergy. That could explain why the other players are lobbying so hard and see Amazon as the biggest threat here: it’s probably the biggest threat in almost every deal where they come up against each other, due to its sheer size.

Consider also that Oracle, which seems to be complaining the loudest, was rather late to the cloud after years of dismissing it. They could see JEDI as a chance to establish a foothold in government that they could use to build out their cloud business in the private sector too.

10 years might not be 10 years

It’s worth pointing out that the actual deal has the complexity and opt-out clauses of a sports contract, with just an initial two-year term guaranteed. A couple of three-year options follow, with a final two-year option closing things out. The idea is that if this turns out to be a bad approach, the Pentagon has various points where it can back out.

Photo: Henrik Sorensen for Getty Images (cropped)

In spite of the winner-take-all approach of JEDI, Babb indicated that the agency will continue to work with multiple cloud vendors no matter what happens. “DOD has and will continue to operate multiple clouds and the JEDI Cloud will be a key component of the department’s overall cloud strategy. The scale of our missions will require DOD to have multiple clouds from multiple vendors,” she said.

The DOD accepted final bids in August, then extended the deadline for Requests for Proposal to October 9th. Unless the deadline gets extended again, we’re probably going to finally hear who the lucky company is sometime in the coming weeks, and chances are there is going to be a lot of whining and continued maneuvering from the losers when that happens.

California is ‘launching our own damn satellite’ to track pollution, with help from Planet

California plans to launch a satellite to monitor pollution in the state and contribute to climate science, Governor Jerry Brown announced today. The state is partnering with satellite imagery purveyor Planet to create a custom craft to “pinpoint – and stop – destructive emissions with unprecedented precision, on a scale that’s never been done before.”

Governor Brown made the announcement in the closing remarks of the Global Climate Action Summit in San Francisco, echoing a pledge made two years ago to scientists at the American Geophysical Union’s 2016 meeting.

“With science still under attack and the climate threat growing, we’re launching our own damn satellite,” Brown said today.

Planet, which has launched hundreds of satellites in the last few years in order to provide near-real-time imagery of practically anywhere on Earth, will develop and operate the satellite. The plan is to equip it with sensors that can detect pollutants at their point sources, be they artificial or natural. That kind of direct observation enables direct action.

Technical details of the satellite are to be announced as the project solidifies. We can probably expect something like a 6U CubeSat loaded with instruments focused on detecting certain gases and particulates. An orbit with the satellite passing across the whole state along its north/south axis seems most likely; a single craft sitting in one place probably wouldn’t offer adequate coverage. That said, multiple satellites are also a stated possibility.

“These satellite technologies are part of a new era of environmental innovation that is supercharging our ability to solve problems,” said Fred Krupp, president of the Environmental Defense Fund. “They won’t cut emissions by themselves, but they will make invisible pollution visible and generate the transparent, actionable, data we need to protect our health, our environment and our economies.”

The EDF is launching its own satellite to that end (MethaneSAT), but will also be collaborating with California in the creation of a shared Climate Data Partnership to make sure the data from these platforms is widely accessible.

More partners are expected to join up now that the endeavor is public, though none were named in the press release or in response to my questions on the topic to Planet. The funding, too, is something of an open question.

The effort is still a ways off from launch — these things take time — but Planet has certainly proven capable of designing and launching on a relatively short timeframe. In fact, it just opened up a brand new facility in San Francisco dedicated to pumping out new satellites.

Senator claps back after Ajit Pai calls California’s net neutrality bill ‘radical’ and ‘illegal’

FCC Chairman Ajit Pai has provoked a biting senatorial response from California after calling the “nanny state’s” new net neutrality legislation “radical,” “anti-consumer,” “illegal” and “burdensome.” Senator Scott Wiener (D-CA), in response, said Pai has “abdicated his responsibility to ensure an open internet” and that the FCC lacks the authority to intervene.

The political flame war was kicked off this morning in Pai’s remarks at the Maine Heritage Policy Center, a free market think tank. You can read them in full here, but I’ve quoted the relevant part below:

Of course, those who demand greater government control of the Internet haven’t given up. Their latest tactic is pushing state governments to regulate the Internet. The most egregious example of this comes from California. Last month, the California state legislature passed a radical, anti-consumer Internet regulation bill that would impose restrictions even more burdensome than those adopted by the FCC in 2015.

If this law is signed by the Governor, what would it do? Among other things, it would prevent Californian consumers from buying many free-data plans. These plans allow consumers to stream video, music, and the like exempt from any data limits. They have proven enormously popular in the marketplace, especially among lower-income Americans. But nanny-state California legislators apparently want to ban their constituents from having this choice. They have met the enemy, and it is free data.

The broader problem is that California’s micromanagement poses a risk to the rest of the country. After all, broadband is an interstate service; Internet traffic doesn’t recognize state lines. It follows that only the federal government can set regulatory policy in this area. For if individual states like California regulate the Internet, this will directly impact citizens in other states.

Among other reasons, this is why efforts like California’s are illegal.

The bogeyman of banning zero rating plans has been raised again and again, but everyone should understand now that the whole thing is a sham — just another ploy by telecoms to parcel out data the way they choose.

The legal question is far from decided, but Pai has been crowing about a recent court ruling for a week or so now, despite the fact that it has very little to do with net neutrality. Ars Technica went into detail on this ruling; the takeaway is that while it is possible that the FCC could preempt state law on information services in some cases, it’s not clear at all that it has any authority whatsoever to do so with broadband services. Ironically, that’s because Pai’s FCC drastically reduced the FCC’s jurisdiction with its reclassification of broadband in Restoring Internet Freedom.

At any rate, more consequential legal challenges and questions are still in the works, so Pai’s jubilation is somewhat premature.

“The Internet should be run by engineers, entrepreneurs, and technologists, not lawyers, bureaucrats, and politicians,” he concluded. Odd then that those very engineers, entrepreneurs and technologists almost unanimously oppose his policy, while he — literally seconds earlier — justified that policy via the world of lawyers, bureaucrats and politicians.

Senator Wiener was quick to issue a correction to the Chairman’s remarks. In an official statement, he explained that “Unlike Pai’s FCC, California isn’t run by the big telecom and cable companies.” The statement continued:

SB 822 is necessary and legal because Chairman Pai abdicated his responsibility to ensure an open internet. Since the FCC says it no longer has any authority to protect an open internet, it’s also the case that the FCC lacks the legal power to preempt states from protecting their residents and economy.

When Verizon was caught throttling the data connection of a wildfire fighting crew in California, Chairman Pai said nothing and did nothing. That silence says far more than his words today.

SB 822 is supported by a broad coalition of consumer groups, groups advocating for low income people, small and mid-size technology companies, labor unions, and President Obama’s FCC chairman, Tom Wheeler. I’ll take that support over Ajit Pai any day of the week.

The law in question has been approved by the state legislature, but has yet to be signed by Governor Jerry Brown, who has another two weeks to consider it.

UK’s mass surveillance regime violated human rights law, finds ECHR

In another blow to the UK government’s record on bulk data handling for intelligence purposes, the European Court of Human Rights (ECHR) has ruled that state surveillance practices violated human rights law.

Arguments against the UK intelligence agencies’ bulk collection and data sharing practices were heard by the court in November last year.

In today’s judgment the ECHR found that only some aspects of the UK’s surveillance regime violate human rights law. So it’s not all bad news for the government — which has faced a barrage of legal actions (and quite a few black marks against its spying practices in recent years) ever since its love affair with mass surveillance was revealed and denounced by NSA whistleblower Edward Snowden, back in 2013.

The judgement reinforces a sense that the government has been seeking to push as close to the legal line as possible on surveillance, and sometimes stepping over it — reinforcing earlier strikes against legislation for not setting tight enough boundaries to surveillance powers, and likely providing additional fuel for fresh challenges.

The complaints before the ECHR focused on three different surveillance regimes: 1) The bulk interception of communications (aka ‘mass surveillance’); 2) Intelligence sharing with foreign governments; and 3) The obtaining of communications data from communications service providers.

The challenge actually combines three cases, with the action brought by a coalition of civil and human rights campaigners, including the American Civil Liberties Union, Amnesty International, Big Brother Watch, Liberty, Privacy International and nine other human rights and journalism groups based in Europe, Africa, Asia and the Americas.

The Chamber judgment from the ECHR found, by a majority of five votes to two, that the UK’s bulk interception regime violates Article 8 of the European Convention on Human Rights (a right to respect for private and family life/communications) — on the grounds that “there was insufficient oversight both of the selection of Internet bearers for interception and the filtering; search and selection of intercepted communications for examination; and the safeguards governing the selection of ‘related communications data’ for examination were inadequate”.

The judges did not find bulk collection itself to be in violation of the convention but noted that such a regime must respect criteria set down in case law.

In an even more pronounced majority vote, the Chamber found by six votes to one that the UK government’s regime for obtaining data from communications service providers violated Article 8 as it was “not in accordance with the law”.

Both the bulk interception regime and the regime for obtaining communications data from communications service providers were also deemed to have violated Article 10 of the Convention (the right to freedom of expression and information), as the judges found there were insufficient safeguards in respect of confidential journalistic material.

However the Chamber did not rule against the government in two other components of the case — finding that the regime for sharing intelligence with foreign governments did not violate either Article 8 or Article 10.

The court also unanimously rejected complaints made by the third set of applicants under Article 6 (right to a fair trial), about the domestic procedure for challenging secret surveillance measures, and under Article 14 (prohibition of discrimination).

The complaints in this case were lodged prior to the UK legislating for a new surveillance regime, the 2016 Investigatory Powers Act, so in coming to a judgement the Chamber was considering the oversight regime at the time (and in the case of points 1 and 3 above that’s the Regulation of Investigatory Powers Act 2000).

RIPA has since been superseded by IPA but, as noted above, today’s ruling will likely fuel ongoing human rights challenges to the latter — which the government has already been ordered to amend by other courts on human rights grounds.

Nor is it the only UK surveillance legislation judged to fall foul on that front. A few years ago UK judges agreed with a similar legal challenge to emergency surveillance legislation that predates the IPA — ruling in 2015 that DRIPA was unlawful under human rights law, a verdict the UK Court of Appeal upheld earlier this year.

Also in 2015, the intelligence agencies’ own oversight court, the IPT, found multiple violations following challenges to aspects of their historical surveillance operations, after these had been made public by the Snowden revelations.

Such judgements did not stop the government pushing on with the IPA, though — and it went on to cement bulk collection at the core of its surveillance modus operandi at the end of 2016.

Among the most controversial elements of the IPA is a requirement that communications service providers collect and retain logs on the web activity of the digital services accessed by all users for 12 months; state power to require a company to remove encryption, or limit the rollout of end-to-end encryption on a future service; and state powers to hack devices, networks and services, including bulk hacking on foreign soil. It also allows the security agencies to maintain large databases of personal information on U.K. citizens, including individuals suspected of no crime.

On the safeguards front the government legislated for what it claimed was a “double lock” authorization process for interception warrants — bringing the judiciary into signing off on intercept warrants for the first time in the U.K., alongside senior ministers. However this does not regulate the collection or accessing of web activity data that’s blanket-retained on all users.

In April this shiny new surveillance regime was also dealt a blow in UK courts — with judges ordering the government to amend the legislation to narrow how and why retained metadata could be accessed, giving ministers a deadline of November 1 to make the necessary changes.

In that case the judges also did not rule against bulk collection in general — declining to find that the state’s current data retention regime is unlawful on the grounds that it constituted “general and indiscriminate” retention of data. (For its part the government has always argued its bulk collection activities do not constitute blanket retention.)

And today’s ECHR ruling further focuses attention on the safeguards placed around bulk collection programs — having found the UK regime lacked sufficient monitoring to be lawful (but not that bulk collection itself is unlawful by default).

Opponents of the current surveillance regime will be busily parsing the ruling to find fresh fronts to attack.

It’s not the first time the ECHR has looked at bulk interception. Most recently, in June 2018, it deemed that Swedish legislation and practice in the field of signals intelligence did not violate European human rights law. Among its reasoning was that it found the Swedish system provided “adequate and sufficient guarantees against arbitrariness and the risk of abuse”.

However it said the Big Brother Watch and Others vs United Kingdom case being ruled upon today is the first case in which it specifically considered the extent of the interference with a person’s private life that could result from the interception and examination of communications data (as opposed to content).

In a Q&A about today’s judgement, the court notes that it “expressly recognised” the severity of threats facing states, and also how advancements in technology have “made it easier for terrorists and criminals to evade detection on the Internet”.

“It therefore held that States should enjoy a broad discretion in choosing how best to protect national security. Consequently, a State may operate a bulk interception regime if it considers that it is necessary in the interests of national security. That being said, the Court could not ignore the fact that surveillance regimes have the potential to be abused, with serious consequences for individual privacy. In order to minimise this risk, the Court has previously identified six minimum safeguards which all interception regimes must have,” it writes.

“The safeguards are that the national law must clearly indicate: the nature of offences which may give rise to an interception order; a definition of the categories of people liable to have their communications intercepted; a limit on the duration of interception; the procedure to be followed for examining, using and storing the data obtained; the precautions to be taken when communicating the data to other parties; and the circumstances in which intercepted data may or must be erased or destroyed.”

(Additional elements the court says it considered in an earlier surveillance case, Roman Zakharov v. Russia, also to determine whether legislation breached Article 8, included “arrangements for supervising the implementation of secret surveillance measures, any notification mechanisms and the remedies provided for by national law”.)

Commenting on today’s ruling in a statement, Megan Goulding, a lawyer for Liberty, said: “This is a major victory for the rights and freedom of people in the UK. It shows that there is — and should be — a limit to the extent that states can spy on their citizens.

“Police and intelligence agencies need covert surveillance powers to tackle the threats we face today — but the court has ruled that those threats do not justify spying on every citizen without adequate protections. Our government has built a surveillance regime more extreme than that of any other democratic nation, abandoning the very rights and freedoms terrorists want to attack. It can and must give us an effective, targeted system that protects our safety, data security and fundamental rights.”

A Liberty spokeswoman also told us it will continue its challenge to IPA in the UK High Court, adding: “We continue to believe that mass surveillance can never be compliant in a free, rights-respecting democracy.”

Also commenting in a statement, Silkie Carlo, director of Big Brother Watch, said: “This landmark judgment confirming that the UK’s mass spying breached fundamental rights vindicates Mr Snowden’s courageous whistleblowing and the tireless work of Big Brother Watch and others in our pursuit for justice.

“Under the guise of counter-terrorism, the UK has adopted the most authoritarian surveillance regime of any Western state, corroding democracy itself and the rights of the British public. This judgment is a vital step towards protecting millions of law-abiding citizens from unjustified intrusion. However, since the new Investigatory Powers Act arguably poses an ever greater threat to civil liberties, our work is far from over.”

A spokesperson for Privacy International told us it’s considering taking the case to the ECHR’s Grand Chamber.

Also commenting in a supporting statement, Antonia Byatt, director of English PEN, added: “This judgment confirms that the British government’s surveillance practices have violated not only our right to privacy, but our right to freedom of expression too. Excessive surveillance discourages whistle-blowing and discourages investigative journalism. The government must now take action to guarantee our freedom to write and to read freely online.”

We’ve reached out to the Home Office for comment from the UK government.

On intelligence sharing between governments, which the court had not previously considered, the judges found the procedure for requesting either the interception or the conveyance of intercept material from foreign intelligence agencies to have been set out with “sufficient clarity in the domestic law and relevant code of practice”, noting: “In particular, material from foreign agencies could only be searched if all the requirements for searching material obtained by the UK security services were fulfilled.”

It also found “no evidence of any significant shortcomings in the application and operation of the regime, or indeed evidence of any abuse” — hence finding the intelligence sharing regime did not violate Article 8.

On the portion of the challenge concerning complaints that UK intelligence agencies’ oversight court, the IPT, lacked independence and impartiality, the court disagreed — finding that the tribunal had “extensive power to consider complaints concerning wrongful interference with communications, and those extensive powers had been employed in the applicants’ case to ensure the fairness of the proceedings”.

“Most notably, the IPT had access to open and closed material and it had appointed Counsel to the Tribunal to make submissions on behalf of the applicants in the closed proceedings,” it also writes.

In addition, it said it accepted the government’s argument that in order to ensure the efficacy of the secret surveillance regime restrictions on the applicants’ procedural rights had been “both necessary and proportionate and had not impaired the essence of their Article 6 rights”.

On the complaints under Article 14, in conjunction with Articles 8 and 10 — that those outside the UK were disproportionately likely to have their communications intercepted, as the law only provided additional safeguards to people known to be in Britain — the court also disagreed, rejecting this complaint as manifestly ill-founded.

“The applicants had not substantiated their argument that people outside the UK were more likely to have their communications intercepted. In addition, any possible difference in treatment was not due to nationality but to geographic location, and was justified,” it writes. 

10 critical points from Zuckerberg’s epic security manifesto

Mark Zuckerberg wants you to know he’s trying his damnedest to fix Facebook before it breaks democracy. Tonight he posted a 3,260-word battle plan for fighting election interference. Amidst drilling through Facebook’s strategy and progress, he slips in several notable passages revealing his own philosophy.

Zuckerberg has cast off his premature skepticism and is ready to command the troops. He sees Facebook’s real identity policy as a powerful weapon for truth other social networks lack, but that would be weakened if Instagram and WhatsApp were split off by regulators. He’s done with the finger-pointing and wants everyone to work together on solutions. And he’s adopted a touch of cynicism that could open his eyes and help him predict how people will misuse his creation.

Here are the most important parts of Zuckerberg’s security manifesto:

Zuckerberg embraces his war-time tactician role

“While we want to move quickly when we identify a threat, it’s also important to wait until we uncover as much of the network as we can before we take accounts down to avoid tipping off our adversaries, who would otherwise take extra steps to cover their remaining tracks. And ideally, we time these takedowns to cause the maximum disruption to their operations.”

The fury he unleashed on Google+, Snapchat, and Facebook’s IPO-killer is now aimed at election attackers

“These are incredibly complex and important problems, and this has been an intense year. I am bringing the same focus and rigor to addressing these issues that I’ve brought to previous product challenges like shifting our services to mobile.”

Balancing free speech and security is complicated and expensive

“These issues are even harder because people don’t agree on what a good outcome looks like, or what tradeoffs are acceptable to make. When it comes to free expression, thoughtful people come to different conclusions about the right balances. When it comes to implementing a solution, certainly some investors disagree with my approach to invest so much in security.”

Putting Twitter and YouTube on blast for allowing pseudonymity…

“One advantage Facebook has is that we have a principle that you must use your real identity. This means we have a clear notion of what’s an authentic account. This is harder with services like Instagram, WhatsApp, Twitter, YouTube, iMessage, or any other service where you don’t need to provide your real identity.”

…While making an argument for why the Internet is more secure if Facebook isn’t broken up

“Fortunately, our systems are shared, so when we find bad actors on Facebook, we can also remove accounts linked to them on Instagram and WhatsApp as well. And where we can share information with other companies, we can also help them remove fake accounts too.”

Political ads aren’t a business, they’re supposedly a moral duty

“When deciding on this policy, we also discussed whether it would be better to ban political ads altogether. Initially, this seemed simple and attractive. But we decided against it — not due to money, as this new verification process is costly and so we no longer make any meaningful profit on political ads — but because we believe in giving people a voice. We didn’t want to take away an important tool many groups use to engage in the political process.”

Zuckerberg overruled staff to allow academic research on Facebook

“As a result of these controversies [like Cambridge Analytica], there was considerable concern amongst Facebook employees about allowing researchers to access data. Ultimately, I decided that the benefits of enabling this kind of academic research outweigh the risks. But we are dedicating significant resources to ensuring this research is conducted in a way that respects people’s privacy and meets the highest ethical standards.”

Calling on law enforcement to step up

“There are certain critical signals that only law enforcement has access to, like money flows. For example, our systems make it significantly harder to set up fake accounts or buy political ads from outside the country. But it would still be very difficult without additional intelligence for Facebook or others to figure out if a foreign adversary had set up a company in the US, wired money to it, and then registered an authentic account on our services and bought ads from the US.”

Instead of minimizing their own blame, the major players must unite forces

“Preventing election interference is bigger than any single organization. It’s now clear that everyone — governments, tech companies, and independent experts such as the Atlantic Council — need to do a better job sharing the signals and information they have to prevent abuse . . . The last point I’ll make is that we’re all in this together. The definition of success is that we stop cyberattacks and coordinated information operations before they can cause harm.”

The end of Zuckerberg’s utopic idealism

“One of the important lessons I’ve learned is that when you build services that connect billions of people across countries and cultures, you’re going to see all of the good humanity is capable of, and you’re also going to see people try to abuse those services in every way possible.”