State attorneys general to launch antitrust investigation into big tech companies, reports say

Attorneys general in more than a dozen states are preparing to open an antitrust investigation into the tech giants, the Wall Street Journal and the New York Times reported Monday, putting the spotlight on an industry that is already facing federal scrutiny.

The bipartisan group of attorneys general, drawn from as many as 20 states, is expected to formally launch the probe as soon as next month to assess whether tech companies are using their dominant market positions to hurt competition, the WSJ reported.

If confirmed, the move would follow the Department of Justice, which last month announced its own antitrust review of how online platforms grew to their enormous size and whether they are using that power to curb competition and stifle innovation. Earlier this year, the Federal Trade Commission formed a task force to monitor competition among tech platforms.

It wouldn’t be unprecedented for a group of states to take on a technology giant. In 1998, 20 states joined the Justice Department in suing Microsoft. The states could play a key role in building evidence and garnering public support for major investigations.

Because the tentacles of Google, Facebook, Amazon, and Apple reach so many industries, any investigation into them could last for years.

Apple and Google pointed the Times to their previous official statements on the matter, in which they argue that they have been highly innovative and have created an environment that benefits consumers. Amazon and Facebook did not comment.

Also on Monday, Joseph Simons, the chairman of the FTC, warned that Facebook’s planned effort to integrate Instagram and WhatsApp could stymie any attempt by the agency to break up the social media giant.

“If they’re maintaining separate business structures and infrastructure, it’s much easier to have a divestiture in that circumstance than in one where they’re completely enmeshed and all the eggs are scrambled,” Simons told the Financial Times.

Without evidence, Trump accuses Google of manipulating millions of votes

The president this morning lashed out at Google on Twitter, accusing the company of manipulating millions of votes in the 2016 election to sway it toward Hillary Clinton. The authority on which he bases this serious accusation, however, is little more than supposition in an old paper reheated by months-old congressional testimony.

Trump’s tweet this morning cited no paper at all, in fact, though he did tag conservative watchdog group Judicial Watch, perhaps asking it to investigate. It’s also unclear who he thinks should sue the company.

Coincidentally, Fox News had just mentioned the existence of such a report about five minutes earlier. Trump has also recently criticized Google and CEO Sundar Pichai over a variety of perceived slights.

In fact, the report was not “just issued,” and does not say what the president suggests it did. What both Fox and Trump appear to be referring to is a paper published in 2017 that described what the authors say was a bias in Google and other search engines during the run-up to the 2016 election.

If you’re wondering why you haven’t heard about this particular study, I can tell you why — it’s a very bad study. Its contents do not amount to anything, let alone evidence by which to accuse a major company of election interference.

The authors looked at search results for 95 people over the 25 days preceding the election and evaluated the first page of results for bias. They claim to have found, based on “crowdsourced” determinations of bias (a process they do not describe), that most search results, especially on Google, tended to be biased in favor of Clinton.

No data on these searches is provided, such as a sample query, its results and how they were determined to be biased. There’s no discussion, for example, of the fact that Google routinely and openly tailors search results based on a person’s previous searches, stated preferences, location and so on.

In fact, the “report,” written by Robert Epstein, lacks all the hallmarks of an ordinary research paper.

There is no abstract or introduction, no methods section showing the statistical work or defining terms, no discussion, no references. Without this basic information the document not only cannot be reviewed by peers or experts, but is indistinguishable from pure invention. Nothing in this paper can be verified in any way.

Epstein freely references himself, however: a single 2015 paper in PNAS on how search results could be deliberately manipulated to affect a voter looking for information on candidates, and the many, many opinion pieces he has written on the subject, frequently in far-right outlets like the Epoch Times and the Daily Caller, but also in nonpartisan ones like USA Today and Bloomberg Businessweek.

The numbers advanced in the study are completely without merit. Citing math he does not describe, Epstein says that “a pro-Clinton bias in Google’s search results would over time, shift at least 2.6 million votes to Clinton.” No mechanism or justification for this assertion is provided, except a highly theoretical one based on ideas and assumptions from his 2015 study, which had little in common with this one. The numbers are, essentially, made up.

In other words, this so-called report is nothing of the kind: a document with no scientific justification for its claims, written by someone who publishes anti-Google editorials almost monthly. It was not published in a journal of any kind, but simply posted online by a private nonprofit research organization called the American Institute for Behavioral Research and Technology, where Epstein is on staff and which appears to exist almost solely to promote his work — such as it is.

(In response to my inquiry, AIBRT said that it is not legally bound to reveal its donors and chooses not to, but stated that it does not accept “gifts that might cause the organization to bias its research projects in any way.”)

Lastly, in his paper, Epstein speculates that Google may have been manipulating the data his team was collecting for the report, citing differences between data from Gmail users and non-users, and choosing to throw away all of the former while still reporting on it:

As you can see, the search results seen by non-gmail users were far more biased than the results seen by gmail users. Perhaps Google identified our confidants through its gmail system and targeted them to receive unbiased results; we have no way to confirm this at present, but it is a plausible explanation for the pattern of results we found.

I leave it to the reader to judge the plausibility of this assertion.

If that were all, it would be more than enough. But Trump’s citation of this flimsy paper doesn’t even get the facts right. His assertion was that “Google manipulated from 2.6 million to 16 million votes for Hillary Clinton in 2016 Election,” and the report doesn’t even state that.

The source for this false claim appears to be Epstein’s recent appearance in front of the Senate Judiciary Committee in July. Here he received star treatment from Sen. Ted Cruz (R-TX), who asked him to share his expert opinion on the possibility of tech manipulation of voting. Cruz’s previous expert for this purpose was conservative radio talk show host Dennis Prager.

Again citing no data, studies or mechanisms whatsoever, Epstein described 2.6 million as a “rock-bottom minimum” of votes that Google, Facebook, Twitter and others could have affected (he does not say did affect, or attempted to affect). He also says that in subsequent elections, specifically in 2020, “if all these companies are supporting the same candidate, there are 15 million votes on the line that can be shifted without people’s knowledge and without leaving a paper trail for authorities to trace.”

“The methods they are using are invisible, they’re subliminal, they’re more powerful than most any effects I’ve seen in the behavioral sciences,” Epstein said, but did not actually describe what the techniques are. Though he did suggest that Mark Zuckerberg could send out a “get out the vote” notification only to Democrats and no one would ever know — absurd.

In other words, the numbers are not only invented, but unrelated to the 2016 election, and inclusive of all tech companies, not just Google. Even if Epstein’s claims were anywhere near justifiable, Trump’s tweet mischaracterizes them and gets everything wrong. Nothing about any of this is anywhere close to correct.

Google issued a statement addressing the president’s accusation, saying, “This researcher’s inaccurate claim has been debunked since it was made in 2016. As we stated then, we have never re-ranked or altered search results to manipulate political sentiment.”

You can read the full “report” below:

Epstein & Robertson, 2017, “A Method for Detecting Bias in Search Rankings” (AIBRT), posted by TechCrunch on Scribd

Local governments are forcing the scooter industry to grow up fast

Gone are the days when tech companies could deploy their services in cities without any regard for rules and regulations. Before the rise of electric scooters, cities had already become hip to tech’s status quo (thanks to the likes of Uber and Lyft) and were ready to regulate. We explored some of this in “The uncertain future of shared scooters,” but since then, new challenges have emerged for scooter startups.

For scooter startups, city regulations can make or break the business across nearly every aspect of operations, especially two major ones: ridership growth and the ability to attract investor dollars. From issuing permits, to determining how many scooters any one company can operate at a time, to enforcing low-income plans and shaping product roadmaps, the ball is really in the city’s court.

US Cyber Command has publicly posted malware linked to a North Korea hacking group

U.S. Cyber Command, the sister division of the National Security Agency focused on offensive hacking and security operations, has released a new set of malware samples linked to North Korean hackers.

The military unit tweeted Wednesday that it had uploaded the malware to VirusTotal, a widely used database for malware and security research.

It’s not the first time the unit has uploaded malware to the malware-sharing site — it has its own Twitter account to tell followers which samples it uploads. On one hand, the disclosures help security teams fight threats from nation states; they also offer a rare glimpse into the state-backed hacking groups on which Cyber Command is focused.
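For researchers, the practical effect of those uploads is that the samples become searchable like any other VirusTotal entry. As a rough sketch only (the hash below is a placeholder rather than the real sample, and the code assumes VirusTotal’s public v3 files endpoint and an API key stored in a VT_API_KEY environment variable), a lookup might look something like this:

```python
import os
import requests

# Minimal sketch: once a sample is on VirusTotal, anyone with an API key can
# pull its analysis report by hash. The hash below is a placeholder, not the
# actual Electric Fish sample.
VT_URL = "https://www.virustotal.com/api/v3/files/{}"
sample_hash = "PLACEHOLDER_SHA256"

resp = requests.get(
    VT_URL.format(sample_hash),
    headers={"x-apikey": os.environ["VT_API_KEY"]},  # assumed to be set
)
resp.raise_for_status()

# Summarize how many engines flagged the sample on its last scan.
stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
print(f"malicious: {stats['malicious']}, undetected: {stats['undetected']}")
```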

The uploaded sample, which the U.S. government calls Electric Fish, is a tunneling tool designed to exfiltrate data from one system to another over the internet once a backdoor has been placed.

Electric Fish is linked to APT38, a hacking group backed by North Korea.

FireEye says APT38 has distinctly different motivations from other North Korea-backed hacking groups like Lazarus, which was blamed for the Sony hack in 2014 and the WannaCry ransomware attack in 2017. APT38 is focused on financial crimes, such as stealing millions of dollars from banks across the world, the cybersecurity firm said.

Electric Fish was first discovered in May, according to Homeland Security’s cybersecurity division CISA, but APT38 has been active for several years.

A recently leaked United Nations report said the North Korean regime has stolen more than $2 billion through dozens of cyberattacks to fund its various weapons programs.

APT38 has amassed more than $100 million in stolen funds since its inception.

Huawei employees reportedly aided African governments in spying

A new report from The Wall Street Journal could be another damning piece of evidence for a company already under a good deal of international scrutiny. The paper is reporting that technicians working for Huawei helped members of government in Uganda and Zambia spy on political opponents.

The report cites unnamed senior surveillance officers. The paper adds that its investigation didn’t confirm a direct tie to the Chinese government or to Huawei executives. It did, however, appear to confirm that employees of the tech giant played a part in intercepting communications.

The intercepted communications reportedly included encrypted messages, activity on apps like WhatsApp and Skype, and cellular data used to track opponents’ movements.

A representative for Zambia’s ruling party confirmed with the paper that Huawei technicians have helped in the fight against news sites with opposing stances in the country, stating, “Whenever we want to track down perpetrators of fake news, we ask Zicta, which is the lead agency. They work with Huawei to ensure that people don’t use our telecommunications space to spread fake news.”

Huawei has, naturally, denied any involvement, stating that it has “never been engaged in ‘hacking’ activities. Huawei rejects completely these unfounded and inaccurate allegations against our business operations. Our internal investigation shows clearly that Huawei and its employees have not been engaged in any of the activities alleged. We have neither the contracts, nor the capabilities, to do so.”

The company has, of course, been under international scrutiny in places like the U.S. and Europe over concerns that its telecommunications technologies could be used for spying on behalf of the Chinese government, allegations Huawei has strongly and repeatedly denied.

Pete Buttigieg echoes Warren with $80B rural broadband plan

Democratic presidential hopeful Pete Buttigieg has unveiled his plan to address the broadband gap in this country: an $80 billion “Internet for All” initiative and set of related reforms. It echoes Senator Elizabeth Warren’s (D-MA) announcement last week, which is, generally speaking, a good thing.

It’s detailed in a document entitled “Investing in an American Asset: Unleashing the Potential of Rural America,” which feels like it may rub people the wrong way. It seems to imply that rural America is an “asset” to the rest of America, and that its potential has not yet been unleashed. But that’s just a tone thing.

There are a number of programs in there worth looking at if you’re interested in the economy of rural areas and how it might be spurred or revitalized (for instance, by paying teachers better), but the internet access portion is the most relevant for tech.

Buttigieg’s main promise is to “expand access to all currently unserved and underserved communities,” including a “public option” where private companies have failed to provide coverage.

That gets broken down into a few sub-goals. First is to revamp the way we measure and track broadband access, since the current system “is inaccurate and perpetuates inequity.” It’s important this isn’t overlooked in anyone’s plan, since this is how we officially make decisions like where to spend federal dollars on connectivity.

Like Warren, Buttigieg wants to remove the impediments to public and municipal broadband options that have been put in place over the years. This will allow “community-driven broadband networks, such as public-private partnerships, rural co-ops or municipally owned broadband networks” to move forward without legal challenges. A new Broadband Incubator Office will help roll these out, and the $80 billion will help bankroll them.

Net neutrality gets a bullet point as well — “Given the FCC’s volatility on this issue, Pete believes that legislation will ultimately be necessary,” the document reads. That’s frank, and while Warren and others have spoken out in favor of an FCC solution, it is likely that legislation will eventually come around and hopefully solve the issue once and for all.

Sen. Bernie Sanders (I-VT) was the first to make net neutrality a campaign promise, though most of the candidates have expressed support for the rule in the past.

The plan is a little less specific than Warren’s, but the truth is any plan involving this amount of money and complexity is going to necessarily be a bit vague at first. Demonstrating priorities and openness to ideas and methods is the important part, as well as throwing out a giant number like $80 billion. The specifics are unlikely to see much debate until one of these people is in the Oval Office.

“To ensure greater opportunity for all, we must make a massive investment in Internet access” summarizes the Buttigieg plan pretty well. You can read the full plan here or below.

Pete Buttigieg Rural Economy by TechCrunch on Scribd

$600M Cray supercomputer will tower above the rest — to build better nukes

Cray has been commissioned by Lawrence Livermore National Laboratory to create a supercomputer head and shoulders above all the rest, with the contract valued at some $600 million. Disappointingly, El Capitan, as the system will be called, will be more or less solely dedicated to redesigning our nuclear arsenal.

El Capitan will be the third “exascale” computer being built by Cray for the U.S. government, the other two being Aurora for Argonne National Lab and Frontier for Oak Ridge. These computers are built on a whole new architecture called Shasta, in which Cray intends to combine the speed and scale of high performance computing with the easy administration of cloud-based enterprise tools.

Due for delivery in 2022, El Capitan will operate on the order of 1.5 exaflops. Flops, or floating point operations per second, are the standard measure of supercomputer performance, and exa denotes a quintillion, so that works out to roughly 1.5 quintillion calculations per second.

Right now the top dog is at Oak Ridge: an IBM-built system called Summit. At roughly 150 petaflops, it has about a tenth of El Capitan’s planned power — of course, the former is operational and the latter is theoretical right now, but you get the idea.
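For a sense of what those prefixes mean, here is the back-of-the-envelope arithmetic in code form, using the approximate figures cited above rather than official benchmark numbers:

```python
# Rough comparison using the approximate figures cited above.
el_capitan_flops = 1.5e18  # 1.5 exaflops, the planned performance
summit_flops = 1.5e17      # ~150 petaflops for today's top system (approximate)

print(f"El Capitan / Summit = {el_capitan_flops / summit_flops:.0f}x")  # prints 10x
```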

One wonders exactly what all this computing power is needed for. There are in fact countless domains of science that could be advanced by access to a system like El Capitan — atmospheric and geological processes, for instance, could be simulated in 3D at a larger scale and higher fidelity than ever before.

So it was a bit disheartening to learn that El Capitan will, once fully operational, be dedicated almost solely to classified nuclear weaponry design.

To be clear, that doesn’t just mean bigger and more lethal bombs. The contract is being carried out with the collaboration of the National Nuclear Security Administration, which of course oversees the nuclear stockpile alongside the Department of Energy and military. It’s a big operation, as you might expect.

We have an aging nuclear weapons stockpile that was essentially designed and engineered over a period of decades ending in the ’90s. We may not need to build new ones, but we do have to keep the old ones in good shape, not just in case of war but to prevent them from failing in their advancing age and decrepitude.

The components of Cray’s Shasta systems.

“We like to say that while the stockpile was designed in two dimensions, it’s actually aging in three,” said LLNL director Bill Goldstein in a teleconference call on Monday. “We’re currently redesigning both warhead and delivery system. This is the first time we’ve done this in about 30 years. This requires us to be able to simulate the interaction between the physics of the nuclear system and the engineering features of the delivery system. These are real engineering interactions and are truly 3D. This is an example of a new requirement that we have to meet, a new problem that we have to solve, and we simply can’t rely on two-dimensional simulations to get at it. And El Capitan is being delivered just in time to address this problem.”

Although in response to my question Goldstein declined to provide a concrete example of a 3D versus 2D research question or result, citing the classified nature of the work, it’s clear that his remarks are meant to be taken both literally and figuratively. The depth, so to speak, of factors affecting a nuclear weapons system may be said to have been much flatter in the ’90s, when we lacked the computing resources to do the complex physics simulations that might inform their design. So both conceptually and spatially the design process has expanded.

That said, let’s be clear: “warhead and delivery systems” means nukes, and that is what this $600 million supercomputer will be dedicated to.

There’s a silver lining there: Before being air-gapped and entering into its classified operations, El Capitan will have a “shakeout period” during which others will have access to it. So while for most of its life it will be hard at work on weapons systems, during its childhood it will be able to experience a wider breadth of scientific problems.

The exact period of time and who will have access to it is to be determined (this is still three years out), but it’s not an afterthought to quiet jealous researchers. The team needs to get used to the tools and work with Cray to refine the system before it moves on to the top secret stuff. And opening it up to a variety of research problems and methods is a great way to do it, while also providing a public good.

Yet Goldstein referred to the 3D simulations of nuclear weapons physics as the “killer app” of the new computer system. Perhaps not the phrase I would have chosen. But it’s hard to deny the importance of making sure the nuclear stockpile is functional and not leaking or falling apart — I just wish the most powerful computer ever planned had a bit more noble of a purpose.

Huawei’s new OS isn’t an Android replacement… yet

If making an Android alternative were easy, we’d have a lot more of them. Huawei’s HarmonyOS won’t be replacing the mobile operating system for the company any time soon, and Huawei has made it pretty clear that it would much rather go back to working with Google than go it alone.

Of course, that might not be an option.

The truth is that Huawei and Google were actually getting pretty chummy. They’d worked together plenty, and according to recent rumors, were getting ready to release a smart speaker in a partnership akin to what Google’s been doing with Lenovo in recent years. That was, of course, before Huawei was added to a U.S. “entity list” that ground those plans to a halt.

How tech is transforming the intelligence industry

At a conference on the future challenges of intelligence organizations held in 2018, former Director of National Intelligence Dan Coats argued that the transformation of the American intelligence community must be a revolution rather than an evolution. The community must be innovative and flexible, capable of rapidly adopting innovative technologies wherever they may arise.

Intelligence communities across the Western world are now at a crossroads: the growing proliferation of technologies, including artificial intelligence, Big Data, robotics, the Internet of Things and blockchain, changes the rules of the game. Because most of these technologies are civilian, their spread can expose intelligence agencies to data breaches and backdoor threats; and because they are affordable and ubiquitous, they can be put to malicious use.

The technological breakthroughs of recent years have led intelligence organizations to challenge the accepted truths that have historically shaped their endeavors. The hierarchical, compartmentalized, industrial structure of these organizations is now changing, revolving primarily around the integration of new technologies into traditional intelligence work and the redefinition of the role of humans in the intelligence process.

Take, for example, open-source intelligence (OSINT), a concept the intelligence community uses to describe information that is unclassified and accessible to the general public. Traditionally, this kind of information was considered inferior to classified information, and as a result investment in OSINT technologies was substantially lower than in other types of technologies and sources. That is changing: agencies now realize that OSINT is easy to acquire and often more beneficial than other, more challenging types of information.

Yet this understanding has been slow to trickle down, as the use of OSINT by intelligence organizations still involves cumbersome processes, including the slow and complex integration of unclassified and classified IT environments. It isn’t surprising, therefore, that intelligence executives – for example, the head of the State Department’s intelligence arm and the nominee to become director of the National Reconnaissance Office – have recently argued that one of the community’s grandest challenges is the quick and efficient integration of OSINT into its operations.

Indeed, technological innovations have always been central to the intelligence profession. But when it comes to processing, analyzing, interpreting and acting on intelligence, human ability – with all its limitations – has always been considered unquestionably superior. There is no question that the proliferation of data and data sources demands better prioritization and analysis. The real question is who should have supremacy: humans or machines?

A man crosses the Central Intelligence Agency (CIA) seal in the lobby of CIA Headquarters in Langley, Virginia, on August 14, 2008. (Photo: SAUL LOEB/AFP/Getty Images)

Big data comes for the spy business

The discourse is tempestuous. Intelligence veterans claim that there is no substitute for human judgment. They argue that artificial intelligence will never be capable of comprehending the full spectrum of considerations in strategic decision-making, and that it cannot evaluate abstract issues in the interpretation of human behavior. Machines can collect data and perhaps identify patterns, but they will never succeed in interpreting reality as humans do. Others also warn of the ethical implications of relying on machines for life-or-death decisions, such as whether to go to war.

In contrast, techno-optimists claim that human superiority, which defined intelligence activities over the last century, is already bowing to technological superiority. While humans are still significant, their role is no longer exclusive, and perhaps not even the most important in the process. How can the average intelligence officer cope with the ceaseless volumes of information that the modern world produces?

From 1995 to 2016, the amount of reading required of an average US intelligence researcher covering a low-priority country grew from 20,000 to 200,000 words per day. And that is just the beginning. According to forecasts, the volume of digital data that humanity will produce in 2025 will be ten times greater than is produced today. Some argue this volume can only be processed – and perhaps even analyzed – by computers.
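To see why, consider a quick back-of-the-envelope calculation. The reading speed below is an assumption for illustration, not a figure from the forecasts:

```python
# How long would 200,000 words a day take a single analyst to read?
words_per_day = 200_000
assumed_reading_speed_wpm = 250  # assumed brisk pace; not from the cited study

hours = words_per_day / assumed_reading_speed_wpm / 60
print(f"~{hours:.1f} hours of nonstop reading per day")  # ~13.3 hours
```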

Of course, even the most ardent advocates of integrating machines into intelligence work are not proposing to remove human involvement entirely, and even the most skeptical do not doubt the need to integrate artificial intelligence into intelligence activities. The debate centers on the question of who will help whom: machines in aid of humans, or humans in aid of machines.

Most insiders agree that the key to moving intelligence communities into the 21st century lies in breaking down inter- and intra-organizational walls, including between the services within the national security establishment; between the public sector, the private sector, and academia; and between intelligence services of different countries.

It isn’t surprising therefore that the push toward technological innovation is a part of the current intelligence revolution. The national security establishment already recognizes that the private sector and academia are the main drivers of technological innovation.

Alexander Karp, chief executive officer and co-founder of Palantir Technologies Inc., walks the grounds after the morning sessions during the Allen & Co. Media and Technology Conference in Sun Valley, Idaho, U.S., on Thursday, July 7, 2016. Billionaires, chief executive officers, and leaders from the technology, media, and finance industries gather this week at the Idaho mountain resort conference hosted by investment banking firm Allen & Co. Photographer: David Paul Morris/Bloomberg via Getty Images

Private services and national intelligence

In the United States there is dynamic cooperation between these bodies and the security community, including venture capital funds jointly owned by the government and private companies.

Take In-Q-Tel – a venture capital fund established 20 years ago to identify and invest in companies that develop innovative technology which serves the national security of the United States, thus positioning the American intelligence community at the forefront of technological development. The fund is an independent corporation, which is not subordinate to any government agency, but it maintains constant coordination with the CIA, and the US government is the main investor.

It’s most successful endeavor, which has grown to become a multi-billion company though somewhat controversial, is Palantir, a data-integration and knowledge management provider. But there are copious other startups and more established companies, ranging from sophisticated chemical detection (e.g. 908devices), automated language translations (e.g. Lilt), and digital imagery (e.g. Immersive Wisdom) to sensor technology (e.g. Echodyne), predictive analytics (e.g. Tamr) and cyber security (e.g. Interset).

Actually, a significant part of intelligence work is already being done by such companies, small and large. Companies like Hexagon, Nice, Splunk, Cisco and NEC offer intelligence and law enforcement agencies a full suite of platforms and services, including analytical solutions such as video analytics, identity analytics and social media analytics. These platforms help agencies obtain insights and make predictions from collected and historical data, using real-time data stream analytics and machine learning. A one-stop intelligence shop, if you will.

Another example of government and non-government collaboration is the Intelligence Advanced Research Projects Activity (IARPA), an organization that reports to the Director of National Intelligence (DNI). Established in 2006, IARPA finances advanced research relevant to the American intelligence community, with a focus on cooperation between academic institutions and the private sector across a broad range of technological and social-science fields. With a relatively small annual operational budget of around $3 billion, the agency gives priority to multi-year development projects that meet the concrete needs of the intelligence community. The majority of the studies it supports are unclassified and open to public scrutiny, at least until the stage of implementation by intelligence agencies.

Image courtesy of Bryce Durbin/TechCrunch

Challenging government hegemony in the intelligence industry 

These are all exciting opportunities; however, the future holds several challenges for intelligence agencies:

First, intelligence communities are losing their primacy over collecting, processing and disseminating data. Until recently, these organizations’ raison d’être was, first and foremost, to obtain information about the enemy before the enemy could conceal it.

Today, however, a lot of information is readily available, and a plethora of off-the-shelf tools (some of which are free) allow all parties, including individuals, to collect, process and analyze vast amounts of data. Just look at IBM’s i2 Analyst’s Notebook, which for just a few thousand dollars gives analysts multidimensional visual analysis capabilities so they can quickly uncover hidden connections and patterns in data. Until recently, such capabilities belonged only to governmental organizations.
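To make that concrete, the basic link analysis such tools commercialize can be approximated with free, open-source libraries. The sketch below uses the Python networkx package and invented entities; it is a generic illustration, not IBM’s product or its API:

```python
import networkx as nx

# Toy link-analysis graph: nodes are people, phones, accounts and addresses;
# edges are observed relationships (calls, transfers, shared residences).
G = nx.Graph()
G.add_edges_from([
    ("Person A", "Phone 1"), ("Phone 1", "Person B"),
    ("Person B", "Account X"), ("Account X", "Person C"),
    ("Person A", "Address 7"), ("Person C", "Address 7"),
])

# "Hidden connection": the shortest chain of intermediaries linking A and C.
print(nx.shortest_path(G, "Person A", "Person C"))
# -> ['Person A', 'Address 7', 'Person C']

# Which entities tie the most others together (a simple centrality measure).
print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])[:3])
```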

A second challenge for intelligence organizations lies in the nature of the information itself and its many different formats, as well as in the collection and processing systems, which are usually separate and lacking standardization. As a result, it is difficult to merge all of the available information into a single product. For this reason, intelligence organizations are developing concepts and structures which emphasize cooperation and decentralization.

The private market offers a variety of tools for merging information, ranging from simple off-the-shelf solutions to sophisticated tools that enable complex organizational processes. Some of these tools can be purchased and quickly implemented – for example, data and knowledge sharing and management platforms – while others are developed by the organizations themselves to meet their specific needs.

The third challenge relates to a change in the principle of intelligence prioritization. In the past, collecting information about a given target required a specific decision to do so and dedicated resources, generally at the expense of resources allocated to a different target. But in this era of near-infinite quantities of information, almost unlimited access to it, advanced data storage capabilities and the ability to manipulate data, intelligence organizations can collect and store information on a massive scale without needing to process it immediately – it can be processed as required.

This development leads to other challenges, including: the need to pinpoint the relevant information when required; to process the information quickly; to identify patterns and draw conclusions from mountains of data; and to make the knowledge produced accessible to the consumer. It is therefore not surprising that most of the technological advancements in the intelligence field respond to these challenges, bringing together technologies such as big data with artificial intelligence, advanced information storage capabilities and advanced graphical presentation of information, usually in real time.

Lastly, intelligence organizations are built and operate according to concepts developed at the peak of the industrial era, which championed the principle of the assembly line, a process both linear and cyclical. The linear model of the intelligence cycle – collection, processing, research, distribution and feedback from the consumer – has become less relevant. In this new era, the boundaries between the various intelligence functions, and between intelligence organizations and their ecosystem, are increasingly blurred.


The brave new world of intelligence

A new order of intelligence work is therefore required, and intelligence organizations are currently in the midst of a redefinition process. Traditional divisions – e.g. between collection and research, between internal security organizations and positive intelligence, and between the public and private sectors – are all becoming obsolete. This is not another attempt at structural reform: there is a sense of epistemological rupture that requires redefining the discipline, the relationships intelligence organizations have with their environments – from decision makers to the general public – and the development of new structures and conceptions.

And of course, there are even wider concerns: legislators need to create a legal framework for data-based assessments that takes the predictive aspects of these technologies into account while still protecting the privacy and security rights of individual citizens in nation states that respect those concepts.

Despite recognizing the profound changes taking place around them, today’s intelligence institutions are still built and operated in the spirit of Cold War conceptions. In a sense, intelligence organizations have not internalized the complexity of the present time – a complexity that requires abandoning the dichotomous (inside versus outside) view of the intelligence establishment, as well as the notion that the intelligence enterprise and government bodies hold a monopoly on knowledge; concepts that have become obsolete in an age of decentralization, networking and increasing prosperity.

Although some doubt the ability of intelligence organizations to transform and adapt themselves to the challenges of the future, there is no doubt that they must do so in this era in which speed and relevance will determine who prevails.

Reports say White House has drafted an order putting the FCC in charge of monitoring social media

The White House is contemplating issuing an executive order that would widen its attack on the operations of social media companies.

The White House has prepared an executive order called “Protecting Americans from Online Censorship” that would give the Federal Communications Commission oversight of how Facebook, Twitter and other tech companies monitor and manage their social networks, according to a CNN report.

Under the order, which has not yet been announced and could be revised, the FCC would be tasked with developing new regulations that would determine when and how social media companies filter posts, videos or articles on their platforms.

The draft order also calls for the Federal Trade Commission to take those new policies into account when investigating or filing lawsuits against technology companies, according to the CNN report.

Social media censorship has been a perennial talking point for President Donald Trump and his administration. In May, the White House set up a tip line for people to provide evidence of social media censorship and a systemic bias against conservative media.

In the executive order, the White House says it received more than 15,000 complaints about censorship by the technology platforms. The order also includes an offer to share the complaints with the Federal Trade Commission.

As part of the order, the Federal Trade Commission would be required to open a public complaint docket and coordinate with the Federal Communications Commission on investigations of how technology companies curate their platforms — and whether that curation is politically agnostic.

Under the proposed rule, any company whose monthly user base includes more than one-eighth of the U.S. population (roughly 41 million people, given a population of about 330 million) would be subject to oversight by the regulatory agencies. A roster of companies subject to the new scrutiny would include Facebook, Google, Instagram, Twitter, Snap and Pinterest.

At issue is how broadly or narrowly companies are protected under the Communications Decency Act, which was part of the Telecommunications Act of 1996. Social media companies use the Act to shield themselves from liability for the posts, videos or articles uploaded by individual users or third parties.

The Trump administration isn’t alone in Washington in focusing on the laws that shield social media platforms from legal liability. House Speaker Nancy Pelosi took technology companies to task earlier this year in an interview with Recode.

The criticisms may come from different sides of the political spectrum, but their focus on the ways in which tech companies could use Section 230 of the Act is the same.

The White House’s executive order would ask the FCC to disqualify social media companies from immunity if they remove or limit the dissemination of posts without first notifying the user or third party that posted the material, or if the decision from the companies is deemed anti-competitive or unfair.

The FTC and FCC had not responded to a request for comment at the time of publication.