Singapore is the crypto sandbox that Asia needs

Singapore Blockchain Week happened this past week. While there were a few announcements from companies, some of the most interesting updates came from regulators, specifically the Monetary Authority of Singapore (MAS). The financial regulator openly discussed its views on cryptocurrency and its plans to develop blockchain technology locally.

For those who are unfamiliar, Singapore historically has been a financial hub in Southeast Asia, but it has now also gradually become the crypto hub of Asia. Compared to the rest of Asia and the rest of the world, the regulators in Singapore are well informed and more transparent about their views on blockchain and cryptocurrency. While regulatory uncertainties still loom over Korea and Japan, in Southeast Asia the MAS has already released “A Guide to Digital Token Offerings,” which illustrates the application of securities laws to digital token offerings and issuances. Singaporean regulators have arguably been pioneering economic and regulatory standards in Asia since the early days of the country’s founding by Lee Kuan Yew in 1965.

Singapore is the first stop for foreign companies in crypto

In the past, I’ve said that Thailand is one of the most interesting countries in crypto in Southeast Asia. Nonetheless, for any Western or foreign company looking to establish a footing in Asia, or even for any local company in any Asian country looking to establish a presence outside of their own country, Singapore should be the first stop. It has become the go-to crypto sandbox of Asia.

There are a number of companies all over Asia, as well as in the West, that have already made moves into the country. And the types of cryptocurrency projects and exchanges that go to Singapore vary widely.

A few months ago, a Korean team called MVL introduced Tada, or the equivalent of “Uber” on the blockchain, in Singapore. Tada is an on-demand car sharing service that utilizes MVL’s technology. The Tada app is built on MVL’s blockchain ecosystem, which is specifically designed to serve the automotive industry, adjacent service industries, and their customers. In this case, MVL was looking to test out its blockchain projects in a progressive, friendly jurisdiction outside of Korea, but still close enough to its headquarters. Singapore fulfilled most of these requirements.

Relatedly, Didi, China’s ride-sharing company, has also looked to build out its own blockchain-based ride-sharing program, called VVgo. VVgo’s launch is pending, and its home is intended to be in Toronto, Singapore, Hong Kong or San Francisco. Given Singapore’s geographic proximity and the transparency of its regulators, it would likely be a good testing ground for Didi as well.

This week, exchanges such as Binance and Korea’s Upbit also announced plans to enter the Singaporean market. A few days ago, Changpeng Zhao, CEO of Binance, the world’s largest cryptocurrency exchange, announced the launch of a fiat currency exchange that will be based in Singapore. He also mentioned his company’s plan to launch five to ten fiat-to-crypto exchanges in the next year, ideally two per continent. Dunamu, the parent company of South Korea’s largest crypto exchange, Upbit, also just announced the launch of Upbit Singapore, which will be fully operational by October.

The team at Dunamu says it is encouraged by MAS’s attitude toward cryptocurrency regulation and by the government’s vision of establishing a strong crypto and blockchain sector. It also believes Singapore could be a bridge between Korea and the global cryptocurrency exchange market.

From a high level, the supply of crypto projects and trading volume in Singapore is certainly strong, and the demand also appears abundant. Following China’s ICO ban in late 2017, Singapore has become home to many financial institutions that can serve as potential investors for ICOs.

As recently featured on the China Money Network, Li Dongmei wrote:

What is supporting such optimism is the quiet preparation of capital on a massive scale getting ready to act on the “All In Crypto” mantra. “In recent months, there have been over a thousand foundations established in Singapore by Chinese nationals,” said Chen Xianhui, an agent specialized in helping Chinese clients register foundations in Singapore. Most of these newly established foundations are used for setting up various token investment funds.

Singapore has become the first choice for crypto companies from both the West and the East that are initially scoping out their market strategies in Asia and want an overarching idea of what’s going on in the region’s cryptocurrency world.

In fact, it’s often the case that Southeast Asian crypto companies and leaders gather in Singapore before they go off and do crypto business in their own countries. It’s the place for anyone who wants to tap all of the Asian crypto markets from a single physical location. The proof is in the data: in 2017, Singapore ascended to become the number three market for ICO issuance based on funds raised, trailing the United States and Switzerland.

Crypto is thriving due to regulator openness

The Monetary Authority of Singapore takes a very practical approach to crypto. Currently, MAS divides digital tokens into utility tokens, payment tokens, and securities. In Asia, only Singapore and Thailand currently have such detailed classifications.

While speaking at Consensus Singapore this week, Damien Pang, who heads the Technology Infrastructure Office under MAS’s FinTech & Innovation Group (FTIG), said that “[MAS does] not regulate technology itself but purpose” when discussing ICOs in Singapore. “The MAS takes a close look at the characteristics of the tokens, in the past, at the present, and in the future, instead of just the technology built on.”

Additionally, Pang mentioned that MAS does not intend to regulate utility tokens. It is, however, looking to regulate payment tokens that have store-of-value and payment properties by passing a payment services bill by the end of the year. It is also paying attention to any utility or payment tokens with security features (i.e., a promise of future earnings), which will be regulated as such.

On the technology front, since 2017, Singapore authorities have been looking to use distributed ledger technology to boost the efficiency of settling cross-bank financial transactions. They believe that blockchain technology offers the potential to make trade finance safer and more efficient.

When compared to other Asian crypto hubs like Hong Kong, Seoul, or Shanghai, Singapore offers significantly more exposure to the Southeast Asian market. I believe market activity will likely continue to thrive in the region as the country continues to act as the springboard for cryptocurrency companies and investors, at least until countries like Korea and Japan establish a clear regulatory stance.

Apple Watch and other hardware reportedly spared by new Trump tariffs

The latest round of Trump administration tariffs is set to affect a number of different industries. At least one category previously expected to be impacted, however, is likely to be spared, according to a new report from Bloomberg.

According to anonymous sources, the tariffs impacting a slew of consumer electronics, running the gamut from the Apple Watch to Fitbit trackers to Sonos speakers, have not made it into the final language. That means, for this round at least, those products should be spared the tax that would drive up the cost of such imports.

Trump administration tariffs have been the centerpiece of a looming trade war between the U.S. and China. Earlier today, China was reportedly set to cancel further trade talks, should the U.S. announce additional tariffs. The tariffs have been a domestic issue as well, as companies like Harley-Davidson have announced plans to move some production overseas to avoid the fees.

Apple has been a vocal critic of the tariffs, noting the resulting price hike. Earlier this month, the company wrote a letter to the Office of the United States Trade Representative, noting, “Tariffs increase the cost of our US operations, divert our resources, and disadvantage Apple compared to foreign competitors. More broadly, tariffs will lead to higher US consumer prices, lower overall US economic growth, and other unintended economic consequences.”

CEO Tim Cook also met with the president and first lady at their New Jersey golf resort earlier this month, in what must have been one of the more awkward meals in recent memory.

The new tariffs are expected to be announced as early as today.

Facebook is hiring a director of human rights policy to work on “conflict prevention” and “peace-building”

Facebook is advertising for a human rights policy director to join its business, located either at its Menlo Park HQ or in Washington DC — with “conflict prevention” and “peace-building” among the listed responsibilities.

In the job ad, Facebook writes that as the reach and impact of its various products continues to grow “so does the responsibility we have to respect the individual and human rights of the members of our diverse global community”, saying it’s:

… looking for a Director of Human Rights Policy to coordinate our company-wide effort to address human rights abuses, including by both state and non-state actors. This role will be responsible for: (1) Working with product teams to ensure that Facebook is a positive force for human rights and apply the lessons we learn from our investigations, (2) representing Facebook with key stakeholders in civil society, government, international institutions, and industry, (3) driving our investigations into and disruptions of human rights abusers on our platforms, and (4) crafting policies to counteract bad actors and help us ensure that we continue to operate our platforms consistent with human rights principles.

Among the minimum requirements for the role, Facebook lists experience “working in developing nations and with governments and civil society organizations around the world”.

It adds that “global travel to support our international teams is expected”.

The company has faced fierce criticism in recent years over its failure to take greater responsibility for the spread of disinformation and hate speech on its platform, especially in international markets it has targeted for business growth via its Internet.org initiative, which seeks to get more people ‘connected’ to the Internet (and thus to Facebook).

More connections means more users for Facebook’s business and growth for its shareholders. But the costs of that growth have been cast into sharp relief over the past several years as the human impact of handing millions of people lacking in digital literacy some very powerful social sharing tools — without a commensurately large investment in local education programs (or even in moderating and policing Facebook’s own platform) — has become all too clear.

In Myanmar Facebook’s tools have been used to spread hate and accelerate ethnic cleansing and/or the targeting of political critics of authoritarian governments — earning the company widespread condemnation, including a rebuke from the UN earlier this year which blamed the platform for accelerating ethnic violence against Myanmar’s Muslim minority.

In the Philippines Facebook also played a pivotal role in the election of president Rodrigo Duterte — who now stands accused of plunging the country into its worst human rights crisis since the dictatorship of Ferdinand Marcos in the 1970s and 80s.

While in India the popularity of the Facebook-owned WhatsApp messaging platform has been blamed for accelerating the spread of misinformation — leading to mob violence and the deaths of several people.

Facebook famously failed even to spot mass manipulation campaigns going on in its own backyard — when in 2016 Kremlin-backed disinformation agents injected masses of anti-Clinton, pro-Trump propaganda into its platform and garnered hundreds of millions of American voters’ eyeballs at a bargain basement price.

So it’s hardly surprising the company has been equally naive in markets it understands far less. Though also hardly excusable — given all the signals it has access to.

In Myanmar, for example, local organizations that are sensitive to the cultural context repeatedly complained to Facebook that it lacked Burmese-speaking staff — complaints that apparently fell on deaf ears for the longest time.

The cost to American society of social media-enabled political manipulation and increased social division is certainly very high. The costs of the weaponization of digital information in markets such as Myanmar look incalculable.

In the Philippines Facebook also indirectly has blood on its hands — having provided services to the Duterte government to help it make more effective use of its tools. This same government is now waging a bloody ‘war on drugs’ that Human Rights Watch says has claimed the lives of around 12,000 people, including children.

Facebook’s job ad for a human rights policy director includes the pledge that “we’re just getting started” — referring to its stated mission of helping people “build stronger communities”.

But when you consider the impact its business decisions have already had in certain corners of the world it’s hard not to read that line with a shudder.

Citing the UN Guiding Principles on Business and Human Rights (and “our commitments as a member of the Global Network Initiative”), Facebook writes that its product policy team is dedicated to “understanding the human rights impacts of our platform and to crafting policies that allow us both to act against those who would use Facebook to enable harm, stifle expression, and undermine human rights, and to support those who seek to advance rights, promote peace, and build strong communities”.

Clearly it has an awful lot of “understanding” to do on this front. And hopefully it will now move fast to understand the impact of its own platform, circa fifteen years into its great ‘society reshaping experiment’, and prevent Facebook from being repeatedly used to trash human rights.

As well as representing the company in meetings with politicians, policymakers, NGOs and civil society groups, Facebook says the new human rights director will work on formulating internal policies governing user, advertiser, and developer behavior on Facebook. “This includes policies to encourage responsible online activity as well as policies that deter or mitigate the risk of human rights violations or the escalation of targeted violence,” it notes. 

The director will also work with internal public policy, community ops and security teams to try to spot and disrupt “actors that seek to misuse our platforms and target our users” — while also working to support “those using our platforms to foster peace-building and enable transitional justice”.

So you have to wonder how, for example, Holocaust denial continuing to be protected speech on Facebook will square with that stated mission for the human rights policy director.

At the same time, Facebook is currently hiring for a public policy manager in Francophone Africa — who it writes can “combine a passion for technology’s potential to create opportunity and to make Africa more open and connected, with deep knowledge of the political and regulatory dynamics across key Francophone countries in Africa”.

That job ad does not explicitly reference human rights — talking only about “interesting public policy challenges… including privacy, safety and security, freedom of expression, Internet shutdowns, the impact of the Internet on economic growth, and new opportunities for democratic engagement”.

As well as “new opportunities for democratic engagement”, among the role’s other listed responsibilities is working with Facebook’s Politics & Government team to “promote the use of Facebook as a platform for citizen and voter engagement to policymakers and NGOs and other political influencers”.

So here, in a second policy job, Facebook looks to be continuing its ‘business as usual’ strategy of pushing for more political activity to take place on Facebook.

And if Facebook wants an accelerated understanding of human rights issues around the world it might be better advised to take a more joined-up approach to human rights across its own policy staff, and at least include human rights among the listed responsibilities of all the policy shapers it’s looking to hire.

UK’s mass surveillance regime violated human rights law, finds ECHR

In another blow to the UK government’s record on bulk data handling for intelligence purposes, the European Court of Human Rights (ECHR) has ruled that state surveillance practices violated human rights law.

Arguments against the UK intelligence agencies’ bulk collection and data sharing practices were heard by the court in November last year.

In today’s ruling the ECHR found that only some aspects of the UK’s surveillance regime violate human rights law. So it’s not all bad news for the government — which has faced a barrage of legal actions (and quite a few black marks against its spying practices in recent years) ever since its love affair with mass surveillance was revealed and denounced by NSA whistleblower Edward Snowden, back in 2013.

The judgement reinforces a sense that the government has been seeking to push as close to the legal line as possible on surveillance, and sometimes stepping over it — reinforcing earlier strikes against legislation for not setting tight enough boundaries to surveillance powers, and likely providing additional fuel for fresh challenges.

The complaints before the ECHR focused on three different surveillance regimes: 1) The bulk interception of communications (aka ‘mass surveillance’); 2) Intelligence sharing with foreign governments; and 3) The obtaining of communications data from communications service providers.

The challenge actually combines three cases, with the action brought by a coalition of civil and human rights campaigners, including the American Civil Liberties Union, Amnesty International, Big Brother Watch, Liberty, Privacy International and nine other human rights and journalism groups based in Europe, Africa, Asia and the Americas.

The Chamber judgment from the ECHR found, by a majority of five votes to two, that the UK’s bulk interception regime violates Article 8 of the European Convention on Human Rights (a right to respect for private and family life/communications) — on the grounds that “there was insufficient oversight both of the selection of Internet bearers for interception and the filtering, search and selection of intercepted communications for examination, and the safeguards governing the selection of ‘related communications data’ for examination were inadequate”.

The judges did not find bulk collection itself to be in violation of the convention but noted that such a regime must respect criteria set down in case law.

In an even more pronounced majority vote, the Chamber found by six votes to one that the UK government’s regime for obtaining data from communications service providers violated Article 8 as it was “not in accordance with the law”.

Both the bulk interception regime and the regime for obtaining communications data from communications service providers were also deemed to have violated Article 10 of the Convention (the right to freedom of expression and information), as the judges found there were insufficient safeguards in respect of confidential journalistic material.

However the Chamber did not rule against the government in two other components of the case — finding that the regime for sharing intelligence with foreign governments did not violate either Article 8 or Article 10.

The court also unanimously rejected complaints made by the third set of applicants, under Article 6 (right to a fair trial), about the domestic procedure for challenging secret surveillance measures, and under Article 14 (prohibition of discrimination).

The complaints in this case were lodged prior to the UK legislating for a new surveillance regime, the 2016 Investigatory Powers Act, so in coming to a judgement the Chamber was considering the oversight regime at the time (and in the case of points 1 and 3 above that’s the Regulation of Investigatory Powers Act 2000).

RIPA has since been superseded by IPA but, as noted above, today’s ruling will likely fuel ongoing human rights challenges to the latter — which the government has already been ordered to amend by other courts on human rights grounds.

Nor is it the only UK surveillance legislation judged to fall foul of human rights law. A few years ago UK judges agreed with a similar legal challenge to emergency surveillance legislation that predates the IPA — ruling in 2015 that the Data Retention and Investigatory Powers Act (DRIPA) was unlawful under human rights law. A verdict the UK Court of Appeal agreed with earlier this year.

In 2015 the intelligence agencies’ own oversight court, the Investigatory Powers Tribunal (IPT), also found multiple violations following challenges to aspects of the agencies’ historical surveillance operations, after those operations had been made public by the Snowden revelations.

Such judgements did not stop the government pushing on with the IPA, though — and it went on to cement bulk collection at the core of its surveillance modus operandi at the end of 2016.

Among the most controversial elements of the IPA are a requirement that communications service providers collect and retain logs of the digital services accessed by all users for 12 months; state power to require a company to remove encryption, or limit the rollout of end-to-end encryption on a future service; and state powers to hack devices, networks and services, including bulk hacking on foreign soil. It also allows the security agencies to maintain large databases of personal information on U.K. citizens, including individuals suspected of no crime.

On the safeguards front, the government legislated for what it claimed was a “double lock” authorization process for interception warrants — which, for the first time in the U.K., loops in the judiciary alongside senior ministers to sign off intercept warrants. However this does not regulate the collection or accessing of web activity data that’s blanket-retained on all users.

In April this shiny new surveillance regime was also dealt a blow in UK courts — with judges ordering the government to amend the legislation to narrow how and why retained metadata could be accessed, giving ministers a deadline of November 1 to make the necessary changes.

In that case the judges also did not rule against bulk collection in general — declining to find that the state’s current data retention regime is unlawful on the grounds that it constituted “general and indiscriminate” retention of data. (For its part the government has always argued its bulk collection activities do not constitute blanket retention.)

And today’s ECHR ruling further focuses attention on the safeguards placed around bulk collection programs — having found the UK regime lacked sufficient monitoring to be lawful (but not that bulk collection itself is unlawful by default).

Opponents of the current surveillance regime will be busily parsing the ruling to find fresh fronts to attack.

It’s not the first time the ECHR has looked at bulk interception. Most recently, in June 2018, it deemed that Swedish legislation and practice in the field of signals intelligence did not violate human rights law. Among its reasoning was that it found the Swedish system to have provided “adequate and sufficient guarantees against arbitrariness and the risk of abuse”.

However it said that Big Brother Watch and Others v. the United Kingdom, the case being ruled upon today, is the first in which it specifically considered the extent of the interference with a person’s private life that could result from the interception and examination of communications data (as opposed to content).

In a Q&A about today’s judgement, the court notes that it “expressly recognised” the severity of threats facing states, and also how advancements in technology have “made it easier for terrorists and criminals to evade detection on the Internet”.

“It therefore held that States should enjoy a broad discretion in choosing how best to protect national security. Consequently, a State may operate a bulk interception regime if it considers that it is necessary in the interests of national security. That being said, the Court could not ignore the fact that surveillance regimes have the potential to be abused, with serious consequences for individual privacy. In order to minimise this risk, the Court has previously identified six minimum safeguards which all interception regimes must have,” it writes.

“The safeguards are that the national law must clearly indicate: the nature of offences which may give rise to an interception order; a definition of the categories of people liable to have their communications intercepted; a limit on the duration of interception; the procedure to be followed for examining, using and storing the data obtained; the precautions to be taken when communicating the data to other parties; and the circumstances in which intercepted data may or must be erased or destroyed.”

(Additional elements the court says it considered in an earlier surveillance case, Roman Zakharov v. Russia, also to determine whether legislation breached Article 8, included “arrangements for supervising the implementation of secret surveillance measures, any notification mechanisms and the remedies provided for by national law”.)

Commenting on today’s ruling in a statement, Megan Goulding, a lawyer for Liberty, said: “This is a major victory for the rights and freedom of people in the UK. It shows that there is — and should be — a limit to the extent that states can spy on their citizens.

“Police and intelligence agencies need covert surveillance powers to tackle the threats we face today — but the court has ruled that those threats do not justify spying on every citizen without adequate protections. Our government has built a surveillance regime more extreme than that of any other democratic nation, abandoning the very rights and freedoms terrorists want to attack. It can and must give us an effective, targeted system that protects our safety, data security and fundamental rights.”

A Liberty spokeswoman also told us it will continue its challenge to IPA in the UK High Court, adding: “We continue to believe that mass surveillance can never be compliant in a free, rights-respecting democracy.”

Also commenting in a statement, Silkie Carlo, director of Big Brother Watch, said: “This landmark judgment confirming that the UK’s mass spying breached fundamental rights vindicates Mr Snowden’s courageous whistleblowing and the tireless work of Big Brother Watch and others in our pursuit for justice.

“Under the guise of counter-terrorism, the UK has adopted the most authoritarian surveillance regime of any Western state, corroding democracy itself and the rights of the British public. This judgment is a vital step towards protecting millions of law-abiding citizens from unjustified intrusion. However, since the new Investigatory Powers Act arguably poses an ever greater threat to civil liberties, our work is far from over.”

A spokesperson for Privacy International told us it’s considering taking the case to the ECHR’s Grand Chamber.

Also commenting in a supporting statement, Antonia Byatt, director of English PEN, added: “This judgment confirms that the British government’s surveillance practices have violated not only our right to privacy, but our right to freedom of expression too. Excessive surveillance discourages whistle-blowing and discourages investigative journalism. The government must now take action to guarantee our freedom to write and to read freely online.”

We’ve reached out to the Home Office for comment from the UK government.

On intelligence sharing between governments, which the court had not previously considered, the judges found the procedure for requesting either the interception or the conveyance of intercept material from foreign intelligence agencies to have been set out with “sufficient clarity in the domestic law and relevant code of practice”, noting: “In particular, material from foreign agencies could only be searched if all the requirements for searching material obtained by the UK security services were fulfilled.”

It also found “no evidence of any significant shortcomings in the application and operation of the regime, or indeed evidence of any abuse” — hence finding the intelligence sharing regime did not violate Article 8.

On the portion of the challenge concerning complaints that UK intelligence agencies’ oversight court, the IPT, lacked independence and impartiality, the court disagreed — finding that the tribunal had “extensive power to consider complaints concerning wrongful interference with communications, and those extensive powers had been employed in the applicants’ case to ensure the fairness of the proceedings”.

“Most notably, the IPT had access to open and closed material and it had appointed Counsel to the Tribunal to make submissions on behalf of the applicants in the closed proceedings,” it also writes.

In addition, it said it accepted the government’s argument that in order to ensure the efficacy of the secret surveillance regime restrictions on the applicants’ procedural rights had been “both necessary and proportionate and had not impaired the essence of their Article 6 rights”.

On the complaints under Article 14, in conjunction with Articles 8 and 10 — that those outside the UK were disproportionately likely to have their communications intercepted as the law only provided additional safeguards to people known to be in Britain — the court also disagreed, rejecting this complaint as manifestly ill-founded.

“The applicants had not substantiated their argument that people outside the UK were more likely to have their communications intercepted. In addition, any possible difference in treatment was not due to nationality but to geographic location, and was justified,” it writes. 

10 critical points from Zuckerberg’s epic security manifesto

Mark Zuckerberg wants you to know he’s trying his damnedest to fix Facebook before it breaks democracy. Tonight he posted a 3,260-word battle plan for fighting election interference. Amidst drilling through Facebook’s strategy and progress, he slips in several notable passages revealing his own philosophy.

Zuckerberg has cast off his premature skepticism and is ready to command the troops. He sees Facebook’s real identity policy as a powerful weapon for truth that other social networks lack, but one that would be weakened if Instagram and WhatsApp were split off by regulators. He’s done with the finger-pointing and wants everyone to work together on solutions. And he’s adopted a touch of cynicism that could open his eyes and help him predict how people will misuse his creation.

Here are the most important parts of Zuckerberg’s security manifesto:

Zuckerberg embraces his war-time tactician role

“While we want to move quickly when we identify a threat, it’s also important to wait until we uncover as much of the network as we can before we take accounts down to avoid tipping off our adversaries, who would otherwise take extra steps to cover their remaining tracks. And ideally, we time these takedowns to cause the maximum disruption to their operations.”

The fury he unleashed on Google+, Snapchat, and Facebook’s IPO-killer is now aimed at election attackers

“These are incredibly complex and important problems, and this has been an intense year. I am bringing the same focus and rigor to addressing these issues that I’ve brought to previous product challenges like shifting our services to mobile.”

Balancing free speech and security is complicated and expensive

“These issues are even harder because people don’t agree on what a good outcome looks like, or what tradeoffs are acceptable to make. When it comes to free expression, thoughtful people come to different conclusions about the right balances. When it comes to implementing a solution, certainly some investors disagree with my approach to invest so much in security.”

Putting Twitter and YouTube on blast for allowing pseudonymity…

“One advantage Facebook has is that we have a principle that you must use your real identity. This means we have a clear notion of what’s an authentic account. This is harder with services like Instagram, WhatsApp, Twitter, YouTube, iMessage, or any other service where you don’t need to provide your real identity.”

…While making an argument for why the Internet is more secure if Facebook isn’t broken up

“Fortunately, our systems are shared, so when we find bad actors on Facebook, we can also remove accounts linked to them on Instagram and WhatsApp as well. And where we can share information with other companies, we can also help them remove fake accounts too.”

Political ads aren’t a business, they’re supposedly a moral duty

“When deciding on this policy, we also discussed whether it would be better to ban political ads altogether. Initially, this seemed simple and attractive. But we decided against it — not due to money, as this new verification process is costly and so we no longer make any meaningful profit on political ads — but because we believe in giving people a voice. We didn’t want to take away an important tool many groups use to engage in the political process.”

Zuckerberg overruled staff to allow academic research on Facebook

“As a result of these controversies [like Cambridge Analytica], there was considerable concern amongst Facebook employees about allowing researchers to access data. Ultimately, I decided that the benefits of enabling this kind of academic research outweigh the risks. But we are dedicating significant resources to ensuring this research is conducted in a way that respects people’s privacy and meets the highest ethical standards.”

Calling on law enforcement to step up

“There are certain critical signals that only law enforcement has access to, like money flows. For example, our systems make it significantly harder to set up fake accounts or buy political ads from outside the country. But it would still be very difficult without additional intelligence for Facebook or others to figure out if a foreign adversary had set up a company in the US, wired money to it, and then registered an authentic account on our services and bought ads from the US.”

Instead of minimizing their own blame, the major players must unite forces

“Preventing election interference is bigger than any single organization. It’s now clear that everyone — governments, tech companies, and independent experts such as the Atlantic Council — need to do a better job sharing the signals and information they have to prevent abuse . . . The last point I’ll make is that we’re all in this together. The definition of success is that we stop cyberattacks and coordinated information operations before they can cause harm.”

The end of Zuckerberg’s utopian idealism

“One of the important lessons I’ve learned is that when you build services that connect billions of people across countries and cultures, you’re going to see all of the good humanity is capable of, and you’re also going to see people try to abuse those services in every way possible.”

Hate speech, collusion, and the constitution

Half an hour into their two-hour testimony on Wednesday before the Senate Intelligence Committee, Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey were asked about collaboration between social media companies. “Our collaboration has greatly increased,” Sandberg stated before turning to Dorsey and adding that Facebook has “always shared information with other companies.” Dorsey nodded in response, and noted for his part that he’s very open to establishing “a regular cadence with our industry peers.”

Social media companies have established extensive policies on what constitutes “hate speech” on their platforms. But discrepancies between these policies open the possibility for propagators of hate to game the platforms and still get their vitriol out to a large audience. Collaboration of the kind Sandberg and Dorsey discussed can lead to a more consistent approach to hate speech that will prevent the gaming of platforms’ policies.

But collaboration between competitors as dominant as Facebook and Twitter are in social media poses an important question: would antitrust or other laws make their coordination illegal?

The short answer is no. Facebook and Twitter are private companies that get to decide what user content stays and what gets deleted off of their platforms. When users sign up for these free services, they agree to abide by their terms. Neither company is under a First Amendment obligation to keep speech up. Nor can it be said that collaboration on platform safety policies amounts to collusion.

This could change based on an investigation into speech policing on social media platforms being considered by the Justice Department. But it’s extremely unlikely that Congress would end up regulating what platforms delete or keep online – not least because it may violate the First Amendment rights of the platforms themselves.

What is hate speech anyway?

Trying to find a universal definition for hate speech would be a fool’s errand, but in the context of private companies hosting user generated content, hate speech for social platforms is what they say is hate speech.

Facebook’s 26-page Community Standards include a whole section on how Facebook defines hate speech. For Facebook, hate speech is “anything that directly attacks people based on . . . their ‘protected characteristics’ — race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability or disease.” While that might be vague, Facebook then goes on to give specific examples of what would and wouldn’t amount to hate speech, all while making clear that there are cases – depending on the context – where speech will still be tolerated if, for example, it’s intended to raise awareness.

Twitter uses a “hateful conduct” prohibition which they define as promoting “violence against or directly attacking or threatening other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.” They also prohibit hateful imagery and display names, meaning it’s not just what you tweet but what you also display on your profile page that can count against you.

Both companies constantly iterate on and supplement their definitions as new test cases arise and as words take on new meaning. For example, two common slang words used by Russians to describe Ukrainians and by Ukrainians to describe Russians were determined to be hate speech after war erupted in Eastern Ukraine in 2014. An internal review by Facebook found that what used to be common slang had turned into derogatory, hateful language.

Would collaboration on hate speech amount to anticompetitive collusion?

Under U.S. antitrust laws, companies cannot collude to make anticompetitive agreements or try to monopolize a market. A company which becomes a monopoly by having a superior product in the marketplace doesn’t violate antitrust laws. What does violate the law is dominant companies making an agreement – usually in secret – to deceive or mislead competitors or consumers. Examples include price fixing, restricting new market entrants, or misrepresenting the independence of the relationship between competitors.

A Pew survey found that 68% of Americans use Facebook. According to Facebook’s own records, the platform had a whopping 1.47 billion daily active users on average for the month of June and 2.23 billion monthly active users as of the end of June – with over 200 million in the US alone. While Twitter doesn’t disclose its number of daily users, it does publish its number of monthly active users, which stood at 330 million at last count, 69 million of whom are in the U.S.

There can be no question that Facebook and Twitter are overwhelmingly dominant in the social media market. That kind of dominance has led to calls for breaking up these giants under antitrust laws.

Would those calls hold more credence if the two social giants began coordinating their policies on hate speech?

The answer is probably not, but it does depend on exactly how they coordinated. Social media companies like Facebook, Twitter, and Snapchat have grown large internal product policy teams that decide the rules for using their platforms, including on hate speech. If these teams were to get together behind closed doors and coordinate policies and enforcement in a way that would preclude smaller competitors from being able to enter the market, then antitrust regulators may get involved.

Antitrust would also come into play if, for example, Facebook and Twitter got together and decided to charge twice as much for advertising that includes hate speech (an obviously absurd scenario) – in other words, using their market power to affect pricing of certain types of speech that advertisers use.

In fact, coordination around hate speech may reduce anti-competitive concerns. Given the high user engagement around hate speech, banning it could lead to reduced profits for the two companies and provide an opening to upstart competitors.

Sandberg and Dorsey’s testimony Wednesday didn’t point to executives hell-bent on keeping competition out through collaboration. Rather, their potential collaboration is probably better seen as an industry deciding on “best practices,” a common occurrence in other industries including those with dominant market players.

What about the First Amendment?

Private companies are not subject to the First Amendment. The Constitution applies to the government, not to corporations. A private company, no matter its size, can ignore your right to free speech.

That’s why Facebook and Twitter already can and do delete posts that contravene their policies. Calling for the extermination of all immigrants, referring to Africans as coming from shithole countries, and even anti-gay protests at military funerals may be protected in public spaces, but social media companies get to decide whether they’ll allow any of that on their platforms. As Harvard Law School’s Noah Feldman has stated, “There’s no right to free speech on Twitter. The only rule is that Twitter Inc. gets to decide who speaks and listens–which is its right under the First Amendment.”

Instead, when it comes to social media and the First Amendment, courts have been more focused on not allowing the government to keep citizens off of social media. Just last year, the U.S. Supreme Court struck down a North Carolina law that made it a crime for a registered sex offender to access social media if children use that platform. During the hearing, judges asked the government probing questions about the rights of citizens to free speech on social media from Facebook, to Snapchat, to Twitter and even LinkedIn.

Justice Ruth Bader Ginsburg made clear during the hearing that restricting access to social media would mean “being cut off from a very large part of the marketplace of ideas [a]nd [that] the First Amendment includes not only the right to speak, but the right to receive information.”

The Court ended up deciding that the law violated the fundamental First Amendment principle that “all persons have access to places where they can speak and listen,” noting that social media has become one of the most important forums for expression of our day.

Lower courts have also ruled that public officials who block users off their profiles are violating the First Amendment rights of those users. Judge Naomi Reice Buchwald, of the Southern District of New York, decided in May that Trump’s Twitter feed is a public forum. As a result, she ruled that when Trump blocks citizens from viewing and replying to his posts, he violates their First Amendment rights.

The First Amendment doesn’t mean Facebook and Twitter are under any obligation to keep up whatever you post, but it does mean that the government can’t just ban you from accessing your Facebook or Twitter accounts – and probably can’t block you off of their own public accounts either.

Collaboration is Coming?

Sandberg made clear in her testimony on Wednesday that collaboration is already happening when it comes to keeping bad actors off of platforms. “We [already] get tips from each other. The faster we collaborate, the faster we share these tips with each other, the stronger our collective defenses will be.”

Dorsey for his part stressed that keeping bad actors off of social media “is not something we want to compete on.” Twitter is here “to contribute to a healthy public square, not compete to have the only one, we know that’s the only way our business thrives and helps us all defend against these new threats.”

He even went further. When it comes to the drafting of their policies, beyond collaborating with Facebook, he said he would be open to a public consultation. “We have real openness to this. . . . We have an opportunity to create more transparency with an eye to more accountability but also a more open way of working – a way of working for instance that allows for a review period by the public about how we think about our policies.”

I’ve already argued why tech firms should collaborate on hate speech policies; the question that remains is whether that would be legal. The First Amendment does not apply to social media companies. Antitrust laws don’t seem to stand in their way either. And based on how Senator Burr, Chairman of the Senate Select Committee on Intelligence, chose to close the hearing, the government seems supportive of social media companies collaborating. Addressing Sandberg and Dorsey, he said, “I would ask both of you. If there are any rules, such as any antitrust, FTC, regulations or guidelines that are obstacles to collaboration between you, I hope you’ll submit for the record where those obstacles are so we can look at the appropriate steps we can take as a committee to open those avenues up.”

Facebook, Twitter: US intelligence could help us more in fighting election interference

Facebook’s chief operating officer Sheryl Sandberg has admitted that the social networking giant could have done more to prevent foreign interference on its platforms, but said that the government also needs to step up its intelligence sharing efforts.

The remarks come ahead of an open hearing at the Senate Intelligence Committee on Wednesday, where Sandberg and Twitter chief executive Jack Dorsey will testify on foreign interference and election meddling on social media platforms. Google’s Larry Page was invited, but declined to attend.

“We were too slow to spot this and too slow to act,” said Sandberg in prepared remarks. “That’s on us.”

The hearing comes in the aftermath of Russian interference in the 2016 presidential election. Social media companies have been increasingly under the spotlight after foreign actors, believed to be working for or closely with the Russian government, used disinformation-spreading tactics to try to influence the outcome of that election, as well as the run-up to the midterm elections later this year.

Both Facebook and Twitter have removed accounts and bots from their sites believed to be involved in spreading disinformation and false news. Google said last year that it found Russian meddling efforts on its platforms.

“We’re getting better at finding and combating our adversaries, from financially motivated troll farms to sophisticated military intelligence operations,” said Sandberg.

But Facebook’s second-in-command also said that the US government could do more to help companies understand the wider picture of Russian interference.

“We continue to monitor our service for abuse and share information with law enforcement and others in our industry about these threats,” she said. “Our understanding of overall Russian activity in 2016 is limited because we do not have access to the information or investigative tools that the U.S. government and this Committee have.”

Later, Twitter’s Dorsey also said in his own statement: “The threat we face requires extensive partnership and collaboration with our government partners and industry peers,” adding: “We each possess information the other does not have, and the combined information is more powerful in combating these threats.”

Both Sandberg and Dorsey are subtly referring to classified information that the government has but private companies don’t get to see — information that is considered a state secret.

Tech companies have in recent years pushed for more access to knowledge that federal agencies have, not least to help protect against increasing cybersecurity threats and hostile nation-state actors. The theory goes that sharing intelligence can help companies defend against the best-resourced hackers. But efforts to introduce legislation have proven controversial, because critics argue that in sharing threat information with the government, private user data would also be collected and sent to US intelligence agencies for further investigation.

Instead, tech companies are now pushing for information from Homeland Security to better understand the threats they face — to independently fend off future attacks.

As reported, tech companies last month met in secret to discuss preparations to counter foreign manipulation on their platforms. But attendees, including Facebook, Twitter, Google and Microsoft, are said to have “left the meeting discouraged” that they received little insight from the government.

Instead of Larry Page, Google sends written testimony to tech’s Senate hearing

Silicon Valley is about to have another big moment before Congress. On Wednesday, Twitter’s Jack Dorsey and Facebook’s Sheryl Sandberg will go before the Senate Intelligence Committee to follow up on their work investigating (and hopefully thwarting) Russian government-linked campaigns to sow political division in the US. The hearing is titled “Foreign Influence Operations and Their Use of Social Media Platforms” and begins tomorrow morning at 9:30 AM ET.

It will be both Dorsey and Sandberg’s first time appearing before Congress on the high-stakes topic, but they’re not the only invitees. Alphabet CEO Larry Page was also called before the committee, though he is the only one of the three to decline to appear on Wednesday. Google also declined to send Sundar Pichai.

“Our SVP of Global Affairs and Chief Legal Officer, who reports directly to our CEO and is responsible for our work in this area, will be in Washington, D.C. on September 5, where he will deliver written testimony, brief Members of Congress on our work, and answer any questions they have,” a Google spokesperson told TechCrunch. “We had informed the Senate Intelligence Committee of this in late July and had understood that he would be an appropriate witness for this hearing.”

The spokesperson added that the company has briefed “dozens of committee members” and “briefed major Congressional Committees numerous times” regarding its efforts to safeguard US elections from interference originating abroad.

On Tuesday, Google published the written remarks it planned to deliver the following day in a blog post by Kent Walker, the company’s lead legal counsel and now SVP of global affairs.

In the statement, Google predictably reviews the steps it has taken to follow through on previous promises to Congress. Those steps include an ID verification program for anyone seeking to buy a federal US election ad from Google, in-ad disclosures attached to election ads across Google’s products, a transparency report specific to political ads on Google and a searchable ad library that allows anyone to view political ads for candidates in the US. As we previously reported, that database does not include issue-based ads or any ads from state or local races, so its utility is somewhat limited, though new ads will be added on an ongoing basis.

In the statement to Congress, Google also touted its Advanced Protection Program, an effort to discourage spear phishing campaigns, and Project Shield, a free DDoS protection service for US campaigns, candidates and political action committees. You can read the full statement, embedded below.

There’s not much surprising in the letter summarizing Google’s progress, nor does the company identify any particular shortcomings or specific areas of concern. That isn’t surprising either. For tech companies on Capitol Hill, the name of the game is ticking off each point of good behavior while divulging as little new information as possible.

Because the committee has decided that it’s heard plenty from Google’s lawyers already, the company’s chair will sit empty tomorrow. Needless to say, the committee — in particular its vice chairman, Sen. Mark Warner — isn’t happy about it. The committee is certainly right about one thing: during testimony, a company’s lead counsel is indistinguishable from an empty hot seat.

Tomorrow, we’ll get to see if Dorsey and Sandberg can pull off the same disappearing act. Considering Mark Zuckerberg’s enduring and even performance earlier this year and Facebook’s (in)famously composed public posture, Sandberg is certainly the favorite to make it out without breaking a sweat.

It’s time for Facebook and Twitter to coordinate efforts on hate speech

Since the election of Donald Trump in 2016, there has been burgeoning awareness of the hate speech on social media platforms like Facebook and Twitter. While activists have pressured these companies to improve their content moderation, few groups (outside of the German government) have outright sued the platforms for their actions.

That’s because of a legal distinction between media publications and media platforms that has made solving hate speech online a vexing problem.

Take, for instance, an op-ed published in the New York Times calling for the slaughter of an entire minority group.  The Times would likely be sued for publishing hate speech, and the plaintiffs may well be victorious in their case. Yet, if that op-ed were published in a Facebook post, a suit against Facebook would likely fail.

The reason for this disparity? Section 230 of the Communications Decency Act (CDA), which provides platforms like Facebook with a broad shield from liability when a lawsuit turns on what its users post or share. The latest uproar against Alex Jones and Infowars has led many to call for the repeal of section 230 – but that may lead to government getting into the business of regulating speech online. Instead, platforms should step up to the plate and coordinate their policies so that hate speech will be considered hate speech regardless of whether Jones uses Facebook, Twitter or YouTube to propagate his hate. 

A primer on section 230 

Section 230 is considered a bedrock of freedom of speech on the internet. Passed in the mid-1990s, it is credited with freeing platforms like Facebook, Twitter, and YouTube from the risk of being sued for content their users upload, and therefore powering the exponential growth of these companies. If it weren’t for section 230, today’s social media giants would have long been bogged down with suits based on what their users post, with the resulting necessary pre-vetting of posts likely crippling these companies altogether. 

Instead, in the more than twenty years since its enactment, courts have consistently found section 230 to be a bar to suing tech companies for user-generated content they host. And it’s not only social media platforms that have benefited from section 230; sharing economy companies have used section 230 to defend themselves, with the likes of Airbnb arguing they’re not responsible for what a host posts on their site. Courts have even found section 230 broad enough to cover dating apps. When a man sued one for not verifying the age of an underage user, the court tossed out the lawsuit finding an app user’s misrepresentation of his age not to be the app’s responsibility because of section 230.

Private regulation of hate speech 

Of course, section 230 has not meant that hate speech online has gone unchecked. Platforms like Facebook, YouTube and Twitter all have their own extensive policies prohibiting users from posting hate speech. Social media companies have hired thousands of moderators to enforce these policies and to hold violating users accountable by suspending them or blocking their access altogether. But the recent debacle with Alex Jones and Infowars presents a case study on how these policies can be inconsistently applied.  

Jones has for years fabricated conspiracy theories, like the claims that the Sandy Hook school shooting was a hoax and that Democrats run a global child-sex trafficking ring. With thousands of followers on Facebook, Twitter, and YouTube, Jones’ hate speech has had real-life consequences: from the brutal harassment of Sandy Hook parents to a gunman storming a pizza restaurant in D.C. to save kids from its nonexistent basement, his messages have done serious harm to many.

By our count, Alex Jones and Infowars have now been suspended from ten platforms – with even Twitter falling in line, suspending him for a week after first dithering. But the varying and delayed responses exposed how differently platforms handle the same speech.

Inconsistent application of hate speech rules across platforms, compounded by recent controversies over the spread of fake news and social media’s contribution to increased polarization, has led to calls to amend or repeal section 230. If the printed press and cable news can be held liable for propagating hate speech, the argument goes, then why should the same not be true online, especially when fully two-thirds of Americans now report getting at least some of their news from social media? Amid the chorus of those calling for more regulation of tech companies, section 230 has become a consistent target.

Should hate speech be regulated? 

But if you need convincing as to why the government is not best placed to regulate speech online, look no further than Congress’s own wording in section 230. The section, enacted in the mid-90s, states that online platforms “offer users a great degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops” and “a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.”

Section 230 goes on to declare it the “policy of the United States . . . to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet.” On the basis of these predictions, section 230 grants online platforms its now-infamous liability protection.

From the simple fact that most of what we see on social media is dictated by algorithms over which we have no control, to the Cambridge Analytica scandal, to the increased polarization driven by fake news, one can quickly see how Congress’s words in 1996 read today as a catalogue of inaccurate predictions. Even Ron Wyden, one of section 230’s original drafters, admits today that its authors never intended its protections to enable an “individual endorsing (or denying) the extermination of millions of people, or attacking the victims of horrific crimes or the parents of murdered children.”

It would be hard to argue that today’s Congress – having shown little understanding in recent hearings of how social media operates to begin with – is any better qualified to predict the effects of regulating online speech twenty years from now.

More importantly, the burden of complying with new regulations would create a significant barrier to entry for startups, with the unintended consequence of entrenching incumbents. While Facebook, YouTube, and Twitter may have the resources and infrastructure to handle the increased moderation or pre-vetting of posts that regulations might impose, smaller startups would be at a major disadvantage in keeping up with such a burden.

Last chance before regulation 

The answer has to lie with the online platforms themselves. Over the past two decades, they have amassed a wealth of experience in detecting and taking down hate speech. They have built up formidable teams with varied backgrounds to draft policies that take into account an ever-changing internet. Their profits have enabled them to hire away top talent, from government prosecutors to academics and human rights lawyers.  

These platforms also have been on a hiring spree in the last couple of years to ensure that their product policy teams – the ones that draft policies and oversee their enforcement – are more representative of society at large. Facebook proudly announced that its product policy team now includes “a former rape crisis counselor, an academic who has spent her career studying hate organizations . . . and a teacher.” Gone are the days when a bunch of engineers exclusively decided where to draw the lines. Big tech companies have been taking the drafting and enforcement of their policies ever more seriously.

What they now need to do is take the next step and start to coordinate policies so that those who wish to propagate hate speech can no longer game policies across platforms. Waiting for controversies like Infowars to become a full-fledged PR nightmare before taking concrete action will only increase calls for regulation. Proactively pooling resources when it comes to hate speech policies and establishing industry-wide standards will provide a defensible reason to resist direct government regulation.

The social media giants can also build public trust by helping startups get up to speed on the latest approaches to content moderation. Any industry consortium around hate speech policy is certain to be dominated by the largest tech companies, but those companies can ensure that its policies are easy to access and widely distributed.

Coordination between fierce competitors may sound counterintuitive. But the common problem of hate speech and the gaming of online platforms by those trying to propagate it call for an industry-wide response. Precedent exists for tech titans coordinating when faced with a common threat. Just last year, Facebook, Microsoft, Twitter, and YouTube formalized their “Global Internet Forum to Counter Terrorism” – a partnership to curb the threat of terrorist content online. Fighting hate speech is no less laudable a goal.
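What might that coordination look like in practice? The anti-terrorism partnership is widely reported to center on a shared database of hashes of known violating content that member platforms can check uploads against. Purely as a hypothetical sketch – every name below is invented, not any platform’s real API – the core mechanism fits in a few lines of Python:

    # Hypothetical sketch of cross-platform sharing of fingerprints of
    # removed content. A production system would use perceptual hashes so
    # that slightly altered re-uploads still match; SHA-256 keeps it simple.
    import hashlib

    class SharedHashRegistry:
        """An industry-wide registry of fingerprints of removed content."""

        def __init__(self) -> None:
            self._hashes: set[str] = set()

        @staticmethod
        def _fingerprint(content: bytes) -> str:
            return hashlib.sha256(content).hexdigest()

        def flag(self, content: bytes) -> None:
            # Called by the first platform to remove a piece of content.
            self._hashes.add(self._fingerprint(content))

        def is_flagged(self, content: bytes) -> bool:
            # Called by any member platform when new content is uploaded.
            return self._fingerprint(content) in self._hashes

    registry = SharedHashRegistry()
    registry.flag(b"video removed by platform A")
    print(registry.is_flagged(b"video removed by platform A"))  # True, on every platform
    print(registry.is_flagged(b"an unrelated upload"))          # False

The hard part, of course, is not the lookup but agreeing on what belongs in the registry – which is precisely the policy coordination argued for here.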

Self-regulation is an immense privilege. To the extent that big tech companies want to hold onto that privilege, they have a responsibility to coordinate the policies that underpin their regulation of speech and to enable startups and smaller tech companies to get access to these policies and enforcement mechanisms.

California lawmakers are one step closer to bringing back Obama-era net neutrality protections

California’s state Assembly voted 58-17 on Thursday to advance a bill, called S.B. 822, that would implement the strongest net neutrality provisions in the U.S.

The bill now heads back to the Senate for final approval. If a vote is not held by the end of the day tomorrow – the deadline for lawmakers to pass any legislation until 2019 – the bill won’t get an official green (or red) light until next year.

The bill, written by Democratic Senator Scott Wiener, would not only bring back the Obama-era net neutrality rules ousted by the FCC in December, but go a step further, adding new protections for internet users. It prohibits internet service providers from blocking or throttling lawful content, apps, services or non-harmful devices, and it bans paid prioritization: the practice of favoring some traffic over other traffic, directly or indirectly, typically in exchange for payment.

Here’s where it goes above and beyond the policy developed under the Obama administration: the bill also bans zero rating, which lets service providers exempt favored websites from a customer’s data usage while charging for data used everywhere else. If you want to dive deeper into the nitty-gritty, take a look at the full text of S.B. 822.
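To make the economics of zero rating concrete, here is a purely illustrative sketch – the domain names and figures are made up, and no real ISP’s billing system is this simple – of how zero-rated traffic escapes a customer’s data cap:

    # Hypothetical illustration of zero rating: traffic to an ISP's favored
    # services doesn't count against a customer's metered data cap, while
    # all other traffic does. Domains and figures are invented.
    ZERO_RATED_DOMAINS = {"partnervideo.example", "ispmusic.example"}

    def billable_bytes(traffic_log: list[tuple[str, int]]) -> int:
        """Sum the bytes that count against the data cap.

        traffic_log is a list of (domain, bytes_transferred) pairs.
        """
        return sum(
            nbytes
            for domain, nbytes in traffic_log
            if domain not in ZERO_RATED_DOMAINS
        )

    log = [
        ("partnervideo.example", 5_000_000_000),  # zero-rated: free to stream
        ("indievideo.example", 5_000_000_000),    # a competitor: eats the cap
    ]
    print(billable_bytes(log))  # 5000000000: only the competitor's traffic is billed

Two services consume identical bandwidth, but only the one outside the ISP’s favored set draws down the customer’s allotment – the tilted playing field that S.B. 822 targets.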

The decision is a blow to Comcast and AT&T, for obvious reasons. They’ve been advocates for ending net neutrality and had lobbied aggressively against the bill. Net neutrality lobbying groups, on the other hand, are pleased with the results.

“No one wants their cable or phone company to control what they see and do on the internet,” said Evan Greer, deputy director of Fight for the Future, a nonprofit advocacy group for digital rights, in a statement. “California just took a huge step toward restoring protections that prevent companies like AT&T and Comcast from screwing us all over more than they already do.”

“This historic Assembly vote is a testament to the power of the internet,” Greer added. “Big ISPs spent millions on campaign contributions, lobbyists and dark ads on social networks, but in the end, it was no match for the passion and dedication of net neutrality supporters using the internet to sound the alarm and mobilize.”

In December, the FCC voted to kill the Obama-era net neutrality regulations developed in 2015 to keep the internet open and fair. The agency is led by Chairman Ajit Pai, a Republican appointed to the role by President Donald Trump.

The decision from California’s Assembly comes a day after Northern California congressional members asked the FTC to investigate Verizon’s throttling of the Santa Clara County Fire Department, which had reportedly exceeded its monthly data allotment of 25 gigabytes while making calls and handling personnel issues as it fought a massive wildfire.