France improves stock options policies for startup employees

A couple of weeks ago, France’s digital minister Cédric O announced changes to stock option policies in France. President Emmanuel Macron is expected to talk about the new policy today ahead of the World Economic Forum.

While I don’t want to be too technical, here’s a quick overview of the changes.

First, the price of stock options (also known as BSPCE in France) no longer has to be based on the valuation set by VC investors. Let’s take an example — a VC fund invests in a Series A round, valuing the company at €12 million.

If you join the company after that round, you can now get stock options based on a separate, lower valuation reserved for employees, which increases the chances of higher returns.
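To see why pricing options off a lower valuation raises employee upside, here’s a toy calculation (a sketch with hypothetical numbers — only the €12 million Series A figure comes from the example above):

```python
# Toy illustration of why a lower strike valuation boosts employee upside.
# All numbers besides the €12M Series A figure are hypothetical.

def option_gain(exit_valuation: float, strike_valuation: float,
                ownership: float) -> float:
    """Gain on options priced at `strike_valuation` if the company
    exits at `exit_valuation`, for a given ownership fraction."""
    return max(exit_valuation - strike_valuation, 0.0) * ownership

exit_val = 100e6   # hypothetical €100M exit
stake = 0.001      # hypothetical 0.1% stake

# Strike pegged to the €12M Series A valuation vs. a lower €6M
# employee-specific valuation:
print(option_gain(exit_val, 12e6, stake))  # 88000.0
print(option_gain(exit_val, 6e6, stake))   # 94000.0
```

The same stake is worth more at exit simply because the starting reference point is lower.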

Second, until now, if you worked for a foreign startup but were based in France, you couldn’t receive stock options. For instance, if you were a Citymapper employee — a startup headquartered in London — working out of the Paris office, you could forget about stock options. Employees based in France can now receive stock options even if the company isn’t incorporated in France.

Third, the French Tech Visa now also works for foreign companies with an office in Paris. If you work for Berlin-based N26 and you want to hire a great Brazilian data scientist in your Paris office, you can now go through the fast-track visa process for startup employees.

Last year, VC firm Index Ventures coordinated an effort to overhaul stock option policies across Europe by lobbying policymakers. Hundreds of tech CEOs have signed the ‘Not Optional’ letter since then.

According to Index Ventures, Germany, Spain and Belgium are the lowest-ranked European countries when it comes to the regulatory framework around stock options.

TechCrunch’s Top 10 investigative reports from 2019

Facebook spying on teens, Twitter accounts hijacked by terrorists, and sexual abuse imagery found on Bing and Giphy were amongst the ugly truths revealed by TechCrunch’s investigative reporting in 2019. The tech industry needs more watchdogs than ever as its size magnifies the impact of safety failures and the abuse of power. Whether through malice, naivety, or greed, there was plenty of wrongdoing to sniff out.

Led by our security expert Zack Whittaker, TechCrunch undertook more long-form investigations this year to tackle these growing issues. Our coverage of fundraises, product launches, and glamorous exits only tells half the story. As perhaps the biggest and longest-running news outlet dedicated to startups (and the giants they become), we’re responsible for keeping these companies honest and pushing for a more ethical and transparent approach to technology.

If you have a tip potentially worthy of an investigation, contact TechCrunch at [email protected] or by using our anonymous tip line’s form.

Image: Bryce Durbin/TechCrunch

Here are our top 10 investigations from 2019, and their impact:

Facebook pays teens to spy on their data

Josh Constine’s landmark investigation discovered that Facebook was paying teens and adults $20 in gift cards per month to install a VPN that sent Facebook all their sensitive mobile data for market research purposes. The laundry list of problems with Facebook Research included not informing 187,000 users the data would go to Facebook until they signed up for “Project Atlas”, not receiving proper parental consent for over 4,300 minors, and threatening legal action if a user spoke publicly about the program. The program also abused Apple’s enterprise certificate program, which is designed only for distributing employee-only apps within companies, in order to avoid the App Store review process.

The fallout was enormous. Lawmakers wrote angry letters to Facebook. TechCrunch soon discovered a similar market research program from Google called Screenwise Meter that the company promptly shut down. Apple punished both Google and Facebook by shutting down all their employee-only apps for a day, causing office disruptions since Facebookers couldn’t access their shuttle schedule or lunch menu. Facebook tried to claim the program was above board, but finally succumbed to the backlash and shut down Facebook Research and all paid data collection programs for users under 18. Most importantly, the investigation led Facebook to shut down its Onavo app, which offered a VPN but in reality sucked in tons of mobile usage data to figure out which competitors to copy. Onavo helped Facebook realize it should acquire messaging rival WhatsApp for $19 billion, and it’s now at the center of anti-trust investigations into the company. TechCrunch’s reporting weakened Facebook’s exploitative market surveillance, pitted tech’s giants against each other, and raised the bar for transparency and ethics in data collection.

Protecting The WannaCry Kill Switch

Zack Whittaker’s profile of the heroes who helped save the internet from the fast-spreading WannaCry ransomware reveals the precarious nature of cybersecurity. The gripping tale documenting Marcus Hutchins’ benevolent work establishing the WannaCry kill switch may have contributed to a judge’s decision to sentence him to just one year of supervised release instead of 10 years in prison for an unrelated charge of creating malware as a teenager.

The dangers of Elon Musk’s tunnel

TechCrunch contributor Mark Harris’ investigation discovered inadequate emergency exits and more problems with Elon Musk’s plan for his Boring Company to build a Washington D.C.-to-Baltimore tunnel. Consulting fire safety and tunnel engineering experts, Harris built a strong case for why state and local governments should be suspicious of technology disrupters cutting corners in public infrastructure.

Bing image search is full of child abuse

Josh Constine’s investigation exposed how Bing’s image search results not only showed child sexual abuse imagery, but also suggested search terms to innocent users that would surface this illegal material. A tip led Constine to commission a report by anti-abuse startup AntiToxin (now L1ght), forcing Microsoft to commit to UK regulators that it would make significant changes to stop this from happening. However, a follow-up investigation by the New York Times citing TechCrunch’s report revealed Bing had made little progress.

Expelled despite exculpatory data

Zack Whittaker’s investigation surfaced contradictory evidence in a case of alleged grade tampering by Tufts student Tiffany Filler who was questionably expelled. The article casts significant doubt on the accusations, and that could help the student get a fair shot at future academic or professional endeavors.

Burned by an educational laptop

Natasha Lomas chronicled troubles at educational computer hardware startup pi-top, including a device malfunction that injured a U.S. student. An internal email revealed the student had suffered “a very nasty finger burn” from a pi-top 3 laptop designed to be disassembled. Reliability issues swelled and layoffs ensued. The report highlights how startups operating in the physical world, especially around sensitive populations like students, must make safety a top priority.

Giphy fails to block child abuse imagery

Sarah Perez and Zack Whittaker teamed up with child protection startup L1ght to expose Giphy’s negligence in blocking sexual abuse imagery. The report revealed how criminals used the site to share illegal imagery, which was then accidentally indexed by search engines. TechCrunch’s investigation demonstrated that it’s not just public tech giants who need to be more vigilant about their content.

Airbnb’s weakness on anti-discrimination

Megan Rose Dickey explored a botched case of discrimination policy enforcement by Airbnb, when a blind and deaf traveler’s reservation was cancelled because they had a guide dog. Airbnb tried to just “educate” the host who was accused of discrimination instead of levying any real punishment, until Dickey’s reporting pushed it to suspend the host for a month. The investigation reveals the lengths Airbnb goes to in order to protect its money-generating hosts, and how policy problems could mar its IPO.

Expired emails let terrorists tweet propaganda

Zack Whittaker discovered that Islamic State propaganda was being spread through hijacked Twitter accounts. His investigation revealed that if the email address associated with a Twitter account expired, attackers could re-register it to gain access and then receive password resets sent from Twitter. The article revealed the savvy but not necessarily sophisticated ways terrorist groups are exploiting big tech’s security shortcomings, and identified a dangerous loophole for all sites to close.
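The loophole is simple to reason about: if the domain behind an account’s recovery email lapses, anyone can re-register it and start receiving that account’s password resets. A minimal sketch of how a service might audit for this risk (the account data and domain list below are hypothetical; a real audit would query WHOIS/DNS rather than a hardcoded set):

```python
# Sketch: flag accounts whose recovery-email domain has lapsed, since an
# attacker could re-register the domain and receive password-reset emails.
# Accounts and domains here are hypothetical illustrations.

def email_domain(address: str) -> str:
    """Return the domain part of an email address, lowercased."""
    return address.rsplit("@", 1)[-1].lower()

def at_risk_accounts(accounts: dict, registered_domains: set) -> list:
    """Return usernames whose recovery email uses an unregistered domain."""
    return [
        user
        for user, email in accounts.items()
        if email_domain(email) not in registered_domains
    ]

if __name__ == "__main__":
    accounts = {
        "alice": "alice@example.com",
        "bob": "bob@lapsed-company.example",
    }
    # In practice this set would come from live WHOIS/DNS lookups.
    registered = {"example.com"}
    print(at_risk_accounts(accounts, registered))  # ['bob']
```

Closing the loophole on the service side means treating a reset link sent to a re-registered domain as untrusted, e.g. by requiring a second factor.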

Porn & gambling apps slip past Apple

Josh Constine found dozens of pornography and real-money gambling apps had broken Apple’s rules but avoided App Store review by abusing its enterprise certificate program — many based in China. The report revealed the weak and easily defrauded requirements to receive an enterprise certificate. Seven months later, Apple revealed a spike in porn and gambling app takedown requests from China. The investigation could push Apple to tighten its enterprise certificate policies, and proved the company has plenty of its own problems to handle despite CEO Tim Cook’s frequent jabs at the policies of other tech giants.

Bonus: HQ Trivia employees fired for trying to remove CEO

This Game Of Thrones-worthy tale was too intriguing to leave out, even if the impact was more of a warning to all startup executives. Josh Constine’s look inside gaming startup HQ Trivia revealed a saga of employee revolt in response to its CEO’s ineptitude and inaction as the company nose-dived. Employees who organized a petition to the board to remove the CEO were fired, leading to further talent departures and stagnation. The investigation served to remind startup executives that they are responsible to their employees, who can exert power through collective action or their exodus.

If you have a tip for Josh Constine, you can reach him via encrypted Signal or text at (585)750-5674, joshc at TechCrunch dot com, or through Twitter DMs

Taiwan’s entrepreneurs move forward after tense presidential election

Last Saturday, Taiwanese voters re-elected President Tsai Ing-wen to her second term after an election that split the country among generational and ideological lines. A crucial issue were the differences in how Tsai, a member of the Democratic Progressive Party (DPP), and her main opponent, Han Kuo-yu of the Kuomintang (KMT), approach Taiwan’s fraught relationship with China.

Despite the highly polarizing run-up to the election, however, both the DPP and KMT have taken measures to foster entrepreneurship in Taiwan. Now that Tsai has won, many investors don’t expect a dramatic impact, but instead are keeping an eye on how policies put in motion by both parties will play out. They are also looking for political allies who understand the startup ecosystem in Taiwan, which is often overshadowed by large hardware OEMs and semiconductor companies.

Policy

Joseph Huang, an investment partner at Infinity Ventures, has worked with both the DPP and KMT as limited partners, and says “from our side, they are always asking for how to create more awareness of Taiwan startups, how do we help them with institutions, how do we help them more?”

Is Instacart’s wider rollout of Pickup an attempt to rely less on gig workers?

Earlier today, Instacart more widely rolled out its Pickup product, which enables customers to retrieve groceries directly from stores. The announcement comes just a day after Instacart shoppers unveiled their latest action to #DeleteInstacart, another step in the ongoing series of protests against the grocery startup’s wage and tipping practices.

Next Monday, Instacart workers are asking customers and the general public to tweet at Instacart, telling the company they will delete Instacart until the company meets their demands. They wrote:

We have fought for fair pay, but Instacart continues to lower it. This current protest only has one small demand — to raise the app’s default tip amount back to 10%. This is the same default setting Instacart had originally, but the company has repeatedly lowered it (as well as resorted to outright theft) to take it away from us. Combined with their recent bonus-cutting act of retaliation, workers are now bleeding out of both sides — our pay is too low AND the default tip amount is too low.

In a statement, Instacart said it’s tested a number of default tip options over the years, including a 10% default, no default and a 5% default, which has been in place for the last two years.

“Ultimately, we believe customers should have the choice to determine the tip amount they choose to give a shopper based on the experience they have,” an Instacart spokesperson said. “The default amount serves as a baseline for a shopper’s potential tip, and can be increased to any amount by the customer.”

In light of a new California gig worker protections law, which Instacart opposes, the greater push into pickup services could be a way for the company to beef up its argument that gig workers are free from the control of Instacart, and that its part-time workers do the bulk of what Instacart says is its fastest-growing business.

Wear your helmet, concludes new study showing electric scooter injuries have more than tripled in the last four years

Taking a ride on an electric scooter soon? Wear your helmet! According to a recent study published in JAMA Surgery, failing to wear headgear or take other precautions while riding is increasingly sending young people to the hospital — leading to over 40,000 broken bones, head wounds and other injuries.

Unfortunately, fewer than 5% of riders in the study were found to be wearing a helmet, and nearly one-third of patients suffered a head injury. That’s more than double the rate of head injuries experienced by bicyclists.

The rise is likely due to the increasingly popular adoption of scooters among young people in urban areas. Electric scooter injuries among those aged 18 to 34 increased by 222% overall, and injuries sending riders to the hospital rose by 365% from 2014 to 2018, with the most dramatic increase in the last year. Close to two-thirds of those with scooter injuries were young men, and most were not wearing head protection.

“There was a high proportion of people with head injuries, which can be very dangerous,” said Breyer, an associate professor of urology and chief of urology at UCSF partner hospital Zuckerberg San Francisco General Hospital and Trauma Center. “Altogether, the near doubling of e-scooter trauma from 2017 to 2018 indicates that there should be better rider safety measures and regulation.”

Right now there doesn’t seem to be much in the way of requirements for headgear while scootering in California, thanks to a change in the law that went into effect at the beginning of last year. Those over the age of 18 who want to ride without a helmet are free and legal to do so in California. Several other states also don’t require helmets on a motorized scooter.

The laws may need an update after recent revelations, but in the meantime perhaps the scooter companies themselves can help ensure safety precautions. We reached out to several electric scooter companies and only heard back from a few about this issue. Lime tells TechCrunch it is committed to safety by encouraging users to wear a helmet, offering discounts to buy one and giving over 250,000 away as part of a campaign. Bird and others also encourage helmet wearing on their sites, and some companies offer helmets for rent at another location. But the promise of scooters is their convenience. You don’t have to carry anything. You just click on the app and hop on your ride. It’s too easy to just hop on a scooter without prior planning or helmet in tow.

So what’s the solution? Rider responsibility at this point. You’re free to take your chances but, though inconvenient, wearing your helmet on that scooter ride could prevent a serious accident.

“It’s been shown that helmet use is associated with a lower risk of head injury,” said first author Nikan K. Namiri, medical student at the UCSF School of Medicine. “We strongly believe that helmets should be worn, and e-scooter manufacturers should encourage helmet use by making them more easily accessible.”


Zuckerberg ditches annual challenges, but needs cynics to fix 2030

Mark Zuckerberg won’t be spending 2020 focused on wearing ties, learning Mandarin, or just fixing Facebook. “Rather than having year-to-year challenges, I’ve tried to think about what I hope the world and my life will look in 2030” he wrote today on Facebook. As you might have guessed, though, Zuckerberg’s vision for an improved planet involves a lot more of Facebook’s family of apps.

His biggest proclamations in today’s note include:

  • AR – Phones will remain the primary computing platform for most of the decade, but augmented reality could get devices out from between us so we can be present together — Facebook is building AR glasses
  • VR – Better virtual reality technology could address the housing crisis by letting people work from anywhere — Facebook is building Oculus
  • Privacy – The internet has created a global community where people find it hard to establish themselves as unique, so smaller online groups could make people feel special again – Facebook is building more private groups and messaging options
  • Regulation – The big questions facing technology are too thorny for private companies to address by themselves, so governments must step in around elections, content moderation, data portability, and privacy — Facebook is trying to self-regulate on these and everywhere else to deter overly onerous lawmaking


These are all reasonable predictions and suggestions. However, Zuckerberg’s post does little to address how the broadening of Facebook’s services in the 2010s also contributed to a lot of the problems he presents.

  • Isolation – Constant passive feed scrolling on Facebook and Instagram has created a way to seem like you’re being social without having true back-and-forth interaction with friends
  • Gentrification – Facebook’s shuttle-riding employees have driven up rents in cities around the world, especially in the Bay Area
  • Envy – Facebook’s algorithms can make anyone without a glamorous, Instagram-worthy life look less important, while hackers can steal accounts and its moderation systems can accidentally suspend profiles with little recourse for most users
  • Negligence – The growth-first mentality led Facebook’s policies and safety efforts to lag behind its impact, creating the kind of democracy, content, anti-competition, and privacy questions it’s now asking the government to answer for it

Noticeably absent from Zuckerberg’s post are explicit mentions of some of Facebook’s more controversial products and initiatives. He writes about “decentralizing opportunity” by giving small businesses commerce tools, but never mentions cryptocurrency, blockchain, or Libra directly. Instead he seems to suggest that Instagram storefronts, Messenger customer support, and WhatsApp remittances might be sufficient. He also largely leaves out Portal, Facebook’s smart screen that could help distant families stay closer, but that some see as a surveillance and data collection tool.

I’m glad Zuckerberg is taking his role as a public figure and the steward of one of humanity’s fundamental utilities more seriously. His willingness to even think about some of these long-term issues, instead of just quarterly profits, is important. Optimism is necessary to create what doesn’t exist.

Still, if Zuckerberg wants 2030 to look better for the world, and for the world to look more kindly on Facebook, he may need to hire more skeptics and cynics who see a dystopian future instead. Their foresight on where societal problems could arise from Facebook’s products could help temper Zuckerberg’s team of idealists, creating a company that balances the potential of the future with the risks to the present.

Every new year of the last decade I set a personal challenge. My goal was to grow in new ways outside my day-to-day work…

Posted by Mark Zuckerberg on Thursday, January 9, 2020

How gig economy giants are trying to keep workers classified as independent contractors

Now that 2020 has started, Uber, DoorDash and Lyft are taking additional steps to undermine a new California law that would help more gig workers qualify as full-time employees. These moves entail product changes, lawsuits and ramped-up efforts to get a ballot initiative in front of voters that would roll back the new legislation.

Let’s start with the most recent development: yesterday, Uber sent a note to users announcing that it’s getting rid of upfront pricing in favor of estimated prices, except for Uber Pool rides.

“Due to a new state law, we are making some changes to help ensure that Uber remains a dependable source of flexible work for California drivers,” Uber wrote in an email to customers. “These changes may take some getting used to, but our goal is to keep Uber available to as many qualified drivers as possible, without restricting the number of drivers who can work at a given time.”

Uber says it also has to discontinue rewards benefits like price protection on a route and flexible cancellations for trips in California. Drivers, meanwhile, won’t see estimated earnings, and those in surge areas will no longer see fixed dollar amounts.

“AB5 threatens to restrict or eliminate opportunities for independent workers across a wide spectrum of industries, including trucking, freelance journalism and ridesharing,” an Uber spokesperson told TechCrunch. “As a result of AB5, we’ve made a number of product changes to preserve flexible work for tens of thousands of California drivers. At the same time, we’ve put forward a progressive package of new protections for drivers, including guaranteed minimum earnings and benefits, so voters can choose to truly improve flexible work in November.”

While Uber is essentially saying this is something the company must do, it’s worth noting that this is not some requirement of the new law; this is Uber’s attempt to beef up its case that it’s legally allowed to classify drivers as independent contractors. Since much of the rationale for determining whether or not a worker is an employee comes down to control, removing upfront fares and ditching penalties for rejecting fares could help Uber make a case that its drivers are operating of their own accord.

Facebook won’t ban political ads, prefers to keep screwing democracy

It’s 2020 — a key election year in the US — and Facebook is doubling down on its policy of letting people pay it to fuck around with democracy.

Despite trenchant criticism — including from US lawmakers accusing Facebook’s CEO to his face of damaging American democracy — the company is digging in, announcing as much today by reiterating its defence of continuing to accept money to run microtargeted political ads.

Instead of banning political ads Facebook is trumpeting a few tweaks to the information it lets users see about political ads — claiming it’s boosting “transparency” and “controls” while leaving its users vulnerable to default settings that offer neither.  

Political ads running on Facebook are able to be targeted at individuals’ preferences as a result of the company’s pervasive tracking and profiling of Internet users. And ethical concerns about microtargeting led the UK’s data protection watchdog to call in 2018 for a pause on the use of digital ad tools like Facebook by political campaigns — warning of grave risks to democracy.

Facebook isn’t for pausing political microtargeting, though, even as various elements of its data-gathering activities face privacy and consent complaints, regulatory scrutiny and legal challenges in Europe under regional data protection legislation.

Instead, the company made it clear last fall that it won’t fact-check political ads, nor block political messages that violate its speech policies — thereby giving politicians carte blanche to run hateful lies, if they so choose.

Facebook’s algorithms also demonstrably select for maximum eyeball engagement, making it simply the ‘smart choice’ for the modern digitally campaigning politician to run outrageous BS on Facebook — as longtime Facebook exec Andrew Bosworth recently pointed out in an internal posting that leaked in full to the NYT.

Facebook founder Mark Zuckerberg’s defence of his social network’s political ads policy boils down to repeatedly claiming ‘it’s all free speech man’ (we paraphrase).

This is an entirely nuance-free argument that comedian Sacha Baron Cohen expertly demolished last year, pointing out that: “Under this twisted logic if Facebook were around in the 1930s it would have allowed Hitler to post 30-second ads on his solution to the ‘Jewish problem.’”

Facebook responded to the take-down with a denial that hate speech exists on its platform since it has a policy against it — per its typical crisis PR playbook. And it’s more of the same selectively self-serving arguments being dispensed by Facebook today.

In a blog post attributed to its director of product management, Rob Leathern, Facebook expends more than 1,000 words on why it’s still not banning political ads (doing so would be bad for advertisers wanting to reach “key audiences”, is the non-specific claim). It also makes a diversionary call for regulators to set ad standards, thereby passing the buck on ‘democratic accountability’ to lawmakers (whose electability might very well depend on how many Facebook ads they run…), while spinning cosmetic, made-for-PR tweaks to its ad settings, and to what’s displayed in an ad archive most Facebook users will never have heard of, as “expanded transparency” and “more control”.

In fact these tweaks do nothing to reform the fundamental problem of damaging defaults.

The onus remains on Facebook users to do the leg work on understanding what its platform is pushing at their eyeballs and why.

Meanwhile, the ‘extra’ info now being drip-fed to the Ad Library remains highly fuzzy. (“We are adding ranges for Potential Reach, which is the estimated target audience size for each political, electoral or social issue ad so you can see how many people an advertiser wanted to reach with every ad,” as Facebook writes of one tweak.)

The new controls similarly require users to delve into complex settings menus in order to avail themselves of inherently incremental limits — such as an option that will let people opt into seeing “fewer” political and social issue ads. (Fewer is naturally relative, ergo the scale of the reduction remains entirely within Facebook’s control — so it’s more meaningless ‘control theatre’ from the lord of dark pattern design. Why can’t people switch off political and issue ads entirely?)

Another incremental setting lets users “stop seeing ads based on an advertiser’s Custom Audience from a list”.

But just imagine trying to explain WTF that means to your parents or grandparents — let alone an average Internet user actually being able to track down the ‘control’ and exercise any meaningful agency over the political junk ads they’re being exposed to on Facebook.

It is, to quote Baron Cohen, “bullshit”.

Nor are outsiders the only ones calling out Zuckerberg on his BS and “twisted logic”: A number of Facebook’s own employees warned in an open letter last year that allowing politicians to lie in Facebook ads essentially weaponizes the platform.

They also argued that the platform’s advanced targeting and behavioral tracking tools make it “hard for people in the electorate to participate in the public scrutiny that we’re saying comes along with political speech” — accusing the company’s leadership of making disingenuous arguments in defence of a toxic, anti-democratic policy. 

Nothing in what Facebook has announced today resets the anti-democratic asymmetry inherent in the platform’s relationship to its users.

Facebook users — and democratic societies — remain, by default, preyed upon by self-interested political interests thanks to Facebook’s policies which are dressed up in a self-interested misappropriation of ‘free speech’ as a cloak for its unfettered exploitation of individual attention as fuel for a propaganda-as-service business.

Yet other policy positions are available.

Twitter announced a total ban on political ads last year — and while the move doesn’t resolve wider disinformation issues attached to its platform, the decision to bar political ads has been widely lauded as a positive, standard-setting example.

Google also followed suit by announcing a ban on “demonstrably false claims” in political ads. It also put limits on the targeting terms that can be used for political advertising buys that appear in search, on display ads and on YouTube.

Still Facebook prefers to exploit “the absence of regulation”, as its blog post puts it, to not do the right thing and keep sticking two fingers up at democratic accountability — because not applying limits on behavioral advertising best serves its business interests. Screw democracy.

“We have based [our policies] on the principle that people should be able to hear from those who wish to lead them, warts and all, and that what they say should be scrutinized and debated in public,” Facebook writes, ignoring the fact that some of its own staff already pointed out the sketchy hypocrisy of trying to claim that complex ad targeting tools and techniques are open to public scrutiny.

Twitter’s new reply blockers could let Trump hide critics

What if politicians could only display Twitter replies from their supporters while stopping everyone else from adding their analysis to the conversation? That’s the risk of Twitter’s upcoming Conversation Participants tool, which it’s about to start testing and which lets you choose whether you want replies from everyone, only those you follow or mention, or no one.

For most, the reply limiter could help repel trolls and harassment. Unfortunately, it still puts the burden of safety on the victims rather than the villains. Instead of rooting out abusers, Twitter wants us to retreat and wall off our tweets from everyone we don’t know. That could reduce the spontaneous yet civil reply chains between strangers that are part of what makes Twitter so powerful.

But in the hands of politicians hoping to avoid scrutiny, the tools could make it appear that their tweets and policies are uniformly supported. By only allowing their sycophants to add replies below their posts, anyone reading along will be exposed to a uniformity of opinion that clashes with Twitter’s position as a marketplace of ideas.

We’ve reached out to Twitter for comment on this issue and whether anyone such as politicians would be prevented from using the new reply-limiting tools. Twitter plans to test the reply-selection tool in Q1 and make modifications if necessary before rolling it out.

Here’s how the new Conversation Participants feature works, according to the preview shared by Twitter’s Suzanne Xie at CES today, though it could change during testing. When users go to tweet, they’ll have the option of selecting who can reply, unlike now when everyone can leave replies but authors can hide certain ones that viewers can opt to reveal. Conversation Participants offers four options:

Global: Replies from anyone

Group: Replies from those you follow or mention in this tweet

Panel: Replies from only those you mention in this tweet

Statement: No replies allowed
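Functionally, the four options amount to a per-tweet permission check. Here’s a hypothetical sketch of that logic (the names and rules are my reading of the preview, not Twitter’s actual implementation or API):

```python
# Hypothetical model of the Conversation Participants reply check,
# based on the four options Twitter previewed. Not Twitter's real API.
from enum import Enum

class ReplySetting(Enum):
    GLOBAL = "global"        # replies from anyone
    GROUP = "group"          # those the author follows or mentions
    PANEL = "panel"          # only those mentioned in the tweet
    STATEMENT = "statement"  # no replies allowed

def can_reply(setting, replier, author_follows, mentioned):
    """Return True if `replier` may reply to a tweet with this setting."""
    if setting is ReplySetting.GLOBAL:
        return True
    if setting is ReplySetting.GROUP:
        return replier in author_follows or replier in mentioned
    if setting is ReplySetting.PANEL:
        return replier in mentioned
    return False  # STATEMENT: nobody may reply

# A Group-only tweet lets followed accounts reply but silences everyone else.
follows, mentions = {"hannity"}, set()
print(can_reply(ReplySetting.GROUP, "hannity", follows, mentions))  # True
print(can_reply(ReplySetting.GROUP, "critic", follows, mentions))   # False
```

The sketch makes the concern below concrete: under Group, the author’s follow list alone decides whose replies exist.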

Now imagine President Trump opts to make all of his tweets Group-only. Only those who support him and he therefore follows — like his sons, Fox News’ Sean Hannity and his campaign team — could reply. Gone would be the reels of critics fact-checking his statements or arguing against his policies. His tweets would be safeguarded from reproach, establishing an echo chamber filter bubble for his acolytes.

It’s true that some of these responses from the public might constitute abuse or harassment. But those should be dealt with specifically through strong policy and consistent enforcement of adequate punishments when rules are broken. By instead focusing on stopping replies from huge swaths of the community, the secondary effects have the potential to prop up politicians that consistently lie and undam the flow of misinformation.

There’s also the practical matter that this won’t stop abuse, it will merely move it. Civil discussion will be harder to find for the rest of the public, but harassers will still reach their targets. Users blocked from replying to specific tweets can just tweet directly at the author. They can also continue to mention the author separately or screenshot their tweets and then discuss them.

It’s possible that U.S. law prevents politicians from discriminating against citizens with different viewpoints by restricting their access to the politician’s comments on a public forum. Judges ruled this makes it illegal for Trump to block people on social media. But this new tool is different: anyone could still see the tweets and reply to the author separately, so not being followed by the author likely doesn’t count as discrimination the way blocking does, and use of the Conversation Participants tool could be permissible. Someone could sue to push the issue to the courts, though, and judges might be wise to deem this unconstitutional.

Again, this is why Twitter needs to refocus on cleaning up its community rather than only letting people build tiny, temporary shelters from the abuse. It could consider blocking replies and mentions from brand new accounts without sufficient engagement or a linked phone number, as I suggested in 2017. It could also create a new mid-point punishment of a “time-out” from sending replies for harassment that it (sometimes questionably) deems below the threshold of an account suspension.

The combination of Twitter’s decade of weakness in the face of trolls and a new political landscape of normalized misinformation threatens to overwhelm its attempts to get a handle on safety.


Will online privacy make a comeback in 2020?

Last year was a landmark for online privacy in many ways, with something of a consensus emerging that consumers deserve protection from the companies that sell their attention and behavior for profit.

The debate now is largely around how to regulate platforms, not whether it needs to happen.

The consensus among key legislators acknowledges that privacy is not just of benefit to individuals but can be likened to public health; a level of protection afforded to each of us helps inoculate democratic societies from manipulation by vested and vicious interests.

The fact that human rights are being systematically abused at population-scale because of the pervasive profiling of Internet users — a surveillance business that’s dominated in the West by tech giants Facebook and Google, and the adtech and data broker industry which works to feed them — was the subject of an Amnesty International report in November 2019 that urges legislators to take a human rights-based approach to setting rules for Internet companies.

“It is now evident that the era of self-regulation in the tech sector is coming to an end,” the charity predicted.

Democracy disrupted

The dystopian outgrowth of surveillance capitalism was certainly in awful evidence in 2019, with elections around the world attacked at cheap scale by malicious propaganda that relies on adtech platforms’ targeting tools to hijack and skew public debate, while the chaos agents themselves are shielded from democratic view.

Platform algorithms are also still encouraging Internet eyeballs towards polarized and extremist views by feeding a radicalized, data-driven diet that panders to prejudices in the name of maintaining engagement — despite plenty of raised voices calling out the programmed antisocial behavior. So what tweaks there have been still look like fiddling round the edges of an existential problem.

Worse still, vulnerable groups remain at the mercy of online hate speech which platforms not only can’t (or won’t) weed out, but whose algorithms often seem to deliberately choose to amplify — the technology itself being complicit in whipping up violence against minorities. It’s social division as a profit-turning service.

The outrage-loving tilt of these attention-hogging adtech giants has also continued directly influencing political campaigning in the West this year — with cynical attempts to steal votes by shamelessly platforming and amplifying misinformation.

From the Trump tweet-bomb we now see full-blown digital disops underpinning entire election campaigns, such as the UK Conservative Party’s strategy in the 2019 winter General Election, which featured doctored videos seeded to social media and keyword targeted attack ads pointing to outright online fakes in a bid to hack voters’ opinions.

Political microtargeting divides the electorate as a strategy to conquer the poll. The problem is it’s inherently anti-democratic.

No wonder, then, that repeat calls to beef up digital campaigning rules and properly protect voters’ data have so far fallen on deaf ears. The political parties all have their hands in the voter data cookie-jar. Yet it’s elected politicians whom we rely upon to update the law. This remains a grave problem for democracies going into 2020 — and a looming U.S. presidential election.

So it’s been a year when, even with rising awareness of the societal cost of letting platforms suck up everyone’s data and repurpose it to sell population-scale manipulation, not much has actually changed. Certainly not enough.

Yet looking ahead there are signs the writing is on the wall for the ‘data industrial complex’ — or at least that change is coming. Privacy can make a comeback.

Adtech under attack

Developments in late 2019 such as Twitter banning all political ads and Google shrinking how political advertisers can microtarget Internet users are notable steps — even as they don’t go far enough.

But it’s also a relatively short hop from banning microtargeting sometimes to banning profiling for ad targeting entirely.

Alternative online ad models (contextual targeting) are proven and profitable; just ask search engine DuckDuckGo. The ad industry gospel that only behavioral targeting will do now has academic critics who suggest it offers far less uplift than claimed, even as, in Europe, scores of data protection complaints underline the high individual cost of maintaining the status quo.

Startups are also innovating in the pro-privacy adtech space (see, for example, the Brave browser).

Changing the system — turning the adtech tanker — will take huge effort, but there is a growing opportunity for just such systemic change.

This year, it might be too much to hope that regulators will get their act together enough to outlaw consent-less profiling of Internet users entirely. But it may be that those who have sought to proclaim ‘privacy is dead’ will find their unchecked data gathering facing death by a thousand regulatory cuts.

Or, tech giants like Facebook and Google may simply outrun the regulators by reengineering their platforms to cloak vast personal data empires with end-to-end encryption, making it harder for outsiders to regulate them, even as they retain enough of a fix on the metadata to stay in the surveillance business. Fixing that would likely require much more radical regulatory intervention.

European regulators are, whether they like it or not, in this race and under major pressure to enforce the bloc’s existing data protection framework. It seems likely to ding some current-gen digital tracking and targeting practices. And depending on how key decisions on a number of strategic GDPR complaints go, 2020 could see an unpicking — great or otherwise — of components of adtech’s dysfunctional ‘norm’.

Among the technologies under investigation in the region is real-time bidding, a system that powers a large chunk of programmatic digital advertising.

The complaint here is that it breaches the bloc’s General Data Protection Regulation (GDPR) because it’s inherently insecure to broadcast granular personal data to scores of entities involved in the bidding chain.
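To make the complaint concrete, here is a simplified sketch of the kind of personal data a bid request can carry and how it fans out to every bidder. The structure is loosely modeled on the OpenRTB format; the field values and the `recipients` helper are illustrative, not an exact reproduction of any exchange’s implementation:

```python
# A stripped-down bid request, loosely modeled on OpenRTB.
# Values are made up for illustration.
bid_request = {
    "id": "req-123",
    "user": {
        "id": "8b7c2f",                # pseudonymous but persistent user ID
        "buyeruid": "dsp-cookie-456",  # ID synced with a specific bidder
    },
    "device": {
        "ip": "203.0.113.7",           # IP address (implies coarse location)
        "ua": "Mozilla/5.0 ...",       # user agent string
        "geo": {"lat": 51.5, "lon": -0.1},
    },
    # The page URL itself can reveal sensitive interests.
    "site": {"page": "https://example.com/health/anxiety-advice"},
}

def recipients(exchange_partners):
    """Every partner in the bidding chain receives the full request,
    whether or not it ultimately wins the auction."""
    return {partner: bid_request for partner in exchange_partners}
```

The point the complainants make is visible in `recipients`: the data is broadcast to all partners up front, with no technical control over what the losing bidders do with it afterwards.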

A recent event held by the UK’s data watchdog confirmed plenty of troubling findings. Google responded by removing some information from bid requests — though critics say it does not go far enough. Nothing short of removing personal data entirely will do in their view, which sums to ads that are contextually (not micro)targeted.

Powers that EU data protection watchdogs have at their disposal to deal with violations include not just big fines but data processing orders — which means corrective relief could be coming to take chunks out of data-dependent business models.

As noted above, the adtech industry has already been put on watch this year over current practices, even as it was given a generous half-year grace period to adapt.

In the event it seems likely that turning the ship will take longer. But the message is clear: change is coming. The UK watchdog is due to publish another report in 2020, based on its review of the sector. Expect that to further dial up the pressure on adtech.

Web browsers have also been doing their bit by baking in more tracker blocking by default. And this summer Marketing Land proclaimed the third party cookie dead — asking what’s next?

Alternatives and workarounds are springing up, and more will follow (such as stuffing more data into first-party cookies). But the notion of tracking by background default is under attack, if not quite yet coming unstuck.

Ireland’s DPC is also progressing on a formal investigation of Google’s online Ad Exchange. Further real-time bidding complaints have been lodged across the EU too. This is an issue that won’t be going away soon, however much the adtech industry might wish it.

Year of the GDPR banhammer?

2020 is the year that privacy advocates are really hoping that Europe will bring down the hammer of regulatory enforcement. Thousands of complaints have been filed since the GDPR came into force but precious few decisions have been handed down. Next year looks set to be decisive — even potentially make or break for the data protection regime.