Uber is facing Australian class action suit alleging ‘unlawful conduct’

As it gears up to go public, Uber is facing legacy baggage down under: a class action lawsuit has been filed in Australia on behalf of around 6,000 taxi and hire car drivers and license owners, Reuters reported Friday.

The suit was filed Friday at the Victoria Supreme Court by personal injury and compensation law firm, Maurice Blackburn. It’s seeking compensation on behalf of thousands of taxi and hire car drivers and operators who believe they lost income or saw a fall in the value of their licence as a result of what it dubs “Uber’s unlawful conduct”.

The firm is still registering additional participants online — specifically those who were licensed to operate in four states, Victoria, Western Australia, New South Wales and Queensland, between a selection of dates spanning 2014 to 2017.

The argument behind the case is that Uber started operating illegally in the four states in 2014, by offering its UberX service which used vehicles and drivers without “the proper licences, accreditations and authorizations”, as it puts it — thereby leading to a drop in income and licence value for the plaintiffs in the class action. 

State laws were subsequently changed to put ride-hailing on a lawful footing so the case is focused on Uber’s past business conduct, with Maurice Blackburn alleging it operated unlawfully in each of the states for a period of time — hence the varying dates for registering participants. 

In a press release the firm writes that the case has been about 18 months in the making, noting too that the ‘no win, no fee’ class action is being underwritten by “one of the world’s largest litigation funders, Harbour”.

“Make no mistake, this will be a landmark case regarding the alleged illegal operations of Uber in Australia and the devastating impact that has had on the lives of hard-working and law-abiding citizens here,” said Maurice Blackburn’s national head of class actions, Andrew Watson, in a statement.

“It is not acceptable for a business to place itself above the law and operate illegally to the disadvantage of others. We’ve got a strong case, a strong team and substantial support from thousands of drivers, operators and licence owners nationwide,” he added.

The firm takes the view it has a better chance of winning compensation for plaintiffs by suing Uber, rather than the government for failing to enforce relevant regulations — pointing, for example, to Uber’s use of the controversial ‘Greyball’ software, which it describes as a “devious program”.

In 2017 The New York Times reported that Uber was using the software to identify code enforcement officers and city officials who were trying to gather data on its service in areas where it was prohibited, and to block their access so they could not enforce local rules.

“Uber sells the idea that it does things differently, but in reality and as we allege, this has meant operating unlawfully, using devious programs like ‘Greyball’. All of this caused extensive loss and damage to law-abiding taxi and hire car drivers, operators and licence holders across the country,” said senior associate at Maurice Blackburn, Elizabeth O’Shea, in another supporting statement.

“Uber came in and exploited people by operating outside of regulations and it was Uber’s conduct that led to horrible losses being suffered by our group members. For those reasons, we are targeting the multi-billion dollar company Uber and its associated entities to provide redress to those affected.”

The firm’s PR also includes a statement from lead plaintiff, Nick Andrianakis, a taxi driver, operator and licence owner from Brunswick, Melbourne, describing the impact of “a life’s work being stripped away from you”.

We’ve reached out to Uber for comment on the class action suit.

In a statement given to Reuters the company denied it operated illegally, telling the news agency: “Uber denies this allegation and, if a claim is served making it, the claim will be vigorously defended.”

The law firm told the news agency that the level of damages being sought could run into “hundreds of millions of dollars” — while emphasizing that any compensation would be determined as part of the case or via settlement negotiations.

While Uber’s statement to Reuters implies it has no intention of seeking a settlement to make this latest legacy legal headache go away, two months ago it did just that in the case of a separate US class-action focused on driver pay and benefits.

In that case Uber agreed to pay $20 million to settle a suit, brought six years ago, which had claimed Uber classified its drivers as contractors to avoid paying them a minimum wage and providing benefits.

That $20M is considerably less than Uber might have been on the hook for had an appeals court not overturned an earlier decision to grant class-action status to hundreds of thousands of drivers in California and Massachusetts — ruling instead that its arbitration agreements were valid and enforceable.

That decision reduced the number of drivers in the suit to around 13,600.

China’s startup ecosystem is hitting back at demanding working hours

In China, the law limits work to 44 hours a week and requires overtime pay for anything above that. But many companies aren’t following the rules, and a rare online movement is putting a spotlight on extended work hours in China’s booming tech sector. People from all corners of society have rallied in support of improvements to startup working conditions, while some warn of hurdles in a culture ingrained with the belief that more work leads to greater success.

In late March, anonymous activists introduced 996.ICU, a domain name that represents the grueling life of Chinese programmers, who work from 9 am to 9 pm, six days a week, at the risk of ending up in the ICU, a hospital’s intensive care unit. The site details local labor laws that explicitly prohibit overtime work without pay. The slogan “Developers’ lives matter” appears at the bottom in solemn silence.

A project called 996.ICU soon followed on GitHub, the Microsoft-owned code and tool sharing site. Programmers flocked to air their grievances, compiling a list of Chinese companies that reportedly practice the 996 schedule. Among them were major names like e-commerce leaders Alibaba, JD.com and Pinduoduo, as well as telecoms equipment maker Huawei and Bytedance, the parent company of the red-hot short video app TikTok.

In an email response to TechCrunch, JD claimed it doesn’t force employees to work overtime.

“JD.com is a competitive workplace that rewards initiative and hard work, which is consistent with our entrepreneurial roots. We’re getting back to those roots as we seek, develop and reward staff who share the same hunger and values,” the spokesperson said.

Alibaba declined to comment on the GitHub movement, although founder Jack Ma shared on Weibo Friday his view on the 996 regime.

“No companies should or can force employees into working 996,” wrote Ma. “But young people need to understand that happiness comes from hard work. I don’t defend 996, but I pay my respect to hard workers!”

Bytedance declined to comment on whether its employees work 996. We contacted Huawei but had not heard back from the company at the time of writing.

996.ICU rapidly rocketed to become the most-starred project on GitHub, which claims to be the world’s largest host of source code. The protest certainly turned heads among tech bosses, as China-based users soon noticed that a number of browsers owned by companies practicing 996 had restricted access to the webpage.
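GitHub’s public REST API reports a repository’s star count, so the project’s rise could be tracked programmatically with nothing more than an unauthenticated request. A minimal sketch, assuming the standard `GET /repos/{owner}/{repo}` endpoint; the JSON payload below is an abbreviated, illustrative response, and the star figure in it is made up:

```python
import json

def star_count(repo_json: str) -> int:
    """Extract the stargazers_count field from a GitHub /repos API response."""
    return json.loads(repo_json)["stargazers_count"]

# Abbreviated example of what GET https://api.github.com/repos/996icu/996.ICU
# returns; the count here is illustrative, not a real measurement.
sample = '{"full_name": "996icu/996.ICU", "stargazers_count": 248000}'
print(star_count(sample))
```

Polling that endpoint periodically is how third-party dashboards charted 996.ICU overtaking long-standing projects within days of its creation.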

The 996 dilemma

The 996 list is far from exhaustive, as it consists of voluntary entries from GitHub users. It’s also hard to nail down the average work hours at a firm, especially a behemoth with tens of thousands of employees where policies can differ across departments. For instance, it’s widely acknowledged that developers work longer than their peers in other units. Anecdotally, TechCrunch has heard that bosses in some organizations often find ways to exploit loopholes, such as setting unrealistic KPIs without explicitly writing 996 into employee contracts.

“While our company doesn’t force us into 996, sometimes, poor planning from upper management forces us to work long hours to meet arbitrary management deadlines,” a Beijing-based engineer at a professional networking site told TechCrunch. This person is one of many sources who spoke anonymously because they are not authorized to speak to media.

Photo: a passenger on a train in the Beijing subway, April 25, 2018. Donat Sorokin/TASS via Getty Images

Other companies are more vocal about 996, taking pride in their excessively diligent culture. Youzan, the Tencent-backed, Shopify-like e-commerce solution provider, explicitly demanded that staff adopt the 996 work style. Employees subsequently filed complaints in January with local labor authorities, which were said to have launched an investigation into Youzan.

Many companies are like Youzan in equating long hours with success. That mindset can easily lure programmers or other staff into accepting extra work time. But employees are hardly the only ones burning out, as entrepreneurs are under even greater pressure to grow the businesses they built from scratch.

“The recent debate over 996 brings to light the intense competition in China’s tech industry. To survive, startups and large companies have no choice but to work extremely hard. Some renowned entrepreneurs even work over 100 hours a week,” Jake Xie, vice president of investment at China Growth Capital, an early-stage venture fund, told TechCrunch.

“Overtime is a norm at many internet companies. If we don’t work more, we fall behind,” said a founder of a Shenzhen-based mobile game developing startup. Competition is particularly cut-throat in China’s mobile gaming sector, where creativity is in short supply and a popular shortcut to success is knocking off an already viral title. Speed, therefore, is all that matters.

Meanwhile, a high-performance culture brewing in China may neutralize society’s resistance to 996. Driven individuals band together at gyms and yoga studios to sweat off stress. Getting group dinners before returning to work every night becomes essential to one’s social life, especially for those who don’t yet have children.

Photo source: Jack Ma via Weibo

“There is a belief that more hours equals more learning. I think some percentage of people want to put in more hours, and that percentage is highest among 22- to 30-year-olds,” a Shanghai-based executive at a tech company that values work-life balance told TechCrunch. “A few people in my team have expressed to us that they feel they cannot grow as fast as their friends who are working at companies that practice 996.”

“If you don’t work 996 when you’re young, when will you?” wrote 54-year-old Jack Ma in his Weibo post. “To this day, I’m definitely working 12 to 12, let alone 996… Not everyone practicing 996 has the chance to do things that are valuable and meaningful with a sense of achievement. So I think it’s a blessing for the BATs of China to be able to work 996.”

(BAT is short for Baidu, Alibaba and Tencent, grouped for their digital dominance in China, akin to FAANG in the West.)

Demanding hours are certainly not unique to the tech industry. Media and literature have long documented the strenuous work conditions in China’s manufacturing sector. Neighboring Japan is plagued by karoshi, or “death from overwork,” among its salarymen, and Korean companies are also known for imposing back-breaking hours on workers, compelling the government to step in.

Attempts to change

Despite those apparent obstacles, the anti-996 movement has garnered domestic attention. The trending topic “996ICU gets blocked by large companies” has generated nearly 2,000 posts and 6.3 million views on Weibo. China’s state-run broadcaster CCTV chronicled the incident and blamed overtime work for causing “substantial physical and psychological consequences” in employees. Outside China, Python creator Guido van Rossum raised awareness about China’s 996 work routine in a tweet and on a forum.

“Can we do something for 996 programmers in China?” he wrote in a thread viewed 16,700 times.

The 996 campaign that began as a verbal outcry soon led to material acts. Shanghai-based lawyer Katt Gu and startup founder Suji Yan, who say they aren’t involved in the 996.ICU project, put forward an Anti-996 License that would keep companies in violation of domestic or global labor laws from using its open source software.

But some cautioned the restriction may undermine the spirit of open source, which denotes that a piece of software is distributed free and the source code undergirding it is accessible to others so they can study, share and modify the creator’s work.

“I strongly oppose and condemn 996, but at the same time I disagree with adding discretionary clauses to an open source project or using an open source project for the political game,” You Yuxi, creator of open-source project Vue, which was released under the MIT license, said on the Chinese equivalent to Twitter, Weibo. (Gu denies her project has any “political factors.”)

Others take a less aggressive approach, applauding companies that embrace the more humane schedule of “9 am to 5 pm for 5 days a week” via the “995.WLB” GitHub project. (WLB is short for “work-life balance.”) On this list are companies like Douban, the book and film review site famous for its “slow” growth but enduring popularity with China’s self-proclaimed hippies. WeWork, the workplace service provider that bills itself as showing respect for employees’ lives outside work, was also nominated.

While many nominees on the 996 list appear to be commercially successful, others point to a selection bias in the notion that more work bears greater fruit.

“If a company is large enough and is revealed to be practicing 996, the issue gets more attention. Take Youzan and JD for example,” a Shanghai-based developer at an enterprise software startup told TechCrunch.

“Conversely, a lot of companies that do practice 996 but have not been commercially successful are overlooked. There is no sufficient evidence that shows a company’s growth is linked to 996… What bosses should evaluate is productivity, not hours.”

Or, as some may suggest, managers should get better at incentivizing employees rather than blindly demanding more hours.

“As long as [China’s] economy doesn’t stall, it may be hard to stop 996 from happening. This is not a problem of the individual. It’s an economic problem. What we can do is offer more humane care and inspire workers to reflect, ‘Am I working at free will and with passion?’ instead of looking at their work hours,” suggested Xie of China Growth Capital.

While a push towards more disciplined work hours may be slow to come, experts have suggested another area where workers can strive for better treatment.

“It seems almost all startups in China underfund the social security or housing fund especially when they are young, that is, before series A or even series B financing,” Benjamin Qiu, partner at law firm Loeb & Loeb LLP, explained to TechCrunch.

“Compared to 996, the employees have an even stronger legal claim on the above, since it violates regulations and financially hurts the employee. That said, the official social security and housing fund requirement in China appears to be an undue burden on the employer compared to Silicon Valley, but if complied with, it could be understood as an offset of the 996 culture.”

A number of my interviewees spoke on condition of anonymity, not because their companies promote 996 but, curiously, because their employers don’t want to become ensnared in the 996 discussions. “We don’t need to tell people we support work-life balance. We show it with action,” said a spokesperson for one company.

Robocaller firm Stratics Networks exposed millions of call recordings

If you’ve ever had a voicemail appear out of nowhere, there’s a good chance Stratics Networks was involved.

The Toronto-based company is the self-proclaimed inventor of “ringless voicemails,” providing its customers a way of auto-dialing a list of phone numbers and dropping voicemails without leaving a missed call. The system uses a backdoor voicemail number typically reserved by the carrier to leave a voicemail directly in a person’s mailbox. The company once claimed it can process up to 10,000 ringless voicemails per minute — if you pay for it.

But the company left its back-end storage server open without a password, exposing thousands of outgoing and incoming recordings.

Security researcher John Wethington found the exposed server and asked TechCrunch to contact Stratics to secure the data. The server, hosted on Amazon Web Services, contained at least 100,000 recordings from more than 4,000 folders, each representing a single customer campaign.

According to BinaryEdge data, the exposed server was first detected on April 5 but may have been exposed for longer.

“This data was open to anyone with a browser and required no special access or privileges,” Wethington told TechCrunch. “I genuinely hope we were the first to identify it and responsibly disclose it because if that data is in unethical or criminal hands it’s going to be abused.”
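Being “open to anyone with a browser” typically means the bucket permitted anonymous listing: an unauthenticated GET on a bucket’s root URL (e.g. `https://<bucket>.s3.amazonaws.com/`) returns an S3 ListBucketResult XML document enumerating its objects. A minimal sketch of parsing such a listing; the bucket name and object keys below are hypothetical, not taken from the exposed server:

```python
import xml.etree.ElementTree as ET

# S3 listing responses use this XML namespace.
NS = {"s3": "http://s3.amazonaws.com/doc/2006-03-01/"}

def list_exposed_keys(listing_xml: str) -> list:
    """Return the object keys from an S3 ListBucketResult document."""
    root = ET.fromstring(listing_xml)
    return [el.text for el in root.findall("s3:Contents/s3:Key", NS)]

# Hypothetical listing, shaped like what a publicly listable bucket returns.
sample = """<?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>example-campaign-bucket</Name>
  <Contents><Key>campaign-0001/recording-555-0100.wav</Key></Contents>
  <Contents><Key>campaign-0001/recording-555-0101.wav</Key></Contents>
</ListBucketResult>"""

print(list_exposed_keys(sample))
```

This is why researchers like Wethington can stumble on such data: no credentials, exploits or special tooling are required once a bucket’s access policy allows public reads.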

“Organizations must consider the privacy ethics and not just the regulations when offering services,” he said. “The potential for abuse and privacy violations is every corporation’s and executive’s responsibility.”

Customers use the company’s offering to leave voicemails without needing someone to call each person — from debt collectors to doctor’s offices reminding patients about upcoming appointments. Not only does the company allow customers to record outgoing voicemails to ensure a voicemail actually dropped, it also records incoming calls when someone picks up.

It was those recordings that were exposed, said Wethington. TechCrunch reviewed several folders of recordings.

In one case, we found several counties in Florida used Stratics to inform citizens that their postal election ballots were set to expire. One folder contained more than 5,200 audio recordings of callers responding to voicemail drops sent by Broward County and Hillsborough County. Of the several recordings we heard, many callers provided sensitive information over the phone — including their names, addresses, dates of birth and, in some cases, their voter ID numbers.

Other folders in the exposed data contained dozens of incoming call recordings from people who had been sent a voicemail drop. One of those customers was a law firm, which call center workers identified as Key Tax Group. None of the callers in the recordings we reviewed knew why they had been left an unsolicited voicemail, but all were asked by the call center worker if they needed help with their taxes. At no point were the callers told that the calls were being recorded, despite call recording laws in several states — like California and Maryland — mandating that everyone on a call agree to being recorded. Each recording had the unsuspecting caller’s phone number in the filename. When contacted by TechCrunch, several of the victims of the cold-call scam confirmed they lived in states with two-party consent laws.

One other company, which a call center worker identified as Michigan Comfort, received over a hundred calls as recently as this month from people who had been dropped an unsolicited voicemail. In much the same pattern as the law firm, those callers were asked if they were interested in “a duct inspection or a furnace rebate.”

“You shouldn’t call people out of the blue and neither should your company,” said one angry victim in a recording.

Although Stratics’ website says it “does not tolerate spam in any form,” the company puts the onus of compliance on its customers. “You are 100% liable for compliance when making calls originating under your account,” says its website.

Shortly after TechCrunch contacted the company Thursday about the data exposure, the leaking server was secured.

“We take compliance and data security very seriously, and we are currently investigating to determine to what extent, if any, information has been exposed to unauthorized access,” said Chris Collins, a spokesperson for Stratics. “We have currently engaged an outside legal firm to guide us in our investigation. We are also engaging a third party cyber security firm to perform a full internal security audit.”

TechCrunch sent Stratics several questions about spam and call recording. Collins said Stratics would “block” users found in violation of its policies, and that its customers bore the responsibility to follow all local, state and federal call recording laws.

Following our disclosure, the company pulled its “discover” section from the site. When asked, Collins said this was “to avoid our website from being overloaded” in response to this article.

We also asked how long the data was exposed, whether the company will notify customers and regulators per state data breach notification laws, and whether anyone else had accessed the storage server.

Stratics declined to comment further.

Startup Law A to Z: Regulatory Compliance

Startups are but one species in a complex regulatory and public policy ecosystem. This ecosystem is larger and more powerfully dynamic than many founders appreciate, with distinct yet overlapping laws at the federal, state and local/city levels, all set against a vast array of public and private interests. Where startup founders see opportunity for disruption in regulated markets, lawyers counsel prudence: regulations exist to promote certain strongly held public policy objectives which (unlike your startup’s business model) carry the force of law.

Snapshot of the regulatory and public policy ecosystem. Image via Law Office of Daniel McKenzie

Although the canonical “ask forgiveness and not permission” approach taken by Airbnb and Uber circa 2009 might lead founders to conclude it is strategically acceptable to “move fast and break things” (including the law), don’t lose sight of the resulting lawsuits and enforcement actions. If you look closely at Airbnb and Uber today, each has devoted immense resources to building regulatory and policy teams, lobbying, public relations and defending lawsuits, while increasingly looking to work within the law rather than outside it – not to mention, in the case of Uber, a change in leadership as well.

Indeed, more recently, examples of founders and startups running into serious regulatory issues are commonplace: whether in healthcare, where CEO and co-founder Parker Conrad was forced to resign from Zenefits and was later fined approximately $500K; in the securities registration arena, where cryptocurrency startups Airfox and Paragon have each been fined $250K and could further be required to return to investors the millions raised through their respective ICOs; in the social media and privacy realm, where TikTok was recently fined $5.7 million for violating COPPA; or in the antitrust context, where tech giant Google is facing billions in fines from the EU.

Suffice it to say, regulation is not a low-stakes table game. In 2017 alone, according to Duff and Phelps, US financial regulators levied $24.4 billion in penalties against companies and another $621.3 million against individuals. Particularly in today’s highly competitive business landscape, even if your startup can financially absorb the fines for non-compliance, the additional stress and distraction for your team may still inflict serious injury, if not an outright death-blow.

The best way to avoid regulatory setbacks is to first understand relevant regulations and work to develop compliant policies and business practices from the beginning. This article represents a step in that direction, the fifth and final installment in Extra Crunch’s exclusive “Startup Law A to Z” series, following previous articles on corporate matters, intellectual property (IP), customer contracts and employment law.

Given the breadth of activities subject to regulation, however, and the many corresponding regulations across federal, state, and municipal levels, no analysis of any particular regulatory framework would be sufficiently complete here. Instead, the purpose of this article is to provide founders a 30,000-foot view across several dozen applicable laws in key regulatory areas, providing a “lay of the land” such that with some additional navigation and guidance, an optimal course may be charted.

The regulatory areas highlighted here include: (a) Taxes; (b) Securities; (c) Employment; (d) Privacy; (e) Antitrust; (f) Advertising, Commerce and Telecommunications; (g) Intellectual Property; (h) Financial Services and Insurance; and finally (i) Transportation, Health and Safety.

How to handle dark data compliance risk at your company

Slack and other consumer-grade productivity tools have been taking off in workplaces large and small — and data governance hasn’t caught up.

Whether it’s litigation, compliance with regulations like GDPR, or concerns about data breaches, legal teams need to account for new types of employee communication. And that’s hard when work is happening across the latest messaging apps and SaaS products, which make data searchability and accessibility more complex.

Here’s a quick look at the problem, followed by our suggestions for best practices at your company.

Problems

The increasing frequency of reported data breaches and expanding jurisdiction of new privacy laws are prompting conversations about dark data and risks at companies of all sizes, even small startups. Data risk discussions necessarily include the risk of a data breach, as well as preservation of data. Just two weeks ago it was reported that Jared Kushner used WhatsApp for official communications and screenshots of those messages for preservation, which commentators say complies with recordkeeping laws but raises questions about potential admissibility as evidence.

Apple ad focuses on iPhone’s most marketable feature — privacy

Apple is airing a new ad spot in primetime today. Focused on privacy, the spot is visually cued, with no dialog and a simple tagline: Privacy. That’s iPhone.

In a series of humorous vignettes, the message is driven home that sometimes you just want a little privacy. The spot has only one line of text otherwise, and it’s in keeping with Apple’s messaging on privacy over the long and short term. “If privacy matters in your life, it should matter to the phone your life is on.”

The spot will air tonight in primetime in the U.S. and extend through March Madness. It will then air in select other countries.

You’d have to be hiding under a rock not to have noticed Apple positioning privacy as a differentiating factor between itself and other companies. Beginning a few years ago, CEO Tim Cook began taking more and more public stances on what the company felt to be your “rights” to privacy on its platform and how that differed from other companies. The undercurrent is that Apple was able to take this stance because its first-party business relies on a relatively direct relationship with customers who purchase its hardware and, increasingly, its services.

This stands in contrast to the model of other tech giants like Google and Facebook, which insert an interstitial layer of monetization into that relationship: applying personal information about you (in somewhat anonymized fashion) to sell their platforms to advertisers, who in turn can sell to you more effectively.

Turning the ethical high ground into a marketing strategy is not without its pitfalls, though, as Apple has discovered recently with a (now patched) high-profile FaceTime bug that allowed people to turn your phone into a listening device, Facebook’s manipulation of App Store permissions and the revelation that there was some long overdue house cleaning needed in its Enterprise Certificate program.

I did find it interesting that the iconography of the “Private Side” spot very, very closely associates the concepts of privacy and security. They are separate, but interrelated, obviously. This spot says these are one and the same. It’s hard to enforce privacy without security, of course, but in the mind of the public I think there is very little difference between the two.

The App Store itself, of course, still hosts apps from Google and Facebook among thousands of others that use personal data of yours in one form or another. Apple’s argument is that it protects the data you give to your phone aggressively by processing on the device, collecting minimal data, disconnecting that data from the user as much as possible and giving users as transparent a control interface as possible. All true. All far, far better efforts than the competition.

Still, there is room to run, I feel, when it comes to Apple adjudicating what should be considered a societal norm when it comes to the use of personal data on its platform. If it’s going to be the absolute arbiter of what flies on the world’s most profitable application marketplace, it might as well use that power to get a little more feisty with the bigcos (and littlecos) that make their living on our data.

I mention the issues Apple has had above not as a dig, though some might be inclined to view Apple integrating privacy with marketing as boldness bordering on hubris. I, personally, think there’s still a major difference between a company that has situational loss of privacy while having a systemic dedication to privacy and, well, most of the rest of the ecosystem which exists because they operate an “invasion of privacy as a service” business.

Basically, I think stating privacy is your mission is still supportable, even if you have bugs. But attempting to ignore that you host the data platforms that thrive on it is a tasty bit of prestidigitation.

But that might be a little too verbose as a tagline.

Don’t break up big tech — regulate data access, says EU antitrust chief

Breaking up tech giants should be a measure of last resort, the European Union’s competition commissioner, Margrethe Vestager, has suggested.

“To break up a company, to break up private property would be very far reaching and you would need to have a very strong case that it would produce better results for consumers in the marketplace than what you could do with more mainstream tools,” she warned this weekend, speaking in a SXSW interview with Recode’s Kara Swisher. “We’re dealing with private property. Businesses that are built and invested in and become successful because of their innovation.”

Vestager has built a reputation for being feared by tech giants, thanks to a number of major (and often expensive) interventions since she took up the Commission antitrust brief in 2014, with still one big outstanding investigation hanging over Google.

But while opposition politicians in many Western markets — including high profile would-be U.S. presidential candidates — are now competing on sounding tough on tech, the European commissioner advocates taking a scalpel to data streams rather than wielding a break-up hammer to smash market-skewing tech giants.

“When it comes to the very far reaching proposal to split up companies, for us, from a European perspective, that would be a measure of last resort,” she said. “What we do now, we do the antitrust cases, misuse of dominant position, the tying of products, the self-promotion, the demotion of others, to see if that approach will correct and change the marketplace to make it a fair place where there’s no misuse of dominant position but where smaller competitors can have a fair go. Because they may be the next big one, the next one with the greatest idea for consumers.”

She also pointed to an agreement last month, between key European political institutions on regulating online platform transparency, as an example of the kind of fairness-focused intervention she believes can work to counter market imbalance.

The bread and butter work regulators should be focused on where big tech is concerned are things like digital sector enquiries and hearings to examine how markets are operating in detail, she suggested — using careful scrutiny to inform and shape intelligent, data-led interventions.

Though ‘break up Google’ clearly makes for a punchier political soundbite.

Vestager is, however, in the final months of her term as antitrust chief — with the Commission due to turn over this year. Her time at the antitrust helm will end on November 1, she confirmed. (Though she remains, at least tentatively, on a shortlist of candidates who could be appointed the next European Commission president.)

The commissioner has spoken up before about regulating access to data as a more interesting option for controlling digital giants vs breaking them up.

And some European regulators appear to be moving in that direction already — such as the German Federal Cartel Office (FCO), which last month announced a decision against Facebook that aims to limit how it can use data from its own services. The FCO’s move has been couched as akin to an internal break-up of the company, at the data level, without the tech giant being forced to separate and sell off business units like Instagram and WhatsApp.

It’s perhaps not surprising, therefore, that Facebook founder Mark Zuckerberg announced a massive plan to merge all three services at the technical level just last week — billing the switch to encrypted content but merged metadata as a ‘pro-privacy’ move, while clearly also intending to restructure his empire in a way that works against regulatory interventions that separate and control internal data flows at the product level.

The Competition Commission does not have a formal probe of Facebook or the social media sector open at this point but Vestager said her department does have its eye on how social media giants are using data.

“We’re sort of hovering over social media, Facebook — how data’s being used in that respect,” she said, also flagging the preliminary work her department is doing looking into Amazon’s use of merchant data. (Also not yet a formal probe.)

“The good thing is now the debate is really sort of taking off,” she added, of competition regulation generally. “When I’ve been visiting and speaking with people on The Hill previously, I’ve sensed a new sort of interest and curiosity as to what can competition achieve for you in a society. Because if you have fair competition then you have markets serving the citizen in our role as consumer and not the other way around.”

Asked whether she’s personally convinced by Facebook’s sudden ‘appreciation’ of privacy, Vestager said that if the announcement signifies a genuine change of philosophy and direction which leads to shifts in its business practices, it would be good news for consumers.

Though she said she’s not simply taking Zuckerberg at his word at this point. “It may be a little far-reaching to assume the best,” she said politely when pushed by Swisher on whether she believed a sincere pivot is possible from a company with such a long privacy-hostile history.

Big tech, small tax

The interview also delved into the issue of big tech and the tiny amounts it pays in tax.

Reforming the global tax system so digital businesses pay a fair share vs traditional businesses is now “urgent” work to do, said Vestager — highlighting how the lack of a consensus position among EU Member States is pushing some countries to move forward with their own measures, given resistance to Commission proposals from other corners of the bloc.

France’s push for a tax on tech giants this year is “absolutely necessary but very unfortunate”, Vestager said.

“When you do numbers that can be compared we see that digital businesses they would pay on average nine per cent [in taxes] where traditional businesses on average pay 23 per cent,” she continued. “Yet they’re in the same market for capital, for skilled employees, sometimes competing for the same customers. So obviously this is not fair.”

The Commission’s hope is that individual “pushes” from Member States frustrated by the current tax imbalance will generate momentum for “a European-wide way of doing things” — and therefore that any fragmentation of tax policies across the bloc will be short-lived.

She also said Europe is keen for the Organisation for Economic Co-operation and Development to “push forward for this” too, remarking: “Because we sense in the OECD that a number of places in the world take an interest also in the U.S. side of things.”

Is the better way to reset inequalities related to big tech and society to reform the tax system, or are regulators doomed to keep fining these companies “into the next century”, Swisher wondered.

“You get a fine when you do something illegal. You pay your taxes to contribute to society where you do your business. These are two different things and we definitely need both,” responded Vestager. “But we cannot have a situation where some businesses do not contribute and the majority of businesses they do. Because it’s simply not fair in the marketplace or fair towards citizens if this continues.”

She also gave short shrift to the favored big tech lobbyist line — to loudly claim privacy regulation helps the big guys because it’s easier for them to fund compliance — by pointing out that Europe’s General Data Protection Regulation has “different brackets” and does not simply clobber big and small alike with the same requirements.

Of course small businesses “don’t have the same obligations as Google”, said Vestager.

“I’d say if they find it easy, I’d say they can do better,” she added, raising the much-complained-about consumer rights issue of consent vs inscrutable T&Cs.

“Because I still find that it’s quite tricky to understand what it is that you accept when you accept your terms and conditions. And I think it would be great if we as citizens could really say ‘oh this is what I am signing up to and I’m perfectly happy with that’.”

Though she admitted there’s still a way to go for European privacy rights to be fully functioning as intended — arguing it’s still too hard for individual consumers to exercise the rights they have in law.

“I know I own my data but I really do not know how to exercise that ownership,” she said. “How to allow for more people to have access to my data if I want to enable innovation, new market participants coming in. If that was done in large scale you could have an innovative input into the marketplace and we’re definitely not there yet,” she said.

Asked about the idea of taxing data flows as another possible means of clipping the wings of big tech, Vestager pointed to early signs of an intermediary market spinning up in Europe to help individuals extract value from what corporate entities are doing with their information. So not literally a tax on data flows, but a way for consumers to claw back some of the value that’s being stripped from them.

“It’s still nascent in Europe but since now we have the rights that establishes your ownership of your data we see there is a beginning market development of intermediaries saying should I enable you yourself to monetize your data, so it’s not just the giants who monetize your data. So that maybe you get a sum every month reflecting how your data has been passed on,” she said. “That is one opportunity.”

She also said the Commission is looking at how to make sure “huge amounts of data will not be a barrier to entry in a marketplace” — or present a barrier to innovation for newcomers. The latter being key given how tech giants’ massive data pools are translating into a meaty advantage in AI R&D.

In another interesting exchange, Vestager suggested the convenience of voice interfaces presents an acute competition challenge — given how the tech could naturally concentrate market power via preferring quick-fire Q&A style interactions which don’t support offering lots of choice options.

“One of the things that is really mindboggling for us is how to have choice if you have voice,” she said, arguing that the voice assistant dynamic doesn’t lend itself to multiple suggestions being offered every time a user asks a question. “So how to have competition when you have voice search? How would this change the marketplace and how would we deal with such a market? So this is what we’re trying to figure out.”

Again she suggested regulators are thinking about how data flows behind the scenes as a potential route to remedying interfaces that work against choice.

“We’re trying to figure out how access to data will change the marketplace,” she added. “Can you give a different access to data because the one who holds the data, also holds the resources for innovation. And we cannot rely on the big guys to be the innovative ones.”

Asked for her worst case scenario for tech 10 years hence, she said it would be to have “all of the technology but none of the societal positive oversight and direction”.

On the flip side, the best case would be for legislators to be “willing to take sufficient steps in taxation and in regulating access to data and fairness in the marketplace”.

“We would also need to see technology develop to have new players,” she emphasized. “Because we still need to see what will happen with quantum computing, what will happen with blockchain, what other uses are there for all of that new technology. Because I still think that it holds a lot of promise. But only if our democracy will give it direction. Then you will have a positive outcome.”

Taxing your privacy

Data collection through mobile tracking is big business and the potential for companies helping governments monetize this data is huge. For consumers, protecting yourself against the who, what and where of data flow is just the beginning. The question now is: How do you ensure your data isn’t costing you money in the form of new taxes, fees and bills?  Particularly when the entity that stands to benefit from this data — the government — is also tasked with protecting it?

The advances in personal data collection are a source of growing concern for privacy advocates, but whereas most fears tend to focus on what type of data is being collected, who’s watching and to whom is your data being sold, the potential for this same data to be monetized via auditing and compliance fees is even more problematic.

The fact is, tracking (and taxing) businesses and consumers no longer requires massive infrastructure, and state governments and municipalities have taken notice.

The result is a potential multi-billion dollar per-year business that, with mobile tracking technology, will only grow exponentially year over year.

Yet, while the revenue upside for companies helping smart cities (and states) with taxing and tolling is significant, it is also rife with contradictions and complications that could, ultimately, pose serious problems to those companies’ underlying business models and for the investors that bet heavily on them.

The most common argument when privacy advocates bring up concerns around mobile data collection is that consumers almost always have the control to opt out. When governments utilize this data, however, that option is not always available. And the direct result is the monetization of a consumer’s privacy in the form of taxes and tolls. In an era where states like California and others are stepping up as self-proclaimed defenders of citizen privacy and consent, this puts everyone involved in an awkward position — to say the least.

The marriage of smart cities and next-gen location tracking apps is becoming more commonplace.  AI, always-on data flows, sensor networks and connected devices are all being employed by governments in the name of sustainable and equitable cities as well as new revenue.

New York, LA and Seattle are all implementing (or considering implementing) congestion pricing that would ultimately rely on harvesting personal data in some form or another. Oregon, which passed the first gas tax in 1919, began its OReGO program two years ago, using data on miles driven to levy fees on drivers so as to address infrastructure issues with its roads and highways.

As more state and local governments look to emulate these kinds of policies, the revenue opportunity for companies and investors harvesting this data is obvious. Populus (a portfolio company), a data platform that helps cities manage mobility, captures data from fleets like Uber and Lyft to help cities set policy and collect fees.

Similarly, ClearRoad is a “road pricing transaction processor” that leverages data from vehicles to help governments determine road usage for new revenue streams. Safegraph, on the other hand, is a company that collects millions of trackers from smartphones daily via apps, APIs and other delivery methods, often leaving the business of disclosure up to third parties. Data like this has begun to make its way into smart city applications that could impact industries as varied as the real estate market and the gig economy.

“There are lots of companies that are using location technology, 3D scanning, sensor tracking and more. So, there are lots of opportunities to improve the effectiveness of services and for governments to find new revenue streams,” says Paul Salama, COO of ClearRoad. “If you trust the computer to regulate, as opposed to the written code, then you can allow for a lot more dynamic types of regulation, and that extends beyond vehicles to noise pollution, particulate emissions, temporary signage, etc.”

While most of these platforms and technologies endeavor to do some public good by creating the baseline for good policy and sustainable cities, they also raise concerns about individual privacy and the potential for discrimination. And there is an inherent contradiction in states ostensibly tasked with curbing the excesses of data collection then turning around and utilizing that same data to line their coffers, sometimes without consent or consumer choice.

“People care about their privacy and there are aspects that need to be hashed out,” says Salama. “But we’re talking about a lot of unknowns on that data governance side. There’s definitely going to be some sort of reckoning at some point, but it’s still so early on.”

As policy makers and people become more aware of mobile phone tracking and the largely unregulated data collection associated with it, the question facing companies in this space is how to extract all this societally beneficial data while balancing that against some pretty significant privacy concerns.

“There will be options,” says Salama. “An example is Utah, which, starting next year, will offer electric cars the option to pay a flat fee (for avoiding gas taxes) or pay by the mile. The pay-by-the-mile option is GPS-enabled but it also has additional services, so you pay by your actual usage.”

Ultimately, for governments, regulation plus transparency seems the likeliest way forward.

In most instances, the path to the consumer or taxpayer is either through their shared-economy vehicle (car, scooter, bike, etc.) or through their mobile device. While taxing fleets is indirect and provides some measure of political cover for the governments generating revenue off of them, there is no such cover for directly taxing citizens via data gathered through mobile apps.

The best-case scenario for governments looking to short circuit these inherent contradictions is to actually offer choice, in the form of their own opt-in for some value exchange or preferred billing method, such as Utah’s opt-in as an alternative way to pay for road use vs. the gas tax. It may not satisfy all privacy concerns, particularly when it is the government sifting through your data, but it at least offers a measure of choice and a tangible value.

If data collection and sharing were still mainly the purview of B2B businesses and global enterprises, perhaps the rising outcry over the methods and usage of data collection would remain relatively muted. But as data usage seeps into more aspects of everyday life and is adopted by smart cities and governments across the nation, questions around privacy will invariably get more heated, particularly when citizen consumers start feeling the pinch in their wallets.

As awareness rises and inherent contradictions are laid bare, regulation will surely follow and those businesses not prepared may face fundamental threats to their business models that ultimately threaten their bottom line.

LinkedIn forced to ‘pause’ ‘mentioned in the news’ feature in Europe after complaints about ID mix-ups

LinkedIn has been forced to ‘pause’ a feature in Europe in which the platform emails members’ connections when they’ve been ‘mentioned in the news’.

The regulatory action follows a number of data protection complaints after LinkedIn’s algorithms incorrectly matched members to news articles — triggering a review of the feature and a subsequent suspension order.

The feature appears as a case study in the ‘Technology Multinationals Supervision’ section of an annual report published today by the Irish Data Protection Commission (DPC). The report does not explicitly name LinkedIn, but we’ve confirmed it is the professional social network in question.

The data watchdog’s report cites “two complaints about a feature on a professional networking platform” after LinkedIn incorrectly associated the members with media articles that were not actually about them.

“In one of the complaints, a media article that set out details of the private life and unsuccessful career of a person of the same name as the complainant was circulated to the complainant’s connections and followers by the data controller,” the DPC writes, noting the complainant initially complained to the company itself but did not receive a satisfactory response — hence taking up the matter with the regulator.

“The complainant stated that the article had been detrimental to their professional standing and had resulted in the loss of contracts for their business,” it adds.

“The second complaint involved the circulation of an article that the complainant believed could be detrimental to future career prospects, which the data controller had not vetted correctly.”

LinkedIn appears to have been matching members to news articles by simple name matching — with obvious potential for identity mix-ups between people with shared names.
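To illustrate why name-only matching fails — purely a hypothetical sketch, since neither the DPC report nor LinkedIn describes the actual implementation, and all names and functions below are invented — matching on nothing but an exact name string inevitably flags every member who happens to share a name with the article's subject:

```python
# Hypothetical sketch of name-only matching of members to news articles.
# Nothing here reflects LinkedIn's real code; it only shows the failure mode.

def mentioned_in_news(member, article_person_names):
    # Naive approach: flag the member if their exact name appears among
    # the person names extracted from the article. No disambiguation by
    # employer, location, or any other signal.
    return member["name"] in article_person_names

members = [
    {"name": "Jane Smith", "employer": "Acme Corp"},   # the article's subject
    {"name": "Jane Smith", "employer": "Globex"},      # a different person, same name
]

# Person names extracted from a single news article.
article_person_names = {"Jane Smith"}

matches = [m for m in members if mentioned_in_news(m, article_person_names)]
print(len(matches))  # 2 — both members are flagged, and the article would be
                     # emailed to both members' networks, though it concerns
                     # only one of them.
```

Any additional signal used to disambiguate (employer, location, article context) shrinks the collision space, which is presumably the kind of safeguard the DPC's suspension order is pushing the company toward.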

“It was clear from the complaints that matching by name only was insufficient, giving rise to data protection concerns, primarily the lawfulness, fairness and accuracy of the personal data processing utilised by the ‘Mentions in the news’ feature,” the DPC writes.

“As a result of these complaints and the intervention of the DPC, the data controller undertook a review of the feature. The result of this review was to suspend the feature for EU-based members, pending improvements to safeguard its members’ data.”

We reached out to LinkedIn with questions and it pointed us to this blog post where it confirms: “We are pausing our Mentioned in the News feature for our EU members while we reevaluate its effectiveness.”

LinkedIn adds that it is reviewing the accuracy of the feature, writing:

As referenced in the Irish Data Protection Commission’s report, we received useful feedback from our members about the feature and as a result are evaluating the accuracy and functionality of Mentioned in the News for all members.

The company’s blog post also points users to a page where they can find out more about the ‘mentioned in the news’ feature and get information on how to manage their LinkedIn email notification settings.

The Irish DPC’s action is not the first privacy strike against LinkedIn in Europe.

Late last year, in its earlier annual report, covering the pre-GDPR portion of 2018, the watchdog revealed it had investigated complaints about LinkedIn related to its targeting of non-users with adverts for its service.

The DPC found the company had obtained emails for 18 million people for whom it did not have consent to process their data. In that case LinkedIn agreed to cease processing the data entirely.

That complaint also led the DPC to audit LinkedIn. It then found a further privacy problem, discovering the company had been using its social graph algorithms to try to build suggested networks of compatible professional connections for non-members.

The regulator ordered LinkedIn to cease this “pre-compute processing” of non-members’ data and delete all personal data associated with it prior to GDPR coming into force.

LinkedIn said it had “voluntarily changed our practices as a result”.

Even years later, Twitter doesn’t delete your direct messages

When does “delete” really mean delete? Not always, or even at all, if you’re Twitter.

Twitter retains direct messages for years, including messages you and others have deleted, but also data sent to and from accounts that have been deactivated and suspended, according to security researcher Karan Saini.

Saini found years-old messages in a file from an archive of his data obtained through the website, including messages from accounts that were no longer on Twitter. He also filed a similar bug, found a year earlier but not disclosed until now, that allowed him to use a since-deprecated API to retrieve direct messages even after a message was deleted by both the sender and the recipient — though the bug wasn’t able to retrieve messages from suspended accounts.

Saini told TechCrunch that he had “concerns” that the data was retained by Twitter for so long.

Direct messages once let users “unsend” messages from someone else’s inbox, simply by deleting them from their own. Twitter changed this years ago, and now only allows a user to delete messages from their own account. “Others in the conversation will still be able to see direct messages or conversations that you have deleted,” Twitter says in a help page. Twitter also says in its privacy policy that anyone wanting to leave the service can have their account “deactivated and then deleted.” After a 30-day grace period, the account is supposed to disappear, along with its data.

But, in our tests, we could recover direct messages from years ago — including old messages that had since been lost to suspended or deleted accounts. Requesting your account’s data archive makes it possible to retrieve all of the data Twitter stores on you.

A conversation, dated March 2016, with a suspended Twitter account was still retrievable today. (Image: TechCrunch)

Saini says this is a “functional bug” rather than a security flaw, but argued that it allows anyone a “clear bypass” of Twitter mechanisms meant to prevent access to suspended or deactivated accounts.

But it’s also a privacy matter, and a reminder that “delete” doesn’t mean delete — especially with your direct messages. That can open up users, particularly high-risk accounts like journalists and activists, to government data demands that call for data from years earlier.

That’s despite Twitter’s claim, in its guidance to law enforcement, that once an account has been deactivated there is only “a very brief period in which we may be able to access account information, including tweets.”

A Twitter spokesperson said the company was “looking into this further to ensure we have considered the entire scope of the issue.”

Retaining direct messages for years may put the company in a legal grey area amid Europe’s new data protection laws, which allow users to demand that a company delete their data.

Neil Brown, a telecoms, tech and internet lawyer at U.K. law firm Decoded Legal, said there’s “no formality at all” to how a user can ask for their data to be deleted. Any request from a user to delete their data that’s directly communicated to the company “is a valid exercise” of a user’s rights, he said.

Companies can be fined up to four percent of their annual turnover for violating GDPR rules.

“A delete button is perhaps a different matter, as it is not obvious that ‘delete’ means the same as ‘exercise my right of erasure’,” said Brown. Given that there’s no case law yet under the new General Data Protection Regulation regime, it will be up to the courts to decide, he said.

When asked if Twitter thinks that consent to retain direct messages is withdrawn when a message or account is deleted, Twitter’s spokesperson had “nothing further” to add.