Labor leaders and startup founders talk about how to build a sustainable gig economy

Over the past few years, gig economy companies and the treatment of their labor force have become a hot-button issue for public and private sector debate.

At our recent annual Disrupt event in San Francisco, we dug into how founders, companies and the broader community can play a positive role in the gig economy, with help from Derecka Mehrens, an executive director at Working Partnerships USA and co-founder of Silicon Valley Rising — an advocacy campaign focused on fighting for tech worker rights and creating an inclusive tech economy — and Amanda de Cadenet, founder of Girlgaze, a platform that connects advertisers with a network of 200,000 female-identifying and non-binary creatives.

Derecka and Amanda dove deep into where incumbent gig companies have fallen short, what they’re doing to right the ship and whether VC and hyper-growth mentalities fit into a sustainable gig economy, and shared their thoughts on Uber’s new ‘Uber Works’ platform and California’s AB-5. The following has been lightly edited for length and clarity.

Where current gig companies are failing

Arman Tabatabai: What was the original promise and value proposition of the gig economy? What went wrong?

Derecka Mehrens: The gig economy exists in a larger context, which is one in which neoliberalism is failing, trickle-down economics has been proven wrong, and everyday working people aren’t surviving and are looking for something more.

And so you have a situation in which the system we put together to create employment, to create our communities, to build our housing, to give us jobs is dysfunctional. And within that, folks are going to come up with disruptive solutions to pieces of it with a promise in mind to solve a problem. But without a larger solution, that will end up, in our view, exacerbating existing inequalities.

YouTube to reduce conspiracy theory recommendations in the UK

YouTube is expanding an experimental tweak to its recommendation engine that’s intended to reduce the amplification of conspiracy theories to the UK market.

In January, the video-sharing platform said it was making changes in the US to limit the spread of conspiracy theory content, such as junk science and bogus claims about historical events — following sustained criticism of how its platform accelerates damaging clickbait.

A YouTube spokeswoman confirmed to TechCrunch that it is now in the process of rolling out the same update to suppress conspiracy recommendations in the UK. She said it will take some time to take full effect — without providing detail on when exactly the changes will be fully applied.

The spokeswoman said YouTube acknowledges that it needs to do more to reform a recommendation system that has been shown time and again to lift harmful clickbait and misinformation into mainstream view. YouTube claims this negative spiral occurs only sometimes, though, and says that on average its system points users to mainstream videos.

The company calls the type of junk content it’s been experimenting with recommending less often “borderline”, saying it’s stuff that toes the line of its acceptable content policies. In practice this means videos that make nonsense claims that the earth is flat, tell blatant lies about historical events such as the 9/11 terror attacks, or promote bogus miracle cures for serious illnesses.

All of which can be filed under misinformation ‘snake oil’. But for YouTube this sort of junk has been very lucrative snake oil, given that Google’s commercial imperative is to keep eyeballs engaged in order to serve more ads.

More recently, though, YouTube has taken a reputational hit as its platform has been blamed for having an extremist, radicalizing impact on young and impressionable minds by encouraging users to swallow junk science and worse.

A former Google engineer, Guillaume Chaslot, who worked on YouTube’s recommendation algorithms, went public last year to condemn what he described as the engine’s “toxic” impact, which he said “perverts civic discussion” by encouraging users to create highly engaging borderline content.

Multiple investigations by journalists have also delved into instances where YouTube has been blamed for pushing people, including the young and impressionable, towards far-right points of view via its algorithm’s radicalizing rabbit hole — which exposes users to increasingly extreme content without providing any context about what it’s encouraging them to view.

Of course it doesn’t have to be this way. Imagine if a YouTube viewer who sought out a video produced by a partisan shock jock were shown less extreme content, or even an entirely different political point of view. Or only saw calming yoga and mindfulness videos in their ‘up next’ feed.

YouTube has eschewed a more balanced approach to the content its algorithms select and recommend for commercial reasons. But it may also have been keen to avoid drawing overt attention to the fact that its algorithms are acting as de facto editors.

And editorial decisions are what media companies make. So it then follows that tech platforms which perform algorithmic content sorting and suggestion should be regulated like media businesses are. (And all tech giants in the user-generated content space have been doing their level best to evade that sort of regulation for years.)

That Google has the power to edit out junk is clear.

A spokeswoman for YouTube told us the US test of a reduction in conspiracy junk recommendations has led to a drop of more than 50% in the number of views coming from recommendations.

Though she also said the test is still ramping up — suggesting the impact on the viewing and amplification of conspiracy nonsense could be even greater if YouTube were to more aggressively demote this type of BS.
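YouTube hasn’t published how this demotion works under the hood, but the dial metaphor is easy to make concrete. Here is a minimal sketch, assuming a hypothetical borderline-content classifier whose score is used to down-weight, rather than remove, a video in a recommendation ranking; every name and number below is invented for illustration.

```python
# Purely illustrative: YouTube has not published its ranking code, and the
# classifier, field names and numbers here are all invented. The point is the
# mechanism: demote likely-borderline videos instead of deleting them.
from dataclasses import dataclass

@dataclass
class Candidate:
    video_id: str
    engagement_score: float  # e.g. predicted watch time or click propensity
    borderline_prob: float   # hypothetical borderline classifier output, 0..1

def rerank(candidates, penalty=0.9):
    """Order candidates by engagement, down-weighted by borderline probability."""
    def adjusted(c):
        # penalty is the "lever": 0 recommends as usual, 1 buries borderline junk.
        return c.engagement_score * (1.0 - penalty * c.borderline_prob)
    return sorted(candidates, key=adjusted, reverse=True)

feed = rerank([
    Candidate("flat-earth-proof", engagement_score=0.95, borderline_prob=0.90),
    Candidate("news-explainer", engagement_score=0.80, borderline_prob=0.05),
])
print([c.video_id for c in feed])  # ['news-explainer', 'flat-earth-proof']
```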

What’s very clear is the company has the power to flick algorithmic levers that determine what billions of people see — even if you don’t believe that might also influence how they feel and what they believe. Which is a concentration of power that should concern people on all sides of the political spectrum.

While YouTube could further limit algorithmically amplified toxicity, the problem is that its business continues to monetize engagement, and clickbait’s fantastical nonsense is, by nature, highly engaging. So, for purely commercial reasons, it has a counter-incentive not to clear out all of YouTube’s crap.

How long the company can keep up this balancing act remains to be seen, though. In recent years some major YouTube advertisers have intervened to make it clear they do not relish their brands being associated with abusive and extremist content. Which does represent a commercial risk to YouTube — if pressure from and on advertisers steps up.

Like all powerful tech platforms, its business is also facing rising scrutiny from politicians and policymakers. And questions about how to ensure such content platforms do not have a deleterious effect on people and societies are now front of mind for governments in some markets around the world.

That political pressure — which is a response to public pressure, after a number of scandals — is unlikely to go away.

So YouTube’s still glacial response to addressing how its population-spanning algorithms negatively select for stuff that’s socially divisive and individually toxic may yet come back to bite it — in the form of laws that put firm limits on its powers to push people’s buttons.

Why commerce companies are the advertising players to watch in a privacy-centric world

The unchecked digital land grab for consumers’ personal data that has been going on for more than a decade is coming to an end, and the dominoes have begun to fall when it comes to the regulation of consumer privacy and data security.

We’re witnessing the beginning of a sweeping upheaval in how companies are allowed to obtain, process, manage, use and sell consumer data, and the implications for the digital ad competitive landscape are massive.

Against the backdrop of evolving privacy expectations and requirements, we’re seeing the rise of a new class of digital advertising player: consumer-facing apps and commerce platforms. These commerce companies are emerging as the most likely beneficiaries of this new regulatory privacy landscape — and we’re not just talking about e-commerce giants like Amazon.

Traditional commerce companies like eBay, Target and Walmart have publicly spoken about advertising as a major focus area for growth, but even companies like Starbucks and Uber have an edge in consumer data consent and, thus, an edge over incumbent media players in the fight for ad revenues.

Tectonic regulatory shifts


By now, most executives, investors and entrepreneurs are aware of the growing acronym soup of privacy regulation, the two most prominent ingredients being the GDPR (General Data Protection Regulation) and the CCPA (California Consumer Privacy Act).

The startups creating the future of RegTech and financial services

Technology has been used to manage regulatory risk since the advent of the ledger book (or the Bloomberg terminal, depending on your reference point). However, the cost-consciousness internalized by banks during the 2008 financial crisis, combined with more robust methods of analyzing large datasets, has spurred innovation and increased efficiency by automating tasks that previously required manual reviews and other labor-intensive efforts.

So even if RegTech wasn’t born during the financial crisis, it was probably old enough to drive a car by 2008. The intervening 11 years have seen RegTech’s scope and influence grow.

RegTech startups targeting financial services, or FinServ for short, require very different growth strategies — even compared to other enterprise software companies. From a practical perspective, everything from the security requirements influencing software architecture and development to the sales process is substantially different for FinServ RegTechs.

The most successful RegTechs are those that draw on expertise from security-minded engineers, FinServ-savvy sales staff, and legal and compliance professionals from the industry. FinServ RegTechs have emerged in a number of areas in response to the increasing directives emanating from financial regulators.

This new crop of startups performs sophisticated background checks and transaction monitoring for anti-money laundering purposes pursuant to the Bank Secrecy Act, Office of Foreign Assets Control (OFAC) requirements and FINRA rules; tracks supervision and retention requirements for electronic communications under FINRA, SEC and CFTC regulations; and monitors information security and privacy laws from the EU, the SEC and several US state regulators, such as the New York Department of Financial Services (“NYDFS”).
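To make the transaction-monitoring piece concrete, here is a toy version of one classic anti-money laundering rule: flagging “structuring”, i.e. several deposits each kept just under the Bank Secrecy Act’s $10,000 reporting threshold. This is a sketch of the general technique only, not any vendor’s actual detection logic.

```python
# Toy AML rule, not production logic: flag multiple deposits each under the
# Bank Secrecy Act's $10,000 reporting threshold that together exceed it
# within a 24-hour window.
from datetime import datetime, timedelta

THRESHOLD = 10_000           # BSA currency-transaction-report trigger, dollars
WINDOW = timedelta(hours=24)

def flag_structuring(deposits):
    """deposits: list of (timestamp, amount) tuples, sorted by timestamp."""
    alerts = []
    for i, (start, _) in enumerate(deposits):
        amounts = [a for t, a in deposits[i:] if t - start <= WINDOW]
        # Several individually sub-threshold deposits summing past the
        # threshold in one window is the pattern examiners look for.
        if len(amounts) > 1 and all(a < THRESHOLD for a in amounts) \
                and sum(amounts) >= THRESHOLD:
            alerts.append((start, sum(amounts)))
    return alerts

day = datetime(2019, 7, 1)
print(flag_structuring([
    (day, 4_000),
    (day + timedelta(hours=3), 3_500),
    (day + timedelta(hours=9), 3_000),
]))  # one alert: $10,500 in under 24 hours, each deposit below $10,000
```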

In this article, we’ll examine RegTech startups in these three fields to determine how solutions have been structured to meet regulatory demand as well as some of the operational and regulatory challenges they face.

Know Your Customer and Anti-Money Laundering

UK’s CMA launches investigation into digital advertising and its “potential harm” to consumers

Just two weeks after the UK’s Information Commissioner published a damning report setting out major privacy and other concerns about programmatic advertising, today the country’s Competition and Markets Authority poured more cold water on the industry, launching an investigation that could have a direct impact not just on digital advertising leaders like Google and Facebook, but on the wider ecosystem of companies that form the adtech market.

The CMA has today launched an investigation to assess “three broad potential sources of harm to consumers in connection with the market for digital advertising”: the extent of market power held by platform providers; consumers’ control over their data; and competition in the space.

Or, in its own words:

“to what extent online platforms have market power in user-facing markets, and what impact this has on consumers; whether consumers are able and willing to control how data about them is used and collected by online platforms; and whether competition in the digital advertising market may be distorted by any market power held by platforms.”

If competition, data protection or other violations are found, this could have a direct impact on the companies involved. The CMA has a track record of using its investigations to mandate changes at the company level, with one recent example being its order to Facebook and eBay to crack down on fake reviews. If companies fail to comply, there is then scope to use the evidence of illegal activity the CMA has amassed to take them to court and levy fines.

The online advertising industry is a massive beast that fuels a large part of how we interact online, with companies that account for the majority of our online activity, such as Facebook and Google, also some of the biggest names in digital ads. In the US, it’s predicted that digital ads will overtake spend on traditional marketing sometime this year, and the state of affairs in the UK is not far behind.

But while advertising is the great cash cow of the online world, it’s not all green fields and sunshine. Many consumers are not happy about the extent of commercial profiling, and — at a time when we are faced with daily examples of data breaches — even less so at how murky the business of online advertising really is. That’s before considering regulatory frameworks like GDPR that have also helped to raise awareness.

The launch of the investigation comes some four months after Philip Hammond, the Chancellor of the Exchequer, wrote to the CMA asking for it to launch an investigation. That request came on the heels of an independent review, commissioned by the government, that recommended the investigation.

In the end, the CMA has timed this investigation to coincide with the launch of its wider Digital Markets Strategy, a new framework that it has created to navigate the new and often tricky waters of providing consumer protection while at the same time fostering digital innovation. (The recent controversy around Superhuman is just the latest example of the contradiction that can sometimes exist between the two.)

The CMA said that it will be accepting comments relevant to the topic and the three areas it has outlined until the end of this month — July 30. To get a full picture of the situation, the CMA says it wants to hear from the full spectrum of organizations and businesses impacted, including online platforms themselves, advertisers, publishers, intermediaries within the ad tech stack, representative professional bodies, government and consumer groups.

These need to come in writing and can be emailed to [email protected].

From August to December, the CMA will then evaluate what it receives. If it decides that there is no need for further investigation, the CMA says that it will publish a statement closing the matter by January 2, 2020. If it decides that it will dig deeper, the resulting report will come out by July 2, 2020.

One important caveat: the investigation and any subsequent actions are dependent on the UK not being spun into chaos by a no-deal Brexit, in which the UK would exit the European Union with no trade, immigration or other agreements in place. Such a situation could create unexpected (extra) work for different government organizations, which would likely take precedence over this investigation, making it yet another example of how the Brexit mess is getting in the way of actually useful work getting done.

FCC passes measure urging carriers to block robocalls by default

The FCC voted at its open meeting this week to adopt an anti-robocall measure, but it may or may not lead to any abatement of this maddening practice — and it might not be free, either. That said, it’s a start towards addressing a problem that’s far from simple and enormously irritating to consumers.

The last two years have seen the robocall problem grow and grow, and although there are steps you can take right now to improve things, they may not totally eliminate the issue or perhaps won’t be available on your plan or carrier.

Under fire for not acting quickly enough in the face of a nationwide epidemic of scam calls, the FCC has taken action about as fast as a federal regulator can be expected to. There are two main parts to its plan to fight robocalls, one of which was approved today at the Commission’s open meeting.

The first item was proposed formally last month by Chairman Ajit Pai, and although it amounts to little more than nudging carriers, it could be helpful.

Carriers have the ability to apply whatever tools they have to detect and block robocalls before they even reach users’ phones. But it’s possible, if unlikely, that a user may prefer not to have that service active. And carriers have complained that they are afraid blocking calls by default may in fact be prohibited by existing FCC regulations.

The FCC has said before that this is not the case and that carriers should go ahead and opt everyone into these blocking services (one can always opt out), but carriers have balked. The rulemaking approved today basically just makes it crystal clear that carriers are permitted, and indeed encouraged, to opt consumers into call-blocking schemes.

That’s good, but to be clear, Wednesday’s resolution does not require carriers to do anything, nor does it prohibit carriers from charging for such a service — as indeed Sprint, AT&T, and Verizon already do in some form or another. (TechCrunch is owned by Verizon Media, but this does not affect our coverage.)

Commissioner Starks noted in his approving statement that the FCC will be watching the implementation of this policy carefully for the possibility of abuse by carriers.

At my request, the item [i.e. his addition to the proposal] will give us critical feedback on how our tools are performing. It will now study the availability of call blocking solutions; the fees charged, if any, for these services; the effectiveness of various categories of call blocking tools; and an assessment of the number of subscribers availing themselves of available call blocking tools.

A second rule is still gestating, existing right now more or less only as a threat from the FCC should carriers fail to step up their game. The industry has put together a sort of universal caller ID system called STIR/SHAKEN (Secure Telephony Identity Revisited / Secure Handling of Asserted information using toKENs), but has been slow to roll it out. Pai said late last year that if carriers didn’t put it in place by the end of 2019, the FCC would be forced to take regulatory action.
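In broad strokes, STIR/SHAKEN has the originating carrier sign a “PASSporT” token over a call’s origin and destination, which the terminating carrier verifies before trusting the caller ID. The sketch below compresses that idea into a few lines, substituting a shared HMAC secret for the certificate-based ES256 signatures the real system uses, so treat it as a conceptual illustration only.

```python
# Conceptual sketch only. Real STIR/SHAKEN signs a "PASSporT" JWT with ES256
# using carrier X.509 certificates, and the terminating carrier verifies it
# against the originating carrier's public certificate. This toy substitutes
# a shared HMAC secret so it runs on the standard library alone.
import hmac, hashlib, json, time

CARRIER_KEY = b"originating-carrier-secret"  # stand-in for a certificate key

def sign_call(orig_number, dest_number, attestation="A"):
    # Attestation "A" means the carrier knows the customer and confirms the
    # customer is authorized to use this number; "B" and "C" are weaker.
    claims = {"orig": orig_number, "dest": dest_number,
              "attest": attestation, "iat": int(time.time())}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(CARRIER_KEY, payload, hashlib.sha256).hexdigest()
    return payload, sig

def verify_call(payload, sig):
    # The terminating carrier recomputes the signature; a mismatch means the
    # caller ID claims were forged or tampered with in transit.
    expected = hmac.new(CARRIER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

payload, sig = sign_call("+15551230001", "+15559870002")
print(verify_call(payload, sig))       # True: attested call
print(verify_call(payload, "0" * 64))  # False: spoofed or altered
```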

Why the Commission didn’t simply take regulatory action in the first place is a valid question, and one some Commissioners and others have asked. Be that as it may, the threat is there and seems to have spurred carriers to action. There have been tests, but as yet no carrier has rolled out a working anti-robocall system based on STIR/SHAKEN.

Pai has said regarding these systems that “we [i.e. the FCC] do not anticipate that there would be costs passed on to the consumer,” and it does seem unlikely that your carrier will opt you into a call-blocking scheme that costs you money. But never underestimate the underhandedness and avarice of a telecommunications company. I would not be surprised if new subscribers get this added as a line item or something; watch your bills carefully.

Google Play cracks down on marijuana apps, loot boxes and more

On Wednesday, Google rolled out new policies around kids’ apps on Google Play following an FTC complaint claiming a lack of attention to apps’ compliance with children’s privacy laws and other rules around content. However, kids’ apps weren’t the only area being addressed this week. As it turns out, Google also cracked down on loot boxes and marijuana apps, while expanding sections detailing prohibitions around hate speech, sexual content and counterfeit goods, among other things.

The two more notable changes include a crackdown on “loot boxes” and a ban on apps that offer marijuana delivery — while the service providers’ apps can remain, the actual ordering process has to take place outside of the app itself, Google said.

Specifically, Google will no longer allow apps offering the ability to order marijuana through an in-app shopping cart, those that assist users in the delivery or pickup of marijuana, or those that facilitate the sale of THC products.

This isn’t a huge surprise — Apple already bans apps that allow for the sale of marijuana, tobacco, or other controlled substances in a similar fashion. On iOS, apps like Eaze and Weedmaps are allowed, but they don’t offer an ordering function. That’s the same policy Google is now applying on Google Play, too.

This is a complex subject for Google, Apple, and other app marketplace providers to tackle. Though some states have legalized the sale of marijuana, the laws vary. And it’s still illegal according to the federal government. Opting out of playing middleman here is probably the right step for app marketplace platforms.

That said, we understand Google has no intention of outright banning marijuana ordering and delivery apps.

The company knows they’re popular and wants them to stay. It’s even giving them a grace period of 30 days to make changes, and is working with the affected app developers to ensure they’ll remain accessible.

“These apps simply need to move the shopping cart flow outside of the app itself to be compliant with this new policy,” a spokesperson explained. “We’ve been in contact with many of the developers and are working with them to answer any technical questions and help them implement the changes without customer disruption.”

Another big change impacts loot boxes — a form of gambling popular among gamers. Essentially, people pay a fee to receive a random selection of in-game items, some of which may be rare or valuable. Loot boxes have been heavily criticized for a variety of reasons, including their negative effect on gameplay and how they’re often marketed to children.

Last week, a new Senate bill was introduced with bipartisan support that would prohibit the sale of loot boxes to children, and fine those in violation.

Google Play hasn’t gone so far as to ban loot boxes entirely, but instead says games now have to disclose the odds of getting each item.
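One low-friction way a game could satisfy such a rule (a sketch of the idea, not Google’s prescribed implementation) is to drive the actual draw from the same odds table it publishes, so the disclosed numbers can never drift from the real drop rates:

```python
# Illustrative sketch of odds disclosure: the disclosure text and the random
# draw both read from one table, so they cannot disagree. All item names and
# probabilities here are invented.
import random

ODDS = {"common skin": 0.80, "rare skin": 0.15, "legendary skin": 0.05}

def disclosure_text():
    """What the store listing or in-game screen would display."""
    return "\n".join(f"{item}: {p:.0%}" for item, p in ODDS.items())

def open_loot_box(rng=random):
    items, weights = zip(*ODDS.items())
    return rng.choices(items, weights=weights, k=1)[0]

print(disclosure_text())
print("You got:", open_loot_box())
```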

In addition to these changes, Google rolled out a handful of more minor updates, detailed on its Developer Policy Center website. 

Here, Google says it has expanded the definition of what it considers sexual content to include a variety of new examples, like illustrations of sexual poses, content that depicts sexual aids and fetishes, and depictions of nudity that wouldn’t be appropriate in a public context. It also added “content that is lewd or profane,” according to Android Police, which compared the old and new versions of the policy.

Definitions that are somewhat “open to interpretation” are a tactic Apple commonly uses to retain editorial control over its own App Store. By adding a ban on “lewd or profane” content, Google can opt to reject apps that aren’t covered by its other examples.

Google also expanded its list of examples around hate speech to include: “compilations of assertions intended to prove that a protected group is inhuman, inferior or worthy of being hated;” “apps that contain theories about a protected group possessing negative characteristics (e.g. malicious, corrupt, evil, etc.), or explicitly or implicitly claims the group is a threat;” and “content or speech trying to encourage others to believe that people should be hated or discriminated against because they are a member of a protected group.”

Additional changes include an update to the Intellectual Property policy that more clearly prohibits the sale or promotion for sale of counterfeit goods within an app; a clarification of the User Generated Content policy to explicitly prohibit monetization features that encourage objectionable behavior by users; and an update to the Gambling policy with more examples.

A Google spokesperson says the company regularly updates its Play Store developer policies in accordance with best practices and legal regulations around the world. However, the most recent set of changes seems designed to err on the side of getting ahead of increased regulation — not only in terms of kids’ apps and data privacy, but also other areas now under legal scrutiny, like loot boxes and marijuana sales.

What does ‘regulating Facebook’ mean? Here’s an example

Many officials claim that governments should regulate Facebook and other social platforms, but few describe what that would actually mean. A few days ago, France released a report that outlines what France — and maybe the European Union — plans to do when it comes to content moderation.

It’s an insightful 34-page document with a nuanced take on toxic content and how to deal with it. There are some brand-new ideas in the report that are worth exploring. Instead of moderating content directly, the regulator in charge of social networks would give Facebook and other social networks a list of objectives. For instance, if a racist photo goes viral and is distributed to 5 percent of monthly active users in France, you could consider that the social network has failed to fulfill its obligations.

This isn’t just wishful thinking, as the regulator would be able to fine a company up to 4 percent of its global annual turnover in case of a systemic failure to moderate toxic content.
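As a back-of-the-envelope illustration of how such objective-based regulation could be checked (my reading of the report, not draft legislative text), both the reach objective and the fine cap reduce to simple arithmetic:

```python
# Illustration of the report's objective-based approach; the user counts and
# turnover figure below are hypothetical.
def breached_reach_objective(users_reached, monthly_active_users, cap=0.05):
    """True if flagged content spread past the example 5% of French MAUs."""
    return users_reached / monthly_active_users > cap

def max_fine(global_annual_turnover_eur, rate=0.04):
    """Ceiling of 4 percent of global annual turnover for systemic failure."""
    return global_annual_turnover_eur * rate

# Hypothetical numbers: 2M users reached out of 35M French MAUs.
print(breached_reach_objective(2_000_000, 35_000_000))  # True (about 5.7%)
print(f"max fine: EUR {max_fine(70e9):,.0f}")           # EUR 2,800,000,000
```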

The government plans to turn the report into new pieces of regulation in the coming months. France doesn’t plan to stop there. It is already lobbying other countries (in Europe, the Group of 7 nations and beyond) so that they could all come up with cross-border regulation and have a real impact on moderation processes. So let’s dive into the future of social network regulation.

When Facebook CEO Mark Zuckerberg testified before Congress in April 2018, it felt like regulation was inevitable. And the company itself has been aware of this for a while.

Macron defends his startup-friendly policies

For the third year of his presidency, Emmanuel Macron talked to the French tech ecosystem at VivaTech in Paris. This time, he used the opportunity to defend his policies so far and to say that tech startups have nearly everything they need to succeed.

Frichti’s Julia Bijaoui, TransferWise’s Flora Coleman, OpenClassrooms’ Pierre Dubuc, Vinted’s Thomas Plantenga and UiPath’s Daniel Dines shared the stage with Macron and each asked one question about funding, European regulation, talent, the digital single market, etc.

Just like last year, Macron took a strong stance when it comes to corporate taxes. “In order to compete with American giants, you need to make sure that competition is fair. You pay taxes, so the tech giant that is competing against you should pay taxes too,” Macron said.

France recently approved a tax on tech giants. If you generate more than €750 million in revenue globally and €25 million in France, you have to pay 3 percent of your French revenue in taxes, even if your company is registered in Ireland, Luxembourg or the Netherlands.

“It’s a temporary measure because we want a tax at the European level, and more generally at the OECD level,” Macron said.
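The rule reduces to a few lines of arithmetic. A quick sketch, assuming (as the description above suggests) that the 3 percent applies to the full French revenue once both thresholds are crossed:

```python
# The two thresholds and flat rate described above, as a worked example; the
# revenue figures are hypothetical.
def french_digital_tax(global_revenue_eur, french_revenue_eur):
    if global_revenue_eur > 750e6 and french_revenue_eur > 25e6:
        return 0.03 * french_revenue_eur
    return 0.0

# A giant with EUR 1B global and EUR 100M French revenue owes EUR 3M, even if
# it is registered in Ireland, Luxembourg or the Netherlands...
print(french_digital_tax(1e9, 100e6))  # 3000000.0
# ...while a company under either threshold owes nothing.
print(french_digital_tax(1e9, 20e6))   # 0.0
```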

When it comes to funding, things look much better now than a few years ago. There are now more than a handful of French unicorns. And Macron defended his taxation policies, such as the flat tax on capital gains and the end of the wealth tax on shares in public or private companies.

And yet, it’s still complicated when it comes to exits — if you want to go down the public road, you most likely have to IPO in the U.S. “We have to build a European financial capital market,” Macron said. “It’ll require some modifications and deeper European integration,” he added later.

Given that Europe is about to vote for the European Parliament, a lot of Macron’s solutions involved the European Union. It sometimes felt like Macron was campaigning for his own party by saying that he wants to go further, but you need to vote for his party first.

When it comes to talent, Macron emphasized the quality of French universities and engineering schools. “We are competitive in terms of human capital and it’s no coincidence. A few years ago, everybody was saying ‘there are a lot of French people in Silicon Valley’. French people living in France are the same, but they cost much, much less,” Macron said.

He then mentioned the French Tech Visa, a special visa for tech talent and their families, designed to attract foreign talent. The program was overhauled a couple of months ago.

When it comes to regulation, Macron says that the European Union should follow the GDPR model. “What we did on privacy, one regulation for all, we have to do it for other areas,” he said. “On competition, on taxation, on data, we need to regulate.”

Macron concluded by defending a third way to regulate and foster tech companies, which is different from China and the U.S. “Europe can become the tech leader of tomorrow because we are building a tech ecosystem that is compatible with democracy,” he said.

According to him, China doesn’t do enough when it comes to individual rights and human rights, which could eventually backfire on its tech companies. And American companies have become too powerful for the U.S. government to control.

G7 countries to sign charter on tech regulation in August

Digital ministers of the Group of 7 nations are meeting today to discuss an upcoming charter on toxic content and tech regulation at large. Those countries plan to sign a charter during the annual G7 meeting in Biarritz, France, in August.

“Everyone has to deal with hateful content,” France’s digital minister Cédric O said in a meeting with a few journalists. “This industry needs to reach maturity and, in order to do that, we need to rethink the accountability of those companies and the role of governments.”

You may have noticed that G7 countries also announced the Christchurch Call today. It is a nonbinding pledge asking tech companies to improve their moderation processes to prevent terrorist content from going viral.

Those two things are separate. The French government views the Christchurch Call as a way to start a discussion with tech platforms and put the spotlight on a particular issue. But the charter should be broader than the Christchurch Call and mention other issues.

And yet, it’s going to be hard to sign a common agreement between such a diverse group of countries. “There are Nordic countries that are very concerned about free speech and there are Latin countries that are pushing for more regulation,” Cédric O said.

In addition to the Group of 7 nations (Canada, France, Germany, Italy, Japan, the U.K. and the U.S.), officials from Australia, India and New Zealand are participating in today’s discussions.

The charter won’t define hateful speech too precisely so that countries can interpret that phrase in their own way. But the negotiations should lead to a set of principles that each country can turn into laws.

In particular, officials want to encourage transparency when it comes to moderation processes through audits, as well as increased cooperation between tech companies, governments and civil society.

In December 2018, the Group of 7 nations announced plans to create a global panel to study the effects of AI. Ministers are discussing the implementation of this panel during today’s meeting, as well.

Sources working for the French Economy Ministry say that the U.S. might not sign the charter in August. “We won’t compromise too much — either all countries can agree on a strong stance, or some countries don’t sign the charter,” a source said.