Dutch court orders Facebook to ban celebrity crypto scam ads after another lawsuit

A Dutch court has ruled that Facebook can be required to use filter technologies to identify and pre-emptively take down fake ads linked to cryptocurrency scams that carry the image of media personality John de Mol and other well-known celebrities.

The Dutch celebrity filed a lawsuit against Facebook in April over the misappropriation of his and other celebrities’ likenesses to shill Bitcoin scams via fake ads run on its platform.

In an immediately enforceable preliminary judgement today, the court ordered Facebook to remove all offending ads within five days and to provide data on the accounts running them within a week.

Per the judgement, victims of the crypto scams had reported a total of €1.7 million (~$1.8M) in damages to the Dutch government at the time of the court summons.

The case is similar to a legal action instigated by UK consumer advice personality Martin Lewis last year, when he announced defamation proceedings against Facebook — also over the misuse of his image in fake ads for crypto scams.

Lewis withdrew the suit at the start of this year after Facebook agreed to apply new measures to tackle the problem: namely, a scam ads report button. It also agreed to provide funding to a UK consumer advice organization to set up a scam advice service.

In the de Mol case the lawsuit was allowed to run its course — resulting in today’s preliminary judgement against Facebook. It’s not yet clear whether the company will appeal, but in the wake of the ruling Facebook has said it will bring the scam ads report button to the Dutch market early next month.

In court, the platform giant sought to argue that it could not more proactively remove the Bitcoin scam ads containing celebrity images on the grounds that doing so would breach EU law against general monitoring conditions being placed on Internet platforms.

However the court rejected that argument, citing a recent ruling by Europe’s top court on platform obligations to remove hate speech, and concluding that the requested measures are specific enough that they cannot be classified as ‘general obligations of supervision’.

It also rejected arguments by Facebook’s lawyers that restricting the fake scam ads would restrict the freedom of expression of a natural person, or the right to be freely informed — pointing out that the ‘expressions’ involved are aimed at commercial gain and include fraudulent practices.

Facebook also sought to argue it is already doing all it can to identify and take down the fake scam ads — conceding, too, that its screening processes are not perfect. But the court said there’s no requirement of 100% effectiveness for additional proactive measures to be ordered. Its ruling further notes a striking reduction in fake scam ads using de Mol’s image since the lawsuit was announced.

Facebook’s argument that it’s just a neutral platform was also rejected, with the court pointing out that its core business is advertising.

It also took the view that requiring Facebook to apply technically complicated measures and extra effort, including in terms of manpower and costs, to more effectively remove offending scam ads is not unreasonable in this context.

The judgement orders Facebook to remove fake scam ads containing celebrity likenesses from Facebook and Instagram within five days of the order — with a penalty of €10k for each day it fails to comply, up to a maximum of €1M (~$1.1M).

The court order also requires Facebook to provide the affected celebrity with data on the accounts that had been misusing their likeness within seven days of the judgement, with a further penalty of €1k per day for failure to comply, up to a maximum of €100k.

Facebook has also been ordered to pay the case costs.

Responding to the judgement in a statement, a Facebook spokesperson told us:

We have just received the ruling and will now look at its implications. We will consider all legal actions, including appeal. Importantly, this ruling does not change our commitment to fighting these types of ads. We cannot stress enough that these types of ads have absolutely no place on Facebook and we remove them when we find them. We take this very seriously and will therefore make our scam ads reporting form available in the Netherlands in early December. This is an additional way to get feedback from people, which in turn helps train our machine learning models. It is in our interest to protect our users from fraudsters and when we find violators we will take action to stop their activity, up to and including taking legal action against them in court.

One legal expert describes the judgement as “pivotal”. Law professor Mireille Hildebrandt told us that it provides an alternative legal route for Facebook users to litigate and pursue collective enforcement of European personal data rights, rather than suing for damages — which entails a high burden of proof.

Injunctions are faster and more effective, Hildebrandt added.

The judgement also raises questions around the burden of proof for demonstrating Facebook has removed scam ads with sufficient (increased) accuracy; and what specific additional measures it might deploy to improve its takedown rate.

The introduction of the ‘report scam ad’ button does, though, provide one clear avenue for measuring takedown performance.

The button was finally rolled out to the UK market in July. And while Facebook has talked since the start of this year about ‘envisaging’ introducing it in other markets, it hasn’t exactly been proactive in doing so — until now, with this court order.

Facebook sues OnlineNIC for domain name fraud associated with malicious activity

Facebook today announced it has filed suit in California against domain registrar OnlineNIC and its proxy service ID Shield for registering domain names that pretend to be associated with Facebook, like www-facebook-login.com or facebook-mails.com. Facebook says these domains are intentionally designed to mislead and confuse end users, who believe they’re interacting with Facebook.

These fake domains are also often associated with malicious activity, like phishing.
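For a sense of why such names get flagged, here’s a minimal sketch of a lookalike-domain check — the brand tokens, allowlist, and example domains are hypothetical illustrations, not Facebook’s actual detection logic:

```python
# Minimal sketch of a lookalike-domain check. The brand tokens,
# allowlist, and example domains are hypothetical illustrations,
# not Facebook's actual detection logic.
BRANDS = {"facebook", "instagram"}
LEGITIMATE = {"facebook.com", "instagram.com"}

def looks_like_squat(domain: str) -> bool:
    d = domain.lower().rstrip(".")
    # A brand's own domain (or a real subdomain of it) is not a squat.
    if d in LEGITIMATE or any(d.endswith("." + legit) for legit in LEGITIMATE):
        return False
    # Collapse dashes, a common trick for making a dash read like a
    # subdomain dot (e.g. "www-facebook-login.com").
    flattened = d.replace("-", "")
    return any(brand in flattened for brand in BRANDS)

for name in ("www-facebook-login.com", "facebook-mails.com",
             "hackingfacebook.net", "example.org"):
    print(f"{name}: {looks_like_squat(name)}")
# -> True, True, True, False
```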

While some who register such domains hope to eventually sell them back to Facebook at a marked-up price, earning a profit, others have worse intentions. And with the launch of Facebook’s own cryptocurrency, Libra, a number of new domain cybersquatters have emerged. Facebook was recently able to take down some of these, like facebooktoken.org and ico-facebook.org, one of which had already started collecting personal information from visitors by falsely touting a Facebook ICO.

Facebook’s new lawsuit, however, focuses specifically on OnlineNIC, which Facebook says has a history of allowing cybersquatters to register domains with its privacy/proxy service, ID Shield. The suit alleges that the registered domains, like hackingfacebook.net, are being used for malicious activity, including “phishing and hosting websites that purported to sell hacking tools.”

The suit also references some 20 other domain names it says are confusingly similar to Facebook and Instagram trademarks.


OnlineNIC has been sued before for allowing this sort of activity, including by Verizon, Yahoo, Microsoft and others. In the case of Verizon (disclosure: TechCrunch parent), OnlineNIC was found liable for registering more than 600 domain names similar to Verizon’s trademark, and the courts awarded $33.15 million in damages as a result, Facebook’s filing states.

Facebook is asking for a permanent injunction against OnlineNIC’s activity, as well as damages.

The company says it took this issue to the courts because OnlineNIC has not been responsive to its concerns. Facebook today proactively reports instances of abuse to domain name registrars and their privacy/proxy services, and often works with them to take down malicious domains. But the issue is widespread — there are tens of millions of domain names registered through these services today. Some of these businesses are not reputable; some, like OnlineNIC, will not investigate or even respond to Facebook’s abuse reports.

The news of the lawsuit was previously reported by CNET and other domain name news sources, based on courthouse filings.

Attorney David J. Steele, who previously won the $33 million judgement for Verizon, is representing Facebook in the case.

“By mentioning our apps and services in the domain names, OnlineNIC and ID Shield intended to make them appear legitimate and confuse people. This activity is known as cybersquatting and OnlineNIC has a history of this behavior,” writes Facebook, in an announcement. “This lawsuit is one more step in our ongoing efforts to protect people’s safety and privacy,” it says.

OnlineNIC has been asked for comment and we’ll update if it responds.

EU-US Privacy Shield passes third Commission ‘health check’ — but litigation looms

The third annual review of the EU-US Privacy Shield data transfer mechanism has once again been nodded through by Europe’s executive.

This despite the EU parliament calling last year for the mechanism to be suspended.

The European Commission also issued US counterparts with a compliance deadline last December — saying the US must appoint a permanent ombudsperson to handle EU citizens’ complaints, as required by the arrangement, and do so by February.

This summer the US senate finally confirmed Keith Krach — under secretary of state for economic growth, energy, and the environment — in the ombudsperson role.

The Privacy Shield arrangement was struck between EU and US negotiators back in 2016 — as a rushed replacement for the prior Safe Harbor data transfer pact which in fall 2015 was struck down by Europe’s top court following a legal challenge after NSA whistleblower Edward Snowden revealed US government agencies were liberally helping themselves to digital data from Internet companies.

At heart is a fundamental legal clash between EU privacy rights and US national security priorities.

The intent for the Privacy Shield framework is to paper over those cracks by devising enough checks and balances that the Commission can claim it offers adequate protection for EU citizens’ personal data when taken to the US for processing, despite the lack of a commensurate, comprehensive data protection regime there. But critics have argued from the start that the mechanism is flawed.

Even so, around 5,000 companies are now signed up to use Privacy Shield to certify transfers of personal data. So there would be major disruption to businesses were it to go the way of its predecessor — as has looked likely in recent years, since Donald Trump took office as US president.

The Commission remains a staunch defender of Privacy Shield, warts and all, preferring to support data-sharing business as usual than offer a pro-active defence of EU citizens’ privacy rights.

To date it has offered little in the way of objection about how the US has implemented Privacy Shield in these annual reviews, despite some glaring flaws and failures (for example the disgraced political data firm, Cambridge Analytica, was a signatory of the framework, even after the data misuse scandal blew up).

The Commission did lay down one deadline late last year, regarding the ongoing lack of a permanent ombudsperson. So it can now check that box.

It also notes approvingly today that the final two vacancies on the US’ Privacy and Civil Liberties Oversight Board have been filled, meaning it’s fully staffed for the first time since 2016.

Commenting in a statement, commissioner for justice, consumers and gender equality, Věra Jourová, added: “With around 5,000 participating companies, the Privacy Shield has become a success story. The annual review is an important health check for its functioning. We will continue the digital diplomacy dialogue with our U.S. counterparts to make the Shield stronger, including when it comes to oversight, enforcement and, in a longer-term, to increase convergence of our systems.”

Its press release characterizes US enforcement action related to the Privacy Shield as having “improved” — citing the Federal Trade Commission taking enforcement action in a grand total of seven cases.

It also says vaguely that “an increasing number” of EU individuals are making use of their rights under the Privacy Shield, claiming the relevant redress mechanisms are “functioning well”. (Critics have long suggested the opposite.)

The Commission is recommending further improvements too, though, including that the US expand compliance checks, such as those concerning false claims of participation in the framework.

So presumably there’s a bunch of entirely fake compliance claims going unchecked, as well as actual compliance going under-checked…

“The Commission also expects the Federal Trade Commission to further step up its investigations into compliance with substantive requirements of the Privacy Shield and provide the Commission and the EU data protection authorities with information on ongoing investigations,” the EC adds.

All these annual Commission reviews are just fiddling around the edges, though. The real substantive test for Privacy Shield, which will determine its long-term survival, is looming on the horizon — in the form of a judgement expected from Europe’s top court next year.

In July a hearing took place on a key case that’s been dubbed Schrems II. This is a legal challenge which initially targeted Facebook’s use of another EU data transfer mechanism but has been broadened to include a series of legal questions over Privacy Shield — now with the Court of Justice of the European Union.

There is also separate litigation directly targeting Privacy Shield, brought by a French digital rights group which argues it’s incompatible with EU law on account of US government mass surveillance practices.

The Commission’s PR notes the pending litigation — writing that this “may also have an impact on the Privacy Shield”. “A hearing took place in July 2019 in case C-311/18 (Schrems II) and, once the Court’s judgement is issued, the Commission will assess its consequences for the Privacy Shield,” it adds.

So, tl;dr, today’s third annual review doesn’t mean Privacy Shield is out of the legal woods.

$35B face data lawsuit against Facebook will proceed

Facebook just lost a battle in its war to stop a $35 billion class action lawsuit regarding alleged misuse of facial recognition data in Illinois. Today it was denied its request for an en banc hearing before the full slate of Ninth Circuit judges that could have halted the case. Now the case will go to trial unless the Supreme Court intercedes.

The suit alleges that Illinois citizens didn’t consent to having their uploaded photos scanned with facial recognition and weren’t informed of how long the data would be saved when the mapping started in 2011. Facebook could face $1,000 to $5,000 in penalties per user for 7 million people — at the statutory maximum, 7 million × $5,000 works out to $35 billion.


A three-judge panel of the Ninth Circuit rejected Facebook’s motion to dismiss the case and its appeal of the class certification of the plaintiffs back in August. One of those judges said it “seems likely” that the Facebook facial recognition data could be used to identify users in surveillance footage or even unlock a biometrically secured cell phone. Facebook had originally built the feature to power photo tag suggestions, which ask users whether it’s them or a particular friend in an untagged photo.

Nicholas Iovino spotted the announcement today, which we’ve obtained and embedded below. When asked for comment, a Facebook spokesperson responded: “Facebook has always told people about its use of face recognition technology and given them control over whether it’s used for them. We are reviewing our options and will continue to defend ourselves vigorously.”

[Image Credit: Mike MacKenzie]

Additional reporting by Zack Whittaker

Former Tinder CEO strikes back against sexual misconduct accusations with defamation lawsuit

Former Tinder CEO Greg Blatt has filed a defamation lawsuit against Sean Rad and Rosette Pambakian, seeking at least $50 million in damages and accusing them of having “conspired to make false allegations of sexual harassment and sexual assault against Blatt with the specific intent to damage Blatt’s good name, personal and professional reputation, and credibility.”

In response, Rad and Pambakian’s attorney Orin Snyder described the suit as part of a campaign “to retaliate against and smear a victim of sexual assault and the person who reported it.”

Last year, Rad (Tinder’s co-founder and former CEO), Pambakian (its former vice president of marketing and communications) and other Tinder founders and executives filed a lawsuit against Tinder’s parent company Match Group and its majority shareholder IAC, accusing them of manipulating financial data to lower the company’s valuation and strip the plaintiffs of lucrative stock options.

The suit also accused Blatt (pictured above) of groping and sexually harassing Pambakian at the company’s 2016 holiday party, when Blatt was still Tinder’s CEO. In response to the suit, Match and IAC said the claims were meritless.

At the time, Pambakian was still employed at Tinder. She later dropped out of the suit due to an arbitration agreement, and was fired a few months later, leading her to claim that Match did this “in blatant retaliation for joining a group of colleagues and Tinder’s original founding members in a lawsuit against Match and IAC, standing up for our rights, calling out the company’s CEO Greg for sexual misconduct, and confronting the company about covering up what happened to me.”

Pambakian is now pursuing a separate suit against Blatt and Match Group, accusing them of wrongful termination and sexual assault.

Blatt’s new suit, however, claims:

Rad and Pambakian have attempted to weaponize an important social movement, undermining the plight of true victims of sexual abuse by making false accusations in cynical pursuit of a $2 billion windfall … Blatt is expected to be a key witness for IAC and Match in the Valuation Lawsuit. Damaging Blatt’s credibility and tarnishing his character are important elements of Pambakian’s and Rad’s litigation strategy in that action.

The suit also says that the encounter with Pambakian at the Tinder holiday party involved consensual flirting and kissing, but that they “never engaged in any further physical encounters” and made mutual apologies the following Monday.

According to the suit, Rad filed a complaint months later accusing Blatt of harassing Pambakian, and the complaint “was thoroughly investigated by in-house counsel and two outside law firms,” who concluded that there was no harassment or abuse.

The holiday party and Rad’s subsequent complaint are also discussed in a draft of Blatt’s resignation letter from Tinder (which has been obtained by TechCrunch and other publications), in which Blatt said that after joining a “female executive” and other Tinder employees in a hotel room, he “engaged in some snuggling and nuzzling (I can’t come up with words that better describe what I would call the most superficial of human contact) with the female executive.”

Blatt went on to describe his behavior as “really dumb,” while also insisting that “the snuggling and nuzzling was consensual.”

Blatt’s complaint includes an email that appears to be from Rad to his financial advisor, written shortly before Rad’s complaint, in which he wrote about Blatt: “Fuck him. We’re at war. We will destroy him.”

The suit also claims that Rad and the firm Bench Walk Advisors offered Pambakian millions of dollars for participating in the lawsuit. (Snyder told The Verge there were no upfront payments for participation: “The only payments were triggered by IAC/Match retaliating against plaintiffs by stripping away their hard-earned equity.”)

Here’s Snyder’s full statement in response to Blatt’s suit:

This is a new low for IAC/Match and their former CEO. They continue to retaliate against and smear a victim of sexual assault and the person who reported it. Their attacks are based on lies and documents that are taken out of context. When all of the evidence comes to light, it will be obvious what happened here. It’s shameful that these public companies are continuing to cover-up the truth.

And you can read the full suit below.

2019-10-3 Blatt Dkt 18 Firs… by TechCrunch on Scribd

Europe’s top court sets new line on policing illegal speech online

Europe’s top court has set a new line for the policing of illegal speech online. The ruling has implications for how speech is regulated on online platforms — and is likely to feed into wider planned reform of regional rules governing platforms’ liabilities.

Per the CJEU decision, platforms such as Facebook can be instructed to hunt for and remove illegal speech worldwide — including speech that’s “equivalent” to content already judged illegal.

Although any such takedowns remain within the framework of “relevant international law”.

So in practice it does not mean that a court order issued in one EU country will be universally applied in all jurisdictions, as there’s no international agreement on what constitutes unlawful speech or, more narrowly, defamatory speech.

Existing EU rules on the free flow of information on ecommerce platforms — aka the eCommerce Directive — which state that Member States cannot force a “general content monitoring obligation” on intermediaries, do not preclude courts from ordering platforms to remove or block illegal speech, the court has decided.

That decision worries free speech advocates who are concerned it could open the door to general monitoring obligations being placed on tech platforms in the region, with the risk of a chilling effect on freedom of expression.

Facebook has also expressed concern. Responding to the ruling in a statement, a spokesperson told us:

“This judgement raises critical questions around freedom of expression and the role that internet companies should play in monitoring, interpreting and removing speech that might be illegal in any particular country. At Facebook, we already have Community Standards which outline what people can and cannot share on our platform, and we have a process in place to restrict content if and when it violates local laws. This ruling goes much further. It undermines the long-standing principle that one country does not have the right to impose its laws on speech on another country. It also opens the door to obligations being imposed on internet companies to proactively monitor content and then interpret if it is “equivalent” to content that has been found to be illegal. In order to get this right national courts will have to set out very clear definitions on what ”identical” and ”equivalent” means in practice. We hope the courts take a proportionate and measured approach, to avoid having a chilling effect on freedom of expression.”

The legal questions were referred to the CJEU by a court in Austria, and stem from a defamation action brought by Austrian Green Party politician, Eva Glawischnig, who in 2016 filed suit against Facebook after the company refused to take down posts she claimed were defamatory against her.

In 2017 an Austrian court ruled Facebook should take the defamatory posts down and do so worldwide. However Glawischnig also wanted it to remove similar posts, not just identical reposts of the illegal speech, which she argued were equally defamatory.

The current situation, where platforms require notice of illegal content before carrying out a takedown, is problematic from one perspective, given the scale and speed of content distribution on digital platforms — which can make it impossible to keep up with reporting re-postings.

Facebook’s platform also has closed groups where content can be shared out of sight of non-members, and where an individual could therefore have no ability to see unlawful content that’s targeted at them — making it essentially impossible for them to report it.

While the case concerns the scope of the application of defamation law on Facebook’s platform, the ruling clearly has broader implications for regulating a range of “unlawful” content online.

Specifically the CJEU has ruled that an information society service “host provider” can be ordered to:

  • … remove information which it stores, the content of which is identical to the content of information which was previously declared to be unlawful, or to block access to that information, irrespective of who requested the storage of that information;
  • … remove information which it stores, the content of which is equivalent to the content of information which was previously declared to be unlawful, or to block access to that information, provided that the monitoring of and search for the information concerned by such an injunction are limited to information conveying a message the content of which remains essentially unchanged compared with the content which gave rise to the finding of illegality and containing the elements specified in the injunction, and provided that the differences in the wording of that equivalent content, compared with the wording characterising the information which was previously declared to be illegal, are not such as to require the host provider to carry out an independent assessment of that content;
  • … remove information covered by the injunction or to block access to that information worldwide within the framework of the relevant international law

The court has sought to balance the requirement under EU law of no general monitoring obligation on platforms with the ability of national courts to regulate information flow online in specific instances of illegal speech.

In the judgement the CJEU also invokes the idea of Member States being able to “apply duties of care, which can reasonably be expected from them and which are specified by national law, in order to detect and prevent certain types of illegal activities” — saying the eCommerce Directive does not stand in the way of states imposing such a requirement.

Some European countries are showing appetite for tighter regulation of online platforms. In the UK, for instance, the government laid out proposals earlier this year for regulating a broad range of online harms. And two years ago Germany introduced a law to regulate hate speech takedowns on online platforms.

Over the past several years the European Commission has also kept up pressure on platforms to speed up takedowns of illegal content — signing tech companies up to a voluntary code of practice, back in 2016, and continuing to warn it could introduce legislation if targets are not met.

Today’s ruling is thus being interpreted in some quarters as opening the door to a wider reform of EU platform liability law by the incoming Commission — which could allow for imposing more general monitoring or content-filtering obligations, aligned with Member States’ security or safety priorities.

“We can trace worrying content blocking tendencies in Europe,” says Sebastian Felix Schwemer, a researcher in algorithmic content regulation and intermediary liability at the University of Copenhagen. “The legislator has earlier this year introduced proactive content filtering by platforms in the Copyright DSM Directive (“uploadfilters”) and similarly suggested in a Proposal for a Regulation on Terrorist Content as well as in a non-binding Recommendation from March last year.”

Critics of a controversial copyright reform — which was agreed by European legislators earlier this year — have warned consistently that it will result in tech platforms pre-filtering user generated content uploads. Although the full impact remains to be seen, as Member States have two years from April 2019 to pass legislation meeting the Directive’s requirements.

In 2018 the Commission also introduced a proposal for a regulation on preventing the dissemination of terrorist content online — which explicitly included a requirement for platforms to use filters to identify and block re-uploads of illegal terrorist content. Though the filter element was challenged in the EU parliament.

“There is little case law on the question of general monitoring (prohibited according to Article 15 of the E-Commerce Directive), but the question is highly topical,” says Schwemer. “Both towards the trend towards proactive content filtering by platforms and the legislator’s push for these measures (Article 17 in the Copyright DSM Directive, Terrorist Content Proposal, the Commission’s non-binding Recommendation from last year).”

Schwemer agrees the CJEU ruling will have “a broad impact” on the behavior of online platforms — going beyond Facebook and the application of defamation law.

“The incoming Commission is likely to open up the E-Commerce Directive (there is a leaked concept note by DG Connect from before the summer),” he suggests. “Something that has previously been perceived as opening Pandora’s Box. The decision will also play into the coming lawmaking process.”

The ruling also naturally raises the question of what constitutes “equivalent” unlawful content — and who will be the judge of that, and how?

The CJEU goes into some detail on “specific elements” it says are needed for non-identical illegal speech to be judged equivalently unlawful, and also on the limits of the burden that should be placed on platforms so they are not under a general obligation to monitor content — ultimately implying that technology filters, not human assessments, should be used to identify equivalent speech.

From the judgement:

… it is important that the equivalent information referred to in paragraph 41 above contains specific elements which are properly identified in the injunction, such as the name of the person concerned by the infringement determined previously, the circumstances in which that infringement was determined and equivalent content to that which was declared to be illegal. Differences in the wording of that equivalent content, compared with the content which was declared to be illegal, must not, in any event, be such as to require the host provider concerned to carry out an independent assessment of that content.

In those circumstances, an obligation such as the one described in paragraphs 41 and 45 above, on the one hand — in so far as it also extends to information with equivalent content — appears to be sufficiently effective for ensuring that the person targeted by the defamatory statements is protected. On the other hand, that protection is not provided by means of an excessive obligation being imposed on the host provider, in so far as the monitoring of and search for information which it requires are limited to information containing the elements specified in the injunction, and its defamatory content of an equivalent nature does not require the host provider to carry out an independent assessment, since the latter has recourse to automated search tools and technologies.

“The Court’s thoughts on the filtering of ‘equivalent’ information are interesting,” Schwemer continues. “It boils down to that platforms can be ordered to track down illegal content, but only under specific circumstances.

“In its rather short judgement, the Court comes to the conclusion… that it is no general monitoring obligation on hosting providers to remove or block equivalent content. That is provided that the search of information is limited to essentially unchanged content and that the hosting provider does not have to carry out an independent assessment but can rely on automated technologies to detect that content.”

While he says the court’s intentions — to “limit defamation” — are “good” he points out that “relying on filtering technologies is far from unproblematic”.

Filters can indeed be an extremely blunt tool. Even basic text filters can be triggered by innocent words that happen to contain a prohibited spelling. And applying filters to block defamatory speech could lead, for example, to inadvertently blocking lawful reactions that quote the unlawful speech.
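To make that concrete, here’s a minimal sketch of a naive substring filter — the banned term and example posts are hypothetical illustrations, not any platform’s actual system — showing exactly these failure modes:

```python
# Minimal sketch of a naive substring filter. The banned term and
# example posts are hypothetical, not any platform's actual system.
BANNED_TERMS = ["scam"]

def is_blocked(post: str) -> bool:
    text = post.lower()
    return any(term in text for term in BANNED_TERMS)

posts = [
    "Buy my scam coin now!",         # the unlawful ad a court wants gone
    "Warning: that ad is a scam!",   # a lawful reaction quoting the term
    "We had scampi for dinner",      # an innocent word containing the term
]
for post in posts:
    print(is_blocked(post), "->", post)
# All three are blocked: the filter cannot distinguish the unlawful
# speech from a quotation of it or a coincidental spelling.
```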

The ruling also means platforms and/or their technology tools are being compelled to define the limits of free expression under threat of liability. Which pushes them towards setting a more conservative line on what’s acceptable expression on their platforms — in order to shrink their legal risk.

Although definitions of what constitutes unlawful, and equivalently unlawful, speech will ultimately rest with the courts.

It’s worth pointing out that platforms are already defining speech limits — just driven by their own economic incentives.

For ad supported platforms, these incentives typically demand maximizing engagement and time spent on the platform — which tends to encourage users to spread provocative/outrageous content.

That can sum to clickbait and junk news. Equally it can mean the most hateful stuff under the sun.

Without a new online business model paradigm that radically shifts the economic incentives around content creation on platforms the tension between freedom of expression and illegal hate speech will remain. As will the general content monitoring obligation such platforms place on society.

Dating app maker Match sued by FTC for fraud

They’re just not that into you. Or maybe it was a bot? The U.S. Federal Trade Commission on Wednesday announced it has sued Match Group, the owner of just about all the dating apps — including Match, Tinder, OkCupid, Hinge, PlentyOfFish, and others — for fraudulent business practices. According to the FTC, Match tricked hundreds of thousands of consumers into buying subscriptions, exposed customers to the risk of fraud, and engaged in other deceptive and unfair practices.

The suit focuses only on Match.com and boils down to this: Match.com didn’t just turn a blind eye to its massive bot and scammer problem, the FTC claims. It knowingly profited from it. And it made deceiving users a core part of its business practices.

The charges against Match are fairly significant.

The FTC says that most consumers aren’t aware that 25 to 30 percent of Match registrations per day come from scammers. This includes romance scams, phishing scams, fraudulent advertising, and extortion scams. During some months from 2013 to 2016, more than half the communications taking place on Match were from accounts the company identified as fraudulent.

Bots and scammers, of course, are a problem all over the web. The difference is that, in Match’s case, it indirectly profited from this, at consumers’ expense, the suit claims.

The dating app sent out marketing emails (i.e., the “You caught his eye” notices) to potential subscribers about new messages in the app’s inbox. However, it did so after it had already flagged the message’s sender as a suspected bot or scammer.


“We believe that Match.com conned people into paying for subscriptions via messages the company knew were from scammers,” said Andrew Smith, Director of the FTC’s Bureau of Consumer Protection. “Online dating services obviously shouldn’t be using romance scammers as a way to fatten their bottom line.”

From June 2016 to May 2018, Match’s own analysis found 499,691 consumers signed up for subscriptions within 24 hours of receiving an email touting a fraudulent communication, the FTC said. Some of these consumers joined Match only to find the message that brought them there was a scam. Others joined after Match deleted the scammer’s account, following its fraud review process. That left them to find the account that messaged them was now “unavailable.”

In all cases, the victims were now stuck with a subscription — and a hassle when they tried to cancel.

Because of Match’s allegedly “deceptive advertising, billing, and cancellation practices,” consumers would often try to reverse their charges through their bank. Match would then ban the users from the app.

Related to this, the FTC says Match is also in violation of the Restore Online Shoppers’ Confidence Act (ROSCA) for failing to provide a simple way for customers to stop recurring charges. In 2015, one internal Match document showed it took more than six clicks to cancel a subscription, a process that often led consumers to think they had canceled when they had not.


And the suit alleges Match tricked people into free, six-month subscriptions by promising them they wouldn’t have to pay if they didn’t meet someone. It didn’t, however, adequately disclose that there were other, specific steps that had to be taken, involving how they had to use their subscription or redeem their free months.


Match, naturally, disputes the matter. It claims that it is, in fact, fighting fraud and that it handles 85% of potentially improper accounts in the first four hours, often before they become active. And it handles 96% of those fraudulent accounts within a day.

“For nearly 25 years Match has been focused on helping people find love, and fighting the criminals that try to take advantage of users. We’ve developed industry-leading tools and A.I. that block 96% of bots and fake accounts from our site within a day and are relentless in our pursuit to rid our site of these malicious accounts,” Match stated, in response to the news. “The FTC has misrepresented internal emails and relied on cherry-picked data to make outrageous claims and we intend to vigorously defend ourselves against these claims in court.”

The Match Group, as you may know, loves to have its day in court.

The FTC’s lawsuit isn’t the only one facing Match’s parent company because it doesn’t (allegedly) play fair.

A group of Tinder execs are currently suing Match and its controlling shareholder IAC for manipulating financial data to strip them of their stock options. The suit today continues, even though some plaintiffs had to drop out because Match had snuck an arbitration clause into its employees’ recent compliance acknowledgments.

Now those former plaintiffs are acting as witnesses, and Match is trying to argue that the litigation funding agreement overcompensates them for their testimony in violation of the law. The judge called that motion a “smoke screen” and an attempt to “litigate [the plaintiffs] to death until they settle.”

The Match Group also got into it with Tinder’s rival Bumble, which it twice failed to acquire. It filed a lawsuit alleging patent infringement, which Bumble said was meant to bring down its valuation. Bumble then filed, and later dropped, its own $400M suit over Match fraudulently obtaining Bumble’s trade secrets.

In the latest lawsuit, the FTC is asking Match to pay back the “ill-gotten” money and wants to impose civil penalties and other relief. While the financial impacts may not be enough to take down a company with the resources of Match, the headlines from the trial could bring about an increase in negative consumer sentiment over Match and online dating in general. It’s a business that’s become commonplace and normalized in society, but also has a reputation of being a little scammy at times, too. This suit won’t help.

And given that Match Group operates a majority of the U.S.’s top dating apps, that could have a larger, trickle-down effect on its broader business.

The FTC suit is available below.

An explosive breach of contract lawsuit against former Sequoia Capital partner Michael Goguen has been dropped

Three-and-a-half years ago, a lawsuit hit the San Mateo, Ca. county courthouse that briefly attracted the attention of the worldwide venture capital community, given its salacious nature. The defendant: longtime VC Michael Goguen, who’d spent 20 years with Sequoia Capital in Menlo Park, Ca. The plaintiff: a former girlfriend who described him in the filing as a “worse predator than the human traffickers.” She said in the filing that she would know, having become a “victim of human trafficking” at age 15 when she was “brought to America in 2001,” then “sold as a dancer to a strip club” in Texas, which is where she says she first encountered Goguen.

What she wanted from the lawsuit was money that she said was owed to her by Goguen: $40 million over four installments that the lawsuit stated were “compensation for the sexual abuse and [a sexual] infection she contracted from him.” According to her suit, Goguen agreed to these terms, paying the plaintiff, Amber Laurel Baptiste, a first installment of $10 million, but then refused to make further payments.

At the time, Goguen called the allegations “horrific” and suggested Baptiste was a spurned lover, saying they’d had a “10+ year romantic relationship that ended badly.” He also filed a cross complaint alleging extortion.

Today, that cross complaint lives on, while Baptiste’s case against Goguen was just quietly dismissed by arbitrator Read Ambler, a retired judge who served 20 years with the Santa Clara County Superior Court. In a ruling filed yesterday in San Mateo court, Ambler wrote that Baptiste’s failures to undergo medical examinations doomed her case, as did her failure to produce documents necessary in the discovery process.

“The record presented further establishes that Baptiste’s failures were willful,” Ambler writes. “Baptiste appears to believe that the information responsive to the discovery at issue is either not relevant, or with respect to the medical examinations, not permitted by law. While Baptiste is free to believe what she wants to believe, the orders are binding on Baptiste, and her failure to comply with the orders is unacceptable.”

Baptiste doesn’t currently have legal representation, though four sets of lawyers have represented her over time.

Patricia Glaser, a high-powered attorney who took on Baptiste’s case originally (and later agreed to represent Hollywood producer Harvey Weinstein), asked to be relieved from the case five months later, citing “irreconcilable differences.” More recently, an L.A.-based couple that operates the Sherman Law Group filed a motion to be relieved as Baptiste’s counsel, citing “irreconcilable differences and a breakdown in communication.”

Goguen’s attorneys say he will continue to pursue his counterclaims against Baptiste and looks forward to “complete vindication.”

Though Ambler never remarked on the merits of Baptiste’s claims, Goguen’s attorney Diane Doolittle further said today in a statement: “Amber Laurel Baptiste’s sensationalized lawsuit against Silicon Valley venture capitalist Michael Goguen collapsed under the weight of its own falsehood yesterday, when a judge dismissed the case because of Baptiste’s repeated, egregious and willful misconduct. Over the course of this case, Baptiste perjured herself, concealed, destroyed and falsified key evidence, and demonstrated her contempt for the legal system by systematically violating numerous court orders.”

Baptiste could not be reached for comment.

Baptiste’s lawsuit against Goguen prompted Sequoia to part ways with him almost immediately. Later the very day that TechCrunch broke news of the suit in 2016, a Sequoia spokesman told us that while the firm understood “these allegations of serious improprieties” to be “unproven and unrelated to Sequoia” its management committee had nevertheless “decided that Mike’s departure was the appropriate course of action.”

Goguen, who sold an $11 million home in Atherton, Ca., in 2017, has spent much of his time in recent years at another home in Whitefish, Montana, where he has seemingly been wooing locals. An August story in The Missoulian about a separate case describes him as “known locally for philanthropic ventures.”

Continues the story: “Such donations have funded Montana’s Internet Crimes Against Children Task Force and a Flathead group teaching girls to code. Two Bear Air, his northwestern Montana search and rescue outfit free to anyone who has needed it, has performed well over 500 missions and 400 rescues, according to executive director and chief pilot Jim Pierce. Goguen has personally completed 30 rescues, the Daily Inter Lake reported in February. The Flathead Beacon reports he was honored with the Great Whitefish Award earlier this year.”

Lyft faces lawsuit that alleges kidnapping at gunpoint and rape

Lyft is facing another lawsuit pertaining to its handling of alleged sexual assaults at the hands of drivers on its platform. In a suit filed today in the San Francisco Superior Court, Alison Turkos accuses Lyft of eleven counts, including general negligence, vicarious liability for assault with a deadly weapon, sexual assault and sexual battery, and breach of contract.

The lawsuit describes how the plaintiff’s Lyft driver allegedly kidnapped her at gunpoint and took her across state lines, where the driver and other men took turns raping her.

“Alison remembers the men cheering and high fiving each other as they continued to rape her,” the lawsuit alleges. “Their attack was so brutal that the next day Alison experienced severe vaginal pain and bleeding. Her body was so exhausted from the attack and resulting trauma that Alison could not even leave her bed or raise her arms.”

When the plaintiff reported the attack to Lyft, the lawsuit alleges, Lyft simply apologized for the “inconvenience” and gave her a partial refund for the ride. The plaintiff says she reported the crime to the police, who performed a rape kit that found evidence of semen from at least two men on the clothing she wore that night.

The New York Police Department then transferred the case to the FBI, according to the lawsuit. The lawsuit states the FBI is now investigating the incident as a human trafficking case. However, Lyft “has been wholly uncooperative” throughout the NYPD and FBI’s investigation, the lawsuit alleges.

The lawsuit seeks special damages, including economic restitution to cover past and future hospital expenses, as well as expenses relating to her profession and loss of earning capacity.

“By failing to take reasonable steps to confront the problem of multiple rapes and sexual assaults of LYFT passengers by LYFT drivers, LYFT has acted in conscious disregard of the safety of its passengers,” the lawsuit alleges.

This suit comes just weeks after fourteen women filed suit against Lyft alleging the company has not addressed complaints pertaining to sexual assault. Both suits recommended Lyft adopt new policies, such as adding a surveillance feature to the app that can record audio and video of all rides.

Meanwhile, Lyft recently announced new safety features, including trip check-ins if a ride seems to be taking longer than it should and in-app 911 calling.

“We’re committed to playing a significant role in connecting our communities with transportation, and we understand the responsibilities that come along with that,” Lyft co-founder and President John Zimmer wrote in a blog post. “We’ve known since the beginning that as part of our mission, we must heavily invest in safety. We continue to welcome accountability and partnership to best protect our rider and driver community.”

It’s no coincidence that Lyft announced these safety features in light of the lawsuit on behalf of those fourteen women. The company had previously taken some steps to address safety, but at a much slower pace than competitor Uber, which has also faced a number of sexual assault and abuse lawsuits. Between 2014 and 2018, CNN found 103 Uber drivers who had been accused of sexual assault or abuse of passengers.

Over the years, both companies have taken steps to ramp up their respective safety procedures. In April, Uber launched a campus safety initiative while Lyft implemented continuous background checks and enhanced its identity verification process for drivers. Uber, however, implemented continuous background checks about a full year before Lyft, and added an in-app 911 calling feature more than a year before Lyft.

“We don’t take lightly any instances where someone’s safety is compromised, especially in the rideshare industry, including the allegations of assault in the news last week,” Zimmer said earlier this month in that same blog post. “The reality is that certain populations carry a disproportionate burden simply trying to get to work or back home after a night out — in the U.S., one in six women will face some form of sexual violence in their lives. The onus is on all of us to learn from any incident, whether it occurs on our platform or not, and then work to help prevent them.”

TechCrunch is awaiting comment from Lyft regarding this lawsuit. We’ll update this story if we hear back.

AT&T faked DirecTV Now numbers, lawsuit alleges

AT&T faked the numbers for its DirecTV Now streaming service ahead of the company’s Time Warner merger, according to a lawsuit filed by investors, Bloomberg reported. The suit alleges the media giant pressured employees to boost DirecTV Now’s numbers by secretly adding the product to existing customers’ accounts. It also claims the company touted DirecTV Now’s user growth, when in reality, subscribers were leaving as their promotional periods ended and the service’s price hikes were limiting new sign-ups.

The suit says a variety of tactics were used to promote the idea that DirecTV Now was growing organically. For example, it claims that employees were taught, and encouraged, to convert the activation fees customers typically had to pay to upgrade their phones into DirecTV Now subscriptions. This involved the customer being told the fee was being “waived,” when instead the customer was charged anyway and the payment was applied to up to three DirecTV Now accounts using fake emails.

One former employee even said that around 40%-50% of customers he dealt with in early 2017 were complaining about being charged for DirecTV Now, which they had never signed up for. Other employees supported this account, the suit says, describing a directive that came down from upper management to the sales channel.

In addition, the suit speaks to overly aggressive sales quotas, high churn from deeply discounted promotions, technical issues, and unsustainable pricing. It noted how AT&T finally disclosed that by the end of 2018, none of the 500,000 heavily discounted DirecTV Now subscribers remained on the service, and subscriptions had dropped by 267,000 as a result. In April 2019, it reported another 83,000 subscribers had left the service, and in July, 168,000 had abandoned it.

But ahead of the Time Warner merger, AT&T touted the service’s success, the suit said. It didn’t disclose any of the risks associated with DirecTV Now, despite SEC obligations. The plaintiffs believe AT&T should have noted what made its stock risky, including the fact that DirecTV Now was not profitable, its growth had been dependent on aggressive promotions, and it faced severe technical challenges.

“By buying AT&T’s securities at these artificially inflated and artificially maintained prices, the Class members suffered economic losses, which losses were a direct and proximate result of Defendants’ fraudulent conduct,” the suit states.

“We plan to fight these baseless claims in court,” AT&T said in a statement to Bloomberg.

DirecTV Now had a rough start to begin with, having suffered heavily from glitches, including freezing, buffering, and more. While that can happen at first with new streaming services, AT&T’s glitches were bad enough that many wanted to cancel.

TechCrunch reported in 2017 how customers complained they weren’t able to get refunds from AT&T, even though they weren’t able to use the service as promised. Some had even filed complaints with the FCC, we found. In January, we also noted how the service’s price hikes and promotional packages ending led to a sizable loss of subscribers and that AT&T was “losing the cord cutters.”

The filing of the lawsuit comes at a time when AT&T has seen much upheaval. This month, activist investor Elliott Management Corp. disclosed its $3.2 billion stake in AT&T and criticized the company’s acquisition strategy. It also suggested that AT&T should sell some assets that don’t fit its future direction, like the DirecTV satellite service and its Mexican wireless business. AT&T CEO Randall Stephenson defended the company’s $85 billion acquisition of Time Warner today, in response to this criticism.

In addition, AT&T’s CEO of Communications, John Donovan, recently announced his retirement, with WarnerMedia CEO John Stankey being promoted to president and chief operating officer at AT&T.

The full complaint is below.