Digital banking startup Revolut raises $250M at a valuation of $1.7B

Revolut, the London-based fintech that offers a digital banking account and sprawling set of other financial services, is disclosing that it has raised a whopping $250 million in Series C funding, less than three years since launching.

The new round, which gives the company a $1.7 billion post-money valuation — a five-fold increase in under a year, we’re told — was led by Hong Kong-based DST Global, along with a group of new and existing investors that includes Index Ventures and Ribbit Capital. In case you aren’t keeping up, it brings the total amount raised by Revolut to $340 million in less than 36 months.

To put this into context, TransferWise — London’s undisputed fintech darling and on some features a direct competitor to Revolut — recently announced $280 million in Series D investment, giving the company a reported post-money valuation of $1.6 billion. The difference? It took TransferWise seven years compared to Revolut’s three.

That’s testament to how much value investors are now placing on bank-disrupting fintech, or perhaps a sign of a fintech bubble. Or both. It is also worth remembering that these are private valuations, with neither company having yet floated on the public markets, even if TransferWise looks increasingly like a candidate to do so.

Meanwhile, Revolut says the new round of funding and surge in valuation follows “incredible growth figures to date,” with the fintech now processing $1.8 billion through the platform each month and signing up between 6,000 and 8,000 new customers every day.

It claims nearly 2 million customers in total, of which 250,000 are daily active users, roughly 400,000 are weekly active users and 900,000 are monthly active users. The company says the target is 100 million customers in the next five years.

For a little more context, TransferWise has 3 million customers. I’m also told U.K. challenger bank Monzo now has 630,000 current account customers, of which 200,000 are daily active users, 360,000 are weekly active users and 500,000 are monthly active users. (In both Revolut and Monzo’s case, active users are defined as making at least one financial transaction.)

With the aim of persuading both consumers and businesses to ditch their traditional bank, Revolut offers most of the features you’d expect of a current account, including physical and virtual debit cards, direct debits and money transfer. Its “attack vector” (to borrow Monzo’s Tom Blomfield’s phrase) was originally low exchange fees when spending in a foreign currency, which undoubtedly fuelled much of the startup’s early growth and mindshare, but new features and products are being added at an increasingly fast pace.

Many of these are through partnerships with other fintech companies, and include travel insurance, phone insurance, credit, savings, and cryptocurrency. The latter looks like it’s riding the hype cycle almost perfectly. Revolut is also applying for a European banking license, which would enable it to begin balance sheet lending, too.

To that end, Revolut says the Series C funding will be used to go beyond Europe and expand worldwide, starting with the U.S., Canada, Singapore, Hong Kong, and Australia this year. The company also expects to increase its workforce from 350 to around 800 employees in 2018.

Allegro.AI nabs $11M for a platform that helps businesses build computer vision-based services

Artificial intelligence and the application of it across nearly every aspect of our lives is shaping up to be one of the major step changes of our modern society. Today, a startup that wants to help other companies capitalise on AI’s advances is announcing funding and emerging from stealth mode.

Allegro.AI, which has built a deep learning platform that companies can use to build and train computer-vision-based technologies — from self-driving car systems through to security, medical and any other services that require a system to read and parse visual data — is today announcing that it has raised $11 million in funding, as it prepares for a full-scale launch of its commercial services later this year after running pilots and working with early users in a closed beta.

The round may not be huge by today’s startup standards, but the presence of strategic investors speaks to the interest that the startup has sparked and the gap in the market for what it is offering. It includes MizMaa Ventures — a Chinese fund that is focused on investing in Israeli startups — along with participation from Robert Bosch Venture Capital GmbH (RBVC), Samsung Catalyst Fund and Israeli fund Dynamic Loop Capital. Other investors (the $11 million actually covers more than one round) are not being disclosed.

Nir Bar-Lev, the CEO and cofounder (Moses Guttmann, another cofounder, is the company’s CTO), started Allegro.AI first as Seematics in 2016 after he left Google, where he had worked in various senior roles for over 10 years. It was partly that experience that led him to the idea that with the rise of AI, there would be an opportunity for companies that could build a platform to help other less AI-savvy companies build AI-based products.

“We’re addressing a gap in the industry,” he said in an interview. Although there are a number of services, for example Rekognition from Amazon’s AWS, which allow a developer to ping a database by way of an API to provide analytics and some identification of a video or image, these are relatively basic and couldn’t be used to build and “teach” full-scale navigation systems, for example.
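For reference, here’s roughly what that kind of one-shot API call looks like, using Amazon Rekognition’s image-labeling endpoint via boto3 (a minimal sketch; the file name and region are placeholders, and configured AWS credentials are assumed). You send an image, you get labels and confidence scores back — useful analytics, but a long way from training a full navigation stack:

```python
# Minimal sketch of a basic, API-level vision service: Amazon
# Rekognition labeling a single image via boto3 (AWS's Python SDK).
# Assumes AWS credentials are configured; file name is a placeholder.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("street_scene.jpg", "rb") as f:
    response = client.detect_labels(Image={"Bytes": f.read()}, MaxLabels=10)

# Print each detected label with its confidence score.
for label in response["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')
```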

“An ecosystem doesn’t exist for anything deep-learning based.” Every company that wants to build something would have to invest 80-90 percent of their total R&D resources in infrastructure before getting to the many other aspects of building a product, he said, which might also include the hardware and applications themselves. “We’re providing this so that the companies don’t need to build it.”

Instead, the research scientists that will buy into the Allegro.AI platform — it’s not intended for non-technical users (not now at least) — can concentrate on overseeing projects and considering strategic applications and other aspects of the projects. He says that currently, its direct target customers are tech companies and others that rely heavily on tech, “but are not the Googles and Amazons of the world.”

Indeed, companies like Google, AWS, Microsoft, Apple and Facebook have all made major inroads into AI, and in one way or another each has a strong interest in enterprise services and may already be hosting a lot of data in their clouds. But Bar-Lev believes that companies ultimately will be wary to work with them on large-scale AI projects:

“A lot of the data that’s already on their cloud is data from before the AI revolution, before companies realized that the asset today is data,” he said. “If it’s there, it’s there and a lot of it is transactional and relational data.

“But what’s not there is all the signal-based data, all of the data coming from computer vision. That is not on these clouds. We haven’t spoken to a single automotive [company] who is sharing that with these cloud providers. They are not even sharing it with their OEMs. I’ve worked at Google, and I know how companies are afraid of them. These companies are terrified of tech companies like Amazon and so on eating them up, so if they can now stop and control their assets they will do that.”

Customers have the option of working with Allegro either as a cloud or on-premise product, or a combination of the two, and this brings up the third reason that Allegro believes it has a strong opportunity. The quantity of data that is collected for image-based neural networks is massive, and in some regards it’s not practical to rely on cloud systems to process that. Allegro’s emphasis is on building computing at the edge to work with the data more efficiently, which is one of the reasons investors were also interested.

“AI and machine learning will transform the way we interact with all the devices in our lives, by enabling them to process what they’re seeing in real time,” said David Goldschmidt, VP and MD at Samsung Catalyst Fund, in a statement. “By advancing deep learning at the edge, Allegro.AI will help companies in a diverse range of fields—from robotics to mobility—develop devices that are more intelligent, robust, and responsive to their environment. We’re particularly excited about this investment because, like Samsung, Allegro.AI is committed not just to developing this foundational technology, but also to building the open, collaborative ecosystem that is necessary to bring it to consumers in a meaningful way.”

Allegro.AI is not the first company with hopes of providing AI and deep learning as a service to the enterprise world: Element.AI out of Canada is another startup that is being built on the premise that most companies know they will need to consider how to use AI in their businesses, but lack the in-house expertise or budget (or both) to do that. Until the wider field matures and AI know-how becomes something anyone can buy off-the-shelf, it’s going to present an interesting opportunity for the likes of Allegro and others to step in.


WhatsApp raises minimum age to 16 in Europe ahead of GDPR

Tech giants are busy updating their T&Cs ahead of the EU’s incoming data protection framework, GDPR. Which is why, for instance, Facebook-owned Instagram is suddenly offering a data download tool. You can thank European lawmakers for being able to take your data off that platform.

Facebook-owned WhatsApp is also making a pretty big change as a result of GDPR — noting in its FAQs that it’s raising the minimum age for users of the messaging platform to 16 across the “European Region”. This includes both EU and non-EU countries (such as Switzerland), as well as the in-the-process-of-brexiting UK (which is set to leave the EU next year).

In the US, the minimum age for WhatsApp usage remains 13.

Where teens are concerned, GDPR introduces a new provision concerning children’s personal data — setting a 16-year-old age limit on kids being able to consent to their data being processed — although it does allow some wiggle room for individual countries to write a lower age limit into their laws, with a hard floor at 13 years old.

WhatsApp isn’t bothering to try to vary the age gate depending on the limits individual EU countries have set, though, presumably to reduce the complexity of complying with the new rules.

But also likely because it’s confident WhatsApp-loving teens won’t have any trouble circumventing the new minimum age limit, and that there’s therefore no real risk to its business.

Certainly it’s unclear whether WhatsApp and its parent Facebook will do anything at all to enforce the age limit — beyond asking users to state they are at least 16 (and taking them at their word). So in practice, while on paper the 16-years-old minimum seems like a big deal, the change may do very little to protect teens from being data-mined by the ad giant.

We’ve asked WhatsApp whether it will cross-check users’ accounts with Facebook accounts and data holdings to try to verify a teen really is 16, for example, but nothing in its FAQ on the topic suggests it plans to carry out any active enforcement at all — instead it merely notes:

  • Creating an account with false information is a violation of our Terms
  • Registering an account on behalf of someone who is underage is also a violation of our Terms

Ergo, that does sound very much like a buck being passed. And it will likely be up to parents to try to actively enforce the limit — by reporting their own underage WhatsApp-using kids to the company (which would then have to close the account). Clearly few parents would relish the prospect of doing that.

Yet Facebook does already share plenty of data between WhatsApp and its other companies for all sorts of self-serving, business-enhancing purposes — and even including, as it couches it, “to ensure safety and security”. So it’s hardly short of data to carry out some age checks of its own and proactively enforce the limit.

One curious difference is that Facebook’s approach to teen usage of WhatsApp is notably distinct from the one it’s taking with teens on its main social platform — also as it reworks the Facebook T&Cs ahead of GDPR.

Under the new terms there, Facebook users between the ages of 13 and 15 will need to get parental permission to be targeted with ads or share sensitive info on Facebook.

But again, as my TC colleague Josh Constine pointed out, the parental consent system Facebook has concocted is laughably easy for teens to circumvent — merely requiring they select one of their Facebook friends or just enter an email address (which could literally be an alternative email address they themselves control). That entirely unverified entity is then asked to give ‘consent’ for their ‘child’ to share sensitive info. So, basically, a total joke.

As we’ve said before, Facebook’s approach to GDPR ‘compliance’ is at best described as ‘doing the minimum possible’. And data protection experts say legal challenges are inevitable.

Also in Europe Facebook has previously been forced via regulatory intervention to give up one portion of the data sharing between its platforms — specifically for ad targeting purposes. However its WhatsApp T&Cs also suggest it is confident it will find a way to circumvent that in future, as it writes it “will only do so when we reach an understanding with the Irish Data Protection Commissioner on a future mechanism to enable such use” — i.e. when, not if.

Last month it also signed an undertaking with the DPC on this issue, related to GDPR compliance, so again it appears to have some kind of regulatory-workaround ‘mechanism’ in the works.

Kogan: ‘I don’t think Facebook has a developer policy that is valid’

A Cambridge University academic at the center of a data misuse scandal involving Facebook user data and political ad targeting faced questions from the UK parliament this morning.

The two-hour evidence session in front of the DCMS committee’s fake news enquiry raised rather more questions than it answered, though — with Professor Aleksandr Kogan citing an NDA he said he had signed with Facebook to decline to answer some of the committee’s questions (including why and when exactly the NDA was signed).

TechCrunch understands the NDA relates to standard confidentiality provisions regarding deletion certifications and other commitments made by Kogan to Facebook not to misuse user data — after the company learned he had passed user data to SCL in contravention of its developer terms.

Asked why he had a non-disclosure agreement with Facebook, Kogan told the committee it would have to ask Facebook. He also declined to say whether any of his company co-directors (one of whom now works for Facebook) had been asked to sign an NDA. Nor would he specify whether the NDA had been signed in the US.

Asked whether he had deleted all the Facebook data and derivatives he had been able to acquire Kogan said yes “to the best of his knowledge”, though he also said he’s currently conducting a review to make sure nothing has been overlooked.

A few times during the session Kogan made a point of arguing that data audits are essentially useless for catching bad actors — claiming that anyone who wants to misuse data can simply put a copy on a hard drive and “store it under the mattress”.

(Incidentally, the UK’s data protection watchdog is conducting just such an audit of Cambridge Analytica right now, after obtaining a warrant to enter its London offices last month — as part of an ongoing, year-long investigation into social media data being used for political ad targeting.)

Your company didn’t hide any data in that way, did it? a committee member asked Kogan. “We didn’t,” he rejoined.

“This has been a very painful experience because when I entered into all of this Facebook was a close ally. And I was thinking this would be helpful to my academic career. And my relationship with Facebook. It has, very clearly, done the complete opposite,” Kogan continued.  “I had no interest in becoming an enemy or being antagonized by one of the biggest companies in the world that could — even if it’s frivolous — sue me into oblivion. So we acted entirely as they requested.”

Despite apparently lamenting the breakdown in his relations with Facebook — telling the committee how he had worked with the company, in an academic capacity, prior to setting up a company to work with SCL/CA — Kogan refused to accept that he had broken Facebook’s terms of service, instead asserting: “I don’t think they have a developer policy that is valid… For you to break a policy it has to exist. And really be their policy. The reality is Facebook’s policy is unlikely to be their policy.”

“I just don’t believe that’s their policy,” he repeated when pressed on whether he had broken Facebook’s ToS. “If somebody has a document that isn’t their policy you can’t break something that isn’t really your policy. I would agree my actions were inconsistent with the language of this document — but that’s slightly different from what I think you’re asking.”

“You should be a professor of semantics,” quipped the committee member who had been asking the questions.

A Facebook spokesperson told us it had no public comment to make on Kogan’s testimony. But last month CEO Mark Zuckerberg couched the academic’s actions as a “breach of trust” — describing the behavior of his app as “abusive”.

In evidence to the committee today, Kogan told it he had only become aware of an “inconsistency” between Facebook’s developer terms of service and what his company did in March 2015 — when he said he began to suspect the veracity of the advice he had received from SCL. At that point Kogan said GSR reached out to an IP lawyer “and got some guidance”.

(More specifically, he said he became suspicious because former SCL employee Chris Wylie did not honor a contract between GSR and Eunoia, a company Wylie set up after leaving SCL, to exchange data-sets; Kogan said GSR gave Wylie the full raw Facebook data-set but Wylie did not provide any data to GSR.)

“Up to that point I don’t believe I was even aware or looked at the developer policy. Because prior to that point — and I know that seems shocking and surprising… the experience of a developer in Facebook is very much like the experience of a user in Facebook. When you sign up there’s this small print that’s easy to miss,” he claimed.

“When I made my app initially I was just an academic researcher. There was no company involved yet. And then when we commercialized it — so we changed the app — it was just something I completely missed. I didn’t have any legal resources, I relied on SCL [to provide me with guidance on what was appropriate]. That was my mistake.”

“Why I think this is still not Facebook’s policy is that we were advised [by an IP lawyer] that Facebook’s terms for users and developers are inconsistent. And that it’s not actually a defensible position for Facebook that this is their policy,” Kogan continued. “This is the remarkable thing about the experience of an app developer on Facebook. You can change the name, you can change the description, you can change the terms of service — and you just save changes. There’s no obvious review process.

“We had a terms of service linked to the Facebook platform that said we could transfer and sell data for at least a year and a half — nothing was ever mentioned. It was only in the wake of the Guardian article [in December 2015] that they came knocking.”

Kogan also described the work he and his company had done for SCL Elections as essentially worthless — arguing that using psychometrically modeled Facebook data for political ad targeting in the way SCL/CA had apparently sought to do was “incompetent” because they could have used Facebook’s own ad targeting platform to achieve greater reach and with more granular targeting.

“It’s all about the use-case. I was very surprised to learn that what they wanted to do is run Facebook ads,” he said. “This was not mentioned, they just wanted a way to measure personality for many people. But if the use-case you have is Facebook ads it’s just incompetent to do it this way.

“Taking this data-set you’re going to be able to target 15% of the population. And use a very small segment of the Facebook data — page likes — to try to build personality models. Why do this when you could very easily go target 100% and use much more of the data. It just doesn’t make sense.”

Asked what, then, was the value of the project he undertook for SCL, Kogan responded: “Given what we know now, nothing. Literally nothing.”

He repeated his prior claim that he was not aware that work he was providing for SCL Elections would be used for targeting political ads, though he confirmed he knew the project was focused on the US and related to elections.

He also said he knew the work was being done for the Republican party — but claimed not to know which specific candidates were involved.

Pressed by one committee member on why he didn’t care to know which politicians he was indirectly working for, Kogan responded by saying he doesn’t have strong personal views on US politics or politicians generally — beyond believing that most US politicians are at least reasonable in their policy positions.

“My personal position on life is unless I have a lot of evidence I don’t know. Is the answer. It’s a good lesson to learn from science — where typically we just don’t know. In terms of politics in particular I rarely have a strong position on a candidate,” said Kogan, adding that therefore he “didn’t bother” to make the effort to find out who would ultimately be the beneficiary of his psychometric modeling.

Kogan told the committee his initial intention had not been to set up a business at all but to conduct not-for-profit big data research — via an institute he wanted to establish — claiming it was Wylie who had advised him to also set up the for-profit entity, GSR, through which he went on to engage with SCL Elections/CA.

“The initial plan was we collect the data, I fulfill my obligations to SCL, and then I would go and use the data for research,” he said.

And while Kogan maintained he had never drawn a salary from the work he did for SCL — saying his reward was “to keep the data”, and get to use it for academic research — he confirmed SCL did pay GSR £230,000 at one point during the project; a portion of which he also said eventually went to pay lawyers he engaged “in the wake” of Facebook becoming aware that data had been passed to SCL/CA by Kogan — when it contacted him to ask him to delete the data (and presumably also to get him to sign the NDA).

In one curious moment, Kogan claimed not to know his own company had been registered at 29 Harley Street in London — which the committee noted is “used by a lot of shell companies some of which have been used for money laundering by Russian oligarchs”.

Seeming a little flustered he said initially he had registered the company at his apartment in Cambridge, and later “I think we moved it to an innovation center in Cambridge and then later Manchester”.

“I’m actually surprised. I’m totally surprised by this,” he added.

Did you use an agent to set it up, asked one committee member. “We used Formations House,” replied Kogan, referring to a company whose website states it can locate a business’ trading address “in the heart of central London” — in exchange for a small fee.

“I’m legitimately surprised by that,” added Kogan of the Harley Street address. “I’m unfortunately not a Russian oligarch.”

Later in the session another odd moment came when he was being asked about his relationship with Saint Petersburg University in Russia — where he confirmed he had given talks and workshops, after traveling to the country with friends and proactively getting in touch with the university “to say hi” — and specifically about some Russian government-funded research being conducted by researchers there into cyberbullying.

Committee chair Damian Collins implied to Kogan that the Russian state could have had a specific malicious interest in such a piece of research, and wondered whether Kogan had thought about that in relation to the interactions he’d had with the university and the researchers.

Kogan described it as a “big leap” to connect the piece of research to Kremlin efforts to use online platforms to interfere in foreign elections — before essentially going on to repeat a Kremlin talking point by saying the US and the UK engage in much the same types of behavior.

“You can make the same argument about the UK government funding anything or the US government funding anything,” he told the committee. “Both countries are very famous for their spies.

“There’s a long history of the US interfering with foreign elections and doing the exact same thing [creating bot networks and using trolls for online intimidation].”

“Are you saying it’s equivalent?” pressed Collins. “That the work of the Russian government is equivalent to the US government and you couldn’t really distinguish between the two?”

“In general I would say the governments that are most high profile I am dubious about the moral scruples of their activities through the long history of UK, US and Russia,” responded Kogan. “Trying to equate them I think is a bit of a silly process. But I think certainly all these countries have engaged in activities that people feel uncomfortable with or are covert. And then to try to link academic work that’s basic science to that — if you’re going to [go] down the Russia line I think we have to go down the UK line and the US line in the same way.

“I understand Russia is a hot-button topic right now but outside of that… Most people in Russia are like most people in the UK. They’re not involved in spycraft, they’re just living lives.”

“I’m not aware of UK government agencies that have been interfering in foreign elections,” added Collins.

“Doesn’t mean it’s not happened,” replied Kogan. “Could be just better at it.”

During Wylie’s evidence to the committee last month the former SCL data scientist had implied there could have been a risk of the Facebook data falling into the hands of the Russian state as a result of Kogan’s back and forth travel to the region. But Kogan rebutted this idea — saying the data had never been in his physical possession when he traveled to Russia, pointing out it was stored in a cloud hosting service in the US.

“If you want to try to hack Amazon Web Services good luck,” he added.

He also claimed not to have read the piece of research in question, even though he said he thought the researcher had emailed the paper to him — claiming he can’t read Russian well.

Kogan seemed most comfortable during the session when he was laying into Facebook’s platform policies — perhaps unsurprisingly, given how the company has sought to paint him as a rogue actor who abused its systems by creating an app that harvested data on up to 87 million Facebook users and then handed information on those users off to third parties.

Asked whether he thought a prior answer given to the committee by Facebook — when it claimed it had not provided any user data to third parties — was correct, Kogan said no, given the company provides academics with “macro level” user data (including providing him with this type of data, in 2013).

He was also asked why he thinks Facebook lets its employees collaborate with external researchers — and Kogan suggested this is “tolerated” by management as a strategy to keep employees stimulated.

Committee chair Collins asked whether he thought it was odd that Facebook now employs his former co-director at GSR, Joseph Chancellor — who works in its research division — despite Chancellor having worked for a company Facebook has said it regards as having violated its platform policies.

“Honestly I don’t think it’s odd,” said Kogan. “The reason I don’t think it’s odd is because in my view Facebook’s comments are PR crisis mode. I don’t believe they actually think these things — because I think they realize that their platform has been mined, left and right, by thousands of others.

“And I was just the unlucky person that ended up somehow linked to the Trump campaign. And we are where we are. I think they realize all this but PR is PR and they were trying to manage the crisis and it’s convenient to point the finger at a single entity and try to paint the picture this is a rogue agent.”

At another moment during the evidence session Kogan was also asked to respond to denials previously given to the committee by former CEO of Cambridge Analytica Alexander Nix — who had claimed that none of the data it used came from GSR and — even more specifically — that GSR had never supplied it with “data-sets or information”.

“Fabrication,” responded Kogan. “Total fabrication.”

“We certainly gave them [SCL/CA] data. That’s indisputable,” he added.

In written testimony to the committee he also explained that he in fact created three apps for gathering Facebook user data. The first one — called the CPW Lab app — was developed after he had begun a collaboration with Facebook in early 2013, as part of his academic studies. Kogan says Facebook provided him with user data at this time for his research — although he said these datasets were “macro-level datasets on friendship connections and emoticon usage” rather than information on individual users.

The CPW Lab app was used to gather individual-level data to supplement those datasets, according to Kogan’s account. He specifies that data collected via this app was housed at the university, used for academic purposes only, and “not provided to the SCL Group”.

Later, once Kogan had set up GSR and was intending to work on gathering and modeling data for SCL/Cambridge Analytica, the CPW Lab app was renamed to the GSR App and its terms were changed (with the new terms provided by Wylie).

Thousands of people were then recruited to take this survey via a third company — Qualtrics — with Kogan saying SCL directly paid it ~$800,000 to recruit survey participants, at a cost of around $3-$4 per head (he says between 200,000 and 300,000 people took the survey as a result in the summer of 2014; NB: Facebook doesn’t appear to be able to break out separate downloads for the different apps Kogan ran on its platform — it told us about 305,000 people downloaded “the app”).
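Those figures at least hang together: a quick back-of-the-envelope check (using only the numbers cited above; illustrative only) puts the implied participant count squarely in the range Kogan gives.

```python
# Sanity-check of the recruitment figures cited above (illustrative).
budget = 800_000        # approximate amount Kogan says SCL paid Qualtrics, in $
cost_per_head = (3, 4)  # quoted cost per survey participant, in $

high = budget // cost_per_head[0]  # cheapest rate -> most participants
low = budget // cost_per_head[1]   # priciest rate -> fewest participants

print(f"implied participants: {low:,} to {high:,}")
# implied participants: 200,000 to 266,666
# -> consistent with the 200,000-300,000 range Kogan cites
```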

In the final part of that year, after data collection had finished for SCL, Kogan said his company revised the GSR App to become an interactive personality quiz — renaming it “thisisyourdigitallife” and leaving the commercial portions of the terms intact.

“The thisisyourdigitallife App was used by only a few hundred individuals and, like the two prior iterations of the application, collected demographic information and data about “likes” for survey participants and their friends whose Facebook privacy settings gave participants access to “likes” and demographic information. Data collected by the thisisyourdigitallife App was not provided to SCL,” he claims in the written testimony.

During the oral hearing, Kogan was pressed on misleading T&Cs in his two commercial apps. Asked by a committee member about the terms of the GSR App not specifying that the data would be used for political targeting, he said he didn’t write the terms himself but added: “If we had to do it again I think I would have insisted to Mr Wylie that we do add politics as a use-case in that doc.”

“It’s misleading,” argued the committee member. “It’s a misrepresentation.”

“I think it’s broad,” Kogan responded. “I think it’s not specific enough. So you’re asking for why didn’t we go outline specific use-cases — because the politics is a specific use-case. I would argue that the politics does fall under there but it’s a specific use-case. I think we should have.”

The committee member also noted how, “in longer, denser paragraphs” within the app’s T&Cs, the legalese does also state that “whatever that primary purpose is you can sell this data for any purposes whatsoever” — making the point that such sweeping terms are unfair.

“Yes,” responded Kogan. “In terms of speaking the truth, the reality is — as you’ve pointed out — very few if any people have read this, just like very few if any people read terms of service. I think that’s a major flaw we have right now. That people just do not read these things. And these things are written this way.”

“Look — fundamentally I made a mistake by not being critical about this. And trusting the advice of another company [SCL]. As you pointed out GSR is my company and I should have gotten better advice, and better guidance on what is and isn’t appropriate,” he added.

“Quite frankly my understanding was this was business as usual and normal practice for companies to write broad terms of service that didn’t provide specific examples,” he said after being pressed on the point again.

“I doubt in Facebook’s user policy it says that users can be advertised for political purposes — it just has broad language to provide for whatever use cases they want. I agree with you this doesn’t seem right, and those changes need to be made.”

At another point, he was asked about the Cambridge University Psychometrics Centre — which he said had initially been involved in discussions between him and SCL to be part of the project but fell out of the arrangement. According to his version of events the Centre had asked for £500,000 for their piece of proposed work, and specifically for modeling the data — which he said SCL didn’t want to pay. So SCL had asked him to take that work on too and remove the Centre from the negotiations.

As a result of that, Kogan said the Centre had complained about him to the university — and SCL had written a letter to it on his behalf defending his actions.

“The mistake the Psychometrics Centre made in the negotiation is that they believed that models are useful, rather than data,” he said. “And actually just not the same. Data’s far more valuable than models because if you have the data it’s very easy to build models — because models use just a few well understood statistical techniques to make them. I was able to go from not doing machine learning to knowing what I need to know in one week. That’s all it took.”
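To make that point concrete, here is a minimal sketch of the kind of well-understood technique he is describing: fitting a regularized regression that predicts a survey-derived trait score from a binary user-by-page likes matrix. This runs on synthetic data and is purely illustrative, not Kogan’s actual pipeline.

```python
# Illustrative only: predicting a personality trait from a binary
# user-by-page "likes" matrix with ridge regression. All data here
# is synthetic; this is not Kogan's actual pipeline.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_users, n_pages = 1000, 500
likes = rng.integers(0, 2, size=(n_users, n_pages))  # 1 = user likes page

# Pretend a survey produced an "openness" score per user; here we
# synthesize it from a small number of informative pages plus noise.
true_weights = rng.normal(size=n_pages) * (rng.random(n_pages) < 0.05)
openness = likes @ true_weights + rng.normal(scale=0.5, size=n_users)

X_train, X_test, y_train, y_test = train_test_split(
    likes, openness, random_state=0)

model = Ridge(alpha=10.0).fit(X_train, y_train)
print(f"held-out R^2: {model.score(X_test, y_test):.2f}")
```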

In another exchange during the session, Kogan denied he had been in contact with Facebook in 2014. Wylie previously told the committee he thought Kogan had run into problems with the rate at which the GSR App was able to pull data off Facebook’s platform — and had contacted engineers at the company at the time (though Wylie also caveated his evidence by saying he did not know whether what he’d been told was true).

“This never happened,” said Kogan, adding that there was no dialogue between him and Facebook at that time. “I don’t know any engineers at Facebook.”

Uber to stop storing precise location pick-ups/drop-offs in driver logs

Uber is planning to tweak the historical pick-up and drop-off logs that drivers can see in order to slightly obscure the exact location, rather than planting an exact pin in it (as now). The idea is to provide a modicum more privacy for users while still providing drivers with what look set to remain highly detailed trip logs.

The company told Gizmodo it will initially pilot the change with drivers, but intends the privacy-focused feature to become the default setting “in the coming months”.

Earlier this month Uber also announced a complete redesign of the drivers’ app — making changes it said had been informed by “months” of driver conversations and feedback. It says the pilot of location obfuscation will begin once all drivers have the new app.

The ride-hailing giant appears to be trying to find a compromise between rider safety concerns — there have been reports of Uber drivers stalking riders, for example — and drivers wanting to have precise logs so they can challenge fare disputes.

“Location data is our most sensitive information, and we are doing everything we can do to protect privacy around it,” a spokesperson told us. “The new design provides enough information for drivers to identify past trips for customer support issues or earning disputes without granting them ongoing access to rider addresses.”

In the current version of the pilot — according to screenshots obtained by Gizmodo — the location of the pin has been expanded into a circle, so it’s indicating a shaded area a few meters around a pick-up or drop-off location.
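For a sense of how this kind of coarsening can work, here is a minimal sketch (purely illustrative; Uber hasn’t published its method) that snaps a precise coordinate to the centre of a small grid cell, so every point inside the cell reports the same location:

```python
# Illustrative only; not Uber's implementation. One simple way to
# obscure a precise point is to snap it to a coarse grid, so every
# location within a cell reports the same centre point.
import math

def coarsen(lat, lng, cell_deg=0.0001):
    """Snap a coordinate to the centre of a grid cell.

    0.0001 degrees of latitude is roughly 11 metres; longitude cells
    shrink toward the poles, which is acceptable for obfuscation.
    """
    def snap(v):
        return (math.floor(v / cell_deg) + 0.5) * cell_deg
    return snap(lat), snap(lng)

# Nearby points inside the same cell yield an identical, slightly
# offset location:
print(coarsen(51.5007292, -0.1246254))
print(coarsen(51.5007999, -0.1246001))  # same cell -> same output
```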

According to Uber, the design may still change, as it says it intends to gather driver feedback. We’ve asked if it’s also intending to gather rider feedback on the design.

Asked whether it’s making the change as part of an FTC settlement last year — which followed an investigation into data mishandling, privacy and security complaints dating back to 2014 and 2015 — an Uber spokesman told us: “Not specifically, but user expectations are shifting and we are working to build privacy into the DNA of our products.”

Earlier this month the company agreed to a revised settlement with the FTC, including agreeing that it may be subject to civil penalties if it fails to notify the FTC of future privacy breaches — likely in light of the 2016 data breach affecting 57 million riders and drivers which the company concealed until 2017.

An incoming update to European privacy rules (called GDPR) — which beefs up fines for violations and applies extraterritorially (including, for example, if an EU citizen is using the Uber app on a trip to the U.S.) — also tightens the screw on data protection, giving individuals expanded rights to control their personal information held by a company.

A precise location log would likely be considered personal data that Uber would have to provide to any users requesting their information under GDPR, for example.

It’s less clear, though, whether the relatively small amount of obfuscation it’s toying with here would be enough to ensure the location logs are no longer judged as riders’ personal data under the regulation.

Last year the company also ended a controversial feature in which its app had tracked the location of users even after their trip had ended.

Facebook hit with defamation lawsuit over fake ads

In an interesting twist, Facebook is being sued in the UK for defamation by consumer advice personality, Martin Lewis, who says his face and name have been repeatedly used on fake adverts distributed on the social media giant’s platform.

Lewis, who founded the popular MoneySavingExpert.com tips website, says Facebook has failed to stop the fake ads despite repeat complaints and action on his part, thereby — he contends — tarnishing his reputation and causing victims to be lured into costly scams.

“It is consistent, it is repeated. Other companies such as Outbrain who have run these adverts have taken them down. What is particularly pernicious about Facebook is that it says the onus is on me, so I have spent time and effort and stress repeatedly to have them taken down,” Lewis told The Guardian.

“It is facilitating scams on a constant basis in a morally repugnant way. If Mark Zuckerberg wants to be the champion of moral causes, then he needs to stop its company doing this.”

In a blog post Lewis also argues it should not be difficult for Facebook — “a leader in face and text recognition” — to prevent scammers from misappropriating his image.

“I don’t do adverts. I’ve told Facebook that. Any ad with my picture or name in is without my permission. I’ve asked it not to publish them, or at least to check their legitimacy with me before publishing. This shouldn’t be difficult,” he writes. “Yet it simply continues to repeatedly publish these adverts and then relies on me to report them, once the damage has been done.”

“Enough is enough. I’ve been fighting for over a year to stop Facebook letting scammers use my name and face to rip off vulnerable people – yet it continues. I feel sick each time I hear of another victim being conned because of trust they wrongly thought they were placing in me. One lady had over £100,000 taken from her,” he adds.

Some of the fake ads appear to be related to cryptocurrency scams — linking through to fake news articles promising “revolutionary Bitcoin home-based opportunity”.

So the scammers look to be using the same playbook as the Macedonian teens who, in 2016, concocted fake news stories about US politics to generate a mint in ad clicks — also relying on Facebook’s platform to distribute their fakes and scale the scam.

In January Facebook revised its ads policy to specifically ban cryptocurrency, binary options and initial coin offerings. But as Lewis’ samples show, the scammers are circumventing this prohibition with ease — using Lewis’ image to drive unwitting clicks to a secondary offsite layer of fake news articles that directly push people towards crypto scams.

It would appear that Facebook does nothing to verify the sites to which ads on its platform are directing its users, just as it does not appear to proactively police whether ad creative is legal — at least unless nudity is involved.

One sample fake ad highlighted by Lewis links through to a fake news article touting a “revolutionary” Bitcoin opportunity — in a news article style mocked up to look like the Daily Mirror newspaper.

The lawsuit is a personal action by Lewis, who is seeking exemplary damages in the High Court. He says he’s not looking to profit himself — saying he would donate any winnings to charities that aim to combat fraud. Rather, he says he’s taking the action in the hopes the publicity will spotlight the problem and force Facebook to stamp out fake ads.

In a statement, Mark Lewis of the law firm Seddons, which Lewis has engaged for the action, said: “Facebook is not above the law – it cannot hide outside the UK and think that it is untouchable.  Exemplary damages are being sought. This means we will ask the court to ensure they are substantial enough that Facebook can’t simply see paying out damages as just the ‘cost of business’ and carry on regardless. It needs to be shown that the price of causing misery is very high.”

In a response statement to the suit, a Facebook spokesperson told us: “We do not allow adverts which are misleading or false on Facebook and have explained to Martin Lewis that he should report any adverts that infringe his rights and they will be removed. We are in direct contact with his team, offering to help and promptly investigating their requests, and only last week confirmed that several adverts and accounts that violated our Advertising Policies had been taken down.”

Facebook’s ad guidelines do indeed prohibit ads that contain “deceptive, false, or misleading content, including deceptive claims, offers, or business practices” — and, as noted above, they also specifically prohibit cryptocurrency-related ads.

But, as is increasingly evident where big tech platforms are concerned, meaningful enforcement of existing policies is what’s sorely lacking.

The social behemoth claims to have invested significant resources in its ad review program — which includes both automated and manual review of ads. Though it also relies on users reporting problem content, thereby shifting the burden of actively policing content its systems are algorithmically distributing and monetizing (at massive scale) onto individual users (who are, by the by, not being paid for all this content review labor… hmmm… ).

In Lewis’ case the burden is clearly also highly personal, given the fake ads are not just dodgy content but are directly misappropriating his image and name in an attempt to sell a scam.

“On a personal note, as well as the huge amount of time, stress and effort it takes to continually combat these scams, this whole episode has been extremely depressing – to see my reputation besmirched by such a big company, out of an unending greed to keep raking in its ad cash,” he also writes.

The sheer scale of Facebook’s platform — which now has more than 2BN active users globally — contrasts awkwardly with the far smaller number of people the company employs for content moderation tasks.

And unsurprisingly, given that huge discrepancy, Facebook has been facing increasing pressure over various types of problem content in recent years — from Kremlin propaganda to hate speech in Myanmar.

Last year it told US lawmakers it would be increasing the number of staff working on safety and security issues from 10,000 to 20,000 by the end of this year. Which is still a tiny drop in the ocean of content distributed daily on its platform. We’ve asked how many people work in Facebook’s ad review team specifically and will update this post with any response.

Given the sheer scale of content continuously generated by a 2BN+ user-base, combined with a platform structure that typically allows for instant uploads, a truly robust enforcement of Facebook’s own policies is going to require legislative intervention.

And, in the meanwhile, Facebook operating a policy that’s essentially unenforceable risks looking intentional — given how much profit the company continues to generate by being able to claim it’s just a platform, rather than be ruled like a publisher.

Google confirms some of its own services are now getting blocked in Russia over the Telegram ban

A shower of paper airplanes darted through the skies of Moscow and other towns in Russia today, as users answered the call of entrepreneur Pavel Durov to send the blank missives out of their windows at a pre-appointed time in support of Telegram, the messaging app he founded, which uses a paper airplane icon and was blocked last week by Russian regulator Roskomnadzor (RKN). RKN believes the service is violating national laws by failing to provide it with encryption keys to access messages on the service (Telegram has refused to comply).

The paper plane send-off was a small, flashmob turn in a “Digital Resistance” — Durov’s preferred term — that has otherwise largely played out online: currently, nearly 18 million IP addresses are blocked from being accessed in Russia, all in the name of blocking Telegram.

And in the latest development, Google has confirmed to us that its own services are now also being impacted. From what we understand, Google Search, Gmail and push notifications for Android apps are among the products being affected.

“We are aware of reports that some users in Russia are unable to access some Google products, and are investigating those reports,” said a Google spokesperson in an emailed response. We’d been trying to contact Google all week about the Telegram blockade, and this is the first time that the company has both replied and acknowledged something related to it.

(Amazon has acknowledged our messages but has yet to reply to them.)

Google’s comments come on the heels of RKN itself also announcing today that it had expanded its IP blocks to Google’s services. At its peak, RKN had blocked nearly 19 million IP addresses, with dozens of third-party services that also use Google Cloud and Amazon’s AWS, such as Twitch and Spotify, also getting caught in the crossfire.

Russia is among the countries in the world that have enforced a kind of digital firewall, periodically or permanently blocking certain online content. Some turn to VPNs to access that content anyway, but it turns out that Telegram hasn’t needed to rely on that workaround to get used.

“RKN is embarrassingly bad at blocking Telegram, so most people keep using it without any intermediaries,” said Ilya Andreev, COO and co-founder of Vee Security, which has been providing a proxy service to bypass the ban. Currently, it is supporting up to 2 million users simultaneously, although this is a relatively small proportion considering Telegram has around 14 million users in the country (and, likely, more considering all the free publicity it’s been getting).

As we described earlier this week, the reason so many IP addresses are getting blocked is because Telegram has been using a technique that allows it to “hop” to a new IP address when the one that it’s using is blocked from getting accessed by RKN. It’s a technique that a much smaller app, Zello, had also resorted to using for nearly a year when the RKN announced its own ban.
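In pseudocode terms the technique is simple. Here is a minimal, purely illustrative sketch of a client falling over to the next address when one is blocked; the endpoints are invented, and a real implementation would distribute fresh addresses to clients dynamically rather than hard-coding a list:

```python
# Illustrative only: a toy version of the "IP hopping" idea described
# above. Keep a pool of candidate endpoints and fall over to the next
# one when the current endpoint stops being reachable. Hostnames and
# ports here are made up.
import socket
from typing import Optional

ENDPOINTS = [
    ("endpoint-a.example.com", 443),
    ("endpoint-b.example.com", 443),
    ("endpoint-c.example.com", 443),
]

def connect_with_hopping(timeout: float = 3.0) -> Optional[socket.socket]:
    """Try each endpoint in turn, returning the first live connection."""
    for host, port in ENDPOINTS:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError:
            continue  # blocked or unreachable: hop to the next endpoint
    return None  # every endpoint in the pool is currently blocked
```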

Zello ceased its activities earlier this year when RKN got wise to Zello’s ways and chose to start blocking entire subnetworks of IP addresses to avoid so many hops, and Amazon’s AWS and Google Cloud kindly asked Zello to stop as other services also started to get blocked. So, when Telegram started the same kind of hopping, RKN, in effect, knew just what to do to turn the screws. (And it also took the heat off Zello, which miraculously got restored.)

So far, Telegram’s cloud partners have held strong and have not taken the same route, although getting its own services blocked could see Google’s resolve tested at a new level.

Some believe that one outcome could be the regulator playing out an elaborate game of chicken with Telegram and the rest of the internet companies that are in some way aiding and abetting it, spurred in part by Russia’s larger profile and how such blocks would appear to international audiences.

“Russia can’t keep blocking random things on the Internet,” Andreev said. “Russia is working hard to make its image more alluring to foreigners in preparation for the World Cup,” which is taking place this June and July. “They can’t have tourists coming and realising Google doesn’t work in Russia.”

We’ll update this post and continue to write on further developments as we learn more.

German Supreme Court dismisses Axel Springer lawsuit, says ad blocking is legal

Germany’s Supreme Court dismissed a lawsuit yesterday from Axel Springer against Eyeo, the company behind AdBlock Plus.

The European publishing giant (which acquired Business Insider in 2015) argued that ad blocking, as well as the business model where advertisers pay to be added to a white list so that their ads are let through, violated Germany’s competition law. Axel Springer won a partial victory in 2016, when a lower court ruled that it shouldn’t have to pay for white listing.

However, the Supreme Court has now overturned that decision. In the process, it declared that ad-blocking and Eyeo’s white list are both legal. (German speakers can read the court’s press release.)

After the ruling, Eyeo sent me the following statement from Ben Williams, its head of operations and communications:

Today, we are extremely pleased with the ruling from Germany’s Supreme Court in favor of Adblock Plus/eyeo and against the German media publishing company Axel Springer. This ruling confirms — just as the regional courts in Munich and Hamburg stated previously — that people have the right in Germany to block ads. This case had already been tried in the Cologne Regional Court, then in the Regional Court of Appeals, also in Cologne — with similar results. It also confirms that Adblock Plus can use a whitelist to allow certain acceptable ads through. Today’s Supreme Court decision puts an end to Axel Springer’s claim that they be treated differently for the whitelisting portion of Adblock Plus’ business model.

Axel Springer, meanwhile, described ad blocking as “an attack on the heart of the free media” and said it would appeal to the country’s Constitutional Court.

Facebook has auto-enrolled users into a facial recognition test in Europe

Facebook users in Europe are reporting the company has begun testing its controversial facial recognition technology in the region.

Jimmy Nsubuga, a journalist at Metro, is among several European Facebook users who have said they’ve been notified by the company they are in its test bucket.

The company has previously said an opt-in option for facial recognition will be pushed out to all European users next month. It’s hoping to convince Europeans to voluntarily allow it to expand its use of the controversial, privacy-hostile tech — which was turned off in the bloc after regulatory pressure, back in 2012.

Under impending changes to Facebook’s T&Cs — ostensibly to comply with the EU’s incoming GDPR data protection standard — the company has crafted a manipulative consent flow that tries to sell people on giving it their data; including filling in facial recognition blanks in its business by convincing Europeans to agree to it grabbing and using their biometric data. 

Notably Facebook is not offering a voluntary opt-in to Europeans who find themselves in its facial recognition test bucket. Rather people are automatically being made into its lab rats — and have to actively delve into the settings to say no.

In a notification to affected users, the company writes [emphasis ours]: “You control face recognition. This setting is on, but you can turn it off at any time, which applies to features we may add later.”

Not only is the tech turned on, but users who click through to the settings to try and turn it off will also find Facebook attempting to dissuade them from doing that — with manipulative examples of how the tech can “protect” them.

As another Facebook user who found herself enrolled in the test — journalist Jennifer Baker — points out, what it’s doing here is incredibly disingenuous because it’s using fear to try to manipulate people’s choices.

Under the EU’s incoming data protection framework Facebook will not be able to automatically opt users into the tech — it will have to convince people to switch facial recognition features on.

But the experiment it’s running here (without gaining individuals’ upfront consent) looks very much like a form of A/B testing — to see which of its disingenuous examples is best able to convince users to accept what is a highly privacy-hostile technology by voluntarily switching it on.
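Mechanically, this kind of test is trivial to run at scale: users are deterministically bucketed into variants, and conversion (here, switching the feature on) is compared across buckets. A minimal, purely illustrative sketch (the variant names are invented; Facebook hasn’t described its setup):

```python
# Illustrative only: deterministic A/B bucketing of the kind described
# above. Hashing a stable user ID means each user consistently sees
# the same variant. Variant names are made up.
import hashlib

VARIANTS = ["safety_example_a", "safety_example_b", "control"]

def assign_variant(user_id: str) -> str:
    """Deterministically map a user to one of the test variants."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

print(assign_variant("user-12345"))  # same user always gets same variant
```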

But given that Facebook controls the entire consent flow, and can rely on big data insights gleaned from its own platform (of 2BN+ users), this is not even remotely a fair fight.

Consent is being manipulated, not freely given. This is big data-powered mass manipulation of human decisions — iterated until the ‘right’ answer (for Facebook’s business) is ‘selected’ by the user.

Data protection experts we spoke to earlier this week do not believe Facebook’s approach to consent will be legal under GDPR. Legal challenges are certain at this point.

But legal challenges also take time. And in the meanwhile Facebook users will be manipulated into agreeing with things that align with the company’s data-harvesting interests — and handing over their sensitive personal information without understanding the full implications.

It’s also not clear how many Facebook users are being auto-enrolled into this facial recognition test — we’ve put questions to it and will update this post with any reply.

Last month Facebook said it would be rolling out “a limited test of some of the additional choices we’ll ask people to make as part of GDPR”.

It also said it was “starting by asking only a small percentage of people so that we can be sure everything is working properly”, and further claimed: “[T]he changes we’re testing will let people choose whether to enable facial recognition, which has previously been unavailable in the EU.”

Facebook’s wording in those statements is very interesting — with no number put on how many people will be made into test subjects (though it is very clearly trying to play the experiment down; “limited test”, “small”) — so we simply don’t know how many Europeans are having their facial data processed by Facebook right now, without their upfront consent.

Nor do we know where in Europe all these test subjects are located. But it’s pretty likely the test contravenes even current EU data protection laws. (GDPR applies from May 25.)

Facebook’s description of its testing plan last month was also disingenuous as it implied users would get to choose to enable facial recognition. In fact, it’s just switching it on — saddling test subjects with the effort of opting out.

The company was likely hoping the test would not attract too much attention, given how much GDPR news is flowing through its PR channels and how much attention the topic is generally sucking up. And we can see why: it has essentially reversed its 2012 decision to switch off facial recognition in Europe (made after the feature attracted so much blow-back), to grab as much data as it can while it can.

Millions of Europeans could be having their fundamental rights trampled on here, yet again. We just don’t know what the company actually means by “small”. (The EU has ~500M inhabitants — even 1%, a “small percentage”, of that would involve millions of people… )

Once again Facebook isn’t telling how many people it’s experimenting on.

Eventbrite acquires Spanish ticketing platform Ticketea

Eventbrite has been shopping again in Europe — announcing today that it’s picked up Spanish ticketing firm, Ticketea. Terms of the deal have not been disclosed.

The Madrid-based events discovery and ticketing platform lets people find and book tickets for a variety of live experiences — including festivals, concerts and performing arts shows. It focuses on Spanish-speaking countries and small and mid-sized event organizers.

Eventbrite said the acquisition will help expand its global footprint in music events, including via the Arenal Sound, Viña Rock, Low Festival, and Dreambeach festivals.

It also flagged Ticketea’s “robust ecosystem of third-party integrations” — selling tickets for prominent entertainment events and brands, such as The Billy Elliot Musical, Cirque du Soleil, and Museo Nacional del Prado — as another attraction.

In a statement on the acquisition Julia Hartz, CEO and co-founder of Eventbrite, lauded Ticketea’s approach to solving the event industry’s challenges — saying its “robust discovery platform” was of interest, along with the company’s “strong leadership position” in the southern European market (not just Spain).

“There is incredible synergy between our two companies from a business, platform, and brand perspective,” added Hartz. “We’re thrilled to welcome their talented team, who shares our core mission of bringing people together through live experiences, to the Eventbrite family.”

Javier Andres, co-founder and CEO of Ticketea, is joining Eventbrite as country director for Spain and Portugal.

“We have been building a significant market presence in Spain for nearly a decade. It’s exciting to be recognized by the global leader in event technology as they invest more heavily in our growing market,” he said in a supporting statement.

“We look forward to extending the impact of both our team and technology far beyond country borders, to the more than 180 countries and territories where their powerful platform gives rise to millions of events today.”

According to Crunchbase Ticketea has raised just $5.7M since being founded, all the way back in 2009, so its investors — which include Madrid-based VC firm Seaya Ventures — are likely to be patting themselves on the back about a nice little return on their investment.

Ticketea is not the only European ticket firm that Eventbrite has bagged in recent years. Last year the billion-dollar event-management platform also acquired Ticketscript, a ticketing startup based out of Amsterdam.

In 2017 it also splurged on US-based Nvite and Ticketfly — picking the latter up from Pandora, and shelling out $200M.