Jack Dorsey and Twitter ignored opportunity to meet with civic group on Myanmar issues

Responding to criticism over his recent trip to Myanmar, Twitter CEO Jack Dorsey said he’s keen to learn about the country’s racial tension and human rights atrocities, but it has emerged that both he and Twitter’s public policy team ignored an opportunity to connect with a key civic group in the country.

A loose group of six companies in Myanmar has engaged with Facebook in a bid to help improve the situation around usage of its services in the country — often with frustrating results — and key members of that alliance, including Omidyar-backed accelerator firm Phandeeyar, contacted Dorsey via Twitter DM and emailed the company’s public policy contacts when they learned that the CEO was visiting Myanmar.

The plan was to arrange a forum to discuss the social media concerns in Myanmar to help Dorsey gain an understanding of life on the ground in one of the world’s fastest-growing internet markets.

“The Myanmar tech community was all excited, and wondering where he was going,” Jes Kaliebe Petersen, the Phandeeyar CEO, told TechCrunch in an interview. “We wondered: ‘Can we get him in a room, maybe at a public event, and talk about technology in Myanmar or social media, whatever he is happy with?'”

The DMs went unread. In a response to the email, a Twitter staff member told the group that Dorsey was visiting the country strictly on personal time with no plans for business. The Myanmar-based group responded with an offer to set up a remote, phone-based briefing for Twitter’s public policy team with the ultimate goal of getting information to Dorsey and key executives, but that email went unanswered.

When we contacted Twitter, a spokesperson initially pointed us to a tweet from Dorsey in which he said: “I had no conversations with the government or NGOs during my trip.”

However, within two hours of our inquiry, a member of Twitter’s team responded to the group’s email in an effort to restart the conversation and set up a phone meeting in January.

“We’ve been in discussions with the group prior to your outreach,” a Twitter spokesperson told TechCrunch in a subsequent email exchange.

That statement is incorrect.

Still, on the bright side, it appears that the group may get an opportunity to brief Twitter on its concerns about social media usage in the country after all.

The micro-blogging service isn’t as widely used in Myanmar as Facebook, which has some 20 million monthly users and is practically the de facto internet in the country, but there have been concerns nonetheless. For one thing, there has been the development of a somewhat sinister bot army in Myanmar and other parts of Southeast Asia, and the service remains a key platform for influencers and thought-leaders.

“[Dorsey is] the head of a social media company and, given the massive issues here in Myanmar, I think it’s irresponsible of him to not address that,” Petersen told TechCrunch.

“Twitter isn’t as widely used as Facebook but that doesn’t mean it doesn’t have concerns happening with it,” he added. “As we’d tell Facebook or any large tech company with a prominent presence in Myanmar, it’s important to spend time on the ground like they’d do in any other market where they have a substantial presence.”

The UN has concluded that Facebook plays a “determining” role in accelerating ethnic violence in Myanmar. While Facebook has tried to address the issues, it hasn’t committed to opening an office in the country and it released a key report on the situation on the eve of the U.S. mid-term elections, a strategy that appeared designed to deflect attention from the findings. All of which suggests that it isn’t really serious about Myanmar.

Rudy Giuliani, a Trump cybersecurity adviser, doesn’t understand the internet

Welcome back to the latest edition of politicians don’t get technology! Our latest guest is Rudy Giuliani, former New York mayor and current cybersecurity adviser to President Trump.

Rudy Giuliani doesn’t understand Twitter or the internet.

It’s embarrassing enough that Giuliani inadvertently tweeted a link to a website criticizing Trump, but now he is doubling down on cyberstupidity by claiming that Twitter allowed “someone to invade my text with a disgusting anti-President message.”

Ignorant as to what had happened, he blamed supposed anti-Republican bias within Twitter, a theme that Trump and other Republicans have pushed despite a lack of evidence.

“Don’t tell me they are not committed cardcarrying anti-Trumpers,” added Giuliani, who — we repeat — is a cybersecurity adviser to the White House.

The explanation is quite simple.

Giuliani’s original tweet on November 30 (above) didn’t contain a space between sentences, which created a hyperlink to G-20.in. An eagle-eyed member of the public — named by the BBC as Atlanta-based marketing director Jason Velazquez — clicked through the link and, finding that it was blank, quickly registered the domain and created a website carrying the “disgusting anti-President message” that Giuliani referred to.

The G-20.in website that appears in Giuliani’s tweet

“When I realised that the URL was available, my heart began to race a bit. I remember thinking: ‘This guy — Giuliani — has no idea,'” Velazquez told the BBC. “I quickly uploaded my files, tweeted about what I had done, and left my apartment.”
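The mechanics are easy to reproduce. Here’s a minimal sketch (hypothetical, not Twitter’s actual linkifier) of the kind of TLD-based autolinking that turned the fused text into a URL: any dot-separated token ending in a recognized top-level domain, such as India’s “.in”, gets treated as a link, and a missing space after a period is all it takes.

```python
import re

# Tiny stand-in for the real list of recognized top-level domains.
KNOWN_TLDS = ("com", "org", "in")  # ".in" is India's country-code TLD

def autolink(text):
    """Return the tokens a naive TLD-based linkifier would turn into hyperlinks."""
    pattern = r"\b[\w-]+\.(?:%s)\b" % "|".join(KNOWN_TLDS)
    return re.findall(pattern, text, re.IGNORECASE)

# With the space after the period, nothing matches; without it, "G-20.In"
# parses as a hostname on the .in TLD and becomes a clickable link.
print(autolink("See you at the G-20. In July we will meet."))  # []
print(autolink("See you at the G-20.In July we will meet."))   # ['G-20.In']
```

The fix on Giuliani’s end would have been a single keystroke; the fix on Velazquez’s end was registering the domain before anyone else did.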

The tweet itself was well-covered by media, but Giuliani’s absurd return to the topic has given the site even more coverage.

Both of Giuliani’s tweets remain online and undeleted — as of 22:40 PST — but, on the plus side, it does appear that he has figured out how to create Twitter threads by replying to previous tweets.

This incident follows another moment of Twitter-based comedy from Giuliani when he sent a curious message following news that Trump’s ex-attorney Michael Cohen had made a plea agreement.

That tweet recalled Trump’s own ‘covfefe’ typo last year.

Floyd Mayweather and DJ Khaled to pay SEC fines for flogging garbage ICOs

Floyd Mayweather Jr. and DJ Khaled have agreed to “pay disgorgement, penalties and interest” for failing to disclose promotional payments from three ICOs including Centra Tech. Mayweather received $100,000 from Centra Tech while Khaled got $50,000 from the failed ICO. The SEC cited Khaled and Mayweather’s social media feeds, noting they touted securities for pay without disclosing their affiliation with the companies.

Mayweather, you’ll recall, appeared on Instagram with a whole lot of cash while Khaled called Centra Tech a “Game changer.”

“You can call me Floyd Crypto Mayweather from now on,” wrote Mayweather. Sadly, the SEC ruled he is no longer allowed to use the nom de guerre “Crypto.”

Without admitting or denying the findings, Mayweather and Khaled agreed to pay disgorgement, penalties and interest. Mayweather agreed to pay $300,000 in disgorgement, a $300,000 penalty, and $14,775 in prejudgment interest. Khaled agreed to pay $50,000 in disgorgement, a $100,000 penalty, and $2,725 in prejudgment interest. In addition, Mayweather agreed not to promote any securities, digital or otherwise, for three years, and Khaled agreed to a similar ban for two years. Mayweather also agreed to continue to cooperate with the investigation.

“These cases highlight the importance of full disclosure to investors,” said Stephanie Avakian of the SEC. “With no disclosure about the payments, Mayweather and Khaled’s ICO promotions may have appeared to be unbiased, rather than paid endorsements.”

The SEC has separately charged Centra Tech’s founders, Raymond Trapani, Sohrab Sharma, and Robert Farkas, with fraud.

UK parliament seizes cache of internal Facebook documents to further privacy probe

Facebook founder Mark Zuckerberg may yet regret underestimating a UK parliamentary committee that’s been investigating the democracy-denting impact of online disinformation for the best part of this year — and whose repeat requests for facetime he’s just as repeatedly snubbed.

In the latest high gear change, reported in yesterday’s Observer, the committee has used parliamentary powers to seize a cache of documents pertaining to a US lawsuit to further its attempt to hold Facebook to account for misuse of user data.

Facebook’s oversight — or rather lack of it — where user data is concerned has been a major focus for the committee, as its enquiry into disinformation and data misuse has unfolded and scaled over the course of this year, ballooning in scope and visibility since the Cambridge Analytica story blew up into a global scandal this April.

The internal documents now in the committee’s possession are alleged to contain significant revelations about decisions made by Facebook senior management vis-a-vis data and privacy controls — including confidential emails between senior executives and correspondence with Zuckerberg himself.

This has been a key line of enquiry for parliamentarians. And an equally frustrating one — with committee members accusing Facebook of being deliberately misleading and concealing key details from it.

The seized files pertain to a US lawsuit that predates mainstream publicity around political misuse of Facebook data, with the suit filed in 2015, by a US startup called Six4Three, after Facebook removed developer access to friend data. (As we’ve previously reported, Facebook was actually being warned about data risks related to its app permissions as far back as 2011 — yet it didn’t fully shut down the friends data API until May 2015.)

The core complaint is an allegation that Facebook enticed developers to create apps for its platform by implying they would get long-term access to user data in return. The claim is that, by later cutting off that data access, Facebook effectively defrauded developers.

Since lodging the complaint, the plaintiffs have seized on the Cambridge Analytica saga to try to bolster their case.

And in a legal motion filed in May Six4Three’s lawyers claimed evidence they had uncovered demonstrated that “the Cambridge Analytica scandal was not the result of mere negligence on Facebook’s part but was rather the direct consequence of the malicious and fraudulent scheme Zuckerberg designed in 2012 to cover up his failure to anticipate the world’s transition to smartphones”.

The startup used legal powers to obtain the cache of documents — which remain under seal on order of a California court. But the UK parliament used its own powers to swoop in and seize the files from the founder of Six4Three during a business trip to London when he came under the jurisdiction of UK law, compelling him to hand them over.

According to the Observer, parliament sent a serjeant at arms to the founder’s hotel — giving him a final warning and a two-hour deadline to comply with its order.

“When the software firm founder failed to do so, it’s understood he was escorted to parliament. He was told he risked fines and even imprisonment if he didn’t hand over the documents,” it adds, apparently revealing how Facebook lost control over some more data (albeit, its own this time).

In comments to the newspaper yesterday, DCMS committee chair Damian Collins said: “We are in uncharted territory. This is an unprecedented move but it’s an unprecedented situation. We’ve failed to get answers from Facebook and we believe the documents contain information of very high public interest.”

Collins later tweeted the Observer’s report on the seizure, teasing “more next week” — likely a reference to the grand committee hearing in parliament already scheduled for November 27.

But it could also be a hint the committee intends to reveal and/or make use of information locked up in the documents, as it puts questions to Facebook’s VP of policy solutions…

That said, the documents are subject to the Californian superior court’s seal order, so — as the Observer points out — cannot be shared or made public without risk of being found in contempt of court.

A spokesperson for Facebook made the same point, telling the newspaper: “The materials obtained by the DCMS committee are subject to a protective order of the San Mateo Superior Court restricting their disclosure. We have asked the DCMS committee to refrain from reviewing them and to return them to counsel or to Facebook. We have no further comment.”

Facebook’s spokesperson added that Six4Three’s “claims have no merit”, further asserting: “We will continue to defend ourselves vigorously.”

And, well, the irony of Facebook asking for its data to remain private also shouldn’t be lost on anyone at this point…

Another irony: In July, the Guardian reported that as part of Facebook’s defence against Six4Three’s suit the company had argued in court that it is a publisher — seeking to have what it couched as ‘editorial decisions’ about data access protected by the US’ first amendment.

Which is — to put it mildly — quite the contradiction, given Facebook’s long-standing public characterization of its business as just a distribution platform, never a media company.

So expect plenty of fireworks at next week’s public hearing as parliamentarians once again question Facebook over its various contradictory claims.

It’s also possible the committee will have been sent an internal email distribution list by then, detailing who at Facebook knew about the Cambridge Analytica breach in the earliest instance.

This list was obtained by the UK’s data watchdog, over the course of its own investigation into the data misuse saga. And earlier this month information commissioner Elizabeth Denham confirmed the ICO has the list and said it would pass it to the committee.

The accountability net does look to be closing in on Facebook management.

Even as Facebook continues to deny international parliaments any face-time with its founder and CEO (the EU parliament remains the sole exception).

Last week the company refused to even have Zuckerberg do a video call to take the committee’s questions — offering its VP of policy solutions, Richard Allan, to go before what’s now a grand committee comprised of representatives from seven international parliaments instead.

The grand committee hearing will take place in London on Tuesday morning, British time — followed by a press conference in which parliamentarians representing Facebook users from across the world will sign a set of ‘International Principles for the Law Governing the Internet’, making “a declaration on future action”.

So it’s also ‘watch this space’ where international social media regulation is concerned.

As noted above, Allan is just the latest stand-in for Zuckerberg. Back in April the DCMS committee spent the best part of five hours trying to extract answers from Facebook CTO Mike Schroepfer.

“You are doing your best but the buck doesn’t stop with you does it? Where does the buck stop?” one committee member asked him then.

“It stops with Mark,” replied Schroepfer.

But Zuckerberg definitely won’t be stopping by on Tuesday.

Facebook policy VP, Richard Allan, to face the international ‘fake news’ grilling that Zuckerberg won’t

An unprecedented international grand committee comprised of 22 representatives from seven parliaments will meet in London next week to put questions to Facebook about the online fake news crisis and the social network’s own string of data misuse scandals.

But Facebook founder Mark Zuckerberg won’t be providing any answers. The company has repeatedly refused requests for him to answer parliamentarians’ questions.

Instead it’s sending a veteran EMEA policy guy, Richard Allan, now its London-based VP of policy solutions, to face a roomful of irate MPs.

Allan will give evidence next week to elected members from the parliaments of Argentina, Brazil, Canada, Ireland, Latvia and Singapore, along with members of the UK’s Digital, Culture, Media and Sport (DCMS) parliamentary committee.

At the last count the international initiative had a full eight parliaments behind it but it’s down to seven — with Australia being unable to attend on account of the travel involved in getting to London.

A spokeswoman for the DCMS committee confirmed Facebook declined its last request for Zuckerberg to give evidence, telling TechCrunch: “The Committee offered the opportunity for him to give evidence over video link, which was also refused. Facebook has offered Richard Allan, vice president of policy solutions, which the Committee has accepted.”

“The Committee still believes that Mark Zuckerberg is the appropriate person to answer important questions about data privacy, safety, security and sharing,” she added. “The recent New York Times investigation raises further questions about how recent data breaches were allegedly dealt with within Facebook, and when the senior leadership team became aware of the breaches and the spread of Russian disinformation.”

The DCMS committee has spearheaded the international effort to hold Facebook to account for its role in a string of major data scandals, joining forces with similarly concerned committees across the world, as part of an already wide-ranging enquiry into the democratic impacts of online disinformation that’s been keeping it busy for the best part of this year.

And especially busy since the Cambridge Analytica story blew up into a major global scandal this April, although Facebook’s 2018 run of bad news hasn’t stopped there…

The evidence session with Allan is scheduled to take place at 11.30am (GMT) on November 27 in Westminster. (It will also be streamed live on the UK’s parliament.tv website.)

Afterwards a press conference has been scheduled — during which DCMS says a representative from each of the seven parliaments will sign a set of ‘International Principles for the Law Governing the Internet’.

It bills this as “a declaration on future action from the parliaments involved” — suggesting the intent is to generate international momentum and consensus for regulating social media.

The DCMS’ preliminary report on the fake news crisis, which it put out this summer, called for urgent action from government on a number of fronts — including floating the idea of a levy on social media to defend democracy.

However UK ministers failed to leap into action, merely putting out a tepid ‘wait and see’ response. Marshalling international action appears to be DCMS’ alternative action plan.

At next week’s press conference, grand committee members will take questions following Allan’s evidence — so expect swift condemnation of any fresh equivocation, misdirection or question-dodging from Facebook (which has already been accused by DCMS members of a pattern of evasive behavior).

Last week’s NYT report also characterized the company’s strategy since 2016, vis-a-vis the fake news crisis, as ‘delay, deny, deflect’.

The grand committee will hear from other witnesses too, including the UK’s information commissioner Elizabeth Denham who was before the DCMS committee recently to report on a wide-ranging ecosystem investigation it instigated in the wake of the Cambridge Analytica scandal.

She told it then that Facebook needs to take “much greater responsibility” for how its platform is being used, warning that unless the company overhauls its privacy-hostile business model it risks burning user trust for good.

Also giving evidence next week: Deputy information commissioner Steve Wood; the former Prime Minister of St Kitts and Nevis, Rt Hon Dr Denzil L Douglas (on account of Cambridge Analytica/SCL Elections having done work in the region); and the co-founder of PersonalData.IO, Paul-Olivier Dehaye.

Dehaye has also given evidence to the committee before — detailing his experience of making Subject Access Requests to Facebook — and trying and failing to obtain all the data it holds on him.

Google lays out narrow “EU election advertiser” policy ahead of 2019 vote

Google has announced its plan for combating election interference in the European Union, ahead of elections next May when up to 350 million voters across the region will vote to elect 705 Members of the European Parliament.

In a blog post laying out a narrow approach to democracy-denting disinformation, Google says it will introduce a verification system for “EU election advertisers to make sure they are who they say they are”, and require that any election ads disclose who is paying for them.

The details of the verification process are not yet clear so it’s not possible to assess how robust a check this might be.

But Facebook, which also recently announced checks on political advertisers, had to delay its UK launch of ID checks earlier this month, after the beta system was shown to be embarrassingly easy to game. So just because a piece of online content has an ‘ID badge’ on it does not automatically make it bona fide.

Google’s framing of “EU election advertisers” suggests it will exclude non-EU based advertisers from running election ads, at least as it’s defining these ads. (We’ve asked the company to confirm that.)

What’s very clear from the blog post is that the adtech giant is defining political ads as an extremely narrow category — with only ads that explicitly mention political parties, candidates or a current officeholder falling under the scope of the policy.

Here’s how Google explains what it means by “election ads”:

“To bring people more information about the election ads they see across Google’s ad networks, we’ll require that ads that mention a political party, candidate or current officeholder make it clear to voters who’s paying for the advertising.”

So any ads still intended to influence public opinion — and thus sway potential voters — but which cite issues, rather than parties and/or politicians, will fall entirely outside the scope of its policy.

Yet of course issues are material to determining election outcomes.
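The narrowness is easy to illustrate. Below is a hedged sketch (all party and candidate names are hypothetical, and real ad matching would be far more involved) of the stated scope: only ads that name a party, candidate or current officeholder trigger the disclosure requirement, so a divisive issue ad sails through untouched.

```python
# Hypothetical sketch of the policy scope as described in Google's post.
# The gap it shows is structural, not an implementation detail.
PARTIES = {"Party A", "Party B"}
CANDIDATES = {"Jane Doe"}
OFFICEHOLDERS = {"John Roe"}

def requires_disclosure(ad_text):
    """True only if the ad names a party, candidate or current officeholder."""
    return any(term in ad_text for term in PARTIES | CANDIDATES | OFFICEHOLDERS)

print(requires_disclosure("Vote Jane Doe on May 23"))         # True: disclosure required
print(requires_disclosure("Migration is destroying Europe"))  # False: issue ad, no disclosure
```

A term-matching policy of this shape simply never sees the second ad, however much it is designed to sway voters.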

Issue-based political propaganda is also — as we all know very well now — a go-to tool for the shadowy entities using Internet platforms for highly affordable, mass-scale online disinformation campaigns.

The Kremlin seized on divisive issues for much of the propaganda it deployed across social media ahead of the 2016 US presidential elections, for example.

Russia didn’t even always wrap its politically charged infowar bombs in an ad format either.

All of which means that any election ‘security’ effort that fixes on a narrow definition (like “election ads”) seems unlikely to offer much more than a micro bump in the road for anyone wanting to pay to play with democracy.

The only real fix for this problem is likely full disclosure of all advertising and advertisers: who’s paying for every online ad, regardless of what it contains, plus a powerful interface for parsing that data mountain.

Of course neither Google nor Facebook is offering that — yet.

Because, well, this is self-regulation, ahead of election laws catching up.

What Google is offering for the forthcoming EU parliament elections is an EU-specific Election Ads Transparency Report (akin to the one it already launched for the US mid-terms) — which it says it will introduce (before the May vote) to provide a “searchable ad library to provide more information about who is purchasing election ads, whom they’re targeted to, and how much money is being spent”.

“Our goal is to make this information as accessible and useful as possible to citizens, practitioners, and researchers,” it adds.

The rest of its blog post is given over to puffing up a number of unrelated steps it says it will also take, in the name of “supporting the European Union Parliamentary Elections”, but which don’t involve Google itself having to be any more transparent about its own ad platform.

So it says it will —

  • work with data from Election Commissions across the member states to “make authoritative electoral information available and help people find the info they need to get out and vote”
  • offer in-person security training to the most vulnerable groups, who face increased risks of phishing attacks (“We’ll be walking them through Google’s Advanced Protection Program, our strongest level of account security and Project Shield, a free service that uses Google technology to protect news sites and free expression from DDoS attacks on the web.”)
  • collaborate — via its Google News Lab entity — with news organizations across all 27 EU Member States to “support online fact checking”. (The Lab will “be offering a series of free verification workshops to point journalists to the latest tools and technology to tackle disinformation and support their coverage of the elections”)

No one’s going to turn their nose up at security training and freebie resources.

But the scale of the disinformation challenge is rather larger and more existential than a few free workshops and an anti-DDoS tool can fix.

The bulk of Google’s padding here also fits comfortably into its standard operating philosophy where the user-generated content that fuels its business is concerned; aka ‘tackle bad speech with more speech’. Crudely put: More speech, more ad revenue.

Though, as independent research has repeatedly shown, fake news flies much faster and is much, much harder to unstick than truth.

Which means fact checkers, and indeed journalists, are faced with the Sisyphean task of unpicking all the BS that Internet platforms are liberally fencing and accelerating (and monetizing as they do so).

The economic incentives inherent in the dominant adtech platform of the Internet should really be front and center when considering the modern disinformation challenge.

But of course Google and Facebook aren’t going to say that.

Meanwhile lawmakers are on the back foot. The European Commission has done something, signing tech firms up to a voluntary Code of Practice for fighting fake news — Google and Facebook among them.

Although, even in that dilute, non-legally binding document, signatories are supposed to have agreed to take action to make both political advertising and issue-based advertising “more transparent”.

Yet here’s Google narrowly defining election ads in a way that lets issues slide on past.

We asked the company what it’s doing to prevent issue-based ads from interfering in EU elections. At the time of writing it had not responded to that question.

Safe to say, ‘election security’ looks to be a very long way off indeed.

Not so the date of the EU poll. That’s fast approaching: May 23 through 26, 2019.

Metacert’s Cryptonite can catch phishing links in your email

Metacert, founded by Paul Walsh, originally began as a way to watch chat rooms for fake Ethereum scams. Walsh, who was an early experimenter in cryptocurrencies, grew frustrated when he saw hackers dumping fake links into chat rooms, resulting in users regularly losing cash to scammers.

Now Walsh has expanded his software to email. A new product built for email will show little green or red shields next to links, confirming that a link is what it appears to be. A fake link would appear red while a real PayPal link, say, would appear green. The plugin works with Apple’s Mail app on the iPhone and is called Cryptonite.

“The system utilizes the MetaCert Protocol infrastructure/registry,” said Walsh. “It contains 10 billion classified URLs. This is at the core of all of MetaCert’s products and services. It’s a single API that’s used to protect over 1 million crypto people on Telegram via a security bot and it’s the same API that powers the integration that turned off phishing for the crypto world in 2017. Even when links are shortened? MetaCert unfurls them until it finds the real destination site, and then checks the Protocol to see if it’s verified, unknown or classified as phishing. It does all this in less than 300ms.”
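The unfurl-then-classify flow Walsh describes can be sketched roughly like this. Note this is not MetaCert’s actual code: the registry and redirect tables below are toy stand-ins for the 10-billion-URL registry and for real shortener hops, which would be resolved over the network.

```python
# Toy stand-in for the URL-classification registry.
REGISTRY = {
    "https://www.paypal.com/": "verified",
    "https://paypa1-login.example/": "phishing",
}

# Toy stand-in for HTTP redirect hops from link shorteners.
REDIRECTS = {
    "https://short.example/a1": "https://paypa1-login.example/",
    "https://short.example/b2": "https://www.paypal.com/",
}

def unfurl(url, max_hops=10):
    """Follow redirect hops until we reach the real destination site."""
    for _ in range(max_hops):
        if url not in REDIRECTS:
            return url
        url = REDIRECTS[url]
    raise ValueError("too many redirects")

def classify(url):
    """Classify the *final* destination, never the shortened surface URL."""
    return REGISTRY.get(unfurl(url), "unknown")

print(classify("https://short.example/b2"))   # verified -> green shield
print(classify("https://short.example/a1"))   # phishing -> red shield
print(classify("https://unlisted.example/"))  # unknown
```

The key design point is that classification happens after unfurling, so a scammer can’t hide a phishing destination behind a clean-looking shortened link.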

Walsh is also working on a system to scan for fake news in the wild using a similar technology to his anti-phishing solution. The company is currently raising funding and is working on a utility token.

Walsh sees enterprises as his first customers and expects IT shops to implement the software to show employees which links are allowed (i.e. company or partner links) and which ones are bad.

“It’s likely we will approach this top down and bottom up, which is unusual for enterprise security solutions. But ours is an enterprise service that anyone can install on their phone in less than a minute,” he said. “SMEs aren’t typically a target market for email security companies but we believe we can address this massive market with a solution that’s not scary to set up or expensive to support. More research is required though, to see if our hypothesis is right.”

“With MetaCert’s security, training is reduced to a single sentence: ‘if it doesn’t have a green shield, assume it’s not safe,'” said Walsh.

Facebook bug let websites read ‘likes’ and interests from a user’s profile

Facebook has fixed a bug that let any website pull information from a user’s profile — including their ‘likes’ and interests — without that user’s knowledge.

Those are the findings of Ron Masas, a security researcher at Imperva, who found that Facebook search results weren’t properly protected from cross-site request forgery (CSRF) attacks. In other words, a website could quietly siphon off certain bits of data from your logged-in Facebook profile in another tab.

Masas demonstrated how a website acting in bad faith could embed an IFRAME — used to nest a webpage within a webpage — to silently collect profile information.

“This allowed information to cross over domains — essentially meaning that if a user visits a particular website, an attacker can open Facebook and can collect information about the user and their friends,” said Masas.

The malicious website could open several Facebook search queries in a new tab, and run queries that could return “yes” or “no” responses — such as if a Facebook user likes a page, for example. Masas said that the search queries could return more complex results — such as returning all a user’s friends with a particular name, a user’s posts with certain keywords, and even more personal demographics — such as all of a person’s friends with a certain religion in a named city.
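The core of the attack is that each search query leaks only a single yes/no answer, but enough boolean questions reconstruct real profile detail. The sketch below simulates that idea in Python; the `oracle` function is a hypothetical stand-in for the cross-site signal the malicious page observes, not the actual browser-side exploit Masas built.

```python
# Simulation of the yes/no oracle idea: many boolean "search" answers
# aggregated into profile data. `oracle` stands in for whatever
# cross-origin signal leaked the answer in the real attack.

def oracle(profile, predicate):
    """Simulates one leaked yes/no answer for a search-style query."""
    return predicate(profile)

def enumerate_likes(profile, candidate_pages):
    """Recover liked pages one boolean question at a time."""
    return [
        page for page in candidate_pages
        if oracle(profile, lambda prof, p=page: p in prof["likes"])
    ]

victim = {"likes": {"PageA", "PageC"}}
print(enumerate_likes(victim, ["PageA", "PageB", "PageC"]))
# prints ['PageA', 'PageC']
```

This is why even a "yes/no only" leak matters: an attacker who can pose arbitrary questions — does this user like page X, have a friend named Y, in city Z — can enumerate their way to a detailed profile.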

“The vulnerability exposed the user and their friends’ interests, even if their privacy settings were set so that interests were only visible to the user’s friends,” he said.

A snippet from a proof-of-concept built by Masas to show him exploiting the bug. (Image: Imperva/supplied)

In fairness, it’s not a problem unique to Facebook, nor is it particularly covert. But given the kind of data available, Masas said it would be “attractive” to ad companies.

Imperva privately disclosed the bug in May. Facebook fixed the bug days later by adding CSRF protections and paid out $8,000 in two separate bug bounties.

Facebook told TechCrunch that the company hasn’t seen any abuse.

“We appreciate this researcher’s report to our bug bounty program,” said Facebook spokesperson Margarita Zolotova in a statement. “As the underlying behavior is not specific to Facebook, we’ve made recommendations to browser makers and relevant web standards groups to encourage them to take steps to prevent this type of issue from occurring in other web applications.”

It’s the latest in a string of data exposures and bugs that have put Facebook user data at risk after the Cambridge Analytica scandal this year, which saw a political data firm vacuum up profiles on 87 million users to use for election profiling — including users’ likes and interests.

Months later, the social media giant admitted millions of user account tokens had been stolen by hackers who exploited a chain of bugs.

YouTube VR finally lands on the Oculus Go

Today, Google’s YouTube VR app arrives on the $199 Oculus Go, bringing the largest library of VR content on the web to Facebook’s entry-level VR device.

YouTube brings plenty of content in conventional and more immersive video types. It’s undoubtedly the biggest single hub of 360 content and native formats like VR180, though offering access to the library at large is probably far more important to the Oculus platform.

One of the interesting things about Oculus’s strategy with the Go headset is that media consumption, not gaming, turned out to be the dominant use case. If you find it hard to believe that so many people are out there binging on 360 videos, it’s because they probably aren’t. Users have instead co-opted the device to make it a conventional movie and TV viewing device. There are apps from Netflix and Hulu, while Facebook has also built Oculus TV, a feature that’s still in its infancy but basically offers an Apple TV-like environment for watching a lot of 2D content in a social setting.

At the company’s Oculus Connect conference this past year, CTO John Carmack remarked that about 70 percent of user time on the Go has gone to watching videos, with about 30 percent going to gaming. Oculus has positioned itself as a gaming company in a lot of ways via its investments, so it will be interesting to see how it grows its mobile platform to make the video aspect of its VR business more attractive.

With YouTube, the company has easy access to an enormous pool of content, and while it would have been a great partner for Oculus TV, a dedicated app brings a lot to users. It wasn’t clear whether Google would play hardball with the YouTube app and keep standalone access confined to its Daydream platform. But as the company’s homegrown VR ambitions have grown more subdued, it looks like it has had some time to focus on external platforms.

You can download the YouTube VR app here.