Waymo’s self-driving trucks and minivans are headed to New Mexico and Texas

Waymo said Thursday it will begin mapping and eventually testing its autonomous long-haul trucks in Texas and parts of New Mexico, the latest sign that the Alphabet company is expanding beyond its core focus of launching a robotaxi business.

Waymo said in a tweet posted early Thursday it had picked these areas because they are “interesting and promising commercial routes.” Waymo also said it would “explore how the Waymo Driver” — the company’s branded self-driving system — could be used to “create new transportation solutions.”

Waymo plans to mostly focus on interstates because Texas has a particularly high freight volume, the company said. The program will begin with mapping conducted by Waymo’s Chrysler Pacifica minivans.

The mapping and eventual testing will occur on highways around Dallas, Houston and El Paso. In New Mexico, Waymo will focus on the southernmost part of the state.

Interstate 10 will be a critical stretch of highway in both states — and one that is already a testbed for TuSimple, a self-driving trucking startup that has operations in Tucson and San Diego. TuSimple tests and carries freight along the Tucson to Phoenix corridor on I-10. The company also tests on I-10 in New Mexico and Texas.

Waymo, which is best known for its pursuit of a robotaxi service, integrated its self-driving system into Class 8 trucks and began testing them in Arizona in August 2017. It stopped testing trucks on Arizona roads later that year, then brought truck testing back to the state in May 2019.

Those early Arizona tests were aimed at gathering initial information about driving trucks in the region, while the new round of truck testing in Arizona marks a more advanced stage in the program’s development, Waymo said at the time.

Waymo has been testing its self-driving trucks in a handful of locations in the U.S., including Arizona, the San Francisco area and Atlanta. In 2018, the company announced plans to use its self-driving trucks to deliver freight bound for Google’s data centers in Atlanta.

Waymo’s trucking program has had a higher profile in the past year. In June, Waymo brought on 13 robotics experts, a group that includes Anki’s co-founder and former CEO Boris Sofman, to lead engineering in the autonomous trucking division.

Opera and the firm short-selling its stock (alleging Africa fintech abuses) weigh in

Internet services company Opera has come under a short-sell assault based on allegations of predatory lending practices by its fintech products in Africa.

Hindenburg Research issued a report claiming (among other things) that Opera’s finance products in Nigeria and Kenya have run afoul of prudent consumer practices and Google Play Store rules for lending apps.

Hindenburg — which is based in NYC and managed by financial analyst Nate Anderson — went on to suggest Opera’s U.S.-listed stock was grossly overvalued.

That’s a primer on the key info, though there are several additional shades of the who, why and where of this story to break down before getting to what Opera and Hindenburg had to say.

A good start is Opera’s ownership and scope. Founded in Norway, the company is an internet services provider, largely centered around its Opera browser.

Opera was acquired in 2016 for $600 million by a consortium of Chinese investors, led by current Opera CEO Yahui Zhou.

Two years later, Opera went public in an IPO on NASDAQ, where its shares currently trade.

Though Opera’s web platform isn’t widely used in the U.S. — where it has less than 1% of the browser market — it was long the number-one browser in Africa and has more recently fallen to a distant second behind Chrome, according to StatCounter.

On the back of its browser popularity, Opera went on an African venture-spree in 2019, introducing a suite of products and startup verticals in Nigeria and Kenya, with intent to scale more broadly across the continent.

In Nigeria these include motorcycle ride-hail service ORide and delivery app OFood.

Central to these services are Opera’s fintech apps: OPay in Nigeria and OKash and Opesa in Kenya — which offer payment and lending options.

Fintech-focused VCs and startups have been at the center of a decade-long tech boom in several core economies in Africa, namely Kenya and Nigeria.

In 2019 Opera led a wave of Chinese VC in African fintech, including $170 million in two rounds to its OPay payments service in Nigeria.

Opera’s fintech products in Africa (as well as Opera’s Cashbean in India) are at the core of Hindenburg Research’s brief and short-sell position. 

The crux of the Hindenburg report is the claim that, with its browser business losing market share, Opera has pivoted to products that generate revenue from predatory short-term loans in Africa and India at interest rates of 365% to 876%.

The firm’s reporting goes on to claim that Opera’s payment products in Nigeria and Kenya run afoul of Google’s rules.

“Opera’s short-term loan business appears to be…in violation of the Google Play Store’s policies on short-term and misleading lending apps…we think this entire line of business is at risk of…being severely curtailed when Google notices and ultimately takes corrective action,” the report says.

Based on this, Hindenburg suggested Opera’s stock should trade at around $2.50, roughly a 70% discount to Opera’s $9 share price before the report was released on January 16.

Hindenburg also disclosed the firm would short Opera.

Founder Nate Anderson confirmed to TechCrunch Hindenburg continues to hold short positions in Opera’s stock — which means the firm could benefit financially from declines in Opera’s share value. The company’s stock dropped some 18% the day the report was published.

On motivations for the brief, “Technology has catalyzed numerous positive changes in Africa, but we do not think this is one of them,” he said.

“This report identified issues relating to one company, but what we think will soon become apparent is that in the absence of effective local regulation, predatory lending is becoming pervasive across Africa and Asia…proliferated via mobile apps,” Anderson added.

While the bulk of Hindenburg’s critique was centered on Opera, Anderson also took aim at Google.

“Google has become the primary facilitator of these predatory lending apps by virtue of Android’s dominance in these markets. Ultimately, our hope is that Google steps up and addresses the bigger issue here,” he said.

TechCrunch has an open inquiry into Google on the matter. In the meantime, Opera’s apps in Nigeria and Kenya are still available on Google Play, according to Opera and a cursory browse of the site.

For its part, Opera issued a rebuttal to Hindenburg and offered some input to TechCrunch through a spokesperson.

In a company statement, Opera said, “We have carefully reviewed the report published by the short seller and the accusations it put forward, and our conclusion is very clear: the report contains unsubstantiated statements, numerous errors, and misleading conclusions regarding our business and events related to Opera.”

Opera added that it holds the proper banking licenses in Kenya and Nigeria. “We believe we are in compliance with all local regulations,” said a spokesperson.

TechCrunch asked Hindenburg’s Nate Anderson if the firm had contacted local regulators related to its allegations. “We reached out to the Kenyan DCI three times before publication and have not heard back,” he said.

As it pertains to Africa’s startup scene, there’ll be several things to follow surrounding the Opera-Hindenburg affair.

The first is how it may impact Opera’s business moves in Africa. The company is engaged in competition with other startups across payments, ride-hail, and several other verticals in Nigeria and Kenya. Being accused of predatory lending, depending on where things go (or don’t) with the Hindenburg allegations, could put a dent in brand-equity.

There’s also the open question of whether and how Google and regulators in Kenya and Nigeria could respond. Contrary to some perceptions, fintech regulation isn’t non-existent in either country, nor are regulators totally ineffective.

Kenya passed a new data-privacy law in November and Nigeria recently established guidelines for mobile-money banking licenses in the country, after a lengthy Central Bank review of best digital finance practices.

Nigerian regulators demonstrated they are no pushovers with foreign entities when they slapped a $3.9 billion fine on MTN over a regulatory breach in 2015 and threatened to eject the South African mobile operator from the country.

As for short-sellers in African tech, they are a relatively new thing, largely because there are so few startups that have gone on to IPO.

In 2019, Citron Research head and activist short-seller Andrew Left — notable for shorting Lyft and Tesla — took short positions in African e-commerce company Jumia, after dropping a report accusing the company of securities fraud. Jumia’s share-price plummeted over 50% and has only recently begun to recover.

As of Wednesday, there were signs Opera may be shaking off Hindenburg’s report — at least in the market — as the company’s shares had rebounded to $7.35.

Google’s Collections feature now pushes people to save recipes & products, using AI

Google is giving an AI upgrade to its Collections feature — basically, Google’s own take on Pinterest, but built into Google Search. Originally a way to organize images, the Collections feature that launched in 2018 lets you save any type of search result — images, bookmarks or map locations — into groups called “Collections” for later perusal. Starting today, Google will make suggestions about items you can add to Collections based on your Search history across specific activities like cooking, shopping or hobbies.

The idea here is that people often use Google for research but don’t remember to save web pages for easy retrieval, which leads them to dig back through their Google Search history in an effort to find the lost page. Google believes AI smarts can improve the process by starting those reference collections for users.

Here’s how it works. After you’ve visited pages on Google Search in the Google app or on the mobile web, Google will group together similar pages related to things like cooking, shopping and hobbies, then prompt you to save them to suggested Collections.

For example, after an evening of scouring the web for recipes, Google may share a suggested Collection with you titled “Dinner Party,” which is auto-populated with relevant pages from your Search history. You can uncheck any recipes that don’t belong and rename the collection from “Dinner Party” to something else of your choosing, if you want. You then tap the “Create” button to turn this selection from your Search history into a Collection.

These Collections can be found later in the Collections tab in the Google app or through the Google.com side menu on the mobile web. There is an option to turn off this feature in Settings, but it’s enabled by default.

The Pinterest-like feature aims to keep Google users from venturing off Google sites to other places where they can save and organize things they’re interested in — whether that’s a list of recipes they want to add to a pinboard on Pinterest or a list of clothing they want to add to a wish list on Amazon. In particular, keeping e-commerce shoppers from leaving Google for Amazon is something the company is heavily focused on these days. The company recently rolled out a big revamp of its Google Shopping vertical, and just this month launched a way to shop directly from search results.

The issue with sites like Pinterest is that they’re capturing shoppers at an earlier stage in the buying process — during the information-gathering and inspiration-seeking research stage, that is. By saving links to Pinterest’s pinboards, shoppers ready to make a purchase are bypassing Google (and its advertisers) to check out directly with retailers.

Meanwhile, Google is simultaneously losing traffic to Amazon, which now surpasses Google for product searches. Even Instagram, of all places, has become a rival, as it’s now a place to shop. The app’s Shopping feature is funneling users right from its visual ads to a checkout page in the app. PayPal, catching wind of this trend, recently spent $4 billion to buy Honey in order to capture shoppers earlier in their journey.

For users, Google Collections is just about encouraging you to put your searches into groups for later access. But for Google, it’s also about getting people to shop on Google and stay on Google, no matter what they’re researching. Suggested Collections may lure you in as an easy way to organize recipes, but ultimately this feature will be about getting users to develop a habit of saving their searches to Google — and particularly their product searches.

Once you have a Collection set up, Google can point you to other related items, including websites, images and more. Most importantly, this will serve as a new way to drive more product searches, too, as it can send users to other product pages without them having to type in an explicit search query.

The update also comes with an often-requested collaboration feature, which means you can now share a collection with others for either viewing or editing.

Sharing and related content suggestions are live worldwide.

The AI-powered suggested collections are live in the U.S. for English users starting today and will reach more markets in time.

Google Cloud gets a Secret Manager

Google Cloud today announced Secret Manager, a new tool that helps its users securely store their API keys, passwords, certificates and other data. With this, Google Cloud is giving its users a single tool to manage this kind of data and a centralized source of truth, something that even sophisticated enterprise organizations often lack.

“Many applications require credentials to connect to a database, API keys to invoke a service, or certificates for authentication,” Google developer advocate Seth Vargo and product manager Matt Driscoll wrote in today’s announcement. “Managing and securing access to these secrets is often complicated by secret sprawl, poor visibility, or lack of integrations.”

With Berglas, Google already offered an open-source command-line tool for managing secrets. Secret Manager and Berglas will play well together: users will be able to move their secrets from the open-source tool into Secret Manager, and they can use Berglas to create and access secrets in the cloud-based tool as well.

With KMS, Google also offers a fully managed key management system (as do Google Cloud’s competitors). The two tools are very much complementary. As Google notes, KMS does not actually store the secrets — it encrypts the secrets you store elsewhere. Secret Manager provides a way to easily store (and manage) these secrets in Google Cloud.

Secret Manager includes the necessary tools for managing secret versions and audit logging, for example. Secrets in Secret Manager are also project-based global resources, the company stresses, while competing tools often manage secrets on a regional basis.
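
For developers, the basic flow is to create a secret, add a version holding the payload, and then read that payload back at runtime. Here is a minimal sketch of what that could look like with the google-cloud-secret-manager Python client; the project ID, secret ID and payload below are placeholders, not values from Google’s announcement:

    # Minimal sketch of the Secret Manager flow: create, add a version, access.
    # "my-project" and "db-password" are placeholder IDs.
    from google.cloud import secretmanager

    client = secretmanager.SecretManagerServiceClient()
    parent = "projects/my-project"

    # Create the secret container (metadata only, no payload yet).
    secret = client.create_secret(
        request={
            "parent": parent,
            "secret_id": "db-password",
            "secret": {"replication": {"automatic": {}}},
        }
    )

    # Store the actual secret bytes as the first version.
    client.add_secret_version(
        request={"parent": secret.name, "payload": {"data": b"s3cr3t-value"}}
    )

    # At runtime, an application reads the latest version.
    response = client.access_secret_version(
        request={"name": f"{parent}/secrets/db-password/versions/latest"}
    )
    print(response.payload.data.decode("utf-8"))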

The new tool is now in beta and available to all Google Cloud customers.

Google Cloud lands Lufthansa Group and Sabre as new customers

Google’s strategy for bringing new customers to its cloud is to focus on the enterprise and on specific verticals like healthcare, energy, financial services and retail, among others. Its healthcare efforts recently experienced a bit of a setback, with Epic now telling its customers that it is not moving forward with its plans to support Google Cloud. In return, though, Google now gets to announce two new customers in the travel business: Lufthansa Group, the world’s largest airline group by revenue, and Sabre, a company that provides backend services to airlines, hotels and travel aggregators.

For Sabre, Google Cloud is now the preferred cloud provider. Like a lot of companies in the travel (and especially the airline) industry, Sabre runs plenty of legacy systems and is currently in the process of modernizing its infrastructure. To do so, it has now entered a 10-year strategic partnership with Google “to improve operational agility while developing new services and creating a new marketplace for its airline, hospitality and travel agency customers.” The promise here, too, is that these new technologies will allow the company to offer new travel tools for its customers.

When you hear about airline systems going down, it’s often Sabre’s fault, so just being able to avoid that would already bring a lot of value to its customers.

“At Google we build tools to help others, so a big part of our mission is helping other companies realize theirs. We’re so glad that Sabre has chosen to work with us to further their mission of building the future of travel,” said Google CEO Sundar Pichai. “Travelers seek convenience, choice and value. Our capabilities in AI and cloud computing will help Sabre deliver more of what consumers want.”

The same holds true for Google’s deal with Lufthansa Group, which includes German flag carrier Lufthansa itself, but also subsidiaries like Austrian, Swiss, Eurowings and Brussels Airlines, as well as a number of technical and logistics companies that provide services to various airlines.

“By combining Google Cloud’s technology with Lufthansa Group’s operational expertise, we are driving the digitization of our operation even further,” said Dr. Detlef Kayser, member of the executive board of the Lufthansa Group. “This will enable us to identify possible flight irregularities even earlier and implement countermeasures at an early stage.”

Lufthansa Group has selected Google as a strategic partner to “optimize its operations performance.” A team from Google will work directly with Lufthansa to bring this project to life. The idea here is to use Google Cloud to build tools that help the company run its operations as smoothly as possible and to provide recommendations when things go awry due to bad weather, airspace congestion or a strike (which seems to happen rather regularly at Lufthansa these days).

Delta recently launched a similar platform to help its employees.

Canonical’s Anbox Cloud puts Android in the cloud

Canonical, the company behind the popular Ubuntu Linux distribution, today announced the launch of Anbox Cloud, a new platform that allows enterprises to run Android in the cloud.

On Anbox Cloud, Android becomes the guest operating system that runs containerized applications. This opens up a range of use cases, from bespoke enterprise apps to cloud gaming solutions.

The result is similar to what Google does with Android apps on Chrome OS, though the implementation is quite different and is based on the LXD container manager, as well as a number of Canonical projects like Juju and MAAS for provisioning the containers and automating the deployment. “LXD containers are lightweight, resulting in at least twice the container density compared to Android emulation in virtual machines – depending on streaming quality and/or workload complexity,” the company points out in its announcement.

Anbox itself, it’s worth noting, is an open-source project that came out of Canonical and the wider Ubuntu ecosystem. Launched by Canonical engineer Simon Fels in 2017, Anbox runs the full Android system in a container, which in turn allows you to run Android applications on any Linux-based platform.

What’s the point of all of this? Canonical argues that it allows enterprises to offload mobile workloads to the cloud and then stream those applications to their employees’ mobile devices. But Canonical is also betting on 5G to enable more use cases, not so much because of the available bandwidth as because of the low latencies it enables.

“Driven by emerging 5G networks and edge computing, millions of users will benefit from access to ultra-rich, on-demand Android applications on a platform of their choice,” said Stephan Fabel, Director of Product at Canonical, in today’s announcement. “Enterprises are now empowered to deliver high performance, high density computing to any device remotely, with reduced power consumption and in an economical manner.”

Outside of the enterprise, one of the use cases that Canonical seems to be focusing on is gaming and game streaming. A server in the cloud is generally more powerful than a smartphone, after all, though that gap is closing.

Canonical also cites app testing as another use case, given that the platform would allow developers to test apps on thousands of Android devices in parallel. Most developers, though, prefer to test their apps on real — not emulated — devices, given the fragmentation of the Android ecosystem.

Anbox Cloud can run in the public cloud, though Canonical is specifically partnering with edge computing specialist Packet to host it on the edge or on-premises. Silicon partners for the project are Ampere and Intel.

Apple Card users can now download monthly transactions in a spreadsheet

One of the big questions I got around the time the Apple Card launched was whether you’d be able to download a file of your transactions to either work with manually or import into a piece of expenses management software. The answer, at the time, was no.

Now Apple is announcing that Apple Card users will be able to export monthly transactions to a downloadable spreadsheet that they can use with their personal budgeting apps or sheets.

When I shot out a request for recommendations for a Mint replacement for my finances and budgeting, a lot of the responses showed just how spreadsheet-oriented many of the tools on the market are. Mint accepts imports, as do others like Clarity Money, YNAB and Lunch Money. So do, of course, personal solutions rolled in Google Sheets or other spreadsheet programs.

The one rec I got the most, and which I’m trying out right now, is Copilot. It does not currently support importing spreadsheets, but founder Andres Ugarte told me that it’s on their list to add. Ugarte said they’re happy to see the download feature appear because it lets users monitor their finances on their own terms: “Apple Card support has been a top request from our users, so we are very excited to provide a way for them to import their data into Copilot.”

Here’s how to export a spreadsheet of your monthly transactions:

  • Open Wallet
  • Tap ‘Apple Card’
  • Tap ‘Card Balance’
  • Tap on one of the monthly statements
  • Tap on ‘Export Transactions’

If you don’t yet have a monthly statement, you won’t see this feature until you do. The last step brings up a standard share sheet, letting you email or send the file however you normally would. The current format is CSV, but in the near future you’ll get an OFX option as well.
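
If you’d rather crunch the numbers yourself than import the file into an app, the export opens with any standard CSV tooling. Here is a small sketch in Python using the standard library’s csv module; the filename and column names are hypothetical placeholders, so check the header row of your own export:

    # Sketch: tallying a month of Apple Card spending from the exported CSV.
    # The filename and column names ("Transaction Date", "Merchant",
    # "Amount (USD)") are assumed placeholders -- verify them against the
    # header row of your actual statement.
    import csv

    total = 0.0
    with open("apple_card_statement.csv", newline="") as f:
        for row in csv.DictReader(f):
            amount = float(row["Amount (USD)"])
            total += amount
            print(row["Transaction Date"], row["Merchant"], amount)

    print(f"Monthly total: {total:.2f}")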

So if you’re using one of the tools (or spreadsheet setups) that would benefit from being able to download a monthly statement of your Apple Card transactions, then you’re getting your wish from the Apple Card team today. If you use a tool that requires something more along the lines of API-level access, like something using Plaid or another account-linking-centric tool, then you’re going to have to wait longer.

There’s no info from Apple on when that will arrive, if at all, but I know the team is continuing to launch new features, so my guess is that this is coming at some point.

Diligent’s Vivian Chu and Labrador’s Mike Dooley will discuss assistive robotics at TC Sessions: Robotics+AI

Too often the world of robotics seems to be a solution in search of a problem. Assistive robotics, on the other hand, is one of the primary real-world applications that existing technology can seemingly address almost immediately.

The concept for the technology has been around for some time now and has caught on particularly well in places like Japan, where human help simply can’t keep up with the needs of an aging population. At TC Sessions: Robotics+AI at U.C. Berkeley on March 3, we’ll be speaking with a pair of founders developing offerings for precisely these needs.

Vivian Chu is the co-founder and CEO of Diligent Robotics. The company has developed the Moxi robot to assist with chores and other non-patient tasks, in order to allow caregivers more time to interact with patients. Prior to Diligent, Chu worked at both Google[X] and Honda Research Institute.

Mike Dooley is the co-founder and CEO of Labrador Systems. The Los Angeles-based company recently closed a $2 million seed round to develop assistive robots for the home. Dooley has worked at a number of robotics companies, including, most recently, a stint as VP of Product and Business Development at iRobot.

Early Bird tickets are now on sale for $275, but you’d better hurry: prices go up by $100 in less than a month. Students can book a super discounted ticket for just $50 right here.

Google’s Sundar Pichai doesn’t want you to be clear-eyed about AI’s dangers

Alphabet and Google CEO Sundar Pichai is the latest tech giant kingpin to make a public call for AI to be regulated, while simultaneously encouraging lawmakers towards a dilute enabling framework that does not put any hard limits on what can be done with AI technologies.

In an op-ed published in today’s Financial Times, Pichai makes a headline-grabbing call for artificial intelligence to be regulated. But his pitch injects a suggestive undercurrent that puffs up the risk for humanity of not letting technologists get on with business as usual and apply AI at population-scale — with the Google chief claiming: “AI has the potential to improve billions of lives, and the biggest risk may be failing to do so” — thereby seeking to frame ‘no hard limits’ as actually the safest option for humanity.

Simultaneously the pitch downplays any negatives that might cloud the greater good that Pichai implies AI will unlock — presenting “potential negative consequences” as simply the inevitable and necessary price of technological progress.

It’s all about managing the level of risk, is the leading suggestion, rather than questioning outright whether the use of a hugely risk-laden technology such as facial recognition should actually be viable in a democratic society.

“Internal combustion engines allowed people to travel beyond their own areas but also caused more accidents,” Pichai writes, raiding history for a self-serving example while ignoring the vast climate costs of combustion engines (and the resulting threat now posed to the survival of countless species on Earth).

“The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread,” he goes on. “These lessons teach us that we need to be clear-eyed about what could go wrong.”

For “clear-eyed” read: Accepting of the technology-industry’s interpretation of ‘collateral damage’. (Which, in the case of misinformation and Facebook, appears to run to feeding democracy itself into the ad-targeting meat-grinder.)

Meanwhile, not at all mentioned in Pichai’s discussion of AI risks: The concentration of monopoly power that artificial intelligence appears to be very good at supercharging.

Funny that.

Of course it’s hardly surprising a tech giant that, in recent years, rebranded an entire research division to ‘Google AI’ — and has previously been called out by some of its own workforce over a project involving applying AI to military weapons technology — should be lobbying lawmakers to set AI ‘limits’ that are as dilute and abstract as possible.

The only thing that’s better than zero regulation is laws made by useful idiots who’ve fallen hook, line and sinker for industry-expounded false dichotomies — such as those claiming it’s ‘innovation or privacy’.

Pichai’s intervention also comes at a strategic moment, with US lawmakers eyeing AI regulation and the White House seemingly throwing itself into alignment with tech giants’ desires for ‘innovation-friendly’ rules which make their business easier. (To wit: This month White House CTO Michael Kratsios warned in a Bloomberg op-ed against “preemptive, burdensome or duplicative rules that would needlessly hamper AI innovation and growth”.)

The new European Commission, meanwhile, has been sounding a firmer line on both AI and big tech.

It has made tech-driven change a key policy priority, with president Ursula von der Leyen making public noises about reining in tech giants. She has also committed to publish “a coordinated European approach on the human and ethical implications of Artificial Intelligence” within her first 100 days in office. (She took up the post on December 1, 2019 so the clock is ticking.)

Last week a leaked draft of the Commission’s proposals for pan-EU AI regulation suggested it’s leaning towards a relatively light-touch approach (albeit, the European version of light touch is considerably more involved and interventionist than anything born in a Trump White House, clearly) — although the paper does float the idea of a temporary ban on the use of facial recognition technology in public places.

The paper notes that such a ban would “safeguard the rights of individuals, in particular against any possible abuse of the technology” — before arguing against such a “far-reaching measure that might hamper the development and uptake of this technology”, in favor of relying on provisions in existing EU law (such as the EU data protection framework, GDPR), in addition to relevant tweaks to current product safety and liability laws.

While it’s not yet clear which way the Commission will jump on regulating AI, even the lightish-touch version it’s considering would likely be a lot more onerous than Pichai would like.

In the op-ed he calls for what he couches as “sensible regulation” — aka taking a “proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities”.

For “social opportunities” read: The plentiful ‘business opportunities’ Google is spying — assuming the hoped-for vast additional revenue scale it can get by supercharging expansion of AI-powered services into all sorts of industries and sectors (from health to transportation to everywhere else in between) isn’t derailed by hard legal limits on where AI can actually be applied.

“Regulation can provide broad guidance while allowing for tailored implementation in different sectors,” Pichai urges, setting out a preference for enabling “principles” and post-application “reviews”, to keep the AI spice flowing.

The op-ed only touches very briefly on facial recognition — despite the FT editors choosing to illustrate it with an image of the tech. Here Pichai again seeks to reframe the debate around what is, by nature, an extremely rights-hostile technology — talking only in passing of “nefarious uses” of facial recognition.

Of course this wilfully obfuscates the inherent risks of letting blackbox machines make algorithmic guesses at identity every time a face happens to pass through a public space.

You can’t hope to protect people’s privacy in such a scenario. Many other rights are also at risk, depending on what else the technology is being used for. So, really, any use of facial recognition is laden with individual and societal risk.

But Pichai is seeking to put blinkers on lawmakers. He doesn’t want them to see inherent risks baked into such a potent and powerful technology — pushing them towards only a narrow, ill-intended subset of “nefarious” and “negative” AI uses and “consequences” as being worthy of “real concerns”. 

And so he returns to banging the drum for “a principled and regulated approach to applying AI” [emphasis ours] — putting the emphasis on regulation that, above all, gives the green light for AI to be applied.

What technologists fear most here are rules that tell them when artificial intelligence absolutely cannot be applied.

Ethics and principles are, to a degree, mutable concepts — and ones which the tech giants have become very practiced at claiming as their own, for PR purposes, including by attaching self-styled ‘guard-rails’ to their own AI operations. (But of course there’s no actual legal bind there.)

At the same time data-mining giants like Google are very smooth operators when it comes to gaming existing EU rules around data protection, such as by infesting their user-interfaces with confusing dark patterns that push people to click or swipe their rights away.

But a ban on applying certain types of AI would change the rules of the game. Because it would put society in the driving seat.

Laws that contained at least a moratorium on certain “dangerous” applications of AI — such as facial recognition technology, or autonomous weapons like the drone-based system Google was previously working on — have been called for by some far-sighted regulators.

And a ban would be far harder for platform giants to simply bend to their will.
