Fable Studio founder Edward Saatchi on designing virtual beings

In films, TV shows and books — and even in video games where characters are designed to respond to user behavior — we don’t perceive characters as beings with whom we can establish two-way relationships. But that’s poised to change, at least in some use cases.

Interactive characters — fictional, virtual personas capable of personalized interactions — are defining new territory in entertainment. In my guide to the concept of “virtual beings,” I outlined two categories of these characters:

  • virtual influencers: fictional characters with real-world social media accounts who build and engage with a mass following of fans.
  • virtual companions: AIs oriented toward one-to-one relationships, much like the tech depicted in the films “Her” and “Ex Machina.” They are personalized enough to engage us in entertaining discussions and respond to our behavior (in the physical world or within games) like a human would.

Part 3 of 3: designing virtual companions

In this discussion, Fable CEO Edward Saatchi addresses the technical and artistic dynamics of virtual companions: AIs created to establish one-to-one relationships with consumers. Saatchi believes such virtual beings will act as the next paradigm for human-computer interaction after mobile.

Maze raises $2 million and adds Figma support to enable user testing at scale

Maze wants to reinvent usability testing by letting you turn design prototypes into tests in just a few clicks. It could become the designer's equivalent of a developer's test suite: something you run before shipping an update to make sure everything works as intended. The startup just raised a $2 million funding round and launched a couple of new features.

Maze founders Jonathan Widawski and Thomas Mary have kept the same vision since I first covered the company: empowering designers and turning them into user testing experts. With Maze, you can turn your InVision, Marvel or Sketch projects into a browser-based user test.

You can then share a link with a group of users to get actionable insights on your upcoming design changes. Everything works in a web browser on both desktop and mobile.

After running a testing campaign, you get a detailed report with a success rate (how many people tapped on all the right buttons to achieve something in your app), where your users drop off, polling results and more.
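
Purely as an illustration (a hypothetical sketch, not Maze's actual implementation or API), here is roughly how a success rate and drop-off report could be computed from recorded test sessions:

    # Hypothetical data model, not Maze's actual API: each recorded session
    # lists the screens a tester reached and whether they completed the task.
    from collections import Counter

    sessions = [
        {"screens": ["home", "search", "checkout"], "completed": True},
        {"screens": ["home", "search"], "completed": False},
        {"screens": ["home"], "completed": False},
    ]

    # Success rate: share of testers who tapped through to the goal.
    success_rate = sum(s["completed"] for s in sessions) / len(sessions)

    # Drop-off: for unfinished sessions, count the last screen reached.
    drop_offs = Counter(s["screens"][-1] for s in sessions if not s["completed"])

    print(f"Success rate: {success_rate:.0%}")
    for screen, count in drop_offs.most_common():
        print(f"Dropped off at {screen}: {count} tester(s)")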

That product has been working well, attracting 20,000 users working for IBM, Greenpeace, Accenture, BMW and more.

Now, Maze also supports Figma projects. Given the hype behind Figma, adding this feature is important to stay relevant. It also opens up a new market for Maze — companies using Figma as their main design tool.

Maze has also added a feature that should be particularly useful for companies that are just starting with user testing. The startup can put together a testers panel for you.

This is completely optional and you can just stick with your monthly software-as-a-service plan and work with your own panel. But it provides a good end-to-end experience if you want to centralize all your user testing needs under one roof.

Maze has also raised a $2 million funding round. Amplify Partners is leading the round, with existing investors Seedcamp and Partech also participating. Business angels in this round include Eric Wittman, the former Director of Operations at Adobe and COO at Figma, Peter Skomoroch, the former Head of AI Automation & Data Products at Workday, and Datadog CEO Olivier Pomel.

AerCap CEO warns against panic discounts for Boeing 737 MAX jets

Owners of Boeing's grounded 737 MAX jet must keep cool heads and resist offering panic discounts on sale or lease prices, to avoid undermining the aircraft's long-term value, the head of one of the world's largest aircraft lessors, AerCap, said on Monday.

"Discipline and keeping a cool head is vital because if people panic and lease the airplane or sell the airplane at knock-down rates for an extended period of time, it will be harder for the residual value of that asset to recover," AerCap's Aengus Kelly told the Airline Economics aviation finance conference in Dublin.

AerCap has 100 Boeing 737 MAX jets on order.


Embraer studies turboprop to be developed through Boeing venture

Brazilian planemaker Embraer is in the advanced stages of studying the launch of a new turboprop aircraft to be developed through a venture it is planning with Boeing, subject to corporate approvals, a top executive said on Monday.

The aircraft would be in the same size range as, or even larger than, the 70-seat ATR-72, a Franco-Italian aircraft that currently dominates the market, Embraer Commercial Aviation Chief Executive John Slattery told Reuters.

Embraer agreed in 2018 to fold its commercial aircraft activities into a venture to be controlled by Boeing.


Google’s Sundar Pichai doesn’t want you to be clear-eyed about AI’s dangers

Alphabet and Google CEO, Sundar Pichai, is the latest tech giant kingpin to make a public call for AI to be regulated while simultaneously encouraging lawmakers towards a dilute enabling framework that does not put any hard limits on what can be done with AI technologies.

In an op-ed published in today’s Financial Times, Pichai makes a headline-grabbing call for artificial intelligence to be regulated. But his pitch injects a suggestive undercurrent that puffs up the risk for humanity of not letting technologists get on with business as usual and apply AI at population-scale — with the Google chief claiming: “AI has the potential to improve billions of lives, and the biggest risk may be failing to do so” — thereby seeking to frame ‘no hard limits’ as actually the safest option for humanity.

Simultaneously the pitch downplays any negatives that might cloud the greater good that Pichai implies AI will unlock — presenting “potential negative consequences” as simply the inevitable and necessary price of technological progress.

The leading suggestion is that it's all about managing the level of risk, rather than questioning outright whether the use of a hugely risk-laden technology such as facial recognition should actually be viable in a democratic society.

“Internal combustion engines allowed people to travel beyond their own areas but also caused more accidents,” Pichai writes, raiding history for a self-serving example while ignoring the vast climate costs of combustion engines (and the resulting threat now posed to the survival of countless species on Earth).

“The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread,” he goes on. “These lessons teach us that we need to be clear-eyed about what could go wrong.”

For “clear-eyed” read: Accepting of the technology-industry’s interpretation of ‘collateral damage’. (Which, in the case of misinformation and Facebook, appears to run to feeding democracy itself into the ad-targeting meat-grinder.)

Meanwhile, not at all mentioned in Pichai’s discussion of AI risks: The concentration of monopoly power that artificial intelligence appears to be very good at supercharging.

Funny that.

Of course it’s hardly surprising a tech giant that, in recent years, rebranded an entire research division to ‘Google AI’ — and has previously been called out by some of its own workforce over a project involving applying AI to military weapons technology — should be lobbying lawmakers to set AI ‘limits’ that are as dilute and abstract as possible.

For the industry, the only thing better than zero regulation is laws made by useful idiots who’ve fallen hook, line and sinker for industry-expounded false dichotomies — such as those claiming it’s ‘innovation or privacy’.

Pichai’s intervention also comes at a strategic moment, with US lawmakers eyeing AI regulation and the White House seemingly throwing itself into alignment with tech giants’ desires for ‘innovation-friendly’ rules which make their business easier. (To wit: This month White House CTO Michael Kratsios warned in a Bloomberg op-ed against “preemptive, burdensome or duplicative rules that would needlessly hamper AI innovation and growth”.)

The new European Commission, meanwhile, has been sounding a firmer line on both AI and big tech.

It has made tech-driven change a key policy priority, with president Ursula von der Leyen making public noises about reining in tech giants. She has also committed to publish “a coordinated European approach on the human and ethical implications of Artificial Intelligence” within her first 100 days in office. (She took up the post on December 1, 2019 so the clock is ticking.)

Last week a leaked draft of the Commission’s proposals for pan-EU AI regulation suggested it’s leaning towards a relatively light-touch approach (albeit, the European version of light touch is considerably more involved and interventionist than anything born in a Trump White House, clearly) — although the paper does float the idea of a temporary ban on the use of facial recognition technology in public places.

The paper notes that such a ban would “safeguard the rights of individuals, in particular against any possible abuse of the technology” — before arguing against such a “far-reaching measure that might hamper the development and uptake of this technology”, in favor of relying on provisions in existing EU law (such as the EU data protection framework, GDPR), in addition to relevant tweaks to current product safety and liability laws.

While it’s not yet clear which way the Commission will jump on regulating AI, even the lightish-touch version it’s considering would likely be a lot more onerous than Pichai would like.

In the op-ed he calls for what he couches as “sensible regulation” — aka taking a “proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities”.

For “social opportunities” read: the plentiful ‘business opportunities’ Google is spying — assuming the hoped-for vast additional revenue it can get by supercharging the expansion of AI-powered services into all sorts of industries and sectors (from health to transportation and everywhere in between) isn’t derailed by hard legal limits on where AI can actually be applied.

“Regulation can provide broad guidance while allowing for tailored implementation in different sectors,” Pichai urges, setting out a preference for enabling “principles” and post-application “reviews”, to keep the AI spice flowing.

The op-ed only touches very briefly on facial recognition — despite the FT editors choosing to illustrate it with an image of the tech. Here Pichai again seeks to reframe the debate around what is, by nature, an extremely rights-hostile technology — talking only in passing of “nefarious uses” of facial recognition.

Of course this wilfully obfuscates the inherent risks of letting blackbox machines make algorithmic guesses at identity every time a face happens to pass through a public space.

You can’t hope to protect people’s privacy in such a scenario. Many other rights are also at risk, depending on what else the technology is being used for. So, really, any use of facial recognition is laden with individual and societal risk.

But Pichai is seeking to put blinkers on lawmakers. He doesn’t want them to see inherent risks baked into such a potent and powerful technology — pushing them towards only a narrow, ill-intended subset of “nefarious” and “negative” AI uses and “consequences” as being worthy of “real concerns”. 

And so he returns to banging the drum for “a principled and regulated approach to applying AI” [emphasis ours] — putting the emphasis on regulation that, above all, gives the green light for AI to be applied.

What technologists fear most here is rules that tell them when artificial intelligence absolutely cannot apply.

Ethics and principles are, to a degree, mutable concepts — and ones which the tech giants have become very practiced at claiming as their own, for PR purposes, including by attaching self-styled ‘guard-rails’ to their own AI operations. (But of course there’s no actual legal bind there.)

At the same time data-mining giants like Google are very smooth operators when it comes to gaming existing EU rules around data protection, such as by infesting their user-interfaces with confusing dark patterns that push people to click or swipe their rights away.

But a ban on applying certain types of AI would change the rules of the game. Because it would put society in the driving seat.

Some far-sighted regulators have called for laws containing at least a moratorium on certain “dangerous” applications of AI — such as facial recognition technology, or autonomous weapons like the drone-based system Google was previously working on.

And a ban would be far harder for platform giants to simply bend to their will.

The marketplace of ideas is a weapons market now

The most interesting thing I saw online this week was Venkatesh Rao’s “Internet of Beefs” essay. I don’t agree with all of it. I’m not even sure I agree with most of it. But it’s a sharp, perceptive, well-argued piece which offers an explanation for why online public spaces have almost all become battlefields, or, as he puts it:

“are now being slowly taken over by beef-only thinkers … Anything that is not an expression of pure, unqualified support for whatever they are doing or saying is received as a mark of disrespect, and a provocation … as the global culture wars evolve into a stable, endemic, background societal condition of continuous conflict.” He goes on to taxonomize the online knights and mooks who fight in this conflict, in incisive detail.

I agree this continuous conflict exists. (There exists another theory arguing that it’s really mostly bots and disinformation ops. Maybe, I guess, but that claim seems increasingly unconvincing.) I think this seething tire-fire conflict is part of something larger: the transition of the marketplace of ideas from a stock market into a weapons market.

Once, the idea was, there existed a “marketplace of ideas,” wherein people from across the political spectrum — generally the highly educated, but with some room for notions bubbling up from the grassroots — would introduce ideas for initiatives, actions, programs, and/or laws. These ideas would be considered, contrasted, debated, honed, amended, and weighed, and over time, in the same way stock markets identify the best companies, the marketplace of ideas would identify the finest concepts. These in turn would then see actual implementation, courtesy of those in power — i.e. the rich and the elected — for the greater good of all.

This was the world of think tanks, of policy documents, of presentations at important conferences, of reporting breathlessly on major speeches, of trial-balloon op-eds, of congressional and parliamentary testimony, of councils and summits and studies that produced lavishly bound reports with the expectation that they would be seriously and judiciously considered by all sides of a debate. It was a world where new ideas might climb the hierarchy of the so-called great and good until they rose high enough that it was seen fit to actually implement them.

I don’t know if you’ve noticed, but if we ever lived in a world anything like that, well, we don’t any more. Some reject it on the (correct) grounds that this so-called marketplace of ideas, shockingly, always seemed to favor entrenching the interests of those “great and good,” the rich and the elected, the councilors and the presenters, rather than the larger population. Others simply want more for themselves and less for everyone else, rather than aiming for any kind of Pareto-optimal ideal outcome for all.

Nowadays the primary goal is to win the conflict, and other outcomes are at best secondary. Policy documents and statistical analyses are not taken for serious across-the-board consideration; they are simply weapons, or fig leaves, to serve as defenses or pretexts for decisions which have already been made.

This may seem so self-evident that it’s not even worth writing about — you probably need only consider your local national politics — but the strange thing is that so many of the participants in the whole apparatus, the policy analysts and think tankers and speechgivers and presenters, don’t seem to realize that nowadays their output is used as weapons and pretexts, rather than ideas to compete with other ideas in a rational marketplace.

Let’s pick a few relatively apolitical/acultural examples, to minimize the chance of your own ingrained conflict responses kicking in. Consider NIMBYism in Bay Area real estate: the opposition to building more housing on the grounds that this could not possibly lower housing prices. It’s a perfect example of a low-level constant conflict in which all participants have long since decided on their sides. There is no point in bringing conflicting data to a NIMBY (and, of course, they would say the same about a YIMBY like myself), as they will find a way to dismiss or ignore it. You can lead a horse to data, but you can’t make it think.

A couple more low-politics examples from my own online spaces: in the cryptocurrency world, most participants are so incentivized to believe in their One Truth that nearly every idea or proposal leads to an angry chorus denouncing all other truths. Or consider advocates of greater law enforcement “lawful access” to all encrypted messaging, vs. my own side, that of privacy advocates devoutly opposed to such. Neither side seems particularly interested in actually seriously considering any new data or new idea which might support the other side’s arguments. The dispute is more fundamental than that.

There exist a few remaining genuine marketplaces of ideas. Engineering standards and protocols, for one. (Yes, politics and personal hobbyhorses and vendettas creep in everywhere, even there, but relatively speaking they remain genuine.) The law, for another, albeit seemingly decreasingly so. But increasingly, academic papers, policy analyses, cross-sectional studies, closely argued op-eds, center-stage presentations, etc., are all artifacts of a world which no longer exists, if it ever really did. Nowadays these artifacts are largely just used to add a veneer of respectability to pre-existing tribal beliefs.

This isn’t true of every politician, CEO, billionaire, or other decisionmaker. And it’s certainly more true of one side than the other. But the increasingly irrelevant nature of our so-called marketplace of ideas seems hard to ignore. Perhaps, when it comes to the tangible impact of these ceaseless online coal-fire conflicts, that old joke at the expense of academia applies: the discourse is so vicious because the stakes are so small.

Max Q: SpaceX succeeds with a spectacular Crew Dragon test launch

Max Q is a new weekly newsletter all about space. Sign up here to receive it in your inbox every Sunday.

We’re off and running with good milestones achieved for NASA’s commercial crew program, which means it’s more likely than ever we’ll actually see astronauts launch from U.S. soil before the year is out.

If that’s not enough to get you pumped about the space sector in 2020, we also have a great overview of 2019 in space tech investment, and a look forward at what’s happening next year from Space Angels’ Chad Anderson. Plus, we announced our own dedicated space event, which is happening this June.

SpaceX successfully tests Crew Dragon safety system

SpaceX launched its Crew Dragon commercial astronaut spacecraft on Sunday. No one was on board, but the test was crucial because it included firing off the in-flight abort (IFA) safety system that will protect actual astronauts should anything go wrong with future real missions.

The SpaceX in-flight abort test produced a planned fireball as the Falcon 9 rocket the capsule launched on broke apart.

The IFA seems to have worked as intended, propelling the Crew Dragon away from the Falcon 9 it was launched on top of at high speed. In an actual emergency, this would ensure that the astronauts aboard were transported to a safe distance, and then returned to Earth at a safe speed using the onboard parachutes, which seem to have deployed exactly as planned.

Elon Musk details Starship operational plans

SpaceX CEO Elon Musk is looking a bit further ahead, in the meantime, to when his company’s Starship spacecraft is fully operational and making regular trips to Mars. Musk said he wants to be launching Starships as often as three times a day, with the goal of moving megatons of cargo and up to a million people to Mars at full target operating pace.
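
As a rough sanity check on those targets, here is some back-of-envelope arithmetic of my own, assuming on the order of 100 tonnes of payload per flight rather than any figure from SpaceX:

    # Back-of-envelope arithmetic with assumed figures (not SpaceX's own numbers).
    payload_tonnes_per_flight = 100   # assumed ~100 tonnes of cargo per flight
    flights_per_day = 3               # the "three times a day" target per ship

    tonnes_per_ship_per_year = payload_tonnes_per_flight * flights_per_day * 365
    print(f"~{tonnes_per_ship_per_year:,} tonnes per ship per year")  # ~109,500

    # "Megatons" therefore implies a fleet: roughly nine or ten ships flying at
    # that cadence for a year would move about one megaton of cargo.
    ships_for_one_megaton = 1_000_000 / tonnes_per_ship_per_year
    print(f"~{ships_for_one_megaton:.0f} ships for one megaton per year")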

SpinLaunch raises $35M more for catapult launcher

Secretive space launch startup SpinLaunch is adding to its operating capital with a new $35 million investment, a round led by Airbus Ventures, GV and others. The company wants to use rotational force to effectively fling payloads out of Earth’s atmosphere – without using any rockets. Sounds insane, but I’ve heard from people much smarter than me that both the company and the core concept are sound.
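
To get a feel for why the core concept is physically plausible, here is a quick centripetal-acceleration estimate using illustrative numbers of my own (a roughly 50-meter arm and a roughly 2 km/s release speed, not SpinLaunch's published specs):

    # Illustrative physics check with assumed numbers (not SpinLaunch's specs).
    release_velocity_m_s = 2_000   # assumed release speed of roughly 2 km/s
    arm_radius_m = 50              # assumed spin radius of roughly 50 m

    centripetal_accel = release_velocity_m_s ** 2 / arm_radius_m   # a = v^2 / r
    g_load = centripetal_accel / 9.81

    # Roughly 80,000 m/s^2, i.e. thousands of g: only feasible for payloads
    # hardened to take it, and far too much for people.
    print(f"Centripetal acceleration: {centripetal_accel:,.0f} m/s^2 (~{g_load:,.0f} g)")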

What 2020 holds for space startup investment

I spoke to Space Angels CEO Chad Anderson about his company’s quarterly tracking of private investment in the space technology sector, which they’ve been doing since 2017. They’re uniquely well-positioned to combine data from public sources with data from the companies they speak to and perform due diligence on, so there’s no better place to look for insight on where we’ve been, and an educated perspective on where we’re going. (Extra Crunch subscription required.)

Rocket Lab is expanding its LA presence

Rocket Lab was born in New Zealand, and still operates a facility and main launch pad there, but it’s increasingly building out its U.S. presence, too. Now, the company shared its plans to build a combined HQ/Mission Control/rocket fab facility in LA. Construction is already underway, and it should be completed later this year.

Orbex lands a new customer with lots of rideshare mission experience

‘Rideshare’ in space means something entirely different than it does on Earth – you’re not hailing an Uber, you’re booking one portion of cargo space aboard a rocket with a group of other clients. Orbex has a new customer that bought up all the capacity for one of its future rideshare missions, planned for 2022. The new launch provider hasn’t actually launched any rockets, however, so it’ll have to pass that key milestone before it makes good on that new contract.

We’re having a space event!

Yes, it’s official: TechCrunch is hosting its own space-focused tech event on June 25 in LA. This will be a one-day, high-profile program featuring discussions with the top companies and people in space tech, startups and investment. We’ll be revealing more about programming over the next few months, but if you get in now you can guarantee your spot.