Mobileye CEO clowns on Nvidia for allegedly copying self-driving car safety scheme

While creating self-driving car systems, it’s natural that different companies might independently arrive at similar methods or results — but the similarities in a recent “first of its kind” Nvidia proposal to work done by Mobileye two years ago were just too much for the latter company’s CEO to take politely.

Amnon Shashua, in a blog post on parent company Intel’s news feed cheekily titled “Innovation Requires Originality,” openly mocks Nvidia’s “Safety Force Field,” pointing out innumerable similarities to Mobileye’s “Responsibility-Sensitive Safety” paper from 2017.

He writes:

It is clear Nvidia’s leaders have continued their pattern of imitation as their so-called “first-of-its-kind” safety concept is a close replica of the RSS model we published nearly two years ago. In our opinion, SFF is simply an inferior version of RSS dressed in green and black. To the extent there is any innovation there, it appears to be primarily of the linguistic variety.

Now, it’s worth considering the idea that the approach both companies seem to take is, like many in the automotive and autonomy fields, simply inevitable. Car makers don’t go around accusing each other of using a similar setup of four wheels and two pedals. It’s partly for this reason, and partly because the safety model works better the more cars follow it, that Mobileye published its RSS paper openly and invited the industry to collaborate.

Many did, including, as Shashua points out, Nvidia, which collaborated for a short time in 2018 before pulling out of the talks. To withdraw and then, a year later, propose a system that is, if not identical, at least remarkably similar, without crediting or even mentioning Mobileye, is suspicious to say the least.

The (highly simplified) foundation of both is calculating a set of standard actions corresponding to laws and human behavior that plan safe maneuvers based on the car’s own physical parameters and those of nearby objects and actors. But the similarities extend beyond these basics, Shashua writes (emphasis his):

RSS defines a safe longitudinal and a safe lateral distance around the vehicle. When those safe distances are compromised, we say that the vehicle is in a Dangerous Situation and must perform a Proper Response. The specific moment when the vehicle must perform the Proper Response is called the Danger Threshold.

SFF defines identical concepts with slightly modified terminology. Safe longitudinal distance is instead called “the SFF in One Dimension;” safe lateral distance is described as “the SFF in Higher Dimensions.”  Instead of Proper Response, SFF uses “Safety Procedure.” Instead of Dangerous Situation, SFF replaces it with “Unsafe Situation.” And, just to be complete, SFF also recognizes the existence of a Danger Threshold, instead calling it a “Critical Moment.”

This is followed by numerous other close parallels, and just when you think it’s done, he includes a whole separate document (PDF) showing dozens of other cases where Nvidia seems (it’s hard to tell in some cases if you’re not closely familiar with the subject matter) to have followed Mobileye and RSS’s example over and over again.
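For a sense of how concrete the contested ideas are: the public RSS paper reduces safe longitudinal distance to a closed-form, worst-case calculation. Here’s a minimal sketch of that formula in Python (the parameter names and example values are mine, not Mobileye’s), an illustration rather than a reference implementation:

```python
def rss_safe_longitudinal_distance(
    v_rear: float,              # following car's speed (m/s)
    v_front: float,             # lead car's speed (m/s)
    rho: float = 0.5,           # following car's response time (s)
    a_accel_max: float = 3.0,   # max acceleration during response time (m/s^2)
    a_brake_min: float = 4.0,   # minimum braking the follower commits to (m/s^2)
    a_brake_max: float = 8.0,   # maximum braking the lead car might apply (m/s^2)
) -> float:
    """Worst-case minimum gap per the 2017 RSS paper: assume the lead car
    brakes as hard as possible while the follower accelerates for its full
    response time, then brakes only gently."""
    v_after_response = v_rear + rho * a_accel_max
    gap = (
        v_rear * rho
        + 0.5 * a_accel_max * rho**2
        + v_after_response**2 / (2 * a_brake_min)
        - v_front**2 / (2 * a_brake_max)
    )
    return max(gap, 0.0)

# Both cars at highway speed (30 m/s): roughly an 83 m gap is required
# with these illustrative parameters.
print(rss_safe_longitudinal_distance(30.0, 30.0))
```

Fall below that gap and the car is, in RSS terms, in a Dangerous Situation; it must then execute the Proper Response until the safe distance is restored.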

Theoretical work like this isn’t really patentable, and patenting wouldn’t be wise anyway, since widespread adoption of the basic ideas is the most desirable outcome (as both papers emphasize). But it’s common for one R&D group to push in one direction and have others refine or create counter-approaches.

You see it in computer vision, where for example Google boffins may publish their early and interesting work, which is picked up by FAIR or Uber and improved or added to in another paper 8 months later. So it really would have been fine for Nvidia to publicly say “Mobileye proposed some stuff, that’s great but here’s our superior approach.”

Instead there is no mention of RSS at all, which is strange considering their similarity, and the only citation in the SFF whitepaper is “The Safety Force Field, Nvidia, 2017,” in which, we are informed on the very first line, “the precise math is detailed.”

Just one problem: This paper doesn’t seem to exist anywhere. It certainly was never published publicly in any journal or blog post by the company. It has no DOI number and doesn’t show up in any searches or article archives. This appears to be the first time anyone has ever cited it.

It’s not required for rival companies to be civil with each other all the time, but in the research world this will almost certainly be considered poor form by Nvidia, and that can have knock-on effects when it comes to recruiting and overall credibility.

I’ve contacted Nvidia for comment (and to ask for a copy of this mysterious paper). I’ll update this post if I hear back.

How to watch the live stream for today’s Apple keynote

Apple is holding a keynote today on its campus in Cupertino, and the company is expected to talk about new services. Don’t expect any new devices; today’s event should be all about content. At 10 AM PT (1 PM in New York, 5 PM in London, 6 PM in Paris), you’ll be able to watch the event as the company streams it live.

Rumor has it that the company plans to unveil multiple new services. The most anticipated one will be a new video streaming service that should compete with Netflix, Amazon Prime Video and others. In addition to that service, Apple will unveil an Apple News subscription to access magazines and premium articles for a flat monthly fee.

But we might also hear about a mysterious credit card and a gaming subscription service. Details are still thin, so it’s going to be interesting to hear Apple talk about all those services.

If you have an Apple TV, you can download the Apple Events app in the App Store. It lets you stream today’s event and rewatch old ones. The app icon was updated a few days ago for the event.

And if you don’t have an Apple TV, the company also lets you live-stream the event from the Apple Events section on its website. This video feed now works in all major browsers — Safari, Microsoft Edge, Google Chrome and Mozilla Firefox.

So to recap, here’s how you can watch today’s Apple event:

  • Your favorite web browser on the Mac or Windows 10.
  • An Apple TV with the Apple Events app in the App Store.
  • Google Chrome on your Android phone.
  • And here’s the link to the live stream.

Of course, you also can read TechCrunch’s live blog if you’re stuck at work and really need our entertaining commentary track to help you get through your day. We have a team in the room.

Apple could charge $9.99 per month each for HBO, Showtime and Starz

The Wall Street Journal has published a report on Apple’s media push. The company is about to unveil a new video streaming service and an Apple News subscription on Monday.

According to The WSJ, you’ll be able to subscribe to multiple content packages to increase the video library in a new app called Apple TV — it’s unclear if this app is going to replace the existing Apple TV app.

The service would work more or less like Amazon Prime Video Channels. Users will be able to subscribe to HBO, Showtime or Starz for a monthly fee. The WSJ says that these three partners would charge $9.99 per month each.

According to a previous report from CNBC, the new service differs from the existing Apple TV app in that you won’t be redirected to another app. Everything will be available within a single app.

Controlling the experience from start to finish would be a great advantage for users. As many people now suffer from subscription fatigue, Apple would be able to centralize all your content subscriptions in a single app. You could tick and untick options depending on your needs.

But some companies probably don’t want to partner with Apple. It’s highly unlikely that you’ll find Netflix or Amazon Prime Video content in the Apple TV app. Those services also want to control the experience from start to finish. It’s also easier to gather data analytics when subscribers are using your own app.

Apple should open up the Apple TV app to other platforms. Just like you can listen to Apple Music on Android, a Sonos speaker or an Amazon Echo, Apple is working on apps for smart TVs. The company has already launched iTunes Store apps on Samsung TVs, so it wouldn’t be a big surprise.

The company has also spent a ton of money on original content for its own service. Details are still thin on this front. Many of those shows might not be ready for Monday. Do you have to pay to access Apple’s content too? How much? We’ll find out on Monday.

When it comes to Apple News, The WSJ says that content from 200 magazines and newspapers will be available for $9.99 per month. The WSJ also confirms a New York Times report that said The Wall Street Journal itself would be part of the subscription.

Apple is also monitoring the App Store to detect popular apps according to multiple metrics, The WSJ says. Sure, Apple runs the App Store. But Facebook faced a public outcry when people realized that Facebook was monitoring popular apps with a VPN app called Onavo.

Apple could announce its gaming subscription service on Monday

Apple is about to announce some new services on Monday. While everybody expects a video streaming service as well as a news subscription, a new report from Bloomberg says that the company might also mention its gaming subscription.

Cheddar first reported back in January that Apple has been working on a gaming subscription. Users could pay a monthly subscription fee to access a library of games. We’re most likely talking about iOS games for the iPhone and iPad here.

Games are the most popular category on the App Store, so it makes sense to turn this category into a subscription business. And yet, most of them are free-to-play, ad-supported games. Apple doesn’t necessarily want to target those games in particular.

According to Bloomberg, the service will focus on paid games from third-party developers, such as Minecraft, NBA 2K games and the GTA franchise. Users would essentially pay to access this bundle of games. Apple would redistribute revenue to game developers based on how much time users spend in each game.
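If Bloomberg’s description is accurate, the payout math would amount to simple pro-rata accounting. A hypothetical sketch (the function, game names and numbers are invented for illustration; Apple’s actual formula is unknown):

```python
def split_subscription_revenue(pool: float, play_seconds: dict[str, float]) -> dict[str, float]:
    """Divide a revenue pool among games, pro rata by time played.

    Purely illustrative: Apple's real payout formula, fees and caps
    have not been disclosed.
    """
    total = sum(play_seconds.values())
    if total == 0:
        return {game: 0.0 for game in play_seconds}
    return {game: pool * seconds / total for game, seconds in play_seconds.items()}

# One subscriber's month (price and playtimes invented), split across three games:
print(split_subscription_revenue(9.99, {"Minecraft": 3600, "NBA 2K": 1800, "GTA": 600}))
```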

It’s still unclear whether Apple will announce the service or launch it on Monday. The gaming industry is more fragmented than the movie and TV industry, so it makes sense to talk about the service publicly even if it’s not ready just yet.

The damage of defaults

Apple popped out a new pair of AirPods this week. The design looks exactly like the old pair of AirPods. Which means I’m never going to use them because Apple’s bulbous earbuds don’t fit my ears. Think square peg, round hole.

The only way I could rock AirPods would be to walk around with hands clamped to the sides of my head to stop them from falling out. Which might make a nice cut in a glossy Apple ad for the gizmo — suggesting a feeling of closeness to the music, such that you can’t help but cup; a suggestive visual metaphor for the aural intimacy Apple surely wants its technology to communicate.

But the reality of trying to use earbuds that don’t fit is not that at all. It’s just shit. They fall out at the slightest movement so you either sit and never turn your head or, yes, hold them in with your hands. Oh hai, hands-not-so-free-pods!

The obvious point here is that one size does not fit all — howsoever much Apple’s Jony Ive and his softly spoken design team believe they have devised a universal earbud that pops snugly in every ear and just works. Sorry, nope!

A proportion of iOS users — perhaps other petite women like me, or indeed men with less capacious ear holes — are simply being removed from Apple’s sales equation where earbuds are concerned. Apple is pretending we don’t exist.

Sure we can just buy another brand of more appropriately sized earbuds. The in-ear, noise-canceling kind are my preference. Apple does not make ‘InPods’. But that’s not a huge deal. Well, not yet.

It’s true, the consumer tech giant did also delete the headphone jack from iPhones. Thereby depreciating my existing pair of wired in-ear headphones (if I ever upgrade to a 3.5mm-jack-less iPhone). But I could just shell out for Bluetooth wireless in-ear buds that fit my shell-like ears and carry on as normal.

Universal in-ear headphones have existed for years, of course. A delightful design concept. You get a selection of different sized rubber caps shipped with the product and choose the size that best fits.

Unfortunately Apple isn’t in the ‘InPods’ business though. Possibly for aesthetic reasons. Most likely because — and there’s more than a little irony here — an in-ear design wouldn’t be naturally roomy enough to fit all the stuff Siri needs to, y’know, fake intelligence.

Which means people like me with small ears are being passed over in favor of Apple’s voice assistant. So that’s AI: 1, non-‘standard’-sized human: 0. Which also, unsurprisingly, feels like shit.

I say ‘yet’ because if voice computing does become the next major computing interaction paradigm, as some believe — given how Internet connectivity is set to get baked into everything (and sticking screens everywhere would be a visual and usability nightmare; albeit microphones everywhere is a privacy nightmare… ) — then the minority of humans with petite earholes will be at a disadvantage vs those who can just pop in their smart, sensor-packed earbud and get on with telling their Internet-enabled surroundings to do their bidding.

Will parents of future generations of designer babies select for adequately capacious earholes so their child can pop an AI in? Let’s hope not.

We’re also not at the voice computing singularity yet. Outside the usual tech bubbles it remains a bit of a novel gimmick. Amazon has drummed up some interest with in-home smart speakers housing its own voice AI Alexa (a brand choice that has, incidentally, caused a verbal headache for actual humans called Alexa). Though its Echo smart speakers appear to mostly get used as expensive weather checkers and egg timers. Or else for playing music — a function that a standard speaker or smartphone will happily perform.

Certainly a voice AI is not something you need with you 24/7 yet. Prodding at a touchscreen remains the standard way of tapping into the power and convenience of mobile computing for the majority of consumers in developed markets.

The thing is, though, it still grates to be ignored. To be told — even indirectly — by one of the world’s wealthiest consumer technology companies that it doesn’t believe your ears exist.

Or, well, that it’s weighed up the sales calculations and decided it’s okay to drop a petite-holed minority on the cutting room floor. So that’s ‘ear meet AirPod’. Not ‘AirPod meet ear’ then.

But the underlying issue is much bigger than Apple’s (in my case) oversized earbuds. Its latest shiny set of AirPods is just an ill-fitting reminder of how many technology defaults simply don’t ‘fit’ the world as claimed.

Because if cash-rich Apple’s okay with promoting a universal default (that isn’t), think of all the less well resourced technology firms chasing scale for other single-sized, ill-fitting solutions. And all the problems flowing from attempts to mash ill-mapped technology onto society at large.

When it comes to wrong-sized physical kit I’ve had similar issues with standard office computing equipment and furniture. Products that seem — surprise, surprise! — to have been default designed with a 6ft strapping guy in mind. Keyboards so long they end up gifting the smaller user RSI. Office chairs that deliver chronic back-pain as a service. Chunky mice that quickly wrack the hand with pain. (Apple is a historical offender there too I’m afraid.)

The fix for such ergonomic design failures is simply not to use the kit. To find a better-sized (often DIY) alternative that does ‘fit’.

But a DIY fix may not be an option when discrepancy is embedded at the software level — and where a system is being applied to you, rather than you the human wanting to augment yourself with a bit of tech, such as a pair of smart earbuds.

With software, embedded flaws and system design failures may also be harder to spot because it’s not necessarily immediately obvious there’s a problem. Oftentimes algorithmic bias isn’t visible until damage has been done.

And there’s no shortage of stories already about how software defaults configured for a biased median have ended up causing real-world harm. (See for example: ProPublica’s analysis of the COMPAS recidivism tool — software it found incorrectly judging black defendants more likely to offend than white. So software amplifying existing racial prejudice.)
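ProPublica’s headline finding was exactly this kind of disparity: error rates that differ by group, something any team can audit before shipping a model. A minimal sketch of such a check (the column names are invented, not ProPublica’s schema):

```python
import pandas as pd

def false_positive_rate_by_group(
    df: pd.DataFrame, group_col: str, predicted_col: str, actual_col: str
) -> pd.Series:
    """False positive rate per group: the share of people flagged high-risk
    (predicted 1) among those who did not actually reoffend (actual 0)."""
    did_not_reoffend = df[df[actual_col] == 0]
    return did_not_reoffend.groupby(group_col)[predicted_col].mean()

# Hypothetical usage; a large gap between groups is the red flag:
# df = pd.read_csv("risk_scores.csv")
# print(false_positive_rate_by_group(df, "race", "predicted_high_risk", "reoffended"))
```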

Of course AI makes this problem so much worse.

Which is why the emphasis must be on catching bias in the datasets — before there is a chance for prejudice or bias to be ‘systematized’ and get baked into algorithms that can do damage at scale.

The algorithms must also be explainable. And outcomes auditable. Transparency as disinfectant; not secret black boxes stuffed with unknowable code.

Doing all this requires huge up-front thought and effort on system design, and an even bigger change of attitude. It also needs massive, massive attention to diversity. An industry-wide championing of humanity’s multifaceted and multi-sized reality — and to making sure that’s reflected in both data and design choices (and therefore the teams doing the design and dev work).

You could say what’s needed is a recognition that there’s never, ever a one-size-fits-all plug.

Indeed, that all algorithmic ‘solutions’ are abstractions that make compromises on accuracy and utility. And that those trade-offs can become viciously cutting knives that exclude, deny, disadvantage, delete and damage people at scale.

Expensive earbuds that won’t stay put are just a handy visual metaphor.

And while discussion about the risks and challenges of algorithmic bias has stepped up in recent years, as AI technologies have proliferated — with mainstream tech conferences actively debating how to “democratize AI” and bake diversity and ethics into system design via a development focus on principles like transparency, explainability, accountability and fairness — the industry has not even begun to fix its diversity problem.

It’s barely moved the needle on diversity. And its products continue to reflect that fundamental flaw.

Many — if not most — of the tech industry’s problems can be traced back to the fact that inadequately diverse teams are chasing scale while lacking the perspective to realize their system design is repurposing human harm as a de facto performance measure. (Although ‘lack of perspective’ is the charitable interpretation in certain cases; moral vacuum may be closer to the mark.)

As WWW creator Sir Tim Berners-Lee has pointed out, system design is now society design. That means engineers, coders and AI technologists are all working at the frontline of ethics. The design choices they make have the potential to impact, influence and shape the lives of millions and even billions of people.

And when you’re designing society a median mindset and limited perspective cannot ever be an acceptable foundation. It’s also a recipe for product failure down the line.

The current backlash against big tech shows that the stakes and the damage are very real when poorly designed technologies get dumped thoughtlessly on people.

Life is messy and complex. People won’t fit a platform that oversimplifies and overlooks. And if your excuse for scaling harm is ‘we just didn’t think of that’ you’ve failed at your job and should really be headed out the door.

Because the consequences for being excluded by flawed system design are also scaling and stepping up as platforms proliferate and more life-impacting decisions get automated. Harm is being squared. Even as the underlying industry drum hasn’t skipped a beat in its prediction that everything will be digitized.

Which means that horribly biased parole systems are just the tip of the ethical iceberg. Think of healthcare, social welfare, law enforcement, education, recruitment, transportation, construction, urban environments, farming, the military; the list of what will be digitized — and of manual or human-overseen processes that will get systematized and automated — goes on.

Software — runs the industry mantra — is eating the world. That means badly designed technology products will harm more and more people.

But responsibility for sociotechnical misfit can’t just be scaled away as so much ‘collateral damage’.

So while an ‘elite’ design team led by a famous white guy might be able to craft a pleasingly curved earbud, such an approach cannot and does not automagically translate into AirPods with perfect, universal fit.

It’s someone’s standard. It’s certainly not mine.

We can posit that a more diverse Apple design team might have been able to rethink the AirPod design so as not to exclude those with smaller ears. Or make a case to convince the powers that be in Cupertino to add another size choice. We can but speculate.

What’s clear is the future of technology design can’t be so stubborn.

It must be radically inclusive and incredibly sensitive. Human-centric. Not locked to damaging defaults in its haste to impose a limited set of ideas.

Above all, it needs a listening ear on the world.

Indifference to difference and a blindspot for diversity will find no future here.

Gates-backed Lumotive upends lidar conventions using metamaterials

Pretty much every self-driving car on the road, not to mention many a robot and drone, uses lidar to sense its surroundings. But useful as lidar is, it also involves physical compromises that limit its capabilities. Lumotive is a new company with funding from Bill Gates and Intellectual Ventures that uses metamaterials to exceed those limits, perhaps setting a new standard for the industry.

The company is just now coming out of stealth, but it’s been in the works for a long time. I actually met with them back in 2017, when the project was very hush-hush and operating under a different name at IV’s startup incubator. If the terms “metamaterials” and “Intellectual Ventures” tickle something in your brain, it’s because IV has spawned several startups that use intellectual property developed there, building on the work of materials scientist David Smith.

Metamaterials are essentially specially engineered surfaces with microscopic structures — in this case, tunable antennas — embedded in them, working as a single device.

Echodyne is another company that used metamaterials to great effect, shrinking radar arrays to pocket size by engineering a radar transceiver that’s essentially 2D and can have its beam steered electronically rather than mechanically.

The principle works for pretty much any wavelength of electromagnetic radiation — i.e. you could use X-rays instead of radio waves — but until now no one has made it work with visible light. That’s Lumotive’s advance, and the reason it works so well.

Flash, 2D and 1D lidar

Lidar basically works by bouncing light off the environment and measuring how and when it returns; this can be accomplished in several ways.
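Whatever the scanning scheme, the underlying measurement is time of flight: distance is half the round trip at the speed of light. A back-of-the-envelope sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def lidar_distance(round_trip_seconds: float) -> float:
    """Distance to a target from a lidar pulse's round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A return arriving ~667 nanoseconds after the pulse puts the target
# about 100 meters away:
print(lidar_distance(667e-9))  # ~99.98 m
```

The differences between lidar types come down to how that measurement gets swept across the scene.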

Flash lidar basically sends out a pulse that illuminates the whole scene with near-infrared light (905 nanometers, most likely) at once. This provides a quick measurement of the whole scene, but range is limited, since the emitted light’s power is spread across the entire field of view.

2D or raster scan lidar takes an NIR laser and plays it over the scene incredibly quickly, left to right, down a bit, then does it again, again and again… scores or hundreds of times. Focusing the power into a beam gives these systems excellent range, but similar to a CRT TV with an electron beam tracing out the image, it takes rather a long time to complete the whole scene. Turnaround time is naturally of major importance in driving situations.

1D or line scan lidar strikes a balance between the two, using a vertical line of laser light that only has to go from one side to the other to complete the scene. This sacrifices some range and resolution but significantly improves responsiveness.

Lumotive offered the following diagram, which helps visualize the systems, although obviously “suitability” and “too short” and “too slow” are somewhat subjective:

The main problem with the latter two is that they rely on a mechanical platform to actually move the laser emitter or mirror from place to place. It works fine for the most part, but there are inherent limitations. For instance, it’s difficult to stop, slow or reverse a beam that’s being moved by a high-speed mechanism. If your 2D lidar system sweeps over something that could be worth further inspection, it has to go through the rest of its motions before coming back to it… over and over.

This is the primary advantage offered by a metamaterial system over existing ones: electronic beam steering. In Echodyne’s case the radar could quickly sweep over its whole range like normal, and upon detecting an object could immediately switch over and focus 90 percent of its cycles tracking it in higher spatial and temporal resolution. The same thing is now possible with lidar.

Imagine a deer jumping out around a blind curve. Every millisecond counts because the earlier a self-driving system knows the situation, the more options it has to accommodate it. All other things being equal, an electronically steered lidar system would detect the deer at the same time as the mechanically steered ones, or perhaps a bit sooner; upon noticing this movement, it could not just make more time for evaluating it on the next “pass,” but a microsecond later be backing up the beam and specifically targeting just the deer with the majority of its resolution.

Just for illustration. The beam isn’t some big red thing that comes out.

Targeted illumination would also improve the estimation of direction and speed, further improving the driving system’s knowledge and options — meanwhile, the beam can still dedicate a portion of its cycles to watching the road, requiring no complicated mechanical hijinks to do so. It also has an enormous aperture, allowing high sensitivity.

In terms of specs, it depends on many things, but if the beam is just sweeping normally across its 120×25 degree field of view, the standard unit will have about a 20Hz frame rate, with a 1000×256 resolution. That’s comparable to competitors, but keep in mind that the advantage is in the ability to change that field of view and frame rate on the fly. In the example of the deer, it may maintain a 20Hz refresh for the scene at large but concentrate more beam time on a 5×5 degree area, giving it a much faster rate.
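The arithmetic behind that flexibility is worth spelling out: electronic steering means the same measurement budget can be re-split on the fly. A rough sketch using the figures above (the uniform point-budget model is my simplification, not Lumotive’s spec):

```python
# Standard scan: 1000x256 points per frame at 20 Hz across a 120x25 degree field.
POINTS_PER_FRAME = 1000 * 256
FRAME_RATE_HZ = 20
POINT_BUDGET_PER_SEC = POINTS_PER_FRAME * FRAME_RATE_HZ  # ~5.12M measurements/s

def roi_refresh_rate(roi_points: int, budget_fraction: float) -> float:
    """Refresh rate achievable on a region of interest, given the share of
    the per-second point budget diverted to it."""
    return POINT_BUDGET_PER_SEC * budget_fraction / roi_points

# Divert 10% of the budget to a small 50x50-point patch around the deer:
print(roi_refresh_rate(50 * 50, 0.10))  # ~205 Hz on that patch
```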

Meta doesn’t mean mega-expensive

Naturally one would assume that such a system would be considerably more expensive than existing ones. Pricing is still a ways out — Lumotive just wanted to show that its tech exists for now — but this is far from exotic tech.

CG render of a lidar metamaterial chip.

The team told me in an interview that their engineering process was tricky specifically because they designed it for fabrication using existing methods. It’s silicon-based, meaning it can use cheap and ubiquitous 905nm lasers rather than the rarer 1550nm, and its fabrication isn’t much more complex than making an ordinary display panel.

CTO and co-founder Gleb Akselrod explained: “Essentially it’s a reflective semiconductor chip, and on the surface we fabricate these tiny antennas to manipulate the light. It’s made using a standard semiconductor process, then we add liquid crystal, then the coating. It’s a lot like an LCD.”

An additional bonus of the metamaterial basis is that it works the same regardless of the size or shape of the chip. While an inch-wide rectangular chip is best for automotive purposes, Akselrod said, they could just as easily make one a quarter the size for robots that don’t need the wider field of view, or a larger or custom-shape one for a specialty vehicle or aircraft.

The details, as I said, are still being worked out. Lumotive has been working on this for years and decided it was time to just get the basic information out there. “We spend an inordinate amount of time explaining the technology to investors,” noted CEO and co-founder Bill Colleran. He, it should be noted, is a veteran innovator in this field, having headed Impinj most recently and worked at Broadcom before that, but he is perhaps best known for being CEO of Innovent when it created the first CMOS Bluetooth chip.

Right now the company is seeking investment after running on a 2017 seed round funded by Bill Gates and IV, which (as with other metamaterial-based startups it has spun out) is granting Lumotive an exclusive license to the tech. There are partnerships and other things in the offing, but the company wasn’t ready to talk about them; the product is currently in prototype but very showable form for the inevitable meetings with automotive and tech firms.

This is what the Huawei P30 will look like

You can already find many leaked photos of Huawei’s next flagship devices — the P30 and P30 Pro. The company is set to announce the new products at an event in Paris next week. So here’s what you should expect.

Reliable phone leaker Evan Blass tweeted many different photos of the new devices in three different tweets:

As you can see, both devices feature three cameras on the back. The notch is getting smaller and now looks like a teardrop. Compared with the P20 and P20 Pro, the fingerprint sensor is gone. It looks like Huawei is going to integrate the fingerprint sensor into the display, just like Samsung did with the Galaxy S10.

MySmartPrice also shared some ads with some specifications. The P30 Pro will have a 10x hybrid zoom, while the P30 will have a 5x hybrid zoom — it’s unclear exactly how the combination of optical and software zoom will work. Huawei has been doing some good work on the camera front, so this is going to be a key part of next week’s presentation.

For the first time, Huawei will put wireless charging in its flagship device — it’s about time. And it looks like the P30 Pro will adopt a curved display for the first time as well. I’ll be covering the event next week so stay tuned.

Markforged raises $82 million for its industrial 3D printers

3D printer manufacturer Markforged has raised another round of funding. Summit Partners is leading the $82 million Series D round, with Matrix Partners, Microsoft’s venture arm, Next47 and Porsche SE also participating.

When you think about 3D printers, chances are you’re thinking about microwave-sized, plastic-focused 3D printers for hobbyists. Markforged is basically at the other end of the spectrum, focused on expensive 3D printers for industrial use cases.

In addition to increased precision, Markforged can manufacture parts in strong materials, such as carbon fiber, Kevlar or stainless steel. And that can greatly impact your manufacturing process.

For instance, you can prototype your next products with a Markforged printer. Instead of getting sample parts from third-party companies, you can manufacture your parts in house. If you’re not going to sell hundreds of thousands of products, you could even consider using Markforged to produce parts for your commercial products.

If you work in an industry that requires a ton of different parts but don’t need a lot of inventory, you could also consider using a 3D printer to manufacture parts whenever you need them.

Markforged takes a full-stack approach, controlling everything from the 3D printer to the software and materials. Once you’re done designing your 3D model in CAD, you can send it to your fleet of printers. The company’s application also lets you manage different versions of the same part and collaborate with other people.

According to the company’s website, Markforged has attracted 4,000 customers, such as Canon, Microsoft, Google, Amazon, General Motors, Volkswagen and Adidas. The company shipped 2,500 printers in 2018.

With today’s funding round, the company plans to do more of the same — you can expect mass production printers and more materials in the future. Eventually, Markforged wants to make it cheaper to manufacture parts at scale instead of producing those parts through other means.

The 9 biggest questions about Google’s Stadia game streaming service

Google’s Stadia is an impressive piece of engineering, to be sure: delivering high-definition, high-framerate, low-latency video to devices like tablets and phones is an accomplishment in itself. But the game streaming service faces serious challenges if it wants to compete with the likes of Xbox and PlayStation, or even plain old PCs and smartphones.

Here are our nine biggest questions about what the service will be and how it’ll work.

1. What’s the game selection like?

We saw Assassin’s Creed: Odyssey (a lot) and Doom: Eternal, and a few other things running on Stadia, but otherwise Google’s presentation was pretty light on details as far as which games we can expect to see there.

It’s not an easy question to answer, since this isn’t just a question of “all PC games” or “all games from these six publishers.” Stadia requires that a game be ported, or partly recoded, to fit its new environment — in this case a Linux-powered PC. That’s not unusual, but it isn’t trivial either.

Porting is just part of the job for a major studio like Ubisoft, which regularly publishes on multiple platforms simultaneously, but for a smaller developer or a more specialized game, it’s not so straightforward. Jade Raymond will be in charge of both first-party Stadia games and developer relations; she said that the team will be “working with external developers to bring all of the bleeding edge Google technology you have seen today available to partner studios big and small.”

What that tells me is that every game that comes to Stadia will require special attention. That’s not a good sign for selection, but it does suggest that anything available on it will run well.

2. What will it cost?

Perhaps the topic Google avoided the most was what the heck the business model is for this whole thing.

Do you pay a subscription fee? Is it part of YouTube or maybe YouTube Red? Do they make money off sales of games after someone plays the instant demo? Is it free for an hour a day? Will it show ads every 15 minutes? Will publishers foot the bill as part of their normal marketing budget? No one knows!

It’s a difficult play because the most obvious way to monetize also limits the product’s exposure. Asking people to subscribe adds a lot of friction to a platform where the entire idea is to get you playing within 5 seconds.

Putting ads in is an easy way to let people jump in and have it be monetized a small amount. You could even advertise the game itself and offer a one-time 10 percent off coupon or something. Then mention that YouTube Red subscribers don’t see ads at all.

Sounds reasonable, but Google didn’t mention anything like this at all. We’ll probably hear more later this year closer to launch, but it’s hard to judge the value of the service when we have no idea what it will cost.

3. What about iOS devices?

Google and Apple are bitter rivals in a lot of ways, but it’s hard to get around the fact that iPhone owners tend to be the most lucrative mobile customers. Yet no iPhones appeared in the live demo, and no iOS availability was mentioned.

Depending on its business model, Google may have locked itself out of the App Store. Apple doesn’t let you essentially run a store within its store (as we have seen in cases like Amazon and Epic) and if that’s part of the Stadia offering, it’s not going to fly.

An app that just lets you play might be a possibility, but since none was mentioned, it’s possible Google is using Stadia as a platform exclusive to draw people to Pixel devices. That kind of puts a limit on the pitch that you can play on devices you already have.

4. What about games you already own?

A big draw of game streaming is to buy a game once and play it anywhere. Sometimes you want to play the big awesome story parts on your 60-inch TV in surround sound, but do a little inventory and quest management on your laptop at the cafe. That’s what systems like Steam Link offer.

But Google didn’t mention how its ownership system will work, or whether there would be a way to play games you already own on the service. This is a big consideration for many gamers.

It was mentioned that there would be cross-platform play and perhaps even the ability to bring saves to other platforms, but how that would work was left to the imagination. Frankly, I’m skeptical.

Letting people show they own a game and giving them access to it is a recipe for scamming and trouble, but not supporting it is missing out on a huge application for the service. Google’s caught between a rock and a hard place here.

5. Can you really convert viewers to players?

This is a bit more of an abstract question, but it comes from the basic idea that people specifically come to YouTube and Twitch to watch games, not play them. Mobile viewership is huge because streams are a great way to kill time on a train or bus ride, or during a break at school. These viewers often don’t want to play at those times, and couldn’t if they did want to!

So the question is, are there really enough people watching gaming content on YouTube who will actively switch to playing, just like that?

To be fair, the idea of a game trailer that lets you play what you just saw five seconds later is brilliant. I’m 100 percent on board there. But people don’t watch dozens of hours of game trailers a week — they watch famous streamers play Fortnite and PUBG and do speedruns of Dark Souls and Super Mario Bros 1. These audiences are much harder to convert into players.

The potential to join a game alongside a streamer, or affect their game somehow, or pick up at the spot they left off to try fighting a boss on your own or see how their character controls, is a good one. But making that happen goes far, far beyond the streaming infrastructure Google has created here. It involves rewriting the rules on how games are developed and published. We saw attempts at this from Beam, later acquired by Microsoft, but it never really bloomed.

Streaming is a low-commitment, passive form of entertainment, which is kind of why it’s so popular. Turning that into an active, involved form of entertainment is far from straightforward.

6. How’s the image quality?

Games these days have mind-blowing graphics. I sure had a lot of bad things to say about Anthem, but when it came to looks, that game was a showstopper. And part of what made it great was the tiny details in textures and subtle gradations of light that are only just recently possible with advances in shaders, volumetric fog and so on. Will those details really come through in a stream?


Don’t get me wrong. I know a 1080p stream looks decent. But the simple fact is that high-efficiency HD video compression reduces detail in a noticeable way. You just can’t perfectly recreate an image if you have to send it 60 times per second with only a few milliseconds to compress and decompress it. It’s how image compression works.
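The back-of-the-envelope numbers make the squeeze obvious: raw 1080p60 video is two orders of magnitude bigger than a typical stream budget, so detail has to be thrown away somewhere. A quick sketch (the 20 Mbps figure is just the ballpark mentioned later in this piece):

```python
# Raw 1080p60 RGB video vs. a typical game-stream bitrate.
width, height, fps = 1920, 1080, 60
bits_per_pixel = 24  # 8 bits per channel, uncompressed RGB

raw_bps = width * height * bits_per_pixel * fps  # ~2.99 Gbit/s
stream_bps = 20e6                                # a 20 Mbit/s stream

print(f"raw video:      {raw_bps / 1e9:.2f} Gbit/s")
print(f"stream budget:  {stream_bps / 1e6:.0f} Mbit/s")
print(f"compression:    {raw_bps / stream_bps:.0f}:1")  # ~149:1
print(f"time per frame: {1000 / fps:.1f} ms")           # 16.7 ms to encode, send, decode
```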

For some people this won’t be a big deal. They really might not care about the loss of some visual fidelity — the convenience factor may outweigh it by a ton. But there are others for whom it may be distracting, those who have invested in a powerful gaming console or PC that gives them better detail at higher framerates than Stadia can possibly offer.

It’s not apples to apples but Google has to consider these things, especially when the difference is noticeable enough that game developers and publishers start to note that a game is “best experienced locally” or something like that.

7. Will people really game on the go?

I don’t question whether people play games on mobile. That’s one of the biggest businesses in the world. But I’m not sure that people want to play Assassin’s Creed: Odyssey on their iPa… I mean, Pixel Slate. Let alone their smartphone.

Games on phones and tablets are frequently time-killers driven by addictive short-duration game sessions. Even the bigger, more console-like games on mobile usually aim for shorter play sessions. That may be changing in some ways for sure but it’s a consideration, and AAA console games really just aren’t designed for 5-10 minute gaming sessions.

Add to that that you have to carry around what looks like a fairly bulky controller and this becomes less of an option for things like planes, cafes, subway rides, and so on. Even if you did bring it, could you be sure you’ll get the 10 or 20 Mbps you’ll need to get that 60FPS video rate? And don’t say 5G. If anyone says 5G again after the last couple months I’m going to lose it.

Naturally the counterpoint here is Nintendo’s fabulously successful and portable Switch. But the Switch plays both sides, providing a console-like experience on the go that makes sense because of its frictionless game state saving and offline operation. Stadia doesn’t seem to offer anything like that. In some ways it could be more compelling, but it’s a hard sell right now.

8. How will multiplayer work?

Obviously multiplayer gaming is huge right now and likely will be forever, so Stadia will surely support multiplayer one way or another. But multiplayer is also really complicated.

It used to be that someone just picked up the second controller and played Luigi. Now you have friend codes, accounts, user IDs, automatic matchmaking, all kinds of junk. If I want to play The Division 2 with a friend via Stadia, how does that work? Can I use my existing account? How do I log in? Are there IP issues and will the whole rigmarole of the game running in some big server farm set off cheat detectors or send me a security warning email? What if two people want to play a game locally?

Many of the biggest gaming properties in the world are multiplayer focused, and without a very, very clear line on this it’s going to turn a lot of people off. The platform might be great for it — but they have some convincing to do.

9. Stadia?

Branding is hard. Launching a product that aims to reach millions and giving it a name that not only represents it well but isn’t already taken is hard. But that said… Stadia?

I guess the idea is that each player is kind of in a stadium of their own… or that they’re in a stadium where Ninja is playing, and then they can go down to join? Certainly Stadia is more distinctive than stadium and less trademark-fraught than Colosseum or the like. Arena is probably out too.

If only Google already owned something that indicated gaming but was simple, memorable, and fit with its existing “Google ___” set of consumer-focused apps, brands, and services.

Oh well!

Camera maker Insta360 raises $30M as it eyes 2020 IPO

Insta360, one of the pioneers in making 360-degree cameras, just raised $30 million in a Series C+ funding round from Chinese investors, including Everest Venture Capital, MG Holdings and Huajin Capital.

The Shenzhen-based camera maker declined to disclose its latest valuation. It plans to use the fresh proceeds for research and development, marketing and after-sales services in its key international markets, including the United States and Japan, the company’s second- and third-largest markets behind China.

Some of its past backers include IDG Capital, Qiming Ventures, home appliance maker Suning Holdings Group and file-sharing service Xunlei.

The company started making 360-degree cameras — thus the brand name — in 2014 when founder Liu Jingkang saw a gap in the market for compact, easy-to-use cameras shooting high-definition 360-degree footage. Over the years it has evolved into a four-pronged business covering all sorts of needs: 360-degree cameras for professionals and amateur users creating virtual reality content, action cameras for sports lovers and smartphone accessories for average consumers.

In stark contrast to loss-making GoPro, which Insta360 rivals in the action camera vertical, the Chinese firm has been profitable since 2017 and is planning to file for an initial public offering in China next year, Liu told TechCrunch in an interview. The company declined to provide more details of the planned flotation, but said the success of its action camera line has helped it achieve fivefold revenue growth in two years and reach profitability.

From professionals to amateurs

Though the VR sector remains in its infant stage, Liu is optimistic that 360 content will become a much sought-after media form in the years to come.

“Many families will be consuming virtual reality content for entertainment in the future, so we have a huge market for 360 content. That’s why we make a 360 camera each year to keep our top-tier position,” said Liu.


The action-camera market, by comparison, is more mature. Insta360 is riding a larger social trend of live blogging and short-form videos that has generated a huge demand for quality video content. Dozens of camera options, from Snap Spectacles to Tencent’s clone of the Snap glasses, are available to help people churn out content for video-sharing apps, but Liu saw problems in many of these products.

“[Video-shooting] spectacles, for example, are quite offensive. Not everyone wants to wear them,” said the founder. “Many cameras do a bad job at video stabilization, so people end up with unusable footage. Lastly, and this is the key issue, users don’t know how to handle their footage.”

To that end, Insta360’s latest answer to documenting sports events and traveling is a camera that can easily be held by hand or slipped into a pocket. Called the One X, the gadget shoots in 5.7K resolution at 30 frames per second, delivering pleasingly smooth stabilization even when thrown around. The camera also comes with a software toolkit that automatically selects and stitches together users’ footage, which makes sharing to TikTok and Instagram a cinch.

Insta360 has also been chasing after the masses, and its latest bid is an add-on lens that can instantly turn an iPhone into a 360-degree camera. The idea is that as users get a taste of the basic 360-degree experience, they may want to upgrade to a higher-end model.

“Insta360 has a rare ability to take cutting-edge imaging tech and put it into products that consumers want to use today,” said Gavin Li, senior director at Huajin Capital. “They’re moving faster and innovating more than their competitors, and they’re taking bold new approaches to the defining communication tool of our time: the camera.”