Pat Gelsinger stepping down as VMware CEO to replace Bob Swan at Intel

In a move that could have wide ramifications across the tech landscape, Intel announced that VMware CEO Pat Gelsinger will replace Bob Swan as Intel CEO on February 15th. The question is why he would leave that job to run a struggling chip giant.

The bottom line is he has a long history with Intel, having worked with some of the biggest names in chip industry lore before he left the company in 2009; he went on to become VMware’s CEO in 2012. It has to be a thrill for him to go back to his roots and try to jump-start the company.

“I was 18 years old when I joined Intel, fresh out of the Lincoln Technical Institute. Over the next 30 years of my tenure at Intel, I had the honor to be mentored at the feet of Grove, Noyce and Moore,” Gelsinger wrote in a blog post announcing his new position.

Certainly Intel recognized that this history, along with Gelsinger’s deep executive experience, should help as the company attempts to compete in an increasingly aggressive chip industry landscape. “Pat is a proven technology leader with a distinguished track record of innovation, talent development, and a deep knowledge of Intel. He will continue a values-based cultural leadership approach with a hyper focus on operational execution,” Omar Ishrak, independent chairman of the Intel board, said in a statement.

But Gelsinger is walking into a bit of a mess. As my colleague Danny Crichton wrote in his year-end review of the chip industry last month, Intel is far behind its competitors, and it’s going to be tough to play catch-up:

Intel has made numerous strategic blunders in the past two decades, most notably completely missing out on the smartphone revolution and also the custom silicon market that has come to prominence in recent years. It’s also just generally fallen behind in chip fabrication, an area it once dominated and is now behind Taiwan-based TSMC, Crichton wrote.

Patrick Moorhead, founder and principal analyst at Moor Insights & Strategy, agrees with this assertion, saying that Swan was dealt a bad hand, walking in to clean up a mess with years-long timelines. While Gelsinger faces similar issues, Moorhead thinks he can refocus the company. “I am not foreseeing any major strategic changes with Gelsinger, but I do expect him to focus on the company’s engineering culture and get it back to an execution culture,” Moorhead told me.

The announcement comes against the backdrop of massive chip industry consolidation last year, with over $100 billion changing hands in four deals: Nvidia nabbing Arm for $40 billion, the $35 billion AMD-Xilinx deal, Analog Devices snagging Maxim for $21 billion and Marvell grabbing Inphi for a mere $10 billion, not to mention Intel dumping its memory unit to SK Hynix for $9 billion.

As for VMware, it now has to find a new CEO. As Moorhead says, the obvious choice would be current COO Sanjay Poonen. Holger Mueller, an analyst at Constellation Research, says it will be up to Michael Dell to decide who gets the reins, but he believes Gelsinger was stuck at Dell without a path to a broader role, so he left.

“VMware has a deep bench, but it will be up to Michael Dell to get a CEO who can innovate on the software side and keep the unique DNA of VMware inside the Dell portfolio going strong, Dell needs the deeper profits of this business for its turnaround,” he said.

The stock market seems to like the move for Intel, with the company’s stock up 7.26%, but not so much for VMware, whose stock was down a comparable 7.72% as we went to publication.

Google, Intel, Zoom and others launch a new alliance to get enterprises to use more Chrome

A group of industry heavyweights, including Google, Box, Citrix, Dell, Imprivata, Intel, Okta, RingCentral, Slack, VMware and Zoom, today announced the launch of the Modern Computing Alliance.

The mission for this new alliance is to “drive ‘silicon-to-cloud’ innovation for the benefit of enterprise customers — fueling a differentiated modern computing platform and providing additional choice for integrated business solutions.”

Whoever wrote this mission statement was clearly trying to see how many words they could use without actually saying something.

Here is what the alliance is really about: even though the word Chrome never appears on its homepage and Google’s partners never quite get to mentioning it either, it’s all about helping enterprises adopt Chrome and Chrome OS. “The focus of the alliance is to drive innovation and interoperability in the Google Chrome ecosystem, increasing options for enterprise customers and helping to address some of the biggest tech challenges facing companies today,” a Google spokesperson told me.

I’m not sure why it’s not called the Chrome Enterprise Alliance, but Modern Computing Alliance may just have more of a ring to it. This also explains why Microsoft isn’t part of it, though this is only the initial slate of members and others may follow at some point in the future.

Led by Google, the alliance’s focus is on bringing modern web apps to the enterprise, with a focus on performance, security, identity management and productivity. And all of that, of course, is meant to run well on Chrome and Chrome OS and be interoperable.

“The technology industry is moving towards an open, heterogeneous ecosystem that allows freedom of choice while integrating across the stack. This reality presents both a challenge and an opportunity,” Google’s Chrome OS VP John Solomon writes today.

As enterprises move to the cloud, building better web applications and maybe even Progressive Web Applications that work just as well as native solutions is obviously a noble goal and it’s nice to see these companies work together. Given the pandemic, all of this has taken on a new urgency now, too. The plan is for the alliance to release products — though it’s unclear what form these will take — in the first half of 2021. Hopefully, these will play nicely with any browser. A lot of these ‘alliances’ fizzle out quite quickly, so we’ll keep an eye on what happens here.

Bonus: the industry has a long history of alliances like these. Here’s a fun 1991 story about a CPU alliance between Intel, IBM, MIPS and others.

Apple reportedly testing Intel-beating high core count Apple Silicon chips for high-end Macs

Apple is reportedly developing a number of Apple Silicon chip variants with significantly higher core counts relative to the M1 chips it uses in today’s MacBook Air, MacBook Pro and Mac mini computers, which are based on its own Arm processor designs. According to Bloomberg, the new chips include designs with 16 performance cores and four high-efficiency cores, intended for future iMacs and more powerful MacBook Pro models, as well as a 32-performance-core top-end version that would eventually power the first Apple Silicon Mac Pro.

The current M1 has four performance cores along with four high-efficiency cores, plus either seven or eight dedicated graphics cores, depending on the Mac model. Apple’s next-gen chips could leap right to 16 performance cores, though Bloomberg says Apple could opt to ship eight- or 12-core versions of the same design, depending primarily on the yields it sees from its manufacturing processes. Chipmaking, particularly in the early stages of a new design, often has defect rates that render some of the cores on a given chip unusable, so manufacturers typically ‘bin’ those chips, selling them as lower maximum-core-count parts until manufacturing success rates improve.
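To make the binning idea concrete, here is a minimal Python sketch. It is purely illustrative: the defect rate and product tiers are made-up assumptions, not Apple’s actual numbers or process.

```python
# Toy sketch of yield-based "binning" (illustrative only, not Apple's process):
# dies with a few defective cores get sold as smaller-core-count parts.
import random

random.seed(0)

CORES_PER_DIE = 16   # hypothetical top-end performance-core count
DEFECT_RATE = 0.03   # assumed probability that any single core is unusable
BINS = [16, 12, 8]   # hypothetical product tiers, highest first

def bin_die():
    """Return the largest product tier this die qualifies for, or None if scrapped."""
    good_cores = sum(random.random() > DEFECT_RATE for _ in range(CORES_PER_DIE))
    for tier in BINS:
        if good_cores >= tier:
            return tier
    return None  # too many defects even for the lowest bin

counts = {16: 0, 12: 0, 8: 0, None: 0}
for _ in range(10_000):
    counts[bin_die()] += 1

print(counts)  # most dies land in the top bin; the rest are sold as smaller parts
```

Even with a low per-core defect rate, a meaningful share of 16-core dies ends up sold as 12- or 8-core parts, which is why early-production chips often ship in several core-count variants.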

Apple’s M1 system on a chip.

Regardless of whether next-gen Apple Silicon Macs use 16-, 12- or eight-performance-core designs, they should provide ample competition for their Intel equivalents. Apple’s debut M1 line has won praise from critics and reviewers for significant performance gains over not only its predecessors, but also much more expensive and powerful Macs powered by higher-end Intel chips.

The report also says that Apple is developing new graphics processors that include both 16- and 32-core designs for future iMacs and pro notebooks, and that it even has 64- and 128-core designs in development for use in high-end pro machines like the Mac Pro. These should offer performance that can rival even dedicated GPU designs from Nvidia and AMD for some applications, though they aren’t likely to appear in any shipping machines before either late 2021 or 2022 according to the report.

Apple has said from the start that it plans to transition its entire line to its own Apple Silicon processors by 2022. The M1 Macs now available are the first generation, and Apple has begun with its lowest-power dedicated Macs, with a chip design that hews closely to the design of the top-end A-series chips that power its iPhone and iPad line. Next-generation M-series chips look like they’ll be further differentiated from Apple’s mobile processors, with significant performance advantages to handle the needs of demanding professional workloads.

After Apple’s M1 launch, Intel announces its own white-label laptop

Its long fruitful relationship with Apple may be sunsetting soon, but Intel’s still got a fairly massive footprint in the PC market. There’s never a good time to get complacent, though (a lesson the company learned the hard way on the mobile front).

This week the chip giant is debuting its own laptop, the NUC M15. More properly, the NUC M15 Laptop Kit; the device is actually a white-label system. It’s essentially a reference design so smaller device makers don’t have to commit to the long and expensive process of building a system from scratch.

It is, as The Verge notes, not the first time the company has created this sort of reference design; it recently built a gaming system to similar ends. But much like the recent MacBooks, this system is meant to offer high performance in a package geared more toward productivity.

There are two configurations for the system, featuring either a Core i7 chip coupled with 16GB of RAM or a Core i5 with 8GB of RAM. That will, obviously, be complemented by Windows 10, which will take advantage of the 15.6-inch touchscreen.

Pricing and timing and all of that good stuff will likely depend on which vendors take the system across the finish line.

Deep Vision announces its low-latency AI processor for the edge

Deep Vision, a new AI startup that is building an AI inferencing chip for edge computing solutions, is coming out of stealth today. The six-year-old company’s new ARA-1 processors promise to strike the right balance between low latency, energy efficiency and compute power for use in anything from sensors to cameras and full-fledged edge servers.

Because of its strength in real-time video analysis, the company is aiming its chip at solutions around smart retail, including cashier-less stores, smart cities and Industry 4.0/robotics. The company is also working with suppliers to the automotive industry, but less around autonomous driving than monitoring in-cabin activity to ensure that drivers are paying attention to the road and aren’t distracted or sleepy.

Image Credits: Deep Vision

The company was founded by its CTO Rehan Hameed and its Chief Architect Wajahat Qadeer, who recruited Ravi Annavajjhala, who previously worked at Intel and SanDisk, as the company’s CEO. Hameed and Qadeer developed Deep Vision’s architecture as part of a Ph.D. thesis at Stanford.

“They came up with a very compelling architecture for AI that minimizes data movement within the chip,” Annavajjhala explained. “That gives you extraordinary efficiency — both in terms of performance per dollar and performance per watt — when looking at AI workloads.”

Long before the team had working hardware, though, the company focused on building its compiler to ensure that its solution could actually address its customers’ needs. Only then did they finalize the chip design.

Image Credits: Deep Vision

As Hameed told me, Deep Vision’s focus was always on reducing latency. While its competitors often emphasize throughput, the team believes that for edge solutions latency is the more important metric. Architectures that focus on throughput make sense in the data center, Hameed argues, but that doesn’t necessarily make them a good fit at the edge.

“[Throughput architectures] require a large number of streams being processed by the accelerator at the same time to fully utilize the hardware, whether it’s through batching or pipeline execution,” he explained. “That’s the only way for them to get their big throughput. The result, of course, is high latency for individual tasks and that makes them a poor fit in our opinion for an edge use case where real-time performance is key.”
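A rough back-of-the-envelope model illustrates Hameed’s point. The numbers below are invented for illustration; they are not Deep Vision or competitor benchmarks.

```python
# Toy model (made-up numbers): a throughput-oriented accelerator that needs
# large batches can post great aggregate throughput while still giving each
# individual frame poor latency.

BATCH = 16                 # assumed batch size needed to keep the hardware busy
BATCH_COMPUTE_MS = 40.0    # assumed time to process one full batch
SINGLE_COMPUTE_MS = 5.0    # assumed time a latency-oriented chip spends per frame
FRAME_INTERVAL_MS = 33.3   # one frame every ~33 ms from a 30 fps camera

# Batched accelerator: the first frame waits for 15 more frames to arrive
# before compute starts, then waits for the whole batch to finish.
batched_worst_case = (BATCH - 1) * FRAME_INTERVAL_MS + BATCH_COMPUTE_MS
batched_throughput = BATCH / (BATCH_COMPUTE_MS / 1000)  # frames per second

# Single-stream accelerator: each frame is processed as soon as it arrives.
single_latency = SINGLE_COMPUTE_MS
single_throughput = 1000 / SINGLE_COMPUTE_MS

print(f"batched: ~{batched_worst_case:.0f} ms worst-case latency, "
      f"{batched_throughput:.0f} fps aggregate")
print(f"single-stream: ~{single_latency:.0f} ms latency, "
      f"{single_throughput:.0f} fps")
```

The batched design wins comfortably on aggregate frames per second, but a single frame can wait hundreds of milliseconds, which is exactly the trade-off Hameed argues against for real-time edge workloads.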

To enable this performance — and Deep Vision claims that its processor offers far lower latency than Google’s Edge TPUs and Movidius’ MyriadX, for example — the team is using an architecture that reduces data movement on the chip to a minimum. In addition, its software optimizes the overall data flow inside the architecture based on the specific workload.

Image Credits: Deep Vision

“In our design, instead of baking in a particular acceleration strategy into the hardware, we have instead built the right programmable primitives into our own processor, which allows the software to map any type of data flow or any execution flow that you might find in a neural network graph efficiently on top of the same set of basic primitives,” said Hameed.

With this, the compiler can then look at the model and figure out how to best map it on the hardware to optimize for data flow and minimize data movement. Thanks to this, the processor and compiler can also support virtually any neural network framework and optimize their models without the developers having to think about the specific hardware constraints that often make working with other chips hard.
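As a purely conceptual sketch of what such a mapping step optimizes for (this is not Deep Vision’s compiler, instruction set or data), consider a toy scheduler that keeps intermediate tensors in an on-chip buffer whenever they fit, rather than spilling every layer’s output to external memory:

```python
# Toy illustration of data-movement-aware scheduling (hypothetical sizes):
# fusing adjacent layers keeps intermediates on chip and cuts off-chip traffic
# versus a naive schedule that spills every intermediate to DRAM.

# A tiny "graph": (layer name, output size in KB), executed in order.
GRAPH = [("conv1", 512), ("relu1", 512), ("conv2", 256), ("relu2", 256), ("fc", 8)]

ON_CHIP_KB = 600  # assumed on-chip buffer size

def naive_traffic(graph):
    """Every intermediate result is written to DRAM and read back."""
    return sum(2 * size for _, size in graph[:-1])

def fused_traffic(graph, budget_kb):
    """Keep an intermediate on chip whenever it fits in the buffer."""
    traffic = 0
    for _, size in graph[:-1]:
        if size > budget_kb:      # too big: spill to DRAM and read back
            traffic += 2 * size
    return traffic

print("naive schedule:", naive_traffic(GRAPH), "KB of off-chip traffic")
print("fused schedule:", fused_traffic(GRAPH, ON_CHIP_KB), "KB of off-chip traffic")
```

The real compiler is doing something far more sophisticated across arbitrary network graphs, but the objective it is optimizing, fewer bytes moved on and off the chip per inference, is the same one this sketch counts.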

“Every aspect of our hardware/software stack has been architected with the same two high-level goals in mind,” Hameed said. “One is to minimize the data movement to drive efficiency. And then also to keep every part of the design flexible in a way where the right execution plan can be used for every type of problem.”

Since its founding, the company has raised about $19 million and filed nine patents. The new chip has been sampling for a while, and even though the company already has a couple of customers, it chose to remain under the radar until now. Deep Vision obviously hopes that its unique architecture can give it an edge in this market, which is getting increasingly competitive. Besides the likes of Intel’s Movidius chips (and custom chips from Google and AWS for their own clouds), there are also plenty of startups in this space, including Hailo, which raised a $60 million Series B round earlier this year and recently launched its new chips, too.

Apple brings back its ‘I’m a PC’ spokesman for Arm-based Mac launch event

Apple brought back actor John Hodgman for a brief cameo in today’s Arm-based Mac launch event, reprising his role as the dorky “I’m a PC” character, now tasked with poking fun at Intel-based PCs in the face of an Apple silicon future for the company.

The short slot aired at the end of Tuesday’s “One More Thing” event, where the company showed off its new M1 chip and new designs for its upcoming MacBook Air, MacBook Pro and Mac mini. Hodgman’s character appeared in a white room, set to the vintage ad campaign’s signature tune, touching on some of the new machines’ advances in power management.

There was notably no cameo from Justin Long, and it’s unclear whether Hodgman’s appearance will only grace today’s event or whether Apple has plans for a throwback ad campaign. Nevertheless, it was a fun nod to a popular campaign from Apple.

You can catch the appearance below at the 45:27 mark.

Intel has acquired Cnvrg.io, a platform to manage, build and automate machine learning

Intel continues to snap up startups to build out its machine learning and AI operations. In the latest move, TechCrunch has learned that the chip giant has acquired Cnvrg.io, an Israeli company that has built and operates a platform for data scientists to build and run machine learning models. The platform can be used to train and track multiple models, run comparisons on them, build recommendations and more.

Intel confirmed the acquisition to us with a short note. “We can confirm that we have acquired Cnvrg,” a spokesperson said. “Cnvrg will be an independent Intel company and will continue to serve its existing and future customers.” Those customers include Lightricks, ST Unitas and Playtika.

Intel is not disclosing any financial terms of the deal, nor who from the startup will join Intel. Cnvrg, co-founded by Yochay Ettun (CEO) and Leah Forkosh Kolben, had raised $8 million from investors that include Hanaco Venture Capital and Jerusalem Venture Partners and PitchBook estimates that it was valued at around $17 million in its last round. 

It was only a week ago that Intel made another acquisition to boost its AI business, also in the area of machine learning modeling: it picked up SigOpt, which had developed an optimization platform to run machine learning modeling and simulations.

While SigOpt is based out of the Bay Area, Cnvrg is in Israel, where it joins the extensive footprint Intel has built in the country, specifically in artificial intelligence research and development, anchored by its Mobileye autonomous vehicle business (which it acquired for more than $15 billion in 2017) and its acquisition of AI chipmaker Habana (which it bought for $2 billion at the end of 2019).

Cnvrg.io’s platform works across on-premises, cloud and hybrid environments, and it comes in paid and free tiers (we covered the launch of the free service, branded Core, last year). It competes with the likes of Databricks, SageMaker and Dataiku, as well as smaller operations like H2O.ai that are built on open-source frameworks. Cnvrg’s premise is that it provides a user-friendly platform for data scientists so they can concentrate on devising algorithms and measuring how they work, not on building or maintaining the platform they run on.

While Intel is not saying much about the deal, some of the same logic behind last week’s SigOpt acquisition seems to apply here as well: Intel has been refocusing its business around next-generation chips to better compete against the likes of Nvidia and smaller players like Graphcore. So it makes sense to also provide (and invest in) AI tools for customers, specifically services to help with the compute loads they will be running on those chips.

It’s notable that in our article about the Core free tier last year, Frederic noted that those using the platform in the cloud can do so with Nvidia-optimized containers that run on a Kubernetes cluster. It’s not clear if that will continue to be the case, or if containers will be optimized instead for Intel architecture, or both. Cnvrg’s other partners include Red Hat and NetApp.

Intel’s focus on the next generation of computing aims to offset declines in its legacy operations. In the last quarter, Intel reported a 3% decline in its revenues, led by a drop in its data center business. It said that it’s projecting the AI silicon market to be bigger than $25 billion by 2024, with AI silicon in the data center to be greater than $10 billion in that period.

In 2019, Intel reported some $3.8 billion in AI-driven revenue, but it hopes that tools like SigOpt’s will help drive more activity in that business, dovetailing with the push for more AI applications in a wider range of businesses.

AWS launches its next-gen GPU instances

AWS today announced the launch of its newest GPU-equipped instances. Dubbed P4, these new instances are launching a decade after AWS launched its first set of Cluster GPU instances. This new generation is powered by Intel Cascade Lake processors and eight of Nvidia’s A100 Tensor Core GPUs. These instances, AWS promises, offer up to 2.5x the deep learning performance of the previous generation — and training a comparable model should be about 60% cheaper with these new instances.

Image Credits: AWS

For now, there is only one size available, the p4d.24xlarge instance in AWS slang, and its eight A100 GPUs are connected over Nvidia’s NVLink interconnect, with support for the company’s GPUDirect interface as well.

With 320 GB of high-bandwidth GPU memory and 400 Gbps networking, this is obviously a very powerful machine. Add to that the 96 CPU cores, 1.1 TB of system memory and 8 TB of SSD storage, and it’s maybe no surprise that the on-demand price is $32.77 per hour (though that price goes down to less than $20/hour for one-year reserved instances and $11.57/hour for three-year reserved instances).
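For a sense of what those hourly rates mean in practice, here is some quick arithmetic using the figures cited above (approximate, and ignoring regional pricing differences, storage and data-transfer costs):

```python
# Rough monthly cost of a single P4 instance at the hourly rates quoted above.
HOURS_PER_MONTH = 730  # average hours in a month

rates = {
    "on-demand": 32.77,              # $/hour, as quoted above
    "1-yr reserved (< $20/hr)": 20.00,  # upper bound from the figure above
    "3-yr reserved": 11.57,
}

for plan, hourly in rates.items():
    print(f"{plan:>26}: ${hourly * HOURS_PER_MONTH:,.0f} per month")
```

Even at the three-year reserved rate, a single instance works out to roughly $8,400 a month, which is why these machines are aimed at sustained enterprise training workloads rather than hobby projects.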

Image Credits: AWS

On the extreme end, you can combine 4,000 or more GPUs into an EC2 UltraCluster, as AWS calls these machines, for high-performance computing workloads at what is essentially supercomputer scale. Given the price, you’re not likely to spin up one of these clusters to train the model for your toy app anytime soon, but AWS has already been working with a number of enterprise customers to test these instances and clusters, including the Toyota Research Institute, GE Healthcare and Aon.

“At [Toyota Research Institute], we’re working to build a future where everyone has the freedom to move,” said Mike Garrison, Technical Lead, Infrastructure Engineering at TRI. “The previous generation P3 instances helped us reduce our time to train machine learning models from days to hours and we are looking forward to utilizing P4d instances, as the additional GPU memory and more efficient float formats will allow our machine learning team to train with more complex models at an even faster speed.”

Here’s why Intel’s stock just dropped 10% after reporting earnings

The third-quarter earnings cycle is just getting underway, but we’ve already seen a few companies post numbers that investors did not like. Netflix missed on several metrics yesterday and was punished, and today Intel is joining the video streaming giant in stock-market purgatory.

Intel shares are off around 10% in after-hours trading after the chip company reported its Q3 data. Investors had expected Intel to report an adjusted $1.11 in per-share profit, off around 22% from the year-ago period. They also expected it to report revenues of $18.26 billion in Q3, down a more modest 5% compared to the year-ago Q3.

Notably, Intel beat revenue expectations with a top line of $18.3 billion, and met earnings-per-share estimates of $1.11 on an adjusted basis.

So, why are Intel shares sharply lower?

Quick consensus appears to point to weakness in the company’s data-focused business unit, the smaller of Intel’s two halves (the other focuses on PC chips). Inside the data side of Intel, its Data Center Group (DCG) had mixed results, including cloud revenue growth of 15%. At the same time, however, the DCG’s “Enterprise & Government” business shrank 47% compared to the year-ago period, following what Intel described as “two quarters of more than 30 percent growth.”

Off that weakness, the resulting miss was sharp, with the market expecting $6.22 billion in DCG revenue and the group delivering only $5.9 billion.

Intel blamed COVID-19 for the weak economic conditions at play in the result. The company also highlighted COVID-19 when it discussed results from its internet of things business and its memory operation, which declined 33% and 11% year-over-year, respectively.

Perhaps due to COVID-19’s recent resurgence in both North America and Europe, investors are concerned that the macroeconomic issues harming Intel’s growth could continue. If so, growth could be negative for a longer period than anticipated. That perspective could have led to some selling of Intel’s equity after the earnings report.

Could guidance have a part to play in Intel’s share-price decline? Probably not. Relative to expectations, Intel’s forward guidance actually looks better than its Q3 results, with a small revenue beat and a small profit beat as well. Intel forecasts revenues of $17.4 billion for Q4 2020 and adjusted earnings per share of $1.10, while the street was looking for $17.34 billion in top line and adjusted earnings per share of $1.06.

Given that Intel is prepped to best expectations in Q4, it’s hard to pin its share-price declines on guidance. That leaves the weakness in its data business as the most obvious culprit.

It is dangerous to over-explain why a stock or a group of stocks moves at any given time. But in this case, it seems plain that the revenue miss inside Intel’s data business was at least a portion of why it shed value. Whether the company’s COVID-19 explanation holds up is up to you and how you handicap the broader economy.

Intel agrees to sell its NAND business to SK Hynix for $9 billion

SK Hynix, one of the world’s largest chip makers, announced today it will pay $9 billion for Intel’s flash memory business. Intel said it will use proceeds from the deal to focus on artificial intelligence, 5G and edge computing.

“For Intel, this transaction will allow us to further prioritize our investments in differentiated technology where we can play a bigger role in the success of our customers and deliver attractive returns to our stockholders,” said Intel chief executive officer Bob Swan in the announcement.

The Wall Street Journal first reported earlier this week that the two companies were nearing an agreement, which will turn SK Hynix into one of the world’s largest NAND memory makers, second only to Samsung Electronics.

The deal with SK Hynix is the latest move Intel has made so it can double down on developing technology for 5G network infrastructure. Last year, Intel sold the majority of its modem business to Apple for about $1 billion, with Swan saying at the time that the deal would allow Intel to “[put] our full effort into 5G where it most closely aligns with the needs of our global customer base.”

Once the deal is approved and closes, Seoul-based SK Hynix will take over Intel’s NAND SSD and NAND component and wafer businesses, and its NAND foundry in Dalian, China. Intel will hold onto its Optane business, which makes SSD memory modules. The companies said regulatory approval is expected by late 2021, and a final closing of all assets, including Intel’s NAND-related intellectual property, will take place in March 2025.

Until the final closing takes place, Intel will continue to manufacture NAND wafers at the Dalian foundry and retain all IP related to the manufacturing and design of its NAND flash wafers.

As the Wall Street Journal noted, the Dalian facility is Intel’s only major foundry in China, which means selling it to SK Hynix will dramatically reduce its presence there as the United States government puts trade restrictions on Chinese technology.

In the announcement, Intel said it plans to use proceeds from the sale to “advance its long-term growth priorities, including artificial intelligence, 5G networking and the intelligent, autonomous edge.”

During the six-month period ending on June 27, 2020, the NAND business represented about $2.8 billion of revenue for Intel’s Non-volatile Memory Solutions Group (NSG) and contributed about $600 million to the division’s operating income. According to the Wall Street Journal, this made up the majority of Intel’s total memory sales during that period, which were about $3 billion.

SK Hynix CEO Seok-Hee Lee said the deal will allow the South Korean company to “optimize our business structure, expanding our innovative portfolio in the NAND flash market segment, which will be comparable with what we achieved in DRAM.”