D-Wave partners with NEC to build hybrid HPC and quantum apps

D-Wave Systems announced a partnership with Japanese industrial giant NEC today to build what they call “hybrid apps and services” that work on a combination of NEC high-performance computers and D-Wave’s quantum systems.

The two companies also announced that NEC will be investing $10 million in D-Wave, which had raised $204 million prior to this, according to Crunchbase data.

D-Wave’s chief product officer and EVP of R&D, Alan Baratz, who the company announced this week will be taking over as CEO effective January 1st, says the company has been able to do a lot of business in Japan, and the size of this deal could help push the technology further. “Our collaboration with global pioneer NEC is a major milestone in the pursuit of fully commercial quantum applications,” he said in a statement.

The company says the deal is one of the earliest between a quantum vendor and a multinational IT company of NEC’s size and scale. It involves three key elements. First, NEC and D-Wave will develop hybrid services that combine NEC’s supercomputers and other classical systems with D-Wave’s quantum technology. The hope is that by combining classical and quantum systems, they can deliver better performance at lower cost than a strictly classical system alone could.

Second, the two companies will work with NEC customers to build applications that take advantage of this hybrid approach. Finally, NEC will become an authorized reseller of D-Wave’s cloud services.
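Neither company has detailed what the joint services will look like in code, but D-Wave’s open-source Ocean SDK illustrates the general hybrid pattern such services build on: express a problem as a binary quadratic model, then hand it to a sampler that splits the work between classical and quantum resources. A minimal sketch, assuming a D-Wave Leap cloud account and API token (the two-variable problem here is a toy example, not anything NEC-specific):

```python
# A minimal sketch of the hybrid classical/quantum pattern using D-Wave's
# open-source Ocean SDK; requires a Leap account and a configured API token.
import dimod
from dwave.system import LeapHybridSampler

# Toy problem: a two-variable QUBO that rewards picking exactly one of x, y.
bqm = dimod.BinaryQuadraticModel(
    {"x": -1.0, "y": -1.0},   # linear terms
    {("x", "y"): 2.0},        # quadratic term penalizing x = y = 1
    0.0,                      # constant offset
    dimod.BINARY,
)

# The hybrid sampler decides how to split the work between classical
# heuristics and the quantum processing unit.
sampler = LeapHybridSampler()
sampleset = sampler.sample(bqm)
print(sampleset.first.sample, sampleset.first.energy)
```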

For NEC, which claims to have demonstrated the world’s first quantum bit device back in 1999, the deal is about finding ways to keep advancing commercial quantum computing. “Quantum computing development is critical for the future of every industry tasked with solving today’s most complex problems. Hybrid applications and greater access to quantum systems is what will allow us to achieve truly commercial-grade quantum solutions,” Motoo Nishihara, executive vice president and CTO at NEC Corporation, said in a statement.

This deal should help move the companies toward that goal.

AWS is sick of waiting for your company to move to the cloud

AWS held its annual re:Invent customer conference last week in Las Vegas. This being Vegas, there was pageantry aplenty, of course, but this year’s event felt a bit different from years past, lacking the onslaught of major announcements we are used to getting at this show.

Perhaps the pace of innovation could finally be slowing, but the company still had a few messages for attendees. For starters, AWS CEO Andy Jassy made it clear he’s tired of the slow pace of change inside the enterprise. In Jassy’s view, the time for incremental change is over, and it’s time to start moving to the cloud faster.

AWS also placed a couple of big bets this year in Vegas to help make that happen. The first involves AI and machine learning. The second involves moving computing to the edge, closer to the business than the traditional cloud allows.

The question is: what is driving these strategies? AWS had a clear head start in the cloud, and owns a third of the market, more than double its closest rival, Microsoft. The good news is that the market is still growing and will continue to do so for the foreseeable future. The bad news for AWS is that it can probably see Google and Microsoft beginning to resonate with more customers, and it’s looking for new ways to convince the untapped part of the market to choose AWS.

Move faster, dammit

The worldwide infrastructure business surpassed $100 billion this year, yet we have only just scratched the surface of this market. Surely, digital-first companies, those born in the cloud, understand all of the advantages of working there, but large enterprises are still moving surprisingly slowly.

Jassy indicated more than once last week that he’s had enough of that. He wants to see companies transform more quickly, and in his view it’s not a technical problem, it’s a lack of leadership. If you want to get to the cloud faster, you need executive buy-in pushing it.

Jassy outlined four steps in his keynote to help companies move faster and get more workloads into the cloud. He believes that doing so will not only continue to enrich his own company, but will also help customers avoid disruptive forces in their markets.

For starters, he says that it’s imperative to get the senior team aligned behind a change. “Inertia is a powerful thing,” Jassy told the audience at his keynote on Tuesday. He’s right of course. There are forces inside every company designed with good reason to protect the organization from massive systemic changes, but these forces — whether legal, compliance, security or HR — can hold back a company when meaningful change is needed.

He said that a fuller shift to the cloud requires ambitious planning. “It’s easy to go a long time dipping your toe in the water if you don’t have an aggressive goal,” he emphasized. To move faster, you also need staff that can help you get there — and that requires training.

Finally, you need a thoughtful, methodical migration plan. Most companies start with the workloads that are easy to move to the cloud, then begin to migrate those that require some adjustment, continuing along this path until they reach the things they might choose not to move at all.

Jassy knows that the faster companies get on board and move to the cloud, the better off his company is going to be, assuming it can capture the lion’s share of those workloads. The trouble is that after you move that first easy batch, getting to the cloud becomes increasingly challenging, and that’s one of the big reasons companies have moved more slowly than Jassy would like.

The power of machine learning to drive adoption

One way to motivate folks to move faster is to help them understand the power of machine learning. AWS made a slew of machine learning announcements designed to give customers a more comprehensive Amazon solution. These included SageMaker Studio, a machine learning development environment with notebook, debugging and monitoring tools, and Autopilot, a tool that gives more insight into automatically generated machine learning models, another way to go faster.
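Autopilot jobs can be kicked off programmatically through the standard SageMaker API. Here is a minimal sketch using boto3; the bucket, IAM role and target column are hypothetical, and the service handles feature engineering and model selection from there:

```python
# A minimal sketch of launching a SageMaker Autopilot job via boto3.
# The S3 paths, IAM role and target column below are hypothetical.
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

sm.create_auto_ml_job(
    AutoMLJobName="lead-scoring-automl",
    InputDataConfig=[{
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://my-bucket/training-data/",
            }
        },
        "TargetAttributeName": "converted",  # the column Autopilot should predict
    }],
    OutputDataConfig={"S3OutputPath": "s3://my-bucket/automl-output/"},
    RoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
)

# Autopilot then generates and ranks candidate models, each of which can be
# inspected as a notebook rather than treated as an opaque artifact.
```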

The company also announced a new connected keyboard called DeepComposer, designed to teach developers about machine learning in a fun way. It joins DeepLens and DeepRacer, two tools released at previous re:Invents. All of this is designed to help developers get comfortable with machine learning.

It wasn’t a coincidence that the company also announced a significant partnership with the NFL to use machine learning to help make players safer. It’s an excellent use case. The NFL has tons of data on its players, and it has decades of film. If it can use that data as fuel for machine learning-driven solutions to help prevent injuries, it could end up being a catalyst for meaningful change driven by machine learning in the cloud.

Machine learning also shows that the cloud isn’t just about agility and speed; it’s about innovation and transformation. If you can take advantage of machine learning to transform your business, that’s one more reason to move to the cloud.

Moving to the edge

Finally, AWS recognizes that computing in the cloud can only get you so far. In spite of the leaps it has made architecturally, there is still a latency issue that will be unacceptable for some workloads. That’s why it was a big deal that the company announced a couple of edge computing solutions last week: the general availability of Outposts, its private cloud in a box, along with a new concept called Local Zones.

The company announced Outposts last year as a way to bring the cloud on prem. It is supposed to behave exactly the same way as traditional cloud resources, but AWS installs, manages and maintains a physical box in your data center. It’s the ultimate in edge computing, bringing the compute power right into your building.

For those who don’t want to go that far, AWS also introduced Local Zones, starting with one in LA, where the cloud infrastructure resources are close by instead of in your building. The idea is the same — to reduce the physical distance between you and your compute resources and reduce latency.

All of this is designed to put the cloud in reach of more customers, to help them move to the cloud faster. Sure, it’s self-serving, but 11 years after I first heard the term cloud computing, maybe it really is time to give companies a harder push.

NFL-AWS partnership hopes to reduce head injuries with machine learning

Today at AWS re:Invent in Las Vegas, NFL commissioner Roger Goodell joined AWS CEO Andy Jassy onstage to announce a new partnership to use machine learning to help reduce head injuries in professional football.

“We’re excited to announce a new strategic partnership together, which is going to combine cloud computing, machine learning and data science to work on transforming player health and safety,” Jassy said today.

NFL football is a fast and violent sport involving large men. Injuries are a part of the game, but the NFL is hoping to reduce head injuries in particular, a huge problem for the sport. A 2017 study found chronic traumatic encephalopathy (CTE) in 110 of the 111 brains of deceased NFL players it examined.

The NFL has a head start in machine learning due to the sheer amount of data it collects on its players. The sport also has decades of video. That means the league should be able to create meaningful simulations that help improve helmet design and inform rule changes that could reduce the concussion risk endemic in the sport.

Goodell recognizes that the sport has all this data, but lacks the expertise to put it to work. That’s where the partnership comes in. “I think what’s most exciting to me is that there are very few relationships that we get involved with where the partner and the NFL can change the game,” he said.

Jeff Miller, executive VP for health and safety innovation at the NFL, says this partnership is part of a broader initiative the league has undertaken over the last few years to find ways to reduce head injuries in the game. “About three and a half years ago the NFL started a project called ‘The Engineering Roadmap’, which was a multibillion-dollar effort supported by our owners to better understand the impact of concussions on the field, then design ways to mitigate those injuries and move the helmet industry forward,” Miller said today.

Jeff Crandall, chairman of the NFL engineering committee, says this involves three main pieces. The first is understanding what happens on the field, particularly who is getting injured and why. Secondly, it involves taking that data and sharing it with the helmet industry to help them build better helmets. The final piece is incentivizing the helmet industry to build better helmets, and to that end the league established the $3 million helmet challenge.

The way AWS helps, of course, is by putting all this data to work with its machine learning toolset. AWS’s VP of artificial intelligence, Matt Wood, says that having all this data is a huge advantage: the league can pool it in a data lake, then use the AWS SageMaker toolset to make sense of it and produce safer outcomes.

The hope is not only to understand how head injuries occur and to prevent them to the extent possible in a violent sport, but also to design better equipment and rule changes that reduce the number of injuries overall. Putting data to work and combining it with machine learning tools could help.

AWS Outposts begins to take shape to bring the cloud into the data center

When AWS announced Outposts last year, a private cloud hardware stack that AWS installs in your data center, there were a lot of unanswered questions. This week at AWS re:Invent in Las Vegas, the company announced general availability, and the vision for this approach began to come into focus.

AWS CEO Andy Jassy, speaking at a press conference earlier today, said there are certain workloads, like running a factory, that need compute resources close by because of low-latency requirements. That’s where Outposts could play well, and where, in his opinion, similar existing solutions fell short, because there wasn’t a smooth connection between the on-prem hardware and the cloud.

“We tried to rethink this with a different approach,” he said. “We thought about it more as trying to distribute AWS on premises. With Outposts, you have racks of AWS servers that have compute, storage, database and analytics and machine learning on them. You get to decide what composition you want and we deliver that to you,” he said.

The hardware is equipped with a slew of services, including Amazon Elastic Compute Cloud (EC2), Amazon Elastic Block Store (EBS), Amazon Virtual Private Cloud, Amazon ECS, Amazon Elastic Kubernetes Service and Amazon EMR. Conspicuously missing is S3 storage, but Amazon promises that will be coming in 2020 with other services on deck, as well.

Make no mistake, the world’s premier cloud infrastructure vendor will be installing a rack of hardware inside your data center. AWS has formed a team inside the company to handle installation, monitoring and management of the equipment.

The easy way to think about this is as a way for companies that might be afraid to go all-in on the cloud to start experimenting with a cloud-like environment, which can be managed from an AWS console or VMware (beginning next year). Yet an Amazon spokesperson indicated that many companies, like Morningstar and Philips Healthcare, both already AWS public cloud customers, are choosing Outposts because it gives them ultra-low latency, almost like a hyper-local availability zone.

These customers need to keep compute resources as close as possible to run a particular set of jobs. While a Local Zone like the one announced for Los Angeles yesterday could also suffice for this, Outposts could help when there isn’t a local option.

Customers can sign up for Outposts much as they would spin up any EC2 instance, but instead of the resources spinning up in the cloud, the order goes to the Outposts team, and the hardware gets racked, stacked and installed on prem.
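In practice that means the familiar EC2 APIs carry over: you associate a subnet with the Outpost, and anything launched into that subnet lands on the rack. A rough sketch with boto3; the VPC, availability zone, AMI and Outpost ARN below are all hypothetical:

```python
# A rough sketch of launching onto an Outpost with boto3 once the rack is
# installed; the VPC, AZ, AMI and Outpost ARN are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Tie a subnet to the Outpost; instances in this subnet land on the rack.
subnet = ec2.create_subnet(
    VpcId="vpc-0abc12345678901de",
    CidrBlock="10.0.3.0/24",
    AvailabilityZone="us-west-2a",  # the Outpost's home AZ
    OutpostArn="arn:aws:outposts:us-west-2:123456789012:outpost/op-0ab1234567890cdef",
)

# From here it is an ordinary EC2 launch; placement follows the subnet.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.xlarge",
    MinCount=1,
    MaxCount=1,
    SubnetId=subnet["Subnet"]["SubnetId"],
)
```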

From then on, Amazon handles the management just as it does with a public cloud instance. For now, installation and management are being handled by an internal Amazon team, but over time AWS plans to work with systems integrators to take on some of that workload.

AWS launches new Local Zone in LA

AWS announced a new Local Zone today in LA, designed to provide customers in Southern California with a set of higher-bandwidth, lower-latency compute resources. It’s not a coincidence that this area is the epicenter of the entertainment industry.

Having a Local Zone gives LA-area companies, whether they work in video processing, gaming, ad tech or machine learning, access to a much more localized set of resources, wrote Jeff Barr of AWS in a blog post announcing the new zone.

“Today we are launching a Local Zone in Los Angeles, California. The Local Zone is a new type of AWS infrastructure deployment that brings select AWS services very close to a particular geographic area. This Local Zone is designed to provide very low latency (single-digit milliseconds) to applications that are accessed from Los Angeles and other locations in Southern California,” Barr wrote.

As he pointed out, LA is home to a lot of companies that require this kind of local compute for gaming, 3D modeling and rendering, video processing (including real-time color correction), video streaming and media production pipelines.

The LA zone is actually part of the broader US West (Oregon) Region. Customers who want to take advantage of the new zone will have to opt in by selecting it in the Local Zones console. It will be billed separately, but also includes access to savings plans.
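The opt-in can also be done programmatically. A minimal sketch with boto3; the group name follows AWS’s us-west-2-lax naming for the LA zone:

```python
# A minimal sketch of opting in to the LA Local Zone with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Local Zones are disabled by default; opting in makes the zone's
# resources available to launch into, billed separately.
ec2.modify_availability_zone_group(
    GroupName="us-west-2-lax-1",
    OptInStatus="opted-in",
)

# Once opted in, the zone shows up alongside the region's other AZs.
zones = ec2.describe_availability_zones(AllAvailabilityZones=True)
for zone in zones["AvailabilityZones"]:
    print(zone["ZoneName"], zone["OptInStatus"])
```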

AWS announces EKS on Fargate is available

Today at AWS re:Invent in Las Vegas, the company announced that Elastic Kubernetes Service is available on Fargate.

EKS is Amazon’s managed Kubernetes service. Fargate, announced in 2017, is a service that enables you to launch containerized applications without worrying about the underlying infrastructure.

“Starting today, you can start using Amazon Elastic Kubernetes Service to run Kubernetes pods on AWS Fargate. Amazon EKS and Fargate make it straightforward to run Kubernetes-based applications on AWS by removing the need to provision and manage infrastructure for pods,” the company wrote in a blog post announcing the new feature.

A pod is simply a group of containers that you launch together on the same Kubernetes cluster. Given that Kubernetes enables you to launch those pods in an automated fashion, it makes sense to also provision the underlying infrastructure required to run them in an automated fashion.
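On EKS, the mapping from pods to Fargate capacity is configured through a Fargate profile, which tells the cluster which pods, matched by namespace and labels, should be scheduled onto Fargate. A minimal sketch with boto3; the cluster name, execution role and subnets are hypothetical:

```python
# A minimal sketch of creating an EKS Fargate profile with boto3; the
# cluster name, role ARN and subnet IDs are hypothetical.
import boto3

eks = boto3.client("eks", region_name="us-east-1")

eks.create_fargate_profile(
    fargateProfileName="default-namespace",
    clusterName="my-cluster",
    podExecutionRoleArn="arn:aws:iam::123456789012:role/EKSFargatePodExecutionRole",
    subnets=["subnet-0abc1234567890def", "subnet-0def1234567890abc"],
    # Any pod created in the "default" namespace is scheduled onto Fargate;
    # selectors can also match on Kubernetes labels.
    selectors=[{"namespace": "default"}],
)
```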

“With AWS Fargate, you pay only for the amount of vCPU and memory resources that your pod needs to run. This includes the resources the pod requests in addition to a small amount of memory needed to run Kubernetes components alongside the pod. Pods running on Fargate follow the existing pricing model,” the company wrote in the blog.

That means developers won’t have to worry about over-provisioning, because Fargate should provide exactly the resources needed to run a pod at any given moment and no more.

This feature is available starting today in US East (N. Virginia), US East (Ohio), Europe (Ireland), and Asia Pacific (Tokyo).

Canalys: Chinese cloud infrastructure spending reaches almost $3B a quarter

Canalys released its latest cloud infrastructure spending numbers for China today, and it’s all trending upward. For starters, the market reached $2.9 billion for the quarter, an increase of 60.8%. China now accounts for 10.4% of worldwide cloud spending, meaning it’s second only to the U.S. in overall spending.

That is pretty amazing, given that China came late to the cloud, but also not surprising given the sheer size of the overall potential market. Once it got going, it was bound to gain momentum simply because of that size. Still, it is striking that China’s market share is three times that of the next closest country, according to Canalys.

Most of the business is going to Chinese cloud companies. Alibaba, which, like Amazon, has a retail arm and a cloud arm, leads the way by far, with 45% of the market share (worth $1.3 billion). Tencent is second, with 18.6%, followed by AWS with 8.6% and Baidu with 8.2%. AWS was the only non-Chinese company to register any market share.

Wong Yih Khai, senior analyst at Canalys, says market demand for cloud infrastructure services in China continues to grow at a rapid pace, led by demand for artificial intelligence services.

“With this growing demand, cloud service providers are having to differentiate themselves in a highly competitive environment. One of the key emerging differentiators, especially among local cloud service providers, is the development of artificial intelligence (AI) capabilities, either as a service or embedded in their own offerings. AI for facial recognition is already widely used across the country in many smart city deployments and will be a key part of healthcare, retail, finance, transport and industry cloud solutions,” he said in a statement.

Interestingly enough, China’s market share breaks down somewhat like worldwide market share, where Amazon leads with around 34%, Microsoft is second with around 15% and Google is third with around 8%.

Box looks to balance growth and profitability as it matures

Prevailing wisdom states that as an enterprise SaaS company evolves, there’s a tendency to sacrifice profitability for growth — understandably so, especially in the early days of the company. At some point, however, a company needs to become profitable.

Box has struggled to reach that goal since going public in 2015, but yesterday, it delivered a mostly positive earnings report. Wall Street seemed to approve, with the stock up 6.75% as we published this article.

Box CEO Aaron Levie says the goal moving forward is to find better balance between growth and profitability. In his post-report call with analysts, Levie pointed to some positive numbers.

“As we shared in October [at BoxWorks], we are focused on driving a balance of long-term growth and improved profitability as measured by the combination of revenue growth plus free cash flow margin. On this combined metric, we expect to deliver a significant increase in FY ’21 to at least 25% and eventually reaching at least 35% in FY ’23,” Levie said.

Growing the platform

Part of the maturation and drive to profitability is spurred by the fact that Box now has a more complete product platform. While many struggle to understand the company’s business model, it provides content management in the cloud, modernizing that aspect of enterprise software. As a result, there are few pure-play content management vendors that can do what Box does in a cloud context.

Xerox tells HP it will bring takeover bid directly to shareholders

Xerox fired the latest volley in the Xerox-HP merger letter wars today. Xerox CEO John Visentin wrote to the HP board that his company planned to take its $33.5 billion offer directly to HP shareholders.

He began his letter with a tone befitting a hostile takeover attempt, stating that the board’s refusal to negotiate defied logic. “We have put forth a compelling proposal – one that would allow HP shareholders to both realize immediate cash value and enjoy equal participation in the substantial upside expected to result from a combination. Our offer is neither ‘highly conditional’ nor ‘uncertain’ as you claim,” Visentin wrote in his letter.

He added, “We plan to engage directly with HP shareholders to solicit their support in urging the HP Board to do the right thing and pursue this compelling opportunity.”

The letter was in response to one yesterday from HP, in which the company turned down Xerox’s latest overture, suggesting the deal was beyond Xerox’s means. It called into question Xerox’s current financial situation, citing Xerox’s own financial reports, and took exception to the way Xerox was courting the company.

“It is clear in your aggressive words and actions that Xerox is intent on forcing a potential combination on opportunistic terms and without providing adequate information,” the company wrote.

Visentin fired back in his letter: “While you may not appreciate our ‘aggressive’ tactics, we will not apologize for them. The most efficient way to prove out the scope of this opportunity with certainty is through mutual due diligence, which you continue to refuse, and we are obligated to require.”

He pulled no punches, writing that he believes the deal is good for both companies and their shareholders. “The potential benefits of a combination between HP and Xerox are self-evident. Together, we could create an industry leader – with enhanced scale and best-in-class offerings across a complete product portfolio – that will be positioned to invest more in innovation and generate greater returns for shareholders.”

Patrick Moorhead, founder and principal analyst at Moor Insights & Strategy, thinks HP ultimately has the upper hand in this situation. “I feel like we have seen this movie before when Carl Icahn meddled with Dell in a similar way. Xerox is a third the size of HP Inc., has been steadily declining in revenue, is running out of options, and needs HP more than HP needs it.”

It would seem Xerox has chosen a no-holds-barred approach to the situation. The pen is now in HP’s hands as we await the next letter and see how the printing giant intends to respond to the latest missive from Xerox.

New Amazon capabilities put machine learning in reach of more developers

Today, Amazon announced a new approach that it says will put machine learning technology in reach of more developers and line-of-business users. Amazon has been making a flurry of announcements ahead of its re:Invent customer conference next week in Las Vegas.

While the company offers plenty of tools for data scientists to build machine learning models and process, store and visualize data, it wants to put that capability directly in the hands of developers with the help of the popular database query language, SQL.

By taking advantage of tools like Amazon QuickSight, Aurora and Athena in combination with SQL queries, developers can have much more direct access to machine learning models and underlying data without any additional coding, says VP of artificial intelligence at AWS, Matt Wood.

“This announcement is all about making it easier for developers to add machine learning predictions to their products and their processes by integrating those predictions directly with their databases,” Wood told TechCrunch.

For starters, Wood says developers can take advantage of Aurora, the company’s MySQL- (and Postgres-) compatible database, to build a simple SQL query into an application, which will automatically pull the data into the application and run whatever machine learning model the developer associates with it.
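Amazon hasn’t published Wood’s exact example, but Aurora MySQL’s SageMaker integration works roughly like this: declare a SQL function as an alias for a SageMaker endpoint, and every call to that function invokes the model. A sketch using the pymysql driver; the cluster endpoint, credentials, table and model endpoint are all hypothetical:

```python
# A sketch of Aurora MySQL's SageMaker integration; the cluster host,
# credentials, table and SageMaker endpoint name are all hypothetical,
# and the cluster is assumed to have an IAM role permitting SageMaker access.
import pymysql

conn = pymysql.connect(
    host="my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    user="admin",
    password="not-a-real-password",
    database="sales",
)

with conn.cursor() as cur:
    # One-time setup: expose the SageMaker endpoint as a SQL function.
    cur.execute("""
        CREATE FUNCTION lead_score (income DOUBLE, visits INT)
        RETURNS FLOAT
        ALIAS AWS_SAGEMAKER_INVOKE_ENDPOINT
        ENDPOINT NAME 'lead-scoring-endpoint'
    """)
    # Every row now gets a prediction from the model, with no extra wiring.
    cur.execute("SELECT customer_id, lead_score(income, visits) FROM leads LIMIT 10")
    for row in cur.fetchall():
        print(row)
```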

The second piece involves Athena, the company’s serverless query service. As with Aurora, developers can write a SQL query — in this case, against any data store — and based on a machine learning model they choose, return a set of data for use in an application.
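In Athena, the same idea is expressed with a USING EXTERNAL FUNCTION clause that binds a SQL function to a SageMaker endpoint for the duration of a query. A sketch submitted through boto3; the endpoint name, database, table and results bucket are hypothetical:

```python
# A sketch of Athena's SageMaker integration via boto3; the endpoint name,
# database, table and results bucket are hypothetical.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# The USING EXTERNAL FUNCTION clause maps a SQL function to a
# SageMaker endpoint for the duration of this query.
query = """
USING EXTERNAL FUNCTION predict_lead_score(income DOUBLE, visits INT)
RETURNS DOUBLE
SAGEMAKER 'lead-scoring-endpoint'
SELECT customer_id,
       predict_lead_score(income, visits) AS score
FROM leads
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "sales"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
```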

The final piece is QuickSight, Amazon’s data visualization tool. Developers can take the data returned by one of the other tools and use it to create visualizations inside whatever application they are creating.

“By making sophisticated ML predictions more easily available through SQL queries and dashboards, the changes we’re announcing today help to make ML more usable and accessible to database developers and business analysts. Now anyone who can write SQL can make — and importantly use — predictions in their applications without any custom code,” Amazon’s Matt Asay wrote in a blog post announcing these new capabilities.

Asay added that this approach is far easier than what developers had to do in the past to achieve the same result. “There is often a large amount of fiddly, manual work required to take these predictions and make them part of a broader application, process or analytics dashboard,” he wrote.

As an example, Wood offers a lead-scoring model you might use to pick the sales targets most likely to convert. “Today, in order to do lead scoring you have to go off and wire up all these pieces together in order to be able to get the predictions into the application,” he said. With this new capability, you can get there much faster.

“Now, as a developer I can just say that I have this lead scoring model which is deployed in SageMaker, and all I have to do is write literally one SQL statement that I do all day long into Aurora, and I can start getting back that lead scoring information. And then I just display it in my application and away I go,” Wood explained.

As for the machine learning models, these can come pre-built from Amazon, be developed by an in-house data science team or purchased in a machine learning model marketplace on Amazon, says Wood.

Today’s announcements from Amazon are designed to simplify machine learning and data access, reducing the amount of coding needed to get from query to answer.