Don’t expect Ubuntu maker Canonical to IPO this year

Canonical, the company best known for its Ubuntu Linux distribution, is on a path to an IPO. That’s something Canonical founder and CEO Mark Shuttleworth has been quite open about. But don’t expect that IPO to happen this year.

“We did decide as a company — and that’s not just my decision — but we did decide that we want to have a commercial focus,” Shuttleworth told me during an interview at the OpenStack Summit in Vancouver, Canada today. “So we picked cloud and IoT as the areas to develop that. And being a public company, given that most of our customers are now global institutions, it makes sense for us also to be a global institution. I think it would be great for my team to be part of a public company. It would be a lot of work, but we are not shy of work.”

Unsurprisingly, Shuttleworth didn’t want to talk about an exact timeline for the IPO. “We will do the right thing at the right time,” he said. That right time is not this year, though. “No, there is a process that you have to go through and that takes time. We know what we need to hit in terms of revenue and growth and we’re on track.”

Getting the company on track was very much Shuttleworth’s focus over the course of the last year. That meant killing projects like the Ubuntu Phone (which Shuttleworth said was “painful”), as well as the Unity desktop environment. Instead, the company’s focus is now squarely on helping enterprises stand up and manage their private clouds — no matter whether those run OpenStack, Kubernetes or a combination of the two.

That doesn’t mean Canonical has forgotten about the desktop, though. Shuttleworth told me that the desktop team is still the same size as before. He also noted that the desktop is still a passion of his.

“We took some big risks a year ago,” he said. “We cut a bunch of stuff that people loved about us. We had to see if people were going to respond commercially.” That move is paying off now, though. During a keynote earlier today, Shuttleworth noted that Canonical is now in talks for about 200 new deployments for 2018 — up from about 40 in 2017.

While the hype around OpenStack has died down considerably over the course of the last two years, Canonical is still seeing good growth there — especially now that there are only a few major players left, including Red Hat, which Shuttleworth name-checked a number of times during both his keynote and our conversation.

Why are things going well for Canonical when others couldn’t make a business out of OpenStack? “I believe for this community — the OpenStack community — it’s really important to deliver on the underlying promise of more cost-effective infrastructure,” he said. “You can love technology and you can have new projects and it can all be kumbaya and open source. In practice, to me, most of the stuff that we saw at OpenStack was bullshit. The stuff that really matters is computers, virtual machines, virtual disks, virtual networks. So we ruthlessly focus on delivering that and then also solving all the problems around that.”

Today, Canonical can deliver an OpenStack platform to an enterprise in two weeks — with all of the hardware and services in place. “I don’t mind being a bit controversial because we are delivering the promise of OpenStack,” he said. “The promise of OpenStack wasn’t delivering endless summits and endless new projects and endless new ideas.” That, he said, is exactly the kind of bullshit he was referring to in his earlier comments.

Looking ahead, Shuttleworth noted that he’s especially interested in what Canonical can do around IoT solutions. Thanks to Ubuntu Core and its Snap system, it has all the tools in place, including a lightweight management layer. The company is also focusing heavily on getting more customers in the financial services sector. No doubt, having a bunch of large banks and brokerages as reference customers will help the company when it comes to its IPO — and my guess is that we can expect that one to happen next year.

Nvidia’s researchers teach a robot to perform simple tasks by observing a human

Industrial robots are typically all about repeating a well-defined task over and over again. Usually, that means performing those tasks a safe distance away from the fragile humans that programmed them. More and more, however, researchers are now thinking about how robots can work in close proximity to humans and even learn from them. In part, that’s what Nvidia’s new robotics lab in Seattle focuses on, and the company’s research team today presented some of its most recent work on teaching robots by observing humans at the International Conference on Robotics and Automation (ICRA) in Brisbane, Australia.

Nvidia’s director of robotics research Dieter Fox.

As Dieter Fox, the senior director of robotics research at Nvidia (and a professor at the University of Washington), told me, the team wants to enable this next generation of robots that can safely work in close proximity to humans. But to do that, those robots need to be able to detect people, track their activities and learn how they can help them. That may be in a small-scale industrial setting or in somebody’s home.

While it’s possible to train an algorithm to successfully play a video game through rote repetition and learning from its mistakes, Fox argues that the decision space for training robots that way is far too large to do this efficiently. Instead, a team of Nvidia researchers led by Stan Birchfield and Jonathan Tremblay developed a system that allows them to teach a robot to perform new tasks by simply observing a human.

The tasks in this example are pretty straightforward and involve nothing more than stacking a few colored cubes. But it’s also an important step in the overall journey toward quickly teaching robots new tasks.

The researchers first trained a sequence of neural networks to detect objects, infer the relationship between them and then generate a program to repeat the steps it witnessed the human perform. The researchers say this new system allowed them to train their robot to perform this stacking task with a single demonstration in the real world.
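To make the shape of that pipeline concrete, here’s a deliberately toy sketch in Python. Every name and data structure in it is hypothetical (in the real system, each stage is a trained neural network, and Nvidia hasn’t published this exact interface), but it mirrors the detect, infer, generate flow described above.

```python
# Toy sketch of the observe-and-repeat pipeline; all names are hypothetical.
# In the real system, each of these hand-written stages is a neural network.

def detect_objects(frame):
    # Stand-in for the object-detection network. In this toy example a
    # "frame" is already a dict mapping cube color -> (x, y) grid position.
    return frame

def infer_relationships(detections):
    # Stand-in for the relationship network: "a is on b" if a sits one
    # grid cell directly above b in the final frame of the demonstration.
    last = detections[-1]
    return [
        (a, "on", b)
        for a, (ax, ay) in last.items()
        for b, (bx, by) in last.items()
        if a != b and ax == bx and ay == by + 1
    ]

def generate_program(relations):
    # Stand-in for the program-generation network: emit one human-readable
    # pick-and-place instruction per inferred "on" relation.
    return [f"pick({a}); place({a}, on={b})" for a, _, b in relations]

# A two-frame "demonstration": the human moves the red cube onto the blue one.
demo = [
    {"red": (0, 0), "blue": (1, 0)},
    {"red": (1, 1), "blue": (1, 0)},
]
detections = [detect_objects(frame) for frame in demo]
print(generate_program(infer_relationships(detections)))
# -> ['pick(red); place(red, on=blue)']
```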

One nifty aspect of this system is that it generates a human-readable description of the steps it’s performing. That way, it’s easier for the researchers to figure out what happened when things go wrong.

Nvidia’s Stan Birchfield tells me that the team aimed to make training the robot easy for a non-expert — and few things are easier to do than to demonstrate a basic task like stacking blocks. In the example the team presented in Brisbane, a camera watches the scene and the human simply walks up, picks up the blocks and stacks them. Then the robot repeats the task. Sounds easy enough, but it’s a massively difficult task for a robot.

To train the core models, the team mostly used synthetic data from a simulated environment. As both Birchfield and Fox stressed, it’s these simulations that allow for quickly training robots. Training in the real world would take far longer, after all, and can also be far more dangerous. And for most of these tasks, there is no labeled training data available to begin with.

“We think using simulation is a powerful paradigm going forward to train robots to do things that weren’t possible before,” Birchfield noted. Fox echoed this and noted that this need for simulations is one of the reasons why Nvidia thinks that its hardware and software are ideally suited for this kind of research. There is a very strong visual aspect to this training process, after all, and Nvidia’s background in graphics hardware surely helps.

Fox admitted that there’s still a lot of research left to be done here (most of the simulations aren’t photorealistic yet, after all), but that the core foundations for this are now in place.

Going forward, the team plans to expand the range of tasks that the robots can learn and the vocabulary necessary to describe those tasks.

AWS adds more EC2 instance types with local NVMe storage

AWS is adding a new kind of virtual machine to its growing list of EC2 options. These new machines feature local NVMe storage, which offers significantly faster throughput than standard SSDs.

These new so-called C5d instances join the service’s existing lineup of compute-optimized C5 instances. AWS cites high-performance computing workloads, real-time analytics, multiplayer gaming and video encoding as potential use cases for its regular C5 machines, and with the addition of this faster storage option, chances are users who switch will see even better performance.

Since the local storage is physically attached to the machine, its contents are lost when the instance is stopped or terminated, so this is meant for storing intermediate files, not for long-term storage.
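For illustration, here’s a minimal sketch of launching one of the new instances with boto3, the AWS SDK for Python. The AMI ID and key pair name are placeholders you’d replace with your own; c5d.large is the smallest size in the new family.

```python
# Minimal sketch: launch a C5d instance with boto3 (pip install boto3).
# The AMI ID and key pair name below are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # Oregon, as in the pricing example

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: use a current Amazon Linux AMI
    InstanceType="c5d.large",         # smallest C5d size
    KeyName="my-key-pair",            # placeholder key pair
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
# Remember: the local NVMe volume is ephemeral; anything written to it is
# gone once this instance is stopped or terminated.
print(f"Launched {instance_id}")
```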

Both C5 and C5d instances share the same underlying platform, with 3.0 GHz Intel Xeon Platinum 8000 processors.

The new instances are now available in a number of AWS’s U.S. regions, as well as in the service’s Canada region. Prices are, unsurprisingly, a bit higher than for regular C5 machines, starting at $0.096 per hour for the most basic machine with 4 GiB of RAM in AWS’s Oregon region, for example. Regular C5 machines start at $0.085 per hour.

It’s worth noting that the EC2 F1 instances, which offer access to FPGAs, also use NVMe storage. Those are highly specialized machines, though, while the C5 instances are interesting to a far wider audience of developers.

On top of the NVMe announcement, AWS today also noted that its EC2 Bare Metal Instances are now generally available. These machines provide direct access to all the features of the underlying hardware, making them ideal for running applications that simply can’t run on virtualized hardware and for running secured container clusters. These bare metal instances also offer support for NVMe storage.

What we know about Google’s Duplex demo so far

The highlight of Google’s I/O keynote earlier this month was the reveal of Duplex, a system that can set up a salon appointment or a restaurant reservation for you by calling those places, chatting with a human and getting the job done. That demo drew lots of laughs at the keynote, but after the dust settled, plenty of ethical questions popped up because of how Duplex tries to fake being human. Over the course of the last few days, those were joined by questions from people like writer John Gruber about whether the demo was staged or edited. Axios then asked Google a few simple questions about the demo that Google has refused to answer.

We have reached out to Google with a number of very specific questions about this and have not heard back. As far as I can tell, the same is true for other outlets that have contacted the company.

If you haven’t seen the demo, take a look at this before you read on.

So did Google fudge this demo? Here is why people are asking and what we know so far:

During his keynote, Google CEO Sundar Pichai noted multiple times that we were listening to real calls and real conversations (“What you will hear is the Google Assistant actually calling a real salon.”). The company made the same claims in a blog post (“While sounding natural, these and other examples are conversations between a fully automatic computer system and real businesses.”).

Google has so far declined to disclose the names of the businesses it worked with and whether it had permission to record those calls. California is a two-party consent state, so our understanding is that permission to record these calls would have been necessary (unless those calls were made to businesses in a state with different laws). So on top of the ethics questions, there are also a few legal questions here.

We have some clues, though. In the blog post, Google Duplex lead Yaniv Leviathan and engineering manager Matan Kalman posted a picture of themselves eating a meal “booked through a call from Duplex.” Thanks to the wonder of crowdsourcing and a number of intrepid sleuths, we know that this restaurant was Hongs Gourmet in Saratoga, California. We called Hongs Gourmet last night, but the person who answered the phone referred us to her manager, who she told us had left for the day. (We’ll give it another try today.)

Sadly, the rest of Google’s audio samples don’t contain any other clues as to which restaurants were called.

What prompted much of the suspicion here is that nobody who answers the Assistant’s calls in Google’s samples gives their own name or the name of the business. My best guess is that Google cut those parts from the conversations, but it’s hard to tell. Some of the audio samples do, however, sound as if the beginning was edited out.

Google clearly didn’t expect this project to be controversial. The keynote demo was meant to dazzle — and it did so in the moment because, if it really works, this technology represents the culmination of years of work on machine learning. But the company evidently didn’t think through the consequences.

My best guess is that Google didn’t fake these calls. But it surely only presented the best examples of its tests. That’s what you do in a big keynote demo, after all, even though in hindsight, showing the system fail or trying to place a live call would have been even better (remember Steve Jobs’ Starbucks call?).

For now, we’ll see if we can get more answers, but so far all of our calls and emails have gone unanswered. Google could easily do away with all of those questions around Duplex by simply answering them, but so far, that’s not happening.

Contentstack doubles down on its headless CMS

It’s been about two years since Built.io launched Contentstack, a headless content management system for the enterprise. Contentstack was always a bit of an odd product at Built.io, which mostly focuses on providing integration tools like Flow for large companies (think IFTTT, but for enterprise workflows). Contentstack is pretty successful in its own right, though, with customers ranging from the Miami Heat to Cisco and Best Buy. Because of this, Built.io decided to spin out the service into its own business at the beginning of this year, and now it’s doubling down on serving modern enterprises that want to bring their CMS strategy into the 21st century.

As Built.io COO Matthew Baier told me, the last few years were quite good to Contentstack. The company has doubled its deal sizes since January, for example, and it’s now seeing hockey-stick growth. Contentstack now has about 40 employees and a dedicated support team and sales staff. Why spin it out as its own company? “This has been a red-hot space for us,” Baier said. “What we decided to do last year was to do both opportunities justice and really double down on Contentstack as a separate business.”

Back when Contentstack launched, the service positioned itself as an alternative to Drupal and WordPress. Now, the team measures itself more against the likes of Adobe’s CMS tools.

And these days, it’s all about headless CMS, which essentially decouples the backend from the front-end presentation. That’s a relatively new trend in the world of CMS, but one that enables companies to bring their content (be that text, images or video and audio) not just to the web but also to mobile apps and new platforms like Amazon’s Alexa and Google’s Assistant. Using this model, the CMS essentially becomes another API that front-end developers can use. Contentstack likes to call this “Content-as-a-Service,” but I’m tired of X-as-a-Service monikers, so I won’t do that. It is worth noting that in this context, “content” can be anything from blog posts to the descriptions and images that go with a product on an e-commerce site.
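To see what that looks like in practice, here’s a minimal sketch of a front end pulling content from a headless CMS over HTTP. The endpoint, auth header and field names are hypothetical, not Contentstack’s actual API; the point is only that the front end consumes plain JSON it can render anywhere.

```python
# Minimal sketch of the headless model: the front end fetches structured
# content as JSON instead of receiving server-rendered pages. The URL,
# header and field names are hypothetical, not Contentstack's real API.
import requests

API_BASE = "https://cms.example.com/v1"   # hypothetical endpoint
HEADERS = {"api_key": "YOUR_API_KEY"}     # hypothetical auth scheme

resp = requests.get(f"{API_BASE}/content_types/product/entries", headers=HEADERS)
resp.raise_for_status()

for entry in resp.json()["entries"]:
    # The same JSON can feed a website, a mobile app or a voice assistant.
    print(entry["title"], "->", entry["description"])
```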

“Headless CMS is exciting because it is modernizing the space,” explained Baier. “It’s probably the most exciting thing to happen in this space in 25 years. […] We are doing for CMS what Salesforce did for CRM.”

Not every company needs this kind of system that’s ready for an omni-channel strategy, of course, but even for companies that still mostly focus on the web — or whose website is the main product — a service like Contentstack makes sense because it allows them to quickly iterate on the front end without having to worry about the backend service that powers it.

The latest version of Contentstack introduces a number of new features for content editors, including a better workflow management system that streamlines the creation, review and deployment of content in the system, as well as support for publishing rules that ensure only approved content makes it into the official channels (it wouldn’t be an enterprise product if it didn’t have some role-based controls, right?). Also new in today’s update is the ability to bundle content together and then release it en masse, maybe to coincide with a major release, promotional campaign or other event.

Looking ahead, Baier tells me that the team wants to delve a bit deeper into how it can integrate with more third-party services. Given that this is Built.io’s bread and butter, that’s probably no major surprise, but in the CMS world, integrations are often a major pain point. It’s those integrations, though, that users really need as they now rely on more third-party services than ever to run their businesses. “We believe the future is in these composable stacks,” Baier noted.

The team is also looking at how it can best use AI and machine learning, especially in the context of SEO.

One thing Contentstack and Built.io have never done is take outside money. Baier says “never say never,” but it doesn’t look like the company is likely to look for outside funding anytime soon.

Heptio launches an open source load balancer for Kubernetes and OpenStack

Heptio is one of the more interesting companies in the container ecosystem. In part, that’s due to the simple fact that it was founded by Craig McLuckie and Joe Beda, two of the three engineers behind the original Kubernetes project, but also because of the technology it’s developing and the large amount of funding it has raised to date.

As the company announced today, it saw its revenue grow 140 percent from the last quarter of 2017 to the first quarter of 2018. In addition, Heptio says its headcount quadrupled since the beginning of 2017. Without any actual numbers, that kind of data doesn’t mean all that much. It’s easy to achieve high growth numbers if you’re starting out from zero, after all. But it looks like things are going well at the company and that the team is finding its place in the fast-growing Kubernetes ecosystem.

In addition to announcing these numbers, the team also today launched a new open source project that will join the company’s existing stable of tools like the cluster recovery tool Ark and the Kubernetes cluster monitoring tool Sonobuoy.

This new tool, Heptio Gimbal, has a very specific use case that is probably only of interest to a relatively small number of users — but for them, it’ll be a lifeline. Gimbal, which Heptio developed together with Yahoo Japan subsidiary Actapio, helps enterprises route traffic into both Kubernetes clusters and OpenStack deployments. Many enterprises now run these technologies in parallel, and while some are now moving beyond OpenStack and toward a more Kubernetes-centric architecture, they aren’t likely to do away with their OpenStack investments anytime soon.

“We approached Heptio to help us modernize our infrastructure with Kubernetes without ripping out legacy investments in OpenStack and other back-end systems,” said Norifumi Matsuya, CEO and President at Actapio. “Application delivery at scale is key to our business. We needed faster service discovery and canary deployment capability that provides instant rollback and performance measurement. Gimbal enables our developers to address these challenges, which at the macro-level helps them increase their productivity and optimize system performance.”

Gimbal uses many of Heptio’s existing open source tools, as well as the Envoy proxy, which is part of the Cloud Native Computing Foundation’s stable of cloud-native projects. For now, Gimbal only supports one specific OpenStack release (the ‘Mitaka’ release from 2016), but the team is looking at adding support for VMware and EC2 in the future.

GitLab gets a native integration with Google’s Kubernetes Engine

GitLab, one of the most popular self-hosted Git services, has been on a bit of a roll lately. Barely two weeks after launching its integration with GitHub, the company today announced that developers on its platform can now automatically spin up a cluster on Google’s Kubernetes Engine (GKE) and deploy their applications to it with just a few clicks.

To build this feature, the company collaborated with Google, but this integration also makes extensive use of GitLab’s existing Auto DevOps tools, which already offer similar functionality for working with containers. Auto DevOps aims to take all the grunt work out of setting up CI/CD pipelines and deploying to containers.
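As a rough illustration of how hands-off this is meant to be, here’s a sketch that flips on Auto DevOps for a project through GitLab’s v4 REST API. It assumes the auto_devops_enabled project attribute; the host, project ID and token are placeholders.

```python
# Sketch: enable Auto DevOps on a GitLab project via the v4 REST API.
# Host, project ID and token are placeholders; this assumes the
# auto_devops_enabled attribute on the Projects API.
import requests

GITLAB_API = "https://gitlab.example.com/api/v4"
HEADERS = {"PRIVATE-TOKEN": "YOUR_ACCESS_TOKEN"}

resp = requests.put(
    f"{GITLAB_API}/projects/42",            # placeholder project ID
    headers=HEADERS,
    data={"auto_devops_enabled": True},     # let GitLab set up the CI/CD pipeline
)
resp.raise_for_status()
print("Auto DevOps enabled:", resp.json().get("auto_devops_enabled"))
```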

“Before the GKE integration, GitLab users needed an in-depth understanding of Kubernetes to manage their own clusters,” said GitLab CEO Sid Sijbrandij in today’s announcement. “With this collaboration, we’ve made it simple for our users to set up a managed deployment environment on [Google Cloud Platform] and leverage GitLab’s robust Auto DevOps capabilities.”

To make use of the GKE integration, developers only have to connect to their Google accounts from GitLab. Since GKE automatically manages the cluster, developers will ideally be able to fully focus on writing their application and leave the deployment and management to GitLab and Google.

These new features, which are part of the GitLab 10.6 release, are now available to all GitLab users.


Red Hat looks beyond Linux

The Red Hat Linux distribution is turning 25 years old this week. What started as one of the earliest Linux distributions is now the most successful open-source company, and its success was a catalyst for others to follow its model. Today’s open-source world is very different from those heady days in the mid-1990s when Linux looked to be challenging Microsoft’s dominance on the desktop, but Red Hat is still going strong.

To put all of this into perspective, I sat down with the company’s current CEO (and former Delta Air Lines COO) Jim Whitehurst to talk about the past, present and future of the company, and open-source software in general. Whitehurst took the Red Hat CEO position 10 years ago, so while he wasn’t there in the earliest days, he definitely witnessed the evolution of open source in the enterprise, which is now more widespread than ever.

“Ten years ago, open source at the time was really focused on offering viable alternatives to traditional software,” he told me. “We were selling layers of technology to replace existing technology. […] At the time, it was open source showing that we can build open-source tech at lower cost. The value proposition was that it was cheaper.”

At the time, he argues, the market was about replacing Windows with Linux or IBM’s WebSphere with JBoss. And that defined Red Hat’s role in the ecosystem, too, which was less about technological innovation than about packaging. “For Red Hat, we started off taking these open-source projects and making them usable for traditional enterprises,” said Whitehurst.

Jim Whitehurst, Red Hat president and CEO (photo by Joan Cros/NurPhoto via Getty Images)

About five or six years ago, something changed, though. Large corporations, including Google and Facebook, started open sourcing their own projects because they didn’t look at some of the infrastructure technologies they opened up as competitive advantages. Instead, having them out in the open allowed them to profit from the ecosystems that formed around those projects. “The biggest part is it’s not just Google and Facebook finding religion,” said Whitehurst. “The social tech around open source made it easy to make projects happen. Companies got credit for that.”

He also noted that developers now look at their open-source contributions as part of their resumé. With an increasingly mobile workforce that regularly moves between jobs, companies that want to compete for talent are almost forced to open source at least some of the technologies that don’t give them a competitive advantage.

As the open-source ecosystem evolved, so did Red Hat. As enterprises started to understand the value of open source (and stopped being afraid of it), Red Hat shifted from simply talking to potential customers about savings to showing how open source can help them drive innovation. “We’ve gone from being commoditizers to being innovators. The tech we are driving is now driving net new innovation,” explained Whitehurst. “We are now not going in to talk about saving money but to help drive innovation inside a company.”

Over the last few years, that included making acquisitions to help drive this innovation. In 2015, Red Hat bought IT automation service Ansible, for example, and last month, the company closed its acquisition of CoreOS, one of the larger independent players in the Kubernetes container ecosystem — all while staying true to its open-source roots.

There is only so much innovation you can do around a Linux distribution, though, and as a public company, Red Hat also had to look beyond that core business and build on it to better serve its customers. In part, that’s what drove the company to launch services like OpenShift, for example, a container platform that sits on top of Red Hat Enterprise Linux and — not unlike the original Linux distribution — integrates technologies like Docker and Kubernetes and makes them more easily usable inside an enterprise.

The reason for that? “I believe that containers will be the primary way that applications will be built, deployed and managed,” he told me, and argued that his company, especially after the CoreOS acquisition, is now a leader in both containers and Kubernetes. “When you think about the importance of containers to the future of IT, it’s a clear value for us and for our customers.”

The other major open-source project Red Hat is betting on is OpenStack. That may come as a bit of a surprise, given that popular opinion in the last year or so has shifted against the massive project that wants to give enterprises an open-source, on-premises alternative to AWS and other cloud providers. “There was a sense among big enterprise tech companies that OpenStack was going to be their savior from Amazon,” Whitehurst said. “But even OpenStack, flawlessly executed, put you where Amazon was five years ago. If you’re Cisco or HP or any of those big OEMs, you’ll say that OpenStack was a disappointment. But from our view as a software company, we are seeing good traction.”

Because OpenStack is especially popular among telcos, Whitehurst believes it will play a major role in the shift to 5G. “When we are talking to telcos, […] we are very confident that OpenStack will be the platform for 5G rollouts.”

With OpenShift and OpenStack, Red Hat believes that it has covered both the future of application development and the infrastructure on which those applications will run. Looking a bit further ahead, though, Whitehurst also noted that the company is starting to look at how it can use artificial intelligence and machine learning to make its own products smarter and more secure, but also at how it can use its technologies to enable edge computing. “Now that large enterprises are also contributing to open source, we have a virtually unlimited amount of material to bring our knowledge to,” he said.


Google needs your help finding Waldo

At some point in the not-so-distant past, April Fools’ Day was about pranks and hoaxes, but given that we apparently have enough of those on the web, the day has somehow morphed into a celebration of random jokey things. This year’s Google Maps gag is no exception.

Starting today, when you open Google Maps on your phone or desktop, you’ll see Waldo in his trademark red and white sweater, waving at you from the side of your screen. That’s because Waldo is sharing his location with you for the next few days and he really wants to be found (or not… I’m never quite sure what Waldo’s real motivations are…). You can also ask the Google Assistant, “Hey Google. Where’s Waldo?”

Then, when you click on Waldo on the map, you get to see a standard “Where’s Waldo” image and your job is to find him, as well as Woof, Wenda, Wizard Whitebeard and Odlaw.

Now if Google had wanted to make this a real April Fools’ joke, it would’ve announced this and then never released it, or just shown you a standard “Where’s Waldo” image without Waldo. That way, it would’ve driven everybody mad. But I’m pretty sure it’s for real, so head over to Google Maps and give it a try.
