Google gives its AI the reins over its data center cooling systems

The inside of a data center is loud and hot — and keeping servers from overheating is a major factor in the cost of running them. It’s no surprise, then, that the big players in this space, including Facebook, Microsoft and Google, are all looking for different ways of saving on cooling costs. Facebook uses cool outside air when possible, Microsoft is experimenting with underwater data centers and Google is being Google and looking to its AI models for some extra savings.

A few years ago, Google, through its DeepMind affiliate, started looking into how it could use machine learning to provide its operators some additional guidance on how to best cool its data centers. At the time, though, the system only made recommendations and the human operators decided whether to implement them. Those humans can now take longer naps during the afternoon because the team has decided that the models are now good enough to give the AI-powered system full control over the cooling system. Operators can still intervene, of course, but as long as the AI doesn’t decide to burn the place down, the system runs autonomously.


The new cooling system is now in place in a number of Google’s data centers. Every five minutes, the system polls thousands of sensors inside the data center and chooses the optimal actions based on this information. There are all kinds of checks and balances here, of course, so the chances of one of Google’s data centers going up in flames because of this are low.
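As a rough illustration of that poll-evaluate-act loop — not Google’s actual system, whose model and safety limits are unpublished — here is a minimal sketch; every name, threshold and the toy heuristic standing in for the learned model is invented for the example.

```go
package main

import (
	"fmt"
	"time"
)

// SensorSnapshot is a hypothetical reading aggregated from the data center's sensors.
type SensorSnapshot struct {
	AvgInletTempC float64
	PowerDrawKW   float64
}

// recommendSetpoint stands in for the learned model: it maps sensor data to a
// chilled-water setpoint. A real system would query a trained model here.
func recommendSetpoint(s SensorSnapshot) float64 {
	return 18.0 + (s.AvgInletTempC-25.0)*0.2 // toy heuristic, not a real policy
}

// clamp enforces the "checks and balances": the model's choice is bounded by
// operator-defined safety limits before it is ever applied.
func clamp(v, min, max float64) float64 {
	if v < min {
		return min
	}
	if v > max {
		return max
	}
	return v
}

func main() {
	ticker := time.NewTicker(5 * time.Minute) // poll cadence described in the article
	defer ticker.Stop()
	for range ticker.C {
		snapshot := SensorSnapshot{AvgInletTempC: 26.3, PowerDrawKW: 1200} // stand-in for real telemetry
		setpoint := clamp(recommendSetpoint(snapshot), 16.0, 22.0)
		fmt.Printf("applying chilled-water setpoint %.1f°C\n", setpoint)
	}
}
```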

Like most machine learning models, this one also became better as it gathered more data. It’s now delivering energy savings of 30 percent on average compared to the data centers’ historical energy usage.

One thing that’s worth noting here is that Google is obviously trying to save a few bucks, but in many ways, the company is also looking at this as a way of promoting its own machine learning services. What works in a data center, after all, should also work in a large office building. “In the long term, we think there’s potential to apply this technology in other industrial settings and help tackle climate change on an even grander scale,” DeepMind writes in today’s announcement.

Google Search’s new featured snippet panel saves you more clicks

Google is introducing an additional format for featured snippets in its search results today. For years, these snippets have appeared at the top of the search results page and featured both images and text that Google thinks are relevant to your query. They are all about Google saving you a click. Today, Google is going beyond this single answer for some queries and introducing a panel that also features relevant subtopics, saving you even more clicks.

Google’s canonical example for a query to trigger this new panel is “Quartz Vs. Granite.” This query brings up the usual snippet, plus subtopics like cost, benefits, weight and durability. Those topics are automatically chosen based on what Google’s algorithms understand about this topic.

You don’t need a [vs.] query to trigger this, though. If you look for something like “emergency funds,” you’ll also see a similar panel.

For now, I was only able to trigger these new panels on mobile, but Google says it is rolling out this feature over the coming days, so it may be a while before you spot one in the wild. I was also unsuccessful in triggering them with any other query I tried, but maybe you are luckier than me.

Google notes that today’s announcement is part of an ongoing effort to provide more comprehensive results to your questions. This February, for example, Google started showing multiple featured snippets when its systems think a query has multiple interpretations.

Descartes Labs launches its geospatial analysis platform

Descartes Labs, a New Mexico-based geospatial analytics startup, today announced that its platform is now out of beta. The well-funded company already allowed businesses to analyze satellite imagery it pulls in from NASA and ESA and build predictive models based on this data, but starting today, it is adding weather data to its library, as well as commercial high-resolution imagery, thanks to a new partnership with Airbus’ OneAtlas project.

As Descartes Labs co-founder Mark Johnson, who you may remember from Zite, told me, the team now regularly pulls in 100 terabytes of new data every day. The company’s clients then use this data to predict the growth of crops, for example. And while Descartes Labs can’t disclose most of its clients, Johnson told me that Cargill and teams at Los Alamos National Labs are among its users.

While anybody could theoretically access the same data and spin up thousands of compute nodes to analyze it and build models, the value of a service like this is very much about abstracting all of that work away and letting developers and analysts focus on what they do best.

“If you look at the early beta customers of the system, typically it’s a company that has some kind of geospatial expertise,” Johnson told me. “Oftentimes, they’re collecting data of their own and their primary challenge is that the folks on their team who ought to be spending all their time doing science on the datasets — the majority of their time, sometimes 80 plus percent of their time — they are collecting the data, cleaning the data, getting the data analysis ready. So only a small percentage of their work time is spent on analysis.”

So far, Descartes Labs’ infrastructure, which mostly runs on the Google Cloud Platform, has processed over 11 petabytes of compressed data. Thanks to the partnership with Airbus, it’s now also getting very high-resolution data for its users. While some of the free data from the Landsat satellites, for example, has a resolution of 30m per pixel, the Airbus data comes in at 1.5m per pixel across the entire world and 50cm per pixel over 2,600 cities. Add NOAA’s global weather data to this, and it’s easy to imagine what kind of models developers could build based on all of this information.
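To get a sense of what those resolution differences mean in practice, here is a quick back-of-the-envelope calculation (my arithmetic, not a Descartes Labs figure) of how many pixels cover a single square kilometer at each resolution:

```go
package main

import "fmt"

// pixelsPerSqKm returns how many pixels cover one square kilometer
// at a given ground resolution in meters per pixel.
func pixelsPerSqKm(metersPerPixel float64) float64 {
	pixelsPerSide := 1000.0 / metersPerPixel
	return pixelsPerSide * pixelsPerSide
}

func main() {
	resolutions := []struct {
		name string
		res  float64
	}{
		{"Landsat (30 m)", 30},
		{"Airbus global (1.5 m)", 1.5},
		{"Airbus cities (0.5 m)", 0.5},
	}
	for _, r := range resolutions {
		fmt.Printf("%-22s %.0f pixels/km²\n", r.name, pixelsPerSqKm(r.res))
	}
}
```

At 1.5m per pixel, the same area is covered by roughly 400 times as many pixels as at 30m — which is exactly the kind of data volume that makes abstracting away the storage and compute attractive.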

Many users, Johnson tells me, also bring their own data to the service to build better models.

While Descartes Labs’ early focus was on developers, it’s worth noting that the team has now also built a viewer that allows any user (who pays for the service) to work with the base map and add layers of additional information on top.

Johnson tells me that the team plans to add more datasets over time, though the focus of the service will always remain on spatial data.

Gmail for iOS and Android now lets you turn off conversation view

When Gmail launched with its threaded conversation view feature as the default and only option, some people sure didn’t like it and Google quickly allowed users to turn it off. On mobile, though, you were stuck with it. But here’s some good news for you conversation view haters: you can now turn it off on mobile, too.

The ability to turn off conversation view is now rolling out to all Gmail app users on iOS and Android. So if you want Gmail to simply show you all emails as they arrive, without grouping them to “make them easier to digest and follow,” you’re now free to do so.

If you’ve always just left conversation view on by default, maybe now is a good time to see if you like the old-school way of looking at your email better. I personally prefer conversation view since it helps me keep track of conversations (and I get too many emails already), but it’s pretty much a personal preference.

To make the change, simply tap on your account name in the Settings menu and look for the “conversation view” check box. That’s it. Peace restored.

JBL’s $250 Google Assistant smart display is now available for pre-order

It’s been a week since Lenovo’s Google Assistant-powered smart display went on sale and slowly but surely, its competitors are launching their versions, too. Today, JBL announced that its $249.95 JBL Link View is now available for pre-order, with an expected ship date of September 3, 2018.

JBL went for a slightly different design than Lenovo (and the upcoming LG WK9), but in terms of functionality, these devices are pretty much the same. The Link View features an 8-inch HD screen and unlike Lenovo’s Smart Display, JBL is not making a larger 10-inch version. It’s got two 10W speakers and the usual support for Bluetooth, as well as Google’s Chromecast protocol.

JBL says the unit is splash proof (IPX4), so you can safely use it to watch YouTube recipe videos in your kitchen. It also offers a 5MP front-facing camera for your video chats and a privacy switch that lets you shut off the camera and microphone.

JBL, Lenovo and LG all announced their Google Assistant smart displays at CES earlier this year. Lenovo was the first to actually ship a product, and both the hardware and Google’s software received a positive reception. There’s no word on when LG’s WK9 will hit the market.

Freshworks raises $100M

Freshworks, a company that offers a variety of business software tools ranging from IT management to CRM for sales and customer support, today announced that it has raised a $100 million funding round co-led by Sequoia and Accel Partners, with participation from CapitalG.

The company’s last funding round came in the form of a $55 million Series F round led by Sequoia in 2016. Today’s round brings the San Bruno-based company’s total funding to $250 million, at a valuation that’s now north of $1.5 billion, the company tells us. Freshworks also today noted that it now pulls in over $100 million in annual recurring revenue.

In addition to the new funding, Freshworks also today announced that it has hired former AppDynamics VP of finance and treasury Suresh Seshadri as its CFO. Seshadri helped AppDynamics prepare for its IPO, so it’s a fair bet that he’ll do the same at Freshworks. AppDynamics, of course, famously didn’t actually IPO but was instead acquired by Cisco only hours before the team was supposed to ring the bell on Wall Street.

Freshworks CEO Girish Mathrubootham tells us we shouldn’t hold our breath waiting for his company to IPO. “Freshworks hasn’t started the IPO process but we do feel that we will eventually go public in the U.S.,” he said. “With that said, our primary focus right now is on growing the business and investing in our platform. When the timing is right, we’ll make that decision.”

Freshworks, which launched its first product back in 2010, also tells us that it plans to use the new cash to invest in its platform and especially in looking at how it can use AI to bring new innovations to its tools.

Current Freshworks users include the likes of Sling TV, Honda, Hugo Boss, Toshiba and Cisco. In total, the company’s tools are now in use by about 150,000 businesses, making it one of the larger SaaS providers you have probably never heard of.

 

Google Calendar makes rescheduling meetings easier

Nobody really likes meetings — and the few people who do like them are the ones you probably don’t want to have meetings with. So when you’ve reached your fill and decide to reschedule some of those obligations, the usual process of trying to find a new meeting time begins. Thankfully, the Google Calendar team has heard your sighs of frustration and built a new tool that makes rescheduling meetings much easier.

Starting in two weeks, on August 13th, every guest will be able to propose a new meeting time and attach a message to the organizer explaining the change. The organizer can then review and accept or deny the proposed time slot. If the other guests have made their calendars public, the organizer can also see the attendees’ availability in a new side-by-side view to find a new time.
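Under the hood, this is a simple propose-and-review exchange. The sketch below is a hypothetical model of the data involved — the types and fields are mine for illustration, not Google Calendar’s API:

```go
package main

import (
	"fmt"
	"time"
)

// Proposal is a hypothetical record of a guest's suggested new meeting time,
// optionally with a note to the organizer explaining the request.
type Proposal struct {
	Guest    string
	NewStart time.Time
	NewEnd   time.Time
	Message  string
	Accepted *bool // nil until the organizer has reviewed it
}

// review records the organizer's decision on a proposal.
func review(p *Proposal, accept bool) {
	p.Accepted = &accept
}

func main() {
	start := time.Date(2018, 8, 20, 14, 0, 0, 0, time.UTC)
	p := Proposal{
		Guest:    "guest@example.com",
		NewStart: start,
		NewEnd:   start.Add(30 * time.Minute),
		Message:  "Conflict with another meeting; can we move this?",
	}
	review(&p, true)
	fmt.Printf("proposal from %s accepted: %v\n", p.Guest, *p.Accepted)
}
```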

What’s a bit odd here is that this is still mostly a manual feature. To find meeting slots to begin with, Google already employs some of its machine learning smarts to find the best times. This new feature doesn’t seem to employ the same algorithms to propose dates and times for rescheduled meetings.

This new feature will work across G Suite domains and also with Microsoft Exchange. It’s worth noting, though, that this new option won’t be available for meetings with more than 200 attendees or for all-day events.

 

BMW’s Alexa integration gets it right

In a few days, BMW will start rolling out support for Amazon’s Alexa voice assistant to many of its drivers. The fact that BMW is doing this doesn’t come as a surprise, given that it has long talked about its plans to bring Alexa — and potentially other personal assistants like Cortana and the Google Assistant — to its cars. Ahead of its official launch in Germany, Austria, the U.S. and U.K. (with other countries following at a later date), I went to Munich to take a look at what using Alexa in a BMW is all about.

As Dieter May, BMW’s senior VP for digital products told me earlier this year, the company has long held that in-car digital assistants have to be more than just an “Echo Dot in a cup holder,” meaning that they have to be deeply integrated into the experience and the rest of the technology in the car. And that’s exactly what BMW has done here — and it has done it really well.

What maybe surprised me the most was that we’re not just talking about the voice interface here. BMW is working directly with the Alexa team at Amazon to also integrate visual responses from Alexa. Using the tablet-like display you find above the center console of most new BMWs, the service doesn’t just read out the answer but also shows additional facts or graphs when warranted. That means Alexa in a BMW is a lot more like using an Echo Show than a Dot (though you’re obviously not going to be able to watch any videos on it).

In the demo I saw, in a 2015 BMW X5 that was specifically rigged to run Alexa ahead of the launch, the display would activate when you ask for weather information, for example, or for queries that returned information from a Wikipedia post.

What’s cool here is that the BMW team styled these responses using the same design language that also governs the company’s other in-car products. So if you see the weather forecast from Alexa, that’ll look exactly like the weather forecast from BMW’s own Connected Drive system. The only difference is the “Alexa” name at the top-left of the screen.

All of this sounds easy, but I’m sure it took a good bit of negotiation with Amazon to build a system like this, especially because there’s an important second part to this integration that’s quite unique. The queries, which you start by pushing the usual “talk” button in the car (in newer models, the Alexa wake word feature will also work), are first sent to BMW’s servers before they go to Amazon. BMW wants to keep control over the data and ensure its users’ privacy, so it added this proxy in the middle. That means there’s a bit of an extra lag in getting responses from Amazon, but the team is working hard on reducing this, and for many of the queries we tried during my demo, it was already negligible.

As the team told me, the first thing it had to build was a switch that can route your queries to the right service. The car, after all, already has a built-in speech recognition service that lets you set directions in the navigation system, for example. Now, it has to recognize that the speaker said “Alexa” at the beginning of the query and then route it to the Alexa service. The team also stressed that we’re talking about a very deep integration here. “We’re not just streaming everything through your smartphone or using some plug-and-play solution,” a BMW spokesperson noted.

“You get what you’d expect from BMW, a deep integration, and to do that, we use the technology we already have in the car, especially the built-in SIM card.”
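BMW hasn’t published any of this code, but the “switch” the team describes is easy to picture: check whether the utterance is addressed to Alexa and, if so, forward it through BMW’s own proxy on its way to Amazon; otherwise, hand it to the car’s built-in speech recognition. Here is a hypothetical sketch of that routing step, with all names invented for the example:

```go
package main

import (
	"fmt"
	"strings"
)

// routeUtterance is a hypothetical version of the switch BMW describes:
// commands addressed to Alexa go through BMW's proxy on the way to Amazon,
// everything else stays with the car's built-in voice system.
func routeUtterance(utterance string) string {
	if strings.HasPrefix(strings.ToLower(utterance), "alexa") {
		return forwardViaBMWProxy(utterance) // BMW-controlled hop, then Amazon
	}
	return builtInSpeechRecognition(utterance)
}

func forwardViaBMWProxy(u string) string {
	// In the real system this would call BMW's servers, which relay to Alexa.
	return "alexa-response: " + u
}

func builtInSpeechRecognition(u string) string {
	// e.g. "navigate to Munich" handled by the car's own system.
	return "car-response: " + u
}

func main() {
	fmt.Println(routeUtterance("Alexa, what's the weather in Munich?"))
	fmt.Println(routeUtterance("Navigate to the nearest charging station"))
}
```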

One of the advantages of Alexa’s open ecosystem is its skills. Not every skill makes sense in the context of the car, and some could be outright distracting, so the team is curating a list of skills that you’ll be able to use in the car.

It’s no secret that BMW is also working with Microsoft (and many of its cloud services run on Azure). BMW argues that Alexa and Cortana have different strengths, though, with Cortana being about productivity and a connection to Office 365, for example. It’s easy to imagine a future where you could call up both Alexa and Cortana from your car — and that’s surely why BMW built its own system for routing voice commands and why it wants to have control over this process.

BMW tells me that it’ll look at how users will use the new service and tune it accordingly. Because a lot of the functionality runs in the cloud, updates are obviously easy and the team can rapidly release new features — just like any other software company.

Google wants Go to become the go-to language for writing cloud apps

The Google-incubated Go language is one of the fastest-growing programming languages today, with about one million active developers using it worldwide. But the company believes it can still accelerate its growth, especially when it comes to its role in writing cloud applications. And to do this, the company today announced Go Cloud, a new open-source library and set of tools that makes it easier to build cloud apps with Go.

While Go is highly popular among developers, Google argues that the language was missing a standard library for interfacing with cloud services. Today, developers often have to essentially write their own libraries to use the features of each cloud, yet organizations increasingly want to be able to easily move their workloads between clouds.

What Go Cloud then gives these developers is a set of open generic cloud APIs for accessing blob storage, MySQL databases and runtime configuration, as well as an HTTP server with built-in logging, tracing and health checking. Right now, the focus is on AWS and the Google Cloud Platform. Over time, Google plans to add more features to Go Cloud and support for more cloud providers (and those cloud providers can, of course, build their own support, too).
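To give a flavor of what those generic APIs look like, here is a minimal sketch of portable blob access using the project’s current gocloud.dev import paths (the package layout at the time of the announcement lived under github.com/google/go-cloud and differed slightly); switching the bucket URL from s3:// to gs:// is all it takes to target the other provider:

```go
package main

import (
	"context"
	"log"

	"gocloud.dev/blob"
	_ "gocloud.dev/blob/gcsblob" // enables gs:// URLs
	_ "gocloud.dev/blob/s3blob"  // enables s3:// URLs
)

func main() {
	ctx := context.Background()

	// Open a bucket by URL; the scheme decides which provider is used.
	bucket, err := blob.OpenBucket(ctx, "s3://my-example-bucket")
	if err != nil {
		log.Fatal(err)
	}
	defer bucket.Close()

	// Write and read an object with the same code on any supported cloud.
	if err := bucket.WriteAll(ctx, "greeting.txt", []byte("hello, cloud"), nil); err != nil {
		log.Fatal(err)
	}
	data, err := bucket.ReadAll(ctx, "greeting.txt")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("read %d bytes", len(data))
}
```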

This, Google argues, allows developer teams to build applications that can easily run on any supported cloud without having to re-architect large parts of their applications.

As Google VP of developer relations Adam Seligman told me, the company hopes this move will kick off an explosion of libraries around Go — and, of course, that it will accelerate Go’s growth as a language for the cloud.

Figure Eight partners with Google to give AutoML developers better training data

Figure Eight, a platform that helps developers train, test and fine-tune their machine learning models, today announced a major new collaboration with Google that essentially turns Figure Eight into the de facto standard for creating and annotating machine learning data for Google Cloud’s AutoML service.

As Figure Eight’s CEO Robin Bordoli told me, Google had long been a customer, but the two companies decided to work more closely together now that AutoML is launching in beta and expanding its product portfolio, too. As Bordoli argues, training data remains one of the biggest bottlenecks for developers who want to build their own machine learning models — and Google recognized this, too. “It’s their recognition that the lack of training data is a fundamental bottleneck to the adoption of AutoML,” he told me.

Since AutoML’s first product focuses on machine vision, it’s maybe no surprise that Figure Eight’s partnership with Google is also currently mostly about this kind of visual training data. Its service is meant to help relatively inexperienced developers collect data, prepare it for use in AutoML and then experiment with the results.

What makes Figure Eight stand out from other platforms is that it keeps the human in the loop. Bordoli argues that you can’t simply use AI tools to annotate your training data, just like you can’t fully rely on humans either (unless you want to employ entire countries as image taggers). “Human labeling is a key need for our customers, and we are excited to partner with Figure Eight to enhance our support in this area,” said Francisco Uribe, the product manager for Google Cloud AutoML at Google.

As part of this partnership, Figure Eight has developed a number of AutoML-specific templates and processes for uploading the data. It also offers its customers assistance with creating the training data (while also ensuring AI fairness). Google Cloud users can use the Figure Eight platform to label up to 1,000 images and they do, of course, get access to the company’s data labeling annotators if they don’t want to do all the work themselves.

Ahead of today’s announcement, Figure Eight had already generated more than 10 billion data labels and today’s announcement will surely accelerate this.