GitHub launches Actions, its workflow automation tool

For the longest time, GitHub was all about storing source code and sharing it either with the rest of the world or your colleagues. Today, the company, which is in the process of being acquired by Microsoft, is taking a step in a different but related direction by launching GitHub Actions. Actions allow developers to not just host code on the platform but also run it. We’re not talking about a new cloud to rival AWS here, but instead about something more akin to a very flexible IFTTT for developers who want to automate their development workflows, whether that is sending notifications or building a full continuous integration and delivery pipeline.

This is a big deal for GitHub. Indeed, Sam Lambert, GitHub’s head of platform, described it to me as “the biggest shift we’ve had in the history of GitHub.” He likened it to Shortcuts in iOS — just more flexible. “Imagine an infinitely more flexible version of Shortcuts, hosted on GitHub and designed to allow anyone to create an action inside a container to augment and connect their workflow.”

GitHub users can use Actions to build their continuous delivery pipelines, and the company expects that many will do so. And that’s pretty much the first thing most people will think about when they hear about this new project. GitHub’s own description of Actions in today’s announcement definitely fits that bill, too. “Easily build, package, release, update, and deploy your project in any language—on GitHub or any external system—without having to run code yourself,” the company writes. But it’s about more than that.

“I see CI/CD as one narrow use case of actions. It’s so, so much more,” Lambert stressed. “And I think it’s going to revolutionize DevOps because people are now going to build best in breed deployment workflows for specific applications and frameworks, and those become the de facto standard shared on GitHub. […] It’s going to do everything we did for open source again for the DevOps space and for all those different parts of that workflow ecosystem.”

That means you can use it to send a text message through Twilio every time someone uses the ‘urgent issue’ tag in your repository, for example. Or you can write a one-line command that searches your repository with a basic grep command. Or really run any other code you want to, because all you have to do to turn any code in your repository into an Action is to write a Dockerfile for it so that GitHub can run it. “As long as there is a Dockerfile, we can build it, run it and connect it to your workflow,” Lambert explained. If you don’t want to write a Dockerfile, though, there’s also a visual editor you can use to build your workflow.
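As a rough illustration (the file contents here are hypothetical, not taken from GitHub’s announcement), the Dockerfile for a grep-style Action could be as minimal as:

```dockerfile
# Hypothetical Action: grep the checked-out repository for "TODO" markers.
FROM alpine:3.8

# GitHub builds this image and runs the container as one step of the
# workflow, with the repository available to it; the entrypoint does
# the actual work. "|| true" keeps a no-match result from failing the step.
ENTRYPOINT ["sh", "-c", "grep -rn TODO . || true"]
```

Anything that can be expressed as a container entrypoint — a notification script, a deploy tool, a linter — can, in principle, become an Action this way.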

As Corey Wilkerson, GitHub’s head of product engineering also noted, many of these Actions already exist in repositories on GitHub today. And there are now over 96 million of those on GitHub, so that makes for a lot of potential actions that will be available from the start.

With Actions, which is now in limited public beta, developers can set up the workflow to build, package, release, update and deploy their code without having to run the code themselves.

Now developers could host those Actions themselves — they are just Docker containers, after all — but GitHub will also host and run the code for them. And that includes developers on the free open source plan.

Over time — and Lambert seemed to be in favor of this — GitHub could also allow developers to sell their workflows and Actions through the GitHub marketplace. For now, that’s not an option, but it’s definitely something the company has been thinking about. Lambert also noted that this could be a way for open source developers who don’t want to build an enterprise version of their tools (and the sales force that goes with that) to monetize their efforts.

While GitHub will make its own actions available to developers, this is an open platform and others in the GitHub community can contribute their own actions, too.

In addition to Actions, GitHub also announced a number of other new features on its platform. As the company stressed during today’s event, its mission is to make the life of developers easier — and while most of the new features may be small, they do indeed make it easier for developers to do their jobs.

So what else is new? GitHub Connect, which connects the silo of GitHub Enterprise with the open source repositories on its public site, is now generally available, for example. GitHub Connect enables new features like unified search, which can search through both the open source code on the site and internal code, as well as a new Unified Business Identity feature that brings together the multiple GitHub Business accounts that many businesses now manage (thanks, shadow IT) under a single umbrella to improve billing, licensing and permissions.

The company also today launched three new courses in its Learning Lab that make it easier for developers to get started with the service, as well as a business version of Learning Lab for larger organizations.

What’s maybe even more interesting for developers whose companies use GitHub Enterprise, though, is that the company will now allow admins to enable a new feature that will display those developers’ work as part of their public profile. Given that GitHub is now the de facto resume for many developers, that’s a big deal. Much of their work, after all, isn’t in open source or in building side projects, but in the day-to-day work at their companies.

The other new features the company announced today are pretty much all about security. The new GitHub Security Advisory API, for example, makes it easier for developers to find threats in their code through automatic vulnerability scans, while the new security vulnerability alerts for Java and .NET projects extend GitHub’s existing alerts to these two languages. If your developers are prone to putting their security tokens into public code, then you can now rest easier, since GitHub will also start scanning all public repositories for known token formats. If it finds one, it’ll alert you so you can revoke the token and create a new one.
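GitHub hasn’t detailed which token formats its scanner covers, but the general approach is straightforward pattern matching. As a hedged sketch — the patterns below are illustrative (the AWS access key ID prefix is publicly documented; the hex pattern is made up), not GitHub’s actual rules:

```python
import re

# Illustrative credential patterns. "AKIA" + 16 uppercase alphanumerics is
# the well-known AWS access key ID shape; the 40-char hex rule is a generic
# stand-in for API secrets. GitHub's real scanner is not public.
TOKEN_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_hex_secret": re.compile(r"\b[0-9a-f]{40}\b"),
}

def scan(text):
    """Return (kind, match) pairs for anything that looks like a leaked token."""
    hits = []
    for kind, pattern in TOKEN_PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((kind, m.group(0)))
    return hits
```

Running a scanner like this over every public push is cheap, which is presumably why GitHub can offer it across all public repositories at once.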

Twilio acquires email API platform SendGrid for $2 billion in stock

Twilio, the ubiquitous communications platform, today announced its plan to acquire the API-centric email platform SendGrid for about $2 billion in an all-stock transaction. That’s Twilio’s largest acquisition to date, but also one that makes a lot of sense given that both companies aim to make building communications platforms easier for developers.

“The two companies share the same vision, the same model, and the same values,” said Twilio co-founder and CEO Jeff Lawson in today’s announcement. “We believe this is a once-in-a-lifetime opportunity to bring together the two leading developer-focused communications platforms to create the unquestioned platform of choice for all companies looking to transform their customer engagement.”

SendGrid will become a wholly owned subsidiary of Twilio and its common stock will be converted into Twilio stock. The companies expect the acquisition to close in the first half of 2019, after it has been cleared by the authorities.

Twilio’s current focus is on omnichannel communication, and email is obviously a major part of that. And while it offers plenty of services around voice, video and chat, email hasn’t been on its radar in the same way. This acquisition now allows it to quickly build up expertise in this area and expand its services there.

SendGrid went public in 2017. At the time, it priced its stock at $16. Today, before the announcement, the company was trading at just under $31, though that price obviously spiked after the announcement went public. That’s still down from a high of more than $36.50 last month, but that’s in line with the overall movement of the market in recent weeks.

Today’s announcement comes shortly before Twilio’s annual developer conference, so I expect we’ll hear a lot more about its plans for SendGrid later this week.

We asked Twilio for more details about its plans for SendGrid after the acquisition closes. We’ll update this post once we hear more.

Google+ for G Suite lives on and gets new features

You thought Google+ was dead, didn’t you? And it is — if you’re a consumer. But the business version of Google’s social network will live on for the foreseeable future — and it’s getting a bunch of new features today.

Google+ for G Suite isn’t all that different from Google+ for consumers, but its focus is very much on allowing users inside a company to easily share information. Current users include the likes of Nielsen and French retailer Auchan.

The new features that Google is announcing today give admins more tools for managing and reviewing posts, allow employees to tag content and provide better engagement metrics to posters.

Google recently introduced the ability for admins to bulk-add groups of users to a Google+ community, for example, and soon those admins will be able to better review and moderate posts made by their employees. Admins will also be able to define custom streams, so that employees could get access to a stream with all of the posts from a company’s leadership team, for example.

But what’s maybe more important in this context is that tags now make it easy for employees to route content to everybody in the company, no matter which group they work in. “Even if you don’t know all employees across an organization, tags makes it easier to route content to the right folks,” the company explains in today’s blog post. “Soon you’ll be able to draft posts and see suggested tags, like #research or #customer-insights when posting customer survey results.”

As far as the new metrics go, there’s nothing all that exciting going on here, but G Suite customers who keep their reporting structure in the service will be able to provide analytics to employees so they can see how their posts are being viewed across the company and which teams engage most with them.

At the end of the day, none of these are revolutionary features. But the timing of today’s announcement surely isn’t a coincidence, given that Google announced the death of the consumer version of Google+ — and the data breach that went along with that — only a few days ago. Today’s announcement is clearly meant to be a reminder that Google+ for the enterprise isn’t going away and remains in active development. I don’t think all that many businesses currently use Google+, though, and with Hangouts Chat and other tools, they now have plenty of options for sharing content across groups.

Nvidia launches Rapids to help bring GPU acceleration to data analytics

Nvidia, together with partners like IBM, HPE, Oracle, Databricks and others, is launching a new open-source platform for data science and machine learning today. Rapids, as the company is calling it, is all about making it easier for large businesses to use the power of GPUs to quickly analyze massive amounts of data and then use that to build machine learning models.

“Businesses are increasingly data-driven,” Nvidia’s VP of Accelerated Computing Ian Buck told me. “They sense the market and the environment and the behavior and operations of their business through the data they’ve collected. We’ve just come through a decade of big data and the output of that data is using analytics and AI. But most of it is still using traditional machine learning to recognize complex patterns, detect changes and make predictions that directly impact their bottom line.”

The idea behind Rapids then is to work with the existing popular open-source libraries and platforms that data scientists use today and accelerate them using GPUs. Rapids integrates with these libraries to provide accelerated analytics, machine learning and — in the future — visualization.

Rapids is based on Python, Buck noted; it has interfaces that are similar to Pandas and Scikit, two very popular data analysis and machine learning libraries, and it’s based on Apache Arrow for in-memory data processing. It can scale from a single GPU to multiple nodes, and IBM notes that the platform can achieve improvements of up to 50x for some specific use cases when compared to running the same algorithms on CPUs (though that’s not all that surprising, given what we’ve seen from other GPU-accelerated workloads in the past).
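In practice, the pandas-style compatibility means a typical dataframe pipeline should need little more than an import swap. The snippet below is plain pandas; per Nvidia’s description, the cuDF equivalent would look essentially the same (cuDF itself isn’t exercised here, since it requires an Nvidia GPU, and the import-swap claim is the announcement’s, not something verified here):

```python
import pandas as pd  # with Rapids, the GPU version would be: import cudf as pd

# A small ETL step -- load, filter, aggregate -- the kind of pandas-style
# pipeline Rapids aims to accelerate on GPUs without code changes.
df = pd.DataFrame({
    "store": ["a", "a", "b", "b"],
    "sales": [100, 150, 200, 50],
})
high = df[df["sales"] > 75]                    # keep only larger transactions
totals = high.groupby("store")["sales"].sum()  # per-store totals
```

On a CPU this is ordinary pandas; the pitch is that on a multi-GPU node the same code, routed through cuDF and Apache Arrow’s in-memory format, runs over datasets that would otherwise take orders of magnitude longer.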

Buck noted that Rapids is the result of a multi-year effort to develop a rich enough set of libraries and algorithms, get them running well on GPUs and build the relationships with the open-source projects involved.

“It’s designed to accelerate data science end-to-end,” Buck explained. “From the data prep to machine learning and for those who want to take the next step, deep learning. Through Arrow, Spark users can easily move data into the Rapids platform for acceleration.”

Indeed, Spark is surely going to be one of the major use cases here, so it’s no wonder that Databricks, the company founded by the team behind Spark, is one of the early partners.

“We have multiple ongoing projects to integrate Spark better with native accelerators, including Apache Arrow support and GPU scheduling with Project Hydrogen,” said Spark founder Matei Zaharia in today’s announcement. “We believe that RAPIDS is an exciting new opportunity to scale our customers’ data science and AI workloads.”

Nvidia is also working with Anaconda, BlazingDB, PyData, Quansight and scikit-learn, as well as Wes McKinney, the head of Ursa Labs and the creator of Apache Arrow and Pandas.

Another partner is IBM, which plans to bring Rapids support to many of its services and platforms, including its PowerAI tools for running data science and AI workloads on GPU-accelerated Power9 servers, IBM Watson Studio and Watson Machine Learning and the IBM Cloud with its GPU-enabled machines. “At IBM, we’re very interested in anything that enables higher performance, better business outcomes for data science and machine learning — and we think Nvidia has something very unique here,” Rob Thomas, the GM of IBM Analytics told me.

“The main benefit to the community is that through an entirely free and open-source set of libraries that are directly compatible with the existing algorithms and subroutines that they’re used to — they now get access to GPU-accelerated versions of them,” Buck said. He also stressed that Rapids isn’t trying to compete with existing machine learning solutions. “Part of the reason why Rapids is open source is so that you can easily incorporate those machine learning subroutines into their software and get the benefits of it.”

Cloud Foundry expands its support for Kubernetes

Not too long ago, the Cloud Foundry Foundation was all about Cloud Foundry, the open source platform as a service (PaaS) project that’s now in use by most of the Fortune 500 enterprises. This project is the Cloud Foundry Application Runtime. A year ago, the Foundation also announced the Cloud Foundry Container Runtime that helps businesses run the Application Platform and their container-based applications in parallel. In addition, Cloud Foundry has also long been the force behind BOSH, a tool for building, deploying and managing cloud applications.

The addition of the Container Runtime a year ago seemed to muddle the organization’s mission a bit, but now that the dust has settled, the intent here is starting to become clearer. As Cloud Foundry CTO Chip Childers told me, what enterprises are mostly using the Container Runtime for is running the pre-packaged applications they get from their vendors. “The Container Runtime — or really any deployment of Kubernetes — when used next to or in conjunction with the App Runtime, that’s where people are largely landing packaged software being delivered by an independent software vendor,” he told me. “Containers are the new CD-ROM. You just want to land it in a good orchestration platform.”

Because the Application Runtime launched well before Kubernetes was a thing, the Cloud Foundry project built its own container service, called Diego.

Today, the Cloud Foundry Foundation is launching two new Kubernetes-related projects that take the integration between the two to a new level. The first is Project Eirini, which was launched by IBM and is now being worked on by Suse and SAP as well. This project has been a long time in the making and it’s something that the community has expected for a while. It basically allows developers to choose between the existing Diego orchestrator and Kubernetes when it comes to deploying applications written for the Application Runtime. That’s a big deal for Cloud Foundry.

“What Eirini does, is it takes that Cloud Foundry Application Runtime — that core PaaS experience that the [Cloud Foundry] brand is so tied to and it allows the underlying Diego scheduler to be replaced with Kubernetes as an option for those use cases that it can cover,” Childers explained. He added that there are still some use cases the Diego container management system is better suited for than Kubernetes. One of those is better Windows support — something that matters quite a bit to the enterprise companies that use Cloud Foundry. Childers also noted that the multi-tenancy guarantees of Kubernetes are a bit less stringent than Diego’s.

The second new project is ContainerizedCF, which was initially developed by Suse. As the name implies, ContainerizedCF basically allows you to package the core Cloud Foundry Application Runtime and deploy it in Kubernetes clusters with the help of the BOSH deployment tool. This is pretty much what Suse is already using to ship its Cloud Foundry distribution.

Clearly then, Kubernetes is becoming part and parcel of what the Cloud Foundry PaaS service will sit on top of and what developers will use to deploy the applications they write for it in the near future. At first glance, this focus on Kubernetes may look like it’s going to make Cloud Foundry superfluous, but it’s worth remembering that, at its core, the Cloud Foundry Application Runtime isn’t about infrastructure but about a developer experience and methodology that aims to manage the whole lifecycle of the application development. If Kubernetes can be used to help manage that infrastructure, then the Cloud Foundry project can focus on what it does best, too.

The Google Assistant gets more visual

Google today is launching a major visual redesign of its Assistant experience on phones. While the original vision of the Assistant focused mostly on voice, half of all interactions with the Assistant actually include touch. So with this redesign, Google acknowledges that and brings more and larger visuals to the Assistant experience.

If you’ve used one of the recent crop of Assistant-enabled smart displays, then some of what’s new here may look familiar. You now get controls and sliders to manage your smart home devices, for example. Those include sliders to dim your lights and buttons to turn them on or off. There also are controls for managing the volume of your speakers.

Even in cases where the Assistant already offered visual feedback — say, when you ask for the weather — the team has now also redesigned those results and brought them more in line with what users are already seeing on smart displays from the likes of Lenovo and LG. On the phone, though, that experience still feels a bit more pared down than on those larger displays.

With this redesign, which is going live on both Android and in the iOS app today, Google is also bringing a little bit more of the much-missed Google Now experience back to the phone. While you could already bring up a list of upcoming appointments, commute info, recent orders and other information about your day from the Assistant, that feature was hidden behind a rather odd icon that many users surely ignored. Now, after you’ve long-pressed the home button on your Android phone, you can swipe up to get that same experience. I’m not sure that’s more discoverable than previously, but Google is saving you a tap.


In addition to the visual redesign of the Assistant, Google also today announced a number of new features for developers. Unsurprisingly, one part of this announcement focuses on allowing developers to build their own visual Assistant experiences. Google calls these “rich responses” and provides developers with a set of pre-made visual components that they can easily use to extend their Assistant actions. And because nothing is complete with GIFs, they can now use GIFs in their Assistant apps, too.

But in addition to these new options for creating more visual experiences, Google is also making it a bit easier for developers to take their users’ money.

While they could already sell physical goods through their Assistant actions, starting today, they’ll also be able to sell digital goods. Those can be one-time purchases for a new level in a game or recurring subscriptions. Headspace, which has long offered a very basic Assistant experience, now lets you sign up for subscriptions right from the Assistant on your phone, for example.

Selling digital goods directly in the Assistant is one thing, but that sale has to sync across different applications, too, so Google today is also launching a new sign-in service for the Assistant that lets users log in and link their accounts.

“In the past, account linking could be a frustrating experience for your users; having to manually type a username and password — or worse, create a new account — breaks the natural conversational flow,” the company explains. “With Google Sign-In, users can now create a new account with just a tap or confirmation through their voice. Most users can even link to their existing accounts with your service using their verified email address.”

Starbucks has already integrated this feature into its Assistant experience to give users access to their rewards account. Adding the new Sign-In for the Assistant has almost doubled its conversion rate.


Google’s head of its $110B+ ads and commerce business is leaving for Greylock Partners

Sridhar Ramaswamy, Google’s head of commerce, is leaving the company after more than 15 years and will be joining Greylock Partners, sources inside the company told us and Google confirmed. Ramaswamy will become a venture partner at Greylock Partners. At Google, his position will be filled by Prabhakar Raghavan, who was previously the company’s VP of apps for Google Cloud.

While at Google, Ramaswamy oversaw virtually all of Google’s Ads and Commerce products — that is, basically everything outside of the Google Cloud that makes the company most of its money. Ramaswamy joined Google as an engineer, but quickly moved up in the company’s ranks. He took his current position back in 2014, after Susan Wojcicki moved to YouTube.

At Greylock, Ramaswamy will focus on earlier-stage entrepreneurial projects.


Google’s advertising revenue still accounts for 84 percent of the total revenue of Alphabet. Last quarter, Google’s advertising revenues came in at over $28 billion. Annual revenue for 2017 was over $110 billion. It’s no secret, though, that Google has struggled to build a stronger commerce business, with projects like Google Express falling relatively flat as its competitors continue to grow.

Raghavan, who will take his place, joined Google in 2012, after a seven-year stint as executive VP and head of Yahoo Labs, which he founded. Like Ramaswamy before him, Raghavan will focus on products while Philipp Schindler will continue in his role as Google’s Chief Business Officer, working side-by-side with Raghavan.

Before Yahoo, Raghavan was the chief technology officer at Verity and worked at IBM Research. He is also the author of two computer science textbooks.

The 7 most important announcements from Microsoft Ignite today

Microsoft is hosting its Ignite conference in Orlando, Florida this week. And although Ignite isn’t the household name that Microsoft’s Build conference has become over the course of the last few years, it’s a massive event with over 30,000 attendees and plenty of news. Indeed, there was so much news this year that Microsoft provided the press with a 27-page booklet with all of it.

We wrote about quite a few of these today, but here are the most important announcements, including one that wasn’t in Microsoft’s booklet but was featured prominently on stage.

1. Microsoft, SAP and Adobe take on Salesforce with their new Open Data Initiative for customer data

What was announced: Microsoft is teaming up with Adobe and SAP to create a single model for representing customer data that businesses will be able to move between systems.

Why it matters: Moving customer data between different enterprise systems is hard, especially because there isn’t a standardized way to represent this information. Microsoft, Adobe and SAP say they want to make it easier for this data to flow between systems. But it’s also a shot across the bow of Salesforce, the leader in the CRM space. It also represents a chance for these three companies to enable new tools that can extract value from this data — and Microsoft obviously hopes that these businesses will choose its Azure platform for analyzing the data.


2. Microsoft wants to do away with more passwords

What was announced: Businesses that use Microsoft Azure Active Directory (AD) will now be able to use the Microsoft Authenticator app on iOS and Android in place of a password to log into their business applications.

Why it matters: Passwords are annoying and they aren’t very secure. Many enterprises are starting to push their employees to use a second factor to authenticate. With this, Microsoft now replaces the password/second factor combination with a single tap on your phone — ideally without compromising security.


3. Microsoft’s new Windows Virtual Desktop lets you run Windows 10 in the cloud

What was announced: Microsoft now lets businesses rent a virtual Windows 10 desktop in Azure.

Why it matters: Until now, virtual Windows 10 desktops were the domain of third-party service providers. Now, Microsoft itself will offer these desktops. The company argues that this is the first time you can get a multiuser virtualized Windows 10 desktop in the cloud. As employees become more mobile and don’t necessarily always work from the same desktop or laptop, this virtualized solution will allow organizations to offer them a full Windows 10 desktop in the cloud, with all the Office apps they know, without the cost of having to provision and manage a physical machine.


4. Microsoft Office gets smarter

What was announced: Microsoft is adding a number of new AI tools to its Office productivity suite. Those include Ideas, which aims to take some of the hassle out of using these tools. Ideas may suggest a layout for your PowerPoint presentation or help you find interesting data in your spreadsheets, for example. Excel is also getting a couple of new tools for pulling in rich data from third-party sources. Microsoft is also building a new unified search tool for finding data across an organization’s network.

Why it matters: Microsoft Office remains the most widely used suite of productivity applications. That makes it the ideal surface for highlighting Microsoft’s AI chops, and anything that can improve employee productivity will surely drive a lot of value to businesses. If that means sitting through fewer badly designed PowerPoint slides, then this whole AI thing will have been worth it.


5. Microsoft’s massive Surface Hub 2 whiteboards will launch in Q2 2019

What was announced: The next version of the Surface Hub, Microsoft’s massive whiteboard displays, will launch in Q2 2019. The Surface Hub 2 is both lighter and thinner than the original version. Then, in 2020, an updated version, the Surface Hub 2X, will launch that will offer features like tiling and rotation.

Why it matters: We’re talking about a 50-inch touchscreen display here. You probably won’t buy one, but you’ll want one. It’s a disappointment to hear that the Surface Hub 2 won’t launch until next year and that some of the advanced features most users are waiting for won’t arrive until the refresh in 2020.


6. Microsoft Teams gets bokeh and meeting recordings with transcripts

What was announced: Microsoft Teams, its Slack competitor, can now blur the background when you are in a video meeting and it’ll automatically create transcripts of your meetings.

Why it matters: Teams has emerged as a competent Slack competitor that’s quite popular with companies that are already betting on Microsoft’s productivity tools. Microsoft is now bringing many of its machine learning smarts to Teams to offer features that most of its competitors can’t match.


7. Microsoft launches Azure Digital Twins

What was announced: Azure Digital Twins allows enterprises to model their real-world IoT deployments in the cloud.

Why it matters: IoT presents a massive new market for cloud services like Azure. Many businesses were already building their own version of Digital Twins on top of Azure, but those homegrown solutions didn’t always scale. Now, Microsoft is offering this capability out of the box, and for many businesses, this may just be the killer feature that will make them decide on standardizing their IoT workloads on Azure. And as they use Azure Digital Twins, they’ll also want to use the rest of Azure’s many IoT tools.


Microsoft, SAP and Adobe take on Salesforce with their new Open Data Initiative for customer data

Microsoft, SAP and Adobe today announced a new partnership: the Open Data Initiative. This alliance, which is a clear attack against Salesforce, aims to create a single data model for consumer data that is then portable between platforms. That, the companies argue, will provide more transparency and privacy controls for consumers, but the core idea here is to make it easier for enterprises to move their customers’ data around.

That data could be standard CRM data, but also information about purchase behavior and other information about customers. Right now, moving that data between platforms is often hard, given that there’s no standard way for structuring it. That’s holding back what these companies can do with their data, of course, and in this age of machine learning, data is everything.

“We want this to be an open framework,” Microsoft CEO Satya Nadella said during his keynote at the company’s annual Ignite conference. “We are very excited about the potential here about what truly putting customers in control of their own data for our entire industry,” he added.

The exact details of how this is meant to work are a bit vague right now, though. Unsurprisingly, Adobe plans to use this model for its Customer Experience Platform, while Microsoft will build it into its Dynamics 365 CRM service and SAP will support it on its Hana database platform and CRM platforms, too. Underneath all of this is a single data model and then, of course, Microsoft Azure — at least on the Microsoft side.

“Adobe, Microsoft and SAP are partnering to reimagine the customer experience management category,” said Adobe CEO Shantanu Narayen. “Together we will give enterprises the ability to harness and action massive volumes of customer data to deliver personalized, real-time customer experiences at scale.”

Together, these three companies have the footprint to challenge Salesforce’s hold on the CRM market and create a new standard. SAP, especially, has put a lot of emphasis on the CRM market lately, and while that business is growing fast, it’s still far behind Salesforce.


Google’s GitHub competitor gets better search tools

Google today announced an update to Cloud Source Repositories, its recently relaunched Git-based source code repository, that brings a significantly better search experience to the service. This new search feature is based on the same tool that Google’s own engineers use day in and day out and it’s now available in the beta release of Cloud Source Repositories.

If you’ve been on the internet for a while, then you probably remember Google Code Search. Code Search allowed you to search through any open-source code on the internet. Sadly, Google shut this down back in 2012. This new feature isn’t quite the same, though. It only allows you to search your own code — or that from other people in your company. It’s just as fast as Google’s own search, though, and allows you to use regular expressions and other advanced search features.

One nifty feature here is that for Java, JavaScript, Go, C++, Python, TypeScript and Proto files, the tools will also return information on whether the match is a class, method, enum or field.

Google argues that searching through code locally is not very efficient and means you are often looking at outdated code.

As Google also notes, you can mirror your code from GitHub and Bitbucket with Cloud Source Repositories. I’m not sure a lot of developers will do this only to get the advanced search tools, but it’s definitely a way for Google to get more users onto its platform, which is a bit of an underdog in an ecosystem that’s dominated by the likes of GitHub.

“One key benefit is that now all owned repositories that are either mirrored or added to Cloud Source Repositories can be searched in a single query,” Cloud Source Repositories product manager Russell Wolf writes in today’s announcement. “This works whether you have a small weekend project or a code base the size of Google’s. And it’s fast: You’ll get the answers you need super quickly—much faster than previous functionality—so you can get back to writing code. And indexing is super fast, too, so the time between new code being added and being available means you’re always searching up-to-date code.”