India may become next restricted market for U.S. cloud providers

Data sovereignty is on the rise across the world. Laws and regulations increasingly require that citizen data be stored in local data centers, and they often restrict the movement of that data outside a country’s borders. The European Union’s GDPR is one example, although it’s relatively porous. China’s newer cloud computing law is far stricter; it forced Apple to turn over its Chinese-citizen iCloud data to local providers and Amazon to sell off data center assets in the country.

Now, it appears that India will join this policy movement. According to Reuters’ Aditya Kalra, an influential cloud policy panel has recommended that India mandate data localization, citing investigative and national security reasons, in a draft report set to be released later this year. The panel is headed by Kris Gopalakrishnan, the well-known entrepreneur who co-founded IT giant Infosys.

That report would match other policy statements from the Indian political establishment in recent months. The government’s draft National Digital Communications Policy this year declared data sovereignty a top mission for the country and called for the government, by 2022, to “Establish a comprehensive data protection regime for digital communications that safeguards the privacy, autonomy and choice of individuals and facilitates India’s effective participation in the global digital economy.”

It’s that last line that is increasingly the objective of governments around the world. While privacy and security are certainly top priorities, governments now recognize that the economics of data are going to be crucial for future innovation and growth. Maintaining local control of data — through whatever means necessary — ensures that cloud providers and other services have to spend locally, even in a global digital economy.

India is both a crucial and an ironic manifestation of this pattern. It is crucial because of the size of its economy: public cloud revenues in the country are expected to hit $2.5 billion this year, according to Gartner’s estimates, an annual growth rate of 37.5%. It is ironic because much of the historical success of India’s IT industry has come from its ability to offer offshoring and IT services across borders.

Indian Prime Minister Narendra Modi has made development and rapid economic growth a top priority of his government. (Krisztian Bocsi/Bloomberg via Getty Images)

India is certainly no stranger to localization demands. In areas as diverse as education and ecommerce, the country maintains strict rules around local ownership and investment. While those rules have been opening up slowly since the 1990s, the explosion of interest in cloud computing has made the gap in regulations around cloud much more apparent.

If the draft report and its various recommendations become law in India, it would have significant effects on public cloud providers like Microsoft, Google, Amazon, and Alibaba, all of whom have cloud operations in the country. In order to comply with the regulations, they would almost certainly have to expend significant resources to build additional data centers locally, and also enforce data governance mechanisms to ensure that data didn’t flow from a domestic to a foreign data center accidentally or programmatically.

I’ve written before that these data sovereignty regulations ultimately benefit the largest service providers, since they’re the only ones with the scale to be able to competently handle the thicket of constantly changing regulations that govern this space.

In India’s case, though, the expense may well be warranted. Given the phenomenal growth of the Indian cloud IT sector, it’s highly likely that the major cloud providers are already planning a massive expansion to handle the increasing storage and computing loads required by local customers. Depending on how simply the regulations are written, the rules may add little cost.

One question will involve what level of foreign ownership will be allowed for public cloud providers. Given that several foreign companies already operate in the marketplace, it would be hard to eliminate them entirely in favor of local competitors. Still, the large providers will have their work cut out for them to ensure the market stays open to all.

The real costs, though, would be borne by other companies, such as startups that rely on customer datasets to power artificial intelligence. Can Indian datasets be used to train an AI model that is deployed globally? Will the economic benefits be required to stay local, or will the regulations be flexible enough to accommodate global startup innovation? It would be a shame if the very law designed to encourage growth in the IT sector were the one that put a damper on it.

India’s chief objective is to ensure that Indian data benefits Indian citizens. That’s a laudable goal on the surface, but deeply complicated when it comes time to write these sorts of regulations. Ultimately, consumers should have the right to park their data wherever they want — with a local provider or a foreign one. Data portability should be key to data sovereignty, since it is consumers who will drive innovation through their demand for best-in-class services.

Google wants Go to become the go-to language for writing cloud apps

The Google-incubated Go language is one of the fastest-growing programming languages today, with about one million active developers using it worldwide. But the company believes it can still accelerate that growth, especially when it comes to writing cloud applications. To do this, the company today announced Go Cloud, a new open-source library and set of tools that make it easier to build cloud apps with Go.

While Go is highly popular among developers, Google argues that the language has been missing a standard library for interfacing with cloud services. Developers often have to essentially write their own libraries to use the features of each cloud, even as organizations increasingly want to be able to move their workloads between clouds with ease.

What Go Cloud gives these developers is a set of open, generic cloud APIs for accessing blob storage, MySQL databases and runtime configuration, as well as an HTTP server with built-in logging, tracing and health checking. Right now, the focus is on AWS and Google Cloud Platform. Over time, Google plans to add more features to Go Cloud and support for more cloud providers (and those cloud providers can, of course, build their own support, too).
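To make that concrete, here is a minimal sketch of what the portable blob API looks like. The import paths and function signatures below come from a later release of the project (published under gocloud.dev) rather than the exact API announced today, and the bucket name is hypothetical:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"gocloud.dev/blob"
	_ "gocloud.dev/blob/s3blob" // registers the s3:// URL scheme
	// _ "gocloud.dev/blob/gcsblob" would register gs:// for Google Cloud Storage.
)

func main() {
	ctx := context.Background()

	// The bucket URL is the only provider-specific piece of this program;
	// swapping "s3://" for "gs://" would move it between clouds.
	bucket, err := blob.OpenBucket(ctx, "s3://my-sample-bucket")
	if err != nil {
		log.Fatal(err)
	}
	defer bucket.Close()

	// Write, then read back, a blob through the provider-neutral API.
	if err := bucket.WriteAll(ctx, "greeting.txt", []byte("hello from Go Cloud"), nil); err != nil {
		log.Fatal(err)
	}
	data, err := bucket.ReadAll(ctx, "greeting.txt")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(data))
}
```

The point of the design is that only the bucket URL ties the code to a particular cloud; everything else stays the same when a workload moves.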

This, Google argues, allows developer teams to build applications that can easily run on any supported cloud without having to re-architect large parts of their applications.

As Google VP of developer relations Adam Seligman told me, the company hopes this move will kick off an explosion of libraries around Go — and, of course, that it will accelerate Go’s growth as a language for the cloud.

Figure Eight partners with Google to give AutoML developers better training data

Figure Eight, a platform that helps developers train, test and fine-tune their machine learning models, today announced a major new collaboration with Google that essentially turns Figure Eight into the de facto standard for creating and annotating machine learning data for Google Cloud’s AutoML service.

As Figure Eight CEO Robin Bordoli told me, Google had long been a customer, but the two companies decided to work more closely together now that AutoML is launching in beta and expanding its product portfolio. As Bordoli argues, training data remains one of the biggest bottlenecks for developers who want to build their own machine learning models — and Google recognized this, too. “It’s their recognition that the lack of training data is a fundamental bottleneck to the adoption of AutoML,” he told me.

Since AutoML’s first product focuses on machine vision, it’s perhaps no surprise that Figure Eight’s partnership with Google also currently centers on visual training data. The service is meant to help relatively inexperienced developers collect data, prepare it for use in AutoML and then experiment with the results.

What makes Figure Eight stand out from other platforms is that it keeps the human in the loop. Bordoli argues that you can’t simply use AI tools to annotate your training data, just like you can’t fully rely on humans either (unless you want to employ entire countries as image taggers). “Human labeling is a key need for our customers, and we are excited to partner with Figure Eight to enhance our support in this area,” said Francisco Uribe, the product manager for Google Cloud AutoML at Google.

As part of this partnership, Figure Eight has developed a number of AutoML-specific templates and processes for uploading data. It also offers its customers assistance with creating training data (while also ensuring AI fairness). Google Cloud users can use the Figure Eight platform to label up to 1,000 images, and they get access to the company’s data-labeling annotators if they don’t want to do all the work themselves.

Ahead of today’s announcement, Figure Eight had already generated more than 10 billion data labels; the partnership will surely accelerate that pace.

Okta nabs ScaleFT to build out ‘Zero Trust’ security framework

Okta, the cloud identity management company, announced today it has purchased a startup called ScaleFT to bring the Zero Trust concept to the Okta platform. Terms of the deal were not disclosed.

While Zero Trust isn’t exactly new to a cloud identity management company like Okta, acquiring ScaleFT gives the company a solid cloud-based Zero Trust foundation on which to continue developing the concept internally.

“To help our customers increase security while also meeting the demands of the modern workforce, we’re acquiring ScaleFT to further our contextual access management vision — and ensure the right people get access to the right resources for the shortest amount of time,” Okta co-founder and COO Frederic Kerrest said in a statement.

Zero Trust is a security framework that acknowledges work no longer happens behind the friendly confines of a firewall. In the old days before mobile and cloud, you could be pretty certain that anyone on your corporate network had the authority to be there. In a mobile world, though, it’s no longer a simple matter to defend a perimeter when there is effectively no such thing. Zero Trust means what it says: you can’t trust anyone on your systems, and your security posture has to reflect that.

The idea was pioneered by Google’s “BeyondCorp” principles, and the founders of ScaleFT are adherents of this approach. According to Okta, “ScaleFT developed a cloud-native Zero Trust access management solution that makes it easier to secure access to company resources without the need for a traditional VPN.”

Okta wants to incorporate the ScaleFT team and, well, scale their solution for large enterprise customers interested in developing this concept, according to a company blog post by Kerrest.

“Together, we’ll work to bring Zero Trust to the enterprise by providing organizations with a framework to protect sensitive data, without compromising on experience. Okta and ScaleFT will deliver next-generation continuous authentication capabilities to secure server access — from cloud to ground,” Kerrest wrote in the blog post.

ScaleFT CEO and co-founder Jason Luce will manage the transition between the two companies, while CTO and co-founder Paul Querna will lead strategy and execution of Okta’s Zero Trust architecture. CSO Marc Rogers will take on the role of Okta’s Executive Director, Cybersecurity Strategy.

The acquisition allows Okta to move beyond purely managing identity into broader cybersecurity, at least conceptually. Certainly Rogers’ new role suggests the company could have other ideas to expand further into general cybersecurity beyond Zero Trust.

ScaleFT was founded in 2015 and has raised $2.8 million over two seed rounds, according to Crunchbase data.

Microsoft launches new wide-area networking options for Azure

Microsoft is launching a few new networking features today that will make it easier for businesses to securely connect their own offices and infrastructure through Azure and its global network.

The first of these is the Azure Virtual WAN service, which allows businesses to connect their various branches to and through Azure. This works much like an airline’s hub-and-spoke model: Azure becomes the central hub through which all data between branches flows. The advantage, Microsoft argues, is that admins can manage their wide-area networks from a central dashboard and that it becomes easy to bind additional Azure services and appliances to the network. With that, users also get access to all of the security services Azure has to offer.

One new security service that Microsoft is launching today is the Azure Firewall, a cloud-native security service meant to protect a business’s virtual network resources.

In addition to these two networking features, Microsoft also announced today that it is expanding its Azure Data Box service to two new regions: Europe and the United Kingdom (and let’s not argue about the fact that the U.K. is still part of Europe). Data Box is essentially Microsoft’s version of the AWS Snowball appliance: you move data into the cloud by loading it onto a shippable device. There is also now a ‘Data Box Disk’ option for those who don’t need to move petabytes of data. Orders of up to five of those disks, which together hold up to 40 terabytes of data, are currently in preview.

Serverless computing could unleash a new startup ecosystem

While serverless computing isn’t new, it has reached an interesting place in its development. As developers begin to see the value of serverless architecture, a whole new startup ecosystem could begin to develop around it.

Serverless isn’t exactly serverless at all, but it does enable a developer to set event triggers and leave the infrastructure requirements completely to the cloud provider. The vendor delivers exactly the right amount of compute, storage and memory and the developer doesn’t even have to think about it (or code for it).

That sounds ideal on its face, but as with every new technology, for each solution there is a set of new problems and those issues tend to represent openings for enterprising entrepreneurs. That could mean big opportunities in the coming years for companies building security, tooling, libraries, APIs, monitoring and a whole host of tools serverless will likely require as it evolves.

Building layers of abstraction

In the beginning we had physical servers, but there was lots of wasted capacity. That led to the development of virtual machines, which enabled IT to take a single physical server and divide it into multiple virtual ones. That was a huge breakthrough for its time: it helped launch successful companies like VMware and paved the way for cloud computing. But it was only the beginning.

Then came containers, which really began to take off with the development of Docker and Kubernetes, two open source platforms. Containers enable the developer to break down a large monolithic program into discrete pieces, which helps it run more efficiently. More recently, we’ve seen the rise of serverless or event-driven computing. In this case, the whole idea of infrastructure itself is being abstracted away.

Photo: shutterjack/Getty Images

While it’s not truly serverless, since you need underlying compute, storage and memory to run a program, it is removing the need for developers to worry about servers. Today, so much coding goes into connecting the program’s components to run on whatever hardware (virtual or otherwise) you have designated. With serverless, the cloud vendor handles all of that for the developer.

All of the major vendors have launched serverless products, with AWS Lambda, Google Cloud Functions and Microsoft Azure Functions all offering a similar approach. But serverless has the potential to be more than just another way to code. It could eventually shift the way we think about programming and its relation to the underlying infrastructure altogether.

It’s important to understand that we aren’t quite there yet, and a lot of work still needs to happen for serverless to really take hold, but it has enormous potential to be a startup feeder system in coming years and it’s certainly caught the attention of investors looking for the next big thing.

Removing another barrier to entry

Tim Wagner, general manager for AWS Lambda, says the primary advantage of serverless computing is that it allows developers to strip away all of the challenges associated with managing servers. “So there is no provisioning, deploying, patching or monitoring — all those details at the server and operating system level go away,” he explained.

He says this allows developers to reduce the entire coding process to the function level. The programmer defines the event or function and the cloud provider figures out the exact amount of underlying infrastructure required to run it. Mind you, this can be as little as a single line of code.
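To illustrate the scale of that shift, here is a minimal sketch of a complete serverless function, written against AWS’s Go runtime for Lambda. The event type and greeting logic are made up for illustration; in practice, the payload shape is whatever the configured trigger sends:

```go
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/lambda"
)

// GreetEvent is a hypothetical input payload for this example.
type GreetEvent struct {
	Name string `json:"name"`
}

// handler is the entire "application": an event comes in, a value goes
// out, and the cloud provider supplies everything underneath.
func handler(ctx context.Context, evt GreetEvent) (string, error) {
	return fmt.Sprintf("Hello, %s", evt.Name), nil
}

func main() {
	lambda.Start(handler)
}
```

There is no server setup, capacity plan or deployment script in sight; the developer’s job ends at the function boundary.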

Blocks of servers in cloud data center.

Colin Anderson/Getty Images

Sarah Guo, a partner at Greylock Partners who invests in early-stage companies, sees serverless computing as offering a way for developers to concentrate on just the code by leaving infrastructure management to the provider. “If you look at one of the amazing things cloud computing platforms have done, it has just taken a lot of the expertise and cost that you need to build a scalable service and shifted it to [the cloud provider],” she said. Serverless takes that concept and shifts it even further by allowing developers to concentrate solely on the user’s needs without having to worry about what it takes to actually run the program.

Survey says…

Cloud computing company DigitalOcean recently surveyed more than 4,800 IT pros, 55 percent of whom identified themselves as developers. When asked about serverless, nearly half of respondents reported they didn’t fully understand the concept. On the other hand, they certainly recognized the importance of learning more about it, with 81 percent reporting that they plan to do further research this year.

When asked if they had deployed a serverless application in the last year, not surprisingly about two-thirds reported they hadn’t. This was consistent across regions with India reporting a slightly higher rate of serverless adoption.

Graph: DigitalOcean

Of those using serverless, DigitalOcean found that AWS was by far the most popular service, with 58 percent of respondents reporting Lambda was their chosen tool, followed by Google Cloud Functions at 23 percent and Microsoft Azure Functions further back at 10 percent.

Interestingly enough, one of the reasons respondents reported a reluctance to adopt serverless was a lack of tooling. “One of the biggest challenges developers report when it comes to serverless is monitoring and debugging,” the report stated. That lack of visibility, however, could also represent an opening for startups.

Creating ecosystems

The thing about abstraction is that it simplifies operations on one level, but it also creates a new set of requirements, some expected and some that will surprise as a new way of programming scales. A lack of tooling can hinder adoption, but more often than not, necessity stimulates the development of a new set of instrumentation.

This is certainly something that Guo recognizes as an investor. “I think there is a lot of promise as we improve a bunch of things around making it easier for developers to access serverless, while expanding the use cases, and concentrating on issues like visibility and security, which are all [issues] when you give more and more control of [the infrastructure] to someone else,” she said.

Photo: shylendrahoode/Getty Images

Ping Li, general partner at Accel, also sees an opportunity here for investors. “I think the reality is that anytime there’s a kind of shift from a developer application perspective, there’s an opportunity to create a new set of tools or products that help you enable those platforms,” he said.

Li says the promise is there, but it won’t happen right away because there needs to be a critical mass of developers using serverless methodologies first. “I would say that we are definitely interested in serverless in that we believe it’s going to be a big part of how applications will be built in the future, but it’s still in its early stages,” Li said.

S. Somasegar, managing director at Madrona Venture Group, says that even as serverless removes complexity, it creates a new set of issues, which in turn creates openings for startups. “It is complicated because we are trying to create this abstraction layer over the underlying infrastructure and telling the developers that you don’t need to worry about it. But that means there are a lot of tools that have to exist in place — whether it is development tools, deployment tools, debugging tools or monitoring tools — that enable the developer to know certain things are happening when you’re operating in a serverless environment.”

Beyond tooling

Having that visibility in a serverless world is a real challenge, but it is not the only opening here. There are also opportunities for trigger or function libraries, or for companies akin to Twilio or Stripe, which offer easy API access to functionality that would otherwise demand particular expertise, like communications or payment gateways. There could be analogous needs in the serverless world.

Companies are beginning to take advantage of serverless computing to find new ways of solving problems. Over time, we should begin to see more developer momentum toward this approach and more tools develop.

While it is early days, as Guo says, it’s not as though developers love running infrastructure; it has just been a necessity. “I think [it] will be very interesting. I just think we’re still very early in the ecosystem,” she said. Yet the potential is certainly there: if the pieces fall into place and programmer momentum builds around this way of developing applications, serverless could really take off, and a startup ecosystem would follow.

Oracle could be feeling cloud transition growing pains

Oracle is learning that it’s hard for enterprise companies born in the data center to make the transition to the cloud, an entirely new way of doing business. Yesterday it reported earnings, and the results were a mixed bag, made harder to parse by a change in the way the company counts cloud revenue.

In its earnings press release from yesterday, it put it this way: “Q4 Cloud Services and License Support revenues were up 8% to $6.8 billion. Q4 Cloud License and On-Premise License revenues were down 5% to $2.5 billion.”

Let’s compare that with the language from the company’s Q3 report in March: “Cloud Software as a Service (SaaS) revenues were up 33% to $1.2 billion. Cloud Platform as a Service (PaaS) plus Infrastructure as a Service (IaaS) revenues were up 28% to $415 million. Total Cloud Revenues were up 32% to $1.6 billion.”

Notice how the company broke out its cloud revenue loudly and proudly in March, yet chose to combine cloud revenue with license revenue in June.

On the earnings call that followed, Oracle co-CEO Safra Catz, responding to a question from analyst John DiFucci, took exception to the idea that the company was somehow obfuscating cloud revenue by reporting it in this way. “So first of all, there is no hiding. I told you the Cloud number, $1.7 billion. You can do the math. You see we are right where we said we’d be.”

She says the new reporting method is due to new combined licensing products that let customers use their licenses on-premises or in the cloud. Fair enough, but if your business is booming, you probably want to let investors know about it. Investors seem uneasy about this approach, with the stock down more than 7 percent as of publication.

Oracle Stock Chart: Google

Oracle could, of course, settle all of this by spelling out its cloud revenue, but it chose a different path. John Dinsdale, an analyst with Synergy Research, a firm that watches the cloud market, was dubious about Oracle’s reasoning.

“Generally speaking, when a company chooses to reduce the amount of financial detail it shares on its key strategic initiatives, that is not a good sign. I think one of the justifications put forward is that is becoming difficult to differentiate between cloud and non-cloud revenues. If that is indeed what Oracle is claiming, I have a hard time buying into that argument. Its competitors are all moving in the opposite direction,” he said.

Indeed, most are. While it’s often hard to tell the exact nature of cloud revenue, the bigger players have been more open about it. In its most recent earnings report, for instance, Microsoft reported that its Azure cloud revenue grew 93 percent. Amazon reported that its cloud revenue from AWS was up 49 percent to $5.4 billion, getting very specific about the number.

Further, as Synergy’s most recent cloud market share numbers from the fourth quarter of last year show, Oracle was lumped in with “the Next 10,” not large enough to register on its own.

That Oracle chose not to break out cloud revenue this quarter can’t be seen as a good sign. To be fair, we haven’t really seen Google break out its cloud revenue either, with one exception in February. But when the companies at the top of the market shout about their growth and the ones further down don’t, you can draw your own conclusions.

Amazon starts shipping its $249 DeepLens AI camera for developers

Back at its re:Invent conference in November, AWS announced its $249 DeepLens, a camera that’s specifically geared toward developers who want to build and prototype vision-centric machine learning models. The company started taking pre-orders for DeepLens a few months ago, but now the camera is actually shipping to developers.

Ahead of today’s launch, I had a chance to attend a workshop in Seattle with DeepLens senior product manager Jyothi Nookula and Amazon’s VP for AI Swami Sivasubramanian to get some hands-on time with the hardware and the software services that make it tick.

DeepLens is essentially a small Ubuntu- and Intel Atom-based computer with a built-in camera that’s powerful enough to easily run and evaluate visual machine learning models. In total, DeepLens offers about 106 GFLOPS of performance.

The hardware has all of the usual I/O ports (think micro HDMI, USB 2.0, audio out, etc.) to let you create prototype applications, whether those are simple toy apps that send you an alert when the camera detects a bear in your backyard or an industrial application that keeps an eye on a conveyor belt in your factory. The 4-megapixel camera isn’t going to win any prizes, but it’s perfectly adequate for most use cases. Unsurprisingly, DeepLens is deeply integrated with the rest of AWS’s services. Those include the AWS IoT service Greengrass, which you use to deploy models to DeepLens, for example, but also SageMaker, Amazon’s newest tool for building machine learning models.

These integrations are also what makes getting started with the camera pretty easy. Indeed, if all you want to do is run one of the pre-built samples that AWS provides, it shouldn’t take you more than 10 minutes to set up your DeepLens and deploy one of these models to the camera. The project templates include an object detection model that can distinguish between 20 objects (though it had some issues with toy dogs), a style transfer example that renders the camera image in the style of van Gogh, a face detection model, a model that can distinguish between cats and dogs, and one that can recognize about 30 different actions (like playing guitar, for example). The DeepLens team is also adding a model for tracking head poses. Oh, and there’s also a hot dog detection model.

But that’s obviously just the beginning. As the DeepLens team stressed during our workshop, even developers who have never worked with machine learning can take the existing templates and easily extend them. In part, that’s due to the fact that a DeepLens project consists of two parts: the model and a Lambda function that runs instances of the model and lets you perform actions based on the model’s output. And with SageMaker, AWS now offers a tool that also makes it easy to build models without having to manage the underlying infrastructure.

You could do a lot of the development on the DeepLens hardware itself, given that it is essentially a small computer, though you’re probably better off using a more powerful machine and then deploying to DeepLens using the AWS Console. If you really wanted to, you could use DeepLens as a low-powered desktop machine as it comes with Ubuntu 16.04 pre-installed.

For developers who know their way around machine learning frameworks, DeepLens makes it easy to import models from virtually all the popular tools, including Caffe, TensorFlow, MXNet and others. It’s worth noting that the AWS team also built a model optimizer for MXNet models that allows them to run more efficiently on the DeepLens device.

So why did AWS build DeepLens? “The whole rationale behind DeepLens came from a simple question that we asked ourselves: How do we put machine learning in the hands of every developer,” Sivasubramanian said. “To that end, we brainstormed a number of ideas and the most promising idea was actually that developers love to build solutions [in a] hands-on fashion on devices.” And why did AWS decide to build its own hardware instead of simply working with a partner? “We had a specific customer experience in mind and wanted to make sure that the end-to-end experience is really easy,” he said. “So instead of telling somebody to go download this toolkit and then go buy this toolkit from Amazon and then wire all of these together. […] So you have to do like 20 different things, which typically takes two or three days and then you have to put the entire infrastructure together. It takes too long for somebody who’s excited about learning deep learning and building something fun.”

So if you want to get started with deep learning and build some hands-on projects, DeepLens is now available on Amazon. At $249, it’s not cheap, but if you are already using AWS — and maybe even use Lambda already — it’s probably the easiest way to get started building these kinds of machine learning-powered applications.

Salesforce deepens data sharing partnership with Google

Last fall at Dreamforce, Salesforce announced a deepening friendship with Google. That began to take shape in January with integration between Salesforce CRM data and Google Analytics 360 and Google BigQuery. Today, the two cloud giants announced the next step: the companies will share data between Google Analytics 360 and the Salesforce Marketing Cloud.

This particular data sharing partnership makes even more sense as the companies can share web analytics data with marketing personnel to deliver ever more customized experiences for users (or so the argument goes, right?).

That connection certainly didn’t escape Salesforce’s VP of product marketing, Bobby Jania. “Now, marketers are able to deliver meaningful consumer experiences powered by the world’s number one marketing platform and the most widely adopted web analytics suite,” Jania told TechCrunch.

Brent Leary, owner of the consulting firm CRM Essentials says the partnership is going to be meaningful for marketers. “The tighter integration is a big deal because a large portion of Marketing Cloud customers are Google Analytics/GA 360 customers, and this paves the way to more seamlessly see what activities are driving successful outcomes,” he explained.

The partnership involves four integrations that effectively allow marketers to round-trip data between the two platforms. For starters, consumer insights from both Marketing Cloud and Google Analytics 360 will be brought together into a single analytics dashboard inside Marketing Cloud. Conversely, Marketing Cloud data will be viewable inside Google Analytics 360, both for attribution analysis and to deliver more customized web experiences using that information. These first three integrations will be generally available starting today.

A fourth element of the partnership being announced today won’t be available in Beta until the third quarter of this year. “For the first time ever audiences created inside the Google Analytics 360 platform can be activated outside of Google. So in this case, I’m able to create an audience inside of Google Analytics 360 and then I’m able to activate that audience in Marketing Cloud,” Jania explained.

An audience is like a segment: if you have a group of like-minded individuals in the Google Analytics tool, you can simply transfer it to Salesforce Marketing Cloud and send more relevant emails to that group.

This data sharing capability removes a lot of the labor involved in trying to monitor data stored in two places, but of course it also raises questions about data privacy. Jania was careful to point out that the two platforms are not sharing specific information about individual consumers, which could be in violation of the new GDPR data privacy rules that went into effect in Europe at the end of last month.

“What we’re [sharing] is either metadata or aggregated reporting results. Just to be clear, there’s no personally identifiable data that is flowing between the systems, so everything here is 100% GDPR-compliant,” Jania said.

But Leary says it might not be so simple, especially in light of recent data sharing abuses. “With Facebook having to open up about how they’re sharing consumer data with other organizations, companies like Salesforce and Google will have to be more careful than ever before about how the consumer data they make available to their corporate customers will be used by them. It’s a whole new level of scrutiny that has to be a part of the data sharing equation,” Leary said.

The announcements were made today at the Salesforce Connections conference taking place in Chicago this week.