An argument against cloud-based applications

In the last decade we’ve seen massive changes in how we consume and interact with our world. The Yellow Pages is a concept that now has to be meticulously explained, usually with a rueful nod to our own age. We live within our smartphones, within our apps.

While we thrive with the information of the world at our fingertips, we casually throw away any semblance of privacy in exchange for the convenience of this world.

The line we straddle has been drawn with both recklessness and calculation by big tech companies over the years, as we’ve come to terms with what app developers, large technology companies, and app stores demand of us.

Our private data goes into the cloud

According to Symantec, 89% of Android apps and 39% of iOS apps require access to private information. That access sends our data to cloud servers, both to improve the performance of the application (think of the data a fitness app needs) and to store data for advertising demographics.

While large data companies would argue that data is not held for long, or is not used in a nefarious manner, every time we use the apps on our phones we create an undeniable data trail. Companies generally keep that data on the move, with servers around the world constantly shuttling it farther from its source.

Once we accept the terms and conditions we rarely read, our private data is no longer private. It lives in the cloud, a term which has eluded concrete understanding throughout the years.

A distinction between cloud-based apps and cloud computing must be addressed. Cloud computing at an enterprise level, while debated ad nauseam over the years, is generally considered a secure and cost-effective option for many businesses.

Even back in 2010, Microsoft said 70% of its team was working on things that were cloud-based or cloud-inspired, and the company projected that number would rise to 90% within a year. That was before we started relying on the cloud to store our most personal, private data.

Cloudy with a chance of confusion

To add complexity to this issue, there are literally apps to protect your privacy from other apps on your smartphone. Tearing more meat off the privacy bone, these apps themselves require a level of access that would generally raise eyebrows if it were any other category of app.

Consider the scenario where you use a key to encrypt data, but then you need to encrypt that key to make it safe. Ultimately, you end up with the most important keys not being encrypted. There is no win-win here. There is only finding a middle ground of contentment in which your apps find as much purchase in your private data as your doctor finds in your medical history.
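
To make that regress concrete, here is a minimal Python sketch of envelope encryption using the cryptography package; the data and key names are purely illustrative:

    # A minimal sketch of the key-wrapping regress described above, using the
    # "cryptography" package's Fernet recipe. Names and data are illustrative.
    from cryptography.fernet import Fernet

    # Encrypt the data with a data key.
    data_key = Fernet.generate_key()
    ciphertext = Fernet(data_key).encrypt(b"private health record")

    # To protect the data key, wrap it with a key-encryption key (KEK)...
    kek = Fernet.generate_key()
    wrapped_data_key = Fernet(kek).encrypt(data_key)

    # ...but now the KEK itself sits in plaintext. Wrapping it again just moves
    # the problem up one level: some root key always remains unencrypted.
    print(len(ciphertext), len(wrapped_data_key))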

The cloud is not tangible, nor is it something we as givers of the data can access. Each company has its own cloud servers, each one collecting similar data. But we have to consider why we give up this data. What are we getting in return? We are given access to applications that perhaps make our lives easier or better, but essentially are a service. It’s this service end of the transaction that must be altered.

App developers have to find a method of service delivery that does not require storing personal data. There are two sides to this. The first is creating algorithms that can run locally, rather than centralized and mixed with other data sets. The second is a shift in the industry’s general attitude, away from the model in which free services are provided at the cost of your personal data (which is ultimately used to foster marketing opportunities).

Of course, asking this of any big data company that thrives on its data collection and marketing process is untenable. So the change has to come from new companies, willing to risk offering cloud privacy while still providing a service worth paying for. Because it wouldn’t be free. It cannot be free, as free is what got us into this situation in the first place.

Clearing the clouds of future privacy

What we can do right now is at least take a stance of personal vigilance. While there is some personal data that we cannot stem the flow of onto cloud servers around the world, we can at least limit the use of frivolous apps that collect too much data. For instance, games should never need access to our contacts, to our camera and so on. Everything within our phone is connected, it’s why Facebook seems to know everything about us, down to what’s in our bank account.

This sharing takes place on our phone and at the cloud level, and is something we need to consider when accepting the terms on a new app. When we sign into apps with our social accounts, we are just assisting the further collection of our data.

The cloud isn’t some omnipotent enemy here, but it is the excuse and tool that allows the mass collection of our personal data.

The future is likely one in which devices and apps finally become self-sufficient and localized, enabling users to maintain control of their data. The way we access apps and data in the cloud will change as well, as we’ll demand a functional process that forces a methodology change in service provisions. The cloud will be relegated to public data storage, leaving our private data on our devices where it belongs. We have to collectively push for this change, lest we lose whatever semblance of privacy in our data we have left.

Vantage makes managing AWS easier

Vantage, a new service that makes managing AWS resources and their associated spend easier, is coming out of stealth today. The service offers its users an alternative to the complex AWS console, with support for most of the standard AWS services, including EC2 instances, S3 buckets, VPCs, ECS, Fargate and Route 53 hosted zones.

The company’s founder, Ben Schaechter, previously worked at AWS and DigitalOcean (and before that, at Crunchbase). While DigitalOcean showed him how to build a developer experience for individuals and small businesses, he argues that its underlying services and hardware simply weren’t as robust as those of the hyperclouds. AWS, on the other hand, offers everything a developer could want (and likely more), but the user experience leaves a lot to be desired.

“The idea was really born out of ‘what if we could take the user experience of DigitalOcean and apply it to the three public cloud providers, AWS, GCP and Azure,’” Schaechter told me. “We decided to start just with AWS because the experience there is the roughest and it’s the largest player in the market. And I really think that we can provide a lot of value there before we do GCP and Azure.”

The focus for Vantage is on the developer experience and cost transparency. Schaechter noted that some of its users describe it as being akin to a “Mint for AWS.” To get started, you give Vantage a set of read permissions to your AWS services and the tool will automatically profile everything in your account. The service refreshes this list once per hour, but users can also refresh their lists manually.
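
That profiling step is, at its core, a series of read-only API calls. As a rough illustration (not Vantage’s actual implementation), a boto3 sketch that assumes read-only credentials are already configured might look like this:

    # Rough illustration of read-only account profiling (not Vantage's code):
    # enumerate a few common resource types with describe/list calls only.
    import boto3

    session = boto3.Session()  # assumes read-only credentials are configured

    ec2 = session.client("ec2")
    instances = [
        i["InstanceId"]
        for r in ec2.describe_instances()["Reservations"]
        for i in r["Instances"]
    ]

    s3 = session.client("s3")
    buckets = [b["Name"] for b in s3.list_buckets()["Buckets"]]

    route53 = session.client("route53")
    zones = [z["Name"] for z in route53.list_hosted_zones()["HostedZones"]]

    print(f"{len(instances)} EC2 instances, {len(buckets)} buckets, {len(zones)} hosted zones")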

Given that it’s often hard enough to know which AWS services you are actually using, that alone is a useful feature. “That’s the number one use case,” he said. “What are we paying for and what do we have?”

At the core of Vantage is what the team calls “views,” which let you see which resources you are using. What is interesting here is that this is quite a flexible system, allowing you to build custom views to see which resources you are using for a given application across regions, for example. Those may include Lambda, storage buckets, your subnet, code pipeline and more.
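
Conceptually, a view like that amounts to filtering resources by an application tag across regions. A hypothetical sketch (the tag key and application name are made up, and again this is not Vantage’s code):

    # Hypothetical sketch of a cross-region "view": find EC2 instances tagged
    # with a given application name in every available region.
    import boto3

    APP_TAG = {"Name": "tag:app", "Values": ["checkout"]}  # assumed tag scheme

    regions = [
        r["RegionName"]
        for r in boto3.client("ec2", region_name="us-east-1").describe_regions()["Regions"]
    ]

    for region in regions:
        ec2 = boto3.client("ec2", region_name=region)
        reservations = ec2.describe_instances(Filters=[APP_TAG])["Reservations"]
        ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
        if ids:
            print(region, ids)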

On the cost-tracking side, Vantage currently only offers point-in-time costs, but Schaechter tells me that the team plans to add historical trends as well to give users a better view of their cloud spend.

Schaechter and his co-founder bootstrapped the company, and he noted that he wants to see people paying for the service before raising any money for it. Currently, Vantage offers a free plan, as well as paid “pro” and “business” plans with additional functionality.

Google expands its cloud with new regions in Chile, Germany and Saudi Arabia

It’s been a busy year of expansion for the large cloud providers, with AWS, Azure and Google aggressively expanding their data center presence around the world. To cap off the year, Google Cloud today announced a new set of cloud regions, which will go live in the coming months and years. These new regions, which will all have three availability zones, will be in Chile, Germany and Saudi Arabia. That’s on top of the regions in Indonesia, South Korea and the U.S. (Las Vegas and Salt Lake City) that went live this year — and the upcoming regions in France, Italy, Qatar and Spain the company also announced over the course of the last twelve months.

In total, Google currently operates 24 regions with 73 availability zones, not counting those it has announced but that aren’t live yet. While Microsoft Azure is well ahead of the competition in terms of the total number of regions (though some still lack availability zones), Google is now starting to pull even with AWS, which currently offers 24 regions with a total of 77 availability zones. Indeed, with its 12 announced regions, Google Cloud may actually soon pull ahead of AWS, which is currently working on six new regions.

The battleground may soon shift away from these large data centers, though, with a new focus on edge zones close to urban centers that are smaller than the full-blown data centers the large clouds currently operate but that allow businesses to host their services even closer to their customers.

All of this is a clear sign of how much Google has invested in its cloud strategy in recent years. For the longest time, after all, Google Cloud Platform lagged well behind its competitors; only three years ago, Google Cloud offered just 13 regions. And that’s on top of the company’s heavy investment in submarine cables and edge locations.

Google grants $3 million to the CNCF to help it run the Kubernetes infrastructure

Back in 2018, Google announced that it would provide $9 million in Google Cloud Platform credits — divided over three years — to the Cloud Native Computing Foundation (CNCF) to help it run the development and distribution infrastructure for the Kubernetes project. Previously, Google owned and managed those resources for the community. Today, the two organizations announced that Google is adding on to this grant with another $3 million annual donation to the CNCF to “help ensure the long-term health, quality and stability of Kubernetes and its ecosystem.”

As Google notes, the funds will go to the testing and infrastructure of the Kubernetes project, which currently sees over 2,300 monthly pull requests that trigger about 400,000 integration test runs, all of which use about 300,000 core hours on GCP.

“I’m really happy that we’re able to continue to make this investment,” Aparna Sinha, a director of product management at Google and the chairperson of the CNCF governing board, told me. “We know that it is extremely important for the long-term health, quality and stability of Kubernetes and its ecosystem and we’re delighted to be partnering with the Cloud Native Computing Foundation on an ongoing basis. At the end of the day, the real goal of this is to make sure that developers can develop freely and that Kubernetes, which is of course so important to everyone, continues to be an excellent, solid, stable standard for doing that.”

Sinha also noted that Google contributes a lot of code to the project, with 128,000 code contributions in the last twelve months alone. But on top of these technical contributions, the team is also making in-kind contributions through community engagement and mentoring, for example, in addition to the kind of financial contributions the company is announcing today.

“The Kubernetes project has been growing so fast — the releases are just one after the other,” said Priyanka Sharma, the General Manager of the CNCF. “And there are big changes, all of this has to run somewhere. […] This specific contribution of the $3 million, that’s where that comes in. So the Kubernetes project can be stress-free, [knowing] they have enough credits to actually run for a full year. And that security is critical because you don’t want Kubernetes to be wondering where will this run next month. This gives the developers and the contributors to the project the confidence to focus on feature sets, to build better, to make Kubernetes ever-evolving.”

It’s worth noting that while both Google and the CNCF are putting their best foot forward here, there have been some questions around Google’s management of the Istio service mesh project, which was incubated by Google and IBM a few years ago. At some point in 2017, there was a proposal to bring it under the CNCF umbrella, but that never happened. This year, Istio became one of the founding projects of Open Usage Commons, though that group is mostly concerned with trademarks, not with project governance. And while all of this may seem like a lot of inside baseball (and it is), it has led some members of the open-source community to question Google’s commitment to organizations like the CNCF.

“Google contributes to a lot of open-source projects. […] There’s a lot of them, many are with open-source foundations under the Linux Foundation, many of them are otherwise,” Sinha said when I asked her about this. “There’s nothing new, or anything to report about anything else. In particular, this discussion — and our focus very much with the CNCF here is on Kubernetes, which I think — out of everything that we do — is by far the biggest contribution or biggest amount of time and biggest amount of commitment relative to anything else.”

Google, Intel, Zoom and others launch a new alliance to get enterprises to use more Chrome

A group of industry heavyweights, including Google, Box, Citrix, Dell, Imprivata, Intel, Okta, RingCentral, Slack, VMware and Zoom, today announced the launch of the Modern Computing Alliance.

The mission for this new alliance is to “drive ‘silicon-to-cloud’ innovation for the benefit of enterprise customers — fueling a differentiated modern computing platform and providing additional choice for integrated business solutions.”

Whoever wrote this mission statement was clearly trying to see how many words they could use without actually saying something.

Here is what the alliance is really about: even though the word Chrome never appears on its homepage and Google’s partners never quite get to mentioning it either, it’s all about helping enterprises adopt Chrome and Chrome OS. “The focus of the alliance is to drive innovation and interoperability in the Google Chrome ecosystem, increasing options for enterprise customers and helping to address some of the biggest tech challenges facing companies today,” a Google spokesperson told me.

I’m not sure why it’s not called the Chrome Enterprise Alliance, but Modern Computing Alliance may just have more of a ring to it. This also explains why Microsoft isn’t part of it, though this is only the initial slate of members and others may follow at some point in the future.

Led by Google, the alliance’s focus is on bringing modern web apps to the enterprise, with a focus on performance, security, identity management and productivity. And all of that, of course, is meant to run well on Chrome and Chrome OS and be interoperable.

“The technology industry is moving towards an open, heterogeneous ecosystem that allows freedom of choice while integrating across the stack. This reality presents both a challenge and an opportunity,” Google’s Chrome OS VP John Solomon writes today.

As enterprises move to the cloud, building better web applications and maybe even Progressive Web Applications that work just as well as native solutions is obviously a noble goal and it’s nice to see these companies work together. Given the pandemic, all of this has taken on a new urgency now, too. The plan is for the alliance to release products — though it’s unclear what form these will take — in the first half of 2021. Hopefully, these will play nicely with any browser. A lot of these ‘alliances’ fizzle out quite quickly, so we’ll keep an eye on what happens here.

Bonus: the industry has a long history of alliances like these. Here’s a fun 1991 story about a CPU alliance between Intel, IBM, MIPS and others.

AWS expands on SageMaker capabilities with end-to-end features for machine learning

Nearly three years after it first launched, Amazon Web Services’ SageMaker platform has gotten a significant upgrade in the form of new features that make it easier for developers to automate and scale each step of the process of building new machine learning capabilities, the company said.

As machine learning moves into the mainstream, business units across organizations will find applications for automation,  and AWS is trying to make the development of those bespoke applications easier for its customers.

“One of the best parts of having such a widely-adopted service like SageMaker is that we get lots of customer suggestions which fuel our next set of deliverables,” said AWS vice president of machine learning, Swami Sivasubramanian. “Today, we are announcing a set of tools for Amazon SageMaker that makes it much easier for developers to build end-to-end machine learning pipelines to prepare, build, train, explain, inspect, monitor, debug and run custom machine learning models with greater visibility, explainability, and automation at scale.”

Already companies like 3M, ADP, AstraZeneca, Avis, Bayer, Capital One, Cerner, Domino’s Pizza, Fidelity Investments, Lenovo, Lyft, T-Mobile, and Thomson Reuters are using SageMaker tools in their own operations, according to AWS.

The company’s new products include Amazon SageMaker Data Wrangler, which the company says provides a way to normalize data from disparate sources so that the data is consistent and easy to use. Data Wrangler can also ease the process of grouping disparate data sources into features to highlight certain types of data. The Data Wrangler tool contains over 300 built-in data transformers that can help customers normalize, transform and combine features without having to write any code.
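
Data Wrangler itself is driven from SageMaker Studio, but the kinds of transformations it bundles are familiar. A plain pandas sketch of combining sources and normalizing a feature, purely as an illustration of the steps Data Wrangler automates rather than its API:

    # Illustrative only: the sort of normalization and feature-combination steps
    # a Data Wrangler flow automates, written by hand with pandas.
    import pandas as pd

    orders = pd.DataFrame({"customer_id": [1, 2, 1], "amount": [120.0, 35.5, 80.0]})
    profiles = pd.DataFrame({"customer_id": [1, 2], "tenure_days": [400, 30]})

    # Combine disparate sources on a shared key.
    df = orders.merge(profiles, on="customer_id", how="left")

    # Normalize a numeric column to zero mean and unit variance.
    df["amount_norm"] = (df["amount"] - df["amount"].mean()) / df["amount"].std()

    # Derive a combined feature.
    df["amount_per_tenure"] = df["amount"] / df["tenure_days"]
    print(df)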

Amazon also unveiled the Feature Store, which allows customers to create repositories that make it easier to store, update, retrieve and share machine learning features for training and inference.
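
Once a feature group exists, individual records are written and read through the feature store runtime API. A hedged boto3 sketch, in which the feature group name and feature names are assumptions:

    # Hedged sketch of writing and reading a record against an existing
    # SageMaker Feature Store feature group; the names here are assumptions.
    import time
    import boto3

    runtime = boto3.client("sagemaker-featurestore-runtime")

    runtime.put_record(
        FeatureGroupName="customers",  # assumed, must already exist
        Record=[
            {"FeatureName": "customer_id", "ValueAsString": "42"},
            {"FeatureName": "tenure_days", "ValueAsString": "400"},
            {"FeatureName": "event_time", "ValueAsString": str(time.time())},
        ],
    )

    record = runtime.get_record(
        FeatureGroupName="customers",
        RecordIdentifierValueAsString="42",
    )
    print(record.get("Record"))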

Another new tool that Amazon Web Services touted was its workflow management and automation toolkit, Pipelines. The Pipelines tech is designed to provide orchestration and automation features not dissimilar to those of traditional programming. Using Pipelines, developers can define each step of an end-to-end machine learning workflow, the company said in a statement. Developers can use the tools to re-run an end-to-end workflow from SageMaker Studio using the same settings to get the same model every time, or they can re-run the workflow with new data to update their models.
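
With the SageMaker Python SDK, a pipeline is declared as a set of steps, registered, and then started. A minimal, hedged sketch in which the training image URI, IAM role and S3 path are placeholders:

    # Minimal, hedged sketch of a SageMaker Pipeline with a single training step.
    # The image URI, IAM role and S3 path below are placeholders.
    import sagemaker
    from sagemaker.estimator import Estimator
    from sagemaker.inputs import TrainingInput
    from sagemaker.workflow.pipeline import Pipeline
    from sagemaker.workflow.steps import TrainingStep

    session = sagemaker.Session()
    role = "arn:aws:iam::123456789012:role/SageMakerRole"  # placeholder

    estimator = Estimator(
        image_uri="<training-image-uri>",  # placeholder
        role=role,
        instance_count=1,
        instance_type="ml.m5.xlarge",
        sagemaker_session=session,
    )

    step_train = TrainingStep(
        name="TrainModel",
        estimator=estimator,
        inputs={"train": TrainingInput(s3_data="s3://my-bucket/train/")},
    )

    pipeline = Pipeline(name="demo-pipeline", steps=[step_train], sagemaker_session=session)
    pipeline.upsert(role_arn=role)  # register (or update) the pipeline definition
    execution = pipeline.start()    # re-running with the same inputs reproduces the model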

To address longstanding issues with data bias in artificial intelligence and machine learning models, Amazon launched SageMaker Clarify. First announced today, the tool is said to provide bias detection across the machine learning workflow, so developers can build with an eye toward better transparency on how models were set up. There are open-source tools that can run these tests, Amazon acknowledged, but they are manual and require a lot of heavy lifting from developers, according to the company.
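
The reports such tools produce boil down to metrics like class imbalance and disparate impact. As a rough, library-free illustration of the kind of number being surfaced (this is not the Clarify API, and the data is made up):

    # Rough illustration of one bias metric Clarify-style tools report: the
    # disparate impact ratio between two groups' positive-outcome rates.
    labels = [1, 0, 1, 1, 0, 1, 0, 0]                    # outcomes / predictions
    groups = ["a", "a", "a", "b", "b", "b", "b", "b"]    # sensitive attribute

    def positive_rate(group):
        vals = [l for l, g in zip(labels, groups) if g == group]
        return sum(vals) / len(vals)

    ratio = positive_rate("b") / positive_rate("a")
    print(f"disparate impact (b vs a): {ratio:.2f}")  # far from 1.0 suggests bias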

Other products designed to simplify the machine learning application development process include SageMaker Debugger, which enables developers to train models faster by monitoring system resource utilization and alerting them to potential bottlenecks; Distributed Training, which makes it possible to train large, complex deep learning models faster than current approaches by automatically splitting data across multiple GPUs to accelerate training times; and SageMaker Edge Manager, a machine learning model management tool for edge devices, which allows developers to optimize, secure, monitor and manage models deployed on fleets of edge devices.

Last but not least, Amazon unveiled SageMaker JumpStart, which provides developers with a searchable interface to find algorithms and sample notebooks so they can get started on their machine learning journey. The company said it would give developers new to machine learning the option to select several pre-built machine learning solutions and deploy them into SageMaker environments.

The cloud can’t solve all your problems

The way a team functions and communicates dictates the operational efficiency of a startup and sets the scene for its culture. It’s way more important than what social events and perks are offered, so it’s the responsibility of a founder and/or CEO to provide their team with a technology approach that will empower them to achieve and succeed — now and in the future.

With that in mind, moving to the cloud might seem like a no-brainer because of its huge benefits around flexibility, accessibility and the potential to rapidly scale, while keeping budgets in check.

But there’s an important consideration here: Cloud providers won’t magically give you efficient teams.

It will get you going in the right direction, but you need to think even farther ahead. Designing a startup for scale means investing in the right technology today to underpin growth for tomorrow and beyond. Let’s look at how the way you approach and manage your cloud infrastructure will impact the effectiveness of your teams and your ability to scale.

Hindsight is 20/20

Adopting the cloud is easy, but adopting it properly, with best practices and in a secure way? Not so much. You might think that when you move to the cloud, the cloud providers will give you everything you need to succeed. But even though they’re there to provide a wide breadth of services, those services won’t necessarily have the depth that you will need to run efficiently and effectively.

Yes, your cloud infrastructure is working now, but think beyond the first prototype or alpha and toward production. Considering where you want to get to, and not just where you are, will help you avoid costly mistakes. You definitely don’t want to struggle through redefining processes and ways of working when you’re also managing time sensitivities and multiple teams.

If you don’t think ahead, you’ll have to put new processes in place later. It will take a whole lot longer, cost more money and cause a lot more disruption to teams than if you had done it earlier.

For any founder, making strategic technology decisions right now should be a primary concern. It feels more natural to put off those decisions until you come face to face with the problem, but you’ll just end up needing to redo everything as you scale and cause your teams a world of hurt. If you don’t give this problem attention at the beginning, you’re just scaling the problems with the team. Flaws are then embedded within your infrastructure, and they’ll continue to scale with the teams. When these things are rushed, corners are cut and you will end up spending even more time and money on your infrastructure.

Build effective teams and reduce bottlenecks

When you’re making strategic decisions on how to approach your technology stack and cloud infrastructure, the biggest consideration should be what makes an effective team. Given that, keep these things top of mind:

  • Speed of delivery: Having developers able to self-serve cloud infrastructure with best practices built in will enable speed. Development tools that factor in visibility and communication integrations for teams will give transparency into how they are iterating and surface problems, bugs or integration failures.
  • Speed of testing: This is all about ensuring fast feedback loops as your team works on critical new iterations and features. Developers should be able to test as much as possible locally and through continuous integration systems before changes are ready for code review.
  • Troubleshooting problems: Good logging, monitoring and observability services give teams awareness of issues and the ability to resolve problems quickly, or to reproduce customer complaints in order to develop fixes (see the sketch after this list).
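
On that last point, even a small amount of structured logging goes a long way toward observability. A minimal Python sketch, with example field names:

    # Minimal structured-logging sketch: emit JSON log lines that monitoring and
    # observability tools can index and alert on. Field names are examples only.
    import json
    import logging
    import time

    logger = logging.getLogger("checkout")
    logging.basicConfig(level=logging.INFO, format="%(message)s")

    def log_event(level, event, **fields):
        record = {"ts": time.time(), "event": event, **fields}
        logger.log(level, json.dumps(record))

    log_event(logging.INFO, "payment_started", order_id="A-1001", amount=42.50)
    log_event(logging.ERROR, "payment_failed", order_id="A-1001", reason="card_declined")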

Microsoft announces its first Azure data center region in Denmark

Microsoft continues to expand its global Azure data center presence at a rapid clip. After announcing new regions in Austria and Taiwan in October, the company today revealed its plans to launch a new region in Denmark.

As with many of Microsoft’s recent announcements, the company is also attaching a commitment to provide digital skills to 200,000 people in the country (in this case, by 2024).

“With this investment, we’re taking the next step in our longstanding commitment to provide Danish society and businesses with the digital tools, skills and infrastructure needed to drive sustainable growth, innovation, and job creation. We’re investing in Denmark’s digital leap into the future – all in a way that supports the country’s ambitious climate goals and economic recovery,” said Nana Bule, General Manager, Microsoft Denmark.

The new data center, which will be powered by 100% renewable energy and feature multiple availability zones, will support what has now become the standard set of Microsoft cloud products: Azure, Microsoft 365, Dynamics 365 and Power Platform.

As usual, the idea here is to provide low-latency access to Microsoft’s tools and services. It has long been Microsoft’s strategy to blanket the globe with local data centers. Europe is a prime example of this, with regions (both operational and announced) in about a dozen countries already. In the U.S., Azure currently offers 13 regions (including three exclusively for government agencies), with a new region on the West Coast coming soon.

“This is a proud day for Microsoft in Denmark,” said Brad Smith, President, Microsoft. “Building a hyper-scale datacenter in Denmark means we’ll store Danish data in Denmark, make computing more accessible at even faster speeds, secure data with our world-class security, protect data with Danish privacy laws, and do more to provide to the people of Denmark our best digital skills training. This investment reflects our deep appreciation of Denmark’s green and digital leadership globally and our commitment to its future.”

3 ways the pandemic is transforming tech spending

Ever since the pandemic hit the U.S. in full force last March, the B2B tech community keeps asking the same questions: Are businesses spending more on technology? What’s the money getting spent on? Is the sales cycle faster? What trends will likely carry into 2021?

Recently we decided to join forces to answer these questions. We analyzed data from the just-released Q4 2020 Outlook of the Coupa Business Spend Index (BSI), a leading indicator of economic growth, in light of hundreds of conversations we have had with business-tech buyers this year.

A former Battery Ventures portfolio company, Coupa* is a business spend-management company that has cumulatively processed more than $2 trillion in business spending. This perspective gives Coupa unique, real-time insights into tech spending trends across multiple industries.

Broadly speaking, tech spending is continuing despite the economic recession — which helps explain why many tech startups are raising large financing rounds and even tapping the public markets for capital. Here are our three specific takeaways on current tech spending:

Spending is shifting away from remote collaboration to SaaS and cloud computing

Tech spending ranks among the hottest boardroom topics today. Decisions that used to be confined to the CIO’s organization are now operationally and strategically critical to the CEO. Multiple reasons drive this shift, but the pandemic has forced businesses to operate and engage with customers differently, almost overnight. Boards recognize that companies must change their business models and operations if they don’t want to become obsolete. The question on everyone’s mind is no longer “what are our technology investments?” but rather, “how fast can they happen?”

Spending on WFH/remote collaboration tools has largely run its course in the first wave of adaptation forced by the pandemic. Now we’re seeing a second wave of tech spending, in which enterprises adopt technology to make operations easier and simply keep their doors open.

SaaS solutions are replacing unsustainable manual processes. Consider Rhode Island’s decision to shift from in-person citizen surveying to using SurveyMonkey. Many companies are shifting their vendor payments to digital payments, ditching paper checks entirely. Utility provider PG&E is accelerating its digital transformation roadmap from five years to two years.

The second wave of adaptation has also pushed many companies to embrace the cloud.

Similarly, the difficulty of maintaining a traditional data center during a pandemic has pushed many companies to finally shift to cloud infrastructure. As they migrate those workloads to the cloud, the pie is still expanding: Goldman Sachs and Battery Ventures data suggest $600 billion worth of disruption potential will bleed into 2021 and beyond.

In addition to SaaS and cloud adoption, companies across sectors are spending on technologies to reduce their reliance on humans. For instance, Tyson Foods is investing in and accelerating the adoption of automated technology to process poultry, pork and beef.

All companies are digital product companies now

Mention “digital product company” in the past, and we’d all think of Netflix. But now every company has to reimagine itself as offering digital products in a meaningful way.

AWS announces Panorama, a device that adds machine learning technology to any camera

AWS has launched a new hardware device, the AWS Panorama Appliance, which, alongside the AWS Panorama SDK, will transform existing on-premises cameras into computer vision-enabled, super-powered surveillance devices.

Pitching the hardware as a new way for customers to inspect parts on manufacturing lines, ensure that safety protocols are being followed, or analyze traffic in retail stores, the new automation service is part of the theme of this AWS re:Invent event — automate everything.

The new Panorama Appliance can run computer vision models that companies develop using Amazon SageMaker on video feeds from networked or network-enabled cameras.
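
Stripped of the Panorama-specific packaging, the underlying pattern is pulling frames from a networked camera and running a model on each one. A hedged OpenCV sketch, where the stream URL and the detect() stub are placeholders rather than the Panorama SDK:

    # Hedged sketch of the general pattern: pull frames from a network camera
    # and run a vision model on each. The URL and detect() stub are placeholders;
    # this is not the Panorama SDK.
    import cv2

    def detect(frame):
        # Stand-in for a real model (e.g. one trained in SageMaker).
        return []  # list of (label, confidence, bounding_box)

    cap = cv2.VideoCapture("rtsp://camera.local/stream")  # placeholder URL
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for label, conf, box in detect(frame):
            print(label, conf, box)
    cap.release()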

AWS also expects to soon release the Panorama SDK, which device manufacturers can use to build Panorama-enabled devices.

Amazon has already pitched surveillance technologies to developers and the enterprise before. Back in 2017, the company unveiled DeepLens, which it began selling one year later. It was a way for developers to build prototype machine learning models and for Amazon to get comfortable with different ways of commercializing computer vision capabilities.

As we wrote in 2018:

DeepLens is deeply integrated with the rest of AWS’s services. Those include the AWS IoT service Greengrass, which you use to deploy models to DeepLens, for example, but also SageMaker, Amazon’s newest tool for building machine learning models… Indeed, if all you want to do is run one of the pre-built samples that AWS provides, it shouldn’t take you more than 10 minutes to set up … DeepLens and deploy one of these models to the camera. Those project templates include an object detection model that can distinguish between 20 objects (though it had some issues with toy dogs, as you can see in the image above), a style transfer example to render the camera image in the style of van Gogh, a face detection model and a model that can distinguish between cats and dogs and one that can recognize about 30 different actions (like playing guitar, for example). The DeepLens team is also adding a model for tracking head poses. Oh, and there’s also a hot dog detection model.

Amazon has had a lot of experience (and controversy) when it comes to the development of machine learning technologies for video. The company’s Rekognition software sparked protests and pushback which led to a moratorium on the use of the technology.

And the company has tried to incorporate more machine learning capabilities into its consumer-facing Ring cameras as well.

Still, enterprises continue to clamor for new machine learning-enabled video recognition technologies for security, safety, and quality control. Indeed, as the COVID-19 pandemic drags on, new protocols around building use and occupancy are being adopted to not only adapt to the current epidemic, but plan ahead for spaces and protocols that can help mitigate the severity of the next one.