Google raises its G Suite prices

Google today announced that it is raising the price of its G Suite subscriptions for the first time. In the U.S., the prices of G Suite Basic and G Suite Business editions will increase by $1 and $2 per user/month, respectively, while increases in other regions will be adjusted according to the local currency and market. G Suite Enterprise pricing will remain the same.

The new pricing will go into effect on April 2; those on annual plans will pay the new price when their contract renews after that date.

Usually, a $1 or $2 price increase wouldn’t be a big deal, but this is the first time Google has raised the price of its G Suite subscriptions. The company argues that it has added plenty of new services — like video conferencing with Hangouts Meet, team messaging with Hangouts Chat, increased storage quotas and other security and productivity tools and services — to the platform since it first launched its paid service with its core productivity tools back in 2006.

That seems like a fair argument to me, though a 20 percent price increase may be hard to swallow for some small businesses. It’s also worth remembering that G Suite is now big business for Google. There are now more than 4 million businesses on G Suite, after all, and while some of them are surely on enterprise plans with privately negotiated pricing, the vast majority are presumably on the standard monthly or annual plans.

Salesforce Commerce Cloud updates keep us shopping with AI-fueled APIs

As people increasingly use their mobile phones and other devices to shop, it has become imperative for vendors to improve the shopping experience, making it as simple as possible, given the small footprint. One way to do that is using artificial intelligence. Today, Salesforce announced some AI-enhanced APIs designed to keep us engaged as shoppers.

For starters, the company wants to keep you shopping, which means providing an intelligent recommendation engine. If you searched for a particular jacket, it might suggest similar styles, or a matching scarf and gloves. That’s fairly basic as shopping experiences go, but Salesforce didn’t stop there. It’s letting developers embed this ability to recommend products in any app, whether that’s maps, social or mobile.
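To make that concrete, here is a rough sketch of what embedding a recommendations call in an app might look like. To be clear, the endpoint, parameters and response shape below are hypothetical illustrations, not Salesforce’s actual API.

```python
# Hypothetical sketch only: the endpoint, parameters and response fields
# below are illustrative, not Salesforce's actual Commerce Cloud API.
import requests

def get_recommendations(product_id: str, shopper_id: str) -> list:
    resp = requests.get(
        "https://api.example-commerce.com/v1/recommendations",  # assumed URL
        params={"product": product_id, "shopper": shopper_id, "limit": 5},
        headers={"Authorization": "Bearer <token>"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("products", [])

# A maps or social app could surface these alongside whatever the user
# is already looking at:
for item in get_recommendations("jacket-042", "shopper-99"):
    print(item)
```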

That means shopping recommendations could pop up anywhere developers think they make sense, like in your maps app. Whether or not consumers see this as a positive thing, Salesforce says that adding intelligence to the shopping experience increases sales anywhere from 7 to 16 percent, so however you feel about it, it seems to be working.

The company also wants to make it simple to shop. Instead of entering a long faceted search — footwear, men’s, sneakers, red — as has traditionally been the way of shopping, you can take a picture of a sneaker (or anything you like) and the visual search algorithm should recognize it and make recommendations based on that picture. That reduces data entry for users, which is typically a pain on a mobile device, even when simplified by checkboxes.
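For a sense of the flow, a visual search call might look something like the sketch below (again, the endpoint and response shape are hypothetical, not the actual API): you upload an image and get back likely product matches.

```python
# Hypothetical sketch only: endpoint and response shape are illustrative,
# not Salesforce's actual visual search API.
import requests

def visual_search(image_path: str) -> list:
    with open(image_path, "rb") as f:
        resp = requests.post(
            "https://api.example-commerce.com/v1/visual-search",  # assumed URL
            files={"image": f},
            timeout=15,
        )
    resp.raise_for_status()
    # e.g. [{"sku": "sneaker-red-10", "score": 0.93}, ...]
    return resp.json().get("matches", [])
```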

Salesforce has also turned inventory availability into a service, letting shoppers know exactly where in the world the item they want is available. If they want to pick it up in-store that day, it shows where the store is on a map, and could even embed that into a ride-sharing app to indicate exactly where you want to go. The idea is to create a seamless path between consumer desire and purchase.
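A sketch of what such an availability lookup might involve, filtering to stores that actually have the item on hand (once more, the endpoint and fields are hypothetical):

```python
# Hypothetical sketch only: endpoint and fields are illustrative.
import requests

def find_in_stock_stores(sku: str, lat: float, lon: float) -> list:
    resp = requests.get(
        "https://api.example-commerce.com/v1/inventory/availability",  # assumed
        params={"sku": sku, "lat": lat, "lon": lon, "radius_km": 25},
        timeout=10,
    )
    resp.raise_for_status()
    # Each entry might carry a store id, address and on-hand count; the
    # address could be handed straight to a maps or ride-sharing app.
    return [s for s in resp.json().get("stores", []) if s.get("on_hand", 0) > 0]
```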

Finally, Salesforce has added some goodies to make developers happy, too, including the ability to browse the Salesforce API library and find the APIs that make the most sense for what they are creating, complete with code snippets to get started. It may not seem like a big deal, but as companies the size of Salesforce expand their API offerings (especially with the Mulesoft acquisition), it gets harder to know what’s available. The company has also created a sandboxing capability to let developers experiment and build with these APIs in a safe way.

The basis of Commerce Cloud is Demandware, the company Salesforce acquired two years ago for $2.8 billion. Salesforce’s intelligence platform is called Einstein. In spite of its attempt to personify the technology, it’s really about bringing artificial intelligence across the Salesforce platform of products, as it has with today’s API announcements.

Daily Crunch: Bing has a child porn problem

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. Microsoft Bing not only shows child pornography, it suggests it

A TechCrunch-commissioned report has found damning evidence on Microsoft’s search engine. Our findings show a massive failure on Microsoft’s part to adequately police its Bing search engine and to prevent its suggested searches and images from assisting pedophiles.

2. Unity pulls nuclear option on cloud gaming startup Improbable, terminating game engine license

Unity, the widely popular gaming engine, has pulled the rug out from under U.K.-based cloud gaming startup Improbable and revoked its license — effectively cutting it off from a top source of customers. The conflict arose after Unity claimed Improbable broke the company’s Terms of Service by distributing Unity software on the cloud.

3. Improbable and Epic Games establish $25M fund to help devs move to ‘more open engines’ after Unity debacle

Just when you thought things were going south for Improbable, the company inked a late-night deal with Unity competitor Epic Games to establish a fund geared toward open gaming engines. This raises the question of how Unity and Improbable’s relationship managed to sour so publicly, so quickly.

4. The next phase of WeChat 

WeChat boasts more than 1 billion daily active users, but user growth is starting to hit a plateau. That’s been expected for some time, but it is forcing the Chinese juggernaut to build new features to generate more time spent on the app to maintain growth.

5. Bungie takes back its Destiny and departs from Activision 

The creator of games like Halo and Destiny is splitting from its publisher Activision to go its own way. This is good news for gamers, as Bungie will no longer be under the strict publisher deadlines that plagued the launches of Destiny and its sequel.

6. Another server security lapse at NASA exposed staff and project data

The leaking server was — ironically — a bug-reporting server, running the popular Jira bug triaging and tracking software. In NASA’s case, the software wasn’t properly configured, allowing anyone to access the server without a password.

7. Is Samsung getting serious about robotics? 

This week Samsung made a surprise announcement during its CES press conference and unveiled three new consumer and retail robots and a wearable exoskeleton. It was a pretty massive reveal, but the company’s look-but-don’t-touch approach raised far more questions than it answered.

Salesforce keeps rolling with another banner year in 2018

The good times kept on rolling this year for Salesforce with all of the requisite ingredients of a highly successful cloud company — the steady revenue growth, the expanding product set and the splashy acquisitions. The company also opened the doors of its shiny new headquarters, Salesforce Tower in San Francisco, a testament to its sheer economic power in the city.

Salesforce, which set a revenue goal of $10 billion a few years ago, is already on its way to $20 billion. Yet Salesforce is also proof that you can be ruthlessly good at what you do while trying to do the right thing as an organization.

Make no mistake, Marc Benioff and Keith Block, the company’s co-CEOs, want to make obscene amounts of money, going so far as to tell a group of analysts earlier this year that their goal by 2034 is to be a $60 billion company. Salesforce just wants to do it with a hint of compassion as it rakes in those big bucks and keeps well-heeled competitors like Microsoft, Oracle and SAP at bay.

A look at the numbers

In the end, a publicly traded company like Salesforce is going to be judged by how much money it makes, and Salesforce, it turns out, is pretty good at this, as it showed once again this year. The company grew every quarter by over 24 percent YoY and ended the year with $12.53 billion in revenue. Based on its last quarter of $3.39 billion, the company finished the year on a $13.56 billion run rate (that is, four times its most recent quarter).

This compares with $9.92 billion in total revenue for 2017 with a closing run rate of $10.72 billion.

Even with this steady growth trajectory, it might be some time before it hits the $5 billion-a-quarter mark and checks off the $20 billion goal. Keep in mind that it took the company three years to get from $1.51 billion in Q1 2016 to $3.1 billion in Q1 2019.

As for the stock market, it has been highly volatile this year, but Salesforce is still up. Starting the year at $102.41, the stock was sitting at $124.06 as of publication, after peaking on October 1 at $159.86. The market has been on a wild ride since then and cloud stocks have taken a big hit, warranted or not. On one particularly bad day last month, Salesforce had its worst day since 2016, losing 8.7 percent of its value.

Spending big

When you make a lot of money you can afford to spend generously, and the company invested some of those big bucks when it bought Mulesoft for $6.5 billion in March, its most expensive acquisition ever. With Mulesoft, the company gained the missing link between data sitting on-prem in private data centers and Salesforce data in the cloud.

Mulesoft helps customers build access to data wherever it lives via APIs, including legacy data sitting in ancient repositories. As Salesforce turns its eyes toward artificial intelligence and machine learning, it needs oodles of data, and Mulesoft was worth opening up the wallet for, giving the company that kind of access to a variety of enterprise data.

Salesforce 2018 acquisitions. Chart: Crunchbase.

But Mulesoft wasn’t the only thing Salesforce bought this year. It made five acquisitions in all. The other significant one came in July, when it scooped up Datorama for a cool $800 million, giving it a marketing intelligence platform.

What could be in store for 2019? If Salesforce sticks to its recent pattern of spending big one year, then regrouping the next, 2019 could be a slower one for acquisitions. Consider that it bought just one company last year after buying a dozen in 2016.

One other way to keep revenue rolling in comes from high-profile partnerships. In the past, Salesforce has partnered with Microsoft and Google, and this year it announced that it was teaming up with Apple. Salesforce also announced another high-profile arrangement with AWS to share data between the two platforms more easily. The hope with this kind of cross-pollination is that both companies can increase their business. For Salesforce, that means using these partnerships as a platform to move the revenue needle faster.

Compassionate capitalism

Even while his company has made big bucks, Benioff has been preaching compassionate capitalism, using Twitter and the media as his soapbox.

He went on record throughout this year supporting Prop C, a referendum question designed to help battle San Francisco’s massive homeless problem by taxing companies with greater than $50 million in revenue — companies like Salesforce. Benioff was a vocal proponent of the idea, and it won. He did not find kindred spirits among some of his fellow San Francisco tech CEOs, openly debating Twitter CEO Jack Dorsey on Twitter.

Speaking about Prop C in an interview with Kara Swisher of Recode in November, Benioff talked in lofty terms about why he believed in the measure even though it would cost his company money.

“You’ve got to really be mindful and think about what it is that you want your company to be for and what you’re doing with your business and here at Salesforce, that’s very important to us,” he told Swisher in the interview.

He also talked about how employees at other tech companies were driving their CEOs to change their tune around social issues, including supporting Prop C. But Benioff had to deal with his own internal insurrection this year, when 650 employees signed a petition asking him to rethink Salesforce’s contract with U.S. Customs and Border Protection (CBP) in light of the current administration’s border policies. Benioff defended the contract, stating that Salesforce tools were being used internally at CBP for staff recruiting and communication, not to enforce border policy.

Regardless, Salesforce has never lost its focus on meeting lofty revenue goals, and as we approach the new year, there is no reason to think that will change. The company will continue to look for new ways to expand markets and keep its revenue moving ever closer to that $20 billion goal, even as it continues to meld its unique form of compassion and capitalism.

InVision, valued at $1.9 billion, picks up $115 million Series F

“The screen is becoming the most important place in the world,” says InVision CEO and founder Clark Valberg. In fact, it’s hard to get through a conversation with him without hearing it. And, considering that his company has grown to $100 million in annual recurring revenue, he has reason to believe his own affirmation.

InVision, the startup looking to be the Salesforce of design, has officially achieved unicorn status with the close of a $115 million Series F round, bringing the company’s total funding to $350 million. This deal values InVision at $1.9 billion, which is nearly double its valuation as of mid-2017 on the heels of its $100 million Series E financing.

Spark Capital led the round with participation from Goldman Sachs, as well as existing investors Battery Ventures, ICONIQ Capital, Tiger Global Management, FirstMark and Geodesic Capital. Atlassian also participated in the round. Earlier this year, Atlassian and InVision built out much deeper integrations, allowing Jira, Confluence and Trello users to instantly collaborate via InVision.

As part of the deal, Spark Capital’s Megan Quinn will be joining the board alongside existing board members Amish Jani, Vas Natarajan, Simon Nebel, Lee Fixel and Mark Hastings.

InVision started out back in 2011 as a simple prototyping tool. It let designers mock up an experience without asking the engineering team to actually build it, then send it to the engineering, product, marketing and executive teams for collaboration and approval.

Over the years, the company has stretched its efforts both up and downstream in the process, building out a full collaboration suite called InVision Cloud (so that every member of the organization can be involved in the design process); Studio, a design platform meant to take on the likes of Adobe and Sketch; and InVision Design System Manager, where design teams can manage their assets and best practices from one place.

But perhaps more impressive than InVision’s ability to build design products for designers is its ability to attract users that aren’t designers.

“Originally, I don’t think we appreciated how much the freemium model acted as a flywheel internally within an organization,” said Megan Quinn. “Those designers weren’t just inviting designers from their own team or other teams, but PMs and Marketing and Customer Service and executives to collaborate and approve the designs. From the outside, InVision looks like a design company. But really, they start with the designer as a core customer and spread virally within an organization to serve a multitude.”

InVision has simply dominated prototyping and collaboration, today announcing that it has surpassed 5 million users. What’s more, InVision has a wide variety of customers. The startup has a long and impressive list of digital-first customers — including Netflix, Uber, Airbnb and Twitter — but also serves 97 percent of the Fortune 100, with customers like Adidas, General Electric, NASA, IKEA, Starbucks and Toyota.

Part of that can be attributed to the quality of the products, but the fundamental shift to digital (as predicted by Valberg) is most certainly under way. Whether brands like it or not, customers are interacting with them more and more from behind a screen, and digital customer experience is becoming more and more important to all companies.

In fact, a McKinsey study showed that companies in the top quartile of McKinsey Design Index scores outperformed their counterparts in both revenues and total returns to shareholders by as much as a factor of two.

But as with any transition, some folks are averse to change. Valberg identifies industry education and evangelism as two big challenges for InVision.

“Organizations are not quick to change on things like design, which is why we’ve built out a Design Transformation Team,” said Valberg. “The team goes in and gets hands-on with brands to help them with new practices and to achieve design maturity within the organization.”

With a fresh $115 million and 5 million users, InVision has just about everything it needs to step into a new tier of competition. Even amongst behemoths like Adobe, which pulled in $2.29 billion in revenue in Q3 alone, InVision has provided products that can both complement and compete.

But Quinn believes that the future of InVision rests on execution.

“As with most companies, the biggest challenge will be continued excellence in execution,” said Quinn. “InVision has all the right tailwinds with the right team, a great product and excellent customers. It’s all about building and executing ahead of where the pack is going.”

Pivotal announces new serverless framework

Pivotal has always been about making open-source tools for enterprise developers, but surprisingly, up until now, the arsenal has lacked a serverless component. That changed today with the alpha launch of Pivotal Function Service.

“Pivotal Function Service is a Kubernetes-based, multi-cloud function service. It’s part of the broader Pivotal vision of offering you a single platform for all your workloads on any cloud,” the company wrote in a blog post announcing the new service.

What’s interesting about Pivotal’s flavor of serverless, besides the fact that it’s based on open source, is that it has been designed to work both on-prem and in the cloud in a cloud native fashion, hence the Kubernetes-based aspect of it. This is unusual to say the least.

The idea up until now has been that the large-scale cloud providers like Amazon, Google and Microsoft could dial up whatever infrastructure your functions require, then dial it back down when you’re finished, without you ever having to think about the underlying infrastructure. The cloud provider deals with whatever compute, storage and memory you need to run the function, and no more.

Pivotal wants to take that same idea and make it available across any cloud service. It also wants to make it available on-prem, which may seem curious at first, but Pivotal’s Onsi Fakhouri says customers want the same abilities both on-prem and in the cloud. “One of the key values that you often hear about serverless is that it will run down to zero and there is less utilization, but at the same time there are customers who want to explore and embrace the serverless programming paradigm on-prem,” Fakhouri said. Of course, it is then up to IT to ensure that there are sufficient resources to meet the demands of the serverless programs.

The new package includes several key components for developers: an environment for building, deploying and managing your functions; a native eventing capability that provides a way to build rich event triggers to call whatever functionality you require; and the ability to do all of this within a Kubernetes-based environment. That last piece is particularly important as companies embrace hybrid use cases and need to manage events across on-prem and cloud in a seamless way.
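To picture the programming model, here is a minimal sketch of the kind of function such a platform invokes in response to an event. The entry-point convention and event shape are assumptions for illustration; the alpha’s actual interfaces may differ.

```python
# Illustrative only: a minimal function of the kind a Kubernetes-based
# function service might invoke in response to an event. The entry-point
# convention and event shape are assumptions, not PFS's actual contract.
import json

def handle(event: dict) -> dict:
    """Turn an 'order created' event into a notification payload."""
    order = event.get("order", {})
    return {
        "channel": "email",
        "message": f"Order {order.get('id', 'unknown')} received",
    }

if __name__ == "__main__":
    # Local smoke test; on the platform, an event trigger would invoke this.
    print(json.dumps(handle({"order": {"id": "A-123"}})))
```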

One of the advantages of Pivotal’s approach is that, as an open product, it can work on any cloud. This is in contrast to cloud providers like Amazon, Google and Microsoft, which provide similar services that run exclusively on their own clouds. Pivotal is not the first to build an open-source Function as a Service, but it is attempting to package it in a way that makes it easier to use.

Serverless doesn’t actually mean there are no underlying servers. Rather, it means that developers don’t have to provision any servers themselves, because the cloud provider takes care of whatever infrastructure is required. In an on-prem scenario, IT has to make those resources available.

AWS wants to rule the world

AWS, once a nice little side hustle for Amazon’s e-commerce business, has grown over the years into a behemoth that’s on a $27 billion run rate, one that’s still growing at around 45 percent a year. That’s a highly successful business by any measure, but as I listened to AWS executives last week at their AWS re:Invent conference in Las Vegas, I didn’t hear a group that was content to sit still and let the growth speak for itself. Instead, I heard one that wants to dominate every area of enterprise computing.

Whether it was hardware like the new Inferentia chip and Outposts (the new on-prem servers), or blockchain and a base station service for satellites, if AWS saw an opportunity, they were not ceding an inch to anyone.

Last year, AWS announced an astonishing 1,400 new features, and word was that they are on pace to exceed that this year. They get a lot of credit for not resting on their laurels and continuing to innovate like a much smaller company, even as they own gobs of market share.

The feature inflation probably can’t go on forever, but for now at least they show no signs of slowing down, as the announcements came at a furious pace once again. While they will tell you that every decision they make is about meeting customer needs, it’s clear that some of these announcements were also about answering competitive pressure.

Going after competitors harder

In the past, AWS kept criticism of competitors to a minimum, maybe giving a little jab to Oracle, but this year they seemed to ratchet it up. In their keynotes, AWS CEO Andy Jassy and Amazon CTO Werner Vogels continually flogged Oracle, a competitor in the database market, but hardly a major threat as a cloud company right now.

They went right for Oracle’s market, though, with a new on-prem system called Outposts, which allows AWS customers to operate on-prem and in the cloud using a single AWS control panel (or one from VMware, if customers prefer). That is the kind of cloud vision that Larry Ellison might have put forth, but Jassy didn’t see it as going after Oracle or anyone else. “I don’t see Outposts as a shot across the bow of anyone. If you look at what we are doing, it’s very much informed by customers,” he told reporters at a press conference last week.

AWS CEO Andy Jassy at a press conference at AWS re:Invent last week.

Yet AWS didn’t reserve its criticism just for Oracle. It also took aim at Microsoft, taking jabs at Microsoft SQL Server, and also announcing Amazon FSx for Windows File Server, a tool specifically designed to move Microsoft files to the AWS cloud.

Google wasn’t spared either: the launches of Inferentia and Elastic Inference put Google on notice that AWS wasn’t going to yield the AI market to Google’s TPU infrastructure. All of these tools and many more were about more than answering customer demand; they were about putting the competition on notice in every aspect of enterprise computing.

Upward growth trajectory

The cloud market is continuing to grow at a dramatic pace, and as market leader, AWS has been able to take advantage of its market dominance to this point. Jassy, echoing Google’s Diane Greene and Oracle’s Larry Ellison, says the industry as a whole is still really early in terms of cloud adoption, which means there is still plenty of marketshare left to capture.

“I think we’re just in the early stages of enterprise and public sector adoption in the US. Outside the US I would say we are 12-36 months behind. So there are a lot of mainstream enterprises that are just now starting to plan their approach to the cloud,” Jassy said.

Patrick Moorhead, founder and principal analyst at Moor Insights & Strategy says that AWS has been using its market position to keep expanding into different areas. “AWS has the scale right now to do many things others cannot, particularly lesser players like Google Cloud Platform and Oracle Cloud. They are trying to make a point with the thousands of new products and features they bring out. This serves as a disincentive longer-term for other players, and I believe will result in a shakeout,” he told TechCrunch.

As for the frenetic pace of innovation, Moorhead believes it can’t go on forever. “To me, the question is, when do we reach a point where 95% of the needs are met, and the innovation rate isn’t required. Every market, literally every market, reaches a point where this happens, so it’s not a matter of if but when,” he said.

Certainly, areas like the AWS Ground Station announcement showed that AWS is willing to expand beyond the conventional confines of enterprise computing and into outer space to help companies process satellite data. This ability to think beyond traditional uses of cloud computing resources shows a level of creativity that suggests there could be other untapped markets for AWS that we haven’t yet imagined.

As AWS moves into more areas of the enterprise computing stack, whether on premises or in the cloud, they are showing their desire to dominate every aspect of the enterprise computing world. Last week they demonstrated that there is no area they are willing to surrender to anyone.


The economics and tradeoffs of ad-funded smart city tech

In order to have innovative smart city applications, cities first need to build out the connected infrastructure, which can be a costly, lengthy and politicized process. Third parties are helping build infrastructure at no cost to cities by paying for projects entirely through advertising placements on the new equipment. Here, I try to dig into the economics of ad-funded smart city projects to better understand what types of infrastructure can be built under an ad-funded model, the benefits the strategy provides to cities and the non-obvious costs cities have to consider.

Consider this an ongoing discussion about Urban Tech, its intersection with regulation, issues of public service and other complexities that people have full Ph.D.s on. I’m just a bitter, born-and-bred New Yorker trying to figure out why I’ve been stuck in between subway stops for the last 15 minutes, so please reach out with your take on any of these thoughts.

Using ads to fund smart city infrastructure at no cost to cities

When we talk about “Smart Cities”, we tend to focus on these long-term utopian visions of perfectly clean, efficient, IoT-connected cities that adjust to our environment, our movements, and our every desire. Anyone who spent hours waiting for transit the last time the weather turned south can tell you that we’ve got a long way to go.

But before cities can have the snazzy applications that do things like adjust infrastructure based on real-time conditions, cities first need to build out the platform and technology-base that applications can be built on, as McKinsey’s Global Institute explained in an in-depth report released earlier this summer. This means building out the network of sensors, connected devices and infrastructure needed to track city data. 

However, reaching the technological base needed for data gathering and smart communication means building out hard physical infrastructure, which can cost cities a ton and can take forever when dealing with politics and government processes.

Many cities are also dealing with well-documented infrastructure crises. And with limited budgets, local governments need to spend public funds on important things like roads, schools, healthcare and nonsensical sports stadiums, which are pretty much never profitable for cities (I’m a huge fan of baseball, but I’m not a fan of how we fund stadiums here in the States).

As city infrastructure has become increasingly tech-enabled and digitized, an interesting financing solution has opened up: smart city infrastructure projects built by third parties at no cost to the city, paid for entirely through digital advertising placed on the new infrastructure.

I know – the idea of a city built on ad-revenue brings back soul-sucking Orwellian images of corporate overlords and logo-paved streets straight out of Blade Runner or Wall-E. Luckily for us, based on our discussions with developers of ad-funded smart city projects, it seems clear that the economics of an ad-funded model only really work for certain types of hard infrastructure with specific attributes – meaning we may be spared from fire hydrants brought to us by Mountain Dew.

While many factors influence the viability of a project, smart infrastructure projects seem to need two attributes in particular for an ad-funded model to make sense. First, the infrastructure has to be something that citizens will engage – and engage a lot – with. You can’t throw a screen onto any object and expect that people will interact with it for more than 3 seconds or that brands will be willing to pay to throw their taglines on it. The infrastructure has to support effective advertising.  

Second, the investment has to be cost-effective, meaning the infrastructure can only cost so much. A third party that’s willing to build the infrastructure has to believe it has a realistic chance of generating enough ad-revenue to cover the costs of the project, and likely an amount above that, which could lead to a reasonable return. For example, it seems unlikely you’d find someone willing to build a new bridge, front all the costs, and try to fund it through ad-revenue.

When is ad-funding feasible? A case study on kiosks and LinkNYC

A LinkNYC kiosk enabling access to the internet in New York on Saturday, February 20, 2016. Over 7,500 kiosks are to be installed, replacing stand-alone pay phone kiosks and providing free Wi-Fi, internet access via a touch screen, phone charging and free phone calls. The system is to be supported by advertising running on the sides of the kiosks. (Photo by Richard Levine/Corbis via Getty Images)

To get a better understanding of the types of smart city hardware that might actually make sense for an ad-funded model, we can look at the engagement levels and cost structures of smart kiosks, and in particular, the LinkNYC project. Smart kiosks – which provide free WiFi, connectivity and real-time services to citizens – have been leading examples of ad-funded smart city projects. Innovative companies like Intersection (developers of the LinkNYC project), SmartLink, IKE, Soofa, and others have been helping cities build out kiosk networks at little-to-no cost to local governments.

LinkNYC provides public access to much of its data on the New York City Open-Data website. Using some back-of-the-envelope math and a hefty number of assumptions, we can try to get to a very rough range of where cost and engagement metrics generally have to fall for an ad-funded model to make sense.

To try and retrace considerations for the developers’ investment decision, let’s first look at the terms of the deal signed with New York back in 2014. The agreement called for a 12-year franchise period, during which at least 7,500 Link kiosks would be deployed across the city in the first eight years at an expected project cost of more than $200 million. As part of its solicitation, the city also required the developers to pay the greater of either a minimum annual payment of at least $17.5 million or 50 percent of gross revenues.

Let’s start with the cost side – based on an estimated project cost of around $200 million for at least 7,500 Links, we can get to an estimated cost per unit of $25,000 – $30,000. It’s important to note that this only accounts for the install costs, as we don’t have data around the other cost buckets that the developers would also be on the hook for, such as maintenance, utility and financing costs.

Source: LinkNYC, NYC.gov, NYCOpenData

Turning to engagement and ad-revenue – let’s assume that the developers signed the deal with the expectation that they could at least break even, covering the install costs of the project and the minimum payments to the city. And for simplicity, let’s assume that the 7,500 Links were going to be deployed at a steady pace of 937-938 units per year (though in actuality the install cadence has been different). In order for the project to break even over the 12-year deal period, the developers would have to believe each kiosk could generate around $6,400 in annual ad-revenue (undiscounted).

Source: LinkNYC, NYC.gov, NYCOpenData
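For those who want to follow the math, here’s the back-of-the-envelope version in code. The deployment schedule and cost buckets mirror the assumptions stated above (steady installs over eight years, roughly $200 million in install costs, $17.5 million minimum annual payments to the city) and are rough assumptions, not LinkNYC’s actual financials.

```python
# Back-of-the-envelope breakeven math, mirroring the assumptions above.
# All figures are rough estimates, not LinkNYC's actual financials.
install_cost = 200_000_000        # estimated install cost for the project
min_annual_payment = 17_500_000   # minimum annual payment to the city
franchise_years = 12
kiosks = 7_500
deploy_years = 8                  # steady pace of ~937 units per year

per_unit_cost = install_cost / kiosks  # ~$26,700, install costs only

# A cohort installed in year i earns ad-revenue for the remaining
# (franchise_years - i + 1) years of the deal.
cohort = kiosks / deploy_years
kiosk_years = sum(
    cohort * (franchise_years - i + 1) for i in range(1, deploy_years + 1)
)  # ~63,750 kiosk-years over the 12-year deal

total_cost = install_cost + min_annual_payment * franchise_years  # ~$410M
breakeven_per_kiosk_year = total_cost / kiosk_years

print(f"install cost per unit: ${per_unit_cost:,.0f}")
print(f"breakeven ad-revenue:  ${breakeven_per_kiosk_year:,.0f} per kiosk-year")
# -> roughly $6,400 per kiosk per year, the figure cited above
```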

The reason the kiosks can generate this revenue (and in reality a lot more) is that they have significant engagement from users. There are around 1,750 Links currently deployed across New York. As of November 18th, LinkNYC had over 720,000 weekly subscribers, or around 410 weekly subscribers per Link. The kiosks also saw an average of 18 million sessions per week, or 20-25 weekly sessions per subscriber, or around 10,200 weekly sessions per kiosk (seasonality might even make this estimate too low).

And when citizens do use the kiosks, they use them for a long time! The average session at each Link unit was four minutes and six seconds. That level of engagement makes sense, since city-dwellers use these kiosks in time- or attention-intensive ways, such as making phone calls, getting directions, finding information about the city, or charging their phones.

The analysis here isn’t perfect, but now we at least have a (very) rough idea of how much smart kiosks cost, how much engagement they see, and the amount of ad-revenue developers would have to believe they could realize at each unit in order to ultimately move forward with deployment. We can use these metrics to help identify what types of infrastructure have similar profiles and where an ad-funded project may make sense.

Bus stations, for example, may cost about $10,000 – $15,000, which is in a similar cost range as smart kiosks. According to the MTA, the NYC bus system sees over 11.2 million riders per week, or nearly 700 riders per station per week. Rider wait times can often be five to ten minutes, if not longer. Not to mention that bus stations already have experience utilizing advertising to a certain degree. Projects like bike-share docking stations and EV charging stations also seem to fit similar cost profiles while having high engagement.

And interactions with these types of infrastructure are ones where users may be more receptive to ads, such as an EV charging station where someone is both physically engaging with the equipment and idly looking to kill sometimes up to 30 minutes of time while charging up. As a result, more companies are using advertising models to fund projects that fit this mold, like Volta, which uses advertising to offer charging stations free to citizens.

The benefits of ad-funding come with tradeoffs for cities

When it makes sense for cities and third-party developers, advertising-funded smart city infrastructure projects can unlock a tremendous amount of value for a city. The benefits are clear – cities pay nothing, citizens are offered free connectivity and real-time information on local conditions, and smart infrastructure is built and can possibly be used for other smart city applications down the road, such as using locational data tracking to improve city zoning and congestion. 

Yes, ads are usually annoying – but maybe understanding that advertising models only work for specific types of smart city projects may help quell fears that future cities will be covered inch-to-inch in mascots. And ads on projects like LinkNYC promote local businesses and can tap into idiosyncratic conditions and preferences of regional communities – LinkNYC previously used real-time local transit data to display beer ads to subway riders that were facing heavy delays and were probably in need of a drink. 

Like everyone’s family photos from Thanksgiving, the picture here is not all roses, however, and there are a lot of deep-rooted issues that exist under the surface. Third-party developed, advertising-funded infrastructure comes with externalities and less obvious costs that have been fairly criticized and debated at length. 

When infrastructure funding is derived from advertising, concerns arise over whether services will be provided equitably across communities. Many fear that low-income or less-trafficked communities that generate less advertising demand could end up having poor infrastructure and maintenance. 

Even bigger points of contention as of late have been issues around data consent and treatment. I won’t go into much detail on the issue since it’s incredibly complex and warrants its own lengthy dissertation (and many have already been written). 

But some of the major uncertainties and questions cities are trying to answer include: If third-parties pay for, manage and operate smart city projects, who should own data on citizens’ living behavior? How will citizens give consent to provide data when tracking systems are built into the environment around them? How can the data be used? How granular can the data get? How can we assure citizens’ information is secure, especially given the spotty track records some of the major backers of smart city projects have when it comes to keeping our data safe?

The issue of data treatment is one that no one has really figured out yet, and many developers are doing their best to work with cities and users to find a reasonable solution. For example, LinkNYC is currently limited by the city in the types of data it can collect. Outside of email addresses, LinkNYC doesn’t ask for or collect personal information, and doesn’t sell or share personal data without a court order. The project’s owners also make much of the collected data publicly accessible online and through annually published transparency reports. As Intersection has deployed similar smart kiosks across new cities, the company has been willing to work through slower launches and pilot programs to create policies that local governments are more comfortable with.

But consequential decisions related to third-party owned smart infrastructure are only going to become more frequent as cities become increasingly digitized and connected. By having third-parties pay for projects through advertising revenue or otherwise, city budgets can be focused on other vital public services while still building the efficient, adaptive and innovative infrastructure that can help solve some of the largest problems facing civil society. But if that means giving up full control of city infrastructure and information, cities and citizens have to consider whether the benefits are worth the tradeoffs that could come with them. There is a clear price to pay here, even when someone else is footing the bill.


New AWS tool helps customers understand best cloud practices

Since 2015, AWS has had a team of solutions architects working with customers to make sure they are using AWS services in a way that meets best practices around a set of defined criteria. Today, the company announced a new Well-Architected Tool that helps customers do this themselves in an automated way, without the help of a human consultant.

As Amazon CTO Werner Vogels said in his keynote address at AWS re:Invent in Las Vegas, it’s hard to scale a human team inside the company to meet the needs of thousands of customers, especially when so many want to be sure they are complying with these best practices. He indicated that they even brought on a network of certified partners to help, but it still has not been enough to meet demand.

In typical AWS fashion, they decided to create a service to help customers measure how well they are doing in terms of operations, security, reliability, cost optimization and performance efficiency. Customers can run this tool against the AWS services they are using and get a full report of how they measure up against these five factors.

“I think of it as a way to make sure that you are using the cloud right, and that you are using it well,” Jeff Barr wrote in a blog post introducing the new service.

Instead of working with a human to analyze your systems, you answer a series of questions and the tool generates a report based on those answers. When the process is complete, you can produce a PDF report with all the recommendations for your particular situation.

Image: AWS

While it’s doubtful that such an approach can be as comprehensive as a conversation between client and consultant, it is a starting point to at least get you on the road to thinking about such things, and as a free service, you have little to lose by at least trying the tool and seeing what it tells you.


AWS announces a slew of new Lambda features

AWS launched Lambda in 2015 and with it helped popularize serverless computing. You simply write code (event triggers) and AWS deals with whatever compute, memory and storage you need to make it work. Today at AWS re:Invent in Las Vegas, the company announced several new features to make the service more developer-friendly, while acknowledging that even if serverless reduces complexity, it still requires more sophisticated tools as it matures.

It’s called serverless because you don’t have to worry about the underlying servers. The cloud vendors take care of all of that for you, serving up whatever resources you need to run your event and no more. It also means you no longer have to worry about provisioning infrastructure, and you only pay for the computing you need at any given moment to make the application work.
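For a sense of how little code that involves, here is a minimal Lambda handler in Python, one of Lambda’s long-supported runtimes; the event fields are illustrative.

```python
# A minimal Python Lambda handler: Lambda invokes this function with the
# triggering event and a context object, and everything underneath it
# (servers, scaling, patching) is AWS's problem. The event fields here
# are illustrative.
def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```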

The way AWS works is that it tends to release something, then builds more functionality on top of the base service as requirements grow with customer use. As Amazon CTO Werner Vogels pointed out in his keynote on Thursday, developers debate about tools, and everyone has their own idea of which tools they bring to the task each day.

For starters, they decided to please the language folks by introducing support for new languages. Developers who use Ruby can now take advantage of Ruby Support for AWS Lambda. “Now it’s possible to write Lambda functions as idiomatic Ruby code, and run them on AWS. The AWS SDK for Ruby is included in the Lambda execution environment by default,” Chris Munns from AWS wrote in a blog post introducing the new language support.

If C++ is your thing, AWS announced the C++ Lambda Runtime. If neither of those matches your programming language tastes, AWS opened things up for just about any language with the new Lambda Runtime API, which Danilo Poccia from AWS described in a blog post as “a simple interface to use any programming language, or a specific language version, for developing your functions.”
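Under the hood, a custom runtime is essentially a loop over that HTTP interface: fetch the next invocation, run your code, post the result. Here is a rough sketch of that loop in Python (Python already has a managed runtime, of course; a real custom runtime would ship this logic as a bootstrap executable in the language of your choice). The two endpoints are from the documented Runtime API; the function logic is a placeholder.

```python
# Rough sketch of the loop a custom runtime implements against the
# Lambda Runtime API. The two endpoints are the documented interface;
# error reporting and bootstrap packaging are omitted for brevity.
import json
import os
import urllib.request

API = os.environ["AWS_LAMBDA_RUNTIME_API"]
BASE = f"http://{API}/2018-06-01/runtime/invocation"

while True:
    # 1. Block until Lambda hands this runtime the next invocation.
    with urllib.request.urlopen(f"{BASE}/next") as resp:
        request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
        event = json.loads(resp.read())

    # 2. Run the actual function code (a placeholder here).
    result = {"echo": event}

    # 3. Post the result back for this request id.
    req = urllib.request.Request(
        f"{BASE}/{request_id}/response",
        data=json.dumps(result).encode(),
        method="POST",
    )
    urllib.request.urlopen(req).close()
```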

AWS didn’t want to stop with languages though. They also recognize that even though Lambda (and serverless in general) is designed to remove a level of complexity for developers, that doesn’t mean that all serverless applications consist of simple event triggers. As developers build more sophisticated serverless apps, they have to bring in system components and compose multiple pieces together, as Vogels explained in his keynote today.

To address this requirement, the company introduced Lambda Layers, which they describe as “a way to centrally manage code and data that is shared across multiple functions.” This could be custom code used by multiple functions or a way to share code used to simplify business logic.
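For Python functions, at least, a layer’s contents are extracted under /opt, with /opt/python placed on the import path, so shared code can be imported like any other module. A minimal sketch, assuming a hypothetical pricing helper packaged in a layer:

```python
# Two functions can attach the same layer and import shared code from it.
# For Python, a layer's contents land under /opt, and /opt/python is on
# sys.path at runtime. `pricing` is a hypothetical module packaged in the
# layer as python/pricing.py, not a real AWS or vendor library.
from pricing import apply_discount  # resolved from the layer at runtime

def handler(event, context):
    total = apply_discount(event["subtotal"], event.get("coupon"))
    return {"total": total}
```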

As Lambda matures, developer requirements grow and these announcements and others are part of trying to meet those needs.
