How Salesforce paved the way for the SaaS platform approach

When we think of enterprise SaaS companies today, just about every startup in the space aspires to be a platform. That means they want people using their stack of services to build entirely new applications, either to enhance the base product or even to launch entirely independent companies. But when Salesforce launched Force.com, the company’s Platform as a Service, in 2007, there wasn’t any model to follow.

It turns out that Force.com was actually the culmination of a series of incremental steps after the launch of the first version of Salesforce in February 2000, all of which were designed to make the software more flexible for customers. Company co-founder and CTO Parker Harris says that they didn’t have this goal of being a platform early on. “We were a solution first, I would say. We didn’t say let’s build a platform and then build sales force automation on top of it. We wanted a solution that people could actually use,” Harris told TechCrunch.

The march toward becoming a full-fledged platform started with simple customization. That first version of Salesforce was pretty basic, and the company learned over time that customers didn’t always use the same language it did to describe customers and accounts — and that was something that would need to change.

Customizing the product

LogRocket nabs $11M Series A to fix web application errors faster

Every time a visitor experiences an issue on your website, it’s going to have an impact on their impression of the company. That’s why companies want to resolve issues in a timely manner. LogRocket, a Cambridge, Mass.-based startup, announced an $11 million Series A investment today to give engineering and web development teams access to the more precise information they need to fix issues faster.

The round was led by Battery Ventures with participation from seed investor Matrix Partners. When combined with an earlier unannounced $4 million seed round, the company has raised a total of $15 million.

The two founders, Matthew Arbesfeld and Ben Edelstein, have been friends since birth, growing up together in the Boston suburbs. After attending college separately at MIT and Columbia, both moved to San Francisco, where they worked as engineers building front-end applications.

The idea for the company grew from the founders’ own frustration with tracking errors. They found that they would have to do a lot of manual research to find problems, and it was taking too much time. That’s where they got the idea for LogRocket.

“What LogRocket does is we capture a recording in real time of all the user activity so the developer on the other end can replay exactly what went wrong and troubleshoot issues faster,” Arbesfeld explained.

Screenshot: LogRocket

The tool works by capturing low-resolution images of each user’s troublesome activity and stitching them together into a video. When there is an error or problem, the engineer can review the video and watch exactly what the user was doing when the error occurred, allowing them to identify and resolve the problem much more quickly.

Arbesfeld said the company doesn’t have a video storage issue because it concentrates on capturing problems instead of the entire experience. “We’re looking at frustrating moments of the user, so that we can focus on the problem areas,” he explained.
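Conceptually, that “problems only” approach looks like a rolling buffer of recent frames that is persisted only when an error fires. Here is a minimal sketch of the idea in Python; the class and method names are hypothetical and this is not the actual LogRocket SDK, which runs in the browser.

```python
from collections import deque
import time

class SessionRecorder:
    """Toy illustration: keep only the last N low-res frames in memory
    and persist them as a clip when an error occurs (hypothetical API,
    not the real LogRocket SDK)."""

    def __init__(self, max_frames=300):
        # Ring buffer: old frames fall off automatically, keeping storage small.
        self.frames = deque(maxlen=max_frames)

    def capture_frame(self, screenshot_bytes):
        self.frames.append((time.time(), screenshot_bytes))

    def on_error(self, error, upload):
        # Only the frames leading up to the error are assembled and uploaded,
        # so sessions without problems never consume storage.
        clip = list(self.frames)
        upload({"error": str(error), "frames": clip})
        self.frames.clear()

# Usage sketch
recorder = SessionRecorder(max_frames=120)
recorder.capture_frame(b"...jpeg bytes...")
recorder.on_error(ValueError("checkout failed"), upload=print)
```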

Customers can access the data in the LogRocket dashboard, or it can be incorporated into help desk software like Zendesk. The company is growing quickly, with 25 employees and 500 customers in just 18 months since inception, including Reddit, Ikea, CarGurus and Bloomberg.

As for the funding, they see this as the start of a long-term journey. “Our goal is to get out to a much wider audience and build a mature sales and marketing organization,” Arbesfeld said. He sees a future with thousands of customers and ambitious revenue goals. “We want to continue to use the data we have to offer more proactive insights into the highest-impact problems,” he said.

Salesforce finally embedding Quip into platform, starting with Sales and Service Cloud

When Salesforce bought Quip in 2016 for $750 million, it was fair to wonder what it planned to do with it. While Quip co-founder Bret Taylor has moved up the ladder to become Salesforce’s chief product officer, Quip has remained a standalone product. Today that changed when the company announced it was embedding Quip directly into its sales and customer service clouds.

Quip is a collaboration tool with built-in office suite functionality, including word processing, spreadsheet and presentation software. As a standalone product, it enables teams to collaborate around a rich set of documents. Quip for Salesforce is embedding that kind of functionality at the platform level.

Alan Lepofsky, who recently joined Salesforce as VP of Salesforce Quip, says the announcement is the culmination of a desire to embed the tool into Salesforce. “By bringing productivity directly into the context of business workflows, sales and customer support teams can collaborate in brand new ways, enabling them to be better aligned and more efficient, ultimately providing a better customer experience,” Lepofsky told TechCrunch.

Quip appears as a tab in the Sales or Service Cloud interface. There, employees can collaborate on documents and maintain all of their information in a single place without switching between multiple applications or losing context, an increasingly important goal for collaboration tools, including Slack.

Photo: Salesforce

Administrators can build templates to quickly facilitate team building. The templates enable you to start a page pre-populated with information about a specific account or set of accounts. You can take this a step further by creating templates with a set of filters to refine each one to meet the needs of a particular team, based on factors like deal size, industry or location.

In the service context, customer service agents can set up pages to discuss different kinds of issues or problems and work together to get answers quickly, even while chatting with a customer.

Salesforce has various partnerships with Microsoft, Dropbox, Google, Slack and others that provide similar functionality, and customers that want to continue using those tools can do so. But 2.5 years after the Quip acquisition, Salesforce is finally putting it to work as a native productivity and collaboration tool.

“As an industry analyst, I spent years advising vendors on the importance of purpose and context as two key drivers for getting work done. Salesforce is delivering both by bringing productivity from Quip directly to CRM and customer service,” Lepofsky said.

The idea of providing a single place to collaborate without task switching is certainly attractive, but it remains to be seen if customers will warm to the idea of using Quip instead of one of the other tools out there. In the meantime, Quip will still be sold as a standalone tool.

Nvidia announces its next-gen RTX pods with up to 1,280 GPUs

Nvidia wants to be a cloud powerhouse. While its history may be in graphics cards for gaming enthusiasts, its recent focus has been on its data center GPUs for AI, machine learning inference and visualization. Today, at its GTC conference, the company announced its latest RTX server configuration for Hollywood studios and others who need to quickly generate visual content.

A full RTX server pod can support up to 1,280 Turing GPUs on 32 RTX blade servers. That’s 40 GPUs per server, with each server taking up an 8U space. The GPUs here are Quadro RTX 4000 or 6000 GPUs, depending on the configuration.

“NVIDIA RTX Servers — which include fully optimized software stacks available for OptiX RTX rendering, gaming, VR and AR, and professional visualization applications — can now deliver cinematic-quality graphics enhanced by ray tracing for far less than just the cost of electricity for a CPU-based rendering cluster with the same performance,” the company notes in today’s announcement.

All of this power can be shared by multiple users, and the backend storage and networking interconnect is powered by technology from Mellanox, which Nvidia bought earlier this month for $6.9 billion. That acquisition and today’s news clearly show how important the data center has become for Nvidia’s future.

System makers like Dell, HP, Lenovo, Asus and Supermicro will offer RTX servers to their customers, all of which have been validated by Nvidia and support the company’s software tools for managing the workloads that run on them.

Nvidia also stresses that these servers would work great for running AR and VR applications at the edge and then serving the visuals to clients over 5G networks. That’s a few too many buzzwords, I think, and consumer interest in AR and VR remains questionable, while 5G networks remain far from mainstream, too. Still, there’s a role for these servers in powering cloud gaming services, for example.

Nvidia’s T4 GPUs are coming to the AWS cloud

In the coming weeks, AWS is launching new G4 instances with support for Nvidia’s T4 Tensor Core GPUs, the company today announced at Nvidia’s GTC conference. The T4, which is based on Nvidia’s Turing architecture, was specifically optimized for running AI models. The T4 will be supported by the EC2 compute service and the Amazon Elastic Container Service for Kubernetes.
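Once the instances are live, requesting one should look like any other EC2 launch. Here is a minimal sketch with boto3; the instance type name and AMI ID below are assumptions for illustration, since AWS hadn’t published the exact G4 type names at announcement time.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single GPU-backed instance. The instance type below is an
# assumption for illustration; check AWS's docs for the actual G4 type names.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: a Deep Learning AMI of your choice
    InstanceType="g4dn.xlarge",        # hypothetical T4-backed instance type
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",
)

print(response["Instances"][0]["InstanceId"])
```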

“NVIDIA and AWS have worked together for a long time to help customers run compute-intensive AI workloads in the cloud and create incredible new AI solutions,” said Matt Garman, vice president of Compute Services at AWS, in today’s announcement. “With our new T4-based G4 instances, we’re making it even easier and more cost-effective for customers to accelerate their machine learning inference and graphics-intensive applications.”

The T4 is also the first GPU on AWS that supports Nvidia’s ray-tracing technology. That’s not what Nvidia is focusing on with this announcement, but creative pros can use these GPUs to take the company’s real-time ray-tracing technology for a spin.

For the most part, though, it seems like Nvidia and AWS expect that developers will use the T4 to put AI models into production. It’s worth noting that the T4 hasn’t been optimized for training these models, but it can obviously be used for that as well. Indeed, with the new CUDA-X AI libraries (also announced today), Nvidia now offers an end-to-end platform for developers who want to use its GPUs for deep learning, machine learning and data analytics.

It’s worth noting that Google launched T4 support a few months ago; on Google’s cloud, these GPUs are currently in beta.

ClimaCell bets on IoT for better weather forecasts

To accurately forecast the weather, you first need lots of data — not just to train your forecasting models but also to generate more precise and granular forecasts. Typically, this has been the domain of government agencies, thanks to their access to this data and the compute power to run the extremely complex models. Anybody can now buy compute power in the cloud, though, and as the Boston- and Tel Aviv-based startup ClimaCell is setting out to prove, there are now plenty of other ways to get climate data, thanks to a variety of relatively non-traditional sensors that can help generate more precise local weather predictions.

Now you may say that others, like Dark Sky, for example, are already doing that with their hyperlocal forecasts. But ClimaCell’s approach is very different, and with it the company has attracted clients including airlines like Delta, JetBlue and United, sports teams like the New England Patriots, and agtech companies like Netafim.

“The biggest problem is that to predict the weather, you need to have observations and you need to have models,” ClimaCell CEO Shimon Elkabetz told me. “The entire industry is basically repackaging the data and models of the government [agencies]. And the governments don’t create the relevant infrastructure everywhere in the world. Even in the U.S., there’s room for improvement.”

And that’s where ClimaCell’s main innovation comes in. Instead of relying on government sensors, it’s using the Internet of Things to gather more weather data from far more places than would otherwise be possible. This kind of sensing technology could turn millions of existing connected devices — like cell phones, connected vehicles, street cameras, airplanes and drones — into virtual weather stations. It’s easy enough to see how this would work. If a driver turns on a windshield wiper or fog lights, you know it’s probably raining or foggy. Often, these cars also relay temperature data. If a street camera sees rain, it’s raining.
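To make that concrete, here is a toy sketch of how one of those “virtual weather stations” might translate vehicle telemetry into an observation. The field names and thresholds are invented for illustration; ClimaCell’s actual models are far more sophisticated.

```python
def vehicle_to_observation(telemetry: dict) -> dict:
    """Toy 'weather of things' heuristic: infer local conditions from
    ordinary car signals (field names are hypothetical)."""
    obs = {
        "lat": telemetry["lat"],
        "lon": telemetry["lon"],
        "temperature_c": telemetry.get("outside_temp_c"),
    }
    # Wipers running usually means precipitation; wiper speed hints at intensity.
    if telemetry.get("wiper_speed", 0) > 0:
        obs["precipitation"] = "heavy" if telemetry["wiper_speed"] >= 2 else "light"
    # Fog lights on suggest reduced visibility.
    if telemetry.get("fog_lights_on"):
        obs["visibility"] = "low"
    return obs

print(vehicle_to_observation({
    "lat": 42.36, "lon": -71.06, "outside_temp_c": 7.5,
    "wiper_speed": 2, "fog_lights_on": False,
}))
```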

What’s more complex is that ClimaCell has also developed the technology to gather data from how atmospheric conditions impact the signal propagation between cell phones and their base stations. And to take this one step further — and beyond the ground level — it has also figured out how to gather similar data from satellite-to-ground microwave signals.

“The idea is that everything is sensitive to weather and we can turn everything into a weather sensor,” said Elkabetz. “That’s why we call it the weather of things. It enables us to put in place virtual sensors everywhere.”

Using all this data, ClimaCell is providing its customers, like airlines, ridesharing companies and energy companies, with real-time weather data and forecasts.

The company also recently used all of this data to launch flood alerts for about 500 cities, which can provide 24- to 48-hour warnings ahead of major flood events. To do this, it combined its weather data with its own hydrological model.

For now, most of ClimaCell’s business model focuses on selling its data and predictions to other businesses. The company plans to launch a consumer app in May, though. I got a sneak peek of the app; while I can’t vouch for the forecasts, it’s a very well-designed application that you’ll probably want to look at, no matter whether you’re a weather geek or just want to see if you can get a quick bike ride in before the rain starts.

Why a consumer app? “We want to become the biggest weather technology company in the world,” Elkabetz said. To get to this point, the company has raised a total of $68 million to date from investors that include Clearvision Ventures, JetBlue Technology Ventures, Ford Smart Mobility, Envision Ventures, Canaan Partners, Fontinalis Partners and Square Peg Capital.

WorkClout brings SaaS to factory floor to increase operational efficiency

Factory software tools are often out of reach of small manufacturers, forcing them to operate with inefficient manual systems. WorkClout, a member of the Y Combinator Winter 2019 class, wants to change that by offering a more affordable SaaS alternative to traditional manufacturing software solutions.

Company co-founder and CEO Arjun Patel grew up helping out in his dad’s factory, and he saw firsthand how difficult it is for small factory owners to automate. He says that traditional floor management tools are expensive and challenging to implement.

“What motivated me is when my dad was trying to implement a similar system,” Patel said. He said that his father’s system had cost over $240,000, taken over a year to get going and wasn’t really doing what he wanted it to do. That’s when he decided to help.

He teamed up with Bryan Trang, who became the CPO, and Richard Girges, who became the CTO, to build the system that his dad (and others in a similar situation) needed. Specifically, the company developed a cloud software solution that helps manufacturers increase their operational efficiency. “Two things that we do really well is track every action on the factory floor and use that data to make suggestions on how to increase efficiency. We also determine how much work can be done in a given time period, taking finite resources into consideration,” Patel explained.

He said that one of the main problems that small-to-medium-sized manufacturers face is a lack of visibility into their businesses. WorkClout looks at orders, activities, labor and resources to determine the best course of action to complete an order in the most cost-effective way.
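As a rough illustration of what “taking finite resources into consideration” means, the sketch below greedily fits orders into a work period given limited machine hours. It is a toy model, not WorkClout’s actual scheduling logic, and all of the numbers are made up.

```python
# Toy finite-capacity check: how many orders fit into one week of machine time?
# (Illustrative only; not WorkClout's algorithm.)
orders = [
    {"id": "A-101", "machine_hours": 30, "revenue": 9000},
    {"id": "A-102", "machine_hours": 55, "revenue": 12000},
    {"id": "A-103", "machine_hours": 20, "revenue": 8000},
]
capacity_hours = 80  # total machine hours available this week

# Prioritize orders by revenue earned per machine hour.
schedule, used = [], 0
for order in sorted(orders, key=lambda o: o["revenue"] / o["machine_hours"], reverse=True):
    if used + order["machine_hours"] <= capacity_hours:
        schedule.append(order["id"])
        used += order["machine_hours"]

print(schedule, f"{used}/{capacity_hours} hours used")
# ['A-103', 'A-101'] -> 50/80 hours used; A-102 has to wait for the next period
```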

“WorkClout gives our customers a better way to allocate resources and greater visibility of what’s actually happening on the factory floor. The more data that they have, the more accurate picture they have of what’s going on,” Patel said.

Production Schedule view. Screenshot: WorkClout

The company is still working on the pricing model, but today it charges administrative users like plant management, accounting and sales. Machine operators get access to the data for free. The current rate for paid users starts at $99 per user per month. There is an additional one-time charge for implementation and training.

As for the Y Combinator experience, Patel says that it has helped him focus on what’s important. “It really makes you hone in on building the product and getting customers, then making sure those two things are leading to customer happiness,” he said.

While the company does have to help customers get going today, the goal is to make the product more self-serve over time as they begin to understand the different verticals they are developing solutions for. The startup launched in December and already has 13 customers, generating $100,000 in annual recurring revenue (ARR), according to Patel.

Slack hands over control of encryption keys to regulated customers

Slack announced today that it is launching Enterprise Key Management (EKM) for Slack, a new tool that enables customers to control their encryption keys in the enterprise version of the communications app. The keys are managed in the AWS KMS key management tool.

Geoff Belknap, chief security officer (CSO) at Slack, says that the new tool should appeal to customers in regulated industries, who might need tighter control over security. “Markets like financial services, health care and government are typically underserved in terms of which collaboration tools they can use, so we wanted to design an experience that catered to their particular security needs,” Belknap told TechCrunch.

Slack currently encrypts data in transit and at rest, but the new tool augments this by giving customers greater control over the encryption keys that Slack uses to encrypt messages and files being shared inside the app.
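Slack hasn’t published its internal implementation, but the general pattern behind a KMS-backed scheme like this is envelope encryption: each message or file is encrypted with a data key that is itself wrapped by the customer’s key in AWS KMS, so disabling that key revokes access. Here is a rough sketch of that pattern with boto3; the key alias is hypothetical and this is not Slack’s code.

```python
import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms", region_name="us-east-1")

# Ask KMS for a fresh data key wrapped by the customer-managed key.
# "alias/acme-slack-ekm" is a hypothetical key alias.
data_key = kms.generate_data_key(KeyId="alias/acme-slack-ekm", KeySpec="AES_256")

# Encrypt the message locally with the plaintext data key...
nonce = os.urandom(12)
ciphertext = AESGCM(data_key["Plaintext"]).encrypt(nonce, b"quarterly numbers", None)

# ...and store only the ciphertext plus the *wrapped* data key.
record = {"ciphertext": ciphertext, "nonce": nonce, "wrapped_key": data_key["CiphertextBlob"]}

# Reading it back requires KMS to unwrap the key; if the customer disables
# or revokes the key, this call fails and the content becomes unreadable.
plaintext_key = kms.decrypt(CiphertextBlob=record["wrapped_key"])["Plaintext"]
print(AESGCM(plaintext_key).decrypt(record["nonce"], record["ciphertext"], None))
```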

He said that regulated industries in particular have been requesting the ability to control their own encryption keys, including the ability to revoke them if required for security reasons. “EKM is a key requirement for growing enterprise companies of all sizes, and was a requested feature from many of our Enterprise Grid customers. We wanted to give these customers full control over their encryption keys, and when or if they want to revoke them,” he said.

Screenshot: Slack

Belknap says that this is especially important when customers involve people outside the organization such as contractors, partners or vendors in Slack communications. “A big benefit of EKM is that in the event of a security threat or if you ever experience suspicious activity, your security team can cut off access to the content at any time if necessary,” Belknap explained.

In addition to controlling the encryption keys, customers can gain greater visibility into activity inside of Slack via the Audit Logs API. “Detailed activity logs tell customers exactly when and where their data is being accessed, so they can be alerted of risks and anomalies immediately,” he said. If a customer finds suspicious activity, it can cut off access.
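The Audit Logs API is a REST endpoint scoped to Enterprise Grid organizations. Below is a minimal sketch of pulling recent events with Python’s requests library; treat the endpoint path, query parameters and response fields shown here as assumptions and check Slack’s API documentation for the authoritative details.

```python
import requests

# Org-level token with the auditlogs:read scope (Enterprise Grid only).
TOKEN = "xoxp-your-org-token"

resp = requests.get(
    "https://api.slack.com/audit/v1/logs",       # endpoint as assumed here
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"limit": 50, "action": "file_downloaded"},  # parameter names assumed
)
resp.raise_for_status()

# Field names below are assumptions about the response schema.
for entry in resp.json().get("entries", []):
    print(entry["date_create"], entry["action"], entry["actor"]["user"]["email"])
```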

EKM for Slack is generally available today for Enterprise Grid customers for an additional fee. Slack, which announced plans to go public last month, has raised over $1 billion on a $7 billion valuation.

Microsoft open sources its data compression algorithm and hardware for the cloud

The amount of data that the big cloud computing providers now store is staggering, so it’s no surprise that most store all of this information as compressed data in some form or another — just like you used to zip your files back in the days of floppy disks, CD-ROMs and low-bandwidth connections. Typically, those systems are closely guarded secrets, but today, Microsoft open sourced the algorithm, hardware specification and Verilog source code for how it compresses data in its Azure cloud. The company is contributing all of this to the Open Compute Project (OCP).

Project Zipline, as Microsoft calls this project, can achieve 2x higher compression ratios compared to the standard Zlib-L4 64KB model. To do this, the algorithm — and its hardware implementation — were specifically tuned for the kind of large datasets Microsoft sees in its cloud. Because the system works at the systems level, there is virtually no overhead and Microsoft says that it is actually able to manage higher throughput rates and lower latency than other algorithms are currently able to achieve.
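Project Zipline itself is a hardware implementation, but the baseline it is measured against is easy to reproduce: a compression ratio is just original size divided by compressed size. Here is a quick sketch using Python’s zlib module on a made-up repetitive payload; real cloud datasets will behave differently.

```python
import zlib

# Made-up payload: repetitive log-like data, which compresses well.
data = b"2019-03-14 INFO request served in 12ms\n" * 10_000

for level in (1, 4, 9):                      # zlib compression levels
    compressed = zlib.compress(data, level)
    ratio = len(data) / len(compressed)
    print(f"level {level}: {len(compressed):>7} bytes, ratio {ratio:.1f}x")
```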

Microsoft stresses that it is also contributing the Verilog source code for register transfer language (RTL) — that is, the low-level code that makes this all work. “Contributing RTL at this level of detail as open source to OCP is industry leading,” Kushagra Vaid, the general manager for Azure hardware infrastructure, writes. “It sets a new precedent for driving frictionless collaboration in the OCP ecosystem for new technologies and opening the doors for hardware innovation at the silicon level.”

Microsoft is currently using this system in its own Azure cloud, but it is now also partnering with others in the Open Compute Project. Among these partners are Intel, AMD, Ampere, Arm, Marvell, SiFive, Broadcom, Fungible, Mellanox, NGD Systems, Pure Storage, Synopsys and Cadence.

“Over time, we anticipate Project Zipline compression technology will make its way into several market segments and usage models such as network data processing, smart SSDs, archival systems, cloud appliances, general purpose microprocessor, IoT, and edge devices,” writes Vaid.

Scaleway updates its high-performance instances

Cloud-hosting company Scaleway refreshed its lineup of high-performance instances today. These instances are now all equipped with AMD EPYC CPUs, DDR4 RAM and NVMe SSD storage. The more you pay, the more computing power, RAM, storage and bandwidth you get.

High-performance plans start at €0.078 per hour or €39 per month ($44.20), whichever is lower at the end of the month. For this price you get 4 cores, 16GB of RAM, 150GB of storage and 400Mbps of bandwidth.
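The “whichever is lower” rule means hourly billing is capped at the monthly price; at €0.078 per hour the cap kicks in after exactly 500 hours. A quick sketch of the math, using the entry-level plan’s numbers from above:

```python
def monthly_bill(hours_used: float, hourly_rate: float = 0.078, monthly_cap: float = 39.0) -> float:
    """Scaleway-style billing: pay per hour, but never more than the monthly price."""
    return round(min(hours_used * hourly_rate, monthly_cap), 2)

print(monthly_bill(120))   # 9.36  -> a few days of testing stays cheap
print(monthly_bill(500))   # 39.0  -> break-even point (39 / 0.078 = 500 hours)
print(monthly_bill(730))   # 39.0  -> a full month is capped at the monthly price
```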

If you double the price, you get twice as many cores, RAM and storage. Higher plans get a tiny discount on performance bumps. And the fastest instance comes with 48 cores, 256GB of RAM, 600GB of storage and 2Gbps of bandwidth. That beast can cost as much as €569 per month ($645).


Scaleway previously offered high-performance instances, called “X64” instances, which were somewhat cheaper than the new lineup. Despite that price bump, Scaleway manages to stay competitive against Linode, DigitalOcean and others.

A server with 6 CPU cores and 16GB of RAM costs $80 per month on Linode. After that, you have to choose between high memory plans and dedicated CPU plans, so it’s harder to compare.

On DigitalOcean, an instance with 16GB of RAM and 4 CPU cores costs $120 per month. The most expensive instance costs $1,200 per month, and it doesn’t match the specifications of Scaleway’s most expensive instance.