Equinix is buying 13 data centers from Bell Canada for $750M

Equinix, the data center company, recently recorded its 69th consecutive quarter of revenue growth. One way it has achieved that kind of consistency is through strategic acquisitions. Today, the company announced that it's purchasing 13 data centers from Bell Canada for $750 million, greatly expanding its footprint in the country.

Equinix detailed the deal's financials along two axes: what the data centers cost relative to their revenue, and relative to their adjusted profit. On the revenue side, Equinix notes that it is paying $750 million for what it estimates to be $105 million in "annualized revenue," calculated by multiplying the most recent quarter's results by four. That gives the purchase a revenue multiple of a little over 7x.

Equinix also provided an adjusted profit multiple, saying that the 13 data center locations "[represent] a purchase multiple of approximately 15x EV / adjusted EBITDA." Unpacking that, the company is saying that the asset's enterprise value (similar to market capitalization, a popular valuation metric for public companies) is worth about 15 times its earnings before interest, taxes, depreciation and amortization (EBITDA). That seems a healthy price, but not an outrageous one.
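As a quick sanity check, both multiples fall out of simple division. The purchase price, annualized revenue and 15x multiple are from the announcement; the implied EBITDA figure below is derived from them, not a number Equinix disclosed:

```python
# Sanity-check the two multiples quoted in the deal announcement.
purchase_price = 750_000_000      # USD, stated purchase price
annualized_revenue = 105_000_000  # most recent quarter's revenue x 4

revenue_multiple = purchase_price / annualized_revenue
print(f"revenue multiple: {revenue_multiple:.1f}x")  # 7.1x

ev_ebitda_multiple = 15  # stated by Equinix
# Backing out the adjusted EBITDA implied by that multiple (derived, not disclosed):
implied_ebitda = purchase_price / ev_ebitda_multiple
print(f"implied adjusted EBITDA: ${implied_ebitda / 1e6:.0f}M")  # $50M
```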

Global reach of Equinix including expanded Canadian operations shown in left panel. Image: Equinix

The acquisition not only gives the company that additional revenue and a stronger foothold in the world's 10th largest economy, it also brings 600 customers using the Bell data centers, 500 of which are net new.

As much of the world is attempting to digitally transform in the midst of the pandemic and current economic crisis, Equinix sees this as an opportunity to help more Canadian customers go digital more quickly.

“Equinix has been serving the Canadian market in Toronto for more than a decade. This expansion and scale gives the Canadian market a clear and rapid migration path to digital transformation. We’re looking forward to deepening our relationships with our existing Canada-based customers and helping new companies throughout the country position themselves for digital success,” Jon Lin, Equinix President, Americas told TechCrunch.

This is not the first time Equinix has taken a bunch of data centers off a telco's hands. In fact, three years ago, the company bought 29 centers from Verizon (which is the owner of TechCrunch) for $3.6 billion.

As telcos move away from the data center business, companies like Equinix can step in, expand into new markets and increase revenue. It's one of the ways the company keeps revenue growing quarter after quarter.

Today's deal is part of that strategy of expanding into new markets and finding new ways to generate additional revenue as more companies use its services. Equinix rents space in its data centers and provides all the services companies need to run their equipment without operating their own facility. That includes things like heating, cooling, racks and wiring.

Even though public cloud companies like Amazon, Microsoft and Google are generating headlines with growing revenues, plenty of companies still want to run their own equipment without going to the expense of actually owning the building where the equipment resides.

Today’s deal is expected to close in the second half of the year, assuming it clears all of the regulatory scrutiny required in a purchase like this one.

Equinix just recorded its 69th straight positive quarter

There's something to be said for consistency through good times and bad, and one company that has had a staggeringly consistent track record is international data center vendor Equinix. It just recorded its 69th straight positive quarter, according to the company.

That's an astonishing record, covering more than 17 years of revenue growth, a streak stretching back to 2003. Not too shabby.

The company had a decent quarter, too. Even in the middle of an economic mess, revenue was still up 6% year over year to $1.445 billion, and up 2% over the previous quarter. The company runs data centers where companies can rent space for their servers. Equinix handles all of the infrastructure, providing racks, wiring and cooling, and customers can purchase as many racks as they need.

If you’re managing your own servers for even part of your workload, it can be much more cost-effective to rent space from a vendor like Equinix than trying to run a facility on your own.

Among its new customers this quarter are Zoom, which is buying capacity all over the place, having also announced a partnership with Oracle earlier this month, and TikTok. Both of those companies deal in video and require lots of different types of resources to keep things running.

This report comes against a backdrop of a huge increase in resource demand for certain sectors like streaming video and video conferencing, with millions of people working and studying at home or looking for distractions.

And if you’re wondering if they can keep it going, they believe they can. Their guidance calls for 2020 revenue of $5.877-$5.985 billion, a 6-8% increase over the previous year.
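As a rough check, the two ends of that guidance range, paired with the stated growth percentages, imply roughly the same prior-year revenue base. The baseline computed below is a derived figure for illustration, not one Equinix reported:

```python
# Back out the 2019 revenue implied by Equinix's 2020 guidance.
guide_low, guide_high = 5.877e9, 5.985e9  # 2020 revenue guidance range, USD
growth_low, growth_high = 0.06, 0.08      # stated YoY growth range

# If the low end of revenue corresponds to the low growth rate, and the
# high end to the high rate, both should imply about the same base year.
base_from_low = guide_low / (1 + growth_low)
base_from_high = guide_high / (1 + growth_high)
print(f"implied 2019 revenue: ${base_from_low / 1e9:.2f}B-${base_from_high / 1e9:.2f}B")
# Both work out to roughly $5.54B, so the guidance is internally consistent.
```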

You could call them the anti-IBM. At one point Big Blue recorded 22 straight quarters of declining revenue in an ignominious streak that stretched from 2012 to 2018 before it found a way to stop the bleeding.

When you consider that Equinix's streak includes the 2008-2010 period, the last time the economy hit the skids, the record is even more impressive, and certainly one worth pointing out.

Google data centers watch the weather to make the most of renewable energy

Google’s data centers run 24/7 and suck up a ton of energy — so it’s in both the company’s and the planet’s interest to make them do so as efficiently as possible. One new method has the facilities keeping an eye on the weather so they know when the best times are to switch to solar and wind energy.

The trouble with renewables is that they’re not consistent, like the output of a power plant. Of course it isn’t simply that when the wind dies down, wind energy is suddenly ten times as expensive or not available — but there are all kinds of exchanges and energy economies that fluctuate depending on what’s being put onto the grid and from where.

Google’s latest bid to make its data centers greener and more efficient is to predict those energy economies and schedule its endless data-crunching tasks around them.

It’s not that someone at Google looks up the actual weather for the next day and calculates how much solar energy will be contributed in a given region and when. Turns out there are people who can do that for you! In this case a firm called Tomorrow.

Weather patterns affect those energy economies, leading to times when the grid is mostly powered by carbon sources like coal, and other times when renewables are contributing their maximum.

This helpful visualization shows how it might work: shift peak loads to match times when green energy is most abundant.

What Google is doing is watching this schedule of carbon-heavy and renewable-heavy periods on the grid and shuffling things around on its end to take advantage of them. By stacking all its heavy compute tasks into time slots where the extra power they will draw is taken from mostly renewable energy sources, they can reduce their reliance on carbon-heavy power.

It only works if you have the kind of fluid and predictable digital work that Google has nurtured. When energy is expensive or dirty, the bare minimum of sending emails and serving YouTube videos is more than enough to keep its data centers busy. But when it’s cheap and green, compute-heavy tasks like training machine learning models or video transcoding can run wild.
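A minimal sketch of what that time-shifting could look like, assuming a day-ahead, per-hour carbon-intensity forecast of the kind a provider like Tomorrow supplies. The job names, hour capacities and forecast numbers here are invented for illustration, not Google's actual scheduler:

```python
# Greedy carbon-aware scheduling: pack deferrable jobs into the
# greenest (lowest carbon-intensity) hours first.
def schedule_jobs(forecast, jobs, capacity_per_hour):
    """forecast: {hour: gCO2/kWh}; jobs: list of (name, hours_needed)."""
    green_hours = sorted(forecast, key=forecast.get)  # cleanest hours first
    remaining = {h: capacity_per_hour for h in forecast}
    plan = {}
    for name, hours_needed in jobs:
        assigned = []
        for h in green_hours:
            if remaining[h] > 0 and len(assigned) < hours_needed:
                remaining[h] -= 1
                assigned.append(h)
        plan[name] = sorted(assigned)
    return plan

# Hypothetical forecast: a midday solar peak makes 11:00-14:00 cleanest.
forecast = {9: 450, 10: 380, 11: 210, 12: 180, 13: 190, 14: 240, 15: 400}
jobs = [("ml-training", 2), ("video-transcode", 2)]
plan = schedule_jobs(forecast, jobs, capacity_per_hour=1)
print(plan)  # {'ml-training': [12, 13], 'video-transcode': [11, 14]}
```

Both compute-heavy jobs land in the low-carbon midday window, leaving the dirtier morning and late-afternoon hours for baseline work.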

This informed time-shifting is a smart and intuitive idea, though from Google’s post it’s not clear how effective it really is. Usually when the company announces some effort like this, it’s accompanied by estimates of how much energy is saved or efficiency gained. In the case of this time-shifting experiment, the company is uncharacteristically conservative:

“Results from our pilot suggest that by shifting compute jobs we can increase the amount of lower-carbon energy we consume.”

That’s a lot of hedging for something that sounds like a home run on paper. A full research paper is forthcoming, but I’ve asked Google for more information in the meantime; I’ll update this post if I hear back.

Google Cloud’s newest data center opens in Salt Lake City

Google Cloud announced today that its new data center in Salt Lake City has opened, making it the 22nd such center the company has opened to date.

The Salt Lake City data center is the third in the western region, joining Los Angeles and The Dalles, Oregon, with the goal of providing lower-latency compute power across the region.

"We're committed to building the most secure, high-performance and scalable public cloud, and we continue to make critical infrastructure investments that deliver our cloud services closer to customers that need them the most," Jennifer Chason, director of Google Cloud Enterprise for the Western States and Southern California, said in a statement.

Cloud vendors in general are trying to open more locations closer to potential customers. It's a similar approach to the one AWS took when it announced its LA Local Zone at AWS re:Invent last year. The idea is to reduce latency by moving compute resources closer to the companies who need them, or to spread workloads across a set of regional resources.

Google also announced that PayPal, a company that was already a customer, has signed a multi-year contract, and will be moving parts of its payment systems into the western region. It’s worth noting that Salt Lake City is also home to a thriving startup scene that could benefit from having a data center located close by.

Google Cloud's parent company Alphabet recently shared the cloud division's quarterly earnings for the first time, indicating that it was on a run rate of more than $10 billion. While it still has a long way to go to catch rivals Microsoft and Amazon, expanding its reach in this fashion could help grow that market share.

Edge computing startup Pensando comes out of stealth mode with a total of $278 million in funding

Pensando, an edge computing startup founded by former Cisco engineers, came out of stealth mode today with an announcement that it has raised a $145 million Series C. The company’s software and hardware technology, created to give data centers more of the flexibility of cloud computing servers, is being positioned as a competitor to Amazon Web Services Nitro.

The round was led by Hewlett Packard Enterprise and Lightspeed Venture Partners and brings Pensando’s total raised so far to $278 million. HPE chief technology officer Mark Potter and Lightspeed Venture partner Barry Eggers will join Pensando’s board of directors. The company’s chairman is former Cisco CEO John Chambers, who is also one of Pensando’s investors through JC2 Ventures.

Pensando was founded in 2017 by Mario Mazzola, Prem Jain, Luca Cafiero and Soni Jiandani, a team of engineers who spearheaded the development of several of Cisco's key technologies and founded four startups that were acquired by Cisco, including Insieme Networks. (In an interview with Reuters, Pensando chief financial officer Randy Pond, a former Cisco executive vice president, said it isn't clear if Cisco is interested in acquiring the startup, adding "our aspirations at this point would be to IPO. But, you know, there's always other possibilities for monetization events.")

The startup claims its edge computing platform performs five to nine times better than AWS Nitro in terms of productivity and scale. Pensando prepares data center infrastructure for edge computing, better equipping data centers to handle data from 5G, artificial intelligence and Internet of Things applications. While in stealth mode, Pensando acquired customers including HPE, Goldman Sachs, NetApp and Equinix.

In a press statement, Potter said “Today’s rapidly transforming, hyper-connected world requires enterprises to operate with even greater flexibility and choices than ever before. HPE’s expanding relationship with Pensando Systems stems from our shared understanding of enterprises and the cloud. We are proud to announce our investment and solution partnership with Pensando and will continue to drive solutions that anticipate our customers’ needs together.”

Google is investing $3.3B to build clean data centers in Europe

Google announced today that it was investing 3 billion euro (approximately $3.3 billion USD) to expand its data center presence in Europe. What’s more, the company pledged the data centers would be environmentally friendly.

This new investment is in addition to the $7 billion the company has invested in the EU since 2007, but today's announcement focused as much on Google's commitment to running its data centers on clean energy as on the data centers themselves.

In a blog post announcing the new investment, CEO Sundar Pichai made it clear that the company is focused on running these data centers on carbon-free fuels, pointing out that he was in Finland today to discuss building sustainable economic development, in conjunction with a carbon-free future, with Prime Minister Antti Rinne.

Of the 3 billion euros the company plans to spend, it will invest 600 million to expand its presence in Hamina, Finland, which Pichai wrote "serves as a model of sustainability and energy efficiency for all of our data centers." Further, the company already announced 18 new renewable energy deals earlier this week, encompassing a total of 1,600 megawatts in the US, South America and Europe.

In the blog post, Pichai outlined how the new data center projects in Europe would include some of these previously announced projects:

Today I’m announcing that nearly half of the megawatts produced will be here in Europe, through the launch of 10 renewable energy projects. These agreements will spur the construction of more than 1 billion euros in new energy infrastructure in the EU, ranging from a new offshore wind project in Belgium, to five solar energy projects in Denmark, and two wind energy projects in Sweden. In Finland, we are committing to two new wind energy projects that will more than double our renewable energy capacity in the country, and ensure we continue to match almost all of the electricity consumption at our Finnish data center with local carbon-free sources, even as we grow our operations.

The company is also investing in new skills training, so people have the tools to handle the new types of jobs these data centers and other high-tech employers will require. The company claims it has previously trained 5 million people in Europe for free in crucial digital skills, and recently opened a Google skills hub in Helsinki.

It's obviously not a coincidence that the company is making an announcement related to clean energy on Global Climate Strike Day, a day when people around the world are walking out of schools and off their jobs to encourage world leaders and businesses to take action on the climate crisis. Google is attempting to answer the call with these announcements.

Why AWS gains big storage efficiencies with E8 acquisition

AWS is already the clear market leader in the cloud infrastructure market, but it has never been an organization that rests on its past successes, whether that means a flurry of new product announcements and enhancements every year, or strategic acquisitions.

When it bought Israeli storage startup E8 yesterday, the move might have seemed minor on its face, but AWS was looking, as it always does, to find an edge and reduce the cost of operations in its data centers. It was also very likely looking ahead to the next phase of cloud computing. Reports have pegged the deal at between $50 million and $60 million.

What E8 gives AWS for relatively cheap money is highly advanced storage capabilities, says Steve McDowell, senior storage analyst at Moor Insights & Strategy. "E8 built a system that delivers extremely high-performance/low-latency flash (and Optane) in a shared-storage environment," McDowell told TechCrunch.

AWS follows Microsoft into the Middle East, opening new region in Bahrain

AWS, Amazon's cloud arm, announced today that it has opened a Middle East Region in Bahrain. The Middle East is an emerging market for cloud providers, and this new region is part of a continuing expansion for the cloud giant. Today's news comes on the heels of Microsoft announcing its own Middle East data centers in Abu Dhabi and Dubai just last month.

As AWS CEO Andy Jassy pointed out last year at AWS re:Invent, the cloud is at different stages in different parts of the world and Amazon obviously wants to be a part of the emerging areas to extend its lead in the cloud infrastructure market.

“I think we’re just in the early stages of enterprise and public sector adoption in the U.S. Outside the U.S. I would say we are 12-36 months behind. So there are a lot of mainstream enterprises that are just now starting to plan their approach to the cloud,” Jassy told the AWS re:Invent audience last year.

Amazon sees this expansion as helping companies in the Middle East, much in the same way it has in the U.S., Europe and other parts of the world, to digitally transform through the use of cloud services.

The new region in the Middle East is composed of three Availability Zones. That’s AWS lingo for a distinct geographic area that holds at least one data center. “Each Availability Zone has independent power, cooling and physical security and is connected via redundant, ultra-low-latency networks,” the company explained in a statement.

Amazon says this is part of a continuing expansion: it also announced plans to open nine additional Availability Zones in Indonesia, Italy and South Africa in the coming years.

Fungible raises $200 million led by SoftBank Vision Fund to help companies handle increasingly massive amounts of data

Fungible, a startup that wants to help data centers cope with the increasingly massive amounts of data produced by new technologies, has raised a $200 million Series C led by SoftBank Vision Fund, with participation from Norwest Venture Partners and its existing investors. As part of the round, SoftBank Investment Advisers senior managing partner Deep Nishar will join Fungible’s board of directors.

Founded in 2015, Fungible now counts about 200 employees and has raised more than $300 million in total funding. Its other investors include Battery Ventures, Mayfield Fund, Redline Capital and Walden Riverwood Ventures. Its new capital will be used to speed up product development. The company’s founders, CEO Pradeep Sindhu and Bertrand Serlet, say Fungible will release more information later this year about when its data processing units will be available and their on-boarding process, which they say will not require clients to change their existing applications, networking or server design.

Sindhu previously founded Juniper Networks, where he held roles as chief scientist and CEO. Serlet was senior vice president of software engineering at Apple before leaving in 2011 and founding Upthere, a storage startup that was acquired by Western Digital in 2017. Sindhu and Serlet describe Fungible's objective as pivoting data centers from a "compute-centric" model to a data-centric one. While the company is often asked whether it considers Intel and Nvidia competitors, the founders say Fungible's Data Processing Units (DPUs) complement tech, including central and graphics processing units, from other chipmakers.

Sindhu describes Fungible’s DPUs as a new building block in data center infrastructure, allowing them to handle larger amounts of data more efficiently and also potentially enabling new kinds of applications. Its DPUs are fully programmable and connect with standard IPs over Ethernet local area networks and local buses, like the PCI Express, that in turn connect to CPUs, GPUs and storage. Placed between the two, the DPUs act like a “super-charged data traffic controller,” performing computations offloaded by the CPUs and GPUs, as well as converting the IP connection into high-speed data center fabric.

This better prepares data centers for the enormous amounts of data generated by new technology, including self-driving cars, and industries such as personalized healthcare, financial services, cloud gaming, agriculture, call centers and manufacturing, says Sindhu.

In a press statement, Nishar said “As the global data explosion and AI revolution unfold, global computing, storage and networking infrastructure are undergoing a fundamental transformation. Fungible’s products enable data centers to leverage their existing hardware infrastructure and benefit from these new technology paradigms. We look forward to partnering with the company’s visionary and accomplished management team as they power the next generation of data centers.”
