Decrypted: How a teenager hacked Twitter, Garmin’s ransomware aftermath

A 17-year-old Florida teenager is accused of perpetrating one of the year’s biggest and most high-profile hacks: Twitter.

A 30-count indictment filed in Tampa said Graham Ivan Clark used a phone spearphishing attack to pivot through multiple layers of Twitter’s security and bypass its two-factor authentication, gaining access to an internal “admin” tool that let him take over any account. With two accomplices named in a separate federal indictment, Clark — who went by the online handle “Kirk” — allegedly used the tool to hijack the accounts of dozens of celebrities and public figures, including Bill Gates, Elon Musk and former president Barack Obama, and post a cryptocurrency scam that netted over $100,000 in bitcoin in just a few hours.

It was, by all accounts, a sophisticated attack that took technical skill and a talent for deception to pull off. Some security professionals were impressed, saying the operation had the finesse and professionalism of a well-resourced nation-state attacker.

But a profile in The New York Times described Clark as an “adept scammer with an explosive temper.”

In the teenager’s defense, the attack could have been much worse. Instead of pushing a scam that promised to “double your money,” Clark and his compatriots could have wreaked havoc. In 2013, hackers hijacked the Associated Press’ Twitter account and tweeted a false report of a bomb attack on the White House, sending the markets plummeting — only for them to quickly recover after the all-clear was given.

But with control of some of the world’s most popular Twitter accounts, Clark was for a few hours in July one of the most powerful people in the world. If found guilty, the teenager could spend his better years behind bars.

Here’s more from the past week.


THE BIG PICTURE

Garmin hobbles back after ransomware attack, but questions remain

Taking on the perfect storm in cybersecurity

The cybersecurity industry is at a turning point.

Traditional security approaches were already struggling to deal with rising cyberattacks, the shift to the cloud and the explosive growth of the Internet of Things (41.6 billion IoT devices by 2025, anyone?).

Then came the COVID-19 pandemic, and all the changes that had been building for years accelerated, with remote work becoming the norm and digital transformation taking on new urgency. New levels of complexity are piling on top of an environment that was already overwhelming for most organizations.

To me, the biggest risk in cybersecurity today is that organizations can’t keep up with the amount of work it takes to be secure. The people on your cybersecurity teams are drowning under an impossible amount of manual work. When people are asked to use manual processes against machines, they can’t keep up. Meanwhile, the hackers are getting smarter every day. They use machine learning (ML) algorithms to scale attacks that can only be successfully prevented by comparable techniques.

Under these conditions, you’ve got a perfect storm brewing.

That’s the bad news. The good news is that these challenges can be addressed. In fact, now is the right time to fix things. Why? Because everything is in transition: cloud adoption, a mobile workforce, the proliferation of IoT, etc.

In this sophisticated threat landscape, you can’t be reactive — you must be proactive. You need integrated machine learning to lift the burden off cybersecurity teams, so that they can be faster and more effective in dealing with attacks. At the same time, you must embrace cloud delivery and a holistic approach to cybersecurity.

Where’s the core foundation?

A popular expression from a few years back was “the cloud changes everything.” OK, it doesn’t change everything, but it does change a lot.

When organizations move to the cloud, they’re often unprepared. Staking critical pieces of their business on a cloud future without understanding the full implications of cloud can be challenging. And it doesn’t help that they are inundated with so many disparate products on the market that don’t work easily together.

If your security teams are drowning now, the cloud will hit them like a tsunami.

Automation can help ease the burden. But it’s nearly impossible for security teams to operate successfully when there are multiple vendors that need to be manually integrated and managed.

Most organizations have taken a stopgap approach to cybersecurity over the years. Each time there is a new type of threat, a new set of startups are founded to come up with a solution to counter it.

I have seen companies with dozens, or hundreds, of disparate security products that don’t interoperate and don’t even offer the possibility of a holistic approach to cybersecurity. This has become a house of cards. One product piled on top of another, with no core foundation to hold everything together.

It’s time to act. The technology to do cybersecurity right is available. It’s all about how you deploy it.

A new model for cybersecurity

The future of cybersecurity depends on a platform approach. This will allow your cybersecurity teams to focus on security rather than continue to integrate solutions from many different vendors. It allows you to keep up with digital transformation and, along the way, battle the perfect storm.

Our network perimeters are typically well-protected, and organizations have the tools and technologies in place to identify threats and react to them in real-time within their network environments.

The cloud, however, is a completely different story. There is no established model for cloud security. The good news is that there is no big deployment of legacy security solutions in the cloud. This means organizations have a chance to get it right this time. We can also fix how to access the cloud and manage security operations centers (SOCs) to maximize ML and AI for prevention, detection, response and recovery.

Cloud security, cloud access and next-generation SOCs are interrelated. Individually and together, they present an opportunity to modernize cybersecurity. If we build the right foundation today, we can break the pattern of too many disparate tools and create a path to consuming cybersecurity innovations and solutions more easily in the future.

What is that path? With an integrated platform, organizations can still use a wide range of tools, but they can coordinate them, manage them centrally, eliminate silos and ensure that all across the organization they are fighting machines with machines, software with software.

Only with an integrated platform can cybersecurity teams leverage automation to rapidly monitor, investigate and respond across multicloud environments and distributed networks that encompass users and devices around the globe.

2020 is a year of accelerated transformation. You can break from the old cybersecurity way of doing things and embrace a new approach — one driven by machine learning, cloud delivery and a platform model. This is the future of cybersecurity. It is a future that, by necessity, has come upon us faster than we would have ever imagined.

Docker partners with AWS to improve container workflows

Docker and AWS today announced a new collaboration that introduces a deep integration between Docker’s Compose and Desktop developer tools and AWS’s Elastic Container Service (ECS) and ECS on AWS Fargate. Previously, the two companies note, the workflow to take Compose files and run them on ECS was often challenging for developers. Now, the two companies have simplified this process to make switching between running containers locally and running them on ECS far easier.
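
The new workflow, as the companies describe it, lets developers set up a Docker “context” that points at ECS and then run their existing Compose files against it. A minimal sketch of what that looks like (the context name is illustrative, and the commands reflect the beta CLI Docker shipped at the time, so exact syntax may differ in later releases):

    # create a Docker context backed by your AWS credentials and region
    docker context create ecs myecscontext

    # develop and test locally against the default context, as before
    docker context use default
    docker compose up

    # switch to the ECS context and deploy the same Compose file to Fargate
    docker context use myecscontext
    docker compose up

In other words, the same docker-compose.yml drives both the local containers and the ECS/Fargate deployment; the active context determines where the workload runs.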

“With a large number of containers being built using Docker, we’re very excited to work with Docker to simplify the developer’s experience of building and deploying containerized applications to AWS,” said Deepak Singh, the VP for compute services at AWS. “Now customers can easily deploy their containerized applications from their local Docker environment straight to Amazon ECS. This accelerated path to modern application development and deployment allows customers to focus more effort on the unique value of their applications, and less time on figuring out how to deploy to the cloud.”

In a bit of a surprise move, Docker last year sold off its enterprise business to Mirantis to solely focus on cloud-native developer experiences.

“In November, we separated the enterprise business, which was very much focused on operations, CXOs and a direct sales model, and we sold that business to Mirantis,” Docker CEO Scott Johnston told TechCrunch’s Ron Miller earlier this year. “At that point, we decided to focus the remaining business back on developers, which was really Docker’s purpose back in 2013 and 2014.”

Today’s move is an example of this new focus, given that the workflow issues this partnership addresses had been around for quite a while already.

It’s worth noting that Docker also recently engaged in a strategic partnership with Microsoft to integrate the Docker developer experience with Azure’s Container Instances.

Google reportedly cancelled a cloud project meant for countries including China

After reportedly spending a year and a half working on a cloud service meant for China and other countries, Google cancelled the project, called “Isolated Region,” in May due partly to geopolitical and pandemic-related concerns. Bloomberg reports that Isolated Region would have enabled Google to offer cloud services in countries that want to keep and control data within their borders.

According to two Google employees who spoke to Bloomberg, the project was part of a larger initiative called “Sharded Google” to create data and processing infrastructure that is completely separate from the rest of the company’s network. Isolated Region began in early 2018 in response to Chinese regulations that require foreign tech companies entering the country to form a joint venture with a local company that would hold control over user data. Isolated Region was meant to help meet requirements like this in China and other countries, while also addressing U.S. national security concerns.

Bloomberg’s sources said the project was paused in China in January 2019, and focus was redirected to Europe, the Middle East and Africa instead, before Isolated Region was ultimately cancelled in May, though Google has since considered offering a smaller version of Google Cloud Platform in China.

After the story was first published, a Google representative told Bloomberg that Isolated Region wasn’t shut down because of geopolitical issues or the pandemic, and that the company “does not offer and has not offered cloud platform services inside China.”

Instead, she said Isolated Region was cancelled because “other approaches we were actively pursuing offered better outcomes. We have a comprehensive approach to addressing these requirements that covers the governance of data, operational practices and survivability of software. Isolated Region was just one of the paths we explored to address these requirements.”

Alphabet, Google’s parent company, broke out Google Cloud as its own line item for the first time in its fourth-quarter and full-year earnings report, released in February. It revealed that its run rate grew 53.6% during the last year to just over $10 billion in 2019, making it a more formidable rival to competitors Amazon and Microsoft.

‘No code’ will define the next generation of software

It seems like every software funding and product announcement these days includes some sort of reference to “no code” platforms or functionality. The frequent callbacks to this buzzy term reflect a realization that we’re entering a new software era.

Similar to cloud, no code is not a category itself, but rather a shift in how users interface with software tools. In the same way that PCs democratized software usage, APIs democratized software connectivity and the cloud democratized the purchase and deployment of software, no code will usher in the next wave of enterprise innovation by democratizing technical skill sets. No code is empowering business users to take over functionality previously owned by technical users by abstracting complexity and centering around a visual workflow. This profound generational shift has the power to touch every software market and every user across the enterprise.

The average enterprise tech stack has never been more complex

In a perfect world, all enterprise applications would be properly integrated, every front end would be shiny and polished, and internal processes would be efficient and automated. Alas, in the real world, engineering and IT teams spend a disproportionate share of their time fighting fires in security, fixing internal product bugs and running vendor audits. These teams are bursting at the seams, spending an estimated 30% of their resources building and maintaining internal tools, torpedoing productivity and compounding technical debt.

Seventy-two percent of IT leaders now say project backlogs prevent them from working on strategic projects. Hiring alone can’t solve the problem. The demand for technical talent far outpaces supply, as demonstrated by the fact that six out of 10 CIOs expect skills shortages to prevent their organizations from keeping up with the pace of change.

At the same time that IT and engineering teams are struggling to maintain internal applications, business teams keep adding fragmented third-party tools to increase their own agility. In fact, the average enterprise is supporting 1,200 cloud-based applications at any given time. Lacking internal support, business users bring in external IT consultants. Cloud promised easy as-needed software adoption with seamless integration, but the realities of quickly changing business needs have led to a roaring comeback of expensive custom software.

Google Cloud launches Filestore High Scale, a new storage tier for high-performance computing workloads

Google Cloud today announced the launch of Filestore High Scale, a new storage option — and tier of Google’s existing Filestore service — for workloads that can benefit from access to distributed, high-performance storage.

With Filestore High Scale, which is based on technology Google acquired when it bought Elastifile in 2019, users can deploy shared file systems with hundreds of thousands of IOPS, tens of GB/s of throughput and capacities of hundreds of TBs.
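
To give a concrete sense of how that is consumed, provisioning an instance is meant to look much like creating any other Filestore share, just with the high-scale tier and a larger capacity. A rough sketch (the instance name, zone and capacity are hypothetical, and the tier identifier and flags may differ from what’s shown here):

    gcloud filestore instances create hpc-share \
        --zone=us-central1-a \
        --tier=HIGH_SCALE_SSD \
        --file-share=name=data,capacity=100TB \
        --network=name=default

Clients would then mount the resulting share over NFS, as with any Filestore volume, and scale out by pointing thousands of compute nodes at the same mount.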

“Virtual screening allows us to computationally screen billions of small molecules against a target protein in order to discover potential treatments and therapies much faster than traditional experimental testing methods,” says Christoph Gorgulla, a postdoctoral research fellow at Harvard Medical School’s Wagner Lab, which has already put the new service through its paces. “As researchers, we hardly have the time to invest in learning how to set up and manage a needlessly complicated file system cluster, or to constantly monitor the health of our storage system. We needed a file system that could handle the load generated concurrently by thousands of clients, which have hundreds of thousands of vCPUs.”

The standard Google Cloud Filestore service already supports some of these use cases, but the company notes that it built Filestore High Scale specifically for high-performance computing (HPC) workloads, and in today’s announcement it focuses on biotech use cases around COVID-19. Filestore High Scale is meant to support tens of thousands of concurrent clients, which isn’t necessarily a standard use case, but developers who need this kind of power can now get it in Google’s cloud.

In addition to High Scale, Google also today announced that all Filestore tiers now offer beta support for NFS IP-based access controls, an important new feature for those companies that have advanced security requirements on top of their need for a high-performance, fully-managed file storage service.

Los Angeles-based Open Raven raises $15 million from KPCB for its security tech to secure hybrid clouds

Open Raven, the Los Angeles-based security startup founded by a team of cybersecurity veterans from CrowdStrike and SourceClear, has closed on $15 million in new financing only four months after emerging from stealth and in the middle of a global pandemic. 

The company already boasted an impressive roster of investors well-versed in enterprise software and cybersecurity including Upfront Ventures; Goldman Sachs’ chief information risk officer, Phil Venables; RSA’s former chief strategy officer, Niloo Razi Howe; and the cybersecurity company Signal Sciences, whose chief executive, Andrew Peterson, is a Los Angeles native.

Now, the company has added to its haul with new capital and the deep cybersecurity expertise of Kleiner Perkins, through investors like Ted Schlein and Bucky Moore, who will be taking a seat on the company’s board of directors.

Investors’ confidence in Open Raven’s potential stems from the simple fact that a majority of all databases will be accessed from a cloud platform within the next two years, according to Gartner Inc. data provided by the company.

These databases may exist on several different service providers’ cloud computing platforms, making it that much more difficult to secure and track the data as it’s accessed by different users. Put simply, data security tools weren’t built to handle this kind of data fluidity across multiple services. These instances of what Open Raven calls “data sprawl” can lead to misconfigurations, which have become one of the biggest security threats, according to a study by TechCrunch’s parent company, Verizon.

“Today’s data security problem bears little resemblance to the historical challenges that drove the creation of the last generation of products,” said KPCB’s Moore, in a statement.

Co-founded by CrowdStrike’s former chief product officer, Dave Cole, and the founder of the open-source code monitoring service SourceClear, Mark Curphey, Open Raven has a tool that monitors, maps and manages how data moves through an organization.

In the cloud-based computing environments that have become standard operating practice during the work-from-home era created by the COVID-19 pandemic, data is moving to an increasingly vast number of points outside of a centralized network.

As Cole told dot.la when his company first emerged from stealth, many security breaches are just “instances where an org simply lost control of what data they had where, and it ended up on the internet. And people found it before they did.”

Open Raven offers a free version of its service to map out networks and visualize where and how data moves. The core functionality will be available for free under an Apache 2.0 license, but there’s a premium version of the product where the company will provide additional services for paying customers.

“The transition to the cloud and out of physical data centers means that data stores change more quickly than ever before – leaving numerous unanswered questions,” said Dave Cole, co-founder and CEO of Open Raven, in a statement.

IBM Cloud suffers prolonged outage

The IBM Cloud is currently suffering a major outage, and with that, multiple services that are hosted on the platform are also down, including everybody’s favorite tech news aggregator, Techmeme.

It looks like the problems started around 2:30pm PT and spread from there. Best we can tell, this is a worldwide problem and involves a networking issue, but IBM’s own status page isn’t actually loading anymore and returns an internal server error, so we don’t quite know the extent of the outage or what triggered it. IBM Cloud’s Twitter account has also remained silent, though we found a status page for IBM Aspera hosted on a third-party server, which seems to confirm that this is likely a worldwide networking issue.

IBM Cloud, which published a paper about ensuring zero downtime in April, also suffered a minor outage in its Dallas data center in March.

We’ve reached out to IBM’s PR team and will update this post once we get more information.

Update (5:06pm PT): We are seeing some reports that IBM Cloud is slowly coming back online. The company’s status page also now seems to be functioning again, though it still shows that the cloud outage continues for the time being.

Tibet to become China’s data gateway to South Asia

A sprawling 645,000-square-meter data facility is going up on the top of the world to power data exchange between China and its neighboring countries in South Asia.

The cloud computing and data center, perched in the plateau city of Lhasa, the capital of Tibet, and developed by private tech firm Ningsuan Technologies, has entered pilot operation following the completion of the first construction phase, China’s state news agency Xinhua reported (in Chinese) on Sunday.

Northeast of the Himalayas, Tibet was incorporated into the People’s Republic of China in 1950. Over the decades, the Chinese government has been grappling with demands from many Tibetans for more religious freedom and human rights in one of its most critical regions for national security.

The plateau is now a bridge for China to South Asia under the Belt and Road Initiative, Beijing’s ambitious global infrastructure project. Ningsuan, a Tibet-headquartered company with data control centers in Beijing and research teams in Nanjing, is betting on the increasing trade and investment activity between China and India, Nepal, Bangladesh and other countries that are part of the BRI.

This generates the need for robust IT infrastructure in the region to support data transmission, Hu Xiao, Ningsuan’s general manager, contended in a previous media interview.

While hot days and spotty power supply in certain South Asian regions incur higher costs for running data centers, Tibet, like the more established data hub in Guizhou province, is a natural data haven thanks to its temperate climate and low average temperature that are ideal for keeping servers cool.

Construction of the Lhasa data center began in 2017 and is scheduled for completion around 2025 or 2026, a grand investment that will total almost 12 billion yuan or $1.69 billion. The cloud facility is estimated to generate 10 billion yuan in revenue each year when it goes into full operation.

Alibaba has skin in the game as well. In 2018, the Chinese e-commerce giant, which has a growing cloud computing business, sealed an agreement (in Chinese) with Ningsuan to bring cloud services to industries in the Tibetan region spanning electricity supply, finance, national security, government affairs, public security and cyberspace.

VMware acquires network security firm Lastline, said to lay off 40% of staff

VMware is acquiring network security firm Lastline, TechCrunch has learned.

Since its launch in 2012, Lastline raised about $52.2 million, according to Crunchbase. Investors include Thomvest Ventures, which led the company’s $28.5 million Series C round in 2017; Redpoint and e.ventures, which led the company’s 2013 funding round; and Barracuda Networks, NTT Finance and Dell Technologies Capital.

A source tells us that VMware will let go some 40% of Lastline’s employees — about 50 staffers — as part of the acquisition. We asked a Lastline spokesperson for comment prior to publication but did not hear back. A spokesperson for VMware also did not respond to a request for comment.

After we published, Lastline confirmed the acquisition in a blog post.

“By joining forces with VMware, we will be able to offer additional capabilities to our customers and bring to market comprehensive security solutions for the data center, branch office and remote and mobile users,” said Lastline’s chief executive John DiLullo.

Terms of the deal were not disclosed. The deal, subject to regulatory approvals, is expected to close by the end of July.

Lastline provides threat detection services mostly focused on the network level, but they range from malware analysis to intrusion detection and network traffic analysis. The company prides itself on being a cloud-native platform and, as such, it promises to secure cloud deployments and on-premises networks, as well as multi-cloud and hybrid environments.

Recently, support for cloud-native hybrid and multi-cloud deployments has very much been a focus for VMware, which makes Lastline a pretty obvious fit for its overall strategy. This also marks VMware’s third security acquisition this year, after it picked up network analytics firm Nyansa in January and cloud-native security platform Octarine in May. VMware also acquired security firm Carbon Black in August 2019. The trend here is clear: VMware is trying to position itself as the provider of choice for enterprises looking for cloud-native security tools.

The company was founded by Christopher Kruegel, Engin Kirda and Giovanni Vigna, a team of computer science professors from the University of California, Santa Barbara and Northeastern University.

News of the acquisition comes a week after VMware announced solid Q1 earnings of $386 million, or $0.92 a share. Revenues came in at $2.73 billion, up about 12% on the same period a year ago. VMware CEO Pat Gelsinger attributed the quarter to the shift to work-from-home sparked by the coronavirus pandemic.

VMware shares were down slightly at Thursday’s market close.

Updated to include Lastline’s blog post on the acquisition.