TikTok says it removed 104M videos in H1 2020, proposes harmful content coalition with other social apps

As the future of ByteDance’s TikTok ownership continues to get hammered out between tech leviathans, investors and government officials in meeting rooms, the video app today published its latest transparency report. In all, over 104.5 million videos were taken down; the company fielded nearly 1,800 legal requests; and it received 10,600 copyright takedown notices in the first half of this year.

Alongside that, and possibly both to offset the high numbers of illicit videos and to coincide with an appearance today in front of a parliamentary committee in the UK over harmful content, TikTok also announced a new initiative, potentially in partnership with other social apps, against harmful content.

The figures in the transparency report underscore an important aspect of the popular app’s impact. The US government may want to shut down TikTok over national security concerns (unless ByteDance finds a new non-Chinese controlling structure that satisfies lawmakers).

But in reality, just like other social media apps, TikTok has another not-insignificant fire to fight: it is grappling with a lot of illegal and harmful content published and shared on its platform, and as it continues to grow in popularity (it now has more than 700 million users globally), that problem will also continue to grow.

That’s something TikTok expects will be an ongoing issue for the company, regardless of how its ownership unfolds outside of China. While one of the big issues around TikTok’s ownership has been its algorithms and whether they can or will be part of any deal, the company has made other efforts to appear more open about how it works. Earlier this year it opened a transparency center in the US that it said would help experts observe and vet how it moderates content.

TikTok said that the 104,543,719 videos it removed globally for violating either community guidelines or its terms of service made up less than 1% of all videos uploaded to TikTok, which gives you some idea of the sheer scale of the service.

The volume of videos being taken down has more than doubled over the previous six months, a reflection of how the total volume of videos uploaded has also doubled.

In the second half of 2019, the company took down more than 49 million videos, according to the last transparency report published by the company (I don’t know why exactly, but it took a lot longer to publish that previous transparency report, which came out in July 2020.) The proportion of total videos taken down was roughly the same as in the previous six months (“less than 1%”).

TikTok said that 96.4% of those videos were removed before they were reported, with 90.3% removed before they received any views. It doesn’t specify whether these were found via automated systems, by human moderators, or a mix of both, but it sounds like the company made a switch to algorithm-based moderation at least in some markets:

“As a result of the coronavirus pandemic, we relied more heavily on technology to detect and automatically remove violating content in markets such as India, Brazil, and Pakistan,” it noted.

The company notes that the biggest category of removed videos involved adult nudity and sexual activities, at 30.9%, with minor safety at 22.3% and illegal activities at 19.6%. Other categories included suicide and self-harm, violent content, hate speech and dangerous individuals. (A video could count in more than one category, it noted.)

The biggest origination market for removed videos is the one in which TikTok has been banned (perhaps unsurprisingly): India took the lion’s share at 37,682,924 videos, around 36% of the total. The US, on the other hand, accounted for 9,822,996 (9.4%) of videos removed, making it the second-largest market.

Currently, it seems that misinformation and disinformation are not the main ways TikTok is getting abused, but the numbers are still significant: some 41,820 videos (less than 0.5% of those removed in the US) violated TikTok’s misinformation and disinformation policies, the company said.

Some 321,786 videos (around 3.3% of US content removals) violated its hate speech policies.

Legal requests, it said, are on the rise: TikTok received 1,768 requests for user information from 42 countries/markets in the first six months of the year, 290 of them (16.4%) from US law enforcement agencies, including 126 subpoenas, 90 search warrants and 6 court orders. In all, it had 135 requests from government agencies to restrict or remove content across 15 countries/markets.

TikTok said that the harmful content coalition is based on a proposal that Vanessa Pappas, the acting head of TikTok in the US, sent out to nine executives at other social media platforms. It doesn’t specify which, nor what the response was. We are asking and will update as we learn more.

Social media coalition proposal

Meanwhile, the letter, published in full by TikTok and reprinted below, underscores a response to current criticism of how proactive and successful social media platforms have actually been in trying to curtail abuse of their platforms. It’s not the first effort of this kind; there have been several other attempts where multiple companies, erstwhile competitors for consumer engagement, come together with a united front to tackle things like misinformation.

This one specifically targets non-political content, proposing a “collaborative approach to early identification and notification amongst industry participants of extremely violent, graphic content, including suicide.” The MOU proposed by Pappas suggests that social media platforms keep each other notified of such content: a smart move, considering how much material gets shared across multiple platforms.

The company’s effort on the harmful content coalition is one more example of how social media companies are trying to take the initiative and show that they are trying to be responsible, a key way of lobbying governments to stay out of regulating them. With Facebook, Twitter, YouTube and others continuing to be in hot water over the content shared on their platforms, despite their attempts to curb abuse and manipulation, it’s unlikely that this will be the final word on any of this.

Full memo below:

Recently, social and content platforms have once again been challenged by the posting and cross-posting of explicit suicide content that has affected all of us – as well as our teams, users, and broader communities.

Like each of you, we worked diligently to mitigate its proliferation by removing the original content and its many variants, and curtailing it from being viewed or shared by others. However, we believe each of our individual efforts to safeguard our own users and the collective community would be boosted significantly through a formal, collaborative approach to early identification and notification amongst industry participants of extremely violent, graphic content, including suicide.

To this end, we would like to propose the cooperative development of a Memorandum of Understanding (MOU) that will allow us to quickly notify one another of such content.

Separately, we are conducting a thorough analysis of the events as they relate to the recent sharing of suicide content, but it’s clear that early identification allows platforms to more rapidly respond to suppress highly objectionable, violent material.

We are mindful of the need for any such negotiated arrangement to be clearly defined with respect to the types of content it could capture, and nimble enough to allow us each to move quickly to notify one another of what would be captured by the MOU. We also appreciate there may be regulatory constraints across regions that warrant further engagement and consideration.

To this end, we would like to convene a meeting of our respective Trust and Safety teams to further discuss such a mechanism, which we believe will help us all improve safety for our users.

We look forward to your positive response and working together to help protect our users and the wider community.

Sincerely,

Vanessa Pappas
Head of TikTok

More to come.

Senate’s encryption backdoor bill is ‘dangerous for Americans,’ says Rep. Lofgren

A Senate bill that would compel tech companies to build backdoors to allow law enforcement access to encrypted devices and data would be “very dangerous” for Americans, said a leading House Democrat.

Law enforcement frequently spars with tech companies over their use of strong encryption, which protects user data from hackers and theft, but which the government says makes it harder to catch criminals accused of serious crimes. Tech companies like Apple and Google have in recent years doubled down on their security efforts by securing data with encryption that even they cannot unlock.

Senate Republicans in June introduced their latest “lawful access” bill, renewing previous efforts to force tech companies to allow law enforcement access to a user’s data when presented with a court order.

“It’s dangerous for Americans, because it will be hacked, it will be utilized, and there’s no way to make it secure,” Rep. Zoe Lofgren, whose congressional seat covers much of Silicon Valley, told TechCrunch at Disrupt 2020. “If we eliminate encryption, we’re just opening ourselves up to massive hacking and disruption,” she said.

Lofgren’s comments echo those of critics and security experts, who have long criticized efforts to undermine encryption, arguing that there is no way to build a backdoor for law enforcement that could not also be exploited by hackers.

Several previous efforts by lawmakers to weaken and undermine encryption have failed. Currently, law enforcement has to use existing tools and techniques to find weaknesses in phones and computers. The FBI claimed for years that it had thousands of devices that it couldn’t get into, but admitted in 2018 that it repeatedly overstated the number of encrypted devices it had and the number of investigations that were negatively impacted as a result.

Lofgren has served in Congress since 1995, through the first so-called “Crypto Wars,” during which the security community fought the federal government over efforts to limit access to strong encryption. In 2016, Lofgren was part of an encryption working group on the House Judiciary Committee. The group’s final report, bipartisan but not binding, found that any measure to undermine encryption “works against the national interest.”

Still, it’s a talking point that the government continues to push, even as recently as this year when U.S. Attorney General William Barr said that Americans should accept the security risks that encryption backdoors pose.

“You cannot eliminate encryption safely,” Lofgren told TechCrunch. “And if you do, you will create chaos in the country and for Americans, not to mention others around the world,” she said. “It’s just an unsafe thing to do, and we can’t permit it.”

Homeland Security issues rare emergency alert over ‘critical’ Windows bug

Homeland Security’s cybersecurity advisory unit has issued a rare emergency alert to government departments after the recent disclosure of a “critical”-rated security vulnerability in server versions of Microsoft Windows.

The Cybersecurity and Infrastructure Security Agency, better known as CISA, issued an alert late on Friday requiring all federal departments and agencies to “immediately” patch any Windows servers vulnerable to the so-called Zerologon attack by Monday, citing an “unacceptable risk” to government networks.

It’s the third emergency alert issued by CISA this year.

The Zerologon vulnerability, rated the maximum 10.0 in severity, could allow an attacker to take control of any or all computers on a vulnerable network, including domain controllers, the servers that manage a network’s security. The bug was appropriately called “Zerologon,” because an attacker doesn’t need to steal or use any network passwords to gain access to the domain controllers, only gain a foothold on the network, such as by exploiting a vulnerable device connected to the network.

With complete access to a network, an attacker could deploy malware, ransomware, or steal sensitive internal files.

Security company Secura, which discovered the bug, said it takes “about three seconds in practice” to exploit the vulnerability.
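
Secura’s technical write-up traces the flaw to Netlogon’s use of AES-CFB8 encryption with an all-zero initialization vector. The property the attack relies on can be demonstrated with a minimal, non-exploit sketch in Python (assuming the third-party cryptography package is installed): for roughly 1 in 256 random keys, an all-zero input encrypts to an all-zero output, which is why an attacker who simply retries a zeroed-out handshake a few hundred times will eventually authenticate without knowing any password.

```python
# Illustrative sketch of the Zerologon crypto flaw (not exploit code).
# With AES-CFB8 and an all-zero IV, the first output byte is the first
# byte of AES(key, 0^16); if that byte is 0 (a 1-in-256 chance), the
# shift register stays all zeros and the whole ciphertext is zeros.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def zeros_encrypt_to_zeros(key: bytes) -> bool:
    encryptor = Cipher(algorithms.AES(key), modes.CFB8(b"\x00" * 16)).encryptor()
    return encryptor.update(b"\x00" * 8) + encryptor.finalize() == b"\x00" * 8

trials = 20_000
hits = sum(zeros_encrypt_to_zeros(os.urandom(16)) for _ in range(trials))
print(f"{hits}/{trials} random keys hit the all-zero case (~{trials // 256} expected)")
```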

Microsoft pushed out an initial fix in August to prevent exploitation. But given the complexity of the bug, Microsoft said it would have to roll out a second patch early next year to eradicate the issue completely.

But the race is on to patch systems after researchers reportedly released proof-of-concept code, potentially allowing attackers to use that code to launch attacks. CISA said Friday that it “assumes active exploitation of this vulnerability is occurring in the wild.”

Although the CISA alert only applies to federal government networks, the agency said it “strongly” urges companies and consumers to patch their systems as soon as possible if they haven’t already.

How the NSA is disrupting foreign hackers targeting COVID-19 vaccine research

The headlines aren’t always kind to the National Security Agency, a spy agency that operates almost entirely in the shadows. But a year ago, the NSA launched its new Cybersecurity Directorate, which in the past year has emerged as one of the more visible divisions of the spy agency.

At its core, the directorate focuses on defending and securing critical national security systems that the government uses for its sensitive and classified communications. But the directorate has become best known for publicizing some of the larger emerging cyber threats from foreign hackers. In the past year it has warned of attacks targeting the secure boot features in most modern computers, and exposed a malware operation linked to Russian intelligence. By going public, the NSA aims to make it harder for foreign hackers to reuse their tools and techniques, while helping to defend critical systems at home.

But six months after the directorate started its work, COVID-19 was declared a pandemic and large swathes of the world — and the U.S. — went into lockdown, prompting hackers to shift gears and change tactics.

“The threat landscape has changed,” Anne Neuberger, NSA’s director of cybersecurity, told TechCrunch at Disrupt 2020. “We’ve moved to telework, we move to new infrastructure, and we’ve watched cyber adversaries move to take advantage of that as well,” she said.

Publicly, the NSA advised on which videoconferencing and collaboration software was secure, and warned about the risks associated with virtual private networks, whose usage boomed after lockdowns began.

But behind the scenes, the NSA is working with federal partners to help protect the efforts to produce and distribute a vaccine for COVID-19, an initiative the U.S. government calls Operation Warp Speed. News of NSA’s involvement in the operation was first reported by Cyberscoop. As the world races to develop a working COVID-19 vaccine, which experts say is the only long-term way to end the pandemic, NSA and its U.K. and Canadian partners went public with another Russian intelligence operation aimed at targeting COVID-19 research.

“We’re part of a partnership across the U.S. government, we each have different roles,” said Neuberger. “The role we play as part of ‘Team America for Cyber’ is working to understand foreign actors, who are they, who are seeking to steal COVID-19 vaccine information — or more importantly, disrupt vaccine information or shake confidence in a given vaccine.”

Neuberger said that protecting the pharma companies developing a vaccine is just one part of the massive supply chain operation that goes into getting a vaccine out to millions of Americans. Ensuring the cybersecurity of the government agencies tasked with approving a vaccine is also a top priority.

Here are more takeaways from the talk:

Why TikTok is a national security threat

TikTok is just days away from an app store ban, after the Trump administration earlier this year accused the Chinese-owned company of posing a threat to national security. But the government has been less than forthcoming about what specific risks the video sharing app poses, only alleging that the app could be compelled to spy for China. Beijing has long been accused of cyberattacks against the U.S., including the massive breach of classified government employee files from the Office of Personnel Management in 2014.

Neuberger said that the “scope and scale” of TikTok’s data collection makes it easier for Chinese spies to answer “all kinds of different intelligence questions” on U.S. nationals. Neuberger conceded that U.S. tech companies like Facebook and Google also collect large amounts of user data, but said there are “greater concerns on how [China] in particular could use all that information collected against populations other than its own.”

NSA is privately disclosing security bugs to companies

The NSA is trying to be more open about the vulnerabilities it finds and discloses, Neuberger said. She told TechCrunch that the agency has shared a “number” of vulnerabilities with private companies this year, but “those companies did not want to give attribution.”

One exception was earlier this year, when Microsoft confirmed the NSA had found and privately reported a major cryptographic flaw in Windows 10 that could have allowed hackers to run malware masquerading as a legitimate file. The bug was considered dangerous enough that the NSA opted for disclosure, and Microsoft patched it.

Only two years earlier, the spy agency was criticized for finding and using a Windows vulnerability to conduct surveillance instead of alerting Microsoft to the flaw. The exploit was later leaked and was used to infect thousands of computers with the WannaCry ransomware, causing millions of dollars’ worth of damage.

As a spy agency, the NSA exploits flaws and vulnerabilities in software to gather intelligence on the enemy. But first it has to run each bug through a process called the Vulnerabilities Equities Process, which weighs whether to disclose a vulnerability to the vendor or let the government retain it for use in spying.

Instagram CEO, ACLU slam TikTok and WeChat app bans for putting US freedoms into the balance

As people begin to process the announcement from the U.S. Department of Commerce detailing how it plans, on grounds of national security, to shut down TikTok and WeChat (starting with app downloads and updates for both, plus all of WeChat’s services, on September 20, with TikTok following with a shutdown of servers and services on November 12), the CEO of Instagram and the ACLU are among those speaking out against the move.

The CEO of Instagram, Adam Mosseri, wasted little time in taking to Twitter to criticize the announcement. His particular beef is the implications the move will have for U.S. companies, like his, that have also built their businesses around operating across national boundaries.

In essence, if the U.S. starts to ban international companies from operating in the U.S., then it opens the door for other countries to take the same approach with U.S. companies.

Meanwhile, the ACLU has been outspoken in criticizing the announcement on the grounds of free speech.

“This order violates the First Amendment rights of people in the United States by restricting their ability to communicate and conduct important transactions on the two social media platforms,” said Hina Shamsi, director of the American Civil Liberties Union’s National Security Project, in a statement today.

Shamsi added that ironically, while the U.S. government might be crying foul over national security, blocking app updates poses a security threat in itself.

“The order also harms the privacy and security of millions of existing TikTok and WeChat users in the United States by blocking software updates, which can fix vulnerabilities and make the apps more secure. In implementing President Trump’s abuse of emergency powers, Secretary Ross is undermining our rights and our security. To truly address privacy concerns raised by social media platforms, Congress should enact comprehensive surveillance reform and strong consumer data privacy legislation.”

Vanessa Pappas, who is the acting CEO of TikTok, also stepped in to endorse Mosseri’s words and publicly asked Facebook to join TikTok’s litigation against the U.S. over its moves.

“We agree that this type of ban would be bad for the industry. We invite Facebook and Instagram to publicly join our challenge and support our litigation,” she said in her own tweet responding to Mosseri, while also retweeting the ACLU. (Interesting how Twitter becomes Switzerland in these stories, huh?) “This is a moment to put aside our competition and focus on core principles like freedom of expression and due process of law.”

The move to shutter these apps has been wrapped in an increasingly complex set of issues, and these two dissenting voices highlight not just some of the conflict between those issues, but the potential consequences and detriment of acting based on one issue over another.

The Trump administration has stated that the main reason it has pinpointed the apps has been to “safeguard the national security of the United States” in the face of nefarious activity out of China, where the owners of WeChat and TikTok, respectively Tencent and ByteDance, are based:

“The Chinese Communist Party (CCP) has demonstrated the means and motives to use these apps to threaten the national security, foreign policy, and the economy of the U.S.,” today’s statement from the U.S. Department of Commerce noted. “Today’s announced prohibitions, when combined, protect users in the U.S. by eliminating access to these applications and significantly reducing their functionality.”

In reality, it’s hard to know where the truth actually lies.

In the case of the ACLU’s and Mosseri’s comments, they are highlighting issues of principle, but not necessarily precedent.

It’s not as if the U.S. would be the first country to take a nationalist approach to how it permits the operation of apps. Facebook and its stable of apps, as of right now, are unable to operate in China without a VPN (and even with a VPN, things can get tricky). And free speech is regularly ignored in a range of countries today.

But the U.S. has always positioned itself as a standard-bearer in both of these areas, so apart from the self-interest Instagram might have in advocating for more free-market policies, its stance points to a wider market and business position that’s being eroded.

The issue, of course, is a little like an onion (a stinking onion, I’d say), with well more than just a couple of layers to it, and with ramifications bigger than TikTok (which has 100 million users in the U.S. and is huge in pop culture beyond even that) or WeChat (much smaller in the U.S. but huge elsewhere and valued by those who do use it).

The Trump administration has been carefully selecting issues to tackle to give voters reassurance of Trump’s commitment to “Make America Great Again,” building examples of how it’s helping to promote U.S. interests and demote those that stand in its way. China has been a huge part of that image building, positioned as an adversary in industrial, defense and other arenas. Pinpointing specific apps and how they might pose a security threat by sucking up our data fits neatly into that strategy.

But are they really security threats, or are they just doing the same kind of nefarious data ingesting that every social app does in order to work? Will the U.S. banning them really mean that other countries, up to now more in favor of a free market, will fall in line and take a similar approach? Will people really stop being able to express themselves?

Those are the questions that Trump has forced into the balance with his actions, and even if they were not issues before, they have very much become so now.

Cloudflare’s Michelle Zatlyn on getting funding for crazy ideas

It’s not easy getting funding for any startup, but when Cloudflare launched at one of our early events 10 years ago, most investors sure thought its idea was a bit out there. Today, Cloudflare co-founder Michelle Zatlyn joined us at our virtual Disrupt event to talk with our enterprise reporter Ron Miller about those early days and how the team managed to get investors on board with its plan.

“Sometimes I think back and I think, what were we thinking? But really, I hope — as other entrepreneurs listen — that you do think, ‘hey, how can I be more bold?’ Because actually, that’s where a lot of greatness comes from,” Zatlyn said.

It’s maybe no surprise, then, that when the team went to the Bay Area in the summer of 2009, trying to sell a product that democratized a service previously available only to the internet giants of that time, not everybody was ready.

“A lot of eyes glazed over,” she said. “They kind of brushed us aside because we really had a big vision — we certainly did not lack in vision. So that was a lot of the conversations, but there were some conversations where people really leaned in and they’re like: ‘Oh, wow, tell me more.’ And maybe they knew a lot about cybersecurity or maybe they were really worried about the democratization of the internet and going behind guarded walls. And those people really got excited. And that’s how we found kind of our initial investors or initial teammates.”

Over time, the team found investors and an initial set of employees, but it wasn’t easy. “It was bold then and not everyone thought it was a good idea. Some people looked at us like we’re crazy, but I’d like to say that when people look at you like you’re crazy, you’re probably onto something big.”

While a lot of people looked at Cloudflare as a CDN in its early days, it was always meant to be a security product that needed the CDN to work (the company sometimes describes itself as an “accidental CDN” because of that). Ten years ago, the focus was mostly on web spammers, and because that was a threat everybody could relate to, the company made its ability to combat them very explicit in its pitch deck.

“We literally had a slide in our pitch deck that had quotes from real-life IT administrators from some medium-sized businesses, small businesses and developers, explaining in their own words — and their very hard-hitting words — how much they despise like these cyber threats that were online. […] Even if you knew nothing about cybersecurity or global performance, you could all understand wow, there’s something here, right?” And of course, it helped that there was no real protection against these threats available at the time.

Still, getting early customers didn’t come easy, either, Zatlyn noted, because some just didn’t understand what the company was doing. But the team took that as feedback and improved how it explained itself.

“Early on as an entrepreneur, you got to try a lot of things. And you don’t need to know every single corner of your business, but you need to really have a high rate of learning. Actually, that’s an entrepreneur’s best friend. How fast can you learn, how fast can you take the feedback and iterate on it?”

You can watch the rest of the conversation, including Zatlyn’s thoughts about going public, below:

Twitter tightens account security for political candidates ahead of US election

Twitter is taking steps to tighten account security for a range of users ahead of the US presidential election, including by requiring the use of strong passwords.

“We’re taking the additional step of proactively implementing account security measures for a designated group of high-profile, election-related Twitter accounts in the US. Starting today, these accounts will be informed via an in-app notification from Twitter of some of the initial account security measures we will be requiring or strongly recommending going forward,” it said in a blog post announcing the pre-emptive step.

Last month Twitter said it would be dialling up efforts to combat misinformation and election interference, as well as pledging to help get out the vote — going on to launch an election hub to help voters navigate the 2020 poll earlier this week.

Its latest election-focused security move follows an embarrassing account hack incident in July which saw scores of verified users’ accounts accessed and used to tweet out a cryptocurrency scam.

Clearly, Twitter won’t want a politically-flavored repeat of that.

Twitter said accounts that will be required to take steps to tighten their security are:

  • US Executive Branch and Congress

  • US Governors and Secretaries of State

  • Presidential campaigns, political parties and candidates with Twitter Election Labels running for US House, US Senate, or Governor

  • Major US news outlets and political journalists

As well as requiring users in these categories to have a strong password — prompting those without one to update it next time they log in — Twitter said it will also enable Password reset protection for the accounts by default.

“This is a setting that helps prevent unauthorized password changes by requiring an account to confirm its email address or phone number to initiate a password reset,” it noted.

It will also encourage these categories of users to enable two-factor authentication (2FA) as a further measure to bolster against unauthorized logins, although it will not require that 2FA be switched on.

The platform also said it would be implementing extra layers of what it called “proactive internal security safeguards” for the aforementioned accounts, including:

  • More sophisticated detections and alerts to help us, and account holders, respond rapidly to suspicious activity

  • Increased login defenses to prevent malicious account takeover attempts

  • Expedited account recovery support to ensure account security issues are resolved quickly

Perigee infrastructure security solution from former NSA employee moves into public beta

Perigee founder Mollie Breen used to work for the NSA, where she built a security solution to help protect the agency’s critical infrastructure. She spent the last two years at Harvard Business School talking to Chief Information Security Officers (CISOs) and fine-tuning the idea she started at the NSA into a commercial product.

Today, the solution that she built moves into public beta and will compete at TechCrunch Disrupt Battlefield with other startups for $100,000 and the Disrupt Cup.

Perigee helps protect things like heating and cooling systems or elevators that may lack patches or true security, yet are connected to the network in a very real way. It learns what normal behavior looks like for an operational system on the network, such as which systems it interacts with and which individual employees tend to access it. It can then determine when something seems awry and stop anomalous activity before it reaches the network. Without a solution like the one Breen has built, these systems would be vulnerable to attack.

“Perigee is a cloud-based platform that creates a custom firewall for every device on your network,” Breen told TechCrunch. “It learns each device’s unique behavior, the quirks of its operational environment and how it interacts with other devices to prevent malicious and abnormal usage while providing analytics to boost performance.”

Image Credits: Perigee (HVAC fan dashboard view)

One of the key aspects of her solution is that it doesn’t require an agent, a small piece of software on the device, to make it work. Breen says this is especially important since that approach doesn’t scale across thousands of devices and can also introduce bugs from the agent itself. What’s more, it can use up precious resources on these devices if they can even support a software agent.

“Our sweet spot is that we can protect those thousands of devices by learning those nuances and we can do that really quickly, scaling up to thousands of devices with our generalized model because we take this agentless-based approach,” she said.

By creating these custom firewalls, her company is able to place security in front of the device, preventing a hacker from using it as a vehicle to get onto the network.

“One thing that makes us fundamentally different from other companies out there is that we sit in front of all of these devices as a shield,” she said. That essentially stops an attack before it reaches the device.
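
Breen hasn’t shared implementation details, but the learn-then-enforce idea she describes can be illustrated with a toy sketch (the device names and logic below are hypothetical, not Perigee’s actual code): record which peers normally talk to a device during a learning window, then block anything outside that baseline.

```python
# Toy sketch of baseline-based anomaly detection for networked devices.
# Hypothetical illustration only; not Perigee's actual implementation.
from collections import defaultdict

class DeviceBaseline:
    def __init__(self):
        # device -> set of (peer, port) pairs observed while learning
        self.known = defaultdict(set)
        self.learning = True

    def observe(self, device, peer, port):
        flow = (peer, port)
        if self.learning:
            self.known[device].add(flow)
            return "learn"
        # Once enforcing, traffic outside the learned baseline is
        # blocked before it ever reaches the device.
        return "allow" if flow in self.known[device] else "block"

monitor = DeviceBaseline()
monitor.observe("hvac-fan-3", "bms-controller", 47808)  # normal building traffic
monitor.learning = False                                # switch to enforcement
print(monitor.observe("hvac-fan-3", "bms-controller", 47808))  # -> allow
print(monitor.observe("hvac-fan-3", "laptop-7", 445))          # -> block
```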

While Breen acknowledges that her approach can add a small amount of latency, it’s a tradeoff CISOs have told her they are willing to make to protect these kinds of operational systems from possible attacks. Her system also provides real-time status updates on how these devices are operating, giving customers centralized device visibility. If issues are found, the software recommends corrective action.

It’s still very early for her company, which Breen founded last year. She has raised an undisclosed amount of pre-seed capital. While Perigee is pre-revenue with just one employee, she is looking to add paying customers and begin growing the company as she moves into a wider public beta.

JupiterOne raises $19M Series A to automate cyber asset management

Asset management might not be the most exciting topic in security, but it’s often an overlooked area of cyber-defenses. Knowing exactly what assets your company has makes it easier to know where the security weak spots are.

That’s the problem JupiterOne is trying to fix.

“We built JupiterOne because we saw a gap in how organizations manage the security and compliance of their cyber assets day to day,” said Erkang Zheng, the company’s founder and chief executive.

The Morrisville, N.C.-based startup, which spun out from healthcare cloud firm LifeOmic in 2018, helps companies see all of their digital and cloud assets by integrating with dozens of services and tools, including Amazon Web Services, Cloudflare, and GitLab, and centralizing the results into a single monitoring tool.

JupiterOne says it makes it easier for companies to spot security issues and maintain compliance, with an aim of helping companies prevent security lapses and data breaches by catching issues early on.
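
JupiterOne hasn’t detailed its internals, but the underlying pattern, normalizing asset records from many providers into one inventory that can be queried for weak spots, might look something like this hypothetical sketch; the connector names, fields and checks are illustrative only:

```python
# Hypothetical sketch of a unified cyber asset inventory; the connectors,
# fields and checks are illustrative, not JupiterOne's actual API.
from dataclasses import dataclass

@dataclass
class Asset:
    source: str      # which integration produced the record, e.g. "aws"
    asset_id: str
    kind: str        # e.g. "server", "repo", "dns-record"
    encrypted: bool  # one example compliance attribute

def aws_connector():     # stand-in for a real AWS integration
    return [Asset("aws", "i-0abc123", "server", encrypted=False)]

def gitlab_connector():  # stand-in for a real GitLab integration
    return [Asset("gitlab", "repo/42", "repo", encrypted=True)]

def collect(connectors):
    # Normalize every provider's assets into one central inventory
    inventory = []
    for connector in connectors:
        inventory.extend(connector())
    return inventory

inventory = collect([aws_connector, gitlab_connector])
# With everything in one place, weak spots become a simple query
for asset in inventory:
    if not asset.encrypted:
        print(f"flag: {asset.source}:{asset.asset_id} ({asset.kind}) unencrypted")
```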

The company already has Reddit, Databricks and Auth0 as customers, and just secured $19 million in its Series A, led by Bain Capital Ventures and with participation from Rain Capital and its parent company LifeOmic.

As part of the deal, Bain partner Enrique Salem will join JupiterOne’s board. “We see a large multibillion dollar market opportunity for this technology across mid-market and enterprise customers,” he said. Asset management is slated to be an $8.5 billion market by 2024.

Zheng told TechCrunch the company plans to use the funds to accelerate its engineering efforts and its go-to-market strategy, with new product features to come.

HacWare wants you to hate email security a little less

Let’s face it: email security is something a lot of people would rather think less about. When you’re not being deluged with a daily onslaught of phishing attacks trying to steal your passwords, you’re expected to dodge the simulated phishing emails sent by your own company, all for the sake of checking a compliance box.

One security startup wants that to change. Tiffany Ricks founded HacWare in Dallas, Texas, in 2017 to help bring better cybersecurity awareness to small businesses without getting in the way of the day job.

“We’re trying to show them what they don’t know about cybersecurity and educate them on that so they can get back to work,” Ricks told TechCrunch, ahead of the company’s participation in TechCrunch’s Startup Battlefield.

Ricks, a former Pentagon contractor, has her roots as an ethical hacker. As a penetration tester, or “red teamer,” she would test the limits of a company’s cybersecurity defenses by using a number of techniques, including social engineering attacks, which often involves tricking someone into turning over a password or access to a system.

“It was just very easy to get into organizations by social engineering employees,” said Ricks. But the existing offerings on the market, she said, weren’t up to the task of educating users at scale.

“And so we built the product in-house,” she said.

HacWare sits on a company’s email server and uses machine learning to categorize and analyze each message for risk — the same things you would look for in a phishing email, like suspicious links and attachments.

HacWare tries to identify the most at-risk users, like those working in finance and human resources, who are more vulnerable to business email compromise attacks that try to steal sensitive employee information. The system also runs automated simulated phishing attacks, using the contents of a user’s inbox to send personalized phishing emails that test the user.
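
HacWare hasn’t published its model, but a stripped-down, heuristic stand-in for the message-scoring idea might look like the sketch below (the features and weights are invented for illustration and are not HacWare’s):

```python
# Hypothetical heuristic phishing-risk scorer; features and weights are
# invented for illustration and are not HacWare's model.
import re

SUSPICIOUS_PHRASES = ("urgent", "verify your account", "password", "wire transfer")
RISKY_EXTENSIONS = (".exe", ".js", ".html")

def risk_score(subject, body, attachments):
    text = f"{subject} {body}".lower()
    score = 0.2 * sum(phrase in text for phrase in SUSPICIOUS_PHRASES)
    # Links that point at a raw IP address are a classic phishing tell
    score += 0.3 * len(re.findall(r"https?://\d{1,3}(?:\.\d{1,3}){3}", body))
    score += 0.3 * sum(name.lower().endswith(RISKY_EXTENSIONS) for name in attachments)
    return min(score, 1.0)

score = risk_score(
    "URGENT: verify your account",
    "Click http://203.0.113.7/login to keep access",
    ["invoice.exe"],
)
print(f"risk: {score:.1f}")  # high scores could flag a user for extra training
```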

Email remains the most popular way for attackers to use phishing and other social engineering attacks to try to steal sensitive information, according to Verizon’s annual data breach report. These attackers want your passwords or to try to trick you into sending sensitive documents, like employee tax and financial information.

But as the adage goes, humans are the weakest link in the security chain.

Stronger security features, like two-factor authentication, make it far more difficult for hackers to break into accounts, but they’re not a panacea. It was only in July that Twitter was hit by a devastating breach that saw hackers use social engineering techniques to trick employees into giving over access to an internal “admin” tool, which the hackers abused to hijack high-profile accounts and spread a cryptocurrency scam.

HacWare’s approach to email security appears to be working. “We’ve seen a 60% reduction in phishing responses,” she said. The automated phishing simulations also help to reduce IT workload, she added.

Ricks moved the bootstrapped HacWare to New York City after securing a place in Techstars’ accelerator program. HacWare is seeking to raise a $1 million seed round, said Ricks. For now, the company is “laser focused” on email security, but the company has growth in its sights.

“I see us expanding into just trying to understand human behavior and trying to figure out how we can mitigate that risk,” she said.

“We believe that cyber security is an integrated approach,” said Ricks. “But first we definitely need to start with the root cause, and the root cause is we need to really get our people the tools they need to empower them to make sound cybersecurity decisions,” she said.