In the NYC enterprise startup scene, security is job one

While most people probably would not think of New York as a hotbed for enterprise startups of any kind, it is actually quite active. When you stop to consider that the world’s biggest banks and financial services companies are located there, it would certainly make sense for security startups to concentrate on such a huge potential market — and it turns out, that’s the case.

According to Crunchbase, there are dozens of security startups based in the city, covering everything from biometrics and messaging security to identity, security scoring and graph-based analysis tools. There are established players too: Symphony, which originally launched in the city (although it is now on the west coast), has raised almost $300 million. It was formed by a consortium of the world’s biggest financial services companies back in 2014 to create a secure unified messaging platform.

There is a reason such a broad-based ecosystem has grown up in a single place. The companies that want to discuss these kinds of solutions aren’t based in Silicon Valley. This isn’t the typical case of startups selling to other startups. These are startups that set up in New York because that’s where their primary customers are most likely to be.

In this article, we look at a few promising early-stage security startups based in Manhattan.

Hypr: Decentralizing identity

Hypr is looking at decentralizing identity with the goal of making it much more difficult to steal credentials. As company co-founder and CEO George Avetisov puts it, the idea is to get rid of the credentials honeypot sitting on the servers at most large organizations and move the identity processing to the device.

Hypr lets organizations remove stored credentials from the logon process. Photo: Hypr

“The goal of these companies in moving to decentralized authentication is to isolate account breaches to one person,” Avetisov explained. When you get rid of that centralized store, and move identity to the devices, you no longer have to worry about an Equifax scenario because the only thing hackers can get is the credentials on a single device — and that’s not typically worth the time and effort.

At its core, Hypr is an SDK. Developers can tap into the technology in their mobile app or website to push the authentication step to the device. That could mean using the fingerprint sensor on a phone or a security key like a YubiKey. Secondary authentication could include taking a picture. Over time, customers can delete the centralized credential store as they shift to the Hypr method.
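To make the model concrete, here is a rough Python sketch of the general pattern Avetisov is describing, not Hypr’s actual SDK: the device holds a private key (ideally behind a biometric check), the server stores only the matching public key, and login becomes a signed challenge rather than a password lookup. It uses the third-party cryptography package, and every name in it is illustrative.

```python
# Illustrative sketch of device-bound authentication (not Hypr's SDK),
# using the "cryptography" package.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# On the device: generate a key pair. The private key never leaves the device
# (in practice it would sit in a secure enclave behind a fingerprint check).
device_private_key = ec.generate_private_key(ec.SECP256R1())
device_public_key = device_private_key.public_key()  # only this is registered with the server

# On the server: issue a random challenge instead of checking a stored password.
challenge = os.urandom(32)

# On the device: sign the challenge once the user passes the local biometric check.
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# On the server: verify the signature with the registered public key.
try:
    device_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("Authenticated: no shared secret was ever stored server-side.")
except InvalidSignature:
    print("Authentication failed.")
```

The point of the pattern is that a breach of the server yields only public keys, so there is no central honeypot of reusable credentials to steal.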

The company has raised $15 million and has 35 employees based in New York City.

Uplevel Security: Making connections with graph data

Uplevel’s founder Liz Maida began her career at Akamai where she learned about the value of large data sets and correlating that data to events to help customers understand what was going on behind the scenes. She took those lessons with her when she launched Uplevel Security in 2014. She had a vision of using a graph database to help analysts with differing skill sets understand the underlying connections between events.

“Let’s build a system that allows for correlation between machine intelligence and human intelligence,” she said. If the analyst agrees or disagrees, that information gets fed back into the graph, and the system learns over time the security events that most concern a given organization.

“What is exciting about [our approach] is you get a new alert and build a mini graph, then merge that into the historical data, and based on the network topology, you can start to decide if it’s malicious or not,” she said.

Photo: Uplevel

The company hopes that by providing a graphical view of the security data, it can help all levels of security analysts figure out the nature of the problem, select a proper course of action, and further build the understanding and connections for future similar events.
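For a sense of what that merge step can look like in practice, here is a toy sketch (not Uplevel’s actual code) using the networkx Python library: a new alert becomes a mini graph, gets composed into the historical graph, and is then checked for links to entities already flagged as malicious. The hosts, IP addresses and attribute names are hypothetical.

```python
# Toy sketch of merging an alert's "mini graph" into historical graph data.
import networkx as nx

# Historical graph built from past alerts; names here are made up.
history = nx.Graph()
history.add_edge("host-42", "198.51.100.7", relation="connected_to")
history.nodes["198.51.100.7"]["malicious"] = True  # previously confirmed bad IP

# Mini graph built from a newly received alert.
alert = nx.Graph()
alert.add_edge("host-17", "198.51.100.7", relation="connected_to")
alert.add_edge("host-17", "mail.example.com", relation="received_from")

# Merge the alert into the historical data.
merged = nx.compose(history, alert)

# Does the new alert touch anything already known to be malicious?
suspicious = [n for n in alert.nodes if merged.nodes[n].get("malicious")]
print("Overlaps with known-bad entities:", suspicious)
```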

Maida said they took their time creating all aspects of the product: making the front end attractive, making the underlying graph database and machine learning algorithms as useful as possible and allowing companies to get up and running quickly. Making it “self serve” was a priority, partly because they wanted customers digging in quickly and partly because, with only 10 people, they didn’t have the staff to do a lot of hand holding.

Security Scorecard: Offering a way to measure security

The founders of Security Scorecard met while working at the NYC ecommerce site Gilt. For a time, ecommerce and adtech ruled the startup scene in New York, but more recently enterprise startups have come into their own. Part of the reason is that many people got their start at those foundational companies, and when they launched their own startups, they set out to solve the kinds of enterprise problems they had encountered along the way. In Security Scorecard’s case, the problem was how a CISO could reasonably measure the security of a company they were buying services from.

Photo: Security Scorecard

“Companies were doing business with third-party partners. If one of those companies gets hacked, you lose. How do you vet the security of companies you do business with?” company co-founder and CEO Aleksandr Yampolskiy asked when they were forming the company.

They knew some companies did undertake serious vetting of their partners, but it was usually via a questionnaire. As a former CISO at Gilt, Yampolskiy had felt that pain point personally. So they created a scoring system based on publicly available information, which wouldn’t require the companies being evaluated to participate. Armed with this data, they could apply a letter grade from A to F.

Security Scorecard offers a way to capture security signals automatically and see at a glance just how well a company’s vendors are doing. It doesn’t stop with the simple letter grade, though, allowing you to dig into each company’s strengths and weaknesses, see how they compare to other companies in their peer group and track how they have performed over time.
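Security Scorecard’s actual signals and weightings are proprietary, but a toy Python sketch shows the basic idea of rolling externally observable measurements into a single A to F grade. Every signal, weight and cutoff below is made up purely for illustration.

```python
# Toy example of turning weighted, externally observable signals into a letter grade.
# All signals, weights and thresholds are hypothetical.

signals = {                 # each scored 0-100 from public observation
    "patching_cadence": 85,
    "tls_configuration": 70,
    "leaked_credentials": 40,
    "open_ports": 90,
}

weights = {                 # how much each factor matters (sums to 1.0)
    "patching_cadence": 0.3,
    "tls_configuration": 0.2,
    "leaked_credentials": 0.3,
    "open_ports": 0.2,
}

score = sum(signals[k] * weights[k] for k in signals)

def letter_grade(score: float) -> str:
    """Map a 0-100 score onto an A-F grade."""
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return grade
    return "F"

print(f"Vendor score: {score:.1f} -> grade {letter_grade(score)}")
```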

It also gives customers the ability to see how they compare to peers in their own industry, and to use the number either to tout their security posture or, conversely, to make the case for more budget to improve it.

The company launched in 2013 and has raised over $62 million, according to Crunchbase. Today, they have 130 employees and 400 enterprise customers.

If you’re an enterprise security startup, you need to be where the biggest companies in the world do business. That’s New York City, and that’s precisely why these three companies, and dozens of others, have chosen to call it home.

Twitter banned Russian security firm Kaspersky Lab from buying ads

The U.S. government isn’t the only one feeling skittish about Kaspersky Lab. On Friday, the Russian security firm’s founder Eugene Kaspersky confronted Twitter’s apparent ban on advertising from the company, a decision it quietly issued in January.

“In a short letter from an unnamed Twitter employee, we were told that our company ‘operates using a business model that inherently conflicts with acceptable Twitter Ads business practices,'” Kaspersky wrote.

“One thing I can say for sure is this: we haven’t violated any written – or unwritten – rules, and our business model is quite simply the same template business model that’s used throughout the whole cybersecurity industry: We provide users with products and services, and they pay us for them.”

He noted that the company spent around €75,000 (about $93,000) promoting its content on Twitter in 2017.

Kaspersky called on Twitter CEO Jack Dorsey to explain the motivation behind the ban after the company failed to respond to an official February 6 letter from Kaspersky Lab.

More than two months have passed since then, and the only reply we received from Twitter was the copy of the same boilerplate text. Accordingly, I’m forced to rely on another (less subtle but nevertheless oft and loudly declared) principle of Twitter’s – speaking truth to power – to share details of the matter with interested users and to publicly ask that you, dear Twitter executives, kindly be specific as to the reasoning behind this ban; fully explain the decision to switch off our advertising capability, and to reveal what other cybersecurity companies need to do in order to avoid similar situations.

In a statement about the incident, Twitter reiterated that Kaspersky Lab’s business model “inherently conflicts with acceptable Twitter Ads business practices.” In a statement to CyberScoop, Twitter pointed to the late 2017 Department of Homeland Security directive to eliminate Kaspersky software from Executive Branch systems due to the company’s relationship with Russian intelligence.

“The Department is concerned about the ties between certain Kaspersky officials and Russian intelligence and other government agencies, and requirements under Russian law that allow Russian intelligence agencies to request or compel assistance from Kaspersky and to intercept communications transiting Russian networks,” DHS asserted in the directive at the time.

LinkedIn’s AutoFill plugin could leak user data, secret fix failed

Facebook isn’t the only one in the hot seat over data privacy. A flaw in LinkedIn’s AutoFill plugin that websites use to let you quickly complete forms could have allowed hackers to steal your full name, phone number, email address, ZIP code, company, and job title. Malicious sites have been able to invisibly render the plugin on their entire page so if users who are logged into LinkedIn click anywhere, they’d effectively be hitting a hidden “AutoFill with LinkedIn” button and giving up their data.

Researcher Jack Cable discovered the issue on April 9th, 2018 and immediately disclosed it to LinkedIn. The company issued a fix on April 10th but didn’t inform the public of the issue. Cable quickly informed LinkedIn that its fix, which restricted use of the AutoFill feature to whitelisted sites that pay LinkedIn to host their ads, still left it open to abuse. If any of those sites have cross-site scripting vulnerabilities, which Cable confirmed some do, hackers can still run AutoFill on their own sites by embedding an iframe pointing to the vulnerable whitelisted site. After getting no response from LinkedIn for nine days, Cable reached out to TechCrunch.

LinkedIn’s AutoFill tool

LinkedIn tells TechCrunch it doesn’t have evidence that the weakness was exploited to gather user data. But Cable says “it is entirely possible that a company has been abusing this without LinkedIn’s knowledge, as it wouldn’t send any red flags to LinkedIn’s servers.”

I demoed the security fail on a site Cable set up. It was able to show me my LinkedIn sign-up email address with a single click anywhere on the page, without me ever knowing I was interacting with an exploited version of LinkedIn’s plugin. Even if users have configured their LinkedIn privacy settings to hide their email, phone number, or other info, it can still be pulled in from the AutoFill plugin.

“It seems like LinkedIn accepts the risk of whitelisted websites (and it is a part of their business model), yet this is a major security concern,” Cable wrote to TechCrunch. [Update: He’s now posted a detailed write-up of the issue.]

A LinkedIn spokesperson issued this statement to TechCrunch, saying it’s planning to roll out a more comprehensive fix shortly:

“We immediately prevented unauthorized use of this feature, once we were made aware of the issue. We are now pushing another fix that will address potential additional abuse cases and it will be in place shortly. While we’ve seen no signs of abuse, we’re constantly working to ensure our members’ data stays protected. We appreciate the researcher responsibly reporting this and our security team will continue to stay in touch with them.

For clarity, LinkedIn AutoFill is not broadly available and only works on whitelisted domains for approved advertisers. It allows visitors to a website to choose to pre-populate a form with information from their LinkedIn profile.”

Facebook has recently endured heavy scrutiny regarding data privacy and security, and just yesterday confirmed it was investigating an issue with unauthorized JavaScript trackers pulling in user info from sites using Login With Facebook.

But Cable’s findings demonstrate that other tech giants deserve increased scrutiny too. In an effort to colonize the web with their buttons and gather more data about their users, sites like LinkedIn have played fast and loose with people’s personally identifiable information.

The research shows how relying on whitelists of third-party sites doesn’t always solve a problem. All it takes is for one of those sites to have its own security flaw, and a bigger vulnerability can be preyed upon. Over 70 of the world’s top websites were on LinkedIn’s whitelist, including Twitter, Stanford, Salesforce, Edelman, and Twilio. OpenBugBounty shows the prevalence of cross-site scripting problems. These “XSS” vulnerabilities accounted for 84% of security flaws documented by Symantec in 2007, and bug bounty service HackerOne describes XSS as a massive issue to this day.

With all eyes on security, tech companies may need to become more responsive to researchers pointing out flaws. While LinkedIn initially moved quickly, its attention to the issue lapsed while only a broken fix was in place. Meanwhile, government officials considering regulation should focus on strengthening disclosure requirements for companies that discover breaches or vulnerabilities. If they know they’ll have to embarrass themselves by informing the public about their security flaws, they might work harder to keep everything locked tight.

TaskRabbit CEO posts statement as its app returns following a cybersecurity breach

After TaskRabbit took down its website and app to investigate what it called a “cybersecurity incident,” both are back online. The Ikea-owned platform for on-demand labor also posted an update from its chief executive officer, Stacy Brown-Philpot, about the incident.

“While our investigation is ongoing, preliminary evidence shows that an unauthorized user gained access to our systems. As a result, certain personally identifiable information may have been compromised,” she wrote.

While Brown-Philpot said that an outside forensics team is currently working to identify what information was compromised and will notify all affected users, she urged the platform’s customers and providers, called “taskers,” to monitor online accounts for suspicious activity and change passwords if they used the same login information on other services.

TaskRabbit will add several new security measures because of the incident. Brown-Philpot said they are working on ways to make their login process more secure, reduce the amount of data retained about customers and taskers and “enhance overall network cyber threat detection technology.”

The company will continue posting updates to a dedicated page on its website, which also includes a FAQ for taskers who were unable to complete jobs while the app was offline. TaskRabbit says people who were forced to reschedule or cancel tasks will be compensated.

Facebook, Microsoft and others sign anti-cyberattack pledge

Microsoft, Facebook and Cloudflare are among a group of technology firms that have signed a joint pledge committing publicly not to assist offensive government cyberattacks.

The pledge also commits them to work together to enhance security awareness and the resilience of the global tech ecosystem.

The four top-line principles the firms are agreeing to are [ALL CAPS theirs]:

  • 1. WE WILL PROTECT ALL OF OUR USERS AND CUSTOMERS EVERYWHERE.
  • 2. WE WILL OPPOSE CYBERATTACKS ON INNOCENT CITIZENS AND ENTERPRISES FROM ANYWHERE.
  • 3. WE WILL HELP EMPOWER USERS, CUSTOMERS AND DEVELOPERS TO STRENGTHEN CYBERSECURITY PROTECTION.
  • 4. WE WILL PARTNER WITH EACH OTHER AND WITH LIKEMINDED GROUPS TO ENHANCE CYBERSECURITY.

You can read the full Cybersecurity Tech Accord here.

So far 34 companies have signed up to the initiative, which was announced on the eve of the RSA Conference in San Francisco, including ARM, Cloudflare, Facebook, GitHub, LinkedIn, Microsoft and Telefonica.

In a blog post announcing the initiative, Microsoft’s Brad Smith writes that he’s hopeful more will soon follow.

“Protecting our online environment is in everyone’s interest,” says Smith. “The companies that are part of the Cybersecurity Tech Accord promise to defend and advance technology’s benefits for society. And we commit to act responsibly, to protect and empower our users and customers, and help create a safer and more secure online world.”

Notably not on the list are big tech’s other major guns: Amazon, Apple and Google — nor indeed most major mobile carriers (TC’s parent Oath’s parent Verizon is not yet a signee, for example).

And, well, tech giants are often the most visible commercial entities bowing to political pressure to comply with ‘regulations’ that do the opposite of enhance the security of their users living under certain regimes — merely to ensure continued market access for themselves.

But the accord raises more nuanced questions than who has not (yet) spilt ink on it.

What does ‘protect’ mean in this cybersecurity context? Are the companies which have signed up to the accord committing to protect their users from government mass surveillance programs, for example?

What about the problem of exploits being stockpiled by intelligence agencies — which might later leak and wreak havoc on innocent web users — as was apparently the case with the WannaCrypt malware?

Will the undersigned companies fight against (their own and other) governments doing that — in order to reduce security risks for all Internet users?

“We will strive to protect all our users and customers from cyberattacks — whether an individual, organization or government — irrespective of their technical acumen, culture or location, or the motives of the attacker, whether criminal or geopolitical,” sure sounds great in principle.

In practice this stuff gets very muddy and murky, very fast.

Perhaps the best element here is the commitment between the firms to work together for the greater security cause — including “to improve technical collaboration, coordinated vulnerability disclosure, and threat sharing, as well as to minimize the levels of malicious code being introduced into cyberspace”.

That at least may bear some tangible fruit.

Other security issues are far too tightly bound up with geopolitics for even a number of well-intentioned technology firms to be able to do much to shift the needle.

France to move ministers off Telegram, WhatsApp over security fears

The French government has said it intends to move to using its own encrypted messaging service this summer, over concerns that foreign entities could spy on officials using popular encrypted apps such as Telegram and WhatsApp.

Reuters reports that ministers are concerned about the use of foreign-built encrypted apps which do not have servers in France. “We need to find a way to have an encrypted messaging service that is not encrypted by the United States or Russia,” a digital ministry spokeswoman told the news agency. “You start thinking about the potential breaches that could happen, as we saw with Facebook, so we should take the lead.”

Telegram’s founder, Pavel Durov, is Russian, though the entrepreneur lives in exile and his messaging app has just been blocked in his home country after the company refused to hand over encryption keys to the authorities.

WhatsApp, which (unlike Telegram) is end-to-end encrypted across its entire platform — using the respected and open source Signal Protocol — is nonetheless owned by U.S. tech giant Facebook, and developed out of the U.S. (as Signal also is).

Its parent company is currently embroiled in a major data misuse scandal after it emerged that tens of millions of Facebook users’ information was passed to a controversial political consultancy without their knowledge or consent.

The ministry spokeswoman said about 20 officials and top civil servants in the French government are testing the new messaging app, with the aim of its use becoming mandatory for the whole government by the summer.

It could also eventually be made available to all citizens, she added.

Reuters reports the spokeswoman also said a state-employed developer has designed the app, using free-to-use code available for download online (which presumably means it’s based on open source software) — although she declined to name the code being used or the messaging service.

Late last week, ZDNet also reported the French government wanted to replace its use of apps like Telegram — which President Emmanuel Macron is apparently a big fan of.

It quoted Mounir Mahjoubi, France’s secretary of state for digital, saying: “We are working on public secure messaging, which will not be dependent on private offers.”

The French government reportedly already uses some secure messaging products built by defense group and IT supplier Thales. On its website Thales lists a Citadel instant messaging smartphone app — which it describes as “trusted messaging for professionals”, saying it offers “the same recognisable functionality and usability as most consumer messaging apps” with “secure messaging services on a smartphone or computer, plus a host of related functions, including end-to-end encrypted voice calls and file sharing”.

How to save your privacy from the Internet’s clutches

Another week, another massive privacy scandal. When it’s not Facebook admitting it allowed data on as many as 87 million users to be sucked out by a developer on its platform who sold it to a political consultancy working for the Trump campaign, or dating app Grindr ‘fessing up to sharing its users’ HIV status with third party A/B testers, some other ugly facet of the tech industry’s love affair with tracking everything its users do slides into view.

Suddenly, Android users discover to their horror that Google’s mobile platform tells the company where they are all the time — thanks to baked-in location tracking bundled with Google services like Maps and Photos. Or Amazon Echo users realize Jeff Bezos’ ecommerce empire has amassed audio recordings of every single interaction they’ve had with their cute little smart speaker.

The problem, as ever with the tech industry’s teeny-weeny greyscaled legalese, is that the people it refers to as “users” aren’t genuinely consenting to having their information sucked into the cloud for goodness knows what. Because they haven’t been given a clear picture of what agreeing to share their data will really mean.

Instead one or two select features, with a mote of user benefit, tend to be presented at the point of sign-up — to socially engineer ‘consent’. Then the company can walk away with a de facto license to perpetually harvest that person’s data by claiming that a consent box was once ticked.

A great example of that is Facebook’s Nearby Friends. The feature lets you share your position with your friends so — and here’s that shiny promise — you can more easily hang out with them. But do you know anyone who is actively using this feature? Yet millions of people started sharing their exact location with Facebook for a feature that’s now buried and mostly unused. Meanwhile Facebook is actively using your location to track your offline habits so it can make money targeting you with adverts.

Terms & Conditions are the biggest lie in the tech industry, as we’ve written before. (And more recently: It was not consent, it was concealment.)

Senator Kennedy of Louisiana also made the point succinctly to Facebook founder Mark Zuckerberg this week, telling him to his face: “Your user agreement sucks.” We couldn’t agree more.

Happily, disingenuous T&Cs are on borrowed time — at least for European tech users, thanks to a new European Union data protection framework that will come into force next month. The GDPR tightens consent requirements — mandating that clear and accurate information be provided to users at the point of sign-up. Data collection is also more tightly tied to specific functions.

From next month, holding onto personal data without a very good reason to do so will be far more risky — because GDPR is also backed up with a regime of supersized fines that are intended to make privacy rules much harder to ignore.

Of course U.S. tech users can’t bank on benefiting from European privacy regulations. And while there are now growing calls in the country for legislation to protect people’s data — in a bid to head off the next democracy-denting Cambridge Analytica scandal, at the very least — any such process will take a lot of political will.

It certainly will not happen overnight. And you can expect tech giants to fight tooth and nail against laws being drafted and passed — as indeed Facebook, Google and others lobbied fiercely to try to get GDPR watered down.

Facebook has already revealed it will not be universally applying the European regulation — which means people in North America are likely to get a lower degree of privacy than Facebook users everywhere else in the world. Which doesn’t exactly sound fair.

When it comes to privacy, some of you may think you have nothing to hide. But that’s a straw man. It’s especially hard to defend this line of thinking now that big tech companies have attracted so much soft power they can influence elections, inflame conflicts and divide people in general. It’s time to think about the bigger impact of technology on the fabric of society, and not just your personal case.

Shifting the balance

So what can you do right now to stop tech giants, advertisers and unknown entities from tracking everything you do online — and trying to join the dots of your digital activity to paint a picture of who they think you are? At least, everything short of moving to Europe, where privacy is a fundamental right.

There are some practical steps you can take to limit day-to-day online privacy risks by reducing third party access to your information and shielding more of your digital activity from prying eyes.

Not all these measures are appropriate for every person. It’s up to you to determine how much effort you want (or need) to put in to shield your privacy.

You may be happy to share a certain amount of personal data in exchange for access to a certain service, for example. But even then it’s unlikely that the full trade-off has been made clear to you. So it’s worth asking yourself if you’re really getting a good deal.

Once people’s eyes are opened to the fine-grained detail and depth of personal information being harvested, even some very seasoned tech users have reacted with shock — saying they had no idea, for example, that Facebook Messenger was continuously uploading their phone book and logging their calls and SMS metadata.

This is one of the reasons why the U.K.’s information commissioner has been calling for increased transparency about how and why data flows. Because for far too long tech savvy entities have been able to apply privacy hostile actions in the dark. And it hasn’t really been possible for the average person to know what’s being done with their information. Or even what data they are giving up when they click ‘I agree’.

Why does an A/B testing firm need to know a person’s HIV status? Why does a social network app need continuous access to your call history? Why should an ad giant be able to continuously pin your movements on a map?

Are you really getting so much value from an app that you’re happy for the company behind it and anyone else they partner with to know everywhere you go, everyone you talk to, the stuff you like and look at — even to have a pretty good idea what you’re thinking?

Every data misuse scandal shines a bit more light on some very murky practices — which will hopefully generate momentum for rule changes to disinfect data handling processes and strengthen individuals’ privacy by spotlighting trade-offs that have zero justification.

With some effort — and good online security practices (which we’re taking as a given for the purposes of this article, but one quick tip: Enable 2FA everywhere you can) — you can also make it harder for the web’s lurking watchers to dine out on your data.

Just don’t expect the lengths you have to go to protect your privacy to feel fair or just — the horrible truth is this fight sucks.

But whatever you do, don’t give up.

How to hide on the internet

Action: Tape over all your webcams
Who is this for: Everyone — even Mark Zuckerberg!
How difficult is it: Easy peasy lemon squeezy
Tell me more: You can get fancy removable stickers for this purpose (noyb has some nice ones). Or you can go DIY and use a bit of masking tape — on your laptop, your smartphone, even your smart TV… If your job requires you to be on camera, such as for some conference calls, and you want to look a bit more pro you can buy a webcam cover. Sadly locking down privacy is rarely this easy.

Action: Install HTTPS Everywhere
Who is this for: Everyone — seriously do it
How difficult is it: Mild effort
Tell me more: Many websites offer encryption. With HTTPS, people running the network between your device and the server hosting the website you’re browsing can’t read the content of your requests or your internet traffic. But some websites still load unencrypted pages by default (HTTP), which creates a security risk. The EFF has developed a browser extension that makes sure you access all websites that offer HTTPS using… HTTPS.

Action: Use tracker blockers
Who is this for: Everyone — except people who like being ad-stalked online
How difficult is it: Mild effort
Tell me more: “Trackers” refers to a whole category of privacy-hostile technologies designed to follow and record what web users are doing as they move from site to site, and even across different devices. Trackers come in a range of forms these days. And there are some pretty sophisticated ways of being tracked (some definitely harder to thwart than others). But to combat trackers being deployed on popular websites — which are probably also making the pages slower to load than they otherwise would be — there’s now a range of decent, user-friendly tracker blockers to choose from. Pro-privacy search engine DuckDuckGo recently added a tracker blocker to its browser extensions, for example. Disconnect.me is also a popular extension for blocking trackers from third-party websites. Firefox also has a built-in tracker blocker, which is now enabled by default in the mobile apps. If you’re curious and want to see the list of trackers on popular websites, you can also install Kimetrak to understand how widespread the issue is.

Action: Use an ad blocker
Who is this for: Everyone who can support the moral burden
How difficult is it: Fairly easy these days but you might be locked out of the content on some news websites as a result
Tell me more: If you’ve tried using a tracker blocker, you may have noticed that many ads have been blocked in the process. That’s because most ads load from third-party servers that track you across multiple sites. So if you want to go one step further and block all ads, you should install an ad blocker. Some browsers like Opera come with an ad blocker. Otherwise, we recommend uBlock Origin on macOS, Windows, Linux and Android. 1Blocker is a solid option on iOS.
But let’s be honest, TechCrunch makes some money with online ads. If 100% of web users installed an ad blocker, many websites you know and love would simply go bankrupt. While your individual choice won’t have a material impact on the bottom line, consider whitelisting the sites you like. And if you’re angry at how many trackers your favorite news site is running, try emailing them to ask (politely) if they can at least reduce the number of trackers they use.

Action: Make a private search engine your default
Who is this for: Most people
How difficult is it: A bit of effort because your search results might become slightly less relevant
Tell me more: Google probably knows more about you than even Facebook does, thanks to the things you tell it when you type queries into its search engine. And that’s just the start of how it tracks you — if you use Android it will keep running tabs on everywhere you go unless you opt out of location services. It also has its tracking infrastructure embedded on three-quarters of the top million websites. So chances are it’s following what you’re browsing online — unless you also take steps to lock down your browsing (see below).
But one major way to limit what Google knows about you is to switch to an alternative search engine when you need to look something up on the Internet. This isn’t as hard as it used to be, as there are some pretty decent alternatives now — such as DuckDuckGo, which Apple will let you set as the default search engine on iOS — or Qwant for French-speaking users. German users can check out Cliqz. You will also need to remember to be careful about any voice assistants you use, as they often default to using Google to look stuff up on the web.

Action: Use private browser sessions
Who is this for: Most people
How difficult is it: Not at all if you understand what a private session is
Tell me more: All browsers on desktop and mobile now let you open a private window. While this can be a powerful tool, it is often misunderstood. By default, private sessions don’t make you more invisible — you’ll get tracked from one tab to another. But private sessions let you start with a clean slate. Every time you close your private session, all your cookies are erased. It’s like you disappear from everyone’s radar. You can then reopen another private session and pretend that nobody knows who you are. That’s why using a private session for weeks or months doesn’t do much, but short private sessions can be helpful.

Action: Use multiple browsers and/or browser containers
Who is this for: People who don’t want to stop using social media entirely
How difficult is it: Some effort to not get in a muddle
Tell me more: Using different browsers for different online activities can be a good way of separating portions of your browsing activity. You could, for example, use one browser on your desktop computer for your online banking, say, and a different browser for your social networking or ecommerce activity. Taking this approach further, you could use different mobile devices when you want to access different apps. The point of dividing your browsing across different browsers/devices is to try to make it harder to link all your online activity to you. That said, lots of adtech effort has been put into developing cross-device tracking techniques — so it’s not clear that fragmenting your browsing sessions will successfully beat all the trackers.
In a similar vein, in 2016 Mozilla added a feature to its Firefox browser that’s intended to help web users segregate online identities within the same browser — a container feature, now available via the Multi-Account Containers extension. This approach gives users some control but it does not stop their browser being fingerprinted and all their web activity in it linked and tracked. It may help reduce some cookie-based tracking, though.
Last month Mozilla also updated the container feature to add one that specifically isolates a Facebook user’s identity from the rest of the web. This limits how Facebook can track a user’s non-Facebook web browsing — which yes Facebook does do, whatever Zuckerberg tried to claim in Congress — so again it’s a way to reduce what the social network giant knows about you. (Though it should also be noted that clicking on any Facebook social plug-ins you encounter on other websites will still send Facebook your personal data.)

Action: Get acquainted with Tor
Who is this for: Activists, people with high risks attached to being tracked online, committed privacy advocates who want to help grow the Tor network
How difficult is it: Patience is needed to use Tor. Also some effort to ensure you don’t accidentally do something that compromises your anonymity
Tell me more: For the most robust form of anonymous web browsing there’s Tor. Tor’s onion network works by encrypting and routing your Internet traffic randomly through a series of relay servers to make it harder to link a specific device with a specific online destination. This does mean it’s definitely not the fastest form of web browsing around. Some sites can also try to block Tor users so the Internet experience you get when browsing in this way may suffer. But it’s the best chance of truly preserving your online anonymity. You’ll need to download the relevant Tor browser bundle to use it. It’s pretty straightforward to install and get going. But expect very frequent security updates which will also slow you down.
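For traffic outside the browser, a locally running Tor client exposes a SOCKS proxy (by default on 127.0.0.1:9050) that other programs can route through. The Python sketch below assumes Tor is already running on your machine and that the requests package is installed with its SOCKS extra (pip install requests[socks]); for ordinary browsing, stick with the Tor Browser bundle.

```python
# Sketch only: route a single HTTP request through a locally running Tor client.
# Assumes Tor is listening on 127.0.0.1:9050 and requests[socks] is installed.
import requests

TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h = DNS lookups also go through Tor
    "https": "socks5h://127.0.0.1:9050",
}

# The request leaves the Tor network from an exit relay, not from your own IP.
resp = requests.get(
    "https://check.torproject.org/api/ip",  # Tor Project's own "am I using Tor?" check
    proxies=TOR_PROXIES,
    timeout=60,
)
print(resp.json())  # e.g. {"IsTor": true, "IP": "<exit relay address>"}
```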

Action: Switch to another DNS
Who is this for: People who don’t trust their ISP
How difficult is it: Moderately
Tell me more: When you type an address in the address bar (such as techcrunch.com), your device asks a Domain Name Server to translate that address into an IP address (a unique combination of numbers and dots). By default, your ISP or your mobile carrier runs a DNS for their users. It means that they can see all your web history. Big telecom companies are going to take advantage of that to ramp up their advertising efforts. By default, your DNS query is also unencrypted and can be intercepted by people running the network. Some governments also ask telecom companies to block some websites on their DNS servers — some countries block Facebook for censorship reasons, others block The Pirate Bay for online piracy reasons.
You can configure each of your devices to use another public DNS. But don’t use Google’s public DNS! Google is an ad company, so it really wants to see your web history. Both Quad9 and Cloudflare’s 1.1.1.1 have strong privacy policies. But Quad9 is a not-for-profit organization, so it’s easier to trust them.
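To see what a resolver actually does, the short Python sketch below sends a lookup to a specific public DNS server using the third-party dnspython package (pip install dnspython). The point to notice is that whichever resolver you point at sees every name you ask it to translate, and a plain lookup like this one is still unencrypted on the wire.

```python
# Sketch only: ask a specific public resolver to translate a name into an IP address.
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)  # ignore the resolver your OS/ISP set up
resolver.nameservers = ["9.9.9.9"]                 # Quad9; 1.1.1.1 would be Cloudflare

answer = resolver.resolve("techcrunch.com", "A")
for record in answer:
    print(record.address)  # the IP address(es) the name translates to
```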

Action: Disable location services
Who is this for: Anyone who feels uncomfortable with the idea of being kept under surveillance
How difficult is it: A bit of effort finding and changing settings, and a bit of commitment to stay on top of any ‘updates’ to privacy policies which might try to revive location tracking. You also need to be prepared to accept some reduction in the utility and/or convenience of the service because it won’t be able to automatically customize what it shows you based on your location
Tell me more: The tech industry is especially keen to keep tabs on where its users are at any given moment. And thanks to the smash hit success of smartphones with embedded sensors it’s never been easier to pervasively track where people are going — and therefore to infer what they’re doing. For ad targeting purposes location data is highly valuable of course. But it’s also hugely intrusive. Did you just visit a certain type of health clinic? Were you carrying your phone loaded with location-sucking apps? Why then it’s trivially easy for the likes of Google and Facebook to connect your identity to that trip — and link all that intel to their ad networks. And if the social network’s platform isn’t adequately “locked down” — as Zuckerberg would put it — your private information might leak and end up elsewhere. It could even get passed around between all sorts of unknown entities — as the up to 87M Facebook profiles in the Cambridge Analytica scandal appear to have been. (Whistleblower Chris Wylie has said that Facebook data-set went “everywhere”.)
There are other potential risks too. Insurance premiums being assessed based on covertly collected data inputs. Companies that work for government agencies using social media info to try to remove benefits or even have people deported. Location data can also influence the types of adverts you see or don’t see. And on that front there’s a risk of discrimination if certain types of ads — jobs or housing, for example — don’t get served to you because you happen to be a person of color, say, or a Muslim. Excluding certain protected groups of people from adverts can be illegal — but that hasn’t stopped it happening multiple times on Facebook’s platform. And location can be a key signal that underpins this kind of prejudicial discrimination.
Even the prices you are offered online can depend on what is being inferred about you via your movements. The bottom line is that everyone’s personal data is being made to carry a lot of baggage these days — and most of the time it’s almost impossible to figure out exactly what that unasked for baggage might entail when you consent to letting a particular app or service track where you go.
Pervasive tracking of location at very least risks putting you at a disadvantage as a consumer. Certainly if you live somewhere without a proper regulatory framework for privacy. It’s also worth bearing in mind how lax tech giants can be where location privacy is concerned — whether it’s Uber’s infamous ‘god view’ tool or Snapchat leaking schoolkids’ location or Strava accidentally revealing the locations of military bases. Their record is pretty terrible.
If you really can’t be bothered to go and hunt down and switch off every location setting, one fairly crude action you can take is to buy a Faraday cage carry case — Silent Pocket makes an extensive line of carry cases with embedded wireless shielding tech, for instance — which you can pop your smartphone into when you’re on the move to isolate it from the network. Of course, once you take it out it will instantly reconnect and location data will be passed again, so this is not going to do very much on its own. Nixing location tracking in the settings is much more effective.

Action: Approach VPNs with extreme caution
Who is this for: All web users — unless free Internet access is not available in your country
How difficult is it: No additional effort
Tell me more: While there may be times when you feel tempted to sign up and use a VPN service — say, to try to circumvent geoblocks so you can stream video content that’s not otherwise available in your country — if you do this you should assume that the service provider will at very least be recording everything you’re doing online. They may choose to sell that info or even steal your identity. Many of them promise you perfect privacy and great terms of service. But you can never know for sure if they’re actually doing what they say. So the rule of thumb about all VPNs is: Assume zero privacy — and avoid if at all possible. Facebook even has its own VPN — which it’s been aggressively pushing to users of its main app by badging it as a security service, with the friendly-sounding name ‘Protect’. In reality the company wants you to use this so it can track what other apps you’re using — for its own business intelligence purposes. So a more accurate name for this ‘service’ would be: ‘Protect Facebook’s stranglehold on the social web’.

Action: Build your own VPN server
Who is this for: Developers
How difficult is it: You need to be comfortable with the Terminal
Tell me more: The only VPN server you can trust is the one you built yourself! In that case, VPN servers can be a great tool if you’re on a network you don’t trust (a hotel, a conference or an office). We recommend using Algo VPN and a hosting provider you trust.

Action: Take care with third-party keyboard apps
Who is this for: All touchscreen device users
How difficult is it: No additional effort
Tell me more: Keyboard apps are a potential privacy minefield given that, if you allow cloud-enabled features, they can be in a position to suck out all the information you’re typing into your device — from passwords to credit card numbers to the private contents of your messages. That’s not to say that all third-party keyboards are keylogging everything you type. But the risk is there — so you need to be very careful about what you choose to use. Security is also key. Last year, sensitive personal data from 31M+ users of one third-party keyboard, AI.type, leaked online after the company had failed to properly secure its database server, as one illustrative example of the potential risks.
Google knows how powerful keyboards can be as a data-sucker — which is why it got into the third-party keyboard game, outing its own Gboard keyboard app first for Apple’s iOS in 2016 and later bringing it to Android. If you use Gboard you should know you are handing the adtech giant another firehose of your private information — though it claims that only search queries and “usage statistics” are sent by Gboard to Google (the privacy policy further specifies: “Anything you type other than your searches, like passwords or chats with friends, isn’t sent. Saved words on your device aren’t sent.”). So if you believe the company, Gboard is not literally a keylogger. But it is watching what you search for and how you use your phone.
Also worth remembering: Data will still be passed by Gboard to Google if you’re using an e2e encrypted messenger like Signal. So third party keyboards can erode the protection afforded by robust e2e encryption — so again: Be very careful what you use.

Action: Use end-to-end encrypted messengers
Who is this for: Everyone who can
How difficult is it: Mild effort unless all your friends are using other messaging apps
Tell me more: Choosing friends based on their choice of messaging app isn’t a great option so real world network effects can often work against privacy. Indeed, Facebook uses the fuzzy feelings you have about your friends to manipulate Messenger users to consent to continuously uploading their phone contacts, by suggesting you have to if you want to talk to your contacts. (Which is, by the by, entirely bogus.)
But if all your friends use a messaging app that does not have end-to-end encryption chances are you’ll feel forced to use that same non-privacy-safe app too. Given that the other option is to exclude yourself from the digital chatter of your friend group. Which would clearly suck. 
Facebook-owned WhatsApp does at least have end-to-end encryption — and is widely used (certainly internationally). Though you still need to be careful to opt out of any privacy-eroding terms the company tries to push. In summer 2016, for example, a major T&Cs change sought to link WhatsApp users’ accounts with their Facebook profiles (and thus with all the data Facebook holds on them) — as well as sharing sensitive stuff like your last seen status, your address book, your BFFs in WhatsApp and all sorts of metadata with Zuck’s ‘family’ of companies. Thankfully most of this privacy-hostile data sharing has been suspended in Europe, after Facebook got in trouble with local data protection agencies.

Action: Use end-to-end encryption if you use cloud storage
Who is this for: Dedicated privacy practitioners, anyone worried about third parties accessing their stuff
How difficult is it: Some effort, especially if you have lots of content stored in another service that you need to migrate
Tell me more: Dropbox IPO’d last month — and the markets signalled their approval of its business. But someone who doesn’t approve of the cloud storage giant is Edward Snowden — who in 2014 advised: “Get rid of Dropbox”, arguing the company is hostile to privacy. The problem is that Dropbox does not offer zero access encryption — because it retains encryption keys, meaning it can technically decrypt and read the data you store with it if it decides it needs to or is served with a warrant.
Cloud storage alternatives that do offer local encryption with no access to the encryption keys are available, such as Spideroak. And if you’re looking for a cloud backup service, Backblaze also offers the option to let you manage the encryption key. Another workaround if you do still want to use a service like Dropbox is to locally encrypt the stuff you want to store before you upload it — using another third party service such as Boxcryptor.
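Here is a minimal Python sketch of that local-encryption workaround, using Fernet from the cryptography package. It is not how Boxcryptor or Spideroak actually work, and the file names are hypothetical: you encrypt on your own machine, upload only the ciphertext, and keep the key somewhere the storage provider never sees.

```python
# Minimal sketch of client-side encryption before upload to any cloud storage service.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # store this in a password manager, NOT in the cloud
fernet = Fernet(key)

with open("tax-return.pdf", "rb") as f:        # hypothetical local file
    ciphertext = fernet.encrypt(f.read())

with open("tax-return.pdf.enc", "wb") as f:    # this encrypted blob is the only thing you upload
    f.write(ciphertext)

# Later, after downloading the blob again, the same key recovers the original.
with open("tax-return.pdf.enc", "rb") as f:
    original = fernet.decrypt(f.read())
```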

Action: Use an end-to-end encrypted email service
Who is this for: Anyone who wants to be sure their email isn’t being data mined
How difficult is it: Some effort — largely around migrating data and/or contacts from another email service
Tell me more: In the middle of last year Google finally announced it would no longer be data-mining the emails inside its Gmail free email service. (For a little perspective on how long it took to give up data-mining your emails, Gmail launched all the way back in 2004.) The company probably feels it has more than enough alternative data points feeding its user profiling at this point. Plus data-mining email with the rise of end-to-end encrypted messaging apps risks pushing the company over the ‘creepy line’ it’s been so keen to avoid to try to stave off the kind of privacy backlash currently engulfing Facebook.
So does it mean that Gmail is now 100% privacy safe? No, because the service is not end-to-end encrypted. But there are now some great webmail clients that do offer robust end-to-end encryption — most notably the Swiss service Protonmail. Really it’s never been easier to access a reliable, user-friendly, pro-privacy email service. If you want to go one step further, you should set up PGP encryption keys and share them with your contacts. This is a lot more difficult though.

Action: Choose iOS over Android
Who is this for: Mainstream consumers, Apple fans
How difficult is it: Depends on the person. Apple hardware is generally more expensive so there’s a cost premium
Tell me more: No connected technology is 100% privacy safe but Apple’s hardware-focused business model means the company’s devices are not engineered to try to harvest user data by default. Apple does also invest in developing pro-privacy technologies. Whereas there’s no getting around the fact Android-maker Google is an adtech giant whose revenues depend on profiling users in order to target web users with adverts. Basically the company needs to suck your data to make a fat profit. That’s why Google asks you to share all your web and app activity and location history if you want to use Google Assistant, for instance.
Android is a more open platform than iOS, though, and it’s possible to configure it in many different ways — some of which can be more locked down as regards privacy than others (the Android Open Source Project can be customized and used without Google services as default preloads, for example). But doing that kind of configuration is not going to be within reach of the average person. So iOS is the obvious choice for mainstream consumers.

Action: Delete your social media accounts
Who is this for: Committed privacy lovers, anyone bored with public sharing
How difficult is it: Some effort — mostly feeling like you’re going to miss out. But third party services can sometimes require a Facebook login (a workaround for that would be to create a dummy Facebook account purely for login purposes — using a name and email you don’t use for anything else, and not linking it to your usual mobile phone number or adding anyone you actually know IRL)
Tell me more: Deleting Facebook clearly isn’t for everyone. But ask yourself how much you use it these days anyway? You might find yourself realizing it’s not really that central to what you do on the Internet after all. The center of gravity in social networking has shifted away from mass public sharing into more tightly curated friend groups anyway, thanks to the popularity of messaging apps.
But of course Facebook owns Messenger, Instagram and WhatsApp too. So ducking out of its surveillance dragnet entirely is especially hard. Ideally you would also need to run tracker blockers (see above) as the company tracks non-Facebook users around the Internet via the pixels it has embedded on lots of popular websites.
While getting rid of your social media accounts is not a privacy panacea, removing yourself from mainstream social network platforms at least reduces the risk of a chunk of your personal info being scraped and used without your say so. Though it’s still not absolutely guaranteed that when you delete an account the company in question will faithfully remove all your information from their servers — or indeed from the servers of any third party they shared your data with.
If you really can’t bring yourself to ditch Facebook (et al) entirely, at least dive into the settings and make sure you lock down as much access to your data as you can — including checking which apps have been connected to your account and removing any that aren’t relevant or useful to you anymore.

Action: Say no to always-on voice assistants
Who is this for: Anyone who values privacy more than gimmickry
How difficult is it: No real effort
Tell me more: There’s a rash of smart speaker voice assistants on shop shelves these days, marketed in a way that suggests they’re a whole lot smarter and more useful than they actually are. In reality they’re most likely to be used for playing music (albeit with audio quality that can be very poor) or as very expensive egg timers.
Something else the PR for gadgets like Amazon’s (many) Echos or Google Home doesn’t mention is the massive privacy trade off involved with installing an always-on listening device inside your home. Essentially these devices function by streaming whatever you ask to the cloud and will typically store recordings of things you’ve said in perpetuity on the companies’ servers. Some do offer a delete option for stored audio but you would have to stay on top of deleting your data as long as you keep using the device. So it’s a tediously Sisyphean task. Smart speakers have also been caught listening to and recording things their owner didn’t actually want them to — because they got triggered by accident. Or when someone on the TV used the trigger word.
The privacy risks around smart speakers are clearly very large indeed. Not least because this type of personal data is of obvious and inevitable interest to law enforcement agencies. So ask yourself whether that fake fart dispenser gizmo you’re giggling about is really worth the trade-off of inviting all sorts of outsiders to snoop on the goings-on inside your home.

Action: Block some network requests
Who is this for: Paranoid people
How difficult is it: Need to be tech savvy
Tell me more: On macOS, you can install something called Little Snitch to get an alert every time an app tries to talk to a server. You can approve or reject each request and create rules. If, for instance, you don’t want Microsoft Word talking to Microsoft’s servers all the time, it’s a good solution — but it’s not exactly user-friendly.
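Little Snitch itself is a point-and-click app, so there’s nothing to script there. But if you want a rough feel for the underlying idea of refusing unwanted network requests, a cruder, system-wide approximation is to blackhole specific domains via your hosts file. Below is a minimal Python sketch of that approach; note that, unlike Little Snitch, it blocks per domain rather than per app, the domain names are purely illustrative placeholders, and it has to be run with admin rights (sudo).

```python
# blackhole_hosts.py: blackholes a few example domains system-wide via /etc/hosts.
# The domain list below is purely illustrative; swap in whatever you want to block.
# Run with admin rights, e.g. `sudo python3 blackhole_hosts.py`.
# (On macOS you may also need to flush the DNS cache afterwards,
#  e.g. `sudo dscacheutil -flushcache`.)

HOSTS_FILE = "/etc/hosts"
MARKER = "# added by blackhole_hosts.py"

# Hypothetical examples, not a curated tracker list.
DOMAINS = [
    "tracker.example.com",
    "telemetry.example.net",
]

def main():
    with open(HOSTS_FILE, "r") as f:
        existing = f.read()

    # Point each domain at 0.0.0.0 so lookups resolve to nowhere.
    new_entries = [
        f"0.0.0.0 {domain}  {MARKER}"
        for domain in DOMAINS
        if domain not in existing
    ]

    if new_entries:
        with open(HOSTS_FILE, "a") as f:
            f.write("\n" + "\n".join(new_entries) + "\n")
        print(f"Blackholed {len(new_entries)} domain(s).")
    else:
        print("Nothing to add; all domains already present.")

if __name__ == "__main__":
    main()
```

Undoing it is just a matter of deleting the lines tagged with the marker comment.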

Action: Use a privacy-focused operating system
Who is this for: Edward Snowden
How difficult is it: Need to be tech savvy
Tell me more: If you really want to lock everything down, you should consider using Tails as your desktop operating system. It’s a Linux distribution that leaves no trace on the machine and routes all network requests through the Tor network by default. But it’s not exactly user friendly, and it’s quite complicated to install on a USB drive. One for those whose threat model really is ‘bleeding edge’.
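If you do go down this route, the Tails download page asks you to verify the image before you write it to a USB drive (the project’s own verification tools are the recommended way to do that). As a much simpler approximation of what that check amounts to, here’s a short Python sketch that compares a downloaded image’s SHA-256 hash against a checksum you’ve copied from the official site; the file name and checksum below are placeholders.

```python
# verify_image.py: compare a downloaded image's SHA-256 against a published value.
# Both the file path and the expected checksum are placeholders; substitute the
# real values from the official download page (and prefer the project's own
# verification tools where offered).

import hashlib
import sys

IMAGE_PATH = "tails-amd64.img"                          # placeholder path
EXPECTED_SHA256 = "paste-the-published-checksum-here"   # placeholder value

def sha256_of(path, chunk_size=1 << 20):
    """Hash the file in 1 MB chunks so large images don't need to fit in RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(IMAGE_PATH)
    if actual == EXPECTED_SHA256.lower():
        print("Checksum matches; the image looks intact.")
    else:
        print("Checksum mismatch; do NOT use this image.")
        sys.exit(1)
```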

Action: Write to your political reps to demand stronger privacy laws
Who is this for: Anyone who cares about privacy, and especially Internet users in North America right now
How difficult is it: A bit of effort
Tell me more: There appears to be bipartisan appetite among U.S. lawmakers to bring in some form of regulation for Internet companies. And with tough new rules coming into force in Europe next month, it’s an especially opportune moment to push for change in the U.S., where web users will otherwise face weaker standards than international users after May 25. So it’s a great time to write to your reps reminding them you’re far more interested in your privacy being protected than in Facebook winning some kind of surveillance arms race with the Chinese. Tell them it’s past time for the U.S. to draft laws that prioritize the protection of personal data.

Action: Throw away all your connected devices — and choose your friends wisely
Who is this for: Fugitives and whistleblowers
How difficult is it: Privacy doesn’t get harder than this
Tell me more: Last month the former Catalan president, Carles Puigdemont — who, in October, dodged arrest by the Spanish authorities by fleeing to Brussels after the region’s abortive attempt to declare independence — was arrested by German police, after crossing the border from Denmark in a car. Spanish intelligence agents had reportedly tracked his movements via the GPS on the mobile device of one or more of his friends. The car had also been fitted with a tracker. Trusting anything not to snitch on you is a massive risk if your threat model is this high. The problem is you also need trustworthy friends to help you stay ahead of the surveillance dragnet that’s out to get you.

Action: Ditch the Internet entirely
Who is this for: Fugitives and whistleblowers
How difficult is it: Privacy doesn’t get harder than this
Tell me more: Public administrations can ask you to do pretty much everything online these days — and even if it’s not mandatory to use their Internet service it can be incentivized in various ways. The direction of travel for government services is clearly digital. So eschewing the Internet entirely is getting harder and harder to do.
One wild card option is to use a different type of network that’s being engineered with privacy in mind. The experimental, decentralized MaidSafe network fits that bill. This hugely ambitious project has already clocked up a decade’s worth of R&D on the founders’ mission to rethink digital connectivity without compromising privacy and security: doing away with servers, and decentralizing and encrypting everything. It’s a fascinating project, just sadly not yet a fully fledged Internet alternative.

DroneShield is keeping hostile UAVs away from NASCAR events

If you were hoping to get some sweet drone footage of a NASCAR race in progress, you may find your quadcopter grounded unceremoniously by a mysterious force: DroneShield is bringing its anti-drone tech to NASCAR events at the Texas Motor Speedway.

The company makes a handful of products, all aimed at detecting and safely intercepting drones that are flying where they shouldn’t. That’s a growing problem, of course, and not just at airports or Area 51. A stray drone at a major sporting event could fall and interrupt the game, or strike someone, or, at a race, even cause a major accident.

Most recently it introduced a new version of its handheld “DroneGun,” which scrambles the UAV’s signal so that it has no choice but to safely put itself down, as these devices are generally programmed to do. You can’t buy one — technically, they’re illegal — but the police sure can.

Recently DroneShield’s tech was deployed at the Commonwealth Games in Brisbane and at the Olympics in PyeongChang, and now the company has announced that it was tapped by a number of Texas authorities for the protection of stock car races.

DroneShield’s systems in place in PyeongChang

“We are proud to be able to assist a high-profile event like this,” said Oleg Vornik, DroneShield’s CEO, in an email announcing the news. “We also believe that this is significant for DroneShield in that this is the first known live operational use of all three of our key products – DroneSentinel, DroneSentry and DroneGun – by U.S. law enforcement.”

It’s a big get for a company that clearly saw an opportunity in the growing drone market (in combating it, really) and executed well on it.

Department of Energy hosts competition to train cyber defense warriors

From leaked passwords to identity theft, cybersecurity issues are constantly in the news. Few issues, though, are as important — or as under-reported by the media — as the security of America’s industrial control infrastructure. Oil rigs, power plants, water treatment facilities and other critical infrastructure are increasingly connecting to the internet, but often without robust security systems in place to ensure bad actors can’t gain access or disrupt service delivery.

This is a growing area of the economy with a wealth of jobs, but few students even realize that industrial and infrastructure cybersecurity is an interesting career path. So, over the past three years, the Department of Energy has hosted a Cyber Defense Competition to encourage university students to engage in the field. The latest incarnation of the competition was held this past weekend and hosted by Argonne, Pacific Northwest, and Oak Ridge national laboratories.

Lewis University won the competition this year in a total field of 25 entrants. That is up from 15 teams last year, and 9 teams in the inaugural competition.

Nate Evans leads the program at Argonne, and explained to me the design of the competition. Teams get a month before the competition to learn how to defend industrial control systems against hackers. Each team is given a small industrial control system that emulates a real-world model.

Then, on the day of the competition, the teams run the operations of their model infrastructure as the cyber defense side. A red team cell tries to hack the system, while a green team of regular, nontechnical people does the normal work of using it, such as answering emails or responding to requests.

Evans explained that they “add in the usability piece as well, so they’re not just trying to defend against the red team but keeping usability.” Six times an hour, a request comes in to the team, such as a new feature desired by the CEO. The idea is to simulate as closely as possible the conditions of a real piece of industrial infrastructure and to force the team to project-manage competing priorities.

Teams are allowed to build anything they want to defend their system. “We try to make it as flexible as possible and so they can bring whatever skills they have,” Evans said. We want the teams to “come out and try new things such as a custom operating system that they wrote in the class,” he explained, “or some crazy firewall or setup or design.”

Since each of the three labs hosting the competition is on ES Net, the Energy Sciences Network that connects all labs in the U.S., the competition can be conducted in real-time across all locations, and lasts about eight hours. Lewis University won the national award, while University of Central Florida, Oregon State University, and University of Memphis won first place regional awards.