Oscar’s health insurance platform nabs another $225 million

The direct-to-consumer health insurer Oscar has raised another $225 million in its latest late-stage round of funding, as its vision of using tech-enabled health care services to drive down consumer costs becomes more and more of a reality.

In an effort to prevent patients' potential exposure to the novel coronavirus, COVID-19, most healthcare practices are seeing patients remotely via virtual consultations, and more patients are embracing digital health services voluntarily, which reduces costs for insurers and potentially provides better access to basic healthcare needs. Indeed, Oscar now has a $2 billion revenue base to point to and a fresh pile of cash to draw from.

“Transforming the health insurance experience requires the creation of personalized, affordable experiences at scale,” said Mario Schlosser, the co-founder and chief executive of Oscar.

Oscar's insurance customers have the distinction of being among the most active users of telemedicine of any insurance provider in the US, according to the company. Around 30 percent of patients with insurance plans from the company have used telemedicine services, versus only 10 percent of the country as a whole.

The new late-stage funding for Oscar includes new investors Baillie Gifford and Coatue, two late-stage investors that typically come in before a public offering. Previous investors including Alphabet, General Catalyst, Khosla Ventures, Lakestar and Thrive Capital also participated in the round.

With the new funding, Oscar was able to shrug off the latest criticisms and controversies that swirled around the company and its relationship with White House official Jared Kushner as the President prepared his administration's response to the COVID-19 epidemic.

As The Atlantic reported, engineers at Oscar spent days building a stand-alone website that would ask Americans to self-report their symptoms and, if they were at risk, direct them to a COVID-19 test location. The project was scrapped within days of its creation, according to the same report.

The company now offers its services in 15 states and 29 U.S. cities, with over 420,000 members in individual, Medicare Advantage, and small group products, the company said.

As Oscar gets more ballast on its balance sheet, it may be readying itself for a public offering. The insurer wouldn’t be the first new startup to test public investor appetite for new listings. Lemonade, which provides personal and home insurance, has already filed to go public.

Oscar's investors and executives may be watching closely to see how that listing performs. Despite Lemonade's anemic pricing target, the public market's response could signal that more startups in the insurance space can make lemonade from frothy market conditions — even as employment numbers and the broader national economy continue to suffer from pandemic-induced economic shocks.

Apple temporarily re-closes 14 more Florida stores as COVID-19 numbers surge

After closing stores across four states, this next step was no doubt a bit of an inevitability: following reporting earlier today, Apple has confirmed that it will be shutting down an additional 14 stores in Florida, joining the two it closed last week.

The company sent a statement to TechCrunch that is essentially identical to the one it gave us last week, reading, “Due to current COVID-19 conditions in some of the communities we serve, we are temporarily closing stores in these areas. We take this step with an abundance of caution as we closely monitor the situation and we look forward to having our teams and customers back as soon as possible.”

The move comes as COVID-19 cases continue to surge in the southern states. On Wednesday, state officials reported north of 5,000 new infections for the second straight day. In all, Florida has experienced more than 114,000 COVID-19 cases and 3,000 deaths, ranking sixth among all states by number of infections.

As noted last week, Apple acknowledged the possibility of re-closing locations as soon as it began reopening select stores in May. The full list of newly closed Florida stores includes:

  • The Galleria
  • The Falls
  • Aventura
  • Lincoln Road
  • Dadeland
  • Brickell City Centre
  • Wellington Green
  • Boca Raton
  • The Gardens Mall
  • Millenia
  • Florida Mall
  • Altamonte
  • International Plaza
  • Brandon

The Waterside Shops and Coconut Point stores were closed last week. Locations in Arizona and North and South Carolina have also been closed following reopening.

NASA’s JPL open-sources an anti-face touching wearable to help reduce the spread of COVID-19

There are some wearables out there in the world that are making claims around COVID-19 and their ability to detect it, prevent it, certify that you don’t have it, and more. But a new wearable device from NASA’s Jet Propulsion Laboratory might actually be able to do the most to prevent the spread of COVID-19 – and it’s not really all that technically advanced or complicated.

JPL's PULSE wearable uses 3D-printed parts and readily available, affordable electronic components to do just one thing: remind a person not to touch their face. JPL's designers say it's simple enough that the gadget "can easily be reproduced by anyone regardless of their level of expertise," and to encourage more people and companies to actually do that, the lab has made a full list of parts, 3D modeling files and full assembly instructions available under an open-source license.

The PULSE is essentially a pendant, worn around the neck six inches to a foot from the head, which can detect when a person's hand is approaching their face using an IR-based proximity sensor. A vibration motor then shakes out an alert, and the response becomes stronger as your hand gets closer to your face.
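For the tinkerers, the control logic amounts to a simple feedback loop: poll the proximity sensor and drive the motor harder as the reading shrinks. Here is a minimal sketch of that loop in Python; read_proximity_cm() and set_vibration() are hypothetical placeholders for whatever sensor and motor drivers a particular build uses, and the thresholds are illustrative rather than JPL's actual values.

```python
import time

# Illustrative thresholds, not JPL's actual values.
ALERT_THRESHOLD_CM = 30   # start buzzing when a hand is within ~30 cm
MIN_DISTANCE_CM = 5       # anything closer gets the maximum-strength alert

def read_proximity_cm() -> float:
    """Placeholder for the IR proximity sensor driver (hypothetical)."""
    raise NotImplementedError

def set_vibration(strength: float) -> None:
    """Placeholder for the vibration motor driver; strength in [0.0, 1.0]."""
    raise NotImplementedError

def alert_strength(distance_cm: float) -> float:
    """Closer hand -> stronger buzz; no buzz beyond the alert threshold."""
    if distance_cm >= ALERT_THRESHOLD_CM:
        return 0.0
    clipped = max(distance_cm, MIN_DISTANCE_CM)
    return (ALERT_THRESHOLD_CM - clipped) / (ALERT_THRESHOLD_CM - MIN_DISTANCE_CM)

def run() -> None:
    while True:
        set_vibration(alert_strength(read_proximity_cm()))
        time.sleep(0.1)  # poll roughly ten times a second
```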

The hardware itself is simple – but that’s the point. It’s designed to run on readily available 3V coin batteries, and if you have a 3D printer to hand for the case and access to Amazon, you can probably put one together yourself at home in no time.

The goal of PULSE obviously isn't to single-handedly eliminate COVID-19 – contact transmission from contaminated hands to a person's mouth, nose or eyes is just one vector, and it seems likely that the respiratory droplets behind airborne transmission are at least as effective at passing the virus around. But just as regular mask-wearing can dramatically reduce transmission risk, minimizing how often you touch your face can have a big combined effect with other measures taken to reduce the spread.

Other health wearables might actually be able to tell you when you have COVID-19 before you show significant symptoms or have a positive test result – but work still needs to be done to understand how well that works, and how it could be used to limit exposure. JPL's PULSE has the advantage of being effective now in terms of building positive habits that we know will limit the spread of COVID-19, as well as other viral infections.

Mophie is selling an $80 wireless-charging UV phone sanitizer

The best possible time to launch a UV phone sanitizer would have been about five months ago. The second best possible time, however, is right now. When the COVID-19 pandemic really started hitting the global community in earnest, there was a run on these once fairly niche products from companies with names like PhoneSoap.

In fact, available models started selling out all over the place, when many people began to recognize for the first time just how much of a disease vector the smartphone could truly be. All of that in mind, the category is a pretty logical next step for the many accessory brands underneath the Zagg umbrella.

Today Mophie and InvisibleShield launched their takes on the category. The products are priced at $80 and $60, respectively, with the key differentiator being the Mophie model's 10W wireless charging pad. The pad sits on the lid of the product, meaning you can only use it to charge the handset once it's out of the disinfecting bed.

Both sanitize phones up to 6.9 inches using UV-C light, promising to kill up to 99.9% of bacteria. It's worth noting that, like PhoneSoap, these brands are not claiming their products can kill the novel coronavirus; the jury is still out on their efficacy on that front, and COVID-19 is conspicuously absent from the press materials. Even with one of these, I'd still strongly recommend carrying around a pack of antibacterial wipes, if you can still find any.

Both products are available now, through their respective sites.

Demand for fertility services persists despite COVID-19 shutdowns

In 2019, the global fertility services industry was estimated to be worth $14.8 billion with demand driven by the significant growth in the median age of first-time mothers, according to a Research & Markets report.

Gina Bartasi, founder and CEO of NYC-based fertility center Kindbody, has pointed to macroeconomic trends responsible for the industry’s consistent growth, such as the increase in single mothers by choice and the fact that “heterosexual couples are waiting to have children and waiting to get married, and more and more same-sex couples are having children, which is relatively new.”

Regardless of the increasing demand, disasters can disrupt fertility services: On March 17, the American Society for Reproductive Medicine directed U.S.-based fertility clinics to avoid initiating new treatments, push back nonemergency surgeries and shift care to telemedicine.

Now that clinics have reopened, it's undeniable that COVID-19's national impact could alter the space, as different types of crises have in the past. Looking back at those crises can give us a better understanding of what the future holds.

After the terror attacks on September 11, 2001, a University of Louisville study found that there was “a prompt and significant increase in births and birthrates in the post-9/11 period” in New York City. Relatedly, when Hurricane Katrina hit New Orleans in August 2005 and created the nation’s costliest natural disaster, it was also one of five times since 1987 that frozen embryos were evacuated and protected during a natural disaster.

According to a study from the University of Wisconsin, "following Katrina, displacement contributed to a 30% decline in birth cohort size. Black fertility fell, and remained 4% below expected values through 2010. By contrast, white fertility increased by 5%." The communities were so ravaged that the area's Black population has remained substantially smaller.

Use AI responsibly to uplift historically disenfranchised people during COVID-19

One of the most distressing aspects of the ongoing pandemic is that COVID-19 is having a disproportionate impact on communities of color and lower-income Americans due to structural factors rooted in history and long-standing societal biases.

Those most at risk during this pandemic are 24 million of the lowest-income workers; the people who have less job security and can’t work from home. In fact, only 9.2% of the bottom 25% have the ability to work from home. Compare that to the 61.5% of the top 25% and the disparity is staggering. Additionally, people in these jobs typically do not have the financial security to avoid public interaction by stockpiling food and household goods, buying groceries online or avoiding public transit. They cannot self-isolate. They need to venture out far more than other groups, heightening their risk of infection.

The historically disadvantaged will also be hit the hardest by the economic impacts of the pandemic. They are overrepresented in the industries experiencing the worst downturn. The issues were atrocious prior to COVID-19, with the typical Black and Latinx households having a net worth of just $17,100 and $20,765, respectively, compared with the $171,000 held by the typical white household. An extended health and economic crisis will only exacerbate these already extreme disparities.

AI as a beacon of hope

A rare encouraging aspect of the ongoing pandemic response is the use of cutting-edge technology — especially AI — to address everything from supply chains to early-stage vaccine research.

The potential of human + AI exceeds the potential of humans working alone by far, but there are tremendous risks that require careful consideration. AI requires massive amounts of data, but ingrained in that data are the societal imperfections and inequities that have given rise to disproportionate health and financial impacts in the first place.

In short, we cannot use a tool until we know it works and understand the potential for unintended consequences. Some health groups hurried to repurpose existing AI models to help track patients and manage the supply of beds, ventilators and other equipment in their hospitals. Researchers have tried to develop AI models from scratch to focus on the unique effects of COVID-19, but many of those tools have struggled with bias and accuracy issues. Balancing the instinct to “help now” and the risks of “unforeseen consequences” amidst the high stakes of the COVID-19 pandemic is why the responsible use of AI is more important now than ever.

4 ways to purposefully and responsibly use AI to combat COVID-19

1. Avoid delegating to algorithms that run critical systems

Think of an AI system designed to distribute ventilators and medical equipment to hospitals with the objective of maximizing survival rates. Disadvantaged populations have higher comorbidities and thus may be less likely to receive supplies if the system is not properly designed. If these preexisting disparities are not accounted for when designing the AI system, then well-intentioned efforts could result in directing supplies away from especially vulnerable communities.
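To make the failure mode concrete, consider a toy allocator (with made-up numbers, not drawn from any real deployment) that greedily hands out ventilators wherever the expected survival rate per ventilator is highest. The higher-comorbidity community ends up with a fraction of the supply despite having greater absolute need.

```python
# Toy illustration with made-up numbers: ranking hospitals purely by expected
# survival per ventilator starves the higher-comorbidity community even when
# its absolute need is greater.

hospitals = [
    # name, patients needing ventilators, survival rate if ventilated
    {"name": "Hospital A (lower-comorbidity area)", "need": 40, "survival": 0.80},
    {"name": "Hospital B (higher-comorbidity area)", "need": 60, "survival": 0.55},
]

SUPPLY = 50

def allocate_by_expected_survival(hospitals, supply):
    """Greedy allocation that maximizes expected survivors per ventilator."""
    allocation = {h["name"]: 0 for h in hospitals}
    for h in sorted(hospitals, key=lambda h: h["survival"], reverse=True):
        give = min(h["need"], supply)
        allocation[h["name"]] = give
        supply -= give
    return allocation

print(allocate_by_expected_survival(hospitals, SUPPLY))
# Hospital A receives 40 ventilators; Hospital B, with more patients in need,
# receives only 10 of the 50 available.
```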

Artificial intelligence is also being used to improve supply chains across all sectors. The Joint Artificial Intelligence Center is prototyping AI that can track data on ventilators, PPE, medical supplies and food. When the goal is to anticipate panic-buying and ensure health care professionals have access to the equipment they need, this is a responsible use of AI.

As these examples illustrate, we can quickly arrive at a problematic use case when decision-making authority is delegated to an algorithm. The best and most responsible use of AI is maximizing efficiency to ensure the necessary supplies get to those truly in need. Previous failures in AI show the need for healthy skepticism when delegating authority on potentially life-and-death decisions to an algorithm.

2. Be wary of disproportional impacts and singling out specific communities

Think of an AI system that uses mobility data to detect localized communities that are violating stay-at-home orders and route police for additional enforcement. Disadvantaged populations do not have the economic means to stockpile food and other supplies, or order delivery, forcing them to go outside.  As we mentioned earlier, being overrepresented in frontline sectors means leaving the home more frequently. In addition, individuals and families experiencing homelessness could be targeted for violating stay-at-home enforcement. In New York City, police enforcement of stay-at-home directives has disproportionately targeted Black and Latinx residents. Here is where responsible AI steps in. AI systems should be designed not to punish these populations with police enforcement, but rather help identify the root causes and route additional food and resources. This is not a panacea, but will avoid exacerbating existing challenges.

Israel has already demonstrated that this model can work. In mid-March it passed an emergency law enabling the use of mobile data to pinpoint the infected as well as those they had come in contact with. Maccabi Healthcare Services is using AI to identify its most at-risk customers and prioritize them for testing. This is a fantastic example of responsibly adapting an existing, proven AI system: one originally built and trained to identify the people most at risk from the flu, using millions of records collected over 27 years.

3. Establish AI that is human-centric with privacy by design and native controls

Think of an AI system that uses mobile phone apps to track infections and trace contacts in an effort to curb new infections. Minority and economically disadvantaged populations have lower rates of smartphone ownership than other groups. AI systems should take these considerations into account to avoid design bias. This will ensure adequate protections for vulnerable populations, but also improve the overall efficacy of the system since these individuals may have high human contact in their jobs. Ensuring appropriate track and trace takes place within these populations is critically important.

In the U.S., MIT researchers are developing Private Automatic Contact Tracing (PACT), which uses Bluetooth communications for contact tracing while also preserving individual privacy. If you test positive and inform the app, everyone who has been in close proximity to you in the last 14 days gets a notification. Anonymity and privacy are the biggest keys to responsible use of AI to curb the spread of COVID-19.
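For a sense of how such a decentralized scheme fits together, here is a deliberately simplified sketch, an illustration of the general idea rather than the actual PACT protocol or the Apple/Google API: phones broadcast rotating random tokens, remember the tokens they hear, and check them locally against tokens later published by users who test positive.

```python
# Simplified sketch of decentralized, Bluetooth-style exposure notification.
# Not the actual PACT protocol: the point is that matching happens on the
# device, so no central service learns who met whom.

import secrets

class Phone:
    def __init__(self):
        self.broadcast_tokens = []   # tokens this phone has sent out
        self.heard_tokens = set()    # tokens heard from nearby phones

    def new_broadcast_token(self) -> bytes:
        """Rotate to a fresh random token (e.g. every few minutes)."""
        token = secrets.token_bytes(16)
        self.broadcast_tokens.append(token)
        return token

    def hear(self, token: bytes) -> None:
        """Record a token received over Bluetooth from a nearby phone."""
        self.heard_tokens.add(token)

    def exposure_check(self, published_positive_tokens) -> bool:
        """Compare locally stored tokens against the published positive set."""
        return bool(self.heard_tokens & set(published_positive_tokens))

# Two phones near each other for a while:
alice, bob = Phone(), Phone()
for _ in range(10):
    bob.hear(alice.new_broadcast_token())

# Alice tests positive and publishes her recent broadcast tokens:
published = alice.broadcast_tokens

print(bob.exposure_check(published))        # True: Bob was near Alice
print(Phone().exposure_check(published))    # False: a stranger was not
```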

In India, the government's Aarogya Setu app (the name translates to "bridge to health") uses a phone's Bluetooth and location data to let users know if they have been near a person with COVID-19. But, again, privacy and anonymity are the keys to responsible and ethical use of AI.

This is a place where the true power of human + AI shines through. As these apps are rolled out, it is important that they are paired with human-based track and trace to account for disadvantaged populations. AI allows automating and scaling track and trace for most of the population; humans ensure we help the most vulnerable.

4. Validate systems and base decisions on sanitized representative data

Think of an AI system that helps doctors make rapid decisions on which patients to treat and how to treat them in an overburdened health care system. One such system developed in Wuhan identified biomarkers that correlate with higher survival rates to help doctors pinpoint which patients likely need critical care and which can avoid the hospital altogether.

The University of Chicago Medical Center is working to upgrade an existing AI system called eCART. The system will be enhanced for COVID to use more than 100 variables to predict the need for intubation eight hours in advance. While eight hours may not seem like much, it provides doctors an opportunity to take action before a patient’s condition deteriorates.

But the samples and data sets that systems like these rely on have the potential to produce unreliable outcomes, or ones that reinforce existing biases. If the AI is trained on observations of largely white individuals — as was the case with data in the International Cancer Genome Consortium — how willing would you be to delegate life-and-death health care decisions for a nonwhite patient? These are issues that require careful consideration and demonstrate why it is so important to validate not only the systems themselves, but also the data on which they rely.

Questions we must ask

As companies, researchers and governments increasingly leverage AI, a parallel discussion around responsible AI is necessary to ensure benefits are maximized while harmful consequences are minimized. We need better guidelines and assessments of AI around fairness, trustworthiness, bias and ethics.

There are dozens of dimensions we should evaluate every use case against to ensure it is developed in a responsible manner. But, these four simple questions provide a great framework to start a discussion between AI system developers and policy makers who may be considering deploying an AI solution to combat COVID-19.

  • What are the consequences if the system makes a mistake? Can we redesign the system to minimize this?
  • Can we clearly explain how the AI system produced specific outcomes in a way that is understandable to the general public?
  • What are potential sources of bias — data, human and design — and how can they be minimized?
  • What steps can be taken to protect the privacy of individuals?

When to use AI solutions and tools

Each of these questions will apply in different ways to particular use cases. A natural language processing (NLP) system sifting through tens of thousands of scientific papers to help focus the search for a COVID-19 vaccine poses no direct threat of harm to individuals and performs the task faster than an army of research assistants ever could. Case in point: in April, the Harvard T.H. Chan School of Public Health and the Human Vaccines Project announced the Human Immunomics Initiative to leverage AI models to accelerate vaccine development.

This is a global effort, with scientists around the world working together to expedite drug discovery and defeat COVID-19 through the use of AI. From the aforementioned work in the U.S. to Australia, where Flinders University is leveraging Oracle cloud technology and vaccine technology developed by Vaxine to develop promising vaccine candidates, we can see AI being used for its most ethical purpose: saving human lives.

Another use case is an omnipresent issue facing us during this pandemic: the dissemination of misinformation across the planet. Imagine trying to manually scan the posts of Facebook's 1.7 billion daily users for misinformation about COVID-19. This is an ideal project for human + AI — with humans confirming cases of misinformation flagged by AI.

This use case is relatively low risk, but its ultimate success depends on human oversight and engagement. That’s even more so the case in the high-risk use cases that are grabbing headlines amidst the COVID-19 pandemic. Human + AI is not just a safeguard against a system gone off the rails, it’s critical to AI delivering meaningful and impactful results as illustrated through earlier examples.
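As a sketch of that division of labor, here is a minimal human-in-the-loop review queue in Python. The scoring model, threshold and example posts are all hypothetical stand-ins; the point is only the shape of the pipeline: the AI narrows the haystack, and a person makes the final call.

```python
# Minimal human-in-the-loop sketch: an AI model scores every post, and only
# posts above a review threshold reach a human fact-checker, who decides.
# score_misinformation() is a stand-in for whatever model a platform uses.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ReviewQueue:
    score_misinformation: Callable[[str], float]  # model: post text -> [0, 1]
    threshold: float = 0.8
    pending: List[str] = field(default_factory=list)

    def triage(self, posts: List[str]) -> None:
        """AI pass: flag only high-scoring posts for human review."""
        for post in posts:
            if self.score_misinformation(post) >= self.threshold:
                self.pending.append(post)

    def human_review(self, is_misinformation: Callable[[str], bool]) -> List[str]:
        """Human pass: the final call stays with a person, not the model."""
        confirmed = [p for p in self.pending if is_misinformation(p)]
        self.pending.clear()
        return confirmed

# Usage with a trivial keyword "model", purely for illustration:
queue = ReviewQueue(lambda text: 1.0 if "miracle cure" in text.lower() else 0.0)
queue.triage(["Wash your hands often.", "This miracle cure stops COVID-19!"])
print(queue.human_review(lambda post: True))  # ['This miracle cure stops COVID-19!']
```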

We need to classify use cases into three buckets to guide our decision making:

1. Red

  • Use case represents a decision that should not be delegated to an AI system.
  • Using an AI system to decide which patients receive medical treatment during a crisis. This is a case where humans should ultimately be making the call because of its life-and-death impact. This has already been recognized by the medical community, where ethical frameworks have been developed to support these very types of decisions.

2. Yellow

  • Use case could be deployed responsibly, but it depends upon the design and execution.
  • Using an AI system to monitor adherence to quarantine policies. This is a case where a use case may be acceptable depending on the design and deployment of the system. For example, using the system to deploy police to neighborhoods to crack down on individuals not adhering to quarantine policies would be problematic. But deploying police to these neighborhoods to understand why quarantine is being broken, so policy makers can better address citizen needs, would be legitimate — provided the privacy of individuals is protected.

3. Green

  • Use case is low risk and the benefits far outweigh the risks.
  • Content filtering on social media platforms to ensure malicious and misleading information regarding COVID-19 is not shared widely.

We must ask our four questions and deliberately analyze the answers we find. We can then responsibly and confidently decide which bucket to put the project into and move forward in a responsible and ethical manner.

A recent U.S. example

We recently created Lighthouse, a new dynamic navigation cockpit that helps organizations capture a holistic picture of the ongoing crisis. These "lighthouses" are being used to illuminate the multiple dimensions of the situation. For example, we recently partnered with an American city to develop a tool that predicted disruptions in the food supply chain. One data source was based on declines in foot traffic in and around distribution centers. Without accessing any personally identifiable information (PII) — and therefore preserving individual privacy — the tool showed which parts of the city were most likely to suffer shortages, enabling leaders to respond preemptively and prevent an even worse public health crisis.

This is an easily duplicated process that other organizations can follow to create and implement responsible AI to help the historically disenfranchised navigate and thrive during the age of COVID-19.

Moving forward

When confronting the ethical dilemmas presented by crises like COVID-19, enterprises and organizations equipped with responsible AI programs will be best positioned to offer solutions that protect the most vulnerable and historically disenfranchised groups by respecting privacy, eliminating historical bias and preserving trust. In the rush to “help now,” we cannot throw responsible AI out the window. In fact, in the age of COVID-19, it is more important than ever before to understand the unintended consequences and long-term effects of the AI systems we create.

Taiwanese startup Deep01 raises $2.7 million for its AI-based medical imaging software

Deep01, a Taiwanese startup that develops software to help doctors interpret CT brain scans more quickly, announced today that it has raised $2.7 million. The funding was led by PC maker ASUSTek.

Deep01’s product has obtained clearance from both Taiwan and the United States’ Food and Drug Administrations, and the company received its first purchase order, worth about $700,000, in February.

Other investors included the Digital Economy Fund, which is co-funded by Taiwanese research organizations Industrial Technology Research Institute (ITRI) and the Institute for Information Industry (III), and BE Capital.

Deep01’s software is currently used in two medical centers and four hospitals in Taiwan and has already helped doctors check over 2,000 brain scans.

Deep01 says its software, created for use by emergency departments, can detect acute intracerebral hemorrhage with an accuracy rate of 93% to 95% within 30 seconds.

The startup was launched in 2016 by a team that includes co-founder and CEO David Chou, who earned his Master’s Degree in computer science at Carnegie Mellon University and was a Harvard University research fellow at Massachusetts General Hospital between 2018 and 2019.

In a press statement, Albert Chang, ASUS corporate vice president and co-head of its AIoT Business Group, said “Deep01 is a leading startup in the AI medical area. The collaboration is promising for smart medical applications.”

After reopening, Apple is closing stores in four states as COVID-19 numbers climb

Apple today confirmed earlier rumors that it plans to shut down re-opened stores in four states.  Impacted locations include six stores in Arizona, two in Florida, another two in North Carolina and one in South Carolina.

“Due to current COVID-19 conditions in some of the communities we serve, we are temporarily closing stores in these areas. We take this step with an abundance of caution as we closely monitor the situation and we look forward to having our teams and customers back as soon as possible,” the company said in a statement to TechCrunch.

It’s been just over a month since the company began to reopen a handful of locations, as states began wider reopening efforts. The company implemented several safeguards, including mask requirements, temperature checks and enforced social distancing, as well as extended cleaning efforts.

“These are not decisions we rush into,” Retail SVP Deirdre O’Brien wrote at the time, “and a store opening in no way means that we won’t take the preventative step of closing it again should local conditions warrant.”

One imagines the company will approach re-re-opening the same way. However, several states have posted increases in COVID-19 cases since governments began the process of reopening. Arizona, Florida, Oklahoma, Nevada, Oregon and Texas have all posted record-high infection rates in the past week. Given the uncertain nature of the virus's spread, it seems likely this won't be the last time Apple and other retailers have to reverse course.

The following locations will be closed, beginning tomorrow:

Florida
  • Waterside Shops
  • Coconut Point
North Carolina
  • Southpark
  • Northlake Mall
South Carolina
  • Haywood Mall
Arizona
  • Chandler Fashion Center
  • Scottsdale Fashion Square
  • Arrowhead
  • SanTan Village
  • Scottsdale Quarter
  • La Encantada

More information on specific stores can be found on Apple’s site.

Drone-deployed sterile mosquitoes could check spread of insect-borne illnesses

Drone deployment of sterile mosquitoes could accelerate efforts to control their populations and reduce insect-borne disease, according to a proof-of-concept experiment by a multi-institutional research team. The improved technique could save thousands of lives.

Mosquitoes are a public health hazard around the world, spreading infections like malaria to millions and causing countless deaths and health crises. Although traps and netting offer some protection, the proactive approach of reducing the number of insects has also proven effective. This is accomplished by sterilizing male mosquitoes and releasing them into the wild, where they compete with the other males for food and mates but produce no offspring.

The problem with this approach is it is fairly hands-on, requiring people to travel through mosquito-infested areas to make regular releases of treated males. Some aerial and other dispersal methods have been attempted but this project from French, Swiss, British, Brazilian, Senegalese and other researchers seems to be the most effective and practical yet.

Mosquitoes grown in bulk and sterilized by radiation are packed at low temperatures (“chilled” mosquitoes don’t fly or bite) into cartridges. These cartridges are kept refrigerated until they can be brought to a target site, where they’re loaded onto a drone.

Thousands of chilled, marked mosquitoes ready for deployment.

This drone ascends to a set altitude and travels over the target area, steadily releasing thousands of sterile males as it goes. By staging at the center of a town, the drone operators can reload the craft with new cartridges and send it in more directions, accomplishing dispersal over a huge and perhaps difficult to navigate space more quickly and easily than manual techniques.

The experiment used mosquitoes marked with fluorescent dyes that let the researchers track the effectiveness of their air-dropped mosquitoes, and the new technique shows great improvement over manual methods (on the order of 50 percent better) — without even getting into the reductions in time and labor. New methods for sterilizing, packing, and meting out the insects further gild the results.

The researchers point out that while there are of course plenty of applications for this technique in ordinary times, the extraordinary times of this pandemic present new dangers and opportunities. Comorbidity of COVID-19 and mosquito-borne illnesses is practically unstudied, and disruptions to supply chains and normal insect-suppression efforts are likely to lead to spikes in the likes of malaria and dengue fever.

Work like this could lead to improved general health for billions. The researchers’ work appeared in the journal Science Robotics.

EU states agree a tech spec for national coronavirus apps to work across borders

European Union countries and the Commission have agreed on a technical framework to enable regional coronavirus contacts tracing apps to work across national borders.

A number of European countries have launched contacts tracing apps at this point, with the aim of leveraging smartphone technologies in the fight against COVID-19, but none of these apps can yet work across national borders.

Last month, EU Member States agreed to a set of interoperability guidelines for tracing apps. Now they’ve settled on a technical spec for achieving cross-border working of apps. The approach has been detailed in a specification document published today by the eHealth Network.

The Commission has called the agreement on a tech spec an important step in the fight against COVID-19, while emphasizing tracing apps are only a supplement to manual contacts tracing methods.

Commenting in a statement, European commissioner for the Internal Market, Thierry Breton, said: “As we approach the travel season, it is important to ensure that Europeans can use the app from their own country wherever they are travelling in the EU. Contact tracing apps can be useful to limit the spread of coronavirus, especially as part of national strategies to lift confinement measures.”

The system will involve a Federation Gateway Service, run by the Commission, that will receive and pass on “relevant information” from national contact tracing apps and servers — in order to minimise the amount of data exchanged and reduce users’ data consumption, per a Commission press release.

From the tech spec:

The pattern preferred by the European eHealth Network is a single European Federation Gateway Service. Each national backend uploads the keys of newly infected citizens (‘diagnosis keys’) every couple of hours and downloads the diagnosis keys from the other countries participating in this scheme. That’s it. Data conversion and filtering is done in the national backends.
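In code terms, the exchange the spec describes boils down to a periodic sync loop run by each national backend. The sketch below assumes a hypothetical GatewayClient with upload and download methods, plus placeholder key_source and store objects; it is not the real Federation Gateway API, just the shape of the cycle: push your own newly collected diagnosis keys, pull everyone else's, and do any conversion or filtering nationally.

```python
# Rough sketch of the upload/download cycle described in the spec, seen from a
# national backend's point of view. GatewayClient, key_source and store are
# hypothetical placeholders, not the real Federation Gateway API.

import time

SYNC_INTERVAL_SECONDS = 2 * 60 * 60  # "every couple of hours", per the spec

class GatewayClient:
    """Hypothetical client for the EU Federation Gateway Service."""
    def upload(self, country: str, diagnosis_keys: list) -> None: ...
    def download(self, excluding_country: str) -> list: ...

def sync_once(gateway: GatewayClient, country: str,
              new_local_keys: list, national_backend_store) -> None:
    # 1. Upload the diagnosis keys of newly infected citizens in this country.
    gateway.upload(country, new_local_keys)
    # 2. Download diagnosis keys shared by the other participating countries.
    foreign_keys = gateway.download(excluding_country=country)
    # 3. Conversion and filtering happen here, in the national backend, before
    #    the keys are served to this country's app users.
    national_backend_store.add(foreign_keys)

def run_sync_loop(gateway, country, key_source, store):
    while True:
        sync_once(gateway, country, key_source.new_keys(), store)
        time.sleep(SYNC_INTERVAL_SECONDS)
```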

“The proximity information shared between apps will be exchanged in an encrypted way that prevents the identification of an individual person, in line with the strict EU guidelines on data protection for apps; no geolocation data will be used,” the Commission added.

The key caveat attached to the agreed interoperability system is that it currently only works to link up decentralized contacts tracing apps — such as the Corona-Warn-App launched today by Germany — or the national apps recently released in Italy, Latvia and Switzerland.

Centralized coronavirus contacts tracing apps — which do not store and process proximity data locally on the device but upload it to a central server for processing, such as France’s StopCovid app; the UK’s NHS COVID-19 app; or the currently suspended Norwegian Smittestopp app — will not immediately be able to plug into the interoperability architecture, as we explained in our report last month.

Apple and Google’s joint API for coronavirus exposure notifications also only supports decentralized tracing apps.

“This document presents the basic elements for interoperability for ‘COVID+ Keys driven solutions’ [i.e. decentralized tracing systems],” notes the eHealth Network. “It aims to keep data volumes to the minimum necessary for interoperability to ensure cost efficiency and trust between the participating Member States. This document is therefore addressed only to Member States implementing this type of protocol.”

The Commission has been calling for a common approach to the use of tech and data to fight COVID-19 for months. However national governments have not fallen uniformly into line — with, still, a mixture of decentralized and centralized approaches for tracing apps in play (although the former now comprise “the great majority of national approved apps”, per the Commission).

It's also playing the diplomat — saying it "continues to support the work of Member States on extending interoperability also to centralised tracing apps".

However, it has not provided any detail on how that might be achieved in a way that's satisfactory for both app architecture camps, given the privacy risks and security trade-offs associated with crossing opposing technical streams.

This means that citizens in European countries whose governments have chosen a centralized approach for coronavirus contacts tracing may find, on traveling elsewhere in the region, they will need to download another country’s national app to be able to receive and send coronavirus exposure notifications.

Even decentralized national apps aren’t able to exchange relevant data yet, though. The interoperability architecture’s gateway interface still needs to be deployed — and national apps launched and/or updated before all the relevant pieces can start talking. So there’s a way to go before any digital contacts tracing is working smoothly across European borders.

Meanwhile, some EU countries have already started to reopen their borders to other European countries — ahead of a wider reopening planned for the summer.

This week, for example, a few thousand German holidaymakers were allowed to travel to Spain’s Balearic Islands as part of a trial aimed at restarting tourism. So EU citizens are already flowing across borders before national apps are in a position to securely exchange data on exposure risk.