Google confirms US offices will remain closed until at least September, as COVID-19 spikes

A few months back, Google announced plans to reopen some U.S. offices after the July 4th holiday. But the best-laid plans, and all of that. Things have obviously not been going well in the United States’ battle with COVID-19, and Google once again finds itself erring on the side of caution.

As was first reported by Bloomberg, Google has since confirmed with TechCrunch that it will be pushing back reopening at least until September 7, after the Labor Day holiday in the States. Along with other tech giants like Facebook, Google has noted that it will continue to offer employees the option of working from home through the remainder of the year.

It’s a smart choice, as many no doubt still feel uncomfortable returning to an office — not to mention questions around the public transit that many use to get there. Twitter, meanwhile, made waves in May by announcing that employees would be allowed to work remotely indefinitely.

Yesterday, the United States reported more than 47,000 new COVID-19 cases, marking the biggest single-day spike since the beginning of the pandemic.

Arizona, Florida and Texas have all become epicenters as many other states have seen their own increases in recent weeks. Reopening plans have been put on hold or rolled back in many locales, amid increased concern over the virus’s continued spread. It seems likely that other big tech companies will delay their own reopening plans. In most cases, shifting back to the office simply isn’t worth the risk. 

Minneapolis-based VC shop Bread & Butter focuses on its own backyard

While many investors say sheltering in place has broadened their appetite for funding companies located outside major hubs, one firm is doubling down on backing startups in America’s heartland.

Launched in 2016 by Brett Bohl, The Syndicate Fund rebranded to Bread & Butter Ventures earlier this month (a reference to one of Minnesota’s many nicknames). Along with the rebrand, longtime Google executive and Revolution partner Mary Grove joined the team as a general partner and Stephanie Rich came aboard as head of platform.

The growth of the Twin Cities’ startup ecosystem is precisely why The Syndicate Fund rebranded. The firm, which has $10 million in assets under management, will invest in three of Minneapolis’ biggest strengths: agriculture and food, health care and enterprise software.

Agtech interest spans the entire spectrum, from farming to restaurants and grocery stores. The firm is also interested in the “messy middle” of supply chain and logistics around food, Bohl said, as well as in a mix of software, hardware and biosciences. Within health care, it evaluates solutions focused on prevention versus treatment; female health startups working on maternal health and fertility; and software focused on the aging population and millennials.

It’s also looking at enterprise software that can serve large businesses and scale efficiently.

Beyond Burger arrives in Alibaba’s grocery stores in China

Beyond Meat is starting to hit supermarket shelves in China after it first entered the country in April by supplying Starbucks’ plant-based menu. Within weeks, it had also forayed into select KFC, Taco Bell, and Pizza Hut outlets — all under the Yum China empire.

China, the world’s biggest market for meat consumption, has seen a growing demand for plant-based protein. Euromonitor predicted that the country’s “free from meat” market, including plant-based meat substitutes, would be worth almost $12 billion by 2023, up from just under $10 billion in 2018.

The Nasdaq-listed food giant is now bringing its signature Beyond Burgers into Freshippo (“Hema” in Chinese), Alibaba’s supermarket chain with a 30-minute delivery service that recorded a spike in orders during the pandemic as people avoided in-person shopping.

The tie-up will potentially promote the animal-free burgers to customers of Freshippo’s more than 200 stores across China’s Tier 1 and Tier 2 cities. They will first be available in 50 stores in Shanghai and arrive in more locations in September.

“We know that retail will be a critical part of our success in China, and we’re pleased to mark this early milestone within a few months of our market entry,” Ethan Brown, founder and chief executive officer of Beyond Meat, said in a statement.

Plant-based meat has a long history in China, serving the country’s Buddhist communities before the diet emerged as a broader urban lifestyle in more recent times. Amid health concerns, the Chinese government told citizens to cut back on meat consumption in 2016. The middle-class urban dwellers have also been embracing fake meat products as they respond to climate change.

“Regardless of international or local brands, Chinese consumers are now only seeing the first generation of plant-based offerings. Purchases today are mostly limited to forward-thinking experimenters,” Matilda Ho, founder and managing director of Bits x Bites, a venture capital firm targeting the Chinese food-tech industry, told TechCrunch. “The good news is China’s per capita consumption of plant-based protein is amongst the highest in the world.”

“For these offerings to scale to mass consumers or attract repeat purchases from early adopters, there is tremendous opportunity to improve on the mouthfeel, flavor, and how these products fit into the Chinese palate. To appeal to health-conscious flexitarians or vegetarians, there is also plenty of room to improve the nutritional profile in comparison to the conventional tofu or Buddhist mimic meat,” Ho added.

The fake meat market is already rife with competition. Domestic incumbent Qishan Foods has been around since 1993. Hong Kong’s OmniPork and Alpha Foods were quick to capture the new appetite across the border. Nascent startup Zhenmeat is actively seeking funding and touting its understanding of the “Chinese taste.”

Meanwhile, Beyond Meat’s rival back home, Impossible Foods, may have a harder time cracking the market, as its genetically modified soy ingredient could cause concern among health-conscious Chinese consumers.

How $20 billion health care behemoth Blue Shield of California sees startups

In the two years since Jeff Semenchuk took the reins in the newly created position of chief innovation officer for Blue Shield of California, the nonprofit health insurer with $20 billion in revenues has stepped up its investments in startup companies.

As one of California’s largest insurance providers, with more than four million members, Blue Shield plays an outsized role in technology adoption among physicians, hospital networks and patients. With that in mind, and with the acceleration of entrepreneurial activity around the multitrillion-dollar health care market, Semenchuk was brought on board after serving as chief executive of Yaro (now Virgin Pulse), chief innovation officer of Hyatt Hotels and co-founder of Citi Ventures.

Semenchuk said he sees Blue Shield as working to create a new health care system: “It’s not to perpetuate the health care system we have today.” Increasingly, startups have a role to play in that reimagining of health care services in America, according to Semenchuk.

“What I would say has happened over the last two years is that we have really focused on transformational innovation,” he added.

Investing in those transformational technologies involves taking cash directly from Blue Shield’s balance sheet. The company doesn’t operate a corporate venture capital fund in the traditional sense, instead making strategic investments under the auspices of Semenchuk or Chief Financial Officer Sandra Clarke.

Oscar’s health insurance platform nabs another $225 million

The direct-to-consumer health insurer Oscar has raised another $225 million in its latest, late-stage round of funding as its vision of tech-enabled health care services to drive down consumer costs becomes more and more of a reality.

In an effort to prevent patients’ potential exposure to the novel coronavirus, most health care practices are seeing patients remotely via virtual consultations, and more patients are voluntarily embracing digital health services, which reduces costs for insurers and potentially provides better access to basic health care needs. Indeed, Oscar now has a $2 billion revenue base to point to, and a fresh pile of cash to draw from.

“Transforming the health insurance experience requires the creation of personalized, affordable experiences at scale,” said Mario Schlosser, the co-founder and chief executive of Oscar.

Oscar’s insurance customers have the distinction of being among the most active users of telemedicine among all insurance providers in the US, according to the company. Around 30 percent of patients with insurance plans from the company have used telemedical services, versus only 10 percent of the country as a whole.

The new late-stage funding for Oscar includes new investors Baillie Gifford and Coatue, two late-stage investors that typically come in before a public offering. Other previous investors, including Alphabet, General Catalyst, Khosla Ventures, Lakestar and Thrive Capital, also participated in the round.

With the new funding, Oscar was able to shrug off the latest criticisms and controversies that swirled around the company and its relationship with White House official Jared Kushner as the president prepared his response to the COVID-19 pandemic.

As the Atlantic reported, engineers at Oscar spent days building a stand-alone website that would ask Americans to self report their symptoms and, if at risk, direct them to a COVID-19 test location. The project was scrapped within days of its creation, according to the same report.

The company now offers its services in 15 states and 29 U.S. cities, with over 420,000 members in individual, Medicare Advantage, and small group products, the company said.

As Oscar gets more ballast on its balance sheet, it may be readying itself for a public offering. The insurer wouldn’t be the first new startup to test public investor appetite for new listings. Lemonade, which provides personal and home insurance, has already filed to go public.

Oscar’s investors and executives may be watching closely to see how that listing performs. Despite Lemonade’s anemic target, the public market response could signal that more startups in the insurance space could make lemonade from frothy market conditions — even as employment numbers and the broader national economy continue to suffer from pandemic-induced economic shocks.

Apple temporarily re-closes 14 more Florida stores as COVID-19 numbers surge

After store closures across four states, this was no doubt something of an inevitability: Following earlier reporting today, Apple has confirmed that it will shut down an additional 14 stores in Florida, joining the two it closed last week.

The company sent a statement to TechCrunch that is essentially identical to the one it gave us last week, reading, “Due to current COVID-19 conditions in some of the communities we serve, we are temporarily closing stores in these areas. We take this step with an abundance of caution as we closely monitor the situation and we look forward to having our teams and customers back as soon as possible.”

The move comes as COVID-19 cases continue to surge in the southern states. On Wednesday, state officials reported north of 5,000 new infections for the second straight day. In all, Florida has experienced more than 114,000 COVID-19 cases and 3,000 deaths, ranking sixth among all states by number of infections.

As noted last week, Apple had earlier confirmed the possibility of closed locations as soon as it began to reopen select locations in May. The full list of newly closed Florida stores includes:

  • The Galleria
  • The Falls
  • Aventura
  • Lincoln Road
  • Dadeland
  • Brickell City Centre
  • Wellington Green
  • Boca Raton
  • The Gardens Mall
  • Millenia
  • Florida Mall
  • Altamonte
  • International Plaza
  • Brandon

The Waterside Shops and Coconut Point stores were closed last week. Locations in Arizona and North and South Carolina have also been closed following reopening.

NASA’s JPL open-sources an anti-face touching wearable to help reduce the spread of COVID-19

There are some wearables out there in the world that are making claims around COVID-19 and their ability to detect it, prevent it, certify that you don’t have it, and more. But a new wearable device from NASA’s Jet Propulsion Laboratory might actually be able to do the most to prevent the spread of COVID-19 – and it’s not really all that technically advanced or complicated.

JPL’s PULSE wearable uses 3D-printed parts and readily available, affordable electronic components to do just one thing: remind a person not to touch their face. JPL’s designers claim it’s simple enough that the gadget “can easily be reproduced by anyone regardless of their level of expertise,” and to encourage more people and companies to actually do that, the lab has made a full list of parts, 3D modeling files and full assembly instructions available under an open-source license.

The PULSE is essentially a pendant, worn around the neck between six inches and one foot from the head, that can detect when a person’s hand is approaching their face using an IR-based proximity sensor. A vibration motor then shakes out an alert, and the response grows stronger as the hand gets closer to the face.

The hardware itself is simple – but that’s the point. It’s designed to run on readily available 3V coin batteries, and if you have a 3D printer to hand for the case and access to Amazon, you can probably put one together yourself at home in no time.
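The control logic described above (no alert beyond some trigger distance, and a vibration that strengthens as the hand nears the face) can be sketched in a few lines. This is a hypothetical illustration, not JPL’s published firmware; the 30 cm trigger distance and 0–255 motor range are assumptions.

```python
def vibration_strength(distance_cm, trigger_cm=30, max_strength=255):
    """Map an IR proximity reading to a vibration-motor strength.

    Assumed behavior: no alert beyond `trigger_cm`, then a linear ramp
    up to `max_strength` as the hand reaches the face (distance 0).
    """
    if distance_cm >= trigger_cm:
        return 0  # hand is far from the face: motor stays off
    return round(max_strength * (1 - distance_cm / trigger_cm))

# On the device, a loop would poll the sensor and drive the motor, e.g.:
#   while True:
#       motor.set(vibration_strength(sensor.read_cm()))
```

On the actual hardware this mapping would run on the microcontroller polling the IR sensor; `motor` and `sensor` above are placeholder names.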

The goal of PULSE obviously isn’t to single-handedly eliminate COVID-19 — contact transmission from contaminated hands to a person’s mouth, nose or eyes is just one vector, and it seems likely that the respiratory droplets responsible for airborne transmission are at least as effective at passing the virus around. But just like regular mask-wearing can dramatically reduce transmission risk, minimizing how often you touch your face can have a big combinatory effect with other measures taken to reduce the spread.

Other health wearables might actually be able to tell you that you have COVID-19 before you show significant symptoms or have a positive test result — but work still needs to be done to understand how well that works, and how it could be used to limit exposure. JPL’s PULSE has the advantage of being effective now in terms of building positive habits that we know will limit the spread of COVID-19, as well as other viral infections.

Mophie is selling an $80 wireless-charging UV phone sanitizer

The best possible time to launch a UV phone sanitizer would have been about five months ago. The second best possible time, however, is right now. When the COVID-19 pandemic really started hitting the global community in earnest, there was a run on these once fairly niche products from companies with names like PhoneSoap.

In fact, available models started selling out all over the place, when many people began to recognize for the first time just how much of a disease vector the smartphone could truly be. All of that in mind, the category is a pretty logical next step for the many accessory brands underneath the Zagg umbrella.

Today Mophie and InvisibleShield launched their takes on the category. The products are priced at $80 and $60, respectively, with the key differentiator being the Mophie model’s 10W wireless charging pad. That pad sits on the lid of the product, though, meaning you can use it to charge the handset only once it’s out of the disinfecting bed.

Both sanitize phones up to 6.9 inches using UV-C light, promising to kill up to 99.9% of bacteria. It’s certainly worth noting that, like PhoneSoap, these brands are avoiding claims that their products can kill the novel coronavirus; the jury is still out on their efficacy on that front. In fact, COVID-19 is conspicuously absent from the press material. And even with one of these, I’d still strongly recommend carrying around a pack of antibacterial wipes, if you can still find any.

Both products are available now, through their respective sites.

Demand for fertility services persists despite COVID-19 shutdowns

In 2019, the global fertility services industry was estimated to be worth $14.8 billion with demand driven by the significant growth in the median age of first-time mothers, according to a Research & Markets report.

Gina Bartasi, founder and CEO of NYC-based fertility center Kindbody, has pointed to macroeconomic trends responsible for the industry’s consistent growth, such as the increase in single mothers by choice and the fact that “heterosexual couples are waiting to have children and waiting to get married, and more and more same-sex couples are having children, which is relatively new.”

Regardless of the increasing demand, disasters can disrupt fertility services: On March 17, the American Society for Reproductive Medicine directed U.S.-based fertility clinics to avoid initiating new treatments, push back nonemergency surgeries and shift care to telemedicine.

Now that clinics have reopened, it’s undeniable that COVID-19’s national impact could alter the space, as different types of crises have in the past. Looking back can give us a better understanding of what the future holds.

After the terror attacks on September 11, 2001, a University of Louisville study found that there was “a prompt and significant increase in births and birthrates in the post-9/11 period” in New York City. Relatedly, when Hurricane Katrina hit New Orleans in August 2005 and created the nation’s costliest natural disaster, it was also one of five times since 1987 that frozen embryos were evacuated and protected during a natural disaster.

According to a study by the University of Wisconsin, “following Katrina, displacement contributed to a 30% decline in birth cohort size. Black fertility fell, and remained 4% below expected values through 2010. By contrast, white fertility increased by 5%.” The communities were so ravaged that the area’s Black population has remained substantially smaller.

Use AI responsibly to uplift historically disenfranchised people during COVID-19

One of the most distressing aspects of the ongoing pandemic is that COVID-19 is having a disproportionate impact on communities of color and lower-income Americans due to structural factors rooted in history and long-standing societal biases.

Those most at risk during this pandemic are the 24 million lowest-income workers: the people who have less job security and can’t work from home. In fact, only 9.2% of the bottom 25% have the ability to work from home. Compare that to the 61.5% of the top 25%, and the disparity is staggering. Additionally, people in these jobs typically do not have the financial security to avoid public interaction by stockpiling food and household goods, buying groceries online or avoiding public transit. They cannot self-isolate. They need to venture out far more than other groups, heightening their risk of infection.

The historically disadvantaged will also be hit the hardest by the economic impacts of the pandemic. They are overrepresented in the industries experiencing the worst downturn. The issues were atrocious prior to COVID-19, with the typical Black and Latinx households having a net worth of just $17,100 and $20,765, respectively, compared with the $171,000 held by the typical white household. An extended health and economic crisis will only exacerbate these already extreme disparities.

AI as a beacon of hope

A rare encouraging aspect of the ongoing pandemic response is the use of cutting-edge technology — especially AI — to address everything from supply chains to early-stage vaccine research.

The potential of human + AI exceeds the potential of humans working alone by far, but there are tremendous risks that require careful consideration. AI requires massive amounts of data, but ingrained in that data are the societal imperfections and inequities that have given rise to disproportionate health and financial impacts in the first place.

In short, we cannot use a tool until we know it works and understand the potential for unintended consequences. Some health groups hurried to repurpose existing AI models to help track patients and manage the supply of beds, ventilators and other equipment in their hospitals. Researchers have tried to develop AI models from scratch to focus on the unique effects of COVID-19, but many of those tools have struggled with bias and accuracy issues. Balancing the instinct to “help now” and the risks of “unforeseen consequences” amidst the high stakes of the COVID-19 pandemic is why the responsible use of AI is more important now than ever.

4 ways to purposefully and responsibly use AI to combat COVID-19

1. Avoid delegating to algorithms that run critical systems

Think of an AI system designed to distribute ventilators and medical equipment to hospitals with the objective of maximizing survival rates. Disadvantaged populations have higher comorbidities and thus may be less likely to receive supplies if the system is not properly designed. If these preexisting prejudices are not accounted for when designing the AI system, then well-intentioned efforts could result in directing supplies away from especially vulnerable communities.

Artificial intelligence is also being used to improve supply chains across all sectors. The Joint Artificial Intelligence Center is prototyping AI that can track data on ventilators, PPE, medical supplies and food. When the goal is to anticipate panic-buying and ensure health care professionals have access to the equipment they need, this is a responsible use of AI.

As these examples illustrate, we can quickly arrive at a problematic use case when decision-making authority is delegated to an algorithm. The best and most responsible use of AI is maximizing efficiency to ensure the necessary supplies get to those truly in need. Previous failures in AI show the need for healthy skepticism when delegating authority on potentially life-and-death decisions to an algorithm.

2. Be wary of disproportional impacts and singling out specific communities

Think of an AI system that uses mobility data to detect localized communities violating stay-at-home orders and route police for additional enforcement. Disadvantaged populations do not have the economic means to stockpile food and other supplies or to order delivery, forcing them to go outside. As we mentioned earlier, being overrepresented in frontline sectors means leaving home more frequently. In addition, individuals and families experiencing homelessness could be targeted for violating stay-at-home enforcement. In New York City, police enforcement of stay-at-home directives has disproportionately targeted Black and Latinx residents. Here is where responsible AI steps in. AI systems should be designed not to punish these populations with police enforcement, but rather to help identify the root causes and route additional food and resources. This is not a panacea, but it will avoid exacerbating existing challenges.

Israel has already demonstrated that this model can work. In mid-March it passed an emergency law enabling the use of mobile data to pinpoint the infected, as well as those they had come in contact with. Maccabi Healthcare Services is using AI to identify its most at-risk customers and prioritize them for testing. This is a fantastic example of responsibly repurposing AI: adapting an existing system that was built and trained to identify the people most at risk for flu, using millions of records spanning more than 27 years.

3. Establish AI that is human-centric with privacy by design and native controls

Think of an AI system that uses mobile phone apps to track infections and trace contacts in an effort to curb new infections. Minority and economically disadvantaged populations have lower rates of smartphone ownership than other groups. AI systems should take these considerations into account to avoid design bias. This will ensure adequate protections for vulnerable populations, but also improve the overall efficacy of the system since these individuals may have high human contact in their jobs. Ensuring appropriate track and trace takes place within these populations is critically important.

In the U.S., MIT researchers are developing Private Automatic Contact Tracing (PACT), which uses Bluetooth communications for contact tracing while also preserving individual privacy. If you test positive and inform the app, everyone who has been in close proximity to you in the last 14 days gets a notification. Anonymity and privacy are the biggest keys to responsible use of AI to curb the spread of COVID-19.
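The core idea behind Bluetooth-based designs like PACT (phones exchange rotating random tokens, and exposure matching happens locally on each device) can be illustrated with a toy sketch. This is a deliberately simplified model, not MIT’s actual protocol, which adds cryptographic and timing details omitted here.

```python
import secrets

def new_token() -> str:
    # Phones broadcast short-lived random tokens over Bluetooth;
    # no identity or location is attached to a token.
    return secrets.token_hex(16)

class Phone:
    def __init__(self):
        self.sent = []      # tokens this phone broadcast (kept on-device)
        self.heard = set()  # tokens overheard from nearby phones

    def broadcast(self):
        token = new_token()
        self.sent.append(token)
        return token

    def receive(self, token):
        self.heard.add(token)

def exposed(phone, published_tokens):
    """If a user tests positive and publishes the tokens they sent, every
    other phone checks locally whether it overheard any of them."""
    return not phone.heard.isdisjoint(published_tokens)
```

Because matching happens on each phone against a public list of anonymous tokens, no central server learns who met whom, which is the privacy property the article highlights.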

In India, the government’s Bridge to Health app uses a phone’s Bluetooth and location data to let users know if they have been near a person with COVID-19. But, again, privacy and anonymity are the keys to responsible and ethical use of AI.

This is a place where the true power of human + AI shines through. As these apps are rolled out, it is important that they are paired with human-based track and trace to account for disadvantaged populations. AI allows automating and scaling track and trace for most of the population; humans ensure we help the most vulnerable.

4. Validate systems and base decisions on sanitized representative data

Think of an AI system that helps doctors make rapid decisions on which patients to treat and how to treat them in an overburdened health care system. One such system developed in Wuhan identified biomarkers that correlate with higher survival rates to help doctors pinpoint which patients likely need critical care and which can avoid the hospital altogether.

The University of Chicago Medical Center is working to upgrade an existing AI system called eCART. The system will be enhanced for COVID to use more than 100 variables to predict the need for intubation eight hours in advance. While eight hours may not seem like much, it provides doctors an opportunity to take action before a patient’s condition deteriorates.

But the samples and data sets that systems like these rely on have the potential to produce unreliable outcomes, or ones that reinforce existing biases. If the AI is trained on observations of largely white individuals — as was the case with data in the International Cancer Genome Consortium — how willing would you be to delegate life-and-death health care decisions for a nonwhite patient? These are issues that require careful consideration, and they demonstrate why it is so important to validate not only the systems themselves, but also the data on which they rely.

Questions we must ask

As companies, researchers and governments increasingly leverage AI, a parallel discussion around responsible AI is necessary to ensure benefits are maximized while harmful consequences are minimized. We need better guidelines and assessments of AI around fairness, trustworthiness, bias and ethics.

There are dozens of dimensions we should evaluate every use case against to ensure it is developed in a responsible manner. But, these four simple questions provide a great framework to start a discussion between AI system developers and policy makers who may be considering deploying an AI solution to combat COVID-19.

  • What are the consequences if the system makes a mistake? Can we redesign the system to minimize this?
  • Can we clearly explain how the AI system produced specific outcomes in a way that is understandable to the general public?
  • What are potential sources of bias — data, human and design — and how can they be minimized?
  • What steps can be taken to protect the privacy of individuals?

When to use AI solutions and tools

Each of these questions will apply in different ways to particular use cases. A natural language processing (NLP) system sifting through tens of thousands of scientific papers to focus the search for a COVID-19 vaccine poses no direct threat of harm to individuals and performs a task faster than an army of research assistants ever could. Case in point: In April, the Harvard T.H. Chan School of Public Health and the Human Vaccines Project announced the Human Immunomics Initiative, which leverages AI models to accelerate vaccine development.

This is a global effort, with scientists around the world working together to expedite drug discovery and defeat COVID-19 through the use of AI. From the aforementioned work in the U.S. all the way to Australia, where Flinders University is leveraging Oracle cloud technology and vaccine technology developed by Vaxine to develop promising vaccine candidates, we can see AI being used for its most ethical purpose: saving human lives.

Another use case is the omnipresent issue facing us during this pandemic: the dissemination of misinformation across the planet. Imagine trying to manually filter the posts of Facebook’s 1.7 billion daily users and scan for misinformation about COVID-19. This is an ideal project for human + AI — with humans confirming cases of misinformation flagged by AI.
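The human + AI division of labor described here can be sketched as a simple triage function: a model scores each post, and only posts above a threshold are routed to human moderators for confirmation. The classifier and the 0.9 threshold below are illustrative assumptions, not any platform’s real pipeline.

```python
def triage(posts, classifier, threshold=0.9):
    """Split posts into a human-review queue and a pass-through list.

    `classifier` is any callable returning a misinformation score in [0, 1].
    """
    for_review, passed = [], []
    for post in posts:
        if classifier(post) >= threshold:
            for_review.append(post)  # AI flags; a human moderator confirms
        else:
            passed.append(post)
    return for_review, passed

# Toy classifier standing in for a trained model:
score = lambda post: 1.0 if "miracle cure" in post.lower() else 0.0
flagged, ok = triage(["Wash your hands often.",
                      "This miracle cure stops COVID-19!"], score)
```

Only the flagged queue reaches human reviewers, which is what makes the workload tractable at the scale the paragraph describes.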

This use case is relatively low risk, but its ultimate success depends on human oversight and engagement. That’s even more so the case in the high-risk use cases that are grabbing headlines amidst the COVID-19 pandemic. Human + AI is not just a safeguard against a system gone off the rails, it’s critical to AI delivering meaningful and impactful results as illustrated through earlier examples.

We need to classify use cases into three buckets to guide our decision making:

1. Red

  • Use case represents a decision that should not be delegated to an AI system.
  • Using an AI system to decide which patients receive medical treatment during a crisis. This is a case where humans should ultimately be making the decisions, given their life-and-death impact. This has already been recognized by the medical community, where ethical frameworks have been developed to support these very types of decisions.

2. Yellow

  • Use case could be deployed responsibly, but it depends upon the design and execution.
  • Using an AI system to monitor adherence to quarantine policies. This is a case where use may be acceptable depending on the design and deployment of the system. For example, using the system to deploy police to neighborhoods to “crack down” on individuals not adhering to quarantine policies would be problematic. But deploying police to these neighborhoods to understand why quarantine is being broken, so policy makers can better address citizen needs, would be legitimate — provided the privacy of individuals is protected.

3. Green

  • Use case is low risk and the benefits far outweigh the risks.
  • Content filtering on social media platforms to ensure malicious and misleading information regarding COVID-19 is not shared widely.

We must ask our four questions and deliberately analyze the answers we find. We can then responsibly and confidently decide which bucket to put the project into and move forward in a responsible and ethical manner.

A recent U.S. example

We recently created Lighthouse, a new dynamic navigation cockpit that helps organizations capture a holistic picture of the ongoing crisis. These “lighthouses” are being used to illuminate the multiple dimensions of the situation. For example, we recently partnered with an American city to develop a tool that predicted disruptions in the food supply chain. One data source was based on declines in foot traffic in and around distribution centers. Without accessing any personally identifiable information (PII) — and therefore preserving individual privacy — it shows which parts of the city were most likely to suffer shortages, enabling leaders to respond preemptively and prevent an even worse public health crisis.

This is an easily duplicated process that other organizations can follow to create and implement responsible AI to help the historically disenfranchised navigate and thrive during the age of COVID-19.

Moving forward

When confronting the ethical dilemmas presented by crises like COVID-19, enterprises and organizations equipped with responsible AI programs will be best positioned to offer solutions that protect the most vulnerable and historically disenfranchised groups by respecting privacy, eliminating historical bias and preserving trust. In the rush to “help now,” we cannot throw responsible AI out the window. In fact, in the age of COVID-19, it is more important than ever before to understand the unintended consequences and long-term effects of the AI systems we create.