The case for corporates to fill the seed vacuum

Over the past five years, there has been a clear drop in seed investing. Between 2010 and 2014 there was an influx of “micro” VCs, perfectly equipped to deploy seed capital. Since then, we have seen a gradual decline.

One key reason is that those micro VCs were successful. It turns out that investing at the seed stage is a really strong strategy for generating returns. Their portfolios performed very well, and as a result these firms were able to raise much larger second and third funds.

Unfortunately, once your fund size exceeds $75 million, I’d argue, it is very difficult to focus on the seed stage. It is simply too difficult to identify enough quality opportunities to deploy all that capital. Instead, you need to write bigger checks, and to do that you start to focus on later rounds. This leaves a gap at the seed stage, which, I’d argue, is the most exciting stage.

Because of that, I believe there is an incredible opportunity for corporate venture funds to fill this gap. We at dunnhumby have invested here successfully for years. And by successfully, I don’t mean just financially, though we have returned far more than we have invested; I also mean strategically. There are incredible strategic benefits to investing at the seed stage.

Innovation

The seed stage is where the greatest innovation is happening. We invest to inform our own strategic direction and to identify new technologies and business models before they impact our own business. We also use it to identify, and embed with, emerging companies that could one day be great partners.

In the recent surge of corporate innovation efforts, venturing is not leveraged nearly enough. There are few better ways to get exposure to innovation than aligning with a company that innovates daily as a means of survival. There is no better inspiration than watching a team of two grow into a team of 100-plus, often pulling the slower-moving corporate along for the ride.

Collaboration

There is a flexibility and eagerness to early-stage companies that allows for greater collaboration. They are not so large as to have built out their own bureaucracy, and they are actively willing to work together. For many, that is why they take money from a strategic in the first place: the hope that the relationship brings more than just capital.

In many cases, these synergies do not emerge right away. However, there is a closeness that forms between the two companies that begins to bear fruit, from my experience, about one year post-investment.

For the startup, there is increased exposure to the investor’s client base and resources. For the corporation, there is firsthand insight into the success of the startup’s business model, technology and market. From this, partnership and acquisition opportunities emerge.

M&A and partner pipeline

Because of the strategic nature behind these investments, they also act as an incubator for future partnerships and acquisitions.

By aligning at the seed stage, you have the unique opportunity to watch the company grow. What is the market demand, and is there a chance to enter a new space before others have realized the opportunity? Often, we will take a board or board-observer position with the company, which brings even greater insight into its performance, as well as the potential upside of an even closer relationship.

Nearly as important, you gain greater insight into the company’s culture and its alignment with your own. In most cases, these discussions emerge from early collaborations, where your broader teams have the opportunity to interact and form a working culture of their own. This cultural alignment increases the likelihood of a successful outcome, whether that is a partnership or a full acquisition.

Value

Participating at the seed stage does not require significant capital contributions. For one later-stage investment, you could make three to four seed investments, which increases your exposure to the above items and drastically reduces the financial impact on your balance sheet. If done right, within four to five years, the fund should contribute much more than it costs.

Does this mean that the corporate should finance the entire seed round? Not typically. In fact, for almost all of our investments to date, we have participated as part of a syndicate of investors, often made up of other corporate investors (commonly referred to as “strategics”). This reduces both the risk and the financial burden for each investor at this stage. The goal is to get a seat at the table. For strategic purposes, there is little difference between owning 5% and 20% at this stage. Once the company grows larger, this dynamic will change.

Conclusion

At dunnhumby we invest in less than 2% of the companies we meet with. We are diligent about where we invest. However, I’d argue that the 98% we pass on are nearly as important. Because we have an investment arm, we are exposed to incredible innovation across a range of industries that most companies without a seed investing strategy never see. At least, not until it is too late. Capital gives us a seat at the table.

These conversations provide signals into emerging trends in our industry, as well as our clients’ industries. When we pass, the relationship often does not end; many times it leads to partnership discussions, referrals and introductions that are equally beneficial to the startup.

The opportunity is there. Corporations just need to seize it.

Preparing for a future of drone-filled skies

The last few months have seen an escalating series of incidents in which the harmful elements of drones have loomed large in the public eye. In April, rumors of a coup in Saudi Arabia flared after a recreational drone was shot down when flying into an unauthorized zone in the capital. August saw a drone attack on the president of Venezuela. In late December, roughly 1,000 flights carrying 140,000 passengers were grounded over the course of 36 hours at Gatwick Airport in the United Kingdom. In the months since, a number of airports, ranging from Dublin to Dubai, have experienced delays on account of drone activity. The Gatwick incident alone is estimated to have cost the aviation industry as much as $90 million.

While these are spectacular incidents, they speak to the growing ubiquity of drones. Perhaps even more telling than those events were the efforts that authorities put into air security for the Super Bowl. In the days leading up to the event, PBS reported a “deluge” of drones despite a ban on their presence in the airspace around the stadium.

These incidents underline the conclusion that mapping the skies — as well as policing them — is moving from the theoretical to the practical. Just as Google took the noise of the early internet and arranged it into something comprehensible and navigable, so we need to organize and understand the sky as drones become a growing part of civilian life.

Most of the examples I outlined above are “bad drone” problems — problems related to drones that might be hostile — but understanding what entities are up in the air is critical for “good drone” problems too. While drones have risen to prominence primarily as threatening entities, they’ll soon be central in more benign contexts, from agriculture and weather forecasting to deliveries and urban planning. We could soon pass a tipping point: In early 2018, the Federal Aviation Administration (FAA) announced that its drone registry had topped 1 million drones for the first time. While most of those were owned by hobbyists, the agency expects commercial drone numbers to quadruple by 2022. At some point, it’s going to be vital that we have systems for ensuring “good drones” don’t crash into each other.

For comparison, the FAA reports that in the U.S. there are around 500 air traffic control towers coordinating 43,000 airplane flights a day, with up to 5,000 planes in the sky at any one moment. Some 20,000 airway transportation system specialists and air traffic controllers spend their professional lives keeping those 5,000 planes from bumping into each other. Consider, then, the effort and resources required to prevent potentially hundreds of thousands, or even millions, of concurrently airborne drones from colliding. This is a big problem with real stakes.

Countless companies have emerged in recent years to tackle the challenge of organizing this ecosystem. Significant investor capital has gone into different approaches to making sense of a sky filled with drones, from point-sensor solution providers such as Echodyne and Iris Automation to drone management systems such as Kittyhawk, AirMap and Unifly. “Bad drone” solutions ranging from lasers and ground-based bazookas to malware and enormous net shields have cropped up.

The most exciting approach, however, is a unified one that addresses both “good drone” and “bad drone” challenges; one that maps well-intentioned drones and defends against nefarious ones. In this case, knowledge is the first step to understanding, which enables suitable action. In practice, that means we need to start with a firm data layer — typically gathered through a radar detection system. That data layer allows practitioners to determine what and where entities are in the air.

With that data in hand, understanding the nature of those entities becomes possible — specifically, if they’re benign or malicious. That designation enables the final step: action. For benign drones, that means routing them to the right destination or ensuring they don’t crash into other drones. In the case of malicious drones, action means mobilizing one of the exciting solutions we mentioned above — malware, lasers or even defensive drones to neutralize the potential threat.
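
To make that detect-classify-act pipeline concrete, here is a deliberately toy sketch in Python. The class names, thresholds and rules are invented for illustration; no real counter-drone system classifies traffic this simply.

```python
from dataclasses import dataclass
from enum import Enum

class Intent(Enum):
    BENIGN = "benign"
    MALICIOUS = "malicious"
    UNKNOWN = "unknown"

@dataclass
class Track:
    """One detected airborne object produced by the radar/sensor data layer."""
    track_id: str
    position: tuple        # (lat, lon, altitude_m)
    speed_mps: float
    registered: bool       # e.g. matches a registry entry or filed flight plan

def classify(track: Track) -> Intent:
    """Toy classification step; real systems fuse registry data, RF signatures,
    geofences and behavior models rather than applying a single rule."""
    if track.registered:
        return Intent.BENIGN
    return Intent.UNKNOWN if track.speed_mps < 5 else Intent.MALICIOUS

def act(track: Track, intent: Intent) -> str:
    """Toy action step: route benign traffic, escalate everything else."""
    if intent is Intent.BENIGN:
        return f"route {track.track_id} and deconflict it with nearby traffic"
    return f"alert operators and queue countermeasures for {track.track_id}"
```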

A full-stack approach is helpful for formulating a seamless response, but the most important element is the data layer. It’s still early days in the mainstreaming of drones, but there’s great value in getting a head start on creating the infrastructural and security framework for when that moment arrives. Gathering data now gives us more of a baseline for drones in the future. It also allows new entrants to offer solutions on top of that foundation. Moreover, there are strong positive externalities at work here: As with cellular networks 25 years ago, the decision of early movers to adopt detection and defense systems benefits others who are slower to act. When Gatwick puts that infrastructure in place, Heathrow benefits.

Ultimately, there are as many — if not more — reasons to get excited about solving the problem of drone-filled skies as there are reasons to be concerned about their negative implications. Creating the rails for what Goldman Sachs estimated will be a $100+ billion market is a tremendous opportunity. The sooner we plan for the positive implications of drones in addition to their malicious potential, the better.

Three ‘new rules’ worth considering for the internet

In a recent commentary, Facebook’s Mark Zuckerberg argues for new internet regulation starting in four areas: harmful content, election integrity, privacy and data portability. He also advocates that government and regulators “need a more active role” in this process. This call to action should be welcome news as the importance of the internet to nearly all aspects of people’s daily lives seems indisputable. However, Zuckerberg’s new rules could be expanded, as part of the follow-on discussion he calls for, to include several other necessary areas: security-by-design, net worthiness and updated internet business models.

Security-by-design should be an equal priority with functionality for the network-connected devices, systems and services that comprise the Internet of Things (IoT). One estimate suggests that the number of connected devices will reach 125 billion by 2030, increasing 50% annually over the next 15 years. Each component of the IoT represents a possible vulnerability and point of entry into the system. The Department of Homeland Security has developed strategic principles for securing the IoT. The first principle is to “incorporate security at the design phase.” This seems highly prudent and very timely, given the anticipated growth of the internet.

Ensuring net worthiness — that is, that our internet systems meet appropriate and up-to-date standards — seems another essential issue, one that might be addressed under Zuckerberg’s call for enhanced privacy. Today’s internet is a hodge-podge of different generations of digital equipment, unclear standards for what constitutes internet privacy and growing awareness of the likely scenarios that could threaten networks and users’ personal information.

Recent cyber incidents and concerns have illustrated these shortfalls. One need only look at the Office of Personnel Management (OPM) hack that exposed the private information of more than 22 million government civilian employees to see how older methods for storing information, lack of network monitoring tools and insecure network credentials resulted in a massive data theft. Many networks, including some supporting government systems and hospitals, are still running Windows XP software from the early 2000s. One estimate is that 5.5% of the 1.5 billion devices running Microsoft Windows are running XP, which is now “well past its end-of-life.” In 2016, a distributed denial of service attack against the web security firm Dyn exposed critical vulnerabilities in the IoT that may also need to be addressed.

Updated business models may also be required to address internet vulnerabilities. The internet has its roots as an information-sharing platform. Over time, a vast array of information and services have been made available to internet users through companies such as Twitter, Google and Facebook. And these services have been made available for modest and, in some cases, no cost to the user.

This means that these companies are expending their own resources to collect data and make it available to users. To defray the costs and turn a profit, the companies have taken to selling advertisements and user information. In turn, this means that private information is being shared with third parties.

As the future of the internet unfolds, it might be worth considering what people would be willing to pay for access to traffic cameras to aid commutes, social media information concerning friends or upcoming events, streaming video entertainment and unlimited data on demand. In fact, the data that is available to users has likely been compiled using a mix of publicly available and private data. Failure to revise the current business model will likely only encourage more of the same concerns with internet security and privacy issues. Finding new business models — perhaps even a fee-for-service for some high-end services — that would support a vibrant internet, while allowing companies to be profitable, could be a worthy goal.

Finally, Zuckerberg’s call for government and regulators to have a more active role is imperative, but likely will continue to be a challenge. As seen in attempts at regulating technologies such as transportation safety, offshore oil drilling and drones, such regulation is necessary, but normally occurs only once potential for harm becomes apparent. The recent accidents involving the Boeing 737 Max 8 aircraft could be seen as one example of the importance of such government regulation and oversight.

Zuckerberg’s call to action suggests a pathway to move toward a new and improved internet. Of course, as Zuckerberg also highlights, his four areas would only be a start, and a broader discussion should be had as well. Incorporating security-by-design, net worthiness and updated business models could be part of this follow-on discussion.

A beautiful duopoly

One hundred and fifty years before John Nash received his Nobel prize, a train left Versailles for Paris. On board were two brothers returning home from visiting friends. Always a pleasant journey through the French countryside, this one was, unfortunately, in peril. The train crashed, and one of the two brothers, Joseph, was severely injured, suffering a broken bone and other fractures. Joseph Bertrand was 20 years old that day and already a professor of mathematics, holding a doctorate he had received at age 17 for a thesis in thermodynamics.

Joseph would later challenge another French mathematician, Antoine Augustin Cournot, by reworking an economic situation in which two companies dominate a market, now formally known as the Bertrand duopoly. He proposed that in a state of duopoly, in which the players offer a non-differentiated product and do not cooperate, customers buy from whichever one sells it more cheaply. The Bertrand duopoly is a harsh situation in which prices eventually converge to costs, making it economically impossible for its players to exist in the long run. It was a time when only profitable companies existed.
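
For readers who want the textbook version of that undercutting logic, here it is under the standard simplifying assumptions: identical constant marginal cost c, no capacity constraints, and demand D(p) that goes entirely to the cheaper firm.

```latex
% Bertrand duopoly payoff for firm i, given prices p_i and p_j:
\pi_i(p_i, p_j) =
\begin{cases}
(p_i - c)\, D(p_i)               & \text{if } p_i < p_j \\
\tfrac{1}{2}\,(p_i - c)\, D(p_i) & \text{if } p_i = p_j \\
0                                & \text{if } p_i > p_j
\end{cases}
% Any common price above c invites undercutting by an arbitrarily small amount,
% so the only equilibrium is p_1 = p_2 = c: prices converge to cost, profits to zero.
```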

Bertrand’s work was one of the foundations upon which John Nash would later build. Where Bertrand defined a cutthroat competition, Nash recognized that competitors don’t always know the other’s cost structure or how the other will respond to their actions, and therefore keep making tactical decisions in their businesses that result in certain payoffs. He stated that there exists a profile of strategies such that each competitor’s strategy is an optimal response to the other’s: a point of balance at which neither competitor has anything to gain by changing strategies. That point is called the “Nash Equilibrium.”
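
Stated symbolically for two players, the condition is simply that each strategy is a best response to the other:

```latex
% (s_1^*, s_2^*) is a Nash equilibrium when neither player can improve their
% payoff u_i by deviating unilaterally:
u_1(s_1^*, s_2^*) \geq u_1(s_1, s_2^*) \quad \text{for every } s_1,
\qquad
u_2(s_1^*, s_2^*) \geq u_2(s_1^*, s_2) \quad \text{for every } s_2.
```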

John Nash shared the Nobel prize in 1994 with another brilliant mind: John Harsanyi. Harsanyi examined the uncertainty around each party’s knowledge and understanding of the other’s decisions and how beliefs can be embedded into a framework of game theory. These games are called games of incomplete information. Harsanyi said that the payoff structures are not always known and come with a certain probability distribution so one should take probability into account when making a tactical economic move and calculating the results.
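
In its simplest form, Harsanyi’s adjustment is that each player evaluates a strategy by its expected payoff over the possible “types” of the opponent (for example, cost structures), weighted by the player’s beliefs:

```latex
% Player i does not know opponent j's type t_j, only a belief p(t_j), so the
% quantity to compare across strategies s_i is the expected payoff:
\mathbb{E}\big[u_i(s_i)\big] = \sum_{t_j} p(t_j)\; u_i\big(s_i,\, s_j(t_j),\, t_j\big)
```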

From Bertrand to Nash to Harsanyi, many companies have struggled with competition, conditions of duopoly, price pressures and survival. Some survived, some did not. Others reached a profitable state of Nash equilibrium and still exist to this day.

Fast-forward to today… here come Uber and Lyft.

Consider a hypothetical situation where Lyft runs a promo in a specific market. Doing so will impact Lyft’s market share, total revenue and overall profits. It will also impact Uber’s market share and total revenue in that market, but not its profit per ride, because Uber has not yet responded to the move by adjusting its price. The same applies to Lyft if Uber runs a promo. Each will choose to respond or not based on its beliefs about the payoff it will receive. They will keep playing this game until they conclude there is nothing to gain by offering more promos, at which point they will have reached Nash equilibrium.
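
As an illustration only (the payoff numbers below are invented and bear no relation to Uber’s or Lyft’s actual economics), a few lines of Python can check which promo/no-promo combinations are Nash equilibria by testing whether either player gains from a unilateral deviation:

```python
# Hypothetical one-market "promo game": payoffs are (Uber, Lyft) profits in
# $M per week. The numbers are made up purely for illustration.
ACTIONS = ["no_promo", "promo"]
PAYOFFS = {
    ("no_promo", "no_promo"): (4, 3),
    ("no_promo", "promo"):    (1, 5),
    ("promo",    "no_promo"): (6, 0),
    ("promo",    "promo"):    (2, 1),
}

def is_nash(uber, lyft):
    """True if neither player can improve by unilaterally switching actions."""
    u, l = PAYOFFS[(uber, lyft)]
    best_uber = max(PAYOFFS[(a, lyft)][0] for a in ACTIONS)
    best_lyft = max(PAYOFFS[(uber, a)][1] for a in ACTIONS)
    return u >= best_uber and l >= best_lyft

for uber in ACTIONS:
    for lyft in ACTIONS:
        if is_nash(uber, lyft):
            print(f"Nash equilibrium: Uber={uber}, Lyft={lyft}, payoffs={PAYOFFS[(uber, lyft)]}")
```

With these made-up numbers, the only equilibrium is both firms running promos, even though both would be better off if neither did, which is exactly the dynamic described above.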

Harsanyi’s work is quite relevant here because the two companies have a reasonably good idea about the outcome of each action and about each other’s costs, but they do not know them precisely; they compete with a certain level of belief about each other’s preferences and payoffs. Based on those beliefs, each company has to assign a probability to the outcomes of its actions and to the responses of its competitor.

We must also note that in the very beginning, competitors know less about each other, but the longer they play the game, the more they learn and the more they adjust their moves. Going public brings more transparency about each company, so they will learn even more. The more each competitor knows about the other, the more informed their decisions and responses will be, so the rideshare game should ultimately reach Nash equilibrium.

So, which one will prevail? At this point, there are a number of questions one must ask as an investor. Are Uber and Lyft in Nash equilibrium today? If they are, and that state means they’re losing money every day, they will ultimately deplete all their reserves. If not, what would that final state of equilibrium be? Would it be a profitable state for these companies and their investors? In a state of Nash equilibrium, what price would each company charge its customers in a given market?

Secondly, do Uber and Lyft exist in a Bertrand duopoly? Their products are identical. One driver can drive for both companies in the same car, and many do. Bertrand would be baffled at how fierce this competition is. In his mind, price wars would end when price equals cost, leaving no profit for either party and no economic interest in continuing their businesses. In this case, these companies have convinced investors to supply massive amounts of outside capital so that they can afford to charge prices below their cost, operating at a deficit in the hope of beating the competition and, at some point, reaching profitability.

There are two things companies can do to escape a Bertrand duopoly: come up with a lower cost structure or differentiate the product. If one can achieve a lower cost structure, such as driverless cars, and the other does not, that one wins. If one introduces a new product, such as bikes or scooters, and breaks into a brand-new market, it escapes Bertrand and gains an edge. But as long as the companies maintain non-differentiated products, according to Bertrand, customers will go with the cheaper of the two, prices will go lower, drivers will earn less and the economic benefits will erode.

Bertrand assumed a very commoditized world and did not take into consideration the softer elements of competition. In the absence of cost-cutting solutions such as driverless cars, attributes such as “company culture” come into play. If two companies charge the same price, would consumers split 50-50 like Bertrand said, or would they pick the company they think is “nicer?” Or, what is the premium or discount attributable to “niceness” of companies?

In the war between Uber and Lyft, or in any other duopoly, the ability of companies to make calculated decisions at times of competition remains a vital piece of the puzzle. The strategy comes in two steps. First, all decisions must be made at optimal levels reaching a state of Nash equilibrium. At this point, there are no further decisions to make that’ll provide an additional economic benefit to either party. Once that’s done, then differentiation efforts begin so that the parties may escape Bertrand. And those happen on two fronts: cost and product differentiation. It’s certainly a complex task and both companies have smart teams in place to make the calculations. It will be exciting to watch the battles in the years to come.

(If you’re an investor, would it make sense to invest in both companies in a Bertrand duopoly? Perhaps that’s like betting on both black and red in a game of roulette. Remember, if the ball lands on zero, both bets lose!)

Disclaimer: Venture Science is a Lyft investor.

Bubble-driven boom in M&As hides steep costs long-term

Driven by ultra-easy central bank policy, global merger and acquisition activity is exploding. The value of transactions in the first eight months of 2018 reached $3.3 trillion worldwide, a 39% increase from 2017, and the market can expect another record-setting year in 2019. What does this mean? The data suggests that optimism about the efficacy of M&As has never been higher. Businesses are increasingly looking to M&As as the way to grow.

Growth is good, but growth can also be cancerous. Financial and strategic calculus may suggest a perfect fit between two companies, but that calculus is mostly irrelevant to the long-term success of an M&A transaction. What looks on paper like a perfect fit can end up in protracted conflict arising from a mismatch of cultures, values and ideologies. Those intangible factors are often obvious only in hindsight, and it is tempting for decision makers to ignore them precisely because of their intangibility; after all, if it is not part of the model, it cannot possibly exist, right?

Most acquisitions fail. That is the sobering reality. 

Issues of high executive turnover, labored transition periods and lowered production standards arise when businesses either jump into a deal too quickly or leave internal disagreements unchecked for too long. It is not a secret that M&As have drawbacks, and a lot of ink has been spilled outlining the potential pitfalls of mergers and acquisitions. For a lot of companies, staying private and addressing issues internally is the best path to steady growth. It may not make for a bold headline or improve a company’s financial valuation, but there are real benefits to avoiding M&As altogether.

Facebook’s WhatsApp and Instagram acquisitions will rank among the most successful in all of tech. Yet, even with that success, issues of culture and values have come to the fore longer term.

Late-stage executive churn

In 2003, the Harvard Business Review looked at executive churn within targeted companies. It reported, “On average, about a quarter of the executives in acquired top management teams leave within the first year, a departure rate about three times higher than in comparable companies that haven’t been acquired. An additional 15% depart in the second year, roughly double the normal turnover rate.”

Upon further research, the Harvard Business Review survey found that “executives continued to depart at twice the normal rate for a minimum of nine years after the acquisition.” If we look at a company like Facebook, the Harvard study’s churn timeline doesn’t seem so far-fetched.

Back in 2012, Mark Zuckerberg was unjustly mocked for what was then an unthinkable $1 billion bid to buy Instagram. Five years later, Instagram was seen as perhaps Facebook’s most successful acquisition. Then, amid disagreements with Facebook and the urge to start something new, Kevin Systrom and Mike Krieger, the co-founders of Instagram, exited the company at the end of last year. Nicole Jackson Colaco, Instagram’s director of Public Policy, left the company in early 2018. Around the same time, Keith Peiris, Instagram’s AR/Camera product lead, moved on as well. According to TechCrunch, “Instagram’s COO Marne Levine, who was known as a strong unifying force, went back to lead partnerships at Facebook. Without an immediate replacement named, Instagram started to look more like just a product division within Facebook.”

Loss of autonomy — and even the perceived loss of autonomy — can be a prime driver of executive churn at targeted companies. In 2018, Facebook also lost Jan Koum, a board member and the co-founder of WhatsApp, the company Zuckerberg acquired in 2014 for $19 billion. Many speculated that Koum’s departure came after concerns about data privacy and Facebook’s advertising model. In any case, Koum’s departure was born of concern for his company’s ability to function autonomously within Facebook — a concern, we’re learning, that was justified.

What we see here is the exodus of key decision-makers at two targeted companies. With Instagram and WhatsApp, Facebook is now left to move these properties forward without the help of critical executives who know the products they created more intimately than their acquirer ever could. It’s yet to be seen how much and in what ways these personnel changes will hurt Facebook’s bottom line. Facebook is still reporting substantial revenue growth year-over-year, but these recent departures make for a cloudier outlook. Keep in mind that WhatsApp and Instagram would count as major successes.

Righting the ship

Fortune ran an article in 2014 outlining some of the problems with acquisitions. One of the companies they reported on was Aptean, a roughly 1,500-person business software company formed in 2012 from a merger of CDC Software and Consona. Both CDC Software and Consona were the product of several previous acquisitions. The company had become a daisy chain of acquired businesses strung together under one name.

According to Aptean’s own chief architect, “The result was 30 companies that were really never integrated with each other. We had 30 vertically organized separate companies doing their own things, with their own tools. Everything from HR to software delivery and launch was in the hands of the product teams. There were attempts to try and solve that but there was really no interest.”

Aptean, like so many other companies, was not prepared for the herculean task of retraining and acclimating hundreds of workers. It can take years to onboard new teams, requiring long adjustment periods for employees who need to learn new systems and management styles. According to Forbes, “Worker experiences can vary dramatically even if values are aligned. You can speed up assimilation with focus, resources, support, communication and transparency, but it still takes time.”

Acquisitions are often initiated to solve a problem then and there, so long acclimation periods require time most businesses don’t have or are unwilling to give. Aptean was willing to put in the work to shore up foreseeable issues that come from a business model built on mergers and acquisitions. If there’s one thing to learn from Aptean it’s that early M&A struggles can be managed, but it takes recognition at the point of conflict to enact a plan to remedy the situation.

One fairly recent M&A that has received a lot of attention is Amazon’s purchase of Whole Foods. If we look at Whole Foods one year after the acquisition, a narrative familiar from Facebook and Aptean arises, only now Amazon and Whole Foods have the added challenge of competing in the retail space while maintaining customer satisfaction.

Growing pains

Similar to Instagram and WhatsApp, Whole Foods was a big fish in a big pond that has been swallowed by a blue whale. To what end? As The Wall Street Journal points out, “More than a dozen executives and senior managers have left since Amazon acquired Whole Foods last year, according to former employees and recruiters steering them to new jobs.”

There appears to be little harmony between Amazon and Whole Foods right now, and the bruises are already showing. Whole Foods may be reporting a 19% rise in sales year-over-year, but customers are complaining about the quality of its produce. Businesses are wary of steep price hikes for prime shelving space, and perhaps most concerning of all, Amazon — the blue whale — isn’t getting the return on its investment.

Late last year, Forbes ran an article about the Amazon-Whole Foods deal, writing, “Amazon, even after acquiring Whole Foods for $13.7 billion in 2017 and offering two-hour grocery delivery service, is finding little success in the grocery business.” It may be that Amazon’s plans for Whole Foods are far-reaching and require several years to fall into place, but just like Facebook, the company is encountering problems now that if gone unaddressed could jeopardize the viability of the acquisition. Bloomberg reported that, “The number of Amazon Prime members who shop for groceries at least once a month declined in 2018 compared with 2017… The drop was surprising given the company’s Whole Foods investment and expansion of two-hour delivery service Prime Now.”

In the short term, we see that shoppers at Whole Foods are unhappy, vendors are feeling pinched and Amazon is losing ground to Walmart, Kroger and Target (businesses with more physical stores to service online orders). Looking ahead, Amazon’s plans for Whole Foods are ambitious, and with proper management of these early issues, this acquisition could prove beneficial to both companies, but only time will tell.

The perks of going it alone

An overwhelming majority of companies that engage in M&As are public. The reason is that public companies are accountable to their shareholders, who demand revenue growth year-over-year. The fastest way for a business to demonstrate growth and reinvest capital is to acquire another company. When financial valuations, shareholders and exit strategies are top of mind for a business, little attention is paid to company culture.

Now consider a private company that avoids M&As. Over time, that company can benefit immensely from its autonomy. Money that would have otherwise been used to buy a competing business can be reinvested into R&D and far-reaching growth projects that may not suit the revenue timeline of a shareholder. Executive turnover is lower, which leads to lower churn company-wide. These benefits contribute to company culture. A good company culture means that employees stay longer and are given the opportunity to work on projects that excite them. Good company culture is becoming harder to find as businesses increasingly turn to M&As to solve their problems.

Look before you leap

Ultimately, expectations and creative control have always loomed large over the fate of any merger or acquisition. It is natural for a business to want to absorb the brain trust of a competing company. Buying out a business to integrate its products into your suite can be a sound financial practice as well. But when things go south — whether through executive churn at the targeted company or problems with integration — people rarely point to the baked-in complications of M&As as a culprit.

With each acquisition, a business may be forfeiting a part of its core DNA. There are issues of long-term employee retention and ideological compatibility that weigh heavy on any M&A. What’s more, acquisitions can require 10-year implementation plans (or longer), but with such a high turnover rate, it becomes incredibly difficult to make the transition work.

In the abstract, warning against these issues can come off as patronizing. But with this year expected to bring more M&A activity than 2018, the best way for businesses to assess the merits of a merger or acquisition tomorrow is to study the troubles that have befallen so many high-profile companies today.

Solving tech’s stubborn diversity gaps

Twenty years after Jesse Jackson first took aim at tech employers, Silicon Valley’s enduring diversity gaps remain a painful reminder of its origins as a mostly white boys’ club.

Sadly, little has changed in the decades since the campaign first made headlines. Today, just 7.4 percent of tech industry employees are African-American, and 8 percent are Latinx. Workers at Google, Microsoft, Facebook and Twitter — according to those companies’ own reports — were just 3 percent Hispanic and 1 percent black in 2016.

In some ways, tech’s equity gaps reflect a simple supply-and-demand imbalance. But it is an imbalance with artificial constraints: while Black and Hispanic students now earn computer science degrees at twice the rate at which they are hired by leading tech companies, they remain all but invisible to most recruiters.

The problem stems from the fact that tech employers tend to recruit from a tiny subset of elite U.S. colleges, which means they may never come into contact with, for example, the 20 percent of black computer science graduates who come from historically black colleges and universities. Thousands of talented candidates are overlooked each year because they graduate from less-selective public universities, minority-serving institutions or women’s colleges — schools that exist far outside the elite network where tech employers recruit.

As a result, the recruiting practices of Silicon Valley actually compound the structural race and economic inequities that are endemic at every step of the education-to-career ladder. The number of segregated schools in the United States has doubled over the past 20 years. Poor and minority students often lack SAT and ACT test preparation, college advising services and after-school or extracurricular options. Just 3 percent of the students at the most competitive colleges are from the lowest economic quartile. And even those who make their way through the admissions industrial complex face college-to-career barriers like unpaid internships, which are more than many less-affluent students can endure.

Failure to broaden their aperture for talent means that even the best-intentioned diversity initiatives leave companies competing for the tiny pool of engineers of color who graduate from the top programs.

To move the needle on diversity, employers must move beyond filtering the outputs of top computer science programs and focus on changing the inputs. They must invest in building industry-aligned programs at colleges and universities that are attended by more diverse students but may lack the know-how to build — and keep current — curricula that prepare students to thrive in an increasingly dynamic tech industry. They can partner with institutions that would otherwise fall into the well-worn traps of academia: teaching theory without application, or relying on dated practices that leave graduates unprepared for the labor market.

A growing number of employers have begun to take such an approach, partnering with institutions that harbor underrepresented talent to transform their computer science programs.

Facebook has partnered with institutions, including the City College of New York, to create industry-relevant courses, and committed to funding the training of 3,000 Michigan workers for jobs in digital marketing. Last year, Facebook invested $1 million in an effort to teach computer science to more women and underrepresented minorities.

In 2015, Intel announced a $300 million effort to diversify its workforce by 2020. Since then, the company has launched a $4.5 million program to help STEM students at historically black colleges stay on track. In 2017, Howard University opened a campus at Google’s headquarters, offering students a three-month program in which they can receive instruction from both Howard faculty and engineers at Google. A year later, Howard leaders said the partnership had helped lead to a 40 percent increase in computer science enrollment at the university.

Inequities have plagued the tech world since Ada Lovelace coded the first computer program in 1842 — only to lose her place in the textbooks to the men who capitalized on her insights while denying her contributions. Today, fluency in high-tech skills and knowledge is no longer controlled by an elite few. Opportunity, however, can remain stubbornly fixed.

Top tech companies have already taken the first step by activating the search for underrepresented talent. The next step is to broaden their search beyond elite campuses and invest in the education of underrepresented students.

It will take wholesale collaboration between employers and colleges to provide meaningful, relevant computer science education to any student on any campus. But such partnerships hold the promise of addressing the diversity gaps that blight our industry at its roots.

AR will mean dystopia if we don’t act today

The martial arts actor Jet Li turned down a role in The Matrix and has been invisible on our screens because he does not want his fighting moves 3D-captured and owned by someone else. Soon everyone will be wearing 3D-capable cameras to support augmented reality (often referred to as mixed reality) applications. Everyone will have to deal, across every part of our lives, with the sorts of digital-capture issues that Jet Li avoided in key roles and that musicians have struggled with since Napster. AR means anyone can rip, mix and burn reality itself.

Tim Cook has warned the industry about “the data industrial complex” and advocated for privacy as a human right. It doesn’t take too much thinking about where some parts of the tech industry are headed to see AR ushering in a dystopian future where we are bombarded with unwelcome visual distractions, and our every eye movement and emotional reaction is tracked for ad targeting. But as Tim Cook also said, “it doesn’t have to be creepy.” The industry has made data-capture mistakes while building today’s tech platforms, and it shouldn’t repeat them.

Dystopia is easy for us to imagine, as humans are hard-wired for loss aversion: the tendency to prefer avoiding a loss over an equivalent gain. It’s better to avoid losing $5 than to find $5. It’s an evolutionary survival mechanism that made us hyper-alert to threats; the loss of being eaten by a tiger was more impactful than the gain of finding some food to eat. When it comes to thinking about the future, we instinctively overreact to the downside risk and underappreciate the upside benefits.

How can we get a sense of what AR will mean in our everyday lives, that is (ironically) based in reality?

When we look at the tech stack enabling AR, it’s important to note there is now a new type of data being captured, unique to AR. It’s the computer vision-generated, machine-readable 3D map of the world. AR systems use it to synchronize or localize themselves in 3D space (and with each other). The operating system services based on this data are referred to as the “AR Cloud.” This data has never been captured at scale before, and the AR Cloud is 100 percent necessary for AR experiences to work at all, at scale.

Fundamental capabilities such as persistence, multi-user experiences and outdoor occlusion all need it. Imagine a super version of Google Earth, but one used by machines instead of people. This data set is entirely separate from the content and user data used by AR apps (e.g. login account details, user analytics, 3D assets, etc.).

The AR Cloud services are often thought of as just being a “point cloud,” which leads people to imagine simplistic solutions to manage this data. This data actually has potentially many layers, all of them providing varying degrees of usefulness to different use cases. The term “point” is just a shorthand way of referring to a concept, a 3D point in space. The data format for how that point is selected and described is unique to every state-of-the-art AR system.

The critical thing to note is that for an AR system to work best, the computer vision algorithms are tied so tightly to the data that they effectively become the same thing. Apple’s ARKit algorithms wouldn’t work with Google’s ARCore data even if Google gave them access. Same for HoloLens, Magic Leap and all the startups in the space. The performance of open-source mapping solutions are generations behind leading commercial systems.

So we’ve established that these “AR Clouds” will remain proprietary for some time, but exactly what data is in there, and should I be worried that it is being collected?

AR makes it possible to capture everything…

The list of data that could be saved is long. At a minimum, it’s the computer vision (SLAM) map data, but it could also include a wireframe 3D model, a photo-realistic 3D model and even real-time updates of your “pose” (exactly where you are and what you are looking at), plus much more. With pose alone, think about the implications for retail, given the ability to track foot traffic and provide data on the best merchandise placement or the best locations for ads in store (and at home).
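
To make “pose” concrete, here is a hypothetical, deliberately simplified sketch of what one such record might contain; the field names are assumptions for illustration, not any platform’s actual format.

```python
from dataclasses import dataclass

@dataclass
class PoseSample:
    """One simplified, hypothetical AR pose sample: where a device is and where
    it is pointing at a given instant. Real systems add confidence values,
    map identifiers and richer orientation data."""
    timestamp_ms: int   # when the sample was taken
    x: float            # position in the local map frame (meters)
    y: float
    z: float
    yaw: float          # orientation in radians: which way the camera is facing
    pitch: float
    roll: float

# A few seconds of samples at 30 Hz is enough to reconstruct a shopper's path
# through a store and which shelves held their gaze, which is why pose alone is
# both commercially valuable and privacy-sensitive.
```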

The lower layers of this stack are only useful to machines, but as you add more layers on top, it quickly starts to become very private. Take, for example, a photo-realistic 3D model of my kid’s bedroom captured just by a visitor walking down the hall and glancing in while wearing AR glasses.

There’s no single silver bullet to solving these problems. Not only are there many challenges, but there are also many types of challenges to be solved.

Tech problems that are solved and need to be applied

Much of the AR Cloud data is just regular data, and it should be managed the way all cloud data should be managed: good passwords, good security, backups, etc. GDPR should be complied with. In fact, regulation might be the only way to force good behavior, as major platforms have shown little willingness to regulate themselves. Europe is leading the way here; China is a whole different story.

A few interesting aspects of AR data are:

  • Similar to Maps or Street View, how “fresh” should the data be, and how much historical data should be saved? Do we need to save a map showing where your couch was positioned last week? What scale or resolution should be saved? There’s little value in a cm-scale model of the whole world, except for a map of the area right around you.
  • The biggest aspect, difficult but doable, is ensuring that no personally identifying information leaves the phone. This is analogous to the image data that your phone processes before you press the shutter and upload the photo. Users should know what is being uploaded and why it is OK to capture it. Anything that is personally identifying (e.g. the color texture of a 3D scan) should always be opt-in, with a careful explanation of how it will be used. Homomorphic transformations should be applied to all data that leaves the device, to remove anything human-readable or identifiable while still leaving the data in a state that algorithms can interpret for very specific relocalization functionality (when run on the device). A minimal illustrative sketch of this idea follows this list.
  • There’s also the problem of “private clouds”: a corporate campus might want a private and accurate AR Cloud for its employees, which can easily be hosted on a private server. The tricky part is that if a member of the public walks around the site wearing AR glasses, a new model (possibly saved on another vendor’s platform) will be captured.
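
Here is a minimal sketch of the on-device idea from the second bullet above, assuming an OpenCV-style feature pipeline. The keyed random projection is purely illustrative; it is not the proprietary encoding any actual AR platform uses, and a production scheme would need far stronger guarantees.

```python
import numpy as np
import cv2  # opencv-python

def descriptors_for_upload(frame_bgr, key_seed=1234):
    """Extract local feature descriptors on-device and apply a keyed one-way
    projection, so what leaves the phone supports relocalization-style matching
    but is not a human-viewable image. Illustrative only."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=500)
    _keypoints, descriptors = orb.detectAndCompute(gray, None)
    if descriptors is None:
        return np.empty((0, 16), dtype=np.float32)
    # Without the key (which stays on the device), the uploaded vectors cannot
    # be mapped back to the original image patches.
    rng = np.random.default_rng(key_seed)
    projection = rng.standard_normal((descriptors.shape[1], 16)).astype(np.float32)
    return descriptors.astype(np.float32) @ projection

# The raw frame and the key never leave the device; only the projected
# descriptors (plus coarse, encrypted location hints) would be uploaded.
```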

Tech challenges the AR industry still needs to solve

There are some problems we know about, but we don’t know how to solve yet. Examples are:

  • Segmenting rooms: You could capture a model of your house, but one side of an inner apartment wall is your apartment while the other side is someone else’s apartment. Most privacy methods to date have relied on something like a private radius around your GPS location, but AR will need more precise ways to detect what is “your space.”
  • Identifying rights to a space is a massive challenge. Fortunately, social contracts and existing laws are in place for most of these problems, as AR Cloud data is pretty much the same as recording video. There are public spaces, semi-public (a building lobby), semi-private (my living room) and private (my bedroom). The trick is getting the AR devices to know who you are and what it should capture (e.g. my glasses can capture my house, but yours can’t capture my house).
  • Managing the capture of a place by multiple people, stitching the results into a single model and discarding overlapping or redundant data, makes ownership of the final model tricky.
  • The Web has the concept of a robots.txt file, which a website owner can host on their site, and web data-collection engines (e.g. Google) agree to collect only the data that the robots.txt file allows. Unsurprisingly, this can be hard to enforce even on the web, where each site has a pretty clear owner. Some agreed type of “robots.txt” for real-world places would be a great (but maybe unrealistic) solution. Like web crawlers, it will be hard to force this on devices, but as with cookies and many ad-tracking technologies, people should at least be able to tell devices what they want, and hopefully market forces or future innovations can require platforms to respect it. The really hard aspect of this attractive idea is deciding whose robots.txt is authoritative for a place: I shouldn’t be able to create a robots.txt for Central Park in NYC, but I should for my house. How is this to be verified and enforced? (A toy sketch of such a check appears after this list.)
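
Purely as a thought experiment, here is what a capture-permission check against a hypothetical “robots.txt for places” manifest might look like. The format, field names and the idea of a verified-owner registry are assumptions, not an existing standard.

```python
from dataclasses import dataclass

@dataclass
class PlacePolicy:
    """Hypothetical capture policy for a physical area, analogous to robots.txt."""
    owner_id: str          # would need verification by some authority -- the hard part
    lat: float
    lon: float
    radius_m: float
    allow_capture: bool    # may this area be scanned into an AR Cloud at all?
    allow_textures: bool   # may photo-realistic color data be kept, or geometry only?

def capture_allowed(policies, device_lat, device_lon, want_textures=False):
    """Return True if no policy covering this location forbids the capture."""
    for p in policies:
        # Rough distance check; a real system would use proper geodesic math.
        if abs(p.lat - device_lat) < p.radius_m / 111_000 and \
           abs(p.lon - device_lon) < p.radius_m / 111_000:
            if not p.allow_capture or (want_textures and not p.allow_textures):
                return False
    return True

# Example: a homeowner allows geometry capture but forbids photo-realistic textures.
home = PlacePolicy("owner:123", 40.7420, -73.9890, 30.0,
                   allow_capture=True, allow_textures=False)
print(capture_allowed([home], 40.7420, -73.9890, want_textures=True))   # False
print(capture_allowed([home], 40.7420, -73.9890, want_textures=False))  # True
```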

Social contracts need to emerge and be adopted

A big part of solving AR privacy problems will come from developing a social contract that identifies when and where it’s appropriate to use a device. When camera phones were introduced in the early 2000s, there was a mild panic about how they could be misused; for example, cameras used secretly in bathrooms or photos taken of people in public without their permission. The OEMs tried to head off that public fear by having the cameras make a “click” sound. Adding that feature helped society adopt the new technology and become accustomed to it pretty quickly. As a result of having the technology in consumers’ hands, society adopted a social contract — learning when and where it is OK to hold up your phone for a picture and when it is not.

…but the platform doesn’t need to capture everything in order to deliver a great AR UX.

Companies added to this social contract, as well. Sites like Flickr developed policies to manage images of private places and things and how to present them (if at all). Similar social learning took place with Google Glass versus Snap Spectacles. Snap took the learnings from Glass and solved many of those social problems (e.g. they are sunglasses, so we naturally take them off indoors, and they show a clear indicator when recording). This is where the product designers need to be involved to solve the problems for broad adoption.

Challenges the industry cannot predict

AR is a new medium. New mediums come along only every 15 years or so, and no one can predict how they will be used. SMS experts never predicted Twitter, and mobile mapping experts never predicted Uber. Platform companies, even the best-intentioned, *will* make mistakes.

These are not tomorrow’s challenges for future generations or science fiction-based theories. The product development decisions the AR industry is making over the next 12-24 months will play out in the next five years.

This is where AR platform companies are going to have to rely on doing a great job of:

  1. Ensuring their business model incentives are aligned with doing the right thing by the people whose data they capture; and
  2. Communicating their values and earning the trust of the people whose data they capture. Values need to become an even more explicit dimension of product design. Apple has always done a great job of this. Everyone needs to take it more seriously as tech products become more and more personal.

What should the AR players be doing today to not be creepy?

Here’s what needs to be done at a high level, which pioneers in AR believe is the minimum:

  1. Personal Data Never Leaves Device, Opt In Only: No personally identifying data required for the service to work leaves the device. Give users the option to opt in to sharing additional personal data if they choose, for better app feedback. Personal data does NOT have to leave the device in order for the tech to work; anyone arguing otherwise doesn’t have the technical skills and shouldn’t be building AR platforms.

  2. Encrypted IDs: Coarse Location IDs (e.g. Wi-Fi network name) are encrypted on the device, and it’s not possible to tell a location from the GPS coordinates of a specific SLAM map file, beyond generalities.

  3. Data Describing Locations Only Accessible When Physically at Location: An app can’t access the data describing a physical location unless you are physically in that location. That helps by relying on the social contract of having physical permission to be there; if you can physically see the scene with your eyes, then the platform can be confident that it’s OK to let you access the computer vision data describing what the scene looks like. (A minimal sketch of how points 2 and 3 can combine appears after this list.)

  4. Machine-Readable Data Only: The data that does leave the phone is only able to be interpreted by proprietary homomorphic algorithms. No known science should be able to reverse engineer this data into anything human readable.

  5. App Developers Host User Data On Their Servers, Not The Platforms: App developers, not the AR platform company, host the application and end user-specific data re: usernames, logins, application state, etc. on their servers. The AR Cloud platform should only manage a digital replica of reality. The AR Cloud platform can’t abuse an app user’s data because they never touch or see it.

  6. Business Models Pay for Use Versus Selling Data: A business model based on developers or end users paying for what they use ensures the platform won’t be tempted to collect more than necessary and on-sell it. Don’t create financial incentives to collect extra data to sell to third parties.

  7. Privacy Values on Day One: Publish your values around privacy, not just your policies, and ask to be held accountable to them. There are many unknowns, and people need to trust the platform to do the right thing when mistakes are made. Values-driven companies like Mozilla or Apple will have a trust advantage over other platforms whose values we don’t know.

  8. User and Developer Ownership and Control: Figure out how to give end users and app developers appropriate levels of ownership and control over data that originates from their device. This is complicated. The goal (we’re not there yet) should be to support GDPR standards globally.

  9. Constant Transparency and Education: Work to educate the market and be as transparent as possible about policies and what is known and unknown, and seek feedback on where people feel “the line” should be in all the new gray areas. Be clear on all aspects of the bargain that users enter into when trading some data for a benefit.

  10. Informed Consent, Always: Make a sincere attempt at informed consent with regard to data capture (triply so if the company has an ad-based business model). This goes beyond an EULA, and IMO should be in plain English and include diagrams. Even then, it’s impossible for end users to understand the full potential.
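
As a minimal sketch of how points 2 and 3 above could reinforce each other (the key-derivation scheme below is an assumption for illustration, not any vendor’s actual protocol): the device derives both the lookup key and the decryption key from identifiers it can only observe on-site, so the server stores opaque blobs, never learns the location, and a client that is not physically present cannot even ask for the right record.

```python
import hashlib

def location_keys(observed_ids, salt=b"app-specific-salt"):
    """Derive an opaque lookup key and a secret key from coarse identifiers
    (e.g. nearby Wi-Fi network names) that a device can only observe on-site.
    Illustrative only; a production scheme would need a proper KDF and rotation."""
    material = salt + "|".join(sorted(observed_ids)).encode()
    digest = hashlib.sha256(material).digest()
    lookup_key = digest[:16].hex()   # what the server indexes encrypted blobs by
    secret_key = digest[16:]         # used client-side to encrypt/decrypt the map blob
    return lookup_key, secret_key

# Two devices standing in the same lobby derive the same keys independently;
# the server only ever sees the opaque lookup_key and an encrypted blob.
k1, s1 = location_keys(["CoffeeShop-Guest", "BackOffice-5G"])
k2, s2 = location_keys(["BackOffice-5G", "CoffeeShop-Guest"])
assert k1 == k2 and s1 == s2
```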

Even apart from the creep factor, remember there’s always the chance that a hack, or a government agency acting legally, accesses the data captured by the platform. You can’t expose what you don’t collect, and it doesn’t need to be collected. That way, people accessing any exposed data can’t tell precisely what location an individual map file refers to (the end user encrypts it; the platform doesn’t need the keys), and even if they could, the data describing the location in detail can’t be interpreted.

Blockchain is not a panacea for these problems — specifically as applied to the foundational AR Cloud SLAM data sets. The data is proprietary and centralized, and if managed professionally, the data is secure and the right people have the access they need. There’s no value to the end user from blockchain that we can find. However, I believe there is value to AR content creators, in the same way that blockchain brings value to any content created for mobile and/or web. There’s nothing inherently special about AR content (apart from a more precise location ID) that makes it different.

For anyone interested, the Immersive Web working group at W3C and Mozilla are starting to dig further into the various risks and mitigations.

Where should we put our hope?

This is a tough question. AR startups need to make money to survive, and as Facebook has shown, it was a good business model to persuade consumers to click OK and let the platform collect everything. Advertising as a business model creates inherently misaligned incentives with regard to data capture. On the other hand, there are plenty of examples where capturing data makes the product better (e.g. Waze or Google search).

Education and market pressure will help, as will (possibly necessary) privacy regulation. Beyond that we will act in accordance with the social contracts we adopt with each other re: appropriate use.

The two key takeaways are that AR makes it possible to capture everything, and that the platform doesn’t need to capture everything in order to deliver a great AR UX.

If you draw a parallel with Google, in that web crawling was trying to figure out what computers should be allowed to read, AR is widely distributing computer vision, and we need to figure out what computers should be allowed to see.

The good news is that the AR industry can avoid the creepy aspects of today’s data collection methods without hindering innovation. The public is aware of the impact of these decisions and they are choosing which applications they will use based on these issues. Companies like Apple are taking a stand on privacy. And most encouragingly, every AR industry leader I know is enthusiastically engaged in public and private discussions to try to understand and address the realities of meeting the challenge.

Amazon Prime’s dominance is spurring new startup opportunities

E-commerce is one of the economy's bright spots; U.S. e-commerce sales have nearly doubled in five years, and now exceed $500 billion. Unsurprisingly, Amazon has swooped in to claim a disproportionate share of the riches, gobbling up nearly 50 percent of the market, driving competitors out of business and solidifying its position as one of the world's most valuable companies.

As part of its complete transformation of the e-commerce landscape, Amazon has made two-day shipping the new industry standard — a standard which most would-be competitors can’t meet on their own without either investing millions in infrastructure or partnering with their greatest competitive threat. Fortunately for merchants, some exciting new logistics startups are emerging to help them compete with Amazon.

Amazon’s chokehold

In classic coopetition form, Amazon now enables more than a million merchants to sell through Amazon Marketplace. It offers these merchants two-day shipping via a cheap flat fee per package — a fee so cheap, in fact, that no shipping provider can come close to matching it. Amazon is doubling down on its advanced fulfillment network by investing $700 million in Rivian, an electric truck company; augmenting its fleet of 50+ delivery planes; and rolling out 20,000 Mercedes-Benz delivery vans.

Two-day delivery is so compelling, often doubling sales, that many merchants are becoming increasingly dependent on Amazon despite the obvious risks of partnering with the juggernaut. This in itself is spurring startups that help merchants thrive on Amazon. Amazon forces the merchants who work with it to compete side by side with other brands, including the company's own aggressively promoted private-label collection. Amazon also pressures merchants to provide their lowest prices on Amazon, even though it takes a significant cut of their revenue. Even then, Amazon still might suddenly kick merchants off its platform without prior notice.

Once merchants sell on Amazon, they often find it impossible to diversify to other platforms with higher margins and more control because they become reliant on Amazon’s unbeatable two-day delivery price. This pressure is making merchants increasingly nervous as Amazon squeezes them from all sides. Merchants are desperately seeking solutions to help them get out of Amazon’s chokehold. A new batch of startups is seizing the opportunity to provide just that.

Aggregated delivery routes

Transportation accounts for more than 75 percent of delivery costs. Merchants can save millions by pooling their shipping, trucking and last-mile delivery costs. Traditionally, this pooling was done by expensive freight brokers working with pen and paper. Today, companies like Flexport, which just raised $1 billion, and Convoy, which was just valued at more than $1 billion, can match shippers and carriers more effectively to combine packages and lower costs.

Last-mile delivery companies like ShipBob, which recently closed a $40 million investment round, are also beginning to offer Amazon-like two-day shipping solutions. Deliv* takes an even more aggressive approach by offering same-day shipping for retailers via its couriers. By combining volume, these startups allow merchants to save more than 20 percent by negotiating for larger bulk discounts with carriers and by optimizing routes.
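To see why pooled volume matters, here is a toy calculation of tiered carrier pricing. The rate tiers and package counts are invented for illustration; real carrier rates are negotiated case by case and will differ.

```python
# Toy illustration of pooled shipping volume reaching a cheaper carrier rate tier.
# Rate tiers and package counts are invented; real carrier pricing is negotiated.

def rate_per_package(monthly_volume: int) -> float:
    """Hypothetical tiered carrier pricing: more volume, lower per-package rate."""
    if monthly_volume >= 100_000:
        return 6.50
    if monthly_volume >= 25_000:
        return 7.75
    return 9.00

merchants = {"merchant_a": 18_000, "merchant_b": 22_000, "merchant_c": 70_000}

# Each merchant shipping alone pays the rate its own volume qualifies for.
solo_cost = sum(vol * rate_per_package(vol) for vol in merchants.values())

# Pooling all three merchants' packages reaches the cheapest tier.
pooled_volume = sum(merchants.values())
pooled_cost = pooled_volume * rate_per_package(pooled_volume)

savings = 1 - pooled_cost / solo_cost
print(f"Solo: ${solo_cost:,.0f}  Pooled: ${pooled_cost:,.0f}  Savings: {savings:.0%}")
# With these made-up numbers, pooling saves roughly 21 percent.
```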

Distributed warehousing

To deliver within two days, merchants must have access to warehouses located near their customers. While companies like Walmart and Amazon might be able to invest billions in multiple distribution centers located throughout the U.S., smaller merchants and distributors can rely on startups like Flexe and Darkstore to provide on-demand storage in pooled warehouses across the country. Rather than keeping everything in a central warehouse thousands of miles away, merchants can use artificial intelligence to predict consumer demand and ship inventory to nearby distribution centers. These startups will become increasingly important as retailers seek to go beyond two-day shipping and offer one-day and even same-day shipping.
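A stripped-down sketch of that "forecast demand, then pre-position inventory" loop might look like the following. The regions, sales history and trailing-average forecast are illustrative placeholders, not any particular startup's model.

```python
# Minimal sketch: forecast regional demand, then allocate inventory to the
# nearest distribution centers in proportion to that forecast.
# Sales history, regions and the naive forecast are illustrative only.
from statistics import mean

weekly_sales = {                      # last four weeks of unit sales per region
    "northeast": [120, 135, 128, 140],
    "midwest":   [60, 58, 65, 62],
    "west":      [200, 210, 190, 220],
}

# Naive forecast: next week's demand is roughly the trailing four-week average.
forecast = {region: mean(history) for region, history in weekly_sales.items()}

available_units = 400
total_forecast = sum(forecast.values())

# Allocate stock to each regional warehouse in proportion to forecast demand
# (rounding can over- or under-shoot the total by a unit or two).
allocation = {
    region: round(available_units * demand / total_forecast)
    for region, demand in forecast.items()
}

print(forecast)
print(allocation)
```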

Robotics and automation

Despite the heavy upfront costs, robotics offer a cheaper long-term alternative to manual labor in many distribution centers. RightHand Robotics, which just landed $23 million, uses a robotic arm to help pick and place items at warehouses. Each arm can operate at the same speed as an experienced packer, while working around the clock. Other startups use automation to reduce last-mile delivery costs through a variety of methods, ranging from self-driving cars to delivery drones. Starship Technologies, for instance, is building a fleet of small self-driving robots to deliver locally. Although individual merchants may not purchase robotic arms, they can leverage logistics startups to reduce costs and improve efficiencies via these new automation techniques.

Addicted to convenience, consumers keep demanding that their merchandise arrive ever more quickly. Amazon is king of convenience and is constantly pushing the bar higher — or faster, in this case. Merchants are struggling to keep up. Fortunately for them, a new generation of logistics startups is helping them compete. By creating solutions for the logistics infrastructure of the future, these startups are helping merchants stay in the race against Amazon.

* Denotes Trinity portfolio company

Why unicorns can raise $1 billion but can’t figure out diversity and inclusion

In the early 2000s, Hasbro revived its “My Little Pony” toy franchise. Of all the colorful creatures in Ponyville, my favorites were the unicorn ponies.

Unicorn ponies were magical, whimsical and, most importantly, rare. I identified with the latter.

I was 13 years old and had just been selected for a competitive math, science and computer science program. Of the 100 students in the program, I was one of two black girls. But, I was lucky. Just like the Earth ponies embraced the unicorns, my white and Asian classmates made me feel welcome.

I wish that was always my experience in the tech industry.

The tech industry is no more diverse than it was when I was 13. But more tech companies than ever have committed to becoming more diverse and inclusive.

So why doesn’t commitment always translate to Ponyville?

Goodbye Ponyville, hello world

Six years in my intensive math, science and computer science program almost prepared me to study at MIT. Multivariable calculus? Check. Getting over the fact that you’re not the smartest person at school? Check. Having to worry about being discriminated against by your classmates? Not check.

Here’s an example. My senior year, I was working with a team of 21 other students to develop a new medical device. Peer valuations determined part of my grade, which concerned me. I worried that some of my classmates’ feedback would be clouded by biases against black women. I felt pressured to be perceived as intelligent-but-not-intimidating, confident-but-not-aggressive and approachable-but-not-dense.

Though I largely received positive evaluations, not one, but two, of my teammates told me to “be less aggressive.”

I felt singled out and discouraged until I heard from some of my other black classmates. They’d been excluded from team meetings, and assigned the most menial tasks.

How could this happen at MIT, a place that prides itself on being a diverse and inclusive center of innovation?

People discriminate. Institutions tolerate discrimination. People learn to tolerate the discrimination against them. It’s a simple, vicious cycle that few institutions and companies design against.

During the three years after I graduated from MIT, I became fed up with being treated as “less than.” It was time to find a unicorn.

Unicorn (noun)

uni·corn | \ ˈyü-nə-ˌkȯrn

  1. a mythical, usually white animal generally depicted with the body and head of a horse with long flowing mane and tail and a single often spiraled horn in the middle of the forehead
  2. a diverse and inclusive tech company

Following the Rainbow Trail

Finding a unicorn was not easy. My Google search yielded plenty of startups with billion-plus valuations, but few that were genuinely diverse or inclusive.

That’s why Temboo, a NYC-based industrial IoT startup, intrigued me:

  • A tech company led by a woman of color.
  • An engineering team with an equal number of women and men.
  • A product focused on accessibility and the democratization of programming.
  • A diverse team of employees from different cultural backgrounds.
  • And, most surprisingly, when I arrived for my first interview, I was greeted with a giant hug. This is New York. Random hugs don’t just happen.

Every person I met had a background and interests different from the next. Of all the companies I interviewed with, only Temboo asked why I chose to lead the black employee resource group at my previous position. Even the company’s physical space was different from that of most tech companies: an independent office nestled in the heart of the TriBeCa neighborhood of NYC.

When I made the decision to join the team, I was hopeful. Maybe this would be a place where I would be respected and appreciated for just being myself.

My Little Pony: NYC tales

During my first few months, I held onto the past lessons that taught me I needed to formulate an acceptable version of myself for my colleagues. However, with time, I understood that at Temboo, Sarah is enough.

My kinky hair could be braided or in an afro, but my hairstyle had no bearing on my perceived intelligence. I could openly critique the lack of diversity at the industrial IoT conferences we attend, and hear resounding agreement.

There were, admittedly, a few times I felt judged. My deep love of obscure reality TV shows and pumpkin-flavored foods is questionable.

I found my unicorn and I’m happier for it. Now, I want everyone working in tech to find their unicorn, so I’ve started to think about ways that I can help pass the torch.

Stuck in Bro-nyville

Most tech companies are following the same recommendations to become more diverse and inclusive:

  1. Diversify your talent pool.
  2. Create community with employee resource groups.
  3. Tie performance evaluations to diversity and inclusion goals.
  4. Call out the lack of diversity.

Take the example of one medium-sized tech company that was preparing to revamp its employee resource groups. The company invited me to speak on a panel and share what I’d learned from leading the black employee resource group at my previous company.

For example, my team organized Microaggression Awareness Week. The results were tangible: the next week during an executive leadership meeting, a senior manager stopped to ask his peers if something he said was a microaggression.

But we could not convince the recruiting team to tie their performance ratings to diversity and inclusion goals. They did not want the burden of responsibility, and asked my team to come up with new ideas to attract more diverse talent.

Another panelist shared her experience of coming out in the workplace at 50 years old. After 18 years as a senior executive at a Fortune 500 company, she moved to a small tech company. The atmosphere was totally different. Jokes about someone’s sexual orientation were faux pas, and the company even built a float for the NYC Pride Parade. After a 30-year career, she finally felt safe enough to be herself at work.

The panel ended on an encouraging note, but issues remained. One of the company’s employees shared with me that in order to avoid discrimination, he goes by his Anglo-sounding middle name. His job is to lead diversity and inclusion initiatives.

How to grow a horn

Unfair behaviors like stereotyping, harassment and microaggressions are the primary reasons employees quit tech companies. Women, underrepresented minorities and LGBTQ employees bear the brunt of discrimination (Kapor Center).

Diverse and inclusive tech companies have better retention and financial performance. McKinsey examined the relationship between the diversity of company leadership and financial performance in 2014 and 2017: companies in the top quartile for gender diversity were 15-21 percent more likely to experience above-average profitability than companies in the bottom quartile. For ethnic and cultural diversity, the advantage rose to 33-35 percent.

Creating diverse and inclusive tech companies starts with individuals. From management to junior employees, everyone needs to continually rethink, unlearn and relearn.

Rethink personal biases.

Unlearn habits of discrimination.

Relearn how to respect others who are different.

Companies help end workplace discrimination by signaling that they will not tolerate it. Temboo’s culture and practices are a great model.

Unicorns are magical, but diverse and inclusive tech companies are not. They ask the people who work there to redefine what is ordinary.

Amazon’s one-two punch: How traditional retailers can fight back

If you think physical retail is dead, you couldn’t be more wrong. Despite the explosion in e-commerce, we’re still buying plenty of stuff in offline stores. In 2017, U.S. retail sales totaled $3.49 trillion, of which only 13 percent (about $435 billion) were e-commerce sales. True, e-commerce is growing at a much faster annual pace. But we’re still very far from the tipping point.

Amazon, the e-commerce giant, is playing an even longer game than everyone thinks. The company already dominates online retail — Amazon accounted for almost 50 percent of all U.S. e-commerce dollars spent in 2018. But now Amazon is eyeing the much bigger prize: modernizing and dominating retail sales in physical locations, mainly through the use of sophisticated data analysis. Recent reports of Amazon launching its own chain of grocery stores in several U.S. cities — separate from its recent Whole Foods acquisition — are just one example of how this could play out.

You can think of this as the Amazon one-two punch: The company’s vast power in e-commerce is only the initial, quick jab to an opponent’s face. Data-focused innovations in offline retail will be Amazon’s second, much heavier cross. Traditional retailers too focused on the jab aren’t seeing the cross coming. But we think canny retailers can fight back — and avoid getting KO’d. Here’s how.

The e-commerce jab starts with warehousing

Physical storage of goods has long been crucial to advances in commerce. Innovations here range from Henry Ford’s conveyor belt assembly line in the 1910s, to IBM’s Universal Product Code (the “barcode”) in the early 1970s, to J.C. Penney’s implementation of the first warehouse management system in 1975. Intelligrated (Honeywell), Dematic (KION), Unitronics, Siemens and others further optimized and modernized the traditional warehouse. But then came Amazon.

After expanding from books to a multi-product offering, Amazon Prime launched in 2005. Then, the company’s operational focus turned to enabling scalable two-day shipping. With hundreds of millions of product SKUs, the challenge was how to get your pocket 3-layer suture pad (to cite a super-specific product Amazon now sells) from the back of the warehouse and into the shippers’ hands as quickly as possible.

Amazon met this challenge at a time when automated warehouses still had massive physical footprints and capital-intensive costs. Amazon bought Kiva Systems in 2012, which ushered in the era of automated guided vehicles (AGVs): robots that quickly ferried products from the warehouse’s depths to static human packers.

Since the Kiva acquisition, retailers have scrambled to adopt technology to match Amazon’s warehouse efficiencies. These technologies range from warehouse management software (made by LogFire, acquired by Oracle; other companies here include Fishbowl and Temando) to warehouse robotics (Locus Robotics, 6 River Systems, Magazino). Some of these companies’ technologies even incorporate wearables (e.g. ProGlove, GetVu) for warehouse workers. We’ve also seen more general-purpose projects in this area, such as Google Robotics. The main adopters of these new technologies are those companies that feel Amazon’s burn most harshly, namely operators of fulfillment centers serving e-commerce.

[Schematic: a broad picture of fulfillment-center operations and a partial list of warehouse/inventory management technologies they can adopt.]

It’s impossible to say what optimizations Amazon will bring to warehousing beyond these, but that may be less important to predict than retailers realize.

The cross: Modernizing the physical retail environment

Amazon has made several recent forays into offline shopping. These range from Amazon Books (physical bookstores) and Amazon Go (cashierless stores where consumers skip the checkout line entirely) to Amazon 4-Star (stores featuring only products rated four stars or higher). Amazon Live even brings brick-and-mortar-style browsing to your phone with a QVC-style home-shopping stream. Perhaps most prominently, Amazon’s 2017 purchase of Whole Foods gave the company an entrée into grocery shopping and a nationwide chain of physical stores.

Most retail-watchers have dismissed these projects as dabbling, or — in the case of Whole Foods — focused too narrowly on a particular vertical. But we think they’re missing Bezos’ longer-term strategic aim. Watch that cross: Amazon is mastering how physical retail works today, so it can do offline what it already does incredibly well online, which is harness data to help retailers sell much more intelligently. Amazon recognizes that certain products lend themselves better to offline shopping — groceries and children’s clothing are just two examples.

Those shopping experiences are unlikely to disappear. But traditional retailers (and Amazon offline) can understand much, much more about the data points between shopping and purchase. Which path did shoppers take through the store? Which products did they touch and which did they put into a cart? Which items did they try on, and which products did they abandon? Did they ask for different sizes? How does product location within the store influence consumers’ willingness to buy? What product correlations can inform timely marketing offers — for instance, if women often buy hats and sunglasses together in springtime, can a well-timed coupon prompt an additional purchase? Amazon already knows the answers to most of these questions online; it wants to bring that same intelligence to offline retail.
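As a concrete and entirely hypothetical version of the hats-and-sunglasses example, a retailer could compute co-purchase "lift" from its transaction log and trigger a coupon when two items are bought together far more often than chance would predict. The baskets and threshold below are made up for illustration.

```python
# Toy market-basket sketch: how often are two items bought together, relative
# to chance? High "lift" suggests a well-timed coupon on the second item.
# Transactions and the coupon threshold are illustrative, not real data.

transactions = [
    {"hat", "sunglasses", "sunscreen"},
    {"hat", "sunglasses"},
    {"sandals", "sunscreen"},
    {"t-shirt", "sunscreen"},
    {"hat", "sunglasses", "sandals"},
    {"t-shirt", "sandals"},
]

def support(*items: str) -> float:
    """Fraction of baskets containing all of the given items."""
    return sum(set(items) <= basket for basket in transactions) / len(transactions)

# Lift > 1 means the pair co-occurs more often than independent purchases would.
lift = support("hat", "sunglasses") / (support("hat") * support("sunglasses"))
print(f"lift(hat, sunglasses) = {lift:.2f}")

if lift > 1.5:   # arbitrary threshold for this illustration
    print("Offer a sunglasses coupon to springtime hat buyers.")
```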

Obviously, customer privacy will be a crucial concern in this brave new future. But customers have come to expect online data-tracking and now often welcome the more informed recommendations and the convenience this data can bring. Why couldn’t a similar mindset-shift happen in offline retail?

How can retailers fight back?

Make no mistake: Amazon’s one-two retail punch will be formidable. But remember how important the element of surprise is. Too many venture capitalists underestimate physical retail’s importance and pooh-pooh startups focused on this sector. That’s extremely short-sighted.

Does the fact that Amazon is developing computer vision for Amazon Go put alternative self-checkout companies (e.g. Trigo, AiFi) at a disadvantage? I’d argue the opposite: Amazon’s validation of the category is an accelerant for these startups as traditional retailers scramble to keep up.

How can traditional retailers fight back? Get more proactive. Don’t wait for Amazon to show you what the next best-practice in retail should be. There’s plenty of exciting technology you can adopt today to beat Jeff Bezos to the punch. Take Relex, a Finnish startup using AI and machine learning to help brick-and-mortar and e-commerce companies make better forecasts of how products will sell. Or companies like Memomi or Mirow that are creating solutions for a more immersive and interactive offline shopping experience.

Amazon’s one-two punch strategy seems to be working. Traditional retailers are largely blinded by the behemoth’s warehousing innovations, just as they are about to be hit with an in-store innovation blow. New technologies are emerging to help traditional retail rally. The only question is whether they’ll implement the solutions fast enough to stay relevant.