How Russia’s online influence campaign engaged with millions for years

Russian efforts to influence U.S. politics and sway public opinion were consistent and, at least in terms of engaging target audiences, largely successful, according to a report from Oxford’s Computational Propaganda Project published today. Based on data provided to Congress by Facebook, Instagram, Google, and Twitter, the study paints a portrait of the years-long campaign that’s less than flattering to the companies.

The report, which you can read here, was given to some outlets over the weekend ahead of today’s publication; it summarizes the work of the Internet Research Agency, Moscow’s online influence factory and troll farm. The data cover various periods for different companies, but 2016 and 2017 showed by far the most activity.

A clearer picture

If you’ve only checked into this narrative occasionally during the last couple years, the Comprop report is a great way to get a bird’s-eye view of the whole thing, with no “we take this very seriously” palaver interrupting the facts.

If you’ve been following the story closely, the value of the report is mostly in deriving specifics and some new statistics from the data, which Oxford researchers were provided some seven months ago for analysis. The numbers, predictably, all seem to be a bit higher or more damning than those provided by the companies themselves in their voluntary reports and carefully practiced testimony.

Previous estimates have focused on the rather nebulous metric of “encountering” or “seeing” IRA content put on these social networks. This had the dual effect of inflating the affected number — to over a hundred million on Facebook alone — while leaving “seeing” easy to downplay in importance; after all, how many things do you “see” on the internet every day?

The Oxford researchers better quantify the engagement, on Facebook first, with more specific and consequential numbers. For instance, in 2016 and 2017, nearly 30 million people on Facebook actually shared Russian propaganda content, with similar numbers of likes garnered, and millions of comments generated.

Note that these aren’t ads that Russian shell companies were paying to shove into your timeline — these were pages and groups with thousands of users on board who actively engaged with and spread posts, memes, and disinformation on captive news sites linked to by the propaganda accounts.

The content itself was, of course, carefully curated to touch on a number of divisive issues: immigration, gun control, race relations, and so on. Many different groups (e.g. black Americans, conservatives, Muslims, LGBT communities) were targeted, and all generated significant engagement, as the report’s breakdown of the above stats shows.

Although the targeted communities were surprisingly diverse, the intent was highly focused: stoke partisan divisions, suppress left-leaning voters, and activate right-leaning ones.

Black voters in particular were a popular target across all platforms, and a great deal of content was posted both to keep racial tensions high and to interfere with their actual voting. Memes suggested followers withhold their votes or gave deliberately incorrect instructions on how to vote. These efforts were among the most numerous and popular of the IRA’s campaign; it’s difficult to judge their effectiveness, but they certainly had reach.

Examples of posts targeting black Americans.

In a statement, Facebook said that it was cooperating with officials and that “Congress and the intelligence community are best placed to use the information we and others provide to determine the political motivations of actors like the Internet Research Agency.” It also noted that it has “made progress in helping prevent interference on our platforms during elections, strengthened our policies against voter suppression ahead of the 2018 midterms, and funded independent research on the impact of social media on democracy.”

Instagram on the rise

Based on the narrative thus far, one might expect that Facebook — being the focus for much of it — was the biggest platform for this propaganda, and that it would have peaked around the 2016 election, when the evident goal of helping Donald Trump get elected had been accomplished.

In fact, Instagram was receiving as much or more content than Facebook, and it was being engaged with on a similar scale. Previous reports disclosed that around 120,000 IRA-related posts on Instagram had reached several million people in the run-up to the election. The Oxford researchers conclude, however, that 40 accounts received in total some 185 million likes and 4 million comments during the period covered by the data (2015-2017).

A partial explanation for these rather high numbers may be that, also counter to the most obvious narrative, IRA posting in fact increased following the election — for all platforms, but particularly on Instagram.

IRA-related Instagram posts jumped from an average of 2,611 per month in 2016 to 5,956 in 2017; note that the numbers don’t match the above table exactly because the time periods differ slightly.

Twitter posts, while extremely numerous, were quite steady at just under 60,000 per month, totaling around 73 million engagements over the period studied. To be perfectly frank, this kind of voluminous bot and sock puppet activity is so commonplace on Twitter, and the company seems to have done so little to thwart it, that it hardly bears mentioning. But it was certainly there, and it often reused existing bot nets that had previously chimed in on politics elsewhere and in other languages.

In a statement, Twitter said that it has “made significant strides since 2016 to counter manipulation of our service, including our release of additional data in October related to previously disclosed activities to enable further independent academic research and investigation.”

Google too is somewhat hard to find in the report, though not necessarily because it has a handle on Russian influence on its platforms. Oxford’s researchers complain that Google and YouTube have been not just stingy, but appear to have actively attempted to stymie analysis.

Google chose to supply the Senate committee with data in a non-machine-readable format. The evidence that the IRA had bought ads on Google was provided as images of ad text and in PDF format whose pages displayed copies of information previously organized in spreadsheets. This means that Google could have provided the useable ad text and spreadsheets—in a standard machine-readable file format, such as CSV or JSON, that would be useful to data scientists—but chose to turn them into images and PDFs as if the material would all be printed out on paper.
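To make the report’s complaint concrete, here is a minimal sketch of the difference, with hypothetical file names: records delivered as CSV are immediately analyzable, while the same records locked inside page images need an error-prone OCR pass before analysis can even begin.

```python
import csv

# Hypothetical file: ad records delivered in a machine-readable format
# can be loaded and queried in a few lines.
with open("ira_ads.csv", newline="", encoding="utf-8") as f:
    ads = list(csv.DictReader(f))

print(f"{len(ads)} ads loaded; columns: {list(ads[0].keys())}")

# The same records delivered as images or image-based PDFs would first
# require an OCR pipeline (and manual cleanup of every recognition error)
# before this kind of analysis is possible at all.
```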

Google’s formatting choice forced the researchers to collect their own data via citations and mentions of YouTube content, and as a consequence their conclusions are limited. Generally speaking, when a tech company does this, it means the data it could provide would tell a story it doesn’t want heard.

For instance, one interesting point brought up by a second report published today, by New Knowledge, concerns the 1,108 videos uploaded by IRA-linked accounts on YouTube. These videos, a Google statement explained, “were not targeted to the U.S. or to any particular sector of the U.S. population.”

In fact, all but a few dozen of these videos concerned police brutality and Black Lives Matter, which as you’ll recall were among the most popular topics on the other platforms. Seems reasonable to expect that this extremely narrow targeting would have been mentioned by YouTube in some way. Unfortunately it was left to be discovered by a third party and gives one an idea of just how far a statement from the company can be trusted.

Desperately seeking transparency

In the report’s conclusion, the Oxford researchers — Philip N. Howard, Bharath Ganesh, and Dimitra Liotsiou — point out that although the Russian propaganda efforts were (and remain) disturbingly effective and well organized, Russia is not alone in this.

“During 2016 and 2017 we saw significant efforts made by Russia to disrupt elections around the world, but also political parties in these countries spreading disinformation domestically,” they write. “In many democracies it is not even clear that spreading computational propaganda contravenes election laws.”

“It is, however, quite clear that the strategies and techniques used by government cyber troops have an impact,” the report continues, “and that their activities violate the norms of democratic practice… Social media have gone from being the natural infrastructure for sharing collective grievances and coordinating civic engagement, to being a computational tool for social control, manipulated by canny political consultants, and available to politicians in democracies and dictatorships alike.”

Predictably, even social networks’ moderation policies became targets for propagandizing.

Waiting on politicians is, as usual, something of a long shot, and the onus is squarely on the providers of social media and internet services to create an environment in which malicious actors are less likely to thrive.

Specifically, this means that these companies need to embrace researchers and watchdogs in good faith instead of freezing them out in order to protect some internal process or embarrassing misstep.

“Twitter used to provide researchers at major universities with access to several APIs, but has withdrawn this and provides so little information on the sampling of existing APIs that researchers increasingly question its utility for even basic social science,” the researchers point out. “Facebook provides an extremely limited API for the analysis of public pages, but no API for Instagram.” (And we’ve already heard what they think of Google’s submissions.)

If the companies exposed in this report truly take these issues seriously, as they tell us time and again, perhaps they should implement some of these suggestions.

The limits of coworking

It feels like there’s a WeWork on every street nowadays. Take a walk through midtown Manhattan (please don’t actually) and it might even seem like there are more WeWorks than office buildings.

Consider this an ongoing discussion about Urban Tech, its intersection with regulation, issues of public service, and other complexities that people have full PhDs in. I’m just a bitter, born-and-bred New Yorker trying to figure out why I’ve been stuck in between subway stops for the last 15 minutes, so please reach out with your take on any of these thoughts: @[email protected].

Co-working has permeated cities around the world at an astronomical rate. The rise has been so remarkable that even the headline-dominating SoftBank seems willing to bet the success of its colossal Vision Fund on the shift continuing, having poured billions into WeWork – including a recent $4.4 billion top-up that saw the co-working king’s valuation spike to $45 billion.

And there are no signs of the trend slowing down. With growing frequency, new startups are popping up across cities looking to turn under-utilized brick-and-mortar or commercial space into low-cost co-working options.

It’s a strategy spreading through every type of business from retail – where companies like Workbar have helped retailers offer up portions of their stores – to more niche verticals like parking lots – where companies like Campsyte are transforming empty lots into spaces for outdoor co-working and corporate off-sites. Restaurants and bars might even prove most popular for co-working, with startups like Spacious and KettleSpace turning restaurants that are closed during the day into private co-working space during their off-hours.

Before you know it, a startup will be strapping an Aeron chair to the top of a telephone pole and calling it “WirelessWorking”.

But is there a limit to how far co-working can go? Are all of the storefronts, restaurants and open spaces that line city streets going to be filled with MacBooks, cappuccinos and Moleskine notebooks? That might be too tall a task, even for the movement taking over skyscrapers.

The co-working of everything

Photo: Vasyl Dolmatov / iStock via Getty Images

So why is everyone trying to turn your favorite neighborhood dinner spot into a part-time WeWork in the first place? Co-working offers a particularly compelling use case for under-utilized space.

First, co-working falls under the same general commercial zoning categories as most independent businesses. And very little additional infrastructure – outside of a few extra power outlets and some decent WiFi – is required to turn a space into an effective replacement for the often crowded and distracting coffee shops used by the price-sensitive, lean, remote, or nomadic workers who make up a growing portion of the workforce.

Thus, businesses can list their space at little-to-no cost, without having to deal with structural layout changes that are more likely to arise when dealing with pop-up solutions or event rentals.

On the supply side, these co-working networks don’t have to purchase leases or make capital improvements to convert each space, and so they’re able to offer more square footage per member at a much lower rate than traditional co-working spaces. Spacious, for example, charges a monthly membership fee of $99-$129 for access to its network of vetted restaurants, which is cheap compared to a WeWork desk, which can cost anywhere from $300-$800 per month in New York City.
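A quick back-of-the-envelope comparison using the membership figures above, as a sketch of relative cost rather than either company’s actual unit economics:

```python
# Monthly membership ranges cited above, in USD.
spacious_monthly = (99, 129)
wework_desk_monthly = (300, 800)

# Annualized cost range for each option.
spacious_annual = tuple(12 * p for p in spacious_monthly)    # (1188, 1548)
wework_annual = tuple(12 * p for p in wework_desk_monthly)   # (3600, 9600)

# Even comparing Spacious's most expensive tier against WeWork's cheapest
# desk, the network model comes out thousands of dollars ahead per year.
min_annual_saving = wework_annual[0] - spacious_annual[1]
print(f"Minimum annual saving: ${min_annual_saving}")        # $2052
```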

Customers get more affordable co-working alternatives, while tight-margin businesses facing increasing rents for under-utilized property are able to pool resources into a network and access a completely new revenue stream at very little cost. The value proposition is proving to be seriously convincing in initial cities – Spacious told the New York Times that so many restaurants were applying to join the network of their own volition that only five percent of total applicants were ultimately accepted.

Basically, the business model here checks a lot of the boxes for successful marketplaces: Acquisition and transaction friction is low for both customers and suppliers, with both seeing real value that didn’t exist previously. Unit economics seem strong, and vetting on both sides of the market creates trust and community. Finally, there’s an observable network effect whereby suppliers benefit from higher occupancy as more customers join the network, while customers benefit from added flexibility as more locations join the network.

… Or just the co-working of some things

Photo: Caiaimage / Robert Daly via Getty Images

So is this the way of the future? The strategy is really compelling, with a creative solution that offers tremendous value to businesses and workers in major cities. But concerns around the scalability of demand make it difficult to picture this phenomenon becoming ubiquitous across cities or something that reaches the scale of a WeWork or large conventional co-working player.

All these companies seem to be competing for a similar demographic, not only with one another, but also with coffee shops, free workspaces, and other flexible co-working options like Croissant, which provides members with access to unused desks and offices in traditional co-working spaces. As with Spacious and KettleSpace, the spaces on Croissant’s network hold their own property leases and are already built for co-working, so Croissant can still offer comparatively attractive rates.

The offer seems most compelling for someone who can work without a stable location and without the amenities offered in traditional co-working or office spaces, and who is price-sensitive enough to trade those benefits for a lower price. Yet at the same time, they can’t be so price-sensitive that they would prefer working out of free – or close to free – coffee shops to paying a monthly membership fee to avoid the frictions that can come with them.

And it seems unclear whether the problem or solution is as pressing outside of high-density cities – let alone outside of high-density areas of high-density cities.

Without density, is the competition for space or traffic in coffee shops and free workspaces still high enough to justify paying a membership fee? Would the desire for a private working environment, or for a working community, be enough to incentivize membership on its own? And in less-dense, more sprawl-oriented cities, members could also face the risk of traveling significant distances if space isn’t available in nearby locations.

While the emerging workforce is trending towards more remote, agile and nomadic workers that can do more with less, it’s less certain how many will actually fit the profile that opts out of both more costly but stable traditional workspaces, as well as potentially frustrating but free alternatives. And if the lack of density does prove to be an issue, how many of those workers will live in hyper-dense areas, especially if they are price-sensitive and can work and live anywhere?

To be clear, I’m not saying the companies won’t see significant growth – in fact, I think they will. But will the trend of monetizing unused space through co-working come to permeate cities everywhere and do so with meaningful occupancy? Maybe not. That said, there is still a sizable and growing demographic that needs these solutions, and the value proposition is significant in many major urban areas.

The companies are creating real value, creating more efficient use of wasted space, and fixing a supply-demand issue. And the cultural value of even modestly helping independent businesses keep the lights on seems to outweigh the cultural “damage” some may fear in turning them into part-time co-working spaces.

And lastly, some reading while in transit:

Nvidia’s limited China connections

Another round of followups on Nvidia, and then some short news analysis.

TechCrunch is experimenting with new content forms. This is a rough draft of something new – provide your feedback directly to the author (Danny at [email protected]) if you like or hate something here.

Nvidia / TSMC questions

Following up on my analyses of Nvidia this week (Part 1, Part 2), a reader asked about Nvidia’s risk from China tariffs:

but the TSMC impact w.r.t. tariffs doesn’t make sense to me. TSMC is largely not impacted by tariffs and so the supply chain with NVIDIA is also not impacted w.r.t. to TSMC as a supplier. There are many alternate wafer suppliers in Taiwan.

This is a challenging question to answer definitively, since Nvidia obviously doesn’t publicly disclose its supply chain or, more granularly, which factories its supply chain partners use for its production. It does, however, list a number of companies in its 10-K form as manufacturing, testing, and packaging partners.

To understand how this all fits together, there are essentially three phases for bringing a semiconductor to market:

  1. Design – this is Nvidia’s core specialty
  2. Manufacturing – actually making the chip from silicon and other materials at the precision required for it to be reliable
  3. Testing, packaging and distribution – once chips are made, they need to be tested to prove that manufacturing worked, then packaged properly to protect them and shipped worldwide to wherever they are going to be assembled/integrated

For the highest-precision manufacturing required for chips like Nvidia’s, Taiwan, South Korea and the U.S. are the world leaders, with China trying to catch up through programs like Made in China 2025 (which Beijing now looks to be potentially scrapping this week, after caustic pushback from countries around the world). China is still considered to be one to two generations behind in chip manufacturing, though it increasingly owns the low end of the market.

Where the semiconductor supply chain traditionally gets more entwined with China is around testing and packaging, which are generally considered lower value (albeit critical) tasks that have been increasingly outsourced to the mainland over the years. Taiwan remains the dominant player here as well, with roughly 50% of the global market, but China has been rapidly expanding.

U.S. tariffs on Chinese goods do not apply to Taiwan, so for the most part Nvidia’s supply chain should avoid the brunt of the trade conflict. And while assembly is heavily based in China, electronics assemblers are rapidly adapting their supply chains to mitigate the damage of tariffs by moving factories to Vietnam, India, and elsewhere.

Where it gets tricky is the Chinese market itself, which imports a huge number of semiconductor chips, and represents roughly 20% of Nvidia’s revenues. Even here, many analysts believe that the Chinese will have no choice but to buy Nvidia’s chips, since they are market-leading and substitutes are not easily available.

So the conclusion is that Nvidia likely has maneuvering room in the short term to weather exogenous trade tariff shocks and mitigate their damage. Medium to long term, though, the company will have to position itself very carefully, since China is quickly becoming a dominant player in exactly the verticals it wants to own (automotive, ML workflows, etc.). In other words, Nvidia needs the Chinese market for growth at the exact moment that door is slamming shut. How it navigates this challenge will determine much of its growth profile in the years ahead.

Rapid fire analysis

Short summaries and analysis of important news stories

Saudi Arabia’s Crown Prince Mohammed bin Salman. FETHI BELAID/AFP/Getty Images

US intelligence community says quantum computing and AI pose an ‘emerging threat’ to national security – Our very own Zack Whittaker talks about future challenges to U.S. national security. These technologies are “dual-use,” which means that they can be used for good purposes (autonomous driving, faster processing) and also for nefarious purposes (breaking encryption, autonomous warfare). Expect huge debates and challenges in the next decade about how to keep these technologies on the safe side.

Saudi Arabia Pumps Up Stock Market After Bad News, Including Khashoggi Murder – A WSJ trio of reporters investigates the Saudi government’s aggressive attempts to shore up the value of its stock exchange. Exchange manipulation is hardly novel, either in traditional markets or in blockchain markets. China has been aggressively doing this in its stock exchanges for years. But it is a reminder that in emerging and new exchanges, much of the price signaling is artificial.

A law firm in the trenches against media unions – Andrew McCormick writes in the Columbia Journalism Review how law firm Jones Day has taken a leading role in fighting against the unionization of newsrooms. The challenge of course is that the media business remains mired in cutbacks and weak earnings, and so trying to better divide a rapidly shrinking pie doesn’t make a lot of sense to me. The future — in my view — is entrepreneurial journalists backed up by platforms like Substack where they set their own voice, tone, publishing calendar, and benefits. Having a close relationship with readers is the only way forward for job security.

At least 15 central banks are serious about getting into digital currency – Mike Orcutt at MIT Technology Review notes that a bunch of central banks, including those of China and Canada, are moving on digital currency. What’s interesting is that the trends backing this up include financial inclusion and “diminishing cash usage.” Even though blockchain is in a nuclear winter following the collapse of crypto prices this year, it is exactly these sorts of projects that could be the way forward for the industry.

What’s next

More semiconductors probably. And Arman and I are side glancing at Yelp these days. Any thoughts? Email me at [email protected].

This newsletter is written with the assistance of Arman Tabatabai from New York.

Robinhood lacked proper insurance so will change checking & savings feature

Robinhood will rename and revamp its upcoming checking and banking features after encountering problems with its insurance. The company published a blog post this evening explaining: “We plan to work closely with regulators as we prepare to launch our cash management program, and we’re revamping our marketing materials, including the name . . . Stay tuned for updates.”

Robinhood’s new high-interest, zero-fee checking and savings feature seemed too good to be true. Users’ money wasn’t slated to be fully protected. The CEO of the Securities Investor Protection Corporation, a nonprofit membership corporation that insures stock brokerages, tells TechCrunch its insurance would not apply to checking and savings accounts the way Robinhood originally claimed. “Robinhood would be buying securities for its account and sharing a portion of the proceeds with their customers, and that’s not what we cover,” says SIPC CEO Stephen Harbeck. “I’ve never seen a single document on this. I haven’t been consulted on this.”

That info directly conflicts with comments from Robinhood’s comms team, which told me yesterday users would be protected because the SIPC insures brokerages and the checking/savings feature is offered via Robinhood’s brokerage that is a member of the SIPC.

If Robinhood checking and savings is indeed ineligible for insurance coverage from the SIPC, and since it doesn’t qualify for FDIC protection like a standard bank, users’ funds would have been at risk. Robinhood co-CEO Baiju Bhatt told me that “Robinhood invests users’ checking and savings money into government-grade assets like U.S. treasuries and we collect yield from those assets and pay that back to customers in the form of 3 percent interest.” But Harbeck tells me that means users would effectively be loaning Robinhood their money, and the SIPC doesn’t cover loans. If a market downturn caused the values of those securities to decline and Robinhood couldn’t cover the losses, the SIPC wouldn’t necessarily help users get their money back. 

Robinhood’s team insisted yesterday that customers would not lose their money in the event that the treasuries in which it invests decline, and that only what users gamble on the stock market would be unprotected, as is standard. But now it appears that because Robinhood is misusing its brokerage classification to operate checking and savings accounts where it says users don’t have to invest in stocks and other securities, SIPC insurance wouldn’t apply. “I have an issue with some of the things on their website about whether these checking and savings accounts would be protected. I referred the issue to the SEC,” Harbeck tells me. TechCrunch got in touch with the SEC, but it declined to comment.

Robinhood planned to start shipping its Mastercard debit cards to customers on December 18th, with users being added off the waitlist in January. That may now be delayed due to the insurance problem and its announcement that it will change how the feature works and is positioned.

Robinhood touted how its checking and savings features have no minimum account balance, overdraft fees, foreign transaction fees or card replacement fees. It also has 75,000 free-to-use ATMs in its network, which Bhatt claims is more than the top five U.S. banks combined. And the 3 percent interest rate users earn is much higher than the 0.09 percent average interest rate for traditional savings, and beats most name-brand banks outside of some credit unions.
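On a concrete balance the gap is easy to see; here is a toy calculation with a hypothetical $10,000 deposit, ignoring compounding:

```python
balance = 10_000          # hypothetical deposit in USD

robinhood_rate = 0.03     # 3 percent, as Robinhood marketed
average_rate = 0.0009     # 0.09 percent industry average cited above

# Simple one-year interest on the same balance.
print(f"At 3.00%: ${balance * robinhood_rate:,.2f}/year")   # $300.00
print(f"At 0.09%: ${balance * average_rate:,.2f}/year")     # $9.00
```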

But for those perks, users must sacrifice brick-and-mortar bank branches that can help them with troubles, and instead rely on a 24/7 live chat customer support feature from Robinhood. The debit card has Mastercard’s zero-liability protection against fraud, and Robinhood partners with Sutton Bank to issue the card. But it’s unclear how the checking and savings accounts would have been protected against other types of attacks or scams.

Robinhood was likely hoping to build a larger user base on top of its existing 6 million accounts by leveraging software scalability to provide such competitive rates. It planned to be profitable from its margin on the interest from investing users’ money and a revenue-sharing agreement with Mastercard on interchange fees charged to merchants when you swipe your card. But long term, Robinhood may use checking and savings as a wedge into the larger financial services market from which it can launch more lucrative products like loans.

That could fall apart if users are scared to move their checking and savings money to Robinhood. Startups can suddenly fold or make overly risky decisions while chasing growth. Robinhood’s valuation went from $1.3 billion last year to $5.6 billion when it raised $363 million this year. That puts intense pressure on the company to grow to justify that massive valuation. In its rush to break into banking, it may have cut corners on becoming properly insured. It’s wise for the company to be rethinking the plan to ensure it doesn’t leave users exposed or hurt its reputation by launching without adequate protection.

[Update 12/14/2018 9:30pm Pacific: This article has been significantly updated to include information about Robinhood planning to change its checking and savings feature before launch to ensure users aren’t in danger of losing their money.]

[Disclosure: The author of this article knows Robinhood co-founders Baiju Bhatt and Vlad Tenev from college 10 years ago.]

China’s BYD further drives into Chile with 100 electric buses

Over the past few years, Chinese automaker giant BYD has been on a partnership spree with cities across China to electrify their public transportation systems, and now it’s extending its footprint across the globe. On Thursday, BYD announced that it has shipped 100 electric buses to Santiago, the capital city of Chile.

The step marks the Chinese firm’s further inroads into the Latin American country, where a green car revolution is underway to battle smog. BYD’s first batch of vehicles arrived in Santiago last November, and the Warren Buffett-backed carmaker remains the only electric public bus provider in the country.

Chile is on the map of China’s grand Belt and Road Initiative, which aims to turbocharge the world’s less developed regions with infrastructure development and investments. “With the help of ‘One Belt One Road,’ BYD has successfully entered Chile, Colombia, Ecuador, Brazil, Uruguay and other Latin American countries. As the region accelerates its electric revolution, BYD may be able to win more opportunities,” said BYD in a statement.


Chilean President Sebastián Piñera rides the BYD electric bus. / Credit: BYD

The 100 buses embarked on a 45-day sea voyage from BYD’s factory in eastern China to land on the roads of Santiago. They sport the Chilean national colors of red and white on the exterior and provide USB charging ports inside to serve a generation who live on their electronic devices.

The fleet arrived through a partnership between BYD and Enel, a European utility juggernaut that claims to have accounted for 40 percent of Chile’s energy sales in 2017. Enel purchased the fleet from BYD and leased the buses to local transportation operator Metbus, while the Chilean government set the rules and standards for the buses, a BYD spokesperson told TechCrunch.

Local passengers graded BYD’s electric vehicles at 6.3 out of 7, well above the 4.6 average for the Santiago public transportation system, according to a survey jointly produced by Chile’s Ministry of Energy and its Ministry of Transport and Telecommunications. Respondents cited qualities the buses deliver, such as low noise levels, air conditioning and USB charging.

Santiago currently has 7,000 public buses running on the road, of which 400 get replaced every year. A lot of the new ones will be diesel-free, as the Chilean government says it aims to increase the total number of electric vehicles tenfold by 2022.

Why you need a supercomputer to build a house

When the hell did building a house become so complicated?

Don’t let the folks on HGTV fool you. The process of building a home nowadays is incredibly painful. Just applying for the necessary permits can be a soul-crushing undertaking that’ll have you running around the city, filling out useless forms, and waiting in motionless lines under fluorescent lights at City Hall wondering whether you should have just moved back in with your parents.

Consider this an ongoing discussion about Urban Tech, its intersection with regulation, issues of public service, and other complexities that people have full PhDs in. I’m just a bitter, born-and-bred New Yorker trying to figure out why I’ve been stuck in between subway stops for the last 15 minutes, so please reach out with your take on any of these thoughts: @[email protected].

And to actually get approval for those permits, your future home will have to satisfy a set of conditions that is a factorial of complex and conflicting federal, state and city building codes, separate sets of fire and energy requirements, and quasi-legal construction standards set by various independent agencies.

It wasn’t always this hard – remember when you’d hear people say “my grandparents built this house with their bare hands?” These proliferating rules have been among the main causes of the rapidly rising cost of housing in America and other developed nations. The good news is that a new generation of startups is identifying and simplifying these thickets of rules, and the future of housing may be determined as much by machine learning as woodworking.

When directions become deterrents

Photo by Bill Oxford via Getty Images

Cities once solely created the building codes that dictate the requirements for almost every aspect of a building’s design, and they structured those guidelines based on local terrain, climates and risks. Over time, townships, states, federally-recognized organizations and independent groups that sprouted from the insurance industry further created their own “model” building codes.

The complexity starts here. The federal codes and independent agency standards are optional for states, who have their own codes which are optional for cities, who have their own codes that are often inconsistent with the state’s and are optional for individual townships. Thus, local building codes are these ever-changing and constantly-swelling mutant books made up of whichever aspects of these different codes local governments choose to mix together. For instance, New York City’s building code is made up of five sections, 76 chapters and 35 appendices, alongside a separate set of 67 updates (The 2014 edition is available as a book for $155, and it makes a great gift for someone you never want to talk to again).

In short: what a shit show.

Because of the hyper-localized and overlapping nature of building codes, a home in one location can be subject to a completely different set of requirements than one elsewhere. So it’s really freaking difficult to even understand what you’re allowed to build, the conditions you need to satisfy, and how to best meet those conditions.

There are certain levels of complexity in housing codes that are hard to avoid. The structural integrity of a home is dependent on everything from walls to erosion and wind-flow. There are countless types of material and technology used in buildings, all of which are constantly evolving.

Thus, each thousand-page codebook from the various federal, state, city, township and independent agencies – all dictating interconnecting, location- and structure-dependent needs – leads to an incredibly expansive decision tree that requires an endless set of simulations to fully understand all the options you have to reach compliance, and their respective cost-effectiveness and efficiency.
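To see why that tree gets so big so fast, consider a deliberately simplified sketch with made-up option counts: if each design variable offers a handful of code-compliant choices, the number of whole-building scenarios is the product of those counts.

```python
from math import prod

# Hypothetical counts of code-compliant options per design variable.
options = {
    "wall assembly": 12,
    "window glazing": 8,
    "roof insulation": 6,
    "HVAC system": 10,
    "foundation type": 5,
    "cladding material": 7,
}

# Every combination is a distinct building to evaluate for compliance,
# cost-effectiveness and efficiency.
scenarios = prod(options.values())
print(f"{scenarios:,} combinations from just {len(options)} variables")
# 201,600 combinations from just 6 variables
```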

So homebuilders are often forced to turn to costly consultants or settle on designs that satisfy code but aren’t cost-efficient. And if construction issues cause you to fall short of the outcomes you expected, you could face hefty fines, delays or gigantic cost overruns from redesigns and rebuilds. All these costs flow through the lifecycle of a building, ultimately impacting affordability and access for homeowners and renters.

Startups are helping people crack the code

Photo by Caiaimage/Rafal Rodzoch via Getty Images

Strap on your hard hat – there may be hope for your dream home after all.

The friction, inefficiencies, and pure agony caused by our increasingly convoluted building codes have given rise to a growing set of companies that are helping people make sense of the home-building process by incorporating regulations directly into their software.

Using machine learning, their platforms run advanced scenario-analysis around interweaving building codes and inter-dependent structural variables, allowing users to create compliant designs and regulatory-informed decisions without having to ever encounter the regulations themselves.

For example, the prefab housing startup Cover is helping people figure out what kind of backyard homes they can design and build on their properties based on local zoning and permitting regulations.

Some startups are trying to provide similar services to developers of larger scale buildings as well. Just this past week, I covered the seed round for a startup called Cove.Tool, which analyzes local building energy codes – based on location and project-level characteristics specified by the developer – and spits out the most cost-effective and energy-efficient resource mix that can be built to hit local energy requirements.

And startups aren’t just simplifying the regulatory pains of the housing process through building codes. Envelope is helping developers make sense of our equally tortuous zoning codes, while Cover and companies like Camino are helping steer home and business-owners through arduous and analog permitting processes.

Look, I’m not saying codes are bad. In fact, I think building codes are good and necessary – no one wants to live in a home that might cave in on itself the next time it snows. But I still can’t help but ask myself why the hell does it take AI to figure out how to build a house? Why do we have building codes that take a supercomputer to figure out?

Ultimately, it would probably help to have more standardized building codes that we actually clean up from time to time. More regional standardization would greatly reduce the number of conditional branches that exist. And if there were one set of accepted overarching codes that could still set precise requirements for all components of a building, there would only be one path of regulations to follow, greatly reducing the knowledge and analysis necessary to efficiently build a home.

But housing’s inherent ties to geography make standardization unlikely. Each region has different land conditions, climates, priorities and political motivations that cause governments to want their own set of rules.

Instead, governments seem to be fine with sidestepping the issues caused by hyper-regional building codes and leaving it up to startups to help people wade through the ridiculousness that paves the home-building process, in the same way Concur aids employees with infuriating corporate expensing policies.

For now, we can count on startups that are unlocking value and making housing more accessible, simpler and cheaper just by making the rules easier to understand. And maybe one day my grandkids can tell their friends how their grandpa built his house with his own supercomputer.

And lastly, some reading while in transit:

The nation-state of the internet

The internet is a community, but can it be a nation-state? It’s a question that I have been pondering on and off this year, what with the rise of digital nomads and the deeply libertarian ethos baked into parts of the blockchain community. It’s clearly on a lot of other people’s minds as well: when we interviewed Matt Howard of Norwest on Equity a few weeks back, he noted (unprompted) that Uber is one of the few companies that could reach “nation-state” status when it IPOs.

Clearly, the internet is home to many, diverse communities of similar-minded people, but how do those communities transmute from disparate bands into a nation-state?

That question led me to Imagined Communities, a book from 1983 and one of the most lauded (and debated) social science works ever published. Certainly it is among the most heavily cited: Google Scholar pegs it at almost 93,000 citations.

Benedict Anderson, a political scientist and historian, ponders a simple question: where does nationalism come from? How do we come to form a common bond with others under symbols like a flag, even though we have never — and will almost never — meet all of our comrades-in-arms? Why does every country consider itself “special,” yet for all intents and purposes they all look identical (heads of state, colors and flags, etc.)? And why was the nation-state invented so late?

Anderson’s answer is his title: people come to form nations when they can imagine their community and the values and people it holds, and thus can demarcate the borders (physical and cognitive) of who is a member of that hypothetical club and who is not.

In order to imagine a community though, there needs to be media that actually links that community together. The printing press is the necessary invention, but Anderson tracks the rise of nation-states to the development of vernacular media — French language as opposed to the Latin of the Catholic Church. Lexicographers researched and published dictionaries and thesauruses, and the printing presses — under pressure from capitalism’s dictates — created rich shelves of books filled with the stories and myths of peoples who just a few decades ago didn’t “exist” in the mind’s eye.

The nation-state itself was developed first in South America in the decline and aftermath of the Spanish and Portuguese empires. Anderson argues for a sociological perspective on where these states originate from. Intense circulation among local elites — the bureaucrats, lawyers, and professionals of these states — and their lack of mobility back to their empires’ capitals created a community of people who realized they had more in common with each other than the people on the other side of the Atlantic.

As other communities globally start to understand their unique place in the world, they import these early models of nation-states through the rich print culture of books and newspapers. We aren’t looking at convergent evolution, but rather clones of one model for organizing the nation implemented across the world.

That’s effectively the heart of the thesis of this petite book, which numbers just over 200 pages of eminently readable if occasionally turgid writing. There are dozens of other epiphanies and thoughts roaming throughout those pages, and so the best way to get the full flavor is just to pick up a used copy and dive in.

For my purposes though, I was curious to see how well Anderson’s thesis could be applied to the nation-state of the internet. Certainly, the concept that the internet is its own sovereign entity has been with us almost since its invention (just take a look at John Perry Barlow’s original manifesto on the independence of cyberspace if you haven’t).

Isn’t the internet nothing but a series of imagined communities? Aren’t subreddits literally the seeds of nation-states? Every time Anderson mentioned the printing press or “print-capitalism,” I couldn’t help but replace the word “press” with WordPress and print-capitalism with advertising or surveillance capitalism. Aren’t we going through exactly the kind of media revolution that drove the first nation-states a few centuries ago?

Perhaps, but it’s an extraordinarily simplistic comparison, one that misses some of the key originators of these nation-states.

Photo by metamorworks via Getty Images

One of the key challenges is that nation-states weren’t a rupture in time, but rather were continuous with existing power structures. On this point, Anderson is quite absolute. In South America, nation-states were born out of the colonial administrations, and elites — worried about losing their power — used the burgeoning form of the nation-state to protect their interests (Anderson calls this “official nationalism”). Anderson sees this pattern pretty much everywhere, and if not from colonial governments, then from the feudal arrangements of the late Middle Ages.

If you turn the gaze to the internet, then, who are the elites? Perhaps Google or Facebook (or Uber), companies with “nation-state” status that are essentially empires unto themselves. Yet the analogy feels stretched to me.

There is an even greater problem though. In Anderson’s world, language is the critical vehicle by which the nation-state connects its citizens together into one imagined community. It’s hard to imagine France without French, or England without English. The very symbols by which we imagine our community are symbols of that community, and it is that self-referencing that creates a critical feedback loop back to the community and reinforces its differentiation.

That would seem to knock out the lowly subreddit as a potential nation-state, but it does raise the question of one group: coders.

When I write in Python for instance, I connect with a group of people who share that language, who communicate in that language (not entirely mind you), and who share certain values in common by their choice of that language. In fact, software engineers can tie their choices of language so strongly to their identities that it is entirely possible that “Python developer” or “Go programmer” says more about that person than “American” or “Chinese.”

Where this gets interesting is when you carefully connect it to blockchain, which I take to mean a technology that can autonomously distribute “wealth.” Suddenly, you have an imagined community of software engineers, who speak in their own “language” able to create a bureaucracy that serves their interests, and with media that connects them all together (through the internet). The ingredients — at least as Anderson’s recipe would have them — are all there.

I am not going to push too hard in this direction, but one surprise I had with Anderson is how little he discussed the physical agglomeration of people. The imagining of (physical) borders is crucial for a community, and so the development of maps for each nation is a common pattern in their historical developments. But, the map, fundamentally, is a symbol, a reminder that “this place is our place” and not much more.

Indeed, nation-states bleed across physical borders all the time. Americans are used to the concept of worldwide taxation. France seats representatives from its overseas departments in the National Assembly, allowing French citizens across the former empire to vote and elect representatives to the country’s legislature. And anyone who has followed the Huawei CFO arrest in Canada this week should know that “jurisdiction” these days has few physical borders.

The barrier for the internet or its people to become nation-states is not physical then, but cognitive. One needs to not just imagine a community, but imagine it as the prime community. We will see an internet nation-state when we see people prioritizing fealty to one of these digital communities over the loyalty and patriotism to a meatspace country. There are already early acolytes in these communities who act exactly that way. The question is whether the rest of the adherents will join forces and create their own imagined (cyber)space.

Australia rushes its ‘dangerous’ anti-encryption bill into parliament, despite massive opposition

Australia’s controversial anti-encryption bill is one step closer to becoming law, after the country’s two leading but sparring political parties struck a deal to pass the legislation.

The bill, in short, grants Australian police greater powers to issue “technical notices” — a nice way of forcing companies — even websites — operating in Australia to help the government hack, implant malware, undermine encryption or insert backdoors at the behest of the government.

If companies refuse, they could face financial penalties.

Lawmakers say that the law is only meant to target serious crimes — sex offenses, terrorism, homicide and drug offenses. Critics have pointed out that the law could allow mission creep into less serious offenses, such as copyright infringement, despite promises that compelled assistance requests will be signed off by two senior government officials.

In all, the proposed provisions have been widely panned by experts, who argue that the bill is vague and contradictory, but powerful, and still contains “dangerous loopholes.” And, critics warn (as they have for years) that any technical backdoors that allow the government to access end-to-end encrypted messages could be exploited by hackers.

But that’s unlikely to get in the way of the bill’s near-inevitable passing.

Australia’s ruling coalition government and its opposition Labor party agreed to have the bill put before parliament this week before its summer break.

Several lawmakers look set to reject the bill, criticizing the government’s efforts to rush it through before the holiday.

“Far from being a ‘national security measure’ this bill will have the unintended consequence of diminishing the online safety, security and privacy of every single Australian,” said Jordon Steele-John, a Greens’ senator, in a tweet.

Tim Watts, a Labor member of Parliament for Gellibrand, tweeted a long thread slamming the government’s push to get the legislation passed before Christmas, despite more than 15,000 submissions to a public consultation, largely decrying the bill’s content.

The tech community — arguably the most affected by the bill’s passing — has also slammed the bill. Apple called it “dangerously ambiguous”, while Cisco and Mozilla joined a chorus of other tech firms calling for the government to dial back the provisions.

But the rhetoric isn’t likely to dampen the rush by the global surveillance pact — the U.S., U.K., Canada, Australia and New Zealand, known as the so-called “Five Eyes” group of nations — to push for greater access to encrypted data. Only earlier this year, the governmental coalition said in no uncertain terms that it would force backdoors if companies weren’t willing to help their governments spy.

Australia’s likely to pass the bill — but when exactly remains a mystery. The coalition government has to call an election in less than six months, putting the anti-encryption law on a timer.

Cove.Tool wants to solve climate change one efficient building at a time

As the fight against climate change heats up, Cove.Tool is looking to help tackle carbon emissions one building at a time.

The Atlanta-based startup provides an automated big-data platform that helps architects, engineers and contractors identify the most cost-effective ways to make buildings compliant with energy efficiency requirements. After raising an initial round earlier this year, the company completed the final close of a $750,000 seed round. Since the initial announcement of the round earlier this month, Urban Us, the early-stage fund focused on companies transforming city life, has joined the syndicate, which comprises Tech Square Labs and Knoll Ventures.

Helping firms navigate a growing suite of energy standards and options

Cove.Tool software allows building designers and managers to plug in a variety of building conditions, energy options, and zoning specifications to get to the most cost-effective method of hitting building energy efficiency requirements (Cove.Tool Press Image / Cove.Tool / https://covetool.com).

In the US, the buildings we live and work in contribute more carbon emissions than any other sector. Governments across the country are now looking to improve energy consumption habits by implementing new building codes that set higher energy efficiency requirements for buildings. 

However, figuring out the best ways to meet changing energy standards has become an increasingly difficult task for designers. For one, buildings are subject to differing federal, state and city codes that are all frequently updated and overlaid on one another. Therefore, the specific efficiency requirements for a building can be hard to understand, geographically unique and immensely variable from project to project.

Architects, engineers and contractors also have more options for managing energy consumption than ever before – equipped with tools like connected devices, real-time energy-management software and more-affordable renewable energy resources. And the effectiveness and cost of each resource are also impacted by variables distinct to each project and each location, such as local conditions, resource placement, and factors as specific as the amount of shade a building sees.

With designers and contractors facing countless resource combinations and weightings, Cove.Tool looks to make it easier to identify and implement the most cost-effective and efficient resource bundles that can be used to hit a building’s energy efficiency requirements.

Cove.Tool users begin by specifying a variety of project-specific inputs, which can include a vast amount of extremely granular detail around a building’s use, location, dimensions or otherwise. The software runs the inputs through a set of parametric energy models before spitting out the optimal resource combination under the set parameters.

For example, if a project is located on a site with heavy wind flow in a cold city, the platform might tell you to increase window size and spend on energy efficient wall installations, while reducing spending on HVAC systems. Along with its recommendations, Cove.Tool provides in-depth but fairly easy-to-understand graphical analyses that illustrate various aspects of a building’s energy performance under different scenarios and sensitivities.
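To make the idea tangible, here is a minimal sketch of such a parametric search, with invented option names, costs and a toy energy model rather than Cove.Tool’s actual method: enumerate the combinations, filter out those that miss the energy cap, and keep the cheapest survivor.

```python
from itertools import product

# Toy option space: (name, upfront cost in USD, annual energy use in kWh).
windows = [("small", 8_000, 26_000), ("large", 12_000, 22_000)]
walls   = [("standard", 20_000, 24_000), ("high-R", 28_000, 18_000)]
hvac    = [("basic", 10_000, 20_000), ("premium", 22_000, 16_000)]

ENERGY_CODE_CAP = 60_000  # hypothetical kWh/year limit for compliance

best = None
for w, wa, h in product(windows, walls, hvac):
    cost = w[1] + wa[1] + h[1]
    energy = w[2] + wa[2] + h[2]
    if energy <= ENERGY_CODE_CAP and (best is None or cost < best[0]):
        best = (cost, energy, (w[0], wa[0], h[0]))

print(best)
# (50000, 60000, ('large', 'high-R', 'basic')) — in this toy setup the
# cheapest compliant design spends on windows and walls, not HVAC.
```

A real platform searches a vastly larger space with physics-based simulations rather than three hand-picked variables, but the shape of the optimization is the same.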

Cove.Tool users can input granular project-specifics, such as shading from particular beams and facades, to get precise analyses around a building’s energy performance under different scenarios and sensitivities.

Democratizing building energy modeling

Traditionally, the design process for a building’s energy system can be quite painful for architecture and engineering firms.

An architect would send initial building designs to engineers, who then test out a variety of energy system scenarios over the course of a few weeks. By the time the engineers are able to come back with an analysis, the architects have often made significant design changes, which then get sent back to the engineers, forcing the energy plan to constantly run one to three months behind the rest of the building. This process can not only lead to less-efficient and more-expensive energy infrastructure, but the hectic back-and-forth can also lead to longer project timelines, unexpected construction issues, delays and budget overruns.

Cove.Tool effectively looks to automate the process of “energy modeling.” Energy modeling looks to ease the pains of energy design in the same way Building Information Modeling (BIM) has transformed architectural design and construction. Just as BIM creates predictive digital simulations that test all the design attributes of a project, energy modeling uses building specs, environmental conditions, and various other parameters to simulate a building’s energy efficiency, costs and footprint.

By using energy modeling, developers can optimize the design of the building’s energy system, adjust plans in real-time, and more effectively manage the construction of a building’s energy infrastructure. However, the expertise needed for energy modeling falls outside the comfort zones of many firms, who often have to outsource the task to expensive consultants.

The frustrations of energy system design and the complexities of energy modeling are ones the Cove.Tool team knows well. Patrick Chopson and Sandeep Ajuha, two of the company’s three co-founders, are former architects who worked as energy modeling consultants when they first began building out the Cove.Tool software.

After seeing their clients’ initial excitement over the ability to quickly analyze millions of combinations and instantly identify the ones that produce cost and energy savings, Patrick and Sandeep teamed up with CTO Daniel Chopson and focused full-time on building out a comprehensive automated solution that would allow firms to run energy modeling analysis without costly consultants, more quickly, and through an interface that would be easy enough for an architectural intern to use.

So far there seems to be serious demand for the product, with the company already boasting an impressive roster of customers that includes several of the country’s largest architecture firms, such as HGA, HKS and Cooper Carry. And the platform has delivered compelling results – for example, one residential developer was able to identify energy solutions that cost $2 million less than the building’s original model. With the funds from its seed round, Cove.Tool plans to further enhance its sales efforts while continuing to develop additional features for the platform.

Changing decision-making and fighting climate change

The value proposition Cove.Tool hopes to offer is clear – the company wants to make it easier, faster and cheaper for firms to use innovative design processes that help identify the most cost-effective and energy-efficient solutions for their buildings, all while reducing the risks of redesign, delay and budget overruns.

Longer-term, the company hopes that it can help the building industry move towards more innovative project processes and more informed decision-making while making a serious dent in the fight against emissions.

“We want to change the way decisions are made. We want decisions to move away from being just intuition to become more data-driven,” the co-founders told TechCrunch.

“Ultimately we want to help stop climate change one building at a time. Stopping climate change is such a huge undertaking but if we can change the behavior of buildings it can be a bit easier. Architects and engineers are working hard but they need help and we need to change.”

The economics and tradeoffs of ad-funded smart city tech

In order to have innovative smart city applications, cities first need to build out the connected infrastructure, which can be a costly, lengthy, and politicized process. Third-parties are helping build infrastructure at no cost to cities by paying for projects entirely through advertising placements on the new equipment. I try to dig into the economics of ad-funded smart city projects to better understand what types of infrastructure can be built under an ad-funded model, the benefits the strategy provides to cities, and the non-obvious costs cities have to consider.

Consider this an ongoing discussion about Urban Tech, its intersection with regulation, issues of public service and other complexities that people have full PhDs on. I’m just a bitter, born-and-bred New Yorker trying to figure out why I’ve been stuck in between subway stops for the last 15 minutes, so please reach out with your take on any of these thoughts: @[email protected].

Using ads to fund smart city infrastructure at no cost to cities

When we talk about “Smart Cities”, we tend to focus on these long-term utopian visions of perfectly clean, efficient, IoT-connected cities that adjust to our environment, our movements, and our every desire. Anyone who spent hours waiting for transit the last time the weather turned south can tell you that we’ve got a long way to go.

But before cities can have the snazzy applications that do things like adjust infrastructure based on real-time conditions, they first need to build out the platform and technology base that those applications can be built on, as the McKinsey Global Institute explained in an in-depth report released earlier this summer. This means building out the network of sensors, connected devices and infrastructure needed to track city data.

However, reaching the technological base needed for data gathering and smart communication means building out hard physical infrastructure, which can cost cities a ton and can take forever when dealing with politics and government processes.

Many cities are also dealing with well-documented infrastructure crises. And with limited budgets, local governments need to spend public funds on important things like roads, schools, healthcare and nonsensical sports stadiums, which are pretty much never profitable for cities (I’m a huge fan of baseball, but I’m not a fan of how we fund stadiums here in the States).

As city infrastructure has become increasingly tech-enabled and digitized, an interesting financing solution has opened up in which smart city infrastructure projects are built by third-parties at no cost to the city and are instead paid for entirely through digital advertising placed on the new infrastructure. 

I know – the idea of a city built on ad-revenue conjures soul-sucking Orwellian images of corporate overlords and logo-paved streets straight out of Blade Runner or Wall-E. Luckily for us, based on my discussions with developers of ad-funded smart city projects, it seems clear that the economics of an ad-funded model only really work for certain types of hard infrastructure with specific attributes – meaning we may be spared from fire hydrants brought to us by Mountain Dew.

While many factors influence the viability of a project, smart infrastructure projects seem to need two attributes in particular for an ad-funded model to make sense. First, the infrastructure has to be something that citizens will engage – and engage a lot – with. You can’t throw a screen onto any object and expect that people will interact with it for more than 3 seconds or that brands will be willing to pay to throw their taglines on it. The infrastructure has to support effective advertising.  

Second, the investment has to be cost-effective, meaning the infrastructure can only cost so much. A third-party willing to build the infrastructure has to believe it has a realistic chance of generating enough ad-revenue to cover the costs of the project, and likely enough above that to earn a reasonable return. For example, it seems unlikely you’d find someone willing to build a new bridge, front all the costs, and try to fund it through ad-revenue.

When is ad-funding feasible? A case study on kiosks and LinkNYC

A LinkNYC kiosk enabling access to the internet in New York on Saturday, February 20, 2016. Over 7,500 kiosks are to be installed, replacing standalone pay phone kiosks and providing free Wi-Fi, internet access via a touch screen, phone charging and free phone calls. The system is to be supported by advertising running on the sides of the kiosks. (Photo by Richard B. Levine/Corbis via Getty Images)

To get a better understanding of the types of smart city hardware that might actually make sense for an ad-funded model, we can look at the engagement levels and cost structures of smart kiosks, and in particular, the LinkNYC project. Smart kiosks – which provide free WiFi, connectivity and real-time services to citizens – have been leading examples of ad-funded smart city projects. Innovative companies like Intersection (developers of the LinkNYC project), SmartLink, IKE, Soofa, and others have been helping cities build out kiosk networks at little-to-no cost to local governments.

LinkNYC provides public access to much of its data on the NYC Open Data website. Using some back-of-the-envelope math and a hefty number of assumptions, we can try to get to a very rough range of where cost and engagement metrics generally have to fall for an ad-funded model to make sense.

To retrace the considerations behind the developers’ investment decision, let’s first look at the terms of the deal signed with New York back in 2014. The agreement called for a 12-year franchise period, during which at least 7,500 Link kiosks would be deployed across the city in the first eight years, at an expected project cost of more than $200 million. As part of its solicitation, the city also required the developers to pay the greater of a minimum annual payment of at least $17.5 million or 50 percent of gross revenues.

Let’s start with the cost side – based on an estimated project cost of around $200 million for at least 7,500 Links, we can get to an estimated cost per unit of $25,000 – $30,000. It’s important to note that this only accounts for the install costs, as we don’t have data around the other cost buckets that the developers would also be on the hook for, such as maintenance, utility and financing costs.

Source: LinkNYC, NYC.gov, NYCOpenData
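As a quick sanity check on that range, the per-unit figure is straightforward division (a sketch using the deal figures cited above):

```python
# Rough install cost per kiosk, using the LinkNYC deal terms cited above.
project_cost = 200_000_000  # "more than $200 million" for the full build-out
kiosks = 7_500              # minimum deployment required by the franchise
print(f"${project_cost / kiosks:,.0f} per Link")  # ~$26,667, hence the $25,000 - $30,000 range
```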

Turning to engagement and ad-revenue – let’s assume the developers signed the deal with the expectation that they could at least break even, covering the install costs of the project and the minimum payments to the city. And for simplicity, let’s assume the 7,500 Links were to be deployed at a steady pace of 937-938 units per year (though in actuality the install cadence has been different). In order for the project to break even over the 12-year deal period, the developers would have to believe each kiosk could generate around $6,400 in annual ad-revenue (undiscounted).

Source: LinkNYC, NYC.gov, NYCOpenData
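For readers who want to retrace the math, here is one rough way to reproduce that ~$6,400 figure. The simplifying assumptions are mine: each year’s newly installed kiosks are treated as earning for that entire year, and costs are limited to the install budget plus the minimum city payments:

```python
# Back-of-the-envelope breakeven ad-revenue per kiosk-year for the LinkNYC deal.
INSTALL_COST = 200_000_000     # estimated install cost for the full build-out
MIN_CITY_PAYMENT = 17_500_000  # minimum annual payment owed to the city
FRANCHISE_YEARS = 12
ROLLOUT_YEARS = 8
TOTAL_KIOSKS = 7_500

# Steady rollout of 937.5 kiosks per year for eight years, then a full
# fleet for the remaining four years of the franchise (an assumption).
per_year = TOTAL_KIOSKS / ROLLOUT_YEARS
kiosk_years = sum(per_year * year for year in range(1, ROLLOUT_YEARS + 1))
kiosk_years += TOTAL_KIOSKS * (FRANCHISE_YEARS - ROLLOUT_YEARS)

total_cost = INSTALL_COST + MIN_CITY_PAYMENT * FRANCHISE_YEARS
print(f"kiosk-years: {kiosk_years:,.0f}")                             # 63,750
print(f"breakeven per kiosk-year: ${total_cost / kiosk_years:,.0f}")  # ~$6,431
```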

The reason the kiosks can generate this revenue (and in reality a lot more) is that they see significant engagement from users. There are currently around 1,750 Links deployed across New York. As of November 18th, LinkNYC had over 720,000 weekly subscribers, or around 410 weekly subscribers per Link. The kiosks also saw an average of 18 million sessions per week, or 20-25 weekly sessions per subscriber, or around 10,200 weekly sessions per kiosk (seasonality might even make this estimate too low).
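Those per-kiosk figures follow directly from the citywide totals – simple division over the deployed-Link count:

```python
# Implied per-kiosk engagement from the citywide LinkNYC figures above.
links_deployed = 1_750
weekly_subscribers = 720_000
weekly_sessions = 18_000_000

print(f"{weekly_subscribers / links_deployed:,.0f} weekly subscribers per Link")      # ~411
print(f"{weekly_sessions / weekly_subscribers:,.0f} weekly sessions per subscriber")  # 25
print(f"{weekly_sessions / links_deployed:,.0f} weekly sessions per Link")            # ~10,286
```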

And when citizens do use the kiosks, they use them for a long time! The average session at each Link unit was four minutes and six seconds. That level of engagement makes sense, since city-dwellers use these kiosks in time- or attention-intensive ways, such as making phone calls, getting directions, finding information about the city, or charging their phones.

The analysis here isn’t perfect, but now we at least have a (very) rough idea of how much smart kiosks cost, how much engagement they see, and the amount of ad-revenue developers would have to believe they could realize at each unit in order to ultimately move forward with deployment. We can use these metrics to help identify what types of infrastructure have similar profiles and where an ad-funded project may make sense.

Bus stations, for example, may cost about $10,000 – $15,000, which is in a similar cost range to smart kiosks. According to the MTA, the NYC bus system sees over 11.2 million riders per week, or nearly 700 riders per station per week. Rider wait times can often run five to ten minutes, if not longer. Not to mention, bus stations already have experience with advertising to a certain degree. Projects like bike-share docking stations and EV charging stations also seem to fit similar cost profiles while seeing high engagement.
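Those two MTA figures together imply a citywide stop count of roughly 16,000 – an inference worth making explicit, since it drives the per-station math:

```python
# Implied number of NYC bus stops from the MTA figures above (an inference
# from the two cited numbers, not an official count).
weekly_riders = 11_200_000
riders_per_stop = 700
print(f"~{weekly_riders / riders_per_stop:,.0f} bus stops implied citywide")  # ~16,000
```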

And interactions with these types of infrastructure are ones where users may be more receptive to ads – at an EV charging station, for example, someone is both physically engaging with the equipment and idly looking to kill up to 30 minutes while they charge up. As a result, more companies are using advertising models to fund projects that fit this mold, like Volta, which uses advertising to offer charging stations free to citizens.

The benefits of ad-funding come with tradeoffs for cities

When it makes sense for cities and third-party developers, advertising-funded smart city infrastructure projects can unlock a tremendous amount of value for a city. The benefits are clear – cities pay nothing, citizens get free connectivity and real-time information on local conditions, and the smart infrastructure that gets built can possibly be used for other smart city applications down the road, such as using locational data tracking to improve city zoning and ease congestion.

Yes, ads are usually annoying – but understanding that advertising models only work for specific types of smart city projects may help quell fears that future cities will be covered inch-by-inch in mascots. And ads on projects like LinkNYC promote local businesses and can tap into the idiosyncratic conditions and preferences of regional communities – LinkNYC previously used real-time local transit data to display beer ads to subway riders who were facing heavy delays and were probably in need of a drink.

Like everyone’s family photos from Thanksgiving, though, the picture here is not all roses, and there are a lot of deep-rooted issues under the surface. Third-party developed, advertising-funded infrastructure comes with externalities and less obvious costs that have been fairly criticized and debated at length.

When infrastructure funding is derived from advertising, concerns arise over whether services will be provided equitably across communities. Many fear that low-income or less-trafficked communities that generate less advertising demand could end up with poorer infrastructure and maintenance.

Even bigger points of contention as of late have been issues around data consent and treatment. I won’t go into much detail on the issue since it’s incredibly complex and warrants its own lengthy dissertation (and many have already been written). 

But some of the major uncertainties and questions cities are trying to answer include: If third-parties pay for, manage and operate smart city projects, who should own data on citizens’ living behavior? How will citizens give consent to provide data when tracking systems are built into the environment around them? How can the data be used? How granular can the data get? How can we assure citizens’ information is secure, especially given the spotty track records some of the major backers of smart city projects have when it comes to keeping our data safe?

The issue of data treatment is one that no one has really figured out yet, and many developers are doing their best to work with cities and users to find a reasonable solution. For example, LinkNYC is currently limited by the city in the types of data it can collect. Outside of email addresses, LinkNYC doesn’t ask for or collect personal information, and it doesn’t sell or share personal data without a court order. The project owners also make much of the collected data publicly accessible online and through annually published transparency reports. As Intersection has deployed similar smart kiosks in new cities, the company has been willing to work through slower launches and pilot programs to create policies local governments are more comfortable with.

But consequential decisions about third-party owned smart infrastructure are only going to become more frequent as cities become increasingly digitized and connected. When third-parties pay for projects through advertising revenue or otherwise, city budgets can be focused on other vital public services while the efficient, adaptive and innovative infrastructure that can help solve some of civil society’s largest problems still gets built. But if that means giving up full control of city infrastructure and information, cities and citizens have to consider whether the benefits are worth the tradeoffs that could come with them. There is a clear price to pay here, even when someone else is footing the bill.

And lastly, some reading while in transit: