How to make remote work work

Every time I see a “the future of work is remote” article, I think to myself: “How backwards! How retro! How quaint!” That future is now, for many of us. I’ve been a fully remote developer-turned-CTO for a decade now. So I’m always baffled by people still wrestling with whether remote work is viable for their company. That jury rendered its verdict a long time ago.

One reason companies still struggle with it is that remote work amplifies the negative effects of bad practices. If everyone’s in one place, you can dither, handwave, vacillate, micromanage, and turn your workplace into an endless wasteland of uncertainty, punctuated by ad-hoc last-second crisis meetings — and your employees will probably still conspire against your counterproductive management to get something done, albeit much less than they’re capable of.

If they’re remote, though, progress via conspiracy and adhocracy is no longer an option. If they’re remote, you need decisive confidence, clear direction, iterative targets, independent responsibilities, asynchronous communications, and cheerful chatter. Let me go over each of those:

Decisive confidence. Suppose Vivek in Delhi, Diego in Rio, and Miles in Berlin are all on a project. (An example I’m drawing from my real life.) It’s late your time. You have to make a decision about the direction of their work. If you sleep on it, you’re writing off multiple developer-days of productivity.

Sometimes they have enough responsibilities to have other things to work on. (More on that below.) Sometimes you don’t have to make the decision because they have enough responsibility to do so themselves. (More on that below.) But sometimes you have to make the business-level decision based on scant information. In cases like this, remember the military maxim: “Any decision is better than no decision.”

Feedback loops and online abuse

I’ve long thought that much of the world can be explained by feedback loops. Why are small companies nimbler than large ones? Why are private companies generally more efficient than governments? Primarily because in each case, the former has a better feedback loop. When faced with a baffling question — such as, “why do online companies do such a terrible job at dealing with abuse?” — it’s often helpful to look at the feedback loops.

Let’s look at the small vs. large and private vs. government comparisons first, as examples. Small companies have extremely tight feedback loops; a single person makes a decision, sees the results, and pivots accordingly, without the need for meetings or cross-division consensus. Larger companies have to deal with other departments, internal politics, red tape, the blessing of multiple vice-presidents, legal analysis, etc., before they can make meaningful changes.

Similarly, if a private company’s initiative isn’t going well, its revenue immediately begins to plummet, a very strong signal that it needs to change its course quickly. If a government initiative isn’t going well, the voters render their verdict … at the next election, mingled with their verdicts on all the other initiatives. In the absence of specific and meaningful external feedback, various proxies exist … but it’s difficult to definitively determine actual signal from noise.

And when a social-media platform, especially an algorithm-driven one, determines what content to amplify — which implicitly means deciding which content to de-amplify — and which content to ban … what is its feedback loop? Revenue is one, of course. Amplifying content which leads to more engagement leads to more revenue. So they do that. Simple, right?

Ahahahahahaha no, as you may have noticed. Anything but simple. Content which is amplified is often bad content. Abuse. Fake news. Horrifyingly creepy YouTube videos. Etcetera.

Suppose that (many of) the employees of these platforms genuinely wish to deal with and hopefully eliminate these problems. I know that seems like a big supposition, but let’s just imagine it. Then why have they consistently seemed so spectacularly bad at doing so? Is it purely because they are money-grubbing monsters making hay off bullying, vitriol, the corrosion of the social contract, etc.?

Or is it that, because it never occurred to them to measure how susceptible their own systems were to bad actors, or how severe the effects, they had to rely on others — journalists, politicians, the public — for a slow, imprecise form of feedback? Feedback such as: “your recommendation algorithm is doing truly terrible things” or “you are amplifying content designed to fragment our culture and society” or “you are consistently letting assholes dogpile-abuse vulnerable people, while suspending the accounts of the wronged,” to name the major criticisms most often leveled at Google, Facebook, and Twitter respectively.

But this is a subtle and sluggish feedback loop, one primarily driven by journalists and politicians, who in turn have their own agendas, flaws, and their own feedback loops to which they respond. There is no immediately measurable response like there is with, say, revenue. And so whatever they do in response is subject to that same slow and imprecise feedback.

So when Google finally responds by banning right-wing extremism, but also history teachers, which is clearly an insanely stupid thing to do, is this a transient, one-time, edge-case bug, or a sign that Google’s whole approach is fundamentally flawed and they need to rethink things? Either way, how can we tell? How can they tell?

(Before you object, no, it’s not done purely by algorithms or neural networks. Humans are in the loop — but clearly not enough of them. I mean, look at this channel which YouTube recently banned; it’s clear at first glance, and confirmed by subsequent study, that this is not right-wing extremism. This should not have been a tough call.)

I’ve long been suspicious of what I call “the scientific fallacy” — that if something cannot be measured, it does not exist. But at the same time, in order to construct meaningful feedback loops which allow your system to be guided in the desired direction, you need a meaningful measure for comparisons.

So I put it to you that a fundamental problem (although not the fundamental problem) with tackling the thorny problem of content curation in social media is that we have no way to concretely measure the scale of what we’re talking about when we say “abuse” or “fake news” or “corrupted recommendation algorithms.” Has it gotten better? Has it gotten worse? Your opinion is probably based on, er, your custom-curated social-media feed. That may not be the best source of truth.

Instead of measuring anything, we seem to be relying on Whack-a-Mole in response to viral outrage and/or media reports. That’s still much better than doing nothing at all. But I can’t help but wonder: do the tech platforms have any way of measuring what it is they’re trying to fight? Even if they did, would anyone else believe their measurements? Perhaps what we need is some form of trusted, or even crowdsourced, third-party measure of just how bad things are.

If you want to make a meaningful difference to these problems — which are admittedly difficult, although, looking back at the banned history teacher’s YouTube channel, perhaps not so difficult as the companies claim — you could start by coming up with a demonstrable, reliable way to measure them. Even an imprecise one would be better than the “outrage Whack-a-Mole” flailing quasi-responses which seem to be underway at the moment.

Password expiration is dead, long live your passwords

May was a momentous month, which marked a victory for sanity and pragmatism over irrational paranoia. I’m obviously not talking about politics. I’m talking about Microsoft finally — finally! but credit to them for doing this nonetheless! — removing the password expiration policies from their Windows 10 security baseline.

Many enterprise-scale organizations (including TechCrunch’s owner Verizon) require their users to change their passwords regularly. This is a spectacularly counterproductive policy. To quote Microsoft:

Recent scientific research calls into question the value of many long-standing password-security practices such as password expiration policies, and points instead to better alternatives … If a password is never stolen, there’s no need to expire it. And if you have evidence that a password has been stolen, you would presumably act immediately rather than wait for expiration to fix the problem.

…If an organization has successfully implemented banned-password lists, multi-factor authentication, detection of password-guessing attacks, and detection of anomalous logon attempts, do they need any periodic password expiration? And if they haven’t implemented modern mitigations, how much protection will they really gain from password expiration? …Periodic password expiration is an ancient and obsolete mitigation of very low value.

If you have a password at such an organization, I recommend you send that blog post to its system administrators. They will ignore you at first, of course, because that’s what enterprise administrators do, and because information security (like transportation security) is too often an irrational one-way ratchet because our culture of fear incentivizes security theater rather than actual security — but they may grudgingly begin to accept that the world has moved on.

Instead: Use a password manager like LastPass or 1Password. (They have viable free tiers! You really have no excuse.) Use it to eliminate or at least minimize password re-use across sites. Use two-factor authentication wherever possible. Yes, even SMS two-factor authentication, despite number-porting and SS7 attacks, because it’s still better than one-factor authentication.
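The codes an authenticator app generates are easy to demystify, by the way: they’re just a time-based HMAC, standardized as TOTP in RFC 6238, which is why they work with no network (and no SMS) at all. A minimal stdlib-only Python sketch — the secret shown is the RFC’s published test value, not a real credential:

```python
import hashlib
import hmac
import struct
import time


def totp(secret: bytes, timestamp=None, step: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time password (RFC 6238, SHA-1).

    The authenticator app and the server share `secret`; both derive the
    same short-lived code from the current time, so nothing travels over
    the network at login time except the code itself.
    """
    if timestamp is None:
        timestamp = int(time.time())
    counter = timestamp // step                 # 30-second time window
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238's published test secret; at T=59 seconds this yields "287082".
print(totp(b"12345678901234567890", timestamp=59))
```

Real deployments layer Base32 secret encoding, clock-drift tolerance, and rate limiting on top, but the core is exactly this dozen lines.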

And please, if you work with code or data repositories, stop checking your passwords and API keys into your repos. I’m the CTO of a consultancy and you would be amazed how many times clients come to us with this unfortunate setup. Repository access is not fine-grained, repos are very easily copied and/or their copies misplaced, and once you’ve checked in credentials they can be annoyingly tricky to truly delete. Using even something as simple as environment variables instead is a huge step up, and also makes your life simpler in many ways when working across multiple environments.
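For illustration, the environment-variable pattern looks like this in Python — the variable name `SERVICE_API_KEY` and the error handling are hypothetical, not taken from any particular client’s setup:

```python
import os


def get_api_key() -> str:
    """Read the API key from the environment instead of source code.

    The credential never appears in the repository; each environment
    (local, staging, production) sets its own value, which also makes
    rotating a leaked key a deployment-config change rather than a
    git-history scrub.
    """
    key = os.environ.get("SERVICE_API_KEY")
    if not key:
        raise RuntimeError(
            "SERVICE_API_KEY is not set. Export it in your shell or "
            "deployment config; never commit it to the repo."
        )
    return key
```

Locally you’d run `export SERVICE_API_KEY=...` before starting the app; in production the same variable comes from your deployment platform’s secret store.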

Perfect security doesn’t exist. World-class security is hard. But decent security is generally quite accessible, if you faithfully follow some basic rules. In order to do so, it’s best to keep those rules to a minimum, and get rid of the ones that don’t make sense. Password expiration is one of those. Goodbye to it, and good riddance.

How games conquered the movies

We used to think that as video games matured, as a medium, they would become more like Hollywood, becoming more focused on character development, plot reversals, and tight, suspense-driven narratives, rather than action set pieces alternating with cinematic cut scenes. Hoo boy, were we wrong. Instead the exact inverse has happened. Action movies have become more like video games. And you know what, this is no bad thing.

I thought of this while watching John Wick 3 last night. (Which I loved, as I did 1 and 2.) It’s not just that its ballets of bullets — especially the one with the dogs — are so like video games, in both structure and form, that they seem to have practically been torn from a controller; you can almost see health bars and Stun markers hovering over the heads of the characters.

It’s also that the series’s primary costar, after Keanu — with apologies to Halle Berry and Ian McShane — is not any other individual character, but the world of John Wick, the Continental, and the High Table. Worldbuilding has long been a first-class citizen in video and tabletop role-playing games; now it has graduated to movies as well.

Speaking of role-playing games, ensemble-cast movies are more and more like them as well. Consider the Fast and Furious movies, or Game of Thrones. Each has a core group who are clearly the “player characters,” as well as disposable villains and extras who are “NPCs.” Each starts with its characters at a relatively low level of skill and power; over the course of the series, they grow to worldshaking might.

In The Fast & The Furious Vin Diesel’s character is a really good driver and mechanic; by the time we get to The Fate of The Furious he’s a superspy capable of singlehandedly opposing entire intelligence agencies. In Game of Thrones we watch Arya become a high-level assassin before our eyes, and Jon Snow happens to become one of the deadliest swordsmen in all of Westeros, casually dispatching dozens of enemies, often several simultaneously, while rarely even breaking a sweat, because — well, there’s no real reason for it, other than that’s what happens to player characters, isn’t it? They level up and become the best.

That didn’t use to be the case. Jason Bourne and James Bond were superspies, but they didn’t really get better over the course of their series, or become so ridiculously puissant that they could casually take out a dozen heavily armed/armored expert fighters in thirty seconds, singlehandedly, as Shaw does in the trailer for the new Fast & Furious movie. Most of Jason Bourne’s action sequences are escapes; most of John Wick’s are hunts. And of course “one hunting a horde” has been the basic mode of first-person shooters since long before Doom.

Does the introduction of these new tropes / styles / narrative conceits make things worse? Well — not necessarily. The Bourne series is a lot grittier, in terms of emotional resonance and suspense, than the John Wick series, but the latter is far more stylish, semiotically rich, and immersive. I love them both about equally. It would be a shame if the only kind of action movie we ever saw from here on in was the stylized un/hyperreality of John Wick — but similarly it would be a shame if Hollywood had never made those movies on the grounds they were too brutally unrealistic.

Ultimately, video games have expanded Hollywood’s possibility space, and to my mind that’s always a good thing. Is it a universal rule that when technology introduces a new medium of storytelling, old media soon adopts the new medium’s styles and tropes? Did plays become more like novels after Don Quixote? Did radio become more like television after TV was introduced? And if/when we figure out the most compelling structure(s) for AR/VR storytelling, will video games become more like that? It seems fairly inevitable to me that the answer is yes.

Blockchain blockchain Malkovich blockchain

I spent much of last week at blockchain conferences, and I’m about ready to never hear the word again. This despite the fact I’ve been supporting decentralized software, as a counterweight or at least alternative to the growing power of governments and megacorps, for years now. Do you think blockchains are no answer? Great, let’s discuss! I wholeheartedly support you skeptics with whom I cautiously disagree. What I can’t stand, it turns out, is an endless sea of true believers nominally on my side.

The believers are twofold, and the two groups grate differently. One group is there almost purely because there is money in the space: Wall Street types hoping to rule a new asset-tokenized world which may come to be; financial startups offering blockchain versions of existing financial tools; new marketplaces just like old marketplaces, except On The Blockchain, and therefore better.

It’s all too easy to envision a future in which the collective vampire squid that is the financial industry — which has gone from taking 14% of all US profits in 1985 to consistently raking in more like 25% over the last couple of decades — ironically turns blockchains into a tool for “financializing” the economy even more, routing every global transaction through even more middlepeople, each of them shaving off a basis point or two. I doubt this will ever happen — but it’s the clear goal of much of group one.

(And before you even think about blaming regulators for this continued attempt at creeping financialization, consider that the cryptocurrency casino is full of so much shadiness it serves as a superb object example of why regulators exist, even if a few of their rules are a bit hidebound and baffling in today’s world.)

Needless to say this was not the original vision. The original vision of Bitcoin was, quote, “A Peer-To-Peer Electronic Cash System.” How has that worked out? Well, as Tom Howard puts it, “It’s 2019. Where the fuck is our Global Peer-To-Peer Electronic Cash System?”1 His conclusion: still not here.

He has a point — but in many ways the Bitcoin community is among the most admirable in the space. Their vision may have pivoted, from “medium of exchange” to “decentralized store of value,” but it is clear, it is succeeding, and they are making both sacrifices and technical leaps to advance it. (And maybe one day Lightning will provide us all scalable peer-to-peer payments atop Bitcoin. Maybe. One day.) Furthermore, people actually use Bitcoin in sizable numbers … albeit mostly for speculation.

The other group of true believers is the technical group, for whom I should have more love, as an engineer myself. But so much blockchain engineering is built on the unexamined presumption that blockchains are inevitably going to become wildly important, rather than an attempt to actually make them important in any way … again, other than the decentralized global casino of unregulated speculation.

There are still projects I like. Ethereum offered us the breakthrough concept of decentralized applications, although it turns out their usage rate is flat. Cosmos is an important and scalable alternative to Ethereum’s approach, although it’s only just launched. Blockstack’s approach is even more interesting, although the most successful Blockstack app — by far — has only 8,000 installs, still basically a rounding error, albeit one with an impressive growth rate.

And yet most of the non-financial people I met or read about last week were building new blockchains, or new tools for blockchains, new governance or voting systems to run atop blockchains, new blockchain analytics platforms, new ways to scale blockchains to handle the inevitable immense demand for their capacity … which is not at all apparent. The industry has so much potential, everyone agrees. It’s so revolutionary. It’s going to change everything. It’s going to be so important for the unbanked, everyone agreed, while standing in rooms full of bankers.

Meanwhile the decentralized Internet, “Web 3.0,” is beginning to feel like nuclear fusion or superpower Brazil: perpetually 10 years away. The belief is that scaling must be solved first — but premature scaling is exactly the mistake which has killed many a startup. Almost everyone in the space, financial or technical, seems primarily focused on tooling, infrastructure, platforms, and scaling, and writes off the lack of any non-believer users as merely “a UX problem” to vaguely be solved somehow in the future.

Maybe. Or maybe, a decade on from the Bitcoin whitepaper, it’s past time to instead be building applications that unbelievers who don’t care one whit about blockchains actually want to use, in the course of their everyday existence, at home and/or at work. If there are any fundamental issues other than scaling which prevent that from happening, then maybe it’s past time to focus on them instead.

1 I feel compelled to note that Mr. Howard actually wrote “f*ck” to preserve his readers’ delicate sensibilities; TechCrunch, of course, has a long and proud history of not worrying about those.

From crypto winter to crypto weirder

Captain Kirk and neo-Dadaists. Repugnant markets and legendary cryptographers. “Digital couture” auctioned by CryptoKitties developers. Distributed autonomous art organizations. A keynote speech looking back from 2047 at the near-apocalypse of 2026, from which we were saved by a new, fully tokenized economy. Yes, that’s right: NYC Blockchain Week has begun.

Where to begin? I suppose with context. This week’s series of cryptocurrency conferences kicked off with “Ethereal,” hosted by Consensys, a company/incubator/studio mostly devoted to decentralized software and services built atop the Ethereum blockchain … although they also acquired an asteroid-mining company last year. Subsequently they laid off 13% of their staff, in the depths of the notorious “crypto winter” that followed the crypto bubble which ended abruptly last January.

You read it here first, though: we are now moving from crypto winter into crypto weirder.

In fairness, the Ethereum community has long been home to the starry-eyed idealists, utopians, and … let’s diplomatically call them “original thinkers” … of the blockchain world. Eyebrow-raising proposals are nothing new. At the same time, Ethereum’s programmability also attracts many hard-headed money people increasingly fascinated by the prospects and potential profits of “DeFi,” decentralized finance.

DeFi, to oversimplify, incorporates and transcends the ICO craze of 2017-18 (most of which were Ethereum tokens) into decentralized platforms for loans, currency stabilization, insurance, clearinghouses, even derivatives, and much more. Its current poster child is the MakerDAO, a “stablecoin” system, i.e. a token intended to maintain a constant dollar value, backed not by direct fiat collateral but by a complex architecture of cryptocurrency loans orchestrated by smart contracts.

But if you ask DeFi’s true believers, MakerDAO is merely an initial proof-of-concept of the larger DeFi vision. Its long-term prospects are immense, spanning all of the many tentacles of Wall Street and the financial industry, and immensely valuable. Assuming regulators are willing to play ball, of course…

And so the attendees at Ethereal are a colorful mix of serious financial and legal types on the one hand, and dreamers and weirdos on the other. In the first group: former hedge-fund billionaire Michael Novogratz, or Rocket Lawyer CEO Charley Moore, there to announce the beta launch of their “Rocket Wallet” offering “legal contract execution and payments on the Ethereum blockchain.” In the second: the abovementioned starry-eyed dreamers, weirdos, and artists, with whom you might find yourself discussing the dangers of a generalized on-chain AI ArtDAO which might run amok and transform the planet (and humanity) not into paperclips but into a planet-scale work of art. I suppose there are worse ways to go.

Do I sound dismissive? Au contraire; I’m all about the dreamers and weirdos. (I mean, I am one, although I was probably the only attendee whose pet Ethereum project is explicitly designed to never have any monetary value. Even the dreamers generally still want to get rich.) The most interesting thing about the blockchain / cryptocurrency space is that it is full of people who do not hesitate to question some of the most basic underpinnings of our society, our social constructs so fundamental they are often mistaken for laws of nature.

The concept of money. The existence of financial intermediaries. The partitioning of the world into geographically defined nation-states. That sort of thing. What’s more, they question them with an eye towards improving or even replacing them, generally with (admittedly usually at-best-half-baked) iterations and solutions in mind. Such people are definitionally weird, and tend to view the status quo so skeptically that they believe it’s inevitably headed for some kind of apocalyptic demise … but their questions are valuable even if you don’t agree with their answers.

Not least when they highlight genuine problems with the way things currently work. Leah Callon-Butler spoke at Ethereal about “repugnant markets”: markets which are entirely legal, but which face such social disapproval that ordinary business transactions become substantially harder. In the US, of course, that generally means sex — and not even porn. Within the last few years, Chase Bank has refused to process payments for a condom company; Square rejected Early to Bed, a woman-owned sex toy store; and CES banned a sex toy after giving it an award. One can’t help but think that there has to be a better way.

Similarly, sure, it’s amusing that, after announcing a partnership with Mattereum (who I’ve written about before) to track the provenance of collectible memorabilia, William Shatner got into a Twitter fight about the fine technical details of data storage on the Ethereum blockchain — and won by dint of being completely correct! — but it also highlights the fact that provenance is a really hard problem, and existing solutions are deeply imperfect at best.

So bring on the crypto weirder, says me. Speculation, trying to make money from the oft-inexplicable ups and downs of the “crypto casino,” is boring and breeds scams, hucksters, bad faith, fraud, and outright robbery. Actually trying to build distributed networks and platforms, which do old things in new disintermediated ways, or better yet entirely new things — now that’s interesting, even if/when 90+% of them fail. The crypto weirder means more of the latter and less of the former. It’s about time.

Against the Slacklash

Such hate. Such dismay. “How Slack is ruining work.” “Actually, Slack really sucks.” “Slack may actually be hurting your workplace productivity.” “Slack is awful.” “Slack destroys teams’ ability to think, plan & get complex work out the door.” “Slack is a terrible collaboration tool.” “Face it, Slack is ruining your life.”

Contrarian view: Slack is not inherently bad. Rather, the particular way in which you are misusing it epitomizes your company’s deeper problems. I’m the CTO of a company which uses Slack extensively, successfully, and happily — but because we’re a consultancy, I have also been a sometime member of dozens of other organizations’ Slack workspaces, where I have witnessed all the various flavors of flaws recounted above. In my experience, those flaws are not actually caused by Slack.

Please note that I am not saying “Slack is just a tool, you have to use it correctly.” Even if that were so, a tool which lends itself so easily to being used so badly would be a bad tool. What I’m saying is something more subtle, and far more damning: that Slack is a mirror which reflects the pathologies inherent in your workplace. The fault, dear Brutus, is not in our Slacks, but in ourselves.

Facebook is pivoting

“The future is private,” said Mark Zuckerberg of Facebook’s roadmap, after conceding “we don’t exactly have the strongest reputation on privacy right now, to put it lightly.” But it’s easy to see why he would genuinely want that … now. Facebook’s seemingly endless series of privacy debacles have been disastrous for the company’s reputation.

Not its revenue, mind you; but revenue is famously a lagging indicator in the tech industry. Companies which, like Facebook, effectively become utilities, tend to maximize their income just as their use becomes ubiquitous — not because people especially like them any more, but because there seems to be no better alternative. (See also: Craigslist. PayPal. An obscure little company called MySpace which you may have heard of once.)

But “the future is private,” the vision of Facebook as a platform for groups and individuals sharing end-to-end-encrypted messages, the content of which it cannot be criticized for because it is literally incapable of knowing, sounds like a pretty gargantuan shift in business model, too. “Senator, we sell ads,” is another famous Zuckerberg quote. Won’t end-to-end encryption, and the de-emphasis of the continuously scrollable News Feed in favor of more discrete communications, strip Facebook of both valuable ad space and valuable ad-targeting information?

Probably. But it’s already painfully clear that Facebook wants to do far more than just sell ads against News Feed attention to make money. That got them where they are, but it has its limits, and of late, it’s also attracted a volcano of furious attention, and a fake-news firestorm. So don’t look where their puck is; look where it’s going. Look at Facebook Marketplace; look at Facebook’s cryptocurrency plans; look at their purchase of WhatsApp and how Facebook Messenger was broken out into its own app.

It seems clear that what Facebook really wants next is for Messenger to become WeChat for the rest of the world. An impregnable walled garden, used for business communications as well as personal. One which dominates not just messaging but commerce. A platform capable of transcending — and replacing — credit cards.

That would be enormously lucrative. That would also immensely reduce public and regulatory scrutiny and outrage: when outrages and atrocities are plotted and performed over Messenger, as they inevitably will be, Facebook will point out, quite correctly, that it is mathematically impossible for them to monitor and censor those messages, and that by keeping it mathematically impossible they are preserving their users’ privacy.

Does that sound hypocritical? What a narrow, short-sighted view. The irony is that it’s now entirely possible to envision a thriving future for Facebook which does not really include — well — Facebook. One in which Instagram is the king of all social media, while Messenger/WhatsApp rule messaging, occupy the half-trillion dollar international-remittances space, and also take basis points from millions of daily transactions performed on them …

…while what we used to know as “Facebook,” that once-famous app and web site, languishes as a neglected relic, used by a diminishing and increasingly middle-aged audience for event planning and sporadic life updates, yet another zombie social medium like LiveJournal and MySpace and so many others before. But one which birthed new, stronger, more evolved, corporate titans before it withered away: online gardens not merely “walled” but “domed like Wakanda,” more resistant to regulation, less prone to unpleasant emergent properties and summons to testify to the Senate. Love or hate this idea, you have to concede that it would be, if it succeeded, the mother of all pivots.

Meet the tech boss, same as the old boss

“Power corrupts, and absolute power corrupts absolutely.” It seems darkly funny, now, that anyone ever dared to dream that tech would be different. But we did, once. We would build new companies in new ways, was the thinking, not like the amoral industrial behemoths of old. The corporate villains of 90s cyberpunk were fresh in our imaginations. We weren’t going to be like that. We were going to show that you could get rich, do good, and treat everyone who worked for or interacted with your business with fundamental decency, all at the same time.

The poster child for this was, of course, Google, whose corporate code of conduct for fifteen years famously included the motto “don’t be evil.” No longer, and the symbolism is all too apt. Since removing that phrase in 2015, we’ve all witnessed reports of widespread sexual harassment, including 13 senior managers fired for it; Project Maven; and Project Dragonfly. Internal backlashes and a mass walkout led to retractions and changes, courtesy of Google employees rather than management … and now we’re seeing multiple reports of management retaliation against those employees.

Facebook? I mean, where do we even begin. Rootkits on teenagers’ phones. Privacy catastrophe after privacy catastrophe. Admissions that they didn’t do enough to prevent Facebook-fostered violence in Myanmar. Sheryl Sandberg personally ordering opposition research on a Facebook critic. And those are just stories from the last six months!

Amazon? Consider how they overwork and underpay delivery drivers and warehouse workers. Apple? Consider how they “deny Chinese users the ability to install the VPN and E2E messaging apps that would allow them to avoid pervasive censorship and surveillance,” to quote Stanford’s Alex Stamos. Microsoft? The grand dame of the Big Five has mostly evolved into a quiet enterprise respectability, but has recently seen “dozens of” reports of sexual harassment and discrimination ignored by HR, along with demands for cancellation of the HoloLens military contract.

Those are the five most valuable publicly traded companies in the world. It’s far from “absolute power,” but it’s far more power than the tech industry has had before. Have we avoided corruption and complacency? Have we done things differently? Have we been better than our predecessors? Not half so much as we hoped back in the giddy early days of the Internet. Not a quarter. Not an eighth.

And it’s mostly so gratuitous. Google didn’t need to try to build a censored search engine for China. It doesn’t need the money — it’s a giant money-printing machine already — and the Chinese people don’t need its product. Amazon doesn’t need to treat its lower-paid workers with vicious contempt. (True, it finally — finally! — raised its minimum wage to $15, but it could very easily afford to make pay and working conditions substantially better yet.) Facebook doesn’t need to … to increasingly act like a company whose management is composed largely of wide-eyed cultists and/or mustache-twirling villains, basically.

Google should have promoted the organizers of their walkout, but there, at least, you can see why they didn’t. Raw fear. The one thing which truly frightens the management of big tech companies, more than regulators, more than competitors, more than climate change, is their own employees.

Is it that the modern megacorps have inherited from their forebears the obsession with growth at all costs, a religious drive to cast their net over every aspect of the entire world, so it’s still not enough for each of those companies to make billions upon billions from advertising and commerce to spend on their famous — and now sometimes infamous — “moonshot” projects? (Don’t talk to me about the fiduciary duty of maximum profit. Tech senior management can interpret that “duty” however they see fit.)

Is it that any sufficiently large and wealthy organization becomes, in its upper reaches, a nest of would-be Game of Thrones starlets, playing power politics with their pet projects and personal careers, regardless of the costs and repercussions? (At least when they are born of hypergrowth; it’s noticeable that more-mature Apple and Microsoft, while imperfect, still seem by some considerable distance the least objectionable of these Big Five, and Facebook the most so.)

I don’t want to sound like I think the tech industry is guilty of ruining everything. Not at all. The greatest trick the finance industry ever pulled is somehow convincing (some of) the world that it’s the tech industry who are the primary drivers of inequality. As for the many media outlets who seem to be trying to pin recent election outcomes, and all other ills of the world, on tech — well …

But the existence of greater failures should not blind us to our own, and whether we have failed in an old way or a new one is moot. Accepting this failure is — at least for people like me who were once actually dumb/optimistic enough to believe that things might be different this time — an important step towards trying to build something better.

The new new web

Over the last five years, almost everything about web development has changed. Oh, the old tech still works — your WordPress and Ruby on Rails sites still function just fine — but it’s increasingly being supplanted by radical new approaches. The contents of your browser are being sliced, diced, rendered, and processed in wholly new ways nowadays, and the state of the art is currently in serious flux. What follows is a brief tour of what I like to call the New New Web:

Table of Contents

  1. Single-Page Apps
  2. Headless CMSes
  3. Static Site Generators
  4. The JAMStack
  5. Hosting and Serverlessness
  6. Summary

1. Single-Page Apps

These have become so much the norm — our web projects at HappyFunCorp are almost always single-page apps nowadays — that it’s easy to forget how new and radical they were when they first emerged, in the days of jQuery running in pages dynamically built from templates on the server.
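The core idea can be sketched in a few lines: instead of the server building a fresh page for every navigation, a single page stays loaded and JavaScript swaps the visible content based on the URL. The route table and render functions below are invented for illustration — a minimal hash-based router, not any particular framework’s API:

```javascript
// Minimal single-page-app sketch: one HTML page, "navigation"
// handled entirely in the browser. Route paths and views here
// are hypothetical examples.

// Map each route to a function returning the HTML for that "page".
const routes = {
  "/": () => "<h1>Home</h1>",
  "/about": () => "<h1>About</h1>",
};

// Resolve a URL fragment like "#/about" to rendered HTML,
// falling back to a not-found view for unknown routes.
function render(hash) {
  const path = hash.replace(/^#/, "") || "/";
  const view = routes[path] || (() => "<h1>Not found</h1>");
  return view();
}

// In a browser you would wire this to the hashchange event, e.g.:
//   window.addEventListener("hashchange", () => {
//     document.getElementById("app").innerHTML = render(location.hash);
//   });
```

Frameworks like React, Vue, and Angular elaborate enormously on this pattern — component trees, virtual DOM diffing, history-API routing — but the essential move is the same: the server ships one page, and the client does the rest.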