Trump circumvents Twitter ban to decry ‘unprecedented assault on free speech’

After being comprehensively banned from Twitter and a number of other online services in the wake of last week’s assault on the Capitol by his followers, President Trump managed to put out a tweet in the form of a video address touching on the “calamity at the Capitol”… and, of course, his deplatforming.

In the video, Trump instructs his followers to shun violence, calling it un-American. “No true supporter of mine could ever endorse political violence,” he said, days after calling rioters “great patriots” and telling them “we love you, you’re very special” as they despoiled the House and Senate.

He pivoted after a few minutes to the topic that, after his historic second impeachment, is almost certainly foremost on his mind: being banned from his chief instrument of governance, Twitter.

“I also want to say a few words about the unprecedented assault on free speech we have seen in recent days,” he said, although the bans and other actions are all due to documented breaches of the platforms’ rules. “The efforts to censor, cancel and blacklist our fellow citizens are wrong, and they are dangerous. What is needed now is for us to listen to one another, not to silence one another.”

After having his @realdonaldtrump handle suspended by Twitter, Trump attempted to sockpuppet a few other prominent accounts of allies, but was swiftly shut down. What everyone assumed must be plans to join Parler were scuttled along with the social network itself, which has warned it may be permanently taken offline after Amazon and other internet infrastructure companies refused to host it.

In case you’re wondering how Trump was able to slip this one past Twitter’s pretty decisive ban to begin with, we were curious too.

Twitter tells TechCrunch:

This Tweet is not in violation of the Twitter Rules. As we previously made clear, other official administration accounts, including @WhiteHouse, are permitted to Tweet as long as they do not demonstrably engage in ban evasion or share content that otherwise violates the Twitter Rules.

In other words, while Trump the person was banned, Trump the head of the Executive branch may still have some right, in the remaining week he holds the office, to utilize Twitter as a way of communicating matters of importance to the American people.

This gives a somewhat unfortunate impression of a power move, as Twitter has put itself in the position of determining what is a worthwhile transmission and what is rabble-rousing incitement to violence. I’ve asked the company to clarify how it determines whether what Trump does on this account constitutes ban evasion.

Meanwhile, almost simultaneous with Trump’s surprise tweet, Twitter founder Jack Dorsey unloaded 13 tweets worth of thoughts about the situation:

I believe this was the right decision for Twitter. We faced an extraordinary and untenable circumstance, forcing us to focus all of our actions on public safety. Offline harm as a result of online speech is demonstrably real, and what drives our policy and enforcement above all.

That said, having to ban an account has real and significant ramifications. While there are clear and obvious exceptions, I feel a ban is a failure of ours ultimately to promote healthy conversation. And a time for us to reflect on our operations and the environment around us.

Jack neither reaches any real conclusions nor illuminates any new plans, but it’s clear he is thinking real hard about this. As he notes, however, it’ll take a lot of work to establish the “one humanity working together” he envisions as a sort of stretch goal for Twitter and the internet in general.

These robo-fish autonomously form schools and work as search parties

Researchers at Harvard’s Wyss Institute for Biologically Inspired Engineering have created a set of fish-shaped underwater robots that can autonomously navigate and find each other, cooperating to perform tasks or just placidly school together.

Just as aerial drones are proving themselves useful in industry after industry, underwater drones could revolutionize ecology, shipping, and other areas where a persistent underwater presence is desirable but difficult.

The last few years have seen interesting new autonomous underwater vehicles, or AUVs, but the most common type is pretty much a torpedo — efficient for cruising open water, but not for working one’s way through the nooks and crannies of a coral reef or marina.

For that purpose, it seems practical to see what Nature herself has seen fit to create, and the Wyss Institute has made a specialty of doing so and creating robots and machinery in imitation of the natural world.

In this case Florian Berlinger, Melvin Gauci, and Radhika Nagpal, all co-authors on a new paper published in Science Robotics, decided to imitate not just the shape of a fish, but the way it interacts with its fellows as well.

Inspired by the sight of schooling fish while scuba diving, Nagpal has pursued the question: “How do we create artificial agents that can demonstrate this kind of collective coherence where a whole collective seems as if it’s a single agent?”

Diagram of a fish-shaped robot

Image Credits: Berlinger et al., Science Robotics

Their answer, Blueswarm, is a collection of small “Bluebots” 3D-printed in the shape of fish, with fins instead of propellers and cameras for eyes. Although neither you nor I would be likely to mistake these for actual fish, they’re a far less frightening object for a real fish to encounter than a six-foot metal tube with a propeller spinning loudly in the back. The Bluebots also imitate nature’s innovation of bioluminescence, lighting up with LEDs the way some fish and insects do to signal others. The LED pulses change and adjust depending on each bot’s position and knowledge of its neighbors.

Using the simple senses of cameras and a photosensor at the very front, elementary swimming motions, and the LEDs, Blueswarm automatically organizes itself into group swimming behaviors, establishing a simple “milling” pattern that accommodates new bots when they’re dropped in from any angle.

Images showing how "bluebots" swarm intelligently and find each other.

Image Credits: Berlinger et al., Science Robotics

The robots can also work together on simple tasks, like searching for something. If the group is given the task of finding a red LED in the tank they’re in, they can each look independently, but when one of them finds it, it alters its own LED flashing to alert and summon the others.
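
The search-and-summon behavior described above can be sketched as a simple multi-agent loop. To be clear, the blink codes, movement rules, and every parameter below are illustrative assumptions of mine, not the Bluebots’ actual controller; one bot is also seeded at the target so the little demo runs deterministically.

```python
import math
import random

# Minimal sketch of the search-and-summon behavior: bots wander until one
# detects the target, then it switches its LED pattern and the rest steer
# toward the signal. All names and parameters here are hypothetical.

SEARCHING, FOUND = "slow-blink", "fast-blink"  # hypothetical LED signals

class Bot:
    def __init__(self, x, y):
        self.x, self.y, self.led = x, y, SEARCHING

    def step(self, target, swarm, speed=0.5):
        if self.led == FOUND:
            return  # hold position and keep signaling
        summoners = [b for b in swarm if b.led == FOUND]
        near_target = math.dist((self.x, self.y), target) < 1.0
        near_signal = any(
            math.dist((self.x, self.y), (b.x, b.y)) < 1.0 for b in summoners
        )
        if near_target or near_signal:
            self.led = FOUND  # found it (or regrouped): switch blink pattern
            return
        if summoners:  # a signal is visible: steer toward the nearest one
            s = min(summoners, key=lambda b: math.dist((self.x, self.y), (b.x, b.y)))
            dx, dy = s.x - self.x, s.y - self.y
        else:          # nothing found yet: wander randomly
            dx, dy = random.uniform(-1, 1), random.uniform(-1, 1)
        norm = math.hypot(dx, dy) or 1.0
        self.x += speed * dx / norm
        self.y += speed * dy / norm

random.seed(0)
target = (8.0, 8.0)
swarm = [Bot(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(5)]
swarm.append(Bot(*target))  # seed one bot at the find so the demo is deterministic
for _ in range(300):
    for bot in swarm:
        bot.step(target, swarm)
print(sum(b.led == FOUND for b in swarm), "of", len(swarm), "bots regrouped")
```

The key design point mirrored from the paper is that coordination is indirect: no bot messages any other, each only reacts to the light signals it can see.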

It’s not hard to imagine uses for this tech. These robots could get closer to reefs and other natural features safely without alarming the sea life, monitoring their health or looking for specific objects their camera-eyes could detect. Or they could meander around underneath docks and ships inspecting hulls more efficiently than a single craft can. Perhaps they might even be useful in search and rescue.

The research also advances our understanding of how and why animals swarm together in the first place.

“With this research, we can not only build more advanced robot collectives, but also learn about collective intelligence in nature. Fish must follow even simpler behavior patterns when swimming in schools than our robots do. This simplicity is so beautiful yet hard to discover,” said Berlinger. “Other researchers have reached out to me already to use my Bluebots as fish surrogates for biological studies on fish swimming and schooling. The fact that they welcome Bluebot among their laboratory fish makes me very happy.”

Facial recognition reveals political party in troubling new research

Researchers have created a machine learning system that they claim can determine a person’s political party, with reasonable accuracy, based only on their face. The study, from a group that also showed that sexual preference can seemingly be inferred this way, candidly addresses and carefully avoids the pitfalls of “modern phrenology,” leading to the uncomfortable conclusion that our appearance may express more personal information than we think.

The study, which appeared this week in the Nature journal Scientific Reports, was conducted by Stanford University’s Michal Kosinski. Kosinski made headlines in 2017 with work that found that a person’s sexual preference could be predicted from facial data.

The study drew criticism not so much for its methods but for the very idea that something that’s notionally non-physical could be detected this way. But Kosinski’s work, as he explained then and afterwards, was done specifically to challenge those assumptions and was as surprising and disturbing to him as it was to others. The idea was not to build a kind of AI gaydar — quite the opposite, in fact. As the team wrote at the time, it was necessary to publish in order to warn others that such a thing may be built by people whose interests went beyond the academic:

We were really disturbed by these results and spent much time considering whether they should be made public at all. We did not want to enable the very risks that we are warning against. The ability to control when and to whom to reveal one’s sexual orientation is crucial not only for one’s well-being, but also for one’s safety.

We felt that there is an urgent need to make policymakers and LGBTQ communities aware of the risks that they are facing. We did not create a privacy-invading tool, but rather showed that basic and widely used methods pose serious privacy threats.

Similar warnings may be sounded here, for while political affiliation, at least in the U.S. (and at least at present), is not as sensitive or personal an attribute as sexual preference, it is still sensitive and personal. Hardly a week passes without news of some political or religious “dissident” being arrested or killed. If oppressive regimes could obtain what passes for probable cause simply by saying “the algorithm flagged you as a possible extremist,” rather than by, say, intercepting messages, this sort of practice becomes that much easier and more scalable.

The algorithm itself is not some hyper-advanced technology. Kosinski’s paper describes a fairly ordinary process of feeding a machine learning system images of more than a million faces, collected from dating sites in the U.S., Canada and the U.K., as well as American Facebook users. The people whose faces were used identified as politically conservative or liberal as part of the site’s questionnaire.

The algorithm was based on open-source facial recognition software, and after basic processing to crop to just the face (that way no background items creep in as factors), the faces are reduced to 2,048 scores representing various features — as with other face recognition algorithms, these aren’t necessarily intuitive things like “eyebrow color” and “nose type” but more computer-native concepts.

Chart showing how faces are cropped and reduced to neural network representations.

Image Credits: Michal Kosinski / Nature Scientific Reports

The system was given political affiliation data sourced from the people themselves, and with this it diligently began to study the differences between the facial stats of people identifying as conservative and those identifying as liberal. Because, it turns out, there are differences.

Of course it’s not as simple as “conservatives have bushier eyebrows” or “liberals frown more.” Nor does it come down to demographics, which would make things too easy. After all, if political party identification correlates with both age and skin color, that makes for a simple prediction algorithm right there. But although the software mechanisms used by Kosinski are quite standard, he was careful to cover his bases so that this study, like the last one, can’t be dismissed as pseudoscience.

The most obvious way of addressing this is by having the system make guesses as to the political party of people of the same age, gender and ethnicity. The test involved being presented with two faces, one of each party, and guessing which was which. Obviously chance accuracy is 50%. Humans aren’t very good at this task, performing only slightly above chance, about 55% accurate.

The algorithm reached as high as 71% accuracy when predicting political party between two like individuals, and 73% when presented with two individuals of any age, ethnicity or gender (but still guaranteed to include one conservative and one liberal).
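
The two-face test amounts to a ranking question: given the model’s output scores, how often does the conservative face receive the higher “conservative” score? Here is a minimal sketch of that metric with entirely made-up scores; this is not Kosinski’s code, just an illustration of how such pairwise accuracy is computed (over all cross-party pairs it is equivalent to the AUC of the scores).

```python
# Pairwise evaluation sketch: show a scorer two faces, one conservative and
# one liberal, and count how often the face with the higher "conservative"
# score is actually the conservative. Scores below are invented.

def pairwise_accuracy(conservative_scores, liberal_scores):
    """Fraction of (conservative, liberal) pairs ranked correctly; ties count half."""
    wins = 0.0
    for c in conservative_scores:
        for l in liberal_scores:
            if c > l:
                wins += 1.0
            elif c == l:
                wins += 0.5
    return wins / (len(conservative_scores) * len(liberal_scores))

# Hypothetical model scores in [0, 1]: higher = more "conservative-looking".
cons = [0.9, 0.7, 0.6, 0.4]
libs = [0.8, 0.5, 0.3, 0.2]

print(f"pairwise accuracy: {pairwise_accuracy(cons, libs):.2f}")  # chance would average 0.50
```

With these toy numbers the metric lands at 0.75, which is roughly where the paper’s reported figures sit; a coin flip would average 0.50.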

Image Credits: Michal Kosinski / Nature Scientific Reports

Getting three out of four may not seem like a triumph for modern AI, but considering people can barely do better than a coin flip, there seems to be something worth considering here. Kosinski has been careful to cover other bases as well; this doesn’t appear to be a statistical anomaly or exaggeration of an isolated result.

The idea that your political party may be written on your face is an unnerving one, for while one’s political leanings are far from the most private of info, it’s also something that is very reasonably thought of as being intangible. People may choose to express their political beliefs with a hat, pin or t-shirt, but one generally considers one’s face to be nonpartisan.

If you’re wondering which facial features in particular are revealing, unfortunately the system is unable to report that. In a sort of para-study, Kosinski isolated a couple dozen facial features (facial hair, directness of gaze, various emotions) and tested whether those were good predictors of politics, but none led to more than a small increase in accuracy over chance or human expertise.

“Head orientation and emotional expression stood out: Liberals tended to face the camera more directly, were more likely to express surprise, and less likely to express disgust,” Kosinski wrote in author’s notes for the paper. But what they added left more than 10 percentage points of accuracy not accounted for: “That indicates that the facial recognition algorithm found many other features revealing political orientation.”

The knee-jerk defense of “this can’t be true — phrenology was snake oil” doesn’t hold much water here. It’s scary to think it’s true, but it doesn’t help us to deny what could be a very important truth, since it could be used against people very easily.

As with the sexual orientation research, the point here is not to create a perfect detector for this information, but to show that it can be done in order that people begin to consider the dangers that creates. If for example an oppressive theocratic regime wanted to crack down on either non-straight people or those with a certain political leaning, this sort of technology gives them a plausible technological method to do so “objectively.” And what’s more, it can be done with very little work or contact with the target, unlike digging through their social media history or analyzing their purchases (also very revealing).

We have already heard of China deploying facial recognition software to find members of the embattled Uyghur religious minority. And in our own country, this sort of AI is trusted by authorities as well — it’s not hard to imagine police using the “latest technology” to, for instance, classify faces at a protest, saying “these 10 were determined by the system as being the most liberal,” or what have you.

The idea that a couple of researchers using open-source software and a medium-sized database of faces (for a government, this is trivial to assemble in the unlikely event that it does not have one already) could do so anywhere in the world, for any purpose, is chilling.

“Don’t shoot the messenger,” said Kosinski. “In my work, I am warning against widely used facial recognition algorithms. Worryingly, those AI physiognomists are now being used to judge people’s intimate traits – scholars, policymakers, and citizens should take notice.”

GoPro makes stopping and starting simpler with motion, power, QR triggers

GoPro may have started out at the intersection of capability and affordability in the action cam space, but since then it has increasingly leaned towards use by professionals or deployment by businesses. The latest features, announced at CES, underline that priority, making the cameras simpler and more automated for rentals and hands-free operation.

If you’ve got a Hero 7, 8, or 9 Black, or Max, you should be able to download the latest GoPro Labs firmware, which adds the following convenient features.

Motion and USB power triggers: Set the camera to start and stop recording either when power flows to it (in a dash cam situation, for instance) or when in motion (for a bike or ski helmet perhaps). Motion detection is also improved and now works in all video modes.

The cameras can already perform various tasks upon scanning QR codes, but here’s a new one: you can use a QR code to tell a device to connect to a specific wi-fi network and start streaming. It’s faster than using the app for when you need a quick deployment.

An obvious one for tourism is the “one button mode,” which as you might expect limits the controls to starting and stopping video capture. Great both for the less tech-savvy on vacation who can’t handle more than one button’s worth of controls, and also for rental joints tired of their cameras coming back with weird custom settings after an overly tech-savvy customer tweaks them.

There are a few other improvements, which you can check out in the press release.

Parler sues Amazon, leveling far-fetched antitrust allegations

Parler has sued Amazon after the beleaguered conservative social media site was expelled from AWS, filing a fanciful complaint alleging the internet giant took it out for political reasons — and as part of an antitrust conspiracy to benefit Twitter. But its own allegations, including breach of contract, are belied by evidence it supplies alongside the suit.

In the lawsuit, filed today in the U.S. District Court for the Western District of Washington, Parler complains that “AWS’s decision to effectively terminate Parler’s account is apparently motivated by political animus. It is also apparently designed to reduce competition in the microblogging services market to the benefit of Twitter.”

Regarding the “political animus” it is difficult to speak to Parler’s reasoning, since that argument is supported nowhere in the suit — it simply is never referred to again.

There is the suggestion that Amazon has shown more tolerance for offending content on Twitter than on Parler, but this isn’t well substantiated. For instance, the suit notes that “Hang Mike Pence” trended on Friday the 8th, without noting that much of this volume was, as any user of Twitter can see by searching, people decrying this phrase as having been chanted by the rioters in the Capitol two days prior.

By way of contrast, one Parler post cited by Amazon says that “we need to start systematicly [sic] assasinating [sic] #liberal leaders, liberal activists, #blm leaders and supporters,” and so on. As TechCrunch has been monitoring Parler conversations, we can say that this is far from an isolated example of this rhetoric.

The antitrust argument suggests a conspiracy by Amazon to protect and advance the interests of Twitter. Specifically, the argument is that because Twitter is a major customer of AWS, and Parler is a threat to Twitter, Amazon wanted to take Parler out of the picture.

Given the context of Parler’s looming threat to Twitter and the fact that the Twitter ban might not long muzzle the President if he switched to Parler, potentially bringing tens of millions of followers with him, AWS moved to shut down Parler.

This argument is not convincing for several reasons, but the most obvious one is that Parler was at the time also an AWS customer. If people are going from one customer to another, why would Amazon care at all, let alone enough to interfere to the point of legal and ethical dubiety?

The lawsuit also accuses Amazon of leaking the email communicating Parler’s imminent suspension to reporters before it was sent to administrators at the site. (It also says that Amazon “sought to defame” Parler, though defamation is not part of the legal complaint. Parler seems to be using this term rather loosely.)

Lastly Parler says Amazon is in breach of contract, having not given the 30 days warning stipulated in the terms of service. The exception is if a “material breach remains uncured for a period of 30 days” after notice. As Parler explains it:

On January 8, 2021, AWS brought concerns to Parler about user content that encouraged violence. Parler addressed them, and then AWS said it was “okay” with Parler.

The next day, January 9, 2021, AWS brought more “bad” content to Parler and Parler took down all of that content by the evening.

Thus, there was no uncured material breach of the Agreement for 30 days, as required for termination.

But in the email attached as evidence to the lawsuit — literally exhibit A — Amazon makes it clear the issues have been ongoing for longer than that (emphasis added):

Over the past several weeks, we’ve reported 98 examples to Parler of posts that clearly encourage and incite violence… You remove some violent content when contacted by us or others, but not always with urgency… It’s clear that Parler does not have an effective process to comply with the AWS terms of service.

You can read the rest of the letter here, but it’s obvious that Amazon is not simply saying that a few days of violations are the cause of Parler’s being kicked off the service.

Parler asks a judge for a Temporary Restraining Order that would restore its access to AWS services while the rest of the case is argued, and for damages to be specified at trial.

TechCrunch has asked Amazon for comment and will update this post if we hear back. Meanwhile you can read the full complaint below:

Parler v Amazon by TechCrunch on Scribd

Stolen computers are the least of the government’s security worries

Reports that a laptop from House Speaker Nancy Pelosi’s office was stolen during the pro-Trump rioters’ sack of the Capitol building have some worried that the mob may have access to important, even classified information. Fortunately that’s not the case — even if this computer and others had any truly sensitive information, which is unlikely, like any corporate asset it can almost certainly be disabled remotely.

The cybersecurity threat in general from the riot is not as high as one might think, as we explained yesterday. Specific to stolen or otherwise compromised hardware, there are several facts to keep in mind.

In the first place, the offices of elected officials are in many ways already public spaces. These are historic buildings through which tours often go, in which meetings with foreign dignitaries and other politicians are held, and in which thousands of ordinary civil servants without any security clearance would normally be working shoulder-to-shoulder. The important work they do is largely legislative and administrative — largely public work, where the most sensitive information being exchanged is probably unannounced speeches and draft bills.

But recently, you may remember, most of these people were working from home. Of course during the major event of the joint session confirming the electors, there would be more people than normal. But this wasn’t an ordinary day at the office by a long shot — even before hundreds of radicalized partisans forcibly occupied the building. Chances are there wasn’t a lot of critical business being conducted on the desktops in these offices. Classified data lives in the access-controlled SCIF, not on random devices sitting in unsecured areas.

In fact, the laptop is reported by Reuters as having been part of a conference room’s dedicated hardware — this is the dusty old Inspiron that lives on the A/V table so you can put your PowerPoint on it, not Pelosi’s personal computer, let alone a hard line to top secret info.

Even if there was a question of unintended access, it should be noted that the federal government, as any large company might, has a normal IT department with a relatively modern provisioning structure. The Pelosi office laptop, like any other piece of hardware being used for official House and Senate business, is monitored by IT and should be able to be remotely disabled or wiped. The challenge for the department is figuring out which hardware does actually need to be handled that way — as was reported earlier, there was (understandably) no official plan for a violent takeover of the Capitol building.

In other words, it’s highly likely that the most that will result from the theft of government computers on the 6th will be inconvenience or at most some embarrassment should some informal communications become public. Staffers do gossip and grouse, of course, on both back and official channels.

That said, the people who invaded these offices and stole that equipment — some on camera — are already being arrested and charged. Just because the theft doesn’t present a serious security threat doesn’t mean it wasn’t highly illegal in several different ways.

Any cybersecurity official will tell you that the greater threat by far is the extensive infiltration of government contractors and accounts through the SolarWinds breach. Those systems are packed with information that was never meant to be public, and will likely provide fuel for credential-related attacks for years to come.

Google AI concocts ‘breakie’ and ‘cakie’ hybrid baked goods

If, as I suspect many of you have, you have worked your way through baking every type of cookie, bread and cake under the sun over the last year, Google has a surprise for you: a pair of AI-generated hybrid treats, the “breakie” and the “cakie.”

The origin of these new items seems to have been in a demonstration of the company’s AutoML Tables tool, a codeless model generation system that’s more spreadsheet automation than what you’d really call “artificial intelligence.” But let’s not split hairs, or else we’ll never get to the recipe.

Specifically it was the work of Sara Robinson, who was playing with these tools early last spring, as a person interested in machine learning and baking was likely to do around that time, when cabin fever first took hold.

What happened was she wanted to design a system that would look at a recipe and automatically tell you whether it was bread, cookie or cake, and why — for instance, a higher butter and sugar content might bias it toward cookie, while yeast was usually a dead giveaway for bread.

Image Credits: Sara Robinson

But of course, not every recipe is so straightforward, and the tool isn’t always 100% sure. Robinson began to wonder, what would a recipe look like that the system couldn’t decide on?

She fiddled around with the ingredients until she found a balance that caused the machine learning system to produce a perfect 50/50 split between cookie and cake. Naturally, she made some — behold the “cakie.”
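
Robinson’s actual model was built with Google’s hosted AutoML Tables, so her fiddling was manual, but the idea can be sketched with a toy stand-in: a hand-written scorer plus a search for the ingredient balance where the model is maximally unsure. Everything below (the logistic scorer, the 0.35 crossover, the butter-ratio framing) is invented for illustration.

```python
import math

# Toy stand-in for the recipe classifier: a logistic scorer where more
# butter pushes toward "cookie", then a binary search for the ratio at
# which the model outputs a perfect 50/50 split. Purely illustrative.

def p_cookie(butter_ratio):
    """Hypothetical P(cookie) as a function of butter's share of the mix."""
    return 1.0 / (1.0 + math.exp(-10.0 * (butter_ratio - 0.35)))

# Search for the most ambiguous recipe, mimicking the manual fiddling
# that produced the "cakie". p_cookie is monotonic, so bisection works.
lo, hi = 0.0, 1.0
for _ in range(50):
    mid = (lo + hi) / 2
    if p_cookie(mid) < 0.5:
        lo = mid
    else:
        hi = mid

print(f"most ambiguous butter ratio: {mid:.3f} (P(cookie) = {p_cookie(mid):.2f})")
```

The toy converges on the scorer’s built-in crossover point, where P(cookie) = 0.50; the real exercise was the same hunt, just with a learned model and real ingredients.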

A cakie, left, and breakies, right, with Robinson.

A cakie, left, and breakies, right, with Robinson. Image Credits: Sara Robinson / Google

“It is yummy. And it strangely tastes like what I’d imagine would happen if I told a machine to make a cake cookie hybrid,” she wrote.

The other hybrid she put together was the “breakie,” which as you surely have guessed by now is half bread, half cookie. This one ended up a little closer to “fluffy cookies, almost the consistency of a muffin.” And indeed they look like muffin tops that have lost their bottoms. But breakie sounds better than muffin tops (or “brookie,” apparently the original name).

These ingredients and ratios were probably invented or tried long ago, but it’s certainly an interesting way to arrive at a new recipe using only old ones.

The recipes below are perfectly doable, but to be transparent, they were not entirely generated by the algorithm. The model only indicates proportions of ingredients, and didn’t include any flavorings or extras like vanilla or chocolate chips, both of which Robinson added. The actual baking instructions had to be puzzled out as well (the AI doesn’t know what temperature is, or pans). But if you need something to try making that’s different from the usual weekend treat, you could probably do worse than one of these.

Image Credits: Sara Robinson / Google

Image Credits: Sara Robinson / Google

Epic acquires Rad Game Tools, veteran of many gaming generations

Epic today announced the acquisition of Rad Game Tools, maker of game development tools for many years. They’ve stayed largely behind the scenes, but many gamers will recognize the colorful Bink Video logo, which has appeared in the openings of many a title over the years.

“Our work with Epic goes back decades, and joining forces is a natural next step given our alignment on products, mission, and culture,” Rad Game Tools founder and CEO Jeff Roberts said in the announcement. And that work has seemingly only intensified recently.

Close integration with engines and platforms makes for good standards, and good standards get embraced by developers. That’s why Epic has been cozying up to Sony as well as snapping up components to fit into its Unreal engine, positioning it as an all-encompassing development platform for next-generation games.

Image Credits: RAD Game Tools

Rad (styled RAD) has been in games for a long time, as its decidedly old-school website attests. Bink is a video codec for games that focuses on high compression and speedy rendering, both important in the gaming world. Oodle, Telemetry, Granny 3D and Miles Sound System are all development tools beyond what the layperson would recognize, but no doubt have many fans.

Epic may be known now as the creator of money-printing machine Fortnite, but the company has been around for decades and probably knows the Rad team well. That may help explain the friendly terms under which the acquisition will take place.

“RAD will continue supporting their game industry, film, and television partners, with their sales and business development team maintaining and selling licenses for their products to companies across industries – including those that do not utilize Unreal Engine,” Epic said in its announcement.

So while Bink and the rest will continue to be available for anyone to use outside Epic’s domain, they will almost certainly be better integrated with the Unreal ecosystem. As game development cost and complexity rises, means of simplification are often taken advantage of. Epic is working hard to make Unreal not just the most graphically powerful engine for development, but also the most unified.

A request for comment and further details on the deal sent to Rad Game Tools was intercepted by Epic and declined.

Social media allowed a shocked nation to watch a coup attempt in real time

Today’s historic and terrifying coup attempt by pro-Trump extremists in Washington, D.C. played out live the same way it was fomented — on social media. Once again Twitter, streaming sites, and other user-generated media were the only place to learn what was happening in the nation’s capital — and the best place to be misled by misinformation and propaganda.

In the morning, official streams and posts portended what people expected of the day: a drawn-out elector certification process in Congress while a Trump-led rally turned to general protests. But when extremists gathered at the steps of the U.S. Capitol building, the country watched isolated flare-ups between them and police turn into a full-blown violent invasion of several federal buildings, including where Congress was holding a joint session.

Network news and mainstream sources struggled to keep up as people on both sides documented the chaos that followed. As extremists pushed into the outlying buildings, then the rotunda, then the House and Senate chambers, everyone from White House press pool reporters to political aides and elected officials from both parties live-tweeted and streamed the events as they happened.

Videos of outnumbered security guards retreating from mobs or trading blows were seen by millions, who no doubt could barely believe it was really occurring. Meanwhile, reports propagated from around the country as smaller invasions of government buildings took place.

On one hand, it further demonstrated the power of social media to serve as a distributed, real-time aggregator of important information. It is hard to overstate the importance of receiving information directly from the source, such as when people inside the Senate chamber posted images of the rioters attempting to break through a barricaded door while security inside pointed their guns through broken windows.

Representatives, aides and reporters posted live as they were evacuated from their offices, told to lie on the ground to avoid being shot, or given gas masks in case tear gas or pepper spray was deployed. What might have seemed an abstraction when reported by a talking head on the National Mall was rendered shockingly visceral as these people expressed fear for their lives. The people whom we have been trained to alert to such things, our elected officials, were the very ones being threatened.

However, social media also allowed for the amplification and normalization of these historic crimes as rioters streamed as they went and posted images to fringe sites like Parler and Trump-themed Reddit clones. It wasn’t hard to spot rioters apparently “doing it for the ‘gram” despite those images and videos comprising what amounts to a confession of a federal crime.

Meanwhile Trump and his allies downplayed the violence, blaming Democrats for using “malicious rhetoric” and repeating unfounded claims regarding the election.

Years of “we take this very seriously” from the likes of Jack Dorsey and Mark Zuckerberg have done little to curb activity by white supremacists, self-styled “militias” like the Proud Boys, and misinformation aggregators like “Stop the Steal” groups. Despite constant assurances that AI and a crack team of moderators are on the job, it is still on these platforms that we find misleading and false information about topics such as COVID-19 and election security.

Tech leaders today voiced, not for the first time, their frustration with these companies, and while deplatforming has proven effective in some ways, it is not a complete solution. As the cost and difficulty of launching, say, a streaming site continue to decrease, it is only to be expected that a YouTuber kicked off that platform will land softly on another, and their audience will follow.

The promise and the danger of social media were both on display today at their absolute maximum. One can hardly imagine such an event playing out in the future without the intimate details to which we were treated from the sides of both government and insurrectionists.

While Twitter, Facebook, and YouTube have taken varying actions, of varying seriousness and permanence, it seems clear that whether or not they want to crack down on the worst of it, they may no longer be able to, either because they lack the tools, or the offenders have built a Twitter, Facebook, and YouTube of their own.

Facebook and YouTube remove Trump video calling extremists ‘special’

Facebook and YouTube have removed a video posted by President Trump telling rioters who stormed Congress “we love you.” Just minutes earlier, Twitter left the same video online but blocked it from being shared.

A great deal of video and content from the chaotic scene in Washington, D.C. can be found on social media, but Trump’s commentary was spare. His posts suggested the rioters “remain peaceful,” well after they had broken into the Capitol buildings and Congress had been evacuated.

At about 5 PM Eastern time, Trump posted a video in which he reiterated that the election was “stolen” but that “you have to go home now. Go home, we love you. You’re very special.”

On Twitter this was soon restricted, with a large warning that “this Tweet can’t be replied to, Retweeted, or liked due to a risk of violence.”

Guy Rosen, VP of Integrity at Facebook, wrote on Twitter that “this is an emergency situation and we are taking appropriate emergency measures, including removing President Trump’s video. We removed it because on balance we believe it contributes to rather than diminishes the risk of ongoing violence.”

At Facebook there is some precedent for one of Trump’s posts being removed. In August, the company took down a video in which Trump stated that children were “almost immune” to COVID-19, a dangerous and false claim not supported by science.

As Twitter and Facebook crafted bespoke policies to address threats to the election leading into November, YouTube mostly remained quiet. In early December, a month after the election, the company announced that it would begin removing content that made false claims that the U.S. election was affected by “widespread fraud or errors.” YouTube’s decision to remove the president’s video on Wednesday aligned with that policy.

“We removed a video posted this afternoon to Donald Trump’s channel that violated our policies regarding content that alleges widespread fraud or errors changed the outcome of the 2020 U.S. Election,” a YouTube spokesperson told TechCrunch, noting that such a video is allowed if accompanied by proper context of “educational” value.

This story is developing.