Twitter rolls out bigger images and cropping control on iOS and Android

Twitter just made a change to the way it displays images that has visual artists on the social network celebrating.

In March, Twitter rolled out a limited test of uncropped, larger images in users’ feeds. Now, it’s declared those tests a success and improved the image sharing experience for everybody.

On Twitter for Android or iOS, standard aspect ratio images (16:9 and 4:3) will now display in full without any cropping. Instead of making you gamble on how an image will show up in the timeline — and potentially ruining an otherwise great joke — Twitter will display images just as they looked when you shot them.

Twitter’s new system will show anyone sharing an image a preview of what it will look like before it goes live in the timeline, addressing past concerns that Twitter’s algorithmic cropping was biased toward highlighting white faces.

“Today’s launch is a direct result of the feedback people shared with us last year that the way our algorithm cropped images wasn’t equitable,” Twitter spokesperson Lauren Alexander said. The new way of presenting images decreases the platform’s reliance on automatic, machine learning-based image cropping.

Super tall or wide images will still get a centered crop, but Twitter says it’s working to make that better too, along with other aspects of how visual media gets displayed in the timeline.
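To picture what a centered crop means in practice, here is a rough sketch of the geometry involved. It is purely illustrative and not Twitter’s actual cropping code; the 2.0 ratio threshold is an invented stand-in for whatever cutoff Twitter really uses:

```python
# Illustrative sketch only; not Twitter's actual cropping code.
# Standard aspect ratios pass through untouched, while extreme
# ones get trimmed symmetrically around the center.

def center_crop_box(width, height, max_ratio=2.0):
    """Return (left, top, right, bottom) for a centered crop.

    `max_ratio` is an invented threshold: images whose long side is
    at most `max_ratio` times the short side are left alone.
    """
    if max(width, height) <= max_ratio * min(width, height):
        return (0, 0, width, height)  # standard ratio: no crop
    if height > width:  # very tall image: trim top and bottom
        new_height = int(max_ratio * width)
        top = (height - new_height) // 2
        return (0, top, width, top + new_height)
    # very wide image: trim left and right
    new_width = int(max_ratio * height)
    left = (width - new_width) // 2
    return (left, 0, left + new_width, height)

print(center_crop_box(1600, 900))   # 16:9 stays (0, 0, 1600, 900)
print(center_crop_box(800, 3200))   # super tall gets a centered crop
```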

For visual artists like photographers and cartoonists who promote their work on Twitter, this is actually a pretty big deal. Not only will photos and other kinds of art score more real estate on the timeline, but artists can be sure that they’re putting their best tweet forward without awkward crops messing stuff up.

Twitter’s Chief Design Officer Dantley Davis celebrated by tweeting a requisite dramatic image of the Utah desert (Dead Horse Point — great spot!).

We regret to inform you that the brands are also aware of the changes.

The days of “open for a surprise” tweets might be numbered, but the long duck can finally have his day.

Facebook’s Oversight Board threw the company a Trump-shaped curveball

Facebook’s controversial policy-setting supergroup issued its verdict on Trump’s fate Wednesday, and it wasn’t quite what most of us were expecting.

We’ll dig into the decision to tease out what it really means, not just for Trump, but also for Facebook’s broader experiment in outsourcing difficult content moderation decisions and for just how independent the board really is.

What did the Facebook Oversight Board decide?

The Oversight Board backed Facebook’s determination that Trump violated its policy on “Dangerous Individuals and Organizations,” which prohibits content that praises or otherwise supports violence. The full decision and accompanying policy recommendations are online for anyone to read.

Specifically, the Oversight Board ruled that two Trump posts, one telling Capitol rioters “We love you. You’re very special” and another calling them “great patriots” and telling them to “remember this day forever,” broke Facebook’s rules. In fact, the board went as far as saying the pair of posts “severely” violated the rules in question, leaving no doubt that it saw a crystal-clear risk of real-world harm in Trump’s words:

“The Board found that, in maintaining an unfounded narrative of electoral fraud and persistent calls to action, Mr. Trump created an environment where a serious risk of violence was possible. At the time of Mr. Trump’s posts, there was a clear, immediate risk of harm and his words of support for those involved in the riots legitimized their violent actions. As president, Mr. Trump had a high level of influence. The reach of his posts was large, with 35 million followers on Facebook and 24 million on Instagram.”

While the Oversight Board praised Facebook’s decision to suspend Trump, it disagreed with the way the platform implemented the suspension. The group argued that Facebook’s decision to issue an “indefinite” suspension was an arbitrary punishment that wasn’t really supported by the company’s stated policies:

“It is not permissible for Facebook to keep a user off the platform for an undefined period, with no criteria for when or whether the account will be restored.

“In applying this penalty, Facebook did not follow a clear, published procedure. ‘Indefinite’ suspensions are not described in the company’s content policies. Facebook’s normal penalties include removing the violating content, imposing a time-bound period of suspension, or permanently disabling the page and account.”

The Oversight Board didn’t mince words on this point, going on to say that by putting a “vague, standardless” punishment in place and then kicking the ultimate decision to the Oversight Board, “Facebook seeks to avoid its responsibilities.” Turning things around, the board asserted that it’s actually Facebook’s responsibility to come up with an appropriate penalty for Trump that fits its set of content moderation rules.


Is this a surprise outcome?

If you’d asked me yesterday, I would have said that the Oversight Board was more likely to overturn Facebook’s Trump decision. I also called Wednesday’s big decision a win-win for Facebook, because whatever the outcome, it wouldn’t ultimately be criticized a second time for either letting Trump back onto the platform or kicking him off for good. So much for that!

A lot of us didn’t see the “straight up toss the ball back into Facebook’s court” option as a possible outcome. It’s ironic and surprising that the Oversight Board’s decision to give Facebook the final say actually makes the board look more independent, not less.

Facebook likely saw a more clear-cut decision on the Trump situation in the cards. This is a challenging outcome for a company that’s probably ready to move on from its (many, many) missteps during the Trump era. But there’s definitely an argument that if the board had declared Facebook made the wrong call and reinstated Trump, that would have been a much bigger headache.

What does it mean that the Oversight Board sent the decision back to Facebook?

Ultimately, the Oversight Board is asking Facebook to either a) give Trump’s suspension an end date or b) delete his account. In a less severe case, the normal course of action would be for Facebook to remove whatever content broke the rules, but given the ramifications here and the fact that Trump is a repeat Facebook rule-breaker, this is obviously all well past that option.

What will Facebook do?

We’re in for a wait. The board gave Facebook six months to evaluate the Trump situation and reach a final decision, calling for a “proportionate” response that is justified by its platform rules. Since Facebook and other social media companies are re-writing their rules all the time and making big calls on the fly, that gives the company a bit of time to build out policies that align with the actions it plans to take. See you again on November 5.

In the months following the violence at the U.S. Capitol, Facebook repeatedly defended its Trump call as “necessary and right.” It’s hard to imagine the company deciding that Trump will get reinstated six months from now, but in theory Facebook could decide that length of time was an appropriate punishment and write that into its rules. The fact that Twitter permanently banned Trump means that Facebook could comfortably follow suit at this point.

If Trump had won reelection, this whole thing probably would have gone down very differently. As much as Facebook likes to say its decisions are aligned with lofty ideals — absolute free speech, connecting people — the company is ultimately very attuned to its regulatory and political environment. Trump’s actions on January 6 were dangerous and flagrant, but Biden’s looming inauguration two weeks later probably influenced the company’s decision just as much.

In direct response to the decision, Facebook’s Nick Clegg wrote only: “We will now consider the board’s decision and determine an action that is clear and proportionate.” Clegg said Trump will stay suspended until then but didn’t offer further hints at what comes next.

Did the board actually change anything?

Potentially. In its decision, the Oversight Board said that Facebook asked for “observations or recommendations from the Board about suspensions when the user is a political leader.” The board’s policy recommendations aren’t binding like its decisions are, but since Facebook asked, it’s likely to listen.

If it does, the Oversight Board’s recommendations could reshape how Facebook handles high profile accounts in the future:

The Board stated that it is not always useful to draw a firm distinction between political leaders and other influential users, recognizing that other users with large audiences can also contribute to serious risks of harm.

While the same rules should apply to all users, context matters when assessing the probability and imminence of harm. When posts by influential users pose a high probability of imminent harm, Facebook should act quickly to enforce its rules. Although Facebook explained that it did not apply its ‘newsworthiness’ allowance in this case, the Board called on Facebook to address widespread confusion about how decisions relating to influential users are made. The Board stressed that considerations of newsworthiness should not take priority when urgent action is needed to prevent significant harm.

Facebook and other social networks have hidden behind newsworthiness exemptions for years instead of making difficult policy calls that would upset half their users. Here, the board not only says that political leaders don’t really deserve special consideration while enforcing the rules, but that it’s much more important to take down content that could cause harm than it is to keep it online because it’s newsworthy.

So… we’re back to square one?

Yes and no. Trump’s suspension may still be up in the air, but the Oversight Board is modeled after a legal body and its real power is in setting precedents. The board kicked this case back to Facebook because the company picked a punishment for Trump that wasn’t even on the menu, not because it thought anything about his behavior fell in a gray area.

The Oversight Board clearly believed that Trump’s words of praise for rioters at the Capitol created a high stakes, dangerous threat on the platform. It’s easy to imagine the board reaching the same conclusion on Trump’s infamous “when the looting starts, the shooting starts” statement during the George Floyd protests, even though Facebook did nothing at the time. Still, the board stops short of saying that behavior like Trump’s merits a perma-ban — that much is up to Facebook.

Twitter rolls out improved ‘reply prompts’ to cut down on harmful tweets

A year ago, Twitter began testing a feature that would prompt users to pause and reconsider before they replied to a tweet using “harmful” language — meaning language that was abusive, trolling, or otherwise offensive in nature. Today, the company says it’s rolling out improved versions of these prompts to English-language users on iOS and, soon, Android, after adjusting the systems that determine when to send the reminders so they better understand when the language in a reply is actually harmful.

The idea behind these forced slowdowns, or nudges, is to leverage psychological tricks to help people make better decisions about what they post. Studies have indicated that introducing a nudge like this can lead people to edit and cancel posts they would have otherwise regretted.

Twitter’s own tests found that to be true, too. It said that 34% of people revised their initial reply after seeing the prompt, or chose not to send the reply at all. And, after being prompted once, people then composed 11% fewer offensive replies in the future, on average. That indicates that the prompt, for some small group at least, had a lasting impact on user behavior. (Twitter also found that users who were prompted were less likely to receive harmful replies back, but didn’t further quantify this metric.)


However, Twitter’s early tests ran into some problems. It found its systems and algorithms sometimes struggled to understand the nuance that occurs in many conversations. For example, they couldn’t always differentiate between offensive replies and sarcasm or, sometimes, even friendly banter. They also struggled to account for situations in which language is being reclaimed by underrepresented communities and then used in non-harmful ways.

The improvements rolling out starting today aim to address these problems. Twitter says it’s made adjustments to the technology across these areas, and others. Now, it will take the relationship between the author and replier into consideration. That is, if the two accounts follow and reply to each other often, it’s more likely they understand the preferred tone of communication between them than a stranger would.
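As a purely speculative sketch of how such a relationship signal could gate the prompt (this is not Twitter’s actual system; the function, inputs, and thresholds are all invented for illustration):

```python
# Speculative sketch of a relationship signal gating a harmful-reply
# prompt. Not Twitter's actual system; all names and thresholds are
# invented for illustration.

def should_prompt(harm_score, mutual_follow, prior_replies,
                  base_threshold=0.8):
    """Decide whether to show a 'reconsider this reply' prompt.

    harm_score:    hypothetical classifier output in [0, 1]
    mutual_follow: True if author and replier follow each other
    prior_replies: how often the two accounts have replied to
                   one another before
    """
    threshold = base_threshold
    # Mutual follows who interact often likely share a tone
    # (banter, reclaimed language), so demand stronger evidence.
    if mutual_follow and prior_replies >= 5:
        threshold = 0.95
    return harm_score >= threshold

print(should_prompt(0.85, mutual_follow=False, prior_replies=0))   # True
print(should_prompt(0.85, mutual_follow=True, prior_replies=12))   # False
```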

Twitter says it has also improved the technology to more accurately detect strong language, including profanity.

And it’s made it easier for those who see the prompts to let Twitter know if the prompt was helpful or relevant — data that can help to improve the systems further.

How well this all works remains to be seen, of course.


While any feature that can help dial down some of the toxicity on Twitter may be useful, this only addresses one aspect of the larger problem — people who get into heated exchanges that they could later regret. There are other issues across Twitter regarding abusive and toxic content that this solution alone can’t address.

These “reply prompts” aren’t the only time Twitter has used the concept of nudges to shape user behavior. It also reminds users to read an article before they retweet and amplify it, in an effort to promote more informed discussions on its platform.

Twitter says the improved prompts are rolling out to all English-language users on iOS starting today, and will reach Android over the next few days.

Twitter expands Spaces to anyone with 600+ followers, details plans for tickets, reminders and more

Twitter Spaces, the company’s new live audio rooms feature, is opening up more broadly. The company announced today it’s making Twitter Spaces available to any account with 600 followers or more, including both iOS and Android users. It also officially unveiled some of the features it’s preparing to launch, like Ticketed Spaces, scheduling features, reminders, support for co-hosting, accessibility improvements, and more.

Along with the expansion, Twitter is making Spaces more visible on its platform, too. The company notes it has begun testing the ability to find and join a Space from a purple bubble around someone’s profile picture right from the Home timeline.


Twitter says it settled on 600 followers as the minimum for access to Twitter Spaces based on its earlier testing. Accounts with 600 or more followers tend to have “a good experience” hosting live conversations because they have a larger existing audience who can tune in. However, Twitter says it’s still planning to bring Spaces to all users in the future.

In the meantime, it’s speeding ahead with new features and developments. Twitter has been building Spaces in public, taking user feedback into consideration as it prioritizes features and updates. Already, it has built out an expanded set of audience management controls, introduced a way for hosts to mute all speakers at once, and, after users requested it, added the laughing emoji to its set of reactions.

Now, its focus is turning towards creators. Twitter Spaces will soon support multiple co-hosts, and creators will be able to better market and even charge for access to their live events on Twitter Spaces. One feature, arriving in the next few weeks, will allow users to schedule and set reminders about Spaces they don’t want to miss. This can also help creators who are marketing their event in advance, as part of the RSVP process could involve pushing users to “set a reminder” about the upcoming show.

Twitter Spaces’ rival, Clubhouse, also just announced a reminders feature during its Townhall event on Sunday, as well as the start of its external Android testing. The two platforms, it seems, could soon be neck-and-neck in terms of feature set.


But while Clubhouse recently launched an in-app donations feature as a means of supporting favorite creators, Twitter will soon introduce a more traditional means of generating revenue from live events: selling tickets. The company says it’s working on a feature that will allow hosts to set ticket prices and decide how many tickets are available for a given event, giving them a way to earn revenue from their Twitter Spaces.

A limited group of testers will gain access to Ticketed Spaces in the coming months, Twitter says. Unlike Clubhouse, which has yet to tap into creator revenue streams, Twitter will take a small cut from these ticket sales. However, it notes that the “majority” of the revenue will go to the creators themselves.


Twitter also noted that it’s improving its accessibility feature, live captions, so they can be paused and customized, and is working to make them more accurate.

The company will be hosting a Twitter Space of its own today around 1 PM PT to further discuss these announcements in more detail.

What3Words sends legal threat to a security researcher for sharing an open-source alternative

The U.K. company behind the digital addressing system What3Words has sent a legal threat to a security researcher for offering to share an open-source software project with other researchers, which What3Words claims violates its copyright.

Aaron Toponce, a systems administrator at XMission, received a letter on Thursday from a law firm representing What3Words, requesting that he delete tweets related to the open-source alternative, WhatFreeWords. The letter also demands that he disclose to the law firm the identity of the person or people with whom he had shared a copy of the software, agree not to make any further copies of it, and delete any copies he has in his possession.

The letter gave him until May 7 to agree, after which What3Words would “waive any entitlement it may have to pursue related claims against you,” a thinly-veiled threat of legal action.

“This is not a battle worth fighting,” he said in a tweet. Toponce told TechCrunch that he has complied with the demands, fearing legal repercussions if he didn’t. He has also asked the law firm twice for links to the tweets they want deleted but has not heard back. “Depending on the tweet, I may or may not comply. Depends on its content,” he said.

The legal threat sent to Aaron Toponce. (Image: supplied)

U.K.-based What3Words divides the entire world into three-meter squares and labels each with a unique three-word phrase. The idea is that three words are easier to share over the phone in an emergency than precise geographic coordinates, which would otherwise have to be found and read out.
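To make the scheme concrete, here is a toy sketch of how a grid-to-words mapping can work in principle. It is emphatically not What3Words’ proprietary algorithm (reimplementing that is exactly what WhatFreeWords was accused of); the word list, cell math, and encoding are invented for illustration:

```python
# Toy illustration of a "grid cell to word triple" addressing scheme.
# This is NOT What3Words' proprietary algorithm; the word list, cell
# size, and index math here are invented purely for illustration.

WORDS = ["apple", "river", "stone", "cloud", "maple", "ember",
         "quill", "tiger"]  # a real system needs tens of thousands

CELL_DEG = 3 / 111_320  # roughly 3 meters of latitude, in degrees

def cell_index(lat, lon):
    """Map coordinates to a single integer index over a global grid."""
    row = int((lat + 90) / CELL_DEG)
    col = int((lon + 180) / CELL_DEG)
    cols_per_row = int(360 / CELL_DEG)
    return row * cols_per_row + col

def three_words(lat, lon):
    """Encode the cell index as three words (base-len(WORDS) digits).

    With a word list this small, distant cells collide; real systems
    use huge lists and careful shuffling, and keeping similar-sounding
    addresses far apart turns out to be the hard part.
    """
    n, base = cell_index(lat, lon), len(WORDS)
    first, n = WORDS[n % base], n // base
    second, n = WORDS[n % base], n // base
    third = WORDS[n % base]
    return f"{first}.{second}.{third}"

print(three_words(51.5074, -0.1278))  # a deterministic triple per 3 m cell
```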

But security researcher Andrew Tierney recently discovered that What3Words would sometimes have two similarly-named squares less than a mile apart, potentially causing confusion about a person’s true whereabouts. In a later write-up, Tierney said What3Words was not adequate for use in safety-critical cases.

It’s not the only downside. Critics have long argued that What3Words’ proprietary geocoding technology, which it bills as “life-saving,” is harder to examine for problems or security vulnerabilities.

Concerns about its lack of openness in part led to the creation of WhatFreeWords. A copy of the project’s website, which does not contain the code itself, said the open-source alternative was developed by reverse-engineering What3Words. “Once we found out how it worked, we coded implementations for it for JavaScript and Go,” the website said. “To ensure that we did not violate the What3Words company’s copyright, we did not include any of their code, and we only included the bare minimum data required for interoperability.”

But the project’s website was nevertheless subjected to a copyright takedown request filed by What3Words’ counsel. Even tweets that pointed to cached or backup copies of the code were removed by Twitter at the lawyers’ requests.

Toponce — a security researcher on the side — contributed to Tierney’s research, which Tierney was tweeting out as he went. Toponce said that he offered to share a copy of the WhatFreeWords code with other researchers to help Tierney with his ongoing research into What3Words. Toponce told TechCrunch that the legal threat may have been prompted by a combination of offering to share the code and also finding problems with What3Words.

In its letter to Toponce, What3Words argues that WhatFreeWords contains its intellectual property and that the company “cannot permit the dissemination” of the software.

Regardless, several websites still retain copies of the code and are easily searchable through Google, and TechCrunch has seen several tweets linking to the WhatFreeWords code since Toponce went public with the legal threat. Tierney, who did not use WhatFreeWords as part of his research, said in a tweet that What3Words’ reaction was “totally unreasonable given the ease with which you can find versions online.”

We asked What3Words if the company could point to a case in which a court has ruled that WhatFreeWords violated its copyright. What3Words spokesperson Miriam Frank did not respond to multiple requests for comment.

An Oracle EVP took a brass-knuckled approach with a reporter today; now he’s suspended from Twitter

Companies and the reporters who cover them routinely find themselves at odds, particularly when the stories being chased are unflattering, bring unwanted attention to a business’s dealings, or are, in the company’s estimation, simply inaccurate.

Many companies fight back, which is why crisis communications is a very big and lucrative business. Still, how a company fights back matters. And according to crisis communications pros who TechCrunch spoke with this afternoon, a new post on Oracle’s corporate blog misses the mark, as did the company’s related follow-up on social media.

In fact, the author of the post, an Oracle executive named Ken Glueck, a 25-year-long veteran of the company, has been temporarily suspended by Twitter, the company told Gizmodo this afternoon.

The trouble ties to a series of pieces by the news site The Intercept about how a “network of local resellers helps funnel Oracle technology to the police and military in China,” and Oracle’s response to the pieces. While it isn’t uncommon for companies to post responses to media stories on their own platforms (as well as to take out ads in mainstream media outlets), the crisis execs with whom we spoke — and who asked not to be named, given that they work with companies like Oracle — had a few observations that might be helpful to Oracle in the future.

Rule number one: don’t draw attention unnecessarily to work that you might prefer didn’t exist. Oracle’s newest post doesn’t link back to the new Intercept story that Glueck works to dismantle, but in an earlier post about the first Intercept story that ran in February, Glueck hyperlinks to the story on Oracle’s blog. It’s hard to know what Oracle wants its audience to read more — Glueck’s blog post or that Intercept story, particularly given its intriguing title (“How Oracle Sells Repression in China.”). “How many of Oracle’s customers or employees saw [The Intercept piece] and didn’t give a damn and now he’s drawing attention to it?” noted one exec we’d interviewed today.

Rule number two: Don’t attack reporters; attack (if you must) the outlet. In Glueck’s first diatribe against The Intercept over its February piece, he mentions the outlet 26 times and the author of the piece once. In Glueck’s newest salvo against The Intercept, he refers to its author, reporter Mara Hvistendahl, 22 times — mostly by her first name — and even invites readers of Oracle’s blog to reach out to him, writing in boldface: “If you have any information about Mara or her reporting, write me securely at kglueck AT protonmail.com.”

Though Glueck has since said the call-out was a tongue-in-cheek gesture, it was subsequently removed from the post, presumably owing to its “sinister tone,” as observed by one of our experts. “No one likes a bully,” notes this comms pro, adding that “bullying conveys weakness.”

(Screenshots show Glueck’s post before and after the call-out was removed.)

Rule number three: Know your purpose. By lashing out at The Intercept’s piece in a plainly derisive tone, and by continuing to double down on its attack against Hvistendahl on social media afterward, Oracle let its strategy become less and less clear, says one of the crisis specialists we spoke with.

“You can do what Ken did and mock” the reporter, says this person, “but is that going to stop The Intercept from continuing to do stories about Oracle? And what is the reaction of other media? Are they scared off by [what happened today] or are they going to circle the wagons?” (Below: a note from an L.A. Times reporter to Glueck today in response to his call for information about Hvistendahl.)

Rule four: Keep it short. Two of the pros we spoke with today applauded Glueck’s writing style, remarking that it’s both fluid and funny. Both also observed that his response is far too long. “I couldn’t get through it,” said one.

Last rule: Find another way if possible. The crisis experts we spoke with said it’s ideal to first work with a reporter, then the reporter’s editor if necessary, and if it comes to it, involve lawyers, of which Oracle surely has plenty. “That’s the chain of appeal if a reporter has gotten a story blatantly wrong,” said one source.

Very possibly, Glueck decided to throw out this rulebook by design. Oracle tends to do things its own way, and Glueck is very much a product of that culture. (The WSJ wrote a 1,300-word profile about Glueck last year, calling him a “potent weapon” for Oracle.)

As for Hvistendahl, she suggests there is another reason Oracle took the route that it did.

In a statement sent to us earlier, she writes that “Ken Glueck has published two lengthy blog posts attacking me and my editor, Ryan Tate. But Oracle has not refuted my central finding, which is that the company marketed its analytics software for use by police in China. Oracle also hasn’t refuted our reporting on Oracle’s sale and marketing of its analytics software to police elsewhere in the world. We found evidence of Oracle selling or marketing analytics software to police in Mexico, Pakistan, Turkey, and the UAE. In Brazil, my colleague Tatiana Dias uncovered police contracts between Oracle and Rio de Janeiro’s notoriously corrupt Civil Police.”

How Link in Bio became the new online real estate for the creator economy

In the past few months I’ve been writing on VC Cafe about various aspects of the creator economy: the definition of the creator economy and its main challenges, who the creators are and how they make money, and five advancements powering the future of content creation. But underneath all that, there’s a seemingly simple infrastructure challenge that has become a big opportunity for startups – the link in bio page.

It all starts with identity

Creators come from different walks of life. They can be gamers, models, teachers, performing artists, makers, etc. To share their craft with their audience, they typically create some sort of content/media – text, video, audio, images, etc. They distribute the content via established platforms that help them reach and grow an audience, such as Twitch, YouTube, TikTok, Substack, OnlyFans, etc., and more often than not they build a following on more than one of these channels and benefit from cross-promotion (e.g. linking from a TikTok page to an Instagram profile).

Several creators have also opted to sell/promote products, which is not always possible on the platforms or incurs a high take rate that they would rather avoid (think Apple’s 30% take on in-app purchases, more on this in my next post).

The fragmentation of media creates a challenge for creators. What’s “home”? Do they care about driving users to a personal website that doesn’t add to their community size or engagement? Or would they rather engage with their fans and community on the platforms themselves?

So, where’s the creator’s ‘homepage’?

In October 2009, during the web 2.0 explosion, when new social networks were popping up on a weekly basis, About.me launched as a hosting service for personal websites. It was meant to be the ‘online directory’, where users could post a short bio and all the links to their social media profiles: at the time MySpace, Tumblr, Flickr and Google+, plus some that fared better with time, like Pinterest, Twitter and Facebook.

The rise of Instagram meant that ‘influencers’ built huge audiences on the platform, but links were not clickable in captions and every account was limited to one permanent link in the profile bio. To leverage that Instagram fandom to drive purchases, eyeballs or followings on other networks, Instagram users started adding “link in bio to read more” to their captions.

On Instagram, accounts with 10,000+ followers have the option to share links through their Stories, but those with smaller followings rely on that single ‘link in bio’ to direct followers to a product page or a piece of content. The same challenge exists on other social platforms like TikTok, Clubhouse, YouTube, Twitch, etc.

Enter Linktree

Linktree, an Australian startup founded in 2016, saw the opportunity to create a custom landing page for that seemingly limited, but hugely valuable, single URL slot on the Instagram bio page. Linktree essentially transforms a single link into multiple links in a branded environment where brands, influencers and creators can highlight their content, commerce and additional social channels.
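Conceptually, a link-in-bio page is little more than a small data structure rendered as a landing page. Here is a minimal sketch of that idea (not Linktree’s actual implementation or API; the profile fields and URLs are made up):

```python
# Minimal sketch of a link-in-bio page: a small data structure plus
# a tiny renderer. Not Linktree's actual implementation or API.

from html import escape

profile = {
    "handle": "@examplecreator",  # hypothetical creator
    "bio": "Photographer & maker",
    "links": [
        {"label": "New print shop", "url": "https://example.com/shop"},
        {"label": "Latest video", "url": "https://example.com/video"},
        {"label": "Newsletter", "url": "https://example.com/news"},
    ],
}

def render(page):
    """Render the profile as a bare-bones HTML landing page."""
    items = "\n".join(
        f'  <li><a href="{escape(link["url"])}">{escape(link["label"])}</a></li>'
        for link in page["links"]
    )
    return (f"<h1>{escape(page['handle'])}</h1>\n"
            f"<p>{escape(page['bio'])}</p>\n"
            f"<ul>\n{items}\n</ul>")

print(render(profile))
```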

“Part of the uniqueness of Linktree is its deceptively simple design”

According to one of the co-founders, Alex Zaccaria, Linktree’s site was developed in six hours, yet the company was bootstrapped and cash flow positive from the very beginning. Linktree operates a freemium model: the basic product is free, but in April 2017 the company launched PRO accounts, a $6-a-month subscription unlocking additional features including a commerce suite, analytics, customisation, third-party integrations, support and more.

Fast forward to 2021: Linktree has more than 8 million users and raised a $45M Series B a month ago, co-led by Index Ventures and Coatue, less than a year after raising a $10M Series A from AirTree Ventures and Insight Partners. In the funding announcement on March 26th, 2021, Linktree shared that 4 million new users, a third of its 12M total, had signed up within the last three months.

In the words of Zaccaria:

“We started Linktree to solve a problem we thought to be uniquely our own, and quickly learned that not only was it a pain point felt globally, but one that’s escalating as social media further fragments…. We’ve focused on building a product that reduces friction and helps users get their content seen, at the right time and in the right place – something that is arguably more essential now than ever before.”


Competitors Join the Party

Linktree’s rise in popularity didn’t go unnoticed. As social media fragmentation continues to grow, the need to link a number of pages to a single person has become clearer. Several startups are competing to become the ‘personal identity’ page for the Internet, and for creators in particular.

Some of the more prominent contenders for the coveted “link in bio” real estate:

  • Shorby – lets you retarget anyone who clicked on your shared link
  • Linkin.bio by later.com – mainly focused on Instagram, closely integrated with Shopify
  • Beacons – a mobile website builder for creators on TikTok and Instagram, with e-commerce and monetization built in. Founded in 2019; raised $150K according to Crunchbase. SF-based.
  • Koji – offers a similar personal page, but developed an ‘app store’ of add-ons like running a giveaway, sharing music recommendations or letting fans request videos. San Diego-based, founded in 2016; started as a meme creator. Venture-backed ($16M).

I believe we will continue to see more innovation in the tools available for the creator economy to build and engage their communities. Mighty Networks, a startup founded in 2017 offering tools to start and grow niche communities, just announced a $50M Series B to help create the “creator middle class”; it has pivoted to focus on creators, enabling them to offer community memberships, events and live online courses.

In my next post, I’ll dive a bit deeper into the latest innovations in the creator monetisation toolbox.


At social media hearing, lawmakers circle algorithm-focused Section 230 reform

Rather than a CEO-slamming sound bite free-for-all, Tuesday’s big tech hearing on algorithms aimed for more of a listening session vibe — and in that sense it mostly succeeded.

The hearing centered on testimony from the policy leads at Facebook, YouTube and Twitter rather than the chief executives of those companies for a change. The resulting few hours didn’t offer any massive revelations but was still probably more productive than squeezing some of the world’s most powerful men for their commitments to “get back to you on that.”

In the hearing, lawmakers bemoaned social media echo chambers and the ways that the algorithms pumping content through platforms are capable of completely reshaping human behavior.

“… This advanced technology is harnessed into algorithms designed to attract our time and attention on social media, and the results can be harmful to our kids’ attention spans, to the quality of our public discourse, to our public health, and even to our democracy itself,” said Chris Coons (D-DE), chair of the Senate Judiciary’s subcommittee on privacy and tech, which held the hearing.

Coons struck a cooperative note, observing that algorithms drive innovation but that their dark side comes with considerable costs.

None of this is new, of course. But Congress is crawling closer to solutions, one repetitive tech hearing at a time. The Tuesday hearing highlighted some zones of bipartisan agreement that could determine the chances of a tech reform bill passing the Senate, which is narrowly controlled by Democrats. Coons expressed optimism that a “broadly bipartisan solution” could be reached.

What would that look like? Probably changes to Section 230 of the Communications Decency Act, which we’ve written about extensively over the years. That law protects social media companies from liability for user-created content and it’s been a major nexus of tech regulation talk, both in the newly Democratic Senate under Biden and the previous Republican-led Senate that took its cues from Trump.


Lauren Culbertson, head of U.S. public policy at Twitter Inc., speaks remotely during a Senate Judiciary Subcommittee hearing in Washington, D.C., U.S., on Tuesday, April 27, 2021. Photographer: Al Drago/Bloomberg via Getty Images

A broken business model

In the hearing, lawmakers pointed to flaws inherent to how major social media companies make money as the heart of the problem. Rather than criticizing companies for specific failings, they mostly focused on the core business model from which social media’s many ills spring forth.

“I think it’s very important for us to push back on the idea that really complicated, qualitative problems have easy quantitative solutions,” Sen. Ben Sasse (R-NE) said. He argued that because social media companies make money by keeping users hooked to their products, any real solution would have to upend that business model altogether.

“The business model of these companies is addiction,” Josh Hawley (R-MO) echoed, calling social media an “attention treadmill” by design.

Ex-Googler and frequent tech critic Tristan Harris didn’t mince words about how tech companies talk around that central design tenet in his own testimony. “It’s almost like listening to a hostage in a hostage video,” Harris said, likening the engagement-seeking business model to a gun just offstage.

Spotlight on Section 230

One big way lawmakers propose to disrupt those deeply entrenched incentives? Adding algorithm-focused exceptions to the Section 230 protections that social media companies enjoy. A few bills floating around take that approach.

One bill from Sen. John Kennedy (R-LA) and Reps. Paul Gosar (R-AZ) and Tulsi Gabbard (D-HI) would require platforms with 10 million or more users to obtain consent before serving users content based on their behavior or demographic data if they want to keep Section 230 protections. The idea is to revoke 230 immunity from platforms that boost engagement by “funneling information to users that polarizes their views” unless a user specifically opts in.

In another bill, the Protecting Americans from Dangerous Algorithms Act, Reps. Anna Eshoo (D-CA) and Tom Malinowski (D-NJ) propose suspending Section 230 protections and making companies liable “if their algorithms amplify misinformation that leads to offline violence.” That bill would amend Section 230 to reference existing civil rights laws.

Section 230’s defenders argue that any insufficiently targeted changes to the law could disrupt the modern internet as we know it, resulting in cascading negative impacts well beyond the intended scope of reform efforts. An outright repeal of the law is almost certainly off the table, but even small tweaks could completely realign internet businesses, for better or worse.

During the hearing, Hawley made a broader suggestion for companies that use algorithms to chase profits. “Why shouldn’t we just remove Section 230 protection from any platform that engages in behavioral advertising or algorithmic amplification?” he asked, adding that he wasn’t opposed to an outright repeal of the law.

Sen. Amy Klobuchar (D-MN), who leads the Senate’s antitrust subcommittee, connected the algorithmic concerns to anti-competitive behavior in the tech industry. “If you have a company that buys out everyone from under them… we’re never going to know if they could have developed the bells and whistles to help us with misinformation because there is no competition,” Klobuchar said.

Subcommittee members Klobuchar and Sen. Mazie Hirono (D-HI) have their own major Section 230 reform bill, the Safe Tech Act, but that legislation is less concerned with algorithms than ads and paid content.

At least one more major bill looking at Section 230 through the lens of algorithms is still on the way. Prominent big tech critic House Rep. David Cicilline (D-RI) is due out soon with a Section 230 bill that could suspend liability protections for companies that rely on algorithms to boost engagement and line their pockets.

“That’s a very complicated algorithm that is designed to maximize engagement to drive up advertising prices to produce greater profits for the company,” Cicilline told Axios last month. “…That’s a set of business decisions for which, it might be quite easy to argue, that a company should be liable for.”

Equity Monday: Social media crackdowns, earnings, and a funding deluge

Hello and welcome back to Equity, TechCrunch’s venture capital-focused podcast where we unpack the numbers behind the headlines.

This is Equity Monday, our weekly kickoff that tracks the latest private market news, talks about the coming week, digs into some recent funding rounds and mulls over a larger theme or narrative from the private markets. You can follow the show on Twitter here and me here.

This weekend had a key story, earnings are on the way, and there is a huge number of funding rounds to talk about. Ready?

  • The Indian government’s move to remove a number of social media posts critical of its handling of COVID-19 was the key news item this weekend. As the country’s healthcare system buckles and deaths spike, the move by the current administration to censor the Internet was about as bad a look as you could imagine, at least in terms of a tech response.
  • Also this weekend, conversation continued about Substack’s recent push to hire away well-known writers from traditionally respected publications, with Insider reporting that six-figure offers to join the paid newsletter platform are the norm.
  • This morning we’re focused on the impending earnings deluge. Major American tech companies, along with some key social media and ecommerce names, will report, giving us a look into how tech companies performed in the first quarter of 2021. We already know that the venture market was hot during the period. How business fared, however, is less clear.
  • On the funding round beat, Mighty Networks raised $50 million, LEAD School raised $30 million, Kidato raised $1.4 million, StashAway stashed away $25 million, and Kyligence put together a $70 million Series D of its own.

The Honest Company also set an early IPO price range after we stopped recording. More to come on the IPO front. Chat Wednesday!

Equity drops every Monday at 7:00 a.m. PST and Wednesday and Friday at 6:00 a.m. PST, so subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts!
