Twitter rolls out bigger images and cropping control on iOS and Android

Twitter just made a change to the way it displays images that has visual artists on the social network celebrating.

In March, Twitter rolled out a limited test of uncropped, larger images in users’ feeds. Now, it’s declared those tests a success and improved the image sharing experience for everybody.

On Twitter for Android or iOS, standard aspect ratio images (16:9 and 4:3) will now display in full without any cropping. Instead of gambling on how an image will show up in the timeline — and potentially ruining an otherwise great joke — images will look just like they did when you shot them.

Twitter’s new system will show anyone sharing an image a preview of what it will look like before it goes live in the timeline, resolving past concerns that Twitter’s algorithmic cropping was biased toward highlighting white faces.

“Today’s launch is a direct result of the feedback people shared with us last year that the way our algorithm cropped images wasn’t equitable,” Twitter spokesperson Lauren Alexander said. The new way of presenting images decreases the platform’s reliance on automatic, machine learning-based image cropping.

Super tall or wide images will still get a centered crop, but Twitter says it’s working to make that better too, along with other aspects of how visual media gets displayed in the timeline.

For visual artists like photographers and cartoonists who promote their work on Twitter, this is actually a pretty big deal. Not only will photos and other kinds of art score more real estate on the timeline, but artists can be sure that they’re putting their best tweet forward without awkward crops messing stuff up.

Twitter’s Chief Design Officer Dantley Davis celebrated by tweeting a requisite dramatic image of the Utah desert (Dead Horse Point — great spot!).

We regret to inform you that the brands are also aware of the changes.

The days of “open for a surprise” tweets might be numbered, but the long duck can finally have his day.

Facebook’s Oversight Board threw the company a Trump-shaped curveball

Facebook’s controversial policy-setting supergroup issued its verdict on Trump’s fate Wednesday, and it wasn’t quite what most of us were expecting.

We’ll dig into the decision to tease out what it really means, not just for Trump, but also for Facebook’s broader experiment in outsourcing difficult content moderation decisions and for just how independent the board really is.

What did the Facebook Oversight Board decide?

The Oversight Board backed Facebook’s determination that Trump violated its policy on “Dangerous Individuals and Organizations,” which prohibits anything that praises or otherwise supports violence. The full decision and accompanying policy recommendations are online for anyone to read.

Specifically, the Oversight Board ruled that two Trump posts, one telling Capitol rioters “We love you. You’re very special” and another calling them “great patriots” and telling them to “remember this day forever,” broke Facebook’s rules. In fact, the board went as far as saying the pair of posts “severely” violated the rules in question, making the risk of real-world harm in Trump’s words crystal clear:

The Board found that, in maintaining an unfounded narrative of electoral fraud and persistent calls to action, Mr. Trump created an environment where a serious risk of violence was possible. At the time of Mr. Trump’s posts, there was a clear, immediate risk of harm and his words of support for those involved in the riots legitimized their violent actions. As president, Mr. Trump had a high level of influence. The reach of his posts was large, with 35 million followers on Facebook and 24 million on Instagram.

While the Oversight Board praised Facebook’s decision to suspend Trump, it disagreed with the way the platform implemented the suspension. The group argued that Facebook’s decision to issue an “indefinite” suspension was an arbitrary punishment that wasn’t really supported by the company’s stated policies:

It is not permissible for Facebook to keep a user off the platform for an undefined period, with no criteria for when or whether the account will be restored.

In applying this penalty, Facebook did not follow a clear, published procedure. ‘Indefinite’ suspensions are not described in the company’s content policies. Facebook’s normal penalties include removing the violating content, imposing a time-bound period of suspension, or permanently disabling the page and account.

The Oversight Board didn’t mince words on this point, going on to say that by putting a “vague, standardless” punishment in place and then kicking the ultimate decision to the Oversight Board, “Facebook seeks to avoid its responsibilities.” Turning things around, the board asserted that it’s actually Facebook’s responsibility to come up with an appropriate penalty for Trump that fits its set of content moderation rules.

 

Is this a surprise outcome?

If you’d asked me yesterday, I would have said that the Oversight Board was more likely to overturn Facebook’s Trump decision. I also called Wednesday’s big decision a win-win for Facebook, because whatever the outcome, it wouldn’t ultimately be criticized a second time for either letting Trump back onto the platform or kicking him off for good. So much for that!

A lot of us didn’t see the “straight up toss the ball back into Facebook’s court” option as a possible outcome. It’s ironic and surprising that the Oversight Board’s decision to give Facebook the final say actually makes the board look more independent, not less.

Facebook likely saw a more clear-cut decision on the Trump situation in the cards. This is a challenging outcome for a company that’s probably ready to move on from its (many, many) missteps during the Trump era. But there’s definitely an argument that if the board had declared that Facebook made the wrong call and reinstated Trump, that would have been a much bigger headache.

What does it mean that the Oversight Board sent the decision back to Facebook?

Ultimately the Oversight Board is asking Facebook to either a) give Trump’s suspension an end date or b) delete his account. In a less severe case, the normal course of action would be for Facebook to remove whatever broke the rules, but given the ramifications here and the fact that Trump is a repeat Facebook rule-breaker, this is obviously all well past that option.

What will Facebook do?

We’re in for a wait. The board gave Facebook six months to evaluate the Trump situation and reach a final decision, calling for a “proportionate” response that is justified by its platform rules. Since Facebook and other social media companies are re-writing their rules all the time and making big calls on the fly, that gives the company a bit of time to build out policies that align with the actions it plans to take. See you again on November 5.

In the months following the violence at the U.S. Capitol, Facebook repeatedly defended its Trump call as “necessary and right.” It’s hard to imagine the company deciding that Trump will get reinstated six months from now, but in theory Facebook could decide that length of time was an appropriate punishment and write that into its rules. The fact that Twitter permanently banned Trump means that Facebook could comfortably follow suit at this point.

If Trump had won reelection, this whole thing probably would have gone down very differently. As much as Facebook likes to say its decisions are aligned with lofty ideals — absolute free speech, connecting people — the company is ultimately very attuned to its regulatory and political environment. Trump’s actions on January 6 were dangerous and flagrant, but Biden’s looming inauguration two weeks later probably influenced the company’s decision just as much.

In direct response to the decision, Facebook’s Nick Clegg wrote only: “We will now consider the board’s decision and determine an action that is clear and proportionate.” Clegg says Trump will stay suspended until then but didn’t offer further hints at what comes next.

Did the board actually change anything?

Potentially. In its decision, the Oversight Board said that Facebook asked for “observations or recommendations from the Board about suspensions when the user is a political leader.” The board’s policy recommendations aren’t binding like its decisions are, but since Facebook asked, it’s likely to listen.

If it does, the Oversight Board’s recommendations could reshape how Facebook handles high profile accounts in the future:

The Board stated that it is not always useful to draw a firm distinction between political leaders and other influential users, recognizing that other users with large audiences can also contribute to serious risks of harm.

While the same rules should apply to all users, context matters when assessing the probability and imminence of harm. When posts by influential users pose a high probability of imminent harm, Facebook should act quickly to enforce its rules. Although Facebook explained that it did not apply its ‘newsworthiness’ allowance in this case, the Board called on Facebook to address widespread confusion about how decisions relating to influential users are made. The Board stressed that considerations of newsworthiness should not take priority when urgent action is needed to prevent significant harm.

Facebook and other social networks have hidden behind newsworthiness exemptions for years instead of making difficult policy calls that would upset half their users. Here, the board not only says that political leaders don’t really deserve special consideration while enforcing the rules, but that it’s much more important to take down content that could cause harm than it is to keep it online because it’s newsworthy.

So… we’re back to square one?

Yes and no. Trump’s suspension may still be up in the air, but the Oversight Board is modeled after a legal body and its real power is in setting precedents. The board kicked this case back to Facebook because the company picked a punishment for Trump that wasn’t even on the menu, not because it thought anything about his behavior fell in a gray area.

The Oversight Board clearly believed that Trump’s words of praise for rioters at the Capitol created a high stakes, dangerous threat on the platform. It’s easy to imagine the board reaching the same conclusion on Trump’s infamous “when the looting starts, the shooting starts” statement during the George Floyd protests, even though Facebook did nothing at the time. Still, the board stops short of saying that behavior like Trump’s merits a perma-ban — that much is up to Facebook.

Facebook launches Neighborhoods, a Nextdoor clone

Facebook is launching a new section of its app designed to connect neighbors and curate neighborhood-level news. The new feature, predictably called Neighborhoods, is available now in Canada and will be rolling out soon for U.S. users to test.

As we reported previously, Neighborhoods has technically been around since at least October of last year, but that limited test only recruited residents of Calgary, Canada.

On Neighborhoods, Facebook users can create a separate sub-profile and can populate it with interests and a custom bio. You can join your own lower-case neighborhood and nearby neighborhoods and complain about porch pirates, kids these days, or whatever you’d otherwise be doing on Nextdoor.

Aware of the intense moderation headaches on Nextdoor, Facebook says that a set of moderators dedicated to Neighborhoods will review comments and posts to keep matters “relevant and kind.” Within individual neighborhoods, it sounds like deputized users can steer and strike up conversations and do some light moderation. The new corner of Facebook will also come with blocking features.

As far as privacy goes, well, it’s Facebook. Neighborhoods isn’t its own standalone app, and the platform will naturally use your neighborly behavior to serve you targeted ads elsewhere.

Twitter rolls out improved ‘reply prompts’ to cut down on harmful tweets

A year ago, Twitter began testing a feature that would prompt users to pause and reconsider before they replied to a tweet using “harmful” language — meaning language that was abusive, trolling, or otherwise offensive in nature. Today, the company says it’s rolling out improved versions of these prompts to English-language users on iOS and, soon, Android, after adjusting the systems that determine when to send the reminders to better understand when the language in a reply is actually harmful.

The idea behind these forced slowdowns, or nudges, is to leverage psychological tricks to help people make better decisions about what they post. Studies have indicated that introducing a nudge like this can lead people to edit and cancel posts they would have otherwise regretted.

Twitter’s own tests found that to be true, too. It said that 34% of people revised their initial reply after seeing the prompt, or chose not to send the reply at all. And, after being prompted once, people then composed 11% fewer offensive replies in the future, on average. That indicates that the prompt, for some small group at least, had a lasting impact on user behavior. (Twitter also found that users who were prompted were less likely to receive harmful replies back, but didn’t further quantify this metric.)

Image Credits: Twitter

However, Twitter’s early tests ran into some problems. It found its systems and algorithms sometimes struggled to understand the nuance that occurs in many conversations. For example, it couldn’t always differentiate between offensive replies and sarcasm or, sometimes, even friendly banter. It also struggled to account for those situations in which language is being reclaimed by underrepresented communities, and then used in non-harmful ways.

The improvements rolling out starting today aim to address these problems. Twitter says it’s made adjustments to the technology across these areas, and others. Now, it will take the relationship between the author and replier into consideration: if two accounts follow and reply to each other often, they’re more likely to understand each other’s preferred tone of communication than strangers would.

Twitter says it has also improved the technology to more accurately detect strong language, including profanity.

And it’s made it easier for those who see the prompts to let Twitter know if the prompt was helpful or relevant — data that can help to improve the systems further.

How well this all works remains to be seen, of course.

Image Credits: Twitter

While any feature that can help dial down some of the toxicity on Twitter may be useful, this only addresses one aspect of the larger problem — people who get into heated exchanges that they could later regret. There are other issues across Twitter regarding abusive and toxic content that this solution alone can’t address.

These “reply prompts” aren’t the only time Twitter has used the concept of nudges to impact user behavior. It also reminds users to read an article before retweeting and amplifying it, in an effort to promote more informed discussions on its platform.

Twitter says the improved prompts are rolling out to all English-language users on iOS starting today, and will reach Android over the next few days.

Facebook’s hand-picked ‘oversight’ panel upholds Trump ban — for now

Facebook’s content decision review body — a quasi-external panel that’s been likened to a ‘Supreme Court of Facebook’ but isn’t staffed by sitting judges, can’t be truly independent of the tech giant that funds it, has no legal legitimacy or democratic accountability, and goes by the much duller official title ‘Oversight Board’ (aka the FOB) — has just made the biggest call of its short life…

Facebook’s hand-picked ‘oversight’ panel has voted against reinstating former U.S. president Donald Trump’s Facebook account.

However, it has sought to row the company back from an ‘indefinite’ ban — finding fault with Facebook’s decision to impose an indefinite restriction rather than issue a more standard penalty (such as a penalty strike or permanent account closure).

In a press release announcing its decision the board writes:

Given the seriousness of the violations and the ongoing risk of violence, Facebook was justified in suspending Mr. Trump’s accounts on January 6 and extending that suspension on January 7.

However, it was not appropriate for Facebook to impose an ‘indefinite’ suspension.

It is not permissible for Facebook to keep a user off the platform for an undefined period, with no criteria for when or whether the account will be restored.

The board wants Facebook to revisit its decision on Trump’s account within six months — and “decide the appropriate penalty”. So it appears to have succeeded in… kicking the can down the road.

The FOB is due to hold a press conference to discuss its decision shortly so stay tuned for updates.


It’s certainly been a very quiet four months on mainstream social media since Trump had his social media ALL CAPS megaphone unceremoniously shut down in the wake of his supporters’ violent storming of the Capitol.

For more on the background to Trump’s deplatforming do make time for this excellent explainer by TechCrunch’s Taylor Hatmaker. But the short version is that Trump finally appeared to have torched the last of his social media rule-breaking chances after he succeeded in fomenting an actual insurrection on U.S. soil on January 6. Doing so with the help of the massive, mainstream social media platforms whose community standards don’t, as a rule, give a thumbs up to violent insurrection…

For Trump and Facebook, judgement day is around the corner

Facebook unceremoniously confiscated Trump’s biggest social media megaphone months ago, but the former president might be poised to snatch it back.

Facebook’s Oversight Board, an external Supreme Court-like policy decision-making group, will either restore Trump’s Facebook privileges or banish him forever on Wednesday. Whatever happens, it’s a huge moment for Facebook’s nascent experiment in outsourcing hard content moderation calls to an elite group of global thinkers, academics and political figures, and allowing them to set precedents that could shape the world’s biggest social networks for years to come.

Facebook CEO Mark Zuckerberg announced Trump’s suspension from Facebook in the immediate aftermath of the Capitol attack. It was initially a temporary suspension, but two weeks later Facebook said that the decision would be sent to the Oversight Board. “We believe the risks of allowing the President to continue to use our service during this period are simply too great,” Zuckerberg wrote in January.

Facebook’s VP of Global Affairs Nick Clegg, a former British politician, expressed hope that the board would back the company’s own conclusions, calling Trump’s suspension an “unprecedented set of events which called for unprecedented action.”

Trump inflamed tensions and incited violence on January 6, but that incident wasn’t without precedent. In the aftermath of the murder of George Floyd, an unarmed Black man killed by Minneapolis police, President Trump ominously declared on social media “when the looting starts, the shooting starts,” a threat of imminent violence with racist roots that Facebook declined to take action against, prompting internal protests at the company.

The former president skirted or crossed the line with Facebook any number of times over his four years in office, but the platform stood steadfastly behind a maxim that all speech was good speech, even as other social networks grew more squeamish.

In a dramatic address in late 2019, Zuckerberg evoked Martin Luther King Jr. as he defended Facebook’s anything goes approach. “In times of social turmoil, our impulse is often to pull back on free expression,” Zuckerberg said. “We want the progress that comes from free expression, but not the tension.” King’s daughter strenuously objected.

A little over a year later, with all of Facebook’s peers doing the same and Trump leaving office, Zuckerberg would shrink back from his grand free speech declarations.

In 2019 and well into 2020, Facebook was still a roiling hotbed of misinformation, conspiracies and extremism. The social network hosted thousands of armed militias organizing for violence and a sea of content amplifying QAnon, which moved from a fringe belief on the margins to a mainstream political phenomenon through Facebook.

Those same forces would converge at the U.S. Capitol on January 6 for a day of violence that Facebook executives characterized as spontaneous, even though it had been festering openly on the platform for months.

 

How the Oversight Board works

Facebook’s Oversight Board began reviewing its first cases last October. Facebook can refer cases to the board, like it did with Trump, but users can also appeal to the board to overturn policy decisions that affect them after they exhaust the normal Facebook or Instagram appeals process. A five-member subset of its 20 total members evaluates whether content should be allowed to remain on the platform and reaches a decision, which the full board must approve by a majority vote. Initially, the Oversight Board was only empowered to reinstate content removed on Facebook and Instagram, but in mid-April it began accepting requests to review controversial content that stayed up.

Last month, the Oversight Board replaced departing member Pamela Karlan, a Stanford professor and voting rights scholar critical of Trump, who left to join the Biden administration. Karlan’s replacement, PEN America CEO Suzanne Nossel, wrote an op-ed in the LA Times in late January arguing that extending a permanent ban on Trump “may feel good” but would ultimately set a dangerous precedent. Nossel joined the board too late to participate in the Trump decision.

The Oversight Board’s earliest batch of decisions leaned in the direction of restoring content that’s been taken down — not upholding its removal. While the board’s other decisions are likely to touch on the full spectrum of frustration people have with Facebook’s content moderation preferences, they come with far less baggage than the Trump decision. In one instance, the Oversight Board voted to restore an image of a woman’s nipples used in the context of a breast cancer post. In another, the board decided that a quote from a famous Nazi didn’t merit removal because it wasn’t an endorsement of Nazi ideology. In all cases, the Oversight Board can issue policy recommendations, but Facebook isn’t obligated to implement them — just the decisions.

Befitting its DNA of global activists, political figures and academics, the Oversight Board might have ambitions well beyond one social network. Earlier this year, Oversight Board co-chair and former Prime Minister of Denmark Helle Thorning-Schmidt declared that other social media companies would be “welcome to join” the project, which is branded in a conspicuously Facebook-less way. (The group calls itself the “Oversight Board” though everyone calls it the “Facebook Oversight Board.”)

“For the first time in history, we actually have content moderation being done outside one of the big social media platforms,” Thorning-Schmidt declared, grandly. “That in itself… I don’t hesitate to call it historic.”

Facebook’s decision to outsource some major policy decisions is indeed an experimental one, but that experiment is just getting started. The Trump case will give Facebook’s miniaturized Supreme Court an opportunity to send a message, though whether the takeaway is that it’s powerful enough to keep a world leader muzzled or independent enough to strike out from its parent and reverse the biggest social media policy decision ever made remains to be seen.

If Trump comes back, the company can shrug its shoulders and shirk another PR firestorm, content that its experiment in external content moderation is legitimized. If the board doubles down on banishing Trump, Facebook will rest easy knowing that someone else can take the blowback this round in its most controversial content call to date. For Facebook, for once, it’s a win-win situation.

Twitter expands Spaces to anyone with 600+ followers, details plans for tickets, reminders and more

Twitter Spaces, the company’s new live audio rooms feature, is opening up more broadly. The company announced today it’s making Twitter Spaces available to any account with 600 followers or more, including both iOS and Android users. It also officially unveiled some of the features it’s preparing to launch, like Ticketed Spaces, scheduling features, reminders, support for co-hosting, accessibility improvements, and more.

Along with the expansion, Twitter is making Spaces more visible on its platform, too. The company notes it has begun testing the ability to find and join a Space from a purple bubble around someone’s profile picture right from the Home timeline.

Image Credits: Twitter

Twitter says it decided on the 600 follower figure as being the minimum to gain access to Twitter Spaces based on its earlier testing. Accounts with 600 or more followers tend to have “a good experience” hosting live conversations because they have a larger existing audience who can tune in. However, Twitter says it’s still planning to bring Spaces to all users in the future.

In the meantime, it’s speeding ahead with new features and developments. Twitter has been building Spaces in public, taking user feedback into consideration as it prioritizes features and updates. Already, it has built out an expanded set of audience management controls, introduced a way for hosts to mute all speakers at once, and, after users requested it, added the laughing emoji to its set of reactions.

Now, its focus is turning towards creators. Twitter Spaces will soon support multiple co-hosts, and creators will be able to better market and even charge for access to their live events on Twitter Spaces. One feature, arriving in the next few weeks, will allow users to schedule and set reminders about Spaces they don’t want to miss. This can also help creators who are marketing their event in advance, as part of the RSVP process could involve pushing users to “set a reminder” about the upcoming show.

Twitter Spaces’ rival, Clubhouse, also just announced a reminders feature during its Townhall event on Sunday, as well as the start of its external Android testing. The two platforms, it seems, could soon be neck-and-neck in terms of feature set.

Image Credits: Twitter

But while Clubhouse recently launched an in-app donations feature as a means of supporting favorite creators, Twitter will soon introduce a more traditional means of generating revenue from live events: selling tickets. The company says it’s working on a feature that will allow hosts to set ticket prices and decide how many tickets are available for a given event, giving them a way of earning revenue from their Twitter Spaces.

A limited group of testers will gain access to Ticketed Spaces in the coming months, Twitter says. Unlike Clubhouse, which has yet to tap into creator revenue streams, Twitter will take a small cut from these ticket sales. However, it notes that the “majority” of the revenue will go to the creators themselves.

Image Credits: Twitter

Twitter also noted that it’s improving its accessibility feature, live captions, so they can be paused and customized, and is working to make them more accurate.

The company will be hosting a Twitter Space of its own today around 1 PM PT to discuss these announcements in more detail.

Rapchat tunes into $2.3M as its music-making app hits 7M users

YouTube, Snapchat, Twitter, TikTok and Facebook’s Instagram have upended the film and TV industries, with a new wave of cinematographers, directors and actors leveraging innovations in technology to create new work and connect directly with the billions of consumers who see it. Today, a startup is announcing some funding as it looks to make a similar impact in the world of music.

Rapchat, an app that lets people create music tracks — raps, as its name suggests, or something else — using a platform that crowdsources beats and lets people put vocals on top of them, has raised $2.3 million.

Co-led by Sony Music Entertainment and NYC VC firm Adjacent, this is an extension to Rapchat’s seed round of $1.7 million back in 2018, and CEO and co-founder Seth Miller tells me it’s coming as the startup is getting ready for a bigger Series A.

With no connection to Snapchat — not now, at least, except that founders Seth Miller and Pat Gibson thought it was a funny pun back when they first conceived of the company as a side hustle while still in university in 2015 — Rapchat has already gone quite some way in scaling.

The company today has some 7 million registered users, and at the moment 500,000 active users create some 250,000 songs on the platform each month around a catalog of about 100,000 beats. Engagement is hovering right now at 35 minutes per day on average: a mix of people making tunes and, through the beginnings of a social graph, people coming onto the app to discover and share those tracks.

Rapchat plans to use the funding to continue expanding the scope of what you can create on its platform, including growing the prize pools for Rapchat’s ‘Challenges’ competition series; to bring more artists, producers, and industry executives onto the platform for mentoring; and to integrate more deeply with the likes of TikTok, Snapchat, Spotify and Apple Music — platforms where creators are already making a lot of content, and where music is figuring strongly in that effort.

Rapchat’s growth not only shows that the startup has pulled off its ambition to make it easier to make music; it also speaks to an appetite, an itch, in the creator economy: there is a big wide world of music-making out there, and more people want to see if they can strike the right note.

Rapchat is definitely not the only, nor the first, company to think of how to address music creators within the bigger creator economy.

Another app, Voisey, had conceived of a similar idea but focused primarily on letting people create and record shorter clips, rather than full music tracks, before sharing them to other platforms. It never quite became a household name, but it did have some small success in bringing attention to new artists, and, interestingly, it was quietly acquired by Snap last year (for now, Snap has kept Voisey’s app up).

TikTok’s parent ByteDance has also acquired a music creation app, Jukedeck. As with Snap’s purchase of Voisey, it’s not yet fully clear where that acquisition is going, but we’ve heard through the grapevine that TikTok is working on a new music service that sounds like it might let more content get plugged into TikTok’s music layer, so perhaps watch this space.

And in perhaps the most trend-endorsing act of all, Rapchat has been cloned — by Facebook, no less. In February, NPE, the social networking behemoth’s in-house skunkworks team, rolled out BARS (all caps! stand out!), which is, yes, an app on which you can create your own rap music.

Miller, at least for now, is about as laid back as you could be, considering all of the above. He says he is very happy with the engagement Rapchat is seeing, including around tests it has been running of new premium features: the app is free to use right now, but it plans to offer creators more production tools, better ways of sharing their work, and help building a business out of it. Key to that will be never demanding licensing fees on music: creators keep the royalties, with Rapchat’s value lying in helping them make music and track how it gets used, via the metadata it holds on those tracks.

Some of the low-key approach might well come from the fact that Rapchat and its founders are somewhat outside of the startup fray. The idea for the app first came up in 2013, Miller said, when he and Gibson were students at Ohio University in Athens, Ohio.

“We were coming of age when everyone in college was using apps like Snapchat and Instagram,” he said. “We loved them for video, but saw there was nothing like them for creating music. So we pitched the idea during a Startup Weekend competition: snapping like Snapchat but for rap. Someone said, ‘Rapchat’ and we liked it.”

They went full-time on the idea in 2015 when they got into 500 Startups with the app, but even so it’s taken them years to build up the business, get attention from investors and raise money. Why? Partly because music is hard, and frankly the main game in town for years has been streaming services, rather than creation services.

Miller and Gibson persisted: “I knew that this market was huge. It just made so much sense to me,” he said. “The advent of mobile devices meant that apps like Instagram, VSCO and Snapchat turned people into photographers and video makers, and Substack is turning people into writers.” And now Rapchat wants to tap the world for rappers.

“Rapchat has created a music studio that fits into your pocket,” said Nico Wittenborn, lead investor at venture capital firm Adjacent, in a statement. “It decreases the friction of creativity by allowing anyone, anywhere in the world to record and publish music straight from their phones. This mobile-enabled democratization of technology is what Adjacent is all about, and I am super excited to support the team in building out this next-level music platform.”

Instagram Live takes on Clubhouse with options to mute and turn off video

In addition to Facebook’s Clubhouse competitor built within Messenger Rooms and its experiments with a Clubhouse-like Q&A platform on the web, the company is now leveraging yet another of its largest products to take on the Clubhouse threat: Instagram Live. Today, Instagram announced it’s adding new features that will allow users to mute their microphones and even turn their video off while using Instagram Live.

Instagram explains these new features will give hosts more flexibility during their livestream experiences, as they can decrease the pressure to look or sound a certain way while broadcasting live. While that may be true, the reality is that Facebook is simply taking another page from Clubhouse’s playbook by enabling a “video off” experience that encourages more serendipitous conversations.

When people don’t have to worry about how they look, they’ll often be more amenable to jumping into a voice chat. In addition, being audio-only allows creators to engage with their community while multitasking — perhaps they’re doing chores or moving around, and can’t sit and stare right at the camera. To date, this has been one of the advantages of using Clubhouse over live video chat. You could participate in Clubhouse’s voice chat rooms without always having to give the conversation your full attention or worrying about background noise.

For the time being, hosts will not be able to turn off the video or mute other participants in the livestream, but Instagram tells us it’s working on offering more of these types of capabilities to broadcasters, and expects to roll them out soon.

Instagram notes it tested the new features publicly earlier this week during an Instagram Live between Facebook CEO Mark Zuckerberg and Head of Instagram Adam Mosseri.

This isn’t the first feature Instagram has added in recent weeks to lure the creator community to its platform instead of Clubhouse or other competitors. In March, Instagram rolled out the option for creators to host Live Rooms that allow up to four people to broadcast at the same time. The Rooms were meant to appeal to creators who wanted to host live talk shows, expanded Q&As, and more — all experiences that are often found on Clubhouse. It also added the ability for fans to buy badges to support the hosts, to cater to the needs of professional creators looking to monetize their reach.

Although Instagram parent company Facebook already has a more direct Clubhouse clone in development with Live Audio Rooms on Facebook and Messenger, the company said it doesn’t expect it to launch into testing until this summer. And it will first be available to Groups and public figures, not the broader public.

Instagram Live’s new features, meanwhile, are rolling out to Instagram’s global audience on both iOS and Android starting today.