Google Photos revives its prints subscription service, expands same-day print options

Google Photos is reviving its photo printing subscription service and introducing same-day prints. The company earlier this year had briefly tested a new program that used A.I. to suggest the month’s 10 best photos, which were then shipped to your home automatically. But Google ended the test on June 30.

During the trial, Google had offered users a $7.99 per month subscription that would automatically select 10 photos from one of three themes: people and pets, landscapes, or an “a little bit of everything” mix. The 4×6 photos were printed on matte, white cardstock with a 1/8-inch border.

Image Credits: Google

The new subscription, launching soon, leverages feedback from the early tests to now give users more control over which prints they receive and how they look. It also drops the price to $6.99 per month, including shipping and before tax.

With the new Premium Print Series, as the subscription is called, Google Photos will use machine learning techniques to pick 10 of your recent photos to print. But users can edit the photo selection, choose a matte or glossy finish, or add a border before the photos ship.

The photos can optionally be turned into postcards, thanks to the cardstock paper backing, Google notes.

Subscribers can also opt to skip a month and can easily cancel the service, if they’re no longer using it.

This updated version of the service was recently discovered by reverse engineer Jane Manchun Wong, who detailed the new customization options and the lower price point.

Google says the Premium Print Series will make its way to Google Photos users in the next few weeks.

The company today is also launching same-day printing at Walgreens, available immediately. This expands Google Photos’ existing same-day options, which already included same-day pickup from CVS and Walmart.

Using the Google Photos app, customers can now order 4×6, 5×7, or 8×10 photo prints for same-day pickup at Walgreens. This nearly doubles the number of stores offering same-day prints to Google Photos users, Google says.

In addition to prints and subscriptions, Google Photos continues to offer canvas prints and photo books — the latter now with up to 140 pages — as part of its online print store.

Image Credits: Google

The launch of the expanded photo printing services and subscription comes at a time when people are traveling less often, due to the pandemic, and are attending fewer large events where photo-taking may take place — like parties or concerts, for example.

But even if times have changed, people are continuing to take photos — though they may not be posting them across social media in order to avoid judgment. The subject of the photos may have changed, too, to now include more family and pets or nature scenes, instead of large, crowded places or big social gatherings, for instance.

The nostalgia for pre-pandemic times could see users turning to prints to help them relive fond memories, too.

Google didn’t say exactly when the new subscription will launch, but said users should be able to access the feature in the coming weeks.


Instagram rolls out fan badges for live videos, expands IGTV ads test

Instagram is today introducing a new way for creators to make money. The company is now rolling out badges in Instagram Live to an initial group of over 50,000 creators, who will be able to offer their fans the ability to purchase badges during their live videos to stand out in the comments and show their support.

The idea to monetize using fan badges is not unique to Instagram. Other live streaming platforms, including Twitch and YouTube, have similar systems. Facebook Live also allows fans to purchase stars on live videos, as a virtual tipping mechanism.

During live videos, Instagram users will see three badge options to purchase, priced at $0.99, $1.99, or $4.99.

On Instagram Live, badges will not only call attention to the fans’ comments, they also unlock special features, Instagram says. This includes a placement on a creator’s list of badge holders and access to a special heart badge.

The badges and list make it easier for creators to quickly see which fans are supporting their efforts, and give them a shout-out, if desired.

Image Credits: Instagram

To kick off the roll out of badges, Instagram says it will also temporarily match creator earnings from badge purchases during live videos, starting in November. Creators @ronnebrown and @youngezee are among those who are testing badges.

The company says it’s not taking a revenue share at launch, but as it expands its test of badges it will explore revenue share in the future.

“Creators push culture forward. Many of them dedicate their life to this, and it’s so important to us that they have easy ways to make money from their content,” said Instagram COO Justin Osofsky, in a statement. “These are additional steps in our work to make Instagram the single best place for creators to tell their story, grow their audience, and make a living,” he added.

Additionally, Instagram today is expanding access to its IGTV ads test to more creators. This program, introduced this spring, allows creators to earn money by including ads alongside their videos. Today, creators keep at least 55% of that revenue, Instagram says.

The introduction of badges and IGTV ads was previously announced, with Instagram saying earlier this year that it would test the former with a small group of creators.

The changes follow what’s been a period of rapid growth on Instagram’s live video platform, as creators and fans sheltered at home during the coronavirus pandemic, which had cancelled live events, large meetups, concerts, and more.

During the pandemic’s start, for example, Instagram said Live creators saw a 70% increase in video views from Feb. to March, 2020. Facebook also reported monthly active user growth in Q2 (to 3.14B, up from 2.99B in Q1) that it said reflected increased engagement from consumers who were spending more time at home.

Who regulates social media?

Social media platforms have repeatedly found themselves in the United States government’s crosshairs over the last few years, as it has been progressively revealed just how much power they really wield, and to what purposes they’ve chosen to wield it. But unlike, say, a firearm or drug manufacturer, there is no designated authority who says what these platforms can and can’t do. So who regulates them? You might say everyone and no one.

Now, it must be made clear at the outset that these companies are by no means “unregulated,” in that no legal business in this country is unregulated. For instance, Facebook, certainly a social media company, received a record $5 billion fine last year for failing to comply with rules set by the FTC. But that wasn’t because the company violated social media regulations — there aren’t any.

Facebook and others are bound by the same rules that most companies must follow, such as generally agreed-upon definitions of fair business practices, truth in advertising, and so on. But industries like medicine, energy, alcohol, and automotive have additional rules, indeed entire agencies, specific to them; not so social media companies.

I say “social media” rather than “tech” because the latter is much too broad a concept to have a single regulator. Although Google and Amazon (and Airbnb, and Uber, and so on) need new regulation as well, they may require a different specialist, like an algorithmic accountability office or online retail antitrust commission. (Inasmuch as tech companies act within regulated industries, such as Google in broadband, they are already regulated as such.)

Social media can be roughly defined as platforms where people sign up to communicate and share messages and media, and that’s quite broad enough already without adding in things like ad marketplaces, competition quashing and other serious issues.

Who, then, regulates these social media companies? For the purposes of the U.S., there are four main directions from which meaningful limitations or policing may emerge, but each one has serious limitations, and none was actually created for the task.

1. Federal regulators

Image Credits: Andrew Harrer/Bloomberg

The Federal Communications Commission and Federal Trade Commission are what people tend to think of when “social media” and “regulation” are used in a sentence together. But one is a specialist — not the right kind, unfortunately — and the other a generalist.

The FCC, unsurprisingly, is primarily concerned with communication, but due to the laws that created it and grant it authority, it has almost no authority over what is being communicated. The sabotage of net neutrality has complicated this somewhat, but even the faction of the Commission dedicated to the backwards stance adopted during this administration has not argued that the messages and media you post are subject to their authority. They have indeed called for regulation of social media and big tech — but are for the most part unwilling and unable to do so themselves.

The Commission’s mandate is explicitly the cultivation of a robust and equitable communications infrastructure, which these days primarily means fixed and mobile broadband (though increasingly satellite services as well). The applications and businesses that use that broadband, though they may be affected by the FCC’s decisions, are generally speaking none of the agency’s business, and it has repeatedly said so.

The only potentially relevant exception is the much-discussed Section 230 of the Communications Decency Act (an amendment to the sprawling Communications Act), which waives liability for companies when illegal content is posted to their platforms, as long as those companies make a “good faith” effort to remove it in accordance with the law.

But this part of the law doesn’t actually grant the FCC authority over those companies or define good faith, and there’s an enormous risk of stepping into unconstitutional territory, because a government agency telling a company what content it must keep up or take down runs full speed into the First Amendment. That’s why although many think Section 230 ought to be revisited, few take Trump’s feeble executive actions along these lines seriously.

The agency did announce that it will be reviewing the prevailing interpretation of Section 230, but until there is some kind of established statutory authority or Congress-mandated mission for the FCC to look into social media companies, it simply can’t.

The FTC is a different story. As watchdog over business practices at large, it has a similar responsibility towards Twitter as it does towards Nabisco. It doesn’t have rules about what a social media company can or can’t do any more than it has rules about how many flavors of Cheez-It there should be. (There are industry-specific “guidelines” but these are more advisory about how general rules have been interpreted.)

On the other hand, the FTC is very much the force that comes into play should Facebook misrepresent how it shares user data, or Nabisco overstate the amount of real cheese in its crackers. The agency’s most relevant responsibility to the social media world is that of enforcing the truthfulness of material claims.

You can thank the FTC for the now-familiar, carefully worded statements that avoid any real claims or responsibilities: “We take security very seriously” and “we think we have the best method” and that sort of thing — so pretty much everything that Mark Zuckerberg says. Companies and executives are trained to do this to avoid tangling with the FTC: “Taking security seriously” isn’t enforceable, but saying “user data is never shared” certainly is.

In some cases this can still have an effect, as in the $5 billion fine recently dropped into Facebook’s lap (though for many reasons that was actually not very consequential). It’s important to understand that the fine was for breaking binding promises the company had made — not for violating some kind of social-media-specific regulations, because again, there really aren’t any.

The last point worth noting is that the FTC is a reactive agency. Although it certainly has guidelines on the limits of legal behavior, it doesn’t have rules that, when violated, result in a statutory fine or charges. Instead, complaints filter up through its many reporting systems and it builds a case against a company, often with the help of the Justice Department. That makes it slow to respond compared with the lightning-fast tech industry, and the companies or victims involved may have moved beyond the point of crisis while a complaint is being formalized there. Equifax’s historic breach and minimal consequences are an instructive case.

So: While the FCC and FTC do provide important guardrails for the social media industry, it would not be accurate to say they are its regulators.

2. State legislators

States are increasingly battlegrounds for the frontiers of tech, including social media companies. This is likely due to frustration with partisan gridlock in Congress that has left serious problems unaddressed for years or decades. Two good examples of states that lost their patience are California’s new privacy rules and Illinois’s Biometric Information Privacy Act (BIPA).

The California Consumer Privacy Act (CCPA) was arguably born out of the ashes of other attempts at the national level to make companies more transparent about their data collection policies, like the ill-fated Broadband Privacy Act.

Californian officials decided that if the feds weren’t going to step up, there was no reason the state shouldn’t at least look after its own. By convention, state laws that offer consumer protections are generally given priority over weaker federal laws — this is so a state isn’t prohibited from taking measures for its citizens’ safety while the slower machinery of Congress grinds along.

The resulting law, very briefly stated, creates formal requirements for disclosures of data collection, methods for opting out of them, and also grants authority for enforcing those laws. The rules may seem like common sense when you read them, but they’re pretty far out there compared to the relative freedom tech and social media companies enjoyed previously. Unsurprisingly, they have vocally opposed the CCPA.

BIPA has a somewhat similar origin, in that a particularly far-sighted state legislature created a set of rules in 2008 limiting companies’ collection and use of biometric data like fingerprints and facial recognition. It has proven to be a huge thorn in the side of Facebook, Microsoft, Amazon, Google, and others that have taken for granted the ability to analyze a user’s biological metrics and use them for pretty much whatever they want.

Many lawsuits have been filed alleging violations of BIPA, and while few have produced notable punishments like this one, they have been invaluable in forcing the companies to admit on the record exactly what they’re doing, and how. Sometimes it’s quite surprising! The optics are terrible, and tech companies have lobbied (fortunately, with little success) to have the law replaced or weakened.

What’s crucially important about both of these laws is that they force companies to, in essence, choose between universally meeting a new, higher standard for something like privacy, or establishing a tiered system whereby some users get more privacy than others. The thing about the latter choice is that once people learn that users in Illinois and California are getting “special treatment,” they start asking why Mainers or Puerto Ricans aren’t getting it as well.

In this way state laws exert outsize influence, forcing companies to make changes nationally or globally because of decisions that technically only apply to a small subset of their users. You may think of these states as being activists (especially if their attorneys general are proactive), or simply ahead of the curve, but either way they are making their mark.

This is not ideal, however, because taken to the extreme, it produces a patchwork of state laws created by local authorities that may conflict with one another or embody different priorities. That, at least, is the doomsday scenario predicted almost universally by companies in a position to lose out.

State laws act as a test bed for new policies, but tend to only emerge when movement at the federal level is too slow. Although they may hit the bullseye now and again, like with BIPA, it would be unwise to rely on a single state or any combination among them to miraculously produce, like so many simian legislators banging on typewriters, a comprehensive regulatory structure for social media. Unfortunately, that leads us to Congress.

3. Congress

Image: Bryce Durbin/TechCrunch

What can be said about the ineffectiveness of Congress that has not already been said, again and again? Even in the best of times few would trust these people to establish reasonable, clear rules that reflect reality. Congress simply is not the right tool for the job, because of its stubborn and willful ignorance on almost all issues of technology and social media, its countless conflicts of interest, and its painful sluggishness — sorry, deliberation — in actually writing and passing any bills, let alone good ones.

Companies oppose state laws like the CCPA while calling for national rules because they know that it will take forever and there’s more opportunity to get their finger in the pie before it’s baked. National rules, in addition to coming far too late, are also much more likely to be watered down and riddled with loopholes by industry lobbyists. (This is indicative of the influence these companies wield over their own regulation, but it’s hardly official.)

But Congress isn’t a total loss. In moments of clarity it has established expert agencies like those in the first item, which have Congressional oversight but are otherwise independent, empowered to make rules, and kept technically — if somewhat limply — nonpartisan.

Unfortunately, the question of social media regulation is too recent for Congress to have empowered a specialist agency to address it. Social media companies don’t fit neatly into any of the categories that existing specialists regulate, something that is plainly evident by the present attempt to stretch Section 230 beyond the breaking point just to put someone on the beat.

Laws at the federal level are not to be relied on for regulation of this fast-moving industry, as the current state of things shows more than adequately. And until a dedicated expert agency or something like it is formed, it’s unlikely that anything spawned on Capitol Hill will do much to hold back the Facebooks of the world.

4. European regulators

Of course, however central it considers itself to be, the U.S. is only a part of a global ecosystem of various and shifting priorities, leaders, and legal systems. But in a sort of inside-out version of state laws punching above their weight, laws that affect a huge part of the world except the U.S. can still have a major effect on how companies operate here.

The most obvious example is the General Data Protection Regulation or GDPR, a set of rules, or rather an augmentation of existing rules dating to 1995, that has begun to change the way some social media companies do business.

But this is only the latest step in a fantastically complex, decades-long process that must harmonize the national laws and needs of the E.U. member states in order to provide the clout it needs to compel adherence to the international rules. Red tape seldom bothers tech companies, which rely on bottomless pockets to plow through or in-born agility to dance away.

Although the tortoise may eventually in this case overtake the hare in some ways, at present the GDPR’s primary hindrance is not merely the complexity of its rules, but the lack of decisive enforcement of them. Each country’s Data Protection Agency acts as a node in a network that must reach consensus in order to bring the hammer down, a process that grinds slow and exceedingly fine.

When the blow finally lands, though, it may be a heavy one, outlawing entire practices at an industry-wide level rather than simply extracting pecuniary penalties these immensely rich entities can shrug off. There is space for optimism as cases escalate and involve heavy hitters like antitrust laws in efforts that grow to encompass the entire “big tech” ecosystem.

The rich tapestry of European regulations is really too complex of a topic to address here in the detail it deserves, and also reaches beyond the question of who exactly regulates social media. Europe’s role in that question of, if you will, speaking slowly and carrying a big stick promises to produce results on a grand scale, but for the purposes of this article it cannot really be considered an effective policing body.

(TechCrunch’s E.U. regulatory maven Natasha Lomas contributed to this section.)

5. No one? Really?

As you can see, the regulatory ecosystem in which social media swims is more or less free of predators. The most dangerous are the small, agile ones — state legislatures — that can take a bite before the platforms have had a chance to brace for it. The other regulators are either too slow, too compromised, or too involved (or some combination of the three) to pose a real threat. For this reason it may be necessary to introduce a new, but familiar, species: the expert agency.

As noted above, the FCC is the most familiar example of one of these, though its role is so fragmented that one could be forgiven for forgetting that it was originally created to ensure the integrity of the telephone and telegraph system. Why, then, is it the expert agency for orbital debris? That’s a story for another time.

Capitol building

Image Credit: Bryce Durbin/TechCrunch

What is clearly needed is the establishment of an independent expert agency or commission in the U.S., at the federal level, that has statutory authority to create and enforce rules pertaining to the handling of consumer data by social media platforms.

Like the FCC (and somewhat like the E.U.’s DPAs), this should be officially nonpartisan — though like the FCC it will almost certainly vacillate in its allegiance — and should have specific mandates on what it can and can’t do. For instance, it would be improper and unconstitutional for such an agency to say this or that topic of speech should be disallowed from Facebook or Twitter. But it would be able to say that companies need to have a reasonable and accessible definition of the speech they forbid, and likewise a process for auditing and contesting takedowns. (The details of how such an agency would be formed and shaped are well beyond the scope of this article.)

Even the likes of the FAA lags behind industry changes, such as the upsurge in drones that necessitated a hasty revisit of existing rules, or the huge increase in commercial space launches. But that’s a feature, not a bug. These agencies are designed not to act unilaterally based on the wisdom and experience of their leaders, but are required to perform or solicit research, consult with the public and industry alike, and create evidence-based policies involving, or at least addressing, a minimum of sufficiently objective data.

Sure, that didn’t really work with net neutrality, but I think you’ll find that industries have been unwilling to capitalize on this temporary abdication of authority by the FCC because they see that the Commission’s current makeup is fighting a losing battle against voluminous evidence, public opinion, and common sense. They see the writing on the wall and understand that under this system it can no longer be ignored.

With an analogous authority for social media, the evidence could be made public, the intentions for regulation plain, and the shareholders — that is to say, users — could make their opinions known in a public forum that isn’t owned and operated by the very companies they aim to rein in.

Without such an authority these companies and their activities — the scope of which we have only the faintest clue — will remain in a blissful limbo, picking and choosing by which rules to abide and against which to fulminate and lobby. We must help them decide, and weigh our own priorities against theirs. They have already abused the naive trust of their users across the globe — perhaps it’s time we asked them to trust us for once.

Crypto-driven marketplace Zora raises $2M to build a sustainable creator economy

Dee Goens and Jacob Horne have both the exact and precisely opposite background that you’d expect to see from two people building a way for creators to build a sustainable economy for their followers to participate in. Coinbase, crypto hack projects at university, KPMG, Merrill Lynch. But where’s the art?

“Believe it or not, I used to have dreams of being a rapper,” laughs Goens. “There’s a Soundcloud out there somewhere. With that passion you explore the inner workings of the music industry. I would excitedly ask industry friends about the advance and 360 deal models only to realize they were completely broken.”

“And, while many may be well intentioned, these deal structures exploit artistry. In many cases taking the majority of an artist’s ownership. I grew curious why artists were unable to resource themselves from their community in an impactful way — but instead, were forced to seek out potentially predatory relationships. To me, this was bullshit.”

Horne says that he’d always wanted to create a fashion brand. 

“I always thought a fashion brand would be something I’d do after crypto,” he tells me. “I love crypto but it felt overly focused on just finance and felt like it was missing something. That’s when I started to play with the idea of combining these two passions and starting Saint Fame.”

While at Coinbase, Horne hacked on Saint Fame, a side project that leveraged some of the ideas on display in Zora. It was a marketplace that allowed people to sell and trade items with cryptocurrency, buying intermediate variable-value tokens redeemable for future goods. 

“I realized that culture itself was shaped and built upon an old financial system that is systemically skewed against artists and communities,” says Horne. “The operating system of ownership was built in the 1600s with the Dutch East India Trading Company and early Nation States. Like what the fuck is up with that?” 

“We have the internet now, we can literally create and share information to billions of people all at once, and the ownership system is the same as when people had to get on a boat for 6 months to send a letter. It’s time for an upgrade. Any community on the internet should be able to come together, with capital, and work towards any shared vision. That starts with empowering creators and artists to create and own the culture they’re creating. In the long term this moves to internet communities taking on societal endeavours.”

The answer that they’re working on is called Zora. It’s a marketplace with two main components but one philosophy: sustainable economics for creators. 

All too often creators reap the rewards for their work only once, but the secondary economy continues to generate value out of their reach. Think of an artist, as an example, who creates a piece and sells it for market value. That’s great, but thereafter, every ounce of work that the artist puts into future work, into building a name and a brand and a community for themselves, puts additional value into that piece. The artist never sees a dime from that, relying instead on the value of future releases to pay dividends on the work.

That’s basically the way it has always worked. I have a little background in this as I used to exhibit and was involved in running a gallery and my father is a fine artist. If he sells a painting today for $300, gets a lot better, more popular and more valued over time, the owner of that painting may re-sell it for hundreds or thousands more. He will never see a dime of that. And god forbid that an artist like him gets too locked into the gallery system which slices off enormous chunks of the value of a piece for a square of wall space and the marketing cachet of a curator or storefront. 

The same story can be told across the recording industry, fashion, sports and even social media. Lots of middle-people and lots of vigs to pay. And, unsurprisingly, the same creators of color that drive so much of The Culture are the biggest losers hands down. 

The primary Zora product is a market that allows creators or artists to launch products and then continue to participate in their secondary market value.

Here’s how the Zora team explains it:

On Zora, creators have the ability to set two prices: start price and max price. As community members buy and sell a token, it moves the price up or down. This makes the price dynamic as it opens price discovery on the items by the market. When people buy the token it moves the price closer to its maximum. When they sell, it moves closer to its minimum. 

For an excited community like Jeff [Staple’s], this new dynamic price can cause a quick increase in the value of his sneakers. As a creator, they capture the value from selling on a price curve as well as getting a take on trading fees from the market which they now own. What used to trade on StockX is now about to trade on a creator owned market.
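
Zora hasn’t published the exact pricing formula, but a minimal sketch of this kind of dynamic pricing (assuming a simple linear curve between the creator-set start and max prices; the DynamicMarket class and all numbers below are made up for illustration) could look like this:

```python
# Minimal sketch of a dynamic price that moves between a creator-set start
# price and max price as items are bought and sold. The linear curve and all
# numbers here are hypothetical; Zora has not published its actual formula.

class DynamicMarket:
    def __init__(self, start_price: float, max_price: float, supply: int):
        self.start_price = start_price
        self.max_price = max_price
        self.supply = supply   # total items (or tokens) in the drop
        self.sold = 0          # items currently held by buyers

    def price(self) -> float:
        # Price rises from start_price toward max_price as more of the
        # supply is bought, and falls back as holders sell.
        fraction_sold = self.sold / self.supply
        return self.start_price + (self.max_price - self.start_price) * fraction_sold

    def buy(self) -> float:
        paid = self.price()
        self.sold += 1         # buying pushes the price up
        return paid

    def sell(self) -> float:
        self.sold -= 1         # selling pushes the price back down
        return self.price()


# Hypothetical run: a 30-item drop priced between $30 and $300.
market = DynamicMarket(start_price=30.0, max_price=300.0, supply=30)
for _ in range(20):
    market.buy()
print(round(market.price(), 2))  # 210.0 after 20 of 30 items are bought
```

In practice the curve shape, trading fees, and settlement would live in the market’s contracts rather than in application code like this, but the buy-up, sell-down mechanic is the core idea.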

There have been some early successes. Designer and marketer Jeff Staple launched a run of 30 Coca-Cola x Staple SB Dunk customs by Reverseland, and their value is trending up around 234% since release. A Benji Taylor x Kevin Doan vinyl figure is up 210%.

I have seen some other stabs at this. When he was still at StockX, founder Josh Luber launched their Initial Product Offerings, a Blind Dutch Auction system that allowed the market to set a price for an item, with some of the cut of pricing above market going back to the manufacturer or brand making the offering. The focus there was brands vs. individual creators (though they did launch with a Ben Baller slide). Allowing brands to tap into secondary market value for limited goods is a lot less of a revolution play, but the thesis is similar. I thought that was a good idea then, and I like it even better when it’s being used to democratize rather than maximize returns.

Side note: I love that this team is messing around with interesting ideas like dogfooding their own marketplace with the value of being in their own TestFlight group. I’m sort of like, is that allowed, but at the same time it’s dope and I’ve never seen anything like it. 

Zora was founded in May of 2020 (right in the middle of this current panny-palooza). The team is Goens (Creators and Community), Horne (Product), Slava Kim (Design), Dai Hovey (Engineering), Ethan Daya (Engineering) and Tyson Batistella (Engineering). 

Zora has raised a $2M seed round led by Kindred Ventures with participation from Trevor McFedries of Brud, Alice Lloyd George, Jeff Staple, Coinbase Ventures and others.

Tokenized community

But this idea that physical goods or even digitally packaged works have to exist as finite containers of value is not a given either. Goens and Horne are pushing to challenge that too with the first big new product for Zora: community tokens. Built on Ethereum, the $RAC token is the first of its kind from Zora. André Allen Anjos, stage name RAC, is a Portuguese-American musician and producer who makes remixes that stream on the web and original music, and has had commercial work featured in major brand ads.

Though he is popular and has a following in the tens of thousands, RAC is not a social media superpower. The token distribution and subsequent activity in trades and sales is purely driven by the buy-in that his fans feel. This is a key learning for a lot of players in this new economy: raw numbers are the social media equivalent of a billboard that people drive by. It may get you eyeballs, but it doesn’t guarantee action. The modern creator is living in a house with their fans, offering them access and interacting via Discord and Snap and comments. 

But those houses are all other people’s houses, which leads into the reason that Zora is launching a token.

The token drop serves multiple purposes. 

  • It unites fans across multiple silos. Whether they’re on Insta, TikTok, Spotify or Snapchat, they can all earn tokens. That token serves as a unifying community unit of value that they all understand and pivot around. It’s a way to own a finite binary “atom” of an artist’s digital being.
  • It creates a pool of value that an artist can own and distribute themselves. Currently you cannot buy $RAC directly. You can only earn it. Some of that is retroactive for loyal supporters. If, for instance, you followed RAC on Bandcamp dating back to 2009, you’ll get some of a pool of 25,000 RAC. Bought a bit of RAC merch? You get some credit in tokens too. Future RAC distributions will be given to Patreon supporters, merch purchasers, etc.
  • The value stays in the artist’s universe, rather than being spun out into currency. It serves as a way for the artist to incentivize, reward and energize their followers. RAC fans who buy his mixtape get tokens, and they can redeem them for purchases of further merch (a rough sketch of this earn-and-redeem loop follows this list).
  • It allows more flexibility for creators whose work doesn’t fall so neatly into package-able categories. Performance art, activism, bite-sized entertainment. These are not easy to ‘drop’ for money. But if you have a circulating token that grows in value as you grow your audience, there is definitely something there. 
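
Zora hasn’t detailed how the accounting works under the hood, but the earn-and-redeem loop described in the list above can be sketched roughly like this (the earning rules, amounts, and function names are all hypothetical, for illustration only):

```python
# Rough sketch of the earn-and-redeem loop for a community token.
# The earning rules, pool size, and redemption costs below are hypothetical
# illustrations only, not Zora's or RAC's actual distribution logic.

from collections import defaultdict

RETRO_POOL = 25_000                      # retroactive pool split among early supporters
EARN_RULES = {"bought_merch": 50,        # hypothetical token rewards per action
              "bought_mixtape": 25}

balances = defaultdict(float)            # fan -> token balance

def distribute_retro_pool(early_supporters):
    """Split a retroactive pool evenly among long-time fans (e.g. early Bandcamp followers)."""
    share = RETRO_POOL / len(early_supporters)
    for fan in early_supporters:
        balances[fan] += share

def earn(fan, action):
    """Fans earn tokens by supporting the artist; tokens can't be bought directly."""
    balances[fan] += EARN_RULES.get(action, 0)

def redeem(fan, cost):
    """Spend earned tokens on merch or other perks."""
    if balances[fan] < cost:
        raise ValueError("not enough tokens")
    balances[fan] -= cost

# Hypothetical usage
distribute_retro_pool(["fan_a", "fan_b"])
earn("fan_a", "bought_mixtape")
redeem("fan_a", 100)
print(balances["fan_a"])                 # 12425.0
```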

The future of Zora most immediately involves spinning up a self-service version of the marketplace, allowing creators and entrepreneurs to launch their products without a direct partnership and onboarding. There are many, many uncertainties here and the team has a lot of challenges ahead on the traction and messaging front. But as mentioned, some early releases have shown promise, and the philosophy is sound and much needed. As the creator universe/passion economy/what you call it depends on how old you are/fandom merchant wave rises there is definitely an opportunity to rethink how the value of their contributions are assigned and whether there is a way to turn the long-term labor of building a community into long-term value. 

The last traded price of RAC’s tape, BOY, by the way? $3,713, up 18,465%. 

With ‘absurd’ timing, FCC announces intention to revisit Section 230

FCC Chairman Ajit Pai has announced his intention to pursue a reform of Section 230 of the Communications Act, which among other things limits the liability of internet platforms for content they host. Commissioner Rosenworcel described the timing — immediately after Conservative outrage at Twitter and Facebook limiting the reach of an article relating to Hunter Biden — as “absurd.” But it’s not necessarily the crackdown the Trump administration clearly desires.

In a statement, Chairman Pai explained that “members of all three branches of the federal government have expressed serious concerns about the prevailing interpretation of the immunity set forth in Section 230,” and that there is broad support for changing the law — in fact there are already several bills under consideration that would do so.

At issue is the legal protections for platforms when they decide what content to allow and what to block. Some say they are clearly protected by the First Amendment (this is how it is currently interpreted), while others assert that some of those choices amount to violations of users’ right to free speech.

Though Pai does not mention specific recent circumstances in which internet platforms have been accused of having partisan bias in one direction or the other, it is difficult to imagine they — and the constant needling of the White House — did not factor into the decision.

A long road with an ‘unfortunate detour’

In fact the push to reform Section 230 has been progressing for years, with the limitations of the law and the FCC’s interpretation of its pertinent duties discussed candidly by the very people who wrote the original bill and thus have considerable insight into its intentions and shortcomings.

In June Commissioner Starks disparaged pressure from the White House to revisit the FCC’s interpretation of the law, saying that the First Amendment protections are clear and that Trump’s executive order “seems inconsistent with those core principles.” That said, he proposed that the FCC take the request to reconsider the law seriously.

“And if, as I suspect it ultimately will, the petition fails at a legal question of authority,” he said, “I think we should say it loud and clear, and close the book on this unfortunate detour. Let us avoid an upcoming election season that can use a pending proceeding to, in my estimation, intimidate private parties.”

The latter part of his warning seems especially prescient given the choice by the Chairman to open proceedings less than three weeks before the election, and the day after Twitter and Facebook exercised their authority as private platforms to restrict the distribution of articles which, as Twitter belatedly explained, clearly broke guidelines on publishing private information. (The New York Post article had screenshots of unredacted documents with what appeared to be Hunter Biden’s personal email and phone number, among other things.)

Commissioner Rosenworcel did not mince words, saying “The timing of this effort is absurd. The FCC has no business being the President’s speech police.” Starks echoed her, saying “We’re in the midst of an election… the FCC shouldn’t do the President’s bidding here.” (Trump has repeatedly called for the “repeal” of Section 230, which is just part of a much larger and important set of laws.)

Considering the timing and the utter impossibility of reaching any kind of meaningful conclusion before the election — rulemaking is at a minimum a months-long process — it is hard to see Pai’s announcement as anything but a pointed warning to internet platforms. Platforms which, it must be stressed, the FCC has essentially no regulatory powers over.

Foregone conclusion

The Chairman telegraphed his desired outcome clearly in the announcement, saying “Many advance an overly broad interpretation that in some cases shields social media companies from consumer protection laws in a way that has no basis in the text of Section 230… Social media companies have a First Amendment right to free speech. But they do not have a First Amendment right to a special immunity denied to other media outlets, such as newspapers and broadcasters.”

Whether the FCC has anything to do with regulating how these companies exercise that right remains to be seen, but it’s clear that Pai thinks the agency should, and doesn’t. With the makeup of the FCC currently 3:2 in favor of the Conservative faction, it may be said that this rulemaking is a foregone conclusion; the net neutrality debacle showed that these Commissioners are willing to ignore and twist facts in order to justify the end they choose, and there’s no reason to think this rulemaking will be any different.

The process will be just as drawn out and public as previous ones, however, which means that a cavalcade of comments may yet again indicate that the FCC ignores public opinion, experts, and lawmakers alike in its decision to invent or eliminate its roles as it sees fit. Be ready to share your feedback with the FCC, but no need to fire up the outrage just yet — chances are this rulemaking won’t even exist in draft form until after the election, at which point there may be something of a change in the urgency of this effort to reinterpret the law to the White House’s liking.

Pew: Most prolific Twitter users tend to be Democrats, but majority of users still rarely tweet

A new study from Pew Research Center, released today, digs into the different ways that U.S. Democrats and Republicans use Twitter. Based on data collected between Nov. 11, 2019 and Sept. 14, 2020, the study finds that members of both parties tweet fairly infrequently, but a majority of Twitter’s most prolific users tend to swing left.

The report updates Pew’s 2019 study with similar findings. At that time, Pew found that 10% of U.S. adults on Twitter were responsible for 80% of all tweets from U.S. adults.

Today, those figures have changed. During the study period, the most active 10% of users produced 92% of all tweets by U.S. adults.

And of these highly active users, 69% identify as Democrats or Democratic-leaning independents.

In addition, the 10% most active Democrats typically produce roughly twice the number of tweets per month (157) compared with the most active Republicans (79).

Image Credits: Pew Research Center

These highly-active users don’t represent how most Twitter users tweet, however.

Regardless of party affiliation, the majority of Twitter users post very infrequently, Pew found.

The median U.S. adult Twitter user posted just once per month during the time of the study. The median Democrat posts just once per month, while the median Republican posts even less often than that.

The typical adult also has very few followers, with the median Democrat having 32 followers while the median Republican has 21. Democrats, however, tend to follow more accounts than Republicans do, at 126 vs. 71, respectively.

Image Credits: Pew Research Center

The new study additionally examined other differences in how members of the two parties use the platform, beyond frequency of tweeting.

For starters, it found 60% of the Democrats on Twitter would describe themselves as very or somewhat liberal, compared with 43% of Democrats who don’t use Twitter. Self-identified conservatives on Twitter vs. conservatives not on the platform had closer shares, at 60% and 62%, respectively.

Pew also found that the two Twitter accounts followed by the largest share of U.S. adults were those belonging to former President Barack Obama (@BarackObama) and President Donald Trump (@RealDonaldTrump).

Not surprisingly, more Democrats followed Obama — 42% of Democrats did, vs. just 12% of Republicans. Trump, meanwhile, was followed by 35% of Republicans and just 13% of Democrats.

Other top political accounts saw similar trends. For instance, Rep. Alexandria Ocasio-Cortez (@AOC) is followed by 16% of Democrats and 3% of Republicans. Fox News personalities Tucker Carlson (@TuckerCarlson) and Sean Hannity (@seanhannity), meanwhile, are both followed by 12% of Republicans but just 1% of Democrats.


This is perhaps a more important point than Pew’s study indicates, as it demonstrates that even though Twitter’s original goal was to build a “public town square” of sorts, where conversations could take place in the open, Twitter users have built the same isolated bubbles around themselves as they have elsewhere on social media.

Because Twitter’s main timeline only shows tweets and retweets from people you follow, users are only hearing their side of the conversation amplified back to them.

This problem is not unique to Twitter, of course. Facebook, for years, has been heavily criticized for delivering two different versions of reality to its users. An article from The WSJ in 2016 demonstrated how stark this contrast could be, when it showed a “blue” feed and “red” feed, side-by-side.

The problem has been exacerbated in recent months, as users from both parties exit mainstream platforms, like Twitter, and isolate themselves even more. On the conservative side, users fled to free speech-favoring and fact check-eschewing platforms like Gab and Parler. The new social network Telepath, on the other hand, favors left-leaning users by aggressively blocking misinformation — often that from conservative news outlets — and banning identity-based attacks.

One other area Pew’s new study examined was the two parties’ use of hashtags on Twitter.

It found that no one hashtag was used by more than 5% of U.S. adults on Twitter during the study period. But there was a bigger difference when it came to the use of the #BlackLivesMatter hashtag, which was tweeted by 4% of Democrats on Twitter and just 1% of Republicans.

Other common hashtags used across both parties included #covid19, #coronavirus, #mytwitteranniversary, #newprofilepic, #sweepstakes, #contest, and #giveaway.

Image Credits: Pew Research Center

It’s somewhat concerning, too, that hashtags were used in such a small percentage of tweets.

While their use has fallen out of favor somewhat — using a hashtag can seem “uncool” — the idea with hashtags was to allow users a quick way to tap into the global conversation around a given topic. But this decline in user adoption indicates there are now fewer tweets that can connect users to an expanded array of views.

Twitter today somewhat addresses this problem through its “Explore” section, highlighting trends, and users can investigate tweets using its keyword search tools. But if Twitter really wants to burst users’ bubbles, it may need to develop a new product — one that offers a different way to connect users to the variety of conversations taking place around a term, whether hashtagged or not.


Snapchat launches its TikTok rival, Sounds on Snapchat

Snapchat this summer announced it would soon release a new music-powered feature that would allow users to set their Snaps to music. Today, the company made good on that promise with the launch of “Sounds on Snapchat” on iOS, a feature that lets users enhance their Snaps with music from a curated catalog of both emerging and established artists.

The music can be added to Snaps either pre- or post-capture, then shared without any limitations. You can post it to your Story or share directly with friends, as you choose.

At launch, the Snapchat music catalog offers “millions” of licensed songs from Snap’s music industry partners, the company says.

When users receive a Snap with Sounds, they can then swipe up to view the album art, the song title, and the name of the artist. There’s also a “Play This Song” link that lets you listen to the full song on your preferred streaming platform, including Spotify, Apple Music and SoundCloud.

This differentiates Snapchat’s music feature from rival TikTok, where a tap on the “sound” takes users to a page in the app that shows other videos using the same music clip. Only some of these pages also offer a link to play the full song, however.

To kick off the launch of the new Snapchat music feature, Justin Bieber and benny blanco’s new song “Lonely” will be offered as an exclusive in Snapchat’s Featured Sounds list today.

“Music makes video creations and communication more expressive, and offers a personal way to recommend music to your closest friends,” notes the company, in an announcement about the feature’s launch.

Snap had said in August it would begin testing the new music feature and detailed the deals that made the addition possible.

To power Sounds on Snapchat, the company forged multi-year agreements with major and independent publishers and labels, including Warner Music Group, Merlin (including their independent label members), NMPA, Universal Music Publishing Group, Warner Chappell Music, Kobalt, and BMG Music Publishing.

The move to introduce a music feature is meant to counter the growing threat of the ByteDance-owned TikTok app, which has popularized short-form video sharing with posts set to music from a large catalog.

Though TikTok’s future in the U.S. remains uncertain due to the ever-changing nature of the Trump administration’s TikTok ban (and an election that could upset those plans), it still remains one of the top U.S. apps, with around 100 million monthly active U.S. users as of August. (TikTok is currently engaged in a lawsuit to challenge its ban, so the app remains live today.)

Social media companies have capitalized on the chaos surrounding a possible TikTok U.S. exit to promote their alternatives, like Triller, Dubsmash, Byte, and others, including, of course, Instagram Reels.

Snapchat, meanwhile, touts its traction with a younger user base as its new music feature goes to launch.

In the U.S., Snapchat now reaches 90% of all 13-24 year-olds, which the company notes is more than Facebook, Instagram, and Messenger combined. It also reaches 75% of all 13-34 year-olds and, on average, more than 4 billion Snaps are created every day.

The feature is live now on iOS to start.

In other Snapchat music news, the company has partnered with Spotify to launch Spotify’s first Augmented Reality Portal Lens on Snapchat. The Lens allows users to experience a Latinx art gallery, in celebration of Latinx Heritage Month. Snapchat users open the Lens in World view to see art from Orly Anan, Cristina Martinez, Luisa Salas, Pedro Nekoi, and D’Ana Nunez. The Lens will also raise awareness for Spotify’s Latin Hub in its own app.

Twitter hack probe leads to call for cybersecurity rules for social media giants

An investigation into this summer’s Twitter hack by the New York State Department of Financial Services (NYSDFS) has ended with a stinging rebuke for how easily Twitter let itself be duped by a “simple” social engineering technique — and with a wider call for key social media platforms to be regulated on security.

In the report, the NYSDFS points, by way of contrasting example, to how quickly regulated cryptocurrency companies acted to prevent the Twitter hackers scamming even more people — arguing this demonstrates that tech innovation and regulation aren’t mutually exclusive.

Its point is that the biggest social media platforms have huge societal power (with all the associated consumer risk) but no regulated responsibilities to protect users.

The report concludes this is a problem U.S. lawmakers need to get on and tackle stat — recommending that an oversight council be established (to “designate systemically important social media companies”) and an “appropriate” regulator appointed to ‘monitor and supervise’ the security practices of mainstream social media platforms.

“Social media companies have evolved into an indispensable means of communications: more than half of Americans use social media to get news, and connect with colleagues, family, and friends. This evolution calls for a regulatory regime that reflects social media as critical infrastructure,” the NYSDFS writes, before going on to point out there is still “no dedicated state or federal regulator empowered to ensure adequate cybersecurity practices to prevent fraud, disinformation, and other systemic threats to social media giants”.

“The Twitter Hack demonstrates, more than anything, the risk to society when systemically important institutions are left to regulate themselves,” it adds. “Protecting systemically important social media against misuse is crucial for all of us — consumers, voters, government, and industry. The time for government action is now.”

We’ve reached out to Twitter for comment on the report.

Among the key findings from the Department’s investigation are that the hackers broke into Twitter’s systems by calling employees and claiming to be from Twitter’s IT department — through which simple social engineering method they were able to trick four employees into handing over their log-in credentials. From there they were able to access the Twitter accounts of high profile politicians, celebrities, and entrepreneurs, including Barack Obama, Kim Kardashian West, Jeff Bezos, Elon Musk, and a number of cryptocurrency companies — using the hijacked accounts to tweet out a crypto scam to millions of users.

Twitter has previously confirmed that a “phone spear phishing” attack was used to gain credentials.

Per the report, the hackers’ “double your bitcoin” scam messages, which contained links to make a payment in bitcoins, enabled them to steal more than $118,000 worth of bitcoins from Twitter users.

A considerably larger sum was prevented from being stolen, though, as a result of swift action taken by regulated crypto companies — namely Coinbase, Square, Gemini Trust Company and Bitstamp — which the Department said blocked scores of attempted transfers by the fraudsters.

“This swift action blocked over 6,000 attempted transfers worth approximately $1.5 million to the Hackers’ bitcoin addresses,” the report notes.

Twitter is also called out for not having a cybersecurity chief in post at the time of the hack — after failing to replace Michael Coates, who left in March. (Last month it announced Rinki Sethi had been hired as CISO).

“Despite being a global social media platform boasting over 330 million average monthly users in 2019, Twitter lacked adequate cybersecurity protection,” the NYSDFS writes. “At the time of the attack, Twitter did not have a chief information security officer, adequate access controls and identity management, and adequate security monitoring — some of the core measures required by the Department’s first-in-the-nation cybersecurity regulation.”

European Union data protection law already bakes in security requirements as part of a comprehensive privacy and security framework (with major penalties possible for security breaches). However, an investigation by the Irish DPC of a 2018 Twitter security incident has yet to conclude, after a draft decision failed to gain the backing of the other EU data watchdogs this August — triggering a further delay to the pan-EU regulatory process.

WordPress can now turn blog posts into tweetstorms automatically

Earlier this year, WordPress.com introduced an easier way to post your Twitter threads, also known as tweetstorms, to your blog with the introduction of an “unroll” option for Twitter embeds. Today, the company is addressing the flip side of tweetstorm publication — it’s making it possible to turn your existing WordPress blog post into a tweetstorm with just a couple of clicks.

The new feature will allow you to tweet out every word of your post, as well as the accompanying images and videos, the company says. These will be automatically inserted into the thread where they belong alongside your text.

To use the tweetstorm feature, a WordPress user will first click on the Jetpack icon on the top right of the page, then connect their Twitter account to their WordPress site, if they haven’t done so already.

Image Credits: WordPress.com


The option also supports multiple Twitter accounts, if you want to post your tweetstorms in several places.

Once Twitter is connected, you’ll select the account or accounts where you want to tweet, then choose the newly added option to share the post as a Twitter thread instead of a single post with a link.

Image Credits: WordPress.com

In the box provided, you’ll write an introductory message for your tweetstorm, so Twitter users will know what your Twitter thread will be discussing.

When you then click on the “publish” button, the blog post will be shared as a tweetstorm automatically.

Image Credits: WordPress.com

The feature was also designed with a few thoughtful touches to make the tweetstorm feel more natural, as if it had been written directly on Twitter.

For starters, WordPress says it will pay attention to the blog post’s formatting in order to determine where to separate the tweets. Instead of packing the first tweet with as many words as possible, it places the break at the end of the first sentence, for example. And when a paragraph is too long for a single tweet, it’s automatically split out into as many tweets as needed, instead of being cut off. A list block, meanwhile, will be formatted as a list on Twitter.
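
The post doesn’t spell out Jetpack’s splitting algorithm, but a minimal sketch of the sentence-aware splitting behavior described above (assuming a 280-character limit and ignoring media, list blocks, and the actual Twitter API calls) might look like this:

```python
# Simplified sketch of splitting a blog post into tweet-sized chunks on
# sentence boundaries. Jetpack's real implementation also handles images,
# videos, list blocks, and posting the thread via the Twitter API; this
# sketch only illustrates the text-splitting idea.

import re

TWEET_LIMIT = 280

def split_into_tweets(post_text: str) -> list[str]:
    # Naive sentence segmentation: break after ., ! or ? followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", post_text.strip())
    tweets, current = [], ""
    for sentence in sentences:
        candidate = f"{current} {sentence}".strip()
        if len(candidate) <= TWEET_LIMIT:
            current = candidate           # the sentence fits in the current tweet
            continue
        if current:
            tweets.append(current)        # close out the current tweet
        # Fallback: a single sentence longer than one tweet gets hard-wrapped.
        while len(sentence) > TWEET_LIMIT:
            tweets.append(sentence[:TWEET_LIMIT])
            sentence = sentence[TWEET_LIMIT:]
        current = sentence
    if current:
        tweets.append(current)
    return tweets

# Example with a repetitive dummy post: prints the number of tweets produced.
print(len(split_into_tweets("This sentence stands in for blog content. " * 60)))
```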

To help writers craft a blog post that will work as a tweetstorm, you can choose to view where the tweets will be split in the social preview feature. This allows WordPress users to better shape the post to fit Twitter’s character limit as they write.

Image Credits: WordPress.com

At the end of the published tweetstorm, Twitter followers will be able to click a link to read the post on the WordPress site.

This addresses a common complaint with Twitter threads. While it’s useful to have longer thoughts posted to social media for attention, reading through paragraphs of content directly on Twitter can be difficult. But as tweetstorms grew in popularity, tools to solve this problem emerged. The most popular is a Twitter bot called @ThreadReaderApp, which lets users read a thread in a long-form format by mentioning the account by name within the thread along with the keyword “unroll.”

With the launch of the new WordPress feature, however, Twitter users won’t have to turn to third-party utilities — they can just click through on the link provided to read the content as a blog post. This, in turn, could help turn Twitter followers into blog subscribers, allowing the WordPress writer to increase their overall reach.

WordPress’ plans to introduce the tweetstorm feature were announced last month, with the feature slated to arrive in the Jetpack 9.0 release in early October.

The feature is now publicly available, the company says.

Hands on with Telepath, the social network taking aim at abuse, fake news and, to some extent, ‘free speech’

There’s no doubt that modern social networks have let us down. Filled with hate speech and abuse, they treated moderation and anti-abuse tools as an afterthought they’re now trying to cram in. Meanwhile, personalization engines deliver us only what will keep us engaged, even if it’s not the truth. Today, a number of new social networks are trying to flip the old model on its head, whether by using audio for more personal connections, like Clubhouse; eliminating clout chasing, like Twelv; or, in the case of new social network Telepath, designing a platform guided by rules that focus on enforcing kindness, countering abuse, and stopping the spread of fake news.

Many of these early efforts are already facing challenges.

Private social network Clubhouse has repeatedly demonstrated that allowing free-flowing communication in the form of audio conversations is an area that’s notoriously difficult to moderate. The app, though still unavailable to the broader public, courted controversy in September when it allowed anti-Semitic content to be discussed in one of its chat rooms. In the past, it had also allowed users to harass an NYT reporter openly.

Meanwhile, Twelv, a sort of Instagram alternative, ditches the “Like” button concept and all the other features now overloading Instagram, which had once been just a photo-sharing network. But, unfortunately, this also means there’s no easy way to find and follow interesting users or trends on Twelv: you have to push friends to join the app with you or know someone’s username to look them up; otherwise, it shows you no content. The result is a social network without the “social.”

Telepath, meanwhile, is a more interesting development.

It’s pursuing an even loftier goal in social networking — creating a hate speech-free platform where fake news can’t be distributed.

No social network to date has been able to accomplish what Telepath claims it will be able to do in terms of content moderation. Its ambitions are optimistic and, as the network remains in private beta, they’re also untested at scale.

Though positioned as a different kind of social network, Telepath isn’t actually focused on developing a new sharing format that could encourage participation — the way TikTok popularized the 15-second video clip, for example, or how Snapchat turned the world onto “Stories.”

Instead, Telepath, at first glance, looks very much like just another feed to scroll through. (And given the amount of linked Twitter content in Telepath posts, it’s almost serving as a backchannel for the rival platform.)

The startup itself was founded by former Quora employees, including former Quora Business & Community head, Marc Bodnick, now Telepath Executive Chairman; and former Quora Product Lead, Richard Henry, now Telepath CEO. They’re aided by former Quora Global Writer Relations Lead, Tatiana Estévez, now Telepath Head of Community and Safety; and Ro Applewhaite, previously research staff for Pete Buttigieg for America, now Telepath Head of Outreach.

It’s backed by a couple million dollars in seed funding, led by First Round Capital (Josh Kopelman). Other backers include Unusual Ventures (Andy Johns), Slow Ventures (Sam Lessin), and unnamed angels. Bodnick and his wife, Michelle Sandberg, also invested.

Image Credits: Telepath

When talking about Telepath, it’s clear the founders are nostalgic for the early days of the web — before all the people joined, that is. In smaller, online communities in years past, people connected and made internet friends who would become real-world friends. That’s a moment in time they hope to recapture.

“I’ve benefited a lot by meeting people through the internet, forming relationships and having conversations — that sort of thing,” says Henry. “But the internet just isn’t fun in the ways that it used to be fun.”

He suggests that the anonymity offered by networks like Reddit and Twitter makes it more difficult for people to make real-world connections. Telepath, with its focus on conversations, aims to change that.

“If we facilitate a really fun, kind, and empathetic conversation environment, then lots of good things can happen. And it might be that you potentially find someone you want to work with, or you end up getting a job, or you meet new friends, or you end up meeting offline,” Henry says.

Getting Started

To get started on Telepath, you join the network with your mobile phone number and name, find and follow other users, similar to Twitter, then join interest-based communities as you would on Reddit. When you launch the app, you’re meant to browse a home feed where conversation topics from your communities and interesting replies are highlighted — orange for those replies from people you follow and gray for those that Telepath has determined are worth being elevated to the home screen.

As you read through the posts and visit the communities, you can “Thumbs Up” content you like, downvote what you don’t, reply, mute, block, and use @usernames to flag someone.

Image Credits: Telepath, screenshot via TechCrunch

Another interesting design choice: everything on Telepath disappears after 30 days. No one will get to dig through your misinformed posts from a decade ago to shame you in the present, it seems.

What’s most different about Telepath, however, is not the design or format. It’s what’s taking place behind the scenes, as detailed by Telepath’s rules.

Users who join Telepath must agree to “be kind,” which is rule number one. They must also not attack one another based on identity or harass others. They must use a real name (or their preferred name, if transgender), and not post violent content or porn. “Fake news” is banned, with publishers judged by whether they regularly attempt to disseminate misinformation.

Telepath has even tried to formalize rules around how polite conversations should function online with rules like “don’t circle the drain” — meaning don’t keep trying to have the last word in a contentious debate or circumvent a locked thread; and “stay on topic,” which means don’t bombard a pro-x network with an anti-x agenda (and vice versa.)

Image Credits: Telepath

To enforce its rules, Telepath begins by requiring users to sign up with a mobile phone number, which is verified as a “real” number associated with a SIM card, and not a virtual one — like the kind you could grab through a “burner” app.
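For illustration only, and assuming a generic carrier-lookup service (Telepath hasn’t described its actual stack), a signup check along these lines could distinguish SIM-backed mobile numbers from virtual ones; the helper, line-type labels, and sample numbers below are all hypothetical.

```python
# Illustrative sketch only: screening out virtual/VoIP numbers at signup.
# `line_type_for` stands in for a carrier-lookup service (a commercial
# phone-intelligence API, say); Telepath's real implementation isn't public.
from typing import Callable

ALLOWED_LINE_TYPES = {"mobile"}  # SIM-backed numbers only

def can_register(phone_number: str,
                 line_type_for: Callable[[str], str]) -> bool:
    """Return True only if the lookup classifies the number as a real mobile
    line, so banned users can't trivially return with a burner number."""
    return line_type_for(phone_number) in ALLOWED_LINE_TYPES

if __name__ == "__main__":
    # Stubbed lookup for demonstration purposes.
    fake_lookup = lambda n: "voip" if n.startswith("+1500") else "mobile"
    print(can_register("+15005550006", fake_lookup))  # False: flagged as VoIP
    print(can_register("+14155550123", fake_lookup))  # True: mobile line
```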

In order to create its “kind environment,” Telepath says it will sacrifice growth and hire moderators who work in-house as long-term, trusted employees.

“All the major social networks essentially grew in an unbounded way,” explains Henry. “They had 100 million-plus active users, then were like, ‘okay, now how do we moderate this enormous thing?’,” he continues. “We’re in a lucky position because we get to moderate from day one. We get to set the norms.”

Moderation

“Day one” was a long time in the making, however. The team rebuilt the product four times over a couple of years. Now, they say they’ve developed internal tools that provide moderators with visibility into the system.

According to Estévez, who heads community and safety, these include a reporting system, real-time content streams organized into buckets (e.g., a bucket for “only new users”), as well as various searchable ways to get context around a report or a particular problematic user.

“Really good tools — including real-time streams of content, classifiers for problematic behavior, searchable context, and making it hard for banned users to return — mean that each moderator we hire will be quite scalable. We think that there are network effects around positive behavior,” she says.

Image Credits: Telepath

“It’s our intention to scale up fast and high accuracy moderation decision-making, which means that we’re going to be investing a lot of engineering effort in getting these tools right,” she adds.

The founders have decided not to use any third-party systems to aid in moderation at this time, they told TechCrunch.

“We looked at a bunch of off-the-shelf [moderation systems], and we’re basically building everything that we need from scratch,” says Henry. “We just need more control over being able to tweak how these systems work in order to get the outcome that we want.”

The investment in human moderation over automation will also require additional capital to scale. And Telepath’s decision to not run ads means it will eventually need to consider alternative business models to sustain itself. The company, for now, is interested in subscriptions, but hasn’t made decisions on this front yet.

Banning the trolls

Though Telepath has only 4,000-plus users in its private beta, the two-person moderation team is already tasked with reviewing the thousands of pieces of content shared on a daily basis. (The company doesn’t disclose how many violations it takes action against per day, on average.)

When a user breaks the rules, moderators may first warn them about the violation and may require them to take down or edit a specific post. No one is punished for making a mistake or being unaware of the rules — they’re first given a chance to fix it.

But if a user breaks the rules repeatedly or in a way that seems intentional, such as engaging in a harassment campaign around another user, they are banned entirely. Because of the phone number verification system, they also can’t easily return — unless they go out and purchase a new phone, that is.

These moderation actions don’t necessarily follow strict guidelines, like a “three strikes” rule, for example. Instead, the way the rules are enforced is determined on a case-by-case basis. Where Telepath leans towards stricter enforcement is around intentional and flagrant violations, or those where there’s a pattern of bad behavior. (As with Reply Guys and sealioning behavior.)

In addition, unlike on Facebook and Twitter — platforms that sometimes seem to be caught off guard by viral trends in need of moderation — Telepath intends for nothing to go viral on its platform without having been seen by a human moderator, the company says.

Fake News

Telepath is also working to develop a reputation score for users and trust scores for publishers.

In the case of the former, the goal is to help the company determine how likely a user is to break Telepath’s rules. This isn’t developed yet, but it would be something used behind the scenes, not put on display for all to see.

For publishers, the trust score will reflect what percentage of the time their reporting is factually correct.
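Telepath hasn’t published a formula, but as a rough, hedged illustration, a publisher trust score of this kind could be sketched as the view-weighted share of articles that survive a fact-check, in the spirit of the example Henry gives below about a publisher’s most popular article; every field and weight here is an assumption.

```python
from dataclasses import dataclass

@dataclass
class Article:
    views: int
    factually_correct: bool  # outcome of a human fact-check

def trust_score(articles: list[Article]) -> float:
    """Hypothetical view-weighted share of accurate articles (0.0 to 1.0).
    Weighting by views penalizes a publisher more when its most-viewed
    piece is misleading than when an obscure one is."""
    total_views = sum(a.views for a in articles)
    if total_views == 0:
        return 0.0
    accurate_views = sum(a.views for a in articles if a.factually_correct)
    return accurate_views / total_views
```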

Image Credits: Thomas Faull / Getty Images

“For example, if the most popular article in terms of views from the publisher is just completely factually incorrect or intentionally misleading…that should have a bigger penalty on the trust score,” explains Henry. “The problem is that the incumbent platforms have rules against disinformation, but the problem is that they don’t enforce them out of this desire to appear balanced.”

Bodnick adds this challenge is not as insurmountable as it seems.

“Our view is that, actually, a handful of outlets are responsible for most of the disinformation…I don’t think our intent is to build out some modern-day truth system that will figure out if The Washington Post is slightly more accurate than The New York Times. I think the main goal will be to identify repeat disinformation publishers — determine that they are perpetual publishers of disinformation, and then crush their distribution,” says Bodnick.

This plan, however, involves setting rules on Telepath that fly in the face of what many today consider “free speech.” In fact, Telepath’s position is that free speech-favoring social networks are a failed system.

“The problem, in our view, is that when you take this free-speech centered approach that sort of says: ‘I don’t care how many disinformation posts Breitbart has published in the last — three years, three months, three weeks — we’re going to treat every new post as if it could be equally likely to be truthful as any other post in the system,'” says Bodnick. “That is inefficient.”

“That’s how we will scale this disinformation rule — by determining which relatively small group of publishers — I’m guessing it’s hundreds, low hundreds — are responsible for publishing lots of disinformation. And then take their distribution down,” he says.

This opinion on free speech is shared by the team.

“We’re trying to build a community, which means that we have to make certain tradeoffs,” adds Estévez. “In the rules we refer to Karl Popper’s paradox of tolerance — to maintain a tolerant society, you have to be intolerant of intolerance. We have no interest in giving a platform to certain kinds of speech,” she notes.

This is the exact opposite of the approach that conservative social media sites like Parler and Gab are taking. There, the companies believe in free speech to the point that they’ve left up content posted by an alleged Russian disinformation campaign, saying that no one filed a report about the threat and law enforcement hadn’t reached out. These MAGA-friendly social networks are also filled with conspiracies, reports that haven’t been fact-checked, and, frankly, a lot of vitriol.

The expectation is that if you go on their platforms, you’re in charge of muting and blocking trolls or the content you don’t like. But by their nature, those who join these platforms will generally find themselves among like-minded users.

Twitter, meanwhile, tries to straddle the middle ground. And in doing so, it has alienated a number of users who think it doesn’t go far enough in counteracting abuse. Users report harassment and threats, then wait days for their reports to be reviewed, only to be told the tweet in question didn’t break Twitter’s terms.

Telepath sits on the other end of the spectrum, aggressively moderating content, blocking and banning users if needed, and punishing publications that don’t fact check or those that peddle misinformation.

“Kindness” carve-outs

And yet, despite all this extra effort, Telepath doesn’t always feature only thoughtful and kind-hearted conversations.

That’s because it has carved out an exception in its kindness rule that allows users to criticize public figures, and because it doesn’t appear to be taking action on what could be problematic, if not violating, conversations.

Image Credits: Telepath

A user’s experience in these “gray” areas may vary by community.

Telepath’s communities today focus on hobbies and interests, and range from the innocuous, like Books or Branding or Netflix or Cooking, to the potentially fraught, like Race in America. In the latter, there have been discussions about the capitalization of “Black” where it was suggested that maybe this wasn’t a useful idea. In another thread, sympathy was expressed for a person who was falsely pretending to be a person of color.

In a post about affordable housing, someone openly wondered if a woman who said she didn’t want to live near poor people was actually racist. Another commenter then noted that gang members can bring down property values.

A QAnon community, meanwhile, discusses the movement and its ridiculous followers from afar — which is apparently permitted — though supporting it in earnest would not be.

There are also nearly 20 groups about things that “suck,” as in GOPSucks or CNNSucks or QuibiSucks.

Anti-Trump content, meanwhile, can be found on a network called “DumbHitler.”

Meanwhile, online publishers who routinely post discredited information are banned from Telepath, but YouTube is not. So if you feel you need to share a link to a video of Rudy Giuliani accusing Biden of dementia, you can do so, so long as you don’t call it the truth.

And you can post opinions about some terrible people in which you describe them as terrible, thanks to the public figure carve-out.

Cheater and deadbeat dad? Go ahead and call them a “disgusting human being.” VP Pence was referred to by a commenter as “SmugFace mcWhitey” and Ronny Jackson is described as “such a piece of sh**.”

That’s because Telepath’s “be kind” rule is not intended to protect public figures from criticism, Estévez explains.

“It is important to note that toxicity on the internet around politics isn’t because people are using bad words, but because people are using bad faith arguments. They are spreading misinformation. They are gaslighting marginalised groups about their experiences. These are the real issues we’re addressing,” she says.

She also notes that online “civility” is often used to silence people from marginalized groups.

“We don’t want Telepath’s focus on kindness to be turned against those who criticize powerful people,” she adds.

In practice, the way this plays out on Telepath today is that it has become a private, closed-door network where users can bash Trump, his supporters, and right-wing politicians in peace, away from Twitter’s trolls. And it’s a place where a majority agrees with those opinions, too.

It has, then, seemingly built the Twitter that many on the left have wanted, the way that conservative social media, like Gab and Parler, built what the right had wanted. But in the end, it’s not clear if this is the solution for the problems of modern social media or merely an escape. It also remains to be seen whether a mainstream user base will follow.

Telepath remains in a closed beta of indefinite length. You need an invite to join.