Maisie Williams’ talent discovery startup Daisie raises $2.5 million, hits 100K members

Maisie Williams’ time on Game of Thrones may have come to an end, but her talent discovery app Daisie is just getting started. Co-founded by film producer Dom Santry, Daisie aims to make it easier for creators to showcase their work, discover projects and collaborate with one another through a social networking-style platform. Only 11 days after Daisie officially launched to the public, the app hit an early milestone of 100,000 members. It also recently closed on $2.5 million in seed funding, the company tells TechCrunch.

The round was led by Founders Fund, which contributed $1.5 million. Other investors included 8VC, Kleiner Perkins, and newer VC firm Shrug Capital, from AngelList’s former head of marketing Niv Dror, who also separately invested. To date — including friends and family money and the founders’ own investment — Daisie has raised roughly $3 million.

It will later move toward raising a larger Series A, Santry says.

On Daisie, creators establish a profile as you would on a social network, find and follow other users, then seek out projects based on location, activity, or other factors.

“Whether it’s film, music, photography, art — everything is optimized around looking for collaborators,” explains Santry. “So the projects that are actively open and looking for people to get involved are the ones we’re really pushing for people to discover and hopefully get involved with.”

The company’s goal to offer an alternative path to talent discovery is a timely one. Today, the creative industry is waking up — as are many others — to the ramifications of the #MeToo and #TimesUp movements. As power-hungry abusers lose their jobs, new ways of working, networking and sourcing talent are taking hold.

As Williams said when she first introduced the app last year, Daisie’s focus is on giving the power back to the creator.

“Instead of [creators] having to market themselves to fit someone else’s idea of what their job would be, they can let their art speak for themselves,” she said at the time.

The app was launched into an invite-only beta on iOS last summer, and quickly saw a surge of users. After 37,000 downloads in week one, it crashed.

“We realized that the community was a lot larger than the product we had built, and that scale was something we needed to do properly,” Santry tells TechCrunch.

The team realized there was another problem, too: Once collaborators found each other on Daisie, there wasn’t a clear-cut way for them to get in touch with one another, as the app had no communication tools or file sharing built in.

“That journey from concept to production was pretty muddy and quite muddled…so we realized, if we were bringing teams together, we actually wanted to give them a place to work — give them this creative hub…and take their project from concept all the way to production on Daisie,” Santry notes.

With this broader concept in mind, Daisie began fundraising in San Francisco shortly after the beta launch. The round initially closed in October 2018, but was more recently reopened to allow Dror’s investment.

With the additional funding in tow, Daisie has been able to grow its team from five to eighteen, including new hires from Monzo, Deliveroo, BBC, Microsoft, and others — specifically engineers familiar with designing apps for scale. Tasked with developing better infrastructure and a more expansive feature set, the team set to work on bringing Daisie to the web.

Nine months later, the new version launched to the public and is stable enough to handle the load. Today, it topped 100,000 users, most of whom are in London. Going forward, Daisie plans to focus on expanding to other cities, including Berlin, New York, and L.A.

The company has monetization ideas in mind, but the app does not currently generate revenue. However, it’s already fielding inquiries from companies that want Daisie to find them the right talent for their projects.

“We want the best for the creators on the platform, so if that means bringing clients on — and hopefully giving those connectivity opportunities — then we’ll absolutely [go] down those roads,” Santry says.

The app may also serve as a talent pipeline for Maisie Williams’ own Daisy Chain Productions. In fact, Daisie recently ran a campaign called London Creates which connected young, emerging creators with project teams, two of which were headed by Santry’s Daisy Chain Productions co-founders, Williams and Bill Milner.

Daisy Chain Productions will now produce a film that grew out of that Daisie collaboration.

While celebs sometimes do little more than lend their name to projects, Williams was hands-on in terms of getting Daisie off the ground, Santry says. During the first quarter of 2019, she worked on Daisie 9-to-5, he notes. But she has since started another film project and plans to continue to work as an actress, which will limit her day-to-day involvement. Her role now and in the future may be more high-level.

“I think her role is going to become one of, culturally, like: where does Daisie stand? What do we stand for? Who do we work with? What do we represent?” he says. “How do we help creators everywhere? That’s mainly what Maisie wants to make sure Daisie does.”

Why is Facebook doing robotics research?

It’s a bit strange to hear that the world’s leading social network is pursuing research in robotics rather than, say, making search useful, but Facebook is a big organization with many competing priorities. And while these robots aren’t directly going to affect your Facebook experience, what the company learns from them could be impactful in surprising ways.

Though robotics is a new area of research for Facebook, its reliance on and bleeding-edge work in AI are well known. Mechanisms that could be called AI (the definition is quite hazy) govern all sorts of things, from camera effects to automated moderation of restricted content.

AI and robotics are naturally overlapping magisteria — it’s why we have an event covering both — and advances in one often do the same, or open new areas of inquiry, in the other. So really it’s no surprise that Facebook, with its strong interest in using AI for a variety of tasks in the real and social media worlds, might want to dabble in robotics to mine for insights.

What then could be the possible wider applications of the robotics projects it announced today? Let’s take a look.

Learning to walk from scratch

“Daisy” the hexapod robot.

Walking is a surprisingly complex action, or series of actions, especially when you’ve got six legs, like the robot used in this experiment. You can program in how it should move its legs to go forward, turn around, and so on, but doesn’t that feel a bit like cheating? After all, we had to learn on our own, with no instruction manual or settings to import. So the team looked into having the robot teach itself to walk.

This isn’t a new type of research — lots of roboticists and AI researchers are into it. Evolutionary algorithms (different but related) go back a long way, and we’ve already seen plenty of interesting papers along these lines.

By giving their robot some basic priorities like being “rewarded” for moving forward, but no real clue how to work its legs, the team let it experiment and try out different things, slowly learning and refining the model by which it moves. The goal is to reduce the amount of time it takes for the robot to go from zero to reliable locomotion from weeks to hours.
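To make the idea concrete, here is a minimal, self-contained sketch of that kind of reward-driven trial and error in Python. The toy environment, the random-search update and every number in it are assumptions invented for illustration; they stand in for, and greatly simplify, whatever simulator and learning algorithm Facebook actually uses.

```python
import numpy as np

# Toy stand-in for a legged-robot simulator: the "robot" is a linear policy that
# maps sensor readings to motor commands, and the only signal it gets is how far
# forward it moved. Everything here is illustrative, not Facebook's actual setup.
class ToyLeggedEnv:
    def __init__(self, n_sensors=18, n_motors=18, seed=0):
        self.rng = np.random.default_rng(seed)
        # Hidden "dynamics": how motor commands translate into forward motion.
        self.dynamics = self.rng.normal(size=n_motors)
        self.n_sensors, self.n_motors = n_sensors, n_motors

    def rollout(self, policy, steps=50):
        """Run one episode and return total forward progress (the reward)."""
        progress = 0.0
        for _ in range(steps):
            sensors = self.rng.normal(size=self.n_sensors)   # proprioception
            motors = np.tanh(policy @ sensors)               # joint commands
            progress += float(self.dynamics @ motors)        # forward displacement
        return progress

# Crude "teach yourself" loop: perturb the policy at random and keep any change
# that increases forward progress. Real systems use far smarter updates, but the
# principle -- reward forward motion, let the robot work out its legs -- is the same.
env = ToyLeggedEnv()
policy = np.zeros((env.n_motors, env.n_sensors))
best = env.rollout(policy)
for step in range(200):
    candidate = policy + 0.1 * np.random.default_rng(step).normal(size=policy.shape)
    score = env.rollout(candidate)
    if score > best:                  # "rewarded" for moving forward
        policy, best = candidate, score
print(f"forward progress after search: {best:.1f}")
```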

What could this be used for? Facebook is a vast wilderness of data, complex and dubiously structured. Learning to navigate a network of data is of course very different from learning to navigate an office — but the idea of a system teaching itself the basics on a short timescale given some simple rules and goals is shared.

Learning how AI systems teach themselves, and how to remove roadblocks like mistaken priorities, cheating the rules and weird data-hoarding habits, is important for agents meant to be set loose in both real and virtual worlds. Perhaps the next time there is a humanitarian crisis that Facebook needs to monitor on its platform, the AI model that helps do so will be informed by the autodidactic efficiencies that turn up here.

Leveraging “curiosity”

Researcher Akshara Rai adjusts a robot arm in the robotics AI lab in Menlo Park. (Facebook)

This work is a little less visual, but more relatable. After all, everyone feels curiosity to a certain degree, and while we understand that sometimes it kills the cat, most times it’s a drive that leads us to learn more effectively. Facebook applied the concept of curiosity to a robot arm being asked to perform various ordinary tasks.

Now, it may seem odd that they could imbue a robot arm with “curiosity,” but what’s meant by that term in this context is simply that the AI in charge of the arm — whether it’s seeing or deciding how to grip, or how fast to move — is given motivation to reduce uncertainty about that action.

That could mean lots of things — perhaps twisting the camera a little while identifying an object gives it a little bit of a better view, improving its confidence in identifying it. Maybe it looks at the target area first to double check the distance and make sure there’s no obstacle. Whatever the case, giving the AI latitude to find actions that increase confidence could eventually let it complete tasks faster, even though at the beginning it may be slowed by the “curious” acts.
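A rough sketch of how a curiosity term like that can be wired into an agent’s choices, in Python. The ensemble-disagreement proxy for uncertainty, the beta weight and the numbers are all assumptions made up for illustration; Facebook’s post doesn’t spell out its exact formulation.

```python
import numpy as np

# "Curiosity" as an intrinsic bonus: the agent's effective reward is the task
# reward plus a term for how uncertain its model still is about an action. Here
# uncertainty is the disagreement across a small ensemble of learned predictors,
# a common proxy rather than Facebook's specific method.
def curiosity_bonus(ensemble_predictions: np.ndarray) -> float:
    # ensemble_predictions: (n_models, state_dim) predicted outcomes of one action
    return float(ensemble_predictions.std(axis=0).mean())

def effective_reward(task_reward: float, ensemble_predictions: np.ndarray,
                     beta: float = 0.3) -> float:
    # beta trades off doing the task right now vs. reducing uncertainty for later
    return task_reward + beta * curiosity_bonus(ensemble_predictions)

# Example: grip B scores slightly worse on the task today, but the models disagree
# about it far more, so the "curious" agent tries it first to learn something.
grip_a = effective_reward(1.00, np.array([[0.50, 0.50], [0.51, 0.49], [0.50, 0.50]]))
grip_b = effective_reward(0.95, np.array([[0.20, 0.90], [0.80, 0.10], [0.50, 0.50]]))
print(grip_a, grip_b)   # grip_b comes out ahead despite the lower task reward
```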

What could this be used for? Facebook is big on computer vision, as we’ve seen both in its camera and image work and in devices like Portal, which (some would say creepily) follows you around the room with its “face.” Learning about the environment is critical for both these applications and for any others that require context about what they’re seeing or sensing in order to function.

Any camera operating in an app or device like those from Facebook is constantly analyzing the images it sees for usable information. When a face enters the frame, that’s the cue for a dozen new algorithms to spin up and start working. If someone holds up an object, does it have text? Does it need to be translated? Is there a QR code? What about the background, how far away is it? If the user is applying AR effects or filters, where does the face or hair stop and the trees behind begin?

If the camera, gadget or robot left these tasks to be accomplished “just in time,” it would produce CPU usage spikes, visible latency in the image, and all kinds of stuff the user or system engineer doesn’t want. But if it’s doing them all the time, that’s just as bad. If instead the AI agent is exerting curiosity to check these things when it senses too much uncertainty about the scene, that’s a happy medium. This is just one way it could be used, but given Facebook’s priorities it seems like an important one.

Seeing by touching

Although vision is important, it’s not the only way that we, or robots, perceive the world. Many robots are equipped with sensors for motion, sound, and other modalities, but actual touch is relatively rare. Chalk it up to a lack of good tactile interfaces (though we’re getting there). Nevertheless, Facebook’s researchers wanted to look into the possibility of using tactile data as a surrogate for visual data.

If you think about it, that’s perfectly normal — people with visual impairments use touch to navigate their surroundings or acquire fine details about objects. It’s not exactly that they’re “seeing” via touch, but there’s a meaningful overlap between the concepts. So Facebook’s researchers deployed an AI model that decides what actions to take based on video, but instead of actual video data, fed it high-resolution touch data.

Turns out the algorithm doesn’t really care whether it’s looking at an image of the world as we’d see it or not — as long as the data is presented visually, for instance as a map of pressure on a tactile sensor, it can be analyzed for patterns just like a photographic image.
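Here’s a tiny illustration of that point in Python: a tactile reading is just a 2D grid of pressure values, so the same convolution you would run over a camera frame runs over touch data unchanged. The sensor resolution, the kernel and the fake “object edge” are all invented for the example.

```python
import numpy as np

# The same edge-detecting convolution applied to a camera patch and to a tactile
# pressure map. The downstream model never needs to know whether the "pixels"
# came from photons or from pressure.
def conv2d(grid: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    kh, kw = kernel.shape
    out = np.zeros((grid.shape[0] - kh + 1, grid.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(grid[i:i + kh, j:j + kw] * kernel)
    return out

edge_kernel = np.array([[-1, 0, 1]] * 3, dtype=float)  # responds to left/right contrast

camera_frame = np.random.rand(32, 32)   # grayscale image patch
tactile_frame = np.zeros((32, 32))      # pressure map from a touch sensor
tactile_frame[:, 16:] = 1.0             # an object edge pressing on half the pad

print(conv2d(camera_frame, edge_kernel).shape)            # (30, 30), same as vision
print(np.abs(conv2d(tactile_frame, edge_kernel)).max())   # strong response at the edge
```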

What could this be used for? It’s doubtful Facebook is super interested in reaching out and touching its users. But this isn’t just about touch — it’s about applying learning across modalities.

Think about how, if you were presented with two distinct objects for the first time, it would be trivial to tell them apart with your eyes closed, by touch alone. Why can you do that? Because when you see something, you don’t just understand what it looks like, you develop an internal model representing it that encompasses multiple senses and perspectives.

Similarly, an AI agent may need to transfer its learning from one domain to another — auditory data telling a grip sensor how hard to hold an object, or visual data telling the microphone how to separate voices. The real world is a complicated place and data is noisier here — but voluminous. Being able to leverage that data regardless of its type is important to reliably being able to understand and interact with reality.

So you see that while this research is interesting in its own right, and can in fact be explained on that simpler premise, it is also important to recognize the context in which it is being conducted. As the blog post describing the research concludes:

We are focused on using robotics work that will not only lead to more capable robots but will also push the limits of AI over the years and decades to come. If we want to move closer to machines that can think, plan, and reason the way people do, then we need to build AI systems that can learn for themselves in a multitude of scenarios — beyond the digital world.

As Facebook continually works on expanding its influence from its walled garden of apps and services into the rich but unstructured world of your living room, kitchen, and office, its AI agents require more and more sophistication. Sure, you won’t see a “Facebook robot” any time soon… unless you count the one they already sell, or the one in your pocket right now.

Instagram’s IGTV copies TikTok’s AI, Snapchat’s design

Instagram conquered Stories, but it’s losing the battle for the next video formats. TikTok is blowing up with an algorithmically suggested vertical one-at-a-time feed featuring videos of users remixing each other’s clips. Snapchat Discover’s 2 x infinity grid has grown into a canvas for multi-media magazines, themed video collections, and premium mobile TV shows.

Instagram’s IGTV…feels like a flop in comparison. Launched a year ago, it’s full of crudely cropped & imported viral trash from around the web. The long-form video hub that lives inside both a homescreen button in Instagram as well as a standalone app has failed to host lengthier must-see original vertical content. Sensor Tower estimates that the IGTV app has just 4.2 million installs worldwide with just 7,700 new ones per day — implying less than half a percent of Instagram’s billion-plus users have downloaded it. IGTV doesn’t rank on the overall charts and hangs low at #191 on the US – Photo & Video app charts according to App Annie.

Now Instagram has quietly overhauled the design of IGTV’s space inside its main app to crib what’s working from its two top competitors. The new design showed up in last week’s announcements for Instagram Explore’s new Shopping and IGTV discovery experiences. At the time, Instagram’s product lead on Explore Will Ruben told us that with the redesign, “the idea is this is more immersive and helps you to see the breadth of videos in IGTV rather than the horizontal scrolling interface that used to exist” but the company declined to answer follow-up questions about it.

IGTV has ditched its category-based navigation system’s tabs like “For You”, “Following”, “Popular”, and “Continue Watching” for just one central feed of algorithmically suggested videos — much like TikTok. This affords a more lean-back, ‘just show me something fun’ experience that relies on Instagram’s AI to analyze your behavior and recommend content instead of putting the burden of choice on the viewer.

IGTV has also ditched its awkward horizontal scrolling design that always kept a clip playing in the top half of the screen. Now you’ll scroll vertically through a 2 x infinity grid of recommended clips in what looks just like the Snapchat Discover feed. Once you get past a first video that auto-plays up top, you’ll find a full-screen grid of things to watch. You’ll only see the horizontal scroller in the standalone IGTV app, or if you tap into an IGTV video and then tap the Browse button to find the next clip while the last one plays up top.

Instagram seems to be trying to straddle the designs of its two competitors. The problem is that TikTok’s one-at-a-time feed works great for punchy, short videos that get right to the point. If you’re bored after 5 seconds, you swipe to the next. IGTV’s focus on long-form means its videos might start too slowly to grab your attention if they were auto-played full-screen in the feed rather than being chosen by a viewer. Snapchat, meanwhile, makes the most of the two-previews-per-row design IGTV has adopted because professional publishers take the time to make compelling cover thumbnail images promoting their content. IGTV’s focus on independent creators means fewer have labored to make great cover images, so viewers have to rely on a screenshot and caption.

Instagram is prototyping a number of other features to boost engagement across its app, as discovered by reverse engineering specialist and frequent TechCrunch tipster Jane Manchun Wong. Those include options to blast a direct message to all your Close Friends at once but in individual message threads, see a divider between notifications and likes you have or haven’t seen, or post a Chat sticker to Stories that lets friends join a group message thread about that content. And to better compete with TikTok, it may let you add lyrics stickers to Stories that appear word-by-word in sync with Instagram’s licensed music soundtrack feature, and share Music Stories to Facebook. What we haven’t seen is any cropping tool for IGTV that would help users reformat landscape videos. The vertical-only restriction keeps lots of great content stuck outside IGTV, or letterboxed with black, color-matched backgrounds, or meme-style captions with the video as just a tiny slice in the middle.

When I spoke with Instagram co-founder and ex-CEO Kevin Systrom last year a few months after IGTV’s launch, he told me “It’s a new format. It’s different. We have to wait for people to adopt it and that takes time . . . Everything that is great starts small.”

But to grow large, IGTV needs to demonstrate how long-form portrait mode video can give us a deeper look at the nuances of the influencers and topics we care about. The company has rightfully prioritized other drives like safety and well-being with features that hide bullies and deter overuse. But my advice from August still stands despite all the ground Instagram has lost in the meantime. “Concentrate on teaching creators how to find what works on the format and incentivizing them with cash and traffic. Develop some must-see IGTV and stoke a viral blockbuster. Prove the gravity of extended, personality-driven vertical video.” Until the content is right, it won’t matter how IGTV surfaces it.

On the Internet of Women with Moira Weigel

“Feminism,” the writer and editor Marie Shear famously said in an often-misattributed quote, “is the radical notion that women are people.” The genius of this line, of course, is that it appears to be entirely non-controversial, which reminds us all the more effectively of the past century of fierce debates surrounding women’s equality.

And what about in tech ethics? It would seem equally non-controversial that ethical tech is supposed to be good for “people,” but is the broader tech world and its culture good for the majority of humans who happen to be women? And to the extent it isn’t, what does that say about any of us, and about all of our technology?

I’ve known, since I began planning this TechCrunch series exploring the ethics of tech, that it would need to thoroughly cover issues of gender. Because as we enter an age of AI, with machines learning to be ever more like us, what could be more critical than addressing the issues of sex and sexism often at the heart of the hardest conflicts in human history thus far?

Meanwhile, several months before I began envisioning this series I stumbled across the fourth issue of a new magazine called Logic, a journal on technology, ethics, and culture. Logic publishes primarily on paper — yes, the actual, physical stuff, and a satisfyingly meaty stock of it, at that.

In it, I found a brief essay, “The Internet of Women,” that is a must-read, an instant classic in tech ethics. The piece is by Moira Weigel, one of Logic’s founders and currently a member of Harvard University’s “Society of Fellows” — one of the world’s most elite societies of young academics.

A fast-talking 30-something Brooklynite with a Ph.D. from Yale, Weigel combines her interest in sex, gender, and feminism with a critical and witty analysis of our technology culture.

In this first of a two-part interview, I speak with Moira in depth about some of the issues she covers in her essay and beyond: #MeToo; the internet as a “feminizing” influence on culture; digital media ethics around sexism; and women in political and tech leadership.

Greg E.: How would you summarize the piece in a sentence or so?

Moira W.: It’s an idiosyncratic piece with a couple of different layers. But if I had to summarize it in just a sentence or two I’d say that it’s taking a closer look at the role that platforms like Facebook and Twitter have played in the so-called “#MeToo moment.”

In late 2017 and early 2018, I became interested in the tensions that the moment was exposing between digital media and so-called “legacy media” — print newspapers and magazines like The New York Times and Harper’s and The Atlantic. Digital media were making it possible to see structural sexism in new ways, and for voices and stories to be heard that would have gotten buried, previously.

A lot of the conversation unfolding in legacy media seemed to concern who was allowed to say what where. For me, this subtext was important: The #MeToo moment was not just about the sexualized abuse of power but also about who had authority to talk about what in public — or the semi-public spaces of the Internet.

At the same time, it seemed to me that the ongoing collapse of print media as an industry, and really what people sometimes call the “feminization” of work in general, was an important part of the context.

When people talk about jobs getting “feminized” they can mean many things — jobs becoming lower paid, lower status, flexible or precarious, demanding more emotional management and the cultivation of an “image,” blurring the boundary between “work” and “life.”

The increasing instability and insecurity of media workplaces only makes women more vulnerable to the kinds of sexualized abuses of power the #MeToo hashtag was being used to talk about.

TikTok owner ByteDance’s long-awaited chat app is here

In WeChat-dominated China, there’s no shortage of challengers out there claiming to create an alternative social experience. The latest creation comes from ByteDance, the world’s most valuable startup and the operator behind TikTok, the video app that has consistently topped the iOS App Store over the last few quarters.

The new offer is called Feiliao (飞聊), or Flipchat in English, a hybrid of an instant messenger and interest-based forums, and it’s currently available for both iOS and Android. It arrived only four months after ByteDance unveiled its video-focused chatting app Duoshan at a buzzy press event.

Screenshots of Feiliao / Image source: Feiliao

Some are already calling Feiliao a WeChat challenger, but a closer look shows it’s targeting a more niche need. WeChat, for its part, is the go-to place for daily communication, in addition to facilitating payments, car-hailing, food delivery and other conveniences.

Feiliao, which literally translates to ‘fly chat’, encourages users to create forums and chat groups centered on their interests and hobbies. As its app description puts it:

Feiliao is an interest-based social app. Here you will find the familiar [features of] chats and video calls. In addition, you will discover new friends and share what’s fun; as well as share your daily life on your feed and interact with close friends.

Feiliao “is an open social product,” said ByteDance in a statement provided to TechCrunch. “We hope Feiliao will connect people of the same interests, making people’s life more diverse and interesting.”

It’s unclear what Feiliao means by claiming to be ‘open’, but one door is already shut. As expected, there’s no direct way to transfer people’s WeChat profiles and friend connections to Feiliao, and there’s no option to log in via the Tencent app. As of Monday morning, links to Feiliao can’t be opened on WeChat, which recently crossed 1.1 billion monthly active users.

On the other side, Alibaba, Tencent’s long-time nemesis, is enabling Feiliao’s payments function through the Alipay digital wallet. Alibaba has also partnered with ByteDance elsewhere, most notably on TikTok’s Chinese version Douyin, where certain users can sell goods via Taobao stores.

In all, Flipchat is more reminiscent of another blossoming social app — Tencent-backed Jike — than WeChat. Jike (pronounced ‘gee-keh’) lets people discover content and connect with each other based on various topics, making it one of the closest counterparts to Reddit in China.

Jike’s CEO Wa Nen has taken notice of Feiliao, commenting on his Jike feed with the 👌 emoji and nothing more.

Screenshot of Jike CEO Wa Nen commenting on Feiliao

“I think [Feiliao] is a product anchored in ‘communities’, such as groups for hobbies, key opinion leaders/celebrities, people from the same city, and alumni,” a product manager for a Chinese enterprise software startup told TechCrunch after trying out the app.

Though Feiliao isn’t a direct take on WeChat, there’s little doubt that the fight between ByteDance and Tencent has heated up tremendously as the former’s army of apps captures more user attention.

According to a new report published by research firm QuestMobile, ByteDance accounted for 11.3 percent of Chinese users’ total time spent on ‘giant apps’ — those that surpassed 100 million MAUs — in March, compared to 8.2 percent a year earlier. The percentage controlled by Tencent was 43.8 percent in March, down from 47.5 percent, while the remaining share, divided among Alibaba, Baidu and others, grew only slightly from 44.3 percent to 44.9 percent over the past year.

Instagram is killing Direct, its standalone Snapchat clone app, in the next several weeks

As Facebook pushes ahead with its strategy to consolidate more of the backend of its various apps on to a single platform, it’s also doing a little simplifying and housekeeping. In the coming month, it will shut down Direct, the standalone Instagram direct messaging app that it was testing to rival Snapchat, on iOS and Android. Instead, Facebook and its Instagram team will channel all developments and activity into the direct messaging feature of the main Instagram app.

We first saw a message about the app closing down by way of a tweet from Direct user Matt Navarra: “In the coming month, we’ll no longer be supporting the Direct app,” Instagram notes in the app itself. “Your conversations will automatically move over to Instagram, so you don’t need to do anything.”

The details were then confirmed to us by Instagram itself:

“We’re rolling back the test of the standalone Direct app,” a spokesperson said in a statement provided to TechCrunch. “We’re focused on continuing to make Instagram Direct the best place for fun conversations with your friends.”

From what we understand, Instagram will continue developing Direct features — they just won’t live in a standalone app. (Tests and rollouts of new features that we’ve reported on before include encryption in direct messaging, the ability to watch videos with other people, and a web version of the direct messaging feature.)

Instagram didn’t give any reason for the decision, but in many ways, the writing was on the wall with this one.

The app first appeared in December 2017, when Instagram confirmed it had rolled it out in a select number of markets — Uruguay, Chile, Turkey, Italy, Portugal and Israel — as a test. (Instagram first launched direct messaging within the main app in 2013.)

“We want Instagram to be a place for all of your moments, and private sharing with close friends is a big part of that,” it said at the time. “To make it easier and more fun for people to connect in this way, we are beginning to test Direct – a camera-first app that connects seamlessly back to Instagram.”

But it’s not clear how many markets beyond the original six ultimately had access to the app, although Instagram did expand it further. The iOS version currently notes that it is available in a much wider range of languages than just Spanish, Turkish, Italian and Portuguese: it also lists English, Croatian, Czech, Danish, Dutch, Finnish, French, German, Greek, Indonesian, Japanese, Korean, Malay, Norwegian Bokmål, Polish, Romanian, Russian, Simplified Chinese, Slovak, Swedish, Tagalog, Thai, Traditional Chinese, Ukrainian and Vietnamese.

But with Instagram doing little to actively promote the app or its expansion to more markets, Direct never really found a lot of traction in the markets where it was active.

The only countries that make it onto App Annie’s app rankings for Direct are Uruguay for Android, where it was most recently at number 55 among social networking apps (with no figures for overall rankings, meaning it was too low down to be counted); and Portugal on iOS, where it was number 24 among social apps and a paltry 448 overall.

The Direct app hadn’t been updated on iOS since the end of December, although the Android version was updated as recently as the end of April.

At the time of its original launch as a test, however, Direct looked like an interesting move from Instagram.

The company had already been releasing various other features that cloned popular ones in Snapchat. The explosive growth and traction of one of them, Stories, could have felt like a sign to Facebook that there was more ground to break on creating more Snapchat-like experiences for its audience. More generally, the rise of Snapchat and direct messaging apps like WhatsApp has shown that there is a market demand for more apps based around private conversations among smaller groups, if not one-to-one.

On top of that, building a standalone messaging app takes a page out of Facebook’s own app development book, in which it launched and began to really accelerate development of a standalone Messenger app separate from the Facebook experience on mobile.

The company has not revealed any recent numbers for usage of Direct since 2017, when it said there were 375 million users of the service as it brought together permanent and ephemeral (disappearing) messages within the service.

More recently, Instagram and Facebook itself have been part of the wider scrutiny we have seen over how social platforms police and moderate harmful or offensive content. Facebook itself has faced an additional wave of criticism from some over its plans to bring together its disparate app ecosystem in terms of how they function together, with the issue being that Facebook is not giving apps like WhatsApp and Instagram enough autonomy and becoming an even bigger data monster in the process.

It may have been the depressingly low usage that ultimately killed off Direct, but I’d argue that the optics for promoting an expansion of its app real estate on to another platform weren’t particularly strong, either.

Facebook introduces ‘one strike’ policy to combat abuse of its live-streaming service

Facebook is cracking down on its live streaming service after it was used to broadcast the shocking mass shootings that left 50 dead at two Christchurch mosques in New Zealand in March. The social network said today that it is implementing a ‘one strike’ rule that will prevent users who break its rules from using the Facebook Live service.

“From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time — for example 30 days — starting on their first offense. For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time,” Facebook VP of integrity Guy Rosen wrote.

The company said it plans to implement additional restrictions for these people, which will include limiting their ability to take out ads on the social network. Those who violate Facebook’s policy against “dangerous individuals and organizations” — a new introduction that it used to ban a number of right-wing figures earlier this month — will be restricted from using Live, although Facebook isn’t being specific on the duration of the bans or what it would take to trigger a permanent bar from live-streaming.

Facebook is increasingly using AI to detect and counter violent and dangerous content on its platform, but that approach simply isn’t working.

Beyond the challenge of non-English languages (Facebook’s AI detection system has failed in Myanmar, for example, despite what CEO Mark Zuckerberg had claimed), the detection system wasn’t robust in dealing with the aftermath of Christchurch.

The stream itself was not reported to Facebook until 12 minutes after it had ended, while Facebook failed to block 20 percent of the videos of the live stream that were later uploaded to its site. Indeed, TechCrunch found several videos still on Facebook more than 12 hours after the attack despite the social network’s efforts to cherry pick ‘vanity stats’ that appeared to show its AI and human teams had things under control.

Acknowledging that failure indirectly, Facebook said it will invest $7.5 million in “new research partnerships with leading academics from three universities, designed to improve image and video analysis technology.”

Early partners in this initiative include The University of Maryland, Cornell University and The University of California, Berkeley, which it said will assist with techniques to detect manipulated images, video and audio. Another objective is to use technology to identify the difference between those who deliberately manipulate media and those who do so unwittingly.

Facebook said it hopes to add other research partners to the initiative, which is also focused on combating deepfakes.

“Although we deployed a number of techniques to eventually find these variants, including video and audio matching technology, we realized that this is an area where we need to invest in further research,” Rosen conceded in the blog post.

Facebook’s announcement comes less than one day after a collection of world leaders, including New Zealand Prime Minister Jacinda Ardern, called on tech companies to sign a pledge to increase their efforts to combat toxic content.

According to people working for the French Economy Ministry, the Christchurch Call doesn’t contain any specific recommendations for new regulation. Rather, countries can decide what they mean by violent and extremist content.

“For now, it’s a focus on an event in particular that caused an issue for multiple countries,” French Digital Minister Cédric O said in a briefing with journalists.

After year-long lockout, Twitter is finally giving people their accounts back

Twitter is finally allowing a number of locked-out users to regain control of their accounts. Around a year after Europe’s new privacy laws (GDPR) rolled out, Twitter began booting users out of their accounts if it suspected the account’s owner was underage — that is, younger than 13. But the process also locked out many users who said they were now old enough to use Twitter’s service legally.

While Twitter’s rules had stated that users under 13 can’t create accounts or post tweets, many underage users did so anyway thanks to lax enforcement of the policy. The GDPR regulations, however, forced Twitter to address the issue.

But even if the Twitter users were old enough to use the service when the regulations went into effect in May 2018, Twitter still had to figure out a technical solution to delete all the content published to its platform when those users were underage.

The lock-out approach was an aggressive way to deal with the problem.

By comparison, another app favored by underage users, TikTok, was recently fined by the FTC for being in violation of U.S. children’s privacy law, COPPA. But instead of kicking out all its underage users for months on end, it forced an age gate to appear in the app after it deleted all the videos made by underage users. Those users who were still under 13 were then redirected to a new COPPA-compliant experience.

Although Twitter was forced to address the problem because of the new regulations, lest it face possible fines, the company seemingly didn’t prioritize a fix. For example, VentureBeat reported how Twitter emailed users in June 2018 saying they’d be in touch with an update about the problem soon, but no update ever arrived.

The hashtag #TwitterLockOut became a common occurrence on Twitter and cries of “Give us back our accounts!” would be found in the Replies whenever Twitter shared other product news on its official accounts. (Well, that and requests for an Edit button, of course.) 

Twitter says that it’s now beginning — no, for real this time! — to give the locked out users control of their accounts. The process will roll out in waves as it scales up, with those who have waited the longest getting their emails first.

It also claims the process “was a lot more complicated” than anticipated, which is why it took a year (or in some cases, more than a year) to complete.

However, there are some caveats.

The users will first need to give Twitter permission to delete any tweets posted before they were 13, as well as any likes, DMs sent or received, moments, lists, and collections. Twitter will also need to remove all profile information besides the account’s username and date of birth.

In other words, the company is offering users a way to reclaim their username but none of their content.

Though many of these users have since moved on to new Twitter accounts, they may still want to reclaim their old username if it was a good one. In addition, their follower/following counts will return to normal within 24 hours of them taking back control of the account.

Twitter says it’s starting to email those who are eligible with these details today. If the user doesn’t have an email address associated with the account, they can instead log into the account, where they’ll see a “Get Started” button to kick off the process.

To proceed, users will have to confirm their name and either the email or phone number that was associated with the account.

The account isn’t immediately unlocked after the steps are completed, users report. But Twitter’s dialog box informs the users they’ll be notified when the process is finalized on Twitter’s side.

Hopefully, that won’t take another year.

Image credits (of the process): Reddit user nyuszika7h, via r/Twitter 

World leaders ask tech giants to tackle toxic content with Christchurch Call

On Wednesday, New Zealand Prime Minister Jacinda Ardern will ask tech companies to sign a pledge called the Christchurch Call, as The New York Times previously reported. Digital ministers of the Group of 7 nations are meeting tomorrow to talk about toxic content and tech regulation.

The Christchurch Call is the first result of that work and a way to start involving tech companies with a nonbinding pledge. Named after the terrorist attack in Christchurch, the agreement asks tech platforms to increase their efforts when it comes to blocking toxic content. In other words, democracies don’t want another shooting video going viral, but they also don’t want to block Facebook, YouTube or Twitter altogether.

According to people working for the French Economy Ministry, the Christchurch Call doesn’t contain any specific recommendations for new regulation. Countries get to decide, for instance, what they mean by violent and extremist content.

“For now, it’s a focus on an event in particular that caused an issue for multiple countries,” French Digital Minister Cédric O said in a meeting with a few journalists.

Companies that sign the pledge agree to improve their moderation processes and share more information about the work they’re doing to prevent terrorist content from going viral. On the other side, governments agree to work on laws that ban toxic content from social networks.

Tomorrow, a handful of countries are expected to sign the Christchurch Call. According to French government officials, members of the Group of 7 nations should sign it, though the U.S. might not. New Zealand, Norway and a handful of countries that are not part of the Group of 7 should also sign the pledge.

After that, it’ll be up to tech companies to side with those governments and say that they have heard their plea. It’s a nonbinding agreement, after all, so I’m sure many social networks will see it as a gesture of goodwill.

In addition to digital ministers and government officials, the French Economy Ministry says that representatives from Microsoft, Facebook, Twitter, Snap, Mozilla, Google, Qwant, the Wikimedia Foundation and the Web Foundation will be there on Wednesday.

So you can expect that some, if not all of them, will sign the pledge. The New York Times says that Facebook, Google and Microsoft have already agreed to sign the pledge.

Looking back at Zoom’s IPO with CEO Eric Yuan

Since its IPO in mid-April, Zoom’s stock has skyrocketed, up nearly 30% as of Monday’s open. However, as the company’s valuation continues to tick up, analysts and industry pundits are now diving deeper to try to unravel what the company’s future growth might look like.

TechCrunch’s venture capital ace Kate Clark has been following the story with a close eye and will be sitting down for an exclusive conversation with Zoom CEO Eric Yuan on Wednesday at 10:00 am PT. Eric, Kate and Extra Crunch members will take a look back at the company’s listing process and Zoom’s road to IPO.

Tune in to join the conversation and for the opportunity to ask Eric and Kate any and all things Zoom.

To listen to this and all future conference calls, become a member of Extra Crunch.