Two years on from the U.S. presidential election, Facebook continues to have a major problem with Russian disinformation being megaphoned via its social tools.
In a blog post today the company reveals another tranche of Kremlin-linked fake activity — saying it’s removed a total of 471 Facebook pages and accounts, as well as 41 Instagram accounts, which were being used to spread propaganda in regions where Putin’s regime has sharp geopolitical interests.
In its latest reveal of “coordinated inauthentic behavior” — aka the euphemism Facebook uses for disinformation campaigns that rely on its tools to generate a veneer of authenticity and plausibility in order to pump out masses of sharable political propaganda — the company says it identified two operations, both originating in Russia, and both using similar tactics without any apparent direct links between the two networks.
One operation was targeting Ukraine specifically, while the other was active in a number of countries in the Baltics, Central Asia, the Caucasus, and Central and Eastern Europe.
“We’re taking down these Pages and accounts based on their behavior, not the content they post,” writes Facebook’s Nathaniel Gleicher, head of cybersecurity policy. “In these cases, the people behind this activity coordinated with one another and used fake accounts to misrepresent themselves, and that was the basis for our action.”
Discussing the Russian disinformation op targeting multiple countries, Gleicher says Facebook found what looked like innocuous or general interest pages to be linked to employees of Kremlin propaganda outlet Sputnik, with some of the pages encouraging protest movements and pushing other Putin lines.
“The Page administrators and account owners primarily represented themselves as independent news Pages or general interest Pages on topics like weather, travel, sports, economics, or politicians in Romania, Latvia, Estonia, Lithuania, Armenia, Azerbaijan, Georgia, Tajikistan, Uzbekistan, Kazakhstan, Moldova, Russia, and Kyrgyzstan,” he writes. “Despite their misrepresentations of their identities, we found that these Pages and accounts were linked to employees of Sputnik, a news agency based in Moscow, and that some of the Pages frequently posted about topics like anti-NATO sentiment, protest movements, and anti-corruption.”
Facebook has included some sample posts from the removed accounts in the blog which show a mixture of imagery being deployed — from a photo of a rock concert, to shots of historic buildings and a snowy scene, to obviously militaristic and political protest imagery.
In all, Facebook says it removed 289 Pages and 75 Facebook accounts associated with this Russian disop, adding that around 790,000 accounts followed one or more of the removed Pages.
It also reveals that it received around $135,000 for ads run by the Russian operators (specifying this was paid for in euros, rubles, and U.S. dollars).
“The first ad ran in October 2013, and the most recent ad ran in January 2019,” it notes, adding: “We have not completed a review of the organic content coming from these accounts.”
These Kremlin-linked Pages also hosted around 190 events — with the first scheduled for August 2015, according to Facebook, and the most recent scheduled for January 2019. “Up to 1,200 people expressed interest in at least one of these events. We cannot confirm whether any of these events actually occurred,” it further notes.
Facebook adds that open source reporting and work by partners that investigate disinformation helped identify the network.
It also says it has shared information about the investigation with U.S. law enforcement, the U.S. Congress, other technology companies, and policymakers in impacted countries.
In the case of the Ukraine-targeted Russian disop, Facebook says it removed a total of 107 Facebook Pages, Groups, and accounts, and 41 Instagram accounts, specifying that it was acting on an initial tip off from U.S. law enforcement.
In all, it says around 180,000 Facebook accounts were following one or more of the removed pages, while the fake Instagram accounts were being followed by more than 55,000 accounts.
Again Facebook received money from the disinformation purveyors, saying it took in around $25,000 in ad spending on Facebook and Instagram in this case — all paid for in rubles this time — with the first ad running in January 2018, and the most recent in December 2018. (Again it says it has not completed a review of content the accounts were generating.)
“The individuals behind these accounts primarily represented themselves as Ukrainian, and they operated a variety of fake accounts while sharing local Ukrainian news stories on a variety of topics, such as weather, protests, NATO, and health conditions at schools,” writes Gleicher. “We identified some technical overlap with Russia-based activity we saw prior to the US midterm elections, including behavior that shared characteristics with previous Internet Research Agency (IRA) activity.”
In the Ukraine case it says it found no Events being hosted by the pages.
“Our security efforts are ongoing to help us stay a step ahead and uncover this kind of abuse, particularly in light of important political moments and elections in Europe this year,” adds Gleicher. “We are committed to making improvements and building stronger partnerships around the world to more effectively detect and stop this activity.”
A month ago Facebook also revealed it had removed another batch of politically motivated fake accounts. In that case the network behind the pages had been working to spread misinformation in Bangladesh 10 days before the country’s general elections.
This week it also emerged the company is extending some of its nascent election security measures by bringing in requirements for political advertisers to more international markets ahead of major elections in the coming months, such as checks that a political advertiser is located in the country.
However, in other countries that also have big votes looming this year, Facebook has yet to announce any measures to combat politically charged fakes.
Academics at the universities of Oxford and Stanford think Facebook should give users greater transparency and control over the content they see on its platform.
They also believe the social networking giant should radically reform its governance structures and processes to throw more light on content decisions, including by looping in more external experts to steer policy.
Such changes are needed to address widespread concerns about Facebook’s impact on democracy and on free speech, they argue in a report published today, which includes a series of recommendations for reforming Facebook (entitled Glasnost! Nine Ways Facebook Can Make Itself a Better Forum for Free Speech and Democracy).
“There is a great deal that a platform like Facebook can do right now to address widespread public concerns, and to do more to honour its public interest responsibilities as well as international human rights norms,” writes lead author Timothy Garton Ash.
“Executive decisions made by Facebook have major political, social, and cultural consequences around the world. A single small change to the News Feed algorithm, or to content policy, can have an impact that is both faster and wider than that of any single piece of national (or even EU-wide) legislation.”
Here’s a rundown of the report’s nine recommendations:
Tighten Community Standards wording on hate speech — the academics argue that Facebook’s current wording on key areas is “overbroad, leading to erratic, inconsistent and often context-insensitive takedowns;” and also generating “a high proportion of contested cases.” Clear and tighter wording could make consistent implementation easier, they believe.
Hire more and contextually expert content reviewers — “the issue is quality as well as quantity,” the report points out, pressing Facebook to hire more human content reviewers plus a layer of senior reviewers with “relevant cultural and political expertise;” and also to engage more with trusted external sources such as NGOs. “It remains clear that AI will not resolve the issues with the deeply context-dependent judgements that need to be made in determining when, for example, hate speech becomes dangerous speech,” they write.
Increase “decisional transparency” — Facebook still does not offer adequate transparency around content moderation policies and practices, they suggest, arguing it needs to publish more detail on its procedures, including specifically calling for the company to “post and widely publicize case studies” to provide users with more guidance and to provide potential grounds for appeals.
Expand and improve the appeals process — also on appeals, the report recommends Facebook give reviewers much more context around disputed pieces of content, and also provide appeals statistics data to analysts and users. “Under the current regime, the initial internal reviewer has very limited information about the individual who posted a piece of content, despite the importance of context for adjudicating appeals,” they write. “A Holocaust image has a very different significance when posted by a Holocaust survivor or by a Neo-Nazi.” They also suggest Facebook should work in dialogue with users, such as with the help of a content policy advisory group, on developing an appeals due process that is more functional and usable for the average user.
Provide meaningful News Feed controls for users — the report suggests Facebook users should have more meaningful controls over what they see in the News Feed, with the authors dubbing current controls “altogether inadequate” and advocating for far more, such as the ability to switch off the algorithmic feed entirely (without the chronological view defaulting back to the algorithmic one when the user reloads, as is currently the case for anyone who switches away from the AI-controlled view). The report also suggests adding a News Feed analytics feature, to give users a breakdown of the sources they’re seeing and how that compares with control groups of other users. Facebook could also offer a button to let users adopt a different perspective by exposing them to content they don’t usually see, they suggest.
Expand context and fact-checking facilities — the report pushes for “significant” resources to be ploughed into identifying “the best, most authoritative, and trusted sources” of contextual information for each country, region and culture — to help feed Facebook’s existing (but still inadequate and not universally distributed) fact-checking efforts.
Establish regular auditing mechanisms — there have been some civil rights audits of Facebook’s processes (such as this one, which suggested Facebook formalizes a human rights strategy), but the report urges the company to open itself up to more of these, suggesting the model of meaningful audits should be replicated and extended to other areas of public concern, including privacy, algorithmic fairness and bias, diversity and more.
Create an external content policy advisory group — key content stakeholders from civil society, academia and journalism should be enlisted by Facebook for an expert policy advisory group to provide ongoing feedback on its content standards and implementation; as well as also to review its appeals record. “Creating a body that has credibility with the extraordinarily wide geographical, cultural, and political range of Facebook users would be a major challenge, but a carefully chosen, formalized, expert advisory group would be a first step,” they write, noting that Facebook has begun moving in this direction but adding: “These efforts should be formalized and expanded in a transparent manner.”
Establish an external appeals body — the report also urges “independent, external” ultimate control of Facebook’s content policy, via an appeals body that sits outside the mothership and includes representation from civil society and digital rights advocacy groups. The authors note Facebook is already flirting with this idea, citing comments made by Mark Zuckerberg last November, but also warn this needs to be done properly if power is to be “meaningfully” devolved. “Facebook should strive to make this appeals body as transparent as possible… and allow it to influence broad areas of content policy… not just rule on specific content takedowns,” they warn.
In conclusion, the report notes that the content issues it’s focused on are not only attached to Facebook’s business but apply widely across various internet platforms — hence growing interest in some form of “industry-wide self-regulatory body.” Though it suggests that achieving that kind of overarching regulation will be “a long and complex task.”
In the meanwhile, the academics remain convinced there is “a great deal that a platform like Facebook can do right now to address widespread public concerns, and to do more to honour its public interest responsibilities, as well as international human rights norms” — with the company front and center of the frame given its massive size (2.2 billion+ active users).
“We recognize that Facebook employees are making difficult, complex, contextual judgements every day, balancing competing interests, and not all those decisions will benefit from full transparency. But all would be better for more regular, active interchange with the worlds of academic research, investigative journalism, and civil society advocacy,” they add.
We’ve reached out to Facebook for comment on their recommendations.
The report was prepared by the Free Speech Debate project of the Dahrendorf Programme for the Study of Freedom, St. Antony’s College, Oxford, in partnership with the Reuters Institute for the Study of Journalism, University of Oxford, the Project on Democracy and the Internet, Stanford University and the Hoover Institution, Stanford University.
Last year we offered a few of our own ideas for fixing Facebook — including suggesting the company hire orders of magnitude more expert content reviewers, as well as providing greater transparency into key decisions and processes.
A new study is making waves in the worlds of tech and psychology by questioning the basis of thousands of papers and analyses with conflicting conclusions on the effect of screen time on well-being. The researchers’ claim is that the science doesn’t agree because it’s bad science. So is screen time good or bad? It’s not that simple.
The conclusions only make the mildest of claims about screen time, essentially that as defined it has about as much effect on well-being as potato consumption. Instinctively we may feel that not to be true; technology surely has a greater effect than that — but if it does, we haven’t found a way to judge it accurately.
The paper, by Oxford scientists Amy Orben and Andrew Przybylski, amounts to a sort of king-sized meta-analysis of studies that come to some conclusion about the relationship between technology and well-being among young people.
Their concern was that the large data sets and statistical methods employed by researchers looking into the question — for example, thousands and thousands of survey responses interacting with weeks of tracking data for each respondent — allowed for anomalies or false positives to be claimed as significant conclusions. It’s not that people are doing this on purpose necessarily, only that it’s a natural result of the approach many are taking.
“Unfortunately,” write the researchers in the paper, “the large number of participants in these designs means that small effects are easily publishable and, if positive, garner outsized press and policy attention.” (We’re a part of that equation, of course, but speaking for myself, at least, I try to take such studies, this one included, with a grain of salt.)
In order to show this, the researchers essentially redid the statistical analysis for several of these large data sets (Orben explains the process here), but instead of only choosing one result to present, they collected all the plausible ones they could find.
For example, imagine a study where the app use of a group of kids was tracked, and they were surveyed regularly on a variety of measures. The resulting (fictitious, I hasten to add) paper might say it found kids who use Instagram for more than two hours a day are three times as likely to suffer depressive episodes or suicidal ideations. What the paper doesn’t say, and which this new analysis could show, is that the bottom quartile is far more likely to suffer from ADHD, or the top five percent reported feeling they had a strong support network.
In the new study, any and all statistically significant results like those I just made up are detected and compared with one another. Maybe a study came out six months later that found the exact opposite in terms of ADHD but also didn’t state it as a conclusion.
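The approach the researchers took, rerunning every defensible version of an analysis rather than reporting a single favorable one, is known as specification curve analysis. Here is a minimal, illustrative sketch of the idea in Python; the dataset, variable names, and values are all synthetic inventions for the example and do not come from the paper:

```python
import itertools
import random
import statistics

random.seed(0)

# Toy dataset: each "respondent" has two tech-use measures and two
# well-being measures. All names and distributions are hypothetical.
n = 500
data = [{
    "tv_hours": random.gauss(2, 1),
    "social_hours": random.gauss(1.5, 1),
    "life_satisfaction": random.gauss(7, 1.5),
    "self_esteem": random.gauss(5, 1),
} for _ in range(n)]

def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

predictors = ["tv_hours", "social_hours"]
outcomes = ["life_satisfaction", "self_esteem"]

# Run every plausible specification, not just one favourable pairing.
effects = []
for pred, out in itertools.product(predictors, outcomes):
    xs = [d[pred] for d in data]
    ys = [d[out] for d in data]
    effects.append(((pred, out), slope(xs, ys)))

# Instead of cherry-picking one result, summarize the whole curve.
effects.sort(key=lambda kv: kv[1])
median_effect = statistics.median(e for _, e in effects)
print(f"{len(effects)} specifications, median effect: {median_effect:.3f}")
```

With independent random data, individual specifications can still show small nonzero slopes, which is exactly the point: the median across all specifications, not the single most publishable one, is the honest summary of the effect.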
This figure from the paper shows a few example behaviors that have more or less of an effect on well-being.
Ultimately what the Oxford study found was that there is no consistent good or bad effect, and although a very slight negative effect was noted, it was small enough that factors like having a single parent or needing to wear glasses were far more important.
Yet, and this is important to understand, the study does not conclude that technology has no negative or positive effect; such a broad conclusion would be untenable on its face. The data it rounds up are (as some experts point out, with no ill will toward the paper) simply inadequate to the task, and technology use is too variable to reduce to a single factor. Its conclusion is that studies so far have in fact been inconclusive and we need to go back to the drawing board.
“The nuanced picture provided by these results is in line with previous psychological and epidemiological research suggesting that the associations between digital screen-time and child outcomes are not as simple as many might think,” the researchers write.
Could, for example, social media use affect self-worth, either positively or negatively? Could be! But the ways that scientists have gone about trying to find out have, it seems, been inadequate.
In the future, the authors suggest, researchers should not only design their experiments more carefully, but be more transparent about their analysis. By committing to document all significant links in the data set they create, whether they fit the narrative or hypothesis or go against it, researchers show that they have not rigged the study from the start. Designing and iterating with this responsibility in mind will produce better studies and perhaps even some real conclusions.
What should parents, teachers, siblings and others take away from this? Not anything about screen time or whether tech is good or bad, certainly. Rather let it be another instance of the frequently learned lesson that science is a work in progress and must be considered very critically before application.
Your kid is an individual, and things like social media and technology affect them differently from other kids; it may very well be that your informed opinion of their character and habits, tempered with that of a teacher or psychologist, is far more accurate than the “latest study.”
Twitter has made a name for itself, at its most basic level, as a platform that gives everyone who uses it a voice. But as it has grown, that unique selling point has set Twitter up for as many challenges (harassment, confusing ways to manage conversations) as opportunities (being the best place to see in real time how the public reacts to something, be it a TV show, a political uprising, or a hurricane).
Now, to fix some of the challenges, the company is going to eat its own dogfood (birdfood?) when it comes to having a voice.
In the coming weeks, it’s going to launch a new beta program in which a select group of users will get access to new features, by way of a standalone app, to use and discuss with others. Twitter, in turn, will use the data it picks up from that usage and chatter to decide how, and whether, to turn those tests into full-blown product features for the rest of its user base.
We sat down with Sara Haider, Twitter’s director of product management, to take a closer look at the new app and what features Twitter will be testing in it (and what it won’t), now and in the future.
The company already runs an Experiments Program for testing, as well as other tests (to curb abusive behavior, for example) aimed at helping the service run more smoothly. This new beta program will operate differently.
While there will only be around a couple thousand participants, those accepted will not be under NDA (unlike the Experiments Program). That means they can publicly discuss and tweet about the new features, allowing the wider Twitter community to comment and ask questions.
And unlike traditional betas, where users test nearly completed features before a public launch, the feedback from the beta could radically change the direction of what’s being built. Or, in some cases, what’s not.
The first version of the beta will focus on a new design for the way conversation threads work on Twitter. This includes a different color scheme, and visual cues to highlight important replies.
“It’s kind of a new take on our thinking about product development,” explains Haider. “One of the reasons why this is so critical for this particular feature is because we know we’re making changes that are pretty significant.”
She says changes of this scale shouldn’t just be dropped on users one day.
“We need you to be part of this process, so that we know we’re building the right experience,” Haider says.
Once accepted into the beta program, users will download a separate beta app – something that Twitter isn’t sure will always be the case. It’s unclear if that process will create too much friction, the company says, so it will see how testers respond.
Here are some of the more interesting features we talked about and saw being tested in the build we were shown:
During the first beta, participants will try out new conversation features which offer color-coded replies to differentiate between responses from the original poster of the tweet, those from people you follow, and those from people you don’t follow.
In a development build of the beta app, Haider showed us what this looked like, with the caveat that the color scheme being used has been intentionally made to be overly saturated – it will be dialed down when the features launch to testers.
When you click into a conversation thread, the beta app will also offer visual cues to help you better find the parts of the thread that are of interest to you.
One way it’s doing so is by highlighting the replies in a thread that were written by people you follow on Twitter. Another change is that the person who posted the original tweet will also have their own replies in the thread highlighted.
In the build Haider showed us, replies from people she followed were shown in green, those from non-followers were blue, and her own replies were blue.
Algorithmically sorted responses
One of the big themes in Twitter’s user experience is that both power users and more casual users come up with workarounds for features that Twitter does not offer.
Take reading through long threads that may have some interesting detail that you would like to come back to later, or that branches off at some point that you’d like to follow after reading through everything else. Haider says she marks replies she’s seen with a heart to keep her place. Other people use Twitter’s “Tweets & Replies” section to find out when the original poster had replied within the thread, since it’s hard to find those replies when just scrolling down.
Now, the same kind of algorithmic sorting that Twitter has applied to your main timeline might start to make its way to your replies. These may also now be shown in a ranked order, so the important ones — like those from your Twitter friends — are moved to the top.
A later test may involve a version of Twitter’s Highlights, summaries of what it deems important, coming to longer threads, Haider said.
The time-based view is not going to completely leave, however. “The buzz, that feeling and that vibe [of live activity] that is something that we never want to lose,” CEO and co-founder Jack Dorsey said last week on stage at CES. “Not everyone will be in the moment at the exact same time, but when you are, it’s an electrifying feeling…. Anything we can do to make a feeling of something much larger than yourself [we should].”
Removing hearts + other engagement icons
Another experiment Twitter is looking at is what it should do with its engagement buttons to streamline the look of replies for users. The build that we saw did not have any hearts to favorite/like Tweets, nor any icons for retweets or replies, when the Tweets came in the form of replies to another Tweet.
The icons and features didn’t completely disappear, but they would only appear when you tapped on a specific post. The basic idea seems to be: engagement for those who want it, a more simplified view for those who do not.
The heart icon has been a subject of speculation for some time now. Last year, the company told us that it was considering removing it, as part of an overall effort to improve the quality of conversation. This could be an example of how Twitter might implement just that.
Twitter may also test other things like icebreakers (pinned tweets designed to start conversations), and a status update field (i.e. your availability, location, or what you are doing, as on IM).
The status test, in fact, points to a bigger shift we may see in how Twitter as a whole is used, especially by those who come to the platform around a specific event.
One of the biggest laments has been that onboarding — the experience for those coming to Twitter for the first time — continues to be confusing. Twitter admits as much itself, and so — as with its recent deal with the NBA to provide a unique Twitter experience around a specific game — it will be making more tweaks and tests to figure out how to move Twitter on from being fundamentally focused around the people you follow.
“We have some work to do to make it easier to discover,” Dorsey said, adding that right now the platform is “more about people than interests.”
While all products need to evolve over time, Twitter in particular seems a bit obsessed with continually changing the basic mechanics of how its app operates.
It seems that there are at least a couple of reasons for that. One is that, although the service continues to see some growth in its daily active users, its monthly active users globally have been either flat, in decline, or growing by a mere two percent in the last four quarters (and in decline in the last three of the four quarters in the key market of the US).
That underscores how the company still has some work to do to keep people engaged.
The other is that change and responsiveness seem to be the essence of how Twitter wants to position itself these days. Last week, Dorsey noted that Twitter itself didn’t invent most of the ways the platform gets used today. (The “RT” retweet, now a button in the app; the hashtag; tweetstorms; expanded tweets; and even the now-ubiquitous @mention are all examples of features that weren’t originally created by Twitter, but were added based on how the app was used.)
“We want to continue our power of observation and learning… what people want Twitter to be and how to use it,” Dorsey said. “It allows us to be valuable and relevant.”
While these continual changes can sometimes make things more confusing, the beta program could potentially head off any design mistakes, uncover issues Twitter itself may have missed, and help Twitter harness that sort of viral development in a more focused way.
A TechCrunch-commissioned report has found damning evidence against Microsoft’s search engine. Our findings show a massive failure on Microsoft’s part to adequately police its Bing search engine and to prevent suggested searches and images from assisting pedophiles.
Unity, the widely popular gaming engine, has pulled the rug out from underneath U.K.-based cloud gaming startup Improbable and revoked its license — effectively shutting them out from a top customer source. The conflict arose after Unity claimed Improbable broke the company’s Terms of Service and distributed Unity software on the cloud.
Just when you thought things were going south for Improbable, the company inked a late-night deal with Unity competitor Epic Games to establish a fund geared toward open gaming engines. This raises the question of how Unity and Improbable’s relationship managed to sour so quickly and so publicly.
WeChat boasts more than 1 billion daily active users, but user growth is starting to hit a plateau. That’s been expected for some time, but it is forcing the Chinese juggernaut to build new features to generate more time spent on the app to maintain growth.
The creator behind games like Halo and Destiny is splitting from its publisher Activision to go its own way. This is good news for gamers, as Bungie will no longer be under the strict deadlines of a big gaming studio that plagued the launch of Destiny and its sequel.
The leaking server was — ironically — a bug-reporting server, running the popular Jira bug triaging and tracking software. In NASA’s case, the software wasn’t properly configured, allowing anyone to access the server without a password.
This week Samsung made a surprise announcement during its CES press conference and unveiled three new consumer and retail robots and a wearable exoskeleton. It was a pretty massive reveal, but the company’s look-but-don’t-touch approach raised far more questions than it answered.
Researchers at Michigan State University are exploring the idea that there’s more to “social media addiction” than casual joking about being too online might suggest. Their paper, titled “Excessive social media users demonstrate impaired decision making in the Iowa Gambling Task” (Meshi, Elizarova, Bender, & Verdejo-Garcia) and published in the Journal of Behavioral Addictions, indicates that people who use social media sites heavily actually display some of the behavioral hallmarks of someone addicted to cocaine or heroin.
The study asked 71 participants to first rate their own Facebook usage with a measure known as the Bergen Facebook Addiction Scale. The study subjects then went on to complete something called the Iowa Gambling Task (IGT), a classic research tool that evaluates impaired decision making. The IGT presents participants with four virtual decks of cards associated with rewards or punishments and asks them to choose cards from the decks to maximize their virtual winnings. As the study explains, “Participants are also informed that some decks are better than others and that if they want to do well, they should avoid the bad decks and choose cards from the good decks.”
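The IGT’s payoff structure is simple enough to sketch in code. The deck parameters below approximate the classic scheme (the “bad” decks pay $100 per card but lose money over ten cards; the “good” decks pay $50 but win overall); the two toy choice policies are illustrative assumptions of mine, not anything measured in this study:

```python
import random

random.seed(1)

# Approximate classic IGT payoffs: over 10 cards, decks A/B net -$250
# (immediately tempting, bad long term) while C/D net +$250.
DECKS = {
    "A": {"gain": 100, "loss": 250, "loss_prob": 0.5},   # bad: -25/card on average
    "B": {"gain": 100, "loss": 1250, "loss_prob": 0.1},  # bad: -25/card on average
    "C": {"gain": 50, "loss": 50, "loss_prob": 0.5},     # good: +25/card on average
    "D": {"gain": 50, "loss": 250, "loss_prob": 0.1},    # good: +25/card on average
}

def draw(deck_name):
    """Draw one card: collect the gain, sometimes take the loss."""
    d = DECKS[deck_name]
    payoff = d["gain"]
    if random.random() < d["loss_prob"]:
        payoff -= d["loss"]
    return payoff

def play(policy, trials=100):
    """Run one simulated participant; policy(trial) returns a deck name."""
    return sum(draw(policy(t)) for t in range(trials))

# Two toy policies: an "impulsive" player keeps chasing the big
# immediate gains; a "learner" explores, then sticks to the good decks.
impulsive = lambda t: random.choice(["A", "B"])
learner = lambda t: random.choice("ABCD") if t < 20 else random.choice(["C", "D"])

def mean_score(policy, players=200):
    """Average winnings across many simulated participants."""
    return sum(play(policy) for _ in range(players)) / players

print(f"impulsive avg: {mean_score(impulsive):.0f}")
print(f"learner avg:   {mean_score(learner):.0f}")
```

Averaged over many simulated participants, the impulsive policy loses money while the learner comes out ahead, which is the asymmetry the IGT is designed to surface: the bad decks are only detectable as bad once a player has accumulated enough experience to look past the immediate rewards.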
What the researchers found was telling. Study participants who self-reported as excessive Facebook users actually performed worse than their peers on the IGT, frequenting the two “bad” decks that offer immediate gains but ultimately result in losses. That difference in behavior was statistically significant in the later portion of the IGT, when participants have had ample time to observe the decks’ patterns and know which decks present the greatest risk.
The IGT has been used to study everything from patients with frontal lobe brain injuries to heroin addicts, but using it to examine social media addicts is novel. The result suggests that, alongside deeper structural research, much of the existing methodological framework for studying substance addiction can be applied to social media users.
The study is narrow but interesting, and it offers a few paths for follow-up research. As the researchers recognize, an ideal study would directly observe participants’ social media usage and sort them into categories of high or low usage based on that behavior, rather than on a survey they fill out.
Future research could also delve more deeply into excessive users across different social networks. The study only looked at Facebook use, “because it is currently the most widely used [social network] around the world,” but one could expect similar results among Instagram’s billion-plus monthly users and, potentially, Twitter’s substantially smaller user base.
Ultimately, we know that social media is shifting human behavior, and potentially its neurological underpinnings; we just don’t know the extent of it yet. Due to the methodical nature of behavioral research and the often protracted process of publishing it, we likely won’t see the results of studies conducted now for years to come. Still, as this study shows, researchers are at work examining how social media is impacting our brains and our behavior; we just might not be able to see the big picture for some time.
Twitter says it’s going to make it easier for publishers to better understand what sort of content is resonating with their readers on the social network. The company this morning at CES briefly discussed a concept for a new publisher dashboard offering insights and analytics that can better inform their content strategy.
The company clarified the dashboard is still very much an “early concept.”
However, the idea is to offer publishers an easy way to see who on Twitter is reading and engaging with their content, when they’re viewing it, and what content is working best.
The goal is to allow publishers to better optimize what they produce to make it effective, the company said.
In addition, Twitter is working on another publisher tool – an events dashboard that will show what events are coming up, including breaking news events.
For example, an event like the Consumer Electronics Show in Las Vegas would be the type of event that would appear on this dashboard.
This will allow the publishers to figure out – in advance – how they want to participate in that conversation on Twitter.
The company also discussed how events would appear on Twitter, explaining that it’s trying to make it easier for newcomers to the network to follow events without needing to know the hashtag.
“We know people want to come to see what’s happening. And particularly, they want to come to Twitter to see what’s happening when events are unfolding in the real world,” said Keith Coleman, VP, Product at Twitter, speaking on stage at CES this morning.
“If you think about the experience of actually following that – it’s hard. You have to follow the publications, you have to follow the journalists, you have to follow the attendees whose names you don’t even know. You don’t have all the hashtags,” he said.
The events section, meanwhile, will organize this information for you, so you can “tune in” to the live events, without having to know who or what to follow.
Events will be pinned to the top of the timeline, in Explore and accessible through Search, he said.
Short-form video app TikTok has been growing in popularity across international markets, including in the U.S. where a merger with Musical.ly has seen the app topping the App Store charts. Facebook and Snapchat have been hastily trying to copy TikTok’s features as a result. A part of TikTok’s ambitious global expansion plan has been its more recent targeting of emerging markets — like India and Indonesia — where the company’s quietly launched “TikTok Lite” app has been gaining ground in the latter half of 2018.
TikTok hasn’t yet made much fuss over its Lite version, which actually consists of two separate apps.
The first was launched on August 6, 2018 in Thailand, but is now available across other primarily Asian markets, including Indonesia, where it’s most popular, as well as Vietnam, Malaysia and the Philippines.
This version of TikTok Lite has grown to 5 million installs since its August debut. (It’s actually written with a lowercase “l” in “Lite” in the Play store, which is how you can tell the difference between this and the other app.)
This version was also briefly live in India, Brazil and Russia, but now these countries are served by a separate Lite app (written as “Lite” with an uppercase “L”), which launched on November 1, 2018.
This second version of TikTok Lite has now become the larger of the two, thanks to India. It has around 7.1 million total downloads, according to Sensor Tower data.
It has also now been installed across 15 additional non-Asian countries, including Egypt, Brazil, Algeria, Tunisia, Russia, Ecuador, South Africa, Dominican Republic, Guatemala, Kenya, Costa Rica, El Salvador, Nigeria, Angola and Ghana.
Combined, the two TikTok Lite apps have gained more than 12 million downloads in around six months’ time, TechCrunch confirmed with Sensor Tower.
However, TikTok Lite is not being heavily promoted at this time — especially when compared with the outsize marketing that TikTok’s flagship app has been seeing as of late.
Sensor Tower found that more than half of TikTok Lite’s downloads came over the past month, following TikTok Lite’s return to India. Combined, the two Lite apps’ downloads reached around 6.7 million in December, which was a 158 percent increase over November’s 2.6 million installs, it said.
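For those keeping score, the reported month-over-month jump checks out; here’s a quick back-of-the-envelope calculation using the rounded install counts cited above (an illustrative check, not Sensor Tower’s methodology):

```python
november_installs = 2.6e6  # combined Lite installs, November
december_installs = 6.7e6  # combined Lite installs, December

# Percent increase from November to December
pct_increase = (december_installs - november_installs) / november_installs * 100
print(f"{pct_increase:.0f}%")  # roughly 158%, matching the reported figure
```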
Despite TikTok Lite’s growth, 12 million+ downloads is only a drop in the bucket when it comes to TikTok’s larger user base. This represents only 4.5 percent of TikTok’s downloads on Google Play since August 2018, and only about 3.6 percent of all TikTok downloads since then across both the iOS and Google Play app stores combined.
To date, TikTok’s main app has been downloaded more than 887 million times on Google Play. That doesn’t count the downloads from the Chinese version called Douyin, which is found on third-party Android app stores. That means TikTok’s true install base is even bigger.
ByteDance itself had publicly said last July the TikTok user base had grown to 500 million+ monthly active users — a way of counting who’s regularly using the app instead of just installing it on their phone.
The problem seems to stem from a variety of factors, including how TikTok Lite is marketed in the Play Store. The app uses screenshots and a description that make it seem like it’s just another version of TikTok. But according to user reviews, people were disappointed to find it’s a consumption-only app. Many have left reviews complaining about how they can’t make videos. They call it “fake” and “bad” as a result.
TikTok would do better to clarify how its Lite version is different, to eliminate this confusion.
It’s common for major tech companies to offer a “lighter” version of their app for emerging markets where low bandwidth is a concern. These apps tend to be smaller in size, more performant and sometimes either have reduced capabilities or special features aimed at low-bandwidth users.
Like most “Lite” apps, TikTok Lite clocks in at a smaller size — it’s only 10MB to 11MB (depending on the version) versus the much larger 71MB of TikTok’s main app.
A rep for TikTok confirmed the company is now offering a Lite version in some markets so users can choose a smaller app if they have concerns around data or the storage space on their phone. No other information about the Lite versions or strategy was provided.
So far, ByteDance’s efforts around TikTok Lite seem more experimental, given it hasn’t put up a proper description on Google Play, runs two separate Lite versions and had offered the app in some markets briefly, pulled out, then returned with another version. It will be interesting to see what TikTok Lite becomes when it gets the sort of attention that the main TikTok app is receiving today.
My parents are approaching 60. When they were young, they hung out at diners, or drove around in their cars. My generation hung out in the parking lot after school, or at the mall. My colleague John Biggs often talks of hanging out with his nerd buddies in his basement, playing games and making crank calls.
Today, young people are hanging out on a virtual island plagued by an ever-closing fatal storm. It’s called Fortnite.
They hang out in Fortnite the way we used to hang out in basements or back yards. We played games or kicked a ball around, but it was all a pretense for the social aspect.
The thread above describes exactly what I’m talking about. Yes, people most certainly log on and play the game. Some play it very seriously. But many, especially young folks, hop on to Fortnite to socialize.
The phenomenon of “hanging out” on a game is not new.
I was in a 50-person clan in World of Warcraft in 2004, and we all hung out on a Ventrilo server for hours every day, for years and years. I saw real romantic relationships begin, grow and die there. So “x is a place” is a fine observation, but it’s not a new phenomenon.
Almost any popular game results in a community of players who connect not only through the common interest of the game itself, but as real friends who discuss their lives, thoughts, dreams, etc. But something else is afoot on Fortnite that may be far more consequential.
Gaming culture has long had a reputation for being highly toxic. To be clear, there is a difference between talking about someone’s skills in the game and making a personal attack:
“You are bad at this game.” = Fine by me
“You should kill yourself.” = Not fine at all
But many streamers and pro gamers make offensive jokes, talk shit about each other and rage when they lose. It’s not shocking, then, that the broader gaming community that tries to emulate them, especially the young men growing up in a world where e-sports are real, tend to do many of the same things.
A new type of community
But Fortnite doesn’t have the same type of community. Sure, as with any game, there are bad apples. But on the whole, there isn’t the same toxicity permeating every single part of the game.
For what it’s worth, I’ve played hundreds of hours of both Fortnite and Call of Duty over the past few years. The difference between the way I’m treated on Fortnite and Call of Duty, particularly once my game-matched teammates discover I’m a woman, is truly staggering. I’ve actually been legitimately scared by my interactions with people on Call of Duty. I’ve met some of my closest friends on Fortnite.
One such relationship is with a young man named Luke, who is set to graduate from college this spring.
During the course of our now year-long friendship, Luke revealed to me that he is gay and was having trouble coming out to his parents and peers at school. As an older gay, I tried to provide him with as much guidance and advice as possible. Being there for him, answering his phone calls when he was struggling and reminding him that he’s a unique, strong individual, has perhaps been one of the most rewarding parts of my life this past year.
I’ve also made friends with young men who, once they realize that I’m older and a woman and have a perspective that they might not, casually ask me for advice. They’ve asked me why the girl they like doesn’t seem to like them back — “don’t try to make her jealous, just treat her with kindness,” I advised, and then added “OK, make her a little jealous” — or vented to me about how their parents “are idiots” — “they don’t understand you, and you don’t understand them, but they’re doing their best for you and no one loves you like they do” — or expressed insecurity about who they are — “you’re great at Fortnite, why wouldn’t you be great at a bunch of other things?” and “have more confidence in yourself.”
(Though paraphrased, these are real conversations I’ve had with random players on Fortnite.)
There is perhaps no other setting where I might meet these young people, nor one where they might meet me. And even if we did meet, out in the real world, would we open up and discuss our lives? No. But we have this place in common, and as we multitask playing the game and having a conversation, suddenly our little hearts open up to one another in the safety of the island.
But that’s just me. I see this mentorship all the time in Fortnite, in both small and big ways.
Gaming culture is often seen as a vile thing, and there are a wide array of examples to support that conclusion. Though this perception is slowly changing, and not always fair, gamers are usually either perceived as lonely people bathed in the blue glow of the monitor light, or toxic brats who cuss, and throw out slurs, and degrade women.
So why is Fortnite any different from other games? Why does it seem to foster a community that, at the very least, doesn’t actively hate on one another?
One map, a million colors
First, it’s the game itself. Even though Fortnite includes weapons, it’s not a “violent” game. There is no blood or gore. When someone is eliminated, their character simply evaporates into a pile of brightly colored loot. The game feels whimsical and cartoonish and fun, full of dances and fun outfits. This musical, colorful world most certainly affects the mood of its players.
Logging on to Fortnite feels good, like hearing the opening music to the Harry Potter movies. Logging on to a game like, say, Call of Duty: WWII feels sad and scary, like watching the opening sequence to Saving Private Ryan.
Moreover, Fortnite Battle Royale takes place on a single large map. That map may change and evolve from time to time, but it’s even more “common ground” between players. Veterans of the game show noobs new spots to find loot or ways to get around. As my colleague Greg Kumparak said to me, “Every time you go in, you’re going to the same place. Maybe it’s skinned a little different or there’s suddenly a viking ship, but it’s home.”
Of course, there are other colorful, bubbly games that still have a huge toxicity problem. Overwatch is a great example. So what’s the difference?
Battle Royale has introduced a brand new dynamic to the world of gaming. Instead of facing off in a one-versus-one or five-versus-five scenario, as with StarCraft or Overwatch respectively, Battle Royale pits 1 versus 99, 2 versus 98 or 4 versus 96.
“It isn’t as binary as winning or losing,” said Rod “Slasher” Breslau, longtime gaming and e-sports journalist formerly of ESPN and CBS Interactive’s GameSpot. “You could place fifth and still feel satisfied about how you played.”
Breslau played Overwatch at the highest levels for a few seasons and said that it was the most frustrating game he’s ever played in 20 years of gaming. It may be colorful and bubbly, but it is built in a way that gives an individual player a very limited ability to sway the outcome of the game.
“You have all the normal problems of playing in a team, relying on your teammates to play their best and communicate and to simply have the skill to compete, but multiply that because of the way the game works,” said Breslau. “It’s very reliant on heroes, the meta is pretty stale because it’s a relatively new game, and the meta has been figured out.”
All that, combined with the fact that success in Overwatch is based on teamwork, makes it easy to get frustrated and unleash on teammates.
With Fortnite, a number of factors relieve that stress. In an ideal scenario, you match up with three other players in a Squads match and they are all cooperative. Everyone lands together, they share shield potions and weapons, communicate about nearby enemies and literally pick each other up when one gets knocked down. This type of teamwork, even among randos, fosters kindness.
In a worst-case scenario, you are matched up with players who aren’t cooperative, who use toxic language, who steal your loot or simply run off and die, leaving you alone to fight off teams of four. Even in the latter scenario, there are ways to play more cautiously: play passive and hide, third-party fights that are already underway and pick players off, or lure teams into trapped-up houses.
Sure, it’s helpful to have skilled, communicative teammates, but being matched with not-so-great teammates doesn’t send most people into a blind rage.
And because the odds are against you — 1 versus 99 in Solos or 4 versus 96 in Squads — the high of winning is nearly euphoric.
“The lows are the problem,” says Breslau. “Winning a close game of Overwatch, when the team is working together and communicating, feels great. But when you’re depending on your team to win, the lows are so low. The lows aren’t like that in Fortnite.”
The more the merrier
The popularity of Fortnite as a cultural phenomenon, not just a game, means that plenty of non-gamers have found their way onto the island. Young people, a brand new generation of gamers, are obsessed with the game. But folks who might have fallen away from gaming as they got older are still downloading it on their phone, or installing it on the Nintendo Switch, and giving Battle Royale a try. Outsiders, who haven’t been steeped in the all-too-common hatred found in the usual gaming community, are bringing a sense of perspective to Fortnite. There is simply more diversity that comes with a larger pool of players, and diversity fosters understanding.
Plus, Fortnite has solid age distribution among players. The majority (63 percent) of players on Fortnite are between the ages of 18 and 24, according to Verto Analytics. Twenty-three percent of players are ages 24 to 35, and 13 percent are 35 to 44 years old. However, this data doesn’t account for players under the age of 18, who represent 28 percent of overall gamers, according to Verto. One way Fortnite is like other games: 70 percent of its players are male.
There aren’t many scenarios where four people, from different backgrounds and age groups, join up under a common goal in the type of mood-lifting setting that Fortnite provides. More often than not, the youngest little guy tries to make some sort of offensive joke to find his social place in the group. But surprisingly, for a shoot and loot game played by a lot of people, that’s rarely tolerated by the older members of a Fortnite squad.
All eyes on Fortnite
The popularity of the game also means that more eyes are on Fortnite than any other game. Super-popular streamer Ninja’s live stream with Drake had more than 600,000 concurrent viewers, setting a record. The more people watching, the more streamers are forced to watch their behavior.
Fortnite streamers are setting a new example for gamers everywhere.
One such streamer is Nick “NickMercs” Kolcheff. Nick has been streaming Fortnite since it first came out and has a huge community of mostly male viewers. I consider myself a part of, albeit a minority in, that community — I’ve subscribed to his channel and cheered for him with bits and participated in the chat. In short, I’ve spent plenty of time watching Nick and have seen him offer a place of support and friendship for his viewers.
I’ve seen Nick’s audience ask him, in so many words, how to lose weight (Nick’s a big fitness guy), or share that they’re dealing with an illness in the family, or share that they’re heartbroken because their girlfriend cheated on them.
In large part, Nick says he learned how to be a mentor from his own dad.
“I remember being in those kinds of positions, but I have a great father that always sat me down and let me vent and then shared his opinion, and reminded me that it isn’t supposed to be easy,” said Kolcheff. “It feels good to bounce things off other people and hard things always feel much easier when you know you’re not alone, and I can relate to my chat the way my dad relates to me.”
Nick always has something positive to say. He reminds his audience that even if they feel alone IRL, they have a community right there in his Twitch channel to talk to. He sets an example in the way he talks about his girlfriend Emu, and the way he treats her on screen. When Nick loses a game and his chat explodes with anger, he reminds them to be cool and to not talk shit about other players.
And it’s easy to see his example followed in the chat, where young people are treating each other with respect and answering each other’s questions.
Nick wasn’t always like this. In fact, the first time that NickMercs and Ninja played together on stream, they brought up the time that Nick challenged Ninja to a fight at a LAN tournament years ago. But both Nick and Ninja have matured into something that you rarely find in online gaming: a role model — and it’s had an effect.
Tyler “Ninja” Blevins, far and away the most successful Twitch streamer ever, decided to stop swearing and using degrading language as his influence in the community and his viewership grew. When his audience said they missed the old Ninja, he had this to say:
I’m the same person, you guys. 2018 can’t handle old Ninja and… guess what, I can’t handle old Ninja because the words that I used to say and the gaming terms I used to say… they weren’t ok, alright? I’ve matured.
Jack “Courage” Dunlop is another Fortnite streamer who uses his influence in the community to mentor young people. He has befriended a young fellow named Connor. Courage helped Connor get his first win and has since continued playing with him and talking to him.
Not only is he being kind to Connor, but he’s setting an example for his viewers.
“In comparison to games like Call of Duty and Gears of War and Halo, the top content creators like Ninja, Sypher PK, Timthetatman, are a little older now,” said Kolcheff. “They’ve come from other games where they already had a following. If you look at me five or six years ago, or any of us, we’ve all chilled out. We were more combative and crazy and had a lot more words to say, but I think we just grew up, and it bleeds through to the community.”
These guys are the exception in the wider world of gaming and streaming. But they represent the future of gaming in general. As e-sports explode with growth, pro players will undoubtedly be held to the same behavioral standards as pro players in traditional sports. That’s not to say that pro athletes are angels, and that’s not to say that bad actors won’t have a following. Just look at PewDiePie.
A matter of time
The e-sports world is realizing that it can’t let its professionals run their mouths without consequences. As the industry grows, highly dependent on advertisers and brand endorsements and with a young audience hanging on every word, it will become increasingly important for leagues, e-sports organizations and game makers to pay closer attention to the behavior of their top players.
There is plenty more work to do. But removing toxicity from any platform is incredibly difficult. Just ask Facebook and Twitter. Still, it’s only a matter of time before e-sports decision-makers raise the stakes on what they’ll allow from their representatives: the pro players and streamers.
Toxic behavior is being rejected in polite society just about everywhere (except Twitter, because Twitter), and it surely can’t be tolerated much longer in the gaming world. But Fortnite maker Epic Games hasn’t had to put forth much effort to steer clear of toxic behavior. The community seems to be doing a pretty good job of holding itself accountable.
Winning where it counts
Believe you me, Fortnite is not some magical place filled with unicorns and rainbows. There are still players on the game who behave badly, cheat, use toxic language and are downright mean. But compared to other shooters, Fortnite is a breath of fresh air.
No one thing makes Fortnite less toxic. A beautiful, mood-lifting game can’t make much of a difference on its own. A huge, relatively diverse player base certainly makes a dent. And yes, the game limits frustration by simply managing expectations. But with leaders who have prioritized their position as role models, and all the other factors above working in harmony, Fortnite is not only the most popular game in the world, but perhaps also one of the most polite.
We reached out to Epic Games, Courage and Ninja for this story, but didn’t hear back at the time of publication.
Gfycat, a home for GIF-making tools and an online community, is rolling out a new way to create GIFs – it will now let you keep the sound on. With “Gfycat Sound,” as the feature is called, GIF makers will have the option to retain the audio from the video file they’re using to create their “GIF” – something Gfycat believes will be especially popular among gamers.
The company had already experimented with other types of non-traditional GIFs, like longer GIFs, AR GIFs, HD GIFs and 360 GIFs, in order to evolve the concept of the GIF beyond the classic, grainy loop.
Of course, the resulting GIFs aren’t necessarily “.gifs” at this point – they’re short-form videos.
The same holds true for “Gfycat Sound.” But end users don’t necessarily care about the GIFs’ technical underpinnings – they just want to create and share short clips pulled from longer pieces of content.
The company says it decided to roll out support for sound after polling its community for their top feature requests earlier this year. “GIFs with sound” came back as the number one demand from users.
To take advantage of the added support, GIF creators will be able to toggle a switch in Gfycat’s upload tool to keep the sound on or remove it before creating their GIF. As before, GIFs can be created using a video file you upload, or through a link you paste from a site like YouTube, Facebook, Twitch or elsewhere. And if users upload a .gif file or a video that doesn’t have sound, the software will detect that on its end.
The GIF editing software lets you select the start and stop times for the GIF and add captions before sharing, as well.
Once the GIF is uploaded to Gfycat’s site, users will be able to view the audio GIFs while browsing by clicking the icon on the top right of the GIF to turn the sound on. (The site will default to sound off, thankfully – you won’t all of a sudden be bombarded with noise.)
These new “audio GIFs” work on all mobile and desktop browsers at launch, and will come to Gfycat’s iOS and Android apps in 2019, as well as to its API documentation for developers.
“We see our creators using gaming first and foremost for Gfycat Sound, as eSports has become a global phenomenon,” explains Gfycat CEO Richard Rabbat. “Now, a gamer can share their achievement with the sound of the ‘shot’ that won her or him the game and achieve more virality for their content,” he says. “We also see our sports content benefiting from Gfycat Sound because you can now share the emotions of the audience,” Rabbat added.
While an actual GIF file cannot have sound, Gfycat is not the first GIF toolmaker that has expanded to include short-form video alongside its traditional collection of .gifs – Imgur did the same back in May. The reasoning in that case was similar – sometimes you need to hear the clip to really enjoy the content. Plus, advertisers love video, too.