Facebook launches poll worker recruitment push in the News Feed

With the election looming and a pandemic still raging through the U.S., a shortage of poll workers is just one of many threats to voting this November — but it’s a big one.

In a Facebook post Friday, Mark Zuckerberg announced that the company will launch a poll worker recruitment drive on the platform. Beginning this weekend, U.S. Facebook users over the age of 18 will see a message on their News Feeds linking them to state poll worker registration pages. The company also includes that information in its voting information center, a hub of U.S. election resources it launched last month.

Facebook will join a number of other large employers, including Microsoft, Starbucks, Old Navy and Target, in offering paid time off to employees who want to volunteer at the polls. The company has also offered free advertising to state election authorities for the poll worker recruitment push and expects several states to join California in running those campaigns soon.

While poll worker shortages were already an issue before the pandemic, they are a much bigger concern in 2020. Poll workers skew older, making them generally more vulnerable to COVID-19 and making recruitment even more essential than it would be in a normal year.

As a Pew state policy report observed in 2018, poll workers are critical to elections running smoothly. A shortage of poll workers could mean “long lines, mass confusion and miscounted ballots.” And that’s without even taking the pandemic into account.

“They greet you at the plastic folding table set up in your neighborhood’s library, church or fire station, asking you for your name, address and, depending on your state, photo ID before handing you a ballot or directing you to a voting machine,” the Pew report reads.

“More than just glorified receptionists, these underpaid few are really the gatekeepers to democracy.”

Facebook changes name of its annual VR event and its overall AR/VR organization

Facebook is moving further away from the Oculus brand.

The company says it is changing the name of its augmented reality and virtual reality division to “Facebook Reality Labs,” which will encompass the company’s AR/VR products under the Oculus, Spark and Portal brands.

The company’s AR/VR research division was renamed from Oculus Research to Facebook Reality Labs back in 2018. That division will now be known as FRL Research.

Facebook has also announced that Oculus Connect, its annual virtual reality developer conference, will be renamed Facebook Connect and will take place entirely online on September 16.

Oculus has had a very different existence inside Facebook than other high-profile acquisitions like Instagram or WhatsApp. The org has been folded deeper into the core of the company, both in terms of leadership and organizational structure. The entire AR/VR org is run by Andrew Bosworth, a long-time executive at the company and a close confidant of CEO Mark Zuckerberg.

In some sense, the name change is just an indication that the product ambitions of Facebook in the AR/VR world have grown larger since its 2014 Oculus acquisition.

Facebook is no longer just building headsets: it’s also building augmented reality glasses, it’s adding AR software integrations to its core app and Instagram through Spark AR and, yes, it’s still doing some stuff with Facebook Portal.

In another sense, appending “Labs” to a division that’s several years old, with several products the company has spent billions of dollars to realize, seems to be Facebook doubling down on the idea that everything contained therein is (1) pretty experimental and (2) not contributing all that much to the Facebook bottom line. This also seems like the likely home for future Facebook moonshots.

The change will likely upset some Oculus users. Facebook’s reputation problems anecdotally seem to have a particularly strong hold among PC gamers, leaving some Oculus fans unhappy with any news that pulls the Oculus brand further beneath the core Facebook org. Last week, the company shared that new Oculus headset users will need to sign into the platform with their Facebook accounts and that Oculus accounts will be phased out over time, a change that was met with hostility from users who had believed Facebook would keep more distance between its core social app and its virtual reality platform.

At this point, Oculus is still the brand name of the VR headsets Facebook sells, and the company maintains that the brand isn’t going anywhere, but directionally Facebook seems intent on bringing the brand more tightly under its wing.

Facebook trails expanding portability tools ahead of FTC hearing

Facebook is considering expanding the types of data its users are able to port directly to alternative platforms.

In comments on portability sent to US regulators ahead of an FTC hearing on the topic next month, Facebook says it intends to expand the scope of its data portability offerings “in the coming months”.

It also offers some “possible examples” of how it could build on the photo portability tool it began rolling out last year — suggesting it could in future allow users to transfer media they’ve produced or shared on Facebook to a rival platform or take a copy of their “most meaningful posts” elsewhere.

Allowing Facebook-based events to be shared to third party cloud-based calendar services is another example cited in Facebook’s paper.

It suggests expanding portability in such ways could help content creators build their brands on other platforms or help event organizers by enabling them to track Facebook events using calendar-based tools.

However, there are no firm commitments from Facebook to any specific portability product launches or expansions of what it currently offers.

For now the tech giant only lets Facebook users directly send copies of their photos to Google’s eponymous photo storage service — a transfer tool it switched on for all users this June.

“We remain committed to ensuring the current product remains stable and performant for people and we are also exploring how we might extend this tool, mindful of the need to preserve the privacy of our users and the integrity of our services,” Facebook writes of its photo transfer tool.

On whether it will expand support for porting photos to other rival services (i.e. not just Google Photos) Facebook has this non-committal line to offer regulators: “Supporting these additional use cases will mean finding more destinations to which people can transfer their data. In the short term, we’ll pursue these destination partnerships through bilateral agreements informed by user interest and expressions of interest from potential partners.”

Beyond allowing photo porting to Google Photos, Facebook users have long been able to download a copy of some of the information it holds on them.

But the kind of portability regulators are increasingly interested in is about going much further than that — meaning offering mechanisms that enable easy and secure data transfers to other services in a way that could encourage and support fast-moving competition to attention-monopolizing tech giants.

The Federal Trade Commission is due to host a public workshop on September 22, 2020, which it says will “examine the potential benefits and challenges to consumers and competition raised by data portability”.

The regulator notes that the topic has gained interest following the implementation of major privacy laws that include data portability requirements — such as the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).

It asked for comment submissions by August 21, which is what Facebook’s paper is responding to.

In comments to the Reuters news agency, Facebook’s privacy and public policy manager, Bijan Madhani, said the company wants to see “dedicated portability legislation” coming out of any post-workshop recommendations.

Reuters reports that Facebook supports a portability bill that’s doing the rounds in Congress — the Access Act, sponsored by Democratic Senators Richard Blumenthal and Mark Warner and Republican Senator Josh Hawley — which would require large tech platforms to let their users easily move their data to other services.

Madhani dubbed it a good first step, though, adding that the company will continue to engage with the lawmakers on shaping its contents.

“Although some laws already guarantee the right to portability, our experience suggests that companies and people would benefit from additional guidance about what it means to put those rules into practice,” Facebook also writes in its comments to the FTC.

Ahead of dipping its toe into portability via the photo transfer tool, Facebook released a white paper on portability last year, seeking to shape the debate and influence regulatory thinking around any tighter or more narrowly defined portability requirements.

In recent months Mark Zuckerberg has also put in facetime to lobby EU lawmakers on the topic, as they work on updating regulations around digital services.

The Facebook founder pushed the European Commission to narrow the types of data that should fall under portability rules. In the public discussion with commissioner Thierry Breton, in May, he raised the example of the Cambridge Analytica Facebook data misuse scandal, claiming the episode illustrated the risks of too much platform “openness” — and arguing that there are “direct trade-offs about openness and privacy”.

Zuckerberg went on to press for regulation that helps industry “balance these two important values around openness and privacy”. So it’s clear the company is hoping to shape the conversation about what portability should mean in practice.

Or, to put it another way, Facebook wants to be able to define which data can flow to rivals and which can’t.

“Our position is that portability obligations should not mandate the inclusion of observed and inferred data types,” Facebook writes in further comments to the FTC — lobbying to put broad limits on how much insight rivals would be able to gain into Facebook users who wish to take their data elsewhere.

Both its white paper and comments to the FTC plough this preferred furrow of making portability into a ‘hard problem’ for regulators, by digging up downsides and fleshing out conundrums — such as how to tackle social graph data.

On portability requests that wrap up data on what Facebook refers to as “non-requesting users”, its comments to the FTC work to sow doubt about the use of consent mechanisms to allow people to grant each other permission to have their data exported from a particular service — with the company questioning whether services “could offer meaningful choice and control to non-requesting users”.

“Would requiring consent inappropriately restrict portability? If not, how could consent be obtained? Should, for example, non-requesting users have the ability to choose whether their data is exported each time one of their friends wants to share it with an app? Could an approach offering this level of granularity or frequency of notice lead to notice fatigue?” Facebook writes, skipping lightly over the irony given the levels of fatigue its own apps’ default notifications can generate for users.

Facebook also appears to be advocating for an independent body or regulator to focus on policy questions and liability issues tied to portability, writing in a blog post announcing its FTC submission: “In our comments, we encourage the FTC to examine portability in practice. We also ask it to recommend dedicated federal portability legislation and provide advice to industry on the policy and regulatory tensions we highlight, so that companies implementing data portability have the clear rules and certainty necessary to build privacy-protective products that enhance people’s choice and control online.”

In its FTC submission the company goes on to suggest that “an independent mechanism or body” could “collaboratively set privacy and security standards to ensure data portability partnerships or participation in a portability ecosystem that are transparent and consistent with the broader goals of data portability”.

Facebook then further floats the idea of an accreditation model under which recipients of user data “could demonstrate, through certification to an independent body, that they meet the data protection and processing standards found in a particular regulation, such as the [EU’s] GDPR or associated code of conduct”.

“Accredited entities could then be identified with a seal and would be eligible to receive data from transferring service providers. The independent body (potentially in consultation with relevant regulators) could work to assess compliance of certifying entities, revoking accreditation where appropriate,” it further suggests.
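To make the mechanism Facebook is floating a bit more concrete, here is a minimal sketch of how such an accreditation gate might work in code, assuming a simple registry run by the independent body. The names, types and checks below are hypothetical illustrations only, not any real Facebook, FTC or GDPR certification API.

```python
# Hypothetical illustration of the accreditation model described above:
# an independent body certifies data recipients, and a transferring service
# checks that certification (and its revocation status) before exporting
# user data. None of these names correspond to a real API.

from dataclasses import dataclass, field


@dataclass
class AccreditationRegistry:
    """Maintained by the independent body, not by the transferring platform."""
    accredited: set = field(default_factory=set)

    def certify(self, recipient: str) -> None:
        # Recipient has demonstrated compliance with the relevant standards.
        self.accredited.add(recipient)

    def revoke(self, recipient: str) -> None:
        # Accreditation can be withdrawn if compliance lapses.
        self.accredited.discard(recipient)

    def has_seal(self, recipient: str) -> bool:
        return recipient in self.accredited


def transfer_user_data(user_id: str, recipient: str, registry: AccreditationRegistry) -> str:
    """A transferring service would only export data to seal-holding recipients."""
    if not registry.has_seal(recipient):
        return f"blocked: {recipient} is not accredited"
    # ... the actual export (photos, posts, events) would happen here ...
    return f"exported data for {user_id} to {recipient}"


if __name__ == "__main__":
    registry = AccreditationRegistry()
    registry.certify("photo-startup.example")
    print(transfer_user_data("user123", "photo-startup.example", registry))
    print(transfer_user_data("user123", "unvetted.example", registry))
```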

However its paper also notes the risk that requiring accreditation might present a barrier to entry for the small businesses and startups that might otherwise be best positioned to benefit from portability.

Zuckerberg unconvincingly feigns ignorance of data-sucking VPN scandal

Facebook’s Mark Zuckerberg appeared less than entirely truthful at today’s House Judiciary hearing, regarding last year’s major Onavo controversy, in which his company paid teenagers to use a VPN app that reported detailed data on their internet use. Though he may not have outright lied about it, his answers were evasive and misleading enough to warrant a rushed clarification shortly afterward.

Rep. Hank Johnson (D-GA) was asking Zuckerberg to confirm a series of events last year first reported by TechCrunch: A VPN app called Onavo, owned by Facebook, was kicked out of Apple’s App Store for collecting and reporting usage data while purporting to provide a protective service.

Soon afterward, Facebook quietly began paying people — 18 percent of whom were teenagers — to install the “Facebook Research” app, which did much the same thing as Onavo under a different name. TechCrunch reported this and Apple issued a ban before the end of that day; Facebook claimed to have removed it voluntarily, but this was shown not to be true.

Rep. Johnson questioned Zuckerberg along these lines, and the latter repeatedly expressed his unsureness about and lack of familiarity with these issues.

Johnson: When it became public that Facebook was using Onavo to conduct digital surveillance, your company got kicked out of Apple’s App store, isn’t that true?

Zuckerberg: Congressman, I’m not sure I’d characterize it in that way.

Johnson: I mean, Onavo did get kicked out of the app store, isn’t that true?

Zuckerberg: Congressman, we took the app out after Apple changed their policies on VPN apps.

Johnson: And it was because of the use of the surveillance tools.

Zuckerberg: Congressman, I’m not sure the policy was worded that way or that it’s exactly the right characterization of it… [The policies are explained below.]

Johnson: Let me ask you this question, after Onavo was booted out of the app store, you turned to other surveillance tools, such as Facebook Research App, correct?

Zuckerberg: Congressman, in general, yes, we do a broad variety—

Johnson: Isn’t it true, Mr. Zuckerberg, that Facebook paid teenagers to sell their privacy by installing Facebook Research App?

Zuckerberg: Congressman, I’m not familiar with that, but I think it’s a general practice that companies use to, uh, have different surveys and understand data from how people are using different products and what their preferences are.

Johnson: Facebook Research app got thrown out of the App Store too, isn’t that true?

Zuckerberg: Congressman, I’m not familiar with that.

Of course, the idea that Zuckerberg was not familiar with events that made headlines, took down Facebook’s internal apps for days, and prompted an angry letter to him from a senator is absurd. (After all, Facebook responded.)

Perhaps intuiting that this particular claim of ignorance was a bridge too far (and perhaps in response to some frantic off-screen action in the CEO’s barnlike virtual testimony HQ), Zuckerberg took the opportunity to backpedal a few minutes later:

In response to Congressman Johnson’s question, before I said that I wasn’t familiar with the Facebook research app when I wasn’t familiar with that name for it. I just want to be clear that I do recall we used an app for research and it’s since been discontinued.

Of course, although Zuckerberg may plausibly have been unsure about the name, it strains belief that he was unfamiliar with the events of that time, as they were both highly publicized and very costly for Facebook. Naturally he would also have been refreshed on them during preparation for this testimony.

That Zuckerberg is unfamiliar with the exact wording of Apple’s rules is possible, even probable, but it was no secret that the rules were changed basically in response to reports of Facebook’s Onavo shenanigans. Here is what Apple said at the time:

We work hard to protect user privacy and data security throughout the Apple ecosystem. With the latest update to our guidelines, we made it explicitly clear that apps should not collect information about which other apps are installed on a user’s device for the purposes of analytics or advertising/marketing and must make it clear what user data will be collected and how it will be used.

Later, when TechCrunch showed that Facebook had been using an enterprise deployment tool to essentially sideload spyware onto teenagers’ phones, Apple said this:

We designed our Enterprise Developer Program solely for the internal distribution of apps within an organization. Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple. Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data.

So Facebook was the reason, implicitly first, then later explicitly, for these App Store lockdowns. Rep. Johnson put the whole thing quite plainly at the end of his questions.

Johnson: You tried one thing and then you got caught, made some apologies, then you did it all over again. [long pause]… Isn’t that true?

Zuckerberg: Congressman, I respectfully disagree with that characterization.

Zuckerberg says there’s ‘no deal of any kind’ between Facebook and Trump

In an interview with Axios, Mark Zuckerberg shot down suspicions that Facebook is giving President Trump lenient treatment on the platform as part of a closed-door agreement.

“I’ve heard this speculation, too, so let me be clear: There’s no deal of any kind,” Zuckerberg told Axios. “Actually, the whole idea of a deal is pretty ridiculous.”

While Trump faces increasing scrutiny for rule violations on other social platforms — most notably Twitter — the president’s activity on Facebook has largely remained untouched. In October, Zuckerberg faced criticism for attending an undisclosed dinner at the White House with the president and Facebook board member and close Trump ally Peter Thiel.

“I accepted the invite for dinner because I was in town and he is the president of the United States,” Zuckerberg said, noting that he’d done the same during the Obama administration. “The fact that I met with a head of state should not be surprising, and does not suggest we have some kind of deal.”

In a company Q&A last week, the Facebook CEO defended his relationship with President Trump to employees.

“One specific critique that I’ve seen is that there are a lot of people who’ve said that maybe we’re too sympathetic or too close in some way to the Trump administration,” Zuckerberg said, arguing that “giving people some space for discourse” was not the same as agreeing with their beliefs.

While there’s certainly a notably friendly dynamic between Facebook and the Trump administration, an explicit agreement designed to benefit Facebook isn’t that likely, if only because of the firestorm it would ignite were it to come to light. The idea that Facebook could extract a deal from the president — say, for less regulatory scrutiny — is also hard to imagine, because any change to regulations governing Facebook would also apply to other online platforms. When President Trump signed an executive order designed to punish Twitter for taking action against his tweets, that threat applied across the board to all social media sites, including Facebook.

Still, the Justice Department, which often works closely with Trump’s White House to pursue the president’s own agenda, can choose which fights to pick in its antitrust pursuits. And Trump’s ability to mobilize his political allies in Congress against enemies of his choosing could create headaches for a company like Facebook around claims of political bias. Facebook’s seemingly conciliatory stance toward the White House and the Trump campaign isn’t likely to have gone unnoticed.

Zuckerberg’s reluctance to criticize Trump is well-documented, but he has been slightly more critical of the administration in recent days. Last week, he held a live-streamed chat with Anthony Fauci, a key voice for the scientific community’s pandemic response — and one currently on the outs with Trump. In the chat, Zuckerberg didn’t name Trump explicitly, but criticized the U.S. government’s failure to scale up national testing and the refusal of some parts of the administration to recommend mask-wearing as a protective measure.

As advertisers revolt, Facebook commits to flagging ‘newsworthy’ political speech that violates policy

As advertisers pull away from Facebook to protest the social networking giant’s hands-off approach to misinformation and hate speech, the company is instituting a number of stronger policies to woo them back.

In a livestreamed segment of the company’s weekly all-hands meeting, CEO Mark Zuckerberg recapped some of the steps Facebook is already taking, and announced new measures to fight voter suppression and misinformation — although they amount to things that other social media platforms like Twitter have already enacted and enforced in more aggressive ways.

At the heart of the policy changes is an admission that the company will continue to allow politicians and public figures to disseminate hate speech that does, in fact, violate Facebook’s own guidelines — but it will add a label to denote they’re remaining on the platform because of their “newsworthy” nature.

It’s a watered-down version of the more muscular stance that Twitter has taken to limit the ability of its network to amplify hate speech or statements that incite violence.

Zuckerberg said:

A handful of times a year, we leave up content that would otherwise violate our policies if the public interest value outweighs the risk of harm. Often, seeing speech from politicians is in the public interest, and in the same way that news outlets will report what a politician says, we think people should generally be able to see it for themselves on our platforms.

We will soon start labeling some of the content we leave up because it is deemed newsworthy, so people can know when this is the case. We’ll allow people to share this content to condemn it, just like we do with other problematic content, because this is an important part of how we discuss what’s acceptable in our society — but we’ll add a prompt to tell people that the content they’re sharing may violate our policies.

The problems with this approach are legion. Ultimately, it’s another example of Facebook’s insistence that with hate speech and other types of rhetoric and propaganda, the onus of responsibility is on the user.

Zuckerberg did emphasize that threats of violence or voter suppression are not allowed to be distributed on the platform whether or not they’re deemed newsworthy, adding that “there are no exceptions for politicians in any of the policies I’m announcing here today.”

But it remains to be seen how Facebook will define the nature of those threats — and balance that against the “newsworthiness” of the statement.

The steps around election year violence supplement other efforts that the company has taken to combat the spread of misinformation around voting rights on the platform.

The new measures that Zuckerberg announced also include partnerships with local election authorities to determine which information is accurate and which is potentially dangerous. Zuckerberg also said that Facebook would ban posts that make false claims (like saying ICE agents will be checking immigration papers at polling places) or threats of voter interference (like “My friends and I will be doing our own monitoring of the polls”).

Facebook is also going to take additional steps to restrict hate speech in advertising.

“Specifically, we’re expanding our ads policy to prohibit claims that people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity or immigration status are a threat to the physical safety, health or survival of others,” Zuckerberg said. “We’re also expanding our policies to better protect immigrants, migrants, refugees and asylum seekers from ads suggesting these groups are inferior or expressing contempt, dismissal or disgust directed at them.”

Zuckerberg’s remarks came days after advertisers — most recently Unilever and Verizon — announced that they’re going to pull their money from Facebook as part of the #StopHateforProfit campaign organized by civil rights groups.

These are small, good steps from the head of a social network that has been recalcitrant in the face of criticism from all corners (except, until now, from the advertisers that matter most to Facebook). But they don’t do anything at all about the teeming mass of misinformation that exists in the private channels that simmer below the surface of Facebook’s public-facing messages, memes and commentary.

How Reliance Jio Platforms became India’s biggest telecom network

It’s raised $5.7 billion from Facebook. It’s taken $1.5 billion from KKR, another $1.5 billion from Vista Equity Partners, $1.5 billion from Saudi Arabia’s Public Investment Fund, $1.35 billion from Silver Lake, $1.2 billion from Mubadala, $870 million from General Atlantic, $750 million from Abu Dhabi Investment Authority, $600 million from TPG, and $250 million from L Catterton.

And it’s done all that in just nine weeks.

India’s Reliance Jio Platforms is the world’s most ambitious tech company. Founder Mukesh Ambani has made it his dream to provide every Indian with access to affordable and comprehensive telecommunications services, and Jio has so far proven successful, attracting nearly 400 million subscribers in just a few years.

The unparalleled growth of Reliance Jio Platforms, a subsidiary of India’s most-valued firm (Reliance Industries), has shocked rivals and spooked foreign tech companies such as Google and Amazon, both of which are now reportedly eyeing a slice of one of the world’s largest telecom markets.

What can we learn from Reliance Jio Platforms’s growth? What does the future hold for Jio and for India’s tech startup ecosystem in general?

Through a series of reports, Extra Crunch is going to investigate those questions. We previously profiled Mukesh Ambani himself, and in today’s installment, we are going to look at how Reliance Jio went from a telco upstart to the dominant tech company in four years.

The birth of a new empire

Months after India’s richest man, Mukesh Ambani, launched his telecom network Reliance Jio, Sunil Mittal of Airtel — his chief rival — was struggling in public to contain his frustration.

That Ambani would try to win over subscribers by offering them free voice calling wasn’t a surprise, Mittal said at the World Economic Forum in January 2017. But making voice calls and the bulk of 4G mobile data completely free for seven months clearly “meant that they have not gotten the attention they wanted,” he said, hopeful the local regulator would soon intervene.

This wasn’t the first time Ambani and Mittal were competing directly against each other: in 2002, Ambani had launched a telecommunications company and sought to win the market by distributing free handsets.

In India, carrier lock-in is not popular, as people prefer pay-as-you-go voice and data plans. But luckily for Mittal in their first go-round, Ambani’s journey was cut short due to a family feud with his brother — read more about that here.

Affirming the position of tech advocates, Supreme Court overturns Trump’s termination of DACA

The U.S. Supreme Court ruled today that President Donald Trump’s administration unlawfully ended the federal policy providing temporary legal status for immigrants who came to the country as children.

The decision, issued Thursday, called the termination of the Obama-era policy known as Deferred Action for Childhood Arrivals “arbitrary and capricious.” As a result of its ruling, nearly 640,000 people living in the United States are now temporarily protected from deportation.

While a blow to the Trump Administration, the ruling is sure to be hailed nearly unanimously by the tech industry and its leaders, who had come out strongly in favor of the policy in the days leading up to its termination by the current President and his advisors.

At the beginning of 2018, many of tech’s most prominent executives, including the CEOs of Apple, Facebook, Amazon and Google, joined more than 100 American business leaders in signing an open letter asking Congress to take action on the Deferred Action for Childhood Arrivals (DACA) program before it expired in March.

Tim Cook, Mark Zuckerberg, Jeff Bezos and Sundar Pichai made a full-throated defense of the policy and pleaded with Congress to pass legislation ensuring that Dreamers — undocumented immigrants who arrived in the United States as children and were granted approval by the program — can continue to live and work in the country without risk of deportation.

At the time, those executives said the decision to end the program could potentially cost the U.S. economy as much as $215 billion.

In a 2017 tweet, Tim Cook noted that roughly 250 of Apple’s employees were “Dreamers”.

The list of tech executives who came out to support the DACA initiative is long. It included: IBM CEO Ginni Rometty; Brad Smith, the president and chief legal officer of Microsoft; Hewlett-Packard Enterprise CEO Meg Whitman; and CEOs or other leading executives of AT&T, Dropbox, Upwork, Cisco Systems, Salesforce.com, LinkedIn, Intel, Warby Parker, Uber, Airbnb, Slack, Box, Twitter, PayPal, Code.org, Lyft, Etsy, AdRoll, eBay, StitchCrew, SurveyMonkey, DoorDash, Verizon (the parent company of Verizon Media Group, which owns TechCrunch).

At the heart of the court’s ruling is the majority view that Department of Homeland Security officials didn’t provide a strong enough reason to terminate the program in September 2017. Now, the issue of immigration status gets punted back to the White House and Congress to address.

As the Boston Globe noted in a recent article, the majority decision written by Chief Justice John Roberts did not determine whether the Obama-era policy or its revocation were correct, just that the DHS didn’t make a strong enough case to end the policy.

“We address only whether the agency complied with the procedural requirement that it provide a reasoned explanation for its action,” Roberts wrote. 

While the ruling from the Supreme Court is some good news for the population of “dreamers,” the question of their citizenship status in the country is far from settled. And the U.S. government’s response to the COVID-19 pandemic has basically consisted of freezing as much of the nation’s immigration apparatus as possible.

An Executive Order in late April froze the green card process for would-be immigrants, and the administration was rumored to be considering a ban on temporary workers under H-1B visas as well.

The President has, indeed, ramped up the crackdown with strict border control policies and other measures to curb both legal and illegal immigration. 

More than 800,000 people joined the workforce as a result of the 2012 program crafted by the Obama administration. DACA allows anyone under 30 to apply for protection from deportation or legal action on their immigration cases if they were younger than 16 when they were brought to the US, had not committed a crime, and were either working or in school.

In response to the Supreme Court decision, the President tweeted “Do you get the impression that the Supreme Court doesn’t like me?”

Mark Zuckerberg and Priscilla Chan respond to Chan Zuckerberg Initiative scientists’ open letter on Trump

Mark Zuckerberg and Priscilla Chan have penned a response to an open letter sent last week by a group of over 140 scientists who are working on projects funded by the Chan Zuckerberg Initiative. The letter expressed concerns about how Facebook manages misinformation and harmful, offensive and discriminatory language toward specific groups of people — and specifically around its treatment of Trump’s offensive, racist and dangerous rants.

The response from Chan and Zuckerberg thanks the scientists for expressing their concerns, and says specifically that both are “personally […] deeply shaken and disgusted by President Trump’s divisive and incendiary rhetoric,” and it also acknowledges that despite CZI and Facebook existing as wholly separate entities, they obviously share a common leader in Zuckerberg.

The letter goes on to point to some of the recent blog posts and resources that Facebook has published regarding its chosen position, as well as what it will be doing to review its existing policies around its products as they pertain to racial issues and social justice.

The response from CZI’s top leaders does note that Facebook’s policies are not its own, however, and goes on to say that CZI, for its part, is committing to do more to address racial inequities and injustice.

Ultimately, the letter from Chan and Zuckerberg doesn’t say very much of substance, and if anything actually re-emphasizes the problem at the core of the letter from the concerned scientists to begin with. It notes the contradiction in having the entities remain separate in terms of elements of their guiding principles while led by a common individual, but doesn’t directly address the main ask of the scientists, which is that Zuckerberg use his position at Facebook to wield the power of that platform for greater social good, not that the CZI change its behavior necessarily.

This is bound to be a recurring tension for CZI and Facebook going forward, given the relative positions and participants in each. It’s unlikely that responses like this one will do much to quell any long-term concerns on the part of CZI researchers and academics.

When it comes to social media moderation, reach matters

Social media in its current form is broken.

In 20 years, we’ll look back at the social media of 2020 like we look at smoking on airplanes or road trips with the kids rolling around in the back of a station wagon without seatbelts. Social media platforms have grown into borderless, society-scale misinformation machines. Any claim that they do not have editorial influence on the flow of information is nonsense. Just as a news editor selects headlines of the day, social media platforms channel content with engagement-maximizing algorithms and viral dynamics. They are by no means passive observers of our political discourse.

At the same time, I sympathize with the position that these companies are in — caught between the interests of shareholders and the public. I’ve started technology companies and helped build large-scale internet platforms. So I understand that social media CEOs have a duty to maximize the value of the business for their shareholders. I also know that social media companies can do better. They are not helpless to improve themselves. Contrary to Mark Zuckerberg’s recent heel dragging in dealing with President Trump’s reckless posts, the executives and boards at these companies have full dominion over their products and policies, and they have an obligation to their shareholders and society to make material changes to them.

The way to fix social media starts with realizing it is two different things: personal media and mass media.

Personal media is most of social media. Selfies from a hike or a shot of that Oreo sundae, stuff you share with friends and family. Mass media is content that reaches large audiences — such as a tweet that reaches a Super Bowl-sized audience in real-time. To be clear, it’s not just about focusing on people with a lot of followers. High-reach content can also be posts that go viral and get viewed by a large audience.

Twitter’s decision to annotate a couple of Trump’s tweets is a baby step in this direction. By applying greater scrutiny to a mega-visibility user, the company is treating those posts differently than low-reach tweets. But this extra attention should not be tied to any particular individual, but rather applied to all tweets that reach a large audience.

Reach is an objective measure of the impact of a social media post. It makes sense: tweets that go to more people carry more weight and therefore should be the focus of any effort to clean up disinformation. The audience size of a message is as important as, if not more important than, its content. So reach is a useful first-cut filter, removed from the hornet’s nest of interpreting the underlying content or beliefs of the sender.

From a technology perspective, it is very doable. When a social media post exceeds a reach threshold, the platform should automatically subject the content to additional processes to reduce disinformation and promote community standards. One idea, an extension of what Twitter recently did, would be to prominently connect a set of links to relevant articles from a pool of trusted sources — to add context, not censor. The pool of trusted content would need to be vetted and diverse in its point of view, but that’s possible, and users could even be involved in crowd-sourcing those decisions. For the highest-reach content, there could be additional human curation and even journalistic-style fact checking. If these platforms can serve relevant ads in milliseconds, they can serve relevant content from trusted sources.
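As a rough sketch of the reach-threshold approach described above — not any platform’s actual pipeline — the core logic could look something like this, with the thresholds, the trusted-source pool and all field names invented purely for illustration.

```python
# A minimal sketch (not any platform's real system) of reach-based escalation:
# once a post's audience exceeds a threshold, attach context links from a
# vetted pool; at a higher tier, queue it for human curation. All names and
# thresholds here are hypothetical.

from dataclasses import dataclass, field


@dataclass
class Post:
    post_id: str
    text: str
    reach: int                      # estimated unique viewers so far
    context_links: list = field(default_factory=list)
    needs_human_review: bool = False


# Hypothetical thresholds: a "mass media" tier and a "highest-reach" tier.
CONTEXT_THRESHOLD = 100_000
HUMAN_REVIEW_THRESHOLD = 1_000_000

# Stand-in for a vetted, viewpoint-diverse pool of trusted sources.
TRUSTED_SOURCES = [
    "https://example.org/context-on-topic-a",
    "https://example.org/context-on-topic-b",
]


def apply_reach_policy(post: Post) -> Post:
    """Apply escalating scrutiny based on reach, not on who posted."""
    if post.reach >= CONTEXT_THRESHOLD:
        # Add context rather than censor: surface related trusted articles.
        post.context_links = TRUSTED_SOURCES[:2]
    if post.reach >= HUMAN_REVIEW_THRESHOLD:
        # Highest-reach content also gets human curation / fact checking.
        post.needs_human_review = True
    return post


if __name__ == "__main__":
    viral = apply_reach_policy(Post("p1", "Breaking claim...", reach=2_500_000))
    print(viral.context_links, viral.needs_human_review)  # links attached, True
```

The point of the sketch is simply that the trigger is audience size, so low-reach personal posts pass through untouched while the same logic scales up scrutiny as content approaches mass-media distribution.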

From a regulatory perspective, reach is also the right framework for reforming Section 230 of the Communications Decency Act. That’s the pre-social media law that gives internet platforms a broad immunity from liability for the content they traffic. Conceptually, Section 230 continues to make sense for low-reach content. Facebook should not be held liable for every comment your uncle Bob makes. It’s when posts reach a vast number of people that Twitter and Facebook start to look more like The Wall Street Journal or The New York Times than an internet service provider. In these cases, it’s reasonable that they should be subject to similar legal liability as mass media outlets for broadly distributing damaging falsehoods.

Improving social media intelligently starts with breaking the problem down based on the reach of the content. Social media is two very different things thrown together in an internet blender: personal media and mass media. Let’s start treating it that way.