YouTube asks the FTC to clarify how video creators should comply with COPPA ruling

YouTube is asking the U.S. Federal Trade Commission for further clarification and better guidance to help video creators understand how to comply with the FTC’s guidelines set forth as part of YouTube’s settlement with the regulator over its violations of children’s privacy laws. In September, the FTC imposed a historic $170 million fine for YouTube’s violations of COPPA (the U.S. Children’s Online Privacy Protection Act). The settlement additionally requires YouTube creators to properly identify any child-directed content on the platform.

To comply with the ruling, YouTube created a system where creators can either label their entire channel as child-directed or identify only certain videos as directed at children, as needed. YouTube is then prohibited from collecting personal data from viewers of those child-directed videos, which limits creators’ ability to leverage Google’s highly profitable behavioral advertising technology on videos kids are likely to watch.

As a result, YouTube creators have been in an uproar since the ruling, arguing that it’s too difficult to tell the difference between what’s child-directed content and what’s not. Several popular categories of YouTube videos — gaming, toy reviews and family vlogging, for instance — fall into gray areas, where they’re watched by children and adults alike. But because the FTC’s ruling left creators liable for any future violations, YouTube could only advise creators to consult a lawyer to help them work through the ruling’s impact on their own channels.

Today, YouTube says it’s asking the FTC to provide more clarity.

“Currently, the FTC’s guidance requires platforms must treat anyone watching primarily child-directed content as children under 13. This does not match what we see on YouTube, where adults watch favorite cartoons from their childhood or teachers look for content to share with their students,” noted YouTube in an announcement. “Creators of such videos have also conveyed the value of product features that wouldn’t be supported on their content. For example, creators have expressed the value of using comments to get helpful feedback from older viewers. This is why we support allowing platforms to treat adults as adults if there are measures in place to help confirm that the user is an adult viewing kids’ content,” the company said.

Specifically, YouTube wants the FTC to clarify what’s to be done when adults are watching kids’ content. It also wants to know what’s to be done about content that doesn’t intentionally target kids — like videos in the gaming, DIY and art space — if those videos end up attracting a young audience. Are these also to be labeled “made for kids,” even though that’s not their intention? YouTube asks.

The FTC shared some guidance in November, which YouTube passed along to creators. But YouTube says it’s not enough, as it doesn’t help creators understand what’s to be done about this “mixed audience” content.

YouTube says it supports platforms treating adults who view primarily child-directed video content as adults, as long as there are measures in place to help confirm the user is an adult. It didn’t suggest what those measures would be, though they could possibly involve requiring users to be logged in to an adult-owned Google account, or perhaps an age gate of some sort.

YouTube submitted its statements as part of the FTC’s comment period on the agency’s review of the COPPA Rule, which has been extended until December 11, 2019. The FTC is giving commenters additional time to submit comments and an alternative mechanism to file them, as the federal government’s Regulations.gov portal is temporarily inaccessible. Instead, commenters can submit their thoughts by email to the address the FTC has provided, with the subject line “COPPA comment.” These must be submitted before 11:59 PM ET on December 11, the FTC says.

YouTube’s announcement, however, pointed commenters to the FTC’s website, which isn’t working right now.

“We strongly support COPPA’s goal of providing robust protections for kids and their privacy. We also believe COPPA would benefit from updates and clarifications that better reflect how kids and families use technology today, while still allowing access to a wide range of content that helps them learn, grow and explore. We continue to engage on this issue with the FTC and other lawmakers (we previously participated in the FTC’s public workshop) and are committed to continue [sic] doing so,” said YouTube.

In wake of Shutterstock’s Chinese censorship, American companies need to relearn American values

It’s among the most iconic images of the last few decades — a picture of an unknown man standing before a line of tanks during the protests in 1989 in Beijing’s Tiananmen Square. In just one shot, the photographer, Jeff Widener, managed to convey a society struggling between the freedoms of individual citizens and the heavy hand of the Chinese militarized state.

It’s also an image that few within China’s “great firewall” have access to, let alone have seen. For those who have read 1984, it can almost seem as if “Tank Man” was dropped into a memory hole, erased from the collective memory of more than a billion people.

By now, it’s well-known that China’s search engines like Baidu censor such political photography. Regardless of the individual morality of their decisions, it’s at least understandable that Chinese companies with mostly Chinese revenues would carefully hew to the law as set forth by the Chinese Communist Party. It’s a closed system after all.

What we are learning, though, is that it isn’t just Chinese companies aiding and abetting this censorship. It’s Western companies too. And Western workers aren’t pleased that they are working to enforce anti-freedom policies in the Middle Kingdom.

Take Shutterstock, which has come under fire for complying with China’s great firewall. As Sam Biddle described in The Intercept last month, the company has been torn internally between workers looking to protect democratic values and a business desperate to expand further in one of the world’s most dynamic countries. From Biddle:

Shutterstock’s censorship feature appears to have been immediately controversial within the company, prompting more than 180 Shutterstock workers to sign a petition against the search blacklist and accuse the company of trading its values for access to the lucrative Chinese market.

Those petitions have allegedly gone nowhere internally, which has led employees like Stefan Hayden, who describes nearly ten years of experience at the company as a frontend developer on his LinkedIn profile, to resign.

The challenge of these political risks is hardly unknown to Shutterstock. The company’s most recent annual financial filing with the SEC lists market access and censorship as a key risk for the company (emphasis mine):

For example, domestic internet service providers have blocked and continue to block access to Shutterstock in China and other countries, such as Turkey, have intermittently restricted access to Shutterstock. There are substantial uncertainties regarding interpretation of foreign laws and regulations that censor content available through our products and services and we may be forced to significantly change or discontinue our operations in such markets if we were to be found in violation of any new or existing law or regulation. If access to our products and services is restricted, in whole or in part, in one or more countries or our competitors can successfully penetrate geographic markets that we cannot access, our ability to retain or increase our contributor and customer base may be adversely affected, we may not be able to maintain or grow our revenue as anticipated, and our financial results could be adversely affected.

Thus the rub: market access means compromising the very values that a content purveyor like Shutterstock relies on to operate as a business. The stock image company is hardly unique in finding itself in this position; it’s a situation the NBA has certainly had to confront in the last few weeks.

It’s great to see Shutterstock’s employees standing up for freedom and democracy and, if their values find no purchase internally, at least voting with their feet by moving to companies that value freedom more reliably.

Unfortunately, far too many companies — and far too many tech companies — blindly chase the dollars and yuan without considering the erosion of the values at the heart of their own business. That erosion ultimately adds up — without guiding principles to handle business challenges, decisions get made ad hoc with an eye to revenue, intensifying the risk of crises like the one facing Shutterstock.

The complexity of the Chinese market has only expanded with the country’s prodigious growth. The sharpness, intensity and self-reflection of values required for Western companies to operate on the mainland have reached new highs. And yet executives have vastly under-communicated the values and constraints they face, both to their own employees and to their shareholders.

As I wrote earlier this year when the Google China search controversy broke out, it’s not enough to just be militant about values. Values have to be cultivated, and everyone from software engineers to CEOs needs to understand a company’s objectives and the values that constrain them.

As I wrote at the time:

The internet as independence movement is 100% dead.

That makes the ethical terrain for Silicon Valley workers much more challenging to navigate. Everything is a compromise, in one way or another. Even the very act of creating value — arguably the most important feature of Silicon Valley’s startup ecosystem — has driven mass inequality, as we explored on Extra Crunch this weekend in an in-depth interview.

I ultimately was in favor of Google’s engagement with China, if only because I felt that the company does understand its values better than most (after all, it abandoned the China market in the first place, and one would hope the company would make the same choice again if it needed to). Google has certainly not been perfect on a whole host of fronts, but it seems to have had far more self-reflection about the values it intends to purvey than most tech companies.

It’s well past time, though, for all American companies to double down on the American values that underlie their business. Ultimately, if you compromise on everything, you stand for nothing — and what sort of business would anyone want to join or back like that?

China can’t be ignored, but neither should companies ignore their own duties to commit to open, democratic values. If Tank Man can stand in front of a line of tanks, American execs can stand before a line of their colleagues and find an ethical framework and a set of values that can work.

Justice Dept. charges Russian hacker behind the Dridex malware

U.S. prosecutors have brought computer hacking and fraud charges against a Russian citizen, Maksim Yakubets, who is accused of developing and distributing Dridex, a notorious banking malware allegedly used to steal more than $100 million from hundreds of banks over a multi-year operation.

Per the unsealed 10-count indictment, Yakubets is accused of leading Evil Corp, a Russia-based cybercriminal network that oversaw the creation of Dridex. The malware is often spread by email and infects computers, silently siphoning off banking logins. It has also been used as a delivery mechanism for ransomware, as was the case with the April cyberattack on drinks giant Arizona Beverages.

The Russian hacker is also alleged to have used the Zeus malware to successfully steal more than $70 million from victims’ bank accounts. Prosecutors said the Zeus scheme was “one of the most outrageous cybercrimes in history.”

Yakubets’ wanted poster. (Image: FBI/supplied)

Justice Department officials, speaking in Washington, DC, alongside their international partners from the U.K.’s National Crime Agency, said Yakubets also provided “direct assistance” to the Russian government, working for the FSB (the successor to the KGB) from 2017 on projects involving the theft of confidential documents through cyberattacks.

Prosecutors said Evil Corp was to blame for an “unimaginable” amount of cybercrime during the past decade, with a primary focus on attacking financial organizations in the U.S. and the U.K.

“Maksim Yakubets allegedly has engaged in a decade-long cybercrime spree that deployed two of the most damaging pieces of financial malware ever used and resulted in tens of millions of dollars of losses to victims worldwide,” said Brian Benczkowski, assistant attorney general in the Justice Department’s criminal division, in remarks.

The State Department announced a $5 million reward for information related to the capture of Yakubets, who remains at large.

In a separate statement, Treasury Secretary Steven Mnuchin said the department issued sanctions against Evil Corp for the group’s role in international cybercrime, as well as against two other hackers associated with the group — Igor Turashev and Denis Gusev — and seven Russian companies with connections to Evil Corp.

“This coordinated action is intended to disrupt the massive phishing campaigns orchestrated by this Russian-based hacker group,” said Mnuchin.

How to build or invest in a startup without paying capital gains tax

Founders, entrepreneurs, and tech executives in the know realize they may be able to avoid paying tax on all or part of the gain from the sale of stock in their companies — assuming they qualify.

If you’re a founder who’s interested in exploring this opportunity, put careful consideration into the formation, operation and sale of your company.

Qualified Small Business Stock (QSBS) presents a significant tax savings opportunity for people who create and invest in small businesses. It allows you to potentially exclude up to $10 million, or 10 times your tax basis, whichever is greater, from taxation. For example, if you invested $2 million in QSBS in 2012 and sell that stock after five years for $20 million (10x basis), you could pay zero federal capital gains tax on that gain.
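
To make the arithmetic concrete, here’s a minimal sketch in Python of the exclusion math described above, using the same hypothetical numbers (an illustration only, not tax advice):

```python
def qsbs_exclusion_cap(tax_basis: float) -> float:
    """Federal QSBS cap: the greater of $10 million or 10x your tax basis."""
    return max(10_000_000, 10 * tax_basis)


def excluded_gain(tax_basis: float, sale_price: float) -> float:
    """Portion of the gain that could potentially be excluded from federal tax."""
    gain = sale_price - tax_basis
    return min(gain, qsbs_exclusion_cap(tax_basis))


# The example above: $2 million invested in 2012, sold after five years for $20 million.
basis, sale = 2_000_000, 20_000_000
print(qsbs_exclusion_cap(basis))   # 20,000,000 -> the cap is 10x basis, since that beats $10M
print(excluded_gain(basis, sale))  # 18,000,000 -> the entire $18M gain fits under the cap
```

With a near-zero basis, as in the founder story later in this piece, the cap works out to the flat $10 million, which is why only the first $10 million of that founder’s gain was excluded.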

What is QSBS, and why is it important?

These tax savings can be so significant that they’re among the first handful of high-priority items we discuss when working with a founder or tech executive client. Surprisingly, most people:

  1. Know a few basics about QSBS;
  2. Know they may have it, but don’t explore ways to leverage or protect it;
  3. Don’t know about it at all.

Founders who are scaling their companies usually have a lot on their minds, and tax savings and personal finance usually fall to the bottom of the list. For example, I recently met with someone who will walk away from their upcoming liquidity event with $30 million to $40 million. He qualifies for QSBS, but until our conversation, he hadn’t even considered leveraging it.

Instead of paying long-term capital gains taxes, how does 0% sound? That’s right — you may be able to exclude up to 100% of the federal capital gains from selling the stake in your company. If your company is a venture-backed tech startup (or was at one point), there’s a good chance you could qualify.

In this guide I speak specifically to QSBS at the federal tax level; however, it’s important to note that many states, such as New York, follow the federal treatment of QSBS, while states such as California and Pennsylvania completely disallow the exclusion. A third group of states, including Massachusetts and New Jersey, have their own modifications to the exclusion. Like everything else I discuss here, this should be reviewed with your legal and tax advisors.

My team and I recently spoke with a founder whose company was being acquired. She wanted to do some financial planning to understand how her personal balance sheet would look post-acquisition, which is a savvy move. 

We worked with her corporate counsel and accountant to obtain a QSBS representation from the company and modeled out the founder’s effective tax rate. She owned equity in the form of company shares, which met the criteria for qualifying as Section 1202 stock (QSBS). When she acquired the shares in 2012, her cost basis was basically zero. 

A few months after satisfying the five-year holding period, a public company acquired her business. Her company shares, first acquired for basically zero, were now worth $15 million. When she was able to sell her shares, the first $10 million of her capital gains were completely excluded from federal taxation; the remainder of her gain was taxed at long-term capital gains rates.

This founder saved millions of dollars in capital gains taxes after her liquidity event, and she’s not the exception! Most founders who run a venture-backed C Corporation tech company can qualify for QSBS if they acquire their stock early on. There are some exceptions. 

Do I have QSBS?

A frequently asked question as we start to discuss QSBS with our clients is: how do I know if I qualify? In general, you need to meet the following requirements:

  1. Your company is a Domestic C Corporation.
  2. Stock is acquired directly from the company.
  3. Stock has been held for over 5 years.
  4. Stock was issued after August 10th, 1993, and ideally after September 27th, 2010, for a full 100% exclusion.
  5. Aggregate gross assets of the company must have been $50 million or less when the stock was acquired.
  6. The business must be active, with 80% of its assets being used to run the business. It cannot be an investment entity. 
  7. The business cannot be an excluded business type such as, but not limited to: finance, professional services, mining/natural resources, hotels/restaurants, farming, or any other business whose principal asset is the reputation or skill of one or more of its employees.

When in doubt, follow this flowchart to see if you qualify:
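
The flowchart itself isn’t reproduced here, but as a rough stand-in, here is a minimal Python sketch of the checklist above. The field names are hypothetical and the real rules have nuances, so treat this as an illustration rather than tax advice:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical encoding of the seven checks listed above.
EXCLUDED_BUSINESS_TYPES = {
    "finance", "professional services", "mining/natural resources",
    "hotel/restaurant", "farming", "reputation-or-skill business",
}


@dataclass
class StockFacts:
    domestic_c_corp: bool
    acquired_directly_from_company: bool
    years_held: float
    issue_date: date
    company_gross_assets_at_issuance: float  # aggregate gross assets when the stock was acquired
    active_business: bool                    # at least 80% of assets used to run the business
    business_type: str


def may_qualify_for_qsbs(s: StockFacts) -> bool:
    """Mirrors the checklist; a True here still needs review by tax counsel."""
    return (
        s.domestic_c_corp
        and s.acquired_directly_from_company
        and s.years_held > 5
        and s.issue_date > date(1993, 8, 10)  # ideally after Sept. 27, 2010 for the full 100% exclusion
        and s.company_gross_assets_at_issuance <= 50_000_000
        and s.active_business
        and s.business_type not in EXCLUDED_BUSINESS_TYPES
    )
```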

Instagram still doesn’t age-check kids. That must change.

Instagram dodges child safety laws. By not asking users their age upon signup, it can feign ignorance about how old they are. That way, it can’t be held liable for $40,000 per violation of the Children’s Online Privacy Protection Act (COPPA). The law bans online services from collecting personally identifiable information about kids under 13 without parental consent. Yet Instagram is surely stockpiling that sensitive info about underage users, shrouded by the excuse that it doesn’t know who’s who.

But here, ignorance isn’t bliss. It’s dangerous. User growth at all costs is no longer acceptable.

It’s time for Instagram to step up and assume responsibility for protecting children, even if that means excluding them. Instagram needs to ask users’ ages at signup, work by all practical means to verify that the birthdates they volunteer are accurate, and enforce COPPA by removing users it knows are under 13. If it wants to allow tweens on its app, it needs to build a safe, dedicated experience where the app doesn’t suck in COPPA-restricted personal info.

Minimum Viable Responsibility

Instagram is woefully behind its peers. Both Snapchat and TikTok require you to enter your age as soon as you start the sign up process. This should really be the minimum regulatory standard, and lawmakers should close the loophole allowing services to skirt compliance by not asking. If users register for an account, they should be required to enter an age of 13 or older.

Instagram’s parent company Facebook has been asking for a birthdate during account registration since its earliest days. Sure, it adds one extra step to signup and impedes growth numbers by discouraging kids from getting hooked early on the social network. But it also benefits Facebook’s business by letting it accurately age-target ads.

Most importantly, at least Facebook is making a baseline effort to keep out underage users. Of course, as kids do when they want something, some are going to lie about their age and say they’re old enough. Ideally, Facebook would go further and try to verify the accuracy of a user’s age using other available data, and Instagram should too.

Both Facebook and Instagram currently have moderators lock the accounts of any users they stumble across that they suspect are under 13. Users must upload government-issued proof of age to regain control. That policy only went into effect last year after UK’s Channel 4 reported a Facebook moderator was told to ignore seemingly underage users unless they explicitly declared they were too young or were reported for being under 13. An extreme approach would be to require this for all signups, though that might be expensive, slow, significantly hurt signup rates, and annoy of-age users.

Instagram is currently on the other end of the spectrum. Doing nothing around age-gating seems recklessly negligent. When asked for comment about why it doesn’t ask users’ ages, how it stops underage users from joining, and whether it’s in violation of COPPA, Instagram declined to comment. The fact that Instagram claims not to know users’ ages seems to be in direct contradiction with its offering marketers custom ad targeting by age, such as reaching just those who are 13.

Instagram Prototypes Age Checks

Luckily, this could all change soon.

Mobile researcher and frequent TechCrunch tipster Jane Manchun Wong has spotted Instagram code inside its Android app that shows it’s prototyping an age-gating feature that rejects users under 13. It’s also tinkering with requiring your Instagram and Facebook birthdates to match. Instagram gave me a “no comment” when I asked whether these features would officially roll out to everyone.

Code in the app explains that “Providing your birthday helps us make sure you get the right Instagram experience. Only you will be able to see your birthday.” Beyond just deciding who to let in, Instagram could use this info to make sure users under 18 aren’t messaging with adult strangers, that users under 21 aren’t seeing ads for alcohol brands, and that potentially explicit content isn’t shown to minors.

Instagram’s inability to do any of this clashes with its and Facebook’s big talk this year about their commitment to safety. Instagram has worked to improve its approach to bullying, drug sales, self-harm and election interference, yet there hasn’t been a word about age-gating.

Meanwhile, underage users promote themselves on pages for hashtags like #12YearOld where it’s easy to find users who declare they’re that age right in their profile bio. It took me about 5 minutes to find creepy “You’re cute” comments from older men on seemingly underage girls’ photos. Clearly Instagram hasn’t been trying very hard to stop them from playing with the app.

Illegal Growth

I brought up the same unsettling situations on Musical.ly, now known as TikTok, to its CEO Alex Zhu on stage at TechCrunch Disrupt in 2016. I grilled Zhu about letting 10-year-olds flaunt their bodies on his app. He tried to claim parents run all of these kids’ accounts, and got frustrated as we dug deeper into Musical.ly’s failures here.

Thankfully, TikTok was eventually fined $5.7 million this year for violating COPPA and forced to change its ways. As part of its response, TikTok started showing an age gate to both new and existing users, removed all videos of users under 13, and restricted those users to a special TikTok Kids experience where they can’t post videos, comment, or provide any COPPA-restricted personal info.

If even a Chinese social media app that Facebook’s CEO has warned threatens free speech with censorship is doing a better job protecting kids than Instagram, something’s gotta give. Instagram could follow suit, building a special section of its apps just for kids, where they’re quarantined from conversing with older users who might prey on them.

Perhaps Facebook and Instagram’s hands-off approach stems from the fact that CEO Mark Zuckerberg doesn’t think the ban on under-13-year-olds should exist. Back in 2011, he said, “That will be a fight we take on at some point . . . My philosophy is that for education you need to start at a really, really young age.” He’s put that into practice with Messenger Kids, which lets 6- to 12-year-olds chat with their friends if parents approve.

The Facebook family of apps’ ad-driven business model and earnings depend on constant user growth that could be inhibited by stringent age-gating. Facebook surely doesn’t want to admit to parents that it’s let kids slide into Instagram, to advertisers that they were paying to reach children too young to buy anything, or to Wall Street that it might not have 2.8 billion legal users across its apps as it claims.

But given Facebook and Instagram’s privacy scandals, addictive qualities, and impact on democracy, it seems like proper age-gating should be a priority as well as the subject of more regulatory scrutiny and public concern. Society has woken up to the harms of social media, yet Instagram erects no guards to keep kids from experiencing those ills for themselves. Until it makes an honest effort to stop kids from joining, the rest of Instagram’s safety initiatives ring hollow.

DHS wants to expand airport face recognition scans to include US citizens

Homeland Security wants to expand facial recognition checks for travelers arriving in and departing from the U.S. to also include U.S. citizens, who had previously been exempt from the mandatory checks.

In a filing, the department has proposed that all travelers, and not just foreign nationals or visitors, will have to complete a facial recognition check not only before they are allowed to enter the U.S., but also before they leave the country.

Facial recognition for departing flights has increased in recent years as part of Homeland Security’s efforts to catch visitors and travelers who overstay their visas. The department, whose responsibility is to protect the border and control immigration, has a deadline of 2021 to roll out facial recognition scanners to the largest 20 airports in the United States, despite facing a rash of technical challenges.

But although there may not always be a clear way to opt-out of facial recognition at the airport, U.S. citizens and lawful permanent residents — also known as green card holders — have been exempt from these checks, the existing rules say.

Now, the proposed rule change to include citizens has drawn ire from one of the largest civil liberties groups in the country.

“Time and again, the government told the public and members of Congress that U.S. citizens would not be required to submit to this intrusive surveillance technology as a condition of traveling,” said Jay Stanley, a senior policy analyst at the American Civil Liberties Union.

“This new notice suggests that the government is reneging on what was already an insufficient promise,” he said.

“Travelers, including U.S. citizens, should not have to submit to invasive biometric scans simply as a condition of exercising their constitutional right to travel. The government’s insistence on hurtling forward with a large-scale deployment of this powerful surveillance technology raises profound privacy concerns,” he said.

Citing a data breach of close to 100,000 license plate and traveler images in June, as well as concerns about a lack of sufficient safeguards to protect the data, Stanley said the government “cannot be trusted” with this technology and that lawmakers should intervene.

When reached, spokespeople for Homeland Security and Customs & Border Protection did not immediately comment.

Will the future of work be ethical? Founder perspectives

In June, TechCrunch Ethicist in Residence Greg M. Epstein attended EmTech Next, a conference organized by the MIT Technology Review. The conference, which took place at MIT’s famous Media Lab, examined how AI and robotics are changing the future of work.

Greg’s essay, Will the Future of Work Be Ethical? reflects on his experiences at the conference, which produced what he calls “a religious crisis, despite the fact that I am not just a confirmed atheist but a professional one as well.” In it, Greg explores themes of inequality, inclusion and what it means to work in technology ethically, within a capitalist system and market economy.

Accompanying the story for Extra Crunch are a series of in-depth interviews Greg conducted around the conference, with scholars, journalists, founders and attendees.

Below, Greg speaks to two founders of innovative startups whose work provoked much discussion at the EmTech Next conference. Moxi, the robot assistant created by Andrea Thomaz of Diligent Robotics and her team, was a constant presence in the Media Lab reception hall immediately outside the auditorium in which all the main talks took place. And Prayag Narula of LeadGenius was featured, alongside leading tech anthropologist Mary Gray, in a panel on “Ghost Work” that sparked intense discussion throughout the conference and beyond.

Andrea Thomaz is the Co-Founder and CEO of Diligent Robotics. Image via MIT Technology Review

Could you give a sketch of your background?

Andrea Thomaz: I was always doing math and science, and did electrical engineering as an Undergrad at UT Austin. Then I came to MIT to do my PhD. It really wasn’t until grad school that I started doing robotics. I went to grad school interested in doing AI and was starting to get interested in this new machine learning that people were starting to talk about. In grad school, at the MIT Media Lab, Cynthia Breazeal was my advisor, and that’s where I fell in love with social robots and making robots that people want to be around and are also useful.

Say more about your journey at the Media Lab?

My statement of purpose for the Media Lab, in 1999, was that I thought that computers that were smarter would be easier to use. I thought AI was the solution to HCI [Human-computer Interaction]. So I came to the Media Lab because I thought that was the mecca of AI plus HCI.

It wasn’t until my second year as a student there that Cynthia finished her PhD with Rod Brooks and started at the Media Lab. And then I was like, “Oh wait a second. That’s what I’m talking about.”

Who is at the Media Lab now that’s doing interesting work for you?

For me, it’s kind of the same people. Pattie Maes has kind of reinvented her group since those days and is doing fluid interfaces; I always really appreciate the kind of things they’re working on. And Cynthia, her work is still very seminal in the field.

So now, you’re a CEO and Founder?

CEO and Co-Founder of Diligent Robotics. I had twelve years in academia in between those. I finished my PhD, went and I was a professor at Georgia Tech in computing, teaching AI and robotics and I had a robotics lab there.

Then I got recruited away to UT Austin in electrical and computer engineering. Again, teaching AI and having a robotics lab. Then at the end of 2017, I had a PhD student who was graduating and also interested in commercialization, my Co-Founder and CTO Vivian Chu.

Let’s talk about the purpose of the human/robot interaction. In the case of your company, the robot’s purpose is to work alongside humans in a medical setting, who are doing work that is not necessarily going to be replaced by a robot like Moxi. How does that work exactly?

One of the reasons our first target market [is] hospitals is, that’s an industry where they’re looking for ways to elevate their staff. They want their staff to be performing, “at the top of their license.” You hear hospital administrators talking about this because there’s record numbers of physician burnout, nurse burnout, and turnover.

They really are looking for ways to say, “Okay, how can we help our staff do more of what they were trained to do, and not spend 30% of their day running around fetching things, or doing things that don’t require their license?” That for us is the perfect market [for] collaborative robots. You’re looking for ways to automate things that the people in the environment don’t need to be doing, so they can do more important stuff. They can do all the clinical care.

In a lot of the hospitals we’re working with, we’re looking at their clinical workflows and identifying places where there’s a lot of human touch, like nurses making an assessment of the patient. But then the nurse finishes making an assessment [and] has to run and fetch things. Wouldn’t it be better if as soon as that nurse’s assessment hit the electronic medical record, that triggered a task for the robot to come and bring things? Then the nurse just gets to stay with the patient.

Those are the kind of things we’re looking for: places you could augment the clinical workflow with some automation and increase the amount of time that nurses or physicians are spending with patients.

So your robots, as you said before, do need human supervision. Will they always?

We are working on autonomy. We do want the robots to be doing things autonomously in the environment. But we like to talk about care as a team effort; we’re adding the robot to the team, and there are parts of it the robot’s doing and parts of it the human’s doing. There may be places where the robot needs some input or assistance, because it’s part of the clinical team. That’s how we like to think about it: if the robot is designed to be a teammate, it wouldn’t be very unusual for the robot to need some help or supervision from a teammate.

That seems different than what you could call Ghost Work.

Right. In most service robots being deployed today, there is this remote supervisor that is either logged in and checking in on the robots, or at least the robots have the ability to phone home if there’s some sort of problem.

That’s where some of this Ghost Work comes in. People are monitoring and keeping track of robots in the middle of the night. Certainly that may be part of how we deploy our robots as well. But we also think that it’s perfectly fine for some of that supervision or assistance to come out into the forefront and be part of the face-to-face interaction that the robot has with some of its coworkers.

Since you could potentially envision a scenario in which your robots are monitored from off-site, in a kind of Ghost Work setting, what concerns do you have about the ways in which that work can be kind of anonymized and undercompensated?

Currently we are really interested in our own engineering staff having high-touch customer interaction that we’re really not looking to anonymize. If we had a robot in the field and it was phoning home about some problem that was happening, at our early stage of the company, that is such a valuable interaction that in our company that wouldn’t be anonymous. Maybe the CTO would be the one phoning in and saying, “What happened? I’m so interested.”

I think we’re still at a stage where all of the customer interactions and all of the information we can get from robots in the field are such valuable pieces of information.

But how are you envisioning best-case scenarios for the future? What if your robots really are so helpful that they’re very successful and people want them everywhere? Your CTO is not going to take all those calls. How could you do this in a way that could make your company very successful, but also handle these responsibilities ethically?

Will the future of work be ethical? Future leader perspectives

In June, TechCrunch Ethicist in Residence Greg M. Epstein attended EmTech Next, a conference organized by the MIT Technology Review. The conference, which took place at MIT’s famous Media Lab, examined how AI and robotics are changing the future of work.

Greg’s essay, Will the Future of Work Be Ethical? reflects on his experiences at the conference, which produced what he calls “a religious crisis, despite the fact that I am not just a confirmed atheist but a professional one as well.” In it, Greg explores themes of inequality, inclusion and what it means to work in technology ethically, within a capitalist system and market economy.

Accompanying the story for Extra Crunch are a series of in-depth interviews Greg conducted around the conference, with scholars, journalists, founders and attendees.

Below he speaks to two conference attendees who had crucial insights to share. Meili Gupta is a high school senior at Phillips Exeter Academy, an elite boarding school in New Hampshire; Gupta attended the EmTech Next conference with her mother and has attended with family in previous years as well; her voice and thoughts on privilege and inequality in education and technology are featured prominently in Greg’s essay. Walter Erike is a 31-year-old independent consultant and SAP implementation senior manager from Philadelphia. Between conference sessions, he and Greg talked about diversity and inclusion at tech conferences and beyond.

Meili Gupta is a senior at Phillips Exeter Academy. Image via Meili Gupta

Greg Epstein: How did you come to be at EmTech Next?

Meili Gupta: I am a rising high school senior at Phillips Exeter Academy; I’m one of the managing editors for my school’s science magazine called Matter Magazine.

I [also] attended the conference last year. My parents have come to these conferences before, and that gave me an opportunity to come. I am particularly interested in the MIT Technology Review because I’ve grown up reading it.

You are the Managing Editor of Matter, a magazine about STEM at your high school. What subjects that Matter covers are most interesting to you?

This year we published two issues. The first featured a lot of interviews with top [AI] professors, like Professor Fei-Fei Li at Stanford. We did a review for her and an interview with Professor Olga Russakovsky at Princeton. That was an AI special issue and, being at this conference, you hear about how AI will transform industries.

The second issue coincided with Phillips Exeter Global Climate Action Day. We focused both on environmentalism clubs at Exeter and on environmentalism efforts worldwide. I think Matter, as the only STEM magazine on campus, has a responsibility to do that.

AI and climate: in a sense, you’ve already dealt with this new field people are calling the ethics of technology. When you hear that term, what comes to mind?

As a consumer of a lot of technology and as someone of the generation who has grown up with a phone in my hand, I’m aware my data is all over the internet. I’ve had conversations [with friends] about personal privacy and if I look around the classroom, most people have covers for the cameras on their computers. This generation is already aware [of] ethics whenever you’re talking about computing and the use of computers.

About AI specifically, as someone who’s interested in the field and has been privileged to be able to take courses and do research projects about that, I’m hearing a lot about ethics with algorithms, whether that’s fake news or bias or about applying algorithms for social good.

What are your biggest concerns about AI? What do you think needs to be addressed in order for us to feel more comfortable as a society with increased use of AI?

That’s not an easy answer; it’s something our society is going to be grappling with for years. From what I’ve learned at this conference, from what I’ve read and tried to understand, it’s a multidimensional solution. You’re going to need computer programmers to learn the technical skills to make their algorithms less biased. You’re going to need companies to hire those people and say, “This is our goal; we want to create an algorithm that’s fair and can do good.” You’re going to need the general society to ask for that standard. That’s my generation’s job, too. WikiLeaks, a couple of years ago, sparked the conversation about personal privacy and I think there’s going to be more sparks.

Seems like your high school is doing some interesting work in terms of incorporating both STEM and a deeper, more creative than usual focus on ethics and exploring the meaning of life. How would you say that Exeter in particular is trying to combine these issues?

I’ll give a couple of examples of my experience with that in my time at Exeter, and I’m very privileged to go to a school that has these opportunities and offerings for its students.

Don’t worry, that’s in my next question.

Absolutely. With the computer science curriculum, starting in my ninth grade they offered a computer science 590 about [introduction to] artificial intelligence. In the fall another 590 course was about self-driving cars, and you saw the intersection between us working in our robotics lab and learning about computer vision algorithms. This past semester, a couple of students (I was involved) helped to set up a 999: an independent course which really dove deep into machine learning algorithms. In the fall, there’s another 590 I’ll be taking called social innovation through software engineering, which is specifically designed for each student to pick a local project and to apply software, coding or AI to a social good project.

I’ve spent 15 years working at Harvard and MIT. I’ve worked around a lot of smart and privileged people and I’ve supported them. I’m going to ask you a question about Exeter and about your experience as a privileged high school student who is getting a great education, but I don’t mean it from a perspective of it’s now me versus you.

Of course you’re not.

I’m trying to figure this out for myself as well. We live in a world where we’re becoming more prepared to talk about issues of fairness and justice. Yet by even just providing these extraordinary educational experiences to people like you and me and my students or whomever, we’re preparing some people for that world better than others. How do you feel about being so well prepared for this sort of world to come that it can actually be… I guess my question is, how do you relate to the idea that even the kinds of educational experiences that we’re talking about are themselves deepening the divide between haves and have nots?

I completely agree that the issue between haves and have-nots needs to be talked about more, because inequality between the upper and lower classes is growing every year. This morning, the talk by Mr. Isbell from Georgia Tech was really inspiring. For example, at Phillips Exeter, we have a social service club called ESA, which houses more than 70 different social service clubs. One I’m involved with, junior computer programming, teaches programming to local middle school students. That’s the type of thing, at an individual level and smaller scale, that people can do to try to help out those who have not been privileged with opportunities to learn and get ahead with those skills.

What Mr. Isbell was talking about this morning was at a university level, and also about tying in corporations to bridge that divide. I don’t think that the issue itself should necessarily scare us away from pushing forward to the frontier, even given, say, the possibility that everybody who does not have a computer science education in five years won’t have a job.

Today we had that debate about robots’ role in people’s jobs and robot taxes. That’s a very good debate to have, but it sometimes feeds a little bit into the AI hype, and I think it may be a disgrace to society to try to pull back technology, which has been shown to have the power to save lives. It can be two transformations that are happening at the same time: one that’s trying to bridge an inequality, which is going to come in a lot of different and complicated solutions that happen at multiple levels, and a second that’s allowing for a transformation in technology and AI.

What are you hoping to get out of this conference for yourself, as a student, as a journalist, or as somebody who’s going into the industry?

The theme for this conference is the future of the workforce. I’m a student. That means I’m going to be the future of the workforce. I was hoping to learn some insight about what I may want to study in college. After that, what type of jobs do I want to pursue that are going to exist and be in demand and really interesting, that have an impact on other people? Also, as a student, in particular that’s interested in majoring in computer science and artificial intelligence, I was hoping to learn about possible research projects that I could pursue in the fall with this 590 course.

Right now, I’m working on a research project with a Professor at the University of Maryland about eliminating bias in machine learning algorithms. What type of dataset do I want to apply that project to? Where is the need or the attention for correcting bias in the AI algorithms?

As a journalist, I would like to write a review summarizing what I’ve learned so other [Exeter students] can learn a little too.

What would be your biggest critique of the conference? What could be improved?

Will the future of work be ethical? Perspectives from MIT Technology Review

In June, TechCrunch Ethicist in Residence Greg M. Epstein attended EmTech Next, a conference organized by the MIT Technology Review. The conference, which took place at MIT’s famous Media Lab, examined how AI and robotics are changing the future of work.

Greg’s essay, Will the Future of Work Be Ethical? reflects on his experiences at the conference, which produced what he calls “a religious crisis, despite the fact that I am not just a confirmed atheist but a professional one as well.” In it, Greg explores themes of inequality, inclusion and what it means to work in technology ethically, within a capitalist system and market economy.

Accompanying the story for Extra Crunch are a series of in-depth interviews Greg conducted around the conference, with scholars, journalists, founders and attendees.

Below he speaks to two key organizers: Gideon Lichfield, the editor in chief of the MIT Technology Review, and Karen Hao, its artificial intelligence reporter. Lichfield led the creative process of choosing speakers and framing panels and discussions at the EmTech Next conference, and both Lichfield and Hao spoke and moderated key discussions.

Gideon Lichfield is the editor in chief at MIT Technology Review. Image via MIT Technology Review

Greg Epstein: I want to first understand how you see your job — what impact are you really looking to have?

Gideon Lichfield: I frame this as an aspiration. Most of the tech journalism, most of the tech media industry that exists, is born in some way of the era just before the dot-com boom. When there was a lot of optimism about technology. And so I saw its role as being to talk about everything that technology makes possible. Sometimes in a very negative sense. More often in a positive sense. You know, all the wonderful ways in which tech will change our lives. So there was a lot of cheerleading in those days.

In more recent years, there has been a lot of backlash, a lot of fear, a lot of dystopia, a lot of all of the ways in which tech is threatening us. The way I’ve formulated the mission for Tech Review would be to say, technology is a human activity. It’s not good or bad inherently. It’s what we make of it.

The way that we get technology that has fewer toxic effects and more beneficial ones is for the people who build it, use it, and regulate it to make well informed decisions about it, and for them to understand each other better. And I said the role of a tech publication like Tech Review, one that is under a university like MIT, probably uniquely among tech publications, we’re positioned to make that our job. To try to influence those people by informing them better and instigating conversations among them. And that’s part of the reason we do events like this. So that ultimately better decisions get taken and technology has more beneficial effects. So that’s like the high level aspiration. How do we measure that day to day? That’s an ongoing question. But that’s the goal.

Yeah, I mean, I would imagine you measure it qualitatively. In the sense that… What I see when I look at a conference like this is, I see an editorial vision, right? I mean that I’m imagining that you and your staff have a lot of sort of editorial meetings where you set, you know, what are the key themes that we really need to explore. What do we need to inform people about, right?

Yes.

What do you want people to take away from this conference then?

A lot of the people in the audience work at medium and large companies. And they’re thinking about… what effect are automation and AI going to have on their companies? How should it affect their workplace culture? How should it affect their high end decisions? How should it affect their technology investments? And I think the goal for me, or for us, is that they come away from this conference with a rounded picture of the different factors that can play a role.

There are no clear answers. But they ought to be able to think in an informed and in a nuanced way. If we’re talking about automating some processes, or contracting out more of what we do to a gig work style platform, or different ways we might train people on our workforce or help them adapt to new job opportunities, or if we’re thinking about laying people off versus retraining them. All of the different implications that that has, and all the decisions you can take around that, we want them to think about that in a useful way so that they can take those decisions well.

You’re already speaking, as you said, to a lot of the people who are winning, and who are here getting themselves more educated and therefore more likely to just continue to win. How do you weigh where to push them to fundamentally change the way they do things, versus getting them to incrementally change?

That’s an interesting question. I don’t know that we can push people to fundamentally change. We’re not a labor movement. What we can do is put people from labor movements in front of them and have those people speak to them and say, “Hey, these are the consequences that the decisions you’re taking are having on the people we represent.” Part of the difficulty with this conversation has been that it has been taking place, up till now, mainly among the people who understand the technology and its consequences. Which was the people building it and then a small group of scholars studying it. Over the last two or three years I’ve gone to conferences like ours and other people’s, where issues of technology ethics are being discussed. Initially it really was only the tech people and the business people who were there. And now you’re starting to see more representation: from labor, from community organizations, from minority groups. But it’s taken a while, I think, for the understanding of those issues to percolate and then for people in those organizations to take on the cause and say, yeah, this is something we have to care about.

In some ways this is a tech ethics conference. If you labeled it as such, would that dramatically affect the attendance? Would you get fewer of the actual business people to come to a tech ethics conference rather than a conference that’s about tech but that happened to take on ethical issues?

Yeah, because I think they would say it’s not for them.

Right.

Business people want to know, what are the risks to me? What are the opportunities for me? What are the things I need to think about to stay ahead of the game? The case we can make is [that] ethical considerations are part of that calculus. You have to think about what the risks are going to be to you of, you know, getting rid of all your workforce and relying on contract workers. What does that do to those workers and how does that play back in terms of a risk to you?

Yes, you’ve got Mary Gray, Charles Isbell, and others here with serious ethical messages.

What about the idea of giving back versus taking less? There was an L.A. Times op ed recently, by Joseph Menn, about how it’s time for tech to give back. It talked about how 20% of Harvard Law grads go into public service after their graduation but if you look at engineering graduates, the percentage is smaller than that. But even going beyond that perspective, Anand Giridharadas, popular author and critic of contemporary capitalism, might say that while we like to talk about “giving back,” what is really important is for big tech to take less. In other words: pay more taxes. Break up their companies so they’re not monopolies. To maybe pay taxes on robots, that sort of thing. What’s your perspective?

I don’t have a view on either of those things. I think the interesting question is really, what can motivate tech companies, what can motivate anybody who’s winning a lot in this economy, to either give back or take less? It’s about what causes people who are benefiting from the current situation to feel they need to also ensure other people are benefiting.

Maybe one way to talk about this is to raise a question I’ve seen you raise: what the hell is tech ethics anyway? I would say there isn’t a tech ethics. Not in the philosophy sense your background is from. There is a movement. There is a set of questions around it, around what should technology companies’ responsibility be? And there’s a movement to try to answer those questions.

A bunch of the technologies that have emerged in the last couple of decades were thought of as being good, as being beneficial. Mainly because they were thought of as being democratizing. And there was this very naïve Western viewpoint that said if we put technology and power in the hands of the people they will necessarily do wise and good things with it. And that will benefit everybody.

And these technologies, including the web, social media, smart phones, you could include digital cameras, you could include consumer genetic testing, all things that put a lot more power in the hands of the people, have turned out to be capable of having toxic effects as well.

That took everybody by surprise. And the reason that has raised a conversation around tech ethics is that it also happens that a lot of those technologies are ones in which the nature of the technology favors the emergence of a dominant player, because of network effects or because they require lots of data. And so the conversation has been, what is the responsibility of that dominant player to design the technology in such a way that it has fewer of these harmful effects? And that again is partly because the forces that in the past might have constrained those effects, or imposed rules, are not moving fast enough. It’s the tech makers who understand this stuff. Policymakers and civil society have been slower to catch up to what the effects are. They’re starting to now.

This is what you are seeing now in the election campaign: a lot of the leading candidates have platforms that are about the use of technology and about breaking up big tech. That would have been unthinkable a year or two ago.

So the discussion about tech ethics is essentially saying these companies grew too fast, too quickly. What is their responsibility to slow themselves down before everybody else catches up?

Another piece that interests me is how sometimes the “giving back,” the generosity of big tech companies or tech billionaires, or whatever it is, can end up being a smokescreen. A way to ultimately persuade people not to regulate. Not to take their own power back as a people. Is there a level of tech generosity that is actually harmful in that sense?

I suppose. It depends on the context. If all that’s happening is corporate social responsibility drives that involve dropping money into different places, but there isn’t any consideration of the consequences of the technology itself that those companies are building and their other actions, then sure, it’s a problem. But it’s also hard to say giving billions of dollars to a particular cause is bad, unless what is happening is that the government is then shirking its responsibility to fund those causes because the money is coming out of the private sector. I can certainly see the U.S. being particularly susceptible to this dynamic, where government sheds responsibility. But I don’t think we’re necessarily there yet.