How to ‘watch’ NASA’s OSIRIS-REx snatch a sample from near-Earth asteroid Bennu

NASA’s OSIRIS-REx probe is about to touch down on an asteroid for a smash-and-grab mission, and you can follow its progress live — kind of. The craft is scheduled to perform its collection operation this afternoon, and we’ll know within minutes if all went according to plan.

OSIRIS-REx, which stands for Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer, launched in September 2016. Since arriving at its destination, the asteroid Bennu, it has performed a delicate dance with the object, entering an orbit so close it set records.

Today is the culmination of the team’s efforts, the actual “touch and go” or TAG maneuver that will see the probe briefly land on the asteroid’s surface and suck up some of its precious space dust. Just a few seconds later, once sampling is confirmed, the craft will jet upward again to escape Bennu and begin its journey home.

Image Credits: NASA

While there won’t be live HD video of the whole attempt, NASA will be providing both a live animation of the process, informed by OSIRIS-REx’s telemetry, and presumably any good images that are captured as it descends.

We know for certain this is both possible and very cool because Japan’s Hayabusa-2 asteroid mission did something very similar last year, but with the added complexity (and coolness) of firing a projectile into the surface to stir things up and get a more diverse sample.

NASA’s coverage starts at 2 p.m. PDT, and the touchdown event is planned to take place an hour or so later, at 3:12. You can watch the whole thing take place in simulation at this Twitch feed, which will be updated live, and NASA TV will also have live coverage and commentary on its YouTube channel. Images may come back from the descent and collection, but they’ll be delayed (it’s hard sending lots of data across a gap of more than 200 million miles), so if you want the latest, listen closely to the NASA feeds.
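To get a sense of that delay: around the time of the TAG attempt, Bennu was roughly 200 million miles from Earth (that distance is an approximation on my part, not a figure from NASA's coverage), and even a radio signal moving at the speed of light needs a while to cross it. The back-of-the-envelope arithmetic:

```python
# Back-of-the-envelope one-way signal delay between Earth and Bennu.
# The 207-million-mile distance is an approximate value for the time
# of the TAG attempt, not an official NASA figure.
C_MILES_PER_SEC = 186_282  # speed of light in miles per second

def one_way_delay_minutes(distance_miles: float) -> float:
    """Light-travel time in minutes over the given distance."""
    return distance_miles / C_MILES_PER_SEC / 60

print(f"{one_way_delay_minutes(207e6):.1f} minutes")  # about 18.5 minutes each way
```

So even a bare "sample collected" confirmation takes the better part of twenty minutes to reach Earth, which is why the "live" coverage is really a telemetry-driven animation.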

Who regulates social media?

Social media platforms have repeatedly found themselves in the United States government’s crosshairs over the last few years, as it has been progressively revealed just how much power they really wield, and to what purposes they’ve chosen to wield it. But unlike, say, a firearm or drug manufacturer, there is no designated authority that says what these platforms can and can’t do. So who regulates them? You might say everyone and no one.

Now, it must be made clear at the outset that these companies are by no means “unregulated”; no legal business in this country is. For instance, Facebook, certainly a social media company, received a record $5 billion fine last year for failing to comply with rules set by the FTC. But that wasn’t for violating social-media-specific regulations, because there aren’t any.

Facebook and others are bound by the same rules that most companies must follow, such as generally agreed-upon definitions of fair business practices, truth in advertising, and so on. But industries like medicine, energy, alcohol, and automotive have additional rules, indeed entire agencies, specific to them; not so social media companies.

I say “social media” rather than “tech” because the latter is much too broad a concept to have a single regulator. Although Google and Amazon (and Airbnb, and Uber, and so on) need new regulation as well, they may require a different specialist, like an algorithmic accountability office or online retail antitrust commission. (Inasmuch as tech companies act within regulated industries, such as Google in broadband, they are already regulated as such.)

Social media can be roughly defined as platforms where people sign up to communicate and share messages and media, and that’s quite broad enough already without adding in things like ad marketplaces, competition quashing and other serious issues.

Who, then, regulates these social media companies? For the purposes of the U.S., there are four main directions from which meaningful limitations or policing may emerge, but each one has serious limitations, and none was actually created for the task.

1. Federal regulators

Image Credits: Andrew Harrer/Bloomberg

The Federal Communications Commission and Federal Trade Commission are what people tend to think of when “social media” and “regulation” are used in a sentence together. But one is a specialist — not the right kind, unfortunately — and the other a generalist.

The FCC, unsurprisingly, is primarily concerned with communication, but due to the laws that created it and grant it authority, it has almost no authority over what is being communicated. The sabotage of net neutrality has complicated this somewhat, but even the faction of the Commission dedicated to the backwards stance adopted during this administration has not argued that the messages and media you post are subject to their authority. They have indeed called for regulation of social media and big tech — but are for the most part unwilling and unable to do so themselves.

The Commission’s mandate is explicitly the cultivation of a robust and equitable communications infrastructure, which these days primarily means fixed and mobile broadband (though increasingly satellite services as well). The applications and businesses that use that broadband, though they may be affected by the FCC’s decisions, are generally speaking none of the agency’s business, and it has repeatedly said so.

The only potentially relevant exception is the much-discussed Section 230 of the Communications Decency Act (an amendment to the sprawling Communications Act), which shields companies from liability for content their users post, and separately protects their “good faith” efforts to moderate that content.

But this part of the law doesn’t actually grant the FCC authority over those companies or define good faith, and there’s an enormous risk of stepping into unconstitutional territory, because a government agency telling a company what content it must keep up or take down runs full speed into the First Amendment. That’s why although many think Section 230 ought to be revisited, few take Trump’s feeble executive actions along these lines seriously.

The agency did announce that it will be reviewing the prevailing interpretation of Section 230, but until there is some kind of established statutory authority or Congress-mandated mission for the FCC to look into social media companies, it simply can’t.

The FTC is a different story. As watchdog over business practices at large, it has a similar responsibility towards Twitter as it does towards Nabisco. It doesn’t have rules about what a social media company can or can’t do any more than it has rules about how many flavors of Cheez-It there should be. (There are industry-specific “guidelines” but these are more advisory about how general rules have been interpreted.)

On the other hand, the FTC is very much the force that comes into play should Facebook misrepresent how it shares user data, or Nabisco overstate the amount of real cheese in its crackers. The agency’s most relevant responsibility to the social media world is that of enforcing the truthfulness of material claims.

You can thank the FTC for the now-familiar, carefully worded statements that avoid any real claims or responsibilities: “We take security very seriously” and “we think we have the best method” and that sort of thing — so pretty much everything that Mark Zuckerberg says. Companies and executives are trained to do this to avoid tangling with the FTC: “Taking security seriously” isn’t enforceable, but saying “user data is never shared” certainly is.

In some cases this can still have an effect, as in the $5 billion fine recently dropped into Facebook’s lap (though for many reasons that was actually not very consequential). It’s important to understand that the fine was for breaking binding promises the company had made — not for violating some kind of social-media-specific regulations, because again, there really aren’t any.

The last point worth noting is that the FTC is a reactive agency. Although it certainly has guidelines on the limits of legal behavior, it doesn’t have rules that, when violated, result in a statutory fine or charges. Instead, complaints filter up through its many reporting systems and it builds a case against a company, often with the help of the Justice Department. That makes it slow to respond compared with the lightning-fast tech industry, and the companies or victims involved may have moved beyond the point of crisis by the time a complaint is formalized. Equifax’s historic breach and the minimal consequences that followed are an instructive case.

So: While the FCC and FTC do provide important guardrails for the social media industry, it would not be accurate to say they are its regulators.

2. State legislators

States are increasingly battlegrounds for the frontiers of tech, including social media companies. This is likely due to frustration with partisan gridlock in Congress that has left serious problems unaddressed for years or decades. Two good examples of states that lost their patience are California’s new privacy rules and Illinois’s Biometric Information Privacy Act (BIPA).

The California Consumer Privacy Act (CCPA) was arguably born out of the ashes of other attempts at the national level to make companies more transparent about their data collection policies, like the ill-fated Broadband Privacy Act.

Californian officials decided that if the feds weren’t going to step up, there was no reason the state shouldn’t at least look after its own. By convention, state laws that offer consumer protections are generally given priority over weaker federal laws — this is so a state isn’t prohibited from taking measures for its citizens’ safety while the slower machinery of Congress grinds along.

The resulting law, very briefly stated, creates formal requirements for disclosures of data collection, methods for opting out of them, and also grants authority for enforcing those laws. The rules may seem like common sense when you read them, but they’re pretty far out there compared to the relative freedom tech and social media companies enjoyed previously. Unsurprisingly, they have vocally opposed the CCPA.

BIPA has a somewhat similar origin, in that a particularly far-sighted state legislature created a set of rules in 2008 limiting companies’ collection and use of biometric data like fingerprints and facial recognition. It has proven to be a huge thorn in the side of Facebook, Microsoft, Amazon, Google, and others that have taken for granted the ability to analyze a user’s biological metrics and use them for pretty much whatever they want.

Many lawsuits have been filed alleging violations of BIPA, and while few have produced notable punishments, they have been invaluable in forcing the companies to admit on the record exactly what they’re doing, and how. Sometimes it’s quite surprising! The optics are terrible, and tech companies have lobbied (fortunately, with little success) to have the law replaced or weakened.

What’s crucially important about both of these laws is that they force companies to, in essence, choose between universally meeting a new, higher standard for something like privacy, or establishing a tiered system whereby some users get more privacy than others. The thing about the latter choice is that once people learn that users in Illinois and California are getting “special treatment,” they start asking why Mainers or Puerto Ricans aren’t getting it as well.

In this way state laws exert outsize influence, forcing companies to make changes nationally or globally because of decisions that technically only apply to a small subset of their users. You may think of these states as being activists (especially if their attorneys general are proactive), or simply ahead of the curve, but either way they are making their mark.

This is not ideal, however, because taken to the extreme, it produces a patchwork of state laws created by local authorities that may conflict with one another or embody different priorities. That, at least, is the doomsday scenario predicted almost universally by companies in a position to lose out.

State laws act as a test bed for new policies, but tend to only emerge when movement at the federal level is too slow. Although they may hit the bullseye now and again, like with BIPA, it would be unwise to rely on a single state or any combination among them to miraculously produce, like so many simian legislators banging on typewriters, a comprehensive regulatory structure for social media. Unfortunately, that leads us to Congress.

3. Congress

Image: Bryce Durbin/TechCrunch

What can be said about the ineffectiveness of Congress that has not already been said, again and again? Even in the best of times few would trust these people to establish reasonable, clear rules that reflect reality. Congress simply is not the right tool for the job, because of its stubborn and willful ignorance on almost all issues of technology and social media, its countless conflicts of interest, and its painful sluggishness — sorry, deliberation — in actually writing and passing any bills, let alone good ones.

Companies oppose state laws like the CCPA while calling for national rules because they know the latter will take forever, leaving more opportunity to get their finger in the pie before it’s baked. National rules, in addition to coming far too late, are also much more likely to be watered down and riddled with loopholes by industry lobbyists. (This is indicative of the influence these companies wield over their own regulation, but it’s hardly official.)

But Congress isn’t a total loss. In moments of clarity it has established expert agencies like those in the first item, which have Congressional oversight but are otherwise independent, empowered to make rules, and kept technically — if somewhat limply — nonpartisan.

Unfortunately, the question of social media regulation is too recent for Congress to have empowered a specialist agency to address it. Social media companies don’t fit neatly into any of the categories that existing specialists regulate, something that is plainly evident by the present attempt to stretch Section 230 beyond the breaking point just to put someone on the beat.

Laws at the federal level are not to be relied on for regulation of this fast-moving industry, as the current state of things shows more than adequately. And until a dedicated expert agency or something like it is formed, it’s unlikely that anything spawned on Capitol Hill will do much to hold back the Facebooks of the world.

4. European regulators

Of course, however central it considers itself to be, the U.S. is only a part of a global ecosystem of various and shifting priorities, leaders, and legal systems. But in a sort of inside-out version of state laws punching above their weight, laws that affect a huge part of the world except the U.S. can still have a major effect on how companies operate here.

The most obvious example is the General Data Protection Regulation, or GDPR, a set of rules (or rather an augmentation of existing rules dating to 1995) that has begun to change the way some social media companies do business.

But this is only the latest step in a fantastically complex, decades-long process that must harmonize the national laws and needs of the E.U. member states in order to provide the clout it needs to compel adherence to the international rules. Red tape seldom bothers tech companies, which rely on bottomless pockets to plow through or in-born agility to dance away.

Although the tortoise may in this case eventually overtake the hare in some ways, at present the GDPR’s primary hindrance is not merely the complexity of its rules, but the lack of decisive enforcement of them. Each country’s data protection authority acts as a node in a network that must reach consensus in order to bring the hammer down, a process that grinds slow and exceedingly fine.

When the blow finally lands, though, it may be a heavy one, outlawing entire practices at an industry-wide level rather than simply extracting pecuniary penalties these immensely rich entities can shrug off. There is space for optimism as cases escalate and involve heavy hitters like antitrust laws in efforts that grow to encompass the entire “big tech” ecosystem.

The rich tapestry of European regulation is too complex a topic to address here in the detail it deserves, and it also reaches beyond the question of who exactly regulates social media. Europe’s approach to that question, one of speaking slowly and carrying a big stick (if you will), promises to produce results on a grand scale, but for the purposes of this article Europe cannot really be considered an effective policing body.

(TechCrunch’s E.U. regulatory maven Natasha Lomas contributed to this section.)

5. No one? Really?

As you can see, the regulatory ecosystem in which social media swims is more or less free of predators. The most dangerous are the small, agile ones — state legislatures — that can take a bite before the platforms have had a chance to brace for it. The other regulators are either too slow, too compromised, or too involved (or some combination of the three) to pose a real threat. For this reason it may be necessary to introduce a new, but familiar, species: the expert agency.

As noted above, the FCC is the most familiar example of one of these, though its role is so fragmented that one could be forgiven for forgetting that it was originally created to ensure the integrity of the telephone and telegraph system. Why, then, is it the expert agency for orbital debris? That’s a story for another time.

Capitol building

Image Credit: Bryce Durbin/TechCrunch

What is clearly needed is the establishment of an independent expert agency or commission in the U.S., at the federal level, that has statutory authority to create and enforce rules pertaining to the handling of consumer data by social media platforms.

Like the FCC (and somewhat like the E.U.’s DPAs), this should be officially nonpartisan — though like the FCC it will almost certainly vacillate in its allegiance — and should have specific mandates on what it can and can’t do. For instance, it would be improper and unconstitutional for such an agency to say this or that topic of speech should be disallowed from Facebook or Twitter. But it would be able to say that companies need to have a reasonable and accessible definition of the speech they forbid, and likewise a process for auditing and contesting takedowns. (The details of how such an agency would be formed and shaped are well beyond the scope of this article.)

Even the likes of the FAA lag behind industry changes, such as the upsurge in drones that necessitated a hasty revisit of existing rules, or the huge increase in commercial space launches. But that’s a feature, not a bug. These agencies are designed not to act unilaterally based on the wisdom and experience of their leaders; they are required to perform or solicit research, consult with the public and industry alike, and create evidence-based policies grounded in, or at least addressing, a minimum of sufficiently objective data.

Sure, that didn’t really work with net neutrality, but I think you’ll find that industries have been unwilling to capitalize on this temporary abdication of authority by the FCC because they see that the Commission’s current makeup is fighting a losing battle against voluminous evidence, public opinion, and common sense. They see the writing on the wall and understand that under this system it can no longer be ignored.

With an analogous authority for social media, the evidence could be made public, the intentions for regulation plain, and the shareholders — that is to say, users — could make their opinions known in a public forum that isn’t owned and operated by the very companies they aim to rein in.

Without such an authority these companies and their activities — the scope of which we can only guess at — will remain in a blissful limbo, picking and choosing by which rules to abide and against which to fulminate and lobby. We must help them decide, and weigh our own priorities against theirs. They have already abused the naive trust of their users across the globe — perhaps it’s time we asked them to trust us for once.

Lockheed picks Relativity’s 3D-printed rocket for experimental NASA mission

Relativity Space has bagged its first public government contract, and with a major defense contractor at that. The launch startup’s 3D-printed rockets are a great match for a particularly complex mission Lockheed is undertaking for NASA’s Tipping Point program.

The mission is a test of a dozen different cryogenic fluid management systems, including some handling liquid hydrogen, which is a very difficult substance to work with indeed. The tests will take place on a single craft in orbit, which means it will be a particularly complicated one to design and accommodate.

The payload itself and its cryogenic systems will be designed and built by Lockheed and its partners at NASA, of course, but the company will need to work closely with its launch provider during development and especially in the leadup to the actual launch.

Relativity founder and CEO Tim Ellis explained that the company’s approach of 3D printing the entire rocket top to bottom is especially well suited for this.

“We’re building a custom payload fairing that has specific payload loading interfaces they need, custom fittings and adapters,” he said. “It still needs to be smooth, of course — to a lay person it will look like a normal rocket,” he added.

Every fairing (the external part of the launch vehicle covering the payload) is necessarily custom, but this one much more so. The delicacy of having a dozen cryogenic operations being loaded up and tested until moments before launch necessitates a number of modifications that, in other days, would result in a massive increase in manufacturing complexity.

“If you look at the manufacturing tools being used today, they’re not much different from the last 60 years,” Ellis explained. “It’s fixed tooling, giant machines that look impressive but only make one shape or one object that’s been designed by hand. And it’ll take 12-24 months to make it.”

Not so with Relativity.

“With our 3D printed approach we can print the entire fairing in under 30 days,” Ellis said. “It’s also software defined, so we can just change the file to change the dimensions and shape. For this particular object we have some custom features that we’re able to do more quickly and adapt. Even though the mission is three years out, there will always be last minute changes as you get closer to launch, and we can accommodate that. Otherwise you’d have to lock in the design now.”
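"Software defined" here means the geometry lives in a data file rather than in fixed tooling. As a toy illustration of the general idea (this is my own sketch, not Relativity's actual toolchain or fairing geometry), here's a parametric profile where changing two numbers regenerates the entire shape:

```python
import math

def fairing_profile(radius_m: float, height_m: float, steps: int = 5):
    """Toy tangent-ogive fairing profile: returns (height, radius) pairs.

    Changing radius_m or height_m regenerates the whole shape; the
    "tooling" is just this function, not a fixed mold.
    """
    # Ogive circle radius derived from base radius and overall height
    rho = (radius_m**2 + height_m**2) / (2 * radius_m)
    points = []
    for i in range(steps + 1):
        y = height_m * i / steps                        # distance from base
        x = math.sqrt(rho**2 - y**2) + radius_m - rho   # local radius at y
        points.append((round(y, 3), round(x, 3)))
    return points

# A 2.5 m-radius, 7 m-tall fairing, regenerated from two numbers:
for y, x in fairing_profile(2.5, 7.0):
    print(f"h={y:5.2f} m  r={x:5.2f} m")
```

Swapping in `fairing_profile(3.0, 9.0)` is the entire "retooling" step; that is the contrast Ellis draws with fixed tooling that makes only one shape.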

Ellis was excited about the opportunity to publicly take on a mission with such a major contractor. These enormous companies field billions of government dollars and take part in many launches, so it’s important to be in their good books, or at least in their Rolodexes. A mission like this, complex but lower-stakes than a crewed launch or a billion-dollar satellite, is a great chance for a company like Relativity to show its capabilities. (Having presold many of its launches already, Relativity clearly has no shortage of interest in its 3D-printed launch vehicles, but more is always better.)

The company will be going to space before then, though, if all continues to go according to plan. The first orbital test flight is scheduled for late 2021. “We’re actually printing the launch hardware right now, the last few weeks,” Ellis mentioned.

The NASA Tipping Point program funding Lockheed’s experiment with an $89.7 million contract is intended, as its name indicates, to help tip promising technologies over the edge into commercial viability. With hundreds of millions awarded yearly to companies pursuing things like lunar hoppers and robotic arms, it’s a bit like the agency’s venture fund.

Analogue takes on the TurboGrafx-16 with its Duo retro console

Analogue’s beautiful, functional retro gaming consoles provide a sort of “archival quality” alternative to the cheap mini-consoles proliferating these days. The latest system to be resurrected by the company is the ill-fated but still well-regarded TurboGrafx-16, known in Japan as the PC Engine.

The Duo, as Analogue’s device is called, is named after a later version of the TurboGrafx-16 that included its expensive CD-ROM add-on — and indeed the new Duo supports both game cards and CDs, provided they have survived all this time without getting scratched.

Like the rest of Analogue’s consoles, and unlike the popular SNES and NES Classic Editions from Nintendo (and indeed the new TurboGrafx-16 Mini), the Duo does not use emulation in any way. Instead, it’s a painstaking recreation of the original hardware, with tweaks to introduce modern conveniences like high-definition video, wireless controllers, and improvements to reliability and so on.

Image Credits: Analogue

As a bonus, it’s all done in an FPGA, meaning the original console’s logic is recreated at the hardware level rather than simulated in software. Games should play exactly as they would have on the original hardware, down to the annoying glitches and slowdowns of that era of consoles.

And what games! Well, actually, few of them ever reached the status of their competitors on Nintendo and Sega consoles here in the U.S., where the TurboGrafx-16 sold poorly. But titles like Bonk’s Adventure, Bomberman ’93, Ninja Spirit, Splatterhouse, and Devil’s Crush should be played more widely. Shmup fans like myself were spoiled with originals and arcade ports like R-Type and Blazing Lazers. The Ys series also got its start on the PC Engine (if you could afford the CD attachment). Here’s a good retrospective.

I wouldn’t mind having an HDMI port on the back of my SNES. Oh, Analogue makes one…

Analogue’s consoles are made for collectors who would prefer not to have to baby their original hardware, or want to upscale the signal and play wirelessly without too much fuss. I still have my original SNES, but 240p just doesn’t look as crisp as it did on a 15-inch CRT in the ’90s.

At $199, it’s more expensive than finding one at a garage sale, but good luck with that. The original console and its CD add-on cost a fortune in their day, so from that perspective this is a real bargain. Analogue says limited quantities are available and that the Duo will ship in 2021.

FAA streamlines commercial launch rules to keep the rockets flying

The FAA has published its updated rules for commercial space launches and reentries, streamlining and modernizing the large and complicated set of regulations. With rockets launching in greater numbers and variety, and from more providers, it makes sense to get a bit of the red tape out of the way.

The rules provide for licensing of rocket launch operators and approval of individual launches and reentry plans, among other things. As you can imagine, such rules must be complex in the first place, more so when they’ve been assembled piecemeal for years to accommodate a quickly moving industry.

U.S. Transportation Secretary Elaine Chao called the revisions a “historic, comprehensive update.” They consolidate four sets of regulations and unify licensing and safety rules under a single umbrella, while allowing flexibility for different types of operators or operations.

According to a press release from the FAA, the new rules allow:

  • A single operator’s license that can be used to support multiple launches or reentries from potentially multiple launch site locations.
  • Early review when applicants submit portions of their license application incrementally.
  • Applicants to negotiate mutually agreeable reduced time frames for submittals and application review periods.
  • Applicants to apply for a safety element approval with a license application, instead of needing to submit a separate application.
  • Additional flexibility on how to demonstrate high consequence event protection.
  • Neighboring operations personnel to stay during launch or reentry in certain circumstances.
  • Ground safety oversight to be scoped to better fit the safety risks and reduce duplicative requirements when operating at a federal site.

In speaking with leaders in the commercial space industry, a common theme is the burden of regulation. Any reform that simplifies and unifies will likely be welcomed by the community.

The actual regulations are hundreds of pages long, so it’s still hardly a simple task to get a license and start launching rockets. But at least it isn’t several sets of 500-page documents that you have to accommodate simultaneously.

The new rules have been submitted for entry in the Federal Register, and will take effect 90 days after that happens. In addition, the FAA will be putting out Advisory Circulars for public comment — additions and elaborations on the rules, of which the agency says there may be as many as two dozen in the next year. You can keep up with those here.

With ‘absurd’ timing, FCC announces intention to revisit Section 230

FCC Chairman Ajit Pai has announced his intention to pursue a reform of Section 230 of the Communications Act, which among other things limits the liability of internet platforms for content they host. Commissioner Rosenworcel described the timing — immediately after conservative outrage at Twitter and Facebook limiting the reach of an article relating to Hunter Biden — as “absurd.” But it’s not necessarily the crackdown the Trump administration clearly desires.

In a statement, Chairman Pai explained that “members of all three branches of the federal government have expressed serious concerns about the prevailing interpretation of the immunity set forth in Section 230,” and that there is broad support for changing the law — in fact there are already several bills under consideration that would do so.

At issue is the legal protections for platforms when they decide what content to allow and what to block. Some say they are clearly protected by the First Amendment (this is how it is currently interpreted), while others assert that some of those choices amount to violations of users’ right to free speech.

Though Pai does not mention specific recent circumstances in which internet platforms have been accused of having partisan bias in one direction or the other, it is difficult to imagine they — and the constant needling of the White House — did not factor into the decision.

A long road with an ‘unfortunate detour’

In fact the push to reform Section 230 has been progressing for years, with the limitations of the law and the FCC’s interpretation of its pertinent duties discussed candidly by the very people who wrote the original bill and thus have considerable insight into its intentions and shortcomings.

In June Commissioner Starks disparaged pressure from the White House to revisit the FCC’s interpretation of the law, saying that the First Amendment protections are clear and that Trump’s executive order “seems inconsistent with those core principles.” That said, he proposed that the FCC take the request to reconsider the law seriously.

“And if, as I suspect it ultimately will, the petition fails at a legal question of authority,” he said, “I think we should say it loud and clear, and close the book on this unfortunate detour. Let us avoid an upcoming election season that can use a pending proceeding to, in my estimation, intimidate private parties.”

The latter part of his warning seems especially prescient given the choice by the Chairman to open proceedings less than three weeks before the election, and the day after Twitter and Facebook exercised their authority as private platforms to restrict the distribution of articles which, as Twitter belatedly explained, clearly broke guidelines on publishing private information. (The New York Post article had screenshots of unredacted documents with what appeared to be Hunter Biden’s personal email and phone number, among other things.)

Commissioner Rosenworcel did not mince words, saying “The timing of this effort is absurd. The FCC has no business being the President’s speech police.” Starks echoed her, saying “We’re in the midst of an election… the FCC shouldn’t do the President’s bidding here.” (Trump has repeatedly called for the “repeal” of Section 230, which is just part of a much larger and important set of laws.)

Considering the timing and the utter impossibility of reaching any kind of meaningful conclusion before the election — rulemaking is at a minimum a months-long process — it is hard to see Pai’s announcement as anything but a pointed warning to internet platforms. Platforms which, it must be stressed, the FCC has essentially no regulatory powers over.

Foregone conclusion

The Chairman telegraphed his desired outcome clearly in the announcement, saying “Many advance an overly broad interpretation that in some cases shields social media companies from consumer protection laws in a way that has no basis in the text of Section 230… Social media companies have a First Amendment right to free speech. But they do not have a First Amendment right to a special immunity denied to other media outlets, such as newspapers and broadcasters.”

Whether the FCC has anything to do with regulating how these companies exercise that right remains to be seen, but it’s clear that Pai thinks the agency should, and doesn’t. With the makeup of the FCC currently 3:2 in favor of the conservative faction, it may be said that this rulemaking is a foregone conclusion; the net neutrality debacle showed that these Commissioners are willing to ignore and twist facts in order to justify the end they choose, and there’s no reason to think this rulemaking will be any different.

The process will be just as drawn out and public as previous ones, however, which means that a cavalcade of comments may yet again demonstrate that the FCC ignores public opinion, experts, and lawmakers alike as it invents or eliminates its roles as it sees fit. Be ready to share your feedback with the FCC, but no need to fire up the outrage just yet — chances are this rulemaking won’t even exist in draft form until after the election, at which point there may be something of a change in the urgency of this effort to reinterpret the law to the White House’s liking.

Suspect provenance of Hunter Biden data cache prompts skepticism and social media bans

A cache of emails and other selected data purportedly from a laptop owned by Hunter Biden was published today by the New York Post. Ordinarily a major leak related to a figure involved in a controversy of Presidential importance would be on every front page — but the red flags on this one are so prominent that few editors would consent to its being published as-is.

Almost no news outlets have reported the data or its origin as factual, and Facebook and Twitter have both restricted sharing of the Post articles pending further information. Here’s why.

When something of this nature comes up, it pays to investigate the sources very closely: It may very well be, as turned out to be the case before, that foreign intelligence services are directly involved. We know that Russia, among others, is actively attempting to influence the election using online influence campaigns and hackery. Any report of a political data leakage — let alone one friendly to Trump and related to Ukraine — must be considered within that context, and the data understood to be either purposefully released, purposefully edited, or both.

But even supposing no global influence effort existed, the provenance of this so-called leak would be difficult to swallow. So much so that major news organizations have held off coverage, and Facebook and Twitter have both limited the distribution of the NY Post article.

In a statement, Twitter said that it is blocking links or images of the material “in line with our hacked materials policy.” The suspicious circumstances surrounding the data’s origin apparently do not adequately exclude the possibility of its having been acquired through hacking or other illicit means. (I’ve asked Twitter for more clarity on this; Facebook has not responded to a request for comment.)

The story goes that a person dropped off three MacBook Pros to a repair shop in Delaware in April of 2019, claiming they were water damaged and needed data recovery services. The owner of the repair shop “couldn’t positively identify the customer as Hunter Biden,” but the laptop had a Beau Biden Foundation sticker on it.

On the laptops were, reportedly, many emails including many pertaining to Hunter Biden’s dealings with Ukrainian gas company Burisma, which Trump has repeatedly alleged were a cover for providing access to Hunter’s father, who was then Vice President. (There is no evidence for this, and Joe Biden has denied all this many times. Today the campaign specifically denied a meeting mentioned in one of the purported emails.)

In addition, the laptops were full of private images and personal videos incriminating the younger Biden, whose drug habit at the time has become public record.

The data was recovered, but somehow the client could not be contacted. The repair shop then apparently inspected the data, found it relevant to the national interest, and made a copy to give to Trump ally Rudy Giuliani before handing it over to the FBI. Giuliani, through former Trump strategist Steve Bannon, shared the data with the New York Post, which published the articles today.

There are so many problems with this story it is difficult to know where to begin.

  1. The very idea that a laptop with a video of Hunter Biden smoking crack on it would be given to a random repair shop to recover is absurd. It is years since his drug use and Burisma dealings became a serious issue of international importance, and professionals would long since have taken custody of any relevant hardware or storage. It is beyond the worst operational security in the world to give an unencrypted device with confidential data on it to a third party. It is, however, very much a valid way for someone to make a device appear to be from a person or organization without providing any verification that it is so.
  2. The repair shop supposedly could not identify Hunter Biden, who lives in Los Angeles, as the customer. But the invoice (for $85 — remarkably cheap for diagnosis, recovery, and backup of three damaged Macs) has “Hunter Biden” written right on it, with a phone number and one of the email addresses he reportedly used. It seems unlikely that Hunter Biden’s personal laptop — again, loaded with personal and confidential information, and possibly communications with the VP — would be given to a small repair shop (rather than an Apple Store or vetted dealer), and that that shop would be given his personal details for contact. Political operators with large supporting organizations simply don’t do that — though someone else could have.
  3. Even if they did, the idea that Biden or his assistant or whoever would not return to pick up the laptop or pay for the services is extremely suspicious. Again, these are supposedly the personal devices of someone who communicated regularly with the VP, and whose work had come under intense scrutiny long before they were dropped off. They would not be treated lightly or forgotten. On the other hand, someone who wanted this data to be inspected would do exactly this.
  4. That the laptops themselves were open and unencrypted is ridiculous. The serial number of the laptop suggests it was a 2017 MacBook Pro, probably running Mojave. Every Mac running OS X Lion or later has built-in encryption (FileVault) that is easily enabled. It would be unusual for anyone to provide a laptop for repair that had no password or protection whatsoever on its files, let alone a person like Hunter Biden — again, years into efforts to uncover personal data relating to his work in Ukraine. An actor who wanted this data to be discovered and read would leave it unencrypted.
  5. That this information would be inspected by the repair shop at all is very suspect indeed. Recovery of an ostensibly damaged Mac would likely take the form of cloning the drive and checking its integrity against the original. There is no reason the files or apps themselves would need to be looked at in the course of the work in the first place. Some shops have software that checks file hashes, if they can see them, against a database of known child sex abuse material. And there have been notable breaches of trust where repair staff illicitly accessed the contents of a laptop to get personal data. But there’s really no legitimate reason for this business to inspect the contents of the devices it is working on, let alone share that information with anyone, let alone a partisan operative. The owner, an avid Trump supporter, gave an interview this morning giving inconsistent information on what had happened and suggested he investigated the laptops of his own volition and retained copies for personal protection.
  6. The data itself is not convincing. The Post has published screenshots of emails rather than the full text with metadata, which is what you would publish if you wanted to show they were authentic. For a story with political implications this large, that kind of verification is essential.
  7. Lastly, the fact that a copy was given to Giuliani and Bannon before being handed over to the FBI, and that it is all being published two weeks before the election, lends the whole thing a familiar stink — one you may remember from other pre-election shenanigans in 2016. The choice of the Post as the outlet for distribution is curious as well; one need only accidentally step on a copy in the subway to understand why.
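For what it’s worth, the hash-checking described in point 5 is a real technique that never requires reading a file’s contents: software computes each file’s cryptographic hash and compares it against a database of known hashes. A minimal sketch of the idea, with a placeholder hash set standing in for any real database:

```python
# Toy sketch of hash-based file flagging. The "database" here is a
# placeholder set; real systems compare against curated hash lists.
import hashlib

KNOWN_HASHES = {
    # SHA-256 of the empty file, used here purely as a stand-in entry
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def file_matches_database(data: bytes) -> bool:
    """Return True if the file's SHA-256 hash appears in the database.
    Note the contents are hashed, never interpreted or viewed."""
    return hashlib.sha256(data).hexdigest() in KNOWN_HASHES

print(file_matches_database(b""))       # True: empty file's hash is listed
print(file_matches_database(b"hello"))  # False: unknown file
```

The point is that such scanning is mechanical and content-blind, which is quite different from a shop owner browsing a customer’s files.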

As you can see, very little about the story accompanying this data makes any real sense as told. None of these major issues is addressed or really even raised in the Post stories. If, however, you were to permit yourself to speculate even slightly as to the origin of the data, the story starts to make a lot of sense.

Say, for example, that Hunter Biden’s iCloud account was hacked, something that has happened to many celebrities and persons of political interest. This would give access not only to the emails purported to be shown in the Post article, but also to personal images and video automatically backed up from the phone that took them. That data, however, would have to be “laundered” in order to have a plausible origin that did not involve hackers, whose allegiance and intent would be trivial to deduce. Loaded onto a laptop with an obvious political sticker on it, with no password, left at a demonstrably unscrupulous repair shop with Hunter Biden’s personal contact details, it would be trivial to tip confederates off to its existence and vulnerability.

That’s pure speculation, of course. But it aligns remarkably well with the original story, doesn’t it? It would be the duty of any newsroom with integrity to exclude some or all of these very distinct possibilities or to at least explain their importance. Then and only then can the substance of the supposed leak be considered at all.

This story is developing. Inquiries are being made to provide further information and context.

NASA loads 14 companies with $370M for ‘tipping point’ technologies

NASA has announced more than a third of a billion dollars’ worth of “Tipping Point” contracts awarded to over a dozen companies pursuing potentially transformative space technologies. The projects range from in-space testing of cryogenic tech to a 4G LTE network for the Moon.

The space agency is almost always accepting applications for at least one of its many grant and contract programs, and Tipping Point is directly aimed at commercial space capabilities that need a bit of a boost. According to the program description, “a technology is considered at a tipping point if an investment in a demonstration will significantly mature the technology, increase the likelihood of infusion into a commercial space application, and bring the technology to market for both government and commercial applications.”

In this year’s awards, which take the form of multi-year contracts with multiple milestones, the focus was on two main areas: cryogenics and lunar surface tech. Note that the amounts provided are not necessarily the cost of developing the tech, but rather the sums deemed necessary to advance it to the next stage. Here’s a brief summary of each award:

Cryogenics

  • Eta Space, $27M: In-space demonstration of a complete cryogenic oxygen management system
  • Lockheed Martin, $89.7M: In-space demonstration of liquid hydrogen in over a dozen cryogenic applications
  • SpaceX, $53.2M: Flight demonstration transferring 10 tons of liquid oxygen between tanks in Starship
  • ULA, $86.2M: Demonstration of a smart propulsion cryogenic system on a Vulcan Centaur upper stage

Lunar surface innovation

  • Alpha Space Test and Research Alliance, $22.1M: Develop a small tech and science platform for lunar surface testing
  • Astrobotic, $5.8M: “Mature” a fast wireless charging system for use on the lunar surface
  • Intuitive Machines, $41.6M: Develop a hopper lander with a 2.2-pound payload capacity and 1.5-mile range
  • Masten Space Systems, $2.8M: Demonstrate a universal chemical heat and power source for lunar nights and craters
  • Masten Space Systems, $10M: Demonstrate precision landing and hazard avoidance on its Xogdor vehicle (a separate award under the “descent and landing” heading)
  • Nokia of America, $14.1M: Deploy the first LTE network in space for lunar surface communications
  • pH Matter, $3.4M: Demonstrate a fuel cell for producing and storing energy on the lunar surface
  • Precision Combustion, $2.4M: Advance a low-cost solid oxide fuel cell stack to generate power from propellants
  • Sierra Nevada, $2.4M: Demonstrate a device using solar energy to extract oxygen from lunar regolith
  • SSL Robotics, $8.7M: Develop a lighter, cheaper robotic arm for surface, orbital, and “terrestrial defense” applications
  • Teledyne Energy Systems, $2.8M: Develop a hydrogen fuel cell power system with a 10,000-hour operational life

You can read more about the proposal process and NASA’s areas of interest at the Tipping Point solicitation page.

Two Screens for Teachers will outfit all educators in Seattle Public Schools with a second monitor

Two Screens for Teachers, which as you may guess is about getting teachers a second screen to use at home, has put together enough funds to get every educator in the Seattle Public Schools system who needs one a new monitor. They’re hoping it will spur others to pony up for a similar treatment at their local schools.

The idea of running a class with 30 kids while also juggling teaching materials and administrative stuff all on a single laptop screen is anxiety-inducing just to think about, and thousands of teachers have been doing just that for months.

Two Screens for Teachers was announced in September as a way to connect those educators who need a second monitor (which is to say, most of them) with people who want to pay for it — it’s that simple. Thousands of monitors have already been distributed, but the waiting list is more than 20,000 people long, the kind of scale where the needle only gets moved by time — which teachers have little of — or generosity.

Fortunately there are enough generous people with a bit of cash on hand in Seattle that the organization has enough to give a new monitor to all of the 3,000 or so teachers in Seattle public schools. If you’re in one of them, sign up here in the next week to get yours!

Walk Score co-founders Matt Lerner and Mike Mathieu put the thing together in the first place, but a bunch of Seattle-based investors and entrepreneurs came together to raise the ~$430K needed to cover the whole district: “a matching grant from the Mark Torrance Foundation, a collection of early Amazon, Microsoft, and Redfin employees, and venture capitalists from the Madrona Venture Group and Pioneer Square Labs,” as the organization put it.

Lerner is hoping that the success in Seattle will activate similar communities all over the country, where there are thousands more teachers in need.

“We’re asking our fellow techies in the Bay Area, Los Angeles, New York, Chicago, Denver, Salt Lake City, Atlanta, Austin, Dallas, Pittsburgh and Raleigh-Durham to show teachers they matter and keep students connected across the country,” said Lerner in a press release. Naturally other cities are welcome to join in as well, but those on the list have been challenged directly.

Lerner confirmed to me that Two Screens for Teachers would happily act as an intermediary, doing discounted bulk purchases of monitors (as opposed to matching individual donors with individual teachers, which was how it got started) and having regional leaders raise cash to cover the distribution to their local educators. There’s a sort of how-to section in this post showing how to estimate the costs and find local partners.

In the spirit of friendly competition, here’s hoping other cities, and the people in them looking for a way to give back tangibly in these hard times, will take up the gauntlet and get their educators this hugely helpful resource. You can learn more (or donate) at twoscreensforteachers.com.

Apple’s iPhone 12 Pro camera upgrades sharpen focus on serious photographers

Apple’s iPhone 12 Pro heaps improvements on the already formidable power of its camera system, adding features that will be prized by “serious” photographers — that is to say, the type who like to really mess around with their shots after the fact. Of course, the upgrades will be noticeable for us “fire and forget” shooters as well.

The most tangible change is the redesign of two of the three lens systems on the rear camera assembly. The Pro Max comes with a new, longer telephoto camera: a 65 mm-equivalent rather than the 52 mm found on previous phones. This tighter optical zoom will be prized by many; after all, 52 mm is still quite wide for portrait shots.

The improved wide-angle lens, common to all the new iPhone 12 models, now uses a seven-element assembly, improving light transmission and bringing its equivalent aperture to f/1.6. At this scale, practically every photon counts, especially for the revamped Night Mode.

A disassembled iPhone camera.

Image Credits: Apple

Perhaps a more consequential (and portentous) hardware change is the introduction of sensor-level image stabilization to the wide camera. This system, first used in DSLRs, detects motion and shifts the sensor a tiny bit to compensate for it, thousands of times per second. It’s a simpler, lighter-weight alternative to solutions that shift the lens itself.

Practically every flagship phone out there has some form of image stabilization, but implementations matter, so hands-on testing will determine whether this one is, in Apple’s words, “a game changer.” At any rate, it suggests that this is going to be a feature of the iPhone camera system going forward, and gains we see from it are here to stay; the presenter at today’s virtual event suggested a full stop of improvement, allowing two-second handheld exposures, but I’d take that with a grain of salt.
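For context, stops are a doubling scale: each stop of stabilization doubles the longest shutter speed you can hold steady. A bit of illustrative arithmetic (my numbers, not Apple’s, beyond the claimed one-stop gain) shows why the two-second figure deserves that grain of salt:

```python
# Illustrative stop arithmetic: one stop of stabilization doubles the
# usable handheld shutter time. Baseline of 1/30 s is an assumption.
import math

def max_handheld_shutter(base_seconds: float, stops: float) -> float:
    """Longest handheld exposure after gaining `stops` of stabilization."""
    return base_seconds * (2 ** stops)

print(max_handheld_shutter(1 / 30, 1))  # one extra stop: 1/30 s becomes 1/15 s
# Going all the way from 1/30 s to a 2-second exposure is a much
# bigger jump than a single stop:
print(round(math.log2(2 / (1 / 30))))  # 6 stops needed to reach 2 s from 1/30 s
```

So a single stop from sensor-shift stabilization cannot account for two-second handheld shots on its own; presumably multi-frame processing does much of the remaining work.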

An image showing the layers of a picture processed by an iPhone.

Image Credits: Apple

On the software side, the introduction of Apple ProRAW will be a godsend to photographers who use the iPhone either as a primary or secondary camera. When you take a photo, only a fraction of the information the sensor collects ends up on your screen — a huge amount of processing goes into removing redundant data, punching up colors, finding a good tone curve and so on. This produces a good-looking image at the cost of customizability; once you throw away that “extra” information, the colors and tone are restricted to a much narrower range of adjustment.
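A rough illustration of that information loss (a toy gamma curve standing in for Apple’s actual, far more sophisticated pipeline): once 12-bit sensor readings are tone-mapped and quantized to 8 bits, distinct readings can collapse to the same output value, and no editing slider can pull them apart again.

```python
# Toy model of processing loss: a 12-bit sensor value (0-4095) is
# tone-mapped with a simple gamma curve and quantized to 8 bits (0-255).
# This is a stand-in, not Apple's real processing.
def tone_map_to_8bit(raw12: int) -> int:
    normalized = raw12 / 4095.0
    curved = normalized ** (1 / 2.2)  # gamma curve: compresses highlights
    return round(curved * 255)

# Two distinct highlight readings from the sensor...
a, b = 4050, 4070
# ...end up as the same 8-bit pixel value after processing:
print(tone_map_to_8bit(a), tone_map_to_8bit(b))  # 254 254
# A RAW file keeps 4050 and 4070 distinct, so an editor can still
# separate that highlight detail later; the processed image cannot.
```

That collapsed detail is exactly what a RAW (or RAW-adjacent) format preserves for later adjustment.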

An iPhone showing the camera app shooting in RAW mode.

Image Credits: Apple

RAW files are the answer to this, as DSLR photographers know — they’re a minimally processed representation of what the sensor collects, letting the user do all the work to make the photo look good. Being able to shoot to a RAW format (or RAW-adjacent; we’ll know more with hands-on testing) frees up photographers who may have felt hemmed in by the iPhone’s default image processing. There were ways of getting around this before, but Apple has an advantage over third-party apps with its low-level access to the camera architecture, so this format will probably be the new standard.

This newfound elasticity at the image format level also enables the iPhone Pros to shoot in Dolby Vision, a grading standard usually applied in editing suites after you shoot your movie or commercial on a digital cinema camera. Shooting directly to it may be helpful to people planning to use the format but shooting with iPhones as B cameras. If cinematographer Emmanuel Lubezki approves, it’s good enough for pretty much everyone else on Earth. I sincerely doubt anyone will cut their work together on the phone, though.

These two advances, ProRAW and Dolby, suggest that Apple’s improved silicon has left a lot of wiggle room in the photography backend. As I’ve written before, this is the most important segment of the imaging workflow right now, and the company is probably coming up with all kinds of ways to take advantage of the power offered by the latest chips.

Though larger cameras and lenses still offer advantages that the iPhone can never hope to match, the reverse is true as well. And the closer the iPhone gets to offering cinema-like quality — even if it’s simulated — the more its advantages of portability and ease of use stand out. Apple has been ruthlessly targeting enthusiast photographers, who aren’t quite sure if they want to buy a DSLR or mirrorless system in addition to a phone with a nice camera. By sweetening the deal on the phone side, Apple surely rakes in more of these users every generation.

Of course the Pro phones come at a significant premium over the normal range of iPhone devices (the Max starts at $1,099), but these improvements aren’t impossible or really even difficult to bring to lower-end models — most of them will probably trickle down next year. Of course, by then a whole new set of features will have been cooked up for the Pro devices. For photographers, however, planned obsolescence is part of the lifestyle.