Mars orbiter spots silent, dust-covered Opportunity rover as dust storm clears

Mars rover Opportunity has been operating on the surface of the Red Planet since 2004, but a dust storm this summer may prove to be the mission’s toughest challenge. The enormous storm caked Opportunity in dust and blocked out the sun, its source of energy — and there’s no guarantee the batteries aren’t dead for good. But now that the skies have cleared, we at least have our first look at the workhorse rover from orbit since it went radio silent.

The Mars Reconnaissance Orbiter regularly captures fabulous imagery of the planet, and last week it happened to pass over Perseverance Valley, where Opportunity is currently stationary. In the image you can just make it out as a few pixels raised above the surface.

That valley isn’t the only place that was hit by the storm — this was no flurry but a full-blown planet-spanning tempest that lasted for months. It isn’t the first dust storm Opportunity has weathered by a long shot, but it was probably the worst.

The last we heard from the rover was on June 10, at which point the storm was getting so intense that Opportunity couldn’t charge its batteries any more and lowered itself into a hibernation state, warmed only by its plutonium-powered heaters — if they’re even working.

Once a day, Opportunity’s deeply embedded safety circuit checks if there’s any power in its battery or coming in via solar.

“Now that the sun is shining through the dust, it will start to charge its batteries,” explained Jim Watzin, director of the Mars Exploration Program at NASA. “And so some time in the coming weeks it will have sufficient power to wake up and place a call back to Earth. But we don’t know when that call will come.”

An Opportunity shadow-selfie from 2004, when Opportunity was comparatively young (and had “only” doubled its mission length).

That’s the hope, anyway. There is of course the possibility that the dust has obscured the solar cells too thickly, or some power fault during the storm led to the safety circuit not working… there’s no shortage of what-if scenarios. But space exploration is a unique combination of the deeply realistic with the deeply optimistic, and there’s no way Opportunity’s handlers aren’t going to give the little rover all the time it needs, within reason, to get back in touch.

The team has been sending extra signals out to spur a response from Opportunity and will continue to do so for the next few weeks, but even that won’t be the end of the line.

Thomas Zurbuchen, associate administrator at NASA’s Science Mission Directorate, assured the many Opportunity superfans out there that they plan to keep listening at least through January. And you can bet a few sentimental types will find a way to check now and then after that as well.

Should the worst happen and the dust storm appear to have disabled the rover for good, that would still be a hell of a run — Opportunity was intended to last for 90 days and has instead gone for 14 years. Nothing sad about that. But here’s hoping we hear from this long-lived explorer soon.

See the new iPhone’s ‘focus pixels’ up close

The new iPhones have excellent cameras, to be sure. But it’s always good to verify Apple’s breathless on-stage claims with first-hand reports. We have our own review of the phones and their photography systems, but teardowns provide the invaluable service of letting you see the biggest changes with your own eyes — augmented, of course, by a high-powered microscope.

We’ve already seen iFixit’s solid-as-always disassembly of the phone, but TechInsights gets a lot closer to the device’s components — including the improved camera of the iPhone XS and XS Max.

Although the optics of the new camera are as far as we can tell unchanged since the X, the sensor is a new one and is worth looking closely at.

Microphotography of the sensor die shows that Apple’s claims are borne out and then some. The sensor size has increased from 32.8 mm² to 40.6 mm² — a huge difference despite the small units. Every tiny bit counts at this scale. (For comparison, the Galaxy S9’s sensor is 45 mm², and the soon-to-be-replaced Pixel 2’s is 25 mm².)
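
For perspective, the quoted areas work out to roughly a 24 percent jump. A quick back-of-the-envelope check, using only the figures above:

```python
# Back-of-the-envelope comparison using only the sensor areas quoted above.
areas_mm2 = {"iPhone X": 32.8, "iPhone XS": 40.6, "Galaxy S9": 45.0, "Pixel 2": 25.0}

growth = areas_mm2["iPhone XS"] / areas_mm2["iPhone X"] - 1
print(f"XS sensor area is {growth:.0%} larger than the X's")  # ~24%
```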

The pixels themselves also, as advertised, grew from 1.22 microns (micrometers) across to 1.4 microns — which should help with image quality across the board. But there’s a subtler development that has quietly evolved ever since its introduction: the “focus pixels.”

That’s Apple’s brand name for phase detection autofocus (PDAF) points, found in plenty of other devices. The basic idea is that you mask off half a sub-pixel every once in a while (which I guess makes it a sub-sub-pixel), and by observing how light enters these half-covered detectors you can tell whether something is in focus or not.

Of course, you need a bunch of them to sense the image patterns with high fidelity, but you have to strike a balance: losing half a pixel may not sound like much, but if you do it a million times, that’s half a megapixel effectively down the drain. Wondering why all the PDAF points are green? Many camera sensors use an “RGBG” sub-pixel pattern, meaning there are two green sub-pixels for each red and blue one — it’s complicated why. But because there are twice as many green sub-pixels, the green channel is more robust to losing a bit of information.
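
As a toy illustration of that layout, here’s a sketch of a generic RGBG mosaic with a pair of masked points placed on green sites. This is my own illustration of the general technique, not Apple’s actual die layout:

```python
import numpy as np

# Build a generic RGBG (Bayer) tile: two green sub-pixels per 2x2 cell.
def bayer_tile(rows, cols):
    tile = np.empty((rows, cols), dtype="<U2")
    tile[0::2, 0::2] = "G"   # green
    tile[0::2, 1::2] = "R"   # red
    tile[1::2, 0::2] = "B"   # blue
    tile[1::2, 1::2] = "G"   # green again
    return tile

tile = bayer_tile(4, 8)
tile[0, 0] = "Gl"  # a PDAF point: a green sub-pixel with its left half masked
tile[2, 4] = "Gr"  # its partner, masked on the right half
print(tile)
```

Comparing how light falls on the left-masked and right-masked detectors tells the camera which way, and roughly how far, to move the lens.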


Apple introduced PDAF in the iPhone 6, but as you can see in TechInsights’ great diagram, the points are pretty scarce. There’s one for maybe every 64 sub-pixels, and not only that, they’re all masked off in the same orientation: either the left or right half gone.

The 6S and 7 Pluses saw the number double to one PDAF point per 32 sub-pixels. And in the 8 Plus, the number improves to one per 20 — but there’s another addition: now the phase detection masks are on the tops and bottoms of the sub-pixels as well. As you can imagine, doing phase detection in multiple directions is a more sophisticated proposition, but it could also significantly improve the accuracy of the process. Autofocus systems all have their weaknesses, and this may have addressed one Apple regretted in earlier iterations.

Which brings us to the XS (and Max, of course), in which the PDAF points are now one per 16 sub-pixels, the vertical phase detection points having increased in frequency so that they’re equal in number to the horizontal ones. Clearly the experiment paid off, and any consequent light loss has been mitigated or accounted for.
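
To get a feel for what those densities cost in raw signal, here’s a rough sketch. The 12-megapixel sensor resolution and the half-sub-pixel loss model are simplifying assumptions for illustration:

```python
# Rough cost of PDAF density, per the one-in-N figures above. Each PDAF
# point gives up roughly half a sub-pixel's worth of light/resolution.
SENSOR_SUBPIXELS = 12_000_000  # assumed 12 MP sensor, for illustration

for model, n in [("iPhone 6", 64), ("6S/7 Plus", 32), ("8 Plus", 20), ("XS", 16)]:
    pdaf_points = SENSOR_SUBPIXELS // n
    lost = pdaf_points * 0.5  # half of each PDAF sub-pixel is masked off
    print(f"{model:10s} 1 per {n:2d} -> {pdaf_points:,} points, "
          f"~{lost / 1e6:.2f} MP masked ({lost / SENSOR_SUBPIXELS:.1%})")
```

Even at the XS’s density, the loss under this simple model is on the order of 3 percent, which helps explain why Apple can keep increasing the count.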

I’m curious how the sub-pixel patterns of Samsung, Huawei, and Google phones compare, and I’m looking into it. But I wanted to highlight this little evolution. It’s a good example of the kind of change that’s hard to appreciate in simple number form — we’ve doubled this, or there are a million more of that — but makes sense when you see it in physical form.

Big cameras and big rivalries take center stage at Photokina

Photokina is underway in Cologne and the theme of the show is “large.” Unusually for an industry that is trending towards the compact, the cameras on stage at this show sport big sensors, big lenses, and big price tags. But though they may not be for the average shooter, these cameras are impressive pieces of hardware that hint at things to come for the industry as a whole.

The most exciting announcement is perhaps that from Panasonic, which surprised everyone with the S1 and S1R, a pair of not-quite-final full frame cameras that aim to steal a bit of the thunder from Canon and Nikon’s entries into the mirrorless full frame world.

Panasonic’s cameras have generally had impressive video performance, and these are no exception. They’ll shoot 4K at 60 FPS, which in a compact body like that shown is going to be extremely valuable to videographers. Meanwhile the S1R, with 47 megapixels to the S1’s 24, will be optimized for stills. Both will have dual card slots (which Canon and Nikon declined to add to their newest gear), weather sealing, and in-body image stabilization.

The timing and inclusion of so many desired features indicates either that Panasonic was clued in to what photographers wanted all along, or they waited for the other guys to move and then promised the things their competitors wouldn’t or couldn’t. Whatever the case, the S1 and S1R are sure to make a splash, whatever their prices.

Panasonic was also part of an announcement that may have larger long-term implications: a lens mount collaboration with Leica and Sigma aimed at maximum flexibility for the emerging mirrorless full-frame and medium format market. L-mount lenses will work on any of the group’s devices (including the S1 and S1R) and should help promote usage across the board.

Leica, for its part, announced the S3, a new version of its medium format S series that switches over to the L-mount system as well as bumping a few specs. No price yet but if you have to ask, you probably can’t afford it.

Sigma had no camera to show, but announced it would be taking its Foveon sensor tech to full frame and that upcoming bodies would be using the L mount as well.

This Fuji looks small here, but it’s no lightweight. It’s only small in comparison to previous medium format cameras.

Fujifilm made its own push on the medium format front with the new GFX 50R, which sticks a larger than full frame (but smaller than “traditional” medium format) sensor inside an impressively small body. That’s not to say it’s insubstantial: Fuji’s cameras are generally quite hefty, and the 50R is no exception, but it’s much smaller and lighter than its predecessor and, surprisingly, costs $2,000 less at $4,499 for the body.

The theme, as you can see, is big and expensive. But the subtext is that these cameras are not only capable of extraordinary imagery, but they don’t have to be enormous to do it. This combination of versatility with portability is one of the strengths of the latest generation of cameras, and clearly Fuji, Panasonic and Leica are eager to show that it extends to the pro-level, multi-thousand dollar bodies as well as the consumer and enthusiast lineup.

Happy 10th anniversary, Android

It’s been 10 years since Google took the wraps off the G1, the first Android phone. Since that time the OS has grown from buggy, nerdy iPhone alternative to arguably the most popular (or at least populous) computing platform in the world. But it sure as heck didn’t get there without hitting a few bumps along the road.

Join us for a brief retrospective on the last decade of Android devices: the good, the bad, and the Nexus Q.

HTC G1 (2008)

This is the one that started it all, and I have a soft spot in my heart for the old thing. Also known as the HTC Dream — this was back when we had an HTC, you see — the G1 was about as inauspicious a debut as you can imagine. Its full keyboard, trackball, slightly janky slide-up screen (crooked even in official photos), and considerable girth marked it from the outset as a phone only a real geek could love. Compared to the iPhone, it was like a poorly dressed whale.

But in time its half-baked software matured and its idiosyncrasies became apparent for the smart touches they were. To this day I occasionally long for a trackball or full keyboard, and while the G1 wasn’t pretty, it was tough as hell.

Moto Droid (2009)

Of course, most people didn’t give Android a second look until Moto came out with the Droid, a slicker, thinner device from the maker of the famed RAZR. In retrospect, the Droid wasn’t that much better or different from the G1, but it was thinner, had a better screen, and had the benefit of an enormous marketing push from Motorola and Verizon. (Disclosure: Verizon owns Oath, which owns TechCrunch, but this doesn’t affect our coverage in any way.)

For many, the Droid and its immediate descendants were the first Android phones they had — something new and interesting that blew the likes of Palm out of the water, but also happened to be a lot cheaper than an iPhone.

HTC/Google Nexus One (2010)

This was the fruit of the continued collaboration between Google and HTC, and the first phone Google branded and sold itself. The Nexus One was meant to be the slick, high-quality device that would finally compete toe-to-toe with the iPhone. It ditched the keyboard, got a cool new OLED screen, and had a lovely smooth design. Unfortunately it ran into two problems.

First, the Android ecosystem was beginning to get crowded. People had lots of choices and could pick up phones for cheap that would do the basics. Why lay the cash out for a fancy new one? And second, Apple would shortly release the iPhone 4, which — and I was an Android fanboy at the time — objectively blew the Nexus One and everything else out of the water. Apple had brought a gun to a knife fight.

HTC Evo 4G (2010)

Another HTC? Well, this was prime time for the now-defunct company. They were taking risks no one else would, and the Evo 4G was no exception. It was, for the time, huge: the iPhone had a 3.5-inch screen, and most Android devices weren’t much bigger, if they weren’t smaller.

The Evo 4G somehow survived our criticism (our alarm seems extremely quaint, given the size of the average phone today) and was a reasonably popular phone, but it is ultimately notable not for breaking sales records but for breaking the seal on the idea that a phone could be big and still make sense. (Honorable mention goes to the Droid X.)

Samsung Galaxy S (2010)

Samsung’s big debut made a hell of a splash, with custom versions of the phone appearing in the stores of practically every carrier, each with their own name and design: the AT&T Captivate, T-Mobile Vibrant, Verizon Fascinate, and Sprint Epic 4G. As if the Android lineup wasn’t confusing enough already at the time!

Though the S was a solid phone, it wasn’t without its flaws, and the iPhone 4 made for very tough competition. But strong sales reinforced Samsung’s commitment to the platform, and the Galaxy series is still going strong today.

Motorola Xoom (2011)

This was an era in which Android devices were responding to Apple, and not vice versa as we find today. So it’s no surprise that hot on the heels of the original iPad we found Google pushing a tablet-focused version of Android with its partner Motorola, which volunteered to be the guinea pig with its short-lived Xoom tablet.

Although there are still Android tablets on sale today, the Xoom represented a dead end in development — an attempt to carve a piece out of a market Apple had essentially invented and soon dominated. Android tablets from Motorola, HTC, Samsung and others were rarely anything more than adequate, though they sold well enough for a while. This illustrated the impossibility of “leading from behind” and prompted device makers to specialize rather than participate in a commodity hardware melee.

Amazon Kindle Fire (2011)

And who better to illustrate than Amazon? Its contribution to the Android world was the Fire series of tablets, which differentiated themselves from the rest by being extremely cheap and directly focused on consuming digital media. Just $200 at launch and far less later, the Fire devices catered to the regular Amazon customer whose kids were pestering them about getting a tablet on which to play Fruit Ninja or Angry Birds, but who didn’t want to shell out for an iPad.

Turns out this was a wise strategy, and of course one Amazon was uniquely positioned to execute, with its huge presence in online retail and the ability to subsidize the price out of the reach of competition. Fire tablets were never particularly good, but they were good enough, and for the price you paid, that was kind of a miracle.

Xperia Play (2011)

Sony has always had a hard time with Android. For years its Xperia line of phones was considered competent — I owned a few myself — and arguably industry-leading in the camera department. But no one bought them. And the one they bought the least of, at least proportional to the hype it got, has to be the Xperia Play. This thing was supposed to be a mobile gaming platform, and the idea of a slide-out gamepad is great — but the whole thing basically cratered.

What Sony illustrated was that you couldn’t just piggyback on the popularity and diversity of Android and launch whatever the hell you wanted. Phones didn’t sell themselves, and although the idea of playing PlayStation games on your phone might have sounded cool to a few nerds, it was never going to be enough to make it a million-seller. And increasingly that’s what phones needed to be.

Samsung Galaxy Note (2012)

As a sort of natural climax to the swelling phone trend, Samsung went all out with the first true “phablet,” and despite groans of protest the phone not only sold well but became a staple of the Galaxy series. In fact, it wouldn’t be long before Apple would follow on and produce a Plus-sized phone of its own.

The Note also represented a step towards using a phone for serious productivity, not just everyday smartphone stuff. It wasn’t entirely successful — Android just wasn’t ready to be highly productive — but in retrospect it was forward thinking of Samsung to make a go at it and begin to establish productivity as a core competence of the Galaxy series.

Google Nexus Q (2012)

This abortive effort by Google to spread Android out into a platform was part of a number of ill-considered choices at the time. No one — apparently at Google or anywhere else in the world — really knew what this thing was supposed to do. I still don’t. As we wrote at the time:

Here’s the problem with the Nexus Q: it’s a stunningly beautiful piece of hardware that’s being let down by the software that’s supposed to control it.

It was made — or rather, nearly made — in the USA, though, so it had that going for it.

HTC First — “The Facebook Phone” (2013)

The First got dealt a bad hand. The phone itself was a lovely piece of hardware with an understated design and bold colors that stuck out. But its default launcher, the doomed Facebook Home, was hopelessly bad.

How bad? Announced in April, discontinued in May. I remember visiting an AT&T store during that brief period, and even then the staff had been instructed in how to disable Facebook’s launcher and reveal the perfectly good phone beneath. The good news was that so few of these phones sold new that the entire stock started going for peanuts on eBay and the like. I bought two and used them for my early experiments in ROMs. No regrets.

HTC One/M8 (2014)

This was the beginning of the end for HTC, but their last few years saw them update their design language to something that actually rivaled Apple. The One and its successors were good phones, though HTC oversold the “Ultrapixel” camera, which turned out not to be that good, let alone iPhone-beating.

As Samsung increasingly dominated, Sony plugged away, and LG and Chinese companies increasingly entered the fray, HTC was under assault and even a solid phone series like the One couldn’t compete. 2014 was a transition period with old manufacturers dying out and the dominant ones taking over, eventually leading to the market we have today.

Google/LG Nexus 5X and Huawei Nexus 6P (2015)

This was the line that brought Google into the hardware race in earnest. After the bungled Nexus Q launch, Google needed to come out swinging, and it did so by marrying its more pedestrian hardware with software that truly zinged. Android 5 was a dream to use, Marshmallow had features we loved … and the phones became objects we adored.

We called the 6P “the crown jewel of Android devices.” This was when Google took its phones to the next level and never looked back.

Google Pixel (2016)

If the Nexus was, in earnest, the starting gun for Google’s entry into the hardware race, the Pixel line could be its victory lap. It’s an honest-to-god competitor to the iPhone.

Gone are the days when Google was playing catch-up to Apple on features; instead, Google’s a contender in its own right. The phone’s camera is amazing. The software works relatively seamlessly (bring back guest mode!), and the phone’s size and power are everything anyone could ask for. The sticker price, like that of Apple’s newest iPhones, is still a bit of a shock, but this phone is the teleological endpoint of the Android quest to rival its famous, fruitful contender.

Let’s see what the next ten years bring.

Japan’s Hayabusa 2 mission lands on the surface of a distant asteroid

The coolest mission you haven’t heard of just hit a major milestone: the Japanese Hayabusa 2 probe has reached its destination, the asteroid Ryugu, and just deployed a pair of landers to its surface. Soon it will touch down itself and bring a sample of Ryugu back to Earth! Are you kidding me? That’s amazing!

Hayabusa 2 is, as you might guess, a sequel to the original Hayabusa, which like this one was an asteroid sampling mission. So this whole process isn’t without precedent, though some of you may be surprised that asteroid sampling is essentially old hat now.

But as you might also guess, the second mission is more advanced than the first. Emboldened by and having learned much from the first mission, Hayabusa 2 packs more equipment and plans a much longer stay at its destination.

That destination is Ryugu, an asteroid in an orbit between Earth and Mars. Ryugu is designated “Type C,” meaning it is thought to have considerable amounts of water and organic materials, making it an exciting target for learning about the possibilities of extraterrestrial life and the history of this solar system (and perhaps others).

It launched in late 2014 and spent the next several years in a careful approach that would put it in a stable orbit above the asteroid; it finally arrived this summer. And this week it descended to within 55 meters (!) of the surface and dropped off two of the four landers it brought with it. Here’s what it looked like as it descended towards the asteroid:

These “MINERVA” landers (seen in render form up top) are intended to hop around the surface, with each leap lasting some 15 minutes due to the low gravity there. They’ll take pictures of the surface, test the temperature, and generally investigate wherever they land.
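
Those leisurely hops are a direct consequence of Ryugu’s feeble gravity. As a rough ballistic sanity check (the surface gravity figure below is an assumed ballpark, not from JAXA):

```python
# Rough sanity check on ~15-minute hops in Ryugu's tiny gravity.
g_ryugu = 1.2e-4          # m/s^2, assumed ballpark for a ~900 m asteroid
hop_duration = 15 * 60    # seconds aloft

# For a simple vertical ballistic hop, time aloft t = 2*v/g.
launch_speed = g_ryugu * hop_duration / 2
peak_height = launch_speed**2 / (2 * g_ryugu)

print(f"launch speed ~{launch_speed * 100:.1f} cm/s")  # ~5.4 cm/s
print(f"peak height  ~{peak_height:.1f} m")            # ~12 m
```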

Waiting for deployment are one more MINERVA and MASCOT, a newly developed lander that carries more scientific instruments but isn’t as mobile. It’ll look more closely at the magnetic qualities of the asteroid and also non-invasively check the minerals on the surface.

The big news will come next year, when Hayabusa 2 itself drops down to the surface with the “small carry-on impactor,” which it will use to create a crater and sample below the surface of Ryugu. This thing is great. It’s basically a giant bullet: a 2-kilogram copper plate mounted in front of an explosive, which when detonated fires the plate towards the target at about two kilometers per second, or somewhere around 4,400 miles per hour.
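
The quoted numbers are easy to verify. A quick sketch (the TNT comparison is my own addition, using the conventional 4.184 MJ per kilogram):

```python
# Checking the impactor figures quoted above: a 2 kg copper plate at ~2 km/s.
mass = 2.0         # kg
speed = 2_000.0    # m/s

mph = speed * 2.23694                      # ~4,474 mph, i.e. "around 4,400"
energy_mj = 0.5 * mass * speed**2 / 1e6    # kinetic energy in megajoules

print(f"{mph:,.0f} mph")
print(f"{energy_mj:.1f} MJ (roughly 1 kg of TNT at 4.184 MJ/kg)")
```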

Hayabusa 2’s impactor in a test, blowing through targets and hitting the rubble on the far side of the range.

The orbiter will not just observe surface changes from the impact, which will help illuminate the origins of other craters and help indicate the character of the surface, but it will also land and collect the “fresh” exposed substances.

All in all it’s a fabulously interesting mission and one that JAXA, Japan’s NASA equivalent, is uniquely qualified to run. You can bet that asteroid mining companies are watching Hayabusa 2 closely, since a few years from now they may be launching their own versions of it.

18 new details about Elon Musk’s redesigned, moon-bound ‘Big F*ing Rocket’

Although the spotlight at this week’s SpaceX event was squarely on Japanese billionaire Yusaku Maezawa — the first paying passenger for the company’s nascent space tourism business — Elon Musk also revealed a wealth of new details about the BFR and just how this enormous rocket and spacecraft will get to the moon and back.

In a lengthy (one might even say rambling, in the true Musk style) presentation, we were treated to cinematic and technical views of the planned rocket, which is already under construction and could take flight as early as a couple years from now — and Musk then candidly held forth on numerous topics in a lengthy Q&A period. As a result we learned quite a bit about this newly redesigned craft-in-progress.

Are you sitting comfortably? Good. Hope you like pictures of spaceships!

(Note: Quotes are transcribed directly from the video but may have been very slightly edited for clarity, such as the removal of “you know” and “like.”)

BFR is “ridiculously big”

Well, that’s not really news — it’s right there in the name. But now we know exactly how ridiculously big.

“The production design of BFR is different in some important ways from what I presented about a year ago,” Musk said, including its dimensions. The redesigned spacecraft (or BFS) will be 118 meters in length, or about 387 feet; just under half of that, 55 meters, will be the spacecraft itself. Inside you have about 1,100 cubic meters of payload space. That’s all around 15-20 percent larger than previously described. Its max payload is 100 metric tons to low Earth orbit.

“I mean, this is a ridiculously big rocket,” he added. The illustration on the wall, he pointed out, is life-size. As you can see it dwarfs the crowd and the other rockets.

What will fit in there? It depends on the mission, as you’ll see later.

No one knows what to call the fin-wing-things

Although Musk was clear on how the spacecraft worked, he was still a little foggy on nomenclature — not because he forgot, but because the parts don’t really correspond exactly with anything in flight right now. “There are two forward and two rear actuated wings, or fins,” he said. They don’t really fit the definition of either, he suggested — especially since they also act as legs.

The top fin “really is just a leg”

The fin on top of the craft gives it a very Space Shuttle-esque look, and it was natural that most would think that it’s a vertical stabilizer of some kind. But Musk shut that down quickly: “It doesn’t have any aerodynamic purpose — it really is just a leg.” He pointed out that during any atmospheric operations, the fin will be in the lee of the craft and won’t have any real effect.

“It looks the same as the other ones for purposes of symmetry,” he explained.

“If in doubt, go with Tintin”

It was pointed out when the new design was teased last week that it bore some resemblance to the ship Tintin (and Captain Haddock, and Professor Calculus, et al.) pilot to the moon in the classic comics. Turns out this isn’t a coincidence.

“The iteration before this decoupled the landing legs from the control surfaces — it basically had 6 legs. I actually didn’t like the aesthetics of that design,” Musk said. “I love the Tintin rocket design, so I kind of wanted to bias it toward that. So now we have the three large legs, with two of them actuating as body flaps or large moving wings.”

“I think this design is probably on par with the other one. It might be better. Yeah, if in doubt, go with Tintin,” he said.

BFR is “more like a skydiver than an aircraft”

An interplanetary spacecraft doesn’t have the same design restrictions as a passenger jet, so it may fly completely differently.

“You want four control surfaces to be able to control the vehicle through a wide range of atmospheric densities and velocities,” Musk explained, referring to the four fin-wing-flaps. “The way it behaves is a bit more like a skydiver than an aircraft. If you apply normal intuition it will not make sense.”

Actually, if you imagine the craft as a person falling to earth, and that person controlling their orientation by moving their arms and legs — their built-in flaps — it does seem rather intuitive.

Reentry will “look really epic”

“Almost the entire time it is reentering, it’s just trying to brake, while distributing that force over the most area possible — it uses the entire body to brake,” Musk said. This is another point of similarity with the Space Shuttle, which used its heat-resistant bottom surface as a huge air brake.

“This will look really epic in person,” he enthused.

Of course, that only applies when there’s an atmosphere. “Obviously if you’re landing on the moon you don’t need any aerodynamic surfaces at all, because there’s no air.”

The seven-engine configuration leaves a huge safety margin

Astute observers like yours truly noticed that the number and arrangement of the craft’s Raptor engines had changed in the picture tweeted last week. Musk complimented the questioner (and by extension, me) for noting this and explained.

“In order to minimize the development risk and costs, we decided to harmonize the engine between the booster and the ship,” he said. In other words, it made more sense and cost less to put a similar type of Raptor engine on both the craft itself and the rocket that would take it to space. Previously the ship had been planned to have four large Raptor engines and two smaller sea-level engines for landing purposes. The trade-off, obviously, is that it will be a bit more costly to build the ship, but the benefits are manifold.

“Having the engines in that configuration, with seven engines, means it’s definitely capable of engine out at any time, including two engine out in almost all circumstances,” he said, referring to the possibility of an engine cutting out during flight. “In fact in some cases you could lose up to four engines and still be fine. It only needs three engines for landing.”

The booster, of course, will have considerably more engines — 31 to start, and as many as 42 down the road. (The number was not chosen arbitrarily, as you might guess.)

It has a deployable solar array

In the video explaining the mission, the BFS deploys a set of what appear to be solar panels from near the engines. How exactly this would work wasn’t explained at all — and in the images you can see there really isn’t a place for them to retract into. So this is likely only in the concept phase right now.

This isn’t exactly a surprise — solar is by far the most practical way to replenish small to medium amounts of electricity used for things like lights and life support, as demonstrated by most spacecraft and of course the International Space Station.

But until now we haven’t seen how those solar panels would be deployed. The fan structure at the rear would keep the panels out of view of passengers and pilots, and the single-stem design would allow them to be tilted and rotated to capture the maximum amount of sunlight.

The interior will depend on the mission

Although everyone is no doubt eager to see what the inside of the spaceship looks like, Musk cautioned that they are still at a concept stage there. He did say that they have learned a lot from the Crew Dragon capsule, however, and that there will be plenty of shared parts and designs.

“Depending on the type of mission, you’d have a different configuration,” he explained. “If you were going to Mars that’s at least a three-month journey. You want to have a cabin, like a common area for recreation, some sort of meeting rooms… because you’ll be in this thing for months.”

Water and air in a months-long journey would have to be a closed-loop system, he noted, though he didn’t give any indication how that would work.

But it will include “the most fun you can possibly have in zero G”

“Now if you’re going, say, to the moon or around the moon, you have a several-day journey,” Musk continued. But then he mused on what the spare space would be used for. “What is the most fun you can have in zero G? That for sure is a key thing. Fun is underrated. Whatever is the most enjoyable thing you could possibly do — we’ll do that.”

Assuming the passengers have gotten over their space sickness, of course.

BFR will cost “roughly $5 billion” to develop

Musk was reluctant to put any hard numbers out, given how early SpaceX is in development, but said: “If I were to guess it would be something like 5 billion dollars, which would be really quite a small amount for a project of this nature.”

He’s not wrong. Just for a sense of scale, the Space Shuttle program would probably have cost nearly $200 billion in today’s dollars. The F-35 program will end up costing something like $400 billion. These things aren’t directly comparable, of course, but they do give you a sense of how much money is involved in this type of thing.

Funding is still a semi-open question

Where exactly that money will come from isn’t totally clear, but Musk did point out that SpaceX does have reliable business coming from its International Space Station resupply missions and commercial launches. And next year, he pointed out, crewed launches could bring another source of income to the mix.

That’s in addition to Starlink, the satellite internet service in the offing. That’s still in tests, of course (and Tintin-related, as well).

Yusaku Maezawa’s ticket price is a “non-trivial” contribution

Although both men declined to elaborate on the actual price Maezawa paid, Musk did indicate it was considerable — and of course, he’s also essentially paying for the artists he plans to bring with him.

“He’s made a significant deposit on the price, which is a significant price and will actually have a material effect on paying for the cost of developing the BFR,” Musk said. “It’s a non-trivial amount.”

But it’s already under construction

“We’re already building it. We’ve built the first cylinder section,” Musk said, showing an image of that part, 9 meters in diameter. “We’ll build the domes and the engine section soon.”

Test flights could begin as early as next year

“We’ll start doing hopper flights next year,” Musk said. “Depending on how those go we’ll do high-altitude, high-velocity flights in 2020, then start doing tests of the booster. If things go well we could be doing the first orbital flights in about two to three years.”

This is the most optimistic scenario, he later clarified.

“We’re definitely not sure. But you have to set a date that’s kind of like the ‘things go right’ date.”

The circumlunar flight could “skim the surface” of the moon

The flight plan for the trip around the moon is relatively straightforward, as lunar missions go. Launch, orbit Earth, thrust to zoom off towards the moon, use the moon’s gravity to boomerang back, and then land. But the exact path is to be determined, and Musk has ideas.

“I think it would be pretty exciting to like skim the surface,” he said, attempting to illustrate the orbit with gestures. “Go real close, then zoom out far, then come back around. In the diagram it looks kinda symmetric but I think you’d want to go real close.”

As the moon has no atmosphere, there’s no question of the craft getting slowed down or having its path altered by getting closer to it. The orbital dynamics would change, of course, but the moon’s trajectory is nothing if not well understood, so it’s just a question of how safe the mission planners want to play it, regardless of Musk’s fantasies.

“This is pretty off the cuff,” he admitted.

“This is a dangerous mission”

There will be plenty of tests before Maezawa and his artist friends take off.

“We’ll do many such test flights before putting any people on board. I’m not sure if we will actually test a flight around the moon or not, but probably we will try to do that without people before sending people.”

“That would be wise,” he concluded, seeming to make a decision then and there. But spaceflight is inherently risky, and he did not attempt to hide that fact.

“This is a dangerous mission,” he said. “We’ll leave a lot of extra room for extra food and oxygen, food and water, spare parts… you know, just in case.”

Maezawa, who was sitting next to him on stage, did not seem perturbed by this — he was certain to have assessed the risks before buying the ticket. In answer to a related question, he did indicate that astronaut-style training was in the plans, but that the regimen had not yet been determined.

It probably won’t even be called the BFR

There’s no getting around the fact that BFR stands for “Big Fucking Rocket,” or at least that’s what Musk and others have implied while coyly avoiding confirming. This juvenile naming scheme is in line with Tesla’s. Perhaps cognizant of posterity and the dignity of mankind’s expansion into space, Musk suggested this might not be permanent.

“We should probably think of a different name,” he admitted. “This was kind of a code name and it kind of stuck.”

Again, if it officially just stood for “Big Falcon Rocket,” this probably wouldn’t be an issue. But regardless, Musk’s trademark geeky sense of humor remained.

“The only thing is, we’d like to name the first ship that goes to Mars after my favorite spaceship, from Douglas Adams — the Heart of Gold, from Hitchhiker’s Guide to the Galaxy.”

As far off as the moon mission is, the Mars mission is even further, and Musk changes his mind on nearly everything — but this is one thing I can sense he’s committed to.

Sen. Harris tells federal agencies to get serious about facial recognition risks

Facial recognition technology presents myriad opportunities as well as risks, but it seems like the government tends to only consider the former when deploying it for law enforcement and clerical purposes. Senator Kamala Harris (D-CA) has written the Federal Bureau of Investigation, Federal Trade Commission, and Equal Employment Opportunity Commission telling them they need to get with the program and face up to the very real biases and risks attending the controversial tech.

In three letters provided to TechCrunch (and embedded at the bottom of this post), Sen. Harris, along with several other notable legislators, pointed out recent research showing how facial recognition can produce or reinforce bias, or otherwise misfire. This must be considered and accommodated in the rules, guidance, and applications of federal agencies.

Other lawmakers and authorities have sent letters to various companies and CEOs or held hearings, but representatives for Sen. Harris explained that there is also a need to advance the issue within the government itself.

Sen. Harris at a recent hearing.

Attention paid to agencies like the FTC and EEOC that are “responsible for enforcing fairness” is “a signal to companies that the cop on the beat is paying attention, and an indirect signal that they need to be paying attention too. What we’re interested in is the fairness outcome rather than one particular company’s practices.”

If this research and the possibility of poorly controlled AI systems aren’t considered in the creation of rules and laws, or in the applications and deployments of the technology, serious harm could ensue. Not just positive harm, such as the misidentification of a suspect in a crime, but negative harm, such as calcifying biases in data and business practices in algorithmic form and depriving those affected by the biases of employment or services.

“While some have expressed hope that facial analysis can help reduce human biases, a growing body of evidence indicates that it may actually amplify those biases,” the letter to the EEOC reads.

Here Sen. Harris, joined by Senators Patty Murray (D-WA) and Elizabeth Warren (D-MA), expresses concern over the growing automation of the employment process. Recruitment is a complex process and AI-based tools are being brought in at every stage, so this is not a theoretical problem. As the letter reads:

Suppose, for example, that an African American woman seeks a job at a company that uses facial analysis to assess how well a candidate’s mannerisms are similar to those of its top managers.

First, the technology may interpret her mannerisms less accurately than a white male candidate.

Second, if the company’s top managers are homogeneous, e.g., white and male, the very characteristics being sought may have nothing to do with job performance but are instead artifacts of belonging to this group. She may be as qualified for the job as a white male candidate, but facial analysis may not rate her as highly because her cues naturally differ.

Third, if a particular history of biased promotions led to homogeneity in top managers, then the facial recognition analysis technology could encode and then hide this bias behind a scientific veneer of objectivity.

If that sounds like a fantasy use of facial recognition, you probably haven’t been paying close enough attention. Besides, even if it’s still rare, it makes sense to consider these things before they become widespread problems, right? The idea is to identify issues inherent to the technology.

“We request that the EEOC develop guidelines for employers on the fair use of facial analysis technologies and how this technology may violate anti-discrimination law,” the Senators ask.

A set of questions also follows (as it does in each of the letters): have there been any complaints along these lines, or are there any obvious problems with the tech under current laws? If facial technology were to become mainstream, how should it be tested, and how would the EEOC validate that testing? Sen. Harris and the others request a timeline of how the Commission plans to look into this by September 28.

Next on the list is the FTC. This agency is tasked with identifying and punishing unfair and deceptive practices in commerce and advertising; Sen. Harris asserts that the purveyors of facial recognition technology may be considered in violation of FTC rules if they fail to test or account for serious biases in their systems.

“Developers rarely if ever test and then disclose biases in their technology,” the letter reads. “Without information about the biases in a technology or the legal and ethical risks attendant to using it, good faith users may be unintentionally and unfairly engaging in discrimination. Moreover, failure to disclose these biases to purchasers may be deceptive under the FTC Act.”

Another example is offered:

Consider, for example, a situation in which an African American female in a retail store is misidentified as a shoplifter by a biased facial recognition technology and is falsely arrested based on this information. Such a false arrest can cause trauma and substantially injure her future housing, employment, credit, and other opportunities.

Or, consider a scenario in which a young man with a dark complexion is unable to withdraw money from his own bank account because his bank’s ATM uses facial recognition technology that does not identify him as their customer.

Again, this is very far from fantasy. On stage at Disrupt just a couple weeks ago Chris Atageka of UCOT and Timnit Gebru from Microsoft Research discussed several very real problems faced by people of color interacting with AI-powered devices and processes.

The FTC actually had a workshop on the topic back in 2012. But, amazing as it sounds, that workshop did not consider potential biases on the basis of race, gender, age, or other metrics. The agency certainly deserves credit for addressing the issue early, but clearly the industry and the topic have advanced, and it is in the interest of the agency and the people it serves to catch up.

The letter ends with questions and a deadline rather like those for the EEOC: have there been any complaints? How will they assess and address potential biases? Will they issue “a set of best practices on the lawful, fair, and transparent use of facial analysis?” The letter is cosigned by Senators Richard Blumenthal (D-CT), Cory Booker (D-NJ), and Ron Wyden (D-OR).

Last is the FBI, over which Sen. Harris has something of an advantage: the Government Accountability Office issued a report on the very topic of facial recognition tech that had concrete recommendations for the Bureau to implement. What Harris wants to know is, what have they done about these, if anything?

“Although the GAO made its recommendations to the FBI over two years ago, there is no evidence that the agency has acted on those recommendations,” the letter reads.

The GAO had three major recommendations. Briefly summarized: do some serious testing of the Next Generation Identification-Interstate Photo System (NGI-IPS) to make sure it does what they think it does, follow that with annual testing to make sure it’s meeting needs and operating as intended, and audit external facial recognition programs for accuracy as well.

“We are also eager to ensure that the FBI responds to the latest research, particularly research that confirms that face recognition technology underperforms when analyzing the faces of women and African Americans,” the letter continues.

The list of questions here is largely in line with the GAO’s recommendations, merely asking the FBI to indicate whether and how it has complied with them. Has it tested NGI-IPS for accuracy in realistic conditions? Has it tested for performance across races, skin tones, genders, and ages? If not, why not, and when will it? And in the meantime, how can it justify usage of a system that hasn’t been adequately tested, and in fact performs poorest on the targets it is most frequently loosed upon?

The FBI letter, which has a deadline for response of October 1, is cosigned by Sen. Booker and Cedric Richmond, Chair of the Congressional Black Caucus.

These letters are just a part of what certainly ought to be a government-wide plan to inspect and understand new technology and how it is being integrated with existing systems and agencies. The federal government moves slowly, even at its best, and if it is to avoid or help mitigate real harm resulting from technologies that would otherwise go unregulated it must start early and update often.


You can find the letters in full below.

EEOC:

SenHarris – EEOC Facial Rec… (Scribd)

FTC:

SenHarris – FTC Facial Reco… (Scribd)

FBI:

SenHarris – FBI Facial Reco… (Scribd)

NASA’s climate-monitoring space laser is the last to ride to space on a Delta II rocket

This weekend, NASA is launching a new high-tech satellite to monitor the planet’s glacier and sea ice levels — with space lasers, naturally. ICESat-2 will be a huge boon for climatologists, and it’s also a bittersweet occasion: it will be the final launch aboard the trusty Delta II rocket, which has been putting birds in the air for nearly 30 years.

Takeoff is set for 5:46 AM Pacific Time Saturday morning, so you’ll have to get up early if you want to catch it. You can watch the launch live here, with NASA coverage starting about half an hour before.

Keeping track of the Earth’s ice levels is more important than ever; with climate change causing widespread havoc, precise monitoring of major features like the Antarctic ice sheet could help climatologists predict and understand global weather patterns.

Like Aeolus, which launched in July, ICESat-2 is a spacecraft with a single major instrument, not a “Christmas tree” of sensors and antennas. And like Aeolus, ICESat-2 carries a giant laser. But while the first was launched to watch the movement of the air in-between it and the ground, the second must monitor the ground through that moving air.

It does so by using an industrial-size, hyper-precise altimeter: a single, powerful green laser split into six beams — three pairs of two, really, arranged to pass over the landscape in a predictable way.

But the real magic is how those lasers are detected. Next to the laser is a special telescope that watches for the beams’ reflections. Incredibly, it only collects “about a dozen” photons from each laser pulse, and times their arrival down to a billionth of a second. And it does this 10,000 times per second, which at its speed means a pulse is bouncing off the Earth every 2.3 feet or so.

As if that wasn’t impressive enough, its altitude readings are accurate down to the inch. And with multiple readings over time, it should be able to tell whether an ice sheet has risen or fallen on the order of millimeters.
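
Those figures hang together. As a quick sanity check (the ground speed below is an assumed typical value for a satellite in low Earth orbit, not from NASA’s materials):

```python
# Sanity-checking the laser altimeter figures quoted above.
ground_speed = 6_900.0    # m/s, assumed typical LEO ground-track speed
pulse_rate = 10_000       # pulses per second
c = 299_792_458.0         # speed of light, m/s

spacing_ft = ground_speed / pulse_rate * 3.28084
print(f"one pulse every {spacing_ft:.1f} ft of ground track")  # ~2.3 ft

# One inch of surface height changes the photon's round trip by two inches:
inch = 0.0254  # meters
print(f"1 inch of height = {2 * inch / c * 1e9:.2f} ns of round trip")  # ~0.17 ns
```

So inch-level accuracy really does demand timing photons to a small fraction of a nanosecond, as described.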

So if you’re traveling in the Antarctic and you drop a pencil, be sure to pick it up or it might throw things off.

Of course, it’s not just for ice; the same space laser will also return the exact heights of buildings, tree canopies and other features. It’s a pity there aren’t more of these satellites — they sound rather useful.

Although ICESat-2 itself is notable and interesting, this launch is significant for a second reason: this will be the final launch atop a Delta II rocket. Rocketry standby United Launch Alliance is in charge of this one, as it has been for so many others.

Introduced in 1989, the Delta II has launched everything from communication satellites to Mars orbiters and landers; Spirit and Opportunity both left the Earth on Delta IIs. All told, more than 150 launches have been made on these rockets, and if Saturday’s launch goes as planned, it will be the 100th successful Delta II launch in a row. That’s a hell of a record. (To be clear, that doesn’t mean 50 failed; just a handful of failures over the decades interrupted the launch vehicle’s record.)

A Delta II launching for the Aquarius mission in 2011

One charming yet perhaps daunting idiosyncrasy of the system is that someone somewhere has to literally click a button to initiate takeoff — no automation for this thing; it’s someone’s job to hit the gas, so they better look sharp.

The ULA’s Bill Cullen told Jason Davis of the Planetary Society, for his epitaph on the rocket:

Yes, the Delta II engine start command is initiated by a console operator. The launch control system is 25 years old, and at the time this used a ‘person in the loop’ control which was preferred compared to the complexities of a fault-tolerant computer system.

So why are we leaving this tried and true rocket behind? It’s expensive and not particularly big. With a payload capacity of 4 tons and a cost (for this mission anyway) approaching a hundred million dollars, it’s just not a good value any more. Not only that, but Launch Complex 2 at Vandenberg Air Force Base is the only place left on Earth with the infrastructure to launch it, which significantly limits the orbits and opportunities for prospective missions. After ICESat-2’s launch, even that will be torn down — though hopefully they’ll keep the pieces somewhere, for posterity.

Although this is the last Delta II to launch, there is one more rocket left without a mission, the last, as it were, on the lot. Plans are not solid yet, but it’s a good bet this classic rocket will end up in a museum somewhere — perhaps standing upright with others at Kennedy Space Center.

California is ‘launching our own damn satellite’ to track pollution, with help from Planet

California plans to launch a satellite to monitor pollution in the state and contribute to climate science, Governor Jerry Brown announced today. The state is partnering with satellite imagery purveyor Planet to create a custom craft to “pinpoint – and stop – destructive emissions with unprecedented precision, on a scale that’s never been done before.”

Governor Brown made the announcement in the closing remarks of the Global Climate Action Summit in San Francisco, echoing a pledge made two years ago to scientists at the American Geophysical Union’s 2016 meeting.

“With science still under attack and the climate threat growing, we’re launching our own damn satellite,” Brown said today.

Planet, which has launched hundreds of satellites in the last few years in order to provide near-real-time imagery of practically anywhere on Earth, will develop and operate the satellite. The plan is to equip it with sensors that can detect pollutants at their point sources, be they artificial or natural. That kind of direct observation enables direct action.

Technical details of the satellite are to be announced as the project solidifies. We can probably expect something like a 6U CubeSat loaded with instruments focused on detecting certain gases and particulates. An orbit with the satellite passing across the whole state along its north/south axis seems most likely; a single craft sitting in one place probably wouldn’t offer adequate coverage. That said, multiple satellites are also a stated possibility.

“These satellite technologies are part of a new era of environmental innovation that is supercharging our ability to solve problems,” said Fred Krupp, president of the Environmental Defense Fund. “They won’t cut emissions by themselves, but they will make invisible pollution visible and generate the transparent, actionable, data we need to protect our health, our environment and our economies.”

The EDF is launching its own satellite to that end (MethaneSAT), but will also be collaborating with California in the creation of a shared Climate Data Partnership to make sure the data from these platforms is widely accessible.

More partners are expected to join up now that the endeavor is public, though none were named in the press release or in response to my questions on the topic to Planet. The funding, too, is something of an open question.

The effort is still a ways off from launch — these things take time — but Planet has certainly proven capable of designing and launching on a relatively short timeframe. In fact, it just opened up a brand new facility in San Francisco dedicated to pumping out new satellites.

Senator claps back after Ajit Pai calls California’s net neutrality bill ‘radical’ and ‘illegal’

FCC Chairman Ajit Pai has provoked a biting senatorial response from California after calling the “nanny state’s” new net neutrality legislation “radical,” “anti-consumer,” “illegal” and “burdensome.” California State Senator Scott Wiener (D-San Francisco), in response, said Pai has “abdicated his responsibility to ensure an open internet” and that the FCC lacks the authority to intervene.

The political flame war was kicked off this morning in Pai’s remarks at the Maine Heritage Policy Center, a free market think tank. You can read them in full here, but I’ve quoted the relevant part below:

Of course, those who demand greater government control of the Internet haven’t given up. Their latest tactic is pushing state governments to regulate the Internet. The most egregious example of this comes from California. Last month, the California state legislature passed a radical, anti-consumer Internet regulation bill that would impose restrictions even more burdensome than those adopted by the FCC in 2015.

If this law is signed by the Governor, what would it do? Among other things, it would prevent Californian consumers from buying many free-data plans. These plans allow consumers to stream video, music, and the like exempt from any data limits. They have proven enormously popular in the marketplace, especially among lower-income Americans. But nanny-state California legislators apparently want to ban their constituents from having this choice. They have met the enemy, and it is free data.

The broader problem is that California’s micromanagement poses a risk to the rest of the country. After all, broadband is an interstate service; Internet traffic doesn’t recognize state lines. It follows that only the federal government can set regulatory policy in this area. For if individual states like California regulate the Internet, this will directly impact citizens in other states.

Among other reasons, this is why efforts like California’s are illegal.

The bogeyman of banning zero-rating plans has been raised again and again, but everyone should understand now that the whole thing is a sham — just another ploy by telecoms to parcel out data the way they choose.

The legal question is far from decided, but Pai has been crowing about a recent court ruling for a week or so now, despite the fact that it has very little to do with net neutrality. Ars Technica went into detail on this ruling; the takeaway is that while it is possible that the FCC could preempt state law on information services in some cases, it’s not clear at all that it has any authority whatsoever to do so with broadband services. Ironically, that’s because Pai’s FCC drastically reduced the FCC’s jurisdiction with its reclassification of broadband in Restoring Internet Freedom.

At any rate, more consequential legal challenges and questions are still in the works, so Pai’s jubilation is somewhat premature.

“The Internet should be run by engineers, entrepreneurs, and technologists, not lawyers, bureaucrats, and politicians,” he concluded. Odd then that those very engineers, entrepreneurs and technologists almost unanimously oppose his policy, while he — literally seconds earlier — justified that policy via the world of lawyers, bureaucrats and politicians.

Senator Wiener was quick to issue a correction to the Chairman’s remarks. In an official statement, he explained that “Unlike Pai’s FCC, California isn’t run by the big telecom and cable companies.” The statement continued:

SB 822 is necessary and legal because Chairman Pai abdicated his responsibility to ensure an open internet. Since the FCC says it no longer has any authority to protect an open internet, it’s also the case that the FCC lacks the legal power to preempt states from protecting their residents and economy.

When Verizon was caught throttling the data connection of a wildfire fighting crew in California, Chairman Pai said nothing and did nothing. That silence says far more than his words today.

SB 822 is supported by a broad coalition of consumer groups, groups advocating for low income people, small and mid-size technology companies, labor unions, and President Obama’s FCC chairman, Tom Wheeler. I’ll take that support over Ajit Pai any day of the week.

The law in question has been approved by the state legislature, but has yet to be signed by Governor Jerry Brown, who has another two weeks to consider it.