This robotics museum in Korea will construct itself (in theory)

The planned Robot Science Museum in Seoul will have a humdinger of a first exhibition: its own robotic construction. It’s very much a publicity stunt, though a fun one — but who knows? Perhaps robots putting buildings together won’t be so uncommon in the next few years, in which case Korea will just be an early adopter.

The idea for robotic construction comes from Melike Altinisik Architects, the Turkish firm that won a competition to design the museum. Their proposal took the form of an egg-like shape covered in panels that can be lifted into place by robotic arms.

“From design, manufacturing to construction and services robots will be in charge,” wrote the firm in the announcement that they had won the competition. Now, let’s be honest: this is obviously an exaggeration. The building has clearly been designed by the talented humans at MAA, albeit with a great deal of help from computers. But it has been designed with robots in mind, and they will be integral to its creation.

The parts will all be designed digitally, and robots will “mold, assemble, weld and polish” the plates for the outside, according to World Architecture, after which of course they will also be put in place by robots. The base and surrounds will be produced by an immense 3D printer laying down concrete.

So while much of the project will unfortunately have to be done by people, it will certainly serve as a demonstration of those processes that can be accomplished by robots and computers.

Construction is set to begin in 2020, with the building opening its (likely human-installed) doors in 2022 as a branch of the Seoul Metropolitan Museum. My instincts tell me, though, that this kind of unprecedented combination of processes is more likely than not to produce significant delays. Here’s hoping the robots cooperate.

This custom ‘hyperfisheye’ lens can see behind itself

If you’re doing ordinary photography and videography, there’s rarely any need to go beyond extreme wide-angle lenses — but why be ordinary? This absurd custom fisheye lens has a 270-degree field of view, meaning it can see behind the camera it’s mounted on — or rather, the camera mounted on it.

It’s certainly a bit of fun from Lens Rentals, the outfit that put it together, but it’s definitely real and might even be useful. Their detailed documentation of how they put it together piece by piece is fascinating (at least I found it so) and gives an idea of how complex lens assemblies can be. Of course, this one’s not exactly standard, but still.

The C-4 Optics 4.9mm f/3.5 Hyperfisheye Prototype, as they call it (hereafter “the lens”), first appeared as what seemed at the time to be an April Fools’ joke, at best half-serious. “The Flying Saucer,” as they called it, AKA the Light Bender, AKA the Mother of all Fisheye Lenses, included a vaguely plausible optical diagram showing the path of light traveling from the far edge of its view, from about 45 degrees rearward of the camera.
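That “45 degrees rearward” figure falls straight out of the geometry (a back-of-the-envelope check, not Lens Rentals’ own math):

```python
# A 270-degree field of view means the lens sees 135 degrees to either
# side of the optical axis. Anything past 90 degrees is behind the lens.
fov = 270
half_fov = fov / 2        # degrees from the optical axis to the edge of view
rearward = half_fov - 90  # degrees past "straight sideways," i.e. behind the lens

print(rearward)  # -> 45.0
```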

Sure, why not? Because it’s ridiculous, that’s why not!

But the beautiful bastards did it anyway, and the results are as ridiculous as you’d imagine. There are lenses out there that produce past-180-degree images, but 270 is really quite beyond them. Here’s what the output looks like, raw on top and corrected below:

Naturally you wouldn’t want this for snapshots. It would be for very specific shots in high resolution that you would massage to get back to something resembling an ordinary field of view, or somehow incorporate into a VR or AR experience.

The camera has to mount in between the legs that support the lens, which is probably a rather fiddly process to undertake. The enormous lens cap, or “lens helmet,” doubles as an upside-down stand to ease the task.

It’s a fun project and adds one more weird thing (two, technically, since they built a second) to the world, so I support it wholeheartedly. Unfortunately because it’s a “passion project” it won’t be available for rent, so you’ll be stuck with something like the Nikon 6mm f/2.8, with its paltry 220 degree field of view. What’s even the point?

These hyper-efficient solar panels could actually live on your roof soon

The clean energy boffins in their labs are always upping the theoretical limit on how much power you can get out of sunshine, but us plebes actually installing solar cells are stuck with years-old tech that’s not half as good as what they’re seeing. This new design from Insolight could be the one that changes all that.

Insolight is a spinoff from the École Polytechnique Fédérale de Lausanne, where they’ve been working on this new approach for a few years — and it’s almost ready to hit your roof.

Usually solar cells collect sunlight on their entire surface, converting it to electricity at perhaps 15-19 percent efficiency — meaning more than 80 percent of the energy is lost in the process. There are more efficient cells out there, but they’re generally expensive and special-purpose, or use some exotic material.

One place people tend to spare no expense, however, is in space. Solar cells on many satellites are more efficient but, predictably, not cheap. But that’s not a problem if you use only a tiny number of them and concentrate the sunlight on those; that’s the Insolight insight.

Small but very high-efficiency cells are laid down on a grid, and above that is placed a honeycomb-like lens array that takes light and bends it into a narrow beam concentrated only on the tiny cells. As the sun moves, the cell layer moves ever so slightly, keeping the beams on target. They’ve achieved as high as 37 percent efficiency in tests, and 30 percent in consumer-oriented designs. That means half again to twice the power from the same area as ordinary panels.
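Those multipliers are easy to sanity-check (taking 19 percent, the top of the typical range above, as the baseline; that choice of baseline is my assumption, not Insolight’s):

```python
# Rough sanity check of the claimed gains over a conventional panel.
baseline = 0.19  # assumed: top of the typical 15-19% range for ordinary cells
consumer = 0.30  # Insolight's consumer-oriented designs
lab_best = 0.37  # best efficiency achieved in tests

print(f"consumer gain: {consumer / baseline:.2f}x")  # ~1.58x, i.e. "half again"
print(f"lab gain:      {lab_best / baseline:.2f}x")  # ~1.95x, i.e. nearly "twice"
```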

Certainly this adds a layer or two of complexity to the current mass-manufactured arrays that are “good enough” but far from state of the art. But the resulting panels aren’t much different in size or shape, and don’t require special placement or hardware, such as a concentrator or special platform. And the system passed a recently completed pilot test on an EPFL roof with flying colors.

“Our panels were hooked up to the grid and monitored continually. They kept working without a hitch through heat waves, storms and winter weather,” said Mathieu Ackermann, the company’s CTO, in an EPFL news release. “This hybrid approach is particularly effective when it’s cloudy and the sunlight is less concentrated, since it can keep generating power even under diffuse light rays.”

The company is now in talks with solar panel manufacturers, whom they are no doubt trying to convince that it’s not that hard to integrate this tech with their existing manufacturing lines — “a few additional steps during the assembly stage,” said Ackermann. Expect Insolight panels to hit the market in 2022 — yeah, it’s still a ways off, but maybe by then we’ll all have electric cars too and this will seem like an even better deal.

When surveillance meets incompetence

Last week brought an extraordinary demonstration of the dangers of operating a surveillance state — especially a shabby one, as China’s apparently is. An unsecured database exposed millions of records of Chinese Muslims being tracked via facial recognition — an ugly trifecta of prejudice, bureaucracy, and incompetence.

The security lapse was discovered by Victor Gevers at the GDI Foundation, a security organization working in the public’s interest. Using the infamous but useful Shodan search engine, he found a MongoDB instance owned by the Chinese company SenseNets that stored an ever-increasing number of data points from a facial recognition system apparently at least partially operated by the Chinese government.

Many of the targets of this system were Uyghur Muslims, an ethnic and religious minority in China that the country has persecuted in what it considers secrecy, isolating them in remote provinces in what amount to religious gulags.

This database was no limited sting operation: some 2.5 million people had their locations and other data listed in it. Gevers told me that data points included national ID card number with issuance and expiry dates; sex; nationality; home address; DOB; photo; employer; and known previously visited face detection locations.

This data, Gevers said, plainly “had been visited multiple times by visitors all over the globe. And also the database was ransacked somewhere in December by a known actor,” one known as Warn, who has previously ransomed poorly configured MongoDB instances. So it’s all out there now.

A bad idea, poorly executed, with sad parallels

Courtesy: Victor Gevers/GDI.foundation

First off, it is bad enough that the government is using facial recognition systems to target minorities and track their movements, especially considering the treatment many of these people have already received. The ethical failure on full display here is colossal but unfortunately no more than we have come to expect from an increasingly authoritarian China.

Using technology as a tool to track and influence the populace is a proud bullet point on the country’s security agenda, but even allowing for the cultural differences that produce something like the social credit rating system, the wholesale surveillance of a minority group is beyond the pale. (And I say this in full knowledge of our own problematic methods in the U.S.)

But to do this thing so poorly is just embarrassing, and should serve as a warning to anyone who thinks a surveillance state can be well administered — in Congress, for example. We’ve seen security tech theater from China before, in the ineffectual and likely barely functioning AR displays for scanning nearby faces, but this is different — not a stunt but a major effort and correspondingly large failure.

The duty of monitoring these citizens was obviously at least partially outsourced to SenseNets (note this is different from SenseTime, but many of the same arguments will apply to any major people-tracking tech firm), which in a way mirrors the current controversy in the U.S. regarding Amazon’s Rekognition and its use — though on a far, far smaller scale — by police departments. It is not possible for federal or state actors to spin up and support the tech and infrastructure involved in such a system on short notice; like so many other things the actual execution falls to contractors.

And as SenseNets shows, these contractors can easily get it wrong, sometimes disastrously so.

MongoDB, it should be said, is not inherently difficult to secure; it’s just a matter of choosing the right settings in deployment (settings that are now but were not always the defaults). But for some reason people tend to forget to check those boxes when using the popular system; over and over we’ve seen poorly configured instances being accessible to the public, exposing hundreds of thousands of accounts. This latest one must surely be the largest and most damaging, however.
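For reference, the settings in question amount to a couple of lines in the MongoDB configuration file (a sketch of the relevant options; exact defaults vary by version, as the article notes):

```yaml
# mongod.conf -- the two settings whose absence leads to exactly this exposure
net:
  bindIp: 127.0.0.1        # listen only on localhost, not every interface
security:
  authorization: enabled   # require authenticated users for all operations
```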

Gevers pointed out that the server was also highly vulnerable to MySQL exploits among other things, and was of course globally visible on Shodan. “So this was a disaster waiting to happen,” he said.

In fact it was a disaster waiting to happen twice; the company re-exposed the database a few days after securing it, after I wrote this story but before I published.

Living in a glass house

The truth is, though, that any such centralized database of sensitive information is a disaster waiting to happen, for pretty much everyone involved. A facial recognition database full of carefully organized demographic data and personal movements is a hell of a juicy target, and as the SenseNets instance shows, malicious actors foreign and domestic will waste no time taking advantage of the slightest slip-up (to say nothing of a monumental failure).

We know major actors in the private sector fail at this stuff all the time and, adding insult to injury, are not held responsible — case in point: Equifax. We know our weapons systems are hackable; our electoral systems are trivial to compromise and under active attack; the census is a security disaster; and unsurprisingly the agencies responsible for making all these rickety systems are themselves both unprepared and ignorant, by the government’s own admission… not to mention unconcerned with due process.

The companies and governments of today are simply not equipped to handle the enormousness, or recognize the enormity, of large scale surveillance. Not only that, but the people that compose those companies and governments are far from reliable themselves, as we have seen from repeated abuse and half-legal uses of surveillance technologies for decades.

Naturally we must also consider the known limitations of these systems, such as their poor record with people of color, the lack of transparency with which they are generally implemented, and the inherently indiscriminate nature of their collection methods. The systems themselves are not ready.

A failure at any point in the process of legalizing, creating, securing, using, or administrating these systems can have serious political consequences (such as the exposure of a national agenda, which one can imagine could be held for ransom), commercial consequences (who would trust SenseNets after this? The government must be furious), and most importantly personal consequences — to the people whose data is being exposed.

And this is all due (here, in China, and elsewhere) to the desire of a government to demonstrate tech superiority, and of a company to enable that and enrich itself in the process.

In the case of this particular database Gevers says that although the policy of the GDI is one of responsible disclosure, he immediately regretted his role. “Personally it made angry after I found out that I unknowingly helping the company secure its oppression tool,” he told me. “This was not a happy experience.”

The best we can do, and what Gevers did, is to loudly proclaim how bad the idea is and how poorly it has been done, is being done, and will be done.

Vision system for autonomous vehicles watches not just where pedestrians walk, but how

The University of Michigan, well known for its efforts in self-driving car tech, has been working on an improved algorithm for predicting the movements of pedestrians that takes into account not just what they’re doing, but how they’re doing it. This body language could be critical to predicting what a person does next.

Keeping an eye on pedestrians and predicting what they’re going to do is a major part of any autonomous vehicle’s vision system. Understanding that a person is present, and where, makes a huge difference to how the vehicle can operate. But while some companies advertise that they can see and label people at such and such a range, or under these or those conditions, few if any can, or even claim to, detect gestures and posture.

Such vision algorithms can (though nowadays are unlikely to) be as simple as identifying a human and seeing how many pixels it moves over a few frames, then extrapolating from there. But naturally human movement is a bit more complex than that.
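The naive approach described above can be sketched in a few lines (a toy illustration, not the UM system, which models pose and gait as well):

```python
# Track a pedestrian's centroid over consecutive frames and linearly
# extrapolate one frame ahead -- the simple baseline the article describes.

def predict_next(centroids):
    """Given (x, y) pixel centroids from consecutive frames, extrapolate
    the position one frame ahead using the last observed displacement."""
    (x0, y0), (x1, y1) = centroids[-2], centroids[-1]
    return (2 * x1 - x0, 2 * y1 - y0)  # last position + last displacement

frames = [(100, 50), (104, 50), (108, 51)]  # centroids over three frames
print(predict_next(frames))  # -> (112, 52)
```

Human movement is of course far less linear than this, which is exactly why pose and gait cues add predictive value.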

UM’s new system uses lidar and stereo cameras to estimate not just a person’s trajectory, but their pose and gait. Pose can indicate whether a person is looking towards or away from the car, or using a cane, or stooped over a phone; gait indicates not just speed but also intention.

Is someone glancing over their shoulder? Maybe they’re going to turn around, or walk into traffic. Are they putting their arms out? Maybe they’re signaling someone (or perhaps the car) to stop. This additional data helps a system predict motion and makes for a more complete set of navigation plans and contingencies.

Importantly, it performs well with only a handful of frames to work with — perhaps comprising a single step and swing of the arm. That’s enough to make a prediction that beats simpler models handily, a critical measure of performance as one cannot assume that a pedestrian will be visible for any more than a few frames between obstructions.

Not too much can be done with this noisy, little-studied data right now, but perceiving and cataloguing it is the first step to making it an integral part of an AV’s vision system. You can read the full paper describing the new system in IEEE Robotics and Automation Letters or at arXiv (PDF).

Deploy the space harpoon

Watch out, starwhales. There’s a new weapon for the interstellar dwellers whom you threaten with your planet-crushing gigaflippers, undergoing testing as we speak. This small-scale version may only be good for removing dangerous orbital debris, but in time it will pierce your hypercarbon hides and irredeemable sun-hearts.

Literally a space harpoon. (Credit: Airbus)

However, it would be irresponsible of me to speculate beyond what is possible today with the technology, so let a summary of the harpoon’s present capabilities suffice.

The space harpoon is part of the RemoveDEBRIS project, a multi-organization European effort to create and test methods of reducing space debris. There are thousands of little pieces of who knows what clogging up our orbital neighborhood, ranging in size from microscopic to potentially catastrophic.

There are as many ways to take down these rogue items as there are sizes and shapes of space junk; perhaps it’s enough to use a laser to edge a small piece down toward orbital decay, but larger items require more hands-on solutions. And seemingly all nautical in origin: RemoveDEBRIS has a net, a sail and a harpoon. No cannon?

You can see how the three items are meant to operate here:

The harpoon is meant for larger targets, for example full-size satellites that have malfunctioned and are drifting from their orbit. A simple mass driver could knock them toward the Earth, but capturing them and controlling their descent is a more precise technique.

While an ordinary harpoon would simply be hurled by the likes of Queequeg or Daggoo, in space it’s a bit different. Sadly it’s impractical to suit up a harpooner for EVA missions, so the whole thing has to be automated. Fortunately the organization is also testing computer vision systems that can identify and track targets. From there it’s just a matter of firing the harpoon at the target and reeling it in, which is what the satellite demonstrated today.

This Airbus-designed little item is much like a toggling harpoon, which has a piece that flips out once it pierces the target. Obviously it’s a single-use device, but it’s not particularly large and several could be deployed on different interception orbits at once. Once reeled in, a drag sail (seen in the video above) could be deployed to hasten reentry. The whole thing could be done with little or no propellant, which greatly simplifies operation.

Obviously it’s not yet a threat to the starwhales. But we’ll get there. We’ll get those monsters good one day.

Nintendo makes the old new again with Mario, Zelda, Tetris titles for Switch

The afternoon brought an eventful series of announcements from Nintendo in one of its Direct video promos, and 2019 is looking to be a banner year for the Switch. Here’s everything the company announced, from Super Mario Maker 2 to the unexpected remake of Game Boy classic Link’s Awakening.

The stream cold-opened with a look at the new Mario Maker, which would honestly be enough announcement for one day. But boy did they have more up their sleeves.

First the actually new stuff:

Shown last but likely to garner the bulk of the internet’s response is the remake of Link’s Awakening, which came out more than a quarter of a century ago on Game Boy. I admit to never finishing this but I loved the feel of it, so I’m dying to play this new tilt-shifted, perspective-switching 3D version.

Platinum has an intriguing new game called Astral Chain, in which you appear to control two fighters at the same time in some crazy-looking robot(?)-on-robot action. Talent from The Wonderful 101, Bayonetta, and Nier: Automata ensures this will be worth keeping an eye on.

The recent trend of battle royale and perhaps the best game ever made, Tetris, combine in Tetris 99, where 99 people simultaneously and competitively drop blocks. It looks bonkers, and it’s free on Switch starting right now.

And on the JRPG tip:

Fire Emblem: Three Houses got a long spot that introduced the main characters, whom you’ll no doubt ally with and/or be betrayed by. Romance is in the air! And arrows.

From the back-to-basics studio that put out I Am Setsuna and Lost Sphear comes Oninaki, an action RPG that looks like a well-crafted bit of fun, if not a particularly original one.

Dragon Quest 11 S — an enhanced version of the original hit — and DQ Builders 2 are on their way to Switch later this year, in Fall and July respectively.

Rune Factory 4 Special is another enhanced, remastered classic in a series that I adore (though I wish they’d remaster Frontier). It was also announced that RF5 is in development, so thank god for that.

Final Fantasy VII is coming at the end of March, and Final Fantasy IX is available now. I’m ashamed to say I never played the latter but this is a great opportunity to.

Sidescrollers new and old:

BOXBOY! + BOXGIRL! is a new entry in a well-liked puzzle platformer series that introduces some new characters and multiplayer. Coming in April.

Bloodstained: Ritual of the Night got a teaser, but we’ve heard a lot about this Castlevania spiritual sequel already. Just come out!

Yoshi’s Crafted World comes out March 29, but there’s a demo available today.

Captain Toad: Treasure Tracker gets an update adding multiplayer to its intricate levels and soon, a paid pack for new ones. I might wait for a combined version but this should be fun.

Miscellaneous but still interesting:

The new Marvel Ultimate Alliance is coming this summer and I can’t wait. The second one was a blast but it came out way too long ago. A good co-op brawler is a natural fit for the Switch, plus being a superhero is fun.

Daemon X Machina, the striking-looking mech combat game, is getting a demo ahead of the summer release. They’re going to incorporate changes and advice from players so if you want to help shape the game, get to it.

Disney Tsum Tsum Festival… I don’t know what this is. But it looks wild.

Deltarune! It’s the sequel-ish to the beloved Undertale, and you can get the first chapter on Switch now. Play Undertale first, or you won’t get the dog jokes.

There were a few more little items here and there but that’s the gist. Boy am I glad I have a Switch!

You can watch the full Direct here.

Another fine mesh

Amazon’s acquisition of mesh router company Eero is a smart play that adds a number of cards to its hand in the rapidly evolving smart home market. Why shouldn’t every router be an Echo, and every Echo be a router? Consolidating the two makes for powerful synergies and significant leverage against stubborn competition.

It’s no secret that Amazon wants to be in every room of the house — and on the front door to boot. It bought connected camera and doorbell companies Blink and Ring, and of course at its events it has introduced countless new devices from connected plugs to microwaves.

All these devices connect to each other, and the internet, wirelessly. Using what? Some router behind the couch, probably from Netgear or Linksys, with a 7-character model number and utilitarian look. This adjacent territory is the clear next target for expansion.

But Amazon could easily have moved into this with a Basics gadget years ago. Why didn’t it? Because it knew that it would have to surpass what’s on the market, not just in signal strength or build, but by changing the product into a whole new category.

The router is one of a dwindling number of devices left in the home that is still just a piece of “equipment.” Few people use their routers for anything but a basic wireless connection. Bits come and go through the cable and are relayed to the appropriate devices, mechanically and invisibly. It’s a device few think to customize or improve, if they think of it at all.

Apple made some early inroads with its overpriced and ultimately doomed Airport products, which served some additional purposes, like simple backups, and were also designed well enough to live on a table instead of under it. But it’s only recently that the humble wireless router has advanced beyond the state of equipment. It’s companies like Eero that did it, but it’s Amazon that’s made it realistic.

Build the demand, then sell the supply

It’s become clear that in many homes a single Wi-Fi router isn’t sufficient. Two or even three might be necessary to get the proper signal to the bedrooms upstairs and the workshop in the garage.

A few years ago this wasn’t even necessary, because there were far fewer devices that needed a wireless connection to work. But now if your signal doesn’t reach the front door, the lock won’t send a video of the mail carrier; if it doesn’t reach the garage, you can’t activate the opener for the neighbor; if it doesn’t reach upstairs, the kids come downstairs to watch TV — and we can’t have that.

A mesh system of multiple devices relaying signals is a natural solution, and one that’s been used for many years in other contexts. Eero wasn’t the first to create such a system, but it was among the first to make a consumer play with one, albeit at the luxury level, rather like Sonos.

Google got in on the game relatively soon after that with the OnHub and its satellites, but neither company really seemed to crack the code. How many people do you know who have a mesh router system? Very few, I’d wager, likely vanishingly few when compared with ordinary router sales.

It seems clear now that the market wasn’t quite ready for the kind of investment and complexity that mesh networking necessitated. Amazon, however, solves that, because its mesh router will be an Echo, or an Echo Dot, or an Echo Show — all devices that are already found in multiple rooms of the house, and seem very likely to include some kind of mesh protocol in their next update.

It’s hard to say exactly how it will work, since a high-quality router necessarily has features and hardware that let it do its job. Adding these to an Echo product would be non-trivial. But it seems extremely likely that we can expect an Echo Hub or the like, which connects directly to your cable modem (it’s unlikely to perform that duty as well) and performs the usual router duties, while also functioning as an attractive multipurpose Alexa gadget.

That’s already a big step up from the ordinary spiky router. But the fun’s just getting started for Amazon.

Platform play

Apple has powerful synergies in its ecosystems, among which iMessage has to be the strongest. It’s the only reason I use an iPhone now; if Android got access to iMessage, I’d switch tomorrow. But I doubt it ever will, so here I am. Google has that kind of hold on search and advertising — just try to get away. And so on.

Amazon has a death grip on online retail, of course, but its naked thirst for an Amazon-populated smart home has been obvious since it took the smart step to open its Alexa platform up for practically anyone to ship with. The following Alexavalanche brought garbage from all corners of the world, and some good stuff too. But it shipped devices.

Now, any device will work with the forthcoming Echo-Eero hybrids. After all it will function as a perfectly ordinary router in some ways. But Amazon will be putting another layer on that interface specifically with Alexa and other Amazon devices. Imagine how simple the interface will be, how easily you’ll be able to connect and configure new smart home devices — that you bought on Amazon, naturally.

Sure, that non-Alexa baby cam will work, but like Apple’s genius blue and green bubbles, some indicator will make it clear that this device, while perfectly functional, is, well, lacking. A gray, generic device image instead of a bright custom icon or live view from your Amazon camera, perhaps. It’s little things like that that change minds, especially when Amazon is undercutting the competition via subsidized prices.

Note that this applies to expanding the network as well — other Amazon devices (the Dot and its ilk) will likely not only play nice with the hub but will act as range extenders and perform other tasks like file transfers, intercom duty, throwing video, etc. Amazon is establishing a private intranet in your house.

The rich data interplay of smart devices will soon become an important firehose. How much power is being used? How many people are at home and when? What podcasts are being listened to, at what times, and by whom? When did that UPS delivery actually get to the door? Amazon already gets much of this but building a mesh network gives it greater access and allows it to set the rules, in effect. It’s a huge surface area through which to offer services and advertisements, or to preemptively meet users’ needs.

Snooping ain’t easy (or wise)

One thing that deserves a quick mention is the possibility, as it will seem to some, that Amazon will snoop on your internet traffic if you use its router. I’ve got good news and bad news.

The good news is that it’s not only technically very difficult but very unwise to snoop at that level. Any important traffic going through the router will be encrypted, for one thing. And it wouldn’t be much of an advantage to Amazon anyway. The important data on you is generated by your interactions with Amazon: items you browse, shows you watch, and so on. Snatching random browsing data would be invasive and weird, with very little benefit.

Eero addressed the question directly shortly after the acquisition was announced.

Maybe they would have eventually as a last-ditch effort to monetize, but that’s neither here nor there.

Now the bad news. You don’t want Amazon to see your traffic? Too bad! Most of the internet runs on AWS! If Amazon really cared, it could probably do all kinds of bad stuff that way. But again it would be foolish self-sabotage.

Free-for-all

What happens next is an arms race, though it seems to me that Amazon might have already won. Google took its shot and may be once bitten, twice shy; its smart home presence isn’t nearly so large, either. Apple got out of the router game because there’s not much money in it; it won’t care if someone uses an Apple HomePod (what a name) with an Amazon router.

Huawei and Netgear already have Alexa-enabled routers, but they can’t offer the level of deep integration Amazon can; there’s no doubt the latter will reserve many interesting features for its own branded devices.

Linksys, TP-Link, Asus, and other OEMs serving the router space may blow this off to start as a toy, though it seems more likely that they will lean on the specs and utilitarian nature to push it with budget and performance markets, leaving Amazon to dominate a sliver… and hope that sliver doesn’t grow into a wedge.

One place you may see interesting competition is from someone leaning on the privacy angle. Although we’ve established that Amazon isn’t likely to use the device that way, the fear doesn’t have to be justified for it to be taken advantage of in advertising. And anyway there are other features like robust ad blocking and so on that, say, a Mozilla-powered open source router could make a case for.

But it seems likely that by acquiring an advanced but beleaguered startup that was ahead of the market, Amazon will be able to make a quick entry and multiply while the others are still engineering their responses.

Expect specials on Eeros while stock lasts, then a new wave of mesh-enabled Echo-branded devices that are backwards compatible, mega-simple to set up, and more than competitive on price. Now is the time and the living room is the place; Amazon will strike hard and perhaps it will set in motion the end of the router as mere equipment.

Feel the beep: This album is played entirely on a PC motherboard speaker

If you’re craving a truly different sound with which to slay the crew this weekend, look no further than System Beeps, a new album by shiru8bit — though you may have to drag your old 486 out of storage to play it. Yes, this album runs in MS-DOS and its music is produced entirely through the PC speaker — you know, the one that can only beep.

Now, chiptunes aren’t anything new. But the more popular ones tend to imitate the sounds found in classic computers and consoles like the Amiga and SNES. It’s just limiting enough to make it fun, and of course many of us have a lot of nostalgia for the music from that period. (The Final Fantasy VI opening theme still gives me chills.)

But fewer among us look back fondly on the days before sample-based digital music, before even decent sound cards let games have meaningful polyphony and such. The days when the only thing your computer could do was beep, and when it did, you were scared.

Shiru, a programmer and musician who’s been doing “retro” sound since before it was retro, took it upon himself to make some music for this extremely limited audio platform. Originally he was just planning on making a couple tunes for a game project, but in this interesting breakdown of how he made the music, he explains that it ended up ballooning as he got into the tech.

“A few songs became a few dozens, collection of random songs evolved into conceptualized album, plans has been changing, deadlines postponing. It ended up to be almost 1.5 years to finish the project,” he writes (I’ve left his English as I found it, because I like it).

Obviously the speaker can do more than just “beep,” though indeed it was originally meant as the most elementary auditory feedback for early PCs. In fact the tiny loudspeaker is capable of a range of sounds and can be updated 120 times per second, but in true monophonic style can only produce a single tone at a time between 100 and 2,000 Hz, and that in a square wave.
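Those constraints are easy to make concrete. Here’s a minimal sketch (my own, not from Shiru’s write-up) of a PC-speaker-style tone renderer: one square-wave frequency at a time, changeable only on a 120 Hz tick. The function name and sample rate are assumptions for illustration.

```python
SAMPLE_RATE = 44100
TICK_HZ = 120  # pitch can change at most 120 times per second
SAMPLES_PER_TICK = SAMPLE_RATE // TICK_HZ

def square_wave_ticks(freqs_per_tick, amplitude=0.5):
    """Render a monophonic square wave: one frequency (or 0 for silence)
    per 1/120 s tick, mimicking the PC speaker's constraints."""
    samples = []
    phase = 0.0
    for freq in freqs_per_tick:
        for _ in range(SAMPLES_PER_TICK):
            if freq == 0:
                samples.append(0.0)
            else:
                # square wave: +amp for the first half of each period, -amp for the second
                samples.append(amplitude if phase < 0.5 else -amplitude)
                phase = (phase + freq / SAMPLE_RATE) % 1.0
    return samples

# One tick of 440 Hz followed by one tick of silence
out = square_wave_ticks([440, 0])
```

Note there is no volume control and no waveform choice here, which is the point: every sound on the album has to be coaxed out of exactly this.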

Inspired by games of the era that employed a variety of tricks to create the illusion of multiple instruments and drums that in fact never actually overlap one another, he produced a whole album of tracks; I think “Pixel Rain” is my favorite, but “Head Step” is pretty dope too.
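The core trick behind that illusion is interleaving: since only one tone can sound at a time, you alternate ticks between parts so quickly that the ear hears two voices at once. A toy sketch of the idea (my simplification, not Shiru’s actual engine):

```python
def interleave_voices(melody, bass):
    """Fake two-voice polyphony on a one-voice device: alternate ticks
    between the two parts so no two tones ever actually overlap."""
    out = []
    for i, (m, b) in enumerate(zip(melody, bass)):
        out.append(m if i % 2 == 0 else b)
    return out

# Melody on even ticks, bass on odd ticks
mixed = interleave_voices([440] * 4, [110] * 4)
# → [440, 110, 440, 110]
```

At 120 ticks per second the alternation is fast enough to blur into a chord-like texture, which is exactly how many late-’80s PC games faked accompaniment.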

You can of course listen to it online or as MP3s or whatever, but the entire thing fits into a 42 kilobyte MS-DOS program you can download here. You’ll need an actual DOS machine or emulator to run it, naturally.

How was he able to do this with such limited tools? Again I direct you to his lengthy write-up, where he describes, for instance, how to create the impression of different kinds of drums when the hardware is incapable of the white noise usually used to create them (and if it could, it would be unable to layer it over a tone). It’s a fun read and the music is… well, it’s an acquired taste, but it’s original and weird. And it’s Friday.
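One general-purpose version of the drum trick (a sketch of the common chiptune technique, not necessarily Shiru’s exact recipe) is a fast downward pitch sweep: a handful of ticks of rapidly falling square-wave pitch reads as a kick-drum “thump” even with no noise source at all.

```python
def kick_ticks(start=180, end=60, n=6):
    """Approximate a kick drum as a fast downward pitch sweep:
    n ticks sliding linearly from start to end Hz."""
    step = (start - end) / (n - 1)
    return [round(start - step * i) for i in range(n)]

# Six ticks falling from 180 Hz to 60 Hz — about 1/20 of a second
sweep = kick_ticks()
# → [180, 156, 132, 108, 84, 60]
```

Snares and hi-hats get the same treatment with higher, more erratic pitch sequences; the ear fills in the rest.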
