Hidden Drawers in Software

Huawei has been in the news again recently, with its CFO detained over concerns about a possible sanctions violation. The larger context is what makes that situation interesting: beyond the specific concerns about trade with Iran, the US government has been warning US telecommunication companies and allied governments that Huawei equipment is not trustworthy. There are concerns that Huawei equipment in a network could be used to spy or otherwise cause harm. If we grant for a moment the possibility that Huawei is doing these things, we might well wonder why we can’t have some kind of technical standards or tests for the equipment. Presumably, such tests would resolve the dispute about whether Huawei is actually doing bad things, as well as keep our networks safe. Unfortunately, the problem is harder than it might intuitively seem – arguably, it is unfixable. To understand the issue, we can start by taking a detour into fine furniture.

The Metropolitan Museum of Art in New York displays an elaborate roll-top writing desk from the late 1700s, made by the master craftsman David Roentgen. An associated video shows how various hidden mechanisms can be activated to reveal additional hidden drawers, presumably for the concealment of crucial secrets. We can admire the ingenuity and artistry of the desk, but living in the modern world gives us a crucial advantage over the maker’s contemporaries: we have confidence we can find such hidden compartments. Even if we didn’t already know the drawers were there, X-ray imaging of the desk could reveal them. That imagery would likely also reveal enough about the mechanism to let us access them.

Unfortunately, if we want to know about hidden mechanisms in our software, we have nothing like an X-ray. Instead we are like Roentgen’s contemporaries, who might grope around seeking a hidden compartment but never find the magic unlocking mechanism. That’s a crucial problem, because we are increasingly concerned about what our software systems do, and whether we can trust them. Sometimes the problem is that the software comes from China, and we’re not sure whether that country’s political or military organizations have included some features we don’t want to have.

Although the most serious concerns arise from national security challenges, not all of these problems have sinister motives. Frequently these kinds of problems are innocent consequences of the development process for complex systems: programmers may include special features for privileged access to data or behavior, simply because that access is handy for troubleshooting in the lab. Once the software goes into production, it’s easier to leave those features in place than to remove them – especially because a customer might run into the same sorts of problems that required troubleshooting in the lab. If we’re the customer, we may want to be sure that the provider of the software hasn’t left themselves some kind of hidden “back door” access to our data or decisions.

One part of the solution is being allowed to inspect the software, and such inspection is often enough to detect the innocent versions of this problem. Others have written about the problem of hidden algorithms: one good example is Cathy O’Neil’s book Weapons of Math Destruction. As a matter of policy, it should be possible to examine the workings of important programs – ones that make crucial decisions about our lives.

The “open source” movement has been a good influence in this regard, setting a standard that the human-readable version of a program should be readily available – and readily modifiable. Conventional wisdom increasingly includes the assumption that secret proprietary programs are not, in general, of higher quality than readily-available open source equivalents. However, even if we achieved all of what O’Neil calls for in the policy domain – or if, as a customer, we secure the right to read some program in its entirety – we still have a subtler problem. Some software doesn’t readily reveal itself, even when we have been given the right to read it.

There is a pithy demonstration of the problem called Thompson’s Hack. Ken Thompson is a David Roentgen of software. Instead of building writing desks, he built operating systems. An operating system is the low-level software that makes a computer usable. A “bare” computer is a little like a stand-alone gasoline engine: lots of potential, but not too useful on its own. The operating system provides the surrounding facilities to make the computer useful, in much the way that the various other parts of your car make the engine useful to you.

Like Roentgen, Thompson hid secret mechanisms – the software equivalent of the desk’s hidden drawers – inside what he built. He explained the crafting of his surprise in a talk he gave in 1983 when he received the Turing Award – often called the Nobel Prize of computing. Instead of carefully shaping and joining wood, Thompson took ingenious advantage of the way that software is built via translation.

To understand what Thompson did, we have to note a general characteristic of most programming: people write programs in languages that are well-suited for people, but those programs are then translated into more primitive languages that are well-suited for machines. Writing programs at the level of the machine would involve too many little fiddly details for most people; and likewise, executing programs at the level of people would require elaborate, expensive, and slow machinery. So the sensible compromise is to translate programs.
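To make that concrete, here is a tiny example – purely illustrative, since the exact output varies by machine and by translator:

```c
/* Human-friendly: we write in terms of names and arithmetic. */
int twice(int x) {
    return x + x;
}

/*
 * Machine-friendly: roughly what a translator (compiler) produces for
 * the function above on one common kind of machine. Illustrative only.
 *
 *   twice:
 *       lea eax, [rdi + rdi]   ; add the input value to itself
 *       ret                    ; hand the result back to the caller
 */
```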

Crucially for Thompson’s Hack, the translation from “human-friendly” to “machine-friendly” language is itself performed by a program. So the first level of sneakiness is to realize that if you subvert the translation program, the machine-friendly output can include features that didn’t appear in the human-friendly input. Instead of doing a simple translation, the subverted translator splices in some additional stuff.
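Here is a minimal sketch of the shape of that trick, with every name invented for illustration – this is not Thompson’s actual code:

```c
#include <stdio.h>
#include <string.h>

/* Stand-ins so the sketch is self-contained: a real translator would
 * produce machine code, but here "translating" just passes text along. */
static const char *translate(const char *src) { return src; }
static void emit(const char *code) { fputs(code, stdout); }

/* The additional stuff to be spliced in (a later sketch shows what it
 * might actually do). */
static const char *BACKDOOR = "/* hidden access goes here */\n";

/* A subverted translator: honest for almost everything, but when it
 * notices it is translating the login program, it quietly adds output
 * that never appeared in the human-friendly input. */
void compile(const char *source) {
    if (strstr(source, "login") != NULL) {
        emit(translate(BACKDOOR));   /* the splice: invisible in the source */
    }
    emit(translate(source));         /* the ordinary, honest translation */
}

int main(void) {
    compile("int login(void) { /* check the password list */ }\n");
    return 0;
}
```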

Of course, the additional stuff is the hidden part that corresponds to Roentgen’s secret drawers. Thompson’s original trick was to tweak the operating system so that he had a magic ability to log in – even though he didn’t appear in any list of authorized users.
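As a hypothetical sketch of what that spliced-in stuff might amount to inside the login program (the magic password here is invented, and the names are mine, not Thompson’s):

```c
#include <string.h>

/* Stand-in for the real check against the list of authorized users. */
static int matches_authorized_user(const char *user, const char *typed) {
    (void)user; (void)typed;
    return 0;   /* sketch: pretend the ordinary check failed */
}

/* What the subverted translator effectively adds: the normal password
 * check, plus a hidden master password that appears in no source code
 * anyone is given to read. */
int password_ok(const char *user, const char *typed) {
    if (strcmp(typed, "open-sesame") == 0)
        return 1;   /* the hidden drawer: magic access for the attacker */
    return matches_authorized_user(user, typed);
}
```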

That situation already seems kind of disturbing, but it gets worse. After all, the translator is itself a program, and so improving or fixing the translator means taking the human-friendly version of the translator and translating it into a machine-friendly version. (The spectacle of the translator translating itself is an example of “recursion,” and is the sort of thing that delights many computer scientists.) So the same trick applies one level up: a subverted translator can recognize when it is translating a translator – including itself – and splice its bad stuff into the result.
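Continuing the earlier sketch, the fully subverted translator looks roughly like this – again, invented names and a bare outline, not Thompson’s actual code:

```c
#include <stdbool.h>
#include <string.h>

/* Stand-ins, as before, so the sketch is self-contained. */
static void emit(const char *code) { (void)code; }
static const char *translate(const char *src) { return src; }
static bool looks_like_login(const char *src)      { return strstr(src, "login") != NULL; }
static bool looks_like_translator(const char *src) { return strstr(src, "compile") != NULL; }
static void splice_login_backdoor(void) { emit("/* magic login */"); }
static void splice_both_tricks(void)    { emit("/* this entire cheating apparatus */"); }

/* The two-level cheat: plant the back door in login, and also recognize
 * a translator being translated -- including this one itself -- and plant
 * the whole cheat into the result. From then on, even a perfectly honest,
 * fully readable translator source yields a dishonest translator. */
void compile(const char *source) {
    if (looks_like_login(source))
        splice_login_backdoor();   /* trick #1: the magic login */
    if (looks_like_translator(source))
        splice_both_tricks();      /* trick #2: the cheat reproduces itself */
    emit(translate(source));       /* otherwise, an ordinary honest job */
}
```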

If your head is spinning at this point, don’t worry – that’s normal. The tangle of translators is inherently confusing. (There’s a longer explanation, with more illustrations, in chapter 22 of my book Bits to Bitcoin.) The basic principle to keep in mind is that a translator can cheat – instead of doing a good translation, it can put in bad stuff. Since the translation is only going to be “read” by a machine, you can’t count on any person noticing the bad stuff. And there’s a particular kind of bad stuff that’s really pernicious, as well as mind-scrambling: we can make it so that any new version of the translator picks up the bad stuff as it is translated by the old one.

Once we’ve pulled off that trick, even replacing the translator with a new “clean” version doesn’t work right: as the allegedly-good translator is translated by the bad translator, the bad translator produces another bad translator as the result.

A sensible question here is: why do we keep using the bad translator? The answer is that we typically don’t know that it’s bad.

You might be acquainted with the problem of a “card skimmer” attached to a cash register or gasoline pump. A card skimmer is a small device that reads the magnetic stripe on a payment card, but does so on behalf of a criminal. Part of the reason the trick works is that there’s no standard appearance for a payment-card reader, so the modified one doesn’t necessarily look “wrong.” There may not be any obvious clues that there is a problem.

When we consider the situation with the bad translator, there’s nothing to see in any human-readable input program that represents this bad behavior. And it’s hard for any person to read the machine-readable output to see that something bad has been spliced in. Also, these translators are themselves large complex programs – there are not a lot of alternatives available, in most cases. So once our local translator has been corrupted, it is both hard to know that there’s a problem and hard to replace the bad translator with one that doesn’t have the problem.

So we can see there’s a problem here – but what does that problem really mean for us? Well, there’s good news and bad news. The good news is that this problem only relates to entities that are affected by software. The necessary combination of widespread translation, unreadable output, and translated translators is not found frequently – if ever – outside the world of software development.

However, the bad news is that there’s precious little in the modern world that’s unaffected by software. Indeed, one widely-quoted assessment of the overall technology trend of the last few years has been that “software is eating the world.” Almost everything is being affected by software, either in its actual operations (such as the apps on our mobile phones) or in its design (everything from buildings and furniture to the microchips in our devices).

How can we be sure that something designed by software doesn’t have hidden misbehaviors waiting to spring on us? For physical objects with straightforward mechanisms, like the Roentgen roll-top desk, X-rays let us see. But for physical objects with complex mechanisms like a microprocessor, or for nonphysical objects like software, we are lost: we must simply trust that the supplier is honest. For any situation where we can’t trust the supplier – as the US government says about Huawei – it is somewhere between difficult and impossible to verify that there is nothing hidden inside the complexity.
