In the modern world, there are many kinds of speedy interpersonal communication. They make quite an impressive list: “ordinary” phone calls, video calls, voice chats, texting (with or without pictures and location information), plus a separate list of specialized applications like Snapchat, WhatsApp, Facebook Messenger, and more. With all of these options, it would seem unlikely for anyone to be consistently unreachable. Modern instant accessibility is a cultural shift. Indeed, it’s striking how many old movies or classic plays turn on some kind of communication difficulty. For a child watching a Shakespeare play where two characters each lack some crucial piece of information the other has, a natural question is, “Why doesn’t he just call her cell phone?”
However, for most of human history – almost everything before the invention of the telegraph in the 19th century – communication happened at the same speed as travel. Even when a message didn’t have to be delivered personally, transporting it was a process that a single traveler could, in principle, perform: walking, running, riding a horse, or sailing on a ship.
We might feel that those problems are safely relegated to history, but they come back with a vengeance if we consider computer systems and infrastructure of any size. To understand why, we have to look at the underpinnings of telecommunication. Although modern telecommunication speeds are incredibly fast by human standards, they’re distinctly unimpressive by the standards of modern computers. Pretty much every telecommunication mechanism depends in some way on light or its electromagnetic wave relatives. The speeds involved are so high as to seem instantaneous. If a person turns on a light switch, it does technically take a minuscule fraction of a second for electricity to reach the bulb, for the bulb to convert the electricity to light, and for the light to reach the walls of the room; but the interval of time is so small that no human comes anywhere close to being able to perceive it.
However, once we put modern computers into the mix, electricity and light don’t seem so fast. One important but eye-rollingly technical aspect of a computer is its clock speed. A computer is essentially a step-taking machine, and its clock speed is the rate at which it takes its smallest and fastest steps, over and over. Modern computers have a clock speed measured in GHz, which means some number of billions of steps every second. Turning that upside down, we can say that each step takes less than a billionth of a second. How far does light travel in that span of time? One of the handy rules of thumb for computer scientists is that light travels about a foot in a billionth of a second. So even though you can’t notice the time it takes for light to fill a room, your computer certainly could. Weirdly, your laptop is probably running fast enough that it takes a step or two in the time it takes for light to travel from the left side of its keyboard to the right side.
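The rule of thumb can be checked with a few lines of arithmetic. This is only a sketch; the 3 GHz clock speed is an illustrative assumption (real machines vary), while the speed-of-light constant is exact.

```python
# Rough arithmetic behind the "light travels about a foot in a
# billionth of a second" rule of thumb.

SPEED_OF_LIGHT_M_PER_S = 299_792_458   # exact, by definition of the meter
CLOCK_SPEED_HZ = 3e9                   # an assumed 3 GHz processor

seconds_per_step = 1 / CLOCK_SPEED_HZ                        # about 0.33 ns
meters_per_step = SPEED_OF_LIGHT_M_PER_S * seconds_per_step  # about 10 cm

# Distance light covers in one full nanosecond: about 30 cm, just
# under a foot. A 3 GHz machine takes roughly three steps in that time.
meters_per_ns = SPEED_OF_LIGHT_M_PER_S * 1e-9

print(f"One clock step lasts about {seconds_per_step * 1e9:.2f} ns")
print(f"Light travels about {meters_per_step * 100:.0f} cm in one step")
```

At these scales, the physical size of the machine itself starts to matter: a signal cannot cross even a laptop-sized distance in a single clock step.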
So although we humans live in a world of instant connectivity and simultaneous exchange of information, our computers are effectively trapped in the 18th century or before – they can only send messages and wait for replies. If a message is not received as expected, they have to guess: is the other party slow, or has something been lost, or has something failed?
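The guessing game can be made concrete with a tiny simulation. The sketch below is hypothetical – a local thread stands in for the remote party, and the names are invented for illustration – but it shows the essential ambiguity: a sender that gives up waiting cannot tell a slow peer from a lost message or a failed one.

```python
import queue
import threading
import time

def flaky_peer(inbox, outbox, delay_s, drop):
    """A stand-in for the remote party: it may reply slowly, or not at all."""
    request = inbox.get()
    if not drop:
        time.sleep(delay_s)
        outbox.put(f"reply to {request}")

def send_and_wait(delay_s, drop, timeout_s=0.1):
    """Send a message, then wait for a reply up to timeout_s seconds."""
    inbox, outbox = queue.Queue(), queue.Queue()
    threading.Thread(
        target=flaky_peer, args=(inbox, outbox, delay_s, drop), daemon=True
    ).start()
    inbox.put("request")
    try:
        return outbox.get(timeout=timeout_s)
    except queue.Empty:
        return None  # timeout: slow peer? lost message? failed peer?

# From the sender's side, these two situations look identical:
print(send_and_wait(delay_s=0.5, drop=False))  # slow peer -> None
print(send_and_wait(delay_s=0.0, drop=True))   # lost reply -> None
```

Both calls return the same result, yet in one case the reply is merely late and in the other it will never come. Every distributed system has to pick a policy for acting in the face of exactly this uncertainty.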
Likewise, computers that cooperate have to be designed in accord with these constraints. Especially if the computers are separated by any meaningful distance – which usually means any time they aren’t in the same building – communication may be unpredictable in terms of its speed or availability. People can operate in a 21st-century style, where they call or text as needed. In contrast, cooperating computers have to operate much like agents, ambassadors, or partners in the 18th century.
In particular, designers of such systems have to consider what to do when one party (the “agent”) has to make a decision, and the other party with final decision powers (the “principal”) is unavailable because they’re too far away, or otherwise unreachable. As was true in the 18th century, there are really only two choices:
- The agent has a considerable degree of autonomy to make the best decision under the circumstances, possibly following explicit standing instructions or simply using their best judgment.
- Alternatively, the agent declines to make any decision for which they lack adequate authority, instead waiting for the principal to decide.
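The two choices could be sketched in code. This is a hypothetical illustration – the names `Policy` and `handle_request` are inventions, not part of any real system – but it captures the fork in the design.

```python
from enum import Enum

class Policy(Enum):
    AUTONOMOUS = "act on best local judgment"
    DEFER = "wait for the principal"

def handle_request(request, principal_reachable, policy):
    """Sketch of the two designs: what does the agent do with a request
    when the principal may or may not be reachable?"""
    if principal_reachable:
        # No dilemma: the principal can simply decide.
        return f"principal decides: {request}"
    if policy is Policy.AUTONOMOUS:
        # First design: the agent acts on its own authority.
        return f"agent decides locally: {request}"
    # Second design: the decision is queued until contact is restored.
    return f"deferred: {request}"

print(handle_request("sign the treaty", principal_reachable=False,
                     policy=Policy.AUTONOMOUS))
print(handle_request("sign the treaty", principal_reachable=False,
                     policy=Policy.DEFER))
```

The first policy risks a decision the principal would have vetoed; the second risks a missed opportunity while everyone waits. Neither escapes the underlying constraint.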
Whichever scheme is in use, the result may well be one that the principal didn’t want. Unfortunately, there is simply no way of avoiding the problem – at modern computer speeds and global scale, physics rules out the kind of close coordination that we might expect to be available in human situations.
This design issue comes up in any kind of system that involves computers cooperating over a distance – what computer scientists call a “distributed system.”
In the modern world of web applications, mobile applications, and cloud services, it’s sometimes challenging to identify any interesting computer-based service that isn’t a distributed system. So it’s worth being aware that behind your 21st-century services there are 18th-century constraints on the computers – and sometimes those design choices and constraints will show through.