EWD 35 starts out in typical Dijkstra fashion.
It is not unusual for a speaker to start a lecture with an introduction. As some in the audience may be entirely unfamiliar with the issues that I wish to address and the terminology that I will have to use, I will give two examples by way of introduction: the first to describe the background of the problem, and the second to give you an idea of the kind of logical problems that we will encounter.
The story that EWD 35 tells is a brief history of the resolution of the scientific topic that brought Dijkstra his fame: the problem of synchronising parallel processes and the invention of the concept of the semaphore for this class of problems in computing.
Today it is almost impossible to imagine a time in which this problem was still unsolved. Collaborating parallel processes, semaphores and indivisible operations have become such a commodity in computing. But at that time computing meant sequential processes, and parallelism of interacting processes was being looked at for the first time.
In 1959, the question was whether the described capability for communication between two machines made it possible to couple two machines such that the execution of the critical sections was mutually excluded in time. In 1962, a much wider group of issues was considered and the question was also changed to “What communication possibilities between machines are required, to play this kind of games as gracefully as possible?”
A side note. The words “required” and “possibilities” here do not cover the meaning of Dijkstra’s original in Dutch, “welke communicatiemogelijkheden tussen machines zijn gewenst”. I would say “which communication facilities between machines are desired”, which better expresses that Dijkstra is in the mode of designing the thing, thinking up these communication facilities.
Another side note: much of Dijkstra’s beautifully loose writing in Dutch is lost in this straightforward translation to English. For example, we can translate “Jantje” to “Johnny”, but in Dutch we have a completely different image in our head when we see Jantje than anyone, Dutch or non-Dutch, would have with Johnny.
Dijkstra uses the word machines in the paragraph quoted above. The terminology developed further, and today we would call these, a bit more abstractly, processes.
Further on in the article he gives an insight into the potential of his “machines” and sees them simulated on a central computer. We really have to appreciate what a remarkable glimpse of the future this was. At that time computers ran single tasks; running multiple processes at the same time on the same hardware was at best experimental. He makes this remark in the context of alternative ways of having machines collaborate, letting them wait on each other by building in wait times in case they need access to the same resource.
However, if we now consider that the majority of these machines will be simulated by a central computer, so that any action in one machine can only be performed at the expense of the effective speed of the other machines, then it is very costly to use a wait cycle to demand the attention of the central computer for something completely useless.
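The “wait cycle” in this quote is what we now call busy waiting. A minimal sketch in Python of what such a cycle looks like (the flag name and thread setup are my own illustration, not Dijkstra’s program):

```python
import threading
import time

ready = False  # a hypothetical common logical variable, initially "not free"

def waiting_machine():
    # The programmed wait cycle: keep testing the flag, burning
    # processor time without doing anything useful.
    while not ready:
        pass  # spin

t = threading.Thread(target=waiting_machine)
t.start()

time.sleep(0.1)   # the other machine is busy for a while...
ready = True      # ...and then lets the waiting machine through

t.join()
print("wait cycle exited")
```

On a machine with one processor per process, spinning like this merely wastes that processor; Dijkstra’s point is that when the machines share one central computer, the spinning actively steals time from the machine everyone is waiting for.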
The basic problem Dijkstra extensively examines is best described in his own sentences: one machine wants to assign a value to a shared variable after testing its value. For example, something like:
if x = 3 then x := 4 else x := 5
Another machine is trying to do the same thing at the same time. What happens if machine 1 tests the value of x while, at the same moment, machine 2 changes this value? Or, what happens if they both change the value of x at the same time?
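The trouble can be made concrete by interleaving the two machines’ steps by hand. A sketch in Python (the simulation is my own; Dijkstra writes it in Algol-like notation):

```python
def statement(value):
    """One machine's action: if x = 3 then x := 4 else x := 5."""
    return 4 if value == 3 else 5

# Correct, strictly sequential execution: machine 1 first, then machine 2.
x = 3
x = statement(x)       # machine 1: x was 3, so x becomes 4
x = statement(x)       # machine 2: x is now 4, so x becomes 5
sequential_result = x  # 5

# Interleaved execution: both machines test x BEFORE either assigns.
x = 3
m1_sees = x                     # machine 1 tests: sees 3
m2_sees = x                     # machine 2 tests: also sees 3
x = 4 if m1_sees == 3 else 5    # machine 1 assigns 4
x = 4 if m2_sees == 3 else 5    # machine 2 assigns 4 as well
interleaved_result = x          # 4, not 5: machine 2 acted on a stale test

print(sequential_result, interleaved_result)  # prints: 5 4
```

Because the test and the assignment are two separate, interruptible actions, the interleaved run produces a result that no strictly sequential ordering of the two machines could produce.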
Dijkstra indicates that a mechanism is necessary, which he calls indivisible actions. Today these are known as indivisible, uninterruptible or atomic operations or instructions.
These two actions, assigning a new value and inquiring about the current value, are considered indivisible actions, i.e. if both machines wish to “simultaneously” assign a value to a common variable, then the value assigned at the end is one or the other value, but not some mixture. Similarly, if one machine asks for the value of a shared variable at the time that the other machine is assigning a new value to it, then the requesting machine will receive either the old or the new value, not a random value.
Having these instructions is key to the solution of the problem of interacting sequential processes.
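A modern way to make the whole test-and-assign sequence behave as one indivisible action is mutual exclusion, here sketched with Python’s `threading.Lock` (this is a present-day illustration, not Dijkstra’s 1962 construction):

```python
import threading

x = 3
lock = threading.Lock()

def machine():
    global x
    # The lock makes the test and the assignment one indivisible action:
    # no other machine can read or write x between them.
    with lock:
        if x == 3:
            x = 4
        else:
            x = 5

threads = [threading.Thread(target=machine) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(x)  # prints: 5
```

Whichever machine enters the lock first sees 3 and sets x to 4; the other then sees 4 and sets x to 5. The outcome is one that a strictly sequential execution could have produced, which is exactly what indivisibility buys.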
In EWD 35 Dijkstra then goes on to describe, in a rather lengthy paragraph (but remember, this was frontier research at the time), the invention of the semaphore in computing.
The programmed wait cycle that exists herein is of course very nice, but it contributed little to our goal. A tiny wait cycle is indeed the way to keep a machine busy “without effect”. However, if we now consider that the majority of these machines will be simulated by a central computer, so that any action in one machine can only be performed at the expense of the effective speed of the other machines, then it is very costly to use a wait cycle to demand the attention of the central computer for something completely useless. As long as the wait cycle cannot be exited, in our opinion the speed of this machine may be reduced to zero. To express this we introduce, instead of the wait cycle, a statement, a basic instruction in the repertoire of the machines, that may take a very long time. We indicate this with a P (to Pass); in anticipation of future needs, we represent the statement “SX := true” by “V(SX)”, with V from “vrijgave” (in English: release). (This terminology is taken from the railway environment. In an earlier stage the common logical variables were called “semaphores”, and if their name starts with an S, it is a reminiscence of that.) The text of the programs in this new notation is as follows:
“LXi: P(SX); TXi; V(SX); process Xi; goto LXi”.
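Dijkstra’s program text maps almost directly onto a modern semaphore API. A sketch in Python, where `Semaphore.acquire` plays P and `release` plays V (the shared counter standing in for the critical section TXi is my own illustration):

```python
import threading

SX = threading.Semaphore(1)  # the semaphore SX; 1 means the section is free
shared_counter = 0           # stands in for the shared state touched in TXi

def machine(rounds):
    global shared_counter
    for _ in range(rounds):   # the "goto LXi" loop
        SX.acquire()          # P(SX): pass, or wait until released
        shared_counter += 1   # TXi: the critical section
        SX.release()          # V(SX): "vrijgave", release
        # process Xi: the non-critical work would go here

threads = [threading.Thread(target=machine, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared_counter)  # prints: 4000, every increment survived
```

Crucially, a machine blocked in `acquire` consumes no processor time at all, which is precisely the improvement over the programmed wait cycle that Dijkstra is after.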
And you hold your breath, watching the birth of modern computing here in EWD 35.
The rest of the article provides the proof that this mechanism holds for any number of machines.