Register Machine Model of Computation

This page is intended to present an overview of the theoretical Register Machine model of computation, which is the conceptual basis for most real-world CPU designs. While it assumes that anyone coming here will be at least passingly familiar with the concepts, it is meant as a review of the topic, to clear up any misconceptions newcomers may have.

The Register Machine Model is a model of computation used to describe the computability of functions. It is a sub-type of a class of computation models called Random Access Machines (RAMs), which also includes the related Stack Machine model. It is one of many highly diverse models, some of which (including many register machines) are Turing-equivalent, that is to say, they can compute the same set of functions as a Universal Turing Machine. The relevance of the Register Machine model is that it is the easiest of all models to approximate in electronic hardware; in fact, the model was developed to describe the first programmable computer systems in mathematical terms, in order to demonstrate that they were Turing-equivalent.
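To make the model concrete, the sketch below implements a tiny register machine interpreter in C. It assumes a Minsky-style instruction set of just INC (increment a register) and DECJZ (decrement the register, or jump if it is already zero); the opcode names, program encoding, and example program are invented for illustration and are not taken from any particular formalization.

#include <stdio.h>

/* A minimal register machine interpreter: a sketch assuming a
 * Minsky-style instruction set.  The encoding (INC, DECJZ, HALT)
 * is made up for this example. */

enum opcode { INC, DECJZ, HALT };

struct insn {
    enum opcode op;
    int reg;     /* register operand */
    int target;  /* jump target for DECJZ when the register is zero */
};

/* Execute a program until HALT; registers start at whatever the
 * caller put in regs[]. */
static void run(const struct insn *prog, long *regs)
{
    int pc = 0;
    for (;;) {
        const struct insn *i = &prog[pc];
        switch (i->op) {
        case INC:                   /* regs[r]++, then advance */
            regs[i->reg]++;
            pc++;
            break;
        case DECJZ:                 /* if regs[r]==0 jump, else regs[r]-- */
            if (regs[i->reg] == 0)
                pc = i->target;
            else {
                regs[i->reg]--;
                pc++;
            }
            break;
        case HALT:
            return;
        }
    }
}

int main(void)
{
    /* Addition: move the contents of r1 into r0, one unit at a time.
     *   0: DECJZ r1 -> 3   (if r1 == 0, we are done)
     *   1: INC   r0
     *   2: DECJZ r2 -> 0   (r2 stays 0, so this is an unconditional jump)
     *   3: HALT                                                      */
    struct insn add[] = {
        { DECJZ, 1, 3 },
        { INC,   0, 0 },
        { DECJZ, 2, 0 },
        { HALT,  0, 0 },
    };
    long regs[3] = { 2, 3, 0 };        /* r0 = 2, r1 = 3 */
    run(add, regs);
    printf("2 + 3 = %ld\n", regs[0]);  /* prints 5 */
    return 0;
}

The example program expresses addition as repeated increment, which is typical of such minimal machines: anything more elaborate, such as multiplication or comparison, is built out of the same two instructions.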

Computational Models and the History of Computing

A model of computation is a mathematical formalism that describes a mechanical process ('mechanical' in the sense of deterministic and rules-based, not necessarily mechanized or electronic) for performing computations. The idea of such formalisms arose in the early 20th century, when several mathematicians, most notably Alfred North Whitehead, David Hilbert, and Bertrand Russell, began to speculate on whether it was possible to develop a mechanical system for deriving mathematical proofs. The earliest computational models, recursive function theory and recursive set theory, were both developed in the early 1920s and used to define rules for performing computations and proving theorems.

The original goal of theorem proving was dashed in 1931, when Kurt Gödel demonstrated that any formalism of mathematical logic would have blind spots: either undecidable propositions (true theorems which could not be proven within the system) or invalid propositions (theorems which could be 'proven' in the system but which were not actually true). While this was a major setback, the proof Gödel used demonstrated that recursive functions were a useful tool in exploring the new area of computability.

The next significant model to arise was the lambda calculus, created by Alonzo Church in 1936. This was a simpler model of functions than existing recursive function theory, but was demonstrated to have the same computational power as RFT. That same year, Alan Turing developed yet another model, the Turing Machine, which he used to demonstrate that a well-known unsolved goal in mathematics (finding a general method for proving that an arbitrary computation process would run to completion) was impossible, by showing that the testing process itself would necessarily be undecidable in the general case. In doing so, he showed that it was possible to design a variant of the Turing Machine that could simulate any other possible Turing machine; he postulated that this Universal Turing Machine could perform any decidably mechanizable computation.

These models were 'mechanical' only in the sense of being built on a set of rules; they were not very useful for developing a physical computing device. Of them all, the UTM was the closest, as it was described in terms of an imaginary mechanism consisting of a reading and writing head and an infinitely long paper tape. While it was not a practical design, it demonstrated that the mechanical calculators of the time had a solid theoretical basis, and more importantly, re-introduced the concept (originally proposed by Charles Babbage) of a programmable computation device.
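The head-and-tape mechanism itself is simple enough to mimic in a few lines of code. The C sketch below uses a fixed-size array as a stand-in for the infinite tape, and a single made-up rule (flip each bit, halt at the first blank); it illustrates the read/write/move cycle only, and is not a machine from Turing's paper.

#include <stdio.h>
#include <string.h>

/* A toy rendition of Turing's mechanism: a tape of symbols and a
 * head that reads, writes, and moves one cell per step.  The rule
 * used here (invert every bit, halt on blank) is invented for
 * illustration. */

#define BLANK '_'

int main(void)
{
    char tape[32] = "1011";     /* cells beyond the input act as blanks */
    memset(tape + 4, BLANK, sizeof tape - 5);
    tape[31] = '\0';
    int head = 0;

    for (;;) {
        char sym = tape[head];                  /* read the cell under the head */
        if (sym == BLANK)                       /* halting condition */
            break;
        tape[head] = (sym == '0') ? '1' : '0';  /* write */
        head++;                                 /* move right */
    }
    printf("%.*s\n", head, tape);               /* prints 0100 */
    return 0;
}

A full Turing machine also carries a finite state register and a transition table that picks the write symbol, move direction, and next state, but the read/write/move loop above is the whole of the physical 'mechanism' Turing described.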

Turing would go on to do significant work in the development of one of the first such machines, the ULTRA project's Colossus decryption machine, originally conceived by Tommy Flowers in early 1943. During this period, from the late 1930s to the early 1950s, several groups of designers seem to have developed similar ideas independently of each other: Konrad Zuse in Germany, John Atanasoff and Clifford Berry at Iowa State University, Presper Eckert and John Mauchly at the University of Pennsylvania, and Howard Aiken at Harvard all arrived at the principle of the programmable computer between 1941 and 1943, and the stored-program concept was later codified in a white paper by a team led by John von Neumann. Because of this, designs in which the program and data are stored in a shared memory became known as the Von Neumann architecture, while those which have independent memories for program and data, such as the Zuse Z4 and the Harvard Mark I, became known as the Harvard Architecture.