Register Machine Model of Computation

{{stub}}
 
This page is intended to present an overview of the theoretical Register Machine model of computation, which is the conceptual basis for most real-world CPU designs. While it is assumed that anyone coming here will be at least passingly familiar with the concepts, this page is meant to review the topic and clear up any misconceptions newcomers may have.
 
The earliest computational models, both developed in the early 1920s, were ''[https://en.wikipedia.org/wiki/%CE%9C-recursive_function recursive function theory]'' and ''recursive set theory'', each of which provided a set of rules for performing computations and proving theorems.
 
The original goal of automatic theorem proving was dealt a critical blow in 1931, when [https://en.wikipedia.org/wiki/Kurt_G%C3%B6del Kurt Gödel] demonstrated that any formalism of mathematical logic would have blind spots — either ''undecidable propositions'', true theorems which could not be proven within the system, or ''invalid propositions'', theorems which could be 'proven' in the system but which were not actually true or which contradicted other theorems. While this was a major setback, the proof Gödel used demonstrated that recursive functions were a useful tool for exploring the new area of ''computability''.
 
The next significant model to arise was the ''[https://en.wikipedia.org/wiki/Lambda_calculus lambda calculus]'', created by Alonzo Church in 1935. This was a simpler model of functions than the existing recursive function theory, but was demonstrated to have the same computational power as RFT. Church postulated that these two models were, in fact, as complete as any sequential model could ever be.
 
That same year, a graduate student of Church's, Alan Turing, developed yet another model, the ''[https://en.wikipedia.org/wiki/Universal_Turing_machine Turing Machine]'', which he used to demonstrate that a well-known unsolved goal in mathematics — finding a method for [https://en.wikipedia.org/wiki/Halting_problem proving that any arbitrary computation process would go to completion] — was impossible, by showing that the testing process itself would necessarily be ''[https://en.wikipedia.org/wiki/Undecidable_problem undecidable]'' in the general case.
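The core of the argument can be illustrated with a short sketch. This is purely illustrative and is not taken from Turing's paper: the names, and the stub <code>halts()</code> itself, are hypothetical, since the whole point is that no correct version of such a decider can ever be written.

<syntaxhighlight lang="c">
#include <stdio.h>

/* Hypothetical "halting tester": given the text of a program and its input,
   report whether that program would eventually finish. Turing's result is
   that no correct version of this function can exist; this stub is only
   here so the sketch compiles. */
static int halts(const char *program, const char *input)
{
    (void)program;
    (void)input;
    return 1;   /* placeholder answer -- necessarily wrong for some inputs */
}

/* A program that asks the tester about itself, then does the opposite of
   whatever was predicted. */
static void contrarian(const char *own_source)
{
    if (halts(own_source, own_source))
        for (;;) { }   /* predicted to finish? loop forever instead */
    /* predicted to loop forever? return (i.e. halt) immediately */
}

int main(void)
{
    /* Whatever halts() answers about contrarian run on its own text,
       contrarian behaves the other way, so no halts() can be correct for
       every program -- the contradiction at the heart of the proof. */
    printf("prediction for contrarian(contrarian): %d\n",
           halts("contrarian", "contrarian"));
    (void)contrarian;   /* defined only to show the construction */
    return 0;
}
</syntaxhighlight>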
 
The Turing Machine, in its basic form as described in the paper "[https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf On Computable Numbers, with an Application to the ''Entscheidungsproblem'']", is thought of as a machine consisting of a read/write head and an infinitely long paper tape divided into cells containing data; one can think of the cells as boxes drawn on the tape to mark where a particular datum is. For each operation, the Turing Machine positions itself over a cell, reads the datum in the cell, and based on that datum decides whether to write a datum into a cell (whether the same one or a different one), move to another cell, or stop running and eject the tape in its current state as its final output. In his paper, he initially described each machine as having a fixed set of responses to the data, stored separately in an ''action table'', and a fixed set of states which the data could put it into. He then proceeded to show that this action table, and with it the complete description of a machine, could itself be written out as data on the tape of another machine.
 
 
In doing so, he showed that it was possible to design a variant of the Turing Machine that could simulate any other possible Turing Machine; this ''Universal Turing Machine'' (UTM) demonstrated the same kind of generality as RFT and the λ-calculus, and he expanded on Church's premise that these three models represented the absolute limit of sequential computation, which became known as the ''[https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis Church-Turing Thesis]''. While it has never been proven, it is generally accepted as likely to be correct, with the result that ''[https://en.wikipedia.org/wiki/Turing_completeness Turing Completeness]'' became the benchmark for the computing power of any mechanical model.
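To make the basic read-act-move cycle concrete, here is a minimal C sketch of a Turing Machine step loop. The two-state machine, its rules, and all names are invented for the example and are not taken from Turing's paper; the fixed-length array stands in for the infinite tape.

<syntaxhighlight lang="c">
#include <stdio.h>

#define TAPE_LEN 32
#define HALT     (-1)

/* One action-table entry: what to write, which way to move the head,
   and which state to enter next, given (current state, symbol read). */
struct action {
    int write;       /* symbol to write into the current cell   */
    int move;        /* -1 = left, +1 = right, 0 = stay put      */
    int next_state;  /* next state, or HALT to stop the machine  */
};

/* A made-up two-state, two-symbol machine: it overwrites 0s with 1s
   while moving right, and halts as soon as it reads a 1. */
static const struct action table[2][2] = {
    { {1, +1, 0},        /* state 0, read 0: write 1, move right  */
      {1,  0, HALT} },   /* state 0, read 1: write 1, halt        */
    { {0,  0, HALT},     /* state 1 is unused in this example     */
      {1,  0, HALT} },
};

int main(void)
{
    int tape[TAPE_LEN] = {0, 0, 0, 1};  /* finite stand-in for the infinite tape */
    int head = 0, state = 0;

    /* Fetch the action for (state, symbol), apply it, repeat until HALT. */
    while (state != HALT && head >= 0 && head < TAPE_LEN) {
        struct action a = table[state][tape[head]];
        tape[head] = a.write;
        head      += a.move;
        state      = a.next_state;
    }

    for (int i = 0; i < TAPE_LEN; i++)
        printf("%d", tape[i]);
    putchar('\n');
    return 0;
}
</syntaxhighlight>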
== The Random Access Machine model ==
The Random-Access Machine is an abstract model that closely resembles the structure of a conventional computer. It consists of:
* a data memory divided into a set of cells, arranged linearly and monotonically (that is to say, the cells follow each other in strictly numeric order, and each cell is the same 'size' in the address space, which doesn't ''necessarily'' reflect its capacity to hold data), where each cell can be accessed by a numeric address;
* an instruction memory, which may or may not be the same as the data memory, but is similarly structured;
* a set of operations which it can perform;
* a set of instructions, encoded in a form which can be held in memory cells, which direct which operation to perform; and
* a Program Counter, which holds the address of the cell in the instruction memory encoding the operation to perform next.
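As a rough sketch of how these parts fit together, the following C program simulates a tiny Random-Access Machine. The four-operation instruction set, its encoding, and all names are invented for illustration and do not correspond to any real ISA; the data memory and instruction memory are kept separate here, though as noted above they need not be.

<syntaxhighlight lang="c">
#include <stdio.h>

/* Made-up operation set, just to mirror the components listed above. */
enum opcode { OP_HALT, OP_LOADI, OP_ADD, OP_JMP };

struct insn {
    enum opcode op;
    int a, b, c;   /* cell numbers, or an immediate value for OP_LOADI */
};

int main(void)
{
    int mem[16] = {0};              /* data memory: numbered, equal-sized cells */

    /* Instruction memory: compute mem[2] = mem[0] + mem[1]. */
    static const struct insn prog[] = {
        { OP_LOADI, 0, 2, 0 },      /* mem[0] = 2               */
        { OP_LOADI, 1, 3, 0 },      /* mem[1] = 3               */
        { OP_ADD,   2, 0, 1 },      /* mem[2] = mem[0] + mem[1] */
        { OP_HALT,  0, 0, 0 },
    };

    int pc = 0;                     /* Program Counter: address of the next instruction */

    for (;;) {
        struct insn in = prog[pc++];     /* fetch the instruction, advance the PC */
        switch (in.op) {
        case OP_LOADI: mem[in.a] = in.b;                  break;
        case OP_ADD:   mem[in.a] = mem[in.b] + mem[in.c]; break;
        case OP_JMP:   pc = in.a;                         break;
        case OP_HALT:  printf("mem[2] = %d\n", mem[2]);   return 0;
        }
    }
}
</syntaxhighlight>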
 
 