Historical Notes on CISC and RISC

To understand why and how this reversal came about, you need to know a bit of history. When stored-program computers based on the von Neumann architecture were first being developed in the late 1940s and early 1950s, hardware was far more diverse than today, built from such disparate technologies as mercury delay lines, CRT-phosphor memories, magnetic drums, wire and tape recorders, ferromagnetic cores, and the ever-present vacuum tubes. The size and complexity of that hardware meant its capacities were extremely limited: it was not uncommon for even a large machine to have only one or two registers and a main memory of a few thousand words (word sizes varied from machine to machine). The instruction sets were equally constrained, and often devoted part of the already small instruction space to special-purpose instructions for driving the peripherals.
 
For example, the 18-bit Lincoln Labs TX-0 (one of the first transistor-based systems designed) had a grand total of four primary instructions (though one of these was a kind of escape code telling the machine to treat the following word as an instruction from a different group of special-purpose operations) and two registers, X and Y. Many of these designs had what would today be considered poorly developed instruction sets: full of special cases, lacking useful instructions,<ref name="oisc" /> and including instructions that were of little use.
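
The single-operation idea in the footnote is easy to make concrete. Below is a minimal sketch (not from the original article) of an interpreter for subleq, one of the common variants of that 'subtract and branch' operation, adding two numbers using nothing but that one instruction; the memory layout and the halt-on-negative-jump-target convention are assumptions made for illustration.

<syntaxhighlight lang="c">
#include <stdio.h>

/* A minimal subleq interpreter. Each instruction is three memory cells
 * (a, b, c): subtract mem[a] from mem[b]; if the result is <= 0, jump
 * to c, otherwise fall through to the next triple. A negative jump
 * target halts (a convention assumed here for illustration). */
int main(void)
{
    /* Cells 0-8 are code; cells 9, 10, 11 hold A = 7, B = 5, and a
     * scratch zero Z. The program computes B += A using only subleq:
     *   Z -= A        (Z becomes -A)
     *   B -= Z        (B becomes B + A)
     *   Z -= Z, halt  (restore Z to 0, then jump to -1) */
    int mem[] = { 9, 11, 3,   11, 10, 6,   11, 11, -1,   7, 5, 0 };
    int pc = 0;

    while (pc >= 0) {
        int a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
        mem[b] -= mem[a];
        pc = (mem[b] <= 0) ? c : pc + 3;
    }
    printf("B = %d\n", mem[10]);   /* prints B = 12 */
    return 0;
}
</syntaxhighlight>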
 
Generally speaking, these early designs had only a single special-purpose register, called the accumulator, on which the machine could perform most of the common operations such as addition or subtraction; the other operand, if any, was usually either in main memory or in another special-purpose register called the source, and the result would remain in the accumulator (hence the name).
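
As a rough sketch of that style (the opcodes and memory layout here are invented for illustration, not taken from any particular machine), the pattern is: one operand comes from memory, the other is implicitly the accumulator, and the result stays in the accumulator.

<syntaxhighlight lang="c">
#include <stdio.h>

/* A toy single-accumulator machine. Every arithmetic instruction names
 * one memory operand; the accumulator supplies the other operand and
 * receives the result. */
enum op { LOAD, ADD, SUB, STORE, HALT };
struct insn { enum op op; int addr; };

int main(void)
{
    int mem[8] = { 0, 7, 5 };      /* mem[1] = 7, mem[2] = 5 */
    struct insn prog[] = {
        { LOAD,  1 },              /* acc = mem[1]           */
        { ADD,   2 },              /* acc = acc + mem[2]     */
        { STORE, 0 },              /* mem[0] = acc           */
        { HALT,  0 },
    };
    int acc = 0;

    for (int pc = 0; prog[pc].op != HALT; pc++) {
        int addr = prog[pc].addr;
        switch (prog[pc].op) {
        case LOAD:  acc = mem[addr];  break;
        case ADD:   acc += mem[addr]; break;
        case SUB:   acc -= mem[addr]; break;
        case STORE: mem[addr] = acc;  break;
        default:    break;
        }
    }
    printf("mem[0] = %d\n", mem[0]);   /* prints mem[0] = 12 */
    return 0;
}
</syntaxhighlight>
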
Finally, the advent of single-chip microprocessors meant that it would soon be possible to mass-produce inexpensive computers for a fraction of the cost of minis and mainframes. Early on, these were limited by the capabilities of the hardware in much the same way the first generation of CPUs had been (and some of the design mistakes of the first generation were repeated as well), but the technology soon grew beyond anyone's expectations and, in some ways, beyond the designers' ability to predict what they would later need to support.
 
This last part had some major repercussions, especially regarding some of the design decisions made as microprocessors became more powerful. Chief amongst these was the decision by Intel to try to extend the 8-bit [https://en.wikipedia.org/wiki/Intel_8080 8080] design into a new 16-bit design, the [https://en.wikipedia.org/wiki/Intel_8086 8086]; they bent over backwards to make the CPU as familiar as possible to programmers of the older chip, with the result that the design ended up with some unnecessary complications and limitations, especially in how it addressed larger amounts of memory.<ref name="segmentation" /> Also, because the designers had given the system only a small number of registers, most operations were given several complicated addressing modes so that code could work on operands in memory without running out of registers.
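
To make the arithmetic in the footnote concrete, here is a small sketch of real-mode address formation: the physical address is the segment value shifted left 4 bits plus the offset, wrapped to 20 bits as on the original 8086. The helper name is ours, not anything from Intel's documentation.

<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdio.h>

/* Real-mode (8086-style) address formation: shift the 16-bit segment
 * value left 4 bits and add the 16-bit offset. The mask reproduces the
 * original chip's wrap-around at 1 MiB (20 address pins). */
static uint32_t phys_addr(uint16_t segment, uint16_t offset)
{
    return (((uint32_t)segment << 4) + offset) & 0xFFFFF;
}

int main(void)
{
    /* Because segments start every 16 bytes, many segment:offset pairs
     * alias the same physical byte. */
    printf("%05X\n", (unsigned)phys_addr(0xB800, 0x0000));   /* B8000 */
    printf("%05X\n", (unsigned)phys_addr(0xB000, 0x8000));   /* B8000 */
    return 0;
}
</syntaxhighlight>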
 
The fact that this processor was selected for the IBM PC, which would soon become the dominant platform, meant that these weaknesses were of critical importance to millions of users. This was further exacerbated when Intel extended the design still further with the [https://en.wikipedia.org/wiki/Intel_80286 80286], which needed a separate 'protected mode' to access its full abilities while retaining 'real mode' for backwards compatibility. Some of the design flaws were resolved in the next design, the [https://en.wikipedia.org/wiki/I386 80386], but at the cost of drastically increasing the complexity both of the chip itself and of assembly programming for it.
Meanwhile, other chip designers were going in other directions. One, Motorola, started over with a classic 'big' design, the 68000, which in many ways resembled a scaled-down VAX, with 16 general-purpose registers and a complicated instruction set. It would become the CPU for several successful workstations, as well as for the original [https://en.wikipedia.org/wiki/Macintosh Apple Macintosh] line. Later design extensions would complicate it, though never to the extent seen in the 80x86 family.
 
Still, it was becoming clear that complex instruction sets were growing counter-productive. Thus, many chip designers decided to go in the opposite direction: minimal instruction sets, no microcoding, load/store architectures with few if any operations working directly on memory, large register sets which could be used to avoid accessing main memory whenever possible, and an emphasis on supporting high-level languages rather than assembly programming. This new approach, called RISC (reduced instruction set computer), would be the basis of several new CPU designs, including MIPS, SPARC, ARM, IBM's POWER,<ref name="powerpc" /> and the DEC Alpha. Of these, all but the Alpha remain in use today in certain specialized areas, and ARM in particular has become the ''de facto'' standard for mobile computing, though for the most part the dominance of the Windows-x86 combination has forced them out of the market for home and business systems.
 
=== Footnotes ===
<references>
<ref name="oisc">In principle, only a single operation, 'subtract two memory values and branch if the result is negative' (or several variants on this) is sufficient to allow a Random Access Machine to perform all Turing-computable calculations). There are even (simulated) machines which are designed on this principle, such as the OISC. In practice, of course, such a system would be both tedious and wasteful, especially for the more commonly used operations such as integer arithmetic.</ref>
 
<ref name="segmentation">The specific issue is the use of 'memory segmentation', which was intended to provide a 20-bit address space while still only using 16-bit addressing for most purposes. It worked by having a separate set of segment registers, which pointed to 64K regions overlapping each other at 16-byte intervals. The addresses are formed by taking the segment address and a 16-bit offset, which are equivalent to the 8080's addresses, adding the two values with a 4-bit displacement to get the 20-bit address. Most instructions could use one of the segment registers as a default value for code or data, and use just the offsets in the instruction stream. The particular overlap was set at 4 bits because using more than 20 address pins was determined to be prohibitively expensive at the time, and would make the Dual-Inline Package design too large. It was assumed that a 1 MiB address space would be sufficient for a CPU meant as a microcontroller with a limited design lifespan rather than a general-purpose system that would be in use 40 years later. This segmentation system persists as 'Real mode' even in current x86 models.</ref>
 
<ref name="powerpc">used in modified form as the PowerPC, which was used in Macintoshen from 1994 to 2006 and in some IBM OS/2 systems from 1992 to 1996</ref>
</references>