User:Johnburger/PIC

From OSDev.wiki
Revision as of 03:17, 20 March 2014 by osdev>Johnburger (Added "System design" section)
This page is a work in progress.
This page may thus be incomplete. Its content may be changed in the near future.

Programmable Interrupt Controller (PIC)

Introduction

The Intel 8259 PIC was originally designed as a support chip for the 8086/8088 (and the earlier 8085 - never mind!), to extend the CPU's single interrupt pin and allow a number of different devices to interrupt it. The PIC had 8 inputs, and it would prioritise these for the CPU to ensure that the more important interrupts were serviced before - or even during - less important ones. Of course, this complicated interrupt handling for all hardware interrupt handlers, but the complexity was seen as worthwhile for a true computer like the IBM PC.

Essentially, the CPU would program the PIC at startup, and from then on when an attached peripheral would raise an interrupt line, the CPU would stop what it was doing, 'service' the interrupt, and then resume from where it left off.

System design

General PIC capabilities

The PIC is very flexible in its usage. As an example, I once programmed a system that had an 80188 CPU, an 8259 PIC, and 8x 16450 UART serial chips. After one of the UARTs interrupted the CPU, I had configured the PIC to "rotate" the priority of the interrupts to demote that UART to the bottom of the list, allowing others to get a fairer service.

But it would be true to say that well over 99% of all 8259 PICs (or their equivalents) are installed in PCs, which only ever use the PIC in one of its many modes.

The PC: IBM made a Whoops! (or two... or...)

IBM's decision in 1980 to base its new Personal Computer on Intel's architecture arguably made Intel into what is today the world's largest chip manufacturer. Not only the processor but also the support chips (the PIC was just one of the many Intel chips used) all worked together, were readily available, and were cheap. IBM just had to put them all together and make some design decisions: but at the time they weren't aware of how fundamentally some of those decisions would affect the future.

Intel specification

Intel's 8086/8088 processor supported up to 256 interrupts, up to 64 of which could be external interrupts (called Interrupt ReQuests or IRQs) using the 8259 PIC (see below). Intel predefined the first few interrupts (INTs) of the CPU to signal internal exceptions, such as Division by Zero or Debug, and quietly documented that they reserved the first 32 interrupts for future use - even though only the first 5 were currently defined.

IBM either missed the documented reservation, or ignored it since those interrupts weren't actually in use. They promptly and arbitrarily assigned various interrupts from 5 upwards for their own use: system calls, hardware interrupts (IRQs), and even simple pointers to data tables. (Tip: don't ever make an INT call to one of those - unless you want to crash your computer!) For example, INT 5, the first one not in use, was adopted to perform a Screen Print (that smacks of an early debugging requirement to me...) At least when Microsoft added their own interrupts in MS-DOS they started from 32 (20h) - but that may be because IBM had already used most of the lower ones!

The effect of this decision became apparent when the PC was upgraded to use the 80286 and 80386. These newer processors used more of those Intel-reserved interrupts, which meant that executing a simple BOUND instruction on a PC could now cause the printer to burst into life! Worse, the IRQs (assigned to INTs 8 through 15 on the PC) interfered with more of the CPU's internal exceptions, complicating their handlers. Was that INT 13 caused by an internal General Protection Fault, or the Ethernet card?

This is why one of the first things that an OS writer on the modern PC platform needs to do is reprogram the PIC to get the IRQs away from the Intel-reserved exceptions! Blame it on IBM...

IRQ sharing

Another design decision - or at least a decision not to be more prescriptive in their documentation - also had an effect on the future of the new PC. The PC had a bus where expansion boards could be installed, adding features such as communications or storage to the basic PC. Some of these expansion cards might need to interrupt the CPU, so the bus carried IRQs 2 through 7 (IRQs 0 and 1 were already dedicated to the timer and keyboard on the motherboard). Properly designed cards could use these IRQs in a cooperative manner, sharing the same interrupt with each other.

Unfortunately, most cards weren't properly designed in this respect - they assumed that they were the only one using a particular IRQ. That very quickly used up all the available IRQs, making adding new cards a tricky proposition for unskilled users. Expansion cards needed moveable jumpers to select one of the remaining free IRQs - and then the drivers for those cards needed to be told the jumper's position. The invention of Plug-and-Play (PnP) could arguably be ascribed to this situation.

Level- versus Edge-triggered Interrupts

The final design decision - with respect to IRQs, anyway - was actually one without a correct answer. The PIC supports two kinds of interrupt modes: level-triggered and edge-triggered. Think of a pupil sitting in a classroom wanting to attract the attention of the teacher. She could raise her hand and keep it up until the teacher acknowledged her, or she could raise it and quickly lower it again, hoping to catch the teacher's eye. If the teacher had his back turned (busy doing something else) he might completely miss the second type, but with the first type the raised hand could obscure someone sitting behind the girl.

A level-triggered interrupt is raised for as long as the hardware is requesting service. Until the CPU services the interrupt - allowing the hardware to lower the interrupt line - the CPU cannot be told about other interrupts that may be pending on that line.

However, an edge-triggered interrupt merely pulses the interrupt line, and if the CPU misses the pulse the hardware may go unserviced. Also, if the hardware decides it doesn't need an interrupt after all, there's no way for it to "take back" its interrupt pulse.

The designers of the original IBM PC decided to go for edge-triggered interrupts. Later designers of the PCI bus decided to go for level-triggered interrupts. Either way, the programmer who is writing the interrupt handler (you!) needs to carefully handle the device(s) assigned to an interrupt to discover if it actually needs servicing: never assume that the interrupt came from 'your' device, and always assume that there may be other devices hanging on the same interrupt.

One more issue to do with interrupts is the concept of a "spurious" interrupt. If a PIC sees an interrupt, it will immediately pass it on to the CPU - which may have interrupts disabled. By the time the CPU gets back to the PIC, the interrupt may have gone away (maybe the edge-triggered pulse was too short, or maybe the level-triggered interrupt simply stopped, or maybe electrical noise made a phantom spike). But the PIC is committed: it has to tell the CPU something. What it does is signal IRQ 7, its lowest-priority interrupt. The IRQ 7 handler therefore has to allow for the possibility that "its" associated device may not be the cause of the interrupt: examine the PIC to see if there really was an interrupt, and simply ignore it if there wasn't. (Also, see Programming the Slave below.)

Adding more Interrupts - the IBM PC/AT

Programming the Master

Programming the Slave

Initialisation

Servicing an Interrupt

The new APIC - Advanced Programmable Interrupt Controller

Legacy Mode