
Intel 8086


The 8086 (officially called iAPX 86) is a 16-bit microprocessor chip designed by Intel in 1978, which gave rise to the x86 architecture. Shortly afterwards the Intel 8088 was introduced, with an external 8-bit bus that allowed the use of cheaper support chips. The 8086 was based on the design of the 8080 and 8085 (it was [source compatible] with the 8080), with a similar register set expanded to 16 bits. The Bus Interface Unit fed the instruction stream to the Execution Unit through a 6-byte prefetch queue, so fetch and execution were concurrent - a primitive form of pipelining (8086 instructions varied from 1 to 6 bytes).
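
A rough sketch of the idea (plain C written for this article, with invented names and no attempt at modelling real bus timing): the Bus Interface Unit keeps topping up a 6-byte queue from memory while the Execution Unit drains instruction bytes from its front, so the two units work concurrently.

 #include <stdint.h>
 #include <stdio.h>
 #include <string.h>
 
 /* Illustrative model only: the BIU fills a 6-byte prefetch queue,
    the EU consumes instruction bytes from the front of it. */
 enum { QUEUE_SIZE = 6 };
 
 typedef struct {
     uint8_t bytes[QUEUE_SIZE];
     int     count;
 } prefetch_queue;
 
 /* BIU side: fetch the next code byte whenever the queue has room. */
 static void biu_fill(prefetch_queue *q, const uint8_t *code,
                      int *fetch_ip, int code_len)
 {
     while (q->count < QUEUE_SIZE && *fetch_ip < code_len)
         q->bytes[q->count++] = code[(*fetch_ip)++];
 }
 
 /* EU side: take one instruction byte from the front of the queue. */
 static uint8_t eu_next_byte(prefetch_queue *q)
 {
     uint8_t b = q->bytes[0];
     memmove(q->bytes, q->bytes + 1, (size_t)(--q->count));
     return b;
 }
 
 int main(void)
 {
     /* MOV AX,1234h ; INC AX ; NOP */
     uint8_t code[] = { 0xB8, 0x34, 0x12, 0x40, 0x90 };
     prefetch_queue q = {0};
     int fetch_ip = 0;
 
     biu_fill(&q, code, &fetch_ip, (int)sizeof code);
     while (q.count > 0) {
         printf("EU consumes byte %02X\n", (unsigned)eu_next_byte(&q));
         biu_fill(&q, code, &fetch_ip, (int)sizeof code); /* fetch overlaps execution */
     }
     return 0;
 }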

It featured four 16-bit general registers, which could also be accessed as eight 8-bit registers, and four 16-bit index registers (including the [stack pointer]). The data registers were often used implicitly by instructions, complicating register allocation for temporary values. It provided 64K 8-bit (or 32K 16-bit) I/O ports and fixed vectored interrupts. Most instructions could access only one memory location, so one operand had to be a register, and the result was stored in one of the operands.
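
As an illustration (a hypothetical C model written for this article, assuming a little-endian host, which matches the 8086's own byte order), one general register and its two 8-bit halves can be pictured like this, along with the "result overwrites one operand" style of instruction:

 #include <stdint.h>
 #include <stdio.h>
 
 /* One of the four general registers (AX, BX, CX, DX); each is also
    addressable as a high and a low 8-bit half, e.g. AH and AL. */
 typedef union {
     uint16_t x;                 /* full 16-bit register, e.g. AX */
     struct { uint8_t l, h; } b; /* 8-bit halves, e.g. AL and AH  */
 } reg16;
 
 int main(void)
 {
     reg16 ax = { .x = 0x1234 };
     printf("AH=%02X AL=%02X\n", (unsigned)ax.b.h, (unsigned)ax.b.l); /* AH=12 AL=34 */
 
     /* Two-operand style: the result replaces one of the operands,
        as in "ADD AX, 5", which leaves AX holding AX + 5. */
     ax.x += 5;
     printf("AX=%04X\n", (unsigned)ax.x); /* AX=1239 */
     return 0;
 }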

There were also four segment registers that could be set from the index registers. The segment registers allowed the CPU to access one megabyte of memory in an odd way. Rather than just supplying the missing high-order address bits, as in most segmented processors, the 8086 shifted the segment register left 4 bits and added it to the 16-bit offset. As a result segments overlapped, which most people consider to have been poor design. Although this was largely acceptable (and even useful) for assembly language, where control of the segments was complete, it caused confusion in languages which make heavy use of pointers (such as C). It made efficient representation of pointers difficult, and made it possible to have two pointers with different values pointing to the same location. Worse, the scheme made expanding the address space to more than one megabyte difficult; the address space was eventually expanded by changing the addressing scheme in the 80286.
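
The arithmetic is simple to show. The sketch below (plain C written for this article, not taken from any 8086 toolchain) computes the 20-bit physical address from a segment:offset pair and demonstrates two different pairs naming the same byte - the aliasing problem described above:

 #include <stdint.h>
 #include <stdio.h>
 
 /* 8086 address translation: the 16-bit segment is shifted left 4 bits
    and added to the 16-bit offset, giving a 20-bit physical address
    (one megabyte of addressable memory). */
 static uint32_t physical_address(uint16_t segment, uint16_t offset)
 {
     return (((uint32_t)segment << 4) + offset) & 0xFFFFF;
 }
 
 int main(void)
 {
     /* Two different segment:offset pairs that refer to the same byte. */
     printf("%05X\n", (unsigned)physical_address(0x1234, 0x0010)); /* 12350 */
     printf("%05X\n", (unsigned)physical_address(0x1235, 0x0000)); /* 12350 */
     return 0;
 }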

The processor ran at clock speeds between 4.77 MHz (the speed of the 8088 in the original IBM PC) and 10 MHz.

Typical execution times in cycles (estimates):

EA: time to compute effective address, ranging from 5 to 12 cycles

The 8086 was cloned by the NEC V20 and NEC V30. There were mathematical coprocessors for the 8086: the [Intel 8087], ... What were the Weitek coprocessors called?



Why did IBM choose the 8086 series when most of the alternatives were so much better? IBM's own engineers reportedly wanted to use the Motorola 68000, which was later used in the little-remembered IBM Instruments 9000 Laboratory Computer, but IBM already had the rights to manufacture the 8086, having traded Intel the rights to its bubble memory designs, and was already using 8086s in the IBM Displaywriter word processor. Another factor was the 8-bit Intel 8088 version, which could use existing Intel 8085-type support components and allowed the computer to be based on a modified 8085 design; 68000 components were not as widely available, though the 68000 could use Motorola 6800 components to an extent. Intel bubble memory was on the market for a while, but faded away as better and cheaper memory technologies arrived.


Article based on [Intel 8086] at [FOLDOC], used with permission.

