CPU design at the gate level

From: Richard Erlacher <edick_at_idcomm.com>
Date: Wed Oct 31 02:31:53 2001

Well, there's a wide gap between core memory and the designs of the era (pre
'72) when it was common, and the '80s, when fully synchronous design became the
order of the day.

Another thing to keep in mind is that most CPUs of yesteryear were not
integrated circuits but, rather, boards full of them. I remember looking at a
16-bit CPU from some Florida company that was being custom-built by my then
local circuit house; it occupied a 22x32" panel (very large for the time) of
5-layer circuit board, all in STTL. The outer layers were intended not so much
for containing the RFI as for dissipating the heat: the entire long outer
edges of the board were mounted to a 3-1/2" square heatsink that drew heat
from the board, with fans specially provided to manage this board's
dissipation requirement. It was quite a thing to behold!

see additional remarks below, plz.

Dick

----- Original Message -----
From: "Ben Franchuk" <bfranchuk_at_jetnet.ab.ca>
To: <classiccmp_at_classiccmp.org>
Sent: Tuesday, October 30, 2001 10:13 PM
Subject: Re: CPU design at the gate level


> >What's unfortunate, at least from where I sit, is that though some sources
> >give you a schematic or an HDL of a CPU, they don't tell you WHY the choices
> >made in its design were made. Such decisions are normally driven by
> >requirements, be it for performance, specific addressing modes, chip size,
> >or whatever. It seems we never see light shed on such matters.
>
> Sometimes you can find this information on the web. Now that many of the
> older computers are of historical value people are writing things down.
>
> >One caution is certainly warranted, however. Fully synchronous design became
> >the default method of designing circuits of any substance in the mid-to-late
> >'80s. One result, of course, was that signal races were easily avoided, and,
> >with the use of pipelining, it allowed for the acceleration of some
> >processes at the cost of increased latency. The use of fully synchronous
> >design drove up CPU cost, however, and was not an automatically assumed
> >strategy in the early '70s, so you've got to consider WHEN a design was
> >specified before making any assumptions about why things were done in a
> >given way.
>
> I thought that that was due more to the fact that (core) memory was
> asynchronous, with a wide range of cycle times, as were I/O transfers.
> Only with memory being in the same box as the cpu does a more
> synchronous system make sense.
>
> >Classic CPUs were mostly NOT fully synchronous, as fully synchronous design
> >required the use of costlier, faster logic families throughout a design when
> >that wasn't necessarily warranted. Today's FPGA and CPLD devices, when used
> >to host a classic CPU design, eliminate the justifications for asynchronous
> >design strategies that were popular from the early '70s to the late '80s.
> >Their use essentially requires that the design be synchronous, not only
> >because signal distribution/routing resources are limited, but because
> >propagation delays are so different from what they were in the original
> >discrete version.
>
> What is so different? A F/F is still a F/F, a gate is still a gate. It is
> only that routing delays are an unknown, so you can't use logic that
> requires timing delays or one-shots. It is only that the programs can't
> discover when logic can or cannot change like a designer can, but must use
> worst-case assumptions. It is only in the case when you have a single
> clock that timing calculations are the most accurate.
>
Not all CPU chips of yesteryear were even built with clocked logic. If you look
at the ones with a single clock cycle for a single bus cycle, e.g. the 6800 et
al., you'll find that the clock was usable as a steering member and a timing
reference, but not necessarily as a clock to a set of registers. I'd say
flip-flops of the R/S and transparent-latch sort were much more common than
those used for counting. In fact, I recently revisited the 650x core and found
that it could, and probably should, be built with no clocked flip-flops at
all, using the ALU to increment the PC and stack pointer as well as operating
on the data registers. That's what reduced the poundage of silicon in the 650x
series chips, which, aside from their very elegant instruction set, is what
bought them their market share.
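The idea of letting the one shared ALU do all the arithmetic, including the PC
and stack-pointer bumps, can be sketched in a few lines of toy Python. To be
clear, the register names, widths, and method names below are my own
invention for illustration, not the actual 650x internals:

```python
# Toy sketch: one ALU serves the whole datapath, so the PC increment and
# stack-pointer decrement are just extra passes through the same adder
# instead of dedicated counter logic on each register.

def alu(a, b, carry_in=0, width=16):
    """The single shared adder; wraps at the given bus width."""
    return (a + b + carry_in) & ((1 << width) - 1)

class Datapath:
    def __init__(self):
        self.pc = 0x0000   # program counter, held in transparent latches
        self.sp = 0x01FF   # stack pointer (6502-style page-one stack)

    def fetch_step(self):
        # Incrementing the PC is just "PC + 0 with carry-in = 1"
        # routed through the shared ALU.
        self.pc = alu(self.pc, 0, carry_in=1)

    def push_step(self):
        # Decrement reuses the same adder: add the all-ones pattern.
        self.sp = alu(self.sp, 0xFFFF)

dp = Datapath()
dp.fetch_step()   # pc: 0x0000 -> 0x0001
dp.push_step()    # sp: 0x01FF -> 0x01FE
```

The saving is in silicon: every register that borrows the ALU for its
increment sheds its own carry chain, at the cost of extra bus cycles to get
each value to and from the ALU.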
>
What's different is that the style of design that was used back when the
classics were being worked out was so different from what's done today. Back
then, fully synchronous design meant that all the devices used were of the
same technology, and that meant cost impacts whenever a fully synchronous
rather than a locally asynchronous, globally synchronized structure was used,
since it meant that a NAND gate had to have two dual-rank registered inputs
and a registered output, which immediately raised the cost of that 30-cent
gate to $4.80. Back then, arrays were a sea of gates, and things changed
depending on which gate of the 4000-6000 identical NANDs in the array you were
using. Today, a small array consists of a putative 100K gates, of which one's
lucky to be able to get the equivalent of 10K gates in actual practice. Of
course, they count a 3-input gate as two gates and a 4-input gate as three,
and a D-flop as over a dozen, rather than the 6 it should really use. Then
there's the LUT, which, to the marketing department, represents a lot of
logic, even though you have to use the whole thing just to make a single
5-input AND. Consider how much of the marketing department's resources you
consume with what would have been a 74S133.
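The LUT complaint is easy to make concrete. A 5-input LUT stores a complete
2^5 = 32-entry truth table, and a 5-input AND fills the whole LUT while
setting exactly one entry. A quick sketch in Python (pure illustration, no
particular FPGA family or vendor tool assumed):

```python
# LUT granularity: a LUT is programmed with the full truth table of its
# function, so even a function with a single '1' output consumes the
# whole table.
from itertools import product

def lut_contents(fn, n_inputs):
    """Truth table an n-input LUT would be programmed with for fn."""
    return [fn(bits) for bits in product((0, 1), repeat=n_inputs)]

and5 = lut_contents(lambda bits: int(all(bits)), 5)
print(len(and5), sum(and5))  # 32 entries, exactly 1 of them set
```

However the marketing department counts it, that one AND occupies the same
resource as any other 5-input function, which is why "100K gates" of array
shrinks toward 10K gates of usable logic in practice.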
>
> However, I suspect most CPU design starts with a clean sheet of paper,
> laying out goals and basic design parameters. A good block diagram often
> can tell you how complex your system is. While gates are important, the
> quantity and packaging of the gates define just how your system can be
> laid out. Only after the instruction set is defined do you look at the
> logic needed to produce the Computer System, and once you lay things out
> you have a good idea of what instructions are needed. Of course everything
> gets revised again and again.
>
Well, if you put pencil to paper before the instruction set is defined, and
before the requirements are firmly defined, you're wasting time, and, sadly, I
doubt that many CPU designs start on a clean sheet of paper these days. They
certainly didn't back in the "old days." There's always the political baggage.
>
> See http://www.ulib.org/webRoot/Books/Saving_Bell_Books for some
> interesting reading. Also, "CMOS Circuit Design, Layout, and Simulation"
> (ISBN 0-7803-3416-7) is very good reading for CPU design at the real gate
> level.

> Ben Franchuk.
> --
> Standard Disclaimer : 97% speculation 2% bad grammar 1% facts.
> "Pre-historic Cpu's" http://www.jetnet.ab.ca/users/bfranchuk
> Now with schematics.
>
>
Received on Wed Oct 31 2001 - 02:31:53 GMT

This archive was generated by hypermail 2.3.0 : Fri Oct 10 2014 - 23:34:22 BST