>What's unfortunate, at least from where I sit, is that though some sources give
>you a schematic or an HDL of a CPU, they don't tell you WHY the choices in its
>design were made. Such decisions are normally driven by requirements, be it
>for performance, or for specific addressing modes, chip size, or whatever.
>It seems we never see light shed on such matters.
Sometimes you can find this information on the web. Now that many of the
older computers are of historical value, people are writing things down.
>One caution is certainly warranted, however. Fully synchronous design became
>the default method of designing circuits of any substance in the mid-late '80's.
>One result, of course, was that signal races were easily avoided, and, with the
>use of pipelining, it allowed for the acceleration of some processes at the cost
>of increased latency. The use of fully synchronous design drove up CPU cost,
>however, and was not an automatically assumed strategy in the early '70's, so
>you've got to consider WHEN a design was specified before making any assumptions
>about why things were done in a given way.
I thought that was due more to the fact that (core) memory was asynchronous,
with a wide range of cycle times, as were I/O transfers. Only with memory
being in the same box as the CPU does a more synchronous system make sense.
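As a rough sketch of what I mean (Python, with made-up timings, not modelled
on any particular machine): an asynchronous memory hands back a "ready"
whenever this particular cycle happens to finish, while memory in the same
box is simply defined to be valid after a fixed number of clocks.

# Conceptual model only: an asynchronous memory (core, or a slow I/O device)
# completes after some variable time and signals "ready"; a synchronous memory
# is assumed valid after a fixed number of clock ticks.  All numbers invented.

import random

CLOCK_NS = 100  # hypothetical CPU clock period in nanoseconds

def async_read(address):
    """Handshake style: wait however long this particular cycle takes."""
    cycle_ns = random.choice([750, 900, 1200, 2000])  # wide range of cycle times
    elapsed = 0
    ready = False
    while not ready:              # CPU idles (or stalls) until memory says so
        elapsed += CLOCK_NS
        ready = elapsed >= cycle_ns
    return f"data[{address}]", elapsed

def sync_read(address, wait_states=2):
    """Clocked style: data is defined valid after a fixed number of ticks."""
    elapsed = (1 + wait_states) * CLOCK_NS
    return f"data[{address}]", elapsed

if __name__ == "__main__":
    print(async_read(0o1234))   # elapsed time varies from run to run
    print(sync_read(0o1234))    # elapsed time is always the same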
>Classic CPU's were mostly NOT fully synchronous, as fully synchronous design
>required the use of costlier faster logic families throughout a design when that
>wasn't necessarily warranted. Today's FPGA and CPLD devices, when used to host
>a classic CPU design, eliminate the justifications for asynchronous design
>strategies that were popular in the early '70's - late '80's. Their use
>essentially requires the design be synchronous, not only because signal
>distribution/routing resources are limited, but because propagation delays are
>so different from what they were in the original discrete version.
What is so different? A F/F is still a F/F, a gate is still a gate. It is
only that routing delays are an unknown, so you can't use logic that requires
timing delays or one-shots. It is only that the programs can't discover when
logic can or cannot change the way a designer can, but must use worst-case
assumptions. It is only when you have a single clock that timing calculations
are the most accurate.
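A back-of-the-envelope version of that worst-case calculation, in Python,
with delay numbers invented for illustration (not taken from any datasheet):

# A single-clock timing analysis boils down to adding up the worst-case
# delays along the longest register-to-register path; that sets the clock.

CLK_TO_Q_NS = 1.5   # flip-flop clock-to-output delay
SETUP_NS    = 1.0   # flip-flop setup time at the destination
GATE_NS     = [2.0, 2.0, 1.8, 2.2]   # logic levels on the critical path
ROUTING_NS  = [1.1, 0.9, 1.4, 2.3]   # routing delays the designer can't predict;
                                     # the tools must assume the worst case

critical_path_ns = CLK_TO_Q_NS + sum(GATE_NS) + sum(ROUTING_NS) + SETUP_NS
f_max_mhz = 1000.0 / critical_path_ns

print(f"worst-case critical path: {critical_path_ns:.1f} ns")
print(f"maximum single clock:     {f_max_mhz:.1f} MHz")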
However, I suspect most CPU design starts with a clean sheet of paper that
lays out goals and basic design parameters. A good block diagram can often
tell you how complex your system is. While gates are important, the quantity
and packaging of the gates define just how your system can be laid out. Only
after the instruction set is defined do you look at the logic needed to
produce the computer system, and once you lay things out you have a good
idea of what instructions are needed. Of course everything gets revised
again and again.
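As a toy illustration of "define the instruction set, then look at the logic
needed" (Python; the four opcodes and the 8-bit encoding are entirely made up):

# A made-up 8-bit instruction format: top 2 bits are the opcode, low 6 bits
# the operand.  Once a table like this exists you can start counting decode
# logic, register ports, etc. -- and revising the table when the logic gets ugly.

OPCODES = {0b00: "LOAD", 0b01: "STORE", 0b10: "ADD", 0b11: "JUMP"}

def decode(word):
    """Split an 8-bit instruction word into (mnemonic, operand)."""
    opcode  = (word >> 6) & 0b11
    operand = word & 0b111111
    return OPCODES[opcode], operand

if __name__ == "__main__":
    program = [0b00_000101, 0b10_000001, 0b01_000101, 0b11_000000]
    for word in program:
        print(decode(word))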
See http://www.ulib.org/webRoot/Books/Saving_Bell_Books for some interesting
reading.
Also "CMOS circuit design,layout and simulation" ISBN 0-7803-3416-7 is
very good reading for
CPU design at the real gate level.
Ben Franchuk.
--
Standard Disclaimer : 97% speculation 2% bad grammar 1% facts.
"Pre-historic Cpu's" http://www.jetnet.ab.ca/users/bfranchuk
Now with schematics.
Received on Tue Oct 30 2001 - 23:13:06 GMT