kits, definitions, prices...
Caution: long-winded blather follows. Read at your own risk. :-)
On Wed, 7 Oct 1998, Doug Yowza wrote:
]
] On Wed, 7 Oct 1998, Bill Yakowenko wrote:
] > I learned the definition of microprocessor to be a single-chip CPU, and
] > a microcomputer to be a computer based on a microprocessor. But I never
] > questioned it. Why is that a useful definition?
] [...]
] > I have my own ideas about which things are micros and which are not, but
] > in retrospect, the definition that I was taught is not a useful one; it
] > does not classify things into categories that I can use to any benefit.
]
] Exactly. Naming things is just a way to classify them, to separate one
] kind of thing from another. So, the first time a term like
] "microprocessor" is coined is due to necessity -- a new thing came into
] being and it needed a name. Applying that name to things that follow is a
] convenient way to establish a relationship to the original thing.
Must it be things that *follow*? If another company had independently
developed a single-chip CPU before Intel (say they killed the project
because some marketer popped in from the future and convinced them that
Wintel would eventually kill it anyway), could you not bring yourself to
call that thing a microprocessor? Isn't it the thing's physical
properties that should matter, and not its intellectual ancestry? You
are not a patent lawyer on the side, are you? :-)
] As far as I know, the name "microprocessor" was first given to Intel's
] 4004. It was just shorthand for "this new thing that has a high level of
] logic integration that gives you a bunch of stuff needed to build a
] general purpose computer." Calling anything else a microprocessor, to me,
] is just a way of saying it's a CPU that is in the same class as the Intel
] 4004: general purpose, highly integrated, commercially available, etc.
Actually, it is not clear to me now whether you consider the defining
characteristic of a microprocessor to be that it descended from the 4004,
or that it is single-chip.
It looks like the only characteristic that a multi-chip implementation
partially breaks here is "highly integrated". Then again, a two-chip
implementation is not necessarily much less integrated than a single-chip one.
Now I wonder why this level of integration matters. Is there something
that a two-chip implementation can't do and a single-chip one can? Did
people really care about that level of space savings to the extent that
it was worth introducing a new word into the language?
Maybe the significant bit was that the entire CPU was in specialized VLSI,
and not made from parts so small and generic as to be re-wirable into
something else altogether (i.e., SSI/MSI). I mean, being reconfigurable
is not in itself a bad thing, but if you are using parts that are so
small and generic, maybe you have not achieved the level of integration
that brings big cost benefits. If we count the non-genericity of the
CPU chips as being the relevant feature of a microprocessor, then it is
just an accident of history that the first things we called microprocessors
were single-chip.
Suddenly I like that definition a whole lot more than the one I grew up
with. I generally dislike it when people try to redefine words for
their own (usually political) purposes. But in this case, it seems
we don't have a widely agreed-upon definition to begin with, so I don't
feel too bad about changing sides.
] If somebody made a two-chip CPU that had all of the other characteristics
] of the Intel 4004, you'd have trouble calling it a microprocessor, because
] it would be missing something. Maybe you'd call it a two-chip
] microprocessor. If it were special purpose instead of general purpose,
] maybe you'd call it a special purpose microprocessor. But once you make
] something different enough from the 4004 that you need to add a bunch of
] qualifiers, you might as well just call it a CPU or come up with a new
] name.
Actually, until ten minutes ago, I would have had trouble calling the
two-chip thing a microprocessor because it broke the definition I learned
as a kid: single-chip. But even the characteristic of being similar to a
4004 is only relevant to the extent that you are careful in choosing in
which ways it has to be similar. The first 4004s were probably in ceramic; should
that be part of the definition? Probably not. Why did we care about the
4004? Is being implemented on a single chip really the important bit? Or
was it cost, ease of use, small size, ...? A two-chip implementation
could very well have been important to us for exactly the same reasons
that the 4004 was.
I can see that explaining why computers suddenly became cheap and
ubiquitous could be useful. But the "single-chip" definition of a
microprocessor is only circumstantially related to that. If the 8080
and its cousins had actually been multi-chip implementations, things
would have progressed exactly the same way. (And pigs with wings
*would* be eagles, dammit! :-) )
So, when is it useful to distinguish single-chip from, say, dual-chip?
What kind of practical decision would someone make based on that?
] -- Doug
Cheers,
Bill.
BTW, was the 4004 really the first in the Intel series of 4004, 4040,
8008, and 8080? I seem to remember that something in this sequence
actually happened in non-ascending order, like maybe the 8008 preceded
the 4004, or the 4040 came out last, or ...? It could make sense; you
could imagine scaling back an existing design to penetrate some niche
market with a cheaper part.