TTL computing

From: Richard Erlacher <edick_at_idcomm.com>
Date: Wed Apr 17 02:34:11 2002

----- Original Message -----
From: "Tony Duell" <ard_at_p850ug1.demon.co.uk>
To: <classiccmp_at_classiccmp.org>
Sent: Tuesday, April 16, 2002 8:14 PM
Subject: Re: TTL computing


> >
> > This item is getting to be rather large...
>
> Does your mail software not have a way of deleting lines from the message
> you're replying to?
>
> [ARD's odd circuits for fun only]
>
> > I'm curious in what way doing this sort of thing in discrete logic has over
>
> Because I find it more enjoyable to build it that way. And it's easier to
> see what's going on and to test it. As I said, they're for my amusement only.
>
> > doing it in programmable hardware. (Take a look at the field-programmable
> > analog hardware offered at Lattice, for example.) As for the technique
> > itself, I remember reading that Siemens had some hardware that did this sort
> > of thing about 20 years ago, so there's plenty been written about it.
>
> Sure. I was actually doing this perhaps 15 years ago. The point is, I
> like to try things out for myself. I find things easier to understand
> with the physical circuit in front of me, 'scope or meter or whatever
> clipped onto it.
>
>
> > >
> > > >
> > > > I live and die on what's real. Engineers don't get paid for coming up with
> > > > things that SHOULD be. The things that exist only in the subjunctive don't
> > >
> > > Which implies, I guess, that you don't believe there will ever be any new
> > > ideas.
> > >
> > It doesn't imply that at all. It does suggest, however, that if you're trying
>
> Well, if engineers aren't going to come up with things that 'SHOULD be',
> then who is?
>
> > to exploit new thinking, you shouldn't be weighed down with archaic
> > implementation techniques.
>
> Why? Why does the implementation technique matter _at all_ ?
>
> >
> > If you have a concept you want to explore, you ought to be able to convince
> > yourself of its validity by thinking it through, then verify that validity with
> > extensive simulation (which you clearly don't believe useful, in spite of the
>
> Simulation is not going to convince me of anything. I see no reason to
> believe that the simulator has any relation to the real world at all...
>
Do you trust any software at all?
>
> > fact the rest of the world has accepted it, then implement it in a
>
> I know plenty of people who don't accept simulation either. Mostly those
> who've tried it and had a lot of problems as a result.
>
> > programmable hardware testbed because that doesn't require you to buy any
> > specific hardware, and then stress it in various ways to determine its
> > sensitivities. Building just one example of a circuit would not really prove
> > that the design is sound. It only proves that you can make it work ... once.
>
> Whereas simulating it doesn't prove it works at all...
>
There's plenty of room for disagreement there.
>
> And FWIW, 'testing' a circuit is not just applying power and seeing if it
> works. Even if it does work, I'd still be making a lot of measurements on
> it, and trying the effect of changes, just to see how marginal (or
> otherwise) it is.
>
I'd be interested in knowing how you check for "marginal." If it fails once
in 10^9 trials, it fails. Where do you draw the line, and how do you measure
it?
>
> > > > In a PROM, regardless of how many inputs you want, you have to program them
> > > > all, and there's no software that does it for you. That in itself is already
> > >
> > > Of course there's software to do that. In fact it's simpler than logic
> > > minimisation for a PAL. Here's the algorithm (inputs I(N), I(N-1), ...,
> > > I(1), I(0)):
> > >
> > > For I(N) = 0 to 1 do
> > >  For I(N-1) = 0 to 1 do
> > >   ...
> > >    For I(1) = 0 to 1 do
> > >     For I(0) = 0 to 1 do
> > >      address = I(N)*2^N + I(N-1)*2^(N-1) + ... + I(1)*2^1 + I(0)*2^0
> > >      bit = desired_boolean_function(I(N), I(N-1), ..., I(1), I(0))
> > >      program_PROM(address, bit)
> > >
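The loop above amounts to exhaustively evaluating the function over every input combination. A minimal Python sketch of the same idea follows; `majority` (a 3-input majority vote) is a placeholder I've chosen to stand in for `desired_boolean_function`, not anything from the thread:

```python
# Sketch of the PROM-bitmap algorithm quoted above: enumerate every input
# combination, evaluate the desired boolean function, and store the result
# at the corresponding address. The majority function is just a placeholder.
def majority(i2, i1, i0):
    return (i2 & i1) | (i1 & i0) | (i2 & i0)

def prom_bitmap(func, n_inputs):
    bitmap = [0] * (2 ** n_inputs)
    for address in range(2 ** n_inputs):
        # Unpack the address into individual input bits I(n-1) .. I(0)
        bits = [(address >> k) & 1 for k in reversed(range(n_inputs))]
        bitmap[address] = func(*bits)
    return bitmap

print(prom_bitmap(majority, 3))  # one output bit per PROM address
```

Running it prints the eight-entry truth table `[0, 0, 0, 1, 0, 1, 1, 1]`, which is exactly the bitmap you'd burn into a 8x1 PROM for that function.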
> > ... and just exactly WHAT does this do? (BTW, it's a mite confusing keeping
>
> It generates the bitmap for a PROM for an arbitrary combinatorial logic
> function, of course. Do you really not know how to do that? You just try
> all the input states, calculate the function, and store said result in
> the bitmap.
>
It looks like nonsense to me ... but perhaps it's just the notation. You keep
referring to an arbitrary combinatorial function, yet you've yet to lay one on
the table that is complex enough to use all the inputs yet exceed the
available product terms. You've never indicated what such a function might do.
It looks, also, like the example above would require only a single product
term. Putting things in practical terms, a highly flexible PLD has a large
number of inputs not because it needs many per function, but in order to make
several unrelated functions available from a single device. Clearly, if what
you want is a PROM, then you should use one, but I've yet to see a PROM that
would fit in a 24-pin package yet provide 42 address inputs. (Just kidding!)
In fact, I doubt very seriously that there are any PROMs with 42 address
inputs available today. If there are any, it's doubtful they'd be fast enough
to use for logic.
>
> Now admittedly this is going to take a ridiculous time for a large number
> of inputs (so long as to be impractical if not impossible! [1]). But most
> such functions (e.g. the multiplier example) have enough 'structure' that
> you wouldn't do it this way.
>
> [1] As a rough estimate, assuming you can calculate 10^6 terms per second
> (not impossible on a modern computer, I guess), it would take 50 days to
> work it out for a 42 input function!
>
and how long running RT-11?
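Whatever the machine, the footnote's arithmetic itself checks out, as a quick back-of-envelope script shows (the 10^6-evaluations-per-second rate is Tony's assumption, carried over):

```python
# Back-of-envelope check of the 42-input estimate quoted above.
terms = 2 ** 42                 # rows in a 42-input truth table
rate = 10 ** 6                  # assumed evaluations per second
days = terms / rate / 86400     # 86400 seconds per day
print(f"{days:.0f} days")       # roughly 51 days, as claimed
```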
>
> > the I's straight, and figuring out what, exactly, your ellipsis represents,
> > but, overall, it still looks like a single product term will do it. That's
> > not the sort of function that you said you'd need a PROM to implement.
>
> That depends on the definition of 'desired_boolean_function'. If that's
> complex, then a single product term most certainly won't do it.
>
>
> >
> > Simpler is a pretty nebulous concept, but I'm here to say that simpler means
> > you use the tools that are freely available to do the "dirty-work", including
>
> Oh, right. I thought you meant the chip itself was simpler. You certainly
> seemed to imply that in earlier messages.
>
Well, OK... simpler in its application, since it's normally a technology
applied to simple problems like small FSMs, decoders, and some random
logic.
>
> > translation from source, reduction, and production of the JEDEC file and/or
>
> That's only because somebody's written that software. In fact the
> software to fit a boolean equation into a PROM is conceptually simpler!
>
That may be why you don't see it out there.
>
> > on fusible-link devices) It means you don't have to write your own tools, and
>
> I'd rather trust tools I'd written myself as I know how I intended them
> to behave under all circumstances...
>
Maybe you'd like to look at the now nearly 30-year-old SPICE program. It was
originally written in Fortran, IIRC, and on an old DEC war-horse. In its
various incarnations it's a program that's taught in EE schools throughout the
world and widely used as a reference on how things really behave. It has
grown into a mixed-mode simulator, so you can simulate both digital and analog
functions, and the interactions between them. It supports transient, DC, and
AC analysis, with Monte Carlo to help with sensitivity analysis. If you need
precise models, nearly any vendor can provide them. Even models of common TTL
functions are included in many SPICE model libraries.
>
> > it means that you can, for the most part, rely on the resulting output. The
> > machine on which you accomplish these things is just a tool, like a wrench.
> > It's not a deity, and doesn't need special consideration. Like a hammer, when
> > it fails to meet my needs, I repair/replace the offending component, or chuck
> > the whole thing and replace it, just as I would with a screwdriver that no
> > longer works well. If I can't afford a new tool I need, I consider a 2nd-hand
>
> As an aside, any true engineer (especially one working on precision
> machinery) had better know how to regrind a screwdriver blade!
> Considering how many screwdrivers come from the factory with blades
> unsuitable for certain screws.
>
It certainly gives me pause to know that there are still fellows out there who
refuse to use a socket wrench, though.
>
> > You're a free person and are allowed to do pretty much whatever you want, but,
> > IF you consider your mental processes to be of any value at all, aside from
> > your own amusement, and IF you think you can come up with something that's of
> > use, WHY would you encumber yourself with 1970's tools?
>
> Because, as I've said so many times, I'll use Computer _Aided_ Design
> tools when they actually aid me to design. Rather than hold me back...
>
> > It's fun to play with those old-timers, and they do, after all, still compute
>
> Yes, which is why I'm on classiccmp. To be honest, I am here to get away
> from all the modern junk!
>
The old-timers were "modern junk" once.
>
> > > To turn it round. If I design the PCB for PROM + latch then I know that
> > > no matter what logic function I end up needing, it'll fit because the
> > > PROM can generate any n-input logic function. But if I chose the PAL,
> > > there's a chance the function I want won't fit into the PAL.
> > >
> > What? Are you suggesting that you'd design a PCB without knowing what logic
> > functions you need? You've already said that it's not a given that such a
>
> You seemed to imply you'd lay out the board before designing the PLD...
>
It's done all the time. CPLDs don't care what pins you assign for what.
Routing and timing are deterministic. You can assign pinouts before designing
the first gate of logic with nearly complete certainty that your logic will
fit just as you desire, even though you don't desire it yet.
>
> > ROM even exists. I'm still curious what sort of a logic function you think
> > you might encounter that you couldn't implement, including latches, etc, in a
> > 42VA12 or something on that order. Remember, it has both arrays programmable,
> > and it has 64 product terms per sum term. (look in the "Signetics
> > Programmable Logic Devices" Data Handbook, from Philips, 1990)
>
> My guess is that some arithmetic functions (parallel multiplier, divider,
> something like that) would not fit.
>
It might not fit for the simple reason that there aren't enough I/Os. A
similar CPLD would probably handle that sort of task, but not an SPLD like
this. It's the number of product terms that puzzles me. I've run into
problems with the software tools wherein the tool has balked at the way in
which I specified an <if...then...else> clause in an FSM description, because
it initially produced too many product terms for a given sum. Careful
restatement of the description cleared that up. I have also run into a few
cases in a 22V10 where I had to move a given equation to a different macrocell
with more product terms because I was using XOR syntax to describe a counter.
That generates a lot of product terms that are subsequently reduced. I've
really never just simply run out of product terms, except on a 20RA10, where
there are only 4 of them per sum.
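The XOR-counter blowup is easy to illustrate by brute force. Take the MSB of a 4-bit binary counter: its next state is Q3 XOR (Q2·Q1·Q0), just two terms when the macrocell has an XOR gate, but the canonical sum-of-products that a plain AND/OR array must hold has eight minterms. This toy enumeration is mine, not from the thread:

```python
from itertools import product

# Next state of the MSB of a 4-bit binary counter, written with XOR:
# only two "terms" if the device provides an XOR gate in the macrocell.
def msb_next(q3, q2, q1, q0):
    return q3 ^ (q2 & q1 & q0)

# Expand to canonical sum-of-products by listing every minterm where the
# function is 1 -- what the AND/OR array holds before any minimisation.
minterms = [bits for bits in product([0, 1], repeat=4) if msb_next(*bits)]
print(len(minterms))  # 8 canonical product terms for the XOR form's 2
```

Minimisation reduces that, but XOR-heavy functions never reduce all the way back down, which is exactly why the equation had to move to a macrocell with more product terms.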
>
> > > I've not looked at the device, but how about an arbitrary set of 12 bits
> > > (to be decided at program-time) from the product of a 15*15 parallel
> > > multiplier? (I'm assuming that if I want 12 outputs, then I have 30 pins
> > > left for inputs -- if I have more, then just increase the input word size,
> > > please :-)) Large adder arrays are notoriously difficult to fit into PALs
> > > (there were even some PALs with XOR terms -- the 16X4, etc -- to help with
> > > this). A parallel divider (produces the quotient and remainder in one
> > > clock cycle) is likely to be even worse.
> > >
> > How many ROMs of what size would it take to do what you're suggesting? The
>
> 1 4-terabit one, presumably :-)
>
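Scaled down to something buildable, the multiplier-in-a-PROM idea is trivial to tabulate: a 4x4 multiplier fits a 256-word x 8-bit PROM, with the address formed by concatenating the two operands and the data being their product. The sizes here are my choice for illustration, not from the thread:

```python
# A 4x4 parallel multiplier as a 256 x 8-bit PROM bitmap: the address
# is (a << 4) | b and each stored word is the product a*b.
def multiplier_prom():
    prom = [0] * 256
    for a in range(16):
        for b in range(16):
            prom[(a << 4) | b] = a * b
    return prom

prom = multiplier_prom()
print(prom[(7 << 4) | 9])  # looking up 7 * 9 returns 63
```

The 15x15 version is the same loop with 30 address bits; it's the 2^30-word device, not the algorithm, that's the problem.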
> > > > > > > Otherwise, I'll propose the following lossless compression system:
> > > > > > >
> > > > > > > 1) Read <2^n> bits from a file.
> > > > > > >
> > > > > > > 2) Regard them as the 'truth table' for an arbitrary <n> input,
> > > > > > > one output, logic function (this step is OK)
> > > > > > >
> > > > > > > 3) Turn that function into the equivalent 'PAL fuse map' (which
> > > > > > > you claim is always possible; moreover, it takes fewer (than 2^n)
> > > > > > > bits to specify this).
> > > > > > >
> > > > > > > 4) I now write out this smaller number of bits to the output file.
> > > > > > >
> > > > > > > If step (3) is always possible, then we have a compression system
> > > > > > > that reduces the size of all possible input files. Which is
> > > > > > > clearly impossible!
> > > > > > >
> > > > > > > > > > that too, but the PAL does it with fewer fuses.
> > > > > > >
> > > > > > > > > And that should tell you something is 'wrong'.
> > > > > > >
> > > > > > Not exactly, since the PROM requires you to represent all the states
> > > > > > of the inputs, while the PAL allows you to use only the ones that
> > > > > > are relevant.
> > > > >
> > > > > Do you seriously believe that the compression scheme I've proposed can
> > > > > always work? Because if you do, I've got a bridge for sale!
> > > > >
> > > > You can implement functions that don't do what you want. I'd be
> > > > surprised if you haven't done that.
> > >
> > > Eh? Would you mind explaining what the heck you mean here?
> > >
> > I'm saying that if you design a function that doesn't work, that doesn't mean
> > you can't implement that function. You shouldn't expect a compression scheme
> > that doesn't work to work in programmable logic.
>
> This proves you've not understood one word of what I've been saying. The
> fact that the compression scheme doesn't work has nothing to do with
> programmable logic. Rather, it shows that there _must_ be functions that
> will fit into the PROM that won't fit into the PAL. Because if that was
> not the case, the compression scheme would work. And that is trivial to
> disprove.
>
Well, you're the one who said it wouldn't work ... I've been saying right
along that if you need a PROM, you should use one.
>
> > >
> > > > >
> > > > Actually, I haven't given this example a moment's thought.
> > >
> > > Ah, right. Probably because it proves there are functions you can put
> > > into a PROM that you can't put into a PAL of smaller size (same number of
> > > ins and outs, but fewer 'fuses').
> > >
> > No... only because you're looking for things that fit PROMs with external
> > registers, while you're not considering what sorts of functions it is that you
>
> There are no external registers. I am considering only purely
> combinatorial functions for the moment.
>
> > > Let me go through it again. Suppose, to keep things simple, we just have
> > > a 20 input, one output PAL (say a 20V8, but only using one output). This
> > > takes a lot less than 1 megabit to program it, right?
> > >
> > > So I propose the following.
> > >
> > > 1) Read in one megabit of data.
> > >
> > > 2) Regard those 2^20 bits as the outputs for a 20 input boolean function.
> > > If you like, think of burning those 2^20 bits into a 1 megabit PROM, and
> > > figuring out just what boolean function it now generates.
> > >
> > Have you got a part number for a 1 Mbit x1 PROM?
>
> Sure.. A 128K byte EPROM (What's that? 27010 or something?) and a '151
> mux. OK, you don't like TTL, so program an 8 input mux into any PLD with
> 11 inputs, one output, and 8 product terms.
>
> Anyway, the fact that you can't find a 1Mbit PROM is irrelevant to the
> argument. All we need to show is that a 1Mbit PROM _could_ exist and
> would have the desired property of implementing any 20 input logic
> function. And that there's a 1-1 mapping between logic functions and
> bitmaps, actually (which there is in this case).
>
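The EPROM-plus-'151-mux trick above is just an addressing split: the top 17 of the 20 inputs select a byte in the 128K-byte EPROM, and the low 3 select one of its bits through the 8-input mux. A sketch of the lookup (the bit ordering, with the mux on the three low-order address lines, is my assumption):

```python
# Read "bit at 20-bit address" out of a byte-wide 128K EPROM plus an
# 8-input mux: A19..A3 address the byte, A2..A0 drive the '151 select.
def read_bit(eprom, address20):
    byte_addr = address20 >> 3        # 17 high bits -> EPROM address
    bit_sel = address20 & 0x7         # 3 low bits  -> mux select
    return (eprom[byte_addr] >> bit_sel) & 1

eprom = [0] * (128 * 1024)            # 2^17 bytes = 2^20 bits
eprom[0x1ABCD] = 0b0100_0000          # set bit 6 of one byte
print(read_bit(eprom, (0x1ABCD << 3) | 6))  # prints 1
```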

>
> > >
> > > 3) Generate the PAL fuse map that generates the same function in a 20V8.
> > > By the above assumptions, this is less than 1 megabit in size.
> > >
> > I'm not sure where this is supposed to lead. Normally one starts with a
>
> That is obvious!
>
> The whole point is, I am now looking for a way to express _any_ 20 input
> combinatorial function using fewer than 2^20 bits. If I can find a way to
> do this, then I can replace the PROM with this 'smaller' device _always_.
> And so I can transform the 2^20 bits from the original file into a
> smaller number of bits. And that process is reversible. That's _exactly_
> what I'd want for a 'perfect' compression scheme.
>
Why all the hand-waving and generalities? How about a REAL and complete
function that can't be expressed in the resources of a 42VA12, which lives in
a 24-pin skinnydip? (In order to gain some inputs, you can give up some
outputs.) Let's not fiddle with what's POSSIBLE, just what's real ...
just a real-world requirement that is likely to come up in a logic design
problem. It's dimensioned 42x105x12, so there are nominally 12 sets of 42x105
fuses, not all of which are used for logic.
>
> > > This process is reversible (in that given a PAL fuse map we can generate
> > > the boolean function from it (this is what a PAL does in hardware :-)),
> > > and thus can generate the PROM bitmap to do the same thing. And thus can
> > > recover the original 2^20 bits.
> > >
> > You're putting too much emphasis on the fuse map, which nobody sees, and not
>
> That's because I want to
>
> a) count how many possible ways there are of programming the PAL (as
> compared to the PROM)
>
> b) I want to regard the PAL program as an ordered set of bits.
>
> The fuse map is the obvious way to do both of these.
>
> > on the relationship between the inputs and output, though one certainly
> > exists.
> > >
> > > However, it's also clear that there are many more PROM bitmaps than PAL
> > > fuse maps. How can we then have a 1-1 mapping between 2 finite sets of
> > > different sizes? Obviously we can't.
> > >
> > And what, exactly does that prove?
>
> 2 things: 1) the compression scheme can't work, because
>
> 2) There will be sets of 2^20 bits that correspond (when burnt into the
> PROM) to combinatorial functions that can't be generated by the PAL.
>
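The pigeonhole argument being made here can be stated numerically. A 20V8 fuse map is 2706 bits (the standard GAL20V8 JEDEC fuse count), so there are at most 2^2706 distinct fuse maps, while a 20-input truth table has 2^20 bits and so there are 2^(2^20) distinct functions; no injective map from the larger set into the smaller can exist. The comparison code is mine:

```python
# Counting argument behind the quoted paragraph: there are vastly more
# 20-input truth tables than 20V8 fuse maps, so some PROM-expressible
# functions must be inexpressible in the PAL.
FUSES_20V8 = 2706            # bits in a GAL20V8 JEDEC fuse map
TRUTH_TABLE_BITS = 2 ** 20   # bits needed to tabulate a 20-input function

fuse_maps = 2 ** FUSES_20V8
functions = 2 ** TRUTH_TABLE_BITS
print(functions > fuse_maps)  # True: pigeonhole forbids the "compressor"
```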
>
> > > Do you really never consider non-existent but possible devices for the
> > > purposes of discussion/argument?
> > >
> > That's the reason programmable logic exists. Devices are needed, but don't
>
> Again that comment shows you've not understood what I am saying. I am not
> interested in actually making or using that 16L8-variant. I am just using
> it for the purposes of discussion. It's easier to reason about that
> device, and the arguments can then be translated back to a real 16L8.
>
and I'm not interested in discussing unrealistic problems right up to the
moment they become real. I don't care that a PAL can't contain the same
number of bits as a PROM. If I need the whole set of bits, I'll use a PROM.
They do different jobs, and it's solving the problem that interests me, not
avoiding it.
>
> > With one output, you could generate only one state of a state machine, so I
> > imagine it would take several of the hypothetical devices to which you refer
> > to do anything useful. I find it difficult to imagine working with a finite
> > state machine with as many inputs as you suggest. That suggests a pretty wide
> > state register, and separate outputs, perhaps, and quite a few additional
>
> Isn't that similar to what's normally called 'microcode' :-). I've been
> trying to argue they're pretty close...
>
> > If you were ever to want to investigate, thoroughly, at least as thoroughly as
> > you could by building such a thing, a system that required such a complicated
> > logic function, with as many as 56 ORs of products with 1..20 ANDed inputs,
> > you would likely start with a simulator and not with hardware. You'd then
> > you would likely start with a simulator and not with hardware. You'd then
>
> You might, I wouldn't. Let's leave it at that.
>
> > write a top-level functional simulation program and then test it with a sample
> > program. After that, you'd descend, step-by-step, into the bowels of each
> > module, simulating it as you think you'd implement it, substituting thousands,
> > or tens of thousands, of ANDs and ORs, etc, for what was a bunch of registers
> > or adders, or whatever, refining your work as you progress to ever lower
> > levels until you've arrived at a simulation, at the gate level, of what you
> > eventually plan to build. At every step of the way, you can test the
> > simulation. If you've written your simulator well, when you're done, you can
> > construct the device at the gate level or at the SSI/MSI/LSI level. Your
> > result should quite well match the simulation. In my estimation, all this is
> > much simpler, because the tools already exist, and easier because there's help
> > available, if it's done with the modern hardware and accompanying tools. With
> > the free tools from Xilinx, for example, you have Verilog and VHDL, schematic
> > entry, graphical FSM entry, and lower-level PAL-style (ABEL) tools, all of
> > which can be used together. You can design in the large or at the SPLD level.
> > You can design a system and verify its logic without ever soldering a wire,
> > and get representative simulations of your work via the simulator that they
> > provide. Further, you can specify your design at behavioral, structural, and
> > register-transfer level and migrate from one to the other as needed in order
> > to specify more precisely what you want. You can even specify the design
> > using your own TTL library with timing and all as specified for the type of
> > logic you intend to use rather than using XILINX programmable logic, though I
> > don't know that for certain. You might have to do this last step in someone
> > else's freeware HDL.
> >
> > Doesn't this seem less painful than (a) finding a set of bare wire-wrap
>
> Actually, it sounds like pure hell to me... I'd much rather be soldering
> and testing than typing on a computer...
>
So what does that mean? Do you simply string together bits of logic at random
like the infinite number of chimps trying to type a Gutenberg Bible? I'd
think not. Do you plan your logic at all? Do you, or can you even,
rigorously test your concepts before attempting to realize them?
How do you plan out a 32-bit adder/subtractor that produces a sum/difference
in 30 ns?

This software is all there to help you do that. If you think you can do the
job better than the guys who use this software effectively, I'd say you're
kidding yourself. One man, working alone, can save 75% of the time it once
took a whole team to do the job of designing a large and complex set of logic,
and do so with better than twice the certainty of success. Of that 75%,
probably half is in time not wasted on meetings, but the rest is on work that
the machine can better do because it's not so prone to error.

Another thing ... please explain how you can "test" a circuit of which you
have only one, and what criteria you'd apply for a pass. Do you write test
specs before you design your work product?

Most of the time, I decide what the goal is before I start on something. Then
I devise a means for testing whether I'm there yet. After that, I start down
the path.

Dick
Received on Wed Apr 17 2002 - 02:34:11 BST

This archive was generated by hypermail 2.3.0 : Fri Oct 10 2014 - 23:34:31 BST