OT memory too cheap to pass up

From: ajp166 <ajp166_at_bellatlantic.net>
Date: Sat Jan 27 17:08:44 2001

From: Tony Duell <ard_at_p850ug1.demon.co.uk>
>I see what you're saying, but I don't think it's as simple as that.
>
>After all, in a reasonably complex system you're going to want to write a
>given word to a given address. The address is probably going to have to
>be relocated by some kind of MMU, and then applied to the DRAM in 2
>halves. And while the address is being processed in this way, it's


Parity is generated either in parallel with or downstream of the MMU. Also,
parity is always applied at the byte level, so for 72-pin and larger
SIMMs there are two (or more) bytes to apply parity to.
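
For illustration only, here is a minimal C sketch of that byte-level
scheme: one even-parity bit per byte of a 32-bit word, which is what a
36-bit parity SIMM stores alongside its 32 data bits. The function name
byte_parity_bits is just for the example, and whether a given memory
controller uses even or odd parity varies by design.

    #include <stdint.h>

    /* One even-parity bit per byte of a 32-bit word, as stored on a
     * 36-bit (32 data + 4 parity) SIMM.  For even parity the stored
     * bit is simply the XOR of the eight data bits in that byte. */
    static uint8_t byte_parity_bits(uint32_t word)
    {
        uint8_t parity = 0;
        for (int byte = 0; byte < 4; byte++) {
            uint8_t b = (word >> (8 * byte)) & 0xFF;
            b ^= b >> 4;          /* fold the byte down to one bit */
            b ^= b >> 2;
            b ^= b >> 1;
            parity |= (b & 1) << byte;   /* one bit per byte lane */
        }
        return parity;
    }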

>possible that the data is already 'available', so the partity bit could
>be calculated at the same time. Then it doesn't take any longer to
>calculate and store parity -- in a sense the critical path could be the
>address relocation.


Still no. Parity is calculated from the data and therefore has
some inherent propagation delay associated with the logic.
For example, with the old and venerable 74180 that delay is tens
of ns (about 40 ns, per the 1985 databooks) LATER than the data.
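
Purely as a sketch (the names and the C are mine, and this says nothing
about timing), the following models what a 74180-style 9-bit parity
generator/checker computes from its 8 data inputs and its even/odd
cascade inputs, assuming the usual case where the cascade inputs are
driven as complements. In hardware those outputs only settle the ~40 ns
AFTER the data is valid, which is exactly the delay at issue.

    #include <stdbool.h>
    #include <stdint.h>

    /* Simplified model of a 74180-style parity generator/checker:
     * 8 data inputs plus even/odd cascade inputs, sum-even/sum-odd
     * outputs.  Only the normal case of complementary cascade inputs
     * is modelled; the cascade inputs let packages be chained for
     * words wider than one byte. */
    struct parity_out { bool sum_even; bool sum_odd; };

    static struct parity_out parity_74180(uint8_t data,
                                          bool even_in, bool odd_in)
    {
        int ones = 0;
        for (int i = 0; i < 8; i++)
            ones += (data >> i) & 1;
        bool data_even = (ones % 2) == 0;

        struct parity_out out;
        out.sum_even = data_even ? even_in : odd_in;
        out.sum_odd  = data_even ? odd_in  : even_in;
        return out;
    }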

>It depends -- a lot -- on the design of the entire machine.


While there may be exceptions, parity is generally downstream
and part of the memory system rather than the CPU. The exceptions
are some of the big iron that had parity on all data as part of the CPU.

>So I guess that it might well be the case that parity memory _for a
>particular machine_ might have to use faster chips than non-parity memory,
>and this might be one reason why they're so much more expensive. But it's
>certainly not the case that all memory modules that store a parity bit are
>faster than non-parity modules.


They are more expensive, if only because they are N bits wider (more RAMs).


Allison
Received on Sat Jan 27 2001 - 17:08:44 GMT