OT memory too cheap to pass up

From: ajp166 <ajp166_at_bellatlantic.net>
Date: Sat Jan 27 19:26:38 2001

From: Tony Duell <ard_at_p850ug1.demon.co.uk>
>2) The physical address to write it to.
>
>To get the first may involve calculating the parity bits based on the raw
>data word. This, agreed, takes some time, so the parity bit is available
>later than the data.
>
>To get the second involves an MMU-type operation. We have to map the
>program-generated virtual address to a physical address. This will also
>take some time, so the relocated address is available some time after the
>virtual address.


Usually the physical address is the only one we talk of at the memory
pins, and any virtualization, mapping and all is prior history.
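
For anyone following the quoted point, that "MMU-type operation" is just
a lookup of the high address bits through a map, with the low bits passed
straight through. A minimal sketch in C, with an invented page size and
page map rather than any particular machine's MMU:

    #include <stdint.h>

    #define PAGE_BITS 12                        /* assume 4K pages; purely illustrative */
    #define PAGE_MASK ((1u << PAGE_BITS) - 1)

    /* hypothetical page map: virtual page number -> physical page number */
    static const uint32_t page_map[16] = { 7, 3, 12, 1 };

    uint32_t translate(uint32_t vaddr)
    {
        uint32_t vpn    = (vaddr >> PAGE_BITS) & 0xF; /* virtual page number           */
        uint32_t offset = vaddr & PAGE_MASK;          /* low bits pass straight through */
        return (page_map[vpn] << PAGE_BITS) | offset; /* table lookup + unchanged offset */
    }

By the time the result of that lookup reaches the memory pins, the lookup
itself is indeed prior history.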

>We then have to apply the address to the memory. Probably in 2 parts, row
>and column. Until both parts have been applied to the memory, in general
>the state of the data lines is irrelevant. Again, strobing in the address
>in two parts takes time.


Usually the address for the memory precedes the data. If the system
expects DRAM, it's often far earlier still, to permit the MUX operation.
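
As a rough illustration of that MUX step, the controller splits the one
physical address into a row half and a column half and strobes them into
the same DRAM address pins, row (RAS) first, then column (CAS). The bit
widths here are made up, not those of any specific part:

    #include <stdint.h>

    /* hypothetical DRAM with a 10-bit column address; widths are invented */
    #define COL_BITS 10
    #define COL_MASK ((1u << COL_BITS) - 1)

    void dram_address(uint32_t paddr, uint32_t *row, uint32_t *col)
    {
        *col = paddr & COL_MASK;    /* low bits, strobed in with /CAS  */
        *row = paddr >> COL_BITS;   /* high bits, strobed in with /RAS */
    }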

However, in the PC world, where the conversation started, considerations
of MMUs and the like were limited to the older 286s, as the 386 class
and later were already delivering physical addresses to the memory
subsystem.

>Now, suppose the parity calculation takes p ns. Now, if the parity
>generator circuit can get the raw data more than p ns before the address
>can be got into the RAM, then the parity logic is not slowing things down
>at all. The parity bit is available before the RAM can use it anyway.


Save for that being speculative, as most CPUs deliver data AFTER the
address, or in coincidence at best. Even then the timing is meaningless
except relative to whatever strobes declare the address valid and qualify
the data read or write.
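
For reference, the parity bit in question is just the XOR of the data
bits; whether generating it costs anything depends entirely on when data
and address become valid relative to those strobes, per the quoted p ns
argument above. A toy even-parity calculation over an 8-bit word:

    #include <stdint.h>

    /* even parity over an 8-bit word: XOR-fold the bits together */
    uint8_t even_parity(uint8_t data)
    {
        data ^= data >> 4;
        data ^= data >> 2;
        data ^= data >> 1;
        return data & 1;   /* 1 if an odd number of data bits are set */
    }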

One generalization that has mostly held true over the years is that
memory (at the system level) is usually slower than the CPU. This leads
to things like pipelining, caching and burst-mode (block) transfers to
get stuff in and out of that finite-bandwidth resource. That
consideration was as true for the PDP-8 as for the latest PentIV/1.4G.
It's also true that things like DMA and video (bit blitters) are
competing for the same RAM bandwidth.
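
A back-of-the-envelope way to see that competition, using entirely
invented figures rather than any real system's numbers:

    #include <stdio.h>

    int main(void)
    {
        /* all figures are invented for illustration */
        double total_mb_s = 264.0;  /* e.g. a 64-bit bus at 33 MHz      */
        double video_mb_s = 70.0;   /* screen refresh / blitter traffic */
        double dma_mb_s   = 10.0;   /* disk and peripheral DMA          */

        double cpu_share = total_mb_s - video_mb_s - dma_mb_s;
        printf("bandwidth left for the CPU: %.1f MB/s of %.1f MB/s\n",
               cpu_share, total_mb_s);
        return 0;
    }

Whatever is left after refresh, video and DMA have taken their share is
all the CPU (and its cache fills) ever gets to use.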

Allison
Received on Sat Jan 27 2001 - 19:26:38 GMT
