followup: Rinky dink hamfest

From: Allison J Parent <allisonp_at_world.std.com>
Date: Mon Mar 29 20:55:01 1999

<I agree that the ramdisk improved performance. It was just the
<implementations that made the system so fragile once ramdisk was in place.

I have a Kaypro with a 2048K ramdisk; it screams and would beat any Kaypro
hard disk. I also have a 512K M-Drive on S-100, and it's plenty big
enough and fast. Fragile? Neither.

<The virtual disk features with which I had contact were pretty memory
<hungry, in that they required a fair sized buffer in order to allow speedy
<caching of data from the drive from which it was being loaded. They

There were some ugly implementations. The first one I did was in late '80
and was pretty tight: no banking, and it left 56K (to the base of the BDOS)
for transient programs.

<or that it provide a buffer in lower memory so that the higher memory could
<be switched in and out. This became quite taxing in terms of hardware
<resources and memory bandwidth.

I never ran out of either.

<You're certainly right about the observations you made of the effect of
<various floppy disk handling factors, e.g. sector skew, on performance.
<Since you're probably referring to SSSD floppies, which were not only the
<smallest but also the slowest, I can see why you might favor such a scheme.

By late '82 I was running a four-Z80/6MHz multiprocessor system using CP/M 2.2
as the core OS with a task manager added. I've done most all of it at one
time or another. I took advantage of the fact that, statistically, the data
bus was actually unused between 40-60% of the time! The Z80's T1 and T2
states involve very little actual transfer activity beyond preparation.

<The software overhead was burdensome only because it had to reserve quite a
<bit of memory in order to block and deblock on a track-by-track basis.
<That's why we didn't use ramdisk. One could manage disk I/O almost
<transparently when the disk was spinning at 3600 instead of 360 rpm.

Well, to deblock you only needed a chunk the size of a disk sector, and
typically that was 256 or 512 bytes: trivial space. The code to do it is
under 512 bytes. Giving up 1K for a huge increase in performance is worth
it.
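
For anyone who hasn't written one, the logic is roughly the following,
sketched here in C rather than the Z80 assembler it was actually done in.
The names and the in-memory "disk" are just for illustration, not from any
particular BIOS.

  /* Basic CP/M-style deblocking: 512-byte physical sectors,
     128-byte CP/M logical records, one sector-sized host buffer. */
  #include <string.h>

  #define PHYS_SIZE     512                  /* physical sector size     */
  #define REC_SIZE      128                  /* CP/M logical record size */
  #define RECS_PER_PHYS (PHYS_SIZE / REC_SIZE)

  static unsigned char disk_image[64 * PHYS_SIZE]; /* stand-in for the drive */
  static unsigned char host_buf[PHYS_SIZE];        /* the sector-sized chunk */
  static long cached_phys = -1;                    /* sector now in host_buf */
  static int  buf_dirty;                           /* host_buf needs writing */

  static void phys_read(long phys, unsigned char *buf)
  {
      memcpy(buf, disk_image + phys * PHYS_SIZE, PHYS_SIZE);
  }

  static void phys_write(long phys, const unsigned char *buf)
  {
      memcpy(disk_image + phys * PHYS_SIZE, buf, PHYS_SIZE);
  }

  static void flush(void)                    /* write back a dirty buffer */
  {
      if (buf_dirty) {
          phys_write(cached_phys, host_buf);
          buf_dirty = 0;
      }
  }

  /* Point at logical record 'rec' inside the buffered physical sector,
     reading that sector in first if it isn't already cached. */
  static unsigned char *locate(long rec)
  {
      long phys = rec / RECS_PER_PHYS;
      if (phys != cached_phys) {
          flush();
          phys_read(phys, host_buf);
          cached_phys = phys;
      }
      return host_buf + (rec % RECS_PER_PHYS) * REC_SIZE;
  }

  void read_record(long rec, unsigned char *dma)        /* BIOS READ  */
  {
      memcpy(dma, locate(rec), REC_SIZE);
  }

  void write_record(long rec, const unsigned char *dma) /* BIOS WRITE */
  {
      memcpy(locate(rec), dma, REC_SIZE);    /* read-modify-write the sector */
      buf_dirty = 1;
  }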

Usually I would bump that host buffer up to at least the allocation block
size, as that meant you could cache a whole allocation block, usually
either 2K or 4K. For that I would bank in a small amount of RAM in low
memory just for the buffer, generally sized around the common chips (2K or
8K x 8 RAMs). Later I would do a full MMU for even more track-buffer
space. The code to do basic deblocking handles any size with ease.

The latency even at 3600 rpm is low, but that assumes DMA direct to RAM.
Most controllers had a small buffer (usually one or two sectors) and would
read into that buffer and transfer from there, which added delays. Also,
those controllers could be driven harder than the usual supplied drivers
did, for far greater performance. The BIOS often used the least efficient
means to move the selected 128 bytes to the target address; considering
how often that was done, it's a big hit. The average system did not
utilize the possible performance of the Z80 even at 4MHz. When 6 and 8MHz
parts started to show up, they were often idling in loops waiting for
something to happen. Print spooling and interrupt-driven keyboard/modem
I/O were often not done, so the user never saw type-ahead or any semblance
of concurrency. All of which were possible.

If there is a point, it's that CP/M prevented nothing performance-wise, and
usually anything it did offer was not used (it can signal when to flush the
cache). Things like foreground/background were easily done, and there were
a few print spoolers that really worked.

Allison
Received on Mon Mar 29 1999 - 20:55:01 BST
