Simplest (practical) file system?

From: Bob Shannon <bshannon_at_tiac.net>
Date: Mon Jul 28 20:17:00 2003

Dwight K. Elvey wrote:


<snip>

>>>
>>>
>>>I'm thinking about two things: first, trying to keep the entire volume
>>>bitmap in memory chews up almost 5K of RAM, and that's probably not good for
>>>an 8-bit machine, at least, it's not sufficiently memory-efficient, IMO.
>>>Second, if you decide then that you'll only deal with one sector of it at a
>>>time to save RAM, you may have to read-then-write juggle those ten sectors a
>>>lot. If you do a linear search for a free block, it may be that you can rip
>>>through the 255-valued bytes quickly, but they have to be in RAM, so you may
>>>have to do several reads to find what you want, which saps time.
>>>
>>
>> I wonder if it really would sap a lot of time. Modern IDE drives have
>>large cache buffers, so I would think the system could very likely read the
>>data from the buffer. I'm thinking that, as slow as these old systems are
>>and as fast as modern drives are, it would be better to use a
>>simple and fast algorithm even if it means more drive accesses.
>>
>> Joe
>>
>
>As I recall, Bob is looking to put this onto some old HP hardware.
>Dwight
>
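
The free-block search being debated above can be sketched in C. This is only an illustration of the idea, not anyone's actual code; it assumes a set bit marks an allocated block and that bit 0 is the lowest-numbered block in each byte:

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch: find the first free block in an allocation bitmap, assuming
 * a set bit means "allocated".  Bytes equal to 0xFF (all eight blocks
 * in use) are skipped in a tight loop -- the "rip through the
 * 255-valued bytes" idea from the discussion above. */
long find_free_block(const uint8_t *bitmap, size_t nbytes)
{
    for (size_t i = 0; i < nbytes; i++) {
        if (bitmap[i] == 0xFF)          /* fully allocated: skip fast */
            continue;
        for (int bit = 0; bit < 8; bit++)
            if (!(bitmap[i] & (1u << bit)))
                return (long)(i * 8 + bit);
    }
    return -1;                          /* no free block on the volume */
}
```

Whether the whole bitmap sits in core or only one sector of it at a time, the inner scan is the same; only how much of the bitmap is resident changes.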

Yes, I'm using old HP hardware, ranging from a 4-microsecond-per-instruction
HP 2116 with 16K words of core up to a sub-microsecond HP 2113 with half a
megaword of high-performance RAM.

For disks, I believe it's possible to support anything from a 2.5 meg 7900
cartridge, up to a 300 meg CS/80 disk, or even my 8 gig ATA disk controller,
using an identical set of disk data structures.

In fact, very large physical devices would consist of an array of smaller
identical subsystems, so larger devices would appear to have subdirectories.
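
The mapping from an absolute block number to one of those identical
subsystems is just a divide and a remainder. A minimal sketch in C, where
SUBVOL_BLOCKS is an assumed constant rather than the real parameter:

```c
#include <stdint.h>

/* Assumed sub-volume size: every sub-volume shares the same allocation
 * parameters, so one constant describes them all. */
#define SUBVOL_BLOCKS 10240UL

struct subvol_addr {
    uint32_t subvol;   /* which identical subsystem ("subdirectory") */
    uint32_t offset;   /* block number within that subsystem */
};

/* Translate an absolute block number on the big physical device into
 * a (sub-volume, offset) pair. */
struct subvol_addr map_block(uint32_t abs_block)
{
    struct subvol_addr a;
    a.subvol = abs_block / SUBVOL_BLOCKS;
    a.offset = abs_block % SUBVOL_BLOCKS;
    return a;
}
```

Because every sub-volume is identical, the same allocation code runs
unchanged whether the device is a 2.5 meg cartridge or an 8 gig ATA disk.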

It's true that as the drive gets larger and larger, this approach will
become more and more inefficient. But it's also true that this becomes less
and less important at a faster rate than the inefficiencies grow. Add to
this the fact that a squeeze utility can reduce the performance
inefficiencies to nearly zero as needed.

So in strict terms of the number of machine-code instructions needed versus
functionality provided, simple does seem to imply a flat, linear, probably
FAT-less file system. From what I've read, this is not unlike a BFS
partition, only I'm thinking of something like an array of BFS's for large
disks, and a single 'logical volume' for smaller ones, so all the disk
allocation parameters are constant.
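
To give a feel for how small a flat, FAT-less layout can be: if each file is
one contiguous run of blocks, a directory entry needs only a name, a start
block, and a length, and lookup is a linear scan. The field names and sizes
below are assumptions for illustration, not an actual on-disk format:

```c
#include <string.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical flat-directory entry: one contiguous extent per file. */
struct dir_entry {
    char     name[14];     /* NUL-padded file name */
    uint32_t start_block;  /* first block of the contiguous extent */
    uint32_t num_blocks;   /* length of the extent */
};

/* Linear scan of the directory -- a handful of instructions, no FAT
 * chains to follow.  Returns NULL if the name is not present. */
const struct dir_entry *lookup(const struct dir_entry *dir, int n,
                               const char *name)
{
    for (int i = 0; i < n; i++)
        if (strncmp(dir[i].name, name, sizeof dir[i].name) == 0)
            return &dir[i];
    return NULL;
}
```

Contiguous extents are also what makes the squeeze utility mentioned above
work: compacting files back-to-back restores one big free region.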
Received on Mon Jul 28 2003 - 20:17:00 BST
