Simplest (practical) file system?

From: Bob Shannon <bshannon_at_tiac.net>
Date: Mon Jul 28 19:58:00 2003

Joe wrote:

<snip>

>
> I wonder if it really would sap a lot of time. Modern IDE drives have
>large cache buffers so I would think that system could very likely read the
>data from the buffer. I'm thinking that as slow as these old systems are
>and as fast as the modern drives are that it would be better to use a
>simple and fast algorithm even if it means more drive accesses.
>
> Joe
>
Hold on here. The IDE drive (technically, the ATA drive, but never mind
that...) may have a buffer, but that is quite irrelevant here.

As soon as you hook your fast ATA disk up to a classic CPU like an
HP1000 or an Imlac, you now have
a slow disk system by any modern standard.

Also, if the classic CPU is running the disk access code, there is
another layer of performance hit.

Another little detail of reality...

Some HP disk systems force the use of DCPC (DMA) transfers. The boot
mechanism also forces the use of linear files, files that use contiguous
sectors on disk to store boot images.

If a FAT scheme that allows non-linear files is used, then the code that
controls the DCPC logic has to deal with two cases: simple linear file
transfers, and non-linear file transfers performed as a series of smaller
DCPC block transfers.
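For what it's worth, here is a minimal sketch in C of what that two-case
logic looks like. The helpers (dcpc_read, cluster_to_sector,
fat_next_cluster) and the constants are invented for illustration, not
taken from any real HP driver:

    #define SECTOR_BYTES        256
    #define SECTORS_PER_CLUSTER 4
    #define FAT_END_OF_CHAIN    0xFFFFu

    /* Hypothetical helpers, provided elsewhere by the driver/FAT code. */
    extern void     dcpc_read(unsigned sector, unsigned count, char *buf);
    extern unsigned cluster_to_sector(unsigned cluster);
    extern unsigned fat_next_cluster(unsigned cluster);

    /* Case 1: linear file -- one DCPC transfer covers the whole file. */
    void read_linear(unsigned start_sector, unsigned count, char *buf)
    {
        dcpc_read(start_sector, count, buf);
    }

    /* Case 2: non-linear (FAT-chained) file -- one small DCPC transfer
       per cluster, plus a FAT lookup per cluster to find the next one. */
    void read_chained(unsigned first_cluster, char *buf)
    {
        unsigned c;
        for (c = first_cluster; c != FAT_END_OF_CHAIN; c = fat_next_cluster(c)) {
            dcpc_read(cluster_to_sector(c), SECTORS_PER_CLUSTER, buf);
            buf += SECTORS_PER_CLUSTER * SECTOR_BYTES;
        }
    }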

This is complexity I don't need, especially since many other disks don't
demand DCPC transfers.

Let's be a bit more analytical here; clearly people have deep-seated
emotions on the subject, and that only complicates objective engineering.

From all I've read so far, the only advantages FAT-based approaches
offer are:

1. Allowing efficient disk space use by implementing non-linear files
(see the allocation sketch after this list).

2. Easing disk optimization tricks.
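To make advantage 1 concrete, here is a toy allocator in C. The table
layout and names are invented, not taken from any particular DOS; the
point is just that a FAT lets a new file soak up free clusters wherever
they happen to be, at the cost of maintaining the table:

    #define FAT_FREE          0x0000u
    #define FAT_END_OF_CHAIN  0xFFFFu
    #define NUM_CLUSTERS      2048

    static unsigned short fat[NUM_CLUSTERS];  /* one entry per data cluster */

    /* Allocate 'want' clusters for a new file; they need not be adjacent
       on disk.  Returns the first cluster of the chain, or 0 on failure
       (toy code: no rollback if the disk fills up part way through). */
    unsigned alloc_chain(unsigned want)
    {
        unsigned c, first = 0, prev = 0;
        for (c = 2; c < NUM_CLUSTERS && want > 0; c++) {
            if (fat[c] != FAT_FREE)
                continue;
            if (prev)
                fat[prev] = (unsigned short)c;   /* link onto the chain */
            else
                first = c;                       /* remember the head   */
            fat[c] = (unsigned short)FAT_END_OF_CHAIN;
            prev = c;
            want--;
        }
        return want ? 0 : first;
    }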

But non-FAT-based DOSes have been implemented, and once SQUEEZEd they
appear to be just as efficient as other schemes. Yes, fragmentation is an
issue, but it is more easily solved by a SQUEEZE pass than by carrying
the code needed to maintain a FAT.
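Roughly, a SQUEEZE pass amounts to this (a sketch in C, assuming a simple
directory of start/length extents kept sorted by start sector; the
move_sectors helper and the layout are invented for illustration):

    #define MAX_FILES 64

    /* Each file occupies one contiguous run of sectors. */
    struct dir_entry {
        unsigned start;     /* first sector of the file           */
        unsigned length;    /* length in sectors (0 = empty slot) */
    };

    extern struct dir_entry directory[MAX_FILES];   /* sorted by start */
    extern void move_sectors(unsigned from, unsigned to, unsigned count);

    /* SQUEEZE: slide every file down against its predecessor so all the
       free space ends up in one run at the end of the disk. */
    void squeeze(unsigned first_data_sector)
    {
        unsigned next_free = first_data_sector;
        int i;
        for (i = 0; i < MAX_FILES; i++) {
            if (directory[i].length == 0)
                continue;
            if (directory[i].start != next_free) {
                move_sectors(directory[i].start, next_free, directory[i].length);
                directory[i].start = next_free;
            }
            next_free += directory[i].length;
        }
    }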

Simplicity suggests that if it's not needed, it's not implemented.

Traditional file systems tend to need to be tweaked in different ways
(FAT sizes, cluster sizes, etc.) for different kinds of media. I think
there is a very different way to approach all this: a single scheme for
all sizes of disks that delivers a common level of efficiency in every
case.

Think this is impossible? FORTH's block addressing scheme works. The
trick is to very carefully adopt some features of other file systems
while challenging their fundamental assumptions, taking only what makes
sense.
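For comparison, FORTH-style block addressing needs almost nothing: block
n lives at a fixed, computable place on the disk. A sketch, assuming
256-byte sectors and an invented read_sectors helper:

    #define BLOCK_BYTES        1024
    #define SECTOR_BYTES       256
    #define SECTORS_PER_BLOCK  (BLOCK_BYTES / SECTOR_BYTES)

    extern void read_sectors(unsigned sector, unsigned count, char *buf);

    /* Block n lives at a fixed, computable spot -- nothing to allocate,
       chain, or look up.  The "file system" is just a multiply. */
    void read_block(unsigned n, char *buf)
    {
        read_sectors(n * SECTORS_PER_BLOCK, SECTORS_PER_BLOCK, buf);
    }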

I think there is a lot to be learned from vintage file systems like
TSS-8, RT-11, Northstar DOS, etc.

This discussion is fascinating and very helpful, but not being familiar
with the guts of the variety of DOSes being thrown into the discussion,
I'd really appreciate it if we could be a little clearer on the specific
advantages each implementation choice implies.

There seems to be a real lack of any objective anatomical dissection of
different methods in print, at least in a readable (approachable) form.
This is probably the only place such a discussion could ever take place,
as so few people new to file system internals ever develop new
approaches. What was the last 'really different' way to store stuff on
disk?