MS-DOS FAT file system ripped off? (was Re: Stupid CP/M question)
> TOPS-6, I believe. (Yes, that's an ancestor of TOPS-10.)
No. Before it was called TOPS-10, it was just "Monitor".
> The structure was called the Storage Allocation Table and was pretty much
> the same idea.
My understanding of it was that the SAT was just a bitmap of the available
blocks on the disk, and that each file had a RIB (Retrieval Information Block)
which pointed to the actual data blocks of the file. This is very similar
to the allocation bitmap, inodes, and indirect blocks used by Unix, and not
very similar to the 86-DOS (AKA QDOS, MS-DOS, IBM-DOS) FAT.
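To make the contrast concrete, here's a minimal C sketch of the two
schemes. The layouts are hypothetical (invented sizes and field names, not
the real on-disk formats of Monitor, Unix, or 86-DOS); the point is where
the "what block comes next" information lives.

    #include <stdint.h>

    #define NUM_CLUSTERS 4096
    #define FAT_EOF      0xFFFF        /* end-of-chain marker */

    /* FAT style: one global table.  Entry i holds the number of the
     * cluster that follows cluster i in whatever file owns it; the
     * directory entry records only the first cluster. */
    uint16_t fat[NUM_CLUSTERS];

    /* Count a file's clusters by walking its FAT chain. */
    int fat_chain_length(uint16_t first_cluster)
    {
        int n = 0;
        for (uint16_t c = first_cluster; c != FAT_EOF; c = fat[c])
            n++;
        return n;
    }

    /* SAT/RIB (or Unix bitmap/inode) style: free space is a bitmap,
     * and each file carries its own list of data-block pointers. */
    uint8_t sat_bitmap[NUM_CLUSTERS / 8]; /* 1 bit per block: free/used */

    struct rib {                          /* per-file retrieval info */
        uint32_t block_count;
        uint32_t data_blocks[64];         /* direct pointers to data */
    };

Note where the fragility comes from: in the FAT scheme the allocation
state and every file's block list live in one shared table, so a trashed
FAT sector damages the chains of many files at once, and the directory
entries don't carry enough information to rebuild them.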
> Somewhere I probably still have an e-mail message I got from one of the
> TOPS developers, when I asked him about this topic.
I'm definitely not an expert on the internals of Monitor, so I'd love to see
that email.
> Except for the problem of corruption, lost chains, and all that garbage.
> (As I understand it, in some cases it's not possible to write a program to
> fix errors, because there just isn't enough redundant information!) The
> original TOPS-6 had the same problem.
No one has yet invented a file system that doesn't get corrupted.
Even log-based file systems written to WORM media have been known to
get corrupted, although there the damage is easier to repair.
Of course, on a PDP-10 (or PDP-6), people didn't just yank the disk out of
the drive (or reboot) at any old time. Disks had to be unmounted; the
operator would issue the unmount command before spinning down the disk, and
that would force the operating system to write any dirty buffers back to the
disk, leaving it in a (hopefully) consistent state.
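As a sketch of what that unmount step buys you (invented names, nothing
like real Monitor internals): every buffer modified in memory gets written
back before the drive spins down, so the on-disk structures match what the
operating system believed.

    #include <stdbool.h>
    #include <stddef.h>

    struct buffer {
        long blkno;
        bool dirty;              /* modified in memory, not yet on disk */
        unsigned char data[512];
    };

    /* Stand-in for the real disk driver. */
    extern void write_block(long blkno, const unsigned char *data);

    void unmount_flush(struct buffer *cache, size_t nbufs)
    {
        for (size_t i = 0; i < nbufs; i++) {
            if (cache[i].dirty) {
                write_block(cache[i].blkno, cache[i].data);
                cache[i].dirty = false;
            }
        }
        /* Only now is it safe to spin the disk down. */
    }

Yanking a diskette skips exactly this pass, which is how you end up with
directory entries that point at stale or half-written FAT chains.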
The corruption problems commonly occurring with MS-DOS are mostly
caused by two phenomena:
1. The user removing diskettes or rebooting the system while the on-disk
structures are not consistent (see the lost-chain sketch after this list).
This could be mitigated to a large extent by a log-based file system.
However, MS-DOS was certainly no worse about this problem than any other
contemporary disk operating system for small machines (i.e., those that
had to run in 64K of RAM or less).
2. The lack of any memory protection to keep ill-behaved or buggy programs
from corrupting the in-memory structures, which can then result in
corrupted data being written to disk. Memory protection would largely
eliminate this. A log-based file system wouldn't eliminate this problem,
but it would make recovery easier. However, contemporary microcomputers
weren't any better in this regard either.
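To illustrate the "lost chain" damage from item 1, here's a simplified
sketch of the repair pass a tool like CHKDSK performs (hypothetical code,
reusing the fat[] table from the earlier sketch, not the real CHKDSK
algorithm): clusters the FAT marks as allocated, but that no directory
entry's chain reaches, are lost.

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_CLUSTERS 4096
    #define FAT_FREE     0x0000
    #define FAT_EOF      0xFFFF

    extern uint16_t fat[NUM_CLUSTERS];

    /* Mark every cluster reachable from one file's first cluster.
     * (A real tool would also guard against cycles caused by
     * cross-linked chains; omitted here for brevity.) */
    void mark_chain(uint16_t first, bool reached[])
    {
        for (uint16_t c = first; c != FAT_EOF; c = fat[c])
            reached[c] = true;
    }

    /* After mark_chain() has run for every directory entry, report
     * allocated clusters that nothing points to. */
    int count_lost_clusters(const bool reached[])
    {
        int lost = 0;
        for (int c = 2; c < NUM_CLUSTERS; c++) /* 0 and 1 reserved */
            if (fat[c] != FAT_FREE && !reached[c])
                lost++;
        return lost;
    }

All such a scan can do is salvage the orphaned clusters into FILEnnnn.CHK
files; as the quoted poster says, there isn't enough redundant information
to reattach them to the files they came from.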
The FAT file system was a reasonable design at the time given the objective
of having a file system for low-capacity removable media, and not requiring
much RAM to deal with it. The mistake was to keep extending FAT to
higher-capacity drives and larger systems. It's now about 19 years old, about
16 years longer than it should have lived. And those idiots in Redmond
don't seem to have any plan for a suitable replacement for desktop (not
server) platforms.
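For a sense of just how little RAM FAT needed: the original 160K diskette
format used 12-bit FAT entries, so the whole allocation table for the disk
fit in roughly half a kilobyte and could simply be kept resident. This
helper unpacks one entry from the packed table (the 1.5-bytes-per-entry
packing is the real FAT12 scheme; the function name is mine):

    #include <stdint.h>

    /* Fetch the FAT12 entry for a cluster: entries are packed 1.5
     * bytes each, so two entries share every 3-byte group. */
    uint16_t fat12_entry(const uint8_t *fat, uint16_t cluster)
    {
        uint32_t off = cluster + (cluster / 2);     /* cluster * 1.5 */
        uint16_t v = (uint16_t)(fat[off] | (fat[off + 1] << 8));
        return (cluster & 1) ? (v >> 4) : (v & 0x0FFF);
    }

That frugality was exactly right for 1980-class hardware; it's only the
later stretching to huge fixed disks that turned it into a liability.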
Received on Tue Jan 12 1999 - 16:42:03 GMT