CP/M BIOS setup

From: Richard Erlacher <richard_at_idcomm.com>
Date: Thu Oct 5 21:09:46 2000

Well, it seems everybody keyed on the same, unfortunately wrong, end of the
problem I'm trying to address. Yes, it's true that there are countless
utilities that allow using foreign formats by phantoming or whatever, but
those basically want the user to assign a format and then try to use it.

What I'm after is a utility that examines a boot diskette, which, by
definition, contains the OS's three constituents, the CCP, the BDOS, and
the BIOS, among other things. By comparing the portion that would
ostensibly contain the BDOS with an internal copy of the BDOS, it would
find and map out, in sequence, those sectors containing the BDOS, deduce
from that the sector skew between tracks and the sector interleave within a
track, and then load the OS into the TPA to examine the BIOS for whatever
information it has to offer.
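
To make that matching step concrete, here's a rough sketch in Python; the
file names and the SSSD, 26-sector geometry are assumptions for
illustration, not a finished tool:

    # Sketch: find where each 128-byte record of a known BDOS lands on the
    # boot tracks of a raw image, exposing the physical sector order.
    REC, SPT = 128, 26                          # assumed SSSD geometry
    bdos = open("bdos22.bin", "rb").read()      # reference BDOS image
    recs = [bdos[i:i+REC] for i in range(0, len(bdos), REC)]

    image = open("boot.img", "rb").read()       # raw dump, track by track
    for trk in range(2):                        # CP/M 2.2 boot tracks
        order = []
        for sec in range(SPT):
            data = image[(trk*SPT + sec)*REC:(trk*SPT + sec + 1)*REC]
            # which BDOS record, if any, occupies this physical sector?
            order.append(next((n for n, r in enumerate(recs) if r == data),
                              None))
        print("track", trk, "BDOS record order:", order)

A run like 0, 6, 12, ... across a track exposes the interleave directly,
and where the record numbers pick up on the next track gives the
track-to-track skew.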

Allison suggests that the disk parameters are obscure and hard to locate,
but Cortesi's book on CP/M, among others, provides a bit of software that
quickly and reliably extracts any disk-related parameters one would need
from the running BIOS. Given that it can be done automatically on a running
BIOS, it's unlikely one would be unable to do that just as automatically
from a BIOS that's resident in the TPA. This stuff is not relocating
itself. It's quite well-defined where everything is to be found.
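
For instance, the structure such a routine chases down is just the 15-byte
CP/M 2.2 DPB, and decoding it from a memory snapshot is mechanical. A
sketch in Python, with the snapshot file and the DPB address as
hypothetical placeholders:

    import struct

    def decode_dpb(mem, addr):
        # CP/M 2.2 disk parameter block: SPT, BSH, BLM, EXM, DSM, DRM,
        # AL0, AL1, CKS, OFF -- 15 bytes, words stored little-endian.
        (spt, bsh, blm, exm, dsm, drm,
         al0, al1, cks, off) = struct.unpack_from("<HBBBHHBBHH", mem, addr)
        return {"SPT": spt, "BSH": bsh, "BLM": blm, "EXM": exm,
                "DSM": dsm, "DRM": drm, "AL0": al0, "AL1": al1,
                "CKS": cks, "OFF": off,
                "block_bytes": 128 << bsh}      # allocation block size

    mem = open("tpa.bin", "rb").read()          # hypothetical TPA snapshot
    print(decode_dpb(mem, 0x1B42))              # hypothetical DPB address

On a running system, SELDSK returns the DPH address in HL and the DPB
pointer sits at offset 10 in the DPH; with the BIOS merely resident in the
TPA, one follows the same pointers by hand.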

In those cases, among which one finds the CCS example, a simple, "dumb"
BIOS is loaded into a 20K CP/M requiring a 32K memory in which to run, and
is subsequently used to bring up a "smarter" and more fully developed
version of the BIOS together with the OS, loaded into whatever memory it
finds available, thereby making a 64K (actually 61K) CP/M quite attainable.
In such a case, one would have to examine the autocommand that's loaded in
the "dumb" system in order to find the image that's going to contain the
"real McCoy" with the full-featured BIOS, from which the parameters
relating to the directory and data areas of the diskette can be extracted.

This strategy is particularly important in those rare cases where one has
actually done what the CCS folks recommend and formatted the first two
tracks of an 8" diskette single-density and the remainder at double
density. Likewise, the remainder of the diskette can be two-sided. The
reason THEIR loader doesn't handle the DD formats in their "dumb" BIOS is
that it is written in 8080 code rather than Z80, but their double-density
handler uses Z80 code to move the data. It even does something weird with
that, i.e. loading an opcode as data in a buffer so it can later be
modified to accomplish an operation complementary to what it normally does.
The "other" approach they recommend is rewriting the BOOT code in the EPROM
and the loader in Z80 code.

From what I can tell, the information is all there on a bootable diskette,
but one must also be able to extract what's needed from a non-bootable
diskette, since one might just want to read data from a diskette used in a
system that can only boot DD. I know of several such systems so it's not
unheard-of. If a known file of more than 8K length is used as a target, one
can use that file to extract the same information from the diskette, at
least as far as sector skew between tracks, sector interleave within a
track, and the BIOS-related disk parameters are concerned.
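
The same matching trick, sketched against a raw image with the known file
as the anchor; the geometry numbers and file names are again assumptions:

    # Sketch: locate sequential 128-byte records of a known file (say, a
    # reference copy of PIP.COM) on a raw image, noting the (track, sector)
    # each occupies; the stride between hits is the interleave.
    REC, SPT, OFF = 128, 26, 2            # assumed geometry and boot offset
    known = open("PIP.COM", "rb").read()
    image = open("disk.img", "rb").read()

    hits = []
    for n in range(0, min(len(known), 8192), REC):
        at = image.find(known[n:n+REC])   # naive: assumes unique, aligned hits
        if at < 0:
            break
        rec = at // REC - OFF * SPT       # record index past the boot tracks
        hits.append((rec // SPT, rec % SPT))
    print(hits)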

What puzzles me is that, if this information is so readily available, why
hasn't the entire process been automated already? I know there are lots of
folks who hack away at these problems, but most of the time, they quit when
their immediate problem is solved. What I'm trying to find out is whether
there might be an easy, or at least rigorous, way to characterize and
ultimately program this task. As I wrote earlier, even the authors who've
written about this tend to equivocate somewhat about exactly how some things
should be done. There is always a bit of a guess as to whether 4K or 8K
allocation blocks should be used when hooking up a hard disk. That's not an
issue with floppies, however.

Generally, when one's trying to read what's on a diskette, it's necessary to
know little more than how big the floppy is, allowing the computer to figure
out what the modulation, data rate, and associated formats and track layout
are. The computer can tell whether it's two-sided, simply by looking to see
whether it can read data from both heads. It can tell whether the diskette
is written in FM or MFM by attempting to read it. It can tell how long a
sector is by reading one and looking at it, and it can deduce the track to
track sector skew by doing a track-read and looking at the order of the
sector numbers on each successive track until the pattern repeats itself.
It can also tell how large an allocation block is by examining directory
entries.
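
Spelled out as a probe sequence, with the fdc object and its methods as
purely hypothetical stand-ins for whatever controller access one has:

    # Sketch of the probing order described above; every fdc method here
    # is a hypothetical wrapper around real controller commands.
    def probe(fdc):
        two_sided = fdc.read_id(track=0, head=1) is not None  # head 1 live?
        mfm = fdc.read_id(track=2, head=0, mfm=True) is not None  # MFM or FM
        first = fdc.read_id(track=2, head=0, mfm=mfm)
        sector_len = 128 << first.size_code   # ID-field N byte -> length
        a = fdc.read_track_ids(track=2, head=0, mfm=mfm)  # IDs in disk order
        b = fdc.read_track_ids(track=3, head=0, mfm=mfm)
        skew = b.index(a[0])   # how far the numbering rotated between tracks
        return {"two_sided": two_sided, "mfm": mfm,
                "sector_len": sector_len, "track_skew": skew}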

I've embedded a couple of context-sensitive remarks in the quoted post
below, if you'd like to read them.

Dick


----- Original Message -----
From: ajp166 <ajp166_at_bellatlantic.net>
To: <classiccmp_at_classiccmp.org>
Sent: Thursday, October 05, 2000 6:13 PM
Subject: Re: CP/M BIOS setup


> >the adjacent ones. The BDOS doesn't change, generally, though there may
> >be changes in the BIOS. By finding each sector of the BDOS, one learns
> >about the format of the boot tracks. My CCS system, for example,
> >requires, at least for the distributed boot EPROM, that the boot tracks
> >be SSSD. That
>
> I have a fully documented CCS and it classifies as the early basic CP/M
> BIOS of low to average functionality. It's robust but closer to a minimal
> example.
>
True enough, but it's compatible with a front panel, and the software's
written for an 8080, so you can use their FDC with an 8080 or 8085 as well
as a Z80. Moreover, it's rock-solid. The fact that it uses a nearly
vanilla-flavored CP/M doesn't detract either. I've run into absolutely no
CP/M programs that won't run on it, while there are numerous utilities that
won't work properly on the more modern MP/M-targeted boards I got from
Systems Group.
>
> >> the key parameters are the DPH, DPB and SKEW... also you need to know
> >> how big the sector is and if there is embedded skew within the sector.
> >> Then you need to know the disk layout, things like what side/sector
> >> numbering was used. For example, I've seen two-sided media where
> >> sector one occurred on both sides and both were identically formatted,
> >> and I've seen side one as 1 thru 9 and side two as 10 through 17...
> >>
What you refer to as skew is what I call the interleave, while a sector
skew is a difference in sector numbering from index, used by some systems
(mostly early DEC, actually, but some truly random-access systems as well)
to minimize delays imposed by rotational latency during
track-to-adjacent-track seeks. I'm aware that sector numbering varies from
one system to another. That's exactly why I think an automatic tool to
construct the system parameters and install them in a dummy drive parameter
block would be so useful.
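
For what it's worth, the translate table such a tool would plant (the XLT
that SECTRAN consults) falls out of a few lines; a sketch, with 26 sectors
and 6:1 interleave assumed:

    def make_xlt(spt=26, interleave=6):
        # Build a logical-to-physical translate table like the XLT the
        # standard CP/M 2.2 CBIOS hands to SECTRAN. The defaults reproduce
        # DRI's stock 26-sector table: 1, 7, 13, 19, 25, 5, 11, ...
        xlt, used, pos = [], set(), 0
        for _ in range(spt):
            while pos in used:              # slot taken: slip forward one
                pos = (pos + 1) % spt
            xlt.append(pos + 1)             # sectors are numbered from 1
            used.add(pos)
            pos = (pos + interleave) % spt
        return xlt

    print(make_xlt())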
>
> >These parameters are all there on a boot diskette. It's just necessary
> >to find them.
>
>
> Not all, and they may be very hard to find. DPH and DPB have pointers to
> them as a result of the BIOS call to SELDSK. The skew, however, may
> not be used in the SECTRAN call at all! Often the skew translate is
> a table, but it can be calculated, and for DD the SECTRAN call is applied
> at the logical sector level and doesn't apply well for double-density
> formats whose sector sizes are larger than one logical sector. So skew in
> that case will likely be buried in the raw read/write routine, or
> possibly even at the logical sector level inside the physical sector.
>
> So some things are not guaranteed and also not easily found.
>
> >The Multidisk and Eset that I have are not for this purpose. They want
> >to be passed the information that I'm suggesting could be extracted.
>
>
> Oh, like I said, it can be... but if you know it, it's easier, as even
> with a 33MHz Z180 you're going to flog a while getting to the same
> answer.
>
It's difficult to pass it parameters you don't already know. If you don't
know them, you've got to do some work and that's what I'm trying to
automate.
>
> >That's exactly the problem I'm trying to circumvent. The interleave,
> >skew, sector size, etc., are all accurately represented on the boot
> >diskette. The
>
>
> Ah, no. Most boot sectors are not skewed and, like you observed, may
> not be the same density or sector size.
>
> >BDOS is the BDOS, i.e. shouldn't be different on different boot
> >diskettes,
>
>
> Likely but not always true.
>
> >so long as the CP/M version is the same. Consequently, it should be
>
> There were patches, and the CP/M version can be misleading. Many of the
> clones use the base 2.2 ID so apps will run normally, but most all are
> written using Z80-unique instructions where DRI used only 8080.
>
If it's not the stuff from DRI, it's not relevant, since it's not CP/M.
I'll admit that's a weakness, but for now, I'm happy to deal with CP/M only.
AFAIK, DRI didn't issue any patches to v2.2. There were several enhanced
systems patterned after and purported to be compatible with CP/M 2.2, but
for now, I want to deal with the plain-vanilla CP/M.
>
> >possible, having once determined the sector size, to extract,
> >automatically, the relative locations of sequential sectors of this
> >known file. Since we KNOW and RECOGNIZE the BDOS, shouldn't it be
> >possible to find its beginning,
>
>
> BDOS is not part of the CP/M file system! It's in the boot tracks.
>
That's true, BUT, when you have a two-stage boot, you can examine the
second-stage boot system, and, in fact, have to, in order to avoid getting
tangled up in discrepancies between the boot tracks and the directory and
data area.
>
> >parameters from the system BIOS and verifying them against another known
> >file e.g. PIP.COM, should provide the necessary information about the
> >directory and data areas of the diskette. Isn't that so?
>
> You would be forced to do that, and heuristically that will be a PITA!
> PIP is in the file system whereas the BDOS is out on the boot tracks. The
> boot tracks in the CCS case are SSSD while the system tracks can be DSDD!
> The BIOS entries for DPH and DPB do not say if the disk is DSDD or even
> if it's a floppy. They will tell you how many logical sectors per track,
> whether skew is used, whether the directory is checked, the allocation
> size, and the size of the area used for data storage. You will have to
> figure out from that a lot of things that are variable and can still end
> up as the same answer.
>
In fact, I don't believe they have to be "figured out" at all. After all,
the diskette is in the drive. You just have to look at it.
>
> >> >Another item I've wanted for some time to automate is the setup of a
> >> >hard disk BIOS. Since it's dependent not so much on CP/M quirks but
> >> >often more on decisions made on the basis of folklore, I thought it
> >> >might be interesting to examine the process as a candidate for
> >> >automation.
>
>
> It's been done, but the usual way is to hook the disk I/O routine and
> load a mini hard disk BIOS in high memory. Teltek, Konan and a few others
> did that. A better way is to provide slots that can be filled with the
> address of the driver(s). The reason for the difficulty is the wide
> assortment of controllers and the varied protocols to talk to them. If it
> was always IDE or SCSI it would be simpler.
>
> >Well, I don't see hand-feeding a set of parameters that one has to
> >determine by guessing on the basis of lots of conflicting folklore as
> >particularly easy. Authors who wrote about the process, e.g. Laird and
> >Cortesi, seemed to
>
> No folklore. There are detailed tables out there for every drive and
> disk going, if one cares to look. What do you think Multidisk does/is?
>
All I've seen of Multidisk is about half a dozen different formats, used
by a dozen or more different system vendors. Maybe there were later
versions, but since it requires that I know what its variables are, and I
want to determine what they are by an automatic process, I figure it's
solving a different problem.
>
> >equivocate considerably about this, and, while it's straightforward to
> >come up with a set of parameters that work, it's not easy to come up
> >with the optimal ones, at least where the HD is concerned. Both of the
> >authors I
>
> Optimal ones for hard disk in the timeframe they wrote in were simple:
> hard disks are FAST and Z80s (pre-1990) are NOT. No amount of
> optimization is possible. Actually, if you have banked memory, caching
> is the solution, as it steps neatly around the problems. FYI: the
> problem is that CP/M does a lot of relatively small transfers with
> lots of references to the directory. The true limiter to performance is
> not data rate but latency (mostly from shuffling the head). When Laird
> wrote, a fast drive was a Quantum D540 (31MB MFM 5.25" FH) with an
> average latency of around 30ms.
>
> >is in hand, it's easy, certainly, but what should one do, given a known
> >bootable but otherwise undefined boot diskette? The reality of the data
> >present on a boot diskette defines all the parameters necessary to
> >recreate it, doesn't it?
>
> No. The boot tracks are always written by a specialized utility like
> SYSGEN (which is not generic code) that is always system-specific.
>
True enough, but however the data gets there, we always know where it is,
and, by virtue of where, what it is as well.
>
> >I get emails from people all the time, asking about how to build a boot
> >diskette for a system they can't boot because they don't have a BIOS on
> >the diskette for the I/O ports they use.
>
> Most of those I converse with who have that set of questions have no
> sources to work from and find 8080 or Z80 asm code scary to terrifying.
> Often they don't know the ports in use nor what they mean. Rare is the
> one with docs for their system at the time the question is launched.
> They often think it's just like a PC, where DOS boots on anything if the
> disk fits.
>
> >Likewise, I get frequent questions about how to formulate an optimal
> >configuration for a hard disk. While it
>
> Like Laird said, and I'll say, _optimal_ for what? I'd never use the
> word optimal. Again, in my experience most want a drop-in replacement
> like a PC. Most do not code at that level or don't wish to try. Many
> don't have the docs needed. So what they want is not optimal, just
> something that works.
>
> >may not be a terrible thing, it is something many people, including
> >myself, though I've done it several times, find daunting. In the absence
> >of a rigorous method it's hard to find peace at the end of the task
> >because so many less-than-optimal solutions will work quite well. How's
> >a person to determine what's best?
>
>
> Let's see, I have five systems with hard disks; all were added later,
> two with code supplied. I find peace with the fact that they work and
> are reliable. Only one have I applied rigorous and experimental methods
> to, to the extreme, to see what was possible and effective... Occam's
> razor won most often.
>
> Here it is: hard disks and performance. Assume nothing about the hard
> disk used; rare is the old drive/controller that can really help you.
> DMA or a separate processor will help if the CPU is loaded or memory is
> short.
>
> Caching at the track or cylinder level with an LRU method really helps
> if you have space (64-128K is good). You will cache (call it a host
> buffer if you like) anyway, as most hard disks have sector sizes larger
> than 128 bytes, requiring deblocking. Caching the directory separately
> from the data area cache really pays, as it saves head thrashing.
> Achieve the above, or a subset, with direct and efficient code.
>
> I've tried this using an IDE drive (still working the code out), and
> most decent drives over 100MB have caching (the Quantum PRO AT series
> does). Use it, as it isolates you from things like skew and all.
>
> If you're using an old SA4000, forget all this, as making it work is
> three quarters of the battle.
>
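
The track-level LRU cache Allison describes above is simple enough to
sketch; the capacity and the read_track callback are assumptions:

    from collections import OrderedDict

    class TrackCache:
        # Sketch of an LRU track cache along the lines described above. A
        # second, smaller instance could hold the directory tracks
        # separately to save the head thrashing he mentions.
        def __init__(self, read_track, capacity=16):
            self.read_track = read_track    # fetches one raw track
            self.capacity = capacity
            self.tracks = OrderedDict()

        def get(self, cyl, head):
            key = (cyl, head)
            if key in self.tracks:
                self.tracks.move_to_end(key)    # mark most recently used
            else:
                if len(self.tracks) >= self.capacity:
                    self.tracks.popitem(last=False)   # evict least recent
                self.tracks[key] = self.read_track(cyl, head)
            return self.tracks[key]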

It does get much messier when you try to squeeze speed out of the system
in ways the ultra-slow CPU doesn't let you appreciate, but when I said
optimal, I meant for the technology of the time, which meant, at least to
me, getting the most hard disk space to fit into the parameters the system
would allow, without overly restricting either effective space utilization
or directory space. That seems to have been the key tradeoff of the time
... allocation block size versus number of directory entries. One other
factor was switching heads rather than moving the head stack. The heads
take at least 3 ms to move from track to track, plus 8 ms on average to
rotate half a rev, while switching heads took about 40-50 microseconds on
the early Seagate ST506s. The trick, to me, was always finding a way to
compute head, cylinder, and sector from the CP/M sector number you were
given by the BDOS without having to swallow up half a kilobyte in lookup
tables.
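
That computation reduces to integer division once the logical tracks are
ordered to exhaust the heads of a cylinder before stepping; a sketch, with
hypothetical ST506-class geometry:

    # Map the logical track/sector the BDOS hands the BIOS onto cylinder,
    # head, and physical sector by arithmetic alone -- switching heads
    # before moving the stack, per the tradeoff above. Geometry assumed.
    HEADS, SECTORS = 4, 32

    def chs(logical_track, logical_sector):
        cylinder, head = divmod(logical_track, HEADS)
        return cylinder, head, logical_sector % SECTORS

    print(chs(13, 100))                     # -> (3, 1, 4)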

> Allison
>
>
>