Apple II for intro to microprocessors

From: Richard Erlacher <edick_at_idcomm.com>
Date: Wed Jul 18 14:14:29 2001

see inline comments, plz.

Dick

----- Original Message -----
From: "Mike Ford" <mikeford_at_socal.rr.com>
To: <classiccmp_at_classiccmp.org>
Sent: Wednesday, July 18, 2001 11:03 AM
Subject: Re: Apple II for intro to microprocessors


> >The things the more generously appointed systems like the AIM or the Apple
> >don't allow you to do are (1) use address lines as device selects, (2) use
> >ambiguous decoding, (3) adapt the processor speed to the code being
> >executed, (4) anything else that involves changing system memory mapping or
> >system timing to suit the target application. That's what I mean by "their
> >features get in your way."
>
> >but it means, at the outset, that you have to learn about the Apple rather
> >than the target.
>
> Precisely, but once you KNOW the Apple II, you know it for all future
> projects. I basically followed the rules for expansion cards, which allowed
> me to use address lines as device selects. If by ambiguous decoding you
> mean using just a "few" address lines, sure: I included the slot enable
> signal or whatever it was (partial address decoding was done for me on the
> Apple II for each slot).
>
Yes, but the original question was about learning microprocessors, which are
hardware components, not system/software products. Using the Apple requires a
lot of effort dedicated to learning tools that don't really need to be learned
in this context.
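
To put the decoding point in concrete terms, here's a rough C model of what
the Apple II's per-slot decoding buys you. The $C08x ranges are quoted from
memory, so treat the constants as illustrative rather than gospel:

#include <stdint.h>
#include <stdio.h>

/* Rough model of the Apple II's per-slot decoding (constants from memory,
 * so treat them as illustrative).  DEVICE SELECT for slot n is asserted
 * for the 16 addresses $C080 + n*$10 through $C08F + n*$10, so a card
 * only has to look at A0-A3 to pick one of its sixteen registers; the
 * motherboard has already done the coarse, "ambiguous" decoding for it. */
static int devsel_slot(uint16_t addr)
{
    if (addr < 0xC090 || addr > 0xC0FF)     /* peripheral slots 1-7 only */
        return -1;                          /* no DEVICE SELECT asserted */
    return (addr - 0xC080) >> 4;            /* which slot's line fires   */
}

/* On the card itself the low address lines serve as register selects. */
static unsigned card_register(uint16_t addr)
{
    return addr & 0x000Fu;                  /* A0-A3 = register number   */
}

int main(void)
{
    uint16_t a = 0xC0A3;                    /* a made-up access          */
    printf("address $%04X -> slot %d, register %u\n",
           (unsigned)a, devsel_slot(a), card_register(a));
    return 0;
}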
>
> Number (3) seems like really bad form, making a circuit that depends on the
> processor running at some specific rate. Much better to write code that is
> "aware" of the processor speed and compensates for it.
>
Number (3) is very GOOD form, since microprocessors are HARDWARE, and hardware
should be optimized for each task it performs. Microprocessors are just
substitutes for dedicated hardware, and, in the examples I mentioned before,
they'd fall apart if they had to operate at rates even a few percent off the
rates for which they were designed. If you limit a processor to one oscillator
frequency, where you started out with a virtually unlimited capability, you end
up with a VERY limited system. Making the thing faster won't always make it
better; making it operate at the correct speed, so that a loop can be precisely
synchronized with a process running at an independent rate, will make things
work. This wouldn't be an issue, of course, if the processor and memory in an
Apple or AIM were infinitely fast, but they're not. If you fiddle with the
AIM's oscillator, it might work, but until you study out the precise effects on
timing, say, of the print resistors in the printer, you may find that you damage
things, or, if you speed it up, the motor in the printer may not work. It's
true, you could rewrite the code ... but I don't want to do that. Rather than a
$300-400 AIM, I'd rather buy a $90 component (prices are based on 1979, when the
AIM was new, as was the 6801L1) and build the target hardware so it runs at a
rate that suits the application. It's HARDWARE, after all, and it should suit
the application, not the development system.
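
To illustrate the point about picking the clock, here's a small C sketch. The
cycle count and the external rate are invented; the arithmetic is the whole
story:

#include <stdio.h>

/* The point of item (3): a counted loop burns a fixed number of CPU
 * cycles per pass, so its real-time rate is f_clk / cycles_per_pass.
 * If that loop has to stay locked to an external process, the simplest
 * fix is to choose the oscillator, not to rewrite the loop.  All the
 * numbers here are invented for illustration.                         */
int main(void)
{
    const double cycles_per_pass = 250.0;    /* hypothetical counted loop */
    const double external_rate   = 2400.0;   /* passes/second we must hit */

    /* Crystal needed so one pass lines up with one event of the process. */
    double f_clk = cycles_per_pass * external_rate;
    printf("required clock: %.0f Hz (%.3f MHz)\n", f_clk, f_clk / 1e6);

    /* Locked to the Apple's ~1.023 MHz instead, the same loop runs at:   */
    double apple_rate = 1.023e6 / cycles_per_pass;
    printf("same loop at ~1.023 MHz: %.0f passes/s (wanted %.0f)\n",
           apple_rate, external_rate);
    return 0;
}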
>
> Number (4) never bothered me either. If the target system was sufficiently
> different, then I worked on the Apple II and downloaded code or burnt
> EPROMs that ran on the target. Sometimes I used conditional assembly, i.e.,
> set a flag and code would compile that ran in the Apple II; set a different
> one and the code ran in the target.
>
Those techniques are fine, but they don't relate to the task of learning about
microprocessors. They may be involved in a given implementation, but they don't
address the needs of a student learning how to build microprocessor-based
hardware. Elegant hardware is the simplest, least complicated, least costly
design that does the job while meeting all the specs, including reliability.
In some applications, 99% was good enough; computers, including
microprocessor-based hardware, that don't achieve a part-in-10^9 reliability
need thorough study, however.
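
For what it's worth, the conditional-assembly trick Mike describes maps
straight onto conditional compilation. A minimal C sketch, with TARGET_BUILD,
the address, and the "ready" bit all invented for illustration:

#include <stdint.h>

/* Same source, two builds: define TARGET_BUILD for the EPROM that runs on
 * the target, leave it undefined to exercise the logic on the development
 * machine.  TARGET_BUILD, the address, and the "ready" bit are all
 * invented for this sketch.                                             */
#ifdef TARGET_BUILD
#  define STATUS_PORT ((volatile uint8_t *)0xC0A0)    /* target hardware */
#else
static uint8_t fake_status = 0x01;                     /* host stand-in   */
#  define STATUS_PORT (&fake_status)
#endif

int device_ready(void)
{
    return (*STATUS_PORT & 0x01) != 0;    /* bit 0 = ready, by assumption */
}

int main(void)                            /* trivial host-side smoke test */
{
    return device_ready() ? 0 : 1;
}

Build it with -DTARGET_BUILD (or set the equivalent flag in the assembler) for
the EPROM image, and without it to test the logic on the development machine.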
>
> Since the whole point of this is learning microprocessors, what is the big
> problem in learning the MOST elegant implementation of the Apple II?
>
The problem is that if you write code that relies on a 0.768 MHz Phase-0 clock,
it's awfully difficult to test in an Apple. The hardware/firmware boundary is
probably the most critical issue in microprocessor system design. That's what
makes the Apple as good as it is at what it does, but it's also what makes it as
poor a solution as it is for what it doesn't do well. It has absolutely no
flexibility in its execution rate because it's tightly coupled with the video
circuitry. There's even a "skip-a-beat" cycle with which it keeps in sync with
the color burst. That's something someone pointed out to me back in the late
'70s when the Apple was new, and, while it's a clever way to do what they
needed to do to make the Apple what it is, it's exactly the thing I'm saying
makes the Apple a poor environment for learning about using microprocessors,
which are, after all, a hardware element. What platform you use to develop
software is completely arbitrary, since anything at all will work, whether it's
a 1 Hz 4004 or a 1.5 GHz Pentium; the timing may vary, though. It takes a while
to learn the stuff you need to know in order to use the Apple, just as it takes
a while to learn nearly any environment. However, learning about the way
software is implemented when your job is to produce hardware is something that
CAN get you fired, and always WILL make the schedule slip.
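
Just to put a number on the 0.768 MHz example: the Apple's CPU clock is roughly
1.023 MHz, so every cycle-counted delay comes out about a quarter short. A
quick check, with nominal figures:

#include <stdio.h>

/* A delay written as counted cycles for a 0.768 MHz clock shrinks when the
 * same code runs on the Apple's roughly 1.023 MHz Phase-0 clock (the long
 * "keep in sync with the color burst" cycle is ignored here; it changes
 * the average rate by well under a percent).                             */
int main(void)
{
    const double f_target = 768e3;     /* clock the code was counted for */
    const double f_apple  = 1.023e6;   /* nominal Apple II Phase-0 rate  */

    double scale = f_target / f_apple; /* actual / intended duration     */
    printf("delays run %.0f%% of intended length, i.e. about %.0f%% short\n",
           scale * 100.0, (1.0 - scale) * 100.0);
    return 0;
}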
>
>