"Nobody programs in machine language", ReRe: "Who you callin Nobody?", ReReRe(...

From: Antonio Carlini <arcarlini_at_iee.org>
Date: Tue Jun 22 16:31:15 2004

> (1) Reliability is always more important. But memory/CPU cycles
> cannot be ignored when your customers are running benchmarks, and when
> you're trying to beat the competition using less expensive hardware
> than they are.

That's what the company I work for needs to do if we
expect to stay in business.

> (2) Yes indeed -- but being skilled at assembly
> language programming imposes a useful discipline that carries over
> into other languages.

In our case it is more like being skilled in understanding exactly
how the whole system hangs together. Most of the code (99.99%) needs
to be correct rather than fast (no one cares too much how fast
the web GUI is). The remaining (pretty small) percentage
gets pored over by multiple people, looking for a hint of a
performance gain.

Even that code is almost all in a HLL. In our case that may be
because the trick is to do as little as possible in s/w and
get the VLSI to do it instead.

> (3) Not true. A compiler will beat a poor
> assembly programmer all the time, and an average one much of the
> time.

I'm not sure that's true. A poor assembly programmer can probably
pick a lousy algorithm in an HLL too :-) If they are restricted
to implementing a specific algorithm, then I agree, these days
a compiler for a modern RISC processor should be able to whip
the pants off a poor or even average programmer all the time, and
even a good programmer much of the time.

> But a programmer can know more about the problem than the
> compiler can ever know (because the higher level language can't
> express everything there is to say about the problem) so an excellent
> programmer can always tie the compiler, and in selected spots can beat
> the compiler by a very large margin. It's important to know when to
> spend the effort, and that is also part of what marks an excellent
> programmer.

The trouble is that all programmers (good AND bad) think they
can predict where it is worth directing their effort. For real
world programs, all the evidence I've seen suggests that they
are both about equal in their predictive abilities: i.e. almost
always wrong!

> True, but about a year ago I spent a week or so on a routine that
> takes about 30% of the CPU, and (with the help of a CPU expert) made
> it 50% faster. It started out faster already than what the compiler
> could do; the end result is way beyond any compiler designer's fondest
> imagination.

The presence of numbers suggests you cheated by doing the unthinkable
and actually measuring the performance both before fiddling (to see
where you should play) and afterwards (to see what you'd done). Do
you realise how much trouble you could get into with such an
unconventional approach? :-)

Antonio
 
-- 
---------------
Antonio Carlini             arcarlini_at_iee.org
Received on Tue Jun 22 2004 - 16:31:15 BST

This archive was generated by hypermail 2.3.0 : Fri Oct 10 2014 - 23:36:59 BST