Robots again

From: Brian L. Stuart <stuart_at_colossus.mathcs.rhodes.edu>
Date: Wed Mar 18 09:33:47 1998

In message <Pine.LNX.3.96.980316012606.3960A-100000_at_lafleur.wfi-inc.com>, aaron
_at_wfi-inc.com writes:
>
>I just finished watching that Discovery program on Robots and was
>wondering if anyone involved in the list has previously/is currently
>working in that field

I decided to make one of my rare contributions to the list on this topic,
but out of respect for those who have quite rightly pointed out that this is
off-topic, this will be my only posting on the subject. My only excuses
are that the work I did in this area started over 10 years ago and the
topic at least has some scientific/engineering content in contrast to
much of what has filled my mail recently.

On people working in the area:
While I don't work in Robotics per se and I certainly don't claim to be
in the same league as those who were profiled, my doctoral work at
Purdue was on machine learning that could well be applied to robotic
control. The focus of the dissertation was to present a computational
model based on probabilistic automata that had several nice properties:
1) With a suitable source of reinforcement (a teacher), the model was
   complete in the sense that any probabilistic automaton could be
   learned. (These were the only proofs in the dissertation.)
2) The model showed (in experiments) about a dozen of the standard
   properties of classical and instrumental conditioning. As you might
   infer, the model had both reinforcement and non-reinforcement
   learning mechanisms.
3) The model could be implemented using a network of neurons that were
   at least somewhat biologically plausible. Such an implementation
   exists in the same sense that any real implementation of a Turing
   machine or a pushdown automaton exists: we can have a finite
   approximation to the model.
Unfortunately, after the work served its purpose (getting me a PhD),
I haven't really pursued it further.
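
Just to give the flavor of the idea, here is a toy sketch (this is not
the model from the dissertation; the state representation, learning
rate, and update rule are all made up for illustration) of a
probabilistic automaton whose transition probabilities are adjusted by
a reinforcement signal:

import random

class ProbabilisticAutomaton:
    """Toy probabilistic automaton whose transition probabilities
    are nudged by a scalar reinforcement signal (illustration only)."""

    def __init__(self, n_states, n_symbols, lr=0.1):
        self.lr = lr
        # P[state][symbol] is a distribution over next states
        self.P = [[[1.0 / n_states] * n_states
                   for _ in range(n_symbols)]
                  for _ in range(n_states)]
        self.state = 0
        self.last = None   # last (state, symbol, next) taken

    def step(self, symbol):
        probs = self.P[self.state][symbol]
        nxt = random.choices(range(len(probs)), weights=probs)[0]
        self.last = (self.state, symbol, nxt)
        self.state = nxt
        return nxt

    def reinforce(self, reward):
        # Raise (reward > 0) or lower (reward < 0) the probability
        # of the last transition taken, then renormalize.
        if self.last is None:
            return
        s, a, nxt = self.last
        probs = self.P[s][a]
        probs[nxt] = max(1e-6, probs[nxt] + self.lr * reward)
        total = sum(probs)
        self.P[s][a] = [p / total for p in probs]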

On Turing and his test:
In discussing the Turing test, we must be very careful to remember
the intent and interpretation. Even though the process of the test
asks if we can distinguish between a human and a machine in a limited
context, it in no way suggests that the machine and the human are
equivalent. So the test is not and was never meant to be sufficient
to identify a machine as having intelligence equal to a human. Further,
the test is not and was never meant to be a necessary condition for
intelligence. It is also not a definition of intelligence. What Turing
argued with the test was that if a machine was not reliably distinguishable
from a human in a non-trivial but limited (in time) setting, then we
would have to attribute to it some degree of some form of intelligence.
It may not be the same form of intelligence as humans and probably won't
be the same degree of intelligence, but it would certainly be more
intelligent than a toaster.

On analog computers:
I always feel like the odd man out when I talk about having actually
worked with analog computers. In fact, my farewell lecture at Rhodes
College where I taught for 6 years was on the subject of analog
computers. About 20 years ago, my first thoughts about AI were also
along the lines of a "stored program analog computer." It's quite
possible to imagine what such a concept might mean, but the difficulty
is determining what purpose it would serve. Remember that an analog
computer is basically a big differential equation solver. We can
certainly design it in a way that when certain conditions are reached,
we change the equations being solved, but how would we apply that
to AI? Also, the combination of an analog and a digital computer was
fairly common and was called a hybrid computer. EAI was one of the
manufacturers of such beasts. I still have fond memories of the
640/680 I used in college.
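
To make the "big differential equation solver" picture concrete, here
is a rough digital sketch of what a couple of integrators and a summer
patched together compute (the damped-oscillator equation and all of the
constants are an arbitrary example, not anything from a real machine):

# A digital stand-in for an analog patch: two integrators and a
# summer wired to solve x'' = -k*x - c*x' (a damped oscillator).
# The equation, constants, and "re-patch" condition are made up.
def simulate(k=4.0, c=0.5, dt=0.001, t_end=10.0):
    x, v = 1.0, 0.0         # initial conditions on the integrators
    t = 0.0
    while t < t_end:
        a = -k * x - c * v  # the summer forms the right-hand side
        v += a * dt         # first integrator:  v = integral of a
        x += v * dt         # second integrator: x = integral of v
        t += dt
        # A hybrid machine could sense a condition here and switch
        # to solving a different set of equations.
        if abs(x) < 1e-3 and abs(v) < 1e-3:
            break
    return x, v, t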

On Babbage's machines:
As has been pointed out, Babbage's machines were indeed digital in
nature. He used 10 discrete positions on wheels to represent the
10 possible values of a decimal (base 10, not fractional part)
digit. In fact, he went to great pains to make sure that he could
correct for any errors that got introduced due to mechanical wear.
He had no interest in anything approximate in the computational
process. Of course, the problem being solved was likely to involve
some approximation, usually of a transcendental function by a polynomial
(at least on the Difference Engine).
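
For anyone who hasn't seen the method of differences, the Difference
Engine produced a table of polynomial values using nothing but repeated
additions; a toy rendition (the particular quadratic is just an example)
looks like this:

# Tabulate p(x) = 2x^2 + 3x + 1 at x = 0, 1, 2, ... using only
# additions, the way the Difference Engine did.  For a quadratic the
# second difference is constant (here 4), so each new value costs
# just two additions.
def difference_engine_table(n_rows):
    value = 1        # p(0)
    first_diff = 5   # p(1) - p(0)
    second_diff = 4  # constant for a quadratic
    table = []
    for _ in range(n_rows):
        table.append(value)
        value += first_diff
        first_diff += second_diff
    return table

# difference_engine_table(5) -> [1, 6, 15, 28, 45], i.e. p(0)..p(4)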

Just a few thoughts on a vaguely related topic...

Brian L. Stuart