[OT] AI, FSMs, Hofstadter

Post by phoeni » Fri, 18 Jul 2003 09:48:46


Long post below; I figured it'd be politer to make one long one rather than
lots of off-topic replies. And rather than responding directly to lots of
people, I thought I'd try to make Hofstadter's (and my own) position clearer,
and maybe that'll cast some light by which to find common ground.

Joachim Durchholz < XXXX@XXXXX.COM > wrote:



I haven't read those parts of GEB recently, but I'm not sure that's a fair
description of what he says. Goedel would still apply to the formal system in
which the mind is embedded. Goedel wouldn't apply to the top level if the
latter isn't a formal system in its own right.


Hofstadter's an AI researcher, supporting grad students writing programs to
model concepts and creativity, programs which run on modern computers, which
are FSMs. He doesn't need us to be more than an FSM. More than a simply
programmed FSM, like an expert system or many neural networks, sure.

Back to the ants. I say an ant is an FSM, pheromones and all. Anyone
disagree? Well, let me say the behavior of my idealized ant is perfectly
emulatable by some FSM, and go on. The behavior of the anthill is also
emulatable by some FSM, namely one which is the sum of the FSMs of the ants,
plus perhaps enough spare states to deal with any possible future growth of
the anthill through making new ants. So, the anthill is an FSM. (Or
emulatable by an FSM, but I'm condensing my language now.)
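
For concreteness, here's a toy Python sketch of that combination (all the
names and ant behaviors are invented). The textbook way to merge two FSMs into
one is the product construction: the combined machine's state is the tuple of
the components' states, stepped in lockstep. Note the state count multiplies
with each ant you add, which is why the anthill machine is so much bigger than
any one ant.

    from itertools import product

    def make_ant():
        # Toy ant: forages until it smells pheromone, returns until it
        # reaches the nest. (Invented behavior, purely illustrative.)
        states = {"forage", "return"}
        def step(state, symbol):
            if state == "forage" and symbol == "pheromone":
                return "return"
            if state == "return" and symbol == "nest":
                return "forage"
            return state
        return states, step

    def combine(fsm_a, fsm_b):
        # Product construction: combined state = (state_a, state_b),
        # and both components step on every input symbol.
        states_a, step_a = fsm_a
        states_b, step_b = fsm_b
        states = set(product(states_a, states_b))
        def step(state, symbol):
            return (step_a(state[0], symbol), step_b(state[1], symbol))
        return states, step

    anthill_states, anthill_step = combine(make_ant(), make_ant())
    state = ("forage", "forage")
    for symbol in ["pheromone", "nest", "grass"]:
        state = anthill_step(state, symbol)
        print(symbol, "->", state)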

But there's no guarantee that the anthill can be described by a smaller FSM,
or one which doesn't refer to the actual ants. Conversely, the anthill may be
quite describable by sloppy heuristic rules which are practically useful and
much more concise than the sum of FSMs, but which are "non-computational" in
that they're sloppy, namely sometimes wrong, and you can't perfectly predict
when the rules will be wrong. Unless, of course, you go back to the ants.
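
As a toy illustration of such sloppy rules (every number and behavior here is
invented): step twenty ant FSMs exactly, and in parallel run a one-line
heuristic that only tracks the fraction still foraging. The heuristic ignores
the individual ants, is far more concise, and is usually close but sometimes
noticeably off.

    import random

    random.seed(1)
    N, P_FIND = 20, 0.1    # ants, chance an ant hits pheromone per tick

    def exact_step(states):
        # Exact model: step every ant's FSM individually.
        return ["return" if s == "forage" and random.random() < P_FIND
                else s for s in states]

    def heuristic_step(frac_foraging):
        # Heuristic: ignore the ants, pretend the foraging fraction
        # decays smoothly and deterministically.
        return frac_foraging * (1 - P_FIND)

    states, frac = ["forage"] * N, 1.0
    for t in range(5):
        states = exact_step(states)
        frac = heuristic_step(frac)
        actual = sum(s == "forage" for s in states) / N
        print(f"t={t}: heuristic={frac:.2f} actual={actual:.2f}")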

So. You can believe that neurons are modelable by FSMs, and that the brain is
modelable as one FSM (namely the sum of neuronal FSMs), but not describable by
perfectly accurate rules which ignore the neurons. Hence, non-computational
at a higher level.

Another analogy: bodies moving under Newtonian gravity. Ignoring for a moment
the apparent continuous nature of space and time, any such system is
computational in the sense that starting with perfect measurements of the
initial conditions, you can calculate the forces and step through time.

But a two-body system is also computational at the level of two bodies (more
accurately, given the continuities, it's analytical, but I'd rather abuse my
terms than switch them just now): you can ignore the forces and make
predictions about future times without having to step through them; you can
quickly predict where the bodies will be with a constant-time algorithm.

But in a three-body system you're not guaranteed that's possible. It's
computational at the level of the forces, but not necessarily at a higher
level.
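
To make the contrast concrete, a hedged Python sketch (the units, masses, and
the circular-orbit simplification are all mine): the two-body case, idealized
to a light body circling a heavy one, has a closed form you can evaluate at
any t in constant time, while the general n-body case falls back to stepping
the forces through time.

    import math

    G = 1.0  # invented units

    def two_body_position(t, radius=1.0, central_mass=1.0):
        # Closed form for a light body on a circular orbit around a
        # heavy one: constant angular velocity, O(1) in t.
        omega = math.sqrt(G * central_mass / radius**3)
        return (radius * math.cos(omega * t), radius * math.sin(omega * t))

    def n_body_step(pos, vel, masses, dt):
        # General case: compute pairwise forces, take one small step.
        for i in range(len(pos)):
            ax = ay = 0.0
            for j in range(len(pos)):
                if i == j:
                    continue
                dx, dy = pos[j][0] - pos[i][0], pos[j][1] - pos[i][1]
                r3 = (dx * dx + dy * dy) ** 1.5
                ax += G * masses[j] * dx / r3
                ay += G * masses[j] * dy / r3
            vel[i] = (vel[i][0] + ax * dt, vel[i][1] + ay * dt)
        for i in range(len(pos)):
            pos[i] = (pos[i][0] + vel[i][0] * dt,
                      pos[i][1] + vel[i][1] * dt)

    print(two_body_position(1000.0))       # any t, constant time
    pos = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
    vel = [(0.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
    for _ in range(1000):                  # three bodies: step through time
        n_body_step(pos, vel, [1.0, 0.001, 0.001], dt=0.001)
    print(pos)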

In GEB, I think Hofstadter's position was actually more aggressive: believing
that, while loops between levels were necessary and the top cognitive level
wouldn't be computational at its own level, we'd still be able to ignore
neurons, i.e. the loops would be between the top level and middle subcognitive
levels, of "teams of ants" or "signals". In _Metamagical Themas_ he has
essays backing away from that somewhat, such as "Waking up from the Boolean
Dream".
 
 
 

[OT] AI, FSMs, Hofstadter

Post by Joachim Durchholz » Fri, 18 Jul 2003 15:21:35

Damien Sullivan wrote:

The latter is, of course, true, but I found no arguments in GEB that the
human mind is indeed more powerful than a formal system.


Actually, an ant isn't an FSM either, since the transitions are "fuzzy"
- but I agree insofar as I think that the essential aspects of an ant
can be emulated by an FSM.

The problem is that even an ant displays chaotic behaviour, which means
that microscopic phenomena may magnify into macroscopic effects, and
once we have chaos, we have to take quantum theory into account - and at
that point, we lose all similarity with an FSM.
I'm still with you insofar as I assume that an ant is still
*essentially* an FSM. But it's not undisputed ground anymore.


Agreed.

> Hence, non-computational at a higher level.

Well, right - but that's not an interesting result. I can always create
a non-computational model of anything by abstracting away from some of
its properties (or by being imprecise if you will).


We can get rid of the problems of continuity by wording the problem as:
"can we determine the position and velocity of the bodies at any given
point, with arbitrary precision?"
An FSM would not be enough to do this sort of calculation, but we're
still within a formal system, so all the relevant arguments still apply.
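
A small illustration of that wording (a toy example, not from the thread): a
fixed FSM has finitely many states, so no single FSM can hold n digits for
every requested n, but a program with unbounded memory answers the
arbitrary-precision query and is still a plain algorithm, i.e. still a formal
system.

    from decimal import Decimal, getcontext

    def sqrt_to(n_digits, x=2):
        # Answer "the square root of x, to n digits" for any n.
        # Unbounded n rules out any fixed finite-state machine, but
        # the procedure is still perfectly formal.
        getcontext().prec = n_digits
        return Decimal(x).sqrt()

    for n in (10, 30, 50):
        print(n, sqrt_to(n))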


It's computational at all levels.
Whether the computation is carried out by evaluating a closed formula or
by using some algorithmic iteration scheme is entirely irrelevant. It's
all algorithms, hence it's a formal system and incapable of escaping
Goedel's limitations.
A formal system doesn't become nonformal just by ignoring low-level
detail. You can model formal systems with nonformal systems, and this
might be useful, but it definitely does *not* make the formal
system somehow transcend the incompleteness theorem.


Actually I'm pretty sure that something that *looks* intelligent can be
programmed. That intelligence will be non-human (though people will try
to make it appear as human as possible, to make the interaction easier).
In what ways such an artificial intelligence is similar to and different
from human intelligence would be an interesting question, but some of
these will stay unanswerable (such as whether there's a "soul" in that
AI or not, or whether it has a consciousness).
Actually, these issues have always been settled by society, not by
science, philosophy, or metaphysics. There were places and times when
people thought that women had no mind!
I see no reason why this should change in the future. Society will
decide that robots have a mind as soon as they exhibit the behaviour
that society associates with a mind. It's not a question that can be
scientifically researched (unless you're into sociology), and I think
that most AI researchers wisely decided to concentrate on the various
feats that intelligence can do, and how to reproduce them best using a
computer, rather than trying to reproduce something as ill-defined as
"mind".


I think this is confusing levels.
Self-awareness is one form of self-reference, but it's entirely unclear
whether the self-reference of a formal system shares any relevant
properties with self-awareness. It's a classical pattern: we have two
things that are similar in one way, and we have nagging open questions
about both, so we equate them - this doesn't really explain anything,
but we have reduced the number of unexplained problems.
 
 
 

[OT] AI, FSMs, Hofstadter

Post by Dylan Thur » Sat, 19 Jul 2003 02:32:27


This is a misunderstanding of quantum computing.

Any quantum state can be expressed as a superposition of classical
states, with complex phases. As such, its evolution can be simulated by
simulating the evolution of all the classical states, with occasional
mixing between the different classical states. So quantum computation
does not change the class of computable functions, only (possibly) the
polynomial-time-computable functions.

The only non-determinism comes when this quantum system interacts with
its environment in producing the final output.
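
A minimal state-vector sketch of that claim (toy code, not any real quantum
library): the state is a list of complex amplitudes over the classical basis
states, a gate like Hadamard deterministically mixes them, and randomness
enters only at measurement, where a classical state is sampled with
probability |amplitude|**2.

    import math, random

    def hadamard(state, qubit):
        # Apply H to one qubit of a state vector of 2**n amplitudes.
        # Completely deterministic: it just mixes the amplitudes.
        h = 1 / math.sqrt(2)
        out = [0j] * len(state)
        for i, amp in enumerate(state):
            flipped = i ^ (1 << qubit)
            if (i >> qubit) & 1 == 0:
                out[i] += h * amp
                out[flipped] += h * amp
            else:
                out[flipped] += h * amp
                out[i] -= h * amp
        return out

    def measure(state):
        # Non-determinism enters only here: sample a classical state
        # with probability |amplitude|**2.
        probs = [abs(a) ** 2 for a in state]
        return random.choices(range(len(state)), weights=probs)[0]

    state = hadamard([1 + 0j, 0j], qubit=0)     # (|0> + |1>) / sqrt(2)
    print([measure(state) for _ in range(10)])  # roughly half 0s, half 1s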

Also note that while it is often helpful to think of the quantum
computation as many classical computations in parallel, a quantum
computer is less powerful than that model suggests; there are only
certain restricted measurements you can make on the results of the
parallel computation.

Peace,
Dylan
 
 
 

[OT] AI, FSMs, Hofstadter

Post by phoeni » Sat, 19 Jul 2003 04:43:42


I'd better correct this, now that I've looked at the books. The "backing away"
I was thinking of is in the P.S. to the third essay on Lisp, and refers to a
section in Chapter X of _GEB_ called "AI Advances are Language Advances".
That section expressed a vision of more and more high level and abstract
concepts being built in language after language, until you could formally
express an AI. This vision was arguably in conflict with the rest of GEB and
anyway is what he backed away from.

-xx- Damien X-)