Long post below; I figured it'd be politer to make one long post rather than
lots of off-topic replies. And rather than responding directly to lots of
people, I thought I'd try to make Hofstadter's (and my own) position clearer,
and maybe that'll cast some light by which to find common ground.
Joachim Durchholz < XXXX@XXXXX.COM > wrote:
I haven't read those parts of GEB recently, but I'm not sure that's a fair
description of what he says. Goedel would still apply to the formal system in
which the mind is embedded. Goedel wouldn't apply to the top level if the
latter isn't a formal system in its own right.
Hofstadter's an AI researcher, supporting grad students writing programs to
model concepts and creativity, programs which run on modern computers, which
are FSMs (finite state machines). He isn't claiming we're more than an FSM.
More than a simply programmed FSM, like an expert system or many neural
networks, sure.
Back to the ants. I say an ant is an FSM, pheromones and all. Anyone
disagree? Well, let me say the behavior of my idealized ant is perfectly
emulatable by some FSM, and go on. The behavior of the anthill is also
emulatable by some FSM, namely one which is the sum of the FSMs of the ants,
plus perhaps enough spare states to deal with any possible future growth of
the anthill through making new ants. So, the anthill is an FSM. (Or
emulatable by an FSM, but I'm condensing my language now.)
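
To make the "sum of FSMs" concrete, here's a toy sketch in Python (my own
hypothetical ant model, not anything from GEB): each ant is a two-state
machine, and the anthill's FSM just has the tuple of all the ant states as
its state, so it's huge but still finite.

from itertools import product

# One hypothetical ant: two states, driven by a one-bit pheromone input.
ANT_STATES = ("foraging", "returning")

def ant_step(state, pheromone):
    if state == "foraging" and pheromone:
        return "returning"
    if state == "returning" and not pheromone:
        return "foraging"
    return state

def anthill_step(colony_state, pheromone):
    # The anthill FSM: its state is just the tuple of all the ant states.
    return tuple(ant_step(s, pheromone) for s in colony_state)

n_ants = 3
print(len(list(product(ANT_STATES, repeat=n_ants))))   # 2**3 = 8 states: finite
print(anthill_step(("foraging",) * n_ants, pheromone=True))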
But there's no guarantee that the anthill can be described by a smaller FSM,
or one which doesn't refer to the actual ants. Conversely, the anthill may be
quite describable by sloppy heuristic rules which are practically useful and
much more concise than the sum of FSMs, but which are "non-computational" in
that they're sloppy: sometimes wrong, and you can't perfectly predict when
the rules will be wrong. Unless, of course, you go back to the ants.
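
Continuing the toy model (the thresholds are made up, purely for
illustration): the ant-level simulation is always right, while the
anthill-level rule is shorter and usually close, but you can't tell when
it'll be off without looking at the ants.

def exact_returning(thresholds, pheromone):
    # Ant level: each ant returns iff the pheromone exceeds its own threshold.
    return sum(1 for t in thresholds if pheromone >= t)

def heuristic_returning(n_ants, pheromone):
    # Anthill level: "strong pheromone brings everybody home" -- concise, sloppy.
    return n_ants if pheromone >= 0.5 else 0

thresholds = [0.2, 0.4, 0.6, 0.9]   # hypothetical per-ant sensitivities
for pheromone in (0.3, 0.7):
    print(pheromone, exact_returning(thresholds, pheromone),
          heuristic_returning(len(thresholds), pheromone))
# At 0.3 the heuristic says 0 but one ant returns; at 0.7 it says 4 but only
# three do.  Roughly right both times, exactly right neither time.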
So. You can believe that neurons are modelable by FSMs, and that the brain is
modelable as one FSM (namely the sum of neuronal FSMs), but not describable by
perfectly accurate rules which ignore the neurons. Hence, non-computational
at a higher level.
Another analogy: bodies moving under Newtonian gravity. Ignoring for a moment
the apparent continuous nature of space and time, any such system is
computational in the sense that starting with perfect measurements of the
initial conditions, you can calculate the forces and step through time.
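
In code, "calculate the forces and step through time" is just numerical
integration. A rough Python sketch (naive Euler, arbitrary units, purely
illustrative, not a careful integrator):

def step(bodies, dt, G=1.0):
    # bodies: list of dicts with mass 'm', 2-D 'pos' and 'vel'.  One Euler step.
    forces = []
    for i, bi in enumerate(bodies):
        fx = fy = 0.0
        for j, bj in enumerate(bodies):
            if i == j:
                continue
            dx = bj["pos"][0] - bi["pos"][0]
            dy = bj["pos"][1] - bi["pos"][1]
            r = (dx * dx + dy * dy) ** 0.5
            f = G * bi["m"] * bj["m"] / (r * r)
            fx += f * dx / r
            fy += f * dy / r
        forces.append((fx, fy))
    for b, (fx, fy) in zip(bodies, forces):
        b["vel"][0] += fx / b["m"] * dt
        b["vel"][1] += fy / b["m"] * dt
        b["pos"][0] += b["vel"][0] * dt
        b["pos"][1] += b["vel"][1] * dt

bodies = [{"m": 1.0, "pos": [0.0, 0.0], "vel": [0.0, -0.5]},
          {"m": 1.0, "pos": [1.0, 0.0], "vel": [0.0, 0.5]}]
for _ in range(1000):
    step(bodies, dt=0.001)
print(bodies[0]["pos"], bodies[1]["pos"])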
But a two-body system is also computational at the level of the two bodies
(more accurately, given the continuities, it's analytical, but I'd rather
abuse my terms than switch them just now): you can ignore the forces and make
predictions about future times without having to step through the time in
between; you can predict where the bodies will be with a constant-time
algorithm.
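
Here's what "constant time" means in a deliberately special two-body setup
(two equal masses on a circular orbit about their common center; my own toy,
not a general solution): the position at any future time is one formula, with
no stepping in between.

import math

def circular_two_body(t, m=1.0, r=0.5, G=1.0):
    # Two equal masses on a circular orbit of radius r about their center.
    # Gravity G*m*m/(2r)^2 supplies the centripetal force m*omega^2*r.
    omega = math.sqrt(G * m / (4 * r ** 3))
    x, y = r * math.cos(omega * t), r * math.sin(omega * t)
    return (x, y), (-x, -y)

print(circular_two_body(0.0))
print(circular_two_body(1234.5))   # same cost however far ahead you ask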
But in a three-body system you're not guaranteed that's possible. It's
computational at the level of the forces, but not necessarily at a higher
level.
In GEB, I think Hofstadter's position was actually more aggressive, believing
that, while loops between levels were necessary and the top cognitive level
wouldn't be computational at its own level, we'd still be able to ignore the
neurons, i.e. the loops would be between the top level and middle subcognitive
levels, of "teams of ants" or "signals". In _Metamagical Themas_ he has
essays backing away from that somewhat, such as "Waking up from the Boolean