Another NL thought ...

Post by Steve Rich » Mon, 05 Sep 2005 05:36:42

Steve's Theory:

We each have a world-model behind our eyeballs. Part of each of our
world-models is some idea about what is in other people's world models.
Sometimes we seek to transfer some small portion of our world-model to
another person, so we transmit a set of relationships TO THEIR suspected
world model rather than to our own world model.

These world-model updates are commonly called NL. This process is
analogous to applying updates to a different version of source code in a
source code control system. Of course, this never works right on the
first try, which is why analyzing NL always finds unresolvable
problems, and why we get such interesting arguments here.
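Steve's source-control analogy can be sketched in a few lines of Python. This is a toy illustration of my own construction (the dict-of-beliefs representation and all names are invented for the example, not taken from Dr. E1iza): an utterance is a "patch" the speaker prepares against their *guess* of the listener's world model, and conflicts surface when the guess is wrong.

```python
# A "world model" as a dict of beliefs; an NL utterance as a patch the
# speaker diffs against their *suspected* copy of the listener's model.

def make_patch(speaker_model, suspected_listener_model):
    """Diff: beliefs the speaker holds that they suspect the listener lacks."""
    return {k: v for k, v in speaker_model.items()
            if suspected_listener_model.get(k) != v}

def apply_patch(listener_model, patch):
    """Apply the patch, reporting conflicts with existing beliefs."""
    conflicts = {k: (listener_model[k], v) for k, v in patch.items()
                 if k in listener_model and listener_model[k] != v}
    merged = {**listener_model, **patch}
    return merged, conflicts

alice = {"sky_color": "blue", "polygamy": "destabilizes"}
bob = {"polygamy": "stabilizes"}
bob_suspects_alice_has = {"sky_color": "blue"}

patch = make_patch(bob, bob_suspects_alice_has)
merged, conflicts = apply_patch(alice, patch)
print(conflicts)  # the mismatch that fuels the "interesting arguments"
```

The conflict dict is exactly the "unresolvable problem" the analogy predicts: the patch applied cleanly against Bob's guess of Alice's model, but not against Alice's actual model.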

Looking at source code updates may tell us some useful details about the
software being updated, e.g. what is broken, and what recent innovations
might fix it, but until substantially everything has been updated there
is no way to see substantially all of the software, which you MUST see
to tell how it works. Especially, things that are NOT broken have no
reason to be updated, so you'll never see them updated, so you ONLY see
the problems! Similarly, until every aspect of human living has been
discussed in depth, there is little hope of understanding our world by
examining our NL. Even then, there is SO much that we take for granted
that others know about the real world, that the MOST fundamental things
will NEVER be discussed.

Dr. E1iza gets past this by ONLY extracting the problem areas from the
"updates" that we call NL. Beyond that, extraction becomes pretty
fragmentary as people presume that the listener knows WAY more than any
computer now knows, and perhaps more than any computer will ever know
until they walk and work among us.

It seems useless to invest years/lives in pushing back the fuzzy
boundary of what ELSE you can learn from NL's fragmentary updates
without a pretty clear idea of where this is heading and how the goal,
whatever it is, is going to be reached.

I've read many postings here by people who seem to honestly believe
that once they can accurately "parse" NL that they will have achieved
some holy goal that will quickly lead them to success and riches. It
seems from the theory being presented here that all they will have
accomplished is being able to "read" the world-model updates, without
having a working world model to apply those updates to. It isn't clear
that there is any paying real-world application for this beyond some
really narrow applications like better search engines. Am I missing
something here?

It seems that efforts to "understand" NL should address the fundamental
limitations that are imposed by this update process if they are to have
any real credibility.

Any thoughts? Are there "credible" (as defined above) applications for
an NL parser? As I have explained in prior postings, generating foreign
language text from a correct parsing is every bit as difficult and
probably even more difficult than the original parsing, so don't point
at translation as a credible application unless you have an answer
to the generation problem, which is ALSO a series of updates to a world
model - but whose?!

Steve Richfie1d

Another NL thought ...

Post by Claudio Gr » Mon, 05 Sep 2005 09:52:57

I may not agree with every detail of Steve's NL thought, but generally I
see it his way.

I am _very_ interested in the
"MOST fundamental things, that will NEVER be discussed".
Can you Steve (or someone else) please name them or at least tell me what
can and must be done to get them revealed?


"Steve Richfie1d" < XXXX@XXXXX.COM > wrote in message
news: XXXX@XXXXX.COM ...


Another NL thought ...

Post by Steve Rich » Mon, 05 Sep 2005 11:19:12


They are the countless lessons we learn in our first year. A few of the
countless examples include:

Pain hurts and is to be avoided.
You can't hold your breath forever.
Look where you are going or you will run into something.
The sun is too bright to look at.
The floor is hard and hurts if you fall on it.
For many years, children are smaller than their parents.
That smelly stuff in your diapers came from YOU.
There is a daily cycle of sunlight and darkness.
People have two arms, two legs, and one head.
Some people are born female, the rest are born male.

This list is nearly endless. I'd guess that there are over a thousand of
these VERY basic facts gained in your first year alone, with WAY more
gained as you grow up, go through school, etc. Exceptions to the above
rules, like sex changes, padded carpets, pain medications, amputees,
SCUBA apparatus, etc., etc., would be discussed, but the underlying
universally accepted facts are NEVER discussed in normal conversation.
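One way to make the point concrete is to show how shallow the machine-side encoding of such "never discussed" facts would be. The sketch below is my own invention (the triple format and every fact string are assumptions for illustration, not any actual Dr. E1iza or Cyc representation):

```python
# First-year common sense as subject / relation / value triples.
FACTS = [
    ("pain", "valence", "to_be_avoided"),
    ("breath_holding", "duration", "finite"),
    ("sun", "brightness", "too_bright_to_look_at"),
    ("floor", "hardness", "hard"),
    ("person", "arm_count", 2),
    ("person", "leg_count", 2),
    ("person", "head_count", 1),
    ("sunlight", "cycle", "daily"),
]

def query(subject, relation):
    """Return the stored value, or None if the fact was never learned."""
    for s, r, v in FACTS:
        if s == subject and r == relation:
            return v
    return None

print(query("person", "arm_count"))   # a fact nobody ever states aloud
print(query("person", "wing_count"))  # absent: never learned, never discussed
```

The encoding itself is trivial; the hard part, as the post argues, is that the thousands of entries it would need are precisely the ones conversation never supplies.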

What gets REALLY interesting is when someone challenges one of these
very basic never-discussed facts. I attempted to demonstrate this on
another forum by exhibiting one example. I made the test demonstration
statement that "polygamy stabilizes societies" as an example that
challenged everyone's disparaging view of something that they really
knew nothing about. There were dozens of postings from people who were
incensed about the assertion, but NOT ONE posting by anyone who could
see either how this statement was demonstrating the logical principle
under discussion, or even a posting that addressed the content
(specifically, "stabilizes") of the test statement! I wasn't looking for
agreement or disagreement, just to demonstrate the impact of
foundation-challenging statements.

The bottom line is that people are, on the whole, VERY irrational
creatures who are not at all interested in examining their own
irrationality. Duplicating this irrationality in a computer, even if
possible, seems (to me) to be of VERY dubious value.


Another NL thought ...

Post by Claudio Gr » Tue, 06 Sep 2005 18:20:11

> I am _very_ interested in the
> "MOST fundamental things, that will NEVER be discussed".

I thought this all is covered by the common sense
databases like e.g. OpenCyc .

Have I got something wrong?


"Steve Richfie1d" < XXXX@XXXXX.COM > wrote in message
news: XXXX@XXXXX.COM ...


Another NL thought ...

Post by Steve Rich » Wed, 07 Sep 2005 06:48:27


Certainly not ALL of it. I hadn't looked at this before, so I spent some
time browsing the write-ups, examples, etc.

I'm not sure. You obviously know a lot more about this site than I do.
I'll make some impressionistic comments to show what *I* got from the
web site and allow you to correct me.

What I saw from the web site was more of a dictionary than a common
sense sort of thing. Maybe they just picked poor examples, or perhaps
they did them poorly. If you have access to the actual database, perhaps
the best test would be to take my list of examples and find the closest
fit to each of them you can. If, say, half of them can be found in the
database, then you might infer that with twice the number of common
sense facts you would have at least the common sense of a
one-year-old child.

To illustrate, they presented the entry for "RoadVehicle" indicating
that they are designed to travel on roads. Of course, the fact that they
are "designed" at all is a bit off the subject, and certainly some
RoadVehicles are doubtless made freehand with a torch (I know a guy who
does this, a sort of "Road Warrior", with less than a grade school
education). The key point is that they DO travel on roads, which wasn't
at all clear from the entry. Further, such vehicles have some other
common characteristics, e.g. they have exactly one driver, they are
painted, they have windows, headlights, bumpers, and other things
required by law, they travel on tires made of rubber, etc.

As Joe Devin noted in a recent posting, one of THE most important things
to know is WHAT characteristics of a noun phrase are modifiable. Without
this, you cannot successfully parse any but the simplest of sentences.
For example, it would be very useful to know that RoadVehicles have
color, and that red is a color, so if you see red and RoadVehicle in the
same sentence, you can semi-safely presume that one modifies the other,
though there ARE exceptions, e.g. "The red man drove the RoadVehicle."
Of course, here, men can have certain colors, e.g. red, white, and
black. Other colors like blue can imply something other than color, e.g.
sadness, e.g. in "The blue man drove the RoadVehicle.", what color is
the man? Almost certainly NOT blue!
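The disambiguation Steve describes can be sketched directly: given common-sense tables of which attribute values each noun can plausibly take, decide what an adjective may modify. The tables below are my own toy assumptions, not data from any real lexicon:

```python
# Which attributes each noun can have, and the plausible values of each.
CAN_HAVE = {
    "roadvehicle": {"color": {"red", "blue", "white", "black"}},
    "man":         {"color": {"red", "white", "black", "brown", "tan"},
                    "mood":  {"blue", "depressed"}},
}

def attach(adjective, nouns):
    """Return (noun, attribute) pairs the adjective can plausibly modify."""
    return [(n, attr)
            for n in nouns
            for attr, values in CAN_HAVE.get(n, {}).items()
            if adjective in values]

# "The red man drove the RoadVehicle." -> genuinely ambiguous:
print(attach("red", ["man", "roadvehicle"]))
# "The blue man ..." -> for the man, only a frame of mind, not a color:
print(attach("blue", ["man"]))
```

A parser using such tables would "semi-safely presume" the single surviving attachment for "blue man", while flagging "red man ... RoadVehicle" as needing further context, which is exactly the behavior the paragraph calls for.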

Compare "A bright and shining red caught my eye, as I watched the man
drive the fire engine." to "The black was definitely out of place - as I
watched the New Orleans refugees at the Houston Astrodome." In each of
these sentences, the color applies to a completely different part of the
sentence because we know that fire engines are red and the Houston
Astrodome is not black. Imagine the sorts of common sense knowledge
needed to figure THAT out.

I can STILL remember an incident when I was 5 years old. My mother had
taken me for my first ride on a Seattle transit bus, and a black man
climbed aboard - the first one that I had EVER seen. I pointed to him
and announced in a loud voice "Look at that dirrrrrty man over there!"
as my mother quickly wrapped her hand over my mouth and explained in a
whispering voice that some people are actually BORN that way! Sometimes,
important common knowledge is learned late.

Perhaps what is needed is a gigantic e-book of common sense knowledge
(who needs an API?!), that everyone just feeds into their NL program and
then checkpoints the tables in preparation for human use.

Of course, trying to actua

Another NL thought ...

Post by Mike » Wed, 07 Sep 2005 07:33:51

"Claudio Grondi" < XXXX@XXXXX.COM > wrote in message
news: XXXX@XXXXX.COM ...

Most of these "fundamental things" are useless to the majority of AI or
software systems.
(Unless, for example, you are programming an autonomous robot that lives and
grows among us, or you're designing an expert system where the domain
pertains to developmental psychology.)

The domain-independent guts (such as learning, reasoning with facts,
reasoning about reasoning, use of proper/idiomatic dialog structure, etc.)
are probably most important for most future hard-AI applications - but these
are also the least likely to be codified in a way that can be imported into
a system which by definition already has to deal with most of these
issues to operate in the first place.

Even within a specific domain, doing something useful with a database like
Cyc, other than mining for very narrowly defined cross-sections, is a
daunting task.



Another NL thought ...

Post by Mike » Wed, 07 Sep 2005 07:47:44

"Steve Richfie1d" < XXXX@XXXXX.COM > wrote in message
news: XXXX@XXXXX.COM ...

You should spend some time reading all the Cyc papers, and not just OpenCyc,
but the whole enchilada. It's not just a database of facts, but of
higher-level constructs and an engine for dealing with all of it. (It's been
15 years or so (pre-CycCorp) since I looked at it, though.)



Another NL thought ...

Post by Steve Rich » Wed, 07 Sep 2005 08:41:50


There are two levels of this. At the domain level, you are absolutely
right. However, some/much of this information is needed even for basic
parsing just to identify what modifies what in a sentence, before you
ever get near to dealing with domain-specific issues. THIS early use of
such information seems to be what is so immediately needed by many people.


I think there is a bootstrap procedure that would work, e.g. where
Chapter One would consist only of very simple sentences whose parsing
would be trivial and obvious even to a computer, e.g...

Vehicles can have color.
Red is a color.
Blue is a color.
Depressed is a frame of mind.
Blue is a frame of mind.

Chapter Two would consist of more complicated sentences requiring
knowledge from Chapter One, etc., e.g...

People can only have the colors white, black, brown, and tan.
People can have any frame of mind.
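The bootstrap idea above lends itself to a short sketch. This is only a minimal illustration under my own assumptions (two hard-coded sentence templates and in-memory sets; a real system would need far more patterns): Chapter One sentences are parsed by trivial pattern matching, and the facts they yield are what make a Chapter Two question answerable.

```python
import re

is_a = {}        # e.g. "red" -> {"color"}
can_have = {}    # e.g. "vehicle" -> {"color"}

def learn(sentence):
    """Parse a Chapter One sentence with two trivial templates."""
    s = sentence.strip().rstrip(".").lower()
    m = re.fullmatch(r"(\w+) is a ([\w ]+)", s)
    if m:
        is_a.setdefault(m.group(1), set()).add(m.group(2))
        return
    m = re.fullmatch(r"(\w+?)s? can have (\w+)", s)  # lazy: strips plural -s
    if m:
        can_have.setdefault(m.group(1), set()).add(m.group(2))

for line in ["Vehicles can have color.",
             "Red is a color.",
             "Blue is a color.",
             "Blue is a frame of mind."]:
    learn(line)

def plausible_modifier(adj, noun):
    """A question only answerable with Chapter One knowledge in hand."""
    return bool(is_a.get(adj, set()) & can_have.get(noun, set()))

print(plausible_modifier("red", "vehicle"))
print(plausible_modifier("red", "idea"))
```

The point of the chapter ordering is visible in the last two lines: without the Chapter One facts already loaded, neither question can be decided at all.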

I'm working on narrow domain-specific applications with Dr. E1iza, and
from that vantage point I doubt whether anyone else's database would be
of enough value to even bother converting formats! The apparent KEY to
domain-specific success is the CAREFUL structuring of the
domain-specific knowledge. If someone else already figured out how to
structure it, then they are probably already ahead in the marketplace.
Without that, I do better starting from a blank screen. In short, you're
right and then some.


Another NL thought ...

Post by Ted Dunnin » Wed, 07 Sep 2005 09:08:37

> I thought this all is covered by the common sense
> databases like e.g. OpenCyc

Somebody should have let you in on the joke. Cyc has been a colossal
waste of time and, as far as I know, hasn't ever been a useful
component of any working system. Certainly not part of any system with
any practical import. Why Cyc isn't much use isn't at all clear, but
my own guess is that a hand-coded approach to machine intelligence runs
afoul of software complexity issues long before it is useful.

> Have I got something wrong?

Nah... you just believed a relic of 80's AI hype.

Another NL thought ...

Post by humigue » Thu, 08 Sep 2005 17:00:57

Steve Richfie1d < XXXX@XXXXX.COM > wrote:

As I understand it, a world model is just a bag of facts that
individuals collect from the environment through their senses. These
can range from immutable facts about the environment ("A centipede
has less than one foot") to facts that change ("It's raining cats and
dogs") to metaphysics ("There is a god up there"). Lower animals are
restricted to facts about the physical world; humans can go beyond
that by using language.

Some facts stand by themselves, others are inter-related and only
stand together: You know that "There is a god up there" because
"There is so much beauty and perfection in this world".

Due to the huge number of facts collected and the limited storage
capacity, facts cannot be stored in raw form; some form of data
compression must be used. One of the mechanisms used by the brain to
achieve compression is to group similar facts into concepts.
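The compression-by-concept idea can be shown with a toy example of my own framing (the facts and the grouping criterion are invented for illustration): individuals that share an identical property profile collapse into one concept, so many stored facts become a few rules plus memberships.

```python
from collections import defaultdict

# Six individual facts about three individuals.
facts = [("sparrow", "can", "fly"), ("robin", "can", "fly"),
         ("eagle", "can", "fly"), ("sparrow", "has", "feathers"),
         ("robin", "has", "feathers"), ("eagle", "has", "feathers")]

# Collect each individual's property profile.
props = defaultdict(set)
for s, r, v in facts:
    props[s].add((r, v))

# Individuals with identical profiles merge into a single concept.
concepts = defaultdict(list)
for individual, ps in props.items():
    concepts[frozenset(ps)].append(individual)

for ps, members in concepts.items():
    print(sorted(members), "share", sorted(ps))
# Six facts compress to two shared rules plus three concept memberships.
```

Real concept formation is of course far fuzzier than exact profile matching, but the storage saving it buys is the mechanism the paragraph points at.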

The world model affects the way an individual behaves in many ways:
When you go out you take your umbrella because "It's raining cats
and dogs". You always try to do the right thing because "There is a
god up there".

In the aspect that concerns this news group, you can produce verbal
behaviour in response to some question ("How many feet does a
centipede have?"). The answer to this question includes information
from the world model. The challenge is, of course, to find the
mechanisms that make this possible.

When two persons, Alice and Bob, engage in a conversation, they
produce verbal behaviour. If Bob says "A centipede has less than one
foot", Alice adds this as a new data point to her world model.

A problem arises when one person acquires a fact that collides with
other facts already present in the world model. If Alice says "There
is a god up there", Bob may not be able to accept that because he is
a cold-hearted atheist, due to the fact that "There is so much
injustice and suffering in this world that no god would allow that".

I doubt that we can talk about software updates in relation to the
updating process of our world models. According to the commonly
accepted model of the brain, this is clearly a data-driven system.
The brain is made of a huge number of very simple processing elements,
the neurons, which also serve as the basic information storage unit.
Knowledge is represented essentially in the form of relationships or
links between neurons.

The processing done by the neuron consists essentially in firing one
or more of its outputs according to some rule, taking into account
the current state of the neuron. We can model a neuron with a very
simple CPU, running a small piece of micro-code, a set of state
variables and a variable number of links to other similar processors.
The learning process doesn't need to change the software of the
neuron, just its variables.
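Antonio's neuron-as-tiny-CPU picture can be sketched minimally. This is a standard perceptron-style toy of my own choosing, not a claim about real neurons: the "micro-code" (update rule) is fixed, and learning touches only the state variables.

```python
class Unit:
    """A neuron modeled as a tiny CPU: fixed rule, mutable state, links."""

    def __init__(self, n_inputs, rate=0.1):
        self.w = [0.0] * n_inputs   # state variables: link strengths
        self.rate = rate

    def fire(self, inputs):
        """The fixed 'micro-code': fire iff the weighted sum crosses 0.5."""
        return sum(w * x for w, x in zip(self.w, inputs)) > 0.5

    def learn(self, inputs, target):
        """Learning changes only the state (weights), never the rule."""
        err = target - self.fire(inputs)
        self.w = [w + self.rate * err * x for w, x in zip(self.w, inputs)]

u = Unit(2)
for _ in range(20):          # teach an AND-like response
    u.learn([1, 1], 1)
    u.learn([1, 0], 0)
print(u.fire([1, 1]), u.fire([1, 0]))
```

After training, the unit fires for the conjoined input but not the partial one; everything it "knows" lives in `u.w`, the variables, exactly as the paragraph describes.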

It is my conviction that to achieve intelligence, an artificial
system must stay close to the basic data-driven architecture, where
the software consists mainly of a small set of simple algorithms. Of
course, in practical systems, the software is much more complicated,
due mainly to: a) Most of the time we do not know the right
algorithms to apply and b) We are using a serial processor to run a
process designed for a massively parallel architecture.

Antonio Esteves

Another NL thought ...

Post by Milin » Thu, 08 Sep 2005 23:19:40

Isn't a major constraint of today's computer systems their inability to
respond to external stimuli?
Isn't a major constraint of humans their inability to visualize anything
more complex than a few dimensions?

I recently had a discussion with a scholar whose Ph.D thesis dealt with
this question over 20 years ago, and the major points were:

1. The inability of humans to visualize anything beyond a few
dimensions limits our ability to design systems that are more complex
than a certain level.

2. The main difference humans have over the machines they build is
their ability to continuously respond to external stimuli and gather
millions, if not billions of inputs throughout their lifetimes, react
to them, and build patterns out of them.

3. Unless we can either find a way to design more complex systems, or
make machines able to gather and respond to stimuli like we do, a
"learning" system will be limited to the level it was designed at, and
get no better.

Milind Joshi

Another NL thought ...

Post by Steve Rich » Fri, 09 Sep 2005 05:43:39


... background skipped ...

... or more commonly, is related to facts in Bob's world model that
simply aren't present in Alice's world model.

Of course, there ARE people who believe that "God" is an emergent
property - that the world ACTS as though there were a God watching, even
though there may not be such a physical entity. Specifically, people
give away their true frame of mind in subtle things they say and do, so
that malintent people are seen as such and do badly in life as a result.
To everyone involved, including the malintent person, this would appear
to be an act of God. Then again, there are the warm-hearted atheists!

Now you are treading on MY turf! Most of my credentials are in NN!
William Calvin wrote many of the books that NN folks use to learn about
the brain, and I was Calvin's assistant for a couple of years when he
was doing epilepsy research at the University of Washington. I learned
most of what I know about real neurons from Calvin, and he first learned
about NN from me! All that having been said:

There is FAR more complexity in neurons than most people suspect. In
cases where a certain characteristic is predicted from theory, it is
routinely found with incredible accuracy! I wrote about this linking of
theory and functionality in the very first ICNN conference back in the
mid 80's, plotting laboratory results over theoretical curves. One of my
articles showed that at least some neurons are clearly communicating the
logarithms of probabilities of assertions being true, so that adding
such inputs as many neurons do is effectively performing a weighted
probabilistic AND function. Of course, if this were true then inhibitory
synapses would be performing an AND NOT function, which in logarithmic
space would not only be nonlinear, but discontinuous at 1, because its
NOT is zero and the logarithm of 0 is undefined. This precise inhibitory
synapse function curve, complete with the discontinuity, has been seen
in the lab and was published 25 years ago!
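The arithmetic behind the claim is easy to verify numerically. The sketch below is my own worked example, not the original paper's data: summing log-probabilities is multiplying probabilities (a probabilistic AND), and an inhibitory "AND NOT c" input must carry log(1 - p_c), which diverges as p_c approaches 1.

```python
import math

p_a, p_b = 0.9, 0.8

# Excitatory summation: log p_a + log p_b == log(p_a * p_b), i.e. AND.
print(math.log(p_a) + math.log(p_b), math.log(p_a * p_b))

# Inhibitory "AND NOT c": the synapse contributes log(1 - p_c),
# which heads toward -infinity as p_c -> 1.
for p_c in (0.5, 0.9, 0.99, 0.999999):
    print(p_c, math.log(1 - p_c))
# At p_c == 1 exactly, NOT c has probability 0 and log(0) is undefined:
# the discontinuity described above.
```

This is why the predicted inhibitory response curve is nonlinear in log space and blows up at certainty, the shape Steve says matched the laboratory measurements.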

In short, forget about "simple". Neurons appear to do whatever they need
to do to get the job done as well as possible. Our problem is that we
just don't know how this must/is being done. That our main laboratory
tool is an electrode that to you would be the size of a telephone pole
certainly doesn't make this any easier, as usually all you see are
the death throes of the neuron.

Actually, only about 10% of the computing cells in your brain ever fire
at all! Most of them appear to be purely analog computing elements. That
the early definition of "neuron" excluded the glial cells (that make up
the "silent" 90% of the cells), simply relegates what we know as
"neurons" to being dumber cells used to transmit values over longer
distances.

You should say "We can model WHAT LITTLE WE KNOW ABOUT a neuron with a
very simple CPU. It is not known how difficult it might be to model all
that is ..." For example, some neuromuscular neurons appear to keep the
equivalent of a performance table that gets modified with use, so they
know how much signal to put out to a muscle group to achieve the desired
contraction, in the presence of nonlinear responses, exhaustion, etc.
Indeed, there is now an area of alternative medicine known as "Somatics"
which puts people through simple/unusual exercises to update the entries
in these tables to solve muscular issues like som

Another NL thought ...

Post by dave davie » Sun, 11 Sep 2005 00:33:57

Interesting comment Steve. I thought that most neuroscientists still
denied any functional role for glia, relegating them to metabolic
support. Have you read the Nobili (father and son) reports of around
1987 - in Phys. Rev. as I remember. They showed that a simple model of
glial electrodynamics showed what they referred to as a holographic
memory described by the Schroedinger equation.


Another NL thought ...

Post by Steve Rich » Sun, 11 Sep 2005 04:59:56


I first heard about the potential for computation in glial cells from
William Calvin (I was his assistant then) in about 1974.

The wet-lab guys (Calvin was one of them then) get to see LOTS of stuff
that isn't in the literature, and which they can't characterize well
enough to publish themselves. This makes for a 30 or so year gap between
observation and general knowledge.

This was published as a proven fact in Scientific American sometime
early in 2004, along with lots of rather speculative explanations that
were less than convincing, at least to me.

A number of people have looked for ANY evidence of holographic behavior
which you would see as correlation in the operation of nearby cells, but
so far no one (that I know of) has ever found so much as a hint of this.
Besides, holographic operation would arguably be a very inefficient use
of computing resources, as things must get distilled down to single
indications of what is there and what to do about it. Given the
(potential) computing resources of individual cells, I would expect
something much more resource-efficient than holographic operation.
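The test Steve alludes to, looking for correlation in the operation of nearby cells, can be sketched on synthetic data (all traces below are simulated; no real recordings are involved): cells participating in a shared holographic code would show strongly correlated activity, while independent cells would not.

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation of two equal-length traces."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys) * len(xs))

random.seed(0)
shared = [random.gauss(0, 1) for _ in range(5000)]       # hypothetical hologram
cell_a = [s + random.gauss(0, 0.1) for s in shared]      # two cells reading it
cell_b = [s + random.gauss(0, 0.1) for s in shared]
cell_c = [random.gauss(0, 1) for _ in range(5000)]       # an independent cell

print(round(pearson(cell_a, cell_b), 2))   # near 1: holographic signature
print(round(pearson(cell_a, cell_c), 2))   # near 0: no shared code
```

If the correlations in real recordings all look like the second number, that is precisely the "not so much as a hint" result the paragraph reports.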

Note that to avoid embarrassment, failed experiments seldom ever get
published, so that failed theories tend to hang around long after they
have been debunked. Given the apparent ease of demonstrating holographic
operation, that it hasn't been observed yet should tell you something.

In short, it appears that, in its back-and-forth swaying, the jury is
now leaning in the direction of glial computing, but there is still
much more that isn't known than is known about this.

Steve Richfie1d

Another NL thought ...

Post by humigue » Fri, 16 Sep 2005 06:25:12

If the world model is indeed a bag of facts, this shouldn't be a
problem. If some fact is presented to Alice, it will be stored as a
new data point in her world model, irrespective of the fact that it
is related to other facts in Bob's world model. As I see it, the
world model includes everything that reaches our senses. If we can
establish useful relationships between a newly acquired fact and
other facts already present, so much the better, but this is not a
necessity.


That's exactly what I had in mind. I never meant to imply that
discovering the functionality of each type of neuron in the human
brain was a simple thing. One thing is certain: Even the most simple
behaviour produced by the brain requires the activation of billions
of neurons. It is reasonable to infer that the contribution of each
one to the final result is small.

In any case, I do not favour the neuronal approach to AI. After all,
what we want to simulate is the functions of the brain, not its
structure. The neural approach has many problems: One of them is
that the neuronal model is not well adapted to the general-purpose
CPUs that we have available today. Another problem, as you pointed
out, is that our knowledge of the brain physiology is very
incomplete at the moment. Then there is the bird-plane argument.
Today's planes don't flap their wings, yet they are able to fly at
super-sonic speeds.

It is of course important to use the findings of neuroscience as a
guide to help us select the proper models and algorithms. This was
precisely the reason why I mentioned neurons in the first place.

Antonio Esteves