In the past I have come across feature-based algorithms and techniques.
The trick is usually to reduce the representation of a complex
function, signal, or geometric/real object to a relatively small number
of feature vectors. This reduces the complexity of tasks like pattern
matching, speech recognition, etc. This technique seems to be used
especially often in neural nets (see e.g. SOM).
The only question that is not addressed in the papers I have read so
far is the question of the _number_ of these feature vectors.
Usually the authors come up with a certain number of vectors used for
the representation, without explaining how they arrived at this
number. I guess they simply chose the number by looking at their
object/function and estimating how many features they have to
represent. A simple geometric object (line, cube) can be described by
fewer vectors than, e.g., a human face. Usually one would like a
small number of vectors to reduce the complexity of the task, but at
the same time the objects have to be described sufficiently to avoid
losing essential information.
So I am still wondering: isn't there a mathematical way to describe
this, so that one could justify the number of chosen vectors, or
perhaps come up with a function/algorithm that automatically
calculates the number of vectors necessary to represent a certain
object or function? I know this of course also depends on the
concrete application, but still there has to be some kind of general
principle.
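To make the kind of rule I am hoping for concrete: the closest thing I
can think of is the explained-variance criterion from PCA, where one
keeps the smallest number of basis vectors that account for, say, 95%
of the data's variance. A rough sketch (the 0.95 threshold and the toy
data are only illustrative assumptions, not anything from a paper):

```python
import numpy as np

def choose_num_components(X, var_threshold=0.95):
    """Smallest number of principal components whose cumulative
    explained variance reaches var_threshold."""
    Xc = X - X.mean(axis=0)                  # center the data
    s = np.linalg.svd(Xc, compute_uv=False)  # singular values
    var_ratio = s**2 / np.sum(s**2)          # explained variance per component
    cum = np.cumsum(var_ratio)
    # first index where the cumulative ratio reaches the threshold
    return int(np.searchsorted(cum, var_threshold)) + 1

# Toy data: 200 points in 5-D whose variance is concentrated in the
# first two axes, with only small variance in the remaining three.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) * np.array([3.0, 2.0, 0.1, 0.1, 0.1])
print(choose_num_components(X))  # prints 2 for this toy data
```

This picks the number of vectors from a reconstruction-quality
criterion rather than by eyeballing the object, which is the sort of
justification I am looking for, but I don't know whether something
analogous exists for the nonlinear feature maps used in pattern
matching.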
It would be great if you could point me to a paper or mathematical
field that addresses this kind of problem.
Btw.: I am mostly interested in pattern matching (2D/3D).
Thanks a lot,