Not exactly my corner of the compression business, but the "mainstream"
products are mostly old-fashioned in this respect, AFAIK. There are many
experimental codecs out there that have been benchmarked - you should
probably take a look at
http://www.yqcomputer.com/
and wait for responses from people who work more on this side of the
field. I'm the imaging guy, you know; I don't compress arbitrary data.
Bayesian approaches are used there, but the applications I know of go
more in the direction of de-noising than of compression.
Don't ask me, sorry. Perhaps somebody else wants to jump in?
> let's say we make a first pass on the file to be compressed,
> during which we find one (or more) positions that have interesting
> similarity with another
> we start the compression with two windows, the second one having a
> negative offset determined by our first pass
> (here I meant the second window to be only a search buffer, in fact)
Well, in my field the "second window" would be "the row of pixels above
the current row". In that sense, yes, but that's because the source data
is two-dimensional rather than one-dimensional. I believe I understand
why that could make sense - it could be understood as an "optimized"
PPM scheme where you ignore the data between the two windows. You could
also understand the LZ algorithm as a two-window approach: the current
scanning window you check, and the position in the text that the
dictionary entry came from, which you currently compare against.
Probably a bit far-fetched, I agree.
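
To make the "row above" remark concrete: in a 2D coder the context for
the current pixel is drawn from the previous row plus the already-coded
part of the current row. A minimal sketch, using the standard MED
predictor from JPEG-LS purely as an example - the function names and the
plain-list image representation are my own invention:

def med_predict(left: int, above: int, above_left: int) -> int:
    """MED predictor (as in JPEG-LS): picks left, above, or a gradient
    estimate, depending on whether an edge is suspected."""
    if above_left >= max(left, above):
        return min(left, above)
    if above_left <= min(left, above):
        return max(left, above)
    return left + above - above_left

def prediction_residuals(image):
    """Map an image (list of equal-length rows of 0..255 ints) to
    residuals.  The 'two windows' here are simply the previous row and
    the already-visited part of the current row."""
    residuals = []
    for y, row in enumerate(image):
        out = []
        for x, pixel in enumerate(row):
            left = row[x - 1] if x > 0 else 0
            above = image[y - 1][x] if y > 0 else 0
            above_left = image[y - 1][x - 1] if x > 0 and y > 0 else 0
            out.append((pixel - med_predict(left, above, above_left)) & 0xFF)
        residuals.append(out)
    return residuals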
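
And the two-window reading of LZ, including the quoted first-pass idea,
could be sketched roughly like this. This is plain greedy LZ77-style
matching; the window sizes, the minimum match length and all names are
made up for illustration, and how the first pass picks the offset is left
open. The encoder searches both the usual sliding window right behind the
current position and, optionally, a second search buffer at a fixed
negative offset that the first pass has suggested:

def longest_match(data, pos, start, end, max_len=258):
    """Longest common prefix of data[pos:] with any position in the
    search buffer data[start:end]."""
    best_len, best_src = 0, -1
    for src in range(max(start, 0), min(end, pos)):
        length = 0
        while (length < max_len and pos + length < len(data)
               and data[src + length] == data[pos + length]):
            length += 1
        if length > best_len:
            best_len, best_src = length, src
    return best_len, best_src

def two_window_parse(data, window=4096, far_offset=None, far_window=4096):
    """Greedy LZ77-style parse consulting two search buffers: the usual
    sliding window just behind pos, and (if far_offset is given, e.g.
    found by a first pass over the file) a second buffer starting at
    pos - far_offset.  Emits ('match', distance, length) or literals."""
    pos, out = 0, []
    while pos < len(data):
        # window 1: the ordinary sliding window
        length, src = longest_match(data, pos, pos - window, pos)
        # window 2: the far-away search buffer from the first pass
        if far_offset is not None:
            far_start = pos - far_offset
            l2, s2 = longest_match(data, pos, far_start,
                                   far_start + far_window)
            if l2 > length:
                length, src = l2, s2
        if length >= 3:
            out.append(("match", pos - src, length))
            pos += length
        else:
            out.append(("literal", data[pos]))
            pos += 1
    return out

Whether the second buffer pays off depends, of course, on whether the
first pass can find offsets that a large ordinary window would not reach
anyway.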
Sure.
So long,
Thomas