by cr8819 » Fri, 22 Oct 2004 00:20:45
how about "least probability" data, eg:
you always pick the value least likely under the current model, and then
update the model with the picked value.
I would guess this would exhibit patterns similar to random data (a fairly
even statistical distribution, fairly chaotic values, ...), however, it
would differ in that it could be compressed by a predictor that takes the
model into account, while compressing poorly with a normal compressor.
some random variation would likely be needed though, as the basic algo I was
imagining would, on its own, result in repeating patterns (which would
compress too easily, eg, with lz77).
other algos could generate data that does not compress well with lz77
(eg: based on unused or least-used hash entries).
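one way the "least-used hash entries" idea might look (a sketch under my own assumptions, not something the post specifies): hash recent n-grams, and pick each next symbol so that the resulting n-gram has been seen least often, which discourages the literal repeats lz77 feeds on.

```python
import random
from collections import defaultdict

def anti_lz_stream(n, order=3, alphabet=16, seed=1):
    """Pick each next symbol so the n-gram it completes is the
    least-used one for the current context, ties broken randomly.
    order/alphabet sizes here are arbitrary illustration values."""
    rng = random.Random(seed)
    seen = defaultdict(int)          # n-gram -> usage count
    out = [rng.randrange(alphabet) for _ in range(order)]
    for _ in range(n - order):
        ctx = tuple(out[-(order - 1):])
        # least-used extension of the current context
        nxt = min(range(alphabet),
                  key=lambda s: (seen[ctx + (s,)], rng.random()))
        seen[ctx + (nxt,)] += 1
        out.append(nxt)
    return out
```

the effect is that any given context cycles through all its continuations before an n-gram repeats, so short matches are rare.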
that or I am just being stupid...