< XXXX@XXXXX.COM > wrote in message
news: XXXX@XXXXX.COM ...
How big is the whole table that's being processed?
How well clustered (co-located) are the rows to be updated?
If you can run into locking issues doing this as a single
update, what does it mean for the business logic that
one process can (apparently) update rows that another
process is trying to put out of bounds?
Parallel probably won't help. Large tablescans (and
I assume this table is larger than 2% of the size of the
db_cache - but you could try setting it to NOCACHE)
use only the tail end of the buffer cache, so they tend not
to do much damage to the cache anyway. Moreover, a parallel
update will lock the table - and you are committing regularly
precisely to try to avoid locking issues.
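
For what it's worth, the NOCACHE setting is a one-line change; a minimal
sketch, with big_table as a placeholder name:

    -- with NOCACHE, blocks read by tablescan are placed at the cold end
    -- of the buffer cache LRU list and age out quickly, rather than being
    -- kept as if the table were small enough to cache in full
    alter table big_table nocache;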
If the few million rows represent a small fraction of the
data, with a good clustering effect, then creating a function-based
index to identify just those rows, and updating through that index,
may be the most efficient approach.
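
A minimal sketch of that approach, with hypothetical names (processed_flag
marks the rows still to be updated, and the case expression in the where
clause must match the index definition exactly for the index to be used):

    -- the index contains entries only where the expression is non-null,
    -- i.e. only the rows that still need updating, so it stays very small
    create index big_table_todo_idx on big_table (
        case when processed_flag = 'N' then 'N' end
    );

    -- drive the update through that index; as each row is updated the
    -- expression becomes null and its entry drops out of the index
    update big_table
    set    processed_flag = 'Y'
    where  case when processed_flag = 'N' then 'N' end = 'N';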
Otherwise, take out the interim commits - by committing and restarting
you are repeatedly re-reading the blocks that you have just changed,
which introduces the extra cost of block cleanout and excessive scattered
reads (possibly assisted by a file-system or disk cache, but still a
waste of resources).
Author: Cost-Based Oracle Fundamentals
The Co-operative Oracle Users' FAQ