IDS 10 disk configuration

IDS 10 disk configuration

Post by DL Redde » Fri, 29 Jul 2005 06:59:16



We are migrating one of our HP-UX 11i database servers
to a new disk array and upgrading it from IDS 9.3 to
IDS 10 at the same time.

The current disk array is an attached SCSI RAID 5 array,
forgive me Art. The new array is an HP XP128, also RAID 5,
but it was installed before I had much say in things and
before I drank the RAID 10 Kool-Aid, so once again forgive
me Art.

The old disk layout consists of a bunch of 2GB devices
but we have several tables that could benefit from
larger devices. I plan on leaving my dbspaces just as
they are now but utilizing one large chunk instead of
several 2GB chunks.

Does anyone in this group have a rule of thumb or
recommendations for the device configurations? Would
one large device divided among all of the dbspaces be
just as good as 20 smaller devices? I would think not
but cannot justify that thought with any facts.

I'm leaning toward n devices of x size so that all of
the devices are the same size and I'll split up the
data how I want. The x size would be some factor of
what's best for the array.
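
For illustration only, the two layouts could be built roughly
like this (device paths and sizes are invented, onspaces takes
the chunk size in KB, and on 9.40/10.00 chunks larger than 2GB
need large-chunk support enabled first - see onmode -BC in the
docs - so treat this as a sketch, not a recipe):

    # one dbspace backed by a single large 20GB chunk
    onspaces -c -d datadbs1 -p /dev/vg01/rdata_big1 -o 0 -s 20971520

    # the same idea built from several equal 4GB chunks instead
    onspaces -c -d datadbs2 -p /dev/vg01/rdata_eq1 -o 0 -s 4194304
    onspaces -a datadbs2 -p /dev/vg01/rdata_eq2 -o 0 -s 4194304
    onspaces -a datadbs2 -p /dev/vg01/rdata_eq3 -o 0 -s 4194304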

This is a unique opportunity to implement an IDS 10
install with the best configuration possible, so I
thought I'd query the community and see what was said.

DL
sending to informix-list

IDS 10 disk configuration

Post by Art S. Kag » Sat, 30 Jul 2005 01:17:29


<SNIP>
I'll forgive you, but the real question is will Murphy? 8-(

Art S. Kagel

IDS 10 disk configuration

Post by davi » Wed, 03 Aug 2005 09:42:41


At checkpoint time chunks are assigned round robin to page cleaners for
cleaning.

ONE CHUNK WILL ONLY GET ONE PAGE CLEANER FLUSHING ITS
WRITES AT CHECKPOINT TIME.
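
A quick-and-dirty way to watch this, if you are curious, is a
plain shell loop run across a checkpoint (assuming onstat is on
the PATH of the informix user - just a sketch):

    # sample the page-cleaner states once a second across a checkpoint
    while true
    do
        date
        onstat -F | tail -20   # one line per cleaner with its current state
        sleep 1
    done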

We had a 60GB chunk and someone was loading lots of data into it.

5 minute checkpoint interval and checkpoints up to 6 minutes long!

Just what I needed when running a program that inserts into temp
tables.

My program went from 2 minutes to 20 minutes!

IDS 10 disk configuration

Post by Art S. Kag » Thu, 04 Aug 2005 05:22:26


Good point David. The solution would be to use fractional values for
LRU_MIN/MAX_DIRTY to make sure that there are very few dirty pages at
checkpoint time. Not a complete solution, but it will help with 'larger'
chunks if not 'huge' ones.
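
As a rough sketch, with the numbers as examples only and not
recommendations, the onconfig entries would look something like:

    # fractional LRU thresholds - keep very few dirty pages around
    LRU_MAX_DIRTY   1.0     # start LRU writes at 1% dirty buffers
    LRU_MIN_DIRTY   0.5     # stop cleaning when down to 0.5% dirty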

Art S. Kagel

IDS 10 disk configuration

Post by Neil Trub » Thu, 04 Aug 2005 07:42:41


Are you both saying that an inevitable consequence of large chunks is long
checkpoint times at times of high update?

I guess this isn't just an IDS 10 thing ...?

IDS 10 disk configuration

Post by ILis » Thu, 04 Aug 2005 09:00:42


Neil

All the events that make up a checkpoint, including waiting for all
processes to get out of critical sections and cleaning dirty BUFFERS,
still need to take place. So if you have 10 times as many dirty BUFFERS
in a chunk, this component of the checkpoint (the page cleaning) will
take longer - probably less than 10 times as long.

Are you running fuzzy checkpoints? That will help speed up checkpoints, as
will setting LRU_MAX_DIRTY lower (it is a FLOAT value). Also have
sufficient CLEANERS, one for each chunk. Confirm your dirty buffers with
onstat -R.
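
Roughly, for an instance with (say) 20 chunks, that advice
translates into onconfig settings along these lines - the numbers
are examples only, and the parameter names should be checked
against your own onconfig.std:

    CLEANERS        20      # about one page cleaner per chunk
    LRU_MAX_DIRTY   2.0     # FLOAT: start cleaning at 2% dirty buffers
    LRU_MIN_DIRTY   1.0
    NOFUZZYCKPT     0       # 0 = fuzzy checkpoints stay enabled (the default)

Then compare the dirty-buffer counts from onstat -R just before
and just after a checkpoint to see whether the thresholds are
low enough.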

MW

sending to informix-list

IDS 10 disk configuration

Post by John Carls » Thu, 04 Aug 2005 10:26:51

On Tue, 2 Aug 2005 23:42:41 +0100, "Neil Truby" wrote:

Another reason to take advantage of the fractional values of the
LRU_MIN/MAX_DIRTY variables.

JWC

IDS 10 disk configuration

Post by Art S. Kag » Thu, 04 Aug 2005 23:09:37


At max peak, perhaps. Longer anyway. That's just one more thing to keep in
mind when deciding to use larger chunks. Similarly, it occurs to me, you
would increase checkpoint times by using very large page sizes, unless you
adjust the LRU_MIN/MAX_DIRTY settings for that buffer cache, since the same
number of dirty pages will require more physical IO than for smaller page
sizes. I guess tuning in 10.00 gets more complex, not simpler.


No, it affects 9.40 also. At least the large chunk part does.

Art S. Kagel

IDS 10 disk configuration

Post by Richard Ko » Sat, 06 Aug 2005 22:54:26

Neil Truby wrote:

smaller chunks are better chunks ....
Always worth the trouble (what trouble is it?)
you have with administration!

Also in partitioning your I/O subsystem:
more LUNs are better LUNs, as the drivers create
two queues per LUN (one for immediately startable
I/O, and one for requests above the throttle value) ->
more LUNs = better I/O parallelism when
using KAIO.

dic_k

--
Richard Kofler
SOLID STATE EDV
Dienstleistungen GmbH
Vienna/Austria/Europe

IDS 10 disk configuration

Post by Richard Ko » Sat, 06 Aug 2005 23:04:07

Murray Wood (IList) wrote:

Still, the I/O for cleaning the LRU buffers during a checkpoint
is by far more efficient than LRU writes.
One has to find the balance between longer checkpoints with less
I/O from LRU writes between two checkpoints, versus shorter
checkpoints and (much?) more write I/O from LRU writes all the time.

If you issue 'onstat -R | tail' a few seconds before a checkpoint
takes place, you get a picture of how much there is to do
for the CLEANERS.
Of course CLEANERS at least equal to LRUS is a must.
For those using onmode -B,
CLEANERS > LRUS is a must.
(Look at onstat -F and you will see the 'C' flagged cleaners
in addition to the one 'F' flagged cleaner per LRU when the
load gets heavy.)

These findings are the only reason why it can be useful to
have fewer than the maximum of 128 / 512 LRUS. If you do not have
512 LUNs configured on your systems and 512 CLEANERS start I/O
at the same time, you can watch nice but unwanted I/O queueing :)
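
As a sketch of that rule of thumb, for a box with (say) 32 LUNs
- numbers invented, adjust to your own layout:

    LRUS       32     # no more LRU queues than LUNs that can absorb the I/O
    CLEANERS   32     # at least equal to LRUS (more if you use onmode -B)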

dic_k

--
Richard Kofler
SOLID STATE EDV
Dienstleistungen GmbH
Vienna/Austria/Europe