Should "begin tran" hurt like this?

Should "begin tran" hurt like this?

Post by jxster » Sun, 07 May 2006 07:18:25


Here's my question: should wrapping this code in a transaction have
ANY effect on its performance? Because I'm seeing some major effects,
and did not expect it.

OK, here we go:

I have what looks like a fairly innocent SP that inserts a bunch of
records into a table.

In this case, the table is starting out empty, though in production
that will generally not be the case.

The format is generally:

begin transaction

insert into mytable
select * from yourtable
where something = whatever
and yourpk not in (select mypk from mytable)

insert into mytable
select * from hertable
where something = whatever
and herpk not in (select mypk from mytable)

...

commit transaction
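
(Aside: the same exclusion filter can also be written with NOT EXISTS,
which the SQL 2000 optimizer sometimes handles better than NOT IN.
Just a sketch, reusing the placeholder names above:)

insert into mytable
select * from yourtable y
where y.something = whatever
and not exists (select 1 from mytable m where m.mypk = y.yourpk)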

Run from a DTS package on our memory-starved dev machine, it had not
finished inserting about 1M rows after four hours, and I killed it.
The same code has run in reasonable time, maybe five minutes, on our
slightly larger staging machine, also from inside a DTS package.

Anyway, I hacked up a version of the SP, removed the transaction
begin/commit, and ran it from QA. Completed in under four minutes!
Hmm.

I reset the table and then tried running the original code (with the
transaction). After six minutes it still had not completed, so I
killed it. Ran the notrans code again. Completed in three minutes!

No contention on the machine that I can see.

So, it rather LOOKS like something in the management of the
transaction is a problem here. Here, but not there.

So, anybody, wassup with this?

BTW, when it runs slowly and forever, it spawns off six threads for
the SPID, and consumes lots of CPU, doing very little (but not zero)
IO. Certainly no blocking is seen.

Yes, there is physical IO, usually a few thousand, and a few thousand
read-aheads.
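
Six threads on one SPID sounds like a parallel plan. One quick test,
just a sketch against the placeholder pattern above, is to force a
serial plan and see whether the runtime changes:

insert into mytable
select * from yourtable
where something = whatever
and yourpk not in (select mypk from mytable)
option (maxdop 1)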

I guess the overall performance problem is an artifact of the tiny
memory (now 768MB) on the dev server, causing the optimizer to choose
different plans than on the staging server, and then running them more
slowly, too.
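
One way to confirm the plans actually differ between the two servers,
sketched against the same placeholder pattern, is to grab the
estimated plan text on each box before running for real:

set showplan_text on
go
insert into mytable
select * from yourtable
where something = whatever
and yourpk not in (select mypk from mytable)
go
set showplan_text off
go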

I doubt it would take 60 seconds to run this on a 3GHz, 4GB server.

Just looking for a reality check and any suggestions.

The obvious one is already in action - there will probably be some new
dev and staging servers soon, where soon is the typical month or so.
Meanwhile, we get to study SQLServer behavior under this kind of
stress.

Thanks.


BULLETIN:

After DBCC FREEPROCCACHE, the original code ran in 1:57!

Hmmm. Harumph.

Better than a reboot ...
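
For reference, that is just a plan-cache flush; a gentler alternative
(the proc name below is a placeholder) is to mark only the one proc
for recompile:

-- flush all cached plans server-wide
dbcc freeproccache

-- or mark a single proc so its plan is rebuilt on next execution
-- ('dbo.MySP' is a placeholder name)
exec sp_recompile 'dbo.MySP'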

Josh

Should "begin tran" hurt like this?

Post by jxster » Sun, 07 May 2006 07:32:56

BTW, SQL 2000, and as it turns out, SP2.

Josh

Should "begin tran" hurt like this?

Post by jxster » Sun, 07 May 2006 08:16:21

Further observations.

After the original code ran in two minutes from QA, I tried running it
from the DTS package. Nope. I killed it after ten minutes, then
cloned a DTS package to call my _notrans version. That one ran fine,
in just a few minutes from inside the DTS package, no problem.

Harumph.

All anomalous behaviors ...

Josh

Should "begin tran" hurt like this?

Post by D » Tue, 09 May 2006 18:07:26


Have you tried using the NOLOCK hint in your NOT IN (SELECT ...)
part? I think that when you use BEGIN TRAN it locks MYTABLE, and
then your select statement will wait forever.
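
Something like this, as a sketch with the table names from your post:

insert into mytable
select * from yourtable
where something = whatever
and yourpk not in (select mypk from mytable with (nolock))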

Regards
S ***

Should "begin tran" hurt like this?

Post by jxster » Wed, 10 May 2006 03:19:50

But it runs fine from QA.

Hmm, QA defaults to read committed, but a session doesn't lock out
its own changes; it never did before, anyway, and it looks OK even now.

BUT, do packages run by default as read committed when run under DTS?
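
One way to check is to add a package step that runs this on the same
connection and logs the output (a sketch; on SQL 2000 only
non-default SET options may be listed):

-- lists the SET options in effect for the current connection;
-- a changed isolation level should show up in the output
dbcc useroptions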

Thanks.

Josh