I have a problem with performance. There is a huge insert trigger that does all kinds of things when an order is inserted. The insert trigger also inserts into a lot of other tables, causing a lot of other insert triggers to fire. The strange thing is: the first time I insert something, the performance is awful; the second time I insert into the table, the performance is much better. I suspect this has something to do with SQL Server caching.



For example, inserting an order based on a huge calculation took 13 seconds. The second time I did this it took only 2.8 seconds. We know this big 'trigger party' is a problem, but this is the way it works; roughly 3 seconds is acceptable, but 13 is not!

I'd love to know what is improving the performance this much the second time (some kind of cache?) and whether there is a way to always get that behaviour. Can I force SQL Server to cache things?



Who can help me with this issue?
Don't use triggers :P



Seriously, challenge the status quo. Even if it is a lot of work right now, a 'trigger party', i.e. (complex) business logic in triggers, is never a good idea.



However, to solve the immediate problem, I think you should measure where your biggest performance problems lie and solve those first. An easy way to measure (in a test environment) is to write timestamps from within the trigger and then focus on the biggest time gaps.
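
A minimal sketch of that pattern, assuming a hypothetical logging table (the table, column and step names below are placeholders, not part of your actual order trigger):

    -- Hypothetical logging table; adapt name and columns to your schema.
    CREATE TABLE dbo.TriggerTimingLog
    (
        LogId    int IDENTITY(1,1) PRIMARY KEY,
        StepName varchar(100) NOT NULL,
        LoggedAt datetime2(3) NOT NULL DEFAULT SYSDATETIME()
    );
    GO

    -- Inside the trigger, bracket each suspect block with a timestamp write,
    -- then look for the biggest gaps between consecutive rows afterwards.
    INSERT INTO dbo.TriggerTimingLog (StepName) VALUES ('before order calculation');
    -- ... existing trigger logic ...
    INSERT INTO dbo.TriggerTimingLog (StepName) VALUES ('after order calculation');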

It could be as simple as a missing index or a faulty join that collects and discards huge amounts of data.
For performance bottlenecks the Profiler is a very helpful tool, but be careful using it on a production system, since the Profiler itself takes up a lot of resources.



Run the Profiler with the 'Tuning' template. If the tuning template is not available, go to the events selection tab and tick the 'Show all events' checkbox, then in the Stored Procedures section check SP:StmtCompleted. This way you get each individual statement with its duration and the number of reads/writes, which will give you a clear indication of where your bottleneck lies.
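
If the Profiler GUI is not an option, a rough query-based alternative (not the trace described above, just a sketch that reads cumulative per-statement duration and reads/writes from the plan cache) is something like:

    -- Run after the slow insert; most expensive cached statements first.
    SELECT TOP (20)
        qs.total_elapsed_time / 1000 AS total_elapsed_ms,
        qs.total_logical_reads       AS total_reads,
        qs.total_logical_writes      AS total_writes,
        qs.execution_count,
        SUBSTRING(st.text,
                  (qs.statement_start_offset / 2) + 1,
                  ((CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset
                    END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_elapsed_time DESC;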
Hi Tom,



If you run the command SET STATISTICS TIME ON before you perform the two insert statements, you can check the 'SQL Server parse and compile time' output.
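
For example (the order table and columns here are made up; use your own insert statement):

    SET STATISTICS TIME ON;

    -- Hypothetical insert that fires the big trigger.
    INSERT INTO dbo.Orders (CustomerId, OrderDate)
    VALUES (42, SYSDATETIME());

    SET STATISTICS TIME OFF;

    -- In the Messages output, compare 'SQL Server parse and compile time'
    -- with 'SQL Server Execution Times' for the first and second run.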



I expect the first call to have quite some parse and compile time, whereas the second call has the query plan ready to go.



Try calling the command DBCC FREEPROCCACHE in between your two calls (not on production!) and you will probably see the second call taking just as much time because it will have to recompile a plan.
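
On a test server that could look like this (again, the insert statement is a placeholder):

    INSERT INTO dbo.Orders (CustomerId, OrderDate) VALUES (42, SYSDATETIME());  -- first run, compiles the plans

    DBCC FREEPROCCACHE;  -- throws away all cached plans; never run this on production

    INSERT INTO dbo.Orders (CustomerId, OrderDate) VALUES (42, SYSDATETIME());  -- has to compile again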



Compiling a query plan for a large trigger will cost a lot of CPU time. Moving this to a large procedure will probably not change much as the large procedure will also eat up a lot of compile time.



There are various ways to deal with this. However, I can't provide any silver-bullet advice; it really depends on the situation.


Compiling a query plan for a large trigger will cost a lot of CPU time. Moving this to a large procedure will probably not change much as the large procedure will also eat up a lot of compile time.



There are various ways to deal with this. However, I can't provide any silver-bullet advice; it really depends on the situation.




I agree with this statement: complex logic stays complex, no matter where you put it. It is, however, in my experience far easier to comprehend and optimise a procedure than a trigger that fires and causes multiple other triggers to go off. There is also often less overhead, because a procedure is purpose-built for a specific job. Triggers fire regardless, and you have to at least test for the cases where you don't want (part of) the trigger to do anything.
