An example: inserting an order that depends on a huge calculation took 13 seconds. The second time I did this it only took 2.8 seconds. We know this big 'trigger party' is a problem, but this is the way it works; roughly 3 seconds is acceptable, but 13 is not!
I'd love to know what is improving performance this much (a cache?), and whether there's a way to always get that behaviour, like forcing things to be cached.
Who can help me with this issue?
Best answer by Robert Jan de Nie
Compiling a query plan for a large trigger costs a lot of CPU time. Moving the logic into a large procedure will probably not change much, as the large procedure will also eat up a lot of compile time.
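The speed-up described in the question is consistent with plan caching: the first execution pays the compile cost, and later executions reuse the cached plan. Assuming the engine is SQL Server (the thread does not name it), you can check whether plans are being cached and reused with the DMVs below; the `usecounts` column shows how often a cached plan has been reused:

```sql
-- Inspect the plan cache (SQL Server assumption).
-- sys.dm_exec_cached_plans and sys.dm_exec_sql_text are built-in DMVs.
SELECT TOP (20)
    cp.usecounts,       -- times this cached plan was reused
    cp.objtype,         -- e.g. 'Trigger', 'Proc', 'Adhoc'
    cp.size_in_bytes,   -- memory the plan occupies
    st.text AS sql_text -- the statement the plan belongs to
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.usecounts DESC;
```

If the trigger's plan shows up with a low `usecounts` after repeated runs, something (memory pressure, schema changes, recompile hints) is evicting or invalidating it, which would explain the first-run penalty recurring.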
There are various ways to deal with this. However, I can't provide any silver bullet advice, it really depends on the situation.
I agree with this statement: complex logic stays complex no matter where you put it. In my experience, however, it is far easier to comprehend and optimise a procedure than a trigger that fires and causes multiple other triggers to fire in turn. There is also often less overhead, because a procedure is purpose-built for a specific job. Triggers fire regardless, so you at least have to test for the cases where you don't want (part of) the trigger to do anything.
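That last point, testing for the cases where the trigger should do nothing, can be sketched as an early-exit guard. This is a minimal T-SQL sketch assuming SQL Server; the `dbo.Orders` table, the `TotalAmount` column, and the trigger name are hypothetical:

```sql
-- Hypothetical trigger showing the early-exit pattern: bail out cheaply
-- before any expensive work when the trigger has nothing to do.
CREATE TRIGGER trg_Orders_AfterUpdate
ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Skip the expensive logic when the column we care about
    -- was not part of the UPDATE statement at all.
    IF NOT UPDATE(TotalAmount)
        RETURN;

    -- Skip it as well when the statement affected zero rows.
    IF NOT EXISTS (SELECT 1 FROM inserted)
        RETURN;

    -- ... the expensive recalculation would go here ...
END;
```

The guards do not remove the underlying complexity, but they keep unrelated updates from paying for it, which is exactly the overhead a purpose-built procedure avoids by only being called when needed.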