Hello,

I have a question about a process flow which I made specifically for an integration with another application, say “App B”.

The process flow merges each record into the correct tables. Custom logic checks whether the record should be inserted or updated, and the whole operation runs as an atomic transaction.

We have a webhook configured in App B, which means that every update App B receives is immediately sent to our Thinkwise application. It now seems that App B is itself updated via an integration as well. Sometimes this means that the same record in App B gets updated multiple times in rapid succession.

Our process flow is not able to handle this correctly. I get the following error:

Cannot insert duplicate key row in object “XXX”

Do you have any recommendations in order to solve this issue?

Would simply rewriting my check as a MERGE statement, rather than the custom check, resolve the issue? Unfortunately, I have no easy way to test this in another environment at the moment.

Does the table where the insert takes place have an identity primary key? It sounds like the key is set up manually, and a simultaneous insert causes the merge to fail.


Hi Erwin,

We have an identity column set as the primary key. However, there is a unique index on another column, which is what causes the error.

Note that this specific template has no merge statement at the moment:

IF EXISTS (...)
BEGIN
    UPDATE ...
END
ELSE
BEGIN
    INSERT ...
END
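
This check-then-write template is not safe under concurrency on its own: two sessions can both pass the IF EXISTS check before either one inserts, and the second INSERT then violates the unique index. A common fix on SQL Server is to lock the key range while checking. A minimal sketch, assuming hypothetical names dbo.import_record / external_id (the real Thinkwise table and columns will differ):

```sql
-- Sketch only: dbo.import_record, external_id and payload are placeholder
-- names, not the actual model objects.
BEGIN TRANSACTION;

IF EXISTS (SELECT 1
           FROM dbo.import_record WITH (UPDLOCK, HOLDLOCK)  -- lock the key range until commit
           WHERE external_id = @external_id)
BEGIN
    UPDATE dbo.import_record
    SET    payload = @payload
    WHERE  external_id = @external_id;
END
ELSE
BEGIN
    INSERT INTO dbo.import_record (external_id, payload)
    VALUES (@external_id, @payload);
END

COMMIT TRANSACTION;
```

Note that rewriting this as a single MERGE does not remove the race by itself: MERGE also needs a WITH (HOLDLOCK) hint on the target table to be safe against concurrent upserts of the same key.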

 


@NickJanssen How do you know which “request” should be updated or inserted in the DB? Unless I’m misunderstanding your issue, you may want to take a look at some form of locking mechanism: database - Optimistic vs. Pessimistic locking - Stack Overflow
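
For reference, optimistic locking on SQL Server is often implemented with a rowversion column: read the version together with the row, then make the update conditional on it. A hedged sketch with illustrative names (row_ts is an assumed rowversion column, not part of the actual model):

```sql
-- Optimistic concurrency sketch; all names are placeholders.
UPDATE dbo.import_record
SET    payload = @payload
WHERE  import_record_id = @id
  AND  row_ts = @row_ts_at_read;  -- succeeds only if the row is unchanged since it was read

IF @@ROWCOUNT = 0
BEGIN
    -- Another process modified the row since it was read:
    -- re-read it and retry, or surface a conflict to the caller.
    THROW 50001, 'Concurrent update detected; please retry.', 1;
END
```

This guards concurrent updates; it does not by itself prevent the duplicate-insert race, which still needs a serialized check-then-insert (or a MERGE with a HOLDLOCK hint).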

