Solved

What causes the message 'We stopped hearing from Indicium'

  • 6 December 2021
  • 29 replies
  • 543 views



Userlevel 5
Badge +12

A time-out is probably the problem:

Idle time-out

In IIS, ensure in the application pool's Advanced Settings that Idle Time-out (minutes) is set to 0 (disabled) instead of the default 20, so that scheduled process flows keep running even when there is no user activity.
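
The same change can also be scripted; a sketch from the command line, assuming the application pool is named 'Indicium' (substitute your own pool name):

%windir%\system32\inetsrv\appcmd.exe set config -section:system.applicationHost/applicationPools "/[name='Indicium'].processModel.idleTimeout:00:00:00" /commit:apphost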

 

This was already the case in my IIS.

@Mark Jongeling also told me to do a shrink on the IAM DB. I shrank both the data files and the logs.
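
For anyone wanting to repeat that shrink, a minimal T-SQL sketch; the database name IAM and the logical log file name IAM_log are assumptions here, so check sys.database_files for the actual names and take a backup first:

use IAM;
-- shrink all data and log files of the current database
dbcc shrinkdatabase (IAM);
go
-- or target just the transaction log file
dbcc shrinkfile (IAM_log);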

Although I still see the error, I have a feeling it occurs less often. I'll keep an eye on it and check the logs after today.

 

Userlevel 6
Badge +10

@Mark Jongeling Our System Log shows an entry every 30 seconds, as expected. All other System Flows related to Creation run at an interval of 5 seconds; could it be that their Indicium inactivity check fires after less than 65 seconds?

It should still update the agent check-in datetime every 30 seconds, so the 61- or 65-second check is not necessarily the issue here.

I advise you to create a TCP ticket so we can take a closer look at this. What we need: a couple of screenshots of the Schedule log, the “We stopped hearing from Indicium” text appearing, the Indicium error log of the day the texts appear, and the result of the following SQL query run on the SF database:

select * from last_agent_check_in
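
For context: Indicium updates this check-in datetime roughly every 30 seconds, and the “We stopped hearing from Indicium” message appears once the timestamp goes stale past the 61/65-second threshold mentioned above. A quick staleness check; the column name last_check_in_utc is hypothetical, so adjust it to whatever the table actually contains:

select datediff(second, last_check_in_utc, getutcdate()) as seconds_since_check_in
from last_agent_check_in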

 

@Mark Jongeling TCP 2648S raised. I noticed an interesting error; I hope it is related to this issue:

2021-12-16T09:06:08.2865241+00:00  [ERR] Error scheduling system flow 'system_flow_generate_definition' for application 11. (0c778fc7)
Microsoft.Data.SqlClient.SqlException (0x80131904): Transaction (Process ID 209) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
   at Microsoft.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
   at Microsoft.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
   at Microsoft.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
   at Microsoft.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString, Boolean isInternal, Boolean forDescribeParameterEncryption, Boolean shouldCacheForAlwaysEncrypted)
   at Microsoft.Data.SqlClient.SqlCommand.CompleteAsyncExecuteReader(Boolean isInternal, Boolean forDescribeParameterEncryption)
   at Microsoft.Data.SqlClient.SqlCommand.InternalEndExecuteNonQuery(IAsyncResult asyncResult, Boolean isInternal, String endMethod)
   at Microsoft.Data.SqlClient.SqlCommand.EndExecuteNonQueryInternal(IAsyncResult asyncResult)
   at Microsoft.Data.SqlClient.SqlCommand.EndExecuteNonQueryAsync(IAsyncResult asyncResult)
   at System.Threading.Tasks.TaskFactory`1.FromAsyncCoreLogic(IAsyncResult iar, Func`2 endFunction, Action`1 endAction, Task`1 promise, Boolean requiresSynchronization)
--- End of stack trace from previous location ---
   at Indicium.BackgroundServices.SystemFlowSchedulerBase.RunSystemFlow(Int32 guiApplID, TSFApplication application, FullProcessFlow systemFlow, DateTime scheduledTime) in C:\azp\agent\_work\1\s\src\Indicium\BackgroundServices\SystemFlowSchedulerBase.cs:line 112
   at Indicium.BackgroundServices.SystemFlowScheduler.RunSystemFlow(Int32 guiApplID, TSFApplication application, FullProcessFlow systemFlow, DateTime scheduledTime) in C:\azp\agent\_work\1\s\src\Indicium\BackgroundServices\SystemFlowScheduler.cs:line 90
   at Indicium.BackgroundServices.SystemFlowSchedulerBase.ScheduleSystemFlow(Int32 guiApplID, String systemFlowID, DateTime scheduledTime) in C:\azp\agent\_work\1\s\src\Indicium\BackgroundServices\SystemFlowSchedulerBase.cs:line 84
ClientConnectionId:cebd2f0d-f4be-42c7-b3e5-a33b7b1c34ad
Error Number:1205,State:52,Class:13
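
Side note: deadlocks like this one (error 1205) are captured by SQL Server's built-in system_health Extended Events session, so the deadlock graph can usually be retrieved after the fact. A sketch, assuming default system_health settings and sufficient permissions:

-- read xml_deadlock_report events from the system_health session files
select x.event_xml.value('(event/@timestamp)[1]', 'datetime2') as deadlock_time,
       x.event_xml.query('(event/data/value/deadlock)[1]') as deadlock_graph
from (
    select cast(event_data as xml) as event_xml
    from sys.fn_xe_file_target_read_file('system_health*.xel', null, null, null)
    where object_name = 'xml_deadlock_report'
) as x
order by deadlock_time desc;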

Userlevel 5
Badge +12

@Arie V Yes, we have the same deadlock error. I've seen it in the log file as well and uploaded it to the TCP ticket.

@Mark Jongeling Yesterday we had the issue just as often as usual, though, so the shrink did not solve it.

Userlevel 7
Badge +23

@Blommetje, can you confirm that in the ticket and attach the log file? Thanks!
