
Hello everyone,

We recently implemented a solution for manual tests that generates a description using generative AI. This works well for single objects, but when I want to generate descriptions for all of the remaining 200 tests, it becomes incredibly slow and eventually freezes. At first I thought this was my code, but when I tested it with the description generator that already exists for template code, it did exactly the same thing.

So now I have to generate my descriptions one by one, which will take an incredibly long time; the alternative is to wait about an hour for the batch to reach 25%, only for it to freeze. We use GPT-4 Turbo. Is this known to be slow, or is there something wrong?

Hi Sander,

It may be the throttling limit that OpenAI sets. I have seen it happen that when you generate an OpenAI API key, the organization is set to Personal, which leaves the key quite restricted. Can you verify that the key is indeed using an Organization rather than Personal? Also be sure to check the Indicium log; any error, such as a throttling limit being hit, will be logged there.

For the most part, this enrichment simply sends requests to OpenAI and waits for the responses.
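
For reference, here is a minimal sketch of what throttling looks like on the client side, assuming the Python openai client; the model name, prompt, and retry policy are illustrative and not the Software Factory's actual implementation. When a rate limit is hit, OpenAI returns HTTP 429 (a RateLimitError in the client), and a caller that does not back off and retry can appear to hang on a large batch:

```python
import time

from openai import OpenAI, RateLimitError

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_description(test_case_text: str, max_retries: int = 5) -> str:
    """Ask the model for a short description, backing off when throttled."""
    for attempt in range(max_retries):
        try:
            response = client.chat.completions.create(
                model="gpt-4-turbo",
                messages=[
                    {"role": "system", "content": "Write a short description of this manual test."},
                    {"role": "user", "content": test_case_text},
                ],
            )
            return response.choices[0].message.content
        except RateLimitError:
            # HTTP 429: the key has hit its requests- or tokens-per-minute limit.
            # Exponential back-off before retrying; without something like this,
            # a batch of 200 requests can stall once the limit is reached.
            time.sleep(2 ** attempt)
    raise RuntimeError("Still throttled after retries")
```

Any 429 responses should also show up as errors in the Indicium log.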


Hi Mark,

We created the API key on an organization account, so I think that is configured correctly. I understand that it is waiting for responses and that this can take long(er), but that does not explain why it freezes and gets stuck at around 25% completion. If that did not happen, I would gladly wait a few hours for it to complete and work on something else in the meantime.

Normally it should not be necessary to process this much data, but since we are migrating our manual test cases to the SF, it is necessary now. It works fine, just slowly, until it freezes. So could it be default OpenAI behaviour to freeze all requests once a limit is reached, or should it continue after a bit of cooldown? The freezing does not make sense to me.


If the freezing is indeed reproducible, you can create a ticket for it in TCP. I don’t recall us experiencing the same issue, but with your Software Factory we may be able to spot what is going on.

The first part of these enrichments is a system flow; I can only imagine Indicium becoming inactive during the enrichment run. That should also be visible in the Indicium log.