As the tooltip indicates, the "Translating using AI" enrichment uses the fallback language (in our case, English). However, it currently seems impossible to translate an untranslated English or Spanish object using AI, which means the usual "Translate using ID" task must first be performed, followed by the AI translation.

I can understand to some extent that this is a deliberate choice, as you would want to review and approve a manually translated or ID-based object before proceeding with further translations. This approach mitigates the risk that, if the initial translation is flawed, the subsequent translations will also be incorrect.

Do you agree with me that the user interface could still be improved? Currently, the task is executed, and you have to navigate to Enrich model > Run enrichment to see that no update can be performed. A simple improvement could be disabling the AI task if the fallback language lacks a translation.
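To sketch what I mean, the task's availability could hinge on a check along these lines. This is purely illustrative Python, not actual Software Factory logic, and all names are made up:

```python
def ai_translation_available(translations: dict[str, str], fallback_language: str) -> bool:
    # Hypothetical guard: only enable the "Translate using AI" task when the
    # object already has a translation in the fallback language to build on.
    return bool(translations.get(fallback_language))


# An object that only has a Spanish translation and no English (fallback) one yet:
print(ai_translation_available({"es": "Enviar correo (prueba)"}, "en"))  # False
```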

Additionally, I am very curious about the technical handling of the API calls between SF and the OpenAI API. Are the enrichment calls stored in a temporary table within SF before they are actually sent to the API, or how does this process work?

Hi Dennis,

The idea behind the task was to quickly translate languages that you aren't fluent in. It's quite challenging to let AI generate a translation for an object name without knowing anything about that object. In your situation, you would most likely translate send_mail_test to Send mail (test), but that is because you know the context of the object. The AI might simply remove the underscores and call it a day, or it may translate it as "Send a test mail". AI can fluctuate in what it provides, despite our efforts to keep it consistent.

Disabling the task in the screen when the fallback language is selected might not be very comprehensible for most users, but I agree there is some room for improvement. Hiding it altogether for the fallback language is also an option.

Would love to hear any other suggestions from our Community members. 😄

About the SF and OpenAI API calls: under the hood we have several tables that store the enrichment run, the parameters of that run, the AI prompts and responses, and so on. Enrichment calls are calculated at runtime and then stored temporarily prior to sending. We then call the generative AI provider and store the response afterwards.
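If it helps to picture that pattern, here is a minimal, purely illustrative sketch in Python against an in-memory SQLite database. All table, column, and function names are invented for this example and do not reflect the actual Software Factory schema.

```python
import sqlite3
import uuid

# Invented staging schema: one row per enrichment run, one row per calculated call.
SCHEMA = """
CREATE TABLE IF NOT EXISTS enrichment_run  (run_id TEXT PRIMARY KEY, parameters TEXT);
CREATE TABLE IF NOT EXISTS enrichment_call (
    run_id   TEXT,
    seq_no   INTEGER,
    prompt   TEXT,
    response TEXT,
    status   TEXT,
    PRIMARY KEY (run_id, seq_no)
);
"""


def run_enrichment(conn, prompts, call_provider, parameters=""):
    """Stage the calculated prompts, send them to the AI provider, store responses."""
    run_id = str(uuid.uuid4())

    # 1. Record the enrichment run and its parameters.
    conn.execute("INSERT INTO enrichment_run (run_id, parameters) VALUES (?, ?)",
                 (run_id, parameters))

    # 2. Enrichment calls are calculated at runtime and stored prior to sending.
    for seq_no, prompt in enumerate(prompts):
        conn.execute(
            "INSERT INTO enrichment_call (run_id, seq_no, prompt, status) "
            "VALUES (?, ?, ?, 'staged')",
            (run_id, seq_no, prompt),
        )
    conn.commit()

    # 3. Call the generative AI provider per staged prompt and store the response.
    for seq_no, prompt in enumerate(prompts):
        response = call_provider(prompt)  # e.g. a thin wrapper around the OpenAI API
        conn.execute(
            "UPDATE enrichment_call SET response = ?, status = 'completed' "
            "WHERE run_id = ? AND seq_no = ?",
            (response, run_id, seq_no),
        )
    conn.commit()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript(SCHEMA)
    # Dummy provider so the sketch runs without any external API.
    run_enrichment(conn,
                   ["Translate 'send_mail_test' to Spanish"],
                   lambda prompt: "Enviar correo (prueba)")
    print(conn.execute("SELECT prompt, response, status FROM enrichment_call").fetchall())
```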

Quite an intricate process flow:


