
Hi,

We have set up LLM embeddings to implement AI-based search, which we are trying out as we speak (write). My question is about the models themselves: do they work with Dutch and other languages (like German), and if so, how? Would you have to use other models, generate your own, or is there some kind of translation involved?


Hi Robert,

The embedding process actions call OpenAI's API, and you can configure which model is used in the Generative AI providers screen. The current default embedding model is text-embedding-3-small. OpenAI shared more information about its newer models here: New embedding models and API updates | OpenAI

Which models you can use depends on your API key. The default model can handle multilingual content.
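For anyone who wants to see what such an embedding call looks like, here is a minimal sketch using OpenAI's Python SDK directly with the default model mentioned above. This is not the platform's own implementation, just an illustration; it assumes the openai package is installed and an OPENAI_API_KEY is set in the environment, and the Dutch sentence is made up for the example.

```python
# Minimal sketch: requesting an embedding from OpenAI's API with the
# default model named in this thread (text-embedding-3-small).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.embeddings.create(
    model="text-embedding-3-small",
    input=["Hoe configureer ik het embeddingmodel?"],  # Dutch input is accepted as-is
)

vector = response.data[0].embedding
print(len(vector))  # text-embedding-3-small returns 1536-dimensional vectors
```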


Hello Mark,

Thanks for the quick reply. We will have a look at the different models, but we also need some hands-on experience to see how to get the best results from the default model.


As an additional comment/reply on the subject: as mentioned by Mark, Dutch is also understood by the model. This opens up opportunities 😀
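To get a feel for how well Dutch is handled, a quick sanity check is to embed the same question in Dutch and English and compare the cosine similarity of the vectors. The sketch below does just that; the sentences are invented for illustration and the similarity value you get will vary, but semantically matching pairs should score noticeably higher than unrelated ones.

```python
# Hedged sanity check: embed a Dutch and an English phrasing of the same
# question and compare them with cosine similarity.
import math
from openai import OpenAI

client = OpenAI()

texts = [
    "Hoe werkt zoeken met embeddings?",        # Dutch
    "How does search with embeddings work?",   # English
]
resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
a, b = (d.embedding for d in resp.data)

dot = sum(x * y for x, y in zip(a, b))
norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
print(f"cosine similarity: {dot / norm:.3f}")  # should be relatively high for matching meanings
```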