We have set up AI-based search using LLM embeddings and are trying it out as we speak (write). My question is about the models themselves. Do they work with Dutch and other languages (such as German), and if so, how? Would you have to use other models, generate your own, or is some kind of translation possible?
Best answer by Mark Jongeling
Hi Robert,
The embedding process actions call OpenAI's API, and you can configure which model is used in the Generative AI providers screen. The default embedding model is currently text-embedding-3-small. OpenAI shared more info about newer models here: New embedding models and API updates | OpenAI
Which models you can use depends on your API key. The default model can handle multilingual content.
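To illustrate the multilingual point, here is a minimal sketch of how you might compare a Dutch and an English sentence using OpenAI's embeddings endpoint. The `embed` helper and the example sentences are assumptions for illustration; only the model name (text-embedding-3-small) comes from the thread above, and the call requires the `openai` Python package plus an `OPENAI_API_KEY` in your environment.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (higher = more similar)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def embed(texts, model="text-embedding-3-small"):
    """Hypothetical helper: fetch embeddings for a list of texts.

    Requires `pip install openai` and OPENAI_API_KEY set in the environment.
    """
    from openai import OpenAI
    client = OpenAI()
    response = client.embeddings.create(model=model, input=texts)
    return [item.embedding for item in response.data]

# Example usage (makes a live API call, so it is left commented out):
# nl, en = embed(["Hoe reset ik mijn wachtwoord?", "How do I reset my password?"])
# print(cosine_similarity(nl, en))  # a multilingual model should score these highly
```

Because the model embeds semantically similar text from different languages into nearby vectors, a Dutch query can match English content (and vice versa) without an explicit translation step.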
Thanks for the quick reply. We will have a look at the different models, but we also need some hands-on experience to see how to get the best results with the default model.