That is a lot of questions, but I'll try to give you some pointers. I'm going to assume the documents are already in a format where the contents can be queried; if they are not, you will first have to get them into such a format. A good starting point is reading our documentation: https://docs.thinkwisesoftware.com/docs/sf/llm

When implementing AI search, you typically want to generate an embedding of every document and compare it to the embedding of the search query.

Summarization of documents can be done using our LLM instruction or LLM completion process actions. Do keep in mind that there are limits to the prompt size depending on which OpenAI model you are using. You might need to split documents and process them in parts if they are bigger than the token limit.
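The embedding comparison described above is usually done with cosine similarity. A minimal sketch of the idea, assuming the embeddings have already been generated (how you obtain them — via the Thinkwise LLM actions or the OpenAI API directly — is left out here; `document_embeddings` is a hypothetical dict of id → vector):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors of equal length.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(query_embedding, document_embeddings, top_k=3):
    # Rank stored document embeddings by similarity to the query embedding
    # and return the best top_k matches as (doc_id, score) pairs.
    scored = [(doc_id, cosine_similarity(query_embedding, emb))
              for doc_id, emb in document_embeddings.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]
```

For small document sets a linear scan like this is fine; at larger scale you would typically move the vectors into a database with vector-search support.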
Hi @Remco Kort, do you have more information on how to create the base vectors for the searchable content? We have, for example, a database full of knowledge items and services; each consists of a title, an intro and a full description. I have some doubts about how to structure the embedding request:

- TW only supports a single string text input, right? So in this case I should concatenate the title, intro and full description?
- How do TW/OpenAI treat HTML tags? Do they need to be stripped upfront?
- There is a maximum input (tokens); are these characters? If so, what needs to happen when you have an item that exceeds this maximum?
- Is there a smart way to validate whether a vector is still up to date? Or do you just need to track updates on the content that was embedded?

Do you have more input or information to share?

Correct, I think that would probably work for your use case. But I believe that the ESG app made by Thinkwise uses larger amounts of data, so maybe ask them (Sander Kiesbrink,
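For the staleness question, a common approach (not Thinkwise-specific; a sketch under my own assumptions) is to store a hash of the exact text that was embedded next to the vector, and re-embed only when the current content hashes differently. The same helper can concatenate the three fields and strip HTML upfront; the regex-based strip here is illustrative, and a real HTML parser is safer for messy markup:

```python
import hashlib
import re

def embedding_input(title, intro, description):
    # Concatenate the fields into one string, stripping HTML tags and
    # collapsing whitespace first. The regex strip is a naive sketch.
    parts = [re.sub(r"<[^>]+>", " ", p or "") for p in (title, intro, description)]
    return "\n\n".join(" ".join(p.split()) for p in parts)

def content_hash(text):
    # Hash of the exact text that was embedded; store this alongside the vector.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def needs_reembedding(stored_hash, title, intro, description):
    # The vector is stale when the current content hashes differently
    # from the hash recorded at embedding time.
    return content_hash(embedding_input(title, intro, description)) != stored_hash
```

On the token question: tokens are not characters. OpenAI models count tokens, and as a rough rule of thumb one token is on the order of a few characters of English text, so an item that exceeds the model's limit has to be split into chunks that are embedded separately.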
I like the general idea, but I think it should be more generic: give a developer the option, per column, to set whether that column is used in the automatically generated handlers or not.

We could provide a tag that developers can then attach to columns to exclude them from the automatically generated handlers, but maybe we can think of another way 😄

That would work, but it's also not a great user experience. Maybe you could add it as options in this screen; that seems a logical place:
Another extension I find rather useful when you use Azure Data Studio on multiple devices is the 'Settings Sync' extension. It allows you to sync your settings to your GitHub account. On other devices you then install the same extension and it will sync all your settings and extensions, so you don't have to redo it all by hand.
At pre-sales I had a chance to work on solution number 2 mentioned by Jop. The code below adds prefilters based on the provided domain elements. It also creates a new group, named after the domain, which will contain all the prefilters. If domain elements have a picture associated with them, that picture is used as the icon for the prefilter. To use this code, replace [SF_DATABASE], [PROJECT_ID], [PROJECT_VERSION] and [DOMAIN_NAME] with your own parameters.

```sql
use [SF_DATABASE]

declare @project_id     varchar(max) = '[PROJECT_ID]'
declare @project_vrs_id varchar(max) = '[PROJECT_VERSION]'
declare @dom_id         varchar(max) = '[DOMAIN_NAME]'

-- Create prefilter group
insert into tab_prefilter_grp
(
    [project_id], [project_vrs_id], [tab_id], [tab_prefilter_grp_id],
    [tab_prefilter_grp_desc], [sub_menu], [icon], [exclude], [mand],
    [order_no], [abs_order_no], [insert_user], [insert_date_time],
    [update_user], [update_date_time]
)
select @project_id     as project_id
     , @project_vrs_id as project_vrs_id
     , t.tab_id        as
```