Question-to-SQL generation?

Related products: Software Factory

I came across a story that mentioned a few AI use cases that seemed interesting. Are there any moves towards adopting these types of technologies?



There is a specific mention of 'question to SQL generation'. It would be great to have a user type in a question and have the system generate the right query to fetch and display the data.
I do think this would be a way for Microsoft to improve SQL Server Management Studio, but it could take a while before we see that. Thinkwise could use this for the in-GUI SQL editor, I think.
Not only for the in-GUI SQL editor. With question-to-SQL generation, you could, like Freddy mentions, let the user ask the application questions in natural language, and the AI would turn that into a query and retrieve an answer.



I think this would be an amazing tool for ad-hoc questions from users! Especially combined with the structured way applications are set up in the Software Factory, you could use the SF as a meta-database, so a user does not have to know the names of tables or fields. The AI knows the translations as well and can use them.



A question like 'How many projects do we have running for customers in Belgium?' should be easy to answer. The possibilities for end users are, in my opinion, endless and very powerful.
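As a rough illustration of the idea (all table and column names here are hypothetical, not actual Software Factory metadata), a translation set from the application model could map the user's wording onto a physical query, which is then executed like any other SQL:

```python
import sqlite3

# Hypothetical translation set: user-facing terms -> physical table names.
# In a real setup this mapping would come from the application meta-model.
TRANSLATIONS = {
    "projects": "project",
    "customers": "customer",
}

def count_query(entity_term: str) -> str:
    """Very naive question-to-SQL step: count rows of the entity the
    user named, joined to customer and filtered by country."""
    table = TRANSLATIONS[entity_term]
    return (
        f"SELECT COUNT(*) FROM {table} p "
        "JOIN customer c ON c.customer_id = p.customer_id "
        "WHERE c.country = ?"
    )

# Toy data to run the generated query against.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customer (customer_id INTEGER, country TEXT)")
cur.execute("CREATE TABLE project (project_id INTEGER, customer_id INTEGER)")
cur.executemany("INSERT INTO customer VALUES (?, ?)",
                [(1, "Belgium"), (2, "Netherlands")])
cur.executemany("INSERT INTO project VALUES (?, ?)",
                [(10, 1), (11, 1), (12, 2)])

sql = count_query("projects")
count = cur.execute(sql, ("Belgium",)).fetchone()[0]
print(count)  # 2: projects 10 and 11 belong to the Belgian customer
```

The point of the sketch is only that the user never sees the names `project` or `customer_id`; the model supplies them.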



I'd strongly encourage us to look into this. @Marcel Zuur, do you have any ideas about this?
Leveraging application models and corresponding translation sets in order to process questions and commands in natural language is something we have already looked into over the last two or three years. This concept was actually the basis for an experiment we did with a user interface that used text and voice commands to perform actions (i.e. generate requests to Indicium).



I expect that we will invest more research into this in the future.




What came out of the experiment? Was it successful, and to what degree? It would be really cool to see and know more about this.

The proof of concept was very successful. Natural-language understanding is a very good match with model-driven development, because all the information needed to train the required machine-learning models and to process the results is already available in the Thinkwise model.
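To illustrate why the model helps here (a hypothetical structure, not how Thinkwise actually trains its models), the translation sets in an application model could be expanded into (surface term, entity) pairs that an intent or entity recognizer might be trained on:

```python
# Hypothetical model fragment: entities with their translations per language.
MODEL = {
    "project": {"en": ["project", "projects"], "nl": ["project", "projecten"]},
    "customer": {"en": ["customer", "customers"], "nl": ["klant", "klanten"]},
}

def training_pairs(model):
    """Expand the translation set into (surface term, entity) pairs,
    one per translated term, suitable as NLU training data."""
    pairs = []
    for entity, languages in model.items():
        for terms in languages.values():
            pairs.extend((term, entity) for term in terms)
    return pairs

pairs = training_pairs(MODEL)
# e.g. ("klanten", "customer") is one of the generated pairs
```

Because the model is already structured and translated, this kind of training data falls out for free; no manual annotation step is needed.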

You can expect more on this topic in 2020!