
Idea pipeline (top 25)


Arie V
Community Manager

Provide Notification and Download API for new Thinkwise runtime component versions (Open)

Automating deployments speeds them up and at the same time reduces the time Developers/Operators spend on these mundane, repetitive tasks. It's great that tremendous effort has been put into the SF Creation screens in the latest Thinkwise Platform releases (2021.3 / 2022.1), and it's also great to see that the same is planned for Hotfixes.

Now, this Idea is about the runtime components of the Thinkwise Platform (GUIs / Indicium). Currently we are notified of new Releases via the Community News & Updates blogs. By subscribing to this Category, we receive an e-mail when the Release Notes are published (and the release is available for download). After that we need to go to TCP ourselves, download the new versions through the annoying download task, and then upload them to our own environment for further processing.

We have already automated deployments on our side for Indicium & Universal (we only use the Windows GUI for the SF): once the new versions are uploaded to our environment, we simply provide the Indicium + Universal combination we need and the target Environment (DEV/TEST/PROD), and our automation (in AWS, using S3 / SNS / Lambda) takes care of creating the ZIP files with custom configuration, and of uploading and deploying them (in our case to AWS Elastic Beanstalk).

The next big step to speed up this process is replacing the Community e-mail / TCP manual download / upload actions. Instead, we would like to see the following:

- Thinkwise sends a Notification through an API endpoint (to which any customer can subscribe) whenever a new version of a runtime component is available
- Thinkwise provides a Download API from which we can then download the new version
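To make the request concrete, here is a minimal sketch of how such a notification + Download API could plug into the AWS pipeline described above. None of these endpoints or payload fields exist today; the message shape, field names, environment variables, and authentication scheme are all assumptions for illustration only.

```python
"""Hypothetical consumer of the proposed Thinkwise notification + Download APIs.

Assumes the notification is forwarded to this Lambda via SNS with a payload like
{"component": "indicium", "version": "2024.1.13", "download_url": "..."}.
All field names, endpoints, and credentials are illustrative assumptions.
"""
import json
import os
import urllib.request

import boto3

s3 = boto3.client("s3")
RELEASE_BUCKET = os.environ.get("RELEASE_BUCKET", "my-thinkwise-releases")  # assumed bucket
API_TOKEN = os.environ.get("TCP_API_TOKEN", "")  # hypothetical Download API credential


def handler(event, context):
    """Triggered by the (proposed) new-version notification, delivered through SNS."""
    for record in event.get("Records", []):
        message = json.loads(record["Sns"]["Message"])
        component = message["component"]        # e.g. "indicium" or "universal"
        version = message["version"]            # e.g. "2024.1.13"
        download_url = message["download_url"]  # provided by the proposed Download API

        # Fetch the release ZIP from the hypothetical Download API.
        request = urllib.request.Request(
            download_url, headers={"Authorization": f"Bearer {API_TOKEN}"}
        )
        with urllib.request.urlopen(request) as response:
            body = response.read()

        # Drop it in S3; the existing S3 / SNS / Lambda pipeline takes over from here
        # (custom configuration, ZIP assembly, deployment to Elastic Beanstalk).
        key = f"releases/{component}/{version}.zip"
        s3.put_object(Bucket=RELEASE_BUCKET, Key=key, Body=body)

    return {"status": "ok"}
```

With something like this in place, the Community e-mail, manual TCP download, and manual upload steps would disappear entirely from the flow.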

Robert Jan de Nie
Thinkwise blogger

Make Software Factory metadata available to AI so I can generate SQL templates from my IDE (Open)

I want to use generative AI to draft the bulk of my SQL templates, directly from my own IDE (e.g., VS Code) together with tools like GitHub Copilot. For this to work reliably, the AI needs first-class context from the Thinkwise Software Factory (SF): the application’s model metadata and relevant UI information. The SF already maintains a rich, structured model of tables, columns, and references; exposing that to the AI is the missing piece.

Why this metadata is essential for AI-assisted SQL

When the AI understands my model, its output stops being generic and becomes project-specific:

- Tables, columns, PK/FKs – enable correct JOINs, WHERE clauses, and integrity awareness. The SF’s data model explicitly defines these entities and relations, which the AI can use to pick the right join keys, respect cardinalities, and avoid hallucinated columns.
- Domains (user-defined data types) – domains act as abstract data types that drive constraints and UI defaults. Sharing domains helps the AI choose correct data types, casts, default values, and validation logic in generated SQL.
- UI semantics – properties like control type or visibility can guide the AI to prefer filtered queries (e.g., hide inactive rows by default) or to shape parameter prompts and WHERE clauses that match what end-users actually see.
- Template description & intent – short, descriptive text per template (purpose, invariants, edge cases) gives the AI the business intent it needs to generate more accurate and consistent code.

How I propose to make this work

1. Expose a compact model export for use in VS Code
   - MVP scope: tables, columns (incl. nullability), primary keys, foreign keys, domains (name, base type, constraints).
   - Optional scope: UI hints (visibility, control), default filters, and template descriptions.
   - Delivery options (any of these would work):
     - Indicium OData endpoint that surfaces a read-only view of the SF model (Indicium already provides an open API and can expose SF branches when configured).
     - CLI/export task that writes JSON files into a repository, consumed by VS Code (a sketch of this option follows below).
     - VS Code extension that authenticates to the endpoint and injects the model as inline context for Copilot prompts.
2. Use the model export as AI context in the IDE
   - In VS Code, I want to select a template and have the AI read the JSON/endpoint to generate the initial SQL, ready for review and refinement.

Expected results

- Speed: AI drafts 70–90% of the SQL; I focus on edge cases.
- Quality: Consistent use of keys, domains, and defaults; fewer “missing join” or “wrong column” errors.
- Discoverability: As Thinkwise continues to add AI-powered features (e.g., model enrichments and support for storing LLM embeddings), a model-aware export aligns with that trajectory.
- Future fit: Community use cases already show value in combining Thinkwise models with LLMs (e.g., natural-language search using embeddings), reinforcing the need for structured model access.
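To illustrate the CLI/export delivery option, here is a minimal sketch of what pulling the compact model export from a read-only Indicium OData endpoint into repository JSON files could look like. The base URL, entity set names, field selections, and output layout are assumptions for illustration; authentication is omitted for brevity, and the SF does not expose this export today.

```python
"""Sketch: pull an assumed compact SF model export from a read-only Indicium
OData endpoint and write it as JSON files for VS Code / Copilot to use as context.

Endpoint path, entity names, and projected fields are illustrative assumptions.
"""
import json
import pathlib
import urllib.parse
import urllib.request

BASE_URL = "https://indicium.example.com/iam/sf/"  # hypothetical SF model endpoint
OUTPUT_DIR = pathlib.Path("model_context")          # read from the repo by the IDE


def fetch(entity: str, select: str) -> list[dict]:
    """GET one OData entity set with a $select projection (read-only access assumed)."""
    url = f"{BASE_URL}{entity}?{urllib.parse.urlencode({'$select': select})}"
    with urllib.request.urlopen(url) as response:
        return json.load(response)["value"]


def export_model() -> None:
    """Write the MVP scope (tables, columns, keys/references, domains) as JSON files."""
    OUTPUT_DIR.mkdir(exist_ok=True)
    scope = {
        "tables": fetch("tab", "tab_id,type_of_table"),
        "columns": fetch("col", "tab_id,col_id,dom_id,mandatory,primary_key"),
        "references": fetch("ref", "tab_id,target_tab_id,ref_id"),
        "domains": fetch("dom", "dom_id,data_type,data_length"),
    }
    for name, rows in scope.items():
        (OUTPUT_DIR / f"{name}.json").write_text(json.dumps(rows, indent=2))


if __name__ == "__main__":
    export_model()
```

A VS Code extension (or a plain Copilot prompt referencing these files) could then attach the relevant table, column, and domain entries when I ask the AI to draft a template, which is exactly the project-specific context described above.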