
We are very excited to announce version 2022.1 of the Thinkwise Platform. It is available for download in TCP now!

Here, we would like to highlight two features: the full integration of the Thinkstore into the Software Factory and the redesigned deployment processes.

For an overview of everything new, improved, and fixed in this release, please consult the release notes in the Thinkwise Docs. 

 

Thinkstore fully integrated

Let's start with the Thinkstore. Perhaps you have tried it already in the Community? It has been fully integrated into the Software Factory to improve its ease of use.

What is the Thinkstore?

The Thinkstore is already available in the Community. It contains a collection of scripts and samples to help you get the most out of the Thinkwise Platform.

In the Community, solutions can only be downloaded and installed manually. To improve this, you can now access the Thinkstore in the Software Factory from the menu Enrichment > Thinkstore. Here, you can select, download, and install samples and models right from the Thinkwise IDE into your projects. You can install a solution multiple times under different project names and project versions.

Thinkstore fully integrated into the Software Factory

 

Redesigned deployment processes

In the previous release (2021.3), we redesigned the Creation process. In this release, we have continued by improving the entire Deployment process.

In the Creation process, writing program objects to disk is available again in the Generate source code step. In the Execute source code step, we have improved the information about the user input that is sometimes required when execution is aborted, as well as the icons for the tasks available to continue the execution. Also new in the Execute source code step is the option to determine the error-handling behavior of the tasks Connect to database and Execute source code when they are called via Indicium's API. Finally, we have improved the error log when the source code is executed via the user interface or Indicium's API.

As a follow-up to the redesigned Creation process, the Deployment processes Synchronization to IAM and Deployment package have also been redesigned. Both processes now run through Indicium, which makes automation via the Indicium API possible.

In the task Synchronize to IAM, you no longer need to enter a host, database, or authentication. Instead, you only need to select a Target IAM (as set in the menu Maintenance > IAM configurations). In the task Synchronize to disk (previously: Write to disk), you no longer need to enter a file location; a location field is displayed on the screen, and you can easily navigate to the file or folder using the buttons next to it. In both tasks, the new Note field allows you to add a comment, such as the reason for synchronizing. This comment is visible in the History tab's grid.

Synchronization to IAM redesigned

In the Deployment package process, you no longer need to enter a file location. After a successful run, all deployment package files are written to the project folder. A field containing the exact location appears on the screen, and you can easily navigate to this folder using the button next to it. The new Note field allows you to add a comment, such as the reason for creating the deployment package. This comment is visible in the History tab's grid.

Deployment package redesigned

 

Other highlights

Of course, this release brings many more new features and changes; for all other highlights, please see the release notes.

 

Last night I ran the upgrade to 2022.1, and although everything seems fine, my trainees, who were in a separate tenant, got stripped of their SF rights in the IAM and on the DB (no more End User rights). Is anything incorrect in the upgrade script, or did I do something wrong?


Hi Freddy,

An update of the Software Factory application indeed re-provisions roles for the user group ‘sf_developers’ of the default tenant and does not leave any other role assignments to user groups intact, regardless of tenant.

I understand that this is very inconvenient for your scenario. Please submit a ticket and we’ll look into resolving this for future releases.

Note that re-assigning SF roles for the trainees' user groups will also automatically grant them End User rights for IAM.


I've got a question about the "Redesigned deployment processes". I'm super excited about this because I see much value in it for automating the development process.

We have lifted all infrastructure for the Software Factory development to the Azure cloud. The database(s) and application servers (web apps) are running in the cloud.

I read that the application server writes the files to some location. How is this designed to work with application servers that are running in the (Azure) cloud?

At this moment we are running on SF 2021.2 and the Windows GUI is writing the files to the local c:\Temp folder.


I read that the application server writes the files to some location. How is this designed to work with application servers that are running in the (Azure) cloud?

@Anne Buit @efitskie Same question and challenge for AWS Cloud from our side. Would prefer to get a recommended solution from Thinkwise, instead of having to figure this out ourselves!


When you cannot directly perform product rollouts and/or synchronisations, the scripts must indeed somehow be made accessible for use in conjunction with the deployer (or some other tool).

The Software Factory allows for writing these to disk, but this is not the only option. The deployment scripts and sync scripts can be pulled from the Software Factory via the API.

You can access the code files after the source code has been generated as follows:

indicium/iam/sf/code_file?$filter=project_id eq 'TEST_AB' and project_vrs_id eq '1.40' and code_file_generated_code ne null&$select=code_file_id, code_file_generated_code
Direct access to the code files
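
To illustrate, here is a minimal Python sketch that pulls these code files and writes them to a local folder. The host, the basic-auth credentials, and the .sql extension are assumptions for illustration; Indicium's actual authentication depends on how your environment is configured.

import os
import requests
BASE_URL = "https://my-indicium-host/indicium/iam/sf"  # assumed host
AUTH = ("automation_user", "secret")  # assumed credentials
params = {
    "$filter": "project_id eq 'TEST_AB' and project_vrs_id eq '1.40' and code_file_generated_code ne null",
    "$select": "code_file_id, code_file_generated_code",
}
response = requests.get(BASE_URL + "/code_file", params=params, auth=AUTH)
response.raise_for_status()
os.makedirs("deployment_scripts", exist_ok=True)
for code_file in response.json()["value"]:  # OData wraps the rows in "value"
    path = os.path.join("deployment_scripts", code_file["code_file_id"] + ".sql")
    with open(path, "w", encoding="utf-8") as f:
        f.write(code_file["code_file_generated_code"])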

The same can be done for the synchronisation scripts. After the API add_job_to_generate_sync_script has been used and the job is done, the sync scripts can be accessed as follows:

indicium/iam/sf/sync_script?$filter=project_id eq 'TEST_AB' and project_vrs_id eq '1.40'&$select=sync_object_id, sync_script

The sync scripts will be available in the response

Direct access to the sync scripts
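
The same minimal Python sketch as above, adapted (with the same assumed host and credentials) to save each sync script under its sync_object_id:

import requests
BASE_URL = "https://my-indicium-host/indicium/iam/sf"  # assumed host
AUTH = ("automation_user", "secret")  # assumed credentials
params = {
    "$filter": "project_id eq 'TEST_AB' and project_vrs_id eq '1.40'",
    "$select": "sync_object_id, sync_script",
}
response = requests.get(BASE_URL + "/sync_script", params=params, auth=AUTH)
response.raise_for_status()
for row in response.json()["value"]:
    with open(row["sync_object_id"] + ".sql", "w", encoding="utf-8") as f:
        f.write(row["sync_script"] or "")  # sync_script may be null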

I hope this helps in automating the deployment when direct access from the Software Factory environment to the rollout target environment is not possible and access to disk is not an option.

It is possible to mount Azure blob storage as a volume in Azure Web App for Containers and in AWS Elastic Beanstalk via Amazon EFS but this might be more trouble than using the API to access the scripts.

 


@Anne Buit Thanks, good to know we can access the files through API calls as well. Nevertheless, I'm struggling to understand how we could streamline our PROD deployments after upgrading to Release 2022.1.

Currently, we simply run the 'Create deployment package' task right from the SF, which writes the package to an agreed-upon network folder. Then IT Operations runs twdeployerGUI.exe to execute the PROD deployment.

I guess using the APIs would mean we need to run the API calls with another tool from another location (one with access to the target network folder) and then save the files there.

Alternatively, introducing Amazon EFS would be an additional File Storage location, while we already use S3 for our Custom Application.

Since the Write file process action now (sort of) supports writing files to S3 (and other file storage locations), wouldn't it be rather simple and more straightforward if Thinkwise used the File Storage location functionality for writing these SF files too?


Yes, you would need some kind of agent with access to the (on-premises?) network folder to call the APIs and write the files. Storing the files in S3 would also require an agent with access to S3 to move the files to said network folder. There are various CI/CD tools that can help with these kinds of tasks.

The Create Deployment Package task is an orchestration of various jobs, all executed by Indicium, including several jobs that write files to disk. In earlier versions of the Software Factory, these actions were executed by the GUI, which often has access to local network locations but which made automation notoriously difficult.

We can look into allowing the Software Factory to use other types of storage instead of disk storage in processes that write files. In all scenarios, a shared/mapped file storage or an agent that acts as a bridge between the stored files (API/S3/...) and the on-premises file systems will be required if the deployment is initiated on-premises and the script generation is done in the cloud.
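
As a rough sketch of such a bridge, assuming the same hypothetical Indicium host and credentials as above and a purely illustrative S3 bucket name, an agent could pull the generated code files via the API and push them to S3 with boto3; a counterpart with access to the network folder would then download them from the bucket.

import boto3
import requests
BASE_URL = "https://my-indicium-host/indicium/iam/sf"  # assumed host
AUTH = ("automation_user", "secret")  # assumed credentials
BUCKET = "my-deployment-bucket"  # assumed bucket name
params = {
    "$filter": "project_id eq 'TEST_AB' and project_vrs_id eq '1.40' and code_file_generated_code ne null",
    "$select": "code_file_id, code_file_generated_code",
}
response = requests.get(BASE_URL + "/code_file", params=params, auth=AUTH)
response.raise_for_status()
s3 = boto3.client("s3")  # uses the agent's own AWS credentials
for code_file in response.json()["value"]:
    s3.put_object(
        Bucket=BUCKET,
        Key="TEST_AB/1.40/" + code_file["code_file_id"] + ".sql",
        Body=code_file["code_file_generated_code"].encode("utf-8"),
    )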


Hi @Anne Buit, I have a follow-up question about this:
Because we will be using the Thinkwise deployer to upgrade our projects, we will need the manifest file that is normally provided with a deployment package. Is there a way to access the linked manifest the same way as above?


Excellent question. There is a function that generates the manifest content on demand for the current version control and the currently generated code files. But as far as I can see, this function cannot be accessed through the API.

I’ve taken the liberty to create ticket 2960S for this.

Note that this manifest will only be usable for new installations of the product and upgrades of the currently configured version control. It is possible to have a manifest contain multiple products and/or incremental upgrades, but this will require the manifest to be created manually (or at least, not with the Software Factory).


@g.c.d.vanderschoot a platform improvement has been released that allows the manifest to be accessed via the API.

get manifest call

Note that the manifest itself is JSON-formatted, so the OData API will return it as nested JSON. Any JSON-deserializing client will yield the proper JSON file.
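
As a minimal sketch of such a client, assuming the manifest is exposed as an OData resource (the endpoint path, filter, and field name below are guesses based on the screenshot, so verify the actual call in your environment), the nested JSON can be re-serialized to a plain manifest file:

import json
import requests
BASE_URL = "https://my-indicium-host/indicium/iam/sf"  # assumed host
AUTH = ("automation_user", "secret")  # assumed credentials
params = {"$filter": "project_id eq 'TEST_AB' and project_vrs_id eq '1.40'"}
response = requests.get(BASE_URL + "/manifest", params=params, auth=AUTH)  # assumed endpoint name
response.raise_for_status()
manifest = response.json()["value"][0]["manifest"]  # assumed field name
with open("manifest.json", "w", encoding="utf-8") as f:
    json.dump(manifest, f, indent=2)  # write the nested JSON as a proper manifest file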


It is possible to mount Azure blob storage as a volume in Azure Web App for Containers and in AWS Elastic Beanstalk via Amazon EFS but this might be more trouble than using the API to access the scripts.

@Anne Buit Amazon EFS is apparently not supported with Windows-based EC2 instances, as noted here.

The AWS alternative for Windows servers is Amazon FSx, but we can't find any confirmation that this service works with Elastic Beanstalk.

Has anyone at Thinkwise tried and tested a solution to successfully write files from Elastic Beanstalk? 

 


The same can be done for the synchronisation scripts. After the API add_job_to_generate_sync_script has been used and the job is done, the sync scripts can be accessed as follows:

indicium/iam/sf/sync_script?$filter=project_id eq 'TEST_AB' and project_vrs_id eq '1.40'&$select=sync_object_id, sync_script

The sync scripts will be available in the response

Direct access to the sync scripts

I hope this helps in automating the deployment when direct access from the Software Factory environment to the rollout target environment is not possible and access to disk is not an option.

It is possible to mount Azure blob storage as a volume in Azure Web App for Containers and in AWS Elastic Beanstalk via Amazon EFS but this might be more trouble than using the API to access the scripts.

FYI: be aware that simply parsing the JSON from the sync_script API call will break your Sync to IAM! The sync_script response includes both "sync_object_id": "post_sync" (normally used as part of the 'Synchronization to IAM' process) and "sync_object_id": "post_deployment" (normally used as part of the 'Deployment Package' process).

If you don't separate these, the deployment will cause errors due to the duplicate declared values.

 


And one more thing about "sync_object_id": "post_deployment": the sync_object_ids are returned by the API in alphabetical order, so when parsing, it is by default not added at the end of the script, which is where it is supposed to be executed.
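
Putting both caveats together, here is a minimal Python sketch (same assumed host and credentials as earlier in this thread; verify the remaining order for your own version) that keeps the two post_* objects out of the shared body and appends each one at the end of its own script:

import requests
BASE_URL = "https://my-indicium-host/indicium/iam/sf"  # assumed host
AUTH = ("automation_user", "secret")  # assumed credentials
POST_OBJECTS = {"post_sync", "post_deployment"}
params = {
    "$filter": "project_id eq 'TEST_AB' and project_vrs_id eq '1.40'",
    "$select": "sync_object_id, sync_script",
}
rows = requests.get(BASE_URL + "/sync_script", params=params, auth=AUTH).json()["value"]
def build_script(post_object):
    # Exclude both post_* objects from the shared body, then append the
    # one that belongs to this target so it is executed last.
    body = [r for r in rows if r["sync_object_id"] not in POST_OBJECTS]
    tail = [r for r in rows if r["sync_object_id"] == post_object]
    return "\n".join(r["sync_script"] or "" for r in body + tail)
sync_to_iam_sql = build_script("post_sync")       # Synchronization to IAM
deployment_sql = build_script("post_deployment")  # Deployment package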


Hi all,

I just did an upgrade to 2022.1 on Azure DB for the IAM and the SF. For some reason, the SF was pointing to a default develop_server as its database server, which does not exist. Because of this, the SF was not accessible to developers.

I managed to log into IAM and point the SF to the correct DB server, and the problem was resolved.

 

Not sure if anyone else has faced the same, so just a heads-up.