
The Thinkwise Platform version 2021.3 has revised the way the Creation screen works, opening up more elegant ways to leverage tooling to automate this process.

The generation screen consists of a number of steps:

  • Generate definition
  • Validate definition
  • Generate source code
  • Execute source code
  • Run unit tests

This has remained unchanged between versions 2021.2 and 2021.3. However, the way these steps are performed has fundamentally changed.

In the previous version, the steps in the generation screen were bound to a specific client. A developer would start Generate definition and the definition generation would be orchestrated start-to-end by the user’s client. This has served us well for many years, but there are some drawbacks that we’ve addressed in this release.

 

The new Creation approach

All ‘jobs’ are no longer performed by a client, but are instead performed by Indicium. This means that generation, validation and such will continue even if the developer closes the screen or even the Software Factory GUI.

Furthermore, all developers will be able to track progress of a job. If a developer starts generating the definition of a project version, a different developer will also see this in their Creation screen.

This behavior extends to the validation and unit test tabs in various modelers. The validation and unit test screens, which can be opened directly from the menu, have also been updated.

 

What has changed for the developer?

To be able to move the responsibility of the Creation process to Indicium, a few things have changed. We’ve also cleaned up some unused features.

 

Generate definition

Visually, the generate definition step has been updated to show progress at the bottom and initiate the definition generation via a task.

The screen provides you with information about the most recent definition generation job.

Some configuration options for Generate definition have been removed, namely the options to:

  • Skip control procedures in development
  • Skip deleting generated objects
  • Skip copying base projects
  • Skip generation itself

When a developer starts definition generation and an error occurs at one of the control procedures, the definition generation will halt. The developer has various options to either skip the control procedure and continue execution or abort the definition generation.

Note that this choice may also be made by a different developer than the developer initiating the definition generation.

 

Validate definition

A progress bar is shown at the bottom which displays the progress of current validation executions. A validation will receive a question mark icon when subject to re-validation and a progress icon while being executed.

Validation is currently active for the User Interface validation group.

Developers can still choose to perform a full validation or only validate a certain validation group or the selected validation(s).

Note that, unlike other jobs, multiple validation jobs can be active at once for a project version.

 

Generate source code

This step has been simplified a bit as there was much functional overlap with the functionality modeler.

Writing all program objects to disk during generate source code is no longer possible. A program object can still be written to disk in the functionality modeler.

Writing code files to disk is no longer a checkbox when starting source code generation. Instead, there is a button that can be used to write all code files to disk.

The generation method Manual is no longer supported. For cherry-pick execution of very specific pieces of logic, use the Functionality modeler or the Code overview screen.

The new generate source code step

 

Execute source code

It is no longer possible to execute source code on an arbitrary server and database. The developer must select a runtime configuration. The server and database of the runtime configuration will be used to create or upgrade the database and place the business logic code.

There are two ways to execute source code. In both scenarios, the developer will first be prompted to select a runtime configuration.

  • Connect - modify the selected code files before continuing. When the Connect option is used, the developer will have the option to change the default selection of code files to execute. The default selection may vary depending on whether or not the database exists and on the current versioning information stored in the database. The developer can also choose to reconnect to a different runtime configuration or cancel code execution at this point.

  • Execute all code files - directly execute the suggested code files on the target database. As the name indicates, no further input will be asked and the code files will be executed directly on the database as provided in the runtime configuration.

If any error occurs during execution of the code files, the developer will be prompted with various options to either continue or abort the code execution.

An error occurred. Note the three tasks to choose from on how to continue.

The developer may choose to skip the code file and continue execution with the next code file, to abort execution completely, or to continue with the next statement in the current code file.

Note that this choice may also be made by a different developer than the developer initiating the code execution.

 

Unit test

Just as with source code execution, it is no longer possible to run unit tests on an arbitrary server and database. A runtime configuration must be selected.

Executing unit tests prompts runtime configuration selection

The unit test screen from the menu has also been updated. This screen has been split into a unit test execution tabpage and a maintenance tabpage where unit tests can be created and updated.

 

Automation

A job may be started by a developer via the UI, but jobs can also be started via the Indicium API. The following APIs may be used to start the various Creation steps. The Indicium instance running on the Intelligent Application Manager facilitating the Software Factory must be used for this.

Note that these calls only queue the job and respond with a 200 - OK directly after.
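All calls below share the same shape, so it can be convenient to wrap them in a small helper. The following is a minimal Python sketch, not an official client: the base URL, the credentials and the use of basic authentication are assumptions that you should replace with whatever your Indicium instance is configured for.

import requests

# Assumption: the Indicium instance running on IAM for the Software Factory.
INDICIUM = "https://myserver/indicium"

def queue_job(endpoint, body):
    """Queue a Creation job via the Indicium API.

    These calls only queue the job; Indicium responds with 200 - OK
    immediately, before the job itself has finished.
    """
    response = requests.post(
        f"{INDICIUM}/iam/sf/{endpoint}",
        json=body,
        auth=("my_user", "my_password"),  # assumption: basic authentication
    )
    response.raise_for_status()
    return response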

 

Generate definition

A job to generate the definition can be added as follows:

POST [indicium]/iam/sf/add_job_to_generate_definition

{
"project_id": "MY_PROJECT",
"project_vrs_id": "1.12",
"error_handling": 1
}

I hope the project and version parameters are self-explanatory. The parameter error_handling allows the caller to configure what happens when a control procedure fails to execute properly during definition generation. The following options are available:

  • 0 - Pause and await user input
  • 1 - Skip the control procedure in error and continue
  • 2 - Abort generation
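
For example, an unattended build pipeline may prefer error_handling 2, so that a failing control procedure aborts the generation instead of pausing for input. Using the queue_job helper sketched above:

queue_job("add_job_to_generate_definition", {
    "project_id": "MY_PROJECT",
    "project_vrs_id": "1.12",
    "error_handling": 2,  # abort generation on a failing control procedure
})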

 

Validate definition

Starting the validation is straightforward.

POST [indicium]/iam/sf/add_job_to_validate_all

{
"project_id": "MY_PROJECT",
"project_vrs_id": "1.12"
}

 

Generate source code

Generate source code is accessible via the following API:

POST [indicium]/iam/sf/add_job_to_generate_code

{
"project_id": "MY_PROJECT",
"project_vrs_id": "1.12",
"upgrade_method": 0
}

Writing code files to disk is currently not yet possible via the API. The parameter upgrade_method determines the generation method.

  • 0 - Smart generation
  • 1 - Full generation
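
For example, to force a full generation instead of a smart one, again using the queue_job helper sketched earlier:

queue_job("add_job_to_generate_code", {
    "project_id": "MY_PROJECT",
    "project_vrs_id": "1.12",
    "upgrade_method": 1,  # full generation
})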

 

Source code execution

An execution job can be created as follows:

POST [indicium]/iam/sf/add_job_to_execute_source_code

{
"project_id": "MY_PROJECT",
"project_vrs_id": "1.12",
"runtime_configuration_id": "default"
}

The runtime configuration id is used to select the target for the source code execution. Note that this has to be the specific name of the runtime configuration as configured in the Software Factory. An application id or application alias cannot be used.

An error handling option is not available yet but will be added in the future. Source code execution will wait for developer input if an error occurs.

 

Unit test execution

To execute all unit tests, queue the job as follows:

POST [indicium]/iam/sf/add_job_to_test_unit_test

{
"project_id": "MY_PROJECT",
"project_vrs_id": "1.12",
"runtime_configuration_id": "default"
}

The body is the same as for source code execution. Note that inactive unit tests will not be executed.

 

Full creation

As an alternative to calling the individual steps for creation, a task is also available to perform all steps for creation.

This task can be found in the first tabpage of the Creation screen and will queue the various steps in this screen in one go.

This action can also be called via the API.

POST [indicium]/iam/sf/add_job_to_do_complete_creation

{
"project_id": "MY_PROJECT",
"project_vrs_id": "1.12",
"execute_complete_creation": 0,
"generate_definition": true,
"validate_definition": true,
"generate_source_code": true,
"execute_source_code": true,
"execute_unit_tests": true,
"generate_definition_error_handling": 1,
"validation_result_creation": 3,
"upgrade_method": 0,
"runtime_configuration_id": "default",
"execute_source_code_error_handling": 1
}

The execute_complete_creation setting should be set to 0 if manual selection of individual steps is desired. When set to 1, all steps will be executed.

The generate_definition_error_handling and execute_source_code_error_handling values are the same as described earlier:

  • 0 - Pause and await user input
  • 1 - Skip the control procedure in error and continue
  • 2 - Abort generation

 

The validation_result_creation setting indicates how the validation results should affect the further steps.

  • 0 - Abort when there are errors
  • 1 - Abort when there are warnings
  • 2 - Abort on any validation message
  • 3 - Always continue the subsequent creation steps

 

The upgrade_method setting is the same as described earlier:

  • 0 - Smart generation
  • 1 - Full generation

 

Code execution and unit tests execution will both be done on the provided runtime configuration.

 

More to come - Deployment

This 2021.3 release focuses on automation of the Creation screen, which mostly concerns your development environment.

For the next release, we will focus on the automation of deployment to IAM environments as well. This includes synchronizing to IAM, writing code files and the combination of both - creating deployment packages via jobs in Indicium.

The packages can be picked up by the Thinkwise Deployment Center to update environments via a wizard or automated via the command-line interface.

Update!

These new features are now available in 2022.1. More information here.

Today our first 2021.3 generation, nice! One ‘but’, though…
We use Microsoft DevOps with GitHub for code and version control. We always generate full source code to disk for a production build and commit these Groups and Program Object file changes to DevOps/GitHub.

It looks like only Groups are written in 2021.3. How to generate all Program Object files in 2021.3?


Hi @Jaap van Beusekom,

Writing all program objects to disk is indeed no longer possible because of the overlap with the functionality modeler, where it still is possible to write program objects to disk.

You make a valid case, however. Therefore, we’ve added this to our backlog and strive to implement this in the 2022.1 release, which is scheduled for January. 

In the following topic, an explanation and scripts are provided on how to automate your own deployment process. Perhaps this can serve as some inspiration for writing your own script for now:

 


@Jaap van Beusekom For your information, we just finished the development of writing program objects to disk in the 2022.1 version of the Thinkwise platform. This version is expected to become available on 2022/01/31.

Wishing you a happy new year!

Jeroen


Note that these calls only queue the job and respond with a 200 - OK directly after.

@Anne Buit Is there a way to retrieve the status and result of these jobs by means of the Indicium API?


Hi Arie,

In the upcoming 2022.1 release, we’ll have these APIs return a job_id.

This id can be used to monitor the status of the specific job in the /job API.

An API will also be provided to synchronously wait for a certain job_id to finish.

For now, you could directly access the /job API and find the last queued job. We realize this is not ideal, hence the improvements in 2022.1.
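
As a rough illustration of that interim approach, the sketch below polls the /job resource for the most recently queued job. It reuses the assumptions from the helper in the blog post above, and the field names (insert_date_time, status) and status values are assumptions as well; check them against your own model before relying on this.

import time

import requests

INDICIUM = "https://myserver/indicium"  # assumption, see the helper above

def wait_for_last_job(timeout_seconds=600, poll_interval=5):
    """Poll the /job API until the most recently queued job settles."""
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        response = requests.get(
            f"{INDICIUM}/iam/sf/job",
            # Standard OData query options; the ordering column is an assumption.
            params={"$orderby": "insert_date_time desc", "$top": 1},
            auth=("my_user", "my_password"),  # assumption: basic authentication
        )
        response.raise_for_status()
        jobs = response.json()["value"]
        # Assumption: the job record exposes a status field with these values.
        if jobs and jobs[0]["status"] not in ("queued", "active"):
            return jobs[0]
        time.sleep(poll_interval)
    raise TimeoutError("job did not finish within the timeout")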


@Anne Buit A couple of more questions on automating the entire process:

  • BEFORE starting the creation process I'd assume it is important to be able to verify and set the previous_project_id and previous_project_version_id in the SF project_vrs table to match the latest sf_project and sf_project_vrs from sf_product_info of the Target Database. 
    • We were trying to update the project_vrs table through the Indicium API but got a 403 Forbidden. I noticed that the SF GUI uses a Variant called ‘project_vrs_maintenance’, but I am not sure if that plays a role. Is there a way we can update the previous_project_version through an Indicium API at the moment?
  • Related to above suggestion that previous_project_version is not always the same for Target Database deployments two other things come to mind:
    • Data conversion: I assume it would be valuable to run the squash_version_control Task as part of this process as well, how can we reach this with an Indicium API?
    • Upgrade scripts: Would a similar Task as for Squash data conversion be a solution for the risk of missing Upgrade scripts?
  • AFTER finishing the creation process we're used to creating a new Project version by using the copy_project_vrs Task in the SF.
    • We were trying to do this using an Indicium API, but that call also returned a 403 Forbidden. Is there a way we can run the copy_project_vrs Task through an Indicium API at the moment?

Note that we're just starting to investigate the automation of our deployments, if there are good reasons why these steps don't make sense as part of the entire deployment automation process, please let me know your best practice recommendations or alternative suggestions!


Hi Arie,

The fields to update are vrs_control_project_id and vrs_control_project_vrs_id. The fields you are attempting to update are indeed read-only because they are used to determine the origin of this version.

When using the squash functionality, these fields will automatically be updated. The squash functionality would indeed be invaluable when incremental version control has been configured and the test environment needs to be updated accordingly.

It is worth noting that it is also possible to simply update the test environment per project version. The squash task is limited to rather simple data conversion. For instance, manual data conversion scripts cannot be retained when squashing. This is also a solution for the topic you mentioned regarding only having a single data conversion configuration.

The task copy_project_vrs does not allow user input for the from_project_vrs and from_project_vrs_id parameters; they are read-only. This is why a 403 - Forbidden is returned. We’ll resolve this in version 2022.1.

You can run it by starting it in context of the specific project version to copy. This is also how the UI does this.

In certain scenarios, Indicium may return a 500 response when using this call. A resolution is pending QA for an upcoming release.


@Anne Buit thanks for the clarifications and fixes in Release 2022.1!

It is worth noting that it is also possible to simply update the test environment per project version.


On the topic of a DTAP street with the Thinkwise Platform, could you clarify your suggestion?

We currently have it sort of covered as follows:

  • DEV = Indicium/Universal GUI/Windows GUI running against SF 
  • TEST = DEV running against IAM: here we usually deploy Branches and the Trunk incrementally (so basically every project version)
  • ACCEPTANCE = (what we call) TEST running against IAM on a completely separate environment. We usually restore a PROD database backup first, after which we upgrade from the PROD project version to the Trunk SF version. We do this ‘dryrun’ to ensure PROD deployment quality: unexpected issues (often related to upgrade scripts and post-deployment end product configuration) are resolved and deployment instructions updated. This is currently especially important for us, since (1) the Business Team approves Releases based on the successful Deployment and Functional Test of the deployed version and (2) IT Operations performs the PROD deployments manually (not the Developer team). Once we have an approved Project version in TEST we package it for PROD deployment.
  • PROD = PROD: as you know I’m eager to get to Zero Downtime Deployment, which together with the Thinkwise ambition to make Branching and Merging in the SF so easy that Developers will create a Branch for each individual User Story, should allow us to really do Continuous Integration / Continuous Delivery / Continuous Deployment!

Any improvements/changes to this process you could recommend?

 


Judging by your description, the versioning could look something like the following:

DEV:  1.00 - 2.00 - 2.10 - 2.11 - 2.12 - 2.13 - 3.00 - 3.10

TEST: 1.00 - 2.00 - 2.10 - 2.11 - 2.12 - 2.13 - 3.00

ACC:  1.00         -       2.11

PRD:  1.00         -       2.11

Every new version in DEV is eventually pushed to TEST.

The ACCEPTANCE and PRODUCTION environments are upgraded in larger increments. (e.g. 1.00 to 2.11).

Correct?

The version control for those larger increments is done by reconfiguring the version control for the to-be-released version. For instance, reconfiguring version control of version 2.11 to point to 1.00 instead of 2.10.

The squash task can help with this but as I mentioned earlier, it might not be completely sufficient. Manual intervention and verification could be required to ensure we didn’t miss a manual data conversion script, for instance between version 2.00 and 2.10.

The alternative is to archive the upgrades and run a cumulative update on the ACCEPTANCE and PRODUCTION environments, performing the upgrade 1.00 to 2.00, 2.00 to 2.10 and 2.10 to 2.11. 

Naturally, only the model of version 2.11 would need to be synchronized to IAM, synchronizing the intermediate versions does not serve much of a function unless there are very exotic post_sync scripts.


@Anne Buit You've interpreted my explanation correctly, but let me add two nuances:

The ACCEPTANCE and PRODUCTION environments are upgraded in larger increments. (e.g. 1.00 to 2.11).

Correct?

In reality the ACCEPTANCE is upgraded a bit more frequently, but indeed always stemming from the same PRODUCTION version. Additionally, for larger Releases we also deploy Branches to ACCEPTANCE, also stemming from the same PRODUCTION version.

Especially once the whole creation can be automated, as a first next step, I expect to Plan/Schedule a deployment to ACCEPTANCE (at least) once a week, while for PRODUCTION deployments I aim for about once a month (at the moment).

So the slightly adjusted picture could look like:

DEV BRANCH:  1.00 - 1.01 - 1.02

DEV TRUNK:  1.00 - 2.00 - 2.10 - 2.11 - 2.12 - 2.13 - 3.00 - 3.10

TEST: 1.00 - 2.00 - 2.10 - 2.11 - 2.12 - 2.13 - 3.00

ACC:  1.00         -       2.00

ACC:  1.00         -       2.10

ACC:  1.00         -       2.11

PRD:  1.00         -       2.11

 

Now if we also take the Merge action into consideration, I would like to build the following Deployment pipelines (happy flow):

  1. MERGE Branch to Trunk DEV/TEST
    • Backup database
    • Previous version check/update (depending on Target database sf_product_info)
    • Execute merge session task
    • Full creation task
    • Sync to IAM task (including similar Post synchronization code as mentioned here)
    • Copy project version task
    • Generate definition task (in new Project version)
  2. UPGRADE Trunk/Branch DEV/TEST:
    • Backup database
    • Previous version check/update (depending on Target database sf_product_info)
    • Full creation task
    • Sync to IAM task (including similar Post synchronization code as mentioned here)
    • Copy project version task
    • Generate definition task (in new Project version)
  3. UPGRADE Trunk/Branch ACCEPTANCE:
    • Backup PRODUCTION database
    • Rename ACCEPTANCE database to __database__
    • Restore PRODUCTION database on ACCEPTANCE
    • Previous version check/update (depending on Target database sf_product_info)
    • Squash data conversion task
    • Full creation task
    • Sync to IAM task (including similar Post synchronization code as mentioned here)
    • Delete __database__ from ACCEPTANCE
    • Copy project version task
    • Generate definition task (in new Project version)
  4. UPGRADE Trunk/Branch PRODUCTION:
    • Backup PRODUCTION database
    • Rename PRODUCTION database to __database__
    • Previous version check/update (depending on Target database sf_product_info)
    • Full creation task
    • Sync to IAM task (including similar Post synchronization code as mentioned here)
    • Delete __database__ from PRODUCTION
      EDIT: I am considering the creation of a Deployment Package right after an ACCEPTANCE version has been approved as an alternative to the Previous version check/update, Full creation task and Sync to IAM task. This better ensures a Project version is ‘frozen’ and might fit better with our current process where planned PRODUCTION deployments are performed by IT Operations. For some reason we have never used the Thinkwise Deployment Center so far; we currently create a Deployment Package and run all SQL scripts manually in Management Studio...

By the way: In the near future I hope we'll also be able to do Process tests of the Universal GUI (registered either outside or inside the SF), so I hope the automatic testing work mentioned in the latest Universal GUI Release Notes will result in a solution that can also be used by Customers!?

Other than that, and assuming that for now we'll do with the workaround for upgrade scripts, does this make sense and/or am I missing steps?

 

The alternative is to archive the upgrades and run a cumulative update on the ACCEPTANCE and PRODUCTION environments, performing the upgrade 1.00 to 2.00, 2.00 to 2.10 and 2.10 to 2.11. 

Sounds like it would make deployments take even longer and having more moving parts involved. Unless you can convince me otherwise, I don't feel very comfortable about this approach.

 

Some other open questions I haven't been able to figure out yet:

  • Same as the Merge task, can we run the Create Branch task using Indicium API?
  • Regarding the upgrade scripts workaround, will an ‘always’ assigned upgrade script also remain assigned after a Merge to the Trunk?
  • Smart vs Full upgrade: we feel it is rather unclear when a Smart upgrade is not the smart thing to do. For example, does the Note under Smart upgrade in the Docs mean to say that the Previous version first has to be Generated BEFORE running the Creation of the Current version?
  • EDIT: And how can we automate/script/call the SQL Backup and Restore task on AWS RDS for MS SQL? EDIT 2: we actually got it working by encapsulating the task into a Stored Procedure!

From the Thinkwise Platform release 2021.3 Release Notes:

  • All jobs are no longer performed by a client (the GUI) but instead by Indicium. This shift means that generation, validation, etc., will continue even if you close the Software Factory GUI.

Although it might sound like stating the obvious: this also means that from Platform Release 2021.3 onwards you need to make sure that the Indicium used for the Software Factory is able to reach the Target Database for Execute Source code (instead of the SF GUI).

  • The current Creation screen doesn't provide proper feedback when the Connection fails, and the Job keeps hanging on status ‘Active’, causing new Jobs to be Queued.
  • Even worse, as part of the full Creation task (using Smart upgrade), the Execute Source code job is marked as ‘Successful’.
  • As part of the full Creation task (using Full upgrade), the Execute Source code always selects create instead of upgrade; it does throw an error at the first db step, though.

@Anne Buit Since Indicium is properly logging the errors, I would expect that feedback in the Creation screens and the Jobs log as well. It took us a while to realize that our Indicium SF couldn't reach our Test database...


Hi Arie,

The way that developers are informed about an unreachable runtime configuration when executing source code can definitely be improved. I’d consider this a bug - can you log a ticket for this so we can resolve this via the proper channels?

There are a lot of questions in the previous reply but I’ll try to answer them.

A branch can be created via the Indicium API. 

An always assigned manual upgrade script will be kept intact as it is copied to new versions and remains assigned. However, you’ll need to make sure it works properly for every previous version in the code itself. 

The smart upgrade bases itself off of the archive of a project version. If you don’t mess around with the project version after you create the subsequent version, the archive should always be intact and should be a good base for the smart upgrade. If the project version has been modified after being copied and hasn’t been generated properly since, the archive may be out-of-date and a smart upgrade will not be possible. Hence the comment in the docs.

Good to hear the back-up and restore has been automated!

The ids placed in the DOM for test automation should be usable for anyone. We are still reviewing how to approach the process tests for the new generation of UIs. The way this was done for the Windows GUI wasn’t ideal, as this UI was testing itself. An auxiliary, autonomous agent is more reliable and suitable for automation. But this is a debate for a different topic.


@Anne Buit Thanks again for the answers! TCP 2771 and 2772 raised for fixing/improving the Creation functionality.


@Anne Buit Hi Anne,

I’m trying to create a branch like you described in your post however I can’t seem to get this to work because I keep getting a 500 internal server error.

As a test, I’ve tried to generate a definition, also using the Indicium API, to see if this also causes problems; however, this works fine.

Do I need to add other parameters to the JSON body or am I doing something else wrong? 

After looking in the indicium log I found the following error: 

The file path 'E:\Thinkwise\DEPLOYMENT\Branches\' for storage configuration 'project_folder' is outside of the allowed base path '\\everest\Departments\Product Innovation\Applications\SQLSERVER_SF\2021.3\'.

 

 


Harry, apologies. My sample code uses Software Factory version 2022.1 which is set to be released on January 31.

To get this to work for 2021.3 and earlier, we’ll have to release an update for Indicium first. This is planned for February 14th.

With the updated version of Indicium, you will be able to create a branch via the API using Software Factory 2021.3 and earlier as follows:

In essence, this starts the branch creation from the project version instead of directly.


 

Hi Arie,

In the upcoming 2022.1 release, we’ll have these APIs return a job_id.

This id can be used to monitor the status of the specific job in the /job API.

An API will also be provided to synchronously wait for a certain job_id to finish.

 

@Anne Buit Hi Anne, 

After upgrading the platform to 2022.1 I am still only receiving a “@odata.context” in the body/content of my calls, not the job_id you mention. Was this not yet implemented or am I doing something wrong in the API call? 


Hi Harry, I’ve tested this on several 2022.1 Software Factories and they all return the job_id in good order. 

There is nothing in the call you can do to prevent the job_id from being returned, short of providing a project version that does not exist. But this would result in a 403 - Forbidden.

If the generation job is starting properly on the Software Factory, the only thing I can imagine is that Indicium is still using a 2021.3 Software Factory model somehow. Ensure your Indicium is up-to-date, pointing to the correct IAM and has been rebooted after the platform upgrade if it was already up-to-date. 


@Anne Buit We're trying to get the Creation automation working as described in one of my earlier posts and now encounter an interesting hiccup for the step below:

  • Previous version check/update (depending on Target database sf_product_info)

We cannot reach the sf_product_info table of the target database through an Indicium call (https://<server>/indicium/iam/<application_id>/sf_product_info), due to the fact that we have no way to assign permissions to this particular table for an SF Role and IAM user/user group. I would expect a Main administrator and/or the Pool User should at least be able to reach this table. How can this be resolved?

EDIT: as a workaround, we can of course make a view to the table and assign rights to the view.


Hi Arie, the sf_product_info table is not a formal model table, hence no API is available for it. The workaround seems to be the best way to go about this, yes.


@Anne Buit hi Anne, we’re currently creating a project that makes it possible to upgrade other projects. However, we’re currently wondering how it’s possible to pass credentials for different runtime configurations.

I’ve tried to reverse engineer this by looking at how the SF handles this but to no avail. 

For example, if we try to execute source code in the SF, we can choose a runtime configuration to deploy to (e.g. ‘test’). Based on the selected runtime configuration, a host and database are defaulted in the popup, but we don’t understand how the SF is able to access the host of the selected runtime configuration without needing a password.


Hi Harry,

Indicium always uses the configured database service account to access the server and database, which is either the application pool or a preconfigured account in the appsettings.json. Currently, it is not possible to connect to other runtime configurations or applications using different credentials.

The Application Connector is used by the Software Factory to execute queries on these runtime applications via the database service account.


@Anne Buit This Full creation API does something weird. If we try to run this with some steps set to false (see the example below), a 403 Forbidden is returned. Instead we need to remove the parameters that are set to false, and then it works. This does not make sense to me.

POST [indicium]/iam/sf/add_job_to_do_complete_creation

{
"project_id": "MY_PROJECT",
"project_vrs_id": "1.12",
"execute_complete_creation": 0,
"generate_definition": true,
"validate_definition": true,
"generate_source_code": true,
"execute_source_code": false,
"execute_unit_tests": false,
"generate_definition_error_handling": 1,
"validation_result_creation": 3,
"upgrade_method": 0,
"runtime_configuration_id": "default",
"execute_source_code_error_handling": 1
}

The execute_complete_creation setting should be set to 0 if manual selection of individual steps is desired. When set to 1, all steps will be executed.

EDIT: We seem to have figured out the cause, the following combination is not allowed. This seems a bug to me, so we'll raise a TCP.

    "generate_source_code": true,
    "execute_source_code": false,

EDIT 2: To be clear: removing the parameters is not a good idea, the Full Creation task interprets these missing parameters as true and will actually also execute these jobs.


There are some important things to note about the API:

  • Parameters are processed in the order they are provided in the JSON body
  • Parameter access may be re-evaluated directly after a parameter is processed
  • Read-only parameters may not be provided in the JSON body

In this case, the execute_complete_creation is set to 0, opening up all other parameters. But when execute_source_code is set to false, the parameters execute_unit_tests, runtime_configuration_id and execute_source_code_error_handling are no longer accessible.

Ensure these parameters are omitted or placed earlier in the request body. The parameter execute_unit_tests will automatically be set to false when execute_source_code is set to false.
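
To make this concrete, a request body that follows these rules could look as below. This is a sketch using the queue_job helper from the blog post above; Python dicts keep insertion order, so the parameters are serialized in the order shown.

queue_job("add_job_to_do_complete_creation", {
    "project_id": "MY_PROJECT",
    "project_vrs_id": "1.12",
    "execute_complete_creation": 0,
    "generate_definition": True,
    "validate_definition": True,
    "generate_source_code": True,
    "generate_definition_error_handling": 1,
    "validation_result_creation": 3,
    "upgrade_method": 0,
    # Setting execute_source_code to false makes execute_unit_tests,
    # runtime_configuration_id and execute_source_code_error_handling
    # inaccessible, so they are omitted rather than sent as false.
    "execute_source_code": False,
})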

It is a bit cumbersome that read-only parameters also cause a 403 when the provided value in the JSON body matches the actual current value. This could be more lenient. We will re-evaluate this behavior in Indicium.


@Anne Buit Hi Anne, 

We recently made the switch to 2023.1, however I’ve failed to find any documentation on the changes regarding the API calls mentioned in this post. I imagine some things got changed due to extra parameters we’ve got in the SF but I can't find a (comprehensive) list of these changes. 

So my questions:

  1. Do all API calls still work as mentioned in the blogpost?
  2. Did any extra parameters become available for these calls?
  3. (a bit unrelated but would be very nice to have) Can we now use branching and merging via API calls? 

 


Hi Harry,

The API calls have been updated to reflect changes to the model and branch naming conventions.

Up to date information can be found in the documentation.

Just from looking at the docs: wait_for_job now has a show_msg parameter, and there is a newly documented API call to cancel all jobs, conveniently named cancel_all_jobs.

Branching and merging can technically be done via API calls, as every feature in the Software Factory is available via API calls. But they are not documented, optimized, or earmarked to remain stable, so relying on them in CI/CD carries a risk of impact on upgrades.

