Solved

Code coverage of unit tests

  • 9 March 2020
  • 3 replies
  • 132 views

Userlevel 5
Badge +15

I'm trying to find out how to monitor and deal with 'test coverage' of unit tests in the SF.

Let's say I have the subject "Project" with a number, a description, a start date and an end date. I have applied a constraint stating that the start date must be equal to or before the end date. To prevent the user from getting an error message that makes no sense (*1, see image), I've added a default concept telling the user that what he's trying to do is not possible.
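For reference, a minimal sketch of what such a constraint could look like on the database, assuming it is realised as a plain CHECK constraint (the actual DDL generated by the SF may differ):

alter table project
  add constraint ck_project_start_date_before_end_date
      check (start_date <= end_date) -- passes when either date is still empty (null)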

For this default a unit test can be added, and unit tests can also be created for other concepts (layouts, tasks, etc.). I want to measure whether all SQL code (logic) is tested properly. When you generate the code for the situation above, you'll get this:

create procedure def_project
(
  @default_mode       tinyint,     -- 0 = add, 1 = update
  @import_mode        tinyint,     -- 0 = regular, 1 = import row, 3 = import
  @cursor_from_col_id varchar(255), -- column that triggered the default
  @cursor_to_col_id   varchar(255) output, -- cursor in column after leaving the default

  @project_id  id           output,
  @project_no  project_no   output,
  @description project_name output,
  @start_date  start_date   output,
  @end_date    end_date     output
)

as
begin

-- Do not count affected rows for performance
SET NOCOUNT ON;


  --control_proc_id:     dft_project
  --template_id:         dft_project
  --prog_object_item_id: dft_project
  --template_desc:      
   
  if @cursor_from_col_id in ('start_date', 'end_date')
      and
      @start_date > @end_date -- if one of the 2 values is null, this check is skipped.
  begin
   
    exec dbo.tsf_send_message 'dft_project_start_date_end_date_invalid', null, 1
   
  end
   

end

This is a lot of code and a lot of parameters (*2) that aren't used at all. So the first step is removing all the parameters that are not used in the default concept. Currently only start_date and end_date are used as input parameters.

Removing the unused parameters reduces it to the following header:

create procedure def_project
(
  @default_mode       tinyint,     -- 0 = add, 1 = update
  @import_mode        tinyint,     -- 0 = regular, 1 = import row, 3 = import
  @cursor_from_col_id varchar(255), -- column that triggered the default
  @cursor_to_col_id   varchar(255) output, -- cursor in column after leaving the default

  @start_date start_date,
  @end_date   end_date
)
...

That's still a sizeable header, with 3 parameters that are not used in this case but cannot be removed.

The thing here is that I can write a unit test, but that doesn't say anything about the quality of the unit test. So I'm looking for some sort of metric to decide whether the unit test is 'good', but I don't know what's already possible in the SF.

In this example I have 2 actually used input parameters that can have different values (start_date, end_date), and one template that does something with them. For this example I can work out that the following situations can occur:

  • Both start and end date empty

  • Start date empty and end date not empty

  • Start date not empty and end date empty

  • Both not empty, start date > end date

  • Both not empty, start date = end date

  • Both not empty, start date < end date

This would result in at least 7 unit tests, but the other 4 parameters can affect this unit test as well, and the value of cursor_from_col_id really does in this case (so the number can be multiplied by 2). The behavior of default_mode, import_mode and cursor_to_col_id is undefined in this case, so they should not be taken into account when calculating the metrics. A sketch of what a couple of these test calls could look like follows below.
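To make this concrete, here is a minimal sketch of how two of these scenarios could be exercised by hand against the generated def_project procedure. It assumes the procedure and the domains from its header (id, project_no, project_name, start_date, end_date) exist as types in a test database; it only illustrates the scenarios and is not how unit tests are defined in the SF itself.

-- Minimal sketch, not an SF unit test definition: manually exercising def_project
-- for two of the scenarios listed above.
declare @cursor_to_col_id varchar(255),
        @project_id       id,
        @project_no       project_no,
        @description      project_name,
        @start_date       start_date,
        @end_date         end_date

-- Scenario: both dates filled, start date > end date, triggered from end_date.
-- Expected: message 'dft_project_start_date_end_date_invalid' is sent.
select @start_date = '2020-03-10', @end_date = '2020-03-09'

exec dbo.def_project
     @default_mode       = 1,              -- update
     @import_mode        = 0,              -- regular
     @cursor_from_col_id = 'end_date',
     @cursor_to_col_id   = @cursor_to_col_id output,
     @project_id         = @project_id    output,
     @project_no         = @project_no    output,
     @description        = @description   output,
     @start_date         = @start_date    output,
     @end_date           = @end_date      output

-- Scenario: start date empty, end date filled, triggered from start_date.
-- Expected: no message, because the comparison is skipped when one value is null.
select @start_date = null, @end_date = '2020-03-09'

exec dbo.def_project
     @default_mode       = 1,
     @import_mode        = 0,
     @cursor_from_col_id = 'start_date',
     @cursor_to_col_id   = @cursor_to_col_id output,
     @project_id         = @project_id    output,
     @project_no         = @project_no    output,
     @description        = @description   output,
     @start_date         = @start_date    output,
     @end_date           = @end_date      output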

So, the question is: how do I know, using the SF, whether my unit tests are OK and cover the programmed functionality? I notice it's mentioned here: https://community.thinkwisesoftware.com/updates-and-releases-31/release-plan-2020-1-746#unit-testing

But what are the plans on that?

In "Analysis" there is a subject called "Test coverage", but this only applies to process tests.

 

*1: The message shown when only the constraint (start_date <= end_date) is in place:

*2: This is, by the way, something that annoys me: all new columns are by default added to the parameter lists of defaults, layouts, contexts, etc., where they are probably not used and most of the time degrade performance. Maybe this should not be done by default, or there should be a validation with a warning indicator for input/output parameters that are added but not necessary (if such a validation doesn't exist yet).


Best answer by Anne Buit 20 March 2020, 13:54


3 replies

Userlevel 1
Badge

The plans for unit test analysis currently consist of two parts. We will add a cube in 2020.2 to provide insight into unit test statistics. This records how many times a unit test has been run and how often it succeeded. We also want to take the average duration of the unit test into account.

The second cube that we want to introduce is about the coverage of the unit tests: to what extent the control procedures are covered by a unit test, and how many unit tests are active for a program object.

Knowing whether your unit tests cover the programmed functionality is something you define in a test plan. We also have plans to add test plans to the Software Factory; in such a test plan you can describe the possible test scenarios that must be covered by a unit test.

A validation to identify unused parameters that degrade performance is a great idea.

Userlevel 5
Badge +15

Thanks for sharing the plans. I'm also curious about automating things, comparable to traditional software suites that run automated tests after committing source code or after opening a pull request, for example.

I'm studying how to automate some things to speed up the development process. A lot of steps should or could be automated after certain trigger points, such as running validations, unit tests and process tests in a 'build' environment, because repeating those steps manually each time is time-consuming.

For comparison, you're probably familiar with systems such as Atlassian Bamboo, Jenkins, TeamCity, etc., all created to automate repeatable steps, warn only when something is wrong, and create a build of the product and/or even deploy it.

Userlevel 7
Badge +5

Hi René,

Running these tests automatically in the background is definitely something that we are looking into.

You can expect a future release in which the unit tests are queued and executed automatically in the background, for instance on code changes or after generation. Any test built now will leverage this functionality. The same goes for validations.

Automatic builds and deployments are already possible using tooling such as Jenkins, but it takes some knowledge of the Software Factory to set this up by calling the right procedures in sequence.

We are planning to encapsulate this entire process in a queueable flow that can run in the background as well.

I'm curious to hear the results of your research; feel free to submit ideas in the ideation section so we can look into incorporating this in future releases.

Reply