I'm trying to find out how to monitor 'test coverage' of unit tests in the SF.
Let's say I have the subject "Project" with a number, a description, a start date and an end date. I have applied a constraint that the start date must be equal to or before the end date. To prevent the user from getting an error message that makes no sense (*1, see image), I've added a default concept telling the user that what he's doing is not possible.
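For reference, such a constraint could look something like this (a minimal sketch; the table and constraint names are my own, in the SF this is generated from the model):

-- Sketch of the constraint described above (assumed names).
-- Note that a SQL Server check constraint passes automatically
-- when either date is NULL.
alter table project
add constraint ck_project_start_before_end check (start_date <= end_date)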
For this default a unit test can be added, and unit tests can also be created for other concepts (layouts, tasks, etc.). I want to measure whether all SQL code (logic) is tested properly. When you generate the unit test for the situation above, this is what you get:
create procedure def_project
(
@default_mode tinyint, -- 0 = add, 1 = update
@import_mode tinyint, -- 0 = regular, 1 = import row, 3 = import
@cursor_from_col_id varchar(255), -- column that triggered the default
@cursor_to_col_id varchar(255) output, -- cursor in column after leaving the default
@project_id id output,
@project_no project_no output,
@description project_name output,
@start_date start_date output,
@end_date end_date output
)
as
begin
-- Do not count affected rows for performance
SET NOCOUNT ON;
--control_proc_id: dft_project
--template_id: dft_project
--prog_object_item_id: dft_project
--template_desc:
if @cursor_from_col_id in ('start_date', 'end_date')
   and @start_date > @end_date -- if one of the two values is null, the comparison is unknown and this is skipped
begin
exec dbo.tsf_send_message 'dft_project_start_date_end_date_invalid', null, 1
end
end
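Just to make this concrete: a single manual test call for the invalid case would look roughly like this (a sketch only; the date values are made up, and in the SF the unit test is of course defined in the model rather than as a hand-written script):

-- Manual invocation of the default for the invalid case (start > end).
-- Expected: tsf_send_message raises 'dft_project_start_date_end_date_invalid'.
declare @cursor_to_col_id varchar(255),
        @project_id id,
        @project_no project_no,
        @description project_name,
        @start_date start_date = '2020-06-01', -- hypothetical values
        @end_date end_date = '2020-01-01'

exec dbo.def_project
    @default_mode = 1,  -- update
    @import_mode = 0,   -- regular
    @cursor_from_col_id = 'end_date',
    @cursor_to_col_id = @cursor_to_col_id output,
    @project_id = @project_id output,
    @project_no = @project_no output,
    @description = @description output,
    @start_date = @start_date output,
    @end_date = @end_date output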
That generated default is a lot of code with a lot of parameters (*2) that aren't used at all. So the first step is removing all the parameters that are not used in the default concept; currently only start_date and end_date are used, as input parameters.
Removing the unused parameters reduces it to the following header:
create procedure def_project
(
@default_mode tinyint, -- 0 = add, 1 = update
@import_mode tinyint, -- 0 = regular, 1 = import row, 3 = import
@cursor_from_col_id varchar(255), -- column that triggered the default
@cursor_to_col_id varchar(255) output, -- cursor in column after leaving the default
@start_date start_date,
@end_date end_date
)
...
That still leaves a fairly long header with 3 parameters that are not used in this case, but those cannot be removed.
The thing is that I can write a unit test, but that says nothing about the quality of the unit test. So I'm looking for some sort of metric to decide whether a unit test is 'good', but I don't know what's already possible in the SF.
In this example there are 2 actually used input parameters that can take different values (start_date, end_date), and there's one template that does something with them. For this example I can work out that the following situations exist:
- Both start and end date empty
- Start date empty and end date not empty
- Start date not empty and end date empty
- Both not empty, start date > end date
- Both not empty, start date = end date
- Both not empty, start date < end date
This would result in at least 7 unit tests, but the 4 remaining parameters can affect this unit test as well, and in this case the value of cursor_from_col_id really does (so the number can be multiplied by 2). The behavior for default_mode, import_mode and cursor_to_col_id is undefined in this case, so they should not be taken into account when calculating the metric; see the sketch below.
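Laid out as data, that test matrix could look something like this (a sketch only; the @cases table and the expect_message flag are my own illustration, not an SF feature, and I use plain date instead of the SF domain types for brevity):

-- Hypothetical test matrix for def_project; every row is one unit test.
-- expect_message = 1 means tsf_send_message should fire.
declare @cases table
(
    start_date         date,
    end_date           date,
    cursor_from_col_id varchar(255),
    expect_message     bit
)

insert into @cases (start_date, end_date, cursor_from_col_id, expect_message)
values (null,         null,         'start_date',  0),
       (null,         '2020-01-31', 'start_date',  0),
       ('2020-01-01', null,         'start_date',  0),
       ('2020-02-01', '2020-01-01', 'start_date',  1), -- start > end
       ('2020-01-01', '2020-01-01', 'start_date',  0), -- start = end
       ('2020-01-01', '2020-02-01', 'start_date',  0), -- start < end
       -- ...the same six rows again with 'end_date', plus at least one
       -- row with an unrelated column, which must never trigger:
       ('2020-02-01', '2020-01-01', 'description', 0)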
So the question is: how do I know, using the SF, whether my unit tests are OK and cover the programmed functionality? I notice it's mentioned here: https://community.thinkwisesoftware.com/updates-and-releases-31/release-plan-2020-1-746#unit-testing
But what are the plans on that?
In "Analysis" there is a subject called "Test coverage", but this only applies to process tests.
*1: The message shown when only the constraint (start_date <= end_date) is in place:
*2: By the way, this is something that annoys me: all new columns are by default added to the parameter lists of defaults, layouts, contexts, etc., where they are probably not used and most of the time degrade performance. Maybe this should not be done by default, or there should be a validation with a warning that unnecessary input/output parameters have been added (if such a validation does not yet exist).