Blog

Quality Assurance in Product Innovation

11 January 2023

By Evelien Duizer and Marjolein van der Vegt-Verstraten

 

Let us start with a question: Do you know what software testing means, and what a software tester or a software test engineer is up to all day? Let’s be honest, most people, whether they work in IT or not, would answer this question with a resounding ‘no’. But isn’t that a bit strange? For most professions, the average person has a pretty good idea of what the job includes. A teacher educates students, a police officer enforces the law, and a doctor treats diseases. People can even hazard a good guess about what software developers do. They create websites, applications, and video games.

So what does a tester do? First and foremost, a tester is responsible for improving and safeguarding the quality of a system. This involves a lot more than people think. A tester has to find out how functionality should work, analyze the differences between the actual and the desired situation, create test plans and scenarios, analyze risks, report test results, come up with improvement ideas, exchange knowledge with designers and developers, write instructions, monitor the release process, and create automated tests, which also requires some skill in a programming language. All of this is done to ensure that the quality of a system, in our case the Thinkwise platform, is as high as possible. But how do you quantify such an elusive concept as quality? And what does that look like within Product Innovation? In this blog, we want to illustrate this with some practical examples.

 

First of all, what is quality?

 

“Quality is the degree to which a set of inherent characteristics of an object fulfills requirements” (ISO)

 

A great deal has been written about quality in general. ISO defines several aspects of quality, called quality attributes. Some of these quality attributes are:

  • Functionality, the degree of certainty that the system processes the information accurately and completely.
  • Security, the certainty that data can only be viewed or modified by persons who are authorized to do so.
  • Performance, the speed at which client requests are handled.

Within Product Innovation, we naturally include all these attributes in our development process. We work by the principle of “security by design”, and we continually monitor the performance of our applications. As testers, however, our main focus is on the attribute of functionality: does the software meet the requirements regarding accuracy, completeness, and consistency?

An additional level of difficulty for our role in Product Innovation is that we do not deliver a standard piece of software. We deliver a platform to create your own software, and that platform is used in many different ways. Our platform consists of multiple components, and for each of these components, a different testing strategy needs to be applied:

  • We test the Metamodel, or in other words, the Software Factory and IAM. The advantage we have here is the fact that we use our own software to create our own software. This means that we are constantly using the most recent version of the platform. Therefore, we are able to continuously test this software. Furthermore, we use the unit test model in the Software Factory to test our code.
  • We test our Application Tier (Indicium). This is mainly covered with automated (unit) tests.
  • We test our user interfaces: the Windows and Web GUI, and the Universal GUI. Next to unit testing by developers, the GUIs are tested partly automatically and partly manually.

What exactly is tested differs per component. You can probably imagine what we test for the Metamodel: a new feature is added to the platform, and the tester checks whether this feature functions properly, including all variations and exceptions. So, let’s zoom in on two components that are a bit more abstract when it comes to testing: Indicium and the Universal GUI. Before we continue, let’s do a quick refresher course on Thinkwise theory.

 

The Thinkwise platform 101

 

“The Thinkwise Indicium Application Tier is the central integration hub of the Thinkwise Platform. It provides secure access to the data, processes, and business logic of Thinkwise applications through an open REST API.”

 

Indicium interprets every application to provide an API. This API offers several endpoints, through which other applications can communicate with your application. For every table and every piece of functionality within your application, a separate endpoint is generated. This way, applications can access the information in your application model and add, update, and delete data, or even execute tasks and generate reports, defaults, and layouts. Other applications communicate with your application by sending a request to the appropriate endpoint. The API returns a response with a status code that describes whether the request was successful. Depending on the request type, it then returns the information that was requested.
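To make this concrete, here is a minimal sketch of such a request from an arbitrary HTTP client, written in TypeScript. The base URL, the ‘customer’ endpoint, and the credentials are assumptions for illustration only; consult the Indicium documentation for the exact API.

```typescript
// Hypothetical example of calling an Indicium endpoint; the URL, the
// 'customer' endpoint, and the credentials are illustrative assumptions.
const baseUrl = 'https://myserver/indicium/iam/my_crm_app';

async function getCustomers(): Promise<void> {
  const response = await fetch(`${baseUrl}/customer`, {
    headers: {
      // Authentication information is included in every request.
      Authorization: 'Basic ' + btoa('username:password'),
      Accept: 'application/json',
    },
  });

  // The status code describes whether the request was successful.
  console.log(response.status); // e.g. 200

  // For a GET request, the response body contains the requested information.
  const customers = await response.json();
  console.log(customers);
}
```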

To give an example: suppose you have an application with customer information, and you want to open a screen that should show all customers in Apeldoorn. Indicium will be called with a request; it will set the filter and return the result in a response. It is up to the user interface to correctly show the result. In other words, Indicium takes care of the data, but has no front end itself. To cover this, Thinkwise developed the Universal GUI.

 

“The Thinkwise Universal user interface is the latest Thinkwise interface, providing an ultra-modern user experience on mobile devices, desktops, and the web. It has been designed, along with the Indicium service tier, with an emphasis on performance, security, and user experience.”

 

In summary: the Universal GUI retrieves its information from Indicium, and Indicium makes sure the GUI gets all the right information and handles the logic that is set up in the model. From a testing perspective, this requires two different approaches. For Indicium, we must check whether the functionality is handled correctly and whether the right data is returned. For Universal, we must check whether the data is correctly displayed on the screen. The execution of each test is therefore different. Let’s illustrate this with an example: adding a new customer to a CRM application.

 

Testing the Universal GUI and Indicium

 

If we want to test this scenario in the Universal GUI, we first need to open the browser, open the application, log in, and navigate to the correct screen. Once we arrive at the screen, we add a customer by clicking the ‘add’ button, enter some values for the record, save, and check whether the new customer is visible in the grid with all the correct values. For this specific scenario, you could come up with dozens, maybe hundreds, of variations. What happens if you enter an invalid value? What if a default or a layout procedure is involved? What if you don’t have the correct rights to add a customer to the application?

To make sure we don’t get bogged down in all the possibilities, it is very important to scope our test. The most important question here is: what are we testing? Is the goal to validate our input, or to test the setup of the user rights? It is also important to work in a structured way when setting up a test case. Test cases should be written in such a way that everyone can understand and run them. For Universal, we use a standardized way of writing these test plans:

 

Figure 1. A test plan for Universal

 

In this test plan, you might notice that a few different tests are already being combined. For example, we check whether the field control is working properly (in this case, it consists of a text control and a lookup control). In practice, we have written extensive tests for every separate control type, covering all variations. Other test cases cover, for example, the default edit mode.

For Indicium, a test to add a new customer works completely differently. Because Indicium doesn’t have a front end, you’ll need a testing tool like Postman or Insomnia to test it. With such a tool, you can send one or more requests and check whether the response that Indicium returns is correct. The advantage of not having a UI is that we do not need to start a browser, log in, and open the correct screen. Instead, we can start our test directly in the right place and include the authentication information in every request. If we look at the example of adding a customer, our test will consist of a series of GET, POST, and PATCH requests, which create a staged resource for a new customer, change this staged resource, and commit it to save the definitive changes. In the end, the newly added customer should be returned. At every step, we test whether Indicium gives us the correct response with the correct status code, and whether the body and/or headers of the response include the correct information.
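Sketched in TypeScript, such a request series could look roughly like this. The endpoint paths, payload, and response shape are assumptions for illustration, not the exact Indicium API:

```typescript
// Hypothetical sketch of the staged-resource flow described above; the
// endpoint paths, payload, and response shape are illustrative assumptions.
const base = 'https://myserver/indicium/iam/my_crm_app';
const headers = {
  Authorization: 'Basic ' + btoa('username:password'), // authentication in every request
  'Content-Type': 'application/json',
};

async function addCustomer(): Promise<void> {
  // 1. POST: create a staged resource for a new customer (assumed endpoint).
  let response = await fetch(`${base}/customer/add`, { method: 'POST', headers });
  console.log(response.status); // we check the status code at every step
  const staged = await response.json(); // assumed: contains the staged resource URI

  // 2. PATCH: change the staged resource with the new customer's values.
  response = await fetch(staged.uri, {
    method: 'PATCH',
    headers,
    body: JSON.stringify({ name: 'New Customer', city: 'Apeldoorn' }),
  });
  console.log(response.status);

  // 3. POST: commit the staged resource to save the definitive changes.
  response = await fetch(`${staged.uri}/commit`, { method: 'POST', headers });

  // In the end, the newly added customer should be returned.
  console.log(response.status, await response.json());
}
```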

 

Figure 2. Testing requests in Indicium

Automated testing

 

As you can see in our example, testing a simple process like adding a customer to a customer application requires many steps. You can walk through all of these steps manually. For Universal, this means clicking through the application and checking at every step whether the result is correct. For Indicium, you’ll manually send all the requests and check whether the responses contain the correct information. This not only takes a long time, but it is also very error-prone. You’d rather let a computer do this for you. A computer can go through all these steps very fast, without typos, ‘misclicks’, and distractions, and show the result in an organized way. That is why we are currently investing a lot of time in automating our tests.

In addition to saving time (and money) and reducing the sensitivity to errors, test automation has some further advantages. Automated tests can be incorporated into our continuous integration flow, so the execution of the entire test set is triggered every night or with every code change. If a modification of the application has led to an error, this will be visible immediately. As a result, we can handle our testing work more efficiently by focusing on finding edge cases and cases that are unsuitable for automated testing.

When manually testing a user interface, it is hard to spot visual changes of only a few pixels. You can create screenshots of two environments and compare them by eye, but this takes a lot of time and effort. An automated testing tool can easily capture multiple screenshots, compare them to previously stored baseline images, and automatically report any visually significant differences. Finally, when the tests are run automatically in different browsers and on different devices, and on complex applications, it gets a lot easier to spot bugs before a release. And it is well known that a bug detected early in the process costs less time and money to fix than a bug found by a customer in production.
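As an illustration, this is roughly how such a visual comparison can be set up in a Cypress test, here using the third-party cypress-image-snapshot plugin as an example; the route and test ID are hypothetical, and we are not claiming this is the exact tooling we use internally:

```typescript
// Illustrative visual regression test using the third-party
// cypress-image-snapshot plugin (one of several options); the route and
// test ID are hypothetical. The plugin's command must be registered in the
// Cypress support file via addMatchImageSnapshotCommand().
describe('Customer screen - visual regression', () => {
  it('matches the stored baseline image', () => {
    cy.visit('/customers');

    // Capture a screenshot of the grid and compare it to the previously
    // stored baseline image; the test fails when the difference is too big.
    cy.get('[data-testid="customer-grid"]').matchImageSnapshot('customer-grid', {
      failureThreshold: 0.01, // allow up to 1% pixel difference
      failureThresholdType: 'percent',
    });
  });
});
```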

For Indicium, we automate our tests using the tool xUnit.net, a testing framework specifically for .NET applications. These tests are written in C#. Our test example for Indicium looks like this:

 

Figure 3. Automated test Indicium

In this example, we’ve written out all the separate steps, so it is easier to read what is actually going on in the test. Normally, we would use generic, reusable methods.

For the Universal GUI, we use another tool, Cypress.io, which focuses specifically on front-end applications. There are more tools for this, for example Selenium or Robot Framework, but they basically follow the same logic. Our tests are written in TypeScript (a superset of JavaScript). Our test example for Universal looks like this:

 

Figure 4. Automated test Universal
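To give a feel for what such a test looks like in code, here is a minimal hypothetical sketch of the add-customer scenario in Cypress; the route, test IDs, and field names are assumptions for illustration:

```typescript
// Hypothetical Cypress test for the add-customer scenario; the route,
// test IDs, and field names are assumptions for illustration.
describe('Customer screen', () => {
  beforeEach(() => {
    cy.visit('/customers'); // assumes a logged-in session, e.g. via a login command
  });

  it('adds a new customer and shows it in the grid', () => {
    // Open the form in add mode via the 'add' button.
    cy.get('[data-testid="btn-add"]').click();

    // Add some values to the record.
    cy.get('[data-testid="field-name"]').type('New Customer');
    cy.get('[data-testid="field-city"]').type('Apeldoorn');

    // Save the record.
    cy.get('[data-testid="btn-save"]').click();

    // Check that the new customer is visible in the grid with the correct values.
    cy.get('[data-testid="grid"]')
      .should('contain', 'New Customer')
      .and('contain', 'Apeldoorn');
  });
});
```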

When automating our tests at Product Innovation, we follow a few important principles:

  • We always use a separate environment to run our automated tests. At the start of every test, we make sure that all existing data is cleared and fresh test data is added. The tests should always work independently of each other.
  • For selecting elements in the UI, we use test IDs as much as possible. These test IDs are included in the Universal code and can be used by Cypress (or other tools) to search for a specific element in the DOM (Document Object Model), for example, an input field on a form that you want Cypress to select and type a value into. Test IDs never change and are therefore a safe way to search for elements.
  • Just like in the Indicium tests, we make use of methods: reusable pieces of code. In Cypress, these methods are referred to as custom commands. You might have noticed one in the Cypress example: ‘verifyInputValue’ is a custom command used to validate the input value of a field (a sketch of such a command follows after this list). Because we name these methods semantically, a method will always ‘tell’ us what it is supposed to do. This not only makes our code more readable and maintainable, but also increases the reliability of the tests.
  • We try to prevent redundancy as much as possible. If we have tested something once, we won’t need to test that again. For example, if we’ve already extensively tested adding a customer, we won’t do this again in a test where we delete a customer.
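A minimal sketch of how a custom command like ‘verifyInputValue’ could be defined, assuming the hypothetical test-ID convention from the examples above (the implementation details are illustrative, not our actual code):

```typescript
// Hypothetical implementation of a custom command like 'verifyInputValue';
// the selector convention is an assumption. Type declarations for custom
// commands normally live in a separate .d.ts file.
declare global {
  namespace Cypress {
    interface Chainable {
      /** Validates the input value of the field with the given test ID. */
      verifyInputValue(testId: string, expected: string): Chainable<JQuery<HTMLElement>>;
    }
  }
}

Cypress.Commands.add('verifyInputValue', (testId: string, expected: string) => {
  // The semantic name 'tells' us what the command does: look up the field
  // by its test ID and assert that its input holds the expected value.
  return cy.get(`[data-testid="${testId}"] input`).should('have.value', expected);
});

export {}; // make this file a module so the global declaration is allowed

// Usage in a test:
// cy.verifyInputValue('field-city', 'Apeldoorn');
```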

We have automated tests for all the components of the Thinkwise platform. We also run visual tests for Universal; these tests automatically create screenshots and compare the ‘before’ situation to the ‘after’ situation. All our tests are integrated into our CI pipeline in Azure DevOps. This means that the tests are run every night and every time we merge changes in our code. We can also kick off these test runs manually. We strive to fully automate the regression test for Universal, just like we have done for Indicium. In the regression test, we check whether everything still works as expected after every release. This gives us space to focus on testing new features and extending our test base even more. In the long term, we are looking into ways to test our whole chain (the Metamodel, Indicium, and the user interface) at once. We are also exploring the possibilities of creating more handles in the platform to help you with automated testing, for example, by generating test cases from our model. That last part is a concept for the future.

 

So, what does this mean for you?

 

As you may have noticed, at Product Innovation we specifically test the platform. We make sure the toolbox you use as a platform developer is of high quality. We do not test whether an application that is built with the platform works correctly. Our test applications are ‘simple’ and use mock data. We do not test thirty-step process flows that require very specific user actions. And we certainly do not test whether applications made with the platform meet the business requirements. To give another example: in our test, we entered a value in the ‘tax number’ field, but we only check if we can add text and numbers to the field. In your application, you’d probably want to test whether the tax number is actually correct. Another example is our ‘prepare data’ task: we wrote a few simple SQL statements that insert and delete data. Your tasks and functionality will be much more complex, and you’d want to write unit tests and process tests for them.

Long story short: testing the end application is and will remain a very important part of the quality of the software. How to set up your tests, and which aspects you should consider in your specific situation, is the expertise of our colleagues at Professional Services. If you find something during your own tests that we did not foresee, that is very valuable information for us to further improve our test base. You can create a ticket for that; make sure you always include clear reproduction steps. If you do, we can reproduce the error and even include it in our regression test. That way, we not only help you by fixing the problem as fast as possible; together, we can also continuously increase the quality of the Thinkwise platform.

The Universal team releases a beta version every three weeks. This is a perfect opportunity for you to help us by checking whether your application still works properly. The sooner we find out about issues, the better. So, don’t hesitate to install a beta version and run your own tests on it. We hope that, after reading this blog, you have a better understanding of what we do in Product Innovation to keep up the high-quality standards of the platform, and what we cover in our tests.

