Model Based Testing – Automation level

Model Based Testing (MBT) is the next level in test automation. To explain what this means, we should look at three different steps in testing:

  1. Test case specification: what do we want to test? Also known as logical test cases (e.g. I want to test login)
  2. Test case generation: how do we test? Also known as physical test cases (e.g. I am going to test login with user a and password b; see the sketch after this list)
  3. Test execution: running the test cases
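
To make the distinction concrete, here is a minimal sketch in Python with pytest: the test function is the logical test case ("I want to test login"), and each parameter row is one physical test case. The login function and the credentials are hypothetical stand-ins for a real system.

    import pytest

    def login(user, password):
        # Hypothetical stand-in for the real login of the system under test.
        return user == "a" and password == "b"

    # Logical test case: "I want to test login".
    # Each tuple is one physical test case: concrete values plus expected outcome.
    @pytest.mark.parametrize("user, password, expected", [
        ("a", "b", True),        # valid credentials
        ("a", "wrong", False),   # wrong password
        ("", "b", False),        # missing user
    ])
    def test_login(user, password, expected):
        assert login(user, password) == expected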

Testing methodologies such as TMAP and ISTQB distinguish more steps, but for now we focus on just these three. The reason these steps need highlighting is that the testing world is slowly evolving and more and more automation is being used. MBT is simply the next step.

The more automation, the more complex testing becomes, so MBT does require more technical skills than traditional testing. Let’s zoom in on the different levels of test automation and their benefits and drawbacks.

Manual testing

All steps within a manual test are performed manually, hence the name ‘manual testing’. When writing tests it is important to keep in mind the different test techniques and the risks of the product under test.

The skills necessary for this are knowledge of test techniques and risk assessment.

Test automation: scripting

During manual testing there are many repetitive steps that could easily be automated. As a first step, typically small things are automated, for example getting the System Under Test (SUT) into a ready-for-testing state, or a script that collects screenshots. Nothing too complex.

This helps with test case execution, but the test cases themselves still have to be written out and specified in full.
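
As an illustration, here is a minimal sketch of such a helper script in Python: it polls a health endpoint until the SUT reports that it is ready for testing. The endpoint URL and the timeout are assumptions, not part of any particular tool.

    import time
    import requests

    def wait_until_ready(url="http://localhost:8080/health", timeout=60):
        """Poll a (hypothetical) health endpoint until the SUT is ready for testing."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                if requests.get(url, timeout=5).status_code == 200:
                    return True
            except requests.RequestException:
                pass  # SUT not up yet; keep polling
            time.sleep(2)
        raise TimeoutError(f"SUT at {url} was not ready within {timeout} seconds")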

The skills necessary for this are knowledge of scripting (or programming), test techniques, and risk assessment.

Test automation: BDD/DSL

In this blog post we are not talking about the full scope of Behaviour Driven Development (BDD), just the test automation part of it (see https://www.testnet.org/artikelen/why-is-bdd-confused-with-testing/ about the distinction). In BDD, keywords written in a human-readable language walk through a scenario. The “given, when, then” sentences make it easy to understand what the start state is (given), which actions are taken (when), and what the expected result is (then).

These keywords are written in a language the domain specialists understand, a Domain Specific Language (DSL), so that the team can discuss the scenarios with people who do not read programming languages. This way the tester doesn’t have to write the full physical test cases themselves, and some parts are generated, making it easier for the tester to use.

These keywords must be connected to the SUT, which is done through adapters. The adapter is code, somewhat similar to the scripting step earlier, so programming knowledge is still essential.
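
As a sketch of what this can look like, here are step definitions with the Python behave library. The Gherkin scenario is shown as a comment, and the sut client on the context is a hypothetical adapter object, not the API of any specific tool.

    # login.feature (the human-readable Gherkin, discussed with domain specialists):
    #   Scenario: Successful login
    #     Given the login page is shown
    #     When user "a" logs in with password "b"
    #     Then the dashboard is shown

    from behave import given, when, then

    @given("the login page is shown")
    def step_open_login(context):
        context.sut.open_login_page()  # hypothetical SUT client

    @when('user "{user}" logs in with password "{password}"')
    def step_login(context, user, password):
        context.sut.login(user, password)

    @then("the dashboard is shown")
    def step_check_dashboard(context):
        assert context.sut.current_page() == "dashboard"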

The skills necessary for this are knowledge of domain specific languages, scripting (or programming), test techniques, and risk assessment.

Model Based Testing

With Model Based Testing we no longer have to think about specific scenarios but rather about workflows, states, and activities/transitions. When modelling, it is important to keep the test goals in mind and implement them in the model (as described in the previous blog post).

Once the models are made, the physical test cases are generated, and in most cases the tests can also run automatically. MBT tools differ considerably in how they run test cases; there are three options:

  1. No test automation: the tool only generates the test cases, which are executed manually
  2. On-the-fly testing: while a test is being generated it can already run on the SUT. This is very strong for random testing, but also for debugging while writing the model.
  3. Offline testing: specific test cases are generated that the tester can first review before running them on the SUT. This is very strong for situations where the burden of proof of reproducibility is high.

Connecting the model to the SUT is in this case also done through an adapter, just as with DSLs. In environments that already apply Behaviour Driven Development, Model Based Testing is typically a logical next step, and the adapter can often be reused; a sketch of both automated modes follows below.
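
Here is a minimal sketch in plain Python: a small state-machine model of a login flow, a random on-the-fly walk that drives the SUT through an adapter while generating, and offline generation of concrete test paths for review first. The model, the actions, and the adapter interface are all assumptions for illustration, not the API of any particular MBT tool.

    import random

    # Model: per state, the possible transitions (action, target state).
    MODEL = {
        "logged_out": [("login_ok", "logged_in"), ("login_bad", "logged_out")],
        "logged_in":  [("logout", "logged_out"), ("view_profile", "logged_in")],
    }

    def on_the_fly(adapter, steps=10):
        """Walk the model at random, driving the SUT through the adapter as we go."""
        state = "logged_out"
        for _ in range(steps):
            action, state = random.choice(MODEL[state])
            adapter.execute(action)          # run the step on the SUT immediately
            assert adapter.state() == state  # the SUT must follow the model

    def offline(length=3, state="logged_out", path=()):
        """Generate all concrete test paths of a given length, to review before running."""
        if length == 0:
            yield path
            return
        for action, nxt in MODEL[state]:
            yield from offline(length - 1, nxt, path + (action,))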

The skills necessary for this are knowledge of modelling, domain specific languages, scripting (or programming), test techniques, and risk assessment.

AI based testing

There are already tools that use AI for testing, although this is still limited. I still wanted to highlight it because it is going to be the next step. There are two main streams in this.

Prompting

Looking at the current AI hype, everybody loves Large Language Models (LLMs). These can also be used for testing: ChatGPT, Claude, Gemini, etc. can generate the test cases, rather than the tester having to think up the different scenarios.

This way of testing is only advised for test engineers who can prompt very well, but who are also able to cross-verify whether the tests make sense, because we have seen LLMs hallucinate.

For example, if ChatGPT is asked to create test cases following the Boundary Value Analysis (BVA) test technique, the results might look convincing, but quite often it misses a case or adds a few extra tests, which is not the desired behaviour. This is because LLMs are trained on generic data, not specifically on information about testing or test techniques. It is still a useful tool, but the AI should be treated as a junior test engineer: trust, but verify the outputs.
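
For reference, this is what a complete two-sided BVA set looks like for a simple numeric range, so an LLM’s output can be cross-verified against it. The range 1..100 is just an example.

    def bva_values(low, high):
        """Boundary Value Analysis for an inclusive valid range [low, high]:
        each boundary plus its immediate neighbours on both sides."""
        return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

    # For a field that accepts 1..100 this yields [0, 1, 2, 99, 100, 101];
    # an LLM that misses 99 or adds, say, 50 has not followed the technique.
    print(bva_values(1, 100))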

The skills necessary for this are knowledge of prompting, modelling, domain specific languages, scripting (or programming), test techniques, and risk assessment.

 

Machine learning

There are more techniques than LLMs that fall under the umbrella of Artificial Intelligence. One of these techniques is machine learning.

As we have seen in the previous blog post, modelling is a big hurdle for MBT, but if we can ask the system to produce a model for us, that could be a good starting point.

Using machine learning we can learn what the expected behaviours of the system are without having to model it ourselves. Of course there are risks here: issues could be missed, so the model should be cross-verified before it is reused for testing. There is a body of research on this (https://research.utwente.nl/en/publications/efficient-learning-and-analysis-of-system-behavior).
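
As a heavily simplified sketch of the idea, the passive variant can be illustrated in Python: collect every transition observed in logged traces of the SUT into a candidate model. Real tools use far more sophisticated algorithms (such as active automata learning); the trace format here is an assumption.

    # Observed traces: sequences of (state, action, next_state) logged from the SUT.
    traces = [
        [("logged_out", "login_ok", "logged_in"), ("logged_in", "logout", "logged_out")],
        [("logged_out", "login_bad", "logged_out")],
    ]

    def learn_model(traces):
        """Collect every observed transition into a candidate model (cross-verify it!)."""
        model = {}
        for state, action, nxt in (step for trace in traces for step in trace):
            model.setdefault(state, set()).add((action, nxt))
        return model

    # The result has the same shape as a hand-written model and can be reviewed,
    # corrected, and then reused for Model Based Testing.
    print(learn_model(traces))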

The skills necessary for this are knowledge of machine learning, modelling, domain specific languages, scripting (or programming), test techniques, and risk assessment.

 

Conclusions

More test automation is going to be necessary to cover technical systems that are becoming ever more complex. However, new skills are needed to support that automation. A single test engineer doesn’t have to have all these skills, part can be picked up by other members of the team, but the skills are needed.

 


Kyra Hameleers
Test Engineer