AI-based testing in virtual safeguarding

Software development has made enormous progress in recent years: increasingly automated workflows, a new agile mindset among developers, and, thanks to machine learning, even talk of the beginning of a paradigm shift. One area, however, has always been the bottleneck: testing.

Testing software in conjunction with hardware essentially consists of writing so-called test cases based on previously defined requirements, executing the tests, and analyzing any errors that occur. The more complex the systems, the clearer one thing becomes: it is an almost impossible task to test every imaginable scenario and to pinpoint, among the multitude of input variables, exactly the one that causes the misbehavior. It becomes even more complicated once you consider that the test itself can be error-prone.

Companies are therefore constantly confronted with high resource demands resulting from manual work steps and the time-consuming use of test hardware. Most errors that occur follow recurring patterns, but these patterns are difficult to detect manually, so testers end up repeating the same analysis over and over. A heterogeneous system landscape does the rest. While many companies across a wide range of industries engage in "AI washing" (along the lines of "greenwashing", where PR departments market an undeserved green image of their company), artificial intelligence can add real value in testing. Only by applying the full range of machine learning and deep learning methods can recurring error patterns be distinguished from new ones, putting an end to the tedious duplication of testers' work. Before this can happen, however, a few steps still have to be taken.

Andreas

RPA Professional

18.03.20

Model-based testing

The test cases are modeled using visual representations (e.g. in the modeling language UML). This saves time: once the model exists, many individual test cases can be derived from it automatically. If the requirements for the system change, a change to the model automatically adapts all test cases affected by it. In addition, the test data can be generated from the model and the test run prepared. The model-based approach also supports the increasingly demanded agile methods such as Scrum: thanks to the gain in speed, test cases can be derived in parallel with feature development within the same sprint (e.g. a two-week cycle).
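
To make the idea concrete, here is a minimal sketch in Python: the feature under test is modeled as a small state machine (a stand-in for what a UML model would describe in a real toolchain), and every event path through it up to a fixed depth becomes one derived test case. The model and the `derive_test_cases` helper are purely illustrative, not part of any specific tool.

```python
# Illustrative feature model as a state machine -- a stand-in for
# what a UML model would describe in a real toolchain.
MODEL = {
    "Off":     {"power_on": "Standby"},
    "Standby": {"activate": "Active", "power_off": "Off"},
    "Active":  {"deactivate": "Standby", "fault": "Error"},
    "Error":   {"reset": "Standby"},
}

def derive_test_cases(model, start="Off", max_depth=3):
    """Enumerate every event sequence up to max_depth; each path is one test case."""
    cases = []
    def walk(state, path):
        if len(path) == max_depth:
            return
        for event, target in model[state].items():
            step = path + [(state, event, target)]
            cases.append(step)
            walk(target, step)
    walk(start, [])
    return cases

# If a requirement changes, only MODEL is edited; rerunning this
# regenerates all affected test cases automatically.
for case in derive_test_cases(MODEL):
    print(" -> ".join(f"{s} --{e}--> {t}" for s, e, t in case))
```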

In A Nutshell

  • Model-based testing
  • Continuous Integration
  • Automated testing from ECU and backend to frontend
  • Jenkins
  • Artificial intelligence

Automated test flow with Jenkins

Before the actual test run, a selection of the previously generated test cases must take place, and the right test hardware – i.e. the control unit on which a particular feature will later run – has to be reserved. After the test run, the results have to be compiled together with the reports. To keep all of this running in parallel over the entire course of the project, the testing experts at Cognizant Mobility rely on the tool Jenkins. The approach behind it is as simple as it is ingenious: changed test cases are automatically pushed through the entire process by the tool, without any manual trigger.
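
The following Python sketch illustrates the kind of flow such a job automates. All the helpers (`select_changed_cases`, `reserve_ecu`, `run_case`, `publish_report`) are hypothetical stand-ins for the four stages described above, not Jenkins or Cognizant Mobility APIs.

```python
from dataclasses import dataclass

@dataclass
class Ecu:
    """Stand-in for a reserved piece of test hardware (a control unit)."""
    name: str
    def release(self):
        print(f"released {self.name}")

def select_changed_cases(commit):
    # In practice: compare against the last build and pick the affected cases.
    return [f"{commit}_case_{i}" for i in range(3)]

def reserve_ecu(feature):
    # In practice: book the control unit the feature will later run on.
    return Ecu(name=f"ecu-for-{feature}")

def run_case(case, ecu):
    print(f"running {case} on {ecu.name}")
    return (case, "PASS")

def publish_report(results):
    for case, verdict in results:
        print(case, verdict)

def on_push(commit, feature):
    """What the CI job triggers automatically when test cases change."""
    cases = select_changed_cases(commit)   # stage 1: select test cases
    ecu = reserve_ecu(feature)             # stage 2: reserve test hardware
    try:
        results = [run_case(c, ecu) for c in cases]  # stage 3: execute
    finally:
        ecu.release()                      # free the bench for parallel jobs
    publish_report(results)                # stage 4: results and reports

on_push("abc123", "seat_heating")
```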

The biggest advantage lies in failure analysis, and it becomes apparent once you realize the complexity a tester faces today. Failed tests must be analyzed manually, one by one, to find out why they failed, based on parameters and time series. Several tests may fail because of the same parameter or because they depend on one another. The function under test may even be flawless and only fail because of faulty test hardware or a test case that was not modeled precisely enough. Distinguishing between these sources of error is very difficult, and identifying them across departmental boundaries is practically impossible. Once the faulty hardware or test case has been corrected, the actual function is manually retested. The errors that are ultimately confirmed are logged and returned to software development.
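
Even a crude grouping makes the duplication visible. The sketch below, with invented failure records, collapses failures that share the same parameter and error message so each suspected cause is analyzed only once; a real signature would also include the time-series features mentioned above.

```python
from collections import defaultdict

# Invented failure records: (test id, parameter involved, error text).
failures = [
    ("TC_0041", "bus_voltage", "timeout waiting for ACK"),
    ("TC_0042", "bus_voltage", "timeout waiting for ACK"),
    ("TC_0107", "temp_sensor", "value out of range"),
    ("TC_0113", "bus_voltage", "timeout waiting for ACK"),
]

# Group by a simple failure signature so each suspected cause is reviewed once.
groups = defaultdict(list)
for test_id, parameter, message in failures:
    groups[(parameter, message)].append(test_id)

for (parameter, message), tests in groups.items():
    print(f"{parameter}: '{message}' -> {len(tests)} tests: {tests}")
```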

A strong AI can recognize patterns in the errors and automatically draw conclusions about their source. Not only does it reveal the similarity of individual defects at high speed; it also generates new test cases completely autonomously and triggers retests. How large the savings from fully autonomous testing will be cannot yet be estimated precisely, but initial estimates by Cognizant Mobility experts suggest that the potential is great. After all, we are talking about nothing less than a revolution in this area of development – one that ultimately brings the vision of software that writes and improves itself within reach.
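
As a rough illustration of what such pattern recognition could look like, here is a sketch using scikit-learn (an assumption; the article names no toolchain) with invented log lines: failure messages that cluster together are treated as one recurring error pattern, while DBSCAN's noise points are candidates for genuinely new errors that deserve a human look.

```python
# A minimal sketch, assuming scikit-learn is available; the log lines
# are invented and the clustering parameters are not tuned for real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import DBSCAN

logs = [
    "timeout waiting for ACK on CAN bus after wakeup",
    "timeout waiting for ACK on CAN bus after reset",
    "checksum mismatch in diagnostic frame 0x7E0",
    "timeout waiting for ACK on CAN bus after wakeup",
    "unexpected NRC 0x31 from ECU during session change",
]

# Vectorize the failure messages and cluster them by cosine similarity;
# label -1 marks messages that match no known recurring pattern.
X = TfidfVectorizer().fit_transform(logs)
labels = DBSCAN(eps=0.5, min_samples=2, metric="cosine").fit_predict(X)

for log, label in zip(logs, labels):
    tag = f"recurring pattern {label}" if label >= 0 else "new pattern?"
    print(f"[{tag}] {log}")
```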