A.I.-based Testing and Robotic Process Automation

Until now, A.I.-based approaches to performing complex maneuvers or to testing have remained in the realm of science fiction, far from any practical engineering implementation. The challenges in this area have multiplied in recent years due to the growing complexity of vehicle systems. Cognizant Mobility describes a two-part tool landscape for use in the testing process.


Automate complex interactions

The more complex the system, the more decisions the vehicle makes on its own, and the more interactions are required between the various control units to perform driving maneuvers, the greater the challenge of testing these complex, networked systems in advance. This rule of thumb needs no explanation. What does need explaining is how testing for these systems will look in the future and in everyday practice. Testing costs cannot be allowed to grow at the same rate as the demands on the system; new, smart solutions to the complexity problem are needed, and A.I.-based testing is one of them. To meet this challenge in everyday testing practice, the machine learning and testing domain experts at Cognizant Mobility, together with the IoT and robotics experts at BotCraft, have developed a two-part tool landscape intended to realize so-called end-to-end testing: a fully autonomous test sequence for any product, from test order through test execution to failure analysis (FIG 1). As a first step, this new testing tool landscape relies on an approach known as Robotic Process Automation (RPA). The challenge is to automate the complex interactions between existing planning systems, test systems, hardware and product configurations via bots and microservices. The subsequent step focuses on supporting the analysis of the error patterns found with a powerful artificial intelligence.
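The end-to-end sequence just described (test order, test execution, failure analysis) can be pictured as a chain of small services. The following is a minimal sketch only; all function and field names are illustrative assumptions, not part of the actual tool landscape:

```python
from dataclasses import dataclass

@dataclass
class TestOrder:
    ecu: str          # ECU under test
    sw_version: str   # software build to be flashed
    region: str       # country/market configuration of the HIL

def plan_test_cases(order: TestOrder) -> list[str]:
    """Step 1: derive the test cases relevant for this order (stubbed)."""
    return [f"{order.ecu}-TC{i}" for i in (1, 2, 3)]

def execute_on_hil(order: TestOrder, cases: list[str]) -> dict[str, str]:
    """Step 2: run the cases on a HIL configured for the order.
    Stubbed here: every case simply passes."""
    return {case: "passed" for case in cases}

def analyze_failures(results: dict[str, str]) -> list[str]:
    """Step 3: collect the failed cases for downstream A.I. analysis."""
    return [case for case, verdict in results.items() if verdict == "failed"]

def end_to_end(order: TestOrder) -> list[str]:
    """Fully autonomous sequence: order -> execution -> failure analysis."""
    cases = plan_test_cases(order)
    results = execute_on_hil(order, cases)
    return analyze_failures(results)
```

With the stubbed executor, `end_to_end(TestOrder("brake_ecu", "2.4.1", "EU"))` returns an empty failure list; in the real pipeline, each step would be a bot or microservice talking to planning systems, test benches and databases.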


The benefits of the novel approach are best evaluated by analyzing the status quo of today's product and system assurance. In order to test a system completely, it is first necessary, in simplified terms, to set up the relevant test cases for a defined test order. Next, the so-called hardware-in-the-loop (HIL) test bench is configured for that order; it simulates the desired vehicle environment, adapted to the exact product variant. Then the software under test must be ported to the ECUs and the test registered for execution. A scheduling system ensures that the test run is performed at an optimal time. What seems quite clear in theory has serious implications for the highly complex systems in the vehicle: the HIL must support countless country settings, because the hardware is structured completely differently from region to region. Added to this are dependencies between the different model variants, and the ECU software itself exists in countless variants and versions. Last but not least, the environmental conditions of the test are themselves changeable. The dependencies between the systems are so strong that the smallest error in the setup renders the test unusable. Moreover, basic settings still have to be reconfigured manually for every test setup today, resulting in test bench utilization of less than 70%. That figure is hardly sustainable in the face of the aforementioned challenges, and it makes RPA in automotive testing an attractive option.
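The dependency problem described above (region, model variant and software version all have to match before a run is worthwhile) can be made concrete with a small validation sketch. The variant matrix below is invented for illustration; real HIL setups involve far more dimensions:

```python
# Hypothetical variant matrix: which software versions are released
# for which (region, model) combination on the HIL.
SUPPORTED_VARIANTS = {
    ("EU", "model_a"): {"2.1", "2.2"},
    ("EU", "model_b"): {"2.2"},
    ("US", "model_a"): {"2.1"},
}

def configure_hil(region: str, model: str, sw_version: str) -> dict:
    """Reject inconsistent setups before a test run is wasted on them."""
    allowed = SUPPORTED_VARIANTS.get((region, model))
    if allowed is None:
        raise ValueError(f"no HIL configuration for {region}/{model}")
    if sw_version not in allowed:
        raise ValueError(f"software {sw_version} not released for {region}/{model}")
    return {"region": region, "model": model, "sw_version": sw_version}
```

A bot performing such a check up front would turn a silently unusable test run into an immediate, explainable rejection, which is exactly the kind of manual plausibility work that is automated away.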


RPA is not entirely unknown in automotive testing; it is used in simplified form as macros in some places and has already gained acceptance in other industries. Today, bots (and RPA is nothing more than bots) ensure, for example, that e-mails are automatically scanned for keywords and routed to the right department, that bank transfers are automatically triggered in accounting, and that confirmations are finally sent. Only in this way can some processes be made economically viable. Intelligent bots do not yet exist in automotive testing because the interacting systems are far more complex and the interfaces less standardized. Whereas in the insurance industry, for example, only the Outlook office software needs to communicate with Excel, an ERP system and a database, testing involves a heterogeneous mix of differently configured hardware, test benches and many test systems, each component with its own connectivity philosophy.


The potential of automating testing alone becomes clear in the future process: the bot automatically recognizes that new software is available for an ECU. It then compiles all the necessary parameters, selects the required test cases, loads additional files, synchronizes with the available test software, activates the correct HIL configuration via a kind of multiplexer and triggers the test via a scheduler. The key ingredient in bringing a bot to life is seamless connectivity across all building blocks. This required not only the development of countless interfaces between the individual systems, tools, databases and hardware, a Herculean task in itself given the highly technical processes and the engineering and vehicle expertise involved, but also a higher-level functional language that allows all the building blocks to interconnect. The result is an adaptable software stack whose business logic is built from a library and requires only customer-specific configuration. In this way, all testing pipelines can be individually adapted to the needs of the manufacturer. The extreme parameterizability of the systems now makes it possible to fully test the high product variance at runtime. The bots can be used, for example, to test old and new versions of the device software in parallel, to add the country settings and to check all available models in automated test runs. This happens around the clock, since a failed test run, a network problem or a crashed test bench is detected by the bot itself, which independently initiates the necessary resets. It is no longer the human tester who decides on test scheduling; an intelligent scheduler takes control and ensures optimal utilization of the test hardware. Overall Equipment Effectiveness can thus be increased from the roughly 70% usual today to over 90%.
In addition, the manpower required for testing is reduced by about 30% while test coverage, depth and quality increase. In times of shrinking product margins and growing international competition, this is a relevant cost lever for manufacturers.
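The scheduler's contribution to utilization can be illustrated with a minimal greedy sketch: queued test runs are always assigned to the bench that becomes free first. The numbers and the utilization formula (total busy time divided by available bench time until the last run finishes) are illustrative, not the actual scheduling logic:

```python
import heapq

def schedule(run_hours: list[float], n_benches: int) -> tuple[float, float]:
    """Greedy longest-job-first assignment of test runs to HIL benches.

    Returns (makespan, utilization), where utilization is the total
    busy time divided by the bench time available until the last run ends.
    """
    busy_until = [0.0] * n_benches   # per-bench "free again at" times
    heapq.heapify(busy_until)
    for duration in sorted(run_hours, reverse=True):
        earliest_free = heapq.heappop(busy_until)
        heapq.heappush(busy_until, earliest_free + duration)
    makespan = max(busy_until)
    utilization = sum(run_hours) / (n_benches * makespan)
    return makespan, utilization
```

For five runs of 5, 3, 2, 2 and 1 hours on two benches, the sketch reaches a makespan of 7 hours at roughly 93% utilization, illustrating how automated, round-the-clock dispatching can push bench usage above the manual baseline.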


The testing landscape is also about analyzing product defects found during the testing phase so that they can be resolved in a timely manner. Here, too, it is worth first looking at the status quo in order to better understand the advantages of an efficient A.I. If a test case delivers a negative result ("Failed"), it is the test engineer's task to find the causes, and these can sometimes seem downright bizarre. Whether the trigger was, for example, a power surge in the onboard electrical system, a spurious signal from an ECU, or the test bench itself with a latency that happened to shift by a few milliseconds cannot be determined at first glance and often entails painstaking detective work. The vehicle experts examine the recorded data streams of all signals from the onboard network and, drawing on their vehicle knowledge and years of experience, can eventually infer the causes. In the early stages of product development, failure rates of 50% are not uncommon. In absolute numbers, this usually means several thousand failed test cases, for which several experts are needed to find the causes in a reasonable amount of time. For analysis, test cases and their underlying functions are manually partitioned by category and assigned to function experts. However, the error patterns do not respect these category boundaries, so it is little wonder that similar errors are processed in parallel by different people without their knowing of each other. The limits of human communication do not allow a holistic, cross-system and cross-test view of these defects at an early stage, nor an early search for patterns.


Instead of rigid category boundaries, Cognizant Mobility's experts use machine learning: advanced cluster analysis techniques search for similar events that led to the corresponding faults. In simplified terms, the existing test data and their results are transformed, based on their many properties, into a high-dimensional vector space (FIG. 2).

In this point cloud, cluster centers are formed and it is estimated which points count as neighbors. This neighborhood can be regarded as an approximation of a similarity measure. Vehicle domain knowledge is required here as well, because not every similarity is immediately meaningful. The clusters identified by the unsupervised learning procedure can be continuously assigned new test results and labeled. As the amount of data grows, the accuracy of the learning procedure increases and the overall approach becomes more and more stable. It has been shown that classification accuracies of over 90% for defect patterns can be achieved in this way. Examples from applied product and vehicle development suggest that replacing more than 20,000 hours of manual work per year is not uncommon.
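The clustering step can be illustrated with a minimal k-means sketch on a toy point cloud. This is a generic stand-in for the actual, more advanced cluster analysis; the data and parameters are invented:

```python
import numpy as np

def kmeans(points: np.ndarray, k: int, iters: int = 50, seed: int = 0):
    """Minimal k-means: assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)].copy()
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # distance of every point to every center (proximity as similarity)
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = points[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels, centers
```

On two well-separated groups of toy "defect signatures" the procedure assigns each group its own cluster; in production, such cluster labels would then be mapped to known defect patterns by domain experts, which is where the vehicle knowledge mentioned above comes in.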


Even if some parts of the developed testing pipeline have to be adapted to customer-specific requirements, the potential of A.I.-based testing is already apparent. Against the backdrop of the complexity explosion caused by highly autonomous assistants and strongly networked customer functions, it is clear that only with this setup can the necessary test coverage be achieved at a reasonable cost.