
... tests around the clock
Our ECU-TEST, TEST-GUIDE, TRACE-CHECK and SCENARIO-ARCHITECT tools fit into every phase of automated vehicle software testing. Even as individual tools, they simplify and accelerate the testing process enormously and make it more effective. Linked together, they become far more than the sum of their parts – as an Automotive DevOps Platform, they achieve maximum impact.
With consistent workflows and highly functional interfaces for the flexible connection of further tools, we see the Automotive DevOps Platform as a collaborative platform for all stakeholders in the DevOps process of vehicle development.
We deliberately strive for close collaboration in the development of test cases (Dev) and their execution (Ops), based on direct feedback and seamless traceability. Special platform features that focus on overarching workflows are important in this context. These include our current highlights:
Imagine executing thousands of test cases throughout the release process as efficiently as possible – in the most time-saving and resource-optimized manner. Test executions would run in the correct sequence and simultaneously in the correct test environment, automated and around the clock. There would be quick and continuous feedback, no idle time for test resources and a real boost in development speed. Doesn't seem realistic? Well, it is! We have developed and successfully tested the perfect solution – execution distribution with ECU-TEST and TEST-GUIDE.
Finding the right test bench for a test case is more difficult in a heterogeneous hardware-in-the-loop (HiL) test landscape than in virtual software-in-the-loop (SiL) farms. On SiL test benches, the necessary configurations and test environments can be quickly changed, exchanged and adapted to the required test scope. HiL farms, on the other hand, often consist of different and elaborately configured HiL test benches in order to cover many different test scopes in a varied and realistic environment over the long term, and to be able to test in different integration phases (e.g., component HiL and overall HiL). So it can happen that at the time of test execution, no free HiL test bench matches the desired execution environment, or that a matching HiL is currently under maintenance. Often, however, several test benches in a HiL network have identical features. The big challenge, then, is to distribute the queued test cases to appropriate HiL test benches so that they can be executed simultaneously.
In practice, the search for the perfect match between test order and test bench is still mostly carried out manually – not a very efficient method. With execution distribution, this allocation can be automated. The setup requires only a few steps:
1. Install ECU-TEST
2. Install TEST-GUIDE
3. Install and configure ResourceAdapter
4. Configure data storage in TEST-GUIDE
5. Export playbook from ECU-TEST to TEST-GUIDE
6. Sit back and watch
Playbooks describe the test order. They are created directly in ECU-TEST or TEST-GUIDE and exported to the TEST-GUIDE data storage via an integrated public interface. A playbook contains everything necessary for executing the test order – the workspace to be used including the relevant test cases, the parameterization of global constants, the test bench configuration, and the CleanUp after execution. TEST-GUIDE becomes active as soon as new playbooks appear in the TEST-GUIDE data storage. It independently distributes the test orders to suitably configured test benches in the order of their prioritization. It is supported by ResourceAdapter, a piece of software installed on the test benches. ResourceAdapter continuously reports vital data, configurations and test case executions of the test benches to the TEST-GUIDE server. Which test has already run? Which HiL is stuck? Which HiL is idle right now? This permanent monitoring is the basis for deciding live on which test bench the next test order can be executed using ECU-TEST.
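To make the playbook contents more concrete, here is a minimal sketch in Python. The field names and values are purely illustrative assumptions – they are not ECU-TEST's actual playbook schema – but they mirror the four ingredients described above: workspace and test cases, global constants, test bench configuration, and CleanUp.

```python
# Hypothetical playbook structure -- field names are illustrative
# assumptions, not the actual ECU-TEST/TEST-GUIDE schema.
playbook = {
    "workspace": "Workspace_VehicleX",            # workspace to be used
    "test_cases": ["DoorControl.pkg", "WindowLift.pkg"],
    "global_constants": {"VARIANT": "EU", "VOLTAGE": 12.0},
    "testbench_configuration": "HiL_Comfort_v3",  # required bench setup
    "cleanup": True,                              # run CleanUp after execution
    "priority": 10,                               # lower value = executed first
}

def required_fields_present(pb: dict) -> bool:
    """Check that a playbook carries everything an order needs."""
    required = {"workspace", "test_cases", "global_constants",
                "testbench_configuration", "cleanup"}
    return required <= pb.keys()
```

A server watching the data storage could run such a check before queuing a newly exported playbook for distribution.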
However, execution distribution also solves another common problem. After a test order is allocated, all the conditions for test execution still have to be created at the test station: repositories need to be checked out, data copied back and forth, and additional dependencies resolved. ResourceAdapter does all this too – based on the description in the playbook.
The system can also be adapted very flexibly and individually via its REST API. The specific requirements for a test bench are read in JSON format and resolved automatically. This means that the right test bench – whether virtual or physical – can be found quickly.
Classically, each automated test of new software artifacts is immediately followed by the analysis of the trace recordings created in the process (such as bus logs) and the test evaluation. But this approach is very time-consuming and impractical for tests with countless variants and scenarios. Extensively configured hardware-in-the-loop (HiL) test benches are already in continuous operation specifically for testing and validating newly developed ECU software. Given the rapidly growing proportion of software in vehicles, the number of HiL test benches would actually have to grow along with the ever-increasing scope of testing. But this is hardly a solution because of the enormous cost and space requirements.
So the declared goal is to achieve more performance with the same capacity. This leaves only one option: decoupling test execution from the analysis and evaluation of recorded measurement data. The tests continue to run on the existing HiL test benches, while the analysis of the test results – the so-called trace analysis – is outsourced to other existing or cost-effective systems. The principle of downstream analysis distribution by TEST-GUIDE makes this possible. After ECU-TEST executes test orders on HiL test benches, analysis orders are created and transferred to a configured TEST-GUIDE. In combination with ResourceAdapter, which continuously provides information about available computing capacities, TEST-GUIDE distributes these orders to test-bench-independent ECU-TEST instances (PC, virtual machine, cloud). This is where the trace analyses are performed. The results, in the form of test reports, are automatically stored in TEST-GUIDE and provide the basis for detailed evaluations.
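The distribution principle can be sketched as a simple load-balancing loop: each new analysis order goes to the compute instance with the most free capacity. This is only a conceptual sketch – the instance names and the queue-length metric are assumptions, not TEST-GUIDE's actual scheduling algorithm.

```python
# Conceptual sketch of downstream analysis distribution: each order is
# assigned to the instance with the shortest queue, as a stand-in for the
# capacity information a ResourceAdapter would report. Illustrative only.

def distribute(orders: list[str], queue_lengths: dict[str, int]) -> dict[str, list[str]]:
    """Assign each analysis order to the currently least-loaded instance."""
    assignment = {name: [] for name in queue_lengths}
    load = dict(queue_lengths)  # mutable copy of current queue lengths
    for order in orders:
        target = min(load, key=load.get)  # least-loaded instance wins
        assignment[target].append(order)
        load[target] += 1
    return assignment
```

Because the evaluations run in parallel on cheap, interchangeable instances, adding another PC or VM scales the analysis throughput without touching the HiL farm.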
In this way, valuable test resources are not blocked by evaluations and the test throughput can be increased significantly. In addition, the parallelization of evaluation allows further scaling of the test scopes to be executed.
Everyone knows the problem – the larger the file, the more extensive its content and the longer it takes to open. The same applies to reading bus/service databases and A2L files. These are very large databases with complex formats that ECU-TEST must read in and parse each time the configuration starts. This process can take a very long time.
Therefore, the idea is to store this data centrally – in a cache – after it has been read in for the first time. Caching is perfect when many users work with the same database: they can all access the same cache quickly and simultaneously. This not only improves the efficiency of data retrieval enormously, but also significantly boosts the performance of ECU-TEST. What was once just an idea has now become reality. Try it out!
The only requirement is a central storage location for the cache, set up in the workspace settings of ECU-TEST. After the database has been read in by ECU-TEST, the associated cache of the bus or service databases is stored encrypted on the configured network drive. However, since access to a network drive is usually not governed by a structured process, important artifacts can be accidentally deleted.
It is therefore better to store the cache for exchange in any configured TEST-GUIDE depository (e.g., Artifactory or S3). Users of other ECU-TEST instances can access the cache from there and thus avoid the time-consuming creation of their own. Access tokens are configured once in TEST-GUIDE for this purpose. If the input data changes, the cache is recreated and synchronized with the one already stored. This makes starting the configuration much faster: in our own test environment, this feature reduced the duration of our night run from over six hours to 2.5 hours.
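The "recreate the cache when the input data changes" behaviour can be illustrated with a content-hash lookup: the cache key is derived from the database file's bytes, so any change to the input automatically misses the old entry. The function names and the dict-based store are assumptions for illustration, not the ECU-TEST cache implementation.

```python
import hashlib
import pathlib

# Sketch of content-addressed caching: the key is a hash of the input
# database, so a changed file automatically triggers a re-parse.
# Names and the dict store are illustrative assumptions.

def cache_key(db_path: str) -> str:
    """Derive a stable cache key from the database file's content."""
    data = pathlib.Path(db_path).read_bytes()
    return hashlib.sha256(data).hexdigest()

def load_or_parse(db_path: str, store: dict, parse):
    """Reuse a previously parsed database if its content is unchanged."""
    key = cache_key(db_path)
    if key not in store:          # first read, or the input data changed
        store[key] = parse(db_path)
    return store[key]
```

With a shared store (here a plain dict standing in for a network drive or depository), only the first reader pays the parsing cost; everyone else gets the cached result.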