Automotive DevOps Platform

... tests around the clock

Our ECU-TEST, TEST-GUIDE, TRACE-CHECK and SCENARIO-ARCHITECT tools fit into every phase of automated vehicle software testing. Even as individual tools, they simplify and accelerate the testing process enormously and make it more effective. Linked together, they are far more than the sum of their parts – as an Automotive DevOps Platform they achieve maximum impact.

With consistent workflows and highly functional interfaces for the flexible connection of further tools, we see the Automotive DevOps Platform as a collaborative platform for all stakeholders in the DevOps process of vehicle development.

We deliberately strive for close collaboration in the development of test cases (Dev) and their execution (Ops), based on direct feedback and seamless traceability. Special platform features that focus on overarching workflows are important in this context. These include our current highlights:

Execution distribution

Imagine executing thousands of test cases throughout the release process as efficiently as possible, i.e., in the most time-focused and resource-optimized manner. Test executions would then run in the correct sequence and simultaneously in the correct test environment, automated and around the clock. There would be quick and continuous feedback, no idle time of test resources and a real boost in development speed. Doesn't seem realistic? Well, it is! We have developed and successfully tested the perfect solution – execution distribution with ECU-TEST and TEST-GUIDE.

Finding the right test bench for a test case to be executed is more difficult in a heterogeneous hardware-in-the-loop (HiL) test landscape than in virtual software-in-the-loop (SiL) farms. On SiL test benches, the necessary configurations and test environments can be quickly changed, exchanged and adapted to the required test scope. HiL farms, on the other hand, often consist of different and elaborately configured HiL test benches in order to cover many different test scopes in a varied and realistic environment over the long term, and to be able to test in different integration phases (e.g., component HiL and overall HiL). Thus, it is possible that at the time of test execution, no free HiL test bench matches the desired execution environment, or that a matching HiL is currently under maintenance. Often, however, several test benches in a HiL network have identical features. So, the big challenge is to distribute the queued test cases across suitable HiL test benches so that they can be executed simultaneously.
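
The matching problem described above can be pictured as simple set logic: a bench qualifies for a test order if it is free and its feature set covers the order's requirements. A minimal sketch – the bench names and feature tags are illustrative, not TEST-GUIDE's actual data model:

```python
# Illustrative sketch: match a queued test order to a free HiL bench
# whose feature set covers the order's requirements.

def find_bench(required_features, benches):
    """Return the name of a free bench that covers all required features."""
    for name, info in benches.items():
        if info["free"] and required_features <= info["features"]:
            return name
    return None  # no match right now -> the order stays in the queue

benches = {
    "HiL-01": {"features": {"powertrain", "CAN-FD"}, "free": True},
    "HiL-02": {"features": {"body", "LIN"}, "free": False},  # under maintenance
    "HiL-03": {"features": {"powertrain", "CAN-FD", "FlexRay"}, "free": True},
}

print(find_bench({"powertrain", "FlexRay"}, benches))  # HiL-03
```

In practice the decision must also account for prioritization and live bench status, which is exactly the information ResourceAdapter keeps feeding to TEST-GUIDE.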

In practice, the search for the perfect match of test order and test bench is still mostly carried out manually, but this is not a very efficient method. With execution distribution, it is possible to automate this allocation. The setup requires only a few steps:

1. Install ECU-TEST
2. Install TEST-GUIDE
3. Install and configure ResourceAdapter
4. Configure data storage in TEST-GUIDE
5. Export playbook from ECU-TEST to TEST-GUIDE
6. Sit back and watch

Playbooks define the test order. They are created directly in ECU-TEST or TEST-GUIDE and exported to the TEST-GUIDE data storage via an integrated public interface. A playbook describes everything that is necessary for the execution of the test order – the workspace to be used including the relevant test cases, the parameterization of global constants, the test bench configuration, as well as the CleanUp after execution. TEST-GUIDE becomes active as soon as there are new playbooks in the TEST-GUIDE data storage. The tool independently organizes the distribution of test orders to suitably configured test benches in the order of their prioritization. It is supported by ResourceAdapter, a software component installed on the test benches. ResourceAdapter continuously reports vital data, configurations and test case executions of the test benches to the TEST-GUIDE server. Which test has already been run? Which HiL is stuck? Which HiL is idle right now? Such permanent monitoring is the basis for deciding live on which test bench the next test order can be executed using ECU-TEST.
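
A playbook can be imagined as a structured document along these lines. The field names below are purely illustrative and do not reflect TEST-GUIDE's real playbook schema; they simply mirror the elements named above (workspace, test cases, global constants, test bench configuration, CleanUp):

```python
import json

# Hypothetical playbook sketch - invented keys, not TEST-GUIDE's actual schema.
playbook = {
    "workspace": "git@example.com:team/ecu-tests.git",
    "testCases": ["DoorLock/Lock.pkg", "DoorLock/Unlock.pkg"],
    "globalConstants": {"VEHICLE_VARIANT": "EU", "VOLTAGE": 12.0},
    "testbenchConfiguration": {"features": ["body", "LIN"]},
    "cleanUp": ["reset_testbench", "archive_logs"],
    "priority": 10,
}

# Exported as JSON, a document like this could be handed to the data storage.
print(json.dumps(playbook, indent=2))
```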

However, execution distribution also solves another problem that often occurs. After a test order is allocated, all the conditions for test execution still have to be created at the test station: repositories need to be checked out, data copied back and forth, and additional dependencies resolved. ResourceAdapter does all this too – based on the description in the playbook.

This system can also be adapted very flexibly and individually via the REST API. The specific requirements for a test bench are read in JSON format and automatically resolved. This means that the right test bench – whether virtual or physical – can be found quickly.

Downstream analysis distribution

Classically, each automated test of new software artifacts is seamlessly followed by the associated analysis of the trace recordings created in the process (such as bus logs) and the test evaluation. But this approach is very time-consuming and not very practical for tests with countless variants and scenarios. Extensively configured hardware-in-the-loop (HiL) test benches are already in continuous operation specifically for testing and validating newly developed ECU software. With the rapidly growing proportion of software in vehicles and the ever-increasing scope of testing, the number of HiL test benches would actually have to grow as well. But this is rarely a solution because of the enormous cost and space requirements.

So, the declared goal is to achieve more performance with the same capacity. This leaves only the option of decoupling the test execution from the analysis and evaluation of recorded measurement data. The tests thus continue to run on the existing HiL test benches, while the analysis of the test results – the so-called trace analysis – is outsourced to other existing or cost-effective systems. The principle of downstream analysis distribution by TEST-GUIDE makes this possible. After ECU-TEST executes test orders on HiL test benches, analysis orders are created and transferred to a configured TEST-GUIDE. In combination with ResourceAdapter, which continuously provides information about available computing capacities, TEST-GUIDE distributes these orders to test-bench-independent ECU-TEST instances (PC, virtual machine, cloud). This is where the trace analyses are performed. The results, in the form of test reports, are automatically stored in TEST-GUIDE and provide the basis for detailed evaluations.
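
Conceptually, this decoupling amounts to two stages connected by a queue: the bench only executes and records, while independent workers drain the analysis queue later. A toy model of that principle (all names are invented for illustration):

```python
from collections import deque

# Toy model of downstream analysis distribution: test execution only enqueues
# analysis orders; independent workers (PC, VM, cloud) process them later.
analysis_queue = deque()

def execute_on_hil(test_case):
    """Run the test on the HiL and hand the trace off for later analysis."""
    trace = f"trace_of_{test_case}"            # recording produced on the bench
    analysis_queue.append((test_case, trace))  # the HiL is free again here

def analysis_worker():
    """Runs on a cheap, test-bench-independent machine."""
    reports = []
    while analysis_queue:
        test_case, trace = analysis_queue.popleft()
        reports.append(f"report({test_case}, {trace})")  # the trace analysis step
    return reports

for tc in ["TC_001", "TC_002", "TC_003"]:
    execute_on_hil(tc)

reports = analysis_worker()
print(reports)
```

The key property is visible in `execute_on_hil`: the bench is released as soon as the recording exists, regardless of when the analysis actually runs.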

In this way, valuable test resources are not blocked by evaluations and the test throughput can be increased significantly. In addition, the parallelization of evaluation allows further scaling of the test scopes to be executed.

Cache synchronization

Everyone is aware of the problem – the larger the file, the more extensive its content and the longer it takes to open. The same thing happens when reading bus/service databases and A2L files. These are very large databases with complex formats that must be read in by ECU-TEST each time the configuration starts, and parsed for further use. This process can take a very long time.

Therefore, the idea is to store this data centrally – in a cache – after it is read in for the first time. Caching is perfect when there are many users using the same database. They can access the same cache memory simultaneously and quickly. This not only enormously improves the efficiency of data retrieval and data return, but also significantly boosts the performance of ECU-TEST. What was just an idea before has now turned into reality. Try it out!
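
The principle behind such a shared cache can be sketched as a lookup keyed by a hash of the database file's content: if the input changes, the key changes, the lookup misses, and the parsed form is rebuilt. This is a simplified illustration of the idea, not ECU-TEST's actual caching mechanism:

```python
import hashlib

# Simplified illustration of content-addressed caching: the parsed form of a
# database is stored under a hash of its raw content, so a changed input
# automatically misses the cache and triggers a fresh parse.
cache = {}  # stands in for the shared network drive / depository storage

def parse(raw):
    """Stand-in for the expensive read-in of a bus/service database."""
    return {"signals": sorted(raw.split())}

def load_database(raw):
    key = hashlib.sha256(raw.encode()).hexdigest()
    if key not in cache:
        cache[key] = parse(raw)  # the first reader pays the cost once
    return cache[key]

a = load_database("EngineSpeed VehicleSpeed")
b = load_database("EngineSpeed VehicleSpeed")  # served from the cache
print(a is b)  # the second call returns the already parsed object
```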

The only requirement is to set up a central storage location for the cache in the workspace settings of ECU-TEST. After the database has been read in by ECU-TEST, the associated cache of the bus or service databases is stored encrypted on the configured network drive. However, since access to a network drive is usually not governed by a structured process, important artifacts can be accidentally deleted.


It is therefore better to store the cache for exchange in any configured TEST-GUIDE depository (e.g., Artifactory or S3). Users of other ECU-TEST instances can access the cache from there and thus avoid the time-consuming creation of their own cache. Access tokens are configured once in TEST-GUIDE for this purpose. If the input data changes, the cache is recreated and synchronized with the already stored cache. This makes starting the configuration much faster. In our own test environment, this feature allowed us to reduce the duration of our night run from over six hours to 2.5 hours.

... and unites all phases of testing

With the Automotive DevOps Platform, we go from the big picture to the details and unite all phases of vehicle software testing – from planning the test scopes to summarizing the test results. At the same time, continuous monitoring across all test phases always provides an overview of all activities – even with several thousand test executions per day and in different test environments.

Test Planning

In the planning phase, the test scopes within a test level are defined. The basis is formed by requirements that can be read in from a wide variety of sources – via direct connection of ALM tools (e.g., Octane, IBM RQM, Polarion), a proprietary API or by importing various exchange formats (e.g., ReqIF).

The speed of the first automated test execution with the TraceTronic Automotive DevOps Platform depends on the customer-specific prerequisites for an automated test process. More often than not, existing data structures and test structures need to be cleaned up and organized before anything else.

What tools are already available? Where are the requirements and test specifications managed? Is there a ticket system in place already? Which system is used for reporting? Where and how are the reports collected and evaluated? How does the work process work in general? The essential framework conditions are defined in advance with a questionnaire like this, or a similar instrument. The test manager knows exactly how the test suite is structured, and this knowledge is indispensable at this point. After all, the goal is to first map the existing test process and to efficiently integrate the available data into the Automotive DevOps Platform using interfaces.

Information about the test suites and test management systems is usually queried directly via API access to the individual tools. But script-based solutions can also be used for this. Our TraceTronic experts can help you customize them to your workflow. Once the content of the test suite is defined, the test coverage can be generated automatically. This gives the testers and the test manager insight into the current status of the test activities at any given time.

If the test suite is designed for a new requirement, the relevant tests and the associated set of all possible parameter values (parameter spaces) will be defined. Here, it is not the concrete parameter value that is decisive, rather, the focus is on the abstract definition of the essential variables.

What is to be tested with which test cases in physical (HiL) or virtual environments (MiL, SiL) is then planned. Here, the entire team has an up-to-date overview of the test scope and the status of the test process at all times.

If the release overview shows that not all the results were appropriate, the loop is fine-tuned and run through again. Test cases are adapted, modified or extended and then re-submitted for execution. As many times as necessary.

Test Development

In this step, the focus is on the development and management of requirements- and experience-based as well as reusable test cases for automated execution in both physical (HiL) and virtual test environments (SiL, MiL).

With each new requirement from vehicle manufacturers, software functions are developed for ECUs. These must be verified and validated using test suites. The scope of these test suites is variable and depends on the complexity of the vehicle function. At the very least, the specifications defined in the statutory standards for the newly developed vehicle functions must be tested. This can be done in the form of many real and virtual driving kilometers, but also on the basis of various scenarios in which the new driving functions are critically tested and evaluated in detail.

The testers design abstract test cases and test projects in ECU-TEST for the new software functions, across platforms, i.e., for physical test environments (HiL), as well as for virtual ones (MiL, SiL). The abstraction not only ensures the reusability of the test cases but it also means that the tests can be parameterized depending on requirements. All the test cases that are created, including all the required data, are managed in a version management system (Git, SVN). The test suites are then compiled based on these.

The test automation tool that is used to develop the test cases and test suites does not necessarily have to be ECU-TEST. Any other automotive tool, such as MATLAB or vTestStudio, can also be connected. This flexibility characterizes our Automotive DevOps Platform and reflects how we work at TraceTronic – we always put customer benefits first.

Trace Analysis Development

Trace analyses can be prepared for recorded measurement data, which significantly increases the test depth by means of validations and verifications of the recordings. Numerous recording formats (e.g., MATLAB, ASC, MDF, MP4, WAV) are supported.

During the execution of a test case, measurement data or traces are recorded. The purpose of trace analysis is to examine these in detail and with regard to different criteria. It starts with the requirements that are placed on a signal course and ends with the fully automated generation of a test report. The goal is to analyze multiple traces from different recording sources along a common, synchronized timeline.

Trace analysis development is similar to test case development, both in terms of logic and in terms of sequence. The recorded signals of different tools are analyzed measuring point by measuring point, independently of the measuring grid. Meaningful diagrams help to provide a quick overview of the places where the signal characteristics deviate from the given requirements.

A trace analysis is generic, i.e., it is universally applicable and hence easy to reuse. The signals to be read are also specified only when the analysis is to be performed for a specific test case.
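
This generic character can be illustrated by a check that is written once and only bound to a concrete signal when the analysis runs for a specific test case. A simplified sketch in plain Python, not TRACE-CHECK syntax; the signal names and limits are invented:

```python
# Simplified sketch of a generic trace check: the rule is defined once, and the
# concrete signal name and limits are supplied only at evaluation time.

def check_in_range(trace, signal, lo, hi):
    """Verify that every (timestamp, value) sample of `signal` stays in [lo, hi]."""
    violations = [(t, v) for t, v in trace[signal] if not lo <= v <= hi]
    return {"signal": signal, "passed": not violations, "violations": violations}

# Recorded trace: (timestamp, value) samples per signal (invented example data).
trace = {
    "EngineSpeed": [(0.0, 800), (0.1, 950), (0.2, 7200)],  # last sample too high
    "BatteryVoltage": [(0.0, 12.1), (0.1, 12.4), (0.2, 12.3)],
}

print(check_in_range(trace, "EngineSpeed", 0, 7000)["passed"])        # False
print(check_in_range(trace, "BatteryVoltage", 11.5, 14.5)["passed"])  # True
```

The same `check_in_range` rule is reused unchanged across signals, test cases and test environments – only the binding differs.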

Strictly speaking, however, it is not just signal characteristics that can be evaluated. Image, audio and video recognition are also possible use cases for the analysis. This is possible because TRACE-CHECK, the tool for trace analysis, reads and analyzes the measurement data of all commonly used automotive formats (CSV, MDF, DAT, MAT, STI/STZ, ASC, WAV, AVI, and many more).

Regardless of the test environment (MiL, SiL, HiL or vehicle) that is used to generate the traces, generic trace analyses can be used to find and display anomalies.

Testing

Intelligent and integrated test execution ensures that tests are automatically distributed to available test stations. At any time, the best possible utilization can be ensured and the current status of the test resources can be monitored.

The classic test case execution consists of the stimulation, recording and evaluation of signals – test case by test case. To avoid having to trigger each one individually, the test cases are bundled into test suites (so-called ECU-TEST projects) and executed together. However, this alone is not sufficient for optimum automation: triggering projects one after the other in a test environment is already a big step forward, but it does not yet make testing very efficient. It is better to use all the available resources for the projects to be executed so as to avoid wasting time. If the distribution of test cases and projects to the free resources is also automated, even more time is saved. Within the Automotive DevOps Platform, this part is taken over by TEST-GUIDE – supported by playbooks and the ResourceAdapter.

Playbooks can be created in ECU-TEST and TEST-GUIDE and ensure particularly agile handling when test jobs are to be executed in parallel and distributed across several physical or virtual test benches. They describe what is necessary for the execution of the test order – the workspace, the test cases, parameterizations, configurations and the CleanUp after the execution.

The ResourceAdapter is configured on the test environments and organizes the communication with TEST-GUIDE. It automatically reports all important information about the available infrastructure to TEST-GUIDE and also takes care of the necessary preparations for the test execution with ECU-TEST on the respective test bench.

The location where the test execution takes place can be decided individually. Whether on HiL test benches or in the cloud (MiL, SiL) – everything is possible.

Trace Analysis

The test station-independent and downstream analysis of the measurement data – decoupled from the stimulation – enables the results to be reused and the test system to be made available earlier for further runs.

Testing and analysis belong together, like starting and braking. However, the two do not necessarily have to be performed immediately one after the other. This means that the traces can also be analyzed downstream, on any PC. Sought-after test benches thus become available for testing again sooner, and the test throughput is also increased significantly.

Recordings made during the test run and the defined trace analyses are transferred automatically to TEST-GUIDE as an analysis job. The tool then organizes the available resources for trace analysis on its own again using the ResourceAdapter.

Does the test object meet the test specification? Is the measuring device functioning correctly? Has the correct measuring range been selected? All details of the trace analysis and the stimulation are presented clearly in a test report for each individual step of the analysis and then stored automatically in TEST-GUIDE.

Test Result Review

With several thousand test executions a day, it is important to have support for evaluating results and following up on anomalies. Errors that occur can be recorded and forwarded to a connected ticket system (e.g., Jira, Redmine, Octane).

The further the automation of test executions progresses, the higher the test throughput. As a result, countless test results have to be sifted through and evaluated, which can no longer be done without tool support. In addition to a wide range of filter options for a detailed view of the results, TEST-GUIDE has an integrated correction process – the review. This helps to detect and clearly display any errors and inconsistencies that occur during vehicle software testing.

If errors are discovered during a test run, a review can be created at the push of a button. The decisive factor here is the detailed documentation of the error that has occurred. This is the only way to trace errors easily and rectify them quickly.

The best prerequisites for rapid troubleshooting exist with the connection to defect management systems such as Jira, Redmine, HP ALM.net or Octane. This means that errors can be handed over to development as a ticket straight away and scheduled quickly into the vehicle software development process, without having to log into the defect management system itself. In addition, everyone involved is kept informed via e-mail notification and has a constant overview of the processing status.
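
Such a handover typically boils down to posting a structured payload to the ticket system's REST API. For Jira, for example, an issue-creation payload looks roughly like the sketch below; the project key, summary and description are invented, and the exact fields depend on the target system's configuration:

```python
import json

# Hedged sketch of a defect handover payload in the general shape Jira's REST
# API expects for issue creation. Project key and texts are invented examples.
def build_ticket(summary, description, project_key="VSW"):
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Bug"},
        }
    }

ticket = build_ticket(
    "Test TC_042 failed on HiL-03",
    "EngineSpeed exceeded the permitted range at t=0.2 s; see attached test report.",
)
print(json.dumps(ticket, indent=2))
```

In a real integration, the review's error documentation and a link to the test report would be carried along in the payload so the defect remains traceable end to end.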

The reviews that are performed on each test case can be included in the individual test completion reports. These provide information about passed, failed and erroneous test cases.

Test Result Release

Release represents the conclusion of the test activities. The summary of test results in release and coverage overviews is based on the test scopes defined at the beginning. The link between planning, implementation and results forms the basis for producing informative final reports.

Once the test cases for a new driving function have been executed and analyzed, the release overview in TEST-GUIDE provides information about the entire test process. Combined with the coverage, the release as a summary of the test results acts as the ideal decision-making basis for the release and for the delivery of a new software version, and provides a good overview of the initially designed test scopes.

Open items can be flagged at precisely this point – such as whether the depth of testing was sufficient, whether the test cases were varied enough, whether the results obtained were meaningful, or how high the quality of the current software status ultimately is.

To support this assessment, an integrated final report (Excel, PDF) summarizes the determined test results and the infrastructure data specifically: comprehensively, clearly and, above all, in a manner that is transparent for everyone involved.

In addition to this, all the test results can also be fed back into the customer's own test management systems (e.g., HP-ALM, Octane, Jama, Polarion), because TEST-GUIDE can be integrated in a flexible manner with various automotive tool chains.

The lessons learned from the release and coverage will be carried over into the planning phase for subsequent iterations. To exclude subsequent changes, completed releases can be locked.

Monitoring

Monitoring is a continuous phase that accompanies development and provides an overview of all activities from planning to the summary of test results. By means of a clear dashboard, progress can be monitored and measures can be derived to deal with problems in good time.

Sustainable Continuous Integration solutions are really intelligent only if every step of the test process can be traced by means of consistent monitoring. It is only then that planning, execution and analysis can intermesh systematically and run in a completely automated manner. This function is becoming increasingly important in view of the growing digitization in vehicles and the resulting increase in the number of test executions and test resources.

Especially with large numbers of test executions, it is important to have everything under control. Which test benches or test environments are connected? What is the utilization rate? What condition is a test bench in? Which test case is currently being executed on which test bench? How high is the test case coverage? And which test executions have failed? Within the Automotive DevOps Platform, this comprehensive monitoring is handled by TEST-GUIDE – with test report management including coverage, and the monitoring of available test resources using the ResourceAdapter.

All data are merged and made available for various views and export options in TEST-GUIDE, with a live status of the test executions at every moment.

This effectively answers the question of how to cope with the growing number of test executions of increasingly complex vehicle software. Now is the time to rev it up – because the Automotive DevOps Platform can test on all systems, around the clock.

Contact

Does this sound like the solution to your problem? Then contact our sales team and arrange an appointment with us.

Were we not able to answer all your questions in this short space? Then get in touch with our sales team to get the information you still need.

Product Demos

Would you like to see the latest features? Then browse through our product demos.