Automated Test Case Selection for Flight Systems using Genetic Algorithms

2010 
Without rigorous system verification and validation (SVV), flight systems have no assurance that they will actually accomplish their objectives (i.e., the right system was built) or that the system was built to specification (i.e., the system was built correctly). As system complexity grows, exhaustive SVV becomes time and cost prohibitive as the number of interactions explodes in an exponential or even combinatorial fashion. Consequently, JPL and others have resorted to selecting test cases by hand based on engineering judgment, or to stochastic methods such as Monte Carlo sampling. These two approaches sit at opposite ends of the search spectrum: one is narrow and focused, the other broad and shallow. This paper describes a novel approach to test case selection through the use of genetic algorithms (GAs), a type of heuristic search technique based on Darwinian evolution that effectively bridges the gap between broad and narrow search for test cases. More specifically, this paper describes the Nemesis framework for automated test case generation, execution, and analysis using GAs. Results are presented for the Dawn Mission flight testbed.

I. Introduction

Finding the fatal flaws or vulnerabilities of complex systems requires thorough testing. In the traditional approach for validating such systems, an expert selects a few key high-fidelity test scenarios that he or she believes will most likely uncover problems. Each of the cases is crafted and evaluated by hand. Sometimes, the test engineer adapts his strategy as he goes along, using interesting results from one test case to guide the selection of new cases. The usefulness of these tests in finding flaws can be limited by the biases and assumptions of the expert in the selection process. Another approach augments the expert selection process with scripting to walk through many scenarios, traversing values of various test parameters.
Evaluation can be automated with test result scoring to prioritize review team attention according to features found in the test results. Unfortunately, this approach wastes valuable test time on families of similar cases with little new information gained. Furthermore, because the test team must wade through a large volume of results, there is less opportunity to adapt the approach to what is discovered along the way.

In this paper, we describe the application of genetic algorithms to automated test case selection, exploiting the advantages of both adaptive expert case selection and automated test space exploration by evolving test scenarios that expose the vulnerabilities of a system under test (SUT) according to models and scoring functions defined by the test team. The test team controls the scope of test space coverage through what they choose to include in the model, and controls the search priorities through the definition of the fitness function that guides the evolutionary search. Furthermore, the starting point for the search is manually specified, allowing the system to cover the ground initially defined as important by the test engineers, continuing the search into other areas as well, and adapting the search to examine more closely those areas with tell-tale signs of stress.

This paper is organized into the following sections: (I) Introduction, (II) Genetic Algorithm Background, (III) Detailed Approach, (IV) Results, and (V) Conclusion.
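The workflow described above can be sketched as a simple generational GA loop: a hand-specified seed population of test scenarios is recombined and mutated, and a team-defined fitness (scoring) function steers the search toward stressful regions of the parameter space. The scenario encoding, parameter names, operators, and toy fitness function below are illustrative assumptions for exposition, not the Nemesis implementation.

```python
import random

random.seed(42)

# A test scenario is a dict of SUT parameter values.
# These parameter names and ranges are hypothetical.
PARAM_RANGES = {
    "thrust_level": (0.0, 1.0),
    "comm_delay_s": (0.0, 300.0),
    "fault_time_s": (0.0, 3600.0),
}

def random_scenario():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def fitness(scenario):
    # Stand-in for a team-defined scoring function: here, scenarios combining
    # high thrust with long comm delays are scored as more "stressful".
    return scenario["thrust_level"] * scenario["comm_delay_s"]

def crossover(a, b):
    # Uniform crossover: each parameter inherited from either parent.
    return {k: random.choice((a[k], b[k])) for k in PARAM_RANGES}

def mutate(scenario, rate=0.2):
    # Re-draw each parameter with probability `rate`.
    out = dict(scenario)
    for k, (lo, hi) in PARAM_RANGES.items():
        if random.random() < rate:
            out[k] = random.uniform(lo, hi)
    return out

def evolve(seed_population, generations=30, pop_size=20, elite=2):
    # Seed with engineer-specified scenarios, pad with random ones.
    pop = list(seed_population)
    pop += [random_scenario() for _ in range(pop_size - len(pop))]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        nxt = pop[:elite]  # elitism: best scenarios survive unchanged
        while len(nxt) < pop_size:
            # Truncation selection: breed from the top half of the population.
            a, b = random.sample(pop[: pop_size // 2], 2)
            nxt.append(mutate(crossover(a, b)))
        pop = nxt
    return max(pop, key=fitness)

# Engineer-specified starting point for the search.
seeds = [{"thrust_level": 0.5, "comm_delay_s": 60.0, "fault_time_s": 100.0}]
best = evolve(seeds)
```

Because elitism preserves the best scenario each generation, the final result can never score worse than the hand-picked seed, mirroring how the search "covers the ground initially defined as important" before moving outward.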