Equivalence of regular languages: an algorithmic approach and complexity analysis

2010 
Over the last few decades, several alternative algorithms have been proposed for the minimisation and equivalence problems of finite automata and regular expressions. Authors typically present a worst-case time complexity analysis of their algorithms, but that alone does not reveal their practical behaviour. The lack of implementations of several of these algorithms within a single framework, especially of some of the older ones, makes it difficult to determine their running-time characteristics. Using the C and Python programming languages, we implemented random generators of finite automata and regular expressions, several automata minimisation algorithms (including an original, incremental one), some functional variants of a rewrite system that tests the equivalence of two regular expressions without explicitly converting them to equivalent automata, and a set of algorithms that test the equivalence of two finite automata without resorting to a minimisation process. After integrating these tools into the FAdo toolkit, and using more than 300 GB of random samples, we experimentally compare the relative performance of these algorithms. The size of the data sets is sufficient to ensure confidence intervals of 95.7%–99.5%, within a 1% error margin.
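One of the families of algorithms mentioned above tests whether two DFAs are equivalent without minimising either one; the classic representative is Hopcroft and Karp's almost-linear procedure, which merges reachable state pairs in a union-find structure and fails as soon as an accepting state is paired with a rejecting one. The sketch below illustrates that idea; the dictionary-based DFA representation and the function name are illustrative assumptions, not FAdo's actual API:

```python
from collections import deque

def dfa_equivalent(d1, d2):
    """Test equivalence of two complete DFAs over the same alphabet
    without minimising them, in the style of Hopcroft and Karp's
    union-find algorithm. Each DFA is a dict (hypothetical layout):
      'start' - initial state
      'delta' - dict mapping (state, symbol) -> state
      'final' - set of accepting states
      'sigma' - the common alphabet
    The state sets of the two DFAs are assumed to be disjoint."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    union(d1['start'], d2['start'])
    queue = deque([(d1['start'], d2['start'])])
    while queue:
        p, q = queue.popleft()
        # Merged states must agree on acceptance, or the DFAs differ.
        if (p in d1['final']) != (q in d2['final']):
            return False
        for a in d1['sigma']:
            p2, q2 = d1['delta'][(p, a)], d2['delta'][(q, a)]
            if find(p2) != find(q2):
                union(p2, q2)
                queue.append((p2, q2))
    return True
```

Because each reachable pair is enqueued at most once and union-find operations are nearly constant-time, the whole test runs in almost linear time in the number of transitions, which is what makes it attractive compared with minimising both automata first.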