Hyperparameter Optimization for Effort Estimation

2018 
Software analytics has been widely used in software engineering for many tasks, such as generating effort estimates for software projects. One of the "black arts" of software analytics is tuning the parameters that control a data mining algorithm. Such hyperparameter optimization has been widely studied in other software analytics domains (e.g. defect prediction and text mining) but, so far, has not been extensively explored for effort estimation. Accordingly, this paper seeks simple, automatic, effective, and fast methods for finding good tunings for automatic software effort estimation. We introduce a hyperparameter optimization architecture called OIL (Optimized Inductive Learning). We test OIL on a wide range of hyperparameter optimizers using data from 945 software projects. After tuning, large improvements in effort estimation accuracy were observed (measured in terms of the magnitude of the relative error and standardized accuracy). From those results, we can recommend using regression trees (CART) tuned by either differential evolution or MOEA/D. This particular combination of learner and optimizers often achieves in one or two hours what other optimizers need days to weeks of CPU time to accomplish. An important part of this analysis is its reproducibility and refutability: all our scripts and data are online. It is hoped that this paper will prompt and enable much more research on better methods to tune software effort estimators.
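The recommended combination (CART tuned by differential evolution) can be sketched as follows. This is a minimal illustration in the spirit of the abstract, not the authors' OIL implementation: it uses scipy's `differential_evolution` to search two hypothetical CART hyperparameters (`max_depth`, `min_samples_leaf`) on synthetic stand-in data, minimizing cross-validated mean absolute error rather than the paper's exact MRE/SA measures.

```python
# Hedged sketch: tuning CART (DecisionTreeRegressor) hyperparameters with
# differential evolution. Dataset and search ranges are illustrative only.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
# Synthetic stand-in for an effort-estimation dataset (features -> effort).
X = rng.random((120, 5))
y = 100 * X[:, 0] + 30 * X[:, 1] + rng.normal(0, 5, 120)

def objective(params):
    # DE works on continuous vectors; round to the integer hyperparameters.
    max_depth = int(round(params[0]))
    min_samples_leaf = int(round(params[1]))
    tree = DecisionTreeRegressor(max_depth=max_depth,
                                 min_samples_leaf=min_samples_leaf,
                                 random_state=0)
    # Cross-validated mean absolute error; DE minimizes this value.
    scores = cross_val_score(tree, X, y, cv=3,
                             scoring="neg_mean_absolute_error")
    return -scores.mean()

result = differential_evolution(objective,
                                bounds=[(1, 12), (1, 20)],
                                maxiter=10, seed=0, polish=False)
best_depth = int(round(result.x[0]))
best_leaf = int(round(result.x[1]))
print(best_depth, best_leaf, round(result.fun, 2))
```

Because each fitness evaluation only trains a small regression tree, runs like this finish in seconds, which is consistent with the abstract's point that this learner/optimizer pairing is orders of magnitude cheaper than heavier optimizers.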