Comparative analysis of predictive techniques for release readiness classification

2016 
Context: A software release is the deployment of a version of an evolving software product. Product managers are typically responsible for deciding the content, time frame, price, and quality of a release. Due to dynamic changes in project and process parameters, this decision is highly complex and high-impact.

Objective: This paper has two objectives: i) a comparative analysis of predictive techniques for classifying an ongoing release in terms of its expected release readiness, and ii) a comparative analysis of regular versus ensemble classifiers for the same classification task.

Methodology: We use machine learning classifiers to predict release readiness. We analyzed three OSS projects under the Apache Software Foundation, using data from their JIRA issue repositories. As a retrospective study, we covered a period of 70 months, 85 releases, and 1696 issues. We monitored eight established variables and used them to train classifiers that predict whether a release will be ready or non-ready. The predictive performance of the classifiers was compared using precision, recall, F-measure, balanced accuracy, and area under the ROC curve (AUC).

Results: Comparative analysis among nine classifiers revealed that ensemble classifiers significantly outperform regular classifiers. Balancing precision and recall, Random Forest and BaggedAdaBoost were the two best performers overall, while Naive Bayes performed best among the regular classifiers.
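To make the comparison concrete, the following is a minimal scikit-learn sketch that pits the study's best regular classifier (Naive Bayes) against the two top ensemble methods (Random Forest and a bagged AdaBoost) and scores them with the same five metrics. The synthetic data, feature construction, and the exact composition of BaggedAdaBoost are assumptions for illustration only, not the paper's actual dataset or implementation.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_validate

# Synthetic stand-in for the study's data: 85 releases described by
# eight monitored variables, labelled ready (1) vs. non-ready (0).
# The real feature set and labels are not reproduced here.
rng = np.random.default_rng(0)
X = rng.normal(size=(85, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=85) > 0).astype(int)

classifiers = {
    "NaiveBayes": GaussianNB(),  # best-performing regular classifier in the study
    "RandomForest": RandomForestClassifier(random_state=0),
    # Assumed construction: bagging over AdaBoost base learners.
    "BaggedAdaBoost": BaggingClassifier(AdaBoostClassifier(random_state=0),
                                        random_state=0),
}

# The five evaluation metrics reported in the paper.
scoring = ["precision", "recall", "f1", "balanced_accuracy", "roc_auc"]

for name, clf in classifiers.items():
    scores = cross_validate(clf, X, y, cv=5, scoring=scoring)
    summary = ", ".join(f"{m}={scores['test_' + m].mean():.2f}" for m in scoring)
    print(f"{name}: {summary}")
```

Reporting several metrics side by side matters here because release-readiness data is typically imbalanced (most releases ship); balanced accuracy and AUC guard against a classifier that scores well simply by predicting the majority class.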