Aggregating predictions of multi-models for the small dataset learning tasks in the TFT-LCD process

2018 
Compressing new product development schedules has become an important strategy for firms seeking greater market share under globalized competition. Nevertheless, engineers often face small-data learning problems in this context, since only a few pilot runs are allowed so as not to lower the yield rates of mass production. Bagging has been shown to be an effective approach for dealing with small data. However, when bagging adopts multiple models to learn numeric forecasting tasks without prior weights, it typically averages the models' predictions to obtain a compromise; such averages are easily distorted by extreme values and thus become less precise. Accordingly, this study develops a systematic procedure that generates weights for prediction aggregation by employing box-and-whisker plots to model the distributions of the algorithms' predictive errors with membership functions. In the experiment, a real case taken from a thin film transistor liquid crystal display (TFT-LCD) maker is studied with four distinct algorithms and bagging; the results show that the errors of the predictions aggregated with the proposed weights are, in general, significantly lower than those of the four algorithms and of the averaging method in bagging.
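The weighting idea can be illustrated with a minimal sketch. The paper's actual procedure builds fuzzy membership functions from box-and-whisker plots of each algorithm's predictive errors; the simplified stand-in below only uses one box-plot statistic, the interquartile range (IQR), and weights each model inversely to the spread of its validation errors before aggregating predictions. All function names, the toy error samples, and the IQR-based weighting rule are illustrative assumptions, not the authors' method:

```python
import numpy as np

def boxplot_weights(errors_per_model):
    """Weight each model inversely to the IQR of its prediction
    errors -- a simplified stand-in for the paper's box-and-whisker
    membership functions. Returns weights that sum to 1."""
    iqrs = np.array([np.subtract(*np.percentile(e, [75, 25]))
                     for e in errors_per_model])
    iqrs = np.where(iqrs == 0, 1e-12, iqrs)  # guard against zero spread
    w = 1.0 / iqrs
    return w / w.sum()

def aggregate(preds_per_model, weights):
    """Weighted aggregation of per-model prediction vectors,
    replacing the plain average used in standard bagging."""
    return np.asarray(preds_per_model).T @ weights

# toy example: validation errors of three models on pilot-run data
errors = [np.array([0.10, 0.20, 0.15]),   # tight spread -> high weight
          np.array([0.50, 1.00, 0.10]),   # wide spread  -> low weight
          np.array([0.20, 0.40, 0.30])]
w = boxplot_weights(errors)

# each model's predictions for two new test points
preds = [np.array([10.0, 11.0]),
         np.array([14.0, 15.0]),
         np.array([10.5, 11.5])]
y_hat = aggregate(preds, w)  # pulled toward the low-spread models
```

Because the widest-spread model receives the smallest weight, an outlying prediction from it shifts the aggregate far less than it would under simple averaging, which is the failure mode the abstract attributes to unweighted bagging.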