Random Number Generators in Training of Contextual Neural Networks
2021
Much care should be given to cases where results of machine learning (ML) experiments performed with different Pseudo Random Number Generators (PRNGs) need to be compared. This is because the selection of the PRNG can be regarded as a source of measurement error, e.g. in repeated N-fold Cross Validation (CV). It can also be important to verify that the observed properties of a model or algorithm are not artifacts of a particular PRNG. In this paper we conduct experiments to observe the level of differences in the values of various classification quality measures of simple Contextual Neural Networks and Multilayer Perceptron (MLP) models across PRNGs. We show that the results for some pairs of PRNGs can differ significantly even for a large number of repeats of 5-fold CV. Our observations suggest that when different ML models and algorithms are compared with 5-fold CV performed with different PRNGs, the confidence interval should be doubled or a confidence level higher than 95% should be used. Additionally, we show that even under such conditions the classification properties of Contextual Neural Networks are found to be statistically better than those of non-contextual MLP models.
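To illustrate the kind of comparison the abstract describes, the following is a minimal Python sketch (not the paper's code) of repeated 5-fold CV on an MLP, with fold shuffling driven by three different NumPy PRNGs (MT19937, PCG64, Philox), reporting a 95% confidence interval per generator. The dataset, network size, and repeat count are illustrative assumptions.

import numpy as np
from numpy.random import MT19937, PCG64, Philox, Generator
from scipy import stats
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)

def repeated_cv_scores(bit_generator_cls, repeats=10):
    """Mean test accuracy of each repeat of 5-fold CV, with fold
    shuffling driven by the given NumPy bit generator.
    (repeats=10 is kept small here for speed; the paper's setting
    would use far more repeats.)"""
    scores = []
    for seed in range(repeats):
        rng = Generator(bit_generator_cls(seed))
        # Derive an integer seed for scikit-learn's fold shuffler from
        # this PRNG, so the fold assignment depends on the chosen generator.
        cv = StratifiedKFold(n_splits=5, shuffle=True,
                             random_state=int(rng.integers(2**32)))
        model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                              random_state=seed)
        scores.append(cross_val_score(model, X, y, cv=cv).mean())
    return np.asarray(scores)

for prng in (MT19937, PCG64, Philox):
    s = repeated_cv_scores(prng)
    # 95% confidence interval for the mean accuracy across repeats
    ci = stats.t.interval(0.95, len(s) - 1, loc=s.mean(), scale=stats.sem(s))
    print(f"{prng.__name__}: mean={s.mean():.4f}, "
          f"95% CI=({ci[0]:.4f}, {ci[1]:.4f})")

In this sketch the generators influence only fold assignment (plus a separate weight-initialization seed), so overlapping intervals are expected for small repeat counts; the paper's point is precisely that distinguishing PRNG effects from noise requires many repeats and widened confidence intervals.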