Sound classification during Seabed Classification Experiment 2017

2021 
In ocean acoustics, locating acoustic signals within long recordings is usually time-consuming. To optimize this process, we propose an experiment to find an optimal acoustic signal classification model using the PyTorch deep learning package. The machine learning algorithm was designed to recognize and classify various sources from an input of spectrograms compiled from raw sound data. For continuous audio files, the model's purpose is to identify areas of interest for human operators, so they do not need to listen through every hour of audio and mark down noises as they are heard. Four convolutional neural networks (CNNs) with differing numbers of layers took one-minute spectrograms as input. The most successful of the CNNs acted on time-averaged spectrograms and achieved a high degree of accuracy. This machine learning algorithm can help identify underwater sound signal sources and can more efficiently identify when different signals are present in long audio files. The results of these tests imply that time-averaging spectrograms may improve the identification of long-term signal sources by a CNN. [Research supported by the NSF REU program.]
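The pipeline described above (spectrogram in, source class out, with optional time averaging) can be sketched as follows. This is a minimal illustration, not the paper's actual model: the layer counts, kernel sizes, number of classes, and the time-averaging block size are all assumptions.

```python
import torch
import torch.nn as nn

class SpectrogramCNN(nn.Module):
    """Hypothetical small CNN classifying a (freq x time) spectrogram.

    The real experiment compared four CNNs with differing depths; this
    two-convolution sketch only illustrates the overall architecture.
    """

    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Global average pooling makes the head independent of input size.
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x):
        # x: (batch, 1, freq_bins, time_frames) -> (batch, n_classes) logits
        return self.head(self.features(x))

def time_average(spec, block=8):
    """Average blocks of consecutive time frames of a (freq x time)
    spectrogram, smoothing short-lived noise before classification."""
    f, t = spec.shape
    t_trim = t - t % block  # drop any incomplete trailing block
    return spec[:, :t_trim].reshape(f, t_trim // block, block).mean(dim=2)

# Usage: a one-minute spectrogram with 128 frequency bins and 600 frames.
spec = torch.randn(128, 600)
averaged = time_average(spec, block=8)          # -> (128, 75)
logits = SpectrogramCNN()(averaged[None, None])  # -> (1, 4)
```

In this sketch, time averaging simply reduces the temporal resolution of the spectrogram before it reaches the network, which is one plausible reading of the abstract's "time-averaged spectrograms."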