FathomNet: A global underwater image training set for enabling artificial intelligence in the ocean

2021 
Ocean-going platforms are integrating high-resolution camera feeds for observation and navigation, producing a deluge of visual data. The volume and rate of this data collection can rapidly outpace researchers' ability to process and analyze it. Recent advances in machine learning enable fast, sophisticated analysis of visual data, but have had limited success in the oceanographic world due to a lack of dataset standardization, sparse annotation tools, and insufficient formatting and aggregation of existing, expertly curated imagery for use by data scientists. To address this need, we have built FathomNet, a public platform that makes use of existing (and future) expertly curated data. Initial efforts have leveraged MBARI's Video Annotation and Reference System and annotated deep-sea video database, which has more than 7M annotations, 1M framegrabs, and 5k terms in the knowledgebase, with additional contributions by the National Geographic Society (NGS) and NOAA's Office of Ocean Exploration and Research. FathomNet has over 100k localizations of 1k midwater and benthic classes, and contains iconic and non-iconic views of marine animals, underwater equipment, debris, and other objects. We will demonstrate how machine learning models trained on FathomNet data can be applied across video data from different institutions (e.g., NGS' Deep Sea Camera System and NOAA's ROV Deep Discoverer), and enable automated acquisition and tracking of midwater animals using MBARI's ROV MiniROV. As FathomNet continues to develop and incorporate more image data from other oceanographic community members, this effort will enable scientists, explorers, policymakers, storytellers, and the public to understand and care for our ocean.
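The cross-institutional transfer described above typically amounts to fine-tuning a general-purpose object detector on FathomNet localizations. The snippet below is a minimal sketch of that step, not the authors' actual pipeline: it adapts a COCO-pretrained torchvision Faster R-CNN to a FathomNet-scale label set. The class count, weight choice, and annotation format are illustrative assumptions, not details from the abstract.

```python
# Minimal sketch: adapt a pretrained detector to FathomNet-style classes.
# Assumptions (not from the abstract): torchvision Faster R-CNN as the
# detector, and ~1k object classes exported from FathomNet localizations.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 1000 + 1  # ~1k FathomNet classes plus the background class

# Start from COCO-pretrained weights, then replace the classification head
# so the box predictor outputs scores for the FathomNet label set.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# From here, fine-tune on (image, {"boxes", "labels"}) targets built from
# FathomNet bounding-box annotations, e.g. exported in COCO format.
```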