Towards an ontology design pattern for UAV video content analysis

2019 
Video scene understanding is driving increased research investment in artificial intelligence, pattern recognition, and computer vision, especially with advances in sensor technologies. Developing autonomous unmanned vehicles able to recognize not just the targets appearing in a scene but the complete scene the targets are involved in (describing events, actions, situations, etc.) is becoming crucial in recent advanced intelligent surveillance systems. Alongside these consolidated technologies, Semantic Web technologies are also emerging, offering seamless support for high-level scene understanding. To this end, the paper proposes a systematic ontology model to support and improve video content analysis by generating a comprehensive high-level scene description through semantic reasoning and querying. The ontology schema integrates new and existing ontologies and provides design pattern guidelines for obtaining a high-level description of a whole scenario. It starts from the description of the basic targets in the video scene, supported by video tracking and target classification algorithms; it then provides a higher-level interpretation, compounding event-driven target interactions (for local activity comprehension), gradually reaching a level of abstraction that enables a concise and complete scenario description.
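The layering the abstract outlines — classified targets feeding event-driven interactions, which are then compounded into one scene-level description — can be sketched as follows. This is a minimal illustrative sketch only: the class names (`Target`, `Interaction`, `Scenario`) and the event vocabulary are hypothetical placeholders, not taken from the paper's ontology, and a real system would model these as OWL classes and properties queried via SPARQL rather than plain Python objects.

```python
from dataclasses import dataclass

# Hypothetical sketch of the abstract's layered scene description:
# tracked targets -> event-driven interactions -> concise scenario summary.

@dataclass
class Target:
    tid: str
    category: str          # output of the target-classification stage

@dataclass
class Interaction:
    event: str             # local activity, e.g. "approaches" (illustrative)
    participants: tuple    # pair of target ids involved in the interaction

@dataclass
class Scenario:
    targets: list
    interactions: list

    def describe(self) -> str:
        """Compound low-level facts into one high-level scene description."""
        cats = {t.tid: t.category for t in self.targets}
        parts = [f"{cats[a]} {a} {ev.event} {cats[b]} {b}"
                 for ev in self.interactions
                 for a, b in [ev.participants]]
        return "; ".join(parts)

scene = Scenario(
    targets=[Target("t1", "vehicle"), Target("t2", "person")],
    interactions=[Interaction("approaches", ("t1", "t2"))],
)
print(scene.describe())  # vehicle t1 approaches person t2
```

In the ontology-based setting the paper describes, the `describe` step would instead be realized by a reasoner materializing scene-level facts and a query retrieving them.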