Neural Methods for Imagery, GMTI, and Information Fusion

2006 
Abstract: This work addressed the development and application of neural models of multi-sensor, multi-modal data and information fusion at Levels 0, 1, 2, and 2+/3 of the JDL Data Fusion Group Process Model. To support multi-sensor IMINT and GMTI fusion and 3D visualization, we constructed a 3D site model of the docks and surrounding areas in Mobile, AL, which enables search using our existing image-mining tools and provides a COP environment in which scenarios can be simulated and visualized. We developed software for simulating traffic and scripting the movements of individual vehicles to support scenario creation. We explored several new concepts to support higher-level information fusion at Levels 2+/3. One approach, derived from insights into neural processing, used dynamic spiking information networks and their synchronization. These networks can bind data and semantic knowledge in the form of relationships and learned associations among represented concepts, and we demonstrated the feasibility of using them to learn simple associations between moving vehicles in a dynamic urban scenario within the Mobile dataset. A second approach extracted knowledge structures from imagery and/or text data: two mechanisms were developed for discovering taxonomies from concept co-occurrences within a dataset, and we demonstrated their efficacy on fused imagery and textual corpora. A final approach used neurally inspired mechanisms to learn models of normal behavior from moving tracked entities; these models were then used to detect anomalous behavior and to predict the future track locations of those entities. Learning, detection, and prediction all occur in real time with little or no operator input.
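The abstract does not specify the spiking-network architecture, so the following is only a minimal sketch of the general idea: two leaky integrate-and-fire units driven by correlated "detections" of two co-moving vehicles, with a simple Hebbian rule that strengthens the connection between units that fire together. All constants, the toy input process, and the learning rule are illustrative assumptions, not the report's method.

```python
import numpy as np

# Minimal sketch, assuming leaky integrate-and-fire units and a plain
# Hebbian rule (the report's network is not described in the abstract).
# Correlated detection events for two co-moving vehicles produce
# coincident spikes, which strengthen the association weight A -> B.

rng = np.random.default_rng(0)

TAU, DT = 20.0, 1.0      # membrane time constant and step (ms)
V_THRESH = 1.0           # firing threshold
ETA, W_MAX = 0.05, 1.0   # Hebbian learning rate and weight ceiling
DRIVE = 1.2              # input kick per detection event (assumed)

v = np.zeros(2)              # membrane potentials of units A and B
spikes = np.zeros(2, bool)   # spikes from the previous step
w_ab = 0.0                   # learned association weight A -> B

for step in range(2000):
    common = rng.random() < 0.05  # both vehicles detected moving together
    drive = np.array([
        DRIVE if (common or rng.random() < 0.015) else 0.0,
        DRIVE if (common or rng.random() < 0.015) else 0.0,
    ])
    drive[1] += w_ab * spikes[0]      # learned lateral drive A -> B

    v = v * (1.0 - DT / TAU) + drive  # leaky integration
    spikes = v >= V_THRESH
    v[spikes] = 0.0                   # reset after firing

    if spikes[0] and spikes[1]:       # coincident firing: Hebbian update
        w_ab = min(W_MAX, w_ab + ETA)

print(f"learned association weight A->B: {w_ab:.2f}")
```

Under these assumptions the weight saturates near its ceiling after a few dozen coincident events, which is the sense in which synchronized firing "binds" the two vehicles into a learned association.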
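The two taxonomy-discovery mechanisms are likewise not detailed in the abstract. The sketch below shows one standard co-occurrence criterion for this task (subsumption, after Sanderson and Croft): concept x is placed above concept y when x appears in most documents containing y, but not vice versa. The documents, concepts, and threshold are illustrative assumptions.

```python
from collections import Counter
from itertools import combinations

# Minimal sketch of co-occurrence-based taxonomy discovery via the
# subsumption criterion: x subsumes y if P(x|y) >= T and P(y|x) < T.
# Toy concept sets stand in for fused imagery/text documents.

T = 0.8  # subsumption threshold (assumed)

docs = [
    {"vehicle", "truck", "dock"},
    {"vehicle", "truck"},
    {"vehicle", "car"},
    {"vehicle", "car", "dock"},
    {"vehicle", "ship", "dock"},
]

df = Counter()   # document frequency of each concept
co = Counter()   # document co-occurrence counts of concept pairs

for d in docs:
    df.update(d)
    co.update(combinations(sorted(d), 2))

edges = []
for (x, y), n in co.items():
    p_x_given_y, p_y_given_x = n / df[y], n / df[x]
    if p_x_given_y >= T and p_y_given_x < T:
        edges.append((x, y))  # x subsumes (is broader than) y
    elif p_y_given_x >= T and p_x_given_y < T:
        edges.append((y, x))

for parent, child in sorted(edges):
    print(f"{parent} -> {child}")
```

On this toy corpus the criterion recovers edges such as vehicle -> truck and vehicle -> car, i.e. a broader concept subsuming the narrower ones it co-occurs with.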
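Finally, as a rough stand-in for the report's neurally inspired normality models, the sketch below uses a first-order Markov transition model over a coarse spatial grid: transitions observed in training tracks define normal behavior, a rarely or never seen transition is flagged as anomalous, and the most frequent transition predicts the next location. The cell size, threshold, and tracks are assumed for illustration.

```python
from collections import defaultdict

# Minimal sketch, assuming a grid-based Markov model of normal movement
# (not the report's mechanism): learn cell-to-cell transition counts,
# flag low-probability transitions, predict the most probable next cell.

CELL = 10.0  # grid cell size in arbitrary map units (assumed)

def cell(x, y):
    return (int(x // CELL), int(y // CELL))

transitions = defaultdict(lambda: defaultdict(int))

def learn(track):
    """Accumulate cell-to-cell transition counts from a normal track."""
    cells = [cell(x, y) for x, y in track]
    for a, b in zip(cells, cells[1:]):
        transitions[a][b] += 1

def transition_prob(a, b):
    total = sum(transitions[a].values())
    return transitions[a][b] / total if total else 0.0

def is_anomalous(a, b, threshold=0.05):
    return transition_prob(a, b) < threshold

def predict_next(a):
    """Most probable next cell under the learned model (None if unseen)."""
    if not transitions[a]:
        return None
    return max(transitions[a], key=transitions[a].get)

# Training: vehicles normally travel east along y ~ 5.
for _ in range(20):
    learn([(x, 5.0) for x in range(0, 60, 10)])

prev, here = cell(20, 5), cell(30, 5)
print("eastbound move anomalous?", is_anomalous(prev, here))   # False
print("northward jump anomalous?",
      is_anomalous(here, cell(30, 45)))                        # True
print("predicted next cell:", predict_next(here))              # (4, 0)
```

Because learning is a single counter increment per observed transition, detection and prediction under this kind of model run in real time with no operator input, matching the behavior the abstract describes.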