Question Answering from Brain Activity Data via Decoder Based on Neural Networks

2021 
We build a model that estimates what subjects recognize from functional magnetic resonance imaging (fMRI) data via a visual question answering (VQA) model, which generates an answer to a question about an image. We convert fMRI signals into image features with an fMRI decoder trained on the relationship between the fMRI signals and the image features extracted from the gazed image. This allows the VQA model to answer a visual question from the fMRI signals measured while the subject gazes at the image. Although brain decoding, which interprets what humans recognize, has become increasingly popular in neuroscience, such methods often suffer from the small size of brain activity datasets. To overcome the limited number of fMRI signals, we introduce an fMRI decoder based on neural networks, which have high expressive ability. Even when only a small number of fMRI signals is available, the proposed method derives an answer about what a person is looking at from those signals. Experimental results on several datasets show that our method can answer questions about gazed images from fMRI signals.
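The abstract describes a two-stage pipeline: a neural-network regression from voxel space into the image-feature space of a vision model, followed by a pretrained VQA model that consumes the decoded features in place of real image features. Below is a minimal PyTorch sketch of one plausible realization; the layer sizes, dropout rate, voxel and feature dimensions, and the `vqa_model` interface are all assumptions for illustration, not details from the paper.

```python
import torch
import torch.nn as nn

class FMRIDecoder(nn.Module):
    """Maps an fMRI voxel vector to the image-feature space of a vision model."""
    def __init__(self, n_voxels: int, feat_dim: int, hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_voxels, hidden),
            nn.ReLU(),
            nn.Dropout(0.5),  # regularization, useful for small fMRI datasets
            nn.Linear(hidden, feat_dim),
        )

    def forward(self, fmri: torch.Tensor) -> torch.Tensor:
        return self.net(fmri)

def train_step(decoder, optimizer, fmri_batch, image_feat_batch):
    """One regression step: fit decoded features to the true image features (MSE)."""
    optimizer.zero_grad()
    pred = decoder(fmri_batch)
    loss = nn.functional.mse_loss(pred, image_feat_batch)
    loss.backward()
    optimizer.step()
    return loss.item()

# At test time, the decoded feature stands in for the image encoder's output,
# so a pretrained VQA model can answer questions from brain activity alone
# (vqa_model is hypothetical):
#   feat = decoder(fmri_signal)
#   answer = vqa_model.answer(feat, question)
```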