Visual Decoding of Phrases from Occipital Neuromagnetic Signals
Orthographic visual perception (reading) is encoded via widespread dynamic interactions between the language centers of the brain and the visual cortex. In this study, we investigated the decoding of orthographic visual perception with magnetoencephalography (MEG), in which phrases were visually presented to participants. We compared the decoding performance obtained with sensors over the occipital lobe to that obtained with sensors covering the whole head. Two standard machine learning classifiers, support vector machines (SVM) and linear discriminant analysis (LDA), were used. Experimental results indicated that the decoding performance using only occipital sensors was similar to that obtained with all sensors over the full task period, and both were above chance level. In addition, a temporal analysis using short time windows showed that the occipital sensors were more discriminative near stimulus onset, whereas the whole-head sensor setup performed slightly better at later time periods. This finding may indicate a sequential processing order, from the visual cortex to areas beyond the occipital lobe, during orthographic visual perception.
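The analysis described above (trial-wise classification of MEG signals with SVM and LDA, comparing an occipital sensor subset to the whole-head array, plus a short-time-window temporal analysis) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' pipeline: the sensor counts, window length, signal amplitude, and the assumption that the first channels stand in for "occipital" sensors are all hypothetical.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 120, 60, 50  # assumed dimensions (trials x sensors x time samples)
n_occipital = 15                            # hypothetical: first 15 channels play the occipital subset
y = rng.integers(0, 2, n_trials)            # two phrase classes

# Synthetic evoked responses: a class-dependent signal appears in the
# "occipital" channels shortly after stimulus onset (samples 5..19).
X = rng.normal(0.0, 1.0, (n_trials, n_sensors, n_times))
X[:, :n_occipital, 5:20] += y[:, None, None] * 0.5

def decode(X, y, clf):
    """Flatten sensors x time into a feature vector per trial; return mean 5-fold CV accuracy."""
    feats = X.reshape(len(X), -1)
    return cross_val_score(clf, feats, y, cv=5).mean()

def window_scores(X, y, clf, win=10):
    """Temporal analysis: decode from short non-overlapping time windows."""
    return [decode(X[:, :, t:t + win], y, clf) for t in range(0, X.shape[2] - win + 1, win)]

for name, clf in [("SVM", SVC(kernel="linear")), ("LDA", LinearDiscriminantAnalysis())]:
    acc_occ = decode(X[:, :n_occipital], y, clf)   # occipital-only sensors
    acc_all = decode(X, y, clf)                    # whole-head sensors
    print(f"{name}: occipital={acc_occ:.2f}, whole-head={acc_all:.2f}")
    print(f"{name} per-window accuracies: {[round(s, 2) for s in window_scores(X, y, clf)]}")
```

In this toy setup both sensor configurations decode well over the full trial, while the per-window scores peak in the early windows where the synthetic signal was injected, mirroring the kind of onset-locked occipital discriminability the abstract reports.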