Combined expectancies: the role of expectations for the coding of salient bottom-up signals

2020 
The visual system forms predictions about upcoming visual features based on previous visual experience. Such predictions influence current perception, so that expected stimuli can be detected faster and with higher accuracy. A key question is how these predictions are formed and at which levels of processing they arise. In particular, predictions could be formed at early levels of processing, where visual features are represented separately, or they might require higher levels of processing, with predictions based on full object representations that combine visual features. In four experiments, the present study investigated whether the visual system generates joint prediction errors or whether expectations about different visual features, such as color and orientation, are formed independently. The first experiment revealed that task-irrelevant and implicitly learned expectations were formed independently when the features were separately bound to different objects. In a second experiment, no evidence for a mutual influence of the two types of task-irrelevant, implicitly formed feature expectations was observed, even though both visual features were assigned to the same objects. A third experiment confirmed the findings of the previous experiments for explicitly rather than implicitly formed expectations. Finally, no evidence for a mutual influence of different feature expectations was observed when the features were assigned to a single centrally presented object. Overall, the present results do not support the view that object feature binding generates joint feature-based expectancies for different object features. Rather, the results suggest that expectations for color and orientation are processed and resolved independently at the feature level.