Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks.

2018 
Syntactic rules in human language usually refer to the hierarchical structure of sentences. However, the input during language acquisition can often be explained equally well with rules based on linear order. The fact that children consistently ignore these linear explanations and instead settle on hierarchical explanations has been used to argue for an innate hierarchical bias in humans. We revisit this argument by using recurrent neural networks (RNNs), which have no hierarchical bias, to simulate the acquisition of question formation, a hierarchical transformation, in an artificial language modeled after English. Even though this transformation could be explained with a linear rule, we find that some RNN architectures consistently learn the correct hierarchical rule instead. This finding suggests that hierarchical cues within the language are sufficient to induce a preference for hierarchical generalization. This conclusion is strengthened by the finding that adding another hierarchical cue, syntactic agreement, further improves performance.
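
The experimental design can be made concrete with a toy example. The sketch below (Python) is a minimal illustration under assumptions introduced here, not the authors' released grammar or code: the vocabulary, the restriction of relative clauses to subjects, and the MOVE-FIRST / MOVE-MAIN rule names are all hypothetical. It shows why the training input is ambiguous: on declaratives whose subject lacks a relative clause, a linear rule (front the first auxiliary) and a hierarchical rule (front the main-clause auxiliary) produce identical questions, and only sentences with a subject relative clause pull the two rules apart.

```python
# A minimal sketch of the linear-vs-hierarchical ambiguity in question
# formation. Vocabulary, grammar, and rule names are hypothetical, not
# the paper's actual materials.
import random

NOUNS = ["newt", "orangutan", "peacock", "quail"]
VERBS = ["giggle", "smile", "sleep", "swim"]
AUX = ["does", "doesn't"]

def noun_phrase(with_rc: bool) -> list[str]:
    """Build 'my NOUN', optionally followed by a relative clause
    'who AUX VERB' (e.g. 'my newt who does giggle')."""
    np = ["my", random.choice(NOUNS)]
    if with_rc:
        np += ["who", random.choice(AUX), random.choice(VERBS)]
    return np

def declarative(subject_rc: bool) -> list[str]:
    """Declarative sentence: SUBJECT AUX VERB."""
    return noun_phrase(subject_rc) + [random.choice(AUX), random.choice(VERBS)]

def move_first(sent: list[str]) -> list[str]:
    """Linear rule: front the linearly first auxiliary."""
    i = next(k for k, w in enumerate(sent) if w in AUX)
    return [sent[i]] + sent[:i] + sent[i + 1:] + ["?"]

def move_main(sent: list[str]) -> list[str]:
    """Hierarchical rule: front the main-clause auxiliary. In this toy
    grammar only subjects take relative clauses, so the main auxiliary
    is always the last one in the sentence."""
    i = max(k for k, w in enumerate(sent) if w in AUX)
    return [sent[i]] + sent[:i] + sent[i + 1:] + ["?"]

random.seed(0)

# Training-style example: no relative clause on the subject, so both
# rules yield the same question -- the input alone is ambiguous.
train = declarative(subject_rc=False)
assert move_first(train) == move_main(train)

# Generalization example: a subject relative clause disambiguates;
# only MOVE-MAIN produces the grammatical English-like question.
test = declarative(subject_rc=True)
print(" ".join(move_first(test)))  # linear rule: ungrammatical output
print(" ".join(move_main(test)))   # hierarchical rule: correct question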