Simultaneous segmentation and generalisation of non-adjacent dependencies from continuous speech

Cognition. 2016 Feb;147:70-4. doi: 10.1016/j.cognition.2015.11.010. Epub 2015 Nov 27.

Abstract

Language learning requires mastering multiple tasks, including segmenting speech to identify words and learning the syntactic role of these words within sentences. A key question in language acquisition research is the extent to which these tasks are sequential or simultaneous, and consequently whether they may be driven by distinct or similar computations. We explored a classic artificial language learning paradigm, in which the language structure is defined in terms of non-adjacent dependencies. We show that participants are able to use the same statistical information at the same time to segment continuous speech, both to identify words and to generalise over the structure, when the generalisations were over novel speech that the participants had not previously experienced. We suggest that, in the absence of evidence to the contrary, the most economical explanation for these effects is that speech segmentation and grammatical generalisation depend on similar statistical processing mechanisms.
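The paradigm referred to in the abstract is typically an aXb artificial language, in which an initial element a predicts a final element b across a variable middle element X. As a minimal sketch, assuming hypothetical stimuli rather than the study's actual materials, the following Python example shows the kind of statistic involved: the non-adjacent transitional probability from a to b is high within words and low across word boundaries, so a single statistic could in principle support both segmentation of the continuous stream and generalisation to novel X elements.

# A minimal sketch, not the authors' materials: an aXb artificial language with
# non-adjacent dependencies, and the non-adjacent transitional probabilities a
# learner could exploit. The frame and X inventories below are hypothetical.
import random
from collections import Counter

FRAMES = [("pel", "rud"), ("vot", "jic"), ("dak", "tood")]  # a_b frames (hypothetical)
MIDDLES = ["wadim", "kicey", "puser", "fengle"]             # variable X elements (hypothetical)

def make_stream(n_words=300, seed=0):
    """Concatenate aXb words into one continuous token stream with no boundary cues."""
    rng = random.Random(seed)
    stream = []
    for _ in range(n_words):
        a, b = rng.choice(FRAMES)
        stream += [a, rng.choice(MIDDLES), b]
    return stream

def nonadjacent_tp(stream):
    """P(token at i+2 | token at i): the statistic carried by the non-adjacent dependency."""
    pairs = Counter(zip(stream, stream[2:]))
    firsts = Counter(stream[:-2])
    return {(x, y): n / firsts[x] for (x, y), n in pairs.items()}

tps = nonadjacent_tp(make_stream())
print(tps[("pel", "rud")])  # 1.0: within a word, the initial element fully predicts the final one
print(max(p for (x, _), p in tps.items() if x == "rud"))  # low: a word boundary follows "rud"

In this sketch the same dependency that marks where a word ends (high TP from a to b, low TP out of b) also defines the frame that can be extended to unheard X elements; which mechanisms actually exploit it in human learners is the empirical question the paper addresses.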

Keywords: Artificial grammar learning; Grammatical processing; Language acquisition; Speech segmentation; Statistical learning.

Publication types

  • Randomized Controlled Trial
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adolescent
  • Female
  • Generalization, Psychological*
  • Humans
  • Language Development
  • Language*
  • Learning*
  • Male
  • Speech*
  • Young Adult