Spring 2014 Colloquium

The University of Arizona

Department of Linguistics

Linguistics Colloquium

Ling 495/595A

Coordinator and convener: Andy Wedel (Associate Professor, Department of Linguistics)

wedel@email.arizona.edu

Fridays 3:00-4:30 PM in Communication 311

Please see dates for Colloquia below.

 

Date: January 24, 2014

Title: Probabilistic knowledge in human language comprehension and production

Speaker: Roger Levy, UC San Diego Department of Linguistics

Abstract:

The talk covers two fundamental issues, one each in language comprehension and production: what determines the difficulty of comprehending a given word in a given sentence, and what factors influence the choice a speaker makes when a meaning can be expressed in more than one way?

In comprehension, two theoretical problems have stood at the core of psycholinguistic research on syntactic comprehension: (1) the resolution of local ambiguity; and (2) syntactic complexity, or the difficulty incurred in processing locally unambiguous structures. I describe a unified treatment of these two problems through the theory of surprisal, which proposes that comprehenders rationally deploy probabilistic knowledge, yielding variability in word-by-word processing difficulty that reflects a wide range of evidential information sources. I present computational modeling and experimental studies showing how surprisal effects account for a range of both garden-path ambiguity resolution and syntactic complexity effects, and give empirical evidence for the specific quantitative relationship between subjective probability and processing difficulty -- as measured by word-by-word reading times -- proposed by surprisal theory.

In production, I advance the proposal that speakers make choices such that their utterances tend toward uniform information density (UID). I present two computational corpus studies of speaker choice under grammatical optionality: one on optional relativizer omission (e.g., 'How big is the family (that) you cook for?'), and one on optional to-omission in the do-be construction (e.g., 'The least you could do was (to) call me in advance'). In both cases we find support for UID's prediction that speakers use these optional function words more often the greater the contextual surprisal of the syntactic event they introduce.

Together, surprisal theory and uniform information density provide a unified and general account of a wide range of phenomena in comprehension and production, and advance our foundational understanding of human communication within the structured grammatical system provided by natural language.
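The quantitative relationship at the heart of surprisal theory is that a word's processing difficulty scales with its surprisal, the negative log probability of the word given its context. A minimal sketch in Python (the conditional probabilities below are invented purely for illustration):

```python
import math

def surprisal(p_word_given_context):
    """Surprisal in bits: -log2 of a word's conditional probability."""
    return -math.log2(p_word_given_context)

# Invented probabilities, for illustration only: a highly predictable
# continuation versus a low-probability (garden-path-like) one.
print(surprisal(0.5))   # predictable word: 1 bit
print(surprisal(0.01))  # surprising word: ~6.64 bits, predicting longer reading times
```

Under UID, a speaker would be expected to include an optional function word (e.g., the relativizer 'that') precisely when doing so lowers the surprisal of the material it introduces, smoothing the per-word information rate of the utterance.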

 

Date: February 7, 2014

Title: Interlingual phonetic interactions are pliable

Speaker: Miguel Simonet, University of Arizona Department of Spanish and Portuguese

Abstract:

People who learn a second language are likely to retain a nonnative accent even after years of practice. The characteristics of this accent are typically attributed to the first or native language of the speaker, so that the accents of learners who share a native language differ from native norms in systematic, predictable ways.

This suggests that the native and nonnative language sound (sub)systems interact in the minds of bilinguals. What is the nature of interlingual phonetic interactions? In this talk I report the results of two phonetic experiments on proficient, sequential bilinguals. The findings show that interlingual phonetic interactions are affected by the communicative setting in which the languages are produced, which suggests that these interactions are modulated by the activation strength of nodes during speech production.

More broadly, I explore the hypothesis that the two languages of a bilingual are activated non-selectively during processing, which in turn shapes phonetic and phonological behavior in these speakers. I discuss the theoretical implications of this hypothesis, as well as some avenues for further research.

 

Date: February 14, 2014

Title: Temporal characteristics of vowel articulation: An electromyographic investigation of the 'articulatory period'

Speaker: Amy LaCross, University of Arizona Department of Physiology

Abstract:

In speech production, articulator movement precedes the acoustic signal (e.g., Meyer 1991; Gracco 1988; Lubker & Gay 1982; Bell-Berti & Harris 1981). Bell-Berti and Harris (1981, p. 13) argue for an ‘articulatory period’: a period of muscle activation beginning before and ending after the acoustic signal. However, the duration of the articulatory period, and its variability relative to the acoustic period, remains unclear. Our research provides more information on the nature of the articulatory period.

Using electromyography (EMG), we examine the effects of vowel height and backness, coarticulation with a preceding consonant, and speaking rate on both the onset and offset of the articulatory period. We recorded whole-muscle EMG activity from the posterior and anterior regions of the genioglossus muscle of eight healthy human subjects during the articulation of vowels in three conditions: static vowels, static vowels with initial coronal or palatal fricatives, and vowels embedded in the nonce word [əpVp].

Our findings indicate that vowel height and backness influence the onset and offset of muscle activation, and that these effects are significantly modulated by the type of speech task or the place of articulation of a preceding consonant. Furthermore, these effects differ between two regions of a single muscle. These findings indicate that the articulatory period is subject to predictable variance, but they also underscore the complexity of the timing of articulator movements, further emphasizing the need for more insight into the physiological mechanisms underlying speech. Understanding the temporal basis of articulator movements, many of which are executed simultaneously and extend across phonemic and syllabic boundaries as a result of coarticulation, is an important component in furthering our understanding of motor speech control.

Further, accurate understanding of speech gestures and their timing relations can provide more realistic bases to theories of speech production and planning, in addition to inferences about the cognitive organization of language.

 

 

Date: April 4, 2014

Speaker: Stephen Wilson, University of Arizona Department of Speech, Language, and Hearing Sciences

 

Date: April 11, 2014

Title: Do the mind and the larynx work together? Cross-linguistic phonetics and phonology of voicing

Speaker: Viktor Kharlamov, University of Arizona Department of Linguistics

Abstract:

Voicing is one of the most basic and salient phonological features, and it marks distinctions in most languages of the world. It is therefore a good candidate for examining how humans structure continuous sound information into abstract categorical behavior.

However, the detailed phonetics of voicing show that this distinction is far from straightforward, especially when effects of the biology of the vocal tract (which have the potential to be universal) and language-specific effects are examined. The talk focuses on two questions: how the production and perception of [voiced] segments involve phonetic dimensions not generally associated with phonological voicing, such as nasality and consonantal place of articulation, and whether such relationships are language-specific or universal. This follows up on past research showing that the aerodynamics of voicing in the vocal tract hinder the production of simple voiced stops at more posterior places of articulation, a phonetic effect that cross-cuts the phonological features of both place and voicing. These questions are addressed with production and perception data from Russian and English, and acoustic data from Southern Ute, which show substantial interdependence between categorical phonological voicing and the gradient effects of the physical structures of the vocal tract.

 

Date: April 25, 2014

Title: Individual variability in human speech processing: three case studies

Speaker: Alan Yu, University of Chicago Department of Linguistics

Abstract:

Theories of sound change hypothesize that mistakes in speech perception and production, if uncorrected, may lead to eventual changes in perceptual and production norms. How haphazard errors lead to systematic sound change remains a recalcitrant puzzle. In this talk, I articulate a theory of sound change in which systematic individual variation in speech perception and production takes center stage. To illustrate this theory, I offer three case studies, focusing on individual variability in how pitch, tone-duration interaction, and vocalic coarticulation on sibilants are processed. To the extent that perceptual variability corresponds to variability in speech production, individuals within the “same” speech community should arrive at different perceptual and production norms. Such differences, which are anchored to specific individuals, serve as the pool of systematic variation that members of a speech community may draw from to construct local identities.