2013 Spring Colloquium

The University of Arizona

Department of Linguistics

Spring 2013

Linguistics Colloquium

Ling 495/595A

Coordinator and convener: Massimo Piattelli-Palmarini

(Linguistics, Cognitive Science and Psychology, University of Arizona)

massimo@email.arizona.edu

Assistant coordinator: Kristen Howe (Human Language Technology, University of Arizona)

khowe@email.arizona.edu

Fridays 3:00-4:30 PM in Communication 311

Colloquia meet every other week

Program

Friday January 11, 2013: Computational Insights Into Language

Speakers: Julia Fisher, Dane Bell, Rolando Coto

Julia Fisher: A Computational Model of Vowel Acquisition in Bilingual Infants

Bosch and Sebastián-Gallés (2003) and Sebastián-Gallés and Bosch (2009) showed an intriguing pattern in the vowel acquisition of simultaneous Catalan-Spanish bilinguals. Specifically, while 4- and 12-month-old bilinguals could distinguish between the phonemes /e/ and /ɛ/ in a familiarization-preference task, the 8-month-olds could not. The same pattern held for the contrast between /o/ and /u/. In contrast, Catalan monolinguals maintained the ability to discriminate the contrasts throughout the first year. These results were originally taken to indicate a lack of bilingual discriminatory abilities between these vowel pairs at eight months. However, Albareda-Castellot et al. (2011) showed that when the bilingual infants are given a task with more "direction", they succeed at eight months. This presents an interesting puzzle: Catalan-Spanish bilingual infants are simultaneously able and unable to distinguish between certain vowel pairs at eight months. I computationally explore some of the likely factors producing this pattern in bilinguals and conclude that it can arise in a lexically based phoneme-acquisition system when the lexicon contains a high number of cognates.

Dane Bell: A Tripartite Model of Human Word Prediction

Word prediction for speech recognition and assistive technology has benefited from statistical distributions of adjacency (as in n-grams) and of phrase structures (as in PCFGs). I show that adding a third purely statistical semantic measure increases prediction accuracy.
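To make the tripartite idea concrete, here is a minimal Python sketch of how an adjacency score, a phrase-structure score, and a distributional semantic score might be interpolated to rank candidate next words. The functions, weights, and toy data below are invented placeholders, not the model reported in the talk.

    import math

    def cosine(u, v):
        # Cosine similarity between two vectors; 0.0 if either vector is all zeros.
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm if norm else 0.0

    def rank_candidates(history, candidates, bigram_logprob, syntax_logprob, vectors,
                        weights=(0.5, 0.3, 0.2)):
        # Rank candidate next words by a weighted sum of three scores:
        # (1) an n-gram adjacency term, (2) a stand-in for a PCFG-derived term,
        # and (3) a distributional semantic similarity to the preceding word.
        context_vec = vectors.get(history[-1], [0.0, 0.0, 0.0])
        scored = []
        for w in candidates:
            adjacency = bigram_logprob.get((history[-1], w), -10.0)
            syntax = syntax_logprob.get(w, -10.0)
            semantics = cosine(vectors.get(w, [0.0, 0.0, 0.0]), context_vec)
            score = weights[0] * adjacency + weights[1] * syntax + weights[2] * semantics
            scored.append((w, score))
        return sorted(scored, key=lambda pair: pair[1], reverse=True)

    # Toy usage with invented numbers:
    bigrams = {("the", "dog"): -1.2, ("the", "idea"): -2.5}
    syntax = {"dog": -1.0, "idea": -1.1}
    vectors = {"the": [0.1, 0.0, 0.2], "dog": [0.9, 0.1, 0.3], "idea": [0.2, 0.8, 0.1]}
    print(rank_candidates(["the"], ["dog", "idea"], bigrams, syntax, vectors))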

Rolando Coto: Temporal Changes in the Accentuation of Japanese Loanwords

The majority of Japanese loanwords are accented (93%), whereas words in other strata are less likely to be accented: 30-50% for native words and 49% for Sino-Japanese words. In this paper I used a combination of accent and etymological dictionaries to determine that the 93% accentuation figure has changed over time, and that loanwords imported before 1700 had a close-to-chance probability (51%) of being accented. This value increased in the subsequent centuries, possibly due to changes in the syllabic structure, word size, and weight-to-pitch attraction properties of newer loanwords.
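As a rough illustration of the dictionary-based method described above, the following Python sketch joins a hypothetical accent dictionary (word: accented or not) with a hypothetical etymological dictionary (word: year of borrowing) and computes the proportion of accented loanwords per period. All entries and the period size are invented for illustration.

    from collections import defaultdict

    def accent_rate_by_period(accent_dict, etym_dict, period_size=100):
        # Proportion of accented loanwords per borrowing period (e.g. per century).
        counts = defaultdict(lambda: [0, 0])      # period start year -> [accented, total]
        for word, year in etym_dict.items():
            if word not in accent_dict:
                continue                          # keep only words found in both dictionaries
            period = (year // period_size) * period_size
            counts[period][1] += 1
            if accent_dict[word]:                 # True if the word bears a pitch accent
                counts[period][0] += 1
        return {p: acc / tot for p, (acc, tot) in sorted(counts.items())}

    # Toy usage with invented entries:
    accented = {"pan": False, "tabako": True, "konpyuuta": True}
    borrowed = {"pan": 1543, "tabako": 1601, "konpyuuta": 1960}
    print(accent_rate_by_period(accented, borrowed))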

Friday January 25, 2013: Exploring variability in speech production and morphological interactions

Speaker: Benjamin V. Tucker (Department of Linguistics, University of Alberta, Canada)

For a speaker, the goal of language is generally to convey a message to a listener. Speech production is the result of an elegant interaction between cognitive and motor function. I explore how speakers use linguistic, lexical and contextual knowledge to encode aspects of structure and meaning in the speech signal. In this talk, I focus on the phonetic characteristics of speech and their interaction with morphology. I investigate regular and irregular verbs in the Buckeye Corpus, a collection of spontaneous speech from forty speakers containing over 350,000 transcribed and segmented words. The results of the present study on irregular verbs (e.g. “run/ran” and “sing/sang”) provide additional and interesting insight into the influences of morphological information on variability in spoken language. I discuss the relevance of these results to models of speech production.

Friday February 15, 2013: Categorical speech perception is not that categorical

Speaker: Bodo Winter (Department of Cognitive and Information Sciences, University of California, Merced)

The way in which we understand speech appears to be categorical. For example, when presented with a phonetic continuum ranging from /ba/ to /pa/, people respond categorically, abruptly switching from one response to another rather than reporting a steady increase in “pa-ness”. In this talk, I will look at categorical speech perception from the perspective of complex dynamical systems, and I will argue that categorical perception is not as categorical as it seems. I will discuss results from two experiments and a computational model that highlight the time-varying nature of categorical speech perception. The computational model integrates seemingly distinct cognitive mechanisms: perceptual competition, habituation and learning. This highlights how we can benefit from seeing language as a system characterized not so much by encapsulated processing as by massive interactions between cognitive faculties.
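For readers unfamiliar with dynamical models of this kind, the following deliberately minimal Python sketch shows only the competition and habituation pieces: two response categories (/ba/ and /pa/) inhibit each other, and a slow habituation term weakens whichever category has recently been active. The equations and parameter values are illustrative assumptions, not Winter's actual model.

    def simulate(stimulus_pa, steps=200, dt=0.05,
                 gain=1.0, inhibition=0.8, habituation_rate=0.2, recovery=0.05):
        # stimulus_pa in [0, 1]: position of the stimulus on the /ba/-/pa/ continuum.
        act = [0.0, 0.0]                  # activation of the /ba/ (0) and /pa/ (1) categories
        hab = [0.0, 0.0]                  # habituation level of each category
        inputs = [1.0 - stimulus_pa, stimulus_pa]
        for _ in range(steps):
            new_act = []
            for i in (0, 1):
                # Each category is driven by its input, suppressed by its rival,
                # and weakened by its own habituation; activation decays toward zero.
                drive = gain * inputs[i] - inhibition * act[1 - i] - hab[i] - act[i]
                new_act.append(max(0.0, act[i] + dt * drive))
            for i in (0, 1):
                hab[i] += dt * (habituation_rate * new_act[i] - recovery * hab[i])
            act = new_act
        return act                        # final activations; the larger one "wins"

    # Two ambiguous stimuli near the category boundary:
    print(simulate(0.45), simulate(0.55))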

Friday February 22, 2013: Showcase of graduate research in the Department of Linguistics

Speakers: TBD

Friday March 1, 2013: Cascaded Semantic Activation in Visual Word Recognition: Could “capable” prime “cabbage”?

Speakers: Ken Forster and Dane Bell (Department of Psychology, UofA)

Parallel activation models of visual word recognition require that the detectors for many words are activated by a single letter string, with the degree of activation proportional to the degree of orthographic overlap. In cascaded models, it is further assumed that this activation is passed forward to a semantic level prior to any attempt to decide which detector provides the best match to the input. We will report the results of several experiments that used a semantic categorization task to detect this cascaded activation. We will also consider how “form-first” models might cope with these findings.
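The following toy Python sketch illustrates the cascaded assumption: every word detector is activated in proportion to its orthographic overlap with the input string, and that graded activation is passed forward to semantic features before any single detector is selected. The miniature lexicon and feature labels are invented for illustration.

    def letter_overlap(a, b):
        # Proportion of matched positions at which two strings share a letter.
        return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

    def cascaded_semantic_activation(input_string, lexicon):
        # lexicon maps each word to a set of semantic features; returns feature activations.
        features = {}
        for word, feats in lexicon.items():
            activation = letter_overlap(input_string, word)   # orthographic stage, no winner chosen
            for f in feats:                                   # activation cascades to semantics
                features[f] = features.get(f, 0.0) + activation
        return features

    # Toy usage: "capable" partially activates "cabbage", so VEGETABLE receives some activation.
    lexicon = {"cabbage": {"VEGETABLE"}, "capable": {"ABLE"}}
    print(cascaded_semantic_activation("capable", lexicon))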


Friday March 22, 2013: “Input” in second language phonology: The influence of orthographic information

Speaker: Rachel Hayes-Harb (Department of Linguistics, University of Utah)

The outcome of second language (L2) acquisition typically differs markedly from that of first language (L1) acquisition, particularly in the domain of phonology, and a great deal of research has explored factors that differentiate L1 and L2 acquisition. Many studies have attempted to explain L1-L2 differences in terms of variation in the amount and type of input the two learner groups receive, with some noting that literacy (and thus written input) often distinguishes L1 from L2 learning. However, relatively few studies have specifically investigated the potential influence of phonographic information on second language phonological development. In fact, implicit in much work on second language phonology is the assumption that the input learners use to acquire phonology is exclusively auditory. A growing body of research, however, indicates that L2 learners' phonological development may be strongly influenced (both positively and negatively) by inferences learners make about the phonological structure of the language from written input. I will discuss this emerging area of research, focusing on recent studies my collaborators and I have conducted on native English speakers' acquisition of a variety of languages using a variety of orthographies, including Arabic, Mandarin, and Russian.

Friday April 12, 2013: Evidence for Language Differentiation in Bilinguals’ Interpretation of Lexical Tone

Speaker: Carolyn Quam (Department of Psychology and Department of Speech, Language and Hearing, UofA)

We investigate how bilinguals represent sounds in two languages, motivated by two main questions. First, do bilinguals take language context into account in order to interpret acoustic information? In other words, do they process identical acoustics differently in their two languages? Second, do individual differences in language dominance (relative experience with Mandarin vs. English) affect their sensitivity to native-language sounds? Both of these questions have strong developmental implications for how multiple languages are learned and represented, and for how an early-acquired second language (L2) affects native-language (L1) sound processing.

In studies with undergraduates, we considered interpretation of pitch contours by Mandarin-English bilinguals. Pitch (tone) contrasts word meaning in Mandarin but not English, making this a useful test case for how bilinguals process the same dimension of sound in two languages. Previous work (Quam & Creel, in press) examined effects of both language background and the local language context on tone processing in word learning/recognition. English-speaking and Mandarin-English-bilingual undergraduates learned 16 nonsense words designed to be equally probable in English vs. Mandarin. In an eye-tracked word-recognition test, bilinguals’ tone use was predicted by their degree of Mandarin dominance, even though all bilinguals’ first language was Mandarin. However, the language context of the experiment (English vs. Mandarin task instructions and carrier phrases) did not affect tone use. Subsequent rating tasks verified that the talker had a Mandarin accent. This, combined with the compatibility of the words’ segments/phonetics with Mandarin, may have biased bilinguals to rely on their Mandarin phonetic system regardless of the language of instructions/carriers. That is, a phonological-context effect may have outweighed the language-context effect.

We investigated phonological context by repeating the experiment with new undergraduate participants (24 Mandarin-English bilinguals and 24 English speakers) and strongly English-like novel words. Words contained the same tones as before (verified by a native speaker), but were strongly segmentally and phonetically English-like, and spoken by an American-English talker. Bilinguals, but not English speakers, exploited tone significantly less than before, and no effects of language background on tone processing were found. Adult bilinguals do therefore take language context into account, interpreting equivalent pitch patterns as tones in a Mandarin-compatible context but not an English context.

We are now asking at what point in development bilinguals begin to interpret acoustic patterns differently. In a child-friendly experiment, we are teaching bilingual preschoolers (ages 3.5-4) two novel words containing different Mandarin tones. One word contains consonants and vowels consistent with Mandarin (e.g., fipu), and is learned through observing two Asian puppets speaking Mandarin. The other is phonologically legal only in English (e.g., klaefa) and spoken by an Asian puppet talking to a Caucasian puppet in English. We predict that bilinguals will differentiate their languages by an early age, interpreting the same acoustic information (pitch contours) differently depending on the language context. Together, our results will speak to how bilinguals become fluent comprehenders of two languages and to the interplay between L1 and L2 processing strategies.


Friday April 26, 2013: Roundtable on emergentism and innatism

Speakers:  Diana Archangeli (Department of Linguistics, U of A), Douglas Pulleyblank (Department of Linguistics, University of British Columbia) and William Idsardi (Department of Linguistics, University of Maryland).  Chair: Andrew Barss (Department of Linguistics, U of A)

This round table will offer arguments and data for and against Emergent Grammar: the hypothesis that language is acquired with only minimal benefit of an innate language-specific endowment. Diana and Doug will defend the thesis that an Emergent Grammar account of phonological patterns is possible and that it accounts for them as effectively as, if not more effectively than, a rule-based or a constraint-based alternative. Bill Idsardi will offer arguments and data in favor of the alternative.