Colloquium Series

We host a colloquium series on contemporary topics in linguistics every spring and fall semester. From loanwords in minority languages to the intersection of psycholinguistics and morphological processes, we invite you to learn about a variety of topics from experts in the fascinating world of linguistics.

Upcoming Colloquium Series

Stay tuned for upcoming colloquia.

Sept. 13, Tania Leal, University of Arizona

Place: COMM 311
Time: 3-4:30 PM, Sept. 13
 
Dr. Leal has kindly made herself available for meetings in the afternoon before the colloquium. If you would like to meet with her, please sign up here.
  • L2 Acquisition of French prenominal possessives: contributions of syntax & morphology
    Our study centers on the acquisition of L2 French prenominal possessives by L1 English and L1 Spanish learners. Unlike English, which displays gender “agreement” with the possessor, French and Spanish encode agreement with the possessum. Prior research has examined L1 transfer but mainly with so-called “semantic agreement” languages like English or German. This study fills the gap by testing syntactic agreement in French. The Bottleneck Hypothesis, however, predicts difficulties with French possessives, regardless of L1. In two tasks (self-paced reading and fill-in-the-blank), French native speakers performed at ceiling, while learners struggled, especially in different-gender contexts. Results show that morphology poses significant challenges in acquisition, supporting the Bottleneck Hypothesis (Slabakova, 2018). Both L1 English and L1 Spanish learners displayed persistent difficulties with gender agreement, particularly in online processing tasks like self-paced reading. These findings emphasize the challenges learners face in mastering French morphological structures despite L1-L2 similarities or differences.

Sept. 27, Owen Rambow, Stony Brook University, Zoom

  • Natural Language Processing for Arabic and its Dialects: Challenges and Approaches (and Why Theoretical Linguists Should Care)

    Owen Rambow, Department of Linguistics and IACS, Stony Brook University
    owen.rambow@stonybrook.edu

    Arabic is a challenge for natural language processing (NLP) for at least two reasons: (1) it has rich morphology and (2) it shows dialectal variation which affects many levels of linguistic analysis, including phonology and morphology. In this talk, I will review work in Arabic NLP that I have been involved in. In a first part, related to the richness of morphology, I will talk about the tasks of morphological analysis (determining all possible morphological analyses for a word) and morphological tagging (determining the correct analysis in context). This will be based on Modern Standard Arabic (MSA). In a second part, I will address the question of how we can build NLP resources in the presence of profound dialectal variation, given that most Arabic dialects are under-studied and under-resourced. Specifically, I will present recent work on learning morphophonological rules from small data sets. Such morphophonological rules can be used to create morphological analyzers and taggers for dialects.

    The work presented will be based heavily on the contributions of my collaborators: Nizar Habash and Mona Diab for work on morphological analysis and tagging for MSA, and Salam Khalifa on the rule learning work.
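
    For readers new to the terminology, the contrast the abstract draws between morphological analysis (enumerating every analysis a word form could receive) and morphological tagging (choosing one analysis in context) can be pictured with a small illustrative sketch in Python. The transliterated form, the two-entry "lexicon," and the crude contextual heuristic below are invented for this page; they are in no way the MSA analyzers or taggers discussed in the talk.

        # Illustrative only: a toy contrast between morphological analysis and tagging.
        # The form, mini-lexicon, and heuristic are invented for illustration.

        TOY_ANALYSES = {
            # An ambiguous (unvocalized) form with two competing analyses.
            "ktb": [
                {"lemma": "kataba", "pos": "VERB", "gloss": "he wrote"},
                {"lemma": "kutub", "pos": "NOUN", "gloss": "books"},
            ],
        }

        def analyze(word):
            """Morphological analysis: every analysis licensed for the word form."""
            return TOY_ANALYSES.get(word, [])

        def tag(word, context):
            """Morphological tagging: pick one analysis in context.
            A crude heuristic stands in here for a trained disambiguation model."""
            candidates = analyze(word)
            if not candidates:
                return None
            noun_cues = {"the", "these", "three"}
            wanted = "NOUN" if noun_cues & set(context) else "VERB"
            return next((a for a in candidates if a["pos"] == wanted), candidates[0])

        print(analyze("ktb"))                                # both analyses
        print(tag("ktb", ["he", "ktb", "a", "letter"]))      # context favors the verb
        print(tag("ktb", ["he", "bought", "three", "ktb"]))  # context favors the noun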

Oct. 11, Melanie Cody, University of Arizona

3 - 4:30PM in COMM 311

History of North American Indian/Indigenous Sign Language
  • The presentation will provide insight into the ancestral linguistic landscape of the Indigenous Deaf, from tribal signs identified on various canyon walls to recorded historical depictions of our ancestral inhabitants' era. These Indigenous people have intergenerationally transmitted signs to the present day. Also discussed will be the newly developed North American Indian/Indigenous Sign Language Family Tree Chart, in which existing sign languages have been documented and mapped the way many spoken language family trees are. Alongside this information, linguistic documentation using SooSL and ASL Notations will be provided.
This talk will be presented in ASL with simultaneous English interpretation.
 

Oct. 25, Abby Walker, Virginia Tech

Nov. 8, Santiago Barreda, UC Davis

Nov. 15, Paul Pietroski, University of Maryland

Nov. 22, Deina Rabie, University of Arizona

Past Colloquium Series

Feb 2: Chantale Cenerini from the University of Saskatchewan via Zoom

            - Dr. Chantale Cenerini will be speaking on "The use and distribution of Michif complementizers in oral stories: An example of storywork in Linguistics."


Feb 9: Ljiljana Progovac from Wayne State University via Zoom

          -Dr. Ljiljana Progovac will be speaking to us on the evolution of syntax. 

Passcode to Zoom: LingCo1

Time: 3 - 4:30PM

Zoom link:  https://arizona.zoom.us/j/89102371872

Abstract: Survival of the Wittiest (not Friendliest): Causal Role of Early Syntax
 

 
My argument is that language, in particular early syntax, played a crucial, causal role in human evolution, both linguistic and cognitive, driven in part by the selection for quick-wittedness (“using words in a clever and funny way”), specific to language and unique to humans. While my reconstruction of the earliest stage of syntax is based on formal syntactic considerations, it also takes into account typological considerations, and it cross-fertilizes with some relevant neuroscientific and genetic findings. The reconstructed stage provides the foundation, a common denominator, for cross-linguistic variation in the expression of e.g. transitivity (i.e. accusative vs. ergative vs. serial-verb grammars). I will provide specific approximations of this early stage of syntax (including, but not limited to, “exocentric” verb-noun compounds/small clauses across languages), demonstrating their relevance not only for the quick-wittedness argument, but also, through fMRI experiments, their evolutionary role in (playful, humorous, often pejorative) naming. It is hard to be witty in a single-word stage, but, as these approximations demonstrate, with just a single instance of a verb-noun composition, arguably instantiating an absolutive-like (intransitive) grammar, one can reach impressive metaphorical heights and verbal virtuosity. This is what it would have taken to entrench this early language/grammar, later to be built upon, overcoming eons-deep competition based on physical aggression and posturing. Wittiness is that kind of trait which allows competition (by ‘outwitting’ others) while at the same time favoring “friendliness” in the sense that it provides an excellent platform for replacing physical aggression with cognitive contest. There are several considerations, both theoretical and experimental, that have paved the way toward the view of human evolution as the “survival of the wittiest,” offering better explanatory power than the recent proposals advocating the “survival of the friendliest,” and giving a causal role to language in human evolution. Research on language evolution has largely neglected the artistic dimension of language, including eloquence and wittiness, and yet fitness in humans has been found to be correlated with linguistic prowess, and human mate choice even today is often influenced by displays of cognitive abilities through the creative use of language, including humor.
 

Feb 23: Abeer Alwan from UCLA via Zoom

'Dealing with Limited Speech Data and Variability Using Hybrid Knowledge-Based and Data-Driven Approaches'.

 
This talk will be over Zoom. Please join us!
 
Time/date: Friday Feb 23, 3-4:30
Passcode: LingCo1
 
Dealing with Limited Speech Data and Variability Using Hybrid Knowledge-Based and Data-Driven Approaches

Our research focuses on improving speech processing algorithms, such as automatic speech recognition (ASR), speaker identification, and depression detection, under challenging conditions such as limited data (for example, children’s or clinical speech), mismatched conditions (for example, training on read speech while recognizing conversational speech), and noisy speech, using a hybrid data-driven and knowledge-based approach. This approach requires an understanding both of machine learning approaches and of the human speech production and perception systems. I will summarize in this talk our work on children’s ASR using self-supervised models, detecting depression from speech signals using novel speaker disentanglement techniques, and automating the scoring of children’s reading tasks with both ASR and innovative NLP algorithms.

Mar 22: Owen Rambow from Stony Brook University via Zoom

  • Canceled; rescheduled to the Fall 2024 colloquium lineup

Apr 12: Faruk Akkus from the University of Massachusetts in COMM 311

Date/time: Friday April 12, 3-4:30
Location: Communications 311
  • Turkish Verbal Reflexives and Argument Structure

    A central question for syntactic approaches to the argument structure of verbal reflexives (e.g. Embick 2004, Wood 2015; cp. e.g. Grimshaw 1982, Reinhart and Siloni 2005) concerns how in some cases a single syntactic argument can receive the interpretive properties associated with two different theta-roles. Focusing on Turkish verbal reflexives, we argue for a movement-based approach to construal as an answer to this question, applying to both figure and ground reflexives. We first show that these verbs are syntactically intransitive and semantically monadic, with a single DP argument in their structure. 

     
    Diagnosing the position of this sole argument reveals a striking mixed behavior: the sole argument behaves as internal for some syntactic diagnostics, and as external for others, and therefore is associated with two distinct positions in the structure. We propose that its base position is that of the internal argument, where it saturates the theme role, which is eventually linked to the agent role by the reflexivizing Voice head. Independently of the derivation of reflexivity, this single argument moves to a derived position within the verb phrase -- a VoiceP-internal EPP effect. We illustrate how this approach gives us a handle on Greek verbal reflexives, which behave like Turkish in having a sole internal argument, but lack the movement step.

Apr 19: Deina Rabie from the University of Arizona in COMM 311

  • Indigenous voices from the past and present: Digitalisation, annotation and translation of the Florentine Codex

     The Florentine Codex is a bilingual text produced in Mexico in the 16th century. It describes the lives and beliefs of the Indigenous people who were living in the Valley of Mexico before the arrival of the Spanish. The text is in Nahuatl, the lingua franca of the area, and is accompanied by a translation or summary in Spanish. In this talk we describe the processing of the Nahuatl text and some linguistic issues that will need to be addressed in the annotation, including relational nouns, functional incorporation and subordination and clause structure. We also describe efforts to translate from the Nahuatl of the era to modern varieties of Nahuatl spoken in the Sierra of Puebla.

Apr 26: Mohsen Mahdavi from the University of Arizona in COMM 311

  • Abstract:

    In Persian — as in Turkish, Kurdish varieties, and several other areally related languages — even though word-final stress is the norm, there are notable sets of exceptions that have been puzzling to linguists. I focus on Persian, where almost all of these exceptional words are stress-initial. Crucially, the class of stress-initial words cannot be demarcated based on a single criterion related to phonological form or lexical category. Instead, I argue that the effect is lexical, but has a systematic explanation rooted in diachrony. The stress patterns specified in the lexicon for the lexical items are influenced by the larger phonological environments they frequently appear in. More specifically, the lexicon is shaped in a way that helps avoid the occurrence of tonal crowding at the right edge of Intonational Phrases (IPs).

    Avoiding tonal crowding (the situation where two tonal targets are realized too close to each other) or phonetically repairing it is widespread across languages. Inspired by previous work on Chickasaw and English, I argue that stress on the right edge of an IP in Persian is suboptimal because it places the high tone of the stressed syllable and the low tone associated with the end of the IP on the same tone-bearing unit. This makes stress-final words at the end of IPs undesirable. I then argue that the word classes that seem to avoid final stress are indeed those that typically appear at the end of IPs. The five major environments with stress-initiality, which I examine one by one, are prefixed verbs, interjections, vocatives, a group of DP modifiers, and words denoting mathematical operators.
     

Mar 29: Fran Tyers from Indiana University in COMM 311

 
Disentangling Ancestral State Reconstruction in Historical Linguistics - comparing classic approaches and new methods with Oceanic grammar and introducing Grambank, a new database for typological research
 

Abstract:
Ancestral State Reconstruction (ASR) is an essential part of historical linguistics. Conventional ASR relies on three core principles: fewest changes on the tree, plausibility of changes, and plausibility of the resulting combinations of features in proto-languages. This approach has some problems, in particular the definition of what is plausible and the disregard of branch lengths. This study compares the classic approach to ASR with computational tools (Maximum Parsimony and Maximum Likelihood), conceptually and practically. Computational models have the advantage of being more transparent, consistent, and replicable, and the disadvantage of lacking nuanced knowledge and context. Using data from the new structural database Grambank, I compare reconstructions of the grammar of ancestral Oceanic languages from the historical linguistics literature to those achieved by computational means. The results show a high degree of agreement between manual and computational approaches, with a tendency for classical historical linguistics to agree more with the approaches that ignore branch lengths. Taking branch lengths into account explicitly is more conceptually sound, so the field of historical linguistics should work to improve its methods in this direction. A combination of computational methods and qualitative knowledge is possible in the future and would be of great benefit.
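
As a rough illustration of the Maximum Parsimony style of reconstruction mentioned above, the sketch below runs the bottom-up pass of Fitch's parsimony algorithm on an invented four-language tree with a single binary grammatical feature. The tree, the feature, and the tip values are made up for this page; real analyses over Grambank data involve many features, explicit branch lengths (for Maximum Likelihood), and dedicated phylogenetic software rather than a toy script.

    # Illustrative only: bottom-up pass of Fitch's maximum-parsimony algorithm
    # for one binary feature on an invented four-language tree.

    def fitch_up(node, tip_states):
        """Return the set of most-parsimonious candidate states for `node`."""
        if isinstance(node, str):                 # leaf: its observed state
            return {tip_states[node]}
        left, right = node
        s_left = fitch_up(left, tip_states)
        s_right = fitch_up(right, tip_states)
        shared = s_left & s_right
        # Intersection if the children can agree (no change needed);
        # otherwise the union, at the cost of one change on the tree.
        return shared if shared else s_left | s_right

    tree = (("Lang_A", "Lang_B"), ("Lang_C", "Lang_D"))
    tip_states = {"Lang_A": 1, "Lang_B": 1, "Lang_C": 0, "Lang_D": 1}
    print("Candidate ancestral states at the root:", fitch_up(tree, tip_states))

Note that this parsimony pass counts only changes and ignores branch lengths entirely, which is exactly the property the abstract contrasts with likelihood-based methods.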

This coming Friday Sept 22 we welcome Dr. Gašper Beguš from UC Berkeley who will be speaking on "Modeling language as a dependency between the latent space and data". This talk is in-person. Dr. Beguš is available for meetings on Friday morning and afternoon before the colloquium. Slots are 30 minutes, and up to three people at a time can sign up for one slot. To sign up, please go to:

https://docs.google.com/spreadsheets/d/13vu0MyCUs3ftbUx8KP5K2ahoS1S143ImyiTLxkgLDvs/edit?usp=sharing

Date/Time: Friday Sept 22, 3:00 - 4:30
Location: Communications 311

Modeling language as a dependency between the latent space and data

There are many ways to model language -- with rules, exemplars, finite state automata, or Bayesian approaches. In this talk, I propose a new way to model language, fully unsupervised and from raw speech: as a dependency between latent space and generated data in generative AI models called Generative Adversarial Nets (GANs). I argue that such modeling has implications both for the understanding of language acquisition and for the understanding of how deep neural networks learn internal representations. I propose an extension of the GAN architecture (fiwGAN) in which meaningful linguistic properties emerge from two networks learning to exchange information. FiwGAN captures the perception-production loop of human speech and, unlike most other deep learning architectures, has traces of communicative intent. I further propose a technique to identify latent variables in deep convolutional networks that represent linguistically meaningful units in a causal, disentangled, and interpretable way. We can thus uncover symbolic-like representations at the phonetic, phonological, syntactic, and lexical semantic levels, analyze how learning biases in GANs match human learning biases in behavioral experiments, examine how speech processing in the brain compares to intermediate representations in deep neural networks, and ask what GANs’ innovative outputs can teach us about productivity in human language.
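
The core idea of treating generated data as a dependency on latent variables can be sketched very loosely in code. The untrained toy generator below is not the fiwGAN architecture from the talk; it only illustrates, in PyTorch, how one can manipulate a single latent dimension and measure the effect on the generated output, the kind of latent-variable probing the abstract refers to.

    # Illustrative only: an untrained toy generator, not the fiwGAN model from the talk.
    # It shows the idea of modeling outputs as a dependency on a latent code and of
    # probing one latent variable to see how the generated output changes.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    class ToyGenerator(nn.Module):
        def __init__(self, latent_dim=8, out_dim=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(latent_dim, 128), nn.ReLU(),
                nn.Linear(128, out_dim), nn.Tanh(),  # a short waveform-like vector
            )

        def forward(self, z):
            return self.net(z)

    gen = ToyGenerator()
    z = torch.zeros(1, 8)            # baseline latent code
    baseline = gen(z)

    z_probe = z.clone()
    z_probe[0, 0] = 3.0              # manipulate a single latent variable
    delta = (gen(z_probe) - baseline).abs().mean()
    print(f"Mean change in output when latent dim 0 is set to 3.0: {delta.item():.4f}")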

This Friday we welcome Prof. Deniz Rudin of the University of Southern California as our Linguistics colloquium speaker. Please join us!

Time/date: Friday Oct 13, 3-4:30

Location: Comm 311
 
Prof. Rudin will be available for meetings on Friday morning and early afternoon. If you'd like to sign up for a 30-minute slot, please go to:
 

Title: Quotative and non-quotative speech reports

 
Abstract:
 
Verbs of speech are typically analyzed as relating individuals to propositions: simplifying to cases describing utterances in spoken language, x said that p means that x made an utterance and the propositional content of that utterance was p. Verbs of speech can embed both ordinary clausal complements and direct quotations:
 
(1) Ai said that she wants boba.
(2) Ai said, “I want boba.”
 
It’s not immediately obvious that this fact is problematic for the traditional analysis of verbs of speech. A parsimonious analysis (inspired by Pagin & Westerståhl 2010, Maier 2020, Rabern 2023): direct quotations are ordinary embedded clauses, plus some kind of “quotation operator” that delivers the interpretation of indexicals relative to the reported context, not the matrix context, to explain things like why I refers to Ai in (2). This allows for a uniform analysis of both complement types: each is an embedded clause that delivers a proposition to the compositional semantics, to be composed with the verb of speech. Unfortunately this can’t be right. Quotative complements are not ordinary syntactic objects. They are both too little and too much: they need not be syntactic constituents at all; and paralinguistic information that is ordinarily non-truth-conditional becomes truth-conditional in quotative complements to verbs of speech.
 
A second try at a parsimonious analysis (inspired by Potts 2007, Maier 2023): quotative complements are not ordinary syntactic objects; rather, they demonstrate the form of the reported utterance (Davidson 1979, Clark & Gerrig 1990, Davidson 2015). These forms, despite not being ordinary syntactic objects, can still be arguments to a “quotation operator” that maps the form of an utterance to its content. The verb of speech relates its subject to the proposition denoted by the utterance whose form is quoted, just as a verb of speech with an ordinary clausal complement relates its subject to the proposition denoted by the embedded clause. Unfortunately this can’t be right. Speech reports with ordinary clausal complements invariably describe assertions, even when the embedded clause is interrogative—an interrogative clause needs to be mapped onto a proposition to compose with a verb of speech. But speech reports with quotative complements can describe question-asking, or humming, or insensate wailing, or intoning a word for the pleasure of articulating it alone.
 

A third try at a parsimonious analysis: verbs of speech don’t relate individuals to propositions at all. We only thought they did because we weren’t thinking about the entire empirical picture. Verbs of speech just describe speaking events. The thematic structure by which ordinary clausal complements relate to verbs of speech contributes a relation between speaking events and their propositional content (Kratzer 2006, Moulton 2009, Hacquard 2010). Quotative complements do not relate to verbs of speech via the same thematic structure: instead of specifying the propositional content of a speech event, they specify a demonstration of it.

 

I currently think this analysis is right. If it’s right, there’s an important semantic difference between (1) and (2). (1) can be paraphrased as “There is a speaking event whose agent is Ai, and the propositional content expressed by that event is that she wants boba.” (2) can be paraphrased as “There is a speaking event whose agent is Ai, and it went like this: ‘I want boba.’” When, as in this case, the quotation is of an assertive utterance, it’s easy to take the two descriptions to be synonymous.

Prof. Shelome Gooden of the University of Pittsburgh will be speaking to us this Friday, October 20. Please join us! A title, abstract, and bio for Dr. Gooden are included below.

Time/Date: 3-4:30 October 20
Location: Comm 311
If you would like to meet with Dr. Gooden on Friday before her talk, please sign up at: 
 
Title/Abstract

Structure, Variation and Change in Creoles: Views from the P-side

 

Discussions about Creole language structures inevitably center on questions of language transfer, variation and change, and on implications for theories of language change. While theoretical musings are commonplace for grammatical properties of Creoles, discussions of lessons from the P-side (phonology, phonetics, prosody) have not happened in earnest (Clements & Gooden 2009). The gauge has not shifted much among Creolists, nor in the wider field of linguistics. On one hand, research on the P-side of Creole languages has not advanced significantly. On the other hand, outside of Creolistics, the languages are still seen as infantile language systems, crucibles of simplicity, thus contributing little to advance linguistic theory (Gooden 2022). Using (semi-)spontaneous speech data from rural varieties of Jamaican Creole, I present three cases that offer a sampling of the kinds of data that might be used to fuel exciting new avenues of inquiry to better our understanding of processes of language variation and change. These data also demonstrate that Creoles are not linguistic aberrations, but products of natural processes of speaker creativity, i.e. evidence against Creole exceptionalism (DeGraff 2005; Winford 2012; Gooden 2022).

Case 1: Some descriptions characterize JC stops as pulmonic egressive ([b], [d], [g]), but some more recent works have argued that these can be pronounced as implosives ([ɓ], [ɗ], [ɠ]), or that JC has implosive stops (Devonish & Harry 2004; Harry 2006). The current analysis demonstrates that there is variability in the production of voiced stops, influenced by factors such as place of articulation, word duration, speaking style, and discourse topic. I argue that JC speakers manipulate stop articulation in ways that may (or may not) be indicative of substrate transfer effects.

Case 2: Despite the rich history of language contact, English-lexicon Creoles like JC and Trinidadian are not prosodic clones of their input languages. So, while JC has an ip and an IP above the word (Gooden 2003, 2014), Trinidadian English Creole has an accentual phrase (AP) rather than an ip (Drayton 2014). Acoustic cues to prosodic phrasing are largely unexplored, and the initial evidence suggests these include duration, tone height, pitch scaling, and voice quality differences. Further, in JC there is some evidence of geographical variation in tone height realization between Central and Eastern varieties.

Case 3: Focus prosody. A well-documented focus strategy in Creoles is morphosyntactic marking through word order changes or focus particles (e.g. Byrne, Caskey & Winford 1993, Kouwenberg 1994, Patrick 2004, Aboh 2006, Durrleman 2007). What is less well known (and has been argued not to occur) are strategies that make use of intonation and the position and type of pitch accent. Not only are intonational strategies used, but they permit multiple foci where purely morphosyntactic strategies for focus are impossible.

 
Bio:

Shelome Gooden is Professor of Linguistics at the University of Pittsburgh and is currently Assistant Vice Chancellor for Research for the Humanities, Arts, Social Sciences, and Related Fields. She received an MA and PhD in Linguistics from the Ohio State University (2003). She has served on the advisory board for Creative Multilingualism, has served for the past 16 years in various roles on the Executive Committee of the Society for Pidgin and Creole Linguistics, and has been a member of the Society for Caribbean Linguistics for just over 25 years. Her research focuses mainly on language contact, intonation, and prosody in black language varieties in the Caribbean. Fieldwork has taken her to Belize and Jamaica, and she has also worked with digitized recordings of varieties like Sranan, Trinidadian, and African American English. She designed and teaches a course on Language and the Black Experience, for which she has won a teaching award.

Her peer-reviewed publications are a combination of journal articles, edited volumes, edited special issues of top linguistics journals, high-profile conference proceedings, and invited full-length articles in prestigious handbooks. She has served as a prepublication reviewer for 12 different peer-reviewed research publications across the fields of linguistics, as an abstract reviewer for various national and international linguistics conferences, and as a grant reviewer and panelist for the National Science Foundation. Most recently, she took on the role of Co-editor of Language and is the Publications Officer for the Society for Caribbean Linguistics. Gooden’s recent publications include: In the Fisherman’s Net: Language Contact in a Sociolinguistic Context (in Blake & Buchstaller 2019); and Intonation and Prosody in Creole Languages: An Evolving Ecology, Annual Review of Linguistics (2022). She was guest co-editor of a special issue of the journal Language and Speech (2022) and co-editor of a book for Language Science Press (Social and structural aspects of language contact and change, 2022).

This Friday Nov 3, Dr. Rolando Coto Solano of Dartmouth College will speak to us in the Linguistics Colloquium series (Title/abstract below). Dr. Coto Solano is a graduate of our own PhD program and we're especially pleased to hear from him. Please join us! 

Time/date: Friday Nov 3, 3 pm -4:30
Place: Communications 311
If you are interested in meeting with Dr. Coto Solano on Friday, please sign up at:

https://docs.google.com/spreadsheets/d/13vu0MyCUs3ftbUx8KP5K2ahoS1S143ImyiTLxkgLDvs/edit?usp=sharing

 
Title: Deep learning for language documentation and revitalization
Abstract: Deep learning algorithms have made enough progress in recent years that they can be used to create tools to aid in language documentation and revitalization. In this talk I will focus on examples of NLP work for Cook Islands Māori, while also using examples from Indigenous languages from Costa Rica. These examples include speech recognition and synthesis, syntactic parsing, phonetic documentation, machine translation and embedding analysis. We have used these to augment corpora, help train language teachers, expand domains of usage for the language, and explore the applicability of deep learning to these tasks. I will show some of the lessons learned working with extremely low-resource scenarios, and the differences between simulating low-resource computing and actually working with Indigenous and minority languages.
 
Time/date: Friday Nov 17, 3:00-4:30
Location: Comm 311
 
If you'd like to meet with Dr. Jasbi, please sign up at: 
 
Title: How to win the debate: Nativism and Empiricism in the Acquisition of Logical Connective Words
 
Abstract: This talk reviews the old debate between nativism and empiricism using novel data and modeling on children's acquisition of disjunction and negation. I first suggest that nativism and empiricism can be construed as "stances". I present "logical nativism" as the stance that logical concepts such as negation, conjunction, and disjunction as well as some of their linguistic properties are innate and part of the language faculty. "Logical empiricism", on the other hand, is the stance that such concepts and their linguistic properties are learned. I first take an empiricist stance and present experimental, corpus, and modeling evidence on children's acquisition of disjunction suggesting that a particular nativist constraint on the semantics of disjunction need not be innate. Then I take a more nativist stance and present corpus evidence suggesting that contrary to what previous empiricist or constructivist accounts suggest, negation may be an abstract and context-general concept present from early childhood. I finish by discussing my seemingly contradictory stances, arguing that changing stances is not as undesirable as it may sound and can potentially help us make better progress and in some sense "win" the nativist-empiricist debate.
 
Grammatical tone and person marking in Rere
This talk explores the role of grammatical tone in the verb system of Rere (Kordofanian, Sudan) based on primary data collection. Grammatical tone indicates tense-aspect-mood (TAM) in combination with a verbal suffix and auxiliaries. Verbal arguments are expressed by noun class agreement prefixes and pronominal enclitics. The position and segmental form of these markers do not change, but whether they expone subject or object depends on syntactic configuration. Subject-topic sentences are SVO, class agreement is with the subject, and object pronominals appear post-verbally as enclitics: CL.SBJ-Verb=OBJ. However, object topicalization (of a nominal or a 3rd person pronominal) requires class agreement with the object and a post-verbal subject: OVS, or CL.OBJ-Verb=SBJ if the subject is pronominal. The verb forms are distinguished by tone on the enclitics. We argue that grammatical tone expones both case and number of pronominal enclitics. Furthermore, we show that the tone of pronominal enclitics is dominant with respect to TAM grammatical tone: it both overwrites the TAM tone and blocks a high-tone spreading rule that TAM tone does not block.

Speaker I – Will Oxford of MIT and the University of Manitoba

Topic: How to be(come) a direct/inverse language

In a “direct/inverse” alignment system, the agreement morphology that indexes a particular nominal is determined by the nominal’s rank on the person hierarchy rather than by its grammatical function, and a special marker indicates whether the highest-ranking nominal is the agent (direct) or patient (inverse). Algonquian languages are often seen as the prototypical example of such a system, but from a diachronic perspective, the Algonquian direct/inverse pattern is not particularly old: internal and external evidence both point to a reconstructed ancestor in which the agreement morphology shows prototypical nominative/accusative alignment. So where did the direct/inverse pattern come from, and how does the underlying syntax of a direct/inverse language differ from that of a nominative/accusative language? In this talk I propose answers to both questions. Diachronically, I propose that the Algonquian direct/inverse system arose when a gap in an innovative paradigm of verb inflection was filled by the analogical extension of an agreement pattern that was previously dedicated to passive forms. Synchronically, I propose that the direct/inverse pattern reflects the interaction of an object-agreement probe on the Voice head and an "omnivorous" probe on the Infl head. This analysis, formalized using Deal's (2015) interaction-and-satisfaction model of the Agree operation, provides an elegant account of twelve different distributions of inverse marking across the Algonquian family. These proposals allow the Algonquian system to be integrated more closely into standard typological categories and formal analyses rather than standing as a type of its own. Given the prototypical status accorded to Algonquian in typological and theoretical discussions of direct/inverse marking, the fact that the Algonquian system dissolves into simpler and less unusual parts suggests that a degree of skepticism may be in order for putative direct/inverse systems in other language families as well.

Jan. 14: Holly Kennard, University of Oxford, UK (9:00 AM, Zoom)

"The adaptation of loanwords in Breton: stress, gender and morphophonology".

Loanwords are inescapable in Breton, which is unsurprising given its status as a minority language and centuries of contact with the prestige variety, French. Decades of language decline followed by more recent revitalisation mean that today there are two main groups of speakers: older, traditional speakers who grew up speaking Breton at home; and younger 'new' speakers, who have acquired the language largely through formal education.

As borrowing a word involves adapting it to the phonology and morphology of the receiving language, loanwords in Breton must be assigned a stress pattern and grammatical gender, and be integrated into the morphophonology. Patterns of loanword adaptation may be predictable, but equally may not: for example, kókombr ‘cucumbers’, borrowed from French concombre ‘cucumber’, is a collective noun rather than a singular. The form kokómbrez also exists with a collective meaning, and the singular kokombrézenn is built on this. As a result, the stress pattern for this word not only differs from that of French, but is also different in the singular and the plural, which has consequences for gender and mutation. This picture is further complicated by the sociolinguistic context in which Breton is spoken, with claims that younger Breton speakers avoid loanwords from French in favour of more 'Celtic' equivalents.

In this talk I examine data from both older and younger Breton speakers and investigate how they treat loanwords from French, with a particular focus on stress, grammatical gender, and morphological processes. The patterns of language use are complex; however, I find that loanwords are less likely to be adapted to Breton stress than to its morphophonology, which perhaps reflects both the different stress patterns that can be observed in Breton and the variability in stress usage among the younger Breton speakers.

  • February 18th: Christian DiCanio, University at Buffalo (3:00 PM, In-person)
  • March 18th: Marianne Mithun, UC Santa Barbara (3:00 PM, In-person)
  • April 1st: Damian Blasi, Harvard University (3:00 PM, In-person)
  • April 15th: Timo Roettger, University of Oslo (9:00 AM, Zoom)
  • April 29th: Suzi Oliveira de Lima, University of Toronto (3:00 PM, Zoom)