PLENARY SPEAKERS

Richard Breheny (University College London)

Paul Engelhardt (UEA)

Eugen Fischer (UEA)

Yosef Grodzinsky (Hebrew University of Jerusalem)

Stefan Heim (RWTH Aachen University)

Lucy MacGregor (University of Cambridge)

Ken McRae (Western University, London, Ontario)

Milena Rabovsky (Freie Universität Berlin)

Maria Spychalska (University of Cologne)

Jakub Szymanik (University of Amsterdam)

Matthias Unterhuber (Ruhr University Bochum)

Markus Werning (Ruhr University Bochum)

List of Abstracts (Keynote Talks) of the Workshop Reasoning, Argumentation and Logic in Natural Language: Experiments and Models

April 3–5, 2019, Bochum

Organization: Markus Werning (U Bochum), Matthias Unterhuber (U Bochum), Maria Spychalska (U Cologne) and Eugen Fischer (UEA)

 

Diverse Mechanisms Explain Scalar Diversity

Richard Breheny (University College London)

Abstract: When research on scalar inference looks at scalar expressions beyond the usual quantifiers like ‘some’ and connectives like ‘or’, a consistent finding is that participants are more liable to derive a scalar inference in some cases than in others, a pattern known as the Scalar Diversity (SD) effect (Doran et al., 2009; van Tiel et al., 2016). Research on SD has mainly considered ways in which relations between the scalar term and its alternative, or sentences containing them, might affect participants’ responses. Viewing SD through the prism of exhaustification with respect to alternatives explains only a limited amount of variance. In this paper, I will explain how other well-studied pragmatic mechanisms could be involved in explaining SD. In particular, I will look at ways in which apparent scalar inference is derived without exhaustification with respect to alternatives. I will present a series of studies which support a multi-mechanism approach to SD and pose problems for the exhaustification-only approach to scalar inference. In addition, I will present a Bayesian model which can incorporate these several mechanisms and show that it outperforms a model which allows only for exhaustification.

 

Naturalized Argument Analysis: Using Eye-tracking to Analyse Philosophical Arguments

Eugen Fischer and Paul Engelhardt (UEA)

Abstract: The novel research programme of naturalized argument analysis complements familiar philosophical methods with experimental, computational, and formal methods to analyse and assess arguments couched in natural language. This talk will present the research programme, first results, and work in progress. We will discuss the use of pupillometry and reading-time measurements to test new analyses of the philosophical paradoxes known as ‘arguments from illusion’ and ‘arguments from hallucination’, which jointly engender the philosophical ‘problem of perception’. We suggest that contextually inappropriate stereotypical inferences from appearance and perception verbs are at the root of these paradoxes, develop a psycholinguistic explanation of when and why even competent language users make such inappropriate inferences, and present experiments that document the inferences of interest.

 

The Processing Cost of Multiple Negations and the Language-Logic Interface in the Brain

Yosef Grodzinsky (Hebrew University of Jerusalem)

Abstract: My colleagues and I sought to explore the relation between language and logic via a series of studies of implicit and explicit negation. I will present evidence from experimental projects that studied the relation between sentence complexity (vis-à-vis negation) and reaction time (RT):

1.      speeded verification experiments with multiple negations in (i) healthy adults (conducted in Jerusalem) and (ii) aging populations (conducted in Jülich). Results indicate that what determines speeded verification behavior, and the deterioration thereof in old age, is not the linguistic complexity of the sentence processed, but rather, its monotonicity properties.

2.      a complex fMRI experiment, indicating that non-linguistic negation activates regions that are outside the language areas.

I will try to convince the audience that these results bolster age-old claims concerning the independence of language and logic.

 

From Conceptual Preparation to the Ill-formed Utterance: Where is the Neurocognitive Locus of Crazy Talk in Schizophrenia?

Stefan Heim (RWTH Aachen University)

Abstract: For schizophrenia, “disorganised speech” is one of the five core diagnostic criteria in the DSM-5. Disorganised speech has thus replaced the former criterion “formal thought disorder”, because language behaviour is more easily observable and measurable than thought. Still, the question remains whether formal thought disorder is actually the basis of, and thus reflected in, disorganised speech. In my talk, I will present and review neurolinguistic evidence from cognitive-behavioural and neuroimaging studies, venturing to identify the locus of the disturbance at one or several of the processing steps in language production.

 

You Rock! Understanding Semantic Ambiguity

Lucy MacGregor (University of Cambridge)

Abstract: Most common words are semantically ambiguous, such that their meaning depends on context. For example, “ace” can refer to a playing card or a tennis serve that an opponent is unable to return. Thus, the ability to make sense of – resolve – ambiguity is a fundamental part of language comprehension. I will present data from experiments in which we have used EEG and MEG to understand more about the distinct cognitive operations during semantic ambiguity resolution and their underlying neural mechanisms.

 

Event Knowledge and Semantic Processing

Ken McRae (Western University, London, Ontario)

Abstract: People constantly use concepts and word meaning to recognize entities and objects in their environment, to anticipate how entities will behave and interact with one another, to know how objects should be used, and to understand language. Over the years, there have been a number of theories regarding how concepts are organized and structured in semantic memory. For example, various theories stress that concepts (or lexical items) are linked by undifferentiated associations. Other theories stress hierarchical categorical (taxonomic) structure, whereas others focus on conceptual similarity spaces. In this talk, I will present evidence that people’s knowledge of real-world events and situations is an important factor underlying the structure and (contextually determined) usage of concepts in semantic memory. I will present experiments spanning word, picture, and sentence processing. Evidence for the importance of event-based knowledge will cover a number of types of concepts, including verbs, nouns denoting living and nonliving things, and abstract concepts. I conclude that semantic memory is structured in the mind so that the computation and use of knowledge of real-world events and situations is both rapid and fundamental. In other words, event knowledge is an important force that shapes the dynamics of real-time, context-sensitive, semantic computations.

 

Modeling the N400 Brain Potential as Change in a Probabilistic Representation of Meaning

Milena Rabovsky (Free University Berlin)

Abstract: The N400 component of the event-related brain potential has aroused much interest because it is thought to provide an online measure of meaning processing in the brain. However, the underlying process remains incompletely understood and actively debated. In the talk I will present a computationally explicit account of this process and the emerging representation of sentence meaning. We simulate N400 amplitudes as the change induced by an incoming stimulus in an implicit and probabilistic representation of meaning captured by the hidden unit activation pattern in a neural network model of sentence comprehension, and we propose that the process underlying the N400 also drives implicit learning in the network. The model provides a unified account of 16 distinct findings from the N400 literature and connects human language comprehension with recent deep learning approaches to language processing.
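The account's core quantity, N400 amplitude as the change an incoming word induces in a hidden meaning representation, can be sketched as follows. This is a toy recurrent update with invented weights, not the authors' actual sentence-comprehension network; the names `update_hidden` and `n400_proxy` are hypothetical.

```python
import numpy as np

def update_hidden(hidden, word_vec, W_h, W_x):
    """One update step of a simple recurrent meaning representation."""
    return np.tanh(W_h @ hidden + W_x @ word_vec)

def n400_proxy(word_vectors, dim=4, seed=0):
    """Return, for each word, the change it induces in the hidden
    activation pattern (the sketch's stand-in for N400 amplitude)."""
    rng = np.random.default_rng(seed)
    n_features = len(word_vectors[0])
    W_h = rng.normal(scale=0.3, size=(dim, dim))      # toy recurrent weights
    W_x = rng.normal(scale=0.3, size=(dim, n_features))  # toy input weights
    hidden = np.zeros(dim)
    changes = []
    for vec in word_vectors:
        new_hidden = update_hidden(hidden, np.asarray(vec), W_h, W_x)
        changes.append(float(np.sum(np.abs(new_hidden - hidden))))
        hidden = new_hidden
    return changes
```

In the full account, the same update signal that yields the simulated N400 also serves as an error term driving implicit learning; this sketch only illustrates the "change in representation" idea.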

 

Not All Implicatures Are Created Equal: On the Role of Judgment, Information and Relevance in Pragmatic Processing

Maria Spychalska (University of Cologne)

Abstract: TBA

 

Why are the Natural Language Quantifiers Monotone?

Jakub Szymanik (University of Amsterdam)

Abstract: Despite extraordinary differences between natural languages, linguists have identified many semantic universals – shared properties of meaning – that are yet to receive a unified explanation. A prominent example comes from the domain of quantification: natural languages lexicalize only monotone quantifiers. In our current research we have been developing the hypothesis that semantic universals, such as monotonicity, are to be explained in terms of learnability. Using tools from machine learning, we show that monotone meanings are easier to learn than non-monotone ones. Furthermore, we consider the emergence of quantifiers from the perspective of cultural evolution and show (in an iterated learning paradigm with neural networks as agents) that monotone quantifiers regularly evolve. Finally, one may wonder how learnability differs from complexity. To address this issue, I will discuss our current work comparing the complexity (in terms of Kolmogorov complexity and minimum description length) and learnability of monotone quantifiers.
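The monotonicity property at issue can be illustrated with a minimal enumeration check over a small universe. This toy sketch is not the talk's learnability setup; the quantifiers and helper names are chosen for illustration. A quantifier Q is upward monotone in its second argument if Q(A, B) and B ⊆ B′ together imply Q(A, B′).

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s, as a list of sets."""
    s = list(s)
    return [set(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def is_upward_monotone(Q, universe):
    """Brute-force check of upward monotonicity in the second argument."""
    subsets = powerset(universe)
    for A in subsets:
        for B in subsets:
            if not Q(A, B):
                continue
            for B2 in subsets:
                if B <= B2 and not Q(A, B2):
                    return False  # counterexample: enlarging B falsified Q
    return True

some = lambda A, B: len(A & B) > 0          # "some A are B": upward monotone
exactly_one = lambda A, B: len(A & B) == 1  # "exactly one A is B": not monotone
```

For instance, `is_upward_monotone(some, {1, 2, 3})` holds, while `exactly_one` fails the check, matching the observation that languages lexicalize quantifiers like ‘some’ but not ‘exactly one’ as a single simple word in the monotone class.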

 

Bayesian Pragmatics and the Contextual Modulation of Word Meanings: A Quantitative Model of EEG and Cloze Data

Markus Werning (Ruhr University Bochum) and Matthias Unterhuber (Ruhr University Bochum)

Abstract: We contrast three quantitative models to explain the contextual modulation of a word’s meaning and how it affects probabilistic predictions on the completion of a discourse. The Semantic Similarity Model presupposes that the predictor for a word given a preceding discourse is their semantic similarity. The Relevance Model maintains that utterances are chosen to maximize relevance. The Bayesian Pragmatic Model accounts for the rational cooperation between speaker and listener using Bayes’s Theorem. In a Cloze test, we measure a word’s probability following a preceding discourse. In an EEG study, we record its event-related potential and take the N400 component to be inversely correlated with its predictive probability. In a regression analysis, the proportions of variance explained by each model are compared with regard to both, Cloze values and N400 amplitudes. The Bayesian Pragmatic model turns out to best explain the data.