The Semantics and Pragmatics of Co-Speech / Co-Sign Communication
Semantic analyses have traditionally focused on formally modeling the meaning of text and, to a lesser extent, spoken or signed utterances. However, natural communication via speech or sign language is often accompanied by other visual communication phenomena, such as manual gestures, facial expressions, body postures, brow movements, and eye gaze. Visual means also enhance written text, for example in the form of emojis or emphatic character lengthening. Recent lines of research investigate what such additional visual means of communication contribute to the meaning of utterances, and how their meaning contributions can be integrated with the semantics and pragmatics of the spoken or signed utterance. This research has shown that the formal linguistic repertoire needs to be extended to properly capture the contributions of gestures and other co-speech/co-sign communication. Finding formal representations of the semantics and pragmatics of gestures and other coverbal communication will not only advance theoretical linguistics, but is also important for the analysis of authentic linguistic corpora, for applications related to therapy and language education, and for multimodal natural language processing based on audio-visual recordings.
In this special session, we invite researchers interested in the semantics and pragmatics of co-speech or co-sign means of communication such as gestures, facial expressions, or emojis. We welcome theoretical, empirical, or applied work that addresses the meaning and formal modeling of these types of visual communication and how their meaning can be composed with that of the verbal channel.