Detecting Disguise in Written Language and Audio-Visual Biometrics

Detecting Imitation and Disguise in Written Language

Today, the World Wide Web enables a direct exchange of information and knowledge between users anywhere and at any time, and we rely on the integrity of our communication partners. Using a false identity for criminal purposes can cause considerable damage. Linguistic techniques for imitation and disguise have been analyzed especially in the context of, for example, extortion letters. So far, however, there has been no adequate investigation into how linguistic imitation strategies can be discovered by intelligent algorithms. The aim of this project is therefore to develop machine-learning-based algorithms that identify linguistic imitation with high reliability in data sets of inherently high variability, and to classify documents according to the discovered linguistic features.
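To make the idea of classifying documents by linguistic features concrete, the sketch below extracts simple stylometric features (function-word frequencies and average word length) and attributes a document to the author whose style centroid it most resembles. The feature set, the cosine-similarity comparison, and all names here are illustrative assumptions, not the project's actual method.

```python
# Hedged sketch of stylometric attribution; the concrete features and
# classifier used in the project are not specified in the text above.
from collections import Counter
import math

# A tiny illustrative set of English function words (assumption).
FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "is", "it"]

def stylometric_features(text):
    """Map a document to a feature vector: relative frequencies of
    common function words plus a scaled average word length."""
    words = text.lower().split()
    counts = Counter(words)
    n = max(len(words), 1)
    freqs = [counts[w] / n for w in FUNCTION_WORDS]
    avg_len = sum(len(w) for w in words) / n
    return freqs + [avg_len / 10.0]  # crude scaling to a comparable range

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def closer_author(doc, author_a_texts, author_b_texts):
    """Attribute `doc` to whichever author's style centroid is closer."""
    def centroid(texts):
        vecs = [stylometric_features(t) for t in texts]
        return [sum(col) / len(vecs) for col in zip(*vecs)]
    v = stylometric_features(doc)
    sim_a = cosine(v, centroid(author_a_texts))
    sim_b = cosine(v, centroid(author_b_texts))
    return "A" if sim_a >= sim_b else "B"
```

A real system would of course use far richer features (character n-grams, syntax, idiosyncratic errors) and a trained classifier, since an imitator deliberately shifts exactly these surface statistics.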


Audio-Visual Biometrics

Biometric systems such as speaker recognition are vulnerable to spoofing attacks, for example a replay attack with a pre-recorded utterance of the victim. In addition, their reliability can be degraded by noisy environments. As a possible countermeasure, different biometric modalities can be combined to increase the robustness of the recognition. For audio-visual recognition, speaker and face recognition are generally considered. As an additional benefit, the synchronicity between the utterance and the lip movements can be verified to detect spoofing attacks. Furthermore, depending on the environmental conditions, either the speaker or the face recognition can be weighted more strongly. One challenge is therefore to combine both modalities in such a way that they benefit from each other and an attack can be recognized.
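The weighting and synchrony ideas above can be sketched as a simple score-level fusion: the speaker score is trusted more when the audio is clean, and a lip-sync check rejects replay attacks before fusion. The linear weighting rule, the thresholds, and all names below are illustrative assumptions, not the project's actual fusion scheme.

```python
# Hedged sketch of SNR-weighted audio-visual score fusion with a
# synchrony check; thresholds and the weighting rule are assumptions.

def fusion_weight(snr_db, low=0.0, high=30.0):
    """Weight given to the speaker score: trust the audio modality
    more when the estimated signal-to-noise ratio is high."""
    if snr_db <= low:
        return 0.0
    if snr_db >= high:
        return 1.0
    return (snr_db - low) / (high - low)

def verify(speaker_score, face_score, snr_db, sync_score,
           accept_threshold=0.5, sync_threshold=0.5):
    """Accept only if the fused score passes AND the audio-visual
    synchrony check does not indicate a replay attack.
    All scores are assumed to lie in [0, 1]."""
    if sync_score < sync_threshold:
        # Utterance and lip movements do not match: likely a replayed
        # recording played back to the microphone.
        return "reject: possible spoofing"
    w = fusion_weight(snr_db)
    fused = w * speaker_score + (1.0 - w) * face_score
    return "accept" if fused >= accept_threshold else "reject"
```

With this design, a noisy environment (low SNR) shifts the decision toward the face score, while a mismatch between lips and audio blocks acceptance regardless of how good the individual scores are.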