Acoustic sensor networks

Acoustic sensor networks (ASNs), and in particular wireless acoustic sensor networks (WASNs), are considered a next-generation technology for audio acquisition and processing. Compared to traditional microphone arrays, they offer more flexibility, as they can be incorporated into many types of embedded devices, and more scalability, as they can cover both small-scale and large-scale acoustic scenarios.

Examples include assisted living and smart homes, with sensors spread across a room or an entire house, as well as smart cities or natural-habitat monitoring, where sensors are spread across a vast area.

The tasks these networks address range from speech and speaker recognition to the detection of acoustic events and the clustering and classification of sound sources. Example classification labels include “car horn”, “ambulance siren”, “telephone ringing”, “washing machine running”, “chainsaw cutting”, “truck engine noise”, etc.

What to expect:

The main development environments are Matlab and/or Python; any TensorFlow experience is also welcome. Some projects have also been developed in Java and deployed on Android devices.

You will get the chance to use signal processing tools (FFT, DCT, mel spectrum, MFCC features, etc.) coupled with machine learning approaches (LDA, PCA, clustering, entropy, mutual information, etc.). On some projects you will have the opportunity to use and learn more about state-of-the-art deep neural network architectures (CNNs, MLPs, GRUs, etc.).
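To give a flavor of the kind of signal processing involved, here is a minimal, illustrative sketch of MFCC feature extraction (framing, windowing, FFT, mel filterbank, log, DCT) using only NumPy. It is not code from any of the listed projects; all function names and parameter choices (16 kHz sample rate, 512-point FFT, 26 mel filters, 13 coefficients) are assumptions for the example.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)  # rising slope
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)  # falling slope
    return fb

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_filters=26, n_coeffs=13):
    # Frame the signal, apply a Hamming window, take the power spectrum
    n_frames = 1 + (len(signal) - n_fft) // hop
    window = np.hamming(n_fft)
    frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Mel filterbank energies, log compression
    log_mel = np.log(power @ mel_filterbank(n_filters, n_fft, sr).T + 1e-10)
    # DCT-II over the filterbank axis decorrelates the log energies
    n = np.arange(n_filters)
    basis = np.cos(np.pi / n_filters * (n[None, :] + 0.5)
                   * np.arange(n_coeffs)[:, None])
    return log_mel @ basis.T

# Example: MFCCs of one second of a 440 Hz tone
sr = 16000
t = np.arange(sr) / sr
feats = mfcc(np.sin(2 * np.pi * 440.0 * t), sr=sr)
print(feats.shape)  # one row of 13 coefficients per frame
```

In practice, libraries such as librosa or python_speech_features provide tuned implementations of this pipeline; the sketch above only shows the processing chain you would be working with.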

The work can be performed individually or in a team. In addition, you will learn and apply basic Agile project management techniques such as sprints and scrum meetings.

Examples of projects completed so far:

  • Bachelor practical projects (Praxisprojekte):
    • Extraction of MFCC features on Android-based embedded devices

    • Audio data acquisition and management on Android-based embedded devices

    • Music genre classification using Mod-MFCC features

    • Neural Network based Feature Extraction for Gender Discrimination and Speaker Identification

  • Bachelor theses:
    • Audio feature extraction with privacy constraints on Android-based embedded devices

    • Audio signal classification with privacy constraints on Android-based embedded devices

    • Automatic classification of moving vehicles using audio signals

    • Generation of cryptographic keys using the available information of acoustic channels

    • Content- and context-based classification of music signals using deep neural networks

Contact:

Luca Becker, M.Sc.  
ID 2/255  
0234 / 32 27543  
luca.becker@rub.de