Paper accepted at Proceedings of the Royal Society B

Although I left academia a year ago, I have some publication news! A paper I worked on together with Wim Pouw, Lara S. Burchardt, and Luc Selen has been accepted for publication by the Royal Society B. We looked into how the biomechanics of gestures influence the voice, using a range of multimodal signals: audio, 3D motion tracking via video (for movement), EMG (for muscle activation), RIP (for breathing, via torso circumference), and a force plate measuring center of pressure (for posture). We show that moving the arms (while manipulating mass) recruits postural muscles whose activity affects the voice via respiration.

While we wait for publication, the postprint is already available, as is the RMarkdown document detailing the methods and results. Wim also made a great post about it on Bluesky.

Photo

Starting new job in 2024

In April 2024, I will start a new role as a usability engineer in the usability lab at the Johner Institut in Frankfurt a. M. I am very excited to gain more industry experience and to learn more about usability and medical devices!

Photo

Conferences in 2024

Work I have been involved in has been accepted at two conferences in 2024. In May, Movement-related muscle activity and kinetics affect vocalization amplitude will be presented at EVOLANG 2024 in Madison, USA. In July, we will present Arm movements increase acoustic markers of expiratory flow at Speech Prosody 2024 in Leiden, NL.

Preprints for both can be found and accessed here.

Dissertation published!

My dissertation The phonetics of speech breathing: pauses, physiology, acoustics, and perception has been published and is available online as open access.

Journal paper on acoustics of breath noises online

I am very happy to announce that the journal paper I first-authored called Acoustics of Breath Noises in Human Speech: Descriptive and Three-Dimensional Modeling Approaches is now online and open access via ASHA and PubMed. My co-authors for this project were Susanne Fuchs (ZAS Berlin), Jürgen Trouvain, Bernd Möbius (both UdS), Steffen Kürbis, and Peter Birkholz (both TU Dresden).

The paper gives the first description of the spectral characteristics of speech breath noises produced by a large number of (German) speakers. In addition, we modeled in- and exhalations using 3D-printed vocal tract models produced from MRI data. The main findings are:

  1. Breath noises have several weak spectral peaks that align with the resonances reported by Hanna et al. (2018), who had participants inhale with the vocal tract configured for a central vowel in a highly controlled setting.
  2. Comparing in- and exhalation spectra in the 3D-printed vocal tract models, airflow direction changes the spectral properties of /s ʃ ç i:/, but not of the other sounds we investigated.
  3. We attempted to compare real inhalations with model inhalations, but a myriad of mechanisms are either hard to model or still un(der)-researched for speech breathing, so there are many interesting questions left open in this field.

The findings may help with the automatic detection of pathologies such as COVID-19, pathological cries and coughs in infancy, or vocal fold paralysis, as well as with inferring the emotional or cognitive state of the speaker. I hope that this paves the way for further work on the acoustics of breath noises in general, but also in pathological and synthesized speech.