Intelligent Sound Engineering Lab

Some of our current and recent research projects include:

PhD Study - interested in joining the team? We are currently accepting PhD applications.

Aims

We develop intelligent recording techniques for use by audio editors, mixers and sound engineers, which speed up the recording process, minimise preparation for live performance, and enable easy preparation and transmission of high-resolution audio.

Advances in signal processing, machine learning and adaptive systems have rarely been applied to the professional audio market. This is partly because most digital signal processing applications in these areas have remained focused on replicating or improving techniques that could be applied in the analogue domain. Until recently, mixing consoles and audio workstations also lacked the computing power to support multi-input, multi-output processing tools, so audio effects have traditionally been limited to those which operate on single or stereo channels. There is now an opportunity to develop advanced audio effects which analyse all input channels in order to produce the ideal mix.
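The idea of an effect that analyses all input channels at once, rather than processing each in isolation, can be illustrated with a minimal sketch. This is not one of our systems: the function name, target level and test signals are illustrative assumptions, and it balances tracks by RMS alone, ignoring perceptual loudness and masking.

```python
import numpy as np

def auto_gain_mix(channels, target_rms=0.1):
    """Cross-adaptive level balancing (illustrative): scale every input
    channel to a common RMS level before summing, so that the gain of
    each track depends on an analysis of all inputs together."""
    gains = []
    for ch in channels:
        rms = np.sqrt(np.mean(ch ** 2))
        gains.append(target_rms / rms if rms > 0 else 0.0)
    mix = sum(g * ch for g, ch in zip(gains, channels))
    return mix, gains

# Two sine "tracks" at very different levels
t = np.linspace(0, 1, 44100, endpoint=False)
loud = 0.8 * np.sin(2 * np.pi * 220 * t)
quiet = 0.05 * np.sin(2 * np.pi * 330 * t)
mix, gains = auto_gain_mix([loud, quiet])
```

A practical system would replace the RMS measure with a perceptual loudness model and add time-varying smoothing; the cross-adaptive structure, however, is the same.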

Audio engineering for live sound production is a field with strong potential for improvement and automation. Much of a sound engineer's effort in preparation for a live performance is consumed by tedious, repetitive tasks: levels must be set to avoid feedback, input channels must be panned to stereo or surround sound, equalisation, normalisation and compression must be applied to each channel, and all equipment must be tested while establishing an optimal choice of microphone placement. Only after these tasks have been performed, if time and resources permit, may the sound engineer refine these choices to produce an aesthetically pleasing mix which best captures the intended sound. There is a need for tools which minimise sound-checks by automating complex but non-artistic tasks, establish recommended settings based on the input signals and acoustics, and identify and avoid issues such as acoustic feedback and microphone crossover.
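One of the per-channel tasks mentioned above, dynamic range compression, reduces the level of loud passages relative to quiet ones. A minimal sketch of the static gain law is below; the function name and parameter defaults are illustrative assumptions, and a real compressor would add attack/release smoothing and make-up gain.

```python
import numpy as np

def simple_compressor(x, threshold_db=-20.0, ratio=4.0):
    """Static (memoryless) compression curve: samples whose level
    exceeds the threshold are attenuated according to the ratio.
    Illustrative only; no attack/release envelope is applied."""
    level_db = 20.0 * np.log10(np.maximum(np.abs(x), 1e-12))
    over = np.maximum(level_db - threshold_db, 0.0)   # dB above threshold
    gain_db = -over * (1.0 - 1.0 / ratio)             # 4:1 keeps 1/4 of the overshoot
    return x * 10.0 ** (gain_db / 20.0)

y = simple_compressor(np.array([1.0, 0.05]))  # one loud sample, one quiet
```

With a -20 dB threshold and a 4:1 ratio, a full-scale sample (0 dB, 20 dB over) is reduced by 15 dB, while a sample below the threshold passes unchanged.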

We develop and test techniques to convert audio mixes between formats. We are devising methods to automatically create a surround sound mix which minimises the masking of sources and places each source in the position that is most subjectively pleasing to the listener.

We investigate methodologies of audio editing used by professional sound engineers, in order to better establish best practices and specify the metadata which will be used to enable automation of audio editing.

We develop sound synthesis algorithms in both analogue and digital forms. Sound synthesis is an important tool for cinema, multimedia, games and sound installations, and fits within the wider context of sound design: the discipline of acquiring, creating and manipulating sounds to achieve a desired effect or mood. Sound synthesis research within the Centre for Digital Music crosses several themes, including Intelligent Sound Engineering and Augmented Instruments. We seek to uncover new synthesis techniques, as well as enhance existing approaches and adapt them to new applications. With a strong emphasis on performance, expression and evaluation, much of our research is focused on real-world applications, empowering users and bringing sound synthesis to the forefront of sound design in the creative industries.
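As a concrete, if elementary, example of a digital synthesis technique (not a description of our research systems; the function name and parameters are illustrative), classic two-operator frequency modulation synthesis produces rich spectra from just two sine oscillators:

```python
import numpy as np

def fm_tone(fc=440.0, fm=110.0, index=2.0, dur=1.0, sr=44100):
    """Two-operator FM synthesis: a sine carrier at fc whose phase is
    modulated by a sine at fm, scaled by the modulation index. Larger
    indices spread energy into more sidebands, giving a brighter tone."""
    t = np.arange(int(dur * sr)) / sr
    return np.sin(2.0 * np.pi * fc * t + index * np.sin(2.0 * np.pi * fm * t))

tone = fm_tone()  # one second of audio at 44.1 kHz
```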

The benefits of these techniques are demonstrated by developing, evaluating and deploying prototype systems for intelligent recording and sound reproduction.

Personnel

Members

Alexander Williams

PhD Student

User-driven deep music generation in digital audio workstations

Chin-Yun Yu

PhD Student

Neural Audio Synthesis with Expressiveness Control

Christian Steinmetz

PhD Student

End-to-end generative modeling of multitrack mixing with non-parallel data and adversarial networks

David Südholt

PhD Student

Machine Learning of Physical Models for Voice Synthesis

Prof Joshua D Reiss

Professor of Audio Engineering

sound engineering, intelligent audio production, sound synthesis, audio effects, automatic mixing

Katarzyna Adamska

PhD Student

Predicting hit songs: multimodal and data-driven approach

Marco Comunità

PhD Student

Machine learning applied to sound synthesis models

Nelly Garcia

PhD Student

An investigation evaluating realism in sound design

Rodrigo Mauricio Diaz Fernandez

PhD Student

Hybrid Neural Methods for Sound Synthesis

Xavier Riley

PhD Student

Pitch tracking for music applications - beyond 99% accuracy

Xiaowan Yi

PhD Student

Composition-aware music recommendation system for music production

Yazhou Li

PhD Student

Virtual Placement of Objects in Acoustic Scenes

School of Electronic Engineering and Computer Science
Queen Mary University of London
Mile End Road
London
E1 4NS
United Kingdom


© Queen Mary University of London.