Music Informatics Lab

With online music stores offering millions of songs to choose from, users need assistance. Using digital signal processing, machine learning, and the semantic web, our research explores new ways of intelligently analysing musical data, and assists people in finding the music they want.

We have developed systems for automatic playlisting from personal collections (SoundBite), for looking inside the audio (Sonic Visualiser), for hardening/softening transients, and many others. We also regularly release some of our algorithms under Open Source licences, while maintaining a healthy portfolio of patents.

This area is led by Prof Simon Dixon. Projects in this area include:

  • mid-level music descriptors: chords, keys, notes, beats, drums, instrumentation, timbre, structural segmentation, melody
  • high-level concepts for music classification, retrieval and knowledge discovery: genre, mood, emotions
  • Sonic Visualiser
  • semantic music analysis for intelligent editing
  • linking music-related information and audio data
  • interactive auralisation with room impulse responses

PhD Study: interested in joining the team? We are currently accepting PhD applications.

Members

Aditya Bhattacharjee

PhD Student

Self-supervision in Audio Fingerprinting

Dr Aidan Hogg

Lecturer in Computer Science

spatial and immersive audio, music signal processing, machine learning for audio, music information retrieval

Alexander Williams

PhD Student

User-driven deep music generation in digital audio workstations

Andrea Martelloni

PhD Student

Real-Time Gesture Classification on an Augmented Acoustic Guitar using Deep Learning to Improve Extended-Range and Percussive Solo Playing

Andrew (Drew) Edwards

PhD Student

Deep Learning for Jazz Piano: Transcription + Generative Modeling

Dr Anna Xambó

Senior Lecturer in Sound and Music Computing

new interfaces for musical expression, performance study, human-computer interaction, interaction design

Ashley Noel-Hirst

PhD Student

Latent Spaces for Human-AI music generation

Berker Banar

PhD Student

Towards Composing Contemporary Classical Music using Generative Deep Learning

Chin-Yun Yu

PhD Student

Neural Audio Synthesis with Expressiveness Control

Cyrus Vahidi

PhD Student

Perceptual end-to-end learning for music understanding

David Foster

PhD Student

Modelling the Creative Process of Jazz Improvisation

Elona Shatri

PhD Student

Optical music recognition using deep learning

Dr Emmanouil Benetos

Reader in Machine Listening

Machine listening / computer audition, Machine learning for audio and sequential data, Music information retrieval, Multimodal AI, Resource-efficient AI

Dr George Fazekas

Senior Lecturer

Semantic Audio, Music Information Retrieval, Semantic Web for Music, Machine Learning and Data Science, Music Emotion Recognition, Interactive music systems (e.g. intelligent editing, audio production and performance systems)

Harnick Khera

PhD Student

Informed source separation for multi-mic production

Huan Zhang

PhD Student

Computational Modelling of Expressive Piano Performance

Hyon Kim

Universitat Pompeu Fabra

Automated Music Performance Assessment and Critique

Iacopo Ghinassi

PhD Student

Semantic understanding of TV programme content and structure to enable automatic enhancement and adjustment

Ilaria Manco

PhD Student

Multimodal Deep Learning for Music Information Retrieval

Dr Iran Roman

Lecturer in Sound and Music Computing

theoretical neuroscience, machine perception, artificial intelligence

Ivan Meresman Higgs

Research Assistant

Sample Identification in Mastered Songs using Deep Learning Methods

James Bolt

PhD Student

Intelligent audio and music editing with deep learning

Jaza Syed

Research Assistant

Audio ML, Automatic Lyrics Transcription

Dr Johan Pauwels

Lecturer in Audio Signal Processing

automatic music labelling, music information retrieval, music signal processing, machine learning for audio, chord/key/structure (joint) estimation, instrument identification, multi-track/channel audio, music transcription, graphical models, big data science

Katarzyna Adamska

PhD Student

Predicting hit songs: multimodal and data-driven approach

Dr Ken O'Hanlon

Postdoctoral Researcher

Fine-grained music source separation with deep learning models

Dr Lin Wang

Lecturer in Applied Data Science and Signal Processing

signal processing; machine learning; robot perception

Prof Mark Sandler

C4DM Director

Digital Signal Processing, Digital Audio, Music Informatics, Audio Features, Semantic Audio, Immersive Audio, Studio Science, Music Data Science, Music Linked Data.

Dr Mathieu Barthet

Senior Lecturer in Digital Media

Music information research, Internet of musical things, Extended reality, New interfaces for musical expression, Semantic audio, Music perception (timbre, emotions), Audience-Performer interaction, Participatory art

Ningzhi Wang

PhD Student

Generative Models For Music Audio Representation And Understanding

Dr Pedro Sarmento

Postdoctoral Researcher

music information retrieval, language models for music generation, guitar tablature generation, automatic guitar transcription, deep learning

Ruby Crocker

PhD Student

Continuous mood recognition in film music

Dr Saurjya Sarkar

Postdoctoral Researcher

Audio Source Separation, Music Information Retrieval, Sample Detection

Prof Simon Dixon

Professor of Computer Science, Deputy Director of C4DM, Director of the AIM CDT

Music informatics, music signal processing, artificial intelligence, music cognition; extraction of musical content (e.g. rhythm, harmony, intonation) from audio signals: beat tracking, audio alignment, chord and note transcription, singing intonation; using signal processing approaches, probabilistic models, and deep learning.

Tyler Howard McIntosh

PhD Student

Expressive Performance Rendering for Music Generation Systems

Vjosa Preniqi

PhD Student

Predicting demographics, personalities, and global values from digital media behaviours

Xavier Riley

PhD Student

Pitch tracking for music applications - beyond 99% accuracy

Xiaowan Yi

PhD Student

Composition-aware music recommendation system for music production

Yukun Li

PhD Student

Computational Comparison Between Different Genres of Music in Terms of the Singing Voice

School of Electronic Engineering and Computer Science
Queen Mary University of London
Mile End Road
London
E1 4NS
United Kingdom



© Queen Mary University of London.