Music Informatics Lab

With online music stores offering millions of songs to choose from, users need assistance. Using digital signal processing, machine learning, and the semantic web, our research explores new ways of intelligently analysing musical data, and assists people in finding the music they want.

We have developed systems for automatic playlisting from personal collections (SoundBite), for looking inside the audio (Sonic Visualiser), for hardening/softening transients, and many others. We also regularly release some of our algorithms under Open Source licences, while maintaining a healthy portfolio of patents.

This area is led by Prof Simon Dixon. Projects in this area include:

  • mid-level music descriptors: chords, keys, notes, beats, drums, instrumentation, timbre, structural segmentation, melody
  • high-level concepts for music classification, retrieval and knowledge discovery: genre, mood, emotions
  • Sonic Visualiser
  • semantic music analysis for intelligent editing
  • linking music-related information and audio data
  • interactive auralisation with room impulse responses
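The mid-level descriptors above (notes, keys, beats and so on) are all recovered from the audio signal itself. As a self-contained toy illustration only (this is not code from any of the lab's systems, and real transcription must cope with polyphony, noise and timbre), the sketch below estimates the pitch of a pure tone by picking the strongest bin of a discrete Fourier transform and mapping it to the nearest equal-tempered note name:

```python
import cmath
import math

def dft_magnitudes(signal):
    """Naive DFT magnitude spectrum (first half of the bins)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def nearest_note(freq_hz):
    """Map a frequency to the nearest equal-tempered note name."""
    midi = round(69 + 12 * math.log2(freq_hz / 440.0))  # MIDI 69 = A4
    names = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    return f"{names[midi % 12]}{midi // 12 - 1}"

def detect_note(signal, sample_rate):
    """Pick the strongest DFT bin (skipping DC) and name its pitch."""
    mags = dft_magnitudes(signal)
    peak = max(range(1, len(mags)), key=mags.__getitem__)
    return nearest_note(peak * sample_rate / len(signal))

# 1024 samples of a 440 Hz sine at an 8 kHz sample rate.
sr = 8000
tone = [math.sin(2 * math.pi * 440 * t / sr) for t in range(1024)]
print(detect_note(tone, sr))  # A4
```

With a 1024-sample window at 8 kHz the bin spacing is about 7.8 Hz, so the peak lands at 437.5 Hz and still rounds to A4; practical systems use longer windows, spectral interpolation, or probabilistic models for finer resolution.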

PhD Study - interested in joining the team? We are currently accepting PhD applications.

Members

Aditya Bhattacharjee

PhD Student

Self-supervision in Audio Fingerprinting

Dr Aidan Hogg

Lecturer in Computer Science

spatial and immersive audio, music signal processing, machine learning for audio, music information retrieval

Alexander Williams

PhD Student

User-driven deep music generation in digital audio workstations

Andrea Guidi

PhD Student

Design for auditory imagery

Andrea Martelloni

PhD Student

Real-Time Gesture Classification on an Augmented Acoustic Guitar using Deep Learning to Improve Extended-Range and Percussive Solo Playing

Andrew (Drew) Edwards

PhD Student

Deep Learning for Jazz Piano: Transcription + Generative Modeling

Dr Anna Xambó

Senior Lecturer in Sound and Music Computing

new interfaces for musical expression, performance study, human-computer interaction, interaction design

Ashley Noel-Hirst

PhD Student

Latent Spaces for Human-AI music generation

Berker Banar

PhD Student

Towards Composing Contemporary Classical Music using Generative Deep Learning

Brendan O'Connor

PhD Student

Singing Voice Attribute Transformation

Carey Bunks

PhD Student

Cover Song Identification

Chin-Yun Yu

PhD Student

Neural Audio Synthesis with Expressiveness Control

Christopher Mitcheltree

PhD Student

Representation Learning for Audio Production Style and Modulations

Cyrus Vahidi

PhD Student

Perceptual end-to-end learning for music understanding

David Foster

PhD Student

Modelling the Creative Process of Jazz Improvisation

Elizabeth Wilson

PhD Student

Co-creative Algorithmic Composition Based on Models of Affective Response

Elona Shatri

PhD Student

Optical music recognition using deep learning

Dr Emmanouil Benetos

Reader in Machine Listening, Turing Fellow

Machine listening, music information retrieval, computational sound scene analysis, machine learning for audio analysis, language models for music and audio, computational musicology

Gary Bromham

PhD Student

The role of nostalgia in music production

Dr George Fazekas

Senior Lecturer

Semantic Audio, Music Information Retrieval, Semantic Web for Music, Machine Learning and Data Science, Music Emotion Recognition, Interactive music systems (e.g. intelligent editing, audio production and performance systems)

Harnick Khera

PhD Student

Informed source separation for multi-mic production

Huan Zhang

PhD Student

Computational Modelling of Expressive Piano Performance

Hyon Kim

Universitat Pompeu Fabra

Automated Music Performance Assessment and Critique

Iacopo Ghinassi

PhD Student

Semantic understanding of TV programme content and structure to enable automatic enhancement and adjustment

Ilaria Manco

PhD Student

Multimodal Deep Learning for Music Information Retrieval

Dr Johan Pauwels

Lecturer in Audio Signal Processing

automatic music labelling, music information retrieval, music signal processing, machine learning for audio, chord/key/structure (joint) estimation, instrument identification, multi-track/channel audio, music transcription, graphical models, big data science

Lele Liu

PhD Student

Automatic music transcription with end-to-end deep neural networks

Dr Lin Wang

Lecturer in Applied Data Science and Signal Processing

signal processing; machine learning; robot perception

Prof Mark Sandler

C4DM Director, Turing Fellow, Royal Society Wolfson Research Merit award holder

Digital Signal Processing, Digital Audio, Music Informatics, Audio Features, Semantic Audio, Immersive Audio, Studio Science, Music Data Science, Music Linked Data.

Maryam Torshizi

PhD Student

Music emotion modelling using graph analysis

Mary Pilataki

PhD Student

Deep Learning methods for Multi-Instrument Music Transcription

Dr Mathieu Barthet

Senior Lecturer in Digital Media

Music information research, Internet of musical things, Extended reality, New interfaces for musical expression, Semantic audio, Music perception (timbre, emotions), Audience-Performer interaction, Participatory art

Dr Matthias Mauch

Visiting Academic

music transcription (chords, beats, drums, melody, ...), interactive music annotation, singing research, research in the evolution of musical styles

Ningzhi Wang

PhD Student

Generative Models For Music Audio Representation And Understanding

Pedro Sarmento

PhD Student

Guitar-Oriented Neural Music Generation in Symbolic Format

Ruby Crocker

PhD Student

Continuous mood recognition in film music

Saurjya Sarkar

PhD Student

New perspectives in instrument-based audio source separation

Prof. Simon Dixon

Professor of Computer Science, Deputy Director of C4DM, Director of the AIM CDT, Turing Fellow

Music informatics, music signal processing, artificial intelligence, music cognition; extraction of musical content (e.g. rhythm, harmony, intonation) from audio signals: beat tracking, audio alignment, chord and note transcription, singing intonation; using signal processing approaches, probabilistic models, and deep learning.

Soumya Sai Vanka

PhD Student

Music Production Style Transfer and Mix Similarity

Sungkyun Chang

Research Assistant

Deep learning technologies for multi-instrument automatic music transcription

Thomas Kaplan

PhD Student

Probabilistic modelling of rhythm perception and production

Tyler Howard McIntosh

PhD Student

Expressive Performance Rendering for Music Generation Systems

Vjosa Preniqi

PhD Student

Predicting demographics, personalities, and global values from digital media behaviours

Xavier Riley

PhD Student

Pitch tracking for music applications - beyond 99% accuracy

Yannis (John) Vasilakis

PhD Student

Active Learning for Interactive Music Transcription

Yixiao Zhang

PhD Student

Machine Learning Methods for Artificial Musicality

Yinghao Ma

PhD Student

Self-supervision in machine listening

Yukun Li

PhD Student

Computational Comparison Between Different Genres of Music in Terms of the Singing Voice

School of Electronic Engineering and Computer Science
Queen Mary University of London
Mile End Road
London
E1 4NS
United Kingdom

Internal Site
C4DM Wiki


© Queen Mary University of London.