People
Academic
Dr Aidan Hogg
Lecturer in Computer Science
spatial and immersive audio, music signal processing, machine learning for audio, music information retrieval
Dr Anna Xambó
Senior Lecturer in Sound and Music Computing
new interfaces for musical expression, performance study, human-computer interaction, interaction design
Dr Charalampos Saitis
Lecturer in Digital Music Processing
Communication acoustics, crossmodal correspondences, sound synthesis, cognitive audio, musical haptics
Dr Emmanouil Benetos
Reader in Machine Listening, Turing Fellow
Machine listening, music information retrieval, computational sound scene analysis, machine learning for audio analysis, language models for music and audio, computational musicology
Dr George Fazekas
Senior Lecturer
Semantic Audio, Music Information Retrieval, Semantic Web for Music, Machine Learning and Data Science, Music Emotion Recognition, Interactive music systems (e.g. intelligent editing, audio production and performance systems)
Dr Iran Roman
Lecturer in Sound and Music Computing
theoretical neuroscience, machine perception, artificial intelligence
Dr Johan Pauwels
Lecturer in Audio Signal Processing
automatic music labelling, music information retrieval, music signal processing, machine learning for audio, chord/key/structure (joint) estimation, instrument identification, multi-track/channel audio, music transcription, graphical models, big data science
Dr Lin Wang
Lecturer in Applied Data Science and Signal Processing
signal processing, machine learning, robot perception
Dr Mathieu Barthet
Senior Lecturer in Digital Media
Music information research, Internet of musical things, Extended reality, New interfaces for musical expression, Semantic audio, Music perception (timbre, emotions), Audience-Performer interaction, Participatory art
Dr Tony Stockman
Senior Lecturer
Interaction Design, auditory displays, Data Sonification, Collaborative Systems, Cross-modal Interaction, Assistive Technology, Accessibility
Prof Andrew McPherson
Professor of Musical Interaction
new interfaces for musical expression, augmented instruments, performance study, human-computer interaction, embedded hardware
Prof Mark Sandler
C4DM Director
Digital Signal Processing, Digital Audio, Music Informatics, Audio Features, Semantic Audio, Immersive Audio, Studio Science, Music Data Science, Music Linked Data.
Prof Joshua D Reiss
Professor of Audio Engineering
sound engineering, intelligent audio production, sound synthesis, audio effects, automatic mixing
Prof Simon Dixon
Professor of Computer Science, Deputy Director of C4DM, Director of the AIM CDT
Music informatics, music signal processing, artificial intelligence, music cognition; extraction of musical content (e.g. rhythm, harmony, intonation) from audio signals: beat tracking, audio alignment, chord and note transcription, singing intonation; using signal processing approaches, probabilistic models, and deep learning.
Academic Associate
Dr Marcus Pearce
Reader in Cognitive Science
Music Cognition, Auditory Perception, Empirical Aesthetics, Statistical Learning, Probabilistic Modelling.
Prof Geraint Wiggins
Professor of Computational Creativity
Computational Creativity, Artificial Intelligence, Music Cognition
Prof Matthew Purver
Professor of Computational Linguistics
computational linguistics including models of language and music
Prof Pat Healey
Professor of Human Interaction
human interaction, human communication
PhD
Adam Andrew Garrow
PhD Student
Probabilistic learning of sequential structures in music cognition
Adam He
PhD Student
Neuro-evolved Heuristics for Meta-composition
Aditya Bhattacharjee
PhD Student
Self-supervision in Audio Fingerprinting
Adán Benito
PhD Student
Beyond the fret: gesture analysis on fretted instruments and its applications to instrument augmentation
Alexander Williams
PhD Student
User-driven deep music generation in digital audio workstations
Andrea Martelloni
PhD Student
Real-Time Gesture Classification on an Augmented Acoustic Guitar using Deep Learning to Improve Extended-Range and Percussive Solo Playing
Andrew (Drew) Edwards
PhD Student
Deep Learning for Jazz Piano: Transcription + Generative Modeling
Antonella Torrisi
PhD Student
Computational analysis of chick vocalisations: from categorisation to live feedback
Ashley Noel-Hirst
PhD Student
Latent Spaces for Human-AI music generation
Benjamin Hayes
PhD Student
Perceptually motivated deep learning approaches to creative sound synthesis
Berker Banar
PhD Student
Towards Composing Contemporary Classical Music using Generative Deep Learning
Bleiz Del Sette
PhD Student
The Sound of Care: researching the use of Deep Learning and Sonification for the daily support of people with Chronic Primary Pain
Bradley Aldous
PhD Student
Advancing music generation via accelerated deep learning
Carey Bunks
PhD Student
Cover Song Identification
Carlos De La Vega Martin
PhD Student
Neural Drum Synthesis
Chin-Yun Yu
PhD Student
Neural Audio Synthesis with Expressiveness Control
Chris Winnard
PhD Student
Music Interestingness in the Brain
Christian Steinmetz
PhD Student
End-to-end generative modeling of multitrack mixing with non-parallel data and adversarial networks
Christopher Mitcheltree
PhD Student
Representation Learning for Audio Production Style and Modulations
Christos Plachouras
PhD Student
Deep learning for low-resource music
Cyrus Vahidi
PhD Student
Perceptual end-to-end learning for music understanding
David Foster
PhD Student
Modelling the Creative Process of Jazz Improvisation
David Südholt
PhD Student
Machine Learning of Physical Models for Voice Synthesis
Ece Yurdakul
PhD Student
Emotion-based Personalised Music Recommendation
Eleanor Row
PhD Student
Automatic micro-composition for professional/novice composers using generative models as creativity support tools
Elona Shatri
PhD Student
Optical music recognition using deep learning
Farida Yusuf
PhD Student
Information-theoretic neural networks for online perception of auditory objects
Franco Caspe
PhD Student
AI-assisted FM synthesis for sound design and control mapping
Gary Bromham
PhD Student
The role of nostalgia in music production
Gregor Meehan
PhD Student
Representation learning for musical audio using graph neural network-based recommender engines
Haokun Tian
PhD Student
Timbre Tools for the Digital Instrument Maker
Harnick Khera
PhD Student
Informed source separation for multi-mic production
Huan Zhang
PhD Student
Computational Modelling of Expressive Piano Performance
Iacopo Ghinassi
PhD Student
Semantic understanding of TV programme content and structure to enable automatic enhancement and adjustment
Ilaria Manco
PhD Student
Multimodal Deep Learning for Music Information Retrieval
Jackson Loth
PhD Student
Time to vibe together: cloud-based guitar and intelligent agent
James Bolt
PhD Student
Intelligent audio and music editing with deep learning
Jiawen Huang
PhD Student
Lyrics Alignment For Polyphonic Music
Jingjing Tang
PhD Student
End-to-End System Design for Music Style Transfer with Neural Networks
Jinhua Liang
PhD Student
AI for everyday sounds
Jordie Shier
PhD Student
Real-time timbral mapping for synthesized percussive performance
Julien Guinot
PhD Student
Beyond Supervised Learning for Musical Audio
Katarzyna Adamska
PhD Student
Predicting hit songs: multimodal and data-driven approach
Keshav Bhandari
PhD Student
Neuro-Symbolic Automated Music Composition
Lele Liu
PhD Student
Automatic music transcription with end-to-end deep neural networks
Lewis Wolstanholme
PhD Student
Meta-Physical Modelling
Louis Bradshaw
PhD Student
Neuro-symbolic music models
Luca Marinelli
PhD Student
Gender-coded sound: A multimodal data-driven analysis of gender encoding strategies in sound and music for advertising
Madeline Hamilton
PhD Student
Improving AI-generated Music with Pleasure Models
Marco Comunità
PhD Student
Machine learning applied to sound synthesis models
Marco Pasini
PhD Student
Fast and Controllable Music Generation
Mary Pilataki
PhD Student
Deep Learning methods for Multi-Instrument Music Transcription
Max Graf
PhD Student
PERFORM-AI (Provide Extended Realities for Musical Performance using AI)
Nelly Garcia
PhD Student
An investigation evaluating realism in sound design
Ningzhi Wang
PhD Student
Generative Models For Music Audio Representation And Understanding
Oluremi Falowo
PhD Student
E-AIM - Embodied Cognition in Intelligent Musical Systems
Pablo Tablas De Paula
PhD Student
Machine Learning of Physical Models
Qiaoxi Zhang
PhD Student
Multimodal AI for musical collaboration in immersive environments
Qing Wang
PhD Student
Multi-modal Learning for Music Understanding
Rodrigo Mauricio Diaz Fernandez
PhD Student
Hybrid Neural Methods for Sound Synthesis
Ruby Crocker
PhD Student
Continuous mood recognition in film music
Sara Cardinale
PhD Student
Character-based adaptive generative music for film and video games using Deep Learning and Hidden Markov Models
Sebastián Ruiz
PhD Student
Physiological Responses to Ensemble Interaction
Shahar Elisha
PhD Student
Style classification of podcasts using audio
Shubhr Singh
PhD Student
Audio Applications of Novel Mathematical Methods in Deep Learning
Shuoyang Zheng
PhD Student
Explainability of AI Music Generation
Soumya Sai Vanka
PhD Student
Music Production Style Transfer and Mix Similarity
Teodoro Dannemann
PhD Student
Sabotaging, errors and other mistakes as a source of new techniques in music improvisation
Teresa Pelinski
PhD Student
Sensor mesh as performance interface
Tyler Howard McIntosh
PhD Student
Expressive Performance Rendering for Music Generation Systems
Vjosa Preniqi
PhD Student
Predicting demographics, personalities, and global values from digital media behaviours
Xavier Riley
PhD Student
Pitch tracking for music applications - beyond 99% accuracy
Xiaowan Yi
PhD Student
Composition-aware music recommendation system for music production
Yannis (John) Vasilakis
PhD Student
Active Learning for Interactive Music Transcription
Yazhou Li
PhD Student
Virtual Placement of Objects in Acoustic Scenes
Yifan Xie
PhD Student
Film score composer AI assistant: generating expressive mockups
Yin-Jyun Luo
PhD Student
Industry-scale Machine Listening for Music and Audio Data
Yinghao Ma
PhD Student
Self-supervision in machine listening
Yixiao Zhang
PhD Student
Machine Learning Methods for Artificial Musicality
Yukun Li
PhD Student
Computational Comparison Between Different Genres of Music in Terms of the Singing Voice
Zixun (Nicolas) Guo
PhD Student
Towards Tonality-Aware Music Understanding: Modeling Complex Tonal Harmony
Postdoc
Dr Luigi Marino
Research Fellow in Sound and Music Computing
Networks able to display relationships between human and nonhuman actors. Project: Sensing the Forest.
Dr Pedro Sarmento
Postdoctoral Researcher
music information retrieval, language models for music generation, guitar tablature generation, automatic guitar transcription, deep learning
Dr Saurjya Sarkar
Postdoctoral Researcher
Audio Source Separation, Music Information Retrieval, Sample Detection
Dr Yuanyuan Liu
Postdoctoral Researcher
Project: Digital Platforms for Craft in the UK and China
Research Assistant
Ivan Meresman Higgs
Research Assistant
Sample Identification in Mastered Songs using Deep Learning Methods
Jaza Syed
Research Assistant
Audio ML, Automatic Lyrics Transcription
Sungkyun Chang
Research Assistant
Deep learning technologies for multi-instrument automatic music transcription
Support
Alvaro Bort
Research Programme Manager
Projects: UKRI Centre for Doctoral Training in Artificial Intelligence and Music, New Frontiers in Music Information Processing (MIP-Frontiers)
Visitor
Dr Matthias Mauch
Visiting Academic
music transcription (chords, beats, drums, melody, ...), interactive music annotation, singing research, research in the evolution of musical styles
Dr Satvik Venkatesh
L-Acoustics UK Ltd
Online Speech Enhancement in Scenarios with Low Direct-to-Reverberant Ratio
Hyon Kim
Universitat Pompeu Fabra
Automated Music Performance Assessment and Critique