With online music stores offering millions of songs to choose from, users need assistance. Using digital signal processing, machine learning, and the semantic web, our research explores new ways of intelligently analysing musical data and helps people find the music they want.
We have developed systems for automatic playlisting from personal collections (SoundBite), for looking inside the audio (Sonic Visualiser), for hardening and softening transients, and many others. We also regularly release some of our algorithms under Open Source licences while maintaining a healthy portfolio of patents.
This area is led by Dr Simon Dixon. Projects in this area include:
- mid-level music descriptors: chords, keys, notes, beats, drums, instrumentation, timbre, structural segmentation, melody
- high-level concepts for music classification, retrieval and knowledge discovery: genre, mood, emotions
- Sonic Visualiser
- semantic music analysis for intelligent editing
- linking music-related information and audio data
- interactive auralisation with room impulse responses
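To give a flavour of what extracting a mid-level descriptor involves, here is a minimal, self-contained sketch of onset detection via energy flux, one simple approach to locating transients such as drum hits. It is purely illustrative: the signal is synthesised, and all names, frame sizes, and thresholds are assumptions for the example, not taken from any of the systems named above.

```python
import math

SR = 8000     # sample rate in Hz (illustrative choice)
FRAME = 256   # analysis frame length in samples (illustrative choice)

def synth_clicks(onset_times, dur_s=1.0):
    """Synthesise a test signal: short decaying 440 Hz bursts at given times."""
    n = int(SR * dur_s)
    y = [0.0] * n
    for t in onset_times:
        start = int(t * SR)
        for i in range(start, min(start + 400, n)):
            # exponentially decaying sinusoid models a percussive transient
            y[i] += math.exp(-(i - start) / 80.0) * math.sin(2 * math.pi * 440 * i / SR)
    return y

def onset_frames(y):
    """Return frame indices where the frame energy rises sharply."""
    energies = [sum(s * s for s in y[f:f + FRAME])
                for f in range(0, len(y) - FRAME, FRAME)]
    # energy flux: keep only increases in energy between consecutive frames
    flux = [max(0.0, energies[i] - energies[i - 1])
            for i in range(1, len(energies))]
    threshold = 3 * (sum(flux) / len(flux))  # crude adaptive threshold
    # report the frame the energy rises into; suppress consecutive detections
    return [i + 1 for i, v in enumerate(flux)
            if v > threshold and (i == 0 or flux[i - 1] <= threshold)]

y = synth_clicks([0.2, 0.6])
print([round(f * FRAME / SR, 3) for f in onset_frames(y)])  # estimated onset times (s)
```

Real systems refine each stage of this pipeline, for example using spectral flux instead of raw energy, and adding tempo models on top of the detected onsets to track beats.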
PhD study - interested in joining the team? We are currently accepting PhD applications.