Role: Researcher, Software Dev
Interactive media has increasing influence over our daily lives. We now spend much of our time inhabiting virtual worlds with their own rules of interaction that allow dynamic and personalised experiences. Music that supports visual or narrative content has been shown to improve how immersive these experiences feel, as well as their emotional impact and how memorable they are. But designing music for interactive media is challenging, as it is not possible to pre-compose suitable music for every situation someone finds themselves in.
One way to address this is to model the situation the person finds themselves in and adapt the music accordingly. For my PhD, I developed a new framework for creating adaptive music for interactive media that draws on cognitive science, media studies, musicology and artificial intelligence. The framework uses a new approach to modelling a person's emotions while they use interactive media, together with cognitive models of how they make sense of the virtual world. It also uses composition methods inspired by improvisation techniques and by how nature adapts to different environments.
The framework was used to build an adaptive system for games that generates musical scores in real time in response to in-game events. Compared with the games' official soundtracks, this system was found to match the music to events more closely and to increase feelings of immersion.
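To give a flavour of what "adapting music to in-game events" can mean in practice, here is a deliberately simplified, hypothetical sketch in Python. It is not the system from the thesis (which uses richer emotional and cognitive models); it only illustrates the basic idea of a rule table that nudges musical parameters such as tempo, key and intensity when game events arrive. All names here (`MusicState`, `EVENT_RULES`, `apply_event`) are invented for this example.

```python
from dataclasses import dataclass


@dataclass
class MusicState:
    """A toy snapshot of the current musical parameters."""
    tempo_bpm: int = 100
    key: str = "C major"
    intensity: float = 0.3  # 0.0 (calm) to 1.0 (frantic)


# Illustrative event-to-adjustment rules. A real adaptive scorer would
# derive adjustments from models of the player's emotion and cognition
# rather than a fixed lookup table.
EVENT_RULES = {
    "combat_start": {"tempo_bpm": +40, "intensity": +0.5, "key": "C minor"},
    "combat_end":   {"tempo_bpm": -40, "intensity": -0.5, "key": "C major"},
    "low_health":   {"intensity": +0.2},
}


def apply_event(state: MusicState, event: str) -> MusicState:
    """Return a new MusicState adjusted for the given game event."""
    rule = EVENT_RULES.get(event, {})
    tempo = state.tempo_bpm + rule.get("tempo_bpm", 0)
    # Clamp intensity to the valid 0.0-1.0 range.
    intensity = min(1.0, max(0.0, state.intensity + rule.get("intensity", 0.0)))
    key = rule.get("key", state.key)
    return MusicState(tempo_bpm=tempo, key=key, intensity=intensity)


state = MusicState()
state = apply_event(state, "combat_start")
print(state.tempo_bpm, state.key, round(state.intensity, 2))
```

Running this moves the score from a calm default to a faster, minor-key, high-intensity state when combat begins; in a full system these parameters would feed a generative composition engine rather than a print statement.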
These results support the framework both as a practical tool for scoring interactive media and as a contribution to theoretical knowledge in this area.