I’ve been playing saxophone for 20 years and writing music for the last 15. After high school I continued my music while completing undergraduate degrees in science (majoring in mathematics) and music. I graduated from both degrees and completed an honours research year in music composition, for which I was awarded first class honours.
I became interested in applying mathematical models to music systems following research assistant positions I held in mathematical modelling (Bayesian decision support systems) and in analysing fractal properties of plants for fast anomaly detection. After experimenting with applying some of these models to music composition, I began a PhD research program at KAIST in South Korea in 2013, with research focused on applications of machine learning for music.
In 2014 I moved to Monash University and completed my PhD under the supervision of Jon McCormack and Vince Dziekan. My thesis, ‘Adaptive Music Scores for Interactive Media’, was successfully examined and published in mid 2018. It provides an analysis of how machine learning can be used to adapt human-composed music for interactive media, and documents the creation of a novel adaptive scoring system utilising deep learning, evolutionary models and multi-agent systems.
Since 2014 my research has been increasingly involved with deep learning. I explore both novel model development and new applications of existing models, but always with a user-first approach. My goal is to use the reflexive, learning behaviour of AI systems to challenge creative practitioners and nudge them towards new areas of creative exploration. I apply this to music, film and artistic practices.
I work at Monash University as a Creative AI researcher and am part of SensiLab.
You will usually find me knee-deep in TensorFlow or PyTorch with some Wayne Shorter coming through my headphones and a coffee in my hand. I love talking to people doing new and interesting things with the arts and AI, so get in touch through email or Twitter.