OFAI

Computer-Based Music Research:
Artificial Intelligence Models of Musical Expression

A research project supported by a generous START Research Prize (1998–2005)
administered by the Austrian Science Fund (FWF Y99-INF)

Research Goals

The goal of this project is to use Artificial Intelligence methods to study the phenomenon of expressive music performance. The focus is on developing and applying machine learning and data mining methods to the analysis of expressive performance data, in order to gain a deeper understanding of this complex domain of human competence and to contribute new methods to the (relatively new) branch of musicology that develops quantitative models and theories of musical expression.

By musical expression, we mean the variations in tempo, timing, dynamics, articulation, etc. that performers apply when playing and "interpreting" a piece. Our goal is to study real expressive performances with machine learning methods, in order to discover some fundamental patterns or principles that characterize "sensible" musical performances, and to elucidate the relation between structural aspects of the music and typical or musically "sensible" performance patterns. The ultimate result would be a formal model that explains or predicts those aspects of expressive variation that seem to be common to most typical performances and can thus be regarded as fundamental principles.

To achieve this, it is necessary to

  • obtain high-quality performances by human musicians (e.g., pianists),
  • extract the "expressive" aspects from these performances and transform them into data amenable to computer analysis (e.g., tempo and dynamics curves; see the sketch after this list),
  • analyze the structure (meter, grouping, harmony, etc.) of the pieces and represent the scores and their structure in a formal representation language,
  • develop machine learning algorithms that search for systematic connections between structural aspects of the music and typical expression patterns, and formulate their findings as symbolic rules,
  • perform systematic experiments with different representations, sets of performances, musical styles, etc., and
  • analyze the learning results in both qualitative terms (are the discovered rules musically sensible? interesting? related to theories by other expression researchers?) and quantitative terms (how much of the variance can be explained? where are the limits?).
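
As an illustration of the second step above, a local tempo curve can be derived by comparing performed note onsets with their nominal score positions. The sketch below is a generic example in Python; the input format, variable names, and numbers are invented for illustration and do not reflect the project's actual data formats or tools.

    # Minimal sketch: deriving a local tempo curve from a performance that has
    # already been matched to its score. Each event pairs a score onset (in
    # beats) with the performed onset time (in seconds). The numbers below are
    # invented for illustration only.

    events = [
        (0.0, 0.00),   # (score onset in beats, performed onset in seconds)
        (1.0, 0.52),
        (2.0, 1.06),
        (3.0, 1.55),
        (4.0, 2.10),
    ]

    def tempo_curve(events):
        """Return a list of (score position, local tempo in BPM)."""
        curve = []
        for (b0, t0), (b1, t1) in zip(events, events[1:]):
            beats = b1 - b0        # nominal duration in beats
            seconds = t1 - t0      # performed duration in seconds
            curve.append((b1, 60.0 * beats / seconds))
        return curve

    for position, bpm in tempo_curve(events):
        print(f"beat {position:4.1f}: {bpm:6.1f} BPM")

A dynamics curve can be obtained analogously by recording a loudness value per note (e.g., MIDI velocity) instead of onset times.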

The project started in 1999 and is intended to last six years (until autumn 2005).

Current Research Directions

  • Data Acquisition and Extraction of "Expression":
    • Score extraction from (expressive) MIDI files (e.g., Cambouropoulos, AAAI'2000)
    • Score-to-performance matching
    • Beat and tempo tracking in MIDI files (Dixon & Cambouropoulos, ECAI'2000)
    • Beat and tempo tracking in audio data (Dixon, J. New Mus. Res. 2001)
  • Automated Structural Music Analysis:
    • Segmentation (Cambouropoulos, AISB'99)
    • Clustering and motivic analysis (Cambouropoulos & Widmer, J. New Mus. Res. 2001)
    • Musical category formation (Cambouropoulos, Music Perception 2001)
  • Studying the nature of basic percepts related to expression:
    • Experimental studies on the perception of tempo (changes) in listeners (Dixon & Goebl, 2002)
    • Experimental studies on the perception of timing asynchronies (Goebl & Parncutt, 2002, 2003)
  • Performance Visualisation:
    • A software tool for animated visualisation of high-level patterns (Dixon, Goebl, & Widmer, 2002)
    • Based on a visualisation idea by Jörg Langner (Langner & Goebl, 2002, 2003)
    • Extensions to real-time tracking, smoothing, and animation (Dixon, Goebl, & Widmer, ICMAI'02)
    • High-level visualization of performance patterns used by pianists (Pampalk, Widmer & Chan, 2003; Goebl, Pampalk, & Widmer, 2004)
  • Systematic Performance Analysis:
    • Performance averaging (Goebl, SMPC'99)
    • Melody lead (Goebl, JASA 2001)
    • Articulation (Bresin & Widmer, 2000)
    • Relations between segmentation structure and low-level timing (Cambouropoulos, ICMC'2001)
    • Systematic investigation of different tempi (Goebl & Dixon, 2001)
  • Inductive Model Building (Machine Learning):
    • Fitting existing expression models onto real performance data (Kroiss, 2000)
    • Looking for structure in extensive performance data (Widmer, ICMC'2000)
    • Inducing partial models of note-level expression principles (Widmer, JNMR 2002, Artif. Intell. 2003); a generic sketch of such a note-level learning setup is given after this list
    • Inducing multi-level models of phrase-level and note-level performance (Widmer & Tobudic, J. New Mus. Res. 2003)
  • Characterization and Automatic Classification of Great Artists:
    • Learning to recognize performers from characteristics of their style (Stamatatos & Widmer, ECAI'2002; Zanon & Widmer, SMAC 2003; Widmer & Zanon, 2004)
    • Discovering performance patterns characteristic of famous performers (Widmer, ALT'2002)
    • Organization and Visualization of Digital Music Archives (Pampalk et al., ACM Multimedia 2002; Pampalk et al., ISMIR 2003)
    • Rhythm detection and style classification (Dixon et al., ISMIR 2003)
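
To give an impression of what inducing note-level models can look like in practice (see the pointer in the corresponding item above), the following generic sketch describes each note by a few structural features and asks a standard decision-tree learner to predict whether the note is lengthened, then prints the induced tree as readable rules. The features, the toy data, and the use of scikit-learn are assumptions made purely for this illustration; they are not the algorithms or data actually used in the project.

    # Generic illustration (scikit-learn assumed to be installed): learning a
    # note-level expression decision from structural features. Features and
    # labels are invented; this is not the project's actual method or corpus.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Per-note features: metrical strength (0-3), melodic interval to the
    # next note in semitones, and whether the note ends a phrase (0/1).
    X = [
        [3, 2, 0],
        [1, 1, 0],
        [2, 5, 1],
        [0, 1, 0],
        [3, 7, 1],
        [1, 2, 0],
    ]
    # Target: 1 if the performer lengthened the note noticeably, else 0.
    y = [1, 0, 1, 0, 1, 0]

    model = DecisionTreeClassifier(max_depth=2, random_state=0)
    model.fit(X, y)

    # Print the induced tree as human-readable if-then rules.
    print(export_text(model, feature_names=["metrical_strength",
                                            "interval_semitones",
                                            "phrase_end"]))

In the project itself, such symbolic findings are learned from large sets of real performances and then evaluated both qualitatively and quantitatively, as described under Research Goals.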

References

All publications of this research project are available sorted by year or sorted by author.

Please refer also to our publications page.