OFAI

Research Areas

Intelligent Music Information Retrieval

This recently established research field is of growing interest both to the research community and to everyday music consumers. Our work in MIR includes:
  • Representation and Estimation of Musical Similarity (see the sketch after this list)
  • Organization and Visualization of Digital Music Archives
  • Genre Classification (from audio and/or web-based data)
  • Audio Alignment (semi-automatic indexing and generation of content-based metadata)
  • Detection and Classification of Rhythm
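
A common baseline for audio-based musical similarity, offered here only as a hedged illustration and not as OFAI's actual method, is to summarize each recording with MFCC statistics and compare the summaries with a standard distance. The sketch assumes the librosa, numpy, and scipy libraries; the file names are hypothetical.

```python
# Hedged illustration: summarize each recording with MFCC statistics and
# compare the summaries. Assumes librosa, numpy, and scipy are available;
# the file names are hypothetical.
import numpy as np
import librosa
from scipy.spatial.distance import cosine

def track_summary(path):
    """Mean and standard deviation of the track's MFCCs as one vector."""
    y, sr = librosa.load(path, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)   # shape (20, n_frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def similarity(path_a, path_b):
    """Cosine similarity of the two summaries (1.0 means identical summaries)."""
    return 1.0 - cosine(track_summary(path_a), track_summary(path_b))

print(similarity("track_a.wav", "track_b.wav"))
```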

Machine Learning and Data Mining

Past and current research areas:
  • Data Mining and Knowledge Discovery in Databases
  • Text Mining
  • Metalearning and Evaluation of Learning Algorithms
  • Learning with Multiple Models
  • Inductive Logic Programming
  • Knowledge Intensive Learning
  • Concept Drift and Context-Sensitive Learning
  • Minimum Description Length Principle (see the sketch after this list)
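
The Minimum Description Length principle prefers the hypothesis under which the model, together with the data encoded with its help, can be described most compactly. The sketch below is a minimal illustration on synthetic data, using a crude two-part code length (essentially the BIC approximation) to choose a polynomial degree; it is not meant to reflect OFAI's own formulation.

```python
# Hedged illustration of MDL-style model selection on synthetic data:
# pick the polynomial degree that minimizes a crude two-part code length
# (data given the model, plus the model parameters), essentially BIC.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 50)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(scale=0.1, size=x.size)

def description_length(degree):
    """Approximate total code length for a polynomial model of this degree."""
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    n, k = x.size, degree + 1
    return 0.5 * n * np.log(rss / n) + 0.5 * k * np.log(n)

# The data above were generated from a degree-2 polynomial plus noise.
best = min(range(1, 9), key=description_length)
print("degree chosen by the MDL criterion:", best)
```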

Music Expression and Performance Research with Artificial Intelligence Methods

This area of research covers a wide variety of subtopics and research tasks, including
  • Data Acquisition (Score extraction from expressive MIDI files, score-to-performance matching, and beat and tempo tracking in MIDI files and in audio data),
  • Piano Acoustic Studies (Analysis of the timing properties of piano actions, quality assessment of reproducing pianos such as the Bösendorfer SE system or the Yamaha Disklavier),
  • Automated Structural Music Analysis (Segmentation, Clustering, and Motivic Analysis),
  • Tempo and Timing Perception (Perception of tempo, Perception of note onset asynchronies ("melody lead"), and Similarity perception of expressive performances),
  • Systematic Expressive Performance Analysis (Analysis of individual performance aspects such as Articulation, Note onset asynchronies, and Segmentation-timing relations),
  • Performance Visualization (animated two-dimensional tempo-loudness trajectories and real-time systems such as the "Performance Worm"; see the sketch after this list),
  • Inductive Model Building -- Machine Learning (Fitting existing expression models to real performance data, Looking for structure in extensive performance data, Inducing partial models of note-level expression principles, and Inducing multi-level models of phrase-level and note-level performance), and
  • Characterization and Automatic Classification of Great Artists (Learning to recognize performers from characteristics of their style, Discovering performance patterns characteristic of famous performers).
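
To make the tempo-loudness idea concrete, the sketch below extracts beat times from an audio recording, derives a local tempo from inter-beat intervals, and pairs it with an RMS-based loudness estimate. This is only a generic, hedged illustration assuming the librosa library, with a hypothetical file name; it is not the Performance Worm or any OFAI implementation.

```python
# Hedged illustration: beat-level tempo-loudness pairs from a recording.
# Assumes librosa and numpy; the file name is hypothetical, and this is a
# generic sketch, not the Performance Worm or any OFAI implementation.
import numpy as np
import librosa

y, sr = librosa.load("performance.wav", mono=True)

# Beat tracking on the default frame grid (hop length 512 samples).
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

# Local tempo in beats per minute, from inter-beat intervals.
local_tempo = 60.0 / np.diff(beat_times)

# Loudness proxy: RMS energy in dB, sampled at each beat frame.
rms_db = librosa.amplitude_to_db(librosa.feature.rms(y=y)[0], ref=np.max)
beat_loudness = rms_db[np.minimum(beat_frames[1:], rms_db.size - 1)]

# One (time, tempo, loudness) row per beat: the raw material of a
# tempo-loudness trajectory.
trajectory = np.column_stack([beat_times[1:], local_tempo, beat_loudness])
print(trajectory[:5])
```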