Semantic Annotation by Learned Structured and Adaptive Signal Representations (SALSA)
A project sponsored by the Vienna Science and Technology Fund (WWTF)
The goal of SALSA is to bridge the semantic gap in music information research (MIR) by using adaptive and structured signal representations. The semantic gap is the difference in information content between the signal representations or models used in MIR and the high-level semantic descriptions used by musicians and audiences. Examples include the mapping from a signal representation to concrete content, such as instrumentation, or to more abstract tags, such as the emotional experience of music.
Recently developed methods from applied harmonic analysis make it possible to go beyond the standard time-frequency analysis prevalent in MIR by using signal representations that adapt to the inherent characteristics of musical signals. This adaptation yields sparse representations in dictionaries of basic building blocks. The sparsity paradigm will, however, be complemented by assumptions on the representation coefficients that incorporate knowledge about the structures specific to the music signals under consideration.
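The idea of a sparse representation in a dictionary of basic building blocks can be illustrated with a minimal sketch (not the project's actual method): greedy matching pursuit over a small dictionary of Gabor-like atoms (windowed sinusoids). All function names and parameters below are illustrative assumptions, not part of SALSA.

```python
import numpy as np

def gabor_dictionary(n=256, n_freqs=32):
    """Unit-norm atoms: a Hann window times cosines of increasing frequency."""
    t = np.arange(n)
    window = np.hanning(n)
    atoms = [window * np.cos(2 * np.pi * f * t / n) for f in range(1, n_freqs + 1)]
    D = np.stack(atoms, axis=1)            # shape (n, n_freqs), one atom per column
    return D / np.linalg.norm(D, axis=0)   # normalize each atom to unit norm

def matching_pursuit(x, D, n_atoms=3):
    """Greedily pick the atoms most correlated with the current residual."""
    residual, coeffs = x.copy(), np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual              # correlation of residual with every atom
        k = np.argmax(np.abs(corr))        # best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]      # subtract its contribution
    return coeffs, residual

# Synthetic signal built from two dictionary atoms (two windowed partials).
D = gabor_dictionary()
x = 2.0 * D[:, 4] + 0.5 * D[:, 11]
coeffs, residual = matching_pursuit(x, D)
print(np.nonzero(coeffs)[0])           # only a few atoms are active (sparse)
print(np.linalg.norm(residual))        # small reconstruction error
```

The point of the sketch: a signal that is "made of" a few dictionary elements is captured by a few large coefficients, while a fixed, non-adaptive transform would spread the same energy over many coefficients. Structured sparsity, as in the project description, would additionally constrain which coefficient patterns are plausible for music.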
The central questions of SALSA are (i) whether adaptive signal representations and structured sparse coefficient estimation improve learned mappings to high-level semantic concepts and (ii) how high-level descriptions can guide the adaptation step in harmonic analysis. Answering these questions will enable an innovative form of musical signal analysis that is informed by, and adapts to, the rich semantic content music has for human listeners.
Publications:
- Bammer R., Dörfler M.: Modifying Signals in Transform Domain: A Frame-Based Inverse Problem, in Proceedings of the 19th International Conference on Digital Audio Effects (DAFx-16), 2016.
- Dörfler M., Velasco G.: Sampling time-frequency localized functions and constructing localized time-frequency frames, to appear in European Journal of Applied Mathematics, 2016.
- Flexer A., Grill T.: The Problem of Limited Inter-rater Agreement in Modelling Music Similarity, Journal of New Music Research, 2016 (published online 5 July 2016). DOI: http://dx.doi.org/10.1080/09298215.2016.1200631
- Gkiokas A., Lattner S., Katsouros V., Flexer A., Carayanni G.: Towards an Invertible Rhythm Representation, in Proceedings of the 18th International Conference on Digital Audio Effects (DAFx-15), Trondheim, Norway, 2015.
- Grill T., Schlüter J.: Music boundary detection using neural networks on combined features and two-level annotations, in Proceedings of the 16th International Society for Music Information Retrieval Conference (ISMIR), pp. 531-537, 2015.
- Grill T., Schlüter J.: Music boundary detection using neural networks on spectrograms and self-similarity lag matrices, in Proceedings of the 23rd European Signal Processing Conference (EUSIPCO 2015), pp. 1306-1310, Nice, France, 2015.
- Holzapfel A., Benetos E.: The Sousta Corpus: Beat-Informed Automatic Transcription of Traditional Dance Tunes, in Proceedings of the 17th International Society for Music Information Retrieval Conference (ISMIR), 2016.
- Holzapfel A., Grill T.: Bayesian Meter Tracking on Learned Signal Representations, in Proceedings of the 17th International Society for Music Information Retrieval Conference (ISMIR), 2016.
- Schlüter J., Grill T.: Exploring Data Augmentation for Improved Singing Voice Detection with Neural Networks, in Proceedings of the 16th International Society for Music Information Retrieval Conference (ISMIR), 2015.
- Srinivasamurthy A., Holzapfel A., Cemgil A. T., Serra X.: A generalized Bayesian model for tracking long metrical cycles in acoustic music signals, in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2016.