Investigating Requirements for Context-Awareness and an Attempt to Implement It in Software Agents or Robots
In this project, we dealt with this topic in both a theoretical and a practical way, with the aim of implementing aspects of context awareness in a software agent or a robot. Philosophical, psychological, and neuroscientific aspects of consciousness, as well as their modeling, are a central topic of interest in many disciplines of cognitive science. Approaching this issue aims at assessing human and animal consciousness, better understanding its underlying mechanisms, and developing and implementing computational models.
L4S - Learning for Security is a European STReP (Small or medium-scale focused research project) that aims to develop simulation-based learning experiences and to provide guidelines and tools for the development of the soft skills necessary for effective crisis management.
Few hard facts are known about why readers of online newspapers prefer some articles over others. Current news filtering systems assume that the topic of an article is the only factor that determines user satisfaction. But content accounts for only about 40% of a story's satisfaction rating. Factors that determine the remaining 60% can be as diverse as readability concerns, writing style, the type of a story, visual complexity, proper use of photographs, or, even less concretely, the appeal of a story. Contextual information, such as previously read articles or the overall popularity and recency of articles, needs to be considered as well. The goal of MAGNIFICENT is to gain deep insight into both the relevant parameters of stories and the adaptive training of user profiles along these parameters.
SERA (Social Engagement with Robots and Agents) is a European Collaborative Project which aims to advance science in the field of social acceptability of verbally interactive robots and agents, with a view to their applications especially in assistive technologies (companions, virtual butlers).
This project was about automating the annotation of electro-acoustic music through the application of machine learning methods. Electro-acoustic music is a contemporary form of electronic composition and production of music that originated in the 1940s. It is made with electronic technology, using synthesized sounds or sounds prerecorded in nature or in the studio, which are often extensively processed and altered. Compared to the analysis of instrumental or vocal music, annotation of electro-acoustic music is both more challenging and less developed: there are no "pre-segmented" discrete units like notes, there is no score, and there is no universally established system for analysis. Although musicology has developed various sets of tools for the analysis of electro-acoustic music, the tediousness of manual annotation has prevented the application of these theories to a larger body of music. On the other hand, Music Information Retrieval has developed a rich repertoire of machine learning algorithms for the analysis of music, including methods that can be used for automatic annotation. Our essential result is that machine learning methods can indeed be used for the annotation of electro-acoustic music, but only in an interactive setting. Only the integration of a human analyst into the workflow makes it possible to sidestep the seeming impasse posed by the lack of ground truth in the annotation of electro-acoustic music.
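The interactive workflow described above can be sketched as a simple human-in-the-loop annotation loop: the machine proposes labels for audio segments and asks the human analyst to label the segments it is least certain about. The sketch below is illustrative only and not the project's actual system; the nearest-centroid classifier, the 2-D "features", and the `oracle` function (standing in for the analyst) are all assumptions.

```python
# Hypothetical sketch of interactive (human-in-the-loop) annotation.
# The human analyst, modelled by the `oracle` callable, supplies the
# ground truth that the music itself does not provide.
from math import dist

def centroid(points):
    """Component-wise mean of equal-length feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def annotate_interactively(labelled, unlabelled, oracle, budget):
    """Uncertainty sampling: repeatedly classify the unlabelled segments
    with a nearest-centroid model and ask the analyst (`oracle`) about
    the segment with the smallest distance margin between classes."""
    labelled = dict(labelled)          # feature vector -> label
    pool = list(unlabelled)
    for _ in range(budget):
        if not pool:
            break
        # class centroids from the current annotations
        cents = {lab: centroid([x for x, l in labelled.items() if l == lab])
                 for lab in set(labelled.values())}
        def margin(x):
            ds = sorted(dist(x, c) for c in cents.values())
            return ds[1] - ds[0] if len(ds) > 1 else float("inf")
        query = min(pool, key=margin)  # least certain segment
        labelled[query] = oracle(query)
        pool.remove(query)
    return labelled

# Toy usage: two seed annotations, three unknown segments.
seed = {(0.0, 0.0): "texture", (5.0, 5.0): "gesture"}
pool = [(0.5, 0.2), (4.8, 5.1), (2.6, 2.5)]
oracle = lambda x: "texture" if x[0] < 2.5 else "gesture"  # stands in for the analyst
result = annotate_interactively(seed, pool, oracle, budget=3)
```

The point of the sketch is the division of labour: the classifier does the bulk labelling, while the analyst's effort is spent only where the model is uncertain, which is how the tediousness of fully manual annotation can be reduced.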
As automatic speech recognition (ASR) systems so far do not exhaust the potential of automatic text processing and semantic technology, the INSPIRATION project aims at modelling and supporting document creation processes aided by speech recognition. The first application area is the production of medical reports.
Feasibility Study: Interactive Entertainment of Elder Persons with Intelligent and Emotional Personality Agents
In most listings of the benefits of virtual butlers or other technical companions for elderly people, one aspect is missing: games. Games can fulfill at least two tasks: first, they can entertain and amuse people, making them happier; second, they can train emotional and cognitive capabilities. In this study we investigated which games could be used, which role an intelligent and emotional agent could play, and how the equipment should function and look in order to be accepted by elderly people.
One important means of natural human-computer interaction is (spoken) language, so for a variety of applications it is essential to have high-quality speech synthesis for different languages. The outcome of this project will be high-quality synthetic voices that allow a computer to "speak" in different Viennese dialects/sociolects. Since these voices are built from pieces of actual human speech, the resulting synthetic voices will sound very natural, close to human speech. With this technology it is possible to realize many applications, from education and tourism to art. A mobile sample application, a Viennese district guide capable of various dialects or variants, is also being developed within the project. In the research part of the project, efficient methods are investigated for developing synthetic voices for languages that are variants of other languages. Furthermore, it is necessary to employ methods for switching or shifting between the standard language and dialectal variants, reflecting the fact that this mixing corresponds to the everyday language use of many speakers. User tests are conducted to evaluate the quality of the synthetic voices and of the relevant sample applications.
The rapidly growing amount of music available in digital form via the internet or digital libraries calls for entirely new computer-based methods for analysing, describing, distributing, and presenting music. The currently emerging research and application field known as Music Information Retrieval (MIR) is a direct response to that need. Over the past years, our research group at the Austrian Research Institute for Artificial Intelligence (OFAI) has accumulated substantial expertise in intelligent music processing.