OFAI

Technical Reports - Query Results

Your query term was 'number = 2011'
25 reports found
Reports are sorted by descending number

OFAI-TR-2011-25 ( 497kB PDF file)

On the Nature of Engineering Social Artificial Companions

Dirk Heylen, Rieks op den Akker, Mark ter Maat, Paolo Petta, Stefan Rank, Dennis Reidsma, Job Zwiers

The literature on social agents has put forward a number of requirements that social agents need to fulfill. In this paper we analyze the kinds of reasons and motivations that lie behind the statement of these requirements. In the second part of the paper, we look at how one can go about engineering such social agents. We introduce a general language in which to express dialogue rules, as well as tools that support the development of dialogue systems.

Keywords: Social Robots, Artificial Companions, Cognitive Engineering, Software Engineering

Citation: Heylen D., op den Akker R., ter Maat M., Petta P., Rank S., Reidsma D., Zwiers J. (2011) On the nature of engineering social artificial companions, Applied Artificial Intelligence 25(6):549-574. DOI: 10.1080/08839514.2011.587156


OFAI-TR-2011-24 ( 139kB PDF file)

ICIDS 2011 Workshop: Sharing Interactive Digital Storytelling Technologies

Nicolas Szilas, Thomas Boggini, Paolo Petta

This workshop was organized around three types of contributions:
  • Technology providers, with contributions developed by their research labs or companies available for sharing;
  • Software integrators, with visions on how to technically organize the sharing of IDS-related components and with success and flop stories of community processes;
  • Users, with needs and intentions to use third-party IDS components and middleware within their own scientific, product, and/or artistic development efforts.
     
Online site: http://icids.org/sharing

Keywords: Interactive Digital Storytelling, Component technologies, Software Sharing

Citation: Szilas N., Boggini T., Petta P. (2011) ICIDS 2011 Workshop: Sharing Interactive Digital Storytelling Technologies, in: M. Si et al. (Eds.): ICIDS 2011, LNCS 7069, Springer, Berlin / Heidelberg, pp. 366-367. doi: 10.1007/978-3-642-25289-1_52


OFAI-TR-2011-23 ( 186kB PDF file)

Specification of an Open Architecture for Interactive Storytelling

Nicolas Szilas, Thomas Boggini, Monica Axelrad, Paolo Petta, Stefan Rank

This article introduces OPARIS, an OPen ARchitecture for Interactive Storytelling, which aims at facilitating and fostering the integration of various heterogeneous Interactive Storytelling components. It is based on a modular decomposition of functionalities and a specification of the various messages that the different modules exchange with each other.
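
To give a feel for the message-based decomposition, here is a purely illustrative Python sketch. The module names echo the keywords of this report, but the message fields and the dispatch helper are hypothetical and do not reproduce the actual OPARIS message specification.

    # Hypothetical message passed between two OPARIS-style modules; the real
    # field names and syntax are defined in the paper, not reproduced here.
    message = {
        "sender": "narrative_engine",
        "receiver": "behaviour_engine",
        "type": "action_request",
        "content": {"character": "protagonist", "action": "inform",
                    "addressee": "mentor", "topic": "hidden_door"},
    }

    def dispatch(msg, handlers):
        """Route a message to the handler registered for the receiving module."""
        handlers[msg["receiver"]](msg)

    # A behaviour-engine stub that simply prints what it receives.
    dispatch(message, {"behaviour_engine": print})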

Keywords: Interactive Digital Storytelling, Interactive Narrative, Software Architecture, Narrative Engine, Behaviour Engine, Animation Engine

Citation: Szilas N., Boggini T., Axelrad M., Petta P., Rank S. (2011) Specification of an Open Architecture for Interactive Storytelling, in: M. Si et al. (Eds.): ICIDS 2011, LNCS 7069, Springer, Berlin / Heidelberg, pp. 330-333. doi: 10.1007/978-3-642-25289-1_41


OFAI-TR-2011-22 ( 395kB PDF file)

A Music Engine for Interactive Drama

Nicolas Szilas, Marcos Aristides, Paolo Petta

A number of Interactive Drama prototypes have been created during the last decade. These Artificial Intelligence-based systems usually aim at enabling the user to drive the story as the main character. Despite the acknowledged role of sound and music in visual narrative, almost none of these prototypes includes interactive background music. In this paper, a Music Engine for the IDtension narrative engine is proposed that is able to adapt in real time to the current user actions and narrative state. In the lineage of the branching-music approach developed in some video games, the Music Engine, developed in Max/MSP, uses a pre-composed graph-based score to enrich the whole interactive narrative experience. In particular, the reactivity of the Music Engine is aimed at corroborating the user's subjective feeling of agency, and thereby at enhancing the experience of the Interactive Drama system's main components (user interface, narrative engine, and theatre) as an integrated whole.
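
To make the branching-music idea concrete, the following is a minimal Python sketch of a pre-composed graph-based score whose next segment is chosen from the current narrative state. Segment names and states are invented for illustration; the actual Music Engine is implemented in Max/MSP and driven by the IDtension narrative engine.

    import random

    # Each node is a pre-composed musical segment; outgoing edges name the
    # segments that may follow it and the narrative state they suit.
    SCORE = {
        "intro":   [("calm_a", "calm"), ("tense_a", "tense")],
        "calm_a":  [("calm_b", "calm"), ("tense_a", "tense")],
        "calm_b":  [("calm_a", "calm"), ("tense_a", "tense")],
        "tense_a": [("tense_b", "tense"), ("calm_a", "calm")],
        "tense_b": [("tense_a", "tense"), ("calm_b", "calm")],
    }

    def next_segment(current, narrative_state):
        """Prefer edges matching the narrative state, so the music reacts to
        the user's actions in real time; otherwise fall back to any edge."""
        edges = SCORE[current]
        matching = [seg for seg, state in edges if state == narrative_state]
        return random.choice(matching or [seg for seg, _ in edges])

    print(next_segment("intro", "tense"))   # -> "tense_a"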

Keywords: Interactive Drama, Interactive Digital Storytelling, Music Engine

Citation: Nicolas Szilas, Marcos Aristides and Paolo Petta (2011) A Music Engine for Interactive Drama, in: Licinio Roque and Valter Alves (eds.) Proceedings of the 6th Audio Mostly Conference: A Conference on Interaction with Sound (AM '11), Coimbra, Portugal, ACM Press, 2011. Poster.


OFAI-TR-2011-21 ( 271kB PDF file)

A survey of research work in computer science and cognitive science dedicated to the modeling of reactive human behaviors

Stéphane Donikian, Paolo Petta

Modeling believable autonomous agents needs to take into account many different aspects from very different disciplines, ranging from cognitive psychology to mechanics. In this paper, we focus on research work dedicated to the modeling of human decision-making in a reactive way, a domain in between the biomechanical motion control of the activity and the rational and social background which motivates and shapes the execution of such activities. We cover models of reactive human behaviors introduced in computer science and cognitive science, assessing and comparing them from the application-oriented perspective of modeling credible real-time virtual anthropomorphic actors.

Keywords: Reactive behaviour modeling, Virtual humans, Behavioral animation, Human cognition

Citation: Donikian S., Petta P. (2011) A survey of research work in computer science and cognitive science dedicated to the modeling of reactive human behaviors, Computer Animation & Virtual Worlds 22(5):445-455. doi: 10.1002/cav.375


OFAI-TR-2011-20 ( 343kB PDF file)

Music Similarity Estimation with the Mean-Covariance Restricted Boltzmann Machine

Jan Schlueter, Christian Osendorfer

Existing content-based music similarity estimation methods largely build on complex hand-crafted feature extractors, which are difficult to engineer. As an alternative, unsupervised machine learning makes it possible to learn features empirically from data. We train a recently proposed model, the mean-covariance Restricted Boltzmann Machine, on music spectrogram excerpts and employ it for music similarity estimation. In k-NN based genre retrieval experiments on three datasets, it clearly outperforms MFCC-based methods, beats simple unsupervised feature extraction using k-Means, and comes close to the state of the art. This shows that unsupervised feature extraction is a viable alternative to engineered features.
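
As a rough sketch of the evaluation protocol mentioned above (k-NN based genre retrieval on learned features), the code below computes a leave-one-out retrieval score from an arbitrary feature matrix; the mean-covariance RBM feature extraction itself is not reproduced, and the function name is ours.

    import numpy as np

    def knn_genre_retrieval_score(features, genres, k=5):
        """Leave-one-out k-NN genre retrieval: for every track, look up its k
        nearest neighbours in feature space and count how many share its
        genre. 'features' is an (n, d) array of learned representations,
        'genres' a length-n sequence of genre labels."""
        features = np.asarray(features, dtype=float)
        genres = np.asarray(genres)
        n = len(features)
        diffs = features[:, None, :] - features[None, :, :]
        D = np.sqrt((diffs ** 2).sum(-1))   # pairwise Euclidean distances
        np.fill_diagonal(D, np.inf)         # never retrieve the query itself
        hits = 0
        for i in range(n):
            neighbours = np.argsort(D[i])[:k]
            hits += np.sum(genres[neighbours] == genres[i])
        return hits / (n * k)               # fraction of genre-matching neighbours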

Keywords: Music Information Retrieval, Music Similarity, Boltzmann Machine

Citation: Schlueter J., Osendorfer C.: Music Similarity Estimation with the Mean-Covariance Restricted Boltzmann Machine, Proceedings of the 10th International Conference on Machine Learning and Applications (ICMLA 2011), Honolulu, USA, 2011.


OFAI-TR-2011-19 ( 522kB PDF file)

Advantages of nonstationary Gabor transforms in beat tracking

Andre Holzapfel, Gino Angelo Velasco, Nicki Holighaus, Monika Doerfler, Arthur Flexer

In this paper the potential of using nonstationary Gabor transforms for beat tracking in music is examined. Nonstationary Gabor transforms are a generalization of the short-time Fourier transform which allows flexibility in choosing the number of bins per octave, while retaining a perfect inverse transform. In this paper, it is evaluated whether these properties can lead to improved beat tracking in music signals, thus presenting an approach that introduces recent findings in mathematics to music information retrieval. For this, both nonstationary Gabor transforms and the short-time Fourier transform are integrated into a simple beat tracking framework. Statistically significant improvements are observed on a large dataset, which motivates integrating the nonstationary Gabor transform into state-of-the-art approaches for beat tracking and tempo estimation.
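
For readers unfamiliar with the transform, the LaTeX sketch below recalls the standard construction being generalized; it follows the usual frame-theoretic formulation rather than the exact notation of the paper.

    % STFT: every analysis atom is a time shift and modulation of one window g
    g_{m,n}(t) = g(t - n a)\, e^{2\pi i m b t}, \qquad c_{m,n} = \langle f, g_{m,n} \rangle .
    % Nonstationary Gabor transform: window, time step and centre frequency may
    % depend on the channel m, e.g. geometrically spaced \xi_m for a fixed
    % number of bins per octave
    g_{m,n}(t) = g_m(t - n a_m)\, e^{2\pi i \xi_m t}, \qquad c_{m,n} = \langle f, g_{m,n} \rangle .
    % Perfect (stable) inversion is retained whenever the atoms form a frame:
    A \|f\|^2 \le \sum_{m,n} |\langle f, g_{m,n} \rangle|^2 \le B \|f\|^2, \qquad 0 < A \le B < \infty .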

Keywords: beat tracking, nonstationary Gabor transform, music information retrieval

Citation: Holzapfel A., Velasco G., Holighaus N., Doerfler M., Flexer A.: Advantages of nonstationary Gabor transforms in beat tracking. Technical Report, Österreichisches Forschungsinstitut für Artificial Intelligence, Wien, TR-2011-19, 2011


OFAI-TR-2011-18

An Interdisciplinary VR-architecture for 3D chatting with non-verbal communication

Stephane Gobron, Junghyun Ahn, Quentin Silvestre, Daniel Thalmann, Stefan Rank, Marcin Skowron, Georgios Paltoglou, Michael Thelwall

The communication between avatar and agent has already been treated from different but specialized perspectives. In contrast, this paper gives a balanced view of every key architectural aspect: from text analysis to computer graphics, the chatting system and the emotional model. Non-verbal communication, such as facial expression, gaze, or head orientation, is crucial for simulating realistic behavior, but is still a neglected aspect in the simulation of virtual societies. In response, this paper aims to present the modularity necessary to allow virtual human (VH) conversation with consistent facial expressions, whether between two users through their avatars, between an avatar and an agent, or between an avatar and a Wizard of Oz. We believe such an approach is particularly suitable for the design and implementation of applications involving VH interaction in virtual worlds. To this end, three key features are needed to design and implement this system, entitled 3D-emoChatting. First, a global architecture that combines components from several research fields. Second, real-time analysis and management of emotions that allows interactive dialogues with non-verbal communication. Third, a model of a virtual emotional mind, called emoMind, that makes it possible to simulate individual emotional characteristics. To conclude, we briefly outline a user test whose full presentation is beyond the scope of this paper.

Keywords: Three-Dimensional Graphics and Realism, Virtual Reality, Natural Language Processing, Text Analysis

Citation: In the Proceedings of the Joint Virtual Reality Conference of EuroVR and EGVE, 2011


OFAI-TR-2011-17

Model of emotional dialogues based on entropy model

Julian Sienkiewicz, Marcin Skowron, Georgios Paltoglou, Janusz Holyst

We process emotionally annotated (negative, neutral, positive) data from Internet Relay Chat (IRC), extracting 90,000 one-to-one dialogues between users. Statistical analysis shows that, regardless of the length of the dialogue measured in the number of comments, each dialogue ends when the probabilities of finding positive and neutral emotional values equalize. Moreover, the entropy of the emotional probability distribution increases as the dialogue evolves. Additionally, we observe clustering of comments with the same emotional value. Based on entropy growth and emotional clustering, we construct a model of dialogues that reproduces the characteristics of the real data, and we show how it can be useful for an automatic moderator in this medium.
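
A small worked illustration of the quantities discussed above: the sketch below tracks the Shannon entropy of the running distribution of emotional values (negative, neutral, positive) as a toy dialogue unfolds. The data and encoding are invented and not taken from the IRC corpus.

    import numpy as np
    from collections import Counter

    def running_emotional_entropy(dialogue):
        """Entropy of the distribution over emotional values (-1 = negative,
        0 = neutral, +1 = positive) observed so far, after each comment."""
        entropies, counts = [], Counter()
        for e in dialogue:
            counts[e] += 1
            total = sum(counts.values())
            probs = np.array([c / total for c in counts.values()])
            entropies.append(float(-(probs * np.log2(probs)).sum()))
        return entropies

    # In the model described above, this entropy tends to grow as the dialogue evolves.
    print(running_emotional_entropy([1, 1, 0, 1, -1, 0, 0, 1, -1, 0]))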

Keywords: online communication, social norms, modelling

Citation: In the Proceedings of European Conference on Complex Systems, ECCS 2011


OFAI-TR-2011-16

Emotional communication patterns in online chat communities

Antonios Garas, David Garcia Becerra, Frank Schweitzer, Marcin Skowron

Real-time communication tools, like chats or instant message services, have an increasing popularity for both casual and professional interactions. These platforms allow us to study the patterns of human behavior in emotional communication through the messages written by people participating in these conversations. We study a dataset composed of logs from Internet Relay Chat (IRC) channels, in which a large number of users share comments in real time. We use a sentiment analysis tool to extract the emotions expressed in each comment, classified as positive, negative, or neutral. Statistical analysis of the time delays between user messages shows that people in chatrooms follow similar principles as in other means of communication. Furthermore, the time dynamics of the whole conversation shows the presence of long-range correlations. We analyze the persistence of the emotional expression of individuals, i.e. how likely they are to express a similar sentiment as in the previous message. We find that most of the users behave in an emotionally persistent way, while some of them have a more random or anti-persistent behavior. Even in the presence of this heterogeneity, there is persistence in the channel as a whole. These results indicate the presence of social norms of emotional expression, as well as the existence of social links between users under the fast communication of a chatroom. We have developed and analyzed a model for the exchange of emotions online that shows the same features as the IRC data, where collective emotions emerge from individual behavior and communication.
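
A minimal sketch of the persistence measure described above, assuming sentiment labels -1/0/+1 per message: the fraction of consecutive messages by one user that keep the same label, compared with a shuffled baseline. The message data are invented for illustration.

    import random

    def persistence(sentiments):
        """Fraction of consecutive messages that carry the same sentiment label.
        Values clearly above a shuffled baseline indicate emotionally persistent
        behaviour; values below it, anti-persistent behaviour."""
        same = sum(a == b for a, b in zip(sentiments, sentiments[1:]))
        return same / (len(sentiments) - 1)

    msgs = [1, 1, 1, 0, 1, 1, -1, -1, 1, 1]
    baseline = sum(persistence(random.sample(msgs, len(msgs)))
                   for _ in range(1000)) / 1000
    print(persistence(msgs), baseline)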

Keywords: online communication, social norms, agent-based modeling

Citation: In Proceedings of the International Society for Research on Emotion Conference, ISRE 2011


OFAI-TR-2011-15

CyberEmotions: the good, the bad, the neutral - effect of an affective profile in dialog system-user online conversations

Marcin Skowron, Stefan Rank

Emotionally driven online behavior is traceable in a wide range of human communication processes on the Internet. Here, the sum of the individual emotions of a large number of users, together with their interconnectivity and complex dynamics, influences the formation, evolution and break-up of online communities. Our research concentrates on the basic communication process between two conversants. Such interactions constitute a foundation for the modeling of more complex, multi-agent communication processes. Using artificial conversational entities (dialog systems), we investigate the role of emotions in online, natural language based communication. Our previous work demonstrated that a dialog system's capability to establish an emotional connection, and further to conduct a realistic and enjoyable dialog, was on par with the results obtained in a Wizard of Oz setting. In this talk, we present findings from recent experiments on the effect of conversant affective profiles (i.e., positive, negative, neutral) and their influence on the communication processes. The results demonstrate that the affective profile to a large extent determines the assessment of users' emotional connection and enjoyment of the interaction, while it does not significantly influence the perception of the core capabilities of the dialog systems, i.e., dialog coherence and dialog realism. The emotional changes experienced by the participants during the online interactions were correlated with the type of the artificial system's affective profile and induced changes to various aspects of the conducted dialogs, e.g., timing, communication style, and users' expressions of affective states.

Keywords: affective dialog system, affective human-computer interaction, agent control architecture

Citation: In Proceedings of International Society for Research on Emotion Conference, ISRE 2011.


OFAI-TR-2011-14 ( 279kB PDF file)

Using Mutual Proximity to Improve Content-Based Audio Similarity

Dominik Schnitzer, Arthur Flexer, Markus Schedl, Gerhard Widmer

This work introduces Mutual Proximity, an unsupervised method which transforms arbitrary distances to similarities computed from the shared neighborhood of two data points. This reinterpretation aims to correct inconsistencies in the original distance space, like the hub phenomenon. Hubs are objects which appear unwontedly often as nearest neighbors in predominantly high-dimensional spaces. We apply Mutual Proximity to a widely used and standard content-based audio similarity algorithm. The algorithm is known to be negatively affected by the high number of hubs it produces. We show that without a modification of the audio similarity features or inclusion of additional knowledge about the datasets, applying Mutual Proximity leads to a significant increase of retrieval quality: (1) hubs decrease and (2) the k-nearest-neighbor classification rates increase significantly. The results of this paper show that taking the mutual neighborhood of objects into account is an important aspect which should be considered for this class of content-based audio similarity algorithms.
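
The shared-neighborhood idea can be sketched as follows. This is a simplified empirical variant under an independence assumption, MP(x, y) ≈ P(X > d(x, y)) * P(Y > d(y, x)), where X is the distribution of distances from x to all other points; it is not necessarily the exact estimator used in the report.

    import numpy as np

    def mutual_proximity(D):
        """Turn a symmetric distance matrix D (n x n, zero diagonal) into a
        Mutual Proximity similarity matrix using empirical probabilities."""
        n = D.shape[0]
        MP = np.ones_like(D, dtype=float)
        for x in range(n):
            for y in range(n):
                if x == y:
                    continue
                # probability that a random third point lies farther from x
                # than y does, and farther from y than x does
                p_x = np.sum(D[x] > D[x, y]) / (n - 2)
                p_y = np.sum(D[y] > D[y, x]) / (n - 2)
                MP[x, y] = p_x * p_y
        return MP

    D = np.random.rand(6, 6)
    D = (D + D.T) / 2.0
    np.fill_diagonal(D, 0.0)
    print(mutual_proximity(D))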

Keywords: Music Information Retrieval, Audio Similarity Classification, Hubs

Citation: Schnitzer D., Flexer A., Schedl M., Widmer G.: Using Mutual Proximity to Improve Content-Based Audio Similarity. Technical Report, Österreichisches Forschungsinstitut für Artificial Intelligence, Wien, TR-2011-14, 2011


OFAI-TR-2011-13

Affect Bartender - Affective Cues and Their Application in a Conversational Agent

Marcin Skowron, Georgios Paltoglou

This paper presents methods for the detection of textual expressions of users' affective states and explores an application of these affective cues in a conversational system -- Affect Bartender. We also describe the architecture of the system, core system components and a range of developed communication interfaces. The application of the described methods is illustrated with examples of dialogs conducted with experiment participants in a Virtual Reality setting.

Keywords: Affective Interactions, Conversational Agent, Textual Affect Sensing, Sentiment Classification

Citation: Skowron M., Paltoglou G.: Affect Bartender - Affective Cues and Their Application in a Conversational Agent. Technical Report, Österreichisches Forschungsinstitut für Artificial Intelligence, Wien, TR-2011-13, 2011


OFAI-TR-2011-12

Talking with affective dialog systems: Extending the analysis of affect in online communication

Marcin Skowron

Recent advances in computer technology extend the capacity of autonomous systems to detect and categorise textual expressions of affective states. On the one hand, this enables large-scale studies of the role of affect in ICT-mediated communication processes. On the other hand, it supports the development of interactive systems that can sense and take into account information on users' sentiment to manage the communication flow. In our talk, we introduce affective dialog systems and autonomous interactive bots that provide supplemental means for analysing the role of affect in synchronous online communication. In particular, such systems enable direct querying of users about their sentiments and affective states towards a set of entities, events and processes, and allow follow-up, task-oriented dialogs to gain additional insights, e.g., on the users' background, motivations and expectations. The themes used by the system are provided manually or acquired from online news feeds, e.g., Reuters News. We describe how "hot topics" can be automatically detected and incorporated in the dialog to acquire information on users' affective responses to topics of interest. The presented systems are deployable in various interaction settings, e.g., virtual online worlds, providing access to user groups that are often not actively engaged in other online communication channels such as blogs and newsgroups. We present insights from experiments on system-user interactions, conducted in 3D virtual reality settings, in which the affective dialog system managed the verbal aspects of a virtual human's communication. The system's performance, in terms of its capability to (i) generate a realistic dialog, (ii) provide an enjoyable experience for the users and (iii) establish an emotional connection with the experiment participants, matched the results obtained in a Wizard-of-Oz setting, i.e., with an unseen human operator controlling the conversation of a virtual character.

Keywords: dialog system, affective computing, HCI

Citation: Skowron M.: Talking with affective dialog systems: Extending the analysis of affect in online communication. Technical Report, Österreichisches Forschungsinstitut für Artificial Intelligence, Wien, TR-2011-12, 2011


OFAI-TR-2011-11

Effect of affective profile on communication patterns and affective expressions in interactions with a dialog system

Marcin Skowron, Mathias Theunis, Stefan Rank, Anna Borowiec

Interlocutors' affective profile and character traits play an important role in interactions. In the presented study, we apply a dialog system to investigate the effects of the affective profile on user-system communication patterns and users' expressions of affective states. We describe the data-set acquired from experiments with the affective dialog system, the tools used for its annotation and findings regarding the effect of affective profile on participants' communication style and affective expressions.

Keywords: affective profile, dialog system, affective computing, HCI

Citation: Skowron M., Theunis M., Rank S., Borowiec A.: Effect of affective profile on communication patterns and affective expressions in interactions with a dialog system. Technical Report, Österreichisches Forschungsinstitut für Artificial Intelligence, Wien, TR-2011-11, 2011


OFAI-TR-2011-10

The good, the bad and the neutral: affective profile in dialog system-user communication

Marcin Skowron, Stefan Rank, Mathias Theunis, Julian Sienkiewicz

We describe the use of affective profiles in a dialog system and its effect on participants' perception of conversational partners and experienced emotional changes in an experimental setting, as well as the mechanisms for realising three different affective profiles and for steering task-oriented follow-up dialogs. Experimental results show that the system's affective profile determines the rating of chatting enjoyment and user-system emotional connection to a large extent. Self-reported emotional changes experienced by participants during an interaction with the system are also strongly correlated with the type of applied profile. Perception of core capabilities of the system, realism and coherence of dialog, are only influenced to a limited extent.

Keywords: affective dialog system, affective profile, conversational agent, affective computing, HCI

Citation: Skowron M., Rank S., Theunis M., Sienkiewicz J.: The good, the bad and the neutral: affective profile in dialog system-user communication. Technical Report, Österreichisches Forschungsinstitut für Artificial Intelligence, Wien, TR-2011-10, 2011


OFAI-TR-2011-09

Sentiment analysis of informal textual communication in cyberspace

Georgios Paltoglou, Stephane Gobron, Marcin Skowron, Mike Thelwall, Daniel Thalmann

The ability to correctly identify the existence and polarity of emotion in informal, textual communication is a very important part of a realistic and immersive 3D environment where people communicate with one another through avatars or with an automated system. Such a feature would provide the system the ability to realistically represent the mood and intentions of the participants, thus greatly enhancing their experience. In this paper, we study and compare a number of approaches for detecting whether a textual utterance is of objective or subjective nature and in the latter case detecting the polarity of the utterance (i.e. positive vs. negative). Experiments are carried out on a real corpus of social exchanges in cyberspace and general conclusions are presented.
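
The paper compares several approaches; purely as one hypothetical instantiation of the two-stage scheme described above (subjectivity detection followed by polarity classification), a machine-learning sketch with scikit-learn and invented toy training data could look like this.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Stage 1: is the utterance subjective at all?
    subjectivity_clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                                     LogisticRegression(max_iter=1000))
    # Stage 2: if subjective, is it positive or negative?
    polarity_clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                                 LogisticRegression(max_iter=1000))

    # Toy training data, invented for illustration only.
    subjectivity_clf.fit(["the meeting is at 3pm", "i love this song",
                          "the report is attached", "this is awful"],
                         ["objective", "subjective", "objective", "subjective"])
    polarity_clf.fit(["i love this song", "this is awful",
                      "great job everyone", "what a terrible idea"],
                     ["positive", "negative", "positive", "negative"])

    def classify(utterance):
        """Return 'neutral' for objective text, otherwise its polarity."""
        if subjectivity_clf.predict([utterance])[0] == "objective":
            return "neutral"
        return polarity_clf.predict([utterance])[0]

    print(classify("what a great day"))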

Keywords: Opinion Mining, Sentiment Analysis, Conversational Systems, Virtual Reality, Virtual Human, Emotional Profile

Citation: Paltoglou G., Gobron S., Skowron M., Thelwall M., Thalmann D.: Sentiment analysis of informal textual communication in cyberspace. Technical Report, Österreichisches Forschungsinstitut für Artificial Intelligence, Wien, TR-2011-09, 2011


OFAI-TR-2011-08 ( 2498kB PDF file)

Identification of perceptual qualities in textural sounds using the repertory grid method

Thomas Grill, Arthur Flexer, Stuart Cunningham

This paper is about exploring which perceptual qualities are relevant to people listening to textural sounds. Knowledge about those personal constructs shall eventually lead to more intuitive interfaces for browsing large sound libraries. By conducting mixed qualitative-quantitative interviews within the repertory grid framework, ten bi-polar qualities are identified. A subsequent web-based study yields measures of inter-rater agreement and mutual similarity of the perceptual qualities based on a selection of 100 textural sounds. Additionally, some initial experiments are conducted to test standard audio descriptors for their correlation with the perceptual qualities.

Keywords: textural audio, auditory perception, verbal description, personal constructs, repertory grid, machine listening

Citation: Grill T., Flexer A., Cunningham S.: Identification of perceptual qualities in textural sounds using the repertory grid method. Technical Report, Österreichisches Forschungsinstitut für Artificial Intelligence, Wien, TR-2011-08, 2011


OFAI-TR-2011-07 ( 244kB PDF file)

Improving tempo-sensitive and tempo-robust descriptors for rhythmic similarity

Andre Holzapfel, Arthur Flexer, Gerhard Widmer

For the description of the rhythmic content of music signals, features are usually preferred that are invariant in the presence of tempo changes. In this paper it is shown that the importance of tempo depends on the musical context. For popular music, a tempo-sensitive feature is improved on multiple datasets using analysis of variance, and it is shown that a tempo-robust description also profits from integration into the resulting processing framework. Important insights are given into optimal parameters for rhythm description, and limitations of current approaches are indicated.

Keywords: Music Information Retrieval, Tempo, Rhythm

Citation: Holzapfel A., Flexer A., Widmer G.: Improving tempo-sensitive and tempo-robust descriptors for rhythmic similarity. Technical Report, Österreichisches Forschungsinstitut für Artificial Intelligence, Wien, TR-2011-07, 2011


OFAI-TR-2011-06 ( 12592kB PDF file)

On automated annotation of acousmatic music

Volkmar Klien, Thomas Grill, Arthur Flexer

This paper presents an inquiry concerning the feasibility of using existing methods from the field of Music Information Retrieval (MIR) for automated annotation of acousmatic music. Thorough discussion and appraisal of the meaning and role of annotation in this context leads to the conclusion that: (i) full automation is not possible due to the lack of a "ground truth" and the absence of semantic comprehension on the side of the computer, (ii) MIR can nevertheless play a valuable role by providing human annotators with tools for interactive annotation. We present two possible approaches to interactive annotation applied to compositions of acousmatic music, namely John Chowning's Turenas and Denis Smalley's Wind Chimes. We also discuss the possible impact of such semi-automatic annotation on the theoretical coverage and practice of acousmatic music.

Keywords: Acousmatic music, Music information retrieval, Annotation

Citation: Klien V., Grill T., Flexer A.: On automated annotation of acousmatic music, Journal of New Music Research, Volume 41, Issue 2, pages 153-173, 2012


OFAI-TR-2011-05 ( 214kB PDF file)

No peanuts! Affective Cues for the Virtual Bartender

Marcin Skowron, Hannes Pirker, Stefan Rank, Georgios Paltoglou, Junghyun Ahn, Stephane Gobron

The aim of this paper is threefold: it explores methods for the detection of affective states in text, it presents the usage of such affective cues in a conversational system and it evaluates its effectiveness in a virtual reality setting. Valence and arousal values, used for generating facial expressions of users' avatars, are also incorporated into the dialog, helping to bridge the gap between textual and visual modalities. The system is evaluated in terms of its ability to: i) generate a realistic dialog, ii) create an enjoyable chatting experience, and iii) establish an emotional connection with participants. Results show that user ratings for the conversational agent match those obtained in a Wizard of Oz setting.

Keywords: Conversational System, Virtual Agent, Dialog System, Affective Computing, HCI, System Evaluation

Citation: Skowron M., Pirker H., Rank S., Paltoglou G., Ahn J., Gobron S.: No peanuts! Affective Cues for the Virtual Bartender. Technical Report, Österreichisches Forschungsinstitut für Artificial Intelligence, Wien, TR-2011-05, 2011


OFAI-TR-2011-04 ( 69kB PDF file)

Virtual Agent Modeling in the RASCALLI Platform

Christian Eis, Marcin Skowron, Brigitte Krenn

The RASCALLI platform is both a runtime and a development environment for virtual systems augmented with cognition. It provides a framework for the implementation and execution of modular software agents. Due to the underlying software architecture and the modularity of the agents, it allows the parallel execution and evaluation of multiple agents. These agents might all be of the same kind, of vastly different kinds, or they might differ only in specific (cognitive) aspects, so that the performance of these aspects can be effectively compared and evaluated.

Keywords: Cognitive Agents, Agent Modeling and Evaluation

Citation: Eis C., Skowron M., Krenn B.: Virtual Agent Modeling in the RASCALLI Platform. Technical Report, Österreichisches Forschungsinstitut für Artificial Intelligence, Wien, TR-2011-04, 2011


OFAI-TR-2011-03 ( 57kB PDF file)

Adaptive Mind Agent

Brigitte Krenn, Marcin Skowron, Gregor Sieber, Erich Gstrein, Joerg Irran

We present the Adaptive Mind Agent, an intelligent virtual agent that is able to actively participate in a real-time, dynamic environment. The agent is equipped with a collection of processing tools that form the basis of its perception of and action on its environment, which consists of web documents, URLs, RSS feeds, domain-specific knowledge bases, other accessible virtual agents and the user. How these predispositions are finally shaped into unique agent behaviour depends on the agent's abilities to learn through actual interactions, in particular the abilities: (i) to memorize and evaluate episodes comprising the actions the agent has performed on its environment in the past, depending on its perception of the user requests and its interpretation of the user's feedback reinforcing or inhibiting a certain action; (ii) to dynamically develop user-driven interest and preference profiles by memorizing and evaluating the user's clicks on selected web pages.
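
As a toy sketch of the episode mechanism in point (i) above, the code below keeps a score per (request type, action) pair that the user's feedback reinforces or inhibits; the class and method names are hypothetical and do not mirror the actual agent implementation.

    from collections import defaultdict

    class EpisodeMemory:
        """Remember which action was taken for a kind of user request and
        adjust its score from reinforcing (+1) or inhibiting (-1) feedback."""
        def __init__(self):
            self.scores = defaultdict(float)   # (request_type, action) -> score

        def record(self, request_type, action, feedback):
            self.scores[(request_type, action)] += feedback

        def best_action(self, request_type, candidate_actions):
            return max(candidate_actions,
                       key=lambda a: self.scores[(request_type, a)])

    memory = EpisodeMemory()
    memory.record("find_music_info", "search_web", +1)
    memory.record("find_music_info", "query_knowledge_base", -1)
    print(memory.best_action("find_music_info",
                             ["search_web", "query_knowledge_base"]))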

Keywords: Virtual Agent, HCI, System Adaptation

Citation: Krenn B., Skowron M., Sieber G., Gstrein E., Irran J.: Adaptive Mind Agent. Technical Report, Österreichisches Forschungsinstitut für Artificial Intelligence, Wien, TR-2011-03, 2011


OFAI-TR-2011-02 ( 860kB PDF file)

Affect Bartender - Affective Cues and Their Application in a Conversational Agent

Marcin Skowron, Georgios Paltoglou

This paper presents methods for the detection of textual expressions of users' affective states and explores an application of these affective cues in a conversational system – Affect Bartender. We also describe the architecture of the system, core system components and a range of developed communication interfaces. The application of the described methods is illustrated with examples of dialogs conducted with experiment participants in a Virtual Reality setting.

Keywords: Affective Interactions, Conversational Agent, Dialog System, Textual Affect Sensing, Sentiment Classification

Citation: Skowron M., Paltoglou G.: Affect Bartender - Affective Cues and Their Application in a Conversational Agent. Technical Report, Österreichisches Forschungsinstitut für Artificial Intelligence, Wien, TR-2011-02, 2011


OFAI-TR-2011-01 ( 282kB PDF file)

Sentiment analysis of informal textual communication in cyberspace

Georgios Paltoglou, Stephane Gobron, Marcin Skowron, Mike Thelwall, Daniel Thalmann

The ability to correctly identify the existence and polarity of emotion in informal, textual communication is a very important part of a realistic and immersive 3D environment where people communicate with one another through avatars or with an automated system. Such a feature would provide the system the ability to realistically represent the mood and intentions of the participants, thus greatly enhancing their experience. In this paper, we study and compare a number of approaches for detecting whether a textual utterance is of objective or subjective nature and in the latter case detecting the polarity of the utterance (i.e. positive vs. negative). Experiments are carried out on a real corpus of social exchanges in cyberspace and general conclusions are presented.

Keywords: Sentiment Classification, Affective Computing

Citation: Paltoglou G., Gobron S., Skowron M., Thelwall M., Thalmann D.: Sentiment analysis of informal textual communication in cyberspace. Technical Report, Österreichisches Forschungsinstitut für Artificial Intelligence, Wien, TR-2011-01, 2011