by Raja Lala and Johan Jeuring (UU).

Over the past months we have worked hard to deliver a configurable version of the communication scenario editor. We developed the RAGE assets by ‘carving’ them out of a pre-existing learning environment (Communicate!) that has been in use at Utrecht University (UU) for three years.

In that process, and in developing a configurable version, the RAGE assets evolved beyond our UU learning environment. We have now updated the UU learning environment editor to be a (configured) instance of the RAGE asset. This has improved quality, because the learning environment has been used and tested in the field: communication teachers at our university use it as part of their curriculum.

In the RAGE consortium, our assets (both the editor and the communication scenario reasoner) have been seamlessly integrated by Nurogames into the WaterCooler game for Hull. It is interesting to see that the configured editor instance for Hull uses some features and properties (such as Intent) differently from how they are used at our university. Our assets have been able to handle the various requests of both the game developer (Nurogames) and the use-case owner (Hull). Nurogames integrated our assets on the basis of our documentation alone, which indicates that our assets are easy for new parties to use.

Since our assets are in a stable state, we are looking into further extending our research area.

We currently support a scripted dialogue approach. A domain expert develops a scripted scenario as a sequence of subjects. A scenario often follows a communication protocol, such as delivering bad news to a patient. A subject is a graph of statements, alternating between a computer (virtual character) statement and multiple player statements. The game-play uses a scenario, a virtual character, and an appropriate background location to present a conversation simulation to a player/student. A player navigates the simulation and converses with the virtual character by choosing one of the pre-scripted player statements.
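To make this structure concrete, here is a minimal sketch (our own illustration, not the actual asset code) of a scenario subject as a graph of alternating statements; all names and the example statements are invented for illustration:

```python
# Illustrative data structure for a scripted scenario subject:
# a subject is a graph of statements, alternating between the virtual
# character and the player. Names and example texts are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Statement:
    speaker: str                                   # "character" or "player"
    text: str
    next_ids: list = field(default_factory=list)   # ids of follow-up statements

@dataclass
class Subject:
    name: str
    statements: dict = field(default_factory=dict) # id -> Statement
    start_id: str = ""

def player_options(subject, character_statement):
    """After a character statement, the player chooses among its successors."""
    return [subject.statements[i] for i in character_statement.next_ids
            if subject.statements[i].speaker == "player"]

# Example: one subject from a bad-news protocol.
subject = Subject(name="Deliver diagnosis", start_id="c1")
subject.statements = {
    "c1": Statement("character", "I'm afraid the test results are not good.",
                    ["p1", "p2"]),
    "p1": Statement("player", "Can you tell me exactly what they show?"),
    "p2": Statement("player", "I don't want to hear it."),
}

options = player_options(subject, subject.statements["c1"])
print([s.text for s in options])
```

In this sketch, navigating the simulation amounts to repeatedly showing the character statement and letting the player pick one of the successor statements.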

Usability of authoring environments often comes at the expense of expressiveness (Murray, 2003);
our editor combines ease of use with advanced features like interleaving and premature endings
in addition to standard features like sequence, choice, and conditional options.

In ‘Face-to-Face Interaction with Pedagogical Agents, Twenty Years Later,’ Johnson et al. evaluate the evolution of features of pedagogical agents over two decades. They present the possibility of interactive natural language dialogue by combining advances in natural language understanding and dialogue management. They argue that complex technologies can be difficult to understand, control, and author, and that this could stand in the way of acceptance and adoption of these technologies by teachers and other stakeholders.

Our editor has proven easy to use and has been tested in the field, so we will keep it
as-is. As a subject of research, we are investigating whether we can process natural
language input within a scripted scenario.

At the most recent project meeting and hackathon, we looked at assets from the RAGE partners, and we now envisage a collaboration with UPB (Universitatea Politehnica din București).

As a first step, we modify our game interface to let a player type free text instead of selecting from multiple pre-scripted statements. We then match this ‘free’ input text to the pre-scripted statement choices that would have been available at that step in the simulation.

We use a text-matching service from UPB, which applies natural language processing techniques such as
latent semantic analysis, latent Dirichlet allocation, word vectors, and affective words to
compare two sentences and determine a similarity score. UPB currently offers an English
corpus and is developing a Dutch one using Affective Word Norms. The initial results of
similarity matching on English sentences are encouraging.
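We have no public API details for the UPB service, so the sketch below stands in for it with a simple bag-of-words cosine similarity; in the real pipeline the score would come from UPB's techniques (LSA, LDA, word vectors, affective words). It only illustrates the matching step: mapping a player's free input to the closest pre-scripted statement.

```python
# Stand-in for the UPB similarity service: a bag-of-words cosine score.
# All sentences below are invented examples.
import math
from collections import Counter

def cosine_bow(a: str, b: str) -> float:
    """Cosine similarity between two sentences as word-count vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

def best_match(free_input: str, scripted_options: list) -> str:
    """Map a player's free-text input to the closest pre-scripted statement."""
    return max(scripted_options, key=lambda s: cosine_bow(free_input, s))

options = ["Can you tell me exactly what the results show",
           "I do not want to hear it"]
print(best_match("please tell me what the results show", options))
```

In game-play, the selected statement would then drive the simulation exactly as if the player had clicked it.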

In the next step, we plan to investigate methods to annotate a scripted scenario before game-play. A statement in a scenario can be analysed using UPB's sentiment analysis and annotated, for example, with the sentiment ‘angry’. During game-play, we can match this sentiment to the sentiment of a player's ‘free’ input text. UPB uses the classification of six major dimensions from Picard (1997): excited, sad, scared, angry, tender, and happy.
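The annotate-then-match idea can be sketched as follows. The toy keyword classifier is purely a placeholder for UPB's sentiment analysis (which we cannot reproduce here); only the six Picard (1997) labels come from the text, and all statements and lexicon entries are invented:

```python
# Illustrative sketch of pre-game annotation plus in-game sentiment matching.
# The keyword classifier is a hypothetical stand-in for UPB's service.
PICARD_LABELS = {"excited", "sad", "scared", "angry", "tender", "happy"}

# Pre-game: each scripted statement is annotated with a sentiment label.
annotated = {
    "That is completely unacceptable!": "angry",
    "Thank you, that is a relief.": "happy",
}

TOY_LEXICON = {"unacceptable": "angry", "furious": "angry",
               "relief": "happy", "great": "happy"}

def toy_sentiment(text: str) -> str:
    """Placeholder classifier: first lexicon hit wins, else 'neutral'."""
    for word in text.lower().split():
        word = word.strip(".,!?")
        if word in TOY_LEXICON:
            return TOY_LEXICON[word]
    return "neutral"

def sentiment_matches(free_input: str, scripted: str) -> bool:
    """During game-play: compare the player's sentiment to the annotation."""
    return toy_sentiment(free_input) == annotated[scripted]

print(sentiment_matches("I am furious about this", "That is completely unacceptable!"))
```

A matching sentiment could then be used as an extra signal alongside the similarity score when choosing the closest pre-scripted statement.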

Initial testing suggests that assigning the correct sentiment to a sentence is challenging, so we need to investigate whether we can use additional methods. One idea is to use knowledge of the scenario to assign a more appropriate sentiment to a statement. Since we know all the statements in a scenario, we could perhaps design a pedagogical agent with a mini-corpus of all the statements in a given scenario and match it against a broader corpus.

Ideas are welcome.
