|Alexander Koller, University of Potsdam|
|Monday, October 7, 2013, 2:15 - 3 p.m., TU Hochhaus, 20th floor, Auditorium 2|
When people talk to each other, their language use is deeply intertwined with the physical environment they share. For instance, objects in the shared environment can be easier to refer to, and a speaker must constantly track the hearer's field of vision. On the other hand, speakers can deliberately optimize the communicative situation by asking hearers to move or look somewhere.
If a computer system is to engage in natural and effective natural-language dialogue with a human user in a shared environment, it must be able to take these things into account as well. I will sketch some of the problems that arise in such scenarios. These are challenges specific to navigation systems and talking robots, which a system that generates static, non-interactive text (e.g. newspaper text) does not face.
In my own research, I have studied these challenges in virtual 3D environments rather than the real world, in order to focus on issues of language processing. I will talk about some of this work, including the use of virtual environments over the Internet for system evaluation, the use of AI planning techniques for computing useful utterances, and the use of eye-tracking methods for tracking, in real time, whether these utterances were successful.
Contact: Dr.-Ing. Klaus-Peter Engelbrecht