Multimodal Expressive Embodied Conversational Agents [ Multimodal expressive ECAs ]
| Field | Value |
|---|---|
| Content Provider | Semantic Scholar |
| Author | Pelachaud, Catherine |
| Copyright Year | 2007 |
| Abstract | In this paper we present our work toward the creation of a multimodal expressive Embodied Conversational Agent (ECA). Our agent, called Greta, exhibits nonverbal behaviors synchronized with speech. We use the taxonomy of communicative functions developed by Isabella Poggi [22] to specify the behavior of the agent. Based on this taxonomy, a representation language, the Affective Presentation Markup Language (APML), has been defined to drive the animation of the agent [4]. Lately, we have been working on creating not a generic agent but an agent with individual characteristics. We have concentrated on the behavior specification for an individual agent. In particular, we have defined a set of parameters to change the expressivity of the agent's behaviors. Six parameters have been defined and implemented to encode gesture and face expressivity. We have performed perceptual studies of our expressivity model. |
| File Format | PDF / HTML |
| Alternate Webpage(s) | http://www.researchgate.net/profile/Catherine_Pelachaud/publication/221572151_Multimodal_expressive_embodied_conversational_agents/links/0deec51794b399cb9c000000.pdf |
| Alternate Webpage(s) | http://web.cs.wpi.edu/~rich/courses/cs525u/readings/Pelachaud2005.pdf |
| Language | English |
| Access Restriction | Open |
| Content Type | Text |
| Resource Type | Article |
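
The abstract describes two technical pieces: an APML markup document that drives the agent's animation, and six parameters that encode gesture and face expressivity. As a hedged illustration only, the Python sketch below models a six-parameter expressivity profile and emits a minimal APML-style fragment. The parameter names follow those commonly reported for the Greta agent in related publications (overall activation, spatial extent, temporal extent, fluidity, power, repetition); they and the XML tag and attribute names are assumptions for illustration, not quoted from this paper or the published APML DTD.

```python
from dataclasses import dataclass

@dataclass
class Expressivity:
    """Hedged sketch of a six-parameter expressivity profile.

    Parameter names are those commonly cited for the Greta agent in
    related work; values are normalized to [-1, 1], with 0 as neutral,
    following the convention in that literature (an assumption here).
    """
    overall_activation: float = 0.0  # overall quantity of movement
    spatial_extent: float = 0.0      # amplitude of gestures in space
    temporal_extent: float = 0.0     # duration/speed of gesture strokes
    fluidity: float = 0.0            # smoothness and continuity of motion
    power: float = 0.0               # dynamic strength of the movement
    repetition: float = 0.0          # tendency to repeat gesture strokes

def apml_fragment(text: str, performative: str = "greet") -> str:
    """Build a minimal APML-style fragment.

    The <apml> and <performative> tag names are illustrative
    assumptions, not the paper's actual markup schema.
    """
    return (
        "<apml>\n"
        f'  <performative type="{performative}">{text}</performative>\n'
        "</apml>"
    )

if __name__ == "__main__":
    # An expansive, forceful behavior profile for an individual agent.
    profile = Expressivity(spatial_extent=0.8, power=0.5, fluidity=-0.3)
    print(profile)
    print(apml_fragment("Hello, I am Greta."))
```

A profile object like this would plausibly sit between the communicative-function layer (the APML input) and the animation engine, scaling and shaping each behavior rather than selecting which behaviors occur; that separation is the design the abstract implies, though the actual interface in the Greta system is not specified here.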