Tangible objects for the acquisition of multimodal interaction patterns.
| Content Provider | CiteSeerX |
|---|---|
| Author | Taib, Ronnie; Ruiz, Natalie |
| Abstract | Multimodal user interfaces offer more intuitive interaction for end-users; however, usually only through predefined input schemes. This paper describes a user experiment in multimodal interaction pattern identification, using head gesture and speech inputs for 3D graph manipulation. We show that a direct mapping between head gestures and the 3D object predominates; however, even for such a simple task, inputs vary greatly between users and do not exhibit any clustering pattern. Also, in spite of the high degree of expressiveness of linguistic modalities, speech commands in particular tend to use a limited vocabulary. We observed a common set of verb and adverb compounds in a majority of users. In conclusion, we recommend that multimodal user interfaces be individually customisable or adaptive to users' interaction preferences. |
| Access Restriction | Open |
| Subject Keyword | Tangible Object, Multimodal Interaction Pattern, Head Gesture, Intuitive Interaction, Clustering Pattern, Adverb Compound, Simple Task Input, High Degree, User Experiment, Speech Input, Linguistic Modality, Limited Vocabulary, User Interface, Common Set, Speech Command, User Interaction Preference, Multimodal User Interface, Direct Mapping, Graph Manipulation, Multimodal Interaction Pattern Identification |
| Content Type | Text |
| Resource Type | Article |