Incorporating Interpersonal Synchronization Features for Automatic Emotion Recognition from Visual and Audio Data during Communication
| Field | Value |
|---|---|
| Content Provider | MDPI |
| Author | Quan, Jingyu; Miyake, Yoshihiro; Nozawa, Takayuki |
| Copyright Year | 2021 |
| Description | During social interaction, humans recognize others’ emotions via individual features and interpersonal features. However, most previous automatic emotion recognition techniques only used individual features—they have not tested the importance of interpersonal features. In the present study, we asked whether interpersonal features, especially time-lagged synchronization features, are beneficial to the performance of automatic emotion recognition techniques. We explored this question in the main experiment (speaker-dependent emotion recognition) and supplementary experiment (speaker-independent emotion recognition) by building an individual framework and interpersonal framework in visual, audio, and cross-modality, respectively. Our main experiment results showed that the interpersonal framework outperformed the individual framework in every modality. Our supplementary experiment showed—even for unknown communication pairs—that the interpersonal framework led to a better performance. Therefore, we concluded that interpersonal features are useful to boost the performance of automatic emotion recognition tasks. We hope to raise attention to interpersonal features in this study. |
| Starting Page | 5317 |
| e-ISSN | 1424-8220 |
| DOI | 10.3390/s21165317 |
| Journal | Sensors |
| Issue Number | 16 |
| Volume Number | 21 |
| Language | English |
| Publisher | MDPI |
| Publisher Date | 2021-08-06 |
| Access Restriction | Open |
| Subject Keyword | Sensors; Information and Library Science; Affective Computing; Classification; Communication; Deep Neural Networks; Emotion Recognition; Interpersonal Features; Multimodal |
| Content Type | Text |
| Resource Type | Article |
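
The abstract refers to "time-lagged synchronization features" computed between interaction partners. The paper's exact feature definition is not reproduced in this record, so the following is only a minimal illustrative sketch, assuming such a feature can be approximated by lagged Pearson correlations between two participants' per-frame feature series; the function name `lagged_sync_features` and the smile-intensity example data are hypothetical.

```python
import numpy as np

def lagged_sync_features(x, y, max_lag=10):
    """Illustrative sketch (not the paper's method): for two participants'
    per-frame feature series x and y, compute the Pearson correlation
    between x and y shifted by each lag in [-max_lag, max_lag]."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    feats = []
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            a, b = x[:lag], y[-lag:]
        elif lag > 0:
            a, b = x[lag:], y[:-lag]
        else:
            a, b = x, y
        # Guard against constant segments, where correlation is undefined.
        if len(a) < 2 or a.std() == 0 or b.std() == 0:
            feats.append(0.0)
        else:
            feats.append(float(np.corrcoef(a, b)[0, 1]))
    return np.array(feats)  # length 2 * max_lag + 1

# Hypothetical usage: synchronization profile between two speakers' feature tracks,
# where speaker_b is a delayed, noisy copy of speaker_a.
rng = np.random.default_rng(0)
speaker_a = rng.standard_normal(200)
speaker_b = np.roll(speaker_a, 3) + 0.1 * rng.standard_normal(200)
profile = lagged_sync_features(speaker_a, speaker_b, max_lag=5)
print(profile.round(2))
```

Such a lag-indexed profile could be concatenated with each speaker's individual features before classification, which is the general idea the abstract describes; the paper itself should be consulted for the actual synchronization measure and model architecture.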