Skeleton-Based Emotion Recognition Based on Two-Stream Self-Attention Enhanced Spatial-Temporal Graph Convolutional Network
| Content Provider | MDPI |
|---|---|
| Author | Shi, Jiaqi; Liu, Chaoran; Ishi, Carlos Toshinori; Ishiguro, Hiroshi |
| Copyright Year | 2020 |
| Description | Emotion recognition has drawn consistent attention from researchers recently. Although the gesture modality plays an important role in expressing emotion, it is seldom considered in the field of emotion recognition. A key reason is the scarcity of labeled data containing 3D skeleton data. Some studies in action recognition have applied graph-based neural networks to explicitly model the spatial connections between joints. However, this approach has not yet been considered in gesture-based emotion recognition. In this work, we apply a pose-estimation-based method to extract 3D skeleton coordinates for the IEMOCAP database. We propose a self-attention enhanced spatial-temporal graph convolutional network for skeleton-based emotion recognition, in which the spatial convolutional part models the skeletal structure of the body as a static graph, while the self-attention part dynamically constructs additional connections between joints and provides supplementary information (an illustrative sketch of this idea follows the record below). Our experiments demonstrate that the proposed model significantly outperforms other models and that features extracted from the skeleton data improve the performance of multimodal emotion recognition. |
| Starting Page | 205 |
| e-ISSN | 1424-8220 |
| DOI | 10.3390/s21010205 |
| Journal | Sensors |
| Issue Number | 1 |
| Volume Number | 21 |
| Language | English |
| Publisher | MDPI |
| Publisher Date | 2020-12-30 |
| Access Restriction | Open |
| Subject Keyword | Sensors; Industrial Engineering; Emotion Recognition; Gesture; Skeleton; Graph Convolutional Networks; Self-attention |
| Content Type | Text |
| Resource Type | Article |
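
The abstract describes a spatial graph convolution over the static skeleton graph combined with a self-attention branch that learns extra, data-dependent joint-to-joint connections. The snippet below is a minimal, hypothetical sketch of that combination, not the authors' released code; the class name `AttentionEnhancedSpatialGCN`, the identity adjacency stand-in, and the time-pooled attention are illustrative assumptions.

```python
# Hypothetical sketch: static-graph spatial GCN branch + self-attention branch,
# roughly following the idea described in the abstract. Not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionEnhancedSpatialGCN(nn.Module):
    """Spatial GCN over a fixed skeleton adjacency plus a self-attention branch."""

    def __init__(self, in_channels: int, out_channels: int, adjacency: torch.Tensor):
        super().__init__()
        # Static (normalized) skeleton adjacency, shape (V, V).
        self.register_buffer("A", adjacency)
        # 1x1 convolution applied per joint before graph aggregation.
        self.gcn_proj = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        # Projections for the self-attention branch.
        self.q_proj = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.k_proj = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.v_proj = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, T, V) = (batch, channels, frames, joints)
        # --- static-graph branch: aggregate features along skeleton edges ---
        h = self.gcn_proj(x)                        # (N, C_out, T, V)
        h = torch.einsum("nctv,vw->nctw", h, self.A)

        # --- self-attention branch: learned joint-to-joint connections ---
        q = self.q_proj(x).mean(dim=2)              # (N, C_out, V), pooled over time
        k = self.k_proj(x).mean(dim=2)
        attn = torch.einsum("ncv,ncw->nvw", q, k) / (q.shape[1] ** 0.5)
        attn = F.softmax(attn, dim=-1)              # (N, V, V) dynamic adjacency
        v = self.v_proj(x)                          # (N, C_out, T, V)
        s = torch.einsum("nctv,nvw->nctw", v, attn)

        # Sum the two branches: static skeleton structure + dynamic links.
        return F.relu(h + s)


if __name__ == "__main__":
    V = 25                                          # e.g. 25 body joints
    A = torch.eye(V)                                # identity as a stand-in adjacency
    layer = AttentionEnhancedSpatialGCN(3, 64, A)
    x = torch.randn(2, 3, 100, V)                   # 2 clips, xyz coords, 100 frames
    print(layer(x).shape)                           # torch.Size([2, 64, 100, 25])
```

In a full spatial-temporal model, a layer like this would typically be followed by a temporal convolution over the frame axis and stacked several times before classification; those details are omitted here.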