Semantic Web and Zero-Shot Learning of Large Scale Visual Classes
| Content Provider | Semantic Scholar |
|---|---|
| Author | Hascoet, Tristan; Ariki, Yasuo; Takiguchi, Tetsuya |
| Copyright Year | 2017 |
| Abstract | Zero-shot learning (ZSL) refers to the task of learning a model capable of classifying images into classes for which no sample is available as training data. This can be achieved by leveraging semantic features of the visual classes as an intermediate level of representation shared by both training classes (for which labeled images are provided as training data) and test classes (for which no image is available for training). Following the success of deep learning models in the traditional task of image classification, ZSL has recently attracted a lot of attention from the computer vision community as it holds the promise of scaling up the classification capacity of traditional image classifiers while easing the data collection process. While several models have recently been introduced for ZSL, arguably little attention has been given to the design of the visual class semantic features. In this paper, we propose to leverage the interlinking of knowledge bases published as Linked Open Data to provide different semantic feature representations of visual classes in a large-scale setting. Using a simple ZSL architecture, we compare the efficiency of the semantic features we extracted and find that some of them outperform the standard word embedding representations by a significant margin. |
| File Format | PDF, HTML |
| Alternate Webpage(s) | http://www.me.cs.scitec.kobe-u.ac.jp/~takigu/pdf/2017/SNL-2017_paper_15.pdf |
| Language | English |
| Access Restriction | Open |
| Content Type | Text |
| Resource Type | Article |
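
The abstract describes using class semantic features as an intermediate representation shared between training (seen) and test (unseen) classes. This page does not reproduce the paper's "simple ZSL architecture", so the following is only a minimal sketch of one common formulation of that idea: a ridge-regression map from visual features into the semantic space, with nearest-neighbor readout over unseen-class semantic vectors. All names, dimensions, and data below are hypothetical placeholders, not the authors' model or data.

```python
import numpy as np

# Toy dimensions; a real setup might use e.g. 2048-d CNN features and
# 300-d word embeddings or knowledge-base-derived class vectors.
D_VISUAL, D_SEMANTIC = 64, 16
N_TRAIN, N_SEEN, N_UNSEEN = 500, 10, 5

rng = np.random.default_rng(0)

# Stand-ins for real data: semantic vectors for seen (training) and
# unseen (test) classes, plus visual features of labeled training images.
seen_semantics = rng.normal(size=(N_SEEN, D_SEMANTIC))
unseen_semantics = rng.normal(size=(N_UNSEEN, D_SEMANTIC))
train_labels = rng.integers(0, N_SEEN, size=N_TRAIN)
X_train = rng.normal(size=(N_TRAIN, D_VISUAL))
Y_train = seen_semantics[train_labels]  # regression targets per image

# Learn a linear map W: visual space -> semantic space via ridge
# regression, closed form: W = (X^T X + lam * I)^-1 X^T Y.
lam = 1.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(D_VISUAL),
                    X_train.T @ Y_train)

def classify_zero_shot(x, class_semantics):
    """Project visual feature x into semantic space and return the index
    of the most cosine-similar class semantic vector."""
    z = x @ W
    sims = class_semantics @ z / (
        np.linalg.norm(class_semantics, axis=1) * np.linalg.norm(z) + 1e-12)
    return int(np.argmax(sims))

# Test time: classify an image into one of the *unseen* classes, for
# which no training image exists -- only a semantic feature vector.
x_test = rng.normal(size=D_VISUAL)
print("predicted unseen class:", classify_zero_shot(x_test, unseen_semantics))
```

In the paper's setting, the class semantic vectors would come from standard word embeddings or from the Linked Open Data representations the authors extract; swapping different matrices into `seen_semantics` and `unseen_semantics` while holding the architecture fixed is the comparison axis the abstract describes.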