Similar Documents
RECURRENT CONDITIONAL RANDOM FIELD FOR LANGUAGE UNDERSTANDING
| Content Provider | CiteSeerX |
|---|---|
| Author | Yao, Kaisheng; Peng, Baolin; Zweig, Geoffrey; Yu, Dong; Li, Xiaolong; Gao, Feng |
| Abstract | Recurrent neural networks (RNNs) have recently produced record-setting performance in language modeling and word-labeling tasks. In the word-labeling task, the RNN is used analogously to the more traditional conditional random field (CRF) to assign a label to each word in an input sequence, and has been shown to significantly outperform CRFs. In contrast to CRFs, RNNs operate in an online fashion to assign labels as soon as a word is seen, rather than after seeing the whole word sequence. In this paper, we show that the performance of an RNN tagger can be significantly improved by incorporating elements of the CRF model; specifically, the explicit modeling of output-label dependencies with transition features, its global sequence-level objective function, and offline decoding. We term the resulting model a "recurrent conditional random field" and demonstrate its effectiveness on the ATIS travel domain dataset and a variety of web-search language understanding datasets. Index Terms: Conditional random fields, recurrent neural networks |
| File Format | |
| Access Restriction | Open |
| Subject Keyword | Word-labeling Task; Global Sequence-level Objective Function; Output-label Dependency; RNNs; Web-search Language Understanding Datasets; Traditional Conditional Random Field; CRF Model; Outperform CRFs; Recurrent Conditional Random Field; Explicit Modeling; Input Sequence; ATIS Travel Domain Dataset; Transition Feature; Offline Decoding; Recurrent Neural Networks; RNN Tagger; Online Fashion; Conditional Random Field; Recurrent Neural Network; Language Modeling; Whole Word Sequence |
| Content Type | Text |
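
The abstract describes combining an RNN's per-word label scores with CRF-style transition scores between consecutive labels, then decoding the whole sequence offline. The sketch below illustrates only that decoding step under assumed toy settings; it is not the paper's implementation. The dimensions, the random stand-in weights, and the `viterbi` helper are all hypothetical, and in a trained recurrent CRF the emission scores would come from the RNN's hidden states rather than random values.

```python
import numpy as np

# Hypothetical toy dimensions; the paper's ATIS setup is far larger.
num_labels = 4   # number of slot labels
hidden_dim = 8   # RNN hidden-state size
seq_len = 5      # length of the input word sequence

rng = np.random.default_rng(0)

# Stand-ins for a trained Elman-style RNN: per-word hidden states and an
# output projection producing per-label emission scores for each word.
hidden_states = rng.normal(size=(seq_len, hidden_dim))
W_out = rng.normal(size=(hidden_dim, num_labels))
emissions = hidden_states @ W_out            # shape: (seq_len, num_labels)

# CRF-style transition scores between consecutive output labels
# (the "explicit modeling of output-label dependencies" in the abstract).
transitions = rng.normal(size=(num_labels, num_labels))

def viterbi(emissions, transitions):
    """Offline decoding: pick the label sequence maximizing the
    sequence score sum_t [emission(t, y_t) + transition(y_{t-1}, y_t)]."""
    T, L = emissions.shape
    score = emissions[0].copy()              # best score ending in each label at t = 0
    backptr = np.zeros((T, L), dtype=int)
    for t in range(1, T):
        # cand[i, j] = score of reaching label j at time t via label i at time t-1
        cand = score[:, None] + transitions + emissions[t][None, :]
        backptr[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    # Trace back the best path from the final position.
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return list(reversed(path))

print(viterbi(emissions, transitions))       # best label index per word
```

Decoding the whole sequence at once is what distinguishes this from the purely online RNN tagger in the abstract: label choices at later positions can influence earlier ones through the shared transition scores. The paper's global sequence-level training objective would involve the same transition matrix, but that training step is not shown here.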