Personalising speech-to-speech translation in the EMIME project
| Content Provider | CiteSeerX |
|---|---|
| Author | Kurimo, Mikko; Byrne, William; Dines, John; Garner, Philip N.; Gibson, Matthew; Guan, Yong; Hirsimäki, Teemu; Karhila, Reima; King, Simon; Liang, Hui; Oura, Keiichiro; Saheer, Lakshmi; Shannon, Matt; Shiota, Sayaka; Tian, Jilei; Tokuda, Keiichi; Wester, Mirjam |
| Abstract | In the EMIME project we have studied unsupervised cross-lingual speaker adaptation. We have employed an HMM statistical framework for both speech recognition and synthesis, which provides transformation mechanisms to adapt the synthesized voice in TTS (text-to-speech) using the recognized voice in ASR (automatic speech recognition). An important application of this research is personalised speech-to-speech translation, which uses the voice of the speaker in the input language to utter the translated sentences in the output language. In mobile environments this enhances the users' interaction across language barriers by making the output speech sound more like the original speaker's way of speaking, even if he or she cannot speak the output language. |
| File Format | |
| Access Restriction | Open |
| Subject Keyword | Speech-to-speech Translation; EMIME Project; Output Language; Speech Recognition; Synthesized Voice; Output Speech; Translated Sentence; Unsupervised Cross-lingual Speaker Adaptation; Transformation Mechanism; Mobile Environment; Input Language; Language Barrier; Original Speaker; HMM Statistical Framework; Automatic Speech Recognition |
| Content Type | Text |
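
The abstract only sketches the adaptation idea: a transform estimated from the speaker's recognized speech in the input language is applied to the output-language TTS model so the synthesized voice resembles the original speaker. As a purely illustrative toy (not the EMIME implementation, and not any specific toolkit's API), the sketch below stands in for MLLR-style mean adaptation with a least-squares affine fit; every function name, array shape, and data value is hypothetical.

```python
import numpy as np

# Toy stand-in for linear-transform (MLLR-style) speaker adaptation.
# All names and shapes are invented for illustration only.

def estimate_affine_transform(si_means, speaker_means):
    """Estimate (A, b) such that speaker_means ~ si_means @ A.T + b,
    via least squares over the extended means [mu; 1]."""
    d = si_means.shape[1]
    ext = np.hstack([si_means, np.ones((si_means.shape[0], 1))])  # (N, d+1)
    W, *_ = np.linalg.lstsq(ext, speaker_means, rcond=None)       # (d+1, d)
    return W[:d].T, W[d]                                          # A (d, d), b (d,)

def adapt_means(tts_means, A, b):
    """Apply the transform estimated on the ASR side to the TTS model means,
    shifting the output-language voice toward the input-language speaker."""
    return tts_means @ A.T + b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    si_asr_means = rng.normal(size=(200, 13))            # speaker-independent ASR means
    true_A = np.eye(13) + 0.1 * rng.normal(size=(13, 13))
    true_b = rng.normal(size=13)
    spk_asr_means = si_asr_means @ true_A.T + true_b      # stats from the recognized voice

    A, b = estimate_affine_transform(si_asr_means, spk_asr_means)

    si_tts_means = rng.normal(size=(300, 13))             # output-language TTS means
    adapted_tts_means = adapt_means(si_tts_means, A, b)
    print(adapted_tts_means.shape)                        # (300, 13)
```

The point of the sketch is only that the same affine transform estimated from the recognized voice can be reused on the synthesis model's Gaussian means, which is the cross-lingual link the abstract refers to.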