Title of Invention METHOD AND SYSTEM FOR ALIGNING NATURAL AND SYNTHETIC VIDEO TO SPEECH SYNTHESIS
Abstract According to MPEG-4's TTS architecture, facial animation can be driven by two streams simultaneously: text and Facial Animation Parameters. In this architecture, text input is sent to a Text-To-Speech converter at a decoder that drives the mouth shapes of the face. Facial Animation Parameters are sent from an encoder to the face over the communication channel. The present invention includes codes (known as bookmarks) in the text string transmitted to the Text-to-Speech converter; these bookmarks are placed between words as well as inside them. According to the present invention, the bookmarks carry an encoder time stamp. Due to the nature of text-to-speech conversion, the encoder time stamp does not relate to real-world time and should be interpreted as a counter. In addition, the Facial Animation Parameter stream carries the same encoder time stamp found in the bookmark of the text. The system of the present invention reads the bookmark and provides the encoder time stamp as well as a real-time time stamp to the facial animation system. Finally, the facial animation system associates the correct facial animation parameter with the real-time time stamp, using the encoder time stamp of the bookmark as a reference.
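The abstract describes a synchronization scheme: bookmarks carrying an encoder time stamp (ETS) are embedded in the TTS text, the same ETS values appear in the Facial Animation Parameter (FAP) stream, and the decoder maps each ETS to a real-time time stamp (RTS) when the corresponding bookmark is reached during synthesis. The Python sketch below is one possible illustration of that idea, not the patented implementation: the bookmark escape syntax `\bookmark{<ets>}`, the class and function names, and the FAP dictionary are all hypothetical.

```python
import re
import time

# Hypothetical bookmark syntax "\bookmark{<ets>}" embedded in the TTS text.
# The ETS is a counter, not a real-world time, per the abstract.
BOOKMARK_RE = re.compile(r"\\bookmark\{(\d+)\}")


def strip_bookmarks(text):
    """Extract (ets, character offset) pairs and return clean text for the TTS converter."""
    marks = []
    clean_parts = []
    cursor = 0
    offset = 0
    for m in BOOKMARK_RE.finditer(text):
        clean_parts.append(text[cursor:m.start()])
        offset += m.start() - cursor
        marks.append((int(m.group(1)), offset))  # ETS and its position in the clean text
        cursor = m.end()
    clean_parts.append(text[cursor:])
    return "".join(clean_parts), marks


class FacialAnimationSync:
    """Maps encoder time stamps (ETS) to real-time time stamps (RTS) so that
    FAPs tagged with the same ETS can be scheduled against real time."""

    def __init__(self):
        self.ets_to_rts = {}

    def on_bookmark_reached(self, ets):
        # Called when the TTS converter reaches a bookmark during synthesis;
        # record the real-time time stamp for that counter value.
        self.ets_to_rts[ets] = time.monotonic()

    def schedule_fap(self, ets, fap):
        # Associate an incoming FAP (carrying the same ETS) with real time.
        rts = self.ets_to_rts.get(ets)
        if rts is None:
            return None  # bookmark not yet reached; caller may buffer the FAP
        return (rts, fap)


if __name__ == "__main__":
    text = r"Hello \bookmark{1} world \bookmark{2}!"
    clean, marks = strip_bookmarks(text)
    sync = FacialAnimationSync()
    for ets, _pos in marks:
        sync.on_bookmark_reached(ets)
    print(clean)                                  # text passed on to the TTS converter
    print(sync.schedule_fap(2, {"viseme": 7}))    # FAP paired with its real-time stamp
```

In this reading, the ETS serves purely as a shared reference between the two streams; only the decoder, which observes when each bookmark is actually spoken, can translate it into wall-clock time.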
Publication Number US2008059194(A1) Publication Date 2008.03.06
Application Number US20070931093 Filing Date 2007.10.31
Applicant AT&T CORP. Inventors BASSO ANDREA; BEUTNAGEL MARK C.; OSTERMANN JOERN
Classification G10L13/00; G10L19/00 Main Classification G10L13/00
Agency Agent
Main Claim
Address