Abstract
<p>A method for eliminating audio–video synchronisation errors using speech recognition. Separate visual and audio speech recognition techniques are applied: the first identifies (110) visemes, visual cues indicative of articulatory type, in the video content; the second identifies (120) phones and their articulatory types in the audio content. The two outputs are then compared (130) to determine their relative alignment and, if they are not aligned, a synchronisation algorithm time-adjusts one or both of the audio and visual streams to achieve synchronisation. Facial features, such as mouth movements, provide the visual cues in the video content.</p>
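The comparison and time-adjustment steps described above can be sketched in a minimal form. The event data, articulatory class names, and the median-difference matching below are illustrative assumptions, not the patented method itself: real recognisers would emit far richer output, and the alignment algorithm in the claims is not specified here.

```python
from statistics import median

# Hypothetical recogniser outputs: (articulatory class, start time in seconds).
# visual_events come from viseme recognition on the video stream,
# audio_events from phone recognition on the audio stream.
visual_events = [("bilabial", 0.50), ("open", 1.10), ("rounded", 1.90)]
audio_events = [("bilabial", 0.62), ("open", 1.22), ("rounded", 2.02)]


def estimate_offset(visual, audio):
    """Estimate the audio-vs-video lag as the median time difference
    between events of the same articulatory type, matched in order."""
    diffs = []
    used = set()
    for v_type, v_time in visual:
        for i, (a_type, a_time) in enumerate(audio):
            if i not in used and a_type == v_type:
                diffs.append(a_time - v_time)
                used.add(i)
                break
    return median(diffs) if diffs else 0.0


offset = estimate_offset(visual_events, audio_events)

# Time-adjust the audio stream so its events realign with the video.
aligned_audio = [(a_type, a_time - offset) for a_type, a_time in audio_events]
```

Here the audio lags the video by a constant amount, so shifting one stream by the median difference restores synchronisation; a production system would apply the shift to the media timestamps rather than to recogniser events.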