Abstract
<p>The method permits the animation of a synthesised model of a human face in relation to an audio signal. The method is language-independent and yields a very natural animated synthetic model, being based on the simultaneous analysis of voice and facial movements, tracked on real speakers, and on the extraction of suitable visemes. The subsequent animation consists of transforming the sequence of visemes corresponding to the phonemes of the driving text into the sequence of movements applied to the model of the human face.</p>
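The phoneme-to-viseme transformation described above can be sketched as a simple lookup step. This is a minimal illustration, not the patented method: the phoneme symbols, viseme labels, and the mapping itself are hypothetical placeholders, since the abstract does not specify them.

```python
# Hypothetical many-to-one mapping from phonemes to visemes:
# several phonemes can share the same visible mouth shape.
# All symbols below are illustrative, not from the source.
PHONEME_TO_VISEME = {
    "p": "bilabial_closed", "b": "bilabial_closed", "m": "bilabial_closed",
    "f": "labiodental", "v": "labiodental",
    "a": "open_wide", "e": "mid_open", "i": "spread",
    "o": "rounded", "u": "rounded",
}

def phonemes_to_visemes(phonemes):
    """Map a phoneme sequence to the corresponding viseme sequence,
    falling back to a neutral mouth shape for unknown phonemes."""
    return [PHONEME_TO_VISEME.get(p, "neutral") for p in phonemes]

if __name__ == "__main__":
    # Each viseme would then drive one facial movement of the model.
    print(phonemes_to_visemes(["m", "a", "p", "u"]))
```

In a full system, each viseme in the resulting sequence would index a facial movement (learned from the tracked real speakers) that is applied to the synthetic face model in sync with the audio.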