Title (dc.title) | A 3D face animation system for mobile devices |
Publication Type (dc.type) | Article |
Author(s) (dc.contributor.author) | MENDİ, Engin |
DOI (dc.identifier.doi) | 10.3233/IFS-120690 |
Citation Index (dc.source.database) | WoS |
Publisher (dc.publisher) | IOS PRESS |
Publication Date (dc.date.issued) | 2014 |
Date Accessioned (dc.date.accessioned) | 2020-08-07T14:19:08Z |
Date Available (dc.date.available) | 2020-08-07T14:19:08Z |
ISSN (dc.identifier.issn) | 1064-1246 |
Abstract (dc.description.abstract) | In this paper, we present a 3D face animation system rendered on mobile devices. The system automatically creates realistic facial animation from text input with emotion tags. First, an input string is converted into synthetic voice and phonetic information. Then, a 3D head model performs facial movements synchronized with the speech. The proposed system offers an affordable, fast solution for applications that require virtual actors to speak text, from which human-machine interfaces on mobile devices can benefit. |
Language (dc.language.iso) | en |
URI (dc.identifier.uri) | http://hdl.handle.net/20.500.12498/4800 |
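The pipeline described in the abstract (text with an emotion tag → phoneme timings from speech synthesis → mouth shapes synchronized to the audio) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the tag syntax, phoneme set, viseme names, and timing values are all assumptions made for the example; in the actual system the phonetic information would come from the speech synthesizer.

```python
# Hypothetical sketch of a text-to-facial-animation pipeline:
# text with an emotion tag -> timed phonemes -> viseme keyframes.
# The mapping table, tag format, and timings below are illustrative only.
import re
from dataclasses import dataclass

# A tiny phoneme-to-viseme table (assumed, not the paper's mapping).
PHONEME_TO_VISEME = {
    "HH": "aspirate", "AH": "open", "L": "tongue_up", "OW": "round",
}

@dataclass
class Keyframe:
    time: float      # seconds from utterance start
    viseme: str      # target mouth shape for the head model
    emotion: str     # emotion tag applied across the utterance

def parse_input(text: str):
    """Split an input like '<happy>hello' into (emotion, plain text)."""
    m = re.match(r"<(\w+)>(.*)", text)
    return (m.group(1), m.group(2)) if m else ("neutral", text)

def build_keyframes(emotion: str, phoneme_timings):
    """Turn (phoneme, duration) pairs into timed viseme keyframes."""
    t, frames = 0.0, []
    for phoneme, duration in phoneme_timings:
        viseme = PHONEME_TO_VISEME.get(phoneme, "rest")
        frames.append(Keyframe(time=t, viseme=viseme, emotion=emotion))
        t += duration
    return frames

emotion, text = parse_input("<happy>hello")
# In the real system these timings would be produced by the TTS engine.
timings = [("HH", 0.08), ("AH", 0.12), ("L", 0.10), ("OW", 0.15)]
frames = build_keyframes(emotion, timings)
```

The renderer would then interpolate the head model's blend shapes between successive keyframes while the synthesized audio plays, keeping the lips synchronized with the speech.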