FRAMEWORK DEVELOPMENT OF REAL-TIME LIP SYNC ANIMATION ON VISEME-BASED HUMAN SPEECH
DOI: https://doi.org/10.11113/jt.v75.5065
Keywords: Lip synchronization animation, real time, human speech recognition
Abstract
Real-time lip sync animation makes a virtual, computer-generated character talk by synchronizing accurate lip movements with sound as it is produced live. Based on the literature review, creating lip sync animation in real time is particularly challenging because the lip movements must be mapped to the corresponding sounds and kept in synchrony. The fluidity and accuracy of natural speech are among the most difficult qualities to reproduce convincingly in facial animation; because people focus on faces, audiences are highly sensitive to any mismatch. In real-time applications especially, the visual impact must be immediate, commanding and convincing to the audience. A study on viseme-based human speech was therefore conducted to develop a lip synchronization platform that achieves accurate lip motion synchronized with sound and improves the visual performance of the facial animation. Through this research, a usable automated digital speech system for lip sync animation was developed. It is designed around simple synchronization techniques that generally improve accuracy and the realism of the visual impression, together with advanced features implemented in the lip synchronization application. The system supports lip sync simulation in both real-time and offline applications, so it can be applied in areas such as entertainment, education, tutoring, animation, and live performances such as theater, broadcasting and live presentation.
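To make the viseme-based approach concrete, the sketch below illustrates the general idea of mapping a timed phoneme stream (such as the output of a speech recognizer) onto mouth shapes drawn from the classic Preston Blair set, with viseme changes snapped to animation frames. This is a minimal illustration under assumed names; the mapping table, the `TimedPhoneme` type and the `viseme_track` function are hypothetical and not the paper's actual implementation.

```python
# Minimal sketch of viseme-based lip sync (illustrative assumptions only).
# A timed phoneme stream is mapped to Preston Blair-style mouth shapes,
# then resampled to one viseme per animation frame.

from dataclasses import dataclass

# Hypothetical phoneme-to-viseme table loosely based on Preston Blair's
# mouth shapes; real systems use a fuller, recognizer-specific table.
PHONEME_TO_VISEME = {
    "AA": "A", "AE": "A", "AH": "A",
    "IY": "E", "EH": "E",
    "OW": "O", "AO": "O",
    "UW": "U",
    "M": "MBP", "B": "MBP", "P": "MBP",
    "F": "FV", "V": "FV",
    "L": "L",
    "W": "WQ",
    "SIL": "rest",  # silence -> closed/rest mouth
}

@dataclass
class TimedPhoneme:
    phoneme: str   # recognizer output symbol
    start: float   # seconds
    end: float     # seconds

def viseme_track(phonemes, fps=24):
    """Resample a timed phoneme stream into one viseme per animation frame."""
    if not phonemes:
        return []
    total = phonemes[-1].end
    frames = []
    for i in range(int(total * fps) + 1):
        t = i / fps
        current = "rest"  # default when no phoneme covers this instant
        for p in phonemes:
            if p.start <= t < p.end:
                # Unknown phonemes fall back to the rest shape.
                current = PHONEME_TO_VISEME.get(p.phoneme, "rest")
                break
        frames.append(current)
    return frames

if __name__ == "__main__":
    stream = [TimedPhoneme("HH", 0.00, 0.08),
              TimedPhoneme("AH", 0.08, 0.20),
              TimedPhoneme("L",  0.20, 0.28),
              TimedPhoneme("OW", 0.28, 0.45)]  # the word "hello"
    print(viseme_track(stream, fps=24))
```

Snapping viseme changes to frame boundaries is one example of the simple synchronization tricks the abstract mentions; a production system would additionally blend between mouth shapes over a few frames to avoid popping.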