Autosoft Journal

Online Manuscript Access


SignsWorld Facial Expression Recognition System (FERS)


Authors



Abstract

Live facial expression recognition is an active and essential research area in human-computer interaction (HCI) and automatic sign language recognition (ASLR). This paper presents a fully automatic facial expression and direction-of-sight recognition system, which we call the SignsWorld Facial Expression Recognition System (FERS). SignsWorld FERS is divided into three main components: face detection that is robust to occlusion, extraction of key facial feature points, and recognition of facial expression together with gaze direction. We present a powerful multi-detector technique to localize the key facial feature points so that the contours of facial components such as the eyes, nostrils, chin, and mouth are sampled. From the 66 extracted facial feature points, 20 geometric formulas (GFs) and 15 ratios (Rs) are calculated, and a rule-based classifier is then formed for both gaze direction and facial expression (Normal, Smiling, Sadness, or Surprise). SignsWorld FERS is person independent and achieved a facial expression recognition rate of 97%.
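As background for readers, the minimal sketch below (Python with NumPy, assuming a generic landmark detector supplies named facial points) illustrates the general pattern the abstract describes: geometric ratios are computed from extracted facial feature points and passed to a rule-based classifier. The landmark names, ratio definitions, and thresholds are illustrative assumptions only; they are not the paper's 20 GFs, 15 Rs, or actual rules.

    import numpy as np

    def euclidean(p, q):
        # Distance between two (x, y) landmark points.
        return float(np.linalg.norm(np.asarray(p, dtype=float) - np.asarray(q, dtype=float)))

    def geometric_ratios(landmarks):
        # Derive a few illustrative geometric ratios from named landmark points.
        # The names and ratios below are placeholders, not the paper's GFs/Rs.
        face_width = euclidean(landmarks["left_cheek"], landmarks["right_cheek"])
        mouth_width = euclidean(landmarks["mouth_left"], landmarks["mouth_right"])
        mouth_height = euclidean(landmarks["mouth_top"], landmarks["mouth_bottom"])
        eye_height = euclidean(landmarks["left_eye_top"], landmarks["left_eye_bottom"])
        eye_width = euclidean(landmarks["left_eye_left"], landmarks["left_eye_right"])
        return {
            "mouth_aspect": mouth_height / max(mouth_width, 1e-6),
            "mouth_to_face": mouth_width / max(face_width, 1e-6),
            "eye_aspect": eye_height / max(eye_width, 1e-6),
        }

    def classify_expression(ratios):
        # Toy rule-based decision; the thresholds are invented for illustration.
        if ratios["mouth_aspect"] > 0.60 and ratios["eye_aspect"] > 0.35:
            return "Surprise"
        if ratios["mouth_to_face"] > 0.55 and ratios["mouth_aspect"] < 0.45:
            return "Smiling"
        if ratios["mouth_aspect"] < 0.15 and ratios["eye_aspect"] < 0.20:
            return "Sadness"
        return "Normal"

In the actual SignsWorld FERS pipeline, the 66 points come from the multi-detector localization stage and the rule base also covers gaze direction; the values above exist only to make the rule-based decision logic concrete.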


Keywords


Pages

Total Pages: 23
Pages: 211-233

DOI
10.1080/10798587.2014.966456




Published

Volume: 21
Issue: 2
Year: 2014


References

Abdat, F., C. Maaoui, and A. Pruski. "Real Time Facial Feature Points Tracking with Pyramidal Lucas-Kanade Algorithm." RO-MAN 2008 - The 17th IEEE International Symposium on Robot and Human Interactive Communication (2008): n. pag. Crossref. Web. https://doi.org/10.1109/ROMAN.2008.4600645

Abdat, F., C. Maaoui, and A. Pruski. "Human-Computer Interaction Using Emotion Recognition from Facial Expression." 2011 UKSim 5th European Symposium on Computer Modeling and Simulation (2011): n. pag. Crossref. Web. https://doi.org/10.1109/EMS.2011.20

Arai, Kohei, and Ronny Mardiyanto. "Eye-Based HCI with Full Specification of Mouse and Keyboard Using Pupil Knowledge in the Gaze Estimation." 2011 Eighth International Conference on Information Technology: New Generations (2011): n. pag. Crossref. Web. https://doi.org/10.1109/ITNG.2011.81

Awad, G., Junwei Han, and A. Sutherland. "A Unified System for Segmentation and Tracking of Face and Hands in Sign Language Recognition." 18th International Conference on Pattern Recognition (ICPR'06) (2006): n. pag. Crossref. Web. https://doi.org/10.1109/ICPR.2006.194

Bartlett, M. S., Littlewort, G., Fasel, I. & Movellan, R. (2003). Real time face detection and facial expression recognition: Development and application to human computer interaction. In Proceedings of the CVPR Workshop on Computer Vision and Pattern Recognition for Human-Computer Interaction. Madison, U.S.A., IEEE Computer Society.

Bouguet, J.-Y. Pyramidal Implementation of the Lucas Kanade Feature Tracker: Description of the Algorithm. Intel Corporation, Microprocessor Research Labs.

Bradski, G., Darrell, T., Essa, I., Malik, J., Perona, P., Sclaroff, S., & Tomasi, C. OpenCV (Open Source Computer Vision Library). Retrieved 17 November 2013 from http://sourceforge.net/projects/opencvlibrary/.

Cheddad, Abbas, Dzulkifli Mohamad, and Azizah Abd Manaf. "Exploiting Voronoi Diagram Properties in Face Segmentation and Feature Extraction." Pattern Recognition 41.12 (2008): 3842-3859. Crossref. Web. https://doi.org/10.1016/j.patcog.2008.06.007

Fasel, B., and Juergen Luettin. "Automatic Facial Expression Analysis: a Survey." Pattern Recognition 36.1 (2003): 259-275. Crossref. Web. https://doi.org/10.1016/S0031-3203(02)00052-3

Gupta, Sandeep K. et al. "A Hybrid Method of Feature Extraction for Facial Expression Recognition." 2011 Seventh International Conference on Signal Image Technology & Internet-Based Systems (2011): n. pag. Crossref. Web. https://doi.org/10.1109/SITIS.2011.64

Hoiem, Derek, Alexei A. Efros, and Martial Hebert. "Recovering Occlusion Boundaries from an Image." International Journal of Computer Vision 91.3 (2010): 328-346. Crossref. Web. https://doi.org/10.1007/s11263-010-0400-4

Hough_transform, Wikipedia. Retrieved from http://en.wikipedia.org/wiki/Hough_transform.

Jia, Hongjun, and Aleix M. Martinez. "Face Recognition with Occlusions in the Training and Testing Sets." 2008 8th IEEE International Conference on Automatic Face & Gesture Recognition (2008): n. pag. Crossref. Web. https://doi.org/10.1109/AFGR.2008.4813410

Kraiss, Karl-Friedrich, ed. "Advanced Man-Machine Interaction." Signals and Communication Technology (2006): n. pag. Crossref. Web. https://doi.org/10.1007/3-540-30619-6

Visual Interfaces to Computer

Lajevardi, Seyed Mehdi, and Zahir M. Hussain. "A Novel Gabor Filter Selection Based on Spectral Difference and Minimum Error Rate for Facial Expression Recognition." 2010 International Conference on Digital Image Computing: Techniques and Applications (2010): n. pag. Crossref. Web. https://doi.org/10.1109/DICTA.2010.33

Li, P. et al. "Automatic Recognition of Smiling and Neutral Facial Expressions." 2010 International Conference on Digital Image Computing: Techniques and Applications (2010): n. pag. Crossref. Web. https://doi.org/10.1109/DICTA.2010.103

Wang, Lirong, et al. "Facial Expression Recognition Based on Local Texture Features." 2011 14th IEEE International Conference on Computational Science and Engineering (2011): n. pag. Crossref. Web. https://doi.org/10.1109/CSE.2011.96

Lu, Kun, and Xin Zhang. "Facial Expression Recognition from Image Sequences Based on Feature Points and Canonical Correlations." 2010 International Conference on Artificial Intelligence and Computational Intelligence (2010): n. pag. Crossref. Web. https://doi.org/10.1109/AICI.2010.53

Luxand FaceSDK 4.0. Luxand, Inc. Retrieved 17 June 2013 from http://www.luxand.com.

Moreira, J. L., Braun, A. & Musse, S. R. (2010). Eyes and eyebrows detection for performance driven animation.

Pantic, M, and L.J.M Rothkrantz. "Expert System for Automatic Analysis of Facial Expressions." Image and Vision Computing 18.11 (2000): 881-905. Crossref. Web. https://doi.org/10.1016/S0262-8856(00)00034-2

Pantic, M., and L.J.M. Rothkrantz. "Facial Action Recognition for Facial Expression Analysis From Static Face Images." IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics) 34.3 (2004): 1449-1461. Crossref. Web. https://doi.org/10.1109/TSMCB.2004.825931

Pantic, M., M. Tomc, and L.J.M. Rothkrantz. "A Hybrid Approach to Mouth Features Detection." 2001 IEEE International Conference on Systems, Man and Cybernetics. e-Systems and e-Man for Cybernetics in Cyberspace (Cat.No.01CH37236) n. pag. Crossref. Web. https://doi.org/10.1109/ICSMC.2001.973081

Patil, K. K., S. D. Giripunje, and P. R. Bajaj. "Facial Expression Recognition and Head Tracking in Video Using Gabor Filter." 2010 3rd International Conference on Emerging Trends in Engineering and Technology (2010): n. pag. Crossref. Web. https://doi.org/10.1109/ICETET.2010.147

Raducanu, B., Pantic, M., Rothkrantz, L. J. M. & Grana, M. (1999). Automatic eyebrow tracking using boundary chain code. In Proceedings of the Advanced School Computing Imaging Conference (pp. 137–143).

OpenCV documentation: face detection tutorial. Retrieved 21 January 2014 from http://docs.opencv.org/trunk/doc/py_tutorials/py_objdetect/py_face_detection/py_face_detection.html.

Riad, A. M. "SignsWorld: Deeping into the Silence World and Hearing Its Signs (State of the Art)." International Journal of Computer Science and Information Technology 4.1 (2012): 189-208. Crossref. Web. https://doi.org/10.5121/ijcsit.2012.4115

Shamik S. ICIP

The Arabic Dictionary of Gestures for the Deaf. Supreme Council for Family Affairs.

The Japanese Female Facial Expression (JAFFE) Database. Retrieved January 2012 from http://www.kasrl.org/jaffe.html.

Viola, P. & Jones, M. (2001). Robust real-time object detection. In 2nd International Workshop on Statistical and Computational Theories of Vision: Modeling, Learning, Computing, and Sampling. Vancouver. Tübingen, Germany. Springer.

Vogler, C. & Goldenstein, S. (2005). Analysis of facial expressions in American sign language. In Proceedings of the 3rd International Conference on Universal Access in Human–Computer Interaction (UAHCI). Las Vegas. San Diego, CA, USA.

Voronoi_diagram, Wikipedia. Retrieved 17 June 2013 from http://en.wikipedia.org/wiki/Voronoi_diagram.

Xiao, Y. & Yan, H. (10–12 December 2003). Face boundary extraction. In C. Sun, H. Talbot, S. Ourselin, & T. Adriaansen (Eds.), Proceedings of the VII Digital Image Computing: Techniques and Applications. Sydney. Data Mining Lab at Montana State University.

Zhang, Yongmian, and Qiang Ji. "Active and Dynamic Information Fusion for Facial Expression Understanding from Image Sequences." IEEE Transactions on Pattern Analysis and Machine Intelligence 27.5 (2005): 699-714. Crossref. Web. https://doi.org/10.1109/TPAMI.2005.93

Zhao, W. et al. "Face Recognition." ACM Computing Surveys 35.4 (2003): 399-458. Crossref. Web. https://doi.org/10.1145/954339.954342

Zhong, Lin, et al. "Learning Active Facial Patches for Expression Analysis." 2012 IEEE Conference on Computer Vision and Pattern Recognition (2012): n. pag. Crossref. Web. https://doi.org/10.1109/CVPR.2012.6247974

Zhu, Z., Ji, Q., Fujimura, K. & Lee, K. (2002). Combining Kalman filtering and mean shift for real time eye tracking under active IR illumination. In Proceedings of the 16th International Conference on Pattern Recognition (Vol. 4, pp. 318–321). Germany, Springer.

JOURNAL INFORMATION


ISSN PRINT: 1079-8587
ISSN ONLINE: 2326-005X
DOI PREFIX: 10.31209 (10.1080/10798587 with T&F)
IMPACT FACTOR: 0.652 (2017/2018)
Journal: 1995-Present




CONTACT INFORMATION


TSI Press
18015 Bullis Hill
San Antonio, TX 78258 USA
PH: 210 479 1022
FAX: 210 479 1048
EMAIL: tsiepress@gmail.com
WEB: http://www.wacong.org/tsi/