Face Recognition in Video Using the Active Appearance Model (AAM) Method

  • Maulana Azhar Asyrofie, Sekolah Tinggi Ilmu Komputer Cipta Karya Informatika
  • Dadang Iskandar Mulyana, Sekolah Tinggi Ilmu Komputer Cipta Karya Informatika

Abstract

This study focuses on face recognition in video using the Active Appearance Model (AAM), a technique that integrates facial shape and texture information to detect and fit faces with high precision. The aim is to develop and evaluate an AAM trained on the Labeled Face Parts in the Wild (LFPW) dataset and to apply it in a real-time setting. Experiments were conducted under varying lighting conditions, facial expressions, and viewing angles to assess the robustness of the resulting model. The results show that the AAM recognises faces with considerable accuracy, even under challenging real-time video conditions. However, performance degrades when the model is exposed to very low or very bright lighting, or to highly pronounced changes in facial expression. Overall, the AAM trained on the LFPW dataset achieves a mean fitting error of 0.12 on selected facial landmarks, particularly around the eyes and mouth. These findings indicate that AAM has strong potential for video-based face recognition systems, but further development is needed to handle more complex scenarios.
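As an illustration of the pipeline summarised above, the following is a minimal sketch, not the authors' implementation, of training an AAM on LFPW and fitting it to webcam frames. It assumes the open-source menpo, menpofit, menpodetect and OpenCV libraries and a local copy of the LFPW trainset with 68-point .pts annotation files; all paths and parameter values are illustrative.

# Minimal sketch, assuming menpo/menpofit/menpodetect and OpenCV are installed
# and the LFPW trainset (images + 68-point .pts files) is available locally.
import cv2
import menpo.io as mio
from menpo.feature import igo
from menpo.image import Image
from menpodetect import load_dlib_frontal_face_detector
from menpofit.aam import HolisticAAM, LucasKanadeAAMFitter

# --- Training: build a joint shape + texture (appearance) model from LFPW ---
training_images = []
for img in mio.import_images('lfpw/trainset/', verbose=True):
    img = img.crop_to_landmarks_proportion(0.2)   # keep only the face region
    if img.n_channels == 3:
        img = img.as_greyscale()
    training_images.append(img)

aam = HolisticAAM(training_images, group='PTS', holistic_features=igo,
                  diagonal=150, scales=(0.5, 1.0), verbose=True)

# Lucas-Kanade fitter that optimises shape/appearance parameters per frame.
fitter = LucasKanadeAAMFitter(aam, n_shape=[5, 15], n_appearance=0.75)

# --- Real-time fitting: detect a face per frame, then refine with the AAM ---
detect = load_dlib_frontal_face_detector()
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) / 255.0
    img = Image(grey[None])                        # menpo uses channels-first floats
    bboxes = detect(img)
    if bboxes:
        result = fitter.fit_from_bb(img, bboxes[0], max_iters=20)
        for y, x in result.final_shape.points:     # draw the fitted landmarks
            cv2.circle(frame, (int(x), int(y)), 2, (0, 255, 0), -1)
    cv2.imshow('AAM fitting', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()

A fitting error comparable to the 0.12 figure reported above would typically be computed as the mean point-to-point distance between the fitted landmarks (result.final_shape) and the annotated ground truth, normalised (for example by the inter-ocular distance), although the exact normalisation used in the study is not stated here.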

Published
2024-09-21