IJEM Vol. 14, No. 6, 8 Dec. 2024
Indonesian Sign Language, BISINDO, machine learning, supervised learning, convolutional neural network
Indonesian Sign Language (BISINDO) is a visual language used by people with hearing impairments. Sign-language gestures can represent hundreds of thousands of Indonesian vocabulary words. However, because deaf people in Indonesia number only about seven million, roughly 3% of the population, sign language remains unfamiliar and difficult for many hearing people to understand. This study aims to classify and detect sign-language vocabulary gestures in real time on a mobile device. Recognizing variations in gestures requires a classification technique such as supervised machine learning. This research applies the convolutional neural network method, using the Single Shot Detector (SSD) architecture for object detection and the MobileNet architecture for classification. The objects are 32 gesture vocabulary items taken from the lyrics of the song 'Bidadari Tak Bersayap', with a dataset of 17,600 images. The images are divided into two models according to whether the data are biased or non-biased, comprising 8 and 24 classes, respectively. In real-time mobile testing, the biased model made 15 correct predictions out of 16 and the non-biased model 36 out of 48, corresponding to accuracies of 93.75% and 75%, respectively.
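The reported accuracies follow directly from the correct-over-total prediction counts in the abstract. A minimal sketch of that computation (the function name `accuracy` is illustrative, not from the paper):

```python
def accuracy(correct: int, total: int) -> float:
    """Fraction of correct predictions, expressed as a percentage."""
    return 100.0 * correct / total

# Figures reported in the abstract:
# biased model: 15 of 16 correct; non-biased model: 36 of 48 correct.
biased = accuracy(15, 16)       # 93.75
non_biased = accuracy(36, 48)   # 75.0
print(biased, non_biased)
```

The same formula applies per class as well, which is how per-model test accuracy is typically aggregated in object-detection evaluations.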
Qurrotul Aini, Ferdian Rachardi, Zainul Arham, "Indonesian Sign Language: Detection with Applying Convolutional Neural Network in a Song Lyric", International Journal of Engineering and Manufacturing (IJEM), Vol.14, No.6, pp. 1-10, 2024. DOI:10.5815/ijem.2024.06.01