Implementation of Yin Algorithm to Detect Human Voice Emotions According to Gender
Abstract
Computer technology and artificial intelligence develop rapidly every year, and speech recognition is one area of this growth. Speech recognition powers virtual digital assistants embedded in software applications and serves as a tool for human needs such as communication, although it is often misused by users. In this study, voice recordings were collected to capture the differences between male and female voices. The YIN algorithm was used to extract the fundamental frequency (pitch) from each recording, and the resulting pitch histograms were characterized by two features: the mean and the standard deviation. The results show that male and female pitch differ. The contours of the pitch histograms are similar in shape for men and women, but the female histogram is shifted toward higher frequencies than the male one, and this shift occurs across all emotional expressions.
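The pipeline described above has two computational steps: YIN-based fundamental-frequency (pitch) tracking, and summarizing the voiced-pitch histogram by its mean and standard deviation. The Python sketch below illustrates both steps. It is a minimal illustration under stated assumptions, not the authors' code: the frame length, hop size, 50-500 Hz search range, 0.15 threshold, and 40 histogram bins are illustrative choices rather than parameters reported in the paper, and the full YIN method additionally refines each lag estimate with parabolic interpolation, which is omitted here.

import numpy as np

def yin_pitch(frame, sr, fmin=50.0, fmax=500.0, threshold=0.15):
    """Estimate F0 for one frame following de Cheveigne & Kawahara (2002):
    difference function, cumulative mean normalized difference (CMNDF),
    then the first lag whose CMNDF falls below an absolute threshold."""
    tau_min = int(sr / fmax)
    tau_max = int(sr / fmin)  # the frame must be longer than tau_max samples
    n = len(frame)
    # Step 1: difference function d(tau).
    d = np.zeros(tau_max)
    for tau in range(1, tau_max):
        delta = frame[: n - tau] - frame[tau:]
        d[tau] = np.dot(delta, delta)
    # Step 2: cumulative mean normalized difference d'(tau).
    cmndf = np.ones(tau_max)
    running = 0.0
    for tau in range(1, tau_max):
        running += d[tau]
        cmndf[tau] = d[tau] * tau / running if running > 0 else 1.0
    # Step 3: first lag below the threshold; no such lag means unvoiced.
    for tau in range(tau_min, tau_max):
        if cmndf[tau] < threshold:
            return sr / tau
    return 0.0

def pitch_histogram_stats(signal, sr, frame_len=1024, hop=512, bins=40):
    """Track pitch frame by frame, keep voiced frames only, and summarize
    the pitch histogram by its mean and standard deviation."""
    pitches = []
    for start in range(0, len(signal) - frame_len, hop):
        f0 = yin_pitch(signal[start:start + frame_len], sr)
        if f0 > 0.0:  # discard unvoiced frames
            pitches.append(f0)
    pitches = np.asarray(pitches)
    if pitches.size == 0:
        return 0.0, 0.0, None, None
    hist, edges = np.histogram(pitches, bins=bins)
    return pitches.mean(), pitches.std(), hist, edges

For recordings grouped by speaker gender, comparing the returned histograms and means would surface the effect the abstract reports: similarly shaped contours, with the female histogram shifted toward higher frequencies.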
References
[2] Banse, Rainer, and Klaus R. Scherer. 1996. "Acoustic Profiles in Vocal Emotion Expression." Journal of Personality and Social Psychology 70 (3): 614-636.
[3] Bozkurt, Elif, Engin Erzin, and Cigdem Eroglu Erdem. 2011. "Formant Position Based Weighted Spectral Features for Emotion Recognition." Speech Communication 53.
[4] de Cheveigné, Alain, and Hideki Kawahara. 2002. "YIN, a Fundamental Frequency Estimator for Speech and Music." Journal of the Acoustical Society of America 111.
[5] Deng, Jun, Xinzhou Xu, and Zixing Zhang. 2018. "Semi-Supervised Autoencoders for Speech Emotion Recognition." IEEE/ACM Transactions on Audio, Speech, and Language Processing 26 (1): 31-43.
[6] Ghifary, M. T., and N. Faizah. 2019. "Pengaruh Kepemimpinan dan Komitmen Organisasi terhadap Kinerja Pegawai Dinas Perindustrian dan Perdagangan Kabupaten Pasuruan" [The Influence of Leadership and Organizational Commitment on the Performance of Employees of the Pasuruan Regency Department of Industry and Trade]. JURNAL EKBIS: ANALISIS, PREDIKSI DAN INFORMASI 20 (1): 1172-1180.
[7] n.d. Introduction to SDL 2.0. Accessed July 12, 2019. https://wiki.libsdl.org/Introduction.
[8] McLoughlin, Ian. 2009. Applied Speech and Audio Processing. New York: Cambridge University Press.
[9] Polzehl, Tim, and Alexander Schmitt. 2011. "Anger Recognition in Speech Using Acoustic and Linguistic Cues." Speech Communication 53.
[10] Potegal, Michael, Gerhard Stemmler, and Charles Spielberger, eds. 2010. International Handbook of Anger. New York: Springer.
[11] Rabiner, Lawrence R., and Ronald W. Schafer. 2007. Introduction to Digital Speech Processing. Foundations and Trends® in Signal Processing.
[12] Tractica. 2016. "The Virtual Digital Assistant Market Will Reach $15.8 Billion Worldwide by 2021." August 3. Accessed March 13, 2018. https://www.tractica.com/newsroom/press-releases/the-virtual-digital-assistant-market-will-reach-15-8-billion-worldwide-by-2021/.
[13] Syairozi, M. I., and S. B. Cahya. 2017. "Sukuk Al Intifaa: Integrasi Sukuk dan Wakaf dalam Meningkatkan Produktifitas Sektor Wakaf Pendorong Investasi pada Pasar Modal Syariah" [Sukuk Al Intifaa: Integrating Sukuk and Waqf to Increase the Productivity of the Waqf Sector as a Driver of Investment in the Islamic Capital Market]. JPIM (Jurnal Penelitian Ilmu Manajemen) 2 (2): 12 pages.
[14] Syairozi, M., S. Rosyad, and A. P. Pambudy. 2019. "Pemberdayaan Masyarakat sebagai Pengguna Kosmetik Alami Beribu Khasiat Hasil Produk Tani untuk Meminimalkan Pengeluaran Masyarakat Desa Wonorejo Kecamatan Glagah Kab. Lamongan" [Empowering the Community as Users of Natural Cosmetics with Many Benefits from Farm Products to Minimize the Expenditure of the Wonorejo Village Community, Glagah District, Lamongan Regency]. Empowering: Jurnal Pengabdian Masyarakat 3: 88-98.
[15] Titze, Ingo R. 1989. "Physiologic and Acoustic Differences between Male and Female Voices." Journal of the Acoustical Society of America 85.
[16] Watase, K., and H. Nishizaki. 2017. "Emotion Classification of Spontaneous Speech Using Spoken Term Detection." 2017 IEEE 6th Global Conference on Consumer Electronics (GCCE). Nagoya.
[17] Zhang, Shiqing, and Shiliang Zhang. 2018. "Speech Emotion Recognition Using Deep Convolutional Neural Network and Discriminant Temporal Pyramid Matching." IEEE Transactions on Multimedia 20 (6): 1576-1590.