The Effectiveness of Indian Music in Emotion Pattern Recognition under the Framework of Machine Intelligence

Authors

  • P. Das, PhD Scholar, MAKAUT, West Bengal, India
  • B. Neogi, Associate Professor, JIS College of Engineering, Kalyani, India

DOI:

https://doi.org/10.18311/jmmf/2023/34160

Keywords:

Emotion, Music Detection, Perception, Recognition, Signal Processing.

Abstract

Experts in music therapy have suggested music as an aid to promote a positive state of mind by keeping depression and anxiety at bay. Music helps restore the original state of vibration by regulating our emotions [1]. A piece of music is the combined effect of melody, the singer’s voice, and linguistics. The singer’s voice expresses emotions such as gladness, sorrow, anxiety, peace, and tiredness, which in turn shape the listener’s mental state. In this work, Indian music is analyzed, and a music-information-retrieval approach is initiated to propose a therapeutic system through the detection and identification of Indian music. Music Information Retrieval is a powerful tool for analyzing the different characteristics of a piece of music. In this approach, different traits of music are studied and categorized, and the resulting categorization serves the therapeutic cause.
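To make the pipeline concrete, the following is a minimal sketch of the kind of feature-extraction-and-categorization step the abstract describes, written in Python against the open-source librosa library. The input file name, the arousal/valence cut-offs, and the direct mapping onto Thayer’s four mood quadrants [12] are illustrative assumptions for this sketch, not the authors’ implementation.

import numpy as np
import librosa

def extract_features(path):
    """Load a track and compute a few standard MIR descriptors."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)             # global tempo (BPM)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)   # spectral "brightness"
    rms = librosa.feature.rms(y=y)                             # short-time energy
    return {
        "tempo": float(np.atleast_1d(tempo)[0]),
        "brightness": float(centroid.mean()),
        "energy": float(rms.mean()),
    }

def thayer_quadrant(feats):
    """Crude heuristic mapping onto Thayer's arousal/valence quadrants [12]."""
    arousal = feats["tempo"] > 100.0 or feats["energy"] > 0.1  # assumed cut-offs
    valence = feats["brightness"] > 2000.0                     # Hz; assumed cut-off
    if arousal and valence:
        return "exuberance"     # high arousal, positive valence
    if arousal:
        return "anxiety"        # high arousal, negative valence
    if valence:
        return "contentment"    # low arousal, positive valence
    return "depression"         # low arousal, negative valence

feats = extract_features("raga_sample.wav")  # hypothetical input file
print(thayer_quadrant(feats))

A practical system would replace these hand-set thresholds with a trained model over richer features (MFCCs, chroma, rhythm), for instance the regression approach of Yang et al. [3].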


Published

2023-07-04

How to Cite

Das, P., & Neogi, B. (2023). The Effectiveness of Indian Music in Emotion Pattern Recognition under the Framework of Machine Intelligence. Journal of Mines, Metals and Fuels, 71(5), 619–626. https://doi.org/10.18311/jmmf/2023/34160


References

P. Das, S. Gupta, and B. Neogi, (2020): “Measurement of effect of music on human brain and consequent impact on attentiveness and concentration during reading”, Procedia Computer Science, Elsevier, ISSN: 1877-0509, Vol. 172, pp. 1033-1038. https://doi.org/10.1016/j.procs.2020.05.151

Kee Moe Han, Theingi Zin, and Hla Myo Tun, (2016): “Extraction of Audio Features for Emotion Recognition System Based on Music”, International Journal of Scientific & Technology Research, Vol. 5, Issue 06, June.

Yi-Hsuan Yang, Yu-Ching Lin, Ya-Fan Su, and Homer H. Chen, (2007): “Music Emotion Classification: A Regression Approach”, in Proc. IEEE International Conference on Multimedia and Expo, pp. 208-211.

Yi-Hsuan Yang, Ya-Fan Su, Yu-Ching Lin, and Homer H. Chen, (2007): “Music Emotion Recognition: The Role of Individuality.” HCM’07, September 28, Augsburg, Bavaria, Germany.

S. Reddy and J. Mascia, (2006): “Lifetrak: music in tune with your life,” Proc. Human-centered Multimedia, pp. 25-34.

S. Dornbush, K. Fisher, K. Mckay, A. Prikhodko, and Z. Segall, (2005): “XPOD – A human activity and emotion aware mobile music player,” Proc. Int. Conf. Mobile Technology, Applications and Systems.

E. Schubert, (1999): “Measurement and time series analysis of emotion in music,” Ph.D. dissertation, School of Music & Music Education, Univ. New South Wales, Sydney, Australia.

Y. Feng, Y. Zhuang, and Y. Pan, (2003): “Popular music retrieval by detecting mood,” Proc. ACM SIGIR, pp. 375–376.

D. Yang and W. Lee, (2004): “Disambiguating music emotion using software agents,” Proc. Int. Conf. Music Information Retrieval, pp. 52–58.

T. Li and M. Ogihara, (2006): “Content-based music similarity search and emotion detection,” Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, pp. 17-21.

L. Lu, D. Liu, and H.-J. Zhang, (2006): “Automatic mood detection and tracking of music audio signals,” IEEE Trans. Audio, Speech, and Language Processing, vol. 14, no. 1, pp. 5–18.

R. E. Thayer, (1989): The Biopsychology of Mood and Arousal, New York, Oxford University Press.

P. Synak and A. Wieczorkowska, (2005): “Some issues on detecting emotions in music,” Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing, Springer, pp. 314-322.

Nadia Lachetar, Halima Bahi, “Song classification”, Computer Science Department.

Bram van de Laar, (2006): “Emotion detection in music, a survey,” in Twente Student Conference on IT, Vol. 1, p. 700.

Adit Jamdar, Jessica Abraham, Karishma Khanna, and Rahul Dubey, (2015): “Emotion Analysis of Songs Based on Lyrical and Audio Features”, International Journal of Artificial Intelligence & Applications (IJAIA), Vol. 6, No. 3, May.

C. Ho, W. T. Tsai, K. S. Lin, and H. H. Chen, (2013): “Extraction and alignment evaluation of motion beats for street dance,” IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May.

Jay K. Patel and E. S. Gopi, (2015): “Musical Notes Identification using Digital Signal Processing,” Proceedings of the 3rd International Conference on Recent Trends in Computing (ICRTC-2015).

S. Gupta, (2007): “Music Perception, Cognition and Effect: A case study of Artificial Intelligence illustrating Disease Diagnosis”, Proceedings of the 5th International Conference on Information Science, Technology and Management (CISTM), ISBN: 978-0-9772107-8-7, 50, pp. 1-16.

P. Cano, E. Batlle, H. Mayer, and H. Neuschmied, (2002): “Robust Sound Modeling for Song Detection in Broadcast Audio,” in Proceedings of the 112th Audio Engineering Society Convention, Preprint 5531, Munich: Audio Engineering Society (AES).

A. Ramalingam and S. Krishnan, (2006): “Gaussian Mixture Modeling of Short-Time Fourier Transform Features for Audio Fingerprinting,” IEEE Transactions on Information Forensics and Security, Vol. 1, No. 4, pp. 457-463, Dec., ISSN: 1556-6013. DOI: 10.1109/TIFS.2006.885036.

O. Hellmuth, E. Allamanche, J. Herre, T. Kastner, M. Cremer, and W. Hirsch, (2001): “Advanced Audio Identification Using MPEG-7 Content Description,” in Proceedings of the 111th Audio Engineering Society Convention, Preprint 5463, New York: Audio Engineering Society.

J. S. Seo, M. Jin, S. Lee, D. Jang, S. Lee, and C. D. Yoo, (2005): “Audio Fingerprinting Based on Normalized Spectral Subband Centroids,” in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Institute of Electrical and Electronics Engineers (IEEE), pp. 213-216, ISBN: 0-7803-8874-7. DOI: 10.1109/ICASSP.2005.1415684.

A. Kimura, K. Kashino, T. Kurozumi, and H. Murase, (2001): “Very Quick Audio Searching: Introducing Global Pruning to the Time-Series Active Search,” in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Vol. 3, Institute of Electrical and Electronics Engineers (IEEE), pp. 1429-1432, ISBN: 0-7803-7041-4. DOI: 10.1109/ICASSP.2001.941198.

J. Haitsma and T. Kalker, (2003): “Speed-Change Resistant Audio Fingerprinting Using Auto-Correlation,” in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Vol. 4, Institute of Electrical and Electronics Engineers (IEEE), pp. IV-728-IV-731, ISBN: 0-7803-7663-3. DOI: 10.1109/ICASSP.2003.1202746.

C. Yang, (2001): “MACS: Music Audio Characteristic Sequence Indexing for Similarity Retrieval,” in Proceedings of the Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), New Paltz: Institute of Electrical and Electronics Engineers (IEEE), Oct., pp. 123-126, ISBN: 0-7803-7126-7. DOI: 10.1109/ASPAA.2001.969558.

A. Wang, (2003): “An Industrial Strength Audio Search Algorithm,” in Proceedings of the 4th International Conference on Music Information Retrieval (ISMIR), Washington.

S. Kim and C. D. Yoo, (2007): “Boosted Binary Audio Fingerprint Based on Spectral Subband Moments,” in Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vol. 1, Institute of Electrical and Electronics Engineers (IEEE), Apr., pp. I-241-I-244, ISBN: 1-4244-0727-3. DOI: 10.1109/ICASSP.2007.366661.

S. Sukittanon and L. E. Atlas, (2002): “Modulation Frequency Features for Audio Fingerprinting,” in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Institute of Electrical and Electronics Engineers (IEEE), ISBN: 0-7803-0946-4. DOI: 10.1109/ICASSP.2002.1006107.

D. Schonberg and D. Kirovski, (2004): “Fingerprinting and Forensic Analysis of Multimedia,” in Proceedings of the 12th ACM Multimedia Conference, New York. DOI: 10.1145/1027527.1027712.

J. I. Martinez, J. Vitola, A. Sanabria, and C. Pedraza, (2011): “Fast Parallel Audio Fingerprinting Implementation in Reconfigurable Hardware and GPUs,” in Proceedings of the Southern Conference on Programmable Logic (SPL), Institute of Electrical and Electronics Engineers (IEEE), Apr., pp. 245-250, ISBN: 978-1-4244-8847-6. DOI: 10.1109/SPL.2011.5782656.

C. J. C. Burges, J. C. Platt, and S. Jana, (2002): “Extracting Noise-Robust Features from Audio Data,” in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Institute of Electrical and Electronics Engineers (IEEE), ISBN: 0-7803-0946-4. DOI: 10.1109/ICASSP.2002.1005916.

G. Richly, L. Varga, F. Kovacs, and G. Hosszu, (2000): “Short-Term Sound Stream Characterization for Reliable, Real-Time Occurrence Monitoring of Given Sound-Prints,” in Proceedings of the 10th Mediterranean Electrotechnical Conference: Information Technology and Electrotechnology for the Mediterranean Countries (MELECON), Vol. 2, Institute of Electrical and Electronics Engineers (IEEE), pp. 526-528, ISBN: 0-7803-6290-X. DOI: 10.1109/MELCON.2000.879986.
