MOOD DETECTION BASED ON LAST SONG LISTENED ON SPOTIFY

Authors

  • Ridi Ferdiana, Department of Electrical and Information Engineering, Universitas Gadjah Mada, 55281, Yogyakarta, Indonesia
  • Wiiliam Fajar Dicka, Department of Electrical and Information Engineering, Universitas Gadjah Mada, 55281, Yogyakarta, Indonesia
  • Faturahman Yudanto, Department of Electrical and Information Engineering, Universitas Gadjah Mada, 55281, Yogyakarta, Indonesia

DOI:

https://doi.org/10.11113/aej.v12.16834

Keywords:

Four-dimensional mood scale, machine learning, natural language processing, sentiment analysis, multi-class classification

Abstract

A song is one medium for expressing emotion, whether as a performer or as a listener. With the advancement of machine learning and a deeper understanding of sentiment analysis, we decided to study mood detection based on the last song a person listened to. One direct way to measure someone's mood is the Four-Dimensional Mood Scale (FDMS) instrument, which categorizes mood along four dimensions: low valence, high valence, low arousal, and high arousal. In this article, we used a variant of the FDMS adapted to the Indonesian language, called FDMS-55, to compare against our model's results. Our model is trained on song data collected from Spotify and Genius through their respective APIs (Application Programming Interfaces). We manually labeled each song with a mood class and then processed the data further with the Azure Cognitive Services Text Analytics API. In our evaluation, the FastTreeOva algorithm produced the highest accuracy on both the valence class (0.8901) and the arousal class (0.9167). The model's output was compared with respondents' FDMS-55 results using cosine similarity, yielding a mean similarity of 0.770 with a standard deviation of 0.103. We conclude that a person's mood is related to the song they listened to, and that our model can accurately predict someone's mood based on the last song they listened to.
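The model–respondent comparison described in the abstract relies on cosine similarity between mood-score vectors. A minimal sketch in Python (the four-dimensional vectors below are illustrative placeholders, not data from the study):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two equal-length score vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Illustrative FDMS-style scores over the four dimensions
# (low valence, high valence, low arousal, high arousal):
model_prediction = [0.1, 0.8, 0.2, 0.7]  # hypothetical model output
respondent_fdms = [0.2, 0.9, 0.3, 0.6]   # hypothetical FDMS-55 answers
print(round(cosine_similarity(model_prediction, respondent_fdms), 3))  # prints 0.985
```

A similarity of 1.0 means the two score vectors point in the same direction; values near 0 indicate unrelated mood profiles, which is why the reported mean of 0.770 is read as strong agreement between model and respondents.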

 

References

J. K. Vuoskoski and T. Eerola, 2011. "Measuring Music-Induced Emotion: A Comparison of Emotion Models, Personality Biases, and Intensity of Experiences," Musicae Scientiae, 15(2): 159–173. doi: 10.1177/102986491101500203.

T. J. Huelsman, R. C. Nemanick, and D. C. Munz, 1998. "Scales to measure four dimensions of dispositional mood: Positive energy, tiredness, negative activation, and relaxation," Educational and Psychological Measurement, 58(5): 804–819.

Y. E. Kim et al., 2010. "Music emotion recognition: A state of the art review," Proceedings of the 11th International Society for Music Information Retrieval Conference (ISMIR 2010), 11: 255–266.

X. Hu, J. S. Downie, C. Laurier, M. Bay, and A. F. Ehmann, 2008. "The 2007 MIREX audio mood classification task: Lessons learned," Proceedings of the 9th International Society for Music Information Retrieval Conference (ISMIR 2008), 9: 462–467.

R. E. Thayer, 1989. The Biopsychology of Mood and Arousal. New York, NY, USA: Oxford University Press.

D. Yang and W.-S. Lee, 2004. "Disambiguating Music Emotion Using Software Agents."

L. M. Gómez and M. N. Cáceres, 2018. "Applying Data Mining for Sentiment Analysis in Music," 198–205. doi: 10.1007/978-3-319-61578-3_20.

J. A. Russell, 1980. "A circumplex model of affect," Journal of Personality and Social Psychology, 39(6): 1161–1178. [Online]. Available: https://doi.org/10.1037/h0077714

M. S. M. Yik, J. A. Russell, C. K. Ahn, J. M. F. Dols, and N. Suzuki, 2002. "Relating the Five-Factor Model of Personality to a Circumplex Model of Affect," in R. R. McCrae and J. Allik (eds.), The Five-Factor Model of Personality Across Cultures, International and Cultural Psychology Series. Boston: Springer.

C. Laurier, M. Sordo, J. Serrà, and P. Herrera, 2009. "Music mood representations from social tags," Proceedings of the 10th International Society for Music Information Retrieval Conference (ISMIR 2009), 10: 381–386.

X. Hu, M. Bay, and J. S. Downie, 2007. "Creating a simplified music mood classification ground-truth set," Proceedings of the 8th International Society for Music Information Retrieval Conference (ISMIR 2007), 309–310.

A. S. Bhat, V. S. Amith, N. S. Prasad, and D. M. Mohan, 2014. "An efficient classification algorithm for music mood detection in western and Hindi music using audio feature extraction," International Conference on Signal and Image Processing (ICSIP), 5: 359–364. [Online]. Available: https://doi.org/10.1109/ICSIP.2014.63

D. H. Silvera, A. M. Lavack, and F. Kropp, 2008. "Impulse buying: the role of affect, social influence, and subjective wellbeing," Journal of Consumer Marketing, 25(1): 23–33. doi: 10.1108/07363760810845381.

B. Verplanken, A. G. Herabadi, J. A. Perry, and D. H. Silvera, 2005. "Consumer style and health: The role of impulsive buying in unhealthy eating," Psychology & Health, 20(4): 429–441. doi: 10.1080/08870440412331337084.

I. Adinugroho, 2018. "Memahami Mood dalam Konteks Indonesia: Adaptasi dan Uji Validitas Four Dimensions Mood Scale (Understanding Mood in Indonesian Context: Adaptation and Validity Examination of Four Dimensions Mood Scale)," SSRN Electronic Journal, 5(2).

Published

2022-08-31

Section

Articles

How to Cite

MOOD DETECTION BASED ON LAST SONG LISTENED ON SPOTIFY. (2022). ASEAN Engineering Journal, 12(3), 123-127. https://doi.org/10.11113/aej.v12.16834