Application of deep learning and generative adversarial network in music data analysis of Internet of Things
Author Names:
Chen Wang
Author Affiliation:
School of Music, Shandong University of Arts, Jinan 250014, China
Author Email:
wangchen_wang@outlook.com
Publication Date:
April 24, 2026
DOI Number:
https://doi.org/10.1177/14727978251352136
Abstract:
In recent years, the rapid rise of technologies such as the Internet of Things (IoT) and Artificial Intelligence (AI) has transformed numerous domains, particularly smart homes. As people enjoy greater material comfort, they increasingly seek deeper, more emotionally intelligent ways to interact with technology. Music, rich in emotional content, is a powerful medium for interpersonal communication and is increasingly regarded as a natural channel for intelligent human-computer interaction. However, traditional music emotion recognition techniques suffer from low recognition accuracy and high computational cost. To address these limitations, we propose an efficient deep learning-based music emotion recognition system that integrates generative adversarial networks (GANs) within an IoT framework. The system employs a convolutional neural network (CNN) to extract both local and global features from musical signals using Mel-frequency representations. These features enhance the GAN's ability to detect complex emotional expressions in music. Experimental results demonstrate that the proposed model achieves significantly lower error rates and higher recognition accuracy than state-of-the-art methods. Specifically, it attains an accuracy of 94.06%, confirming its effectiveness and suitability for real-time, emotion-aware music recommendation in IoT applications.
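The abstract does not include an implementation, but the pipeline it describes (Mel-frequency features fed to a CNN that captures both local and global structure) can be illustrated with a minimal sketch. Everything below is an assumption rather than taken from the paper: librosa for the Mel-spectrogram, a small PyTorch CNN here called MelCNN, 128 Mel bands, four emotion classes, and a random signal standing in for a real audio clip. The GAN stage that consumes these features is omitted, since the abstract gives no architectural details for it.

```python
# Hypothetical sketch of the Mel-spectrogram + CNN feature-extraction stage.
# All hyperparameters (128 Mel bands, 4 emotion classes, layer sizes) are
# illustrative assumptions, not values taken from the paper.
import librosa
import numpy as np
import torch
import torch.nn as nn


class MelCNN(nn.Module):
    """Small CNN over a log-Mel spectrogram: stacked 3x3 convolutions pick up
    local time-frequency patterns; global average pooling summarizes the whole
    clip before classification."""

    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.global_pool = nn.AdaptiveAvgPool2d(1)  # clip-level summary of the feature map
        self.classify = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.local(x)                   # (batch, 32, mels/4, frames/4)
        x = self.global_pool(x).flatten(1)  # (batch, 32)
        return self.classify(x)             # (batch, n_classes) emotion logits


sr = 22050
y = np.random.randn(sr * 3).astype(np.float32)  # 3 s of noise as a stand-in clip
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
log_mel = librosa.power_to_db(mel, ref=np.max)  # log scale, as is conventional

x = torch.from_numpy(log_mel).float().unsqueeze(0).unsqueeze(0)  # (1, 1, 128, frames)
logits = MelCNN()(x)
print(logits.shape)  # torch.Size([1, 4])
```

Global average pooling is used here because it mirrors the abstract's local/global distinction while keeping the network agnostic to clip length; in the full system these pooled features would presumably feed the GAN-based recognition stage.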
Keywords:
music data analysis, deep learning (DL), generative adversarial network (GAN), convolutional neural network (CNN), Internet of Things (IoT)