Enhancing Emotional Expression in Algorithmic Music Composition Systems Using Reinforcement Learning
Author Name:
Pengcheng Xiao
Author Affiliation:
Department of Composition Theory, School of Music, Sangmyung University, Jongno-gu, Seoul, South Korea 030-031
Author Email:
pengchengx0409@163.com
Publication Date:
April 24, 2026
DOI Number:
https://doi.org/10.1177/14727978251352150
Abstract:
Artificial intelligence (AI) has advanced algorithmic music composition; however, generating emotionally expressive music remains an unresolved challenge. Existing models find limited application in affective computing and have difficulty matching musical characteristics with human-perceived emotions. This research proposes an Intelligent Golden Eagle-driven Scalable Reinforcement Learning (IGE-SRL) framework to optimize the emotional depth of AI-generated music compositions. Data collection involves compiling a dataset of emotionally labeled music pieces from open-source repositories. The dataset spans diverse genres and emotional contexts, ensuring balanced representation. Preprocessing includes noise reduction and segmentation into fixed-length sequences. Mel-frequency cepstral coefficients (MFCCs) are extracted as key features to capture the timbral and spectral characteristics relevant to emotional perception. The IGE-SRL framework uses a policy network to generate melodies and harmonies under an adaptive reward function that integrates emotion recognition models, listener feedback, and sentiment analysis. The IGE algorithm improves the exploration-exploitation balance by dynamically adjusting learning parameters, while the SRL agent refines music sequences through policy gradient updates, yielding emotionally coherent pieces. Compared to traditional and rule-based approaches, the proposed IGE-SRL framework substantially improves the emotional expressiveness of AI-generated music. Its performance was evaluated in terms of accuracy (95.13%), precision (95.2%), recall (96.4%), F1-score (97.3%), and entropy (6.124). Listener evaluations confirm greater musical diversity and emotional coherence. By optimizing the SRL parameters, the IGE approach enhances composition quality and convergence speed.
With an emphasis on multimodal emotion modeling and real-time adaptive composition, the research advances affective computing and AI-driven music production.
Keywords:
music composition, emotion expression, artificial intelligence (AI), Mel-frequency cepstral coefficients (MFCCs), Intelligent Golden Eagle-driven Scalable Reinforcement Learning (IGE-SRL)
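To make the policy-gradient step described in the abstract concrete, the following is a minimal sketch, not the authors' actual IGE-SRL implementation: a tabular REINFORCE-with-baseline agent that learns a fixed-length note sequence. The reward function here is a hypothetical stand-in that scores agreement with a hard-coded "target emotion" contour; in the paper, that role is played by emotion recognition models, listener feedback, and sentiment analysis. The note count, sequence length, and learning rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_NOTES = 12   # pitch classes the policy can choose from (assumed)
SEQ_LEN = 8    # fixed-length melody fragment, mirroring the preprocessing step

# Hypothetical "target emotion" contour standing in for the paper's
# emotion-recognition / listener-feedback reward components.
TARGET = np.array([0, 4, 7, 0, 4, 7, 0, 4])

theta = np.zeros((SEQ_LEN, N_NOTES))  # per-step policy logits

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def reward(seq):
    # Stand-in emotion-match reward: fraction of notes agreeing with TARGET.
    return float(np.mean(seq == TARGET))

ALPHA = 0.5        # policy learning rate (assumed)
baseline = 0.0     # running reward baseline to reduce gradient variance
rewards = []

for _ in range(2000):
    # Sample a melody from the current stochastic policy.
    seq = np.array([rng.choice(N_NOTES, p=softmax(theta[t]))
                    for t in range(SEQ_LEN)])
    R = reward(seq)
    rewards.append(R)
    adv = R - baseline
    baseline += 0.05 * (R - baseline)
    # REINFORCE update: theta[t] += alpha * advantage * grad log pi(a_t | t)
    for t, a in enumerate(seq):
        p = softmax(theta[t])
        grad = -p
        grad[a] += 1.0
        theta[t] += ALPHA * adv * grad

best = np.array([int(np.argmax(theta[t])) for t in range(SEQ_LEN)])
print("greedy melody:", best, "reward:", reward(best))
```

The baseline subtraction plays the same variance-reduction role that the adaptive reward shaping does in the full framework; without it, a positive-only reward would reinforce every sampled note, including off-target ones.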