A Data-Driven Narrative Engine for Interactive Art Design: Architecture and Dynamic Content Generation Methods
Author Names:
Qiong Xie, Hong Ji, Hongrui Liu
Author Affiliation:
College of Publishing, University of Shanghai for Science and Technology, Shanghai, China
Author Email:
monster81844@163.com
Publication Date:
February 26, 2026
Page numbers:
979-995
DOI Number:
https://doi.org/10.66113/jcmse.26071
Abstract:
Interactive art installations are increasingly augmented by intelligent systems that foster immersive, user-adaptive experiences. Traditional narrative engines often rely on predefined scripts, lacking responsiveness to real-time user behaviors. To address this, a data-driven narrative engine is proposed that dynamically generates and adapts story elements based on multimodal user interaction data, specifically tailored for interactive art environments. The system architecture integrates gesture, gaze, and emotional signals captured through ambient sensors and camera input within mixed-reality settings. A curated dataset of numerous interaction sequences is preprocessed using Savitzky–Golay filtering and temporal normalization to enhance signal clarity and temporal alignment. Temporal Convolutional Networks (TCNs) extract behavioral patterns, while a novel Fire Hawk Optimizer-driven Sequence-to-Sequence (FHO-Seq2Seq) model powers narrative generation. This model is trained on annotated narrative corpora and tuned for contextual coherence and emotional resonance. Performance evaluation yields a BLEU score of 0.45, ROUGE-L score of 0.52, and METEOR score of 0.38, indicating strong linguistic and semantic fidelity. Real-time responsiveness is achieved with an average response latency of 182 ms and a stable frame rate of 58–60 FPS in Unity 3D visualization. Signal processing modules enhance emotional signal clarity from 0.71 to 0.94 and improve character emotional recognition accuracy to 97.6%. User studies report high ratings for character believability (4.5/5), storyline coherence (0.85), and emotional engagement (4.3/5). These results demonstrate the effectiveness of the proposed system in enabling context-sensitive, emotionally aligned storytelling within interactive installations, offering new directions for narrative-driven digital art and human–computer interaction.
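The preprocessing step described above (Savitzky–Golay filtering plus temporal normalization) can be sketched in plain Python. The paper does not state the filter parameters, so the window length of 5 and polynomial order of 2 below (whose closed-form coefficients are (-3, 12, 17, 12, -3)/35) and the fixed resampling length are illustrative assumptions, not the authors' configuration:

```python
def savgol_smooth(signal):
    """Savitzky-Golay smoothing with assumed window=5, polyorder=2.
    Uses the standard closed-form coefficients (-3, 12, 17, 12, -3)/35
    for interior samples; the two samples at each end are left unfiltered
    for simplicity."""
    coeffs = [-3.0, 12.0, 17.0, 12.0, -3.0]
    out = list(signal)
    for i in range(2, len(signal) - 2):
        out[i] = sum(c * signal[i + j - 2] for j, c in enumerate(coeffs)) / 35.0
    return out

def temporal_normalize(signal, length=100):
    """Temporal normalization: linearly resample a variable-length
    interaction sequence to a fixed number of samples so that sequences
    of different durations are time-aligned."""
    n = len(signal)
    if n == 1:
        return [float(signal[0])] * length
    out = []
    for k in range(length):
        pos = k * (n - 1) / (length - 1)  # fractional index into the input
        i = min(int(pos), n - 2)
        frac = pos - i
        out.append(signal[i] * (1.0 - frac) + signal[i + 1] * frac)
    return out
```

A quadratic-preserving smoother like Savitzky–Golay is a natural fit here because it suppresses sensor jitter in gesture and gaze signals without flattening the genuine peaks that carry emotional information.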
Keywords:
data-driven narrative engine, interactive art, multimodal interaction, Temporal Convolutional Network, Fire Hawk Optimizer, Seq2Seq