Author

EEG-Based Music Emotion Prediction Using Supervised Feature Extraction for MIDI Generation

Open Access

Abstract:

Advances in music emotion prediction are driving AI-based algorithmic composition, enabling the generation of complex melodies. However, bridging the neural and auditory domains remains challenging because of the semantic gap between brain-derived low-level features and high-level musical concepts, which makes alignment computationally demanding. This study proposes a deep learning framework for generating MIDI sequences aligned with labeled emotion predictions through supervised feature extraction from the neural and auditory domains. EEGNet is employed to process the neural data, while an autoencoder-based piano algorithm handles the auditory data. To address modality heterogeneity, Centered Kernel Alignment is incorporated to enhance the separation of emotional states. Furthermore, regression between the feature domains is applied to reduce intra-subject variability in the extracted Electroencephalography (EEG) patterns, followed by clustering of the latent auditory representations into denser partitions to improve MIDI reconstruction quality. Evaluation on real-world data using musical metrics shows that the proposed approach improves emotion classification (namely, of arousal and valence) and the system’s ability to produce MIDI sequences that better preserve temporal alignment, tonal consistency, and structural integrity. Subject-specific analysis reveals that subjects with stronger imagery paradigms produced higher-quality MIDI outputs, as their neural patterns aligned more closely with the training data. In contrast, subjects with weaker performance exhibited less consistent auditory data.
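The abstract mentions Centered Kernel Alignment (CKA) as the mechanism for relating the heterogeneous EEG and auditory feature spaces. As an illustration only (this is not the authors' code, and `linear_cka` is a hypothetical helper name), a linear-kernel CKA similarity between two feature matrices can be sketched as:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two feature matrices.

    X: (n_samples, d1) features from one modality (e.g., EEG embeddings).
    Y: (n_samples, d2) features from the other modality (e.g., audio embeddings).
    Returns a similarity score in [0, 1]; 1 means the representations
    are identical up to an orthogonal transform and scaling.
    """
    # Center each feature matrix column-wise
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Squared Frobenius norm of the cross-covariance, normalized
    # by the self-covariances of each modality
    num = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return num / den
```

A score of this kind could serve either as a diagnostic of cross-modal alignment or as a differentiable training objective; the paper's exact formulation (kernel choice, where the term enters the loss) is described in the full text, not here.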

Topic:

Neuroscience and Music Perception

Citations:

0

Citations per year:

No citation data available

Altmetrics:

Paperbuzz Score: 0

Source Information:

Source: Sensors
Quartile at publication year: Not available
Volume: 25
Issue: 5
Pages: 1471
pISSN: Not available
ISSN: Not available

Links and Identifiers:

Journal article