This article describes the design of a prediction system for the automatic recognition of emotions in music. A main goal of this work is to analyze a prediction solution, together with possible variations in its design, so as to maximize prediction accuracy using a machine learning technique. The training process uses a data set of 1802 sound files previously annotated in a dimensional emotional model with arousal and valence ratings. Each song file is described by 260 low-level features obtained through a dynamic audio feature extraction process. Based on an analysis of the proposed solution's performance, several improvements were made. The final solution lays the groundwork for the future implementation of an emotional classification system for music.
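To make the setup concrete, the learning problem can be sketched as supervised regression from a 1802 x 260 feature matrix to per-song arousal and valence targets. The sketch below is purely illustrative and is not the paper's method: it uses random placeholder data instead of the real extracted audio features, and closed-form ridge regression as one possible choice of learner.

```python
import numpy as np

# Placeholder data matching the paper's dimensions:
# 1802 songs, 260 low-level audio features per song.
rng = np.random.default_rng(0)
n_songs, n_features = 1802, 260
X = rng.standard_normal((n_songs, n_features))   # stand-in feature matrix
Y = rng.uniform(-1.0, 1.0, size=(n_songs, 2))    # columns: [arousal, valence]

def ridge_fit(X, Y, lam=1.0):
    """Closed-form ridge regression: W = (X^T X + lam*I)^-1 X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

W = ridge_fit(X, Y)
preds = X @ W            # predicted [arousal, valence] for each song
print(preds.shape)       # one (arousal, valence) pair per song
```

In practice the feature matrix would come from the audio feature extraction stage, the model would be chosen and tuned by cross-validated error on the annotated targets, and any regressor (not only ridge) could fill the `ridge_fit` role.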