In healthcare applications such as atrial fibrillation detection, retraining models for new users often requires collecting large amounts of labeled data, which is challenging and expensive. Unsupervised and self-supervised techniques have emerged as promising ways to deal with the scarcity of labeled data. Contrastive learning is a recent technique that aims to improve model accuracy through a pre-training process on unlabeled data. In this work, we propose a contrastive-learning strategy to improve the performance of a CNN that classifies atrial fibrillation in scenarios with few labeled data, small models, and noisy data. The strategy was evaluated on the largest public ECG dataset. We report F1-scores for different amounts of unlabeled and labeled data and for different model sizes. The results suggest that our strategy outperforms the baseline by up to 30% in 10-fold mean F1-score, compared with the 5.8% AUC improvement reported in the state of the art.
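The abstract does not specify the contrastive objective used; a common choice for this kind of pre-training on unlabeled signals is the NT-Xent (normalized temperature-scaled cross-entropy) loss, which pulls together embeddings of two augmented views of the same ECG window and pushes apart embeddings of different windows. The sketch below is a minimal NumPy illustration of that loss, not the paper's implementation; the batch size, embedding dimension, and temperature are assumptions.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over two augmented views of a batch.

    z1, z2: (n, d) embeddings of two augmentations of the same n
    ECG windows (hypothetical shapes for illustration).
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)               # (2n, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    sim = z @ z.T / temperature                        # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                     # mask self-similarity
    # the positive pair for row i is row i+n (mod 2n)
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    # cross-entropy: maximize similarity to the positive over all candidates
    return -(sim[np.arange(2 * n), targets] - logsumexp).mean()

# Example: random embeddings stand in for CNN outputs on unlabeled ECG windows
rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
z2 = rng.normal(size=(8, 16))
print(nt_xent_loss(z1, z2))
```

After pre-training with such an objective on unlabeled ECG data, the CNN encoder would be fine-tuned on the few available labeled examples for atrial fibrillation classification.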
Topic:
ECG Monitoring and Analysis
Source:
2021 29th European Signal Processing Conference (EUSIPCO)