
Synthesizing fractional anisotropy maps from T1-weighted magnetic resonance images using a simplified generative adversarial network

Closed Access

Abstract:

Image-to-image translation techniques can be used to synthesize brain image modalities that could provide complementary information about the organ. This image-generation task is often performed with Generative Adversarial Networks (GANs), a computationally expensive approach. This study focuses on synthesizing three-plane slices of fractional anisotropy maps from T1-weighted Magnetic Resonance Images using a simplified GAN-based architecture that significantly reduces the number of parameters involved. Brain magnetic resonance images from 194 cognitively normal subjects in the ADNI database were used. The proposed GAN architecture was compared against two state-of-the-art networks, namely pix2pix and CycleGAN. With almost 70% fewer parameters than pix2pix, the proposed method achieved competitive mean PSNR (20.21 ± 1.38) and SSIM (0.65 ± 0.07) relative to pix2pix (PSNR: 20.46 ± 1.46, SSIM: 0.66 ± 0.07) and outperformed CycleGAN (PSNR: 18.65 ± 1.31, SSIM: 0.61 ± 0.08). By using a simplified GAN-based architecture that highlights the potential of parameter reduction through stacked convolutions, the presented model is competitive with state-of-the-art methods at generating three-plane fractional anisotropy maps from T1-weighted images.
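The parameter reduction that the abstract attributes to stacked convolutions can be illustrated with a minimal sketch; this is not the paper's actual architecture, and the channel width and the choice of PyTorch are assumptions made only for illustration. Two stacked 3×3 convolutions cover the same 5×5 receptive field as a single 5×5 convolution while using roughly 28% fewer weights.

```python
import torch.nn as nn

def count_params(module: nn.Module) -> int:
    """Total number of trainable parameters in a module."""
    return sum(p.numel() for p in module.parameters() if p.requires_grad)

channels = 64  # illustrative channel width, not taken from the paper

# A single 5x5 convolution over `channels` feature maps.
single_5x5 = nn.Conv2d(channels, channels, kernel_size=5, padding=2)

# Two stacked 3x3 convolutions: same 5x5 receptive field,
# with an extra non-linearity in between.
stacked_3x3 = nn.Sequential(
    nn.Conv2d(channels, channels, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(channels, channels, kernel_size=3, padding=1),
)

print(count_params(single_5x5))   # 64*64*25 + 64 = 102464
print(count_params(stacked_3x3))  # 2*(64*64*9 + 64) = 73856, ~28% fewer
```

The same trade-off, applied throughout a pix2pix-style generator, is one way a GAN's parameter count can be reduced without shrinking the receptive field; how the authors apply it in their network is described in the full paper.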

Topic:

Advanced Image Processing Techniques

Citations:

0

Citations per year:

No citation data available

Altmetrics:

Paperbuzz Score: 0

Source Information:

Source: Not available
Quartile (publication year): Not available
Volume: Not available
Issue: Not available
Pages: 94 - 94
pISSN: Not available
ISSN: Not available
OpenAlex profile: Not available

Links and Identifiers:

Journal article