Summary

In a seismic survey, transducers (geophones or hydrophones) record seismic traces that capture the interaction between seismic wavefields and geological layers. Although these traces are fundamentally elastic, since the Earth's layers respond elastically, seismic processing often focuses interpretation and imaging on acoustic waves. This study introduces a method for isolating acoustic wave information from seismic traces using a 1D conditional generative adversarial network (cGAN). A GAN comprises two neural networks, a generator (G) and a discriminator (D), engaged in a zero-sum game: G learns to generate data with the same statistics as the training set, effectively converting an elastic input into its acoustic equivalent, while D learns to distinguish real data from generated data. The model is conditional in that G's generation of acoustic events is conditioned on the statistical characteristics of the input elastic events during the training stage. This approach offers a promising route to extracting acoustic information in seismic data processing.
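To make the setup concrete, the sketch below shows one plausible way to structure such a 1D cGAN in PyTorch. This is an illustrative assumption, not the authors' architecture: the trace length, layer sizes, and kernel widths are hypothetical. The generator maps an elastic trace to an acoustic estimate, and the discriminator is conditioned by receiving the elastic trace alongside the candidate acoustic trace.

```python
import torch
import torch.nn as nn

TRACE_LEN = 256  # assumed number of time samples per trace (hypothetical)


class Generator(nn.Module):
    """Maps an elastic trace to an acoustic-equivalent trace."""

    def __init__(self):
        super().__init__()
        # Simple 1D fully convolutional network over the time axis;
        # padding keeps the output the same length as the input.
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=9, padding=4),
        )

    def forward(self, elastic):
        # elastic: (batch, 1, TRACE_LEN) -> acoustic estimate, same shape
        return self.net(elastic)


class Discriminator(nn.Module):
    """Scores (elastic, acoustic) trace pairs as real or generated."""

    def __init__(self):
        super().__init__()
        # Conditioning: the elastic trace and the candidate acoustic
        # trace are stacked as two input channels.
        self.net = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=9, stride=2, padding=4),
            nn.LeakyReLU(0.2),
            nn.Conv1d(16, 16, kernel_size=9, stride=2, padding=4),
            nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(16 * (TRACE_LEN // 4), 1),  # one real/fake logit
        )

    def forward(self, elastic, acoustic):
        return self.net(torch.cat([elastic, acoustic], dim=1))


if __name__ == "__main__":
    G, D = Generator(), Discriminator()
    elastic = torch.randn(4, 1, TRACE_LEN)  # batch of elastic traces
    fake_acoustic = G(elastic)              # G's acoustic estimate
    score = D(elastic, fake_acoustic)       # D's real/fake logit
    print(fake_acoustic.shape, score.shape)
```

In training, G and D would be optimized adversarially (e.g., with a binary cross-entropy loss on D's logits), so that G's outputs become statistically indistinguishable from real acoustic traces conditioned on the elastic input.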