One of the most effective strategies for improving image quality is averaging several pieces of information. The method is particularly useful when correlated patterns exist across the samples: averaging diminishes the effect of uncorrelated information such as Gaussian noise, whereas imperfectly aligned data can have devastating consequences on the resulting images. As simple as it may appear at first, applying this strategy to medical images is not always straightforward. Diffusion Magnetic Resonance Imaging (dMRI) is one of those challenging cases. dMRI offers the operator a myriad of possibilities with countless applications, and it is the preferred technique for discovering and analyzing neuronal-related pathologies. However, its potential for human assessment in the clinic has not been fully exploited due to the long scanning times required. The technique is also valued in small-animal models of neurodegenerative pathologies, where several imaging sessions are performed to monitor the progression of the disease. Here again, the resulting images suffer from poor quality because of the high spatial resolution imposed by the small dimensions of the subjects. In this work, we present a theoretical framework demonstrating that dMRI images can be quality-enhanced in a post-acquisition stage. Moreover, we postulate that quality-improving efforts are better invested in the tensor field than in the commonly used raw-data domain. To support these claims, two cases with real data are presented in the document.
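The noise-suppression property of averaging can be illustrated with a minimal sketch: for N independent acquisitions corrupted by additive Gaussian noise, the noise standard deviation of the mean shrinks by roughly 1/sqrt(N). The signal shape, noise level, and repeat count below are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "image": a smooth signal corrupted by additive Gaussian noise.
signal = np.sin(np.linspace(0.0, 2.0 * np.pi, 256))
sigma = 0.5       # assumed per-acquisition noise standard deviation
n_repeats = 16    # assumed number of repeated, perfectly aligned acquisitions

# Simulate n_repeats independent noisy acquisitions of the same signal.
acquisitions = signal + rng.normal(0.0, sigma, size=(n_repeats, signal.size))

# Averaging attenuates uncorrelated noise: its std shrinks by ~1/sqrt(n_repeats).
averaged = acquisitions.mean(axis=0)

noise_single = np.std(acquisitions[0] - signal)
noise_avg = np.std(averaged - signal)
print(f"single-acquisition noise std: {noise_single:.3f}")
print(f"averaged noise std:           {noise_avg:.3f}")
```

Note that this gain assumes the acquisitions are perfectly overlapped; any misalignment between repeats blurs the averaged result instead of cleaning it, which is precisely the difficulty in dMRI.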