No-reference underwater image quality assessment (NR-UWIQA) plays an important role in the analysis of underwater imaging systems and in the development of underwater image processing algorithms. In this paper, we propose a novel, lightweight no-reference (NR) underwater image quality assessment method based on a Convolutional Neural Network (CNN) architecture. We train the CNN model on patches of underwater images. Rather than feeding the model patches selected at random, we choose the most perceptually relevant patches by exploiting properties of the human visual system (HVS), in particular visual saliency. The selected patches are then passed to the CNN for quality estimation. The proposed method requires no reference content to estimate quality. To train and test it, we use the UID-LEIA dataset, which contains underwater images captured under varying water conditions. Experimental results show that the proposed method outperforms existing underwater image quality assessment methods.
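To make the saliency-guided patch-selection step concrete, the sketch below shows one plausible realization of it. The choice of saliency model (spectral residual), the patch size (32), the number of retained patches (16), and the function names are all illustrative assumptions rather than details taken from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spectral_residual_saliency(gray):
    """Saliency map via the spectral-residual method (Hou & Zhang, 2007).
    NOTE: stand-in for whatever HVS saliency model the paper actually uses."""
    fft = np.fft.fft2(gray)
    log_amp = np.log(np.abs(fft) + 1e-8)
    phase = np.angle(fft)
    # The spectral residual is the log-amplitude minus its local mean.
    residual = log_amp - uniform_filter(log_amp, size=3)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = uniform_filter(sal, size=9)  # light smoothing
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

def select_salient_patches(img, patch=32, k=16):
    """Rank non-overlapping patches by mean saliency and keep the top k.
    Patch size and k are assumed hyperparameters, not taken from the paper."""
    gray = img.mean(axis=2) if img.ndim == 3 else img
    sal = spectral_residual_saliency(gray)
    h, w = gray.shape
    scored = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            scored.append((sal[y:y + patch, x:x + patch].mean(), y, x))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [img[y:y + patch, x:x + patch] for _, y, x in scored[:k]]
```

At inference time, the CNN would presumably predict a quality score for each selected patch, with the image-level score obtained by pooling (e.g., averaging) the patch scores; the paper's exact pooling strategy may differ.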