Visual Attention Models are usually evaluated on collections of natural images containing intentionally salient objects and rich contextual information. In the literature, however, few works have considered contextless datasets for modeling attention, and Visual Attention Models have not been thoroughly evaluated in both contextless and context-aware environments. In this paper, we compare the performance of several well-known bottom-up visual attention models on contextless and context-aware datasets, using the Pearson Correlation Coefficient to assess how accurately each model predicts eye fixations. The best algorithm outperforms the others, reaching 59.1% and 43.8% correlation with the ground truth on the contextless and context-aware datasets, respectively.
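
For concreteness, a minimal sketch of the evaluation metric follows, assuming the standard pixel-wise form of the Pearson Correlation Coefficient between a predicted saliency map and a ground-truth fixation map (the symbols $S$ and $G$ are introduced here for illustration and are not part of the original abstract):

\[
r(S, G) = \frac{\sum_{i=1}^{N} (S_i - \bar{S})(G_i - \bar{G})}{\sqrt{\sum_{i=1}^{N} (S_i - \bar{S})^2}\,\sqrt{\sum_{i=1}^{N} (G_i - \bar{G})^2}}
\]

where $S_i$ and $G_i$ denote the values of the $i$-th pixel in the predicted saliency map and the ground-truth fixation map, $\bar{S}$ and $\bar{G}$ are their respective means over all $N$ pixels, and $r \in [-1, 1]$, with higher values indicating closer agreement between prediction and ground truth.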