One of the most important tasks in building environment maps from partial information is finding a good alignment between pairs of point clouds representing consecutive frames. RANSAC and ICP are widely used to align pairs of frames: the former finds an initial transformation, which the latter refines. Reducing the alignment error in the first step can lower the computational cost of the second. A robot navigating indoors with an RGB-D camera encounters scenes under varying conditions. When the environment exhibits a high degree of image texture and a low degree of structure, better results are achieved using only color images; otherwise, depth information is better suited. In this paper, we contribute a new adaptive technique for effectively integrating the color and depth information of two consecutive frames during alignment. Our proposal automatically weights the relevance of each type of information according to the visual texture and structure of the scene.
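The adaptive weighting described above can be sketched with simple gradient-based measures of texture and structure. The scores and the blending rule below are illustrative assumptions for exposition only, not the criterion proposed in the paper:

```python
import numpy as np

def texture_score(gray):
    """Mean gradient magnitude of the intensity image: high for visually textured scenes."""
    gy, gx = np.gradient(gray.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def structure_score(depth):
    """Mean gradient magnitude of the depth map: high for geometrically rich scenes."""
    gy, gx = np.gradient(depth.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def color_weight(gray, depth, eps=1e-6):
    """Relative weight for color cues in [0, 1]; the depth weight is 1 - w.

    Hypothetical blending rule: the ratio of texture to total (texture + structure).
    """
    t = texture_score(gray)
    s = structure_score(depth)
    return t / (t + s + eps)
```

Under this sketch, a textured poster on a flat wall yields a weight near 1 (favoring color-based matching), while a textureless scene with strong depth variation yields a weight near 0 (favoring depth-based matching).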