In this paper we present a methodology for building a vision-based wearable system that assists blind people in navigation tasks in unknown indoor environments. The proposed system detects walkable space, obstacles, and objects of interest such as doors, chairs, staircases, and computers, and plans a path that allows the user to reach these objectives safely (purposeful navigation). The system consists of six modules: floor segmentation, occupancy-grid construction, obstacle avoidance, detection of objects of interest, path planning, and haptic feedback to the user. Given the large volume of data that a stereo camera delivers, the highly dynamic environment in which the person moves, and the need to provide immediate feedback to the user, we defined two constraints: (1) depth and color data must be processed in real time on a general-purpose graphics processing unit (GPGPU), and (2) images must be processed on a wearable computer, that is, a small and lightweight processing device. Moreover, we defined trade-offs between sensing, computation, and system usability that allow the system to fulfill these requirements. The partial results in floor segmentation and in building the 2D occupancy grid encourage us to continue developing and improving the constituent modules.
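To make the occupancy-grid module concrete, the sketch below shows one common way to project a stereo depth map into a 2D top-down grid: pixels are back-projected to 3D with pinhole intrinsics, points above an assumed floor height mark their cells as occupied. The paper does not specify this step, so every name, parameter, and threshold here (fx, fy, cx, cy, cell_size, floor_y, obstacle_margin) is an illustrative assumption, not the authors' implementation.

```python
# A minimal sketch of occupancy-grid construction from a registered depth map.
# All names, parameters, and thresholds are illustrative assumptions, not the
# implementation described in the paper.
import numpy as np

def depth_to_occupancy_grid(depth, fx, fy, cx, cy,
                            cell_size=0.05, grid_dim=200,
                            floor_y=1.2, obstacle_margin=0.10):
    """Project a depth image (in meters) into a 2D top-down occupancy grid.

    fx, fy, cx, cy: pinhole intrinsics of the depth camera (assumed known).
    floor_y: assumed camera height above the floor, in meters.
    Points more than obstacle_margin above the floor mark cells as occupied.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    # Zero out invalid measurements so later arithmetic stays finite.
    z = np.where(np.isfinite(depth) & (depth > 0), depth, 0.0)
    valid = z > 0

    # Back-project pixels to 3D camera coordinates (x right, y down, z forward).
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy

    # Height above the floor: the camera sits floor_y meters above the ground
    # and y grows downward, so a point on the floor has height close to zero.
    height = floor_y - y
    obstacle = valid & (height > obstacle_margin)

    grid = np.zeros((grid_dim, grid_dim), dtype=np.uint8)  # 0 = free/unknown, 1 = occupied
    # Map lateral (x) and forward (z) coordinates to grid cells,
    # placing the camera at the bottom center of the grid.
    col = (x[obstacle] / cell_size + grid_dim / 2).astype(int)
    row = (grid_dim - 1 - z[obstacle] / cell_size).astype(int)
    inside = (col >= 0) & (col < grid_dim) & (row >= 0) & (row < grid_dim)
    grid[row[inside], col[inside]] = 1
    return grid
```

In a real-time wearable pipeline, this per-pixel back-projection is exactly the kind of data-parallel workload that would be moved to the GPGPU, consistent with the first constraint above.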