The design of visual robotic behaviors constitutes a substantial challenge. It requires drawing meaningful relationships and constraints between the acquired visual perception and the geometry of the environment, both empirically and programmatically. This contribution proposes a novel robot learning framework to classify and acquire scenario-specific autonomous behaviors through demonstration. During demonstration, robocentric 3D range and omnidirectional images are recorded as training instances of typical robot navigation situations pertaining to different contexts in multiple indoor scenarios. A programming by demonstration approach generalizes the demonstrated trajectories into a mapping from visual features extracted from the omnidirectional image to a corresponding robot motion. The approach is able to distinguish among different traversal scenarios and further identifies the best-matching context within a scenario to predict an appropriate robot motion. For comparison with context matching, the behaviors are also trained by means of an artificial neural network, and its generalization ability is evaluated against the former. Experimental validation on a mobile robot indicates that the acquired visual behavior is robust and generalizes to meaningful actions beyond the specific environments and scenarios presented during training.