This paper presents a novel approach to floor obstacle segmentation in omnidirectional images that rests upon the fusion of multiple classifications generated by heterogeneous segmentation schemes. The individual naive Bayes classifiers rely on different features and cues to determine a pixel's class label. Ground truth data for training and testing the classifiers are obtained from the superposition of 3D scans captured by a photonic mixer device camera. The classification is further supported by edge detection, which indicates the presence of obstacles, and by sonar range data. The complementary expert decisions are aggregated by stacked generalization, behavior knowledge space, or voting combination. The combined floor classifier achieves a true positive rate of up to 0.96 at a false positive rate of only 0.03. Robust robot navigation is accomplished by arbitration between a reactive obstacle avoidance behavior and a corridor following behavior, using the robot's local free space as the perceptual input.
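As a minimal illustration of the combination step described above, the sketch below stacks the per-pixel floor probabilities of several naive Bayes experts and feeds them to a meta-classifier (stacked generalization). The cue names, feature dimensions, and synthetic data are assumptions for illustration only; they are not the paper's actual features or implementation.

```python
# Minimal sketch (not the authors' implementation) of fusing per-pixel
# naive Bayes experts by stacked generalization. All feature names,
# dimensions, and data below are hypothetical.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic per-pixel feature vectors for three hypothetical cues
# (e.g. colour, texture, edge response); label 1 = floor, 0 = obstacle.
n = 5000
labels = rng.integers(0, 2, size=n)
color   = labels[:, None] * 0.8 + rng.normal(0, 0.5, size=(n, 3))
texture = labels[:, None] * 0.6 + rng.normal(0, 0.7, size=(n, 4))
edges   = labels[:, None] * 0.5 + rng.normal(0, 0.9, size=(n, 2))
cues = [color, texture, edges]

# Split once so the base experts and the meta-learner see disjoint data.
idx_base, idx_meta = train_test_split(np.arange(n), test_size=0.5, random_state=0)

# Level 0: one naive Bayes expert per cue.
experts = [GaussianNB().fit(c[idx_base], labels[idx_base]) for c in cues]

# Level 1: stack the experts' floor probabilities and train a meta-classifier.
meta_features = np.column_stack(
    [e.predict_proba(c[idx_meta])[:, 1] for e, c in zip(experts, cues)]
)
meta = LogisticRegression().fit(meta_features, labels[idx_meta])

def classify(pixel_cues):
    """Combined decision for new pixels: run every expert, then the meta-learner."""
    probs = np.column_stack(
        [e.predict_proba(c)[:, 1] for e, c in zip(experts, pixel_cues)]
    )
    return meta.predict(probs)  # 1 = floor, 0 = obstacle
```

A voting or behavior knowledge space combiner, as also mentioned in the abstract, would replace the logistic regression meta-learner with a majority rule or a lookup table over the experts' joint decisions, respectively.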