This work evaluates strategies to reduce the computational cost of Gabriel graph-based classifiers in order to make them more suitable for hardware implementation. An analysis of the impact of bit precision provides insight into the model's robustness under lower-precision arithmetic. Additionally, a parallelization technique is proposed to improve the efficiency of the support-edge computation. The results show that the lower-precision models are statistically equivalent to the reference double-precision ones, and that the proposed parallel algorithm significantly reduces running time on large datasets while maintaining accuracy.
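To make the object of the parallelization concrete, the following is a minimal sketch of the computation the abstract refers to, not the paper's proposed algorithm. It assumes the standard Gabriel graph criterion (two points are adjacent when no third point lies inside the ball having their segment as diameter) and assumes that "support edges" are Gabriel edges joining samples of different classes; the function names, the per-point work split, and the `workers` parameter are illustrative choices only.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor


def is_gabriel_edge(X, i, j):
    """Gabriel criterion: (i, j) is an edge iff no third point k satisfies
    ||x_i - x_k||^2 + ||x_j - x_k||^2 < ||x_i - x_j||^2."""
    d_ij = np.sum((X[i] - X[j]) ** 2)
    for k in range(len(X)):
        if k == i or k == j:
            continue
        if np.sum((X[i] - X[k]) ** 2) + np.sum((X[j] - X[k]) ** 2) < d_ij:
            return False
    return True


def support_edges_for_point(args):
    """Support edges incident to point i: Gabriel edges whose endpoints
    carry different class labels (assumed definition)."""
    X, y, i = args
    return [(i, j) for j in range(i + 1, len(X))
            if y[i] != y[j] and is_gabriel_edge(X, i, j)]


def parallel_support_edges(X, y, workers=4):
    """Naive data-parallel split: each process handles the pairs of one point."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(support_edges_for_point,
                           [(X, y, i) for i in range(len(X))])
    return [e for chunk in results for e in chunk]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    print(len(parallel_support_edges(X, y)))
```

The brute-force pair test is cubic in the number of samples, which is why distributing the per-point searches across workers (or hardware units) pays off on large datasets; the edges found are identical to the sequential result, so accuracy is unaffected.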