We propose an abstract self-bounding genetic algorithm that can be applied to a variety of machine learning problems. The bound on the generalization error output by our algorithm is based on Rademacher penalization, a data-driven penalization technique. We prove probabilistic oracle inequalities for the theoretical risk of the resulting estimators. This is done by comparing the performance of an idealized genetic algorithm, whose fitness function is based on the generalization error, with that of an empirical genetic algorithm based on Rademacher penalization. The inequalities show that, although the idealized algorithm cannot be implemented (the generalization error is not computable), the empirical algorithm performs almost as well as the idealized algorithm would.
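To make the data-driven penalty concrete, the following is a minimal sketch (not the paper's algorithm) of how an empirical Rademacher complexity can be estimated by Monte Carlo over random sign vectors and added to the empirical error to form a penalized fitness. The toy class of threshold classifiers on [0,1] and all function names are illustrative assumptions.

```python
import random

def empirical_rademacher(hypotheses, X, n_rounds=200, seed=0):
    # Monte Carlo estimate of the empirical Rademacher complexity
    #   R_n(H) = E_sigma [ sup_{h in H} (1/n) sum_i sigma_i h(x_i) ],
    # averaging the supremum over random sign vectors sigma in {-1,+1}^n.
    rng = random.Random(seed)
    n = len(X)
    total = 0.0
    for _ in range(n_rounds):
        sigma = [rng.choice((-1, 1)) for _ in range(n)]
        total += max(sum(s * h(x) for s, x in zip(sigma, X)) / n
                     for h in hypotheses)
    return total / n_rounds

def penalized_fitness(h, hypotheses, X, y):
    # Data-driven fitness: empirical error of the individual h plus the
    # Rademacher penalty of the class the genetic algorithm searches over.
    emp_err = sum(h(x) != t for x, t in zip(X, y)) / len(X)
    return emp_err + empirical_rademacher(hypotheses, X)

# Toy hypothesis class: threshold classifiers on [0,1].
thresholds = [i / 10 for i in range(11)]
H = [lambda x, t=t: 1 if x >= t else 0 for t in thresholds]

rng = random.Random(1)
X = [rng.random() for _ in range(50)]
y = [1 if x >= 0.5 else 0 for x in X]  # labels given by the threshold 0.5

pen = empirical_rademacher(H, X)
fit = penalized_fitness(H[5], H, X, y)  # H[5] has threshold 0.5, zero empirical error
```

A genetic algorithm could then rank its population by `penalized_fitness` instead of the (uncomputable) generalization error, which is the substitution the oracle inequalities quantify.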