Abstract

An essential task of automated machine learning (AutoML) is to automatically find the pipeline with the best generalization performance on a given dataset. This problem has been addressed with sophisticated black-box optimization techniques such as Bayesian optimization, grammar-based genetic algorithms, and tree search algorithms. Most current approaches are motivated by the assumption that optimizing the components of a pipeline in isolation may yield sub-optimal results. We present Naive AutoML, an approach that precisely realizes such an in-isolation optimization of the different components of a pre-defined pipeline scheme. The returned pipeline is obtained by simply taking the best algorithm for each slot. The isolated optimization leads to substantially reduced search spaces and, surprisingly, yields performance comparable to, and sometimes even better than, that of current state-of-the-art optimizers.
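The slot-wise optimization described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the pipeline scheme (scaler, classifier), the candidate algorithms per slot, and the fixed defaults used while a slot is optimized are all illustrative assumptions.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Illustrative candidate algorithms per pipeline slot (not the paper's
# actual search space).
slots = {
    "scaler": [StandardScaler(), MinMaxScaler()],
    "classifier": [
        LogisticRegression(max_iter=1000),
        DecisionTreeClassifier(random_state=0),
        KNeighborsClassifier(),
    ],
}

# Fixed defaults used for the slots that are NOT currently being optimized
# (an assumed choice for this sketch).
defaults = {"scaler": StandardScaler(),
            "classifier": LogisticRegression(max_iter=1000)}

# In-isolation optimization: score each candidate for one slot while every
# other slot keeps its default, then keep the best candidate per slot.
best = {}
for slot, candidates in slots.items():
    scores = []
    for cand in candidates:
        steps = [(name, cand if name == slot else default)
                 for name, default in defaults.items()]
        scores.append(cross_val_score(Pipeline(steps), X, y, cv=3).mean())
    best[slot] = candidates[scores.index(max(scores))]

# The returned pipeline is simply the per-slot winners composed together.
final_pipeline = Pipeline([("scaler", best["scaler"]),
                           ("classifier", best["classifier"])])
print(final_pipeline)
```

Note that no combination of candidates across slots is ever evaluated jointly; the search space shrinks from the product of the slot sizes to their sum, which is the source of the efficiency gain the abstract refers to.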