The evolution of technology and society brings many benefits to both companies and individuals. Nevertheless, as Artificial Intelligence advances, the fight against racism, xenophobia, and other forms of discrimination continues to play an important role in contemporary social struggles and public debate. It is therefore necessary that the development of new technologies, and especially of Machine Learning and Artificial Intelligence systems, does not stimulate or amplify these biases but rather reduces them. To this end, a consistent method to identify and evaluate such biases is required. This article proposes a general model for post-development bias assessment of Machine Learning models. To validate its capabilities, two models, GPT-2 and GPT-3, are evaluated, yielding positive results that speak well of their training processes. We found that the likelihood of a negative comment decreases by nearly 8.6 percentage points when the text is generated by the models and contains a race-related word, compared with the base case in which the text is written by a human and also contains race-related words.