Two models are provided to classify NLP-related abstracts into one of five categories, depending on the NLP approach used:

- Category 0 (Rule-Based): the approach relies on rules or symbolic analysis.
- Category 1 (Statistical Methods): the approach uses statistical methods such as bag-of-words, n-grams, and TF-IDF, along with other machine learning techniques like SVMs, logistic regression, and LDA. Shallow neural network models such as word2vec also belong in this category.
- Category 2 (Deep Learning): the approach uses deep learning and other deep neural network architectures such as RNNs, CNNs, and LSTMs.
- Category 3 (Transformer Models): the approach uses transformer-based models such as BERT, GPT, and T5.
- Category 4 (No Specific Model): the abstract does not mention a particular model or technique. Among others, framework analyses, surveys, papers centered on the computer vision component of NLP, and dataset proposals fall into this category.

Note that the classification may be imprecise and is not strictly defined; it should be used only as a starting point.

A collection of articles classified with these models can be found at: Canchila, Santiago; Meneses-Eraso, Carlos; Casanoves-Boix, Javier; Cortés-Pellicer, Pascual; Castelló-Sirvent, Fernando, 2023, "Indexed NLP Article Metadata Dataset", https://doi.org/10.7910/DVN/5YIGNG, Harvard Dataverse, V1.

Please use the setfit library to load the SetFit model and the transformers library to load the BERT model. Please refer to the original papers for model specifications and citations.
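As a minimal sketch of how loading could look (the repository IDs below are placeholders, since the exact model names are not stated here; the mapping from predicted labels to the categories above is also an assumption to verify against the actual model configs):

```python
from setfit import SetFitModel
from transformers import pipeline

abstract = "We propose a transformer-based approach to abstract classification ..."

# Load the SetFit model (replace the placeholder repo ID with the actual one).
setfit_model = SetFitModel.from_pretrained("your-username/setfit-nlp-abstract-classifier")
setfit_pred = setfit_model.predict([abstract])  # predicted category label(s)

# Load the fine-tuned BERT model via a text-classification pipeline
# (again, replace the placeholder repo ID with the actual one).
bert_classifier = pipeline("text-classification", model="your-username/bert-nlp-abstract-classifier")
bert_pred = bert_classifier(abstract)  # e.g. [{"label": "LABEL_3", "score": 0.97}]

print(setfit_pred, bert_pred)
```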