Various methods of sign language recognition have been proposed, some based on devices that must be worn by the person to identify signs; these have the disadvantage of producing noisy output, and constant use wears out the instrument's components. This article proposes a model for recognizing the static signs of the LSC alphabet. The signs are captured using a device with an infrared camera called the Leap Motion, capable of graphically capturing the hand displayed in front of it, and a database was built from 38 people. The recognition system is developed using three (3) computational intelligence techniques: Support Vector Machine (SVM), Multilayer Perceptron (MLP), and Random Forest (RF), together with a stacking model that combines the results of the individual techniques. The stacking model achieves a recognition rate of 97.41%, surpassing the SVM by 1.49%, the MLP by 3.91%, and the RF by 4.08%.
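
The stacking approach described above, in which a meta-learner combines the predictions of the SVM, MLP, and RF base models, can be sketched as follows. This is a minimal illustration using scikit-learn on synthetic data; the dataset, feature dimensions, and hyperparameters here are placeholders, not the ones used in the article.

```python
# Hedged sketch of a stacking ensemble over SVM, MLP, and RF base learners.
# Synthetic data stands in for the Leap Motion hand features; the real
# database (static LSC signs from 38 people) is not reproduced here.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder classification problem (5 classes instead of the full alphabet).
X, y = make_classification(n_samples=600, n_features=20, n_informative=12,
                           n_classes=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# The three base techniques named in the abstract.
base_learners = [
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
    ("mlp", make_pipeline(StandardScaler(),
                          MLPClassifier(max_iter=1000, random_state=0))),
    ("rf", RandomForestClassifier(random_state=0)),
]

# Stacking: base-model predictions become inputs to a final meta-learner,
# fitted with internal cross-validation to avoid leakage.
stack = StackingClassifier(estimators=base_learners, cv=5)
stack.fit(X_train, y_train)
print(round(stack.score(X_test, y_test), 3))
```

The key design point is that the meta-learner is trained on out-of-fold predictions of the base models (the `cv=5` argument), so it learns when to trust each technique rather than simply averaging them.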