Speech-based biometrics is one of the most effective approaches to identity management and is among the methods preferred by users and companies for its flexibility, speed, and reduced cost. Current state-of-the-art speaker recognition systems are known to depend strongly on the condition of the speech material provided as input and can be affected by unexpected variability presented during testing, such as environmental noise, changes in vocal effort, or pathological speech due to speech and/or voice disorders. In this chapter, we are particularly interested in understanding the effects of dysarthric speech on automatic speaker identification performance. We explore several state-of-the-art feature representations, including i-vectors, bottleneck neural-network-based features, and a covariance-based feature representation. High-level features, such as i-vectors and covariance-based features, are built on top of four different low-level representations of the dysarthric/control speech signal. When evaluated on the TORGO and NEMOURS databases, our best single system achieved an accuracy of 98.7%, outperforming results previously reported for these databases.
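To make the covariance-based representation concrete, the following is a minimal sketch (not the chapter's exact method): an utterance is summarized by the covariance matrix of its frame-level low-level features, vectorized via its upper triangle into a fixed-length descriptor. The function name, use of NumPy, and the 13-dimensional MFCC-like input are illustrative assumptions.

```python
import numpy as np

def covariance_feature(frames):
    """Summarize an utterance by the covariance of its frame-level features.

    frames: (n_frames, n_dims) array of low-level features (e.g. MFCCs).
    Returns the upper-triangular entries of the (n_dims, n_dims) covariance
    matrix, giving a fixed-length vector regardless of utterance duration.
    """
    cov = np.cov(frames, rowvar=False)          # (n_dims, n_dims) covariance
    iu = np.triu_indices(cov.shape[0])          # upper triangle incl. diagonal
    return cov[iu]

# Illustrative usage: 200 frames of 13-dimensional features yield a
# 13 * 14 / 2 = 91-dimensional utterance-level descriptor.
rng = np.random.default_rng(0)
descriptor = covariance_feature(rng.standard_normal((200, 13)))
print(descriptor.shape)  # (91,)
```

Because the vector length depends only on the feature dimensionality, utterances of different durations map to descriptors of the same size, which makes this representation convenient as input to a standard classifier.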