With the growing amount of data being generated and collected, it is increasingly common to face scenarios with large-scale labeled data but limited computational resources, making it infeasible to train predictive models using all available samples. Faced with this reality, we adopt the Machine Teaching paradigm as an alternative for obtaining effective models using a representative subset of the available data. We first consider a central problem of the Machine Teaching area: finding the smallest set of samples necessary to lead a learner to a given target hypothesis h*. We adopt the black-box learner teaching model introduced in (DASGUPTA et al., 2019), where teaching is done interactively without any knowledge of the learner's algorithm or its hypothesis class, beyond the assumption that the class contains the target hypothesis h*. We refine some existing results for this model and study its variants. In particular, we extend a result from (DASGUPTA et al., 2019) to the more realistic scenario where h* may not be contained in the learner's hypothesis class, so that the teacher's objective becomes making the learner converge to the best available approximation of h*. We also consider the scenario with non-adversarial black-box learners and show that better results can be obtained for learners that move to the next hypothesis smoothly, preferring hypotheses close to the current one. Next, we address the Time-Constrained Learning problem, in which we have a huge dataset and a time limit to train a given learner on it. We propose TCT, an algorithm for this task developed based on Machine Teaching principles. We present an experimental study involving 5 different learners and 20 datasets, showing that TCT outperforms the alternative methods considered. Finally, we prove approximation guarantees for a simplified version of TCT.
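The interactive black-box setting above can be pictured as a loop in which the teacher shows labeled examples and observes only the learner's predictions, never its internals. The following is a minimal illustrative sketch of such a loop; the learner interface, the counterexample-based teacher, and the toy memorizing learner are all assumptions made here for illustration, not the protocol or bounds of (DASGUPTA et al., 2019).

```python
import random

def teach(learner, pool, target, max_rounds=1000):
    """Hedged sketch: show the learner one counterexample per round
    until its predictions agree with the target on the whole pool.
    The teacher treats the learner as a black box (predict/update only)."""
    shown = []
    for _ in range(max_rounds):
        # Points where the learner still disagrees with the target hypothesis.
        mistakes = [x for x in pool if learner.predict(x) != target(x)]
        if not mistakes:
            return shown  # the shown examples form a teaching set
        x = random.choice(mistakes)   # pick any counterexample
        learner.update(x, target(x))  # show it with its true label
        shown.append(x)
    return shown

class TableLearner:
    """Toy memorizing learner, used only to make the sketch runnable."""
    def __init__(self, default=0):
        self.table, self.default = {}, default
    def predict(self, x):
        return self.table.get(x, self.default)
    def update(self, x, y):
        self.table[x] = y
```

For instance, teaching the threshold target `1 if x >= 5 else 0` over the pool `range(10)` to a fresh `TableLearner` terminates once the learner agrees with the target everywhere; real black-box learners generalize between rounds, which is what makes small teaching sets possible.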