This study investigates the impact of domain-adapted pretraining and cross-lingual transfer on programming language understanding tasks. The paper describes an approach that leverages the code subset of the Dolma corpus, focusing on Python, Java, and JavaScript, to perform domain-adapted pretraining of a pre-trained CodeBERT model. Using a combination of pretraining, fine-tuning, and evaluation, the authors assess the adapted models on three essential tasks from the CodeXGLUE benchmark: code completion, code summarization, and code search. In addition, the authors examine the zero-shot cross-lingual transferability of the models by fine-tuning on one language and testing on the others. The results show that domain-adapted pretraining yields consistent improvements across all three tasks. The cross-lingual evaluation reveals partial transferability, although a significant gap remains between monolingual and cross-lingual performance. The approach makes it possible to conduct valuable empirical studies even with limited computational resources, contributing to a broader understanding of pretraining and transfer learning in the programming domain. The novelty of this work lies in demonstrating the feasibility and efficiency of conducting programming language understanding research in resource-constrained settings while providing insights for future research.

Keywords: programming language understanding, domain-adapted pretraining, cross-lingual transfer, Dolma, CodeXGLUE benchmark

DOI: https://doi.org/10.35741/issn.0258-2724.59.2.26
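To make the described pipeline concrete, the following is a minimal sketch (not the authors' released code) of the domain-adapted pretraining step, assuming the Hugging Face transformers and datasets libraries, a standard masked-language-modeling objective, and a hypothetical local text dump of the Dolma code subset; file names, hyperparameters, and the choice of objective are illustrative assumptions only.

# Minimal sketch (assumptions, not the authors' implementation):
# continued MLM pretraining of CodeBERT on a code corpus, prior to
# task-specific fine-tuning on CodeXGLUE.
from transformers import (
    RobertaForMaskedLM,
    RobertaTokenizerFast,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

# CodeBERT is based on the RoBERTa architecture and tokenizer.
tokenizer = RobertaTokenizerFast.from_pretrained("microsoft/codebert-base")
model = RobertaForMaskedLM.from_pretrained("microsoft/codebert-base")

# Hypothetical local file holding the Python/Java/JavaScript subset of Dolma.
corpus = load_dataset("text", data_files={"train": "dolma_code_subset.txt"})

def tokenize(batch):
    # Truncate long files to the model's 512-token context window.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

# Standard 15% token-masking objective for continued pretraining.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="codebert-dolma-adapted",
    per_device_train_batch_size=8,
    num_train_epochs=1,          # modest budget for a resource-constrained run
    learning_rate=5e-5,
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
).train()

The resulting checkpoint in codebert-dolma-adapted would then be fine-tuned separately on each CodeXGLUE task, and for the cross-lingual experiments fine-tuned on one language's training split and evaluated zero-shot on the test splits of the other two.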