We consider a discrete-time Markov control model with countable state and action spaces. Using the value function of a suitable long-run average reward problem, we study several reachability/controllability problems. First, we characterize the domain of attraction and the escape set of the system, as well as a generalization called the <inline-formula><tex-math notation="LaTeX">$p$</tex-math></inline-formula>-domain of attraction, in terms of this value function. Next, we solve the problem of maximizing the probability of reaching a set <inline-formula><tex-math notation="LaTeX">$A$</tex-math></inline-formula> while avoiding a set <inline-formula><tex-math notation="LaTeX">$B$</tex-math></inline-formula>. Finally, we consider a constrained version of the previous problem, in which the probability of reaching the set <inline-formula><tex-math notation="LaTeX">$B$</tex-math></inline-formula> is required to remain bounded. When the state and action spaces are finite, we solve these problems via linear programming formulations.