Research on collective problem-solving usually assumes fixed communication structures and explores their effects. In real settings, by contrast, individuals may modify their set of connections in search of information and feasible solutions. This paper examines how groups collectively search a solution space in the presence of dynamic communication structures and individual-level learning. To this end, we build an agent-based computational model in which individuals (i) simultaneously search for solutions over a complex space (i.e., an NK landscape), (ii) are initially connected to each other according to a given network configuration, (iii) are endowed with learning capabilities (through a reinforcement learning algorithm), and (iv) update (i.e., create or sever) their links to other agents based on what they have learned. Results reveal the conditions under which performance differences emerge as we vary the number of agents, the complexity of the space, the agents' screening capabilities, and the reinforcement learning parameters.
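To make the search component in (i) concrete, the following is a minimal sketch of an NK-landscape fitness evaluation. It is not the authors' implementation; the class name, parameters, and random contribution tables are illustrative assumptions that only follow the standard NK-model convention (N decision bits, each interacting with K others).

```python
# Illustrative sketch of an NK landscape, NOT the paper's implementation.
# N = number of binary decision variables; K = number of interacting
# neighbors per variable (higher K means a more rugged, complex space).
import itertools
import random


class NKLandscape:
    def __init__(self, N, K, seed=None):
        self.N = N
        self.K = K
        rng = random.Random(seed)
        # For each bit i, draw the K other bits that influence its contribution.
        self.neighbors = [
            rng.sample([j for j in range(N) if j != i], K) for i in range(N)
        ]
        # Random fitness contribution for each bit and each joint state of
        # that bit plus its K neighbors.
        self.contrib = [
            {state: rng.random() for state in itertools.product((0, 1), repeat=K + 1)}
            for _ in range(N)
        ]

    def fitness(self, solution):
        """Average the per-bit contributions, each depending on the bit and its K neighbors."""
        total = 0.0
        for i in range(self.N):
            key = (solution[i],) + tuple(solution[j] for j in self.neighbors[i])
            total += self.contrib[i][key]
        return total / self.N


# Usage: evaluate a random bit-string solution on a small landscape.
landscape = NKLandscape(N=10, K=3, seed=42)
candidate = [random.randint(0, 1) for _ in range(10)]
print(landscape.fitness(candidate))
```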