In this paper we propose a strategy for fusing visual features and unstructured text in a medical image retrieval system. The main goal of this work is to investigate whether the semantic information in text descriptions can be transferred to a visual similarity measure, so that retrieval can follow the query-by-example paradigm rather than keyword-based search. We achieve this by using Latent Semantic Kernels to generate a new representation space whose coordinates define latent concepts that merge visual patterns and textual terms. The proposed method is evaluated on a medical image collection from the ImageCLEFmed08 challenge, using a set of different image queries. The results show that the fused visual-text approach improves retrieval performance over using visual information alone.
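The abstract does not spell out how the fused latent space is built; the following minimal sketch illustrates one common formulation, assuming the fused kernel is a simple sum of a visual and a textual Gram matrix over the training images and that latent concepts are extracted by eigendecomposition of that kernel. All feature matrices and dimensions below are hypothetical stand-ins, not the paper's actual pipeline.

```python
import numpy as np

def latent_semantic_kernel(K, k):
    """Project a symmetric kernel (Gram) matrix onto its top-k latent dimensions.

    K : (n, n) kernel matrix over the training images
    k : number of latent concepts to keep
    Returns the (n, k) latent-space coordinates of the training items.
    """
    # Eigendecompose K = V diag(w) V^T; eigh returns eigenvalues in ascending order.
    w, V = np.linalg.eigh(K)
    idx = np.argsort(w)[::-1][:k]          # indices of the top-k eigenvalues
    w, V = w[idx], V[:, idx]
    # Latent coordinates: scale each eigenvector by the square root of its
    # eigenvalue (clipped at zero to guard against numerical negatives).
    return V * np.sqrt(np.clip(w, 0.0, None))

# Hypothetical fusion: linear kernels on stand-in visual and text features,
# summed into a single Gram matrix whose latent concepts mix both modalities.
rng = np.random.default_rng(0)
X_vis = rng.random((50, 128))              # stand-in visual descriptors
X_txt = rng.random((50, 300))              # stand-in text features (e.g. TF-IDF)
K_fused = X_vis @ X_vis.T + X_txt @ X_txt.T
Z = latent_semantic_kernel(K_fused, k=10)  # 50 images x 10 latent concepts

# Query-by-example: rank images by cosine similarity in the latent space.
q = Z[0]
scores = Z @ q / (np.linalg.norm(Z, axis=1) * np.linalg.norm(q) + 1e-12)
ranking = np.argsort(scores)[::-1]
```

In a sketch like this, the text features only shape the latent space at training time; at query time an example image can be ranked purely from its latent coordinates, which is what allows textual semantics to influence a visual similarity measure.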