Nearest-neighbor (NN) classification has been widely used in many research areas, as it is a very intuitive technique. As long as we can define a similarity or distance between two objects, we can apply NN, which makes it suitable even for non-vectorial data such as graphs. An alternative to NN is the dissimilarity space [2], where distances are used as features, i.e. an object is represented as the vector of its distances to a set of prototypes or landmarks. This representation can be used with any classifier, and has been shown to be potentially more effective than NN classification on the same dissimilarities. Defining distance measures on complex objects is not a trivial task. Due to human judgments, suboptimal matching procedures, or simply by construction, distance measures on non-vectorial data may often be asymmetric. A common solution for NN approaches is to symmetrize the measure by averaging the two directed distances [2]. In the dissimilarity space, however, symmetric measures are not required. We explore whether asymmetry is an artifact that needs to be removed, or an important source of information. This abstract highlights one example of informative asymmetric measures, covered in [1].
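The dissimilarity-space construction described above can be sketched in a few lines. This is a minimal illustration, not the paper's method: the asymmetric measure below (`asym_dist`, the summed positive part of the coordinate-wise difference) and the toy data are assumptions chosen only to show the mechanics of building distance-based feature vectors and the averaging symmetrization.

```python
import numpy as np

# Hypothetical asymmetric dissimilarity for illustration only (not the
# measure from the paper): the summed positive part of the difference.
def asym_dist(x, y):
    return float(np.sum(np.maximum(x - y, 0.0)))

rng = np.random.default_rng(0)
# Toy two-class data: 20 points around 0 and 20 points around 3.
X = np.vstack([rng.normal(0.0, 1.0, (20, 2)), rng.normal(3.0, 1.0, (20, 2))])
labels = np.array([0] * 20 + [1] * 20)

prototypes = X[::5]  # every fifth object serves as a prototype/landmark

# Dissimilarity-space representation: each object becomes the vector of
# its (possibly asymmetric) distances to the prototypes.
D = np.array([[asym_dist(x, p) for p in prototypes] for x in X])

# Symmetrized variant, as commonly done for NN: average the two directions.
D_sym = np.array([[0.5 * (asym_dist(x, p) + asym_dist(p, x))
                   for p in prototypes] for x in X])

# Rows of D (or D_sym) can now be fed to any vector-space classifier.
print(D.shape)  # (40, 8): 40 objects, 8 prototype features
```

Note that `D` keeps both directed distances' information only implicitly (one direction per entry), whereas symmetrizing into `D_sym` discards the difference between the two directions — the information whose value the abstract questions.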
Topic:
Data Management and Algorithms
Source Information:
Source: Belgium-Netherlands Conference on Artificial Intelligence