Automatically Assessing Code Understandability

Open Access
ID Minciencias: ART-0000437204-100
Ranking: ART-ART_A1

Abstract:

Understanding software is an inherent requirement for many maintenance and evolution tasks. Without a thorough understanding of the code, developers would not be able to fix bugs or add new features in a timely manner. Measuring code understandability could guide developers in writing better code and could also help estimate the effort required to modify code components. Unfortunately, no metrics have been designed to assess the understandability of code snippets. In this work, we perform an extensive evaluation of 121 existing and new code-related, documentation-related, and developer-related metrics. We try to (i) correlate each metric with understandability and (ii) build models that combine metrics to assess understandability. Using 444 human evaluations from 63 developers, we obtained a bold negative result: none of the 121 metrics we experimented with is able to capture code understandability, not even those assumed to assess apparently related quality attributes, such as code readability and complexity. While we observed some improvement when combining metrics into models, their effectiveness is still far from making them suitable for practical applications. Finally, we interviewed five professional developers to understand the factors that influence their ability to understand code snippets, aiming to identify possible new metrics.
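The abstract describes two analysis steps: (i) correlating individual metrics with understandability and (ii) combining metrics in predictive models. The following Python sketch illustrates, on synthetic stand-in data, how such an analysis could be set up; it is not the authors' implementation, and the metric names (loc, cyclomatic_complexity, readability) are purely illustrative.

# Hypothetical sketch (not the paper's code): correlate individual metrics with
# human understandability ratings, then combine metrics in a simple classifier.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: each row is a code snippet evaluated by developers.
n_snippets = 200
metrics = {
    "loc": rng.integers(5, 120, n_snippets).astype(float),
    "cyclomatic_complexity": rng.integers(1, 25, n_snippets).astype(float),
    "readability": rng.uniform(0.0, 1.0, n_snippets),
}
# Binary label: 1 if the snippet was understood correctly, 0 otherwise.
understood = rng.integers(0, 2, n_snippets)

# Step (i): correlate each metric with understandability.
for name, values in metrics.items():
    rho, p_value = spearmanr(values, understood)
    print(f"{name}: Spearman rho = {rho:+.3f} (p = {p_value:.3f})")

# Step (ii): combine metrics in a model and estimate its predictive accuracy.
X = np.column_stack(list(metrics.values()))
model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, understood, cv=5)
print(f"Combined model accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")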

Topic:

Software Engineering Research

Citations:

73


Altmetrics:

Paperbuzz Score: 0

Source Information:

Source: IEEE Transactions on Software Engineering
Quartile (publication year): Not available
Volume: 47
Issue: 3
Pages: 595 - 613
pISSN: Not available
ISSN: 0098-5589

Links and Identifiers:

Journal article