As self-driving vehicles develop, they may encounter situations in which every possible action leads to an accident, commonly referred to as ethical dilemmas. Research in this area typically assumes that such dilemmas arise only from a fatal failure of the braking system. This paper outlines an approach to ethical dilemmas involving autonomous vehicles (AVs) in which the AV dynamically adapts its decision policy to minimize the harm caused in a possible accident. The proposed model uses a Markov decision process (MDP) to define the actions that the AV should carry out. The AV's action is selected based on the potential harm to different stakeholders, such as passengers, pedestrians, cyclists, and other drivers. The ethical decision-making component is introduced by adapting the reward function to account for the ethical principles that should guide the decision process in dilemma situations. The presented case studies demonstrate that the AV performs numerous policy reevaluations, which adapt the vehicle's trajectory based on the ethical evaluation. These results emphasize the significance of ethical deliberation not only in critical situations such as brake failure but also in normal driving conditions.
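The abstract does not specify how the reward function is adapted, so the following is only a minimal sketch of the general idea: an ethically weighted reward that penalizes expected harm to each stakeholder group, reduced here to a single-step decision for brevity. All scenario names, harm estimates, and stakeholder weights are hypothetical, and the paper's full model would additionally involve transition dynamics and repeated policy reevaluation.

```python
# Minimal sketch (not the paper's implementation): a one-step decision where the
# reward is the negative weighted sum of expected harm per stakeholder group.
# All actions, harm estimates, and weights below are hypothetical.

ACTIONS = ["keep", "swerve_left", "swerve_right", "brake"]

# Hypothetical expected-harm estimates per action, per stakeholder group.
HARM = {
    "keep":         {"passenger": 0.0, "pedestrian": 0.8, "cyclist": 0.0},
    "swerve_left":  {"passenger": 0.1, "pedestrian": 0.0, "cyclist": 0.5},
    "swerve_right": {"passenger": 0.2, "pedestrian": 0.0, "cyclist": 0.0},
    "brake":        {"passenger": 0.3, "pedestrian": 0.4, "cyclist": 0.0},
}

# Weights encode an ethical principle; here all stakeholders count equally.
WEIGHTS = {"passenger": 1.0, "pedestrian": 1.0, "cyclist": 1.0}

def reward(action: str) -> float:
    """Negative weighted expected harm: less harm yields a higher reward."""
    return -sum(WEIGHTS[group] * h for group, h in HARM[action].items())

def best_action() -> str:
    """Pick the action with the highest (least harmful) reward."""
    return max(ACTIONS, key=reward)

if __name__ == "__main__":
    print(best_action())  # "swerve_right" under these hypothetical numbers
```

Changing the weights (for example, prioritizing vulnerable road users over passengers) would change which action is selected, which is the role the adapted reward function plays in the described model.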
Topic:
Ethics and Social Impacts of AI
Source Information:
Source: 2019 IEEE 4th Colombian Conference on Automatic Control (CCAC)