Updated: Nov 26, 2021
With new technologies comes scrutiny over the need for new laws and over the ethical considerations those technologies bring to society. Many ethical issues arise when considering how AVs should be programmed to act in an accident situation. The safety potential of AVs has sparked debate about whether non-autonomous driving should be banned for safety reasons once a sufficiently safe and reliable level of autonomous technology is achieved (Nyholm & Smids, 2016).
AV driving systems may have a large advantage over human drivers in avoiding crash situations before they occur, because constant 360-degree monitoring of the environment eliminates many human errors. However, AV technology still requires substantial technical improvement and legal consideration, notwithstanding that Tesla and Waymo have already brought Level 3 and Level 5 autonomous vehicles to market.
Several ethical guidelines and best-practice documents have been established to assist programmers in developing ethically sound crash algorithms; however, these guides have been criticised as too vague and incoherent (Ryan, 2019). A hypothetical scenario known as the 'trolley problem', which weighs whether to sacrifice one person to save a larger number, is one of the biggest challenges AV programmers face when developing the crash response of their autonomous technology.
An AI-controlled braking system optimised with thousands of real-time data points can demonstrably outperform human reaction time, as seen in Google's early test-vehicle programming (Gibbs, 2015). Nevertheless, AVs face many dilemmas in hypothetical situations, such as whether the vehicle should prioritise the safety of its occupants over pedestrians in a crash.
Of course, very few people would buy a car that prioritises the lives of others over those of the driver and passengers, but a car that aims to protect its occupants may crash into children or lighter vehicles (Contissa et al., 2017). On the other hand, as De Sio notes, if overall harm minimisation is prioritised, in a similar crash situation the vehicle's AI may hit a motorcyclist wearing a helmet rather than one without, because the helmeted rider would be more likely to survive (De Sio, 2017).
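Contissa et al.'s "ethical knob" can be pictured as a single user-set parameter that shifts the weighting between occupant harm and third-party harm. The following sketch is purely illustrative: the function names, harm scores, and knob scale are assumptions, not any real AV interface.

```python
# Hypothetical sketch of an "ethical knob" (after Contissa et al., 2017).
# All names and numbers are illustrative assumptions, not a real AV API.

def weighted_harm(occupant_harm: float, third_party_harm: float, knob: float) -> float:
    """knob = 0.0 is fully egoistic (only occupants count);
    knob = 1.0 is fully altruistic (only third parties count)."""
    return (1.0 - knob) * occupant_harm + knob * third_party_harm

def choose_maneuver(maneuvers: dict, knob: float) -> str:
    # Pick the manoeuvre with the lowest knob-weighted expected harm.
    return min(maneuvers, key=lambda m: weighted_harm(*maneuvers[m], knob))

# Two candidate manoeuvres with (occupant_harm, third_party_harm) estimates.
options = {"brake_straight": (0.8, 0.1), "swerve": (0.2, 0.6)}
print(choose_maneuver(options, knob=0.2))  # egoistic setting -> swerve
print(choose_maneuver(options, knob=0.9))  # altruistic setting -> brake_straight
```

The point of the sketch is that the same crash scenario yields opposite decisions depending on a setting the owner controls, which is precisely the legal and ethical difficulty the authors raise.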
This may lead to a chain reaction in which people take unsafe actions in order to become safer, because they know how the AV's algorithm will respond; for example, riding without a helmet so that vehicles avoid them in potential accident situations.
Analysts suggest that crash optimisation should be implemented, since some crashes will be unavoidable (Lin, 2015), with algorithms basing the vehicle's decision on the least determinable harm in a given situation (Ryan, 2019). Given that government regulations on AV decision-making algorithms lack clarity, vehicle manufacturers are likely to control and regulate this section of the market.
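A crash-optimisation rule of this kind amounts to minimising expected harm over the remaining options once a collision-free outcome is no longer possible. The sketch below is a minimal illustration under invented probabilities and severity scores; no real AV system exposes such a simple interface.

```python
# Illustrative crash-optimisation sketch: when no collision-free option
# remains, choose the action with the lowest expected harm.
# Probabilities and severity scores are invented placeholders.

def expected_harm(outcomes):
    """outcomes: list of (probability, severity) pairs for one action."""
    return sum(p * severity for p, severity in outcomes)

def least_harm_action(actions):
    # actions maps an action name to its list of possible outcomes.
    return min(actions, key=lambda a: expected_harm(actions[a]))

actions = {
    "hard_brake":  [(0.7, 2.0), (0.3, 5.0)],  # likely minor, possibly moderate harm
    "swerve_left": [(0.9, 1.0), (0.1, 9.0)],  # usually mild, small chance of severe harm
}
print(least_harm_action(actions))  # -> swerve_left (expected harm 1.8 vs 2.9)
```

Even this toy version surfaces the ethical tension in the text: the "optimal" action accepts a small probability of severe harm in exchange for a lower average, a trade-off a regulator or manufacturer must explicitly endorse.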
A recent draft from the US Department of Transportation on Automated Vehicle Policy (Transportation, 2016) suggests that AV manufacturers address ethical issues in a transparent and conscious manner, with input from other stakeholders. However, as Ryan (2019) argues, programmed responses remove control from the human being in driving circumstances and remove the choice and ability to make decisions about the vehicle's navigation.
These concerns relate directly to free will and moral responsibility, which are being displaced by the algorithms and AI of AVs developed by private entities (CNIL, 2018). Another issue arising with AVs is insurance and privacy.
Because AVs will be able to store an array of driver data such as habits, patterns, and behaviour, insurance could be tailored to individual performance, offering better premiums to safer, more conscientious drivers. Conversely, the large amount of data collected could infringe on personal privacy and data security: a negative consequence would be hackers stealing the data, while a possible positive one would be allowing police to access this information to reduce crime.
An approach currently promoted is DRIC ("data remains in-car"), which processes data within the vehicle rather than transmitting it to third parties (CNIL, 2018); however, it should be noted that technical challenges remain in implementing this type of technology.
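The DRIC idea can be sketched as a simple pattern: raw telemetry is reduced to a coarse aggregate inside the vehicle, and only that aggregate ever leaves it. The field names and score below are assumptions made up for illustration, not part of the CNIL compliance package.

```python
# Illustrative "data remains in-car" (DRIC) sketch: raw driving traces are
# processed on the vehicle and only a coarse aggregate is shared externally.
# Field names and thresholds are assumptions for illustration only.

def summarise_trip(speed_samples, hard_brake_events):
    """Runs inside the vehicle; the raw per-second samples never leave it."""
    avg_speed = sum(speed_samples) / len(speed_samples)
    return {"avg_speed_kmh": round(avg_speed, 1),
            "hard_brakes": hard_brake_events}

def share_with_insurer(summary):
    # Only a minimal aggregate crosses the vehicle boundary,
    # not location traces or full telemetry.
    return {"risk_events": summary["hard_brakes"]}

trip = summarise_trip([48.0, 52.0, 50.0], hard_brake_events=1)
print(share_with_insurer(trip))  # -> {'risk_events': 1}
```

The design choice mirrors the privacy trade-off in the text: the insurer still gets enough signal to price risk, while the detailed behavioural data stays in the car.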
De Sio, F. S., 2017. Killing by autonomous vehicles and the legal doctrine of necessity. Ethical Theory and Moral Practice, p. 425.
CNIL, 2018. Connected vehicles: A compliance package for a responsible use of data. [Online] Available at: https://www.cnil.fr/en/connected-vehicles-compliance-package-responsible-use-data [Accessed 29 May 2021].
Contissa, G., Lagioia, F. & Sartor, G., 2017. The ethical knob: Ethically-customisable automated vehicles and the law. Artificial Intelligence and Law.
Ryan, M., 2019. The Future of Transportation: Ethical, Legal, Social and Economic Impacts of Self-driving Vehicles in the Year 2025. Science and Engineering Ethics, 26, pp. 1185–1208.
Transportation, U. D. o., 2016. Federal Automated Vehicles Policy: Accelerating the Next Revolution in Roadway Safety.
Lin, P., 2015. Why Ethics Matters for Autonomous Cars. In: Autonomes Fahren: Technische, rechtliche und gesellschaftliche Aspekte. Springer, pp. 69–85.
Gibbs, S., 2015. The Guardian. [Online] Available at: https://www.theguardian.com/technology/2014/may/28/google-self-driving-car-how-does-it-work
Nyholm, S. & Smids, J., 2016. The ethics of accident-algorithms for self-driving cars: An applied trolley problem? Ethical Theory and Moral Practice.