Please join the Philosophy Department for Professor Buccella’s lecture at 5:00 PM on Monday, Oct. 21, in the CSB Auditorium.
From Dr. Buccella’s abstract:
It is often said that artificial intelligence (AI) will eventually take over some of our decision-making tasks and will deliver solutions to problems more or less directly and autonomously, even in high-stakes contexts like healthcare, finance, or the law. These are also the contexts most often cited to stress the importance of “AI ethics,” i.e., the field of AI research that aims to align machine decisions with human moral values so that, in contexts where the ‘right’ thing to do depends, among other things, on moral considerations, we can expect the machine to take such considerations into account in appropriate ways.
However, the question of whether AI systems are even capable of taking ethical considerations into account in the first place has not received much attention. Taking inspiration from Simone de Beauvoir’s ethical views, and in particular from the connection she proposes between ethical decisions and freedom, I argue for a way of characterizing ethical decisions from which it follows that AI systems are in principle not in a position to make ethically charged decisions. This conclusion has three important consequences concerning the role AI should play in ethical decision-making and the goals of AI ethics as a discipline. I explore these consequences in the second part of the paper. Drawing on previous work, I suggest that AI’s role in ethical decision-making lies in its impact on the material conditions underlying human ethical agency.
For more information, please contact Dan Werner (Philosophy) at wernerd@newpaltz.edu.