Topic: Introduction to AI and AI Ethics
Artificial intelligence, machine ethics, moral agency, moral machines
- Why is it important to consider ethics in AI development?
- What are the challenges in teaching AI systems to make ethical decisions?
- How can AI ethics impact society and human behavior?
Wendell Wallach and Colin Allen. Moral Machines: Teaching Robots Right From Wrong. Oxford: Oxford University Press, 2008. (Introduction + Chapter 3)
Wallach and Allen argue that as artificial intelligence (AI) and robotics continue to develop, there will be an increasing need for machines that are capable of making ethical decisions. This is because AI and robots are becoming increasingly integrated into areas of human life where decision-making involves complex moral and ethical considerations.
The authors explore the idea of building "artificial moral agents" (AMAs) capable of performing tasks that traditionally required human judgment. They propose that these machines should be guided by ethical considerations to ensure their decisions and actions are morally acceptable.
Reading: "Utilitarianism" - John Stuart Mill.
See the facilitation schedule.
Summary: This reading explores the fundamental principles of utilitarianism, an ethical theory that advocates for the greatest happiness for the greatest number. Students will examine how this theory could be applied in AI ethics, particularly in scenarios involving large-scale decision-making.
- How might a utilitarian evaluate the benefits and harms of AI?
- In what ways can the principle of utility guide decision-making processes in AI?
- How could a utilitarian perspective inform discussions around privacy and surveillance by AI technologies?
- What challenges might a utilitarian face when attempting to predict the long-term consequences of AI?
- How might utilitarian ethics inform our understanding of AI's impact on job displacement?
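The principle of utility behind the questions above can be made concrete with a small sketch. Assuming each option is represented as a toy mapping from stakeholders to utility scores (an invented illustration, not a real moral calculus), a utilitarian chooser simply selects the option with the greatest total utility:

```python
# Toy utilitarian chooser: each option maps stakeholders to utility
# scores (positive = benefit, negative = harm). The scenario and the
# numbers are illustrative assumptions, not a real ethical calculus.

def utilitarian_choice(options):
    """Return the name of the option whose total utility is greatest."""
    def total_utility(impacts):
        return sum(impacts.values())
    return max(options, key=lambda name: total_utility(options[name]))

# Hypothetical AI-deployment dilemma: automate a task or keep it manual.
options = {
    "deploy_automation": {"company": 3, "customers": 2, "workers": -4},
    "keep_manual":       {"company": -1, "customers": 0, "workers": 3},
}

print(utilitarian_choice(options))  # prints "keep_manual" (total 2 vs. 1)
```

Note how the sketch also exposes a classic objection to utilitarianism: the result depends entirely on who is counted and how harms are scored, which is exactly the difficulty the discussion questions raise.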
Let’s experiment with how machines might make moral decisions. One of the most common philosophical thought experiments is the famous (some would say infamous) Trolley Problem. Check out the site below if you want a lighthearted take on this classic problem.
Group Activity Instructions: Designing Moral Dilemmas for Large Language Models
Objective: The objective of this group activity is to explore the ethical implications of large language models and their decision-making processes in moral dilemmas. By engaging with the Moral Machine website and drawing insights from John Stuart Mill's utilitarianism, students will collaboratively design their own scenarios to examine how large language models might make decisions in morally challenging situations.
- Formation of Groups: Form groups of 5 to 6 students. Each student will be responsible for submitting their individual journal entry based on the group discussion and activities.
- Explore the Moral Machine Website: Visit the Moral Machine website developed by the MIT Media Lab. Take the moral decision-making quiz to gain an understanding of the various scenarios and the factors involved in decision-making by machine intelligence.
- Discuss Moral Machines Reading: Reflect on the reading from Moral Machines: Teaching Robots Right From Wrong by Wendell Wallach and Colin Allen. Consider the ethical considerations and challenges associated with machines making moral decisions in complex situations.
- Analyze Utilitarianism Principles: Review the key concepts of John Stuart Mill's utilitarianism discussed in class. Consider how utilitarian ethics might be applied to the decision-making process of large language models.
- Scenario Design: As a group, create your own scenarios that involve moral dilemmas where large language models play a role in decision-making. Each scenario should be designed to test how a large language model might make choices in line with utilitarian principles.
- Start by identifying a hypothetical scenario that presents a moral dilemma one might face with a large language model such as GPT.
- Describe the context, characters, and conflicting moral interests involved.
- Outline the options or choices available to the large language model.
- Consider the potential consequences of each choice and their impact on overall happiness and well-being.
- Discuss the ethical considerations and trade-offs involved in the decision-making process.
- Ensure the scenarios challenge the students' understanding of utilitarianism and the application of ethical principles in complex situations.
- Develop five or more scenarios as a group.
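Groups that want a concrete way to record their work might capture each dilemma as simple structured data mirroring the steps above (context, options, consequences, trade-offs). The field names and example content below are assumptions for illustration, not a required format:

```python
# A hypothetical template for recording a designed dilemma. Field names
# and the sample scenario are illustrative; groups can adapt them freely.
from dataclasses import dataclass

@dataclass
class DilemmaScenario:
    context: str        # setting, characters, and conflicting moral interests
    options: dict       # option name -> description of the choice
    consequences: dict  # option name -> predicted impact on overall well-being
    tradeoffs: str      # ethical considerations and trade-offs involved

scenario = DilemmaScenario(
    context="A GPT-style assistant is asked to draft a layoff announcement.",
    options={
        "comply": "Draft the announcement as requested.",
        "refuse": "Decline and explain the ethical concerns.",
    },
    consequences={
        "comply": "Helps the requester; may cause distress if worded coldly.",
        "refuse": "Protects workers' dignity; frustrates the requester.",
    },
    tradeoffs="Weighs the requester's autonomy against aggregate well-being.",
)

print(sorted(scenario.options))  # prints "['comply', 'refuse']"
```

Writing scenarios down this way makes it easy to compare how each option scores under the utilitarian principle discussed in class.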
- Scenario Presentation and Discussion: Each group will present their designed scenarios to the class. The presentations should include:
- An overview of the scenario, including the context and moral dilemmas.
- The options available to the large language model and the potential consequences of each choice.
- A discussion of how the scenario relates to utilitarian principles and the challenges involved.
- Encourage active participation and engagement from the class during the presentations.
- Facilitate a class discussion after each presentation to explore different perspectives and considerations.
- Individual Journal Entry: Following the presentations, each student will submit a journal entry that includes:
- A summary of the presented scenarios and the ethical dilemmas they raised.
- Personal reflections on the challenges of designing scenarios for large language models.
- Insights gained from the activity, including any new perspectives or considerations regarding utilitarian ethics and the decision-making of large language models.
- Connections between the scenarios and real-world implications of AI technologies.
Optional Extension: Students can further explore the ethical implications of large language models by discussing and debating the potential approaches for implementing ethical guidelines or constraints in their decision-making algorithms. This extension can provide additional depth and critical thinking to the activity.