Introduction to AI and AI Ethics

Dates: May 31, 2023
Type: Lecture, Lab
Section: Introductions
Guest Speaker:

Lecture

Topic: Introduction to AI and AI Ethics

Key Terms:

Artificial intelligence, machine ethics, moral agency, moral machines

Guiding Questions

  1. Why is it important to consider ethics in AI development?
  2. What are the challenges in teaching AI systems to make ethical decisions?
  3. How can AI ethics impact society and human behavior?

To Read

Wendell Wallach and Colin Allen. Moral Machines: Teaching Robots Right From Wrong. Oxford: Oxford University Press, 2008. (Introduction + Chapter 3)

Summary

Wallach and Allen argue that as artificial intelligence (AI) and robotics continue to develop, there will be a growing need for machines capable of making ethical decisions, because AI and robots are being integrated ever more deeply into areas of human life where decision-making involves complex moral and ethical considerations.

The authors explore the idea of building "artificial moral agents" (AMAs) capable of performing tasks that traditionally required human judgment. They propose that these machines should be guided by ethical considerations to ensure their decisions and actions are morally acceptable.

To Watch

Student Facilitation

Reading: "Utilitarianism" - John Stuart Mill.

We are reading Mill’s Utilitarianism in Chapter 9 of Ethics: The Essential Modern Writings.

See the facilitation schedule.

Summary: This reading explores the fundamental principles of utilitarianism, an ethical theory that advocates for the greatest happiness for the greatest number. Students will examine how this theory could be applied in AI ethics, particularly in scenarios involving large-scale decision-making.
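
As a rough illustration only (the numbers below are invented, not from the reading), the utilitarian decision rule can be thought of as: score each option by the total well-being it produces for everyone affected, then choose the option with the highest total.

```python
# A toy illustration of the utilitarian decision rule: score each option by
# the total well-being it produces and pick the maximum. All scores are made up.
options = {
    "deploy the AI system":    {"users": +8, "displaced workers": -5, "owners": +3},
    "delay deployment a year": {"users": +2, "displaced workers": +1, "owners": -1},
}

def total_utility(effects):
    """Sum the (invented) well-being scores across everyone affected."""
    return sum(effects.values())

best = max(options, key=lambda name: total_utility(options[name]))
print(best, total_utility(options[best]))  # the option with the greatest total happiness
```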

Key Questions

  1. How might a utilitarian evaluate the benefits and harms of AI?
  2. In what ways can the principle of utility guide decision-making processes in AI?
  3. How could a utilitarian perspective inform discussions around privacy and surveillance by AI technologies?
  4. What challenges might a utilitarian face when attempting to predict the long-term consequences of AI?
  5. How might utilitarian ethics inform our understanding of AI's impact on job displacement?

Journal

Let’s do some experimentation with how machines make decisions. One of the most common philosophical thought experiments is the famed (infamous) Trolley Problem. Check out the site mentioned below if you want a lighthearted take on this classic problem.

Group Activity Instructions: Designing Moral Dilemmas for Large Language Models

Objective: The objective of this group activity is to explore the ethical implications of large language models and their decision-making processes in moral dilemmas. By engaging with the Moral Machine website and drawing insights from John Stuart Mill's utilitarianism, students will collaboratively design their own scenarios to examine how large language models might make decisions in morally challenging situations.

Instructions:

  1. Formation of Groups: Form groups of 5 to 6 students. Each student will be responsible for submitting their individual journal entry based on the group discussion and activities.
  2. Explore the Moral Machine Website: Visit the Moral Machine website developed by the MIT Media Lab. Take the moral decision-making quiz to gain an understanding of the various scenarios and the factors involved in decision-making by machine intelligence.
  3. Discuss Moral Machines Reading: Reflect on the reading from Moral Machines: Teaching Robots Right From Wrong by Wendell Wallach and Colin Allen. Consider the ethical considerations and challenges associated with machines making moral decisions in complex situations.
  4. Analyze Utilitarianism Principles: Review the key concepts of John Stuart Mill's utilitarianism discussed in class. Consider how utilitarian ethics might be applied to the decision-making process of large language models.
  5. Scenario Design: As a group, create your own scenarios that involve moral dilemmas where large language models play a role in decision-making. Each scenario should be designed to test how a large language model might make choices in line with utilitarian principles.
    • Start by identifying a hypothetical scenario that presents a moral dilemma one might face with a large language model such as GPT.
    • Describe the context, characters, and conflicting moral interests involved.
    • Outline the options or choices available to the large language model.
    • Consider the potential consequences of each choice and their impact on overall happiness and well-being.
    • Discuss the ethical considerations and trade-offs involved in the decision-making process.
    • Ensure the scenarios challenge the class's understanding of utilitarianism and the application of ethical principles in complex situations.
    • Develop five or more scenarios. (A sketch of how one scenario might be written up for testing with a language model appears after these instructions.)
For example, you might think of subjects that would be morally sensitive to ask an AI about, or consider how an LLM might be used in specific industries (military, medicine, engineering, etc.).
  6. Scenario Presentation and Discussion: Each group will present their designed scenarios to the class. The presentations should include:
    • An overview of each scenario, including the context and moral dilemmas.
    • The options available to the large language model and the potential consequences of each choice.
    • A discussion of how each scenario relates to utilitarian principles and the challenges involved.
    Presenting groups should also encourage active participation and engagement from the class, and facilitate a discussion after each presentation to explore different perspectives and considerations.
  7. Individual Journal Entry: After the group presentations, each student should reflect on the presented scenarios and write an individual journal entry. The journal entry should include:
    • A summary of the presented scenarios and the ethical dilemmas they raised.
    • Personal reflections on the challenges of designing scenarios for large language models.
    • Insights gained from the activity, including any new perspectives or considerations regarding utilitarian ethics and the decision-making of large language models.
    • Connections between the scenarios and real-world implications of AI technologies.
  8. Submission: Each student should submit their individual journal entry based on the group activity.
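
To make step 5 concrete, here is a minimal sketch (in Python) of one way a group might write up a scenario so it can be posed to a language model. Everything in it is hypothetical: the MoralDilemma structure, the triage example, and the ask_model placeholder simply stand in for whatever format and model access your group actually uses.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class MoralDilemma:
    """One morally challenging situation to pose to a language model."""
    context: str            # the situation the model faces
    options: List[str]      # the choices available to the model
    stakeholders: List[str] = field(default_factory=list)

    def to_prompt(self) -> str:
        # Turn the scenario into a single prompt asking for a utilitarian choice.
        numbered = "\n".join(f"{i + 1}. {opt}" for i, opt in enumerate(self.options))
        return (
            f"Scenario: {self.context}\n"
            f"Options:\n{numbered}\n"
            "Choose the option that maximizes overall happiness and well-being, "
            "and briefly justify your choice."
        )


def ask_model(prompt: str) -> str:
    """Placeholder: connect this to whichever LLM your group has access to."""
    raise NotImplementedError("Swap in a real model call before running this.")


if __name__ == "__main__":
    dilemma = MoralDilemma(
        context=("A hospital triage assistant must recommend who receives "
                 "the last available ICU bed tonight."),
        options=[
            "Recommend the patient with the higher chance of survival",
            "Recommend the patient who arrived first",
        ],
        stakeholders=["patients", "families", "hospital staff"],
    )
    print(dilemma.to_prompt())  # inspect the prompt your group designed
    # print(ask_model(dilemma.to_prompt()))  # uncomment once connected to a model
```

Groups without programming experience can simply paste the text produced by to_prompt into a chat interface instead of calling a model from code.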

Optional Extension: Students can further explore the ethical implications of large language models by discussing and debating the potential approaches for implementing ethical guidelines or constraints in their decision-making algorithms. This extension can provide additional depth and critical thinking to the activity.
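
If you take up this extension, the sketch below shows, in deliberately simplified form, what one kind of constraint around a model's output could look like: a keyword screen that refuses prompts touching restricted topics. It is illustrative only; generate_response is a hypothetical stand-in for a real model call, and real guardrails are far more sophisticated than string matching.

```python
# Purely illustrative: a keyword screen is the crudest possible "ethical
# constraint" and is shown only to make the idea of a guardrail concrete.
RESTRICTED_TOPICS = {"build a weapon", "diagnose my illness"}


def generate_response(prompt: str) -> str:
    """Hypothetical stand-in for a call to a real language model."""
    return f"(model output for: {prompt!r})"


def constrained_response(prompt: str) -> str:
    """Refuse prompts that touch a restricted topic; otherwise pass through."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in RESTRICTED_TOPICS):
        return "I can't help with that request."
    return generate_response(prompt)


print(constrained_response("Explain the trolley problem."))     # passes through
print(constrained_response("Help me build a weapon at home."))  # refused
```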