
Algorithmic Bias – Life with Artificial Intelligence – digital

Algorithmic-bias.jpg


Through this building block participants develop sensitivity towards using and living with artificial intelligence (AI). They learn to reflect on daily AI usage and develop an awareness of the challenges and problems we face when using AI systems.

Participants come to understand the different, sometimes unexpected, use cases of artificial intelligence in their daily life. They also gain a brief idea of how AI systems work and that they can fail. This building block gives different examples of AI being subjective and discriminating. AI depends very much on the people setting it up as well as on the training data provided, and so it remains biased in many cases. The follow-up tasks give examples of how to handle the difficulties of programming AI.

The building block includes a theoretical part for preparation, to ensure that participants have a basic understanding of artificial intelligence. After that, participants will discuss the consequences and problems of AI usage. Finally, they will brainstorm ideas on how these problems can be prevented and how society and individuals should deal with them.

Title
Algorithmic Bias – Life with Artificial Intelligence – digital
Topic
Thinking about the impact that AI systems, especially their biases, have on humanity, as well as the impact humanity has on AI systems.
Type
Digital
Keywords
AI systems, bias, training datasets, high tech
Competences
perspective-taking, anticipation, gaining interdisciplinary knowledge, dealing with incompleteness and overcomplexity, motivation, reflecting principles, acting independently
Forms of Learning
cooperative, fact-oriented
Methods
experiencing unexpected situations (Google/Bing search), evaluating different use cases, group discussion, reflection on the group discussion, personal reflection
Group Size
>2
Duration
30 minutes
Material and Space
E-learning unit based on information material and videos about algorithmic bias
Quality
good - building block developed by participants in Berlin
Semester
Winter Semester 2020/21


Preparation and Follow-Up

Facilitators’ Preparation

Facilitators first get an impression of what bias means in regard to AI. For an overview, the fast AI Video 2 can be used (Fast AI Video 2). For preparation it is important to get an idea of the diversity of AI types, how they can be used, and which biases can occur. The responsible use of AI and possible examples should also not be missing. Sources from the references can be used and updated. The facilitators also have a look at the preparation videos for participants.

Participants’ Preparation

Participants understand how AI works (e.g. the significance of training data), what bias is and how they go together. They experience the outcome of AI systems that were trained with biased datasets and gain an overview of bias types (which provides the base for the group work within the session).

Participants’ Follow-Up

There are two follow-up tasks. As always, participants should reflect on the session in their learning journal. This can include their own thoughts about AI in our daily lives, but also the building block's structure and content.

In regard to these questions, participants also have a look at the “Guidelines for Human-AI Interaction” from Microsoft. They reflect on the content as well as on Microsoft as the author and how this possibly influences the guidelines.

Schedule

Minute 00 - Introduction

Notes

The facilitators give the participants a short introduction to the building block. They mention that AI systems are very complex but still part of our daily lives. It is important to underline the relevance of having a basic idea of what we are using and how it works. The facilitators encourage participants to get comfortable with the topic so that everyone gains a basic understanding of it.

Slides

Schedule for today’s session

  • 00:00 - 1. Introduction
  • 00:03 - 2. Reflection on preparation
  • 00:08 - 3. Have a close look at AI use cases in small groups
  • 00:23 - 4. Presentation of discussion outcomes
  • 00:28 - 5. Summary and follow-up


After today’s session we hope…

  • … that you have a basic idea of how AI works/ what defines it
  • … that you are able to identify AI systems in your daily life
  • … that you are able to question those AI systems, especially regarding bias

Minute 03 – Reflection on Preparation

Notes

The facilitators ask the participants whether somebody wants to reflect on their results of the preparation task. The facilitators then ask the questions on the slides: on the one hand, these help participants to better understand their own results; on the other hand, participants share their opinions about them.

Slides

Preparation reminder – Using Google or Bing, look for the following terms in an image search:

  • 1. Nurse
  • 2. Woman smiling
  • 3. CEO stock photo


Questions for reflection

  • Where were you located while running the search and did you use a VPN?
  • What were the results of your image search? Did you notice anything special?
  • Do you have an idea on how the search results could relate to bias?
  • What do you think about the results?

Minute 08 - Group Work - Types of Bias in specific Application

Notes

The facilitators explain the tasks and briefly introduce the AI use cases. They give more detailed input on the different use cases than what is shown on the slides.

The facilitators create breakout sessions. The number of groups depends on the number of participants. Each group should consist of 3-6 participants. If possible name each breakout session according to the use case.

The facilitators post the tasks into the chat. The groups work on their tasks and post their results in the forum the facilitators have prepared.

Slides

Your use cases

  • Group 1: Routes for Police Officers
  • Group 2: Search Engines
  • Group 3: Health Care
  • Group 4: Risk Assessment


Overview of Tasks - Small Groups - 15 min.

  • Task 1: Think of biases that could be present in the App/ Software
  • Task 2: Categorization and reflection of biases in the use case
  • Task 3: Discuss solutions in this use case
  • Task 4: Post your results in the forum


AI usage example for small group discussion – Your group is given an AI usage example:

Group 1 - Routes for Police Officers – Police stations use AI solutions to predict in which neighborhood crimes are more likely to occur and add those routes for officers on patrol.

Group 2 - TayTweets Twitter Bot – Microsoft’s Twitter bot was designed to resemble a teenager; after conversing with humans, the AI started posting racist tweets. (It was shut down after 96k tweets, 16 hours after release.)

Group 3 - Healthcare – AI systems are implemented to diagnose illnesses or to aid doctors. There is a disproportionate selection of patients; some groups are underrepresented. The data available for AI is gathered in academic medical centers, so AI systems will know less about patients from populations that do not typically attend academic medical centers.

Group 4 - Risk Assessment – AI systems predict whether a defendant will commit a (further) crime before it happens. Historical data related to the defendant’s ethnicity is taken into account; if that ethnicity shows a higher number of arrests, the defendant receives a correspondingly higher score for the judges to rely on.


Overview of Tasks - Small Groups - 15 min.


  • Task 1: Think of biases that could be present in the App/Software
  • Task 2: Categorization and reflection of biases in the use case
  • Task 3: Discuss solutions in this use case
  • Task 4: Post your results in the forum


Task 1: Think of biases that could be present in the App/ Software (~2min)

  • Briefly discuss the tasks of the AI as a group:
    • Why is it used?
    • What are its functions?


Task 2: Categorization and reflection of biases in the use case (~6min)

  • Use the algorithmic biases presented in the preparation
  • Think of biases that could be present in the use case
  • Explain to your group how the biases could occur
  • Discuss what impact the biases have in real-life scenarios

Preparation Reminder – Here are the different types from the preparation:

Historical bias

Historical bias comes from the fact that people are biased, processes are biased, and society is biased. Past decisions make up the dataset used to train a new AI to make the same kind of decisions, so the data is inherently biased.

Example: An all-white jury is 16 percentage points more likely to convict a Black defendant than a white one, but when a jury had one Black member it convicted both at the same rate.


Measurement bias

Occurs when our models make mistakes because we are measuring the wrong thing, or measuring it in the wrong way, or incorporating that measurement into the model inappropriately.

Example: If the dataset is recorded with a different camera or in a different environment than the one the AI will be operating in, the AI might focus on the wrong aspects.


Aggregation bias

Occurs when models do not aggregate data in a way that incorporates all of the appropriate factors, or when a model does not include the necessary interaction terms.

Example: The way diabetes is treated is often based on simple univariate statistics and studies involving small groups of heterogeneous people. Analysis of results is often done in a way that does not take different ethnicities or genders into account. Patients that differ from the “norm” might be mistreated.


Representation bias

Representation bias is very common for simple models: when there is a clear, easy-to-see underlying relationship, a simple model will often assume that this relationship holds all the time.

Example: When a simple AI was implemented to determine the gender of a person in a certain profession, it did not only reflect the actual gender imbalance in the underlying population but amplified it (remember the nurse image search example?).


Unbalanced classes in training data

The training data may not contain enough examples of each class, which can affect the accuracy of predictions, for example in facial recognition software.

Example: MIT researchers studied the most popular computer vision APIs to see how accurate they were. Microsoft’s system, for example, was 100% accurate for lighter-skinned males, 98.3% for lighter-skinned females, 94% for darker-skinned males, but only 79.2% for darker-skinned females.


Data amplified by feedback loops

Small amounts of bias can rapidly increase exponentially because of feedback loops.

Example: If police are sent to a specific neighborhood due to biased data, more people get arrested there and the bias is confirmed.

Source: Howard, J. & Gugger, S. (2020). Deep Learning for Coders with Fastai and PyTorch: AI Applications Without a PhD (1st ed.). O’Reilly Media.

Task 3: Discuss solutions in this use case (~5min)

  • What responsibilities does the AI have in this use case?
  • How should AI be used here?
  • Are there any alternatives to the implementation of AI?


Task 4: Share your outcomes! - Post your results in the forum (~2min)

  • What was your AI Use Case?
  • What types of bias can occur?
  • What are ways to solve/ be more aware of the biases?

Minute 24 - Reflection on Group Work

Notes

In the plenary, participants share their discussion outcomes with the other groups. The facilitators open the room for questions, and opinions can be exchanged. Depending on the time available, the discussion of individual topics can be deepened. The facilitators close the discussion with an outlook on responsible solutions to algorithmic bias that should be considered when working with AI.

Slides

Introduce your case studies to each other

  • What was your AI Use Case?
  • Which biases could occur?
  • How would you go about solving this?

Minute 29 - Summary and Follow Up

Notes

The facilitators summarize the findings of the individual groups and of the group as a whole, as well as the risks of blind trust in AI systems. They provide Technology’s Vicious Cycle as an inspiration for the participants’ follow-up tasks. The participants are asked to reflect on how they emotionally respond to the session’s content. For the second part of the follow-up, the facilitators provide the participants with Microsoft’s solutions for algorithmic bias. The participants reflect on the content as well as on Microsoft as the author and how this possibly influences the guidelines.

Slides

Follow-up tasks overview

  • Task 1: Reflection on today’s session
  • Task 2: Evaluation of and reflection on Microsoft’s solution for algorithmic bias in AI


Task 1: Reflection on today’s session

  • Reflect on today’s session with the help of the tool Technology’s Vicious Cycle and the following questions:
  • Are you scared about AI in your daily life and in general?
  • Did learning about how AI works change your attitude towards AI?
  • After facing the problems that come up when using a biased system, where do you think AI solutions save time and resources, and where are they problematic?
  • Can we make sure that biased AI won’t influence mankind? If yes, how? If no, can we at least guide it in a positive direction?


Technology’s Vicious Cycle

Technology doesn’t have to be evil to drive a vicious cycle.

The problems of present-day technology are solved with new technology, which in turn results in new problems, which will eventually require even newer technology to solve the problems of the latest technology. However, this is only possible if the latest problems are taken into account, which will eventually require a much more advanced technology…


Task 2: Microsoft solution for algorithmic bias in AI - evaluation and reflection

  • Microsoft Guidelines for non-discriminating use of AI
  • Microsoft - Resources on responsible AI and how it should interact with humans
  • Guidelines for Human AI interaction
  • (some extra source: Understanding responsible AI )


Look at the Guidelines for human AI interaction

  • Evaluate the content of the guidelines:
    • Is this solution free of discrimination?
    • What do you like about it?
    • What is missing?
  • Reflection on Microsoft solutions:
    • What impact does Microsoft as the author have on these solutions?
    • Where do you suspect this to be problematic?

Notes and Remarks

Authors’ Note

  • reflection on preparation: note that location can have an impact on bias (e.g. IP Address when using search engines)

  • group work: make sure the participants understand the use cases

  • reflection on group work: try to reflect on all use cases after the group work (pay attention to time while moderating the discussion in the plenary)
  • generally: pay attention to time management (e.g. clearly communicate the schedule to the participants)

potential extension – the building block can easily be extended to 45 minutes (or more) by spending more time on the following aspects:

  • The reflection on preparation can be discussed more in detail.
  • The group work time can be extended, and participants could get together between tasks for intermediate reflections on their outcomes of tasks 1-3 before getting back into the breakout rooms for the remaining tasks.
  • The discussions in the plenary can be individually adjusted in duration by moderating accordingly.

Further Notes

If possible, the participants could choose their case study topics in advance (e.g. with a poll). This way they could get familiar with their topic beforehand (during preparation) and the group work could focus on discussion and transfer.

  • concerning group size: it would be necessary to know how many participants will join the building block, to estimate how many groups are needed
  • concerning breakout sessions: probably easier to realize in an in-person format (not online); breakout rooms have to be preset according to the chosen case study topics

References

  • Fastai – Our main source; a great, comprehensive ethics chapter on bias and AI that is used in a regular deep learning course to raise awareness among future developers: Howard, J. & Gugger, S. (2020). Deep Learning for Coders with Fastai and PyTorch: AI Applications Without a PhD (1st ed.). O’Reilly Media.
  • Crash Course - A very descriptive video the first part of which is used in the preparation to explain the difference between bias and discrimination: Algorithmic Bias and Fairness: Crash Course AI #18
  • The Verge - Bias shown in a recruiting tool at Amazon; the first example that inspired us to take up this topic
  • Vox - Interesting think piece on why algorithms can be racist and/or sexist - Why algorithms can be sexist and racist - Biased Amazon recruiting tool
  • Do Google’s ‘unprofessional hair’ results show it is racist? - The Guardian - Article on biased search engine results
  • Technology Review - Biased training data leading to “racist predictive policing algorithms” - Predictive policing algorithms are racist. They need to be dismantled.
  • TED - A lesson on bias and examples about several different usages of AI systems, that might become a problem, used in preparation - Can we protect AI from our biases? | Robin Hauser | TED Institute
  • TEDx - Many different examples for AI also very “simple” examples to make clear what is already considered AI, providing actions to make AI better - Stop assuming data, algorithms and AI are objective | Mata Haggis-Burridge | TEDxDelft
  • Survival of the Best Fit - Interesting game that shows how AI usage in a recruiting situation can mirror employers’ bias - Recruiting AI game
  • Technology Review - An article playing with different numbers on arrests and convictions - Courtroom Algorithm Game - Criminal Justice AI

  • WIRED - Article about a project that tried to fix AI biases in its algorithm and the difficulties in fixing AI biases - AI is biased. How scientists are trying to fix it
  • WIRED - Biased facial recognition algorithms are more likely to mix up black faces than white faces - The best algorithms still struggle to recognize black faces
  • Microsoft - Resources on responsible AI and how it should interact with humans - Understanding responsible AI - Guidelines for Human AI interaction

Material

Participants’ Preparation

Notes

To prepare the participants well for a controversial discussion, the preparation has different focuses. Starting with a video, the participants get a brief overview, then continue with some exercises and reflect on them.

Slides

Source: https://www.nytimes.com/2019/12/06/business/algorithm-bias-fix.html (25.12.2020 14:36)

Overview of tasks

  • Task 1: Short overview of algorithmic bias
  • Task 2: Exercise - Search Engine Activity
  • Task 3: Read “The different Types of Algorithmic Bias”


Task 1: Short overview of algorithmic bias

  • watch one of the videos to get a short idea of what algorithmic bias is: Algorithmic Bias and Fairness: Crash Course AI #18 OR Can we protect AI from our biases? | Robin Hauser | TED Institute
  • take notes: look for the purpose of AI and how the use of AI leads to problems


Task 2: Exercise - Search Engine Activity

  • use two or three different search engines to search for images of:
    • Nurse
    • Woman/Man smiling (also have a look at the captions)
    • CEO stock photo


Answer the following questions:

  • What search engine were you using?
  • Can you recognize some kind of bias?

If you have the opportunity to use a VPN, do the task again and change your IP to an African or Asian one. Were the results different?


Task 3: Read “The Different Types of Algorithmic Bias” – As you might have already seen from the examples in the videos, as well as from your quick image search, there are different types of bias that can be built into an AI system. To get an overview, here is a summary to read as preparation for the session.

Historical bias

Historical bias comes from the fact that people are biased, processes are biased, and society is biased. Past decisions make up the dataset used to train a new AI to make the same kind of decisions, so the data is inherently biased.

Example: An all-white jury is 16 percentage points more likely to convict a Black defendant than a white one, but when a jury had one Black member it convicted both at the same rate.


Measurement bias

Occurs when our models make mistakes because we are measuring the wrong thing, or measuring it in the wrong way, or incorporating that measurement into the model inappropriately.

Example: If the dataset is recorded with a different camera or in a different environment than the one the AI will be operating in, the AI might focus on the wrong aspects.
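
For participants who are comfortable with a little Python, here is an optional minimal sketch (not from the source below; all data is synthetic and the numbers are made up) of the same idea: a classifier trained on data in which a measurement artifact, such as camera brightness, happens to correlate with the label will learn to rely on that artifact and lose accuracy once it is used under different conditions.

    # Measurement bias sketch (synthetic data): the "artifact" feature stands in for
    # something like camera brightness that correlates with the label only in the
    # training data. The model learns to rely on it and fails in deployment.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_data(n, artifact_correlated):
        y = rng.integers(0, 2, n)
        signal = y + rng.normal(0, 1.5, n)              # weak true signal
        if artifact_correlated:
            artifact = 2.0 * y + rng.normal(0, 0.3, n)  # e.g. class-1 photos all taken with a brighter camera
        else:
            artifact = rng.normal(0, 0.3, n)            # in deployment the artifact is unrelated to the class
        return np.column_stack([signal, artifact]), y

    X_train, y_train = make_data(2000, artifact_correlated=True)
    X_deploy, y_deploy = make_data(2000, artifact_correlated=False)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("accuracy under training conditions  :", model.score(X_train, y_train))    # high
    print("accuracy under deployment conditions:", model.score(X_deploy, y_deploy))  # clearly lower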


Aggregation bias

Occurs when models do not aggregate data in a way that incorporates all of the appropriate factors, or when a model does not include the necessary interaction terms.

Example: The way diabetes is treated is often based on simple univariate statistics and studies involving small groups of heterogeneous people. Analysis of results is often done in a way that does not take different ethnicities or genders into account. Patients that differ from the “norm” might be mistreated.
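
As an optional illustration (synthetic data and made-up coefficients, not from the source below), the following sketch shows how a pooled model without an interaction term averages two patient groups whose dose-response differs, and therefore systematically mis-predicts for the smaller group, while adding the interaction term fixes this.

    # Aggregation bias sketch: group B responds differently to the dose than group A,
    # but is underrepresented. A pooled model without a dose*group interaction term
    # averages the two groups and is far off for group B.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    n_a, n_b = 900, 100                                     # group B is underrepresented
    dose = np.concatenate([rng.uniform(0, 10, n_a), rng.uniform(0, 10, n_b)])
    group = np.concatenate([np.zeros(n_a), np.ones(n_b)])   # 0 = group A, 1 = group B
    response = np.where(group == 0, 2.0 * dose, 0.5 * dose) + rng.normal(0, 1, n_a + n_b)

    X_pooled = np.column_stack([dose, group])               # no interaction term
    X_inter = np.column_stack([dose, group, dose * group])  # with interaction term
    pooled = LinearRegression().fit(X_pooled, response)
    inter = LinearRegression().fit(X_inter, response)

    b = group == 1
    print("group-B mean error, pooled model     :", np.mean(np.abs(pooled.predict(X_pooled[b]) - response[b])))
    print("group-B mean error, with interaction :", np.mean(np.abs(inter.predict(X_inter[b]) - response[b])))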


Representation bias

Representation bias is very common for simple models: when there is a clear, easy-to-see underlying relationship, a simple model will often assume that this relationship holds all the time.

Example: When a simple AI was implemented to determine the gender of a person in a certain profession, it did not only reflect the actual gender imbalance in the underlying population but amplified it (remember the nurse image search example?).
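
An optional sketch with made-up data (not from the source below) illustrates the amplification effect: the data contains 80% women in a profession, but a simple classifier trained on a weakly informative feature ends up predicting “woman” for nearly everyone, amplifying the imbalance instead of merely reproducing it.

    # Representation bias sketch (synthetic data): an 80/20 imbalance in the data
    # turns into an almost 100/0 imbalance in the model's predictions, because with
    # weak features the classifier mostly falls back on the majority class.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n = 5000
    is_woman = (rng.random(n) < 0.8).astype(int)                        # 80% women in the data
    feature = (0.5 * is_woman + rng.normal(0, 2.0, n)).reshape(-1, 1)   # weakly informative feature

    model = LogisticRegression(max_iter=1000).fit(feature, is_woman)
    predictions = model.predict(feature)
    print("share of women in the data  :", is_woman.mean())     # about 0.80
    print("share of 'woman' predictions:", predictions.mean())  # close to 1.00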


Unbalanced classes in training data

The training data may not contain enough examples of each class, which can affect the accuracy of predictions, for example in facial recognition software.

Example: MIT researchers studied the most popular computer vision APIs to see how accurate they were. Microsoft’s system, for example, was 100% accurate for lighter-skinned males, 98.3% for lighter-skinned females, 94% for darker-skinned males, but only 79.2% for darker-skinned females.
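
An optional sketch with synthetic data (not from the source below; the numbers will differ from the MIT study above) shows the underlying mechanism: when one class is rare in the training data, overall accuracy can look excellent while accuracy on the rare class is much worse.

    # Unbalanced classes sketch: 95% of the training examples belong to one class,
    # so the model barely learns the rare class. Overall accuracy hides this; the
    # per-class recall reveals it.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import recall_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=5000, n_features=10, n_informative=3,
                               weights=[0.95, 0.05], flip_y=0.05, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    predictions = model.predict(X_te)
    print("overall accuracy        :", model.score(X_te, y_te))
    print("recall, majority class  :", recall_score(y_te, predictions, pos_label=0))
    print("recall, rare class      :", recall_score(y_te, predictions, pos_label=1))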


Data amplified by feedback loops

Small amounts of bias can rapidly increase exponentially because of feedback loops.

Example: If police are sent to a specific neighborhood due to biased data, more people get arrested there and the bias is confirmed.
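
An optional sketch with made-up numbers (not from the source below) shows how such a loop confirms itself: both neighborhoods have the same true crime rate, but patrols are always sent where the recorded arrests are highest, and arrests are only recorded where patrols are, so a small initial gap in the historical record grows year after year.

    # Feedback-loop sketch: the allocation rule is deliberately extreme (all patrols
    # go to the neighborhood with the higher arrest count) to make the self-confirming
    # effect easy to see.
    arrests = [105, 100]  # slightly biased historical record; true crime rates are identical
    for year in range(1, 11):
        target = 0 if arrests[0] >= arrests[1] else 1   # patrol where the data points
        arrests[target] += 10                           # ~10 arrests per year wherever patrols go
        print(f"year {year:2d}: recorded arrests = {arrests} (patrolled neighborhood {target})")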


Source: Howard, J. & Gugger, S. (2020). Deep Learning for Coders with Fastai and PyTorch: AI Applications Without a PhD (1st ed.). O’Reilly Media.


Presence time – Materials

There is no material needed in addition to the slides.

During a digital session using video calls, it is recommended that all facilitators use the same virtual background, to make it easier for the participants to identify the facilitators and to create a more consistent moderation appearance. An example could be something like this:

https://br.freepik.com/fotos-premium/forma-futurista-de-conexao-tecnologica-rede-de-pontos-azuis-abstrato-fundo-azul-conceito-de-rede-comunicacao-na-internet-renderizacao-em-3d_7807873.htm (27.12.2020 14:29)

Participants’ Follow-Up

Notes

The facilitators summarize the findings of the individual groups and of the group as a whole, as well as the risks of blind trust in AI systems. They provide Technology’s Vicious Cycle as an inspiration for the participants’ follow-up tasks. The participants are asked to reflect on how they emotionally respond to the session’s content. For the second part of the follow-up, the facilitators provide the participants with Microsoft’s solutions for algorithmic bias. The participants reflect on the content as well as on Microsoft as the author and how this possibly influences the guidelines.

Slides

Follow-up tasks overview

  • Task 1: Reflection on today’s session
  • Task 2: Evaluation of and reflection on Microsoft’s solution for algorithmic bias in AI


Task 1: Reflection on today’s session – Reflect on today’s session with the help of the tool Technology’s Vicious Cycle and the following questions:

  • Are you scared about AI in your daily life and in general?
  • Did learning about how AI works change your attitude towards AI?
  • After facing the problems that come up when using a biased system, where do you think AI solutions save time and resources, and where are they problematic?
  • Can we make sure that biased AI won’t influence mankind? If yes, how? If no, can we at least guide it in a positive direction?


Technology’s Vicious Cycle

Technology doesn’t have to be evil to drive a vicious cycle.

The problems of present-day technology are solved with new technology, which in turn results in new problems, which will eventually require even newer technology to solve the problems of the latest technology. However, this is only possible if the latest problems are taken into account, which will eventually require a much more advanced technology…


Task 2: Microsoft solution for algorithmic bias in AI - evaluation and reflection

  • Microsoft Guidelines for non-discriminating use of AI
    • Microsoft - Resources on responsible AI and how it should interact with humans
    • Guidelines for Human AI interaction
    • (some extra source: Understanding responsible AI )

Look at the Guidelines for human AI interaction

  • Evaluate the content of the guidelines:
    • Is this solution free of discrimination?
    • What do you like about it?
    • What is missing?
  • Reflection on Microsoft solutions:
    • What impact does Microsoft as the author have on these solutions?
    • Where do you suspect this to be problematic?