Table of Contents
- Overview
- 1. Introduction to AI Ethics
- 2. Human-Centered Design for AI
- 3. Identifying Bias in AI
- 4. AI Fairness
- 5. Model Cards
- References
Overview
Explore practical tools to guide the moral design of AI systems.
1. Introduction to AI Ethics
Learn what to expect from the course.
2. Human-Centered Design for AI
Design systems that serve people’s needs. Navigate issues in several real-world scenarios.
6 steps to design human-centered AI systems:
- Understand people’s needs to define the problem
- Ask if AI adds value to any potential solution
- Consider the potential harms that the AI system could cause
- Prototype, starting with non-AI solutions
- Provide ways for people to challenge the system
- Build in safety measures
3. Identifying Bias in AI
Bias can creep in at any stage in the pipeline. Investigate a simple model that identifies toxic text.
6 types of bias (Src: link):
- historical bias
- representation bias
- measurement bias
- aggregation bias
- evaluation bias
- deployment bias
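As a minimal illustration of one of these, representation bias can be surfaced by comparing each group's share of the training sample against its share of the target population. The function and data below are hypothetical, not from the course:

```python
from collections import Counter

def representation_gap(sample_groups, population_share):
    """Compare each group's share of a data sample to its reference
    population share; a large gap signals representation bias."""
    counts = Counter(sample_groups)
    total = sum(counts.values())
    gaps = {}
    for group, ref_share in population_share.items():
        sample_share = counts.get(group, 0) / total
        gaps[group] = round(sample_share - ref_share, 3)
    return gaps

# Toy example: group "B" is 30% of the population but only 10% of the
# sample, so it is under-represented by 20 points.
sample = ["A"] * 9 + ["B"] * 1
population = {"A": 0.7, "B": 0.3}
print(representation_gap(sample, population))
```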
Bias framework (Src: link)
4. AI Fairness
Learn about four different types of fairness. Assess a toy model trained to judge credit card applications.
4 fairness criteria (Src: link):
- Demographic parity / statistical parity
- Equal opportunity (TPR based/confusion matrix)
- Equal accuracy
- Group unaware / "Fairness through unawareness"
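The first two criteria above can be computed directly from predictions: demographic parity compares per-group selection rates, while equal opportunity compares per-group true positive rates. A sketch with hypothetical credit-approval data (names and values are illustrative):

```python
def group_rates(y_true, y_pred, groups):
    """Per-group selection rate (for demographic parity) and
    true positive rate (for equal opportunity)."""
    stats = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        positives = [i for i in idx if y_true[i] == 1]
        selection_rate = sum(y_pred[i] for i in idx) / len(idx)
        tpr = (sum(y_pred[i] for i in positives) / len(positives)
               if positives else float("nan"))
        stats[g] = {"selection_rate": selection_rate, "tpr": tpr}
    return stats

# Toy example: group "x" and group "y" each have three applicants.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1]
groups = ["x", "x", "x", "y", "y", "y"]
print(group_rates(y_true, y_pred, groups))
```

Demographic parity would ask whether the two selection rates match; equal opportunity would ask whether the two TPRs match. Here group "y" has a lower TPR, so the toy model fails equal opportunity.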
5. Model Cards
Increase transparency by communicating key information about machine learning models.
9 model card sections (Src: link):
- Model Details
- Intended Use
- Factors
- Metrics
- Evaluation Data
- Training Data
- Quantitative Analyses
- Ethical Considerations
- Caveats and Recommendations
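In practice the nine sections can be kept alongside the model as structured data. A minimal sketch using a plain dataclass; the field contents are made-up examples, and this is not a standard model card API:

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """One field per model card section; all values are free text."""
    model_details: str
    intended_use: str
    factors: str
    metrics: str
    evaluation_data: str
    training_data: str
    quantitative_analyses: str
    ethical_considerations: str
    caveats_and_recommendations: str

# Hypothetical card for a toy credit-screening model.
card = ModelCard(
    model_details="Logistic regression, v1.0",
    intended_use="Screening credit applications for human review",
    factors="Applicant age group, region",
    metrics="Selection rate and TPR, reported per group",
    evaluation_data="Held-out applications (hypothetical)",
    training_data="Historical applications (hypothetical)",
    quantitative_analyses="Per-group confusion matrices",
    ethical_considerations="Risk of disparate impact across groups",
    caveats_and_recommendations="Not for fully automated decisions",
)
print(card.intended_use)
```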
References
Tools
HCD Application to AI:
- Lex Fridman’s introductory lecture on Human-Centered Artificial Intelligence
- Google’s People + AI Research (PAIR) Guidebook
- Stanford Human-Centered Artificial Intelligence (HAI) research
Identifying Bias:
To continue learning about bias, check out the Jigsaw Unintended Bias in Toxicity Classification competition that was introduced in this exercise.
- Kaggler Dieter has written a helpful two-part series that teaches you how to preprocess the data and train a neural network to make a competition submission. Get started here.
- Many Kagglers have written helpful notebooks that you can use to get started. Check them out on the competition page.
Another Kaggle competition that you can use to learn about bias is the Inclusive Images Challenge, which you can read more about in this blog post.
The competition focuses on evaluation bias in computer vision.
AI Fairness:
- Impossibility Theorem of Machine Fairness
- Explore different types of fairness with an interactive tool.
- Read more about equal opportunity in this blog post.
- Analyze ML fairness with this walkthrough of the What-If Tool, created by the People + AI Research (PAIR) team at Google. Once you've picked the fairness criterion that best fits your use case, this tool lets you quickly amend an ML model.
Further reading:
- partnershipai.org
- Responsible AI: FAIR (Facebook Artificial Intelligence Research) progress and learnings across socially responsible AI research (Meta AI)
- ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
Model Cards:
- Model Cards for AI Model Transparency
- OpenAI’s model card for GPT-3
- Google Cloud's example model cards - Face Detection
- Hugging Face model cards
Further reading:
Papers: