What kind of AI do we want? Bringing artistic and technological practices together.
In this seminar, we look at “artificial intelligence” (AI) as a historical-material practice. That is, we understand AI as shaped by the concrete conditions of its development and use. We will address the current discourse in our democratic society around bias in AI and trustworthy AI, and look at decolonial as well as indigenous approaches to AI.
This is a joint module offered by ZHdK (Felix Stalder) and ETH Zurich (Nora al-Badri / Adrian Notz).
Date: March 14-18, 2022
Time: 10:00-13:00 / 14:00-17:00
Location: Monday-Wednesday
- ETH Zurich, Room ML H37.1
- ETH Maschinenlabor
- Sonneggstrasse 3, 8092 Zürich
Location: Thursday-Friday
- ZHdK, Room ZT 6.K04
- Zürcher Hochschule der Künste, Toni-Areal
- Pfingstweidstrasse 96, 8005 Zürich
Course requirements:
- Presence in class (at least 80% of the time)
- Active contribution to discussions in class
- Active participation in group work and group presentations
Monday, 14.03.
Morning: art & science
Introduction to the seminar
Introduction: Art and Science
Guest: Aparna Rao, artist, Bangalore; Robotics Aesthetics & Usability Center (RAUC), ETH Zurich
Further Reading
- Leonardo im Labor [Leonardo in the Lab], KUNSTFORUM International, Vol. 277 (October 2021)
Afternoon: Bias in AI
Videos/Artistic Works
Amazon Go - SNL, 13.03.2022
Bias in Data
- Roth, Lorna. 2009. “Looking at Shirley, the Ultimate Norm: Colour Balance, Image Technologies, and Cognitive Equity.” Canadian Journal of Communication 34 (1). (See also the Vox video, 2015, 4:39.)
- !Mediengruppe Bitnik, Dada. State of the Reference, 2017
Bias in Labelling
- Trevor Paglen, “ImageNet Roulette.” Twitter: #imagenetroulette
- https://excavating.ai (Talk @ HKW, 2019)
- Kate Crawford, Trevor Paglen: Datafication of Science, HKW 2019
- Lyons, Michael J. 2021. “‘Excavating AI’ Re-Excavated: Debunking a Fallacious Account of the JAFFE Dataset.” ArXiv:2107.13998 [Cs], July.
Bias in Institutional Interest
Bias in Modelling
- Buolamwini, Joy. 2018. AI, Ain't I A Woman? (3:30)
Literature
- Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. “Machine Bias.” ProPublica.
- Benjamin, Ruha. 2019. Race after Technology: Abolitionist Tools for the New Jim Code. Medford, MA: Polity.
- Crooks, Roderic, and Morgan Currie. 2021. “Numbers Will Not Save Us: Agonistic Data Practices.” The Information Society 37 (4): 201–13.
- Dastin, Jeffrey. 2018. “Amazon Scraps Secret AI Recruiting Tool That Showed Bias against Women.” Reuters. October 10, 2018.
- Coded Bias. 2020. Documentary, 85 min. Netflix.
- Gandy, Jr., Oscar H. 2020. “Panopticons and Leviathans: Oscar H. Gandy, Jr. on Algorithmic Life.” Logic Magazine. 2020.
- Hildebrandt, Mireille. 2019. “Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning.” Theoretical Inquiries in Law 20 (1): 83–121.
- Hooker, Sara. 2021. “Moving beyond ‘Algorithmic Bias Is a Data Problem.’” Patterns 2 (4): 100241
- Khan, Nora. 2019. Introduction. In Reas, Casey, Making Pictures with Generative Adversarial Networks. Anteism Books.
- Kogan, Gene. 2017. Machine Learning for Artists. See also ml4a.github.io.
- Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.
- O’Neil, Cathy. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. London: Allen Lane, Penguin Random House.
- Reas, Casey. 2019. Making Pictures with Generative Adversarial Networks. Anteism Books.
Tuesday 15.03.
Morning: Digital Colonialism
Readings in Class
- Shankar, Shreya, Yoni Halpern, Eric Breck, James Atwood, Jimbo Wilson, and D. Sculley. 2017. “No Classification without Representation: Assessing Geodiversity Issues in Open Data Sets for the Developing World.” Presented at NIPS 2017 Workshop on Machine Learning for the Developing World
- Scheuerman, Morgan Klaus, Madeleine Pape, and Alex Hanna. 2021. “Auto-Essentialization: Gender in Automated Facial Analysis as Extended Colonial Project.” Big Data & Society 8 (2): 205395172110537.
- Chun, Wendy Hui Kyong. 2021. Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition. Cambridge, MA: The MIT Press. Pp. 52–66.
Videos, artistic work
- knowbotiq: Mercurybodies - remote sensations (2022), video 12 min
- Nora Al-Badri, “The Other Nefertiti”/“Nefertiti Bot” / “Babylonian Vision”
- Karim Ben Khelifa, Seven Grams; “🇨🇩 Is our hunger for technology dooming DR Congo?” The Stream, Al Jazeera English, 25 min.
- Raoul Peck. 2016. “I Am Not Your Negro.” Video, 93 min.
Further Reading
- Chun, Wendy Hui Kyong. 2021. Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition. Cambridge, Massachusetts: The MIT Press.
- Mohamed, Shakir, Marie-Therese Png, and William Isaac. 2020. “Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence.” Philosophy & Technology 33 (4): 659–84. https://doi.org/10.1007/s13347-020-00405-8
- Mejias, Ulises A., and Nick Couldry. 2019. “Datafication.” Internet Policy Review 8 (4).
- Arun, Chinmayi. 2020. “AI and the Global South: Designing for Other Worlds.” In The Oxford Handbook of Ethics of AI, edited by Markus Dirk Dubber, Frank Pasquale, and Sunit Das. Oxford Handbooks Series. New York, NY: Oxford University Press.
- Castro Varela, María do Mar, and Nikita Dhawan. 2015. Postkoloniale Theorie. Eine kritische Einführung [Postcolonial Theory: A Critical Introduction]. Transcript Verlag.
- AI-Myths: Ethics guidelines will save us (2020)
Afternoon: Trustworthy AI
Introduction by Prof. Dr. Alexander Ilic, Head of ETH AI Center
Lecture: Hoda Heidari, ETH alum and now faculty member at CMU
Further Reading
- High-Level Expert Group on AI. Ethics guidelines for trustworthy AI. April 2019
- Li, Bo, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, Jiquan Pei, Jinfeng Yi, and Bowen Zhou. 2021. “Trustworthy AI: From Principles to Practices.”
- Liu, Haochen, Yiqi Wang, Wenqi Fan, Xiaorui Liu, Yaxin Li, Shaili Jain, Yunhao Liu, Anil K. Jain, and Jiliang Tang. 2021. “Trustworthy AI: A Computational Perspective.” (Comprehensive survey of six crucial dimensions: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.)
- Singh, Richa, Mayank Vatsa, and Nalini Ratha. 2020. “Trustworthy AI.”
- Yaghini, Mohammad, Andreas Krause, and Hoda Heidari. 2021. “A Human-in-the-Loop Framework to Construct Context-Aware Mathematical Notions of Outcome Fairness.” ArXiv:1911.03020 [Cs], May.
Wednesday 16.03.
Morning
Trustworthy AI
Lecture: Mennatallah El-Assady, research fellow, AI Center, ETH Zurich. Presentation slides
- Sperrle, Fabian, Mennatallah El-Assady, Grace Guo, Duen Horng Chau, Alex Endert, and Daniel Keim. 2020. “Should We Trust (X)AI? Design Dimensions for Structured Experimental Evaluations.”
- 7 Types of Data Bias in Machine Learning. Lionbridge AI, Aug 11, 2020
- Beck, Christin, Hannah Booth, Mennatallah El-Assady, and Miriam Butt. 2020. “Representation Problems in Linguistic Annotations: Ambiguity, Variation, Uncertainty, Error and Bias.” In Proceedings of the 14th Linguistic Annotation Workshop, 60–73. Barcelona, Spain: Association for Computational Linguistics.
Art/Design Project:
- Kate Crawford and Vladan Joler. 2018. Anatomy of an AI System: The Amazon Echo as an anatomical map of human labor, data and planetary resources.
Indigenous (perspectives on) AI
Introduction: Possibilities and limits of making available indigenous knowledge/experience for non-indigenous people
Readings in Class
- Lewis, Jason Edward, Noelani Arista, Archer Pechawis, and Suzanne Kite. 2018. “Making Kin with the Machines.” Journal of Design and Science, July.
- Introduction & Hāloa : the long breath, I = Author 2
- Introduction & wahkohtawin: kinship within and beyond the immediate family, the state of being related to others, I = Author 3
- Introduction & wakȟáŋ: that which cannot be understood, I = Author 4
- Indigenous Protocol and Artificial Intelligence: Guidelines for Indigenous-centred AI Design v.1 (pp. 20–22)
- Indigenous Protocol and Artificial Intelligence: How to Build Anything Ethically. Suzanne Kite in discussion with Corey Stover, Melita Stover Janis, and Scott Benesiinaabandan (pp. 75–84)
Further Reading:
- Cipolle, Alex V. 2022. “How Native Americans Are Trying to Debug A.I.’s Biases.” The New York Times, March 22, 2022, sec. Technology.
- Ito, Joichi. 2017. “Resisting Reduction: A Manifesto.” Journal of Design and Science 3 (November). (See Lewis et al., above.)
- Lewis, J. E. 2019. “An Orderly Assemblage of Biases: Troubling the Monocultural Stack.” In Schweitzer, I. (Ed.), Afterlives of Indigenous Archives (pp. 219–31). Lebanon, NH: New England Press.
- Lozano-Hemmer, R. (1996). FLOATING TROUT SPACE - native art in cyberspace. Telepolis.
- Research Data Alliance International Indigenous Data Sovereignty Interest Group. (September 2019). “CARE Principles for Indigenous Data Governance.” The Global Indigenous Data Alliance
- Taiuru, K. (2020). Treaty of Waitangi/Te Tiriti and Māori Ethics Guidelines for: AI, Algorithms, Data and IOT. (Sections 5 (Introduction) - 9.4. Data is a Taonga – Te Ao Māori Perspective & 13.3 (i) Māori Data Sovereignty Guidelines)
Afternoon: Indigenous AI
Guest: Tiara Roxanne, Postdoctoral Fellow at Data & Society in NYC, Indigenous Mestiza scholar and artist based in Berlin.
- Mimi Onuoha and Mother Cyborg. 2020. A People's Guide to AI.
Thursday 17.03.
Morning: Art & AI
Discussion of texts from Tuesday.
Presentation: Nora Al-Badri
Further Works
- “Let me into your home: artist Lauren McCarthy on becoming Alexa for a day” (The Guardian, May 2019)
Group Work: Task for each group:
Develop a conceptual sketch of a project that deals with one or more issues that are particularly relevant to the group, drawing on the discussions of bias, trustworthy AI, digital colonialism, or indigenous AI. The project sketch can be based on AI, but does not need to be. You can use whatever medium you like to address the issues.
Breakout rooms (12:00 - 17:00)
ZT 5.F11 & ZT 6.F09
Afternoon: Group Work
16:00 - 17:00
Group mentoring
Nora Al-Badri (ZT 6.K04)
16:00 - 16:20 Group 1
16:20 - 16:40 Group 2
16:40 - 17:00 Group 3
Felix Stalder (ZT 6.F09)
16:00 - 16:20 Group 4
16:20 - 16:40 Group 5
Friday, 18.03.
Morning: Group Work
Breakout Rooms
ZT 5.F12
ZT 5.F04
Afternoon: Group Presentations
Each group 10 minutes presentation, 10 minutes discussion
- Investigating Youtube Recommendations
- AI writing Sci-Fi Stories
- generative native fashion
- (A)I heard you. Can you stop?
- Augmenting Polyterasse: Alternative Retelling of History
Feedback and Wrap-up