===== What kind of AI do we want? Bringing artistic and technological practices together. =====

In this seminar, we look at "artificial intelligence" (AI) as a historical-material practice. That is, we understand AI as shaped by the concrete conditions of its development and use. We will address the current discourse around bias in AI and trustworthy AI within our democratically shaped society, and look at decolonial as well as indigenous approaches to AI.

This is a joint module by ZHdK (Felix Stalder) and ETH Zurich (Nora Al-Badri / Adrian Notz).

**Date:** March 14-18, 2022
**Time:** 10:00-13:00 / 14:00-17:00

**Location: Monday-Wednesday**
* ETH, Room ML H37.1 [[http://www.rauminfo.ethz.ch/Rauminfo/grundrissplan.gif?gebaeude=ML&geschoss=H&raumNr=37.1&lang=en|Map]]
* ETH Maschinenlabor
* Sonneggstrasse 3, 8092 Zürich

**Location: Thursday-Friday**
* ZHdK, Room ZT 6.K04
* Zürcher Hochschule der Künste, Toni-Areal
* Pfingstweidstrasse 96, 8031 Zürich

**Course requirements:**
* Presence in class (at least 80% of the time)
* Active contribution to discussions in class
* Active participation in group work and group presentations

==== Monday, 14.03. ====

=== Morning: Art & Science ===

Introduction to the seminar

Introduction: Art and Science

Guest: [[http://www.porsandrao.com/bio|Aparna Rao]], artist, Bangalore; [[https://asl.ethz.ch/research/rauc.html|Robotics Aesthetics & Usability Center (RAUC)]], ETH

Further Reading
* [[https://www.kunstforum.de/band/2021-277-leonardo-im-labor-kunst-und-wissenschaft/|Leonardo im Labor]], KUNSTFORUM International, Vol. 277 (October 2021)

=== Afternoon: Bias in AI ===

**Avoidable and unavoidable biases in AI**

{{:biases.png?200|}}

**Agonistic Machine Learning**

[[https://pad.vmk.zhdk.ch/bias_in_data|Pad for group exercise]]

== Videos/Artistic Works ==

[[https://youtu.be/zS9U3Gc832Y?t=1|Amazon Go]] - SNL, 13.03.2022

**Bias in Data**
* Roth, Lorna. 2009. “[[https://cjc-online.ca/index.php/journal/article/view/2196|Looking at Shirley, the Ultimate Norm: Colour Balance, Image Technologies, and Cognitive Equity]].” Canadian Journal of Communication 34 (1). ([[https://www.youtube.com/watch?v=d16LNHIEJzs|Vox, 2015]], 4:39)
* [[https://www.youtube.com/watch?v=YJjv_OeiHmo|The racist soap dispenser @ Facebook]], 2017
* !Mediengruppe Bitnik, [[https://wwwwwwwwwwwwwwwwwwwwww.bitnik.org/sor/|Dada. State of the Reference]], 2017

**Bias in Labelling**
* Trevor Paglen, “[[https://www.wired.com/story/viral-app-labels-you-isnt-what-you-think/|ImageNet Roulette]]”; Twitter: [[https://twitter.com/search?q=%23ImagenetRoulette|#imagenetroulette]]
* https://excavating.ai (talk @ HKW, 2019)
* Kate Crawford, Trevor Paglen: [[https://www.hkw.de/en/app/mediathek/video/69622|Datafication of Science]], HKW 2019
* Lyons, Michael J. 2021. “[[http://arxiv.org/abs/2107.13998|‘Excavating AI’ Re-Excavated: Debunking a Fallacious Account of the JAFFE Dataset]].” arXiv:2107.13998 [cs], July.

**Bias in Institutional Interest**
* [[https://whitecollar.thenewinquiry.com|White Collar Crime Risk Zones]]

**Bias in Modelling**
* Buolamwini, Joy. 2018. [[https://www.youtube.com/watch?v=QxuyfWoVV98|AI, Ain't I A Woman?]] (3:30) (see the sketch below)
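A minimal sketch of the kind of group-wise error audit behind work like Buolamwini's, assuming nothing more than a list of labelled predictions; every group label and number below is invented for illustration, not taken from any real study:

<code python>
# Hypothetical audit: compare the error rates of a classifier across
# demographic groups. All records and numbers here are invented.
from collections import defaultdict

# Each record: (group, true label, predicted label)
predictions = [
    ("darker_female",  "female", "male"),
    ("darker_female",  "female", "female"),
    ("darker_male",    "male",   "male"),
    ("lighter_female", "female", "female"),
    ("lighter_male",   "male",   "male"),
    ("lighter_male",   "male",   "male"),
]

tallies = defaultdict(lambda: [0, 0])  # group -> [wrong, total]
for group, truth, predicted in predictions:
    tallies[group][0] += int(predicted != truth)
    tallies[group][1] += 1

for group, (wrong, total) in sorted(tallies.items()):
    print(f"{group}: error rate {wrong / total:.0%} ({wrong}/{total})")
</code>

A large gap between the best- and worst-served groups is an "avoidable bias" in the sense of this session: it usually points back to skewed training data, labelling, or modelling choices rather than to chance.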
== Literature ==
* Angwin, J., Larson, J., Mattu, S., & Kirchner, L. 2016. “[[https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing|Machine Bias]].” ProPublica.
* Benjamin, Ruha. 2019. Race after Technology: Abolitionist Tools for the New Jim Code. Medford, MA: Polity.
* Crooks, Roderic, and Morgan Currie. 2021. “[[https://doi.org/10.1080/01972243.2021.1920081|Numbers Will Not Save Us: Agonistic Data Practices]].” The Information Society 37 (4): 201-13.
* Dastin, Jeffrey. 2018. “[[https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G|Amazon Scraps Secret AI Recruiting Tool That Showed Bias against Women]].” Reuters, October 10, 2018.
* [[https://www.netflix.com/at/title/81328723|Coded Bias]]. 2020. 85 min. Netflix.
* Gandy, Jr., Oscar H. 2020. “[[https://logicmag.io/commons/panopticons-and-leviathans-oscar-h-gandy-jr-on-algorithmic-life|Panopticons and Leviathans: Oscar H. Gandy, Jr. on Algorithmic Life]].” Logic Magazine.
* Hildebrandt, Mireille. 2019. “[[https://doi.org/10.1515/til-2019-0004|Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning]].” Theoretical Inquiries in Law 20 (1): 83-121.
* Hooker, Sara. 2021. “[[https://doi.org/10.1016/j.patter.2021.100241|Moving beyond ‘Algorithmic Bias Is a Data Problem’]].” Patterns 2 (4): 100241.
* Khan, Nora. Introduction to: Reas, Casey. Making Pictures with Generative Adversarial Networks. Anteism Books, 2019.
* Kogan, Gene. [[https://medium.com/@genekogan/machine-learning-for-artists-e93d20fdb097|Machine Learning for Artists]], 2017. See also [[https://ml4a.github.io|ml4a.github.io]].
* Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.
* O’Neil, Cathy. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. London: Allen Lane, Penguin Random House.
* Reas, Casey. Making Pictures with Generative Adversarial Networks. Anteism Books, 2019.

==== Tuesday, 15.03. ====

=== Morning: Digital Colonialism ===

**Readings in Class**
* Shankar, Shreya, Yoni Halpern, Eric Breck, James Atwood, Jimbo Wilson, and D. Sculley. 2017. “[[http://arxiv.org/abs/1711.08536|No Classification without Representation: Assessing Geodiversity Issues in Open Data Sets for the Developing World]].” Presented at the NIPS 2017 Workshop on Machine Learning for the Developing World. (see the sketch below)
* Scheuerman, Morgan Klaus, Madeleine Pape, and Alex Hanna. 2021. “[[https://doi.org/10.1177/20539517211053712|Auto-Essentialization: Gender in Automated Facial Analysis as Extended Colonial Project]].” Big Data & Society 8 (2).
* Chun, Wendy Hui Kyong. 2021. Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition. Cambridge, MA: The MIT Press. pp. 52-66. {{ :wendy_hui_kyong_chun_-_discriminating_data-mit_press_2021_p.52-66.pdf |PDF}}
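To make "geodiversity" concrete: a minimal sketch, assuming only a hypothetical manifest of country-of-origin tags for an image dataset; all tags and counts are invented, but the tally mirrors the kind of skew Shankar et al. report for widely used open datasets:

<code python>
# Hypothetical geodiversity audit of an image dataset.
# All country tags and counts are invented for illustration.
from collections import Counter

# One country-of-origin tag per image in the (made-up) manifest.
image_origins = (["US"] * 540 + ["GB"] * 210 + ["DE"] * 150 +
                 ["IN"] * 40 + ["BR"] * 12 + ["NG"] * 8)

counts = Counter(image_origins)
total = sum(counts.values())
for country, n in counts.most_common():
    print(f"{country}: {n / total:5.1%} of images")
</code>

Shankar et al. found that the large majority of images in popular open datasets came from North America and Europe; a tally like this makes such a skew visible before any model is ever trained on the data.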
**Videos, artistic work**
* knowbotiq: [[https://vimeo.com/682434539?embedded=true&owner=32068464&source=vimeo_logo|Mercurybodies - remote sensations]] (2022), video, 12 min
* [[https://www.nora-al-badri.de|Nora Al-Badri]], “[[https://aksioma.org/the.other.nefertiti|The Other Nefertiti]]” / “Nefertiti Bot” / “Babylonian Vision”
* Karim Ben Khelifa, [[https://www.youtube.com/watch?v=_OQDDlRP3m4|Seven Grams: Is our hunger for technology dooming DR Congo?]], The Stream, Al Jazeera English, 25 min
* Raoul Peck. 2016. “I Am Not Your Negro.” Video, 93 min.

**Further Reading**
* Chun, Wendy Hui Kyong. 2021. Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition. Cambridge, MA: The MIT Press.
* Mohamed, Shakir, Marie-Therese Png, and William Isaac. 2020. “[[https://doi.org/10.1007/s13347-020-00405-8|Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence]].” Philosophy & Technology 33 (4): 659-84.
* Mejias, Ulises A., and Nick Couldry. 2019. “[[https://doi.org/10.14763/2019.4.1428|Datafication]].” Internet Policy Review 8 (4).
* Arun, Chinmayi. 2020. “[[https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3403010|AI and the Global South: Designing for Other Worlds]].” In The Oxford Handbook of Ethics of AI, edited by Markus Dirk Dubber, Frank Pasquale, and Sunit Das. New York, NY: Oxford University Press.
* Castro Varela, María do Mar, and Nikita Dhawan. 2015. Postkoloniale Theorie. Eine kritische Einführung (Postcolonial Theory: A Critical Introduction). Transcript Verlag.
* AI Myths: [[https://www.aimyths.org/ethics-guidelines-will-save-us|Ethics guidelines will save us]] (2020)

=== Afternoon: Trustworthy AI ===

Introduction by [[https://im.ethz.ch/people/ailic.html|Prof. Dr. Alexander Ilic]], Head of the ETH AI Center

Lecture: [[https://www.cs.cmu.edu/~hheidari|Hoda Heidari]], ETH alumna and now faculty member at CMU

Further Reading
* High-Level Expert Group on AI. [[https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai|Ethics Guidelines for Trustworthy AI]]. April 2019.
* Li, Bo, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, Jiquan Pei, Jinfeng Yi, and Bowen Zhou. 2021. “[[https://doi.org/10.48550/ARXIV.2110.01167|Trustworthy AI: From Principles to Practices]].”
* Liu, Haochen, Yiqi Wang, Wenqi Fan, Xiaorui Liu, Yaxin Li, Shaili Jain, Yunhao Liu, Anil K. Jain, and Jiliang Tang. 2021. “[[https://doi.org/10.48550/ARXIV.2107.06641|Trustworthy AI: A Computational Perspective]].” (A comprehensive survey of six crucial dimensions: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.)
* Singh, Richa, Mayank Vatsa, and Nalini Ratha. 2020. “[[https://doi.org/10.48550/ARXIV.2011.02272|Trustworthy AI]].”
* Yaghini, Mohammad, Andreas Krause, and Hoda Heidari. 2021. “[[http://arxiv.org/abs/1911.03020|A Human-in-the-Loop Framework to Construct Context-Aware Mathematical Notions of Outcome Fairness]].” arXiv:1911.03020 [cs], May.

==== Wednesday, 16.03. ====

=== Morning: Trustworthy AI ===

Lecture: [[https://el-assady.com/|Mennatallah El-Assady]], research fellow at the ETH AI Center, Zurich

{{ :2022-03-16_el-assady_trustworthy-ai.pdf |Presentation slides}}

* Sperrle, Fabian, Mennatallah El-Assady, Grace Guo, Duen Horng Chau, Alex Endert, and Daniel Keim. 2020. “[[https://doi.org/10.48550/ARXIV.2009.06433|Should We Trust (X)AI? Design Dimensions for Structured Experimental Evaluations]].”
* [[https://stereoset.mit.edu|StereoSet: A Measure of Bias in Language Models]] (see the sketch below)
* [[https://becominghuman.ai/7-types-of-data-bias-in-machine-learning-2198cf1bccfd|7 Types of Data Bias in Machine Learning]]. Lionbridge AI, August 11, 2020.
* [[https://research.aimultiple.com/ai-bias/|Bias in AI: What It Is, Types, Examples & 6 Ways to Fix It in 2022]]
* Beck, Christin, Hannah Booth, Mennatallah El-Assady, and Miriam Butt. 2020. “[[https://aclanthology.org/2020.law-1.6|Representation Problems in Linguistic Annotations: Ambiguity, Variation, Uncertainty, Error and Bias]].” In Proceedings of the 14th Linguistic Annotation Workshop, 60-73. Barcelona, Spain: Association for Computational Linguistics.
* [[https://lotteringtamara.github.io/runawaymodels/|Runaway Models: Hey Siri, Tell me a story!]]
* https://explainer.ai

Art/Design project:
* Kate Crawford and Vladan Joler. 2018. [[https://anatomyof.ai|Anatomy of an AI System: The Amazon Echo as an anatomical map of human labor, data and planetary resources]]
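To show what StereoSet actually measures: a minimal sketch of its "stereotype score", assuming a language model has already assigned log-probabilities to paired continuations; the numbers below are invented stand-ins for real model scores:

<code python>
# StereoSet-style stereotype score (toy numbers). For each context a
# model scores a stereotypical and an anti-stereotypical continuation;
# the score is the share of pairs where the stereotypical one is
# preferred. An unbiased model would land at 50%. All log-probs here
# are invented for illustration.
pairs = [
    # (log-prob stereotypical, log-prob anti-stereotypical)
    (-2.1, -3.4),
    (-1.8, -1.9),
    (-4.0, -2.5),
    (-2.2, -3.0),
]

preferred = sum(1 for stereo, anti in pairs if stereo > anti)
print(f"stereotype score: {preferred / len(pairs):.0%} (ideal: 50%)")
</code>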
**Indigenous (perspectives on) AI**

Introduction: Possibilities and limits of making indigenous knowledge/experience available to non-indigenous people

**Readings in Class**
* Lewis, Jason Edward, Noelani Arista, Archer Pechawis, and Suzanne Kite. 2018. “[[https://doi.org/10.21428/bfafd97b|Making Kin with the Machines]].” Journal of Design and Science, July.
  - Introduction & Hāloa: the long breath, I = Author 2
  - Introduction & wahkohtawin: kinship within and beyond the immediate family, the state of being related to others, I = Author 3
  - Introduction & wakȟáŋ: that which cannot be understood, I = Author 4
* [[https://spectrum.library.concordia.ca/id/eprint/986506/7/Indigenous_Protocol_and_AI_2020.pdf|Indigenous Protocol and Artificial Intelligence]]: Guidelines for Indigenous-centred AI Design v.1 (pp. 20-22)
* [[https://spectrum.library.concordia.ca/id/eprint/986506/7/Indigenous_Protocol_and_AI_2020.pdf|Indigenous Protocol and Artificial Intelligence]]: How to Build Anything Ethically. Suzanne Kite in discussion with Corey Stover, Melita Stover Janis, and Scott Benesiinaabandan (pp. 75-84)

**Further Reading**
* Cipolle, Alex V. 2022. “[[https://www.nytimes.com/2022/03/22/technology/ai-data-indigenous-ivow.html|How Native Americans Are Trying to Debug A.I.'s Biases]].” The New York Times, March 22, 2022, sec. Technology.
* Ito, Joichi. 2017. “[[https://jods.mitpress.mit.edu/pub/resisting-reduction|Resisting Reduction: A Manifesto]].” Journal of Design and Science 3 (November 2017). (See Lewis et al., above.)
* Lewis, J. E. 2019. [[https://www.obxlabs.net/wp-content/uploads/2019/08/Lewis-Jason-Edward.-An-Orderly-Assemblage-of-Biases-Troubling-the-Monocultural-Stack.-2019.pdf|An Orderly Assemblage of Biases: Troubling the Monocultural Stack]]. In Schweitzer, I. (Ed.), Afterlives of Indigenous Archives (pp. 219-31). Lebanon, NH: New England Press.
* Lozano-Hemmer, R. 1996. [[https://www.heise.de/tp/features/FLOATING-TROUT-SPACE-native-art-in-cyberspace-3441019.html|FLOATING TROUT SPACE - native art in cyberspace]]. Telepolis.
* Research Data Alliance International Indigenous Data Sovereignty Interest Group. September 2019. “[[https://www.gida-global.org/care|CARE Principles for Indigenous Data Governance]].” The Global Indigenous Data Alliance.
* Taiuru, K. 2020. [[http://www.taiuru.maori.nz/tiritiethicalguide/|Treaty of Waitangi/Te Tiriti and Māori Ethics Guidelines for: AI, Algorithms, Data and IOT]]. (Sections 5 (Introduction) to 9.4 "Data is a Taonga - Te Ao Māori Perspective" & 13.3 (i) "Māori Data Sovereignty Guidelines")

=== Afternoon: Indigenous AI ===

Guest: [[https://www.tiararoxanne.com/about.html|Tiara Roxanne]], Postdoctoral Fellow at Data & Society (NYC), Indigenous Mestiza scholar and artist based in Berlin.

* Mimi Onuoha and Mother Cyborg. 2020. [[https://alliedmedia.org/resources/peoples-guide-to-ai|A People's Guide to AI]].

==== Thursday, 17.03. ====

=== Morning: Art & AI ===

**Discussion** of the texts from Tuesday.
**Presentation**: Nora Al-Badri

**Further Works**

“[[https://www.theguardian.com/artanddesign/2019/may/14/artist-lauren-mccarthy-becoming-alexa-for-a-day-ai-more-than-human|Let me into your home: artist Lauren McCarthy on becoming Alexa for a day]]” (The Guardian, May 2019)

**Group Work**

Task for each group: Develop a conceptual sketch of a project that deals with one or more issues from the discussions on bias, trustworthy AI, digital colonialism, or indigenous AI that are particularly relevant to the group. The project sketch can be based on AI, but doesn't need to be. You can use whatever medium you like to address the issues.

Breakout rooms (12:00-17:00): ZT 5.F11 & ZT 6.F09

=== Afternoon: Group Work ===

16:00-17:00 Group mentoring

Nora Al-Badri (ZT 6.K04)
* 16:00-16:20 Group 1
* 16:20-16:40 Group 2
* 16:40-17:00 Group 3

Felix Stalder (ZT 6.F09)
* 16:00-16:20 Group 4
* 16:20-16:40 Group 5

==== Friday, 18.03. ====

=== Morning: Group Work ===

Breakout rooms: ZT 5.F12 & ZT 5.F04

=== Afternoon: Group Presentations ===

Each group: 10 minutes presentation, 10 minutes discussion.

- Investigating YouTube Recommendations
- AI Writing Sci-Fi Stories
- Generative Native Fashion
- (A)I heard you. Can you stop?
- Augmenting Polyterasse: Alternative Retelling of History

* https://www.aiwriter.app/

Feedback and Wrap-up