====== What kind of AI do we want? The case of generative AI ======

This is a joint module for BFA students (ZHdK) and BA students of Computer Science (ETH) that brings together artistic and technological perspectives. Our starting point is to consider «Artificial Intelligence» (AI) as a historical-material practice, i.e. one shaped by the concrete conditions of its development and use. We focus on "Generative AI" as a technological and artistic field, as well as a site of critical interrogation. We will cover themes such as "Bias in AI", "Digital Colonialism", and the potentials and limits of current AI approaches. The presentations will be discussed in depth, and key publications from computer science and art theory will be read and discussed. Experts from different fields and artists will be invited, and selected artworks will be discussed. At the end of the module, interdisciplinary teams will develop concepts for joint practice-oriented projects.

This module is a cooperation between the Department of Fine Arts (ZHdK) and the ETH AI Center. It is open to BA students from both institutions and requires no prior technical or theoretical expertise. On the first day, there will be a hands-on introduction to Machine Learning for BFA students.

**Course requirements (for ZHdK students)**
  * 80% attendance
  * Contribution to discussions and group work

[[https://pad.vmk.zhdk.ch/whatgenerativeAI|Pad with notes]]

====== Mon. 13.03.2023 ======

Time: 10:00–17:00
Place: ZHdK, Viaduktraum ZT 2.A05

Introduction to Machine Learning with [[https://alexandrerputtick.wordpress.com/|Alexandre Puttick]] (ZHdK students only)

Alexandre Puttick works as a data scientist and researcher in applied AI. His research focuses on applications in mental health, bias detection and mitigation in language models, and explainable AI.
He is also a collaborator on the ZHdK-based artistic research project "Latent Space: Performing Ambiguous Data," which explores the state in which different valid readings co-exist within data-driven systems.

**Machine Learning Resources**
  * Gene Kogan. Machine Learning for Art. https://ml4a.net/fundamentals/
  * Machine Learning Glossary. https://ml-cheatsheet.readthedocs.io/en/latest/index.html

**[[https://bengrosser.com/projects/|Ben Grosser]]**: Metrics in Social Media

====== Tues. 14.03.2023 ======

Time: 10:00–17:00
Place: ZHdK, Viaduktraum ZT 2.A05

=== Artist Presentation ===

[[https://www.nora-al-badri.de|Nora Al-Badri]]

Nora Al-Badri is a multi-disciplinary and conceptual media artist with a German-Iraqi background. Her works are research-based as well as paradisciplinary, and as much post-colonial as post-digital.

=== Presentation: From text to image with AI - when, how and why? ===

by [[https://dvstudies.net/2021/11/19/eva-cetinic/|Dr. Eva Cetinić]], Postdoctoral Fellow, [[https://dvstudies.net/|Digital Visual Studies]], UZH

Her research focuses on new methodologies at the intersection of artificial intelligence and art history. In particular, she explores deep learning techniques for computational image understanding and multimodal reasoning in the context of visual art.

**Abstract:** Introduction to the concept of multimodality within deep learning: the "revolution of 2021" with multimodal foundation models (e.g. CLIP). Discussion of the various aspects and problems that arise from models being trained on hundreds of millions of image-text pairs from the Internet (e.g. bias, cultural specificity, limitations of risk mitigation techniques, consent of content use, copyright issues, etc.).
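At the core of models like CLIP, the "similarity between word and image" discussed above reduces to a cosine similarity between embedding vectors produced by a text encoder and an image encoder. The following minimal sketch illustrates only that final comparison step; the hand-made three-dimensional vectors stand in for real CLIP embeddings (which have hundreds of dimensions), and all names and numbers are illustrative, not CLIP's actual API:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (the CLIP-style score)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for real encoder outputs (illustrative values only).
text_embedding  = np.array([0.9, 0.1, 0.3])    # e.g. the encoded prompt "a cat"
image_embedding = np.array([0.8, 0.2, 0.25])   # e.g. an encoded photo of a cat
other_embedding = np.array([-0.1, 0.9, 0.4])   # e.g. an encoded photo of a car

# The prompt scores higher against the matching image than the mismatched one.
print(cosine_similarity(text_embedding, image_embedding))
print(cosine_similarity(text_embedding, other_embedding))
```

With a real multimodal model, the same comparison is run on encoder outputs: the prompt whose embedding lies closest to an image's embedding is, in effect, the model's "reading" of that image, which is also what makes prompting and text-to-image retrieval possible.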
Discussion of the concept of "prompting"; the notion of similarity between word and image; the relation to art (using existing art, creating new "art"); the aesthetics of generated images (how it started and where it is going); the potential impact on the perception of images and media content.

  * [[https://www.biennial.com/collaborations/the-next-biennial-should-be-curated-by-a-machine-experiment-aitnb|The Next Biennial Should be Curated by a Machine: Experiment AI-TNB]] | Liverpool Biennial of Contemporary Art (2021)
  * https://ai.biennial.com
  * Eva Cetinic. [[https://arxiv.org/abs/2211.15271|The Myth of Culturally Agnostic AI Models]] (28 Nov 2022)

== Workshop with Eva Cetinić ==

{{ ::ec_workshop_presentation.pdf |Workshop PDF}}

=== The Art of Bias: artistic works exploring (bias in) machine vision ===

Mario Klingemann & Google Arts & Culture. [[https://artsexperiments.withgoogle.com/xdegrees/|X Degrees of Separation]]. 2018
  * [[https://dzlab.github.io/dl/2019/02/02/X-Degrees-Separation/|X Degrees of Separation with PyTorch]]

Bruno Moreschi. [[https://aeon.co/videos/what-does-an-ai-make-of-what-it-sees-in-a-contemporary-art-museum|Recoding Art]], 2021, 14 min
  * Pereira, Gabriel, and Bruno Moreschi. 2021. "Artificial Intelligence and Institutional Critique 2.0: Unexpected Ways of Seeing with Computer Vision." AI & SOCIETY 36 (4): 1201–23. https://doi.org/10.1007/s00146-020-01059-y

!Mediengruppe Bitnik. [[https://wwwwwwwwwwwwwwwwwwwwww.bitnik.org/sor/|Dada. State of Reference]] (2017) & [[https://wwwwwwwwwwwwwwwwwwwwww.bitnik.org/samesame/|Same Same. Watching Algorithms. Cabaret Voltaire Edition]] (2015)

====== Wed. 15.03.2023 ======

Time: 10:00–17:00
Place: ETHZ, LFW B3, Universitätsstrasse 2, 8092 Zürich

=== Structured Randomness as a tool in the artistic process ===

  * Oblique Strategies (subtitled "Over One Hundred Worthwhile Dilemmas"), Brian Eno and Peter Schmidt, 1975 ([[http://www.rtqe.net/ObliqueStrategies/OSintro.html|A primer]], [[https://www.oblique.pouruntemps.com|online version]])

=== Bias in AI ===

Bias as "incorrect representation"/"systematic distortion" vs. bias as "unacknowledged standpoint" ([[https://dict.leo.org/englisch-deutsch/bias|German translation]])

{{:biases.png?200|}}

== Videos/Artistic Works ==

[[https://youtu.be/zS9U3Gc832Y?t=1|Amazon Go]] - SNL, 13.03.2022

**Bias in Data**
  * Roth, Lorna. 2009. "[[https://doi.org/10.22230/cjc.2009v34n1a2196|Looking at Shirley, the Ultimate Norm: Colour Balance, Image Technologies, and Cognitive Equity]]." Canadian Journal of Communication 34 (1). ([[https://www.youtube.com/watch?v=d16LNHIEJzs|Vox, 2015]], 4:39)
  * [[https://www.youtube.com/watch?v=YJjv_OeiHmo|The racist soap dispenser @ Facebook]], 2017
  * Mozilla Internet Health Report, 2022: [[https://2022.internethealthreport.org/|AI and Power]]

**Bias in Labelling**
  * Trevor Paglen. [[https://www.wired.com/story/viral-app-labels-you-isnt-what-you-think/|"ImageNet Roulette"]], Twitter [[https://twitter.com/search?q=%23ImagenetRoulette|#imagenetroulette]]
  * Kate Crawford, Trevor Paglen. [[https://excavating.ai|Excavating AI: The Politics of Images in Machine Learning Training Sets]]
  * Kate Crawford, Trevor Paglen. [[https://www.hkw.de/en/app/mediathek/video/69622|Datafication of Science]], HKW 2019
  * Lyons, Michael J. 2021. "[[http://arxiv.org/abs/2107.13998|'Excavating AI' Re-Excavated: Debunking a Fallacious Account of the JAFFE Dataset]]." arXiv:2107.13998 [cs], July.

**Bias in Institutional Interest**
  * [[https://whitecollar.thenewinquiry.com|White Collar Crime Risk Zones]]

**Bias in Modelling**
  * Buolamwini, Joy. 2018. [[https://www.youtube.com/watch?v=QxuyfWoVV98|AI, Ain't I A Woman?]] (3:30)

**Exploring Data in Machine Vision**
  * https://this-person-does-not-exist.com/en
  * [[https://pitscher.net/|Pitscher]] (Matthias Schäfer). https://this-person-does-exist.com
  * Adam Harvey. [[https://adam.harvey.studio/on-computer-vision/|On Computer Vision]]. 2021 & [[https://adam.harvey.studio|others]]

**The case of LLMs (e.g. ChatGPT)**
  * Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. "[[https://doi.org/10.1145/3442188.3445922|On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜]]." In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. Virtual Event, Canada: ACM.
  * Emily Bender. 2022. [[https://www.youtube.com/watch?v=wuU-5rGPbyg|Resisting dehumanization in the age of AI]] (June 22), 62 min
  * Dodge, Jesse, Maarten Sap, Ana Marasović, William Agnew, Gabriel Ilharco, Dirk Groeneveld, Margaret Mitchell, and Matt Gardner. 2021. "[[https://doi.org/10.48550/ARXIV.2104.08758|Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus]]."

== Further Reading ==

  * Angwin, J., Larson, J., Mattu, S., & Kirchner, L. 2016. [[https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing|Machine Bias]]. ProPublica.
  * Benjamin, Ruha. 2019. Race after Technology: Abolitionist Tools for the New Jim Code. Medford, MA: Polity.
  * Crooks, Roderic, and Morgan Currie. 2021. "[[https://doi.org/10.1080/01972243.2021.1920081|Numbers Will Not Save Us: Agonistic Data Practices]]." The Information Society 37 (4): 201–13.
  * Dastin, Jeffrey. 2018. "[[https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G|Amazon Scraps Secret AI Recruiting Tool That Showed Bias against Women]]." Reuters. October 10, 2018.
  * [[https://www.netflix.com/at/title/81328723|Coded Bias]]. 2020. 85 min. Netflix.
  * Gandy, Jr., Oscar H. 2020. "[[https://logicmag.io/commons/panopticons-and-leviathans-oscar-h-gandy-jr-on-algorithmic-life|Panopticons and Leviathans: Oscar H. Gandy, Jr. on Algorithmic Life]]." Logic Magazine.
  * Hildebrandt, Mireille. 2019. "[[https://doi.org/10.1515/til-2019-0004|Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning]]." Theoretical Inquiries in Law 20 (1): 83–121.
  * Hooker, Sara. 2021. "[[https://doi.org/10.1016/j.patter.2021.100241|Moving beyond 'Algorithmic Bias Is a Data Problem']]." Patterns 2 (4): 100241.
  * Khan, Nora. Introduction in: Reas, Casey. Making Pictures with Generative Adversarial Networks. Anteism Books, 2019.
  * Kogan, Gene. [[https://medium.com/@genekogan/machine-learning-for-artists-e93d20fdb097|Machine Learning for Artists]], 2017. See also [[https://ml4a.github.io|ml4a.github.io]]
  * Lyons, Michael J. 2021. "[[https://deepai.org/publication/excavating-ai-re-excavated-debunking-a-fallacious-account-of-the-jaffe-dataset|'Excavating AI' Re-Excavated: Debunking a Fallacious Account of the JAFFE Dataset]]." deepai.org.
  * Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.
  * O'Neil, Cathy. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. London: Allen Lane, Penguin Random House.
  * Reas, Casey. Making Pictures with Generative Adversarial Networks. Anteism Books, 2019.

=== Digital Colonialism ===

**Videos, artistic work**
  * knowbotiq. [[https://vimeo.com/682434539?embedded=true&owner=32068464&source=vimeo_logo|Mercurybodies - remote sensations]] (2022), video, 12 min
  * [[https://www.nora-al-badri.de|Nora Al-Badri]]. "[[https://aksioma.org/the.other.nefertiti|The Other Nefertiti]]" / "Nefertiti Bot" / "Babylonian Vision"
  * Karim Ben Khelifa. [[https://www.youtube.com/watch?v=_OQDDlRP3m4|Seven Grams: 🇨🇩 Is our hunger for technology dooming DR Congo?]] | The Stream, Al Jazeera English. 25 min
  * Raoul Peck. 2016. "I Am Not Your Negro". Video, 93 min.

**Reading in class**

Sabelo Mhlambi. 7/8/2020. "[[https://carrcenter.hks.harvard.edu/publications/rationality-relationality-ubuntu-ethical-and-human-rights-framework-artificial|From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance]]." Carr Center Discussion Paper Series, 2020-009. [[https://carrcenter.hks.harvard.edu/files/cchr/files/ccdp_2020-009_sabelo_b.pdf|Full-text PDF]]
  * All: Abstract
  * Group 1: pp. 1–8, Introduction
  * Group 2: pp. 8–16, Individualism: The Irrational Personhood / Ubuntu as Relational Personhood
  * Group 3: pp. 16–23, Implications of Relational Personhood / Data Colonialism and Surveillance Capitalism as Attacks on Relational Personhood / Five Core Critiques of Artificial Intelligence
  * All: pp. 24–26, Ethics and Technology / Technology and Policy Using Ubuntu / Conclusion

**Further Reading**
  * Shankar, Shreya, Yoni Halpern, Eric Breck, James Atwood, Jimbo Wilson, and D. Sculley. 2017. "[[http://arxiv.org/abs/1711.08536|No Classification without Representation: Assessing Geodiversity Issues in Open Data Sets for the Developing World]]." Presented at the NIPS 2017 Workshop on Machine Learning for the Developing World.
  * Scheuerman, Morgan Klaus, Madeleine Pape, and Alex Hanna. 2021. "[[https://doi.org/10.1177/20539517211053712|Auto-Essentialization: Gender in Automated Facial Analysis as Extended Colonial Project]]." Big Data & Society 8 (2): 205395172110537.
  * Chun, Wendy Hui Kyong. 2021. Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition. Cambridge, MA: The MIT Press. Pp. 52–66. {{ :wendy_hui_kyong_chun_-_discriminating_data-mit_press_2021_p.52-66.pdf |PDF}}
  * Mohamed, Shakir, Marie-Therese Png, and William Isaac. 2020. "Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence." Philosophy & Technology 33 (4): 659–84. https://doi.org/10.1007/s13347-020-00405-8
  * Mejias, Ulises A., and Nick Couldry. 2019. "[[https://doi.org/10.14763/2019.4.1428|Datafication]]." Internet Policy Review 8 (4).
  * Arun, Chinmayi. 2020. "[[https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3403010|AI and the Global South: Designing for Other Worlds]]." In The Oxford Handbook of Ethics of AI, edited by Markus Dirk Dubber, Frank Pasquale, and Sunit Das. Oxford Handbooks Series. New York, NY: Oxford University Press.
  * María do Mar Castro Varela and Nikita Dhawan. 2015. "Postkoloniale Theorie. Eine kritische Einführung" [Postcolonial Theory: A Critical Introduction]. Transcript Verlag.
  * AI Myths. [[https://www.aimyths.org/ethics-guidelines-will-save-us|Ethics guidelines will save us]] (2020)

=== Group Work I ===

Develop a conceptual sketch of a project (a situation or an application) that deals with one or more issues from the discussions on bias, digital colonialism, and AI that are particularly relevant to the group. The project can be based on AI, but does not need to be; you can use whatever medium you like to address the issues.

  * The situation or application should touch on and reference one or more issues discussed in the seminar, AND relate to a field of interest or an existing practice/experience of the team members.
  * Goal: present a conceptual sketch.
  * Mixed teams (ZHdK and ETH).

====== Thur. 16.03.2023 ======

Time: 10:00–17:00
Place: ETHZ, LFW B3, Universitätsstrasse 2, 8092 Zürich

=== Guest ===

[[https://hannesbajohr.de/en|Hannes Bajohr]] (Fellow, Collegium Helveticum): Post-Artificial Writing: Authorship in the Age of Artificial Intelligence
  * Bajohr, Hannes. 2022. "[[https://hannesbajohr.de/wp-content/uploads/2022/11/Bajohr_Dumme_Bedeutung.pdf|Dumme Bedeutung. Künstliche Intelligenz und artifizielle Semantik]]" [Dumb Meaning: Artificial Intelligence and Artificial Semantics]. Merkur, 2022.

=== Art of generative AI ===

  * Giacomo Miceli. [[https://infiniteconversation.com/|The Infinite Conversation]] (2022)
  * [[https://adam.harvey.studio/|Adam Harvey]]
  * [[https://refikanadol.com|Refik Anadol]]
  * R.H. Lossin. 2023. [[https://www.e-flux.com/criticism/527236/refik-anadol-s-unsupervised|Refik Anadol's "Unsupervised"]]. e-flux (March 14).
  * Christopher Kulendran Thomas ([[https://www.newgalerie.com/?page=artists&id=25|Being Human]], 2019 | [[https://www.berlinartlink.com/2022/12/16/christopher-kulendran-thomas-imagines-alternate-realities-in-another-world|Finesse]], 2022 | [[https://www.frontart.org/artists/kulendran-thomas|Dataset #1-4]] | [[https://vimeo.com/777019419?embedded=true&source=vimeo_logo&owner=19338498|Video, KW Berlin]])
  * Lauren Lee McCarthy ([[https://lauren-mccarthy.com/LAUREN|Lauren]], [[https://lauren-mccarthy.com/SOMEONE|Someone]], 2019)
  * Memo Akten. [[https://mixed-news.com/en/tiktoks-latest-beauty-ar-filter-is-indistinguishable-from-reality|TikTok's latest beauty AR filter is indistinguishable from reality]]. 2023
  * Kendrick Lamar. [[https://www.youtube.com/watch?v=uAPUkgeiFVY|The Heart Part 5]], 2022
  * Francesca Panetta, Halsey Burgund. [[https://moondisaster.org|Moon Disaster Speech]]. 2019

=== Group Work II ===

====== Fri. 17.03.2023 ======

Time: 10:00–17:00
Place: ETHZ, LFW B3, Universitätsstrasse 2, 8092 Zürich

[[https://news.uchicago.edu/story/uchicago-scientists-develop-new-tool-protect-artists-ai-mimicry|UChicago scientists develop new tool to protect artists from AI mimicry]]. Feb 14, 2023 | [[https://arxiv.org/pdf/2302.04222.pdf|arXiv PDF]]

=== Group Work III ===

=== Group presentations ===

====== Additional resources ======

Eryk Salvaggio. Critical Topics: AI Images. Syllabus for 26 classes. https://www.cyberneticforests.com/ai-images