z-modul:kunst_kuenstliche_intelligenz [2023/02/14 08:42] – fstalder@zhdk.ch
  * self-driving cars, [[https://www.nytimes.com/2015/03/20/business/elon-musk-says-self-driving-tesla-cars-will-be-in-the-us-by-summer.html|2015]]
  * Blockchain, [[https://www.pwc.com/gx/en/industries/technology/publications/blockchain-report-transform-business-economy.html|2019]]
  * AI Sentience, 2023
**Main Sources:**
  * Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜.” In [[https://doi.org/10.1145/3442188.3445922|Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency]], 610–23. Virtual Event, Canada: ACM. \\ “we understand the term language model (LM) to refer to systems which are trained on string prediction tasks: that is, predicting the likelihood of a token (character, word or string) given either its preceding context or (in bidirectional and masked LMs) its surrounding context.” (Bender et al., 2021, p. 611) \\ resource use \\ bias (gender, class, language, geography) \\ false narratives
  * Bender, Emily. 2022. [[https://www.youtube.com/watch?v=wuU-5rGPbyg|Resisting Dehumanization in the Age of AI.]] Talk at CogSci: Interdisciplinary Study of the Mind (07.29), 62 min.
  * Mozilla Internet Health Report. 2022. [[https://2022.internethealthreport.org/facts/|Who Has Power Over AI?]]