LFI Course Materials 4/Week ten
Week 10: AI and algorithms and privacy
- Real-time lecture: September 24th, 9-11 a.m. Pacific / 12-2 p.m. Eastern, on Zoom: https://zoom.us/j/9129428892
Overview
Much is made of the coming artificially intelligent future, in which automation takes over menial labor and self-driving electric cars get us off the internal combustion engine. That fantasy ignores the reality of the AI infrastructures being built right now, infrastructures that exacerbate existing social problems and create entirely new concerns. Artificial intelligence is built on enormous datasets, often assembled from the wealth of information collected about us without our consent, and processing all of that data is extraordinarily resource-intensive. Much of today's AI is also put to directly privacy-violating uses, such as facial recognition and predictive policing. This week we're joined by Varoon Mathur, technology fellow at AI Now Institute, to talk about the privacy, labor, and ecological implications of artificial intelligence; how AI and related technologies are being deployed during the current crisis, in what Naomi Klein has called the "Screen New Deal," to create a high-tech dystopia; which power players and political ideologies are shaping this reality; and what we can do to fight back.
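To make "resource-intensive" concrete, here is a minimal back-of-envelope sketch in Python of how one might estimate the electricity and carbon footprint of a single model-training run. Every input is an illustrative assumption, not a measurement: the PUE overhead factor and the grid carbon intensity are approximate figures of the kind cited in energy-and-AI research, and the GPU cluster is invented for the example.

    # Back-of-envelope estimate of the energy and carbon cost of training one AI model.
    # All inputs are illustrative assumptions, not measurements of any real system.

    def training_footprint(gpu_count, gpu_watts, hours, pue=1.58, lbs_co2_per_kwh=0.954):
        """Return (kWh consumed, lbs of CO2 emitted) for a hypothetical training run.

        pue: data-center Power Usage Effectiveness, i.e. a cooling/overhead multiplier.
        lbs_co2_per_kwh: approximate U.S. average grid carbon intensity.
        """
        kwh = gpu_count * gpu_watts * hours * pue / 1000.0  # watts -> kilowatt-hours
        return kwh, kwh * lbs_co2_per_kwh

    # Hypothetical run: 512 GPUs drawing 300 W each, training for two weeks.
    kwh, co2 = training_footprint(gpu_count=512, gpu_watts=300, hours=14 * 24)
    print(f"~{kwh:,.0f} kWh of electricity, ~{co2:,.0f} lbs of CO2")

Even this toy estimate lands in the tens of thousands of kilowatt-hours for one training run, which is the scale of ecological concern raised in the Anatomy of an AI reading below.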
Readings
- Anatomy of an AI System, AI Now Institute's report and visualization of the data, labor, and ecological impacts of a single Amazon Echo device: https://anatomyof.ai/
- The far-right helped create the world's most powerful facial recognition technology: https://www.huffpost.com/entry/clearview-ai-facial-recognition-alt-right_n_5e7d028bc5b6cb08a92a5c48
- Palantir provides COVID-19 tracking software to the CDC and NHS, pitches European health agencies: https://techcrunch.com/2020/04/01/palantir-coronavirus-cdc-nhs-gotham-foundry/
- Defund facial recognition: https://www.theatlantic.com/technology/archive/2020/07/defund-facial-recognition/613771/
- (Optional long read) Dirty data, bad predictions: how civil rights violations impact police data, predictive policing systems, and justice: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3333423
Guest lecturer
Varoon Mathur, technology fellow at AI Now Institute (https://ainowinstitute.org/)
Discussion
- Discuss the unique impacts AI has on labor, data privacy, and planetary resources.
- What do you make of the multiple connections between the AI industry and individuals with reactionary politics?
- How can we begin to address these issues? What are the implications for library programs and services? For example, we often assist our patrons who've just received holiday gadgets. How can we incorporate these critiques into those services? How can we broaden the critique so that we're not just telling people not to plug in their Echo device?
Tasks
- Lecture, discussion forum, final project check-ins with Alison