Course Description

This course will introduce students to ways of thinking about how recent developments in AI systems powered by large language models (LLMs) shape everyday life, and how to design such systems in ways that respect human values.

Format

This is a lecture-based course with interactive discussions and a final project. For the final project, students will form interdisciplinary groups of 2-3 members and create an innovative Human-AI interaction system powered by LLMs (students may also use other deep learning architectures and modalities of their choice beyond text, including vision and audio, but must confirm with the instructor in advance). The system can draw on any of the application areas discussed throughout the course. In the past, student final projects have included counterspeech generators powered by LLMs, an interactive sign-language learning system, and a prompt-based image-generation tool for food and menu design.

This is a highly interactive class: You’ll be expected to actively participate in activities, projects, assignments, and discussions.

Students will read and discuss papers on Human-AI interaction powered by language models, on topics including but not limited to:

  • (1) Human-AI interactive systems powered by LLMs that work with, or clash against, the strengths and weaknesses of human cognition;
  • (2) designing interactive, human-in-the-loop approaches within such systems; and
  • (3) supporting interpretability, transparency, trust, and fairness in AI tools supported by LLMs.

These topics will be explored in the context of real-world applications (e.g., "For Some Autistic People, ChatGPT Is a Lifeline"), through which students will learn to think both optimistically and critically about what LLM-powered AI systems can do, and how they can and should be integrated into society.

Prerequisites

At a minimum, students should have intermediate proficiency in Python programming. Basic knowledge of deep learning / machine learning and statistics, and prior coursework in Human-Computer Interaction (HCI), are a plus but not required. This course will include in-class LLM tutorial sessions designed to help you with your course project.
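
As a rough calibration of the expected Python level, the tutorials will involve writing short scripts along the lines of the sketch below. This is a minimal sketch only, assuming the openai Python package (version 1.x) and an API key set in your environment; the tutorials may use a different library, provider, or model.

    # Minimal sketch of an LLM call, roughly the level of Python used in the tutorials.
    # Assumes the `openai` package (>= 1.0) and an OPENAI_API_KEY environment variable;
    # the model name below is a placeholder, not necessarily what the course will use.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a concise teaching assistant."},
            {"role": "user", "content": "In two sentences, explain what a large language model is."},
        ],
    )
    print(response.choices[0].message.content)

If you can read and modify a script like this comfortably, you have the programming background the course assumes.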

Reading List

Week 1
21-Aug: Introduction and Course Overview
  • Licklider, Joseph C. R. "Man-Computer Symbiosis." IRE Transactions on Human Factors in Electronics 1 (1960): 4-11. (read in class)
  • Shyam Sankar. The Rise of Human-Computer Cooperation. TED Talk video, 2012 (12 mins).
23-Aug: Primer on AI
  • Lubars, Brian, and Chenhao Tan. "Ask not what AI can do, but what AI should do: Towards a framework of task delegability." In Advances in Neural Information Processing Systems, pp. 57-67. 2019.
  • Xu, Anbang, Zhe Liu, Yufan Guo, Vibha Sinha, and Rama Akkiraju. "A new chatbot for customer service on social media." In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 3506-3510. 2017.
  • Nityesh Agarwal. "Getting started with reading Deep Learning Research papers: The Why and the How." Blog post at Towards Data Science, 2018.

Week 2
28-Aug: LLM Overview
  • Shanahan, M. (2022). Talking about large language models. arXiv preprint arXiv:2212.03551.
  • Optional: Zhao, W. X., Zhou, K., Li, J., Tang, T., Wang, X., Hou, Y., … & Wen, J. R. (2023). A survey of large language models. arXiv preprint arXiv:2303.18223.
  • Can Computers Learn Common Sense?, The New Yorker, 2022. (optional)
  • The Race to Make A.I. Smaller (and Smarter)
  • A mental health tech company ran an AI experiment on real users. Nothing's stopping apps from conducting more.
30-Aug: Primer on HCI
  • Amershi, Saleema, et al. "Guidelines for human-AI interaction." Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 2019.
  • Yang et al. "Re-examining Whether, Why, and How Human-AI Interaction Is Uniquely Difficult to Design." CHI 2020.
  • Shneiderman, B. "Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy." International Journal of Human-Computer Interaction 36, 6, 495-504. 2020.

Week 3
04-Sep: Labor Day, No Classes
06-Sep: LLM Tutorial - 1 (come to class with laptop)

Week 4
11-Sep: Prompting - 1
  • Zamfirescu-Pereira, J. D., Wong, R. Y., Hartmann, B., & Yang, Q. (2023, April). Why Johnny can't prompt: How non-AI experts try (and fail) to design LLM prompts. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1-21).
  • AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts. Tongshuang Wu, Michael Terry, Carrie J. Cai. CHI 2022.
  • Skim: Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., & Neubig, G. (2023). Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 55(9), 1-35.
13-Sep: Prompting - 2
  • PromptChainer: Chaining Large Language Model Prompts through Visual Programming. Tongshuang Wu, Ellen Jiang, Aaron Donsbach, Jeff Gray, Alejandra Molina, Michael Terry, Carrie J. Cai. CHI 2022.
  • Wei, Jason, et al. "Chain-of-thought prompting elicits reasoning in large language models." Advances in Neural Information Processing Systems 35 (2022): 24824-24837.

Week 5
18-Sep: LLM Tutorial - 2 (come to class with laptop)
20-Sep: Fairness, Accountability, Transparency & Ethics in LLMs - 1
  • Prabhakaran, Vinodkumar, Ben Hutchinson, and Margaret Mitchell. "Perturbation sensitivity analysis to detect unintended model biases." arXiv preprint arXiv:1910.04210 (2019).
  • Goyal, Nitesh, et al. "Is Your Toxicity My Toxicity? Exploring the Impact of Rater Identity on Toxicity Annotation." Proceedings of the ACM on Human-Computer Interaction 6.CSCW2 (2022): 1-28.
  • Clark, Elizabeth, et al. "All That's 'Human' Is Not Gold: Evaluating Human Evaluation of Generated Text." arXiv preprint arXiv:2107.00061 (2021).

Week 6
25-Sep: Fairness, Accountability, Transparency & Ethics in LLMs - 2
  • Jakesch, M., Bhat, A., Buschek, D., Zalmanson, L., & Naaman, M. (2023, April). Co-writing with opinionated language models affects users' views. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1-15).
  • Wenzel, K., Devireddy, N., Davison, C., & Kaufman, G. (2023, April). Can Voice Assistants Be Microaggressors? Cross-Race Psychological Responses to Failures of Automatic Speech Recognition. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems.
27-Sep: Fairness, Accountability, Transparency & Ethics in LLMs - 3
  • "Because AI is 100% right and safe": User Vulnerabilities and Sources of AI Authority in India. Shivani Kapania, Oliver Siy, Gabe Clapper, Azhagu SP, Nithya Sambasivan. CHI 2022.
  • Mendelsohn, J., Bras, R. L., Choi, Y., & Sap, M. (2023). From Dogwhistles to Bullhorns: Unveiling Coded Rhetoric with Language Models. arXiv preprint arXiv:2305.17174.

Week 7
02-Oct: Project Pitch Presentations
04-Oct: Project Pitch Presentations

Week 8
09-Oct: Columbus Day
11-Oct: LLM-Supported Health Care
  • Jo, E., Epstein, D. A., Jung, H., & Kim, Y. H. (2023, April). Understanding the benefits and challenges of deploying conversational AI leveraging large language models for public health intervention. CHI 2023.
  • Chen, S., Wu, M., Zhu, K. Q., Lan, K., Zhang, Z., & Cui, L. (2023). LLM-empowered Chatbots for Psychiatrist and Patient Simulation: Application and Evaluation. arXiv preprint arXiv:2305.13614.

Week 9
Oct 16-18: Instructor is out of town for CSCW

Week 10
23-Oct: Tutorial: Web Application Hosting for LLM-Powered Tools (come to class with laptop)
25-Oct: LLM Accessibility and Neurodiversity
  • Valencia, S., Cave, R., Kallarackal, K., Seaver, K., Terry, M., & Kane, S. K. (2023, April). "The less I type, the better": How AI Language Models can Enhance or Impede Communication for AAC Users. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1-14).
  • For Some Autistic People, ChatGPT Is a Lifeline. WIRED, 2023.
  • Elshentenawy, M., Ahmed, M., Elalfy, M., Bakr, A., Heidar, M., & Amer, E. (2023, July). Intellibot: A Personalized Behavioral Analysis Chatbot Framework Powered by GPT-3. In 2023 Intelligent Methods, Systems, and Applications (IMSA) (pp. 136-141). IEEE.

Week 11
30-Oct: LLM-Supported Work: Writing
  • CoAuthor: Designing a Human-AI Collaborative Writing Dataset for Exploring Language Model Capabilities. Mina Lee, Percy Liang, Qian Yang. CHI 2022.
  • TaleBrush: Sketching Stories with Generative Pretrained Language Models. John Joon Young Chung, Wooseok Kim, Kang Min Yoo, Hwaran Lee, Eytan Adar, Minsuk Chang. CHI 2022.
01-Nov: LLM-Supported Work: Research
  • Hämäläinen, P., Tavast, M., & Kunnari, A. (2023, April). Evaluating large language models in generating synthetic HCI research data: A case study. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1-19).
  • Park, Joon Sung, et al. "Social Simulacra: Creating Populated Prototypes for Social Computing Systems." Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology. 2022.

Week 12
06-Nov: Prototype + Midterm Presentation
08-Nov: Prototype + Midterm Presentation

Week 13
13-Nov: Multimodality
  • Zhao, L., Yu, E., Ge, Z., Yang, J., Wei, H., Zhou, H., … & Zhang, X. (2023). ChatSpot: Bootstrapping Multimodal LLMs via Precise Referring Instruction Tuning. arXiv preprint arXiv:2307.09474. (Demo: https://chatspot.streamlit.app/)
  • Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models. (Demo: https://www.youtube.com/watch?v=EWFixIk4vjs&t=2s)
15-Nov: Text to Visual/Audio
  • Lyu, C., Wu, M., Wang, L., Huang, X., Liu, B., Du, Z., … & Tu, Z. (2023). Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration. arXiv preprint arXiv:2306.09093.
  • Make-An-Audio: Text-To-Audio Generation with Prompt-Enhanced Diffusion Models. Demo: HuggingFace Suno Bark application or demo of choice.

Week 14
Nov 19-23: Thanksgiving Break

Week 15
27-Nov: Creative Applications
  • FaceChat: An Emotion-Aware Face-to-face Dialogue Framework.
  • Park, J. S., O'Brien, J. C., Cai, C. J., Morris, M. R., Liang, P., & Bernstein, M. S. (2023). Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442.
29-Nov: No Class, Work on Final Project

Week 16
04-Dec: Final Presentation
06-Dec: Final Presentation