2024-25 Distinguished Computational Linguistics Lecture


Finding linguistic structure in large language models
with Christopher Potts, Ph.D.
Professor and Chair in the Department of Linguistics at Stanford University

Neural network interpretability research has proceeded at an incredible pace in recent years, leading to many powerful techniques for understanding how large language models (LLMs) process information and solve hard generalization tasks. In this talk, I'll argue that these same techniques can provide rich evidence for linguistic investigations. My focus will be on the framework of causal abstraction, a family of interpretability techniques that allow us to test sophisticated hypotheses about the structures latent in LLM representations. I'll argue that analyses in this mode can directly inform deep analyses of specific linguistic phenomena, and that they can yield insights into the kinds of inductive bias that are sufficient for acquiring natural languages.


Contact
Zhong Chen
Event Snapshot
When and Where
October 03, 2024
11:00 am - 12:00 pm
Room/Location: Xerox Auditorium | GLE-2580
Who
Open to the Public

Interpreter Requested?
No

Topics
artificial intelligence