Large-Scale Neural Network Models for Neuroscience
Winter 2024, Stanford University
Time: Tues/Thurs 10:30am-11:50am
Location: Gates (Computer Science), Rm B12
Instructors: Daniel Yamins and Klemen Kotar (x@stanford.edu where x=yamins or klemenk)
The last ten years have seen a watershed in the development of large-scale neural networks in artificial intelligence. At the same time, computational neuroscientists have discovered a surprisingly robust mapping between the internal components of these networks and real neural structures in the human brain. In this class we will discuss a panoply of examples of such "convergent man-machine evolution", including: feedforward models of sensory systems (vision, audition, somatosensation); recurrent neural networks for dynamics and motor control; integrated models of attention, memory, and navigation; transformer models of language areas; self-supervised models of learning; and deep RL models of decision-making and planning. We will also delve into the methods and metrics for comparing such models to real-world neural data, and address how unsolved open problems in AI (that you can work on!) will drive forward novel neural models. Some meaningful background in modern neural networks is highly advised (e.g. CS229, CS230, CS231n, CS234, CS236, CS330), but formal preparation in cognitive science or neuroscience is not needed (we will provide it).
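As a taste of the kind of model-to-brain comparison the course covers, the sketch below illustrates one common metric of this sort: cross-validated linear "neural predictivity", in which a regularized linear map is fit from a model layer's features to recorded neural responses and scored on held-out stimuli. The data here are synthetic stand-ins, not course materials or real recordings, and this is only one of several metrics we will discuss.

# Illustrative sketch (assumes numpy and scikit-learn): neural predictivity via
# cross-validated ridge regression from model features to per-neuron responses.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Hypothetical data: 200 stimuli, 512-dim model features, 50 recorded "neurons".
n_stimuli, n_features, n_neurons = 200, 512, 50
model_features = rng.standard_normal((n_stimuli, n_features))
# Synthetic neural responses that partly depend on the features (demo only).
true_map = rng.standard_normal((n_features, n_neurons)) / np.sqrt(n_features)
neural_responses = model_features @ true_map + 0.5 * rng.standard_normal((n_stimuli, n_neurons))

fold_scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(model_features):
    reg = Ridge(alpha=1.0)
    reg.fit(model_features[train_idx], neural_responses[train_idx])
    pred = reg.predict(model_features[test_idx])
    # Per-neuron explained variance on held-out stimuli.
    resid_var = np.var(neural_responses[test_idx] - pred, axis=0)
    total_var = np.var(neural_responses[test_idx], axis=0)
    fold_scores.append(1.0 - resid_var / total_var)

print("median held-out neural predictivity:", np.median(np.stack(fold_scores)))

In practice, the same recipe is applied with features drawn from a particular layer of a trained network and responses from electrophysiology or fMRI, and the resulting scores are compared across layers, models, and brain areas.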