Final ACO Doctoral Examination and Defense of Dissertation of Max Dabagia: 8 April 2025

Title: Emergent Assembly-Based Computation in a Model of the Brain

Max Dabagia
ACO PhD student, School of Computer Science

Date: April 8, 2025
Time: 12:00-1:30 pm
Location: Klaus 1202
Zoom: https://gatech.zoom.us/j/97252118317?pwd=NNwQrwBKX6SpJQcrIDkjQkDvSCTOHD.1

Advisor:
Dr. Santosh Vempala, School of Computer Science, Georgia Institute of Technology

Committee:
Dr. Eva Dyer, Department of Biomedical Engineering, Georgia Institute of Technology
Dr. Jack Gallant, Department of Psychology, University of California Berkeley
Dr. Christos Papadimitriou, Department of Computer Science, Columbia University
Dr. Will Perkins, School of Computer Science, Georgia Institute of Technology
Dr. Santosh Vempala, School of Computer Science, Georgia Institute of Technology

Reader:
Dr. Eva Dyer, Department of Biomedical Engineering, Georgia Institute of Technology

Link to thesis draft:
https://raw.githubusercontent.com/mdabagia/mdabagia.github.io/master/The...

Abstract:
The brain is a phenomenally complex physical system. Understanding how it gives rise to cognition remains a fundamental challenge for science, as well as a wellspring of inspiration for artificial intelligence. Despite enormous progress since the genesis of modern neuroscience more than a century ago in characterizing individual neurons, and in correlating macroscopic regions of the brain with various aspects of information processing, the intermediate steps -- wherein billions of neurons somehow organize themselves to implement the computations comprising intelligent behavior -- remain essentially mysterious. Thus, the essence of understanding the brain is unraveling the mechanisms by which this self-organization occurs. The starting point of this thesis is a model of the brain, called NEMO, which arises from a few basic ingredients: discretely-firing point neurons, connected to each other randomly, controlled by local inhibition, and equipped with Hebbian plasticity (for learning). These ingredients are abstractions of biological mechanisms well known to experimental neuroscience, comprising in some sense a "minimal" model of the brain. Under the model's dynamics, assemblies -- sets of simultaneously-firing neurons hypothesized to be the basic unit of representation in the brain -- emerge in response to stimuli. We explore several simple algorithms implementable under these dynamics, including learning linear classifiers, memorizing sequences, memorizing and simulating finite automata, and estimating and sampling from simple statistical models, both in theory and in simulation. Taken together, these results form the foundation of a novel theory of how the brain could compute, and lend theoretical support to the hypothesis that assemblies are fundamental to representation in the brain.
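The ingredients named in the abstract (random connectivity, local inhibition via k-winners-take-all, and multiplicative Hebbian plasticity) can be sketched as a toy simulation. This is an illustrative sketch under assumed parameter values (n, k, p, beta), not the thesis's actual model code:

```python
import numpy as np

# Assumed toy parameters: n neurons per area, k winners under local
# inhibition, edge probability p, Hebbian plasticity rate beta.
rng = np.random.default_rng(0)
n, k, p, beta = 1000, 50, 0.1, 0.1

# Random binary synaptic weights: feedforward from a stimulus area into
# a target area, and recurrent weights within the target area.
W_stim = (rng.random((n, n)) < p).astype(float)
W_rec = (rng.random((n, n)) < p).astype(float)

# A fixed stimulus: the same k stimulus neurons fire on every round.
stimulus = np.zeros(n)
stimulus[:k] = 1.0

active = np.zeros(n)
for t in range(20):
    # Total synaptic input: feedforward from the stimulus plus recurrent
    # input from the previously active neurons.
    inputs = W_stim @ stimulus + W_rec @ active
    # Local inhibition (k-winners-take-all): only the k neurons receiving
    # the largest input fire in this round.
    winners = np.argsort(inputs)[-k:]
    new_active = np.zeros(n)
    new_active[winners] = 1.0
    # Hebbian plasticity: multiplicatively strengthen synapses from firing
    # presynaptic neurons to firing postsynaptic neurons.
    W_stim[np.ix_(winners, np.flatnonzero(stimulus))] *= 1 + beta
    W_rec[np.ix_(winners, np.flatnonzero(active))] *= 1 + beta
    active = new_active

# With repeated presentation, the same k winners tend to recur: this
# stable set of co-firing neurons is the emergent assembly.
assembly = np.flatnonzero(active)
```

The k-winners-take-all step abstracts local inhibition (interneurons suppressing all but the most strongly driven cells), and the multiplicative update abstracts Hebbian strengthening; both are among the mechanisms the abstract lists as the model's basic ingredients.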