The Autoregressive Generative Model of Cognition 

My current research aims to test a novel theory of cognition based on the principle of autoregression, where cognitive processes emerge as sequences continuously generated from immediately preceding contexts. In this model, cognition is fundamentally dynamic and generative: each mental event—be it linguistic, perceptual, motoric, or imagistic—is not retrieved from a static memory store but is instead actively generated from residual activation of recent cognitive states.
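
As a toy illustration of this generative principle, the sketch below rolls a sequence of state vectors forward, with each new state computed only from the few states that immediately precede it. Everything here (the state dimension, the context length, the random weights) is an arbitrary placeholder rather than a model of any particular cognitive process.

```python
# Toy sketch of autoregressive state generation: each new state is produced
# from the residual trace of the few states immediately before it, with no
# separate memory store being consulted. All sizes and weights are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 8   # dimensionality of a toy "cognitive state"
CONTEXT = 3     # how many recent states condition the next one
STEPS = 20      # length of the generated sequence

# A fixed random mapping standing in for trained generative regularities.
W = rng.normal(scale=0.3, size=(CONTEXT * STATE_DIM, STATE_DIM))

def next_state(recent_states):
    """Generate the next state from the concatenated recent context."""
    context = np.concatenate(recent_states)  # residual activation of recent states
    return np.tanh(context @ W + rng.normal(scale=0.05, size=STATE_DIM))

# Seed with a few initial states, then let the sequence generate itself.
states = [rng.normal(size=STATE_DIM) for _ in range(CONTEXT)]
for _ in range(STEPS):
    states.append(next_state(states[-CONTEXT:]))

print(f"Generated {STEPS} states from a rolling {CONTEXT}-state context.")
```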

This perspective contrasts sharply with traditional theories, which view memory and knowledge as stored representations accessed through retrieval processes. Rather than conceiving the brain as a repository of discrete facts or images waiting to be recalled, the autoregressive framework suggests that what appears to be retrieval is actually the generation of new cognitive states conditioned by prior experiences. This generative mechanism extends beyond language to encompass imagery, motor planning, and other forms of cognition, suggesting a unified underlying computational principle that shapes all mental activity.

Furthermore, while my theory aligns superficially with predictive coding approaches (both emphasize anticipation and generative processes), it differs crucially in that it minimizes reliance on accurate representation or prediction of external sensory input. Predictive coding centers on minimizing sensory prediction error; the autoregressive theory, in contrast, posits that cognition is primarily an internal generative process, shaped by trained statistical regularities rather than by explicit representations of external states.
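
To make the contrast concrete, the schematic sketch below places the two update rules side by side in a single dimension: a predictive-coding-style step corrects an internal estimate toward an external signal, while an autoregressive step generates the next value from recent internal values alone. Both functions are deliberate simplifications rather than implementations of either framework.

```python
# Schematic one-dimensional contrast between the two update rules.
# Neither function implements either framework; they only isolate where
# the external signal enters (or does not enter) the update.
import numpy as np

rng = np.random.default_rng(1)

def predictive_coding_step(state, sensory_input, lr=0.2):
    """Nudge the internal state toward the external input by reducing
    the sensory prediction error (schematic predictive-coding step)."""
    error = sensory_input - state
    return state + lr * error

def autoregressive_step(recent_states, weights):
    """Generate the next value from recent internal values alone;
    no external error term appears in the update (schematic step)."""
    return float(np.tanh(np.dot(weights, recent_states)))

weights = rng.normal(scale=0.5, size=3)   # stands in for trained regularities
pc_state, ar_history = 0.0, [0.1, -0.2, 0.3]

for t in range(5):
    sensory = np.sin(t)                                   # some external signal
    pc_state = predictive_coding_step(pc_state, sensory)  # tracks the signal
    ar_history.append(autoregressive_step(ar_history[-3:], weights))  # ignores it

print("predictive-coding state:", round(float(pc_state), 3))
print("autoregressive state:   ", round(ar_history[-1], 3))
```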

To test this theory, my students and I are conducting computational and behavioral experiments. Computationally, my lab uses large language models and neural network simulations to demonstrate the inherently autoregressive structure of linguistic and cognitive sequences, examining how these networks capture temporal dependencies without explicit memory buffers. Behaviorally, we are running memory experiments designed to test my hypothesis that short-term memory functions not as a dedicated storage buffer but as residual activation of recent cognitive states, with long-term memory representing trained generative capabilities.
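
As a schematic version of this kind of demonstration, the sketch below steps an off-the-shelf causal language model forward one token at a time, so that each new token is conditioned only on the tokens already produced. The specific model (GPT-2) and the Hugging Face tooling are incidental choices for illustration, not a description of the lab's actual experiments.

```python
# Minimal token-by-token generation with an off-the-shelf causal language
# model: every new token is conditioned only on the tokens generated so far,
# with no buffer beyond the growing context itself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The idea that memory is generation rather than retrieval"
ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits[:, -1, :]                  # next-token distribution
        next_id = torch.argmax(logits, dim=-1, keepdim=True)  # greedy choice
        ids = torch.cat([ids, next_id], dim=-1)               # context grows; nothing else is stored

print(tokenizer.decode(ids[0]))
```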

This approach has potentially profound implications, offering a new way to understand perception, memory, and action: not as processes of representation and retrieval, but as continuous generative acts driven by past context.
