University of Oxford
https://open.spotify.com/show/7Iz40KtVRxYuAXQKD7j1dM
Foresight Institute’s Neurotech Group - 2026.02
Abstract: Modern AI has converged on a surprisingly general principle: learn powerful priors by predicting what comes next. In this talk, I argue that the same scaling-first recipe can be applied to brain recordings to move from task-specific decoders toward brain foundation models, specifically of the causal, generative kind. I’ll present a framework that treats high-bandwidth electrophysiology (with MEG as a motivating case) as a token stream: first, learn an efficient tokenizer that globally compresses spatiotemporal activity into discrete tokens; then, train a causal long-context sequence model with a standard next-token objective. Conditioning is implicit: instead of adding subject or task labels or bespoke decoding heads, a snippet of real brain activity serves as the “prompt,” and the model learns to continue it, encouraging specificity to session, subject, and context while remaining architecture-agnostic and compatible with frontier multimodal backbones. I’ll close by discussing what “on-manifold” long-horizon neural generation should mean, why evaluation must probe drift and prompt-specificity (not just reconstruction), and how brain tokens could ultimately be interleaved with language, vision, and action tokens as a route to grounding and more efficient reasoning.
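The two-stage recipe in the abstract (tokenize brain activity, then train with a next-token objective) can be sketched minimally. Everything below is an illustrative assumption, not the talk's implementation: the window features, the nearest-centroid quantization standing in for a learned tokenizer, and the context length are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def tokenize(windows, codebook):
    """Map each feature window to the id of its nearest codebook vector,
    yielding a stream of discrete 'brain tokens'.
    windows: (n_windows, n_features); codebook: (n_codes, n_features)."""
    dists = ((windows[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)  # shape (n_windows,)

def next_token_pairs(tokens, context):
    """Causal training pairs: each run of `context` tokens is the 'prompt'
    and the immediately following token is the prediction target."""
    return [(tokens[i:i + context], tokens[i + context])
            for i in range(len(tokens) - context)]

# Toy stand-in for preprocessed MEG: 100 windows of 8-dim features,
# quantized against a 16-entry codebook (hypothetical sizes).
windows = rng.normal(size=(100, 8))
codebook = rng.normal(size=(16, 8))
tokens = tokenize(windows, codebook)
pairs = next_token_pairs(tokens, context=10)
```

Conditioning is implicit in this setup: the model is never told which session or subject a sequence came from; the prompt tokens themselves carry that information, and the next-token objective rewards continuations consistent with them.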
https://www.youtube.com/watch?v=iHVVpmylOPE
Scaling Next-Brain-Token Prediction.pdf
BCI Meeting 2023
https://ricsinaruto.github.io/docs/bci_slides.pdf
Biomag 2022, OHBM 2022, Cortico 2022