MATS Fellows:
Yashvardhan Sharma, Jakub Kryś
Authors:
Jakub Kryś, Yashvardhan Sharma, Janet Egan
Citations:
Abstract:
Advances in low-communication training algorithms are enabling a shift from centralised model training to compute setups that are either distributed across multiple clusters or decentralised via community-driven contributions. This paper distinguishes these two scenarios - distributed and decentralised training - which are little understood and often conflated in policy discourse. We discuss how they could impact technical AI governance through an increased risk of compute structuring, capability proliferation, and the erosion of detectability and shutdownability. While these trends foreshadow a possible new paradigm that could challenge key assumptions of compute governance, we emphasise that certain policy levers, like export controls, remain relevant. We also acknowledge potential benefits of decentralised AI, including privacy-preserving training runs that could unlock access to more data, and the mitigation of harmful power concentration. Our goal is to support more precise policymaking around compute, capability proliferation, and decentralised AI development.
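For readers unfamiliar with the "low-communication training algorithms" the abstract refers to, the sketch below illustrates one common idea: workers (separate clusters or volunteer nodes) take many local gradient steps and only occasionally synchronise by averaging parameters, so inter-site communication is infrequent. This is a minimal toy example on synthetic data; the worker counts, step counts, and learning rate are hypothetical and not drawn from the paper.

```python
# Minimal sketch of "local SGD"-style low-communication training:
# each worker runs many local steps; parameters are averaged only once
# per round, so communication between sites is rare.
# All names and hyperparameters below are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression task shared by all workers.
true_w = np.array([2.0, -3.0])

def sample_batch(n=32):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

def grad(w, X, y):
    # Gradient of the mean-squared-error loss 0.5 * ||Xw - y||^2 / n.
    return X.T @ (X @ w - y) / len(y)

NUM_WORKERS = 4    # e.g. separate clusters or community-contributed nodes
LOCAL_STEPS = 50   # steps between synchronisations (the low-communication knob)
ROUNDS = 20
LR = 0.05

global_w = np.zeros(2)
for _ in range(ROUNDS):
    local_ws = []
    for _ in range(NUM_WORKERS):
        w = global_w.copy()
        for _ in range(LOCAL_STEPS):   # no communication inside this loop
            X, y = sample_batch()
            w -= LR * grad(w, X, y)
        local_ws.append(w)
    # One communication per round: average the workers' parameters.
    global_w = np.mean(local_ws, axis=0)

print("recovered weights:", global_w)  # approaches [2, -3]
```

Raising LOCAL_STEPS reduces how often parameters cross site boundaries, which is what makes training feasible across multiple clusters or loosely connected contributors.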
What Happens When Superhuman AIs Compete for Control?
Authors:
Steven Veld
Date:
January 11, 2026
Citations:
0
AI Futures Model: Timelines & Takeoff
Authors:
Brendan Halstead, Alex Kastner
Date:
December 30, 2025
Citations:
0
The MATS Program is an independent research and educational initiative connecting emerging researchers with mentors in AI alignment, governance, and security.
Each MATS cohort runs for 12 weeks in Berkeley, California, followed by an optional 6–12-month extension in London for selected scholars.