Who Controls the Future of AI? The Talent Fight Just Got Personal
Sutskever steps up as CEO of Safe Superintelligence while Meta raids top OpenAI researchers to fuel its new superintelligence lab. The result is a full-blown AI talent war, fragmenting research agendas and redefining who controls the future of artificial general intelligence.
In a week that underscored the growing centrality of talent over capital, Ilya Sutskever officially took the helm at Safe Superintelligence (SSI) after co-founder Daniel Gross exited the company. The move follows Meta’s failed attempt to acquire SSI and its subsequent pivot to aggressive poaching, including offers to Gross and key researchers. Sutskever, formerly OpenAI’s chief scientist, now leads SSI with a single mandate: build safe superintelligence. His appointment comes as Meta pours billions into its new AI moonshot, Meta Superintelligence Labs, built around newly hired Scale AI founder Alexandr Wang and former GitHub CEO Nat Friedman.
Meta’s hiring spree is already pulling top talent away from OpenAI. The company recently onboarded reasoning lead Trapit Bansal and four senior researchers—Shengjia Zhao, Jiahui Yu, Shuchao Bi, and Hongyu Ren—each with deep expertise in reasoning and multimodal AI. These hires signal a clear ambition: to move beyond AI as an assistant toward AI that reasons, infers, and generalizes. Meta’s new lab is positioned to challenge OpenAI’s dominance in foundation models and to compete directly with Anthropic and Google DeepMind in the superintelligence race.
What sets this phase apart isn’t just the volume of hires; it’s the sheer size of the offers. Reports point to signing bonuses well into the tens of millions, with some packages speculated to reach $100 million. Meta’s roughly $14 billion investment in Scale AI served not just to buy technology, but to anchor its new leadership team. For SSI and other frontier labs, this creates immense pressure to offer more than pay: researchers are increasingly choosing between equity-rich packages and mission-aligned environments. In short, talent is becoming the scarcest input in the AI economy.
These movements are fragmenting the research landscape. Sutskever’s SSI is carving out a niche with a strict safety-first agenda. Meta is aligning its efforts around general reasoning and multimodal integration. Meanwhile, new labs such as Thinking Machines, founded by former OpenAI CTO Mira Murati, are focusing on transparency and open collaboration. Each direction represents a different bet on what the next paradigm of AI will require—and who will define it.
Beyond the labs, the implications are geopolitical. Governments are struggling to keep top researchers from migrating to privately funded moonshots. The U.S. AI ecosystem is concentrating in fewer, more elite labs, making global coordination harder. As AI capabilities inch closer to systems that could reason and act autonomously, the question isn’t just who builds them. It’s whose values shape them—and which researchers stay to see it through.