OpenAI is hiring a senior role to help the company prepare for the risks that come with artificial intelligence getting more powerful. The new “Head of Preparedness” will lead a team that studies worst-case scenarios.
They will test safety measures and try to ensure that the company runs powerful systems responsibly. The job reflects growing concern inside the AI industry about harms ranging from mental-health impacts to misuse of cyber and biological tools.
OpenAI is hiring for ‘Head of Preparedness’ as AI evolves
OpenAI says the role will “lead the preparedness team,” which the company formed in 2023. The team focuses on “catastrophic risks” and the safety of systems that can improve themselves. The company has framed the job as urgent and demanding.
OpenAI CEO Sam Altman wrote on X, “We are hiring a Head of Preparedness. This is a critical role at an important time.” He added that models are “now capable of many great things, but they are also starting to present some real challenges.”
The team will measure growing AI capabilities and determine how bad actors could abuse them. Altman called out specific areas of concern, such as models’ effects on mental health and their ability to find serious computer-security flaws.
“The potential impact of models on mental health was something we saw a preview of in 2025; we are just now seeing models get so good at computer security that they are beginning to find critical vulnerabilities,” he wrote. OpenAI will ask the Head of Preparedness to balance enabling defenders while stopping attackers from using the same tools.
OpenAI is offering a large compensation package for the role, roughly $550,000 a year plus equity, reflecting how senior and specialized the job is. The company says the post will be stressful and will require jumping “into the deep end pretty much immediately.”
