
OpenAI Opens Dream Job With Million-Dollar Salary To Lead AI Risk Management, Accepts Public Applications And Promises To Turn An Average Professional Into A Global Guardian Of Modern Technology

Written by Bruno Teles
Published on 29/12/2025 at 21:00
Image caption: OpenAI's dream job with a million-dollar salary seeks a Preparation Director to handle AI risks and strengthen AI safety in advanced models.

New Dream Job With Million-Dollar Salary at OpenAI Offers US$ 555,000 Plus Stock Options for a Preparation Director Responsible for Mapping AI Risks, Stress-Testing Extreme Models, Defining AI Safety, and Influencing Launch Decisions for Products Used by Millions of People Worldwide Every Day

In 2025, OpenAI published a dream job with a million-dollar salary for a new Preparation Director, offering annual compensation of US$ 555,000 plus equity to whoever is willing to lead the technical front line against serious risks from the company's most advanced models. The application is public, aimed at professionals willing to turn anxiety about extreme scenarios into daily work.

The role follows the creation of the Preparedness team in 2023 and the departure of its former head in 2024, when part of the safety leadership was reassigned and internal criticism exposed that safety processes had been put on the back burner in favor of product launches. Now the company is trying to show that it has put a name, an organizational chart, and a robust budget behind the promise of launching powerful models with damage controls defined at the engineering level.

What’s at Stake in the Dream Job With Million-Dollar Salary

In practice, the dream job with a million-dollar salary is not a cosmetic position.

The Preparation Director will be responsible for the technical strategy of OpenAI’s Preparation Framework, the internal framework that defines how the company assesses dangerous capabilities, builds threat models, and decides which mitigations are mandatory before a launch.

The role is described as a direct command over an “operationally scalable security pipeline,” that is, a production line for stress testing AI models that are treated as potential sources of harm at the scale of critical infrastructure.

The declared objective is to detect capabilities that may generate “new risks of serious harm” before they reach the public.

Million-Dollar Salary, Stock Options, and Real Scope of Power

The financial package of the dream job with a million-dollar salary is clear: US$ 555,000 in annual base salary, plus equity in the company.

In Silicon Valley terms, it signals that the position sits in the circle of central decisions, not in a peripheral compliance department.

The Preparation Director will lead technical assessments that directly feed into launch decisions for the most advanced models, with the power to influence or halt timelines in sensitive areas such as cybersecurity, biological and chemical risks, and AI self-improvement.

The structure includes continuous monitoring and application of policies embedded in the engineering flow, not just theoretical opinions written ex post facto.

Which Critical Risks Does OpenAI Want to Monitor?

In the Preparation Framework v2, the company defines “serious harm” as consequences that can reach the scale of death or serious injury to thousands of people or economic losses in the order of hundreds of billions of dollars.

With this benchmark, OpenAI restricted the so-called Monitored Categories to three main axes: biological and chemical capabilities, cybersecurity, and self-improvement of AI systems.

Other types of risks, such as broader social impacts, have been moved to the research field, while the Preparation Director will be in charge of areas where an engineering error can translate into large-scale physical or economic harm.

The described governance includes a Security Advisory Group that makes recommendations, an executive leadership that can accept or reject them, and a security committee on the board to oversee the process.

Internal Criticisms, Layoffs, and Dispute Over Priorities

The creation of this dream job with a million-dollar salary is also a response to a series of internal strains.

In 2024, the former head of preparedness, Aleksander Madry, was reassigned, and other executives absorbed the role on top of their existing duties, including the current head of recruitment, Joaquin Quiñonero Candela, who previously held the preparedness post.

During the same period, former security lead Jan Leike published direct criticisms, stating that culture and security processes had been “relegated to second place” in favor of innovative products.

The Preparation Framework itself admits, in careful legal language, that safety requirements may be adjusted if competitors release high-risk models without equivalent protections, acknowledging that safety is now part of the competition, not just an abstract ethical concern.

CEO Sam Altman describes the position as a “stressful” job and says that whoever assumes the role “will dive headfirst into the toughest parts almost immediately”.

In practice, the message is that the area must deal with models capable of exposing critical computing security vulnerabilities, while also affecting users in emotionally fragile situations.

The reference text cites journalistic investigations linking chatbot responses to tragic outcomes, as well as lawsuits alleging negligence. It also shows that OpenAI has been tightening ChatGPT protections on issues such as psychological crises, self-harm, and users' emotional dependence on the system, trying to turn problems previously treated as exceptions into product risks to be measured and mitigated.

Public Opinion: Declining Trust and Pressure for Regulation

Outside the company, the context of the dream job with a million-dollar salary is a public increasingly skeptical of AI.

A survey by the Pew Research Center shows that 50 percent of Americans are more concerned than enthusiastic about the growing role of AI, up from 37 percent in 2021.

Meanwhile, 57 percent consider the social risks of the technology to be high, while only 25 percent see high benefits.

Gallup data indicate that 80 percent of American adults want safety regulations and data protection, even if it slows development, and only 2 percent say they fully trust AI for fair and unbiased decisions, compared to 60 percent who express some degree of distrust.

In this scenario, the new Preparation Director will also serve as a public credibility signal, even if the role is designed as a strictly technical function.

Preparation as the Adult Phase of the AI Industry

In the technology ecosystem, “preparation” has become the name of the phase in which products stop being treated as experimental prototypes and begin to carry caveats, responsibilities, and potential legal consequences explicitly.

OpenAI describes the hiring as a promise that safety measures must withstand the pressure for speed, market share, and leadership in performance benchmarks.

Whether or not this promise is fulfilled, the very job description of the dream job with a million-dollar salary suggests that this time a single named professional will carry the responsibility of saying “no” when a cutting-edge model fails extreme testing.

As the public begins to view these guarantees as implicit contracts, the expectation for real consequences if something goes wrong is also increasing.

Given a role of this magnitude, with a dream job with a million-dollar salary to head preparedness against AI risks, do you think its most significant impact will be technical, inside OpenAI, or political, in how governments and society come to view AI safety?

Bruno Teles

I write about technology, innovation, and oil and gas, with daily updates on opportunities in the Brazilian market. More than 7,000 articles published on the sites CPG, Naval Porto Estaleiro, Mineração Brasil, and Obras Construção Civil. Story suggestions? Send them to brunotelesredator@gmail.com
