About This Role
About the team
The AI Deployment Engineering team is responsible for ensuring the safe and effective deployment of OpenAI technologies across developers and enterprises. We act as trusted technical partners, helping customers move from experimentation to production by designing, implementing, and scaling real-world AI applications.
The Codex Deployment Engineering team focuses on enabling organizations to adopt next-generation AI coding tools and intelligent automations throughout their software development lifecycle. We partner directly with engineering teams to integrate Codex into their workflows — from early experimentation and pilot design through enterprise-scale production rollout — ensuring AI-enhanced developer experiences are reliable, secure, and deeply embedded within customer environments.
About the role
We are seeking an experienced technical leader to manage a team of AI Deployment Engineers responsible for driving successful Codex adoption across strategic customers. In this role, you will lead engineers who work hands-on with customer development teams to design AI-enabled workflows, deploy production-ready solutions, and establish scalable patterns for AI-powered software development.
As a manager, you will define how deployment engagements operate at scale — setting technical strategy, coaching engineers, and ensuring consistent execution across customer implementations. You will serve as both a people leader and technical advisor, partnering closely with Sales, Product, Research, and Engineering teams to translate customer needs into deployment best practices and product insights.
Success in this role will be measured by production deployments, sustained developer adoption, and the creation of repeatable deployment patterns that accelerate Codex usage across organizations.
This role is open in our San Francisco office. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.
In this role, you will:
Lead, hire, and mentor a high-performing team of AI Deployment Engineers supporting Codex customers across strategic accounts.
Own the operating model and engagement strategy for Codex deployment efforts, ensuring customers successfully move from pilot to production adoption.
Guide teams in designing and implementing AI-enhanced development workflows, automations, and scalable deployment architectures.
Act as the senior technical escalation point for complex customer implementations and deployment challenges.
Partner with Sales, Product, Research, and Applied Engineering teams to align customer outcomes with product direction and roadmap priorities.
Help establish repeatable deployment playbooks, technical patterns, and best practices that enable scaled adoption of AI coding tools.
Coach engineers to operate as trusted advisors to engineering leadership and executive stakeholders.
Synthesize insights from customer deployments and translate them into actionable feedback for internal teams.
Champion safe, reliable, and effective adoption of AI-powered development workflows across industries.
You’ll thrive in this role if you:
Have 8+ years of experience in technical customer-facing roles such as deployment engineering, solutions architecture, technical consulting, or post-sales engineering.
Have 2+ years of experience leading technical teams, including hiring, mentoring, and developing engineers.
Have experience deploying generative AI, developer platforms, or cloud-based software solutions into production environments.
Possess hands\-on technical experience with software development systems and programming languages such as Python or JavaScript.
Understand modern software development lifecycles and how AI tooling transforms developer productivity and workflows.
Are an effective communicator who can translate complex technical and business topics to both engineering teams and executive stakeholders.
Thrive in ambiguous, fast\-moving environments and enjoy building new operating models and teams from first principles.
Demonstrate strong ownership, humility, and a commitment to helping both customers and teammates succeed.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.
For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
Compensation
$251K – $335K + equity
Salary Context
This $251K–$335K range is above the 75th percentile for AI/ML Engineer roles in our dataset (median: $100K across 15,465 roles with salary data).
Role Details
About This Role
AI/ML Engineers build and deploy machine learning models in production. They work across the full ML lifecycle: data pipelines, model training, evaluation, and serving infrastructure. The role has evolved significantly over the past two years. Where ML Engineers once spent most of their time on model architecture, the job now tilts heavily toward inference optimization, cost management, and integrating LLM capabilities into existing systems. Companies want engineers who can ship production systems, and the experimenter-only role is fading fast.
Day-to-day, you're writing training pipelines, debugging data quality issues, setting up evaluation frameworks, and figuring out why your model performs differently in staging than it did on your dev set. The best ML engineers are obsessive about reproducibility and measurement. They instrument everything. They know that a model is only as good as the data feeding it and the infrastructure serving it.
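The "instrument everything" habit the paragraph describes can be sketched in a few lines. This is illustrative only — the function names and the accuracy metric are stand-ins, not any particular framework:

```python
import random

def set_seed(seed: int) -> None:
    # Seed every source of randomness you use; a real stack would also
    # seed numpy, torch, and any data shufflers for reproducible runs.
    random.seed(seed)

def evaluate(predictions: list, labels: list) -> dict:
    # A minimal metric report with explicit counts, so two runs of the
    # same evaluation are directly comparable.
    assert len(predictions) == len(labels), "prediction/label length mismatch"
    correct = sum(p == y for p, y in zip(predictions, labels))
    return {"n": len(labels), "correct": correct, "accuracy": correct / len(labels)}

set_seed(42)
report = evaluate([1, 0, 1, 1], [1, 0, 0, 1])
print(report)  # {'n': 4, 'correct': 3, 'accuracy': 0.75}
```

The point is less the metric than the discipline: fixed seeds, explicit sample counts, and a report you can diff between staging and your dev set.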
Across the 26,159 AI roles we're tracking, AI/ML Engineer positions make up 91% of the market. At OpenAI, this role fits into their broader AI and engineering organization.
Demand for AI/ML Engineers has been strong and consistent. Unlike some AI roles that spike with hype cycles, ML engineering is a foundational need. Every company deploying AI models needs people who can keep them running, and the gap between research prototypes and production systems keeps growing.
What the Work Looks Like
A typical week might include: debugging a data pipeline that's silently dropping 3% of training examples, running A/B tests on a new model version, writing documentation for a feature flag system that lets you roll back model deployments, and reviewing a junior engineer's PR for a new evaluation metric. Meetings tend to be cross-functional since ML touches product, engineering, and data teams.
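The "silently dropping 3% of training examples" failure mode is usually caught by reconciling record counts between pipeline stages. A minimal sketch, with made-up stage names and a 1% threshold chosen for illustration:

```python
def check_pipeline_counts(stage_counts: dict[str, int], max_drop: float = 0.01) -> list[str]:
    # Compare record counts between consecutive pipeline stages and flag
    # any transition that loses more than max_drop of its input rows.
    warnings = []
    stages = list(stage_counts.items())
    for (prev_name, prev_n), (name, n) in zip(stages, stages[1:]):
        dropped = (prev_n - n) / prev_n if prev_n else 0.0
        if dropped > max_drop:
            warnings.append(f"{prev_name} -> {name}: dropped {dropped:.1%}")
    return warnings

# Hypothetical counts: dedup silently discards ~3% of rows.
counts = {"raw": 100_000, "parsed": 99_900, "deduped": 96_800}
alerts = check_pipeline_counts(counts)
```

Emitting these counts as metrics (rather than logs) is what turns a silent drop into a page.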
Skills Required
Python and PyTorch dominate the requirements. Most roles expect experience with cloud platforms (AWS, GCP, or Azure) and familiarity with ML frameworks like TensorFlow or JAX. RAG (Retrieval-Augmented Generation) has become a top-3 skill requirement as companies integrate LLMs into their products. Docker and Kubernetes show up in about a third of postings, reflecting the production focus of the role.
Beyond the core stack, employers increasingly want experience with experiment tracking tools (MLflow, Weights & Biases), feature stores, and vector databases. Fine-tuning experience is valuable but less common than you'd think from reading Twitter. Most production LLM work is RAG and prompt engineering, not fine-tuning. If you have both, you're in a strong position.
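Since most production LLM work is retrieval plus prompting, here is a deliberately tiny RAG sketch — bag-of-words cosine similarity standing in for the dense embeddings and vector database a real system would use:

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity over bag-of-words term counts.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and return the top k.
    # Production systems swap this for embeddings + a vector store.
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())), reverse=True)
    return ranked[:k]

docs = [
    "reset your password from the account settings page",
    "invoices are emailed on the first of each month",
]
context = retrieve("how do I reset my password", docs)
prompt = f"Answer using only this context: {context[0]}"
```

The retrieved passage is stuffed into the prompt, which is the whole trick: grounding the model in your data without touching its weights.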
Companies that are serious about AI/ML hiring tend to post specific infrastructure details in the job description: the frameworks they use, their model serving stack, their data pipeline tools. Vague postings that just say 'ML experience required' without specifics are often companies that haven't figured out what they need yet.
Compensation Benchmarks
AI/ML Engineer roles pay a median of $166,983 based on 13,781 positions with disclosed compensation. Mid-level AI roles across all categories have a median of $131,300. This role's midpoint ($293K) sits 75% above the AI/ML Engineer median. Disclosed range: $251K to $335K.
Across all AI roles, the market median is $184,000. Top-quartile compensation starts at $244,000. The 90th percentile reaches $309,400. For comparison, the highest-paying categories include AI Engineering Manager ($293,500) and AI Architect ($292,900). By seniority level: Entry: $76,880; Mid: $131,300; Senior: $227,400; Director: $244,288; VP: $234,620.
OpenAI AI Hiring
OpenAI has 11 open AI roles right now, all in the AI/ML Engineer category. Positions span San Francisco, CA; Washington, DC; and New York, NY. Compensation range: $230K – $385K.
Location Context
AI roles in San Francisco pay a median of $244,000 across 1,059 tracked positions. That's 33% above the national median.
Career Path
Common paths into AI/ML Engineer roles include Data Scientist, Software Engineer, Research Engineer.
From here, career progression typically leads toward ML Architect, AI Engineering Manager, Principal ML Engineer.
The fastest path into ML engineering is through software engineering with a self-directed ML education. A CS degree helps, but production engineering skills matter more than academic credentials. Build something that works, deploy it, and measure it. That portfolio project is worth more than a Coursera certificate. For career growth, the fork comes around the senior level: go deep on technical complexity (staff/principal track) or move into managing ML teams.
What to Expect in Interviews
Expect system design questions around ML pipelines: how you'd build a training pipeline for a specific use case, handle data drift, or design A/B testing infrastructure for model deployments. Coding rounds typically involve Python, with emphasis on data manipulation (pandas, numpy) and algorithm implementation. Take-home assignments often ask you to build an end-to-end ML pipeline from raw data to deployed model.
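One of those drift questions can be answered concretely with the Population Stability Index. A minimal sketch over pre-binned feature distributions; the 0.2 threshold is a common rule of thumb, not a standard:

```python
from math import log

def psi(expected: list[float], actual: list[float], eps: float = 1e-6) -> float:
    # Population Stability Index over two pre-binned distributions
    # (fractions per bin, summing to 1). Rule of thumb: > 0.2 suggests
    # meaningful drift between training-time and live data.
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)  # avoid log(0) on empty bins
        total += (a - e) * log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]  # feature distribution at training time
live     = [0.10, 0.20, 0.30, 0.40]  # same bins, observed in production

drift = psi(baseline, live)  # ~0.23 here, over the 0.2 threshold
```

Being able to walk from "monitor for drift" to a concrete statistic and threshold is exactly the kind of depth these rounds probe for.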
AI Hiring Overview
The AI job market has 26,159 open positions tracked in our dataset. By seniority: 2,416 entry-level, 16,247 mid-level, 5,153 senior, and 2,343 leadership roles (Director, VP, C-Level). Remote roles make up 7% of the market (1,863 positions). The remaining 24,200 roles require on-site or hybrid attendance.
The market median for AI roles is $184,000. Top-quartile compensation starts at $244,000. The 90th percentile reaches $309,400. Highest-paying categories: AI Engineering Manager ($293,500 median, 28 roles); AI Architect ($292,900 median, 108 roles); AI Safety ($274,200 median, 19 roles).
The AI Job Market Today
The AI job market spans 26,159 open positions across 15 role categories. The largest categories by volume: AI/ML Engineer (23,752), AI Software Engineer (598), AI Product Manager (594). These three account for the majority of open positions, though smaller categories often have higher per-role compensation because of specialized skill requirements.
The seniority mix tells a story about where AI teams are in their maturity. Entry-level roles (2,416) are outnumbered by mid-level (16,247) and senior (5,153) positions, reflecting that most companies are past the 'build a team from scratch' phase and need experienced engineers who can ship production systems. Leadership roles (Director, VP, C-Level) total 2,343 positions, representing the bottleneck between technical execution and organizational strategy.
Remote work availability sits at 7% of all AI roles (1,863 positions), with 24,200 requiring on-site or hybrid attendance. The remote share has stabilized after the post-pandemic correction. Senior and specialized roles (Research Scientist, ML Architect) are more likely to be remote-eligible than entry-level positions, partly because experienced hires have more negotiating power and partly because these roles require less hands-on mentorship.
AI compensation is structured in clear tiers. The market median sits at $184,000. Top-quartile roles start at $244,000, and the 90th percentile reaches $309,400. These figures include base salary with disclosed compensation. Total compensation (including equity, bonuses, and sign-on) runs 20-40% higher at companies that offer those components.
Category matters for compensation. AI Engineering Manager roles lead at $293,500 median, while Prompt Engineer roles sit at $122,200. The spread between highest and lowest-paying categories reflects the premium on specialized technical skills versus broader analytical roles.
The most in-demand skills across all AI postings: RAG (16,749 postings), AWS (8,932), Rust (7,660), Python (3,815), Azure (2,678), GCP (2,247), Prompt Engineering (1,469), OpenAI (1,269). RAG tops the list, and the newer entrants alongside it (vector databases, LLM APIs) reflect the shift from traditional ML toward generative AI applications; Python and cloud platform experience (AWS, GCP, Azure) remain core requirements regardless of category.