Interested in this MLOps Engineer role at Openkyber?
Apply Now →
About This Role
Sr Engr, DevOps (Sr. Observability Engineer)
W2 Contract Pay Rate: $62.11 - $72.11 per hour
Location: Carrollton, TX - Onsite Role

Duties and Responsibilities:
- Architect, build, and scale comprehensive monitoring solutions using the New Relic platform, including APM, Infrastructure, Logs, Synthetics, and custom instrumentation (NRQL).
- Develop, manage, and maintain observability configurations - including alerts, dashboards, and synthetic checks - using Infrastructure as Code (IaC) tools such as Terraform/OpenTofu.
- Create and refine insightful dashboards and actionable alerting policies in New Relic to provide real-time visibility into infrastructure and application health.
- Act as a subject matter expert on observability, guiding teams on best practices for logging, metrics, and tracing to improve system reliability and reduce mean time to resolution (MTTR).
- Analyze performance data and telemetry to identify bottlenecks, troubleshoot production issues, and drive performance optimization efforts across the stack.

Requirements and Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- 7+ years of experience in a Cloud Engineering role (Observability, DevOps, SRE, etc.).
- 3+ years of hands-on experience with the New Relic platform, including deep knowledge of Dashboards, NRQL, APM, and setting up effective alerting.
- 3+ years of experience managing infrastructure and configurations with IaC tools like Terraform/OpenTofu (preferred), AWS CDK, CloudFormation, Chef, or Ansible.
- Extensive hands-on experience with major cloud providers such as AWS (preferred), Google Cloud Platform, or Azure.
- Proficiency in a scripting language such as Python, Go, or Bash for automation and tooling.
- Strong understanding of cloud architecture, networking principles, Windows/Linux server administration, microservices in a SaaS context, containerization (Docker, Kubernetes), and CI/CD principles.
- Deep understanding of security best practices and their implementation in cloud infrastructure and CI/CD pipelines.
- Experience working in highly regulated environments.
- Excellent problem-solving and troubleshooting skills.
- Strong communication and collaboration skills.

OpenKyber, Inc. is not able to sponsor any candidates at this time. Additionally, candidates for this position must qualify as a W2 candidate. OpenKyber, Inc. may collect your personal information during the application process. Please reference OpenKyber, Inc.'s CCPA Privacy Policy.
For applications and inquiries, contact: hirings@openkyber.com
Salary Context
This $128K-$149K range is below the median for MLOps Engineer roles in our dataset (median: $156K across 34 roles with salary data).
View full MLOps Engineer salary data →
Role Details
MLOps Engineers build the infrastructure that keeps ML models running in production. They own CI/CD pipelines for model deployment, monitoring for data drift and model degradation, and the tooling that lets data scientists ship faster. If ML Engineers build the models, MLOps Engineers build the roads those models travel on.
The job is fundamentally about reliability and velocity. Data scientists want to iterate fast. Product teams want stable predictions. Your job is to make both happen simultaneously. That means building deployment pipelines that catch regressions before they hit production, monitoring systems that alert on data drift before it degrades model performance, and self-service tooling that lets data scientists deploy without filing a ticket.
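Drift monitoring is concrete enough to sketch. One common approach is the Population Stability Index (PSI), which compares live feature values against the training distribution and fires an alert when the score crosses a threshold. The function and thresholds below are an illustrative sketch, not any particular vendor's API:

```python
import math
from collections import Counter

def psi(expected, actual, bins=10):
    """Population Stability Index between training data and live traffic.

    Bins are derived from the expected (training) distribution; counts are
    Laplace-smoothed so empty bins don't blow up the log term.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def bucket(x):
        # Clamp out-of-range live values into the edge bins.
        return max(0, min(int((x - lo) / width), bins - 1))

    e_counts = Counter(bucket(x) for x in expected)
    a_counts = Counter(bucket(x) for x in actual)
    score = 0.0
    for b in range(bins):
        e = (e_counts.get(b, 0) + 1) / (len(expected) + bins)
        a = (a_counts.get(b, 0) + 1) / (len(actual) + bins)
        score += (a - e) * math.log(a / e)
    return score
```

A common rule of thumb: PSI below 0.1 means the feature is stable, 0.1-0.25 warrants investigation, and above 0.25 is the kind of shift worth paging on before model quality visibly degrades.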
Across the 26,159 AI roles we're tracking, MLOps Engineer positions make up less than 1% of the market. At Openkyber, this role fits into their broader AI and engineering organization.
MLOps demand tracks closely with production ML adoption. As more companies move models from notebooks to production, the need for MLOps grows. The role is well-established at large tech companies and growing fast at mid-stage startups that are hitting the 'our models work in notebooks but break in production' phase.
What the Work Looks Like
A typical week involves: debugging a model deployment that's serving stale predictions, building a new monitoring dashboard for a feature team, writing Terraform for GPU-enabled inference clusters, reviewing pull requests for the ML platform's CI/CD pipeline, and meeting with data scientists to understand their pain points. You're the bridge between ML and infrastructure.
Skills Required
Kubernetes, Docker, and cloud infrastructure are baseline. Most roles want experience with ML-specific tooling: MLflow, Kubeflow, Weights & Biases, or similar. Strong DevOps fundamentals matter more than ML theory. You need to understand model serving (TorchServe, Triton, vLLM), monitoring (Prometheus, Grafana), and infrastructure-as-code (Terraform, Pulumi).
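For a flavor of the monitoring side: Prometheus scrapes a plain-text exposition format, and a serving layer typically exposes per-model metrics in it. The helper below renders that format using only the standard library; the metric name and labels are invented for illustration:

```python
def prometheus_lines(name, help_text, samples):
    """Render gauge samples in the Prometheus text exposition format.

    samples: list of (labels_dict, value) pairs, e.g. one per model version.
    """
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} gauge"]
    for labels, value in samples:
        # Sort labels for a stable, scrape-friendly output.
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines)

# Hypothetical example: a latency gauge labeled by model and version.
text = prometheus_lines(
    "model_p95_latency_seconds",
    "p95 inference latency by model version",
    [({"model": "ranker", "version": "v3"}, 0.042)],
)
```

In practice you would use an official Prometheus client library rather than hand-rolling this, but knowing the format underneath is exactly the kind of fundamentals these roles screen for.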
GPU infrastructure knowledge is increasingly valuable as LLM inference becomes a major cost center. Understanding GPU scheduling, multi-node training setups, and inference optimization (quantization, batching, caching) puts you in the top tier. Experience with model registries and feature stores rounds out the profile.
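Of those inference optimizations, batching is the easiest to sketch. The loop below is the core micro-batching pattern most serving frameworks implement internally: block for the first request, then wait a bounded time to fill the batch before running one batched call. Names and defaults are illustrative:

```python
import time
from queue import Queue, Empty

def micro_batcher(request_queue, run_batch, max_batch=8, max_wait_s=0.01):
    """Collect requests until the batch is full or the wait budget expires,
    then run a single batched inference call. Larger batches amortize
    per-call GPU overhead at the cost of a small, bounded latency hit."""
    batch = [request_queue.get()]          # block for the first request
    deadline = time.monotonic() + max_wait_s
    while len(batch) < max_batch:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break
        try:
            batch.append(request_queue.get(timeout=remaining))
        except Empty:
            break
    return run_batch(batch)
```

The `max_wait_s` knob is the tradeoff made explicit: raise it and GPU utilization improves, lower it and tail latency improves. Real servers run this loop per model on a dedicated thread.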
Good MLOps postings specify their ML stack, infrastructure scale, and the problems they're solving (deployment velocity, cost optimization, monitoring gaps). Red flag: companies that want MLOps but don't have any models in production yet. You'll end up doing general DevOps instead.
Compensation Benchmarks
MLOps Engineer roles pay a median of $174,720 based on 43 positions with disclosed compensation. This role's disclosed range is $128K to $149K, and its midpoint ($139K) sits 20% below that MLOps median. For comparison, mid-level AI roles across all categories have a median of $131,300.
Across all AI roles, the market median is $184,000. Top-quartile compensation starts at $244,000. The 90th percentile reaches $309,400. For comparison, the highest-paying categories include AI Engineering Manager ($293,500) and AI Architect ($292,900). By seniority level: Entry: $76,880; Mid: $131,300; Senior: $227,400; Director: $244,288; VP: $234,620.
Openkyber AI Hiring
Openkyber has 161 open AI roles right now. They're hiring across AI/ML Engineer, AI Consultant, AI Engineering Manager, and MLOps Engineer roles. Positions span Georgia, New Jersey, and Illinois (US). Compensation range: $120K - $199K.
Location Context
Across all AI roles, 7% (1,863 positions) offer remote work, while 24,200 require on-site or hybrid attendance. Top AI hiring metros: Los Angeles (1,695 roles, $178,000 median); New York (1,670 roles, $200,000 median); San Francisco (1,059 roles, $244,000 median).
Career Path
Common paths into MLOps Engineer roles include DevOps Engineer, Platform Engineer, and Data Engineer.
From here, career progression typically leads toward ML Platform Lead, Infrastructure Architect, or Engineering Manager.
DevOps engineers with ML curiosity have the shortest path. You already understand deployment, monitoring, and infrastructure. Add ML-specific knowledge (model serving, data pipelines, experiment tracking) and you're competitive. The career ceiling is high: ML Platform Lead roles at top companies pay well because the infrastructure complexity is enormous.
What to Expect in Interviews
Interviews emphasize infrastructure and reliability. Expect questions about CI/CD for ML models, monitoring for data drift, and how you'd design a model serving platform that handles 10K requests per second. Coding rounds focus on Python and infrastructure-as-code (Terraform, Helm). Be ready to discuss tradeoffs between different model serving frameworks and how you'd handle rollback when a new model degrades performance.
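The rollback question usually reduces to a guardrail comparison between the incumbent model and the canary. A minimal sketch, assuming you already aggregate error-rate and latency metrics per model version (the field names and thresholds here are invented for illustration):

```python
def should_rollback(baseline, canary,
                    max_error_ratio=1.2, max_latency_ratio=1.5):
    """Decide whether a canary model version should be rolled back.

    baseline/canary are dicts of aggregated metrics for each version,
    e.g. {"error_rate": 0.01, "p95_latency_ms": 80}. The ratios are
    guardrail multipliers: how much worse the canary may be before
    automation rolls traffic back to the incumbent.
    """
    if canary["error_rate"] > baseline["error_rate"] * max_error_ratio:
        return True  # quality regression
    if canary["p95_latency_ms"] > baseline["p95_latency_ms"] * max_latency_ratio:
        return True  # latency regression
    return False
```

Interviewers tend to probe the edges of exactly this logic: what if the baseline error rate is zero, how long do you observe the canary before deciding, and who gets paged when the rollback itself fails.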
AI Hiring Overview
The AI job market has 26,159 open positions tracked in our dataset. By seniority: 2,416 entry-level, 16,247 mid-level, 5,153 senior, and 2,343 leadership roles (Director, VP, C-Level). Remote roles make up 7% of the market (1,863 positions). The remaining 24,200 roles require on-site or hybrid attendance.
The market median for AI roles is $184,000. Top-quartile compensation starts at $244,000. The 90th percentile reaches $309,400. Highest-paying categories: AI Engineering Manager ($293,500 median, 28 roles); AI Architect ($292,900 median, 108 roles); AI Safety ($274,200 median, 19 roles).
The AI Job Market Today
The AI job market spans 26,159 open positions across 15 role categories. The largest categories by volume: AI/ML Engineer (23,752), AI Software Engineer (598), AI Product Manager (594). These three account for roughly 95% of open positions, though smaller categories often have higher per-role compensation because of specialized skill requirements.
The seniority mix tells a story about where AI teams are in their maturity. Entry-level roles (2,416) are outnumbered by mid-level (16,247) and senior (5,153) positions, reflecting that most companies are past the 'build a team from scratch' phase and need experienced engineers who can ship production systems. Leadership roles (Director, VP, C-Level) total 2,343 positions, sitting at the junction between technical execution and organizational strategy.
Remote work availability sits at 7% of all AI roles (1,863 positions), with 24,200 requiring on-site or hybrid attendance. The remote share has stabilized after the post-pandemic correction. Senior and specialized roles (Research Scientist, ML Architect) are more likely to be remote-eligible than entry-level positions, partly because experienced hires have more negotiating power and partly because these roles require less hands-on mentorship.
AI compensation is structured in clear tiers. The market median sits at $184,000. Top-quartile roles start at $244,000, and the 90th percentile reaches $309,400. These figures reflect base salary for roles with disclosed compensation; total compensation (including equity, bonuses, and sign-on) runs 20-40% higher at companies that offer those components.
Category matters for compensation. AI Engineering Manager roles lead at $293,500 median, while Prompt Engineer roles sit at $122,200. The spread between highest and lowest-paying categories reflects the premium on specialized technical skills versus broader analytical roles.
The most in-demand skills across all AI postings: RAG (16,749 postings), AWS (8,932), Rust (7,660), Python (3,815), Azure (2,678), GCP (2,247), Prompt Engineering (1,469), OpenAI (1,269). RAG's place at the top of the list reflects the shift from traditional ML toward generative AI applications, while cloud platform experience (AWS, GCP, Azure) remains among the most common requirements and Python is still the default implementation language across categories.