About This Role
Job Title: Cloud Security Engineer (DevSecOps)
Location: Irvine, CA / San Antonio, TX / New York (Onsite, Local Preferred)
Type: Contract
We are seeking a highly skilled Cloud Security Engineer with DevSecOps experience to join our cybersecurity team within a leading financial / wealth management firm. This role is pivotal in embedding security into our cloud infrastructure and CI/CD pipelines, ensuring secure and compliant deployment of applications and IAM across multi-cloud environments (AWS, Azure, Google Cloud Platform). You will work closely with cybersecurity leadership, cloud engineering, platform engineering, and development teams to design, implement, and maintain secure DevOps practices that align with regulatory and business requirements.
Required:
- 5+ years of experience in Cloud Security Engineering, DevSecOps, or a related field.
- Strong hands-on experience with AWS and/or Azure security services.
- Proficiency in Python for scripting and automation.
- Experience with CI/CD tools such as Harness or similar.
- Solid understanding of container security (e.g., Docker, Kubernetes).
- Familiarity with security frameworks such as NIST, CIS Benchmarks, or ISO 27001.
Key Responsibilities
- Develop Cloud Security runbooks (aka blueprints) for sanctioned cloud services such as S3, Lambda, and RDS.
- Design and implement Cloud Security preventive and detective guardrails using CSPM (Cloud Security Posture Management).
- Design and implement security controls in CI/CD pipelines using tools such as Harness and GitHub.
- Develop and maintain Infrastructure as Code (IaC) using Terraform and AWS CloudFormation.
- Integrate security controls and automated testing into the software development lifecycle (SDLC).
- Collaborate with cloud engineers to ensure secure deployment and configuration of services in AWS and Azure.
- Write and maintain automation scripts in Python to support security monitoring, compliance, and remediation.
- Conduct security reviews and vulnerability assessments of cloud services and cloud-native applications.
- Monitor and respond to security events and alerts related to cloud infrastructure and DevOps pipelines.
- Stay current with emerging threats, vulnerabilities, and compliance requirements relevant to financial services.
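The automation and guardrail responsibilities above can be illustrated with a minimal sketch in Python. This is a hypothetical detective guardrail, not the firm's actual CSPM policy: the bucket records, field names, and rules are invented for illustration.

```python
# Minimal sketch of a detective guardrail: scan cloud resource
# configurations (hypothetical S3-style bucket records) and flag
# violations of simple security rules.

def check_bucket(bucket: dict) -> list[str]:
    """Return a list of rule violations for one bucket config."""
    findings = []
    if bucket.get("public_access"):
        findings.append("public access enabled")
    if not bucket.get("encryption"):
        findings.append("default encryption disabled")
    if not bucket.get("versioning"):
        findings.append("versioning disabled")
    return findings

def scan(buckets: list[dict]) -> dict[str, list[str]]:
    """Map bucket name -> violations, skipping compliant buckets."""
    report = {}
    for b in buckets:
        findings = check_bucket(b)
        if findings:
            report[b["name"]] = findings
    return report

buckets = [
    {"name": "logs", "public_access": False, "encryption": True, "versioning": True},
    {"name": "exports", "public_access": True, "encryption": False, "versioning": True},
]
print(scan(buckets))
# → {'exports': ['public access enabled', 'default encryption disabled']}
```

In a real deployment the resource records would come from cloud APIs or a CSPM export, and findings would feed remediation or alerting rather than a print statement.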
Preferred:
Experience in a regulated industry, preferably financial services or wealth management. Experience with JIRA and Power BI.
For applications and inquiries, contact: hirings@openkyber.com
Role Details
MLOps Engineers build the infrastructure that keeps ML models running in production. They own CI/CD pipelines for model deployment, monitoring for data drift and model degradation, and the tooling that lets data scientists ship faster. If ML Engineers build the models, MLOps Engineers build the roads those models travel on.
The job is fundamentally about reliability and velocity. Data scientists want to iterate fast. Product teams want stable predictions. Your job is to make both happen simultaneously. That means building deployment pipelines that catch regressions before they hit production, monitoring systems that alert on data drift before it degrades model performance, and self-service tooling that lets data scientists deploy without filing a ticket.
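Data-drift monitoring, mentioned above, can be sketched with a population stability index (PSI) check. This is a toy, dependency-free illustration; the bin count, epsilon, and the 0.2 alert threshold are common rules of thumb, not a standard.

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between two samples of a numeric
    feature. Rule of thumb (illustrative): PSI > 0.2 suggests drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0
    def frac(data):
        counts = [0] * bins
        for x in data:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # small epsilon avoids log(0) for empty bins
        return [(c + 1e-6) / (len(data) + 1e-6 * bins) for c in counts]
    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1 * i for i in range(100)]       # training-time distribution
shifted  = [0.1 * i + 4 for i in range(100)]   # drifted production distribution
print(psi(baseline, baseline) < 0.01, psi(baseline, shifted) > 0.2)
# → True True
```

A production monitor would compute this per feature on a schedule and page on sustained threshold breaches rather than single spikes.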
Across the 26,159 AI roles we're tracking, MLOps Engineer positions make up under 1% of the market. At Openkyber, this role fits into their broader AI and engineering organization.
MLOps demand tracks closely with production ML adoption. As more companies move models from notebooks to production, the need for MLOps grows. The role is well-established at large tech companies and growing fast at mid-stage startups that are hitting the 'our models work in notebooks but break in production' phase.
What the Work Looks Like
A typical week involves: debugging a model deployment that's serving stale predictions, building a new monitoring dashboard for a feature team, writing Terraform for GPU-enabled inference clusters, reviewing pull requests for the ML platform's CI/CD pipeline, and meeting with data scientists to understand their pain points. You're the bridge between ML and infrastructure.
Skills Required
Kubernetes, Docker, and cloud infrastructure are baseline. Most roles want experience with ML-specific tooling: MLflow, Kubeflow, Weights & Biases, or similar. Strong DevOps fundamentals matter more than ML theory. You need to understand model serving (TorchServe, Triton, vLLM), monitoring (Prometheus, Grafana), and infrastructure-as-code (Terraform, Pulumi).
GPU infrastructure knowledge is increasingly valuable as LLM inference becomes a major cost center. Understanding GPU scheduling, multi-node training setups, and inference optimization (quantization, batching, caching) puts you in the top tier. Experience with model registries and feature stores rounds out the profile.
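Of the inference optimizations named above, batching is the simplest to sketch. The toy `predict_batch` model below is a stand-in for a real forward pass; batch size and workload are illustrative only.

```python
def predict_batch(inputs: list[float]) -> list[float]:
    """Stand-in for a model forward pass; batching amortizes its
    fixed per-call overhead across many requests."""
    return [x * 2 for x in inputs]

def micro_batch(requests: list[float], max_batch: int = 8) -> list[float]:
    """Split a queue of pending requests into batches of at most
    max_batch and run inference once per batch."""
    outputs = []
    for i in range(0, len(requests), max_batch):
        outputs.extend(predict_batch(requests[i:i + max_batch]))
    return outputs

print(micro_batch([1.0, 2.0, 3.0], max_batch=2))  # → [2.0, 4.0, 6.0]
```

Real serving frameworks (e.g., Triton's dynamic batching) add a wait-time budget so small batches aren't held indefinitely; that latency/throughput tradeoff is a classic interview topic.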
Compensation Benchmarks
MLOps Engineer roles pay a median of $174,720 based on 43 positions with disclosed compensation. Mid-level AI roles across all categories have a median of $131,300.
Across all AI roles, the market median is $184,000. Top-quartile compensation starts at $244,000. The 90th percentile reaches $309,400. For comparison, the highest-paying categories include AI Engineering Manager ($293,500) and AI Architect ($292,900). By seniority level: Entry: $76,880; Mid: $131,300; Senior: $227,400; Director: $244,288; VP: $234,620.
Openkyber AI Hiring
Openkyber has 161 open AI roles right now. They're hiring across AI/ML Engineer, AI Consultant, AI Engineering Manager, and MLOps Engineer. Positions span GA, NJ, and IL in the US. Compensation range: $120K - $199K.
Location Context
Across all AI roles, 7% (1,863 positions) offer remote work, while 24,200 require on-site attendance. Top AI hiring metros: Los Angeles (1,695 roles, $178,000 median); New York (1,670 roles, $200,000 median); San Francisco (1,059 roles, $244,000 median).
Career Path
Common paths into MLOps Engineer roles include DevOps Engineer, Platform Engineer, Data Engineer.
From here, career progression typically leads toward ML Platform Lead, Infrastructure Architect, Engineering Manager.
DevOps engineers with ML curiosity have the shortest path. You already understand deployment, monitoring, and infrastructure. Add ML-specific knowledge (model serving, data pipelines, experiment tracking) and you're competitive. The career ceiling is high: ML Platform Lead roles at top companies pay well because the infrastructure complexity is enormous.
What to Expect in Interviews
Interviews emphasize infrastructure and reliability. Expect questions about CI/CD for ML models, monitoring for data drift, and how you'd design a model serving platform that handles 10K requests per second. Coding rounds focus on Python and infrastructure-as-code (Terraform, Helm). Be ready to discuss tradeoffs between different model serving frameworks and how you'd handle rollback when a new model degrades performance.
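The rollback question above can be made concrete with a small sketch: compare a canary model's metrics against the incumbent and roll back when a guardrail metric degrades past a tolerance. The metric names and thresholds here are illustrative, not any particular platform's defaults.

```python
def should_rollback(baseline: dict, canary: dict,
                    max_error_increase: float = 0.02,
                    max_latency_ratio: float = 1.5) -> bool:
    """Roll back the canary if its error rate rose more than the
    allowed absolute delta, or its p99 latency blew past the ratio."""
    if canary["error_rate"] - baseline["error_rate"] > max_error_increase:
        return True
    if canary["p99_latency_ms"] > baseline["p99_latency_ms"] * max_latency_ratio:
        return True
    return False

prod   = {"error_rate": 0.010, "p99_latency_ms": 120}
canary = {"error_rate": 0.045, "p99_latency_ms": 130}
print(should_rollback(prod, canary))  # → True
```

In an interview, the follow-up is usually operational: who owns the thresholds, how long the canary bakes, and whether rollback is automatic or gated on a human.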
When evaluating opportunities: Good MLOps postings specify their ML stack, infrastructure scale, and the problems they're solving (deployment velocity, cost optimization, monitoring gaps). Red flag: companies that want MLOps but don't have any models in production yet. You'll end up doing general DevOps instead.
AI Hiring Overview
The AI job market has 26,159 open positions tracked in our dataset. By seniority: 2,416 entry-level, 16,247 mid-level, 5,153 senior, and 2,343 leadership roles (Director, VP, C-Level). Remote roles make up 7% of the market (1,863 positions). The remaining 24,200 roles require on-site or hybrid attendance.
The market median for AI roles is $184,000. Top-quartile compensation starts at $244,000. The 90th percentile reaches $309,400. Highest-paying categories: AI Engineering Manager ($293,500 median, 28 roles); AI Architect ($292,900 median, 108 roles); AI Safety ($274,200 median, 19 roles).
The AI Job Market Today
The AI job market spans 26,159 open positions across 15 role categories. The largest categories by volume: AI/ML Engineer (23,752), AI Software Engineer (598), AI Product Manager (594). These three account for the majority of open positions, though smaller categories often have higher per-role compensation because of specialized skill requirements.
The seniority mix tells a story about where AI teams are in their maturity. Entry-level roles (2,416) are outnumbered by mid-level (16,247) and senior (5,153) positions, reflecting that most companies are past the 'build a team from scratch' phase and need experienced engineers who can ship production systems. Leadership roles (Director, VP, C-Level) total 2,343 positions, representing the bottleneck between technical execution and organizational strategy.
Remote work availability sits at 7% of all AI roles (1,863 positions), with 24,200 requiring on-site or hybrid attendance. The remote share has stabilized after the post-pandemic correction. Senior and specialized roles (Research Scientist, ML Architect) are more likely to be remote-eligible than entry-level positions, partly because experienced hires have more negotiating power and partly because these roles require less hands-on mentorship.
AI compensation is structured in clear tiers. The market median sits at $184,000. Top-quartile roles start at $244,000, and the 90th percentile reaches $309,400. These figures include base salary with disclosed compensation. Total compensation (including equity, bonuses, and sign-on) runs 20-40% higher at companies that offer those components.
Category matters for compensation. AI Engineering Manager roles lead at $293,500 median, while Prompt Engineer roles sit at $122,200. The spread between highest and lowest-paying categories reflects the premium on specialized technical skills versus broader analytical roles.
The most in-demand skills across all AI postings: RAG (16,749 postings), AWS (8,932 postings), Rust (7,660 postings), Python (3,815 postings), Azure (2,678 postings), GCP (2,247 postings), Prompt Engineering (1,469 postings), OpenAI (1,269 postings). Python appears widely in role descriptions regardless of category, and cloud platform experience (AWS, GCP, Azure) is among the most common requirements. The newer entrants to the top skills list (RAG, prompt engineering, LLM APIs) reflect the shift from traditional ML toward generative AI applications.