About This Role
Location: Lebanon, NJ
Salary: $145,000.00 - $155,000.00 USD Annually
Description: Knowledge Graph & AI Engineer

Overview

We are seeking an experienced Knowledge Graph & AI Engineer with deep expertise in graph technologies, ontology design, and modern AI integration. This role focuses on building scalable knowledge graph architectures, developing graph queries, integrating structured and semi-structured data, and enabling AI-driven insights through RAG and LLM-powered applications. The ideal candidate has experience in data engineering, AI engineering, graph data modeling, and deploying machine learning solutions in cloud environments.
Responsibilities:
- Design and implement knowledge graph architectures using property graph or RDF-based models.
- Transform and integrate structured and semi-structured data into optimized graph structures.
- Develop and query graph systems using Cypher and/or SPARQL.
- Design ontologies and entity-relationship models to support sales intelligence and related use cases.
- Integrate knowledge graphs with LLMs using Retrieval-Augmented Generation (RAG) patterns.
- Build APIs and backend services to deliver AI-driven prospecting and recommendation insights.
- Implement scoring models, relationship strength analytics, and graph traversal logic.
- Ensure scalability, security, and performance across enterprise systems.
- Partner with sales, business, and engineering teams to translate requirements into technical solutions.
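The Cypher and RAG responsibilities above fit together as one pattern: retrieve facts from the graph, then ground an LLM prompt in them. The sketch below is a hedged illustration only — the `Company`/`Partner` labels, the `WORKS_WITH` relationship, and the `build_rag_prompt` helper are hypothetical, and in a real system the records would come from a Neo4j session rather than a hardcoded list.

```python
# Sketch of a graph-backed RAG step: retrieve relationship facts with a
# Cypher query, then pack them into grounding context for an LLM.
# Node labels, properties, and the query are illustrative assumptions,
# not a real schema.

CYPHER_PROSPECTS = """
MATCH (c:Company)-[r:WORKS_WITH]->(p:Partner)
WHERE r.strength > $min_strength
RETURN c.name AS company, p.name AS partner, r.strength AS strength
ORDER BY r.strength DESC
LIMIT $limit
"""

def build_rag_prompt(records, question):
    """Format graph query results as grounding context for an LLM."""
    context = "\n".join(
        f"- {r['company']} works with {r['partner']} (strength {r['strength']:.2f})"
        for r in records
    )
    return (
        "Answer using only the facts below.\n\n"
        f"Facts:\n{context}\n\nQuestion: {question}"
    )

# In production these records would come from something like
# session.run(CYPHER_PROSPECTS, min_strength=0.5, limit=10).
sample = [
    {"company": "Acme", "partner": "Globex", "strength": 0.91},
    {"company": "Acme", "partner": "Initech", "strength": 0.62},
]
prompt = build_rag_prompt(sample, "Which partner should sales target first?")
print(prompt)
```

The key design choice is that the LLM is told to answer only from retrieved graph facts, which is what distinguishes graph-grounded RAG from free-form generation.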
Required Qualifications:
- 5+ years of experience in data engineering, AI engineering, or knowledge graph development.
- Hands-on experience with graph technologies, including property graph and/or RDF frameworks.
- Proficiency with Cypher and/or SPARQL.
- Strong data modeling and ontology design skills.
- Experience integrating data from relational databases and external sources.
- Strong Python development experience.
- Experience integrating LLMs into enterprise applications.
- Familiarity with Retrieval-Augmented Generation (RAG) architectures and AI-driven recommendation systems.
- Experience with Amazon SageMaker, AWS Machine Learning tools, ML pipelines, MLOps, CI/CD, model deployment, and inference endpoints.
- Experience with Docker, Kubernetes, and EKS.
- Industry experience in insurance, claims, underwriting, or fraud detection.
Preferred Qualifications:
- Experience building sales intelligence or CRM-adjacent platforms.
- Knowledge of embeddings, semantic search, and vector databases.
- Experience designing relationship scoring or network analytics models.
- Cloud experience with AWS, Azure, or Google Cloud Platform.
- Experience working in financial services or insurance environments.
Success Criteria:
- Deliver a scalable knowledge graph and AI solution that improves sales prospect identification and relationship insights.
- Demonstrate measurable increases in targeting precision and cross-sell opportunity discovery.
- Establish reusable architectural patterns to support enterprise AI\-driven sales initiatives.
For applications and inquiries, contact: hirings@openkyber.com
Salary Context
This $145K-$155K range is below the median for MLOps Engineer roles in our dataset (median: $156K across 34 roles with salary data).
Role Details
About This Role
MLOps Engineers build the infrastructure that keeps ML models running in production. They own CI/CD pipelines for model deployment, monitoring for data drift and model degradation, and the tooling that lets data scientists ship faster. If ML Engineers build the models, MLOps Engineers build the roads those models travel on.
The job is fundamentally about reliability and velocity. Data scientists want to iterate fast. Product teams want stable predictions. Your job is to make both happen simultaneously. That means building deployment pipelines that catch regressions before they hit production, monitoring systems that alert on data drift before it degrades model performance, and self-service tooling that lets data scientists deploy without filing a ticket.
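To make the drift-monitoring idea concrete, here is a minimal population-stability-index (PSI) check in Python — one common way to flag a shifted input distribution before it degrades model performance. The bucket count and the 0.1/0.25 thresholds are widely used rules of thumb, not anything this posting specifies.

```python
import math
from collections import Counter

def psi(expected, actual, buckets=10):
    """Population Stability Index between two numeric samples.
    Common rule of thumb: < 0.1 stable, > 0.25 significant drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / buckets or 1.0  # guard against a constant sample

    def dist(values):
        # Histogram each sample into the training-data buckets.
        counts = Counter(
            min(int((v - lo) / width), buckets - 1) for v in values
        )
        # Floor at a tiny probability so the log term stays defined.
        return [max(counts.get(b, 0) / len(values), 1e-6) for b in range(buckets)]

    e, a = dist(expected), dist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train_scores = [i / 100 for i in range(100)]       # what the model saw in training
live_scores = [0.6 + i / 250 for i in range(100)]  # shifted production traffic

assert psi(train_scores, train_scores) < 0.1       # identical data: stable
assert psi(train_scores, live_scores) > 0.25       # shifted data: alert
```

In a real pipeline this check would run on a schedule against a feature's training baseline, with the alert wired into whatever paging system the team uses.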
Across the 26,159 AI roles we're tracking, MLOps Engineer positions make up less than 1% of the market. At Openkyber, this role fits into their broader AI and engineering organization.
MLOps demand tracks closely with production ML adoption. As more companies move models from notebooks to production, the need for MLOps grows. The role is well-established at large tech companies and growing fast at mid-stage startups that are hitting the 'our models work in notebooks but break in production' phase.
What the Work Looks Like
A typical week involves: debugging a model deployment that's serving stale predictions, building a new monitoring dashboard for a feature team, writing Terraform for GPU-enabled inference clusters, reviewing pull requests for the ML platform's CI/CD pipeline, and meeting with data scientists to understand their pain points. You're the bridge between ML and infrastructure.
Skills Required
Kubernetes, Docker, and cloud infrastructure are baseline. Most roles want experience with ML-specific tooling: MLflow, Kubeflow, Weights & Biases, or similar. Strong DevOps fundamentals matter more than ML theory. You need to understand model serving (TorchServe, Triton, vLLM), monitoring (Prometheus, Grafana), and infrastructure-as-code (Terraform, Pulumi).
GPU infrastructure knowledge is increasingly valuable as LLM inference becomes a major cost center. Understanding GPU scheduling, multi-node training setups, and inference optimization (quantization, batching, caching) puts you in the top tier. Experience with model registries and feature stores rounds out the profile.
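To illustrate the batching point above, here is a toy sketch of request micro-batching in Python. Production servers such as Triton or vLLM add queues, latency deadlines, and padding; the `serve` and `batch_requests` helpers here are illustrative names, not any framework's API.

```python
# Toy dynamic batching: group incoming requests so the model runs one
# forward pass per batch instead of one per request. This sketch only
# shows the core accounting, not queuing or latency deadlines.

def batch_requests(requests, max_batch=8):
    """Split a request list into model-sized batches."""
    return [requests[i:i + max_batch] for i in range(0, len(requests), max_batch)]

def serve(requests, model_fn, max_batch=8):
    outputs = []
    for batch in batch_requests(requests, max_batch):
        outputs.extend(model_fn(batch))  # one forward pass per batch
    return outputs

# A fake "model" that doubles its inputs, a batch at a time.
double = lambda xs: [2 * x for x in xs]
results = serve(list(range(20)), double, max_batch=8)
print(results)
```

With 20 requests and `max_batch=8`, the model runs three times instead of twenty — the cost win that makes batching central to GPU inference economics.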
Good MLOps postings specify their ML stack, infrastructure scale, and the problems they're solving (deployment velocity, cost optimization, monitoring gaps). Red flag: companies that want MLOps but don't have any models in production yet. You'll end up doing general DevOps instead.
Compensation Benchmarks
MLOps Engineer roles pay a median of $174,720 based on 43 positions with disclosed compensation. Mid-level AI roles across all categories have a median of $131,300. This role's midpoint ($150K) sits 14% below the MLOps Engineer median. Disclosed range: $145K to $155K.
Across all AI roles, the market median is $184,000. Top-quartile compensation starts at $244,000. The 90th percentile reaches $309,400. For comparison, the highest-paying categories include AI Engineering Manager ($293,500) and AI Architect ($292,900). By seniority level: Entry: $76,880; Mid: $131,300; Senior: $227,400; Director: $244,288; VP: $234,620.
Openkyber AI Hiring
Openkyber has 161 open AI roles right now, hiring across AI/ML Engineer, AI Consultant, AI Engineering Manager, and MLOps Engineer positions. Locations span GA, NJ, and IL in the US. Compensation range: $120K - $199K.
Location Context
Across all AI roles, 7% (1,863 positions) offer remote work, while 24,200 require on-site or hybrid attendance. Top AI hiring metros: Los Angeles (1,695 roles, $178,000 median); New York (1,670 roles, $200,000 median); San Francisco (1,059 roles, $244,000 median).
Career Path
Common paths into MLOps Engineer roles include DevOps Engineer, Platform Engineer, Data Engineer.
From here, career progression typically leads toward ML Platform Lead, Infrastructure Architect, Engineering Manager.
DevOps engineers with ML curiosity have the shortest path. You already understand deployment, monitoring, and infrastructure. Add ML-specific knowledge (model serving, data pipelines, experiment tracking) and you're competitive. The career ceiling is high: ML Platform Lead roles at top companies pay well because the infrastructure complexity is enormous.
What to Expect in Interviews
Interviews emphasize infrastructure and reliability. Expect questions about CI/CD for ML models, monitoring for data drift, and how you'd design a model serving platform that handles 10K requests per second. Coding rounds focus on Python and infrastructure-as-code (Terraform, Helm). Be ready to discuss tradeoffs between different model serving frameworks and how you'd handle rollback when a new model degrades performance.
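The rollback scenario interviewers probe can be sketched in a few lines: compare a canary model's live metric against the incumbent and revert traffic if it degrades past a tolerance. The `should_rollback` helper, the accuracy metric, and the 0.02 tolerance below are hypothetical choices for illustration, not a specific platform's API.

```python
# Minimal canary rollback decision: promote the new model version only
# if its live metric holds up against the incumbent's baseline.

def should_rollback(baseline_accuracy, canary_accuracy, tolerance=0.02):
    """Roll back when the canary underperforms the baseline by more
    than `tolerance` absolute points."""
    return (baseline_accuracy - canary_accuracy) > tolerance

def promote_or_rollback(routes, baseline, canary):
    if should_rollback(baseline["accuracy"], canary["accuracy"]):
        routes["production"] = baseline["version"]  # revert traffic
        return "rolled_back"
    routes["production"] = canary["version"]        # promote canary
    return "promoted"

routes = {"production": "v1"}
status = promote_or_rollback(
    routes,
    baseline={"version": "v1", "accuracy": 0.91},
    canary={"version": "v2", "accuracy": 0.84},
)
assert status == "rolled_back" and routes["production"] == "v1"
```

The interview follow-up is usually about what feeds `canary_accuracy`: delayed labels, proxy metrics, or human review — and how long you wait before trusting the comparison.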
AI Hiring Overview
The AI job market has 26,159 open positions tracked in our dataset. By seniority: 2,416 entry-level, 16,247 mid-level, 5,153 senior, and 2,343 leadership roles (Director, VP, C-Level). Remote roles make up 7% of the market (1,863 positions). The remaining 24,200 roles require on-site or hybrid attendance.
The market median for AI roles is $184,000. Top-quartile compensation starts at $244,000. The 90th percentile reaches $309,400. Highest-paying categories: AI Engineering Manager ($293,500 median, 28 roles); AI Architect ($292,900 median, 108 roles); AI Safety ($274,200 median, 19 roles).
The AI Job Market Today
The AI job market spans 26,159 open positions across 15 role categories. The largest categories by volume: AI/ML Engineer (23,752), AI Software Engineer (598), AI Product Manager (594). These three account for roughly 95% of open positions, though smaller categories often have higher per-role compensation because of specialized skill requirements.
The seniority mix tells a story about where AI teams are in their maturity. Entry-level roles (2,416) are outnumbered by mid-level (16,247) and senior (5,153) positions, reflecting that most companies are past the 'build a team from scratch' phase and need experienced engineers who can ship production systems. Leadership roles (Director, VP, C-Level) total 2,343 positions, representing the bottleneck between technical execution and organizational strategy.
Remote work availability sits at 7% of all AI roles (1,863 positions), with 24,200 requiring on-site or hybrid attendance. The remote share has stabilized after the post-pandemic correction. Senior and specialized roles (Research Scientist, ML Architect) are more likely to be remote-eligible than entry-level positions, partly because experienced hires have more negotiating power and partly because these roles require less hands-on mentorship.
AI compensation is structured in clear tiers. The market median sits at $184,000. Top-quartile roles start at $244,000, and the 90th percentile reaches $309,400. These figures include base salary with disclosed compensation. Total compensation (including equity, bonuses, and sign-on) runs 20-40% higher at companies that offer those components.
Category matters for compensation. AI Engineering Manager roles lead at $293,500 median, while Prompt Engineer roles sit at $122,200. The spread between highest and lowest-paying categories reflects the premium on specialized technical skills versus broader analytical roles.
The most in-demand skills across all AI postings: RAG (16,749 postings), AWS (8,932), Rust (7,660), Python (3,815), Azure (2,678), GCP (2,247), Prompt Engineering (1,469), OpenAI (1,269). Python remains a core requirement across categories, and cloud platform experience (AWS, Azure, GCP) is among the most common asks. The newer entrants to the top skills list (RAG, prompt engineering, LLM APIs) reflect the shift from traditional ML toward generative AI applications.