About This Role
Pager Health is a connected health platform company that enables healthcare enterprises to deliver high-engagement, intelligent health experiences for their patients, members and teams through integrated technology, AI and concierge services. Our solutions help people get the right care at the right time in the right place and stay healthy, while simultaneously reducing system friction and fragmentation, powering engagement, and orchestrating the enterprise. Pager Health partners with leading payers, providers and employers representing more than 28 million individuals across the United States and Latin America.
We believe that healthcare should work for everyone. We believe it's too important to be as cumbersome and difficult as it currently is. And we believe there is a better way to deliver a simplified, more meaningful healthcare experience for all – one that we're determined to enable.
We're looking for an Applied AI/LLM Engineer to design, build, and ship LLM-powered agents and applications. You'll work closely with our Data Science team, which defines the LLM strategy, guardrails, evaluation criteria, and model selection. Your job is to take that foundation and turn it into reliable, production-grade agent systems that solve real business problems. This is a builder role, not a research role. You'll spend most of your time writing code: wiring up tools, orchestrating multi-step workflows, integrating APIs, and making sure agents behave predictably in the wild. You should be deeply comfortable working with large language models in production and operating at the intersection of software engineering and applied AI.
Responsibilities:
- Build agentic applications and workflows using the LLM frameworks, models, and guardrails provided by the Data Science team
- Design and implement tool integrations, function-calling patterns, and orchestration logic that allow agents to take actions across internal systems and external APIs
- Translate agent specifications and prompt strategies (authored by Data Science) into robust, deployable services
- Implement RAG pipelines, including vector store integration, chunking strategies, and retrieval optimization
- Own the full lifecycle of agent systems from prototype through production, including testing, monitoring, logging, and iteration
- Build evaluation and observability infrastructure so the team can measure agent quality, latency, cost, and safety in production
- Collaborate with Data Science on prompt engineering, model behavior tuning, and guardrail enforcement, implementing their specifications in the runtime layer
- Work with platform and infrastructure teams to deploy, scale, and maintain agent services in cloud environments (GCP/Vertex AI, AWS Bedrock, Azure OpenAI)
- Contribute to internal tooling, SDKs, and shared libraries that accelerate agent development across the organization
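The tool-integration and orchestration work above boils down to a dispatch loop: the model either requests a tool call or returns a final answer. A minimal sketch, with an illustrative tool registry; `run_agent` and the message schema are assumptions for this example, not Pager Health's actual runtime:

```python
import json

# Hypothetical tool registry: in production these entries would wrap
# internal services and external APIs rather than local lambdas.
TOOLS = {
    "get_member_plan": lambda member_id: {"member_id": member_id, "plan": "PPO"},
    "schedule_callback": lambda phone: {"status": "scheduled", "phone": phone},
}

def run_agent(llm, user_message, max_steps=5):
    """Minimal agent loop: ask the model, execute any tool it requests,
    feed the result back, and stop when it returns a final answer."""
    history = [{"role": "user", "content": user_message}]
    for _ in range(max_steps):
        reply = llm(history)  # llm is any callable returning a dict
        if reply.get("tool"):  # model wants to take an action
            result = TOOLS[reply["tool"]](**reply["args"])
            history.append({"role": "tool", "content": json.dumps(result)})
        else:  # model produced a final answer
            return reply["content"]
    raise RuntimeError("agent exceeded max_steps without answering")
```

The `max_steps` cap is the kind of guardrail that makes agents "behave predictably in the wild": a looping model fails loudly instead of running up latency and cost.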
Qualifications:
- 3+ years of software engineering experience with strong proficiency in Python
- Hands-on experience building applications powered by large language models (Claude, GPT, Gemini)
- Familiarity with agent frameworks and orchestration patterns (LangChain, LangGraph, CrewAI, Vertex AI Agent Builder, or custom orchestration)
- Experience implementing function calling, tool use, and multi-step agent workflows
- Solid understanding of RAG architectures, embedding models, and vector databases (Pinecone, Weaviate, pgvector, Vertex AI Vector Search)
- Comfort working within defined guardrails and model configurations: you don't need to pick the model, but you need to know how to get the most out of it
- Experience with API design, microservices, and deploying services in cloud environments
- Strong debugging and problem-solving instincts; agent systems fail in non-obvious ways, and you enjoy chasing down why
- Strong communication skills; ability to work across Data Science, Product, and Engineering teams
- Nice to have: experience with evaluation frameworks for LLM-based systems (custom evals, RAGAS, LangSmith, Braintrust)
- Nice to have: familiarity with MLOps tooling and CI/CD for ML systems, and experience with streaming responses, async architectures, and real-time agent interactions
- Preferred: background in building multi-agent systems with routing, delegation, and coordination patterns
- Preferred: exposure to the Google Cloud Platform/Vertex AI ecosystem
- Preferred: contributions to open-source AI/ML projects
For Colorado, Nevada, New York, and Washington DC-based employment: In accordance with applicable pay transparency laws, the pay range for this position is $150,000 - $160,000. The compensation package may include stock options, plus medical, dental, vision, and financial benefits, generous PTO, professional development stipends, and wellness benefits. Final compensation for this role will be determined by various factors such as a candidate's relevant work experience, skills, certifications, and geographic location. The range listed only applies to Colorado, Nevada, New York, and Washington DC.
At Pager Health, you will work alongside passionate, talented and mission\-driven professionals – people who are building scalable platforms, solving critical enterprise\-level challenges in health tech and providing concierge services to help individuals access the medical care and wellbeing programs they need.
You will be encouraged to shape your job, stretch your skills and drive the company's future. You will be part of a remote\-first, dynamic and tight\-knit team that embraces the challenges and opportunities that come with being part of a growth company. Most importantly, you will be an industry innovator who is making a positive impact on people's lives.
At Pager Health, we value diversity and always treat all employees and job applicants based on merit, qualifications, competence, and talent. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
Please be aware that all official communication from Pager Health regarding employment opportunities will originate from email addresses ending in @pager.com. We will never request personal or financial information via email. If you receive an email purporting to be from Pager Health that does not adhere to this format, please do not respond and report it to security@pager.com.
Pager Health is committed to protecting the privacy and security of your personal information.
Salary Context
This $150K-$160K range is above the median for AI/ML Engineer roles in our dataset (median: $100K across 15,465 roles with salary data).
Role Details
About This Role
AI/ML Engineers build and deploy machine learning models in production. They work across the full ML lifecycle: data pipelines, model training, evaluation, and serving infrastructure. The role has evolved significantly over the past two years. Where ML Engineers once spent most of their time on model architecture, the job now tilts heavily toward inference optimization, cost management, and integrating LLM capabilities into existing systems. Companies want engineers who can ship production systems, and the experimenter-only role is fading fast.
Day-to-day, you're writing training pipelines, debugging data quality issues, setting up evaluation frameworks, and figuring out why your model performs differently in staging than it did on your dev set. The best ML engineers are obsessive about reproducibility and measurement. They instrument everything. They know that a model is only as good as the data feeding it and the infrastructure serving it.
Across the 26,159 AI roles we're tracking, AI/ML Engineer positions make up 91% of the market. At Pager Health, this role fits into their broader AI and engineering organization.
Demand for AI/ML Engineers has been strong and consistent. Unlike some AI roles that spike with hype cycles, ML engineering is a foundational need. Every company deploying AI models needs people who can keep them running, and the gap between research prototypes and production systems keeps growing.
What the Work Looks Like
A typical week might include: debugging a data pipeline that's silently dropping 3% of training examples, running A/B tests on a new model version, writing documentation for a feature flag system that lets you roll back model deployments, and reviewing a junior engineer's PR for a new evaluation metric. Meetings tend to be cross-functional since ML touches product, engineering, and data teams.
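Silent row drops like the 3% in that first example are usually caught by reconciling row counts between pipeline stages. A minimal sketch; the stage names and `check_row_counts` helper are illustrative, not any particular framework's API:

```python
def check_row_counts(stage_counts, tolerance=0.0):
    """Flag any pipeline stage that drops a larger fraction of rows than
    `tolerance` allows. `stage_counts` is an ordered list of
    (stage_name, row_count) pairs, one per pipeline stage."""
    problems = []
    for (_, prev_n), (name, n) in zip(stage_counts, stage_counts[1:]):
        dropped = prev_n - n
        if prev_n and dropped / prev_n > tolerance:
            # Record the stage, absolute drop, and drop rate for alerting.
            problems.append((name, dropped, dropped / prev_n))
    return problems
```

Running this after every pipeline execution turns a silent 3% loss into a loud, attributable failure at a specific stage.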
Skills Required
Python and PyTorch dominate the requirements. Most roles expect experience with cloud platforms (AWS, GCP, or Azure) and familiarity with ML frameworks like TensorFlow or JAX. RAG (Retrieval-Augmented Generation) has become a top-3 skill requirement as companies integrate LLMs into their products. Docker and Kubernetes show up in about a third of postings, reflecting the production focus of the role.
Beyond the core stack, employers increasingly want experience with experiment tracking tools (MLflow, Weights & Biases), feature stores, and vector databases. Fine-tuning experience is valuable but less common than you'd think from reading Twitter. Most production LLM work is RAG and prompt engineering, not fine-tuning. If you have both, you're in a strong position.
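Since most production LLM work is RAG, it's worth knowing that the core of a RAG pipeline is just chunk, embed, and rank by similarity. A toy sketch: `embed` here is a crude character-count stand-in for a real embedding model, and a real pipeline would add chunk overlap, metadata filtering, and reranking:

```python
import math

def embed(text):
    # Stand-in for a real embedding model: bag-of-letters vector.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def chunk(text, size=200):
    """Naive fixed-size chunking; production code splits on structure."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def retrieve(query, chunks, k=2):
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
```

The retrieved chunks are then injected into the prompt; swapping the list scan for a vector database (Pinecone, pgvector, Vertex AI Vector Search) changes the scale, not the shape, of the pipeline.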
Companies that are serious about AI/ML hiring tend to post specific infrastructure details in the job description: the frameworks they use, their model serving stack, their data pipeline tools. Vague postings that just say 'ML experience required' without specifics are often companies that haven't figured out what they need yet.
Compensation Benchmarks
AI/ML Engineer roles pay a median of $166,983 based on 13,781 positions with disclosed compensation. Mid-level AI roles across all categories have a median of $131,300. This role's midpoint ($155K) sits about 7% below the AI/ML Engineer median. Disclosed range: $150K to $160K.
Across all AI roles, the market median is $184,000. Top-quartile compensation starts at $244,000. The 90th percentile reaches $309,400. For comparison, the highest-paying categories include AI Engineering Manager ($293,500) and AI Architect ($292,900). By seniority level: Entry: $76,880; Mid: $131,300; Senior: $227,400; Director: $244,288; VP: $234,620.
Pager Health AI Hiring
Pager Health has 1 open AI role right now, an AI/ML Engineer position. The role is remote (US), with a disclosed compensation range of $150K - $160K.
Remote Work Context
Remote AI roles pay a median of $156,000 across 1,221 positions. About 7% of all AI roles offer remote work.
Career Path
Common paths into AI/ML Engineer roles include Data Scientist, Software Engineer, Research Engineer.
From here, career progression typically leads toward ML Architect, AI Engineering Manager, Principal ML Engineer.
The fastest path into ML engineering is through software engineering with a self-directed ML education. A CS degree helps, but production engineering skills matter more than academic credentials. Build something that works, deploy it, and measure it. That portfolio project is worth more than a Coursera certificate. For career growth, the fork comes around the senior level: go deep on technical complexity (staff/principal track) or move into managing ML teams.
What to Expect in Interviews
Expect system design questions around ML pipelines: how you'd build a training pipeline for a specific use case, handle data drift, or design A/B testing infrastructure for model deployments. Coding rounds typically involve Python, with emphasis on data manipulation (pandas, numpy) and algorithm implementation. Take-home assignments often ask you to build an end-to-end ML pipeline from raw data to deployed model.
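Data drift is an interview staple worth rehearsing concretely. One common screen is the Population Stability Index (PSI), which compares the binned distribution of a feature at training time against what's arriving in production. A minimal sketch; the bin count and thresholds are conventional rules of thumb, not a standard:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a training-time (expected) and
    serving-time (actual) sample of one numeric feature.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant."""
    lo, hi = min(expected), max(expected)
    # Equal-width bin edges fitted on the expected (training) distribution.
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(values):
        counts = [0] * bins
        for v in values:
            counts[sum(v > e for e in edges)] += 1  # bin index for v
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(values), 1e-6) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(p, q))
```

Being able to write something like this on a whiteboard, then discuss where it breaks (categorical features, seasonality, choosing bins), covers exactly the "handle data drift" design question.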
When evaluating opportunities, apply the test described under Skills Required: postings that name their frameworks, serving stack, and pipeline tools signal a team that knows what it needs, while vague "ML experience required" postings often signal one that doesn't.
AI Hiring Overview
The AI job market has 26,159 open positions tracked in our dataset. By seniority: 2,416 entry-level, 16,247 mid-level, 5,153 senior, and 2,343 leadership roles (Director, VP, C-Level). Remote roles make up 7% of the market (1,863 positions). The remaining 24,200 roles require on-site or hybrid attendance.
The market median for AI roles is $184,000. Top-quartile compensation starts at $244,000. The 90th percentile reaches $309,400. Highest-paying categories: AI Engineering Manager ($293,500 median, 28 roles); AI Architect ($292,900 median, 108 roles); AI Safety ($274,200 median, 19 roles).
The AI Job Market Today
The AI job market spans 26,159 open positions across 15 role categories. The largest categories by volume: AI/ML Engineer (23,752), AI Software Engineer (598), AI Product Manager (594). These three account for the majority of open positions, though smaller categories often have higher per-role compensation because of specialized skill requirements.
The seniority mix tells a story about where AI teams are in their maturity. Entry-level roles (2,416) are outnumbered by mid-level (16,247) and senior (5,153) positions, reflecting that most companies are past the 'build a team from scratch' phase and need experienced engineers who can ship production systems. Leadership roles (Director, VP, C-Level) total 2,343 positions, representing the bottleneck between technical execution and organizational strategy.
Remote work availability sits at 7% of all AI roles (1,863 positions), with 24,200 requiring on-site or hybrid attendance. The remote share has stabilized after the post-pandemic correction. Senior and specialized roles (Research Scientist, ML Architect) are more likely to be remote-eligible than entry-level positions, partly because experienced hires have more negotiating power and partly because these roles require less hands-on mentorship.
AI compensation is structured in clear tiers. The market median sits at $184,000. Top-quartile roles start at $244,000, and the 90th percentile reaches $309,400. These figures include base salary with disclosed compensation. Total compensation (including equity, bonuses, and sign-on) runs 20-40% higher at companies that offer those components.
Category matters for compensation. AI Engineering Manager roles lead at $293,500 median, while Prompt Engineer roles sit at $122,200. The spread between highest and lowest-paying categories reflects the premium on specialized technical skills versus broader analytical roles.
The most in-demand skills across all AI postings: RAG (16,749 postings), AWS (8,932), Rust (7,660), Python (3,815), Azure (2,678), GCP (2,247), prompt engineering (1,469), OpenAI (1,269). Cloud platform experience (AWS, GCP, Azure) remains a core requirement, and the newer entrants to the top of the list (RAG, vector databases, LLM APIs) reflect the shift from traditional ML toward generative AI applications.