About Invoca
----------------
Commitment to our customers, collaboration, and continuous improvement in a positive environment are more than words written on a wall at Invoca; they're our way of life. We take pride in an inclusive and egoless culture that helps us drive innovation and build value for both our customers and our people. And of course, there's the competitive pay, great perks, and getting to work on an industry-leading product. If this sounds unlike most tech jobs you've had, you're right. Come join us. We're building something special.
About the Engineering Team
------------------------------
You'll join a team where everyone, including you, is striving to constantly improve their knowledge of software development tools, practices, and processes. We are an incredibly supportive team. We swarm when problems arise and give excellent feedback to help each other grow. Working on our close-knit, multi-functional teams is a chance to share and grow your knowledge of different domains from databases to front ends to telephony and everything in between.
We are passionate about many things: continuous improvement, working at a brisk but sustainable pace, writing resilient code, maintaining production reliability, paying down technical debt, and hiring fantastic teammates. We love to share these passions with each other.
Learn more about the Invoca development team on our blog and check out our open source projects.
About the Role
------------------
Are you passionate about harnessing the power of generative AI and foundation models to build truly intelligent products? At Invoca, we're a team of innovators committed to building exceptional teams and groundbreaking AI solutions. This is a unique opportunity to architect the next generation of AI-powered customer experiences, making a direct and measurable impact on our products and the success of our clients.
### Why You'll Thrive Here
As a key member of our Data Platform team, you won't just build models; you'll architect the future of how businesses understand and interact with their customers. You will:
- Architect and Deploy Applied AI Systems: Design, build, and deploy scalable, production-grade applications using foundation models and other advanced AI techniques to directly influence product performance and efficiency.
- Engineer Advanced RAG and Agentic Systems: Go beyond basic implementation. Engineer robust Retrieval-Augmented Generation (RAG) pipelines and multi-step agentic workflows that can reason, use tools, and solve complex problems.
- Contribute to Agentic AI System Evaluation: Establish and manage rigorous evaluation frameworks for complex AI agents. You will go beyond simple response quality to assess the entire agentic process, including the validity of reasoning steps, correctness of tool use, task completion rates, and overall robustness.
- Excel in Prompt Engineering and Optimization: Craft, test, and manage sophisticated prompt chains and templates to ensure optimal performance, reliability, and cost-effectiveness from our AI models.
- Build for Scale and Reliability: Develop resilient serving architectures that seamlessly integrate Large Language Models (LLMs) with enterprise systems, ensuring high availability and performance.
- Champion MLOps for Applied AI: Develop and maintain sophisticated MLOps and CI/CD pipelines tailored for the unique challenges of generative AI, including prompt versioning, RAG pipeline management, and continuous evaluation of agent behavior.
- Translate Business Needs into AI-Powered Solutions: Work as a strategic partner to product and engineering teams, deeply understanding customer challenges and translating them into innovative, viable, and impactful AI features.
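To make the RAG responsibilities above concrete, here is a minimal retrieve-then-generate sketch. Everything in it is illustrative, not Invoca's stack: the `embed` function is a bag-of-words stand-in for a real embedding model, and the toy corpus is invented.

```python
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: bag-of-words token counts.
    return Counter(text.lower().split())

def similarity(a: Counter, b: Counter) -> int:
    # Overlap score between two bag-of-words "embeddings".
    return sum((a & b).values())

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query, keep the top k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc: similarity(q, embed(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Ground the model by injecting retrieved context into the prompt.
    context = "\n".join(f"- {d}" for d in docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Invoca analyzes phone conversations for marketers.",
    "RAG pipelines ground model answers in retrieved documents.",
    "Vector databases store embeddings for similarity search.",
]
docs = retrieve("How do RAG pipelines ground answers?", corpus)
prompt = build_prompt("How do RAG pipelines ground answers?", docs)
```

In production, the bag-of-words scoring would be replaced by dense embeddings in a vector database, but the retrieve, rank, and prompt-assembly flow is the same shape.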
### Our Commitment to You
At Invoca, Applied AI Engineers are empowered by mentorship from leading experts across our data science, engineering, and architecture teams. Our dedicated data platform team leverages a powerful combination of proprietary, patented technologies and best-in-class vendor tools to create an exceptionally scalable AI application platform. Our goal is to seamlessly deliver transformative AI-powered experiences through our robust API platform, and your expertise will contribute significantly to accelerating this mission.
What You Bring to the Table
-------------------------------
We're looking for a creative and driven Applied AI Engineer who is excited to tackle complex problems and build impactful solutions. You likely have:
- Proven Experience: 5+ years of professional experience in Applied AI Engineering, ML Engineering, or a closely related role with a strong focus on building and deploying AI-powered products and applications.
- Applied AI & Python Expertise: Advanced proficiency in Python and hands-on experience building applications with leading AI/ML frameworks (e.g., LangChain, LlamaIndex, CrewAI, PyTorch). You are an expert with data and ML libraries (e.g., pandas, spaCy).
- Production Generative AI Champion: Demonstrated success deploying and maintaining applications powered by LLMs and other generative models in a production environment.
- Retrieval-Augmented Generation (RAG) Expertise: Deep, hands-on experience designing, building, and optimizing RAG pipelines. This includes expertise in vector databases (e.g., Qdrant, Pinecone, Weaviate), embedding strategies, and chunking techniques.
- Agentic AI System Evaluation Expertise: Demonstrable experience with modern evaluation techniques for multi-step AI agents. You should be able to speak to the trade-offs of evaluating reasoning traces, tool usage, and final outcomes using frameworks like LangSmith, DeepEval, Ragas, TruLens, or custom-built solutions.
- Prompt Engineering & In-Context Learning: Demonstrable skill in designing, testing, and optimizing complex prompts and few-shot examples to maximize model performance for specific tasks.
- Fine-Tuning Proficiency: Experience in fine-tuning foundation models for specific downstream tasks, with a clear understanding of when to fine-tune versus when to use in-context learning or agentic approaches.
- Scalable Model Serving: Advanced proficiency with API-driven frameworks for accessing and serving self-hosted foundation models (e.g., AWS SageMaker/Bedrock, Databricks Model Serving, TGI, vLLM), focusing on building resilient, scalable, and optimized integrations.
- Performance and Cost Optimization: A proven ability to optimize AI systems for low latency and high throughput. You have experience with techniques like model quantization, caching strategies, and infrastructure choices to manage and reduce operational costs.
- MLOps for AI Systems: Intermediate proficiency with MLOps tooling (e.g., MLflow, Arize) and best practices for CI/CD, monitoring, and maintenance of complex AI systems.
- Educational Foundation: Bachelor's Degree in Computer Science, Engineering, Statistics, or a related field (or equivalent practical experience). An advanced degree (Master's or Ph.D.) is a strong plus.
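The agent-evaluation requirement above is the kind of thing frameworks like LangSmith or DeepEval formalize; as a rough sketch of a custom-built version, a trajectory scorer might check tool-use validity and task completion like this. All names (`Step`, `evaluate_trajectory`, the sample run) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Step:
    tool: str    # tool the agent called at this step
    args: dict   # arguments it passed
    ok: bool     # whether the call succeeded

def evaluate_trajectory(steps, expected_tools, task_completed):
    """Score an agent run on tool-use validity and task completion."""
    called = [s.tool for s in steps]
    # Fraction of tool calls that were to tools the task actually allows.
    tool_precision = sum(t in expected_tools for t in called) / max(len(called), 1)
    # Fraction of steps that executed without error.
    success_rate = sum(s.ok for s in steps) / max(len(steps), 1)
    return {
        "tool_precision": tool_precision,
        "step_success_rate": success_rate,
        "task_completed": task_completed,
    }

run = [
    Step("search", {"q": "refund policy"}, True),
    Step("calculator", {"expr": "2*3"}, True),   # off-task tool call
    Step("search", {"q": "hours"}, False),        # failed call
]
report = evaluate_trajectory(run, expected_tools={"search"}, task_completed=True)
```

Real evaluation suites add judged scores for reasoning quality and final-answer correctness on top of these mechanical checks; the point is that the trajectory, not just the final response, is the unit being graded.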
*This role is remote and open to candidates located in the United States and Canada only.*
*Please note that we are unable to provide visa sponsorship for this position.*
### Salary, Benefits & Perks:
At Invoca, all new hires in the U.S. receive benefits starting on day one of employment. Our benefits offerings include:
*Please note that benefits for teammates outside the U.S. may vary in accordance with their country's laws and regulations.*
- Flexible Time Off – We encourage a healthy work-life balance. Our flexible paid time off policy allows you to recharge and take time away as needed.
- Paid Holidays – Invoca provides 16 U.S. paid holidays, including a winter break, giving you ample opportunity to refresh and spend time with friends and family.
- Health Benefits – Our healthcare program includes medical, dental, and vision coverage, with multiple plan options so you can choose what works best for you and your family. Fertility assistance is also included.
- Retirement – Invoca offers a 401(k) plan through Fidelity with a company match of up to 4%.
- Stock Options – All employees are invited to share in Invoca's success through stock options.
- Mental Health Program – Well-being support on a broad range of issues is available through our SpringHealth program.
- Paid Family Leave – Up to 6 weeks of 100% paid leave is provided for baby bonding, adoption, and caring for family members.
- Paid Medical Leave – Up to 12 weeks of 100% paid leave is provided for childbirth and medical needs.
- InVacation – As a thank-you to our long-term team members, we offer a bonus after 7 years of service.
- Wellness Subsidy – We provide a subsidy that can be applied toward gym memberships, fitness classes, and more.
- Position Base Range – $180,000 to $273,000, plus bonus and equity
Salary Context
This $180K-$273K range is above the 75th percentile for AI/ML Engineer roles in our dataset (median: $100K across 15,465 roles with salary data).
Role Details
About This Role
AI/ML Engineers build and deploy machine learning models in production. They work across the full ML lifecycle: data pipelines, model training, evaluation, and serving infrastructure. The role has evolved significantly over the past two years. Where ML Engineers once spent most of their time on model architecture, the job now tilts heavily toward inference optimization, cost management, and integrating LLM capabilities into existing systems. Companies want engineers who can ship production systems, and the experimenter-only role is fading fast.
Day-to-day, you're writing training pipelines, debugging data quality issues, setting up evaluation frameworks, and figuring out why your model performs differently in staging than it did on your dev set. The best ML engineers are obsessive about reproducibility and measurement. They instrument everything. They know that a model is only as good as the data feeding it and the infrastructure serving it.
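The "obsessive about reproducibility and measurement" point above has a simple mechanical core: fix your seeds and emit structured records for every evaluation run. A minimal sketch (the toy model, dataset, and field names are invented for illustration):

```python
import json
import random
import time

def evaluate(model, dataset, seed=42):
    """Run a seeded, logged evaluation so results are reproducible."""
    rng = random.Random(seed)                     # fixed seed: same sample every run
    sample = rng.sample(dataset, k=min(3, len(dataset)))
    correct = sum(model(x) == y for x, y in sample)
    record = {
        "seed": seed,
        "n": len(sample),
        "accuracy": correct / len(sample),
        "timestamp": round(time.time()),
    }
    print(json.dumps(record))                     # structured log line for dashboards
    return record

# Toy "model": predicts the parity of the input, which matches the labels exactly.
dataset = [(i, i % 2) for i in range(10)]
report = evaluate(lambda x: x % 2, dataset)
```

With the seed logged alongside the metric, a staging-vs-dev discrepancy like the one described above can be bisected by replaying the exact same evaluation sample in both environments.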
Across the 26,159 AI roles we're tracking, AI/ML Engineer positions make up 91% of the market. At Invoca, this role fits into their broader AI and engineering organization.
Demand for AI/ML Engineers has been strong and consistent. Unlike some AI roles that spike with hype cycles, ML engineering is a foundational need. Every company deploying AI models needs people who can keep them running, and the gap between research prototypes and production systems keeps growing.
What the Work Looks Like
A typical week might include: debugging a data pipeline that's silently dropping 3% of training examples, running A/B tests on a new model version, writing documentation for a feature flag system that lets you roll back model deployments, and reviewing a junior engineer's PR for a new evaluation metric. Meetings tend to be cross-functional since ML touches product, engineering, and data teams.
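The "silently dropping 3% of training examples" bug above is typically caught with a row-count reconciliation guard around each pipeline stage. A minimal sketch, with a deliberately buggy stage and hypothetical names throughout:

```python
def transform(rows):
    """Example pipeline stage that (buggily) drops rows with empty text."""
    return [r for r in rows if r.get("text")]

def run_stage(stage, rows, max_drop_rate=0.01):
    """Wrap a stage with a row-count check so drops are loud, not silent."""
    out = stage(rows)
    drop_rate = 1 - len(out) / len(rows)
    if drop_rate > max_drop_rate:
        raise ValueError(f"{stage.__name__} dropped {drop_rate:.1%} of rows")
    return out

rows = [{"text": "hi"}] * 97 + [{"text": ""}] * 3   # 3% of rows have empty text
try:
    run_stage(transform, rows)
    caught = False
except ValueError:
    caught = True                                    # the 3% drop trips the guard
```

The same pattern generalizes to checksum and schema checks between stages; the threshold is a per-pipeline judgment call.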
Skills Required
Python and PyTorch dominate the requirements. Most roles expect experience with cloud platforms (AWS, GCP, or Azure) and familiarity with ML frameworks like TensorFlow or JAX. RAG (Retrieval-Augmented Generation) has become a top-3 skill requirement as companies integrate LLMs into their products. Docker and Kubernetes show up in about a third of postings, reflecting the production focus of the role.
Beyond the core stack, employers increasingly want experience with experiment tracking tools (MLflow, Weights & Biases), feature stores, and vector databases. Fine-tuning experience is valuable but less common than you'd think from reading Twitter. Most production LLM work is RAG and prompt engineering, not fine-tuning. If you have both, you're in a strong position.
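Experiment tracking, mentioned above, boils down to logging parameters and metrics per run and querying for the best one. This toy tracker shows the pattern that tools like MLflow and Weights & Biases implement; it is not their API, and the chunk-size experiment is invented.

```python
import uuid

class RunTracker:
    """Minimal in-memory stand-in for an experiment tracker."""
    def __init__(self):
        self.runs = []

    def log_run(self, params, metrics):
        # Record one experiment run with a unique id.
        run_id = uuid.uuid4().hex[:8]
        self.runs.append({"id": run_id, "params": params, "metrics": metrics})
        return run_id

    def best(self, metric):
        # Return the run that maximizes the given metric.
        return max(self.runs, key=lambda r: r["metrics"][metric])

# Hypothetical RAG chunking experiment: which chunk size retrieves best?
tracker = RunTracker()
tracker.log_run({"chunk_size": 256}, {"recall_at_5": 0.71})
tracker.log_run({"chunk_size": 512}, {"recall_at_5": 0.78})
best = tracker.best("recall_at_5")
```

Real trackers add persistence, artifact storage, and UI on top, but "params in, metrics out, best run queryable" is the core contract.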
Companies that are serious about AI/ML hiring tend to post specific infrastructure details in the job description: the frameworks they use, their model serving stack, their data pipeline tools. Vague postings that just say 'ML experience required' without specifics are often companies that haven't figured out what they need yet.
Compensation Benchmarks
AI/ML Engineer roles pay a median of $166,983 based on 13,781 positions with disclosed compensation. Senior-level AI roles across all categories have a median of $227,400. This role's midpoint ($226K) sits 36% above the category median. Disclosed range: $180K to $273K.
Across all AI roles, the market median is $184,000. Top-quartile compensation starts at $244,000. The 90th percentile reaches $309,400. For comparison, the highest-paying categories include AI Engineering Manager ($293,500) and AI Architect ($292,900). By seniority level: Entry: $76,880; Mid: $131,300; Senior: $227,400; Director: $244,288; VP: $234,620.
Invoca AI Hiring
Invoca has 1 open AI role right now. They're hiring for AI/ML Engineer roles. Based in Remote, US. Compensation range: $180K - $273K.
Remote Work Context
Remote AI roles pay a median of $156,000 across 1,221 positions. About 7% of all AI roles offer remote work.
Career Path
Common paths into AI/ML Engineer roles include Data Scientist, Software Engineer, Research Engineer.
From here, career progression typically leads toward ML Architect, AI Engineering Manager, Principal ML Engineer.
The fastest path into ML engineering is through software engineering with a self-directed ML education. A CS degree helps, but production engineering skills matter more than academic credentials. Build something that works, deploy it, and measure it. That portfolio project is worth more than a Coursera certificate. For career growth, the fork comes around the senior level: go deep on technical complexity (staff/principal track) or move into managing ML teams.
What to Expect in Interviews
Expect system design questions around ML pipelines: how you'd build a training pipeline for a specific use case, handle data drift, or design A/B testing infrastructure for model deployments. Coding rounds typically involve Python, with emphasis on data manipulation (pandas, numpy) and algorithm implementation. Take-home assignments often ask you to build an end-to-end ML pipeline from raw data to deployed model.
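For the data-drift question above, interviewers usually want the monitoring idea, not a specific library. A minimal sketch using a mean-shift score in units of training standard deviation, as a simplified stand-in for PSI or KS tests; the feature values and alert threshold are invented.

```python
import statistics

def drift_score(train, live):
    """Shift of the live mean, measured in training standard deviations."""
    mu = statistics.mean(train)
    sigma = statistics.stdev(train)
    return abs(statistics.mean(live) - mu) / sigma

train = [10.0, 11.0, 9.0, 10.5, 9.5]     # feature values at training time
live = [14.0, 15.0, 13.5, 14.5, 15.5]    # same feature observed in production

score = drift_score(train, live)
alert = score > 3.0                       # hypothetical alert threshold
```

In an interview, the follow-ups are usually about what to do when the alert fires: retrain, roll back, or investigate an upstream data change.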
AI Hiring Overview
The AI job market has 26,159 open positions tracked in our dataset. By seniority: 2,416 entry-level, 16,247 mid-level, 5,153 senior, and 2,343 leadership roles (Director, VP, C-Level). Remote roles make up 7% of the market (1,863 positions). The remaining 24,200 roles require on-site or hybrid attendance.
The market median for AI roles is $184,000. Top-quartile compensation starts at $244,000. The 90th percentile reaches $309,400. Highest-paying categories: AI Engineering Manager ($293,500 median, 28 roles); AI Architect ($292,900 median, 108 roles); AI Safety ($274,200 median, 19 roles).
The AI Job Market Today
The AI job market spans 26,159 open positions across 15 role categories. The largest categories by volume: AI/ML Engineer (23,752), AI Software Engineer (598), AI Product Manager (594). These three account for the majority of open positions, though smaller categories often have higher per-role compensation because of specialized skill requirements.
The seniority mix tells a story about where AI teams are in their maturity. Entry-level roles (2,416) are outnumbered by mid-level (16,247) and senior (5,153) positions, reflecting that most companies are past the 'build a team from scratch' phase and need experienced engineers who can ship production systems. Leadership roles (Director, VP, C-Level) total 2,343 positions, representing the bottleneck between technical execution and organizational strategy.
Remote work availability sits at 7% of all AI roles (1,863 positions), with 24,200 requiring on-site or hybrid attendance. The remote share has stabilized after the post-pandemic correction. Senior and specialized roles (Research Scientist, ML Architect) are more likely to be remote-eligible than entry-level positions, partly because experienced hires have more negotiating power and partly because these roles require less hands-on mentorship.
AI compensation is structured in clear tiers. The market median sits at $184,000. Top-quartile roles start at $244,000, and the 90th percentile reaches $309,400. These figures include base salary with disclosed compensation. Total compensation (including equity, bonuses, and sign-on) runs 20-40% higher at companies that offer those components.
Category matters for compensation. AI Engineering Manager roles lead at $293,500 median, while Prompt Engineer roles sit at $122,200. The spread between highest and lowest-paying categories reflects the premium on specialized technical skills versus broader analytical roles.
The most in-demand skills across all AI postings: RAG (16,749 postings), AWS (8,932), Rust (7,660), Python (3,815), Azure (2,678), GCP (2,247), Prompt Engineering (1,469), OpenAI (1,269). RAG tops the list, reflecting the shift from traditional ML toward generative AI applications, while cloud platform experience (AWS, GCP, Azure) and Python remain core requirements across categories.