About This Role
Grafana Labs is a remote-first, open-source powerhouse. There are more than 20M users of Grafana, the open source visualization tool, around the globe, monitoring everything from beehives to climate change in the Alps. The instantly recognizable dashboards have been spotted everywhere from a NASA launch and Minecraft HQ to Wimbledon and the Tour de France. Grafana Labs also helps more than 3,000 companies, including Bloomberg, JPMorgan Chase, and eBay, manage their observability strategies with the Grafana LGTM Stack, which can be run fully managed with Grafana Cloud or self-managed with the Grafana Enterprise Stack, both featuring scalable metrics (Grafana Mimir), logs (Grafana Loki), and traces (Grafana Tempo).
We’re scaling fast and staying true to what makes us different: an open-source legacy, a global collaborative culture, and a passion for meaningful work. Our team thrives in an innovation-driven environment where transparency, autonomy, and trust fuel everything we do.
You may not meet every requirement, and that’s okay. If this role excites you, we’d love you to raise your hand for what could be a truly career-defining opportunity.
This is a remote opportunity and we are looking for candidates from the U.S.
The Opportunity
Grafana Labs is looking for a Staff AI Engineer, People Technology to help build the next generation of AI-powered systems that support our global workforce and People programs. This role sits within the People Technology team, partnering closely with People Operations, Talent Acquisition, Enablement, Finance, and Go-to-Market teams to design intelligent workflows and data products that improve how we hire, develop, and support Grafanistas worldwide.
At Grafana Labs, we believe AI should be actually useful. Our approach to AI is grounded in solving real problems and creating practical tools that help teams work better. As part of this philosophy, you’ll apply AI thoughtfully within our People systems, building solutions that improve how we operate and support team members.
As our first People Team AI Engineer, you’ll architect and implement AI-driven solutions that unlock insights from our People data ecosystem, including BigQuery and systems such as Workday, Greenhouse, Docebo, Tangelo, and Salesforce. Your work will focus on creating scalable data pipelines, automation, and AI-powered workflows that drive operational efficiency and deliver actionable insights for People leaders.
This role requires both technical depth and a strong understanding of data governance, privacy, and responsible AI practices. You’ll help establish standards for anonymization, ethical AI usage, and secure handling of sensitive People data while building systems that empower the People team to make data\-informed decisions.
As a Staff-level engineer, you will also set technical direction, mentor other engineers, and influence how AI is adopted and governed across the broader organization, not just within the People team.
This is an opportunity to play a foundational role in shaping how AI supports the employee lifecycle at Grafana Labs, from recruiting and onboarding to development, engagement, and workforce planning.
What You'll Be Doing
AI Architecture & People Data Intelligence
- Design and build AI-powered workflows, agents, and analytics tools that transform People data into actionable insights and reduce manual processes across the People Team
- Architect solutions that leverage BigQuery as the central data layer, integrating platforms such as Workday, Greenhouse, Docebo, Tangelo, Salesforce, and other internal systems
- Establish and maintain CI/CD pipelines, testing frameworks, and observability standards for AI systems and automated workflows
- Define prompt engineering standards, version control practices, and evaluation frameworks for LLM-based systems operating on People data
- Partner with People Analytics to ensure AI systems operate on well\-governed, high\-quality datasets and align with established workforce metrics and data models
- Build internal dashboards, AI assistants, or automated workflows that support reporting, insights, and operational efficiency
- Collaborate with People partners to translate business problems into scalable technical solutions
- Define and track success metrics and measurable business outcomes for AI initiatives, including efficiency gains, time savings, and decision quality improvements
Responsible AI, Security & Data Governance
- Design systems that securely handle sensitive employee data using anonymization, aggregation, and robust access controls
- Establish governance standards for AI models, prompts, and automation workflows, ensuring compliance with internal security, privacy, and regulatory requirements
- Implement monitoring and evaluation frameworks that ensure AI systems operate accurately, fairly, and reliably over time
Cross-Functional Collaboration & Enablement
- Partner closely with Data Engineering and teams across People, IT, Security and Privacy, Finance, and GTM Operations to align data architecture and AI capabilities
- Collaborate with other AI Engineers across the organization to align on architecture patterns, shared tooling, and company-wide AI standards
- Document systems, architecture decisions, and governance frameworks for People AI initiatives
- Help establish internal standards for AI experimentation, deployment, and measurement of impact
- Provide guidance and enablement to People teams adopting AI-driven tools and workflows
- Create training materials, playbooks, and scalable frameworks that enable People team members to confidently build, trigger, and measure AI-assisted workflows independently
Technical Leadership
- Set technical direction for AI architecture within the People technology ecosystem
- Mentor and provide technical guidance to engineers and technical partners working on adjacent systems
- Influence cross\-functional architecture decisions, advocating for responsible, scalable, and well\-governed AI practices across the organization
- Drive alignment between AI initiatives and broader engineering standards, ensuring People systems are not built in isolation
What Makes You a Great Fit
- 7+ years of engineering or data engineering experience, including 2+ years working with AI/ML systems or LLM-based workflows. Demonstrated ability to set technical direction, mentor engineers, and drive organization-wide impact through AI or automation initiatives
- Strong experience working with Google Cloud Platform and BigQuery for data processing and analytics
- Proficiency in Python, SQL, modern APIs, and LLM platforms or frameworks (OpenAI, Gemini, Claude, or similar), including integrating enterprise SaaS systems such as HRIS or CRM platforms
- Experience designing scalable data pipelines, automation frameworks, and analytics platforms, with a strong knowledge of data modeling, ETL/ELT pipelines, and analytics infrastructure
- Experience working with sensitive data environments and implementing responsible AI practices, including anonymization, access controls, governance, and regulatory considerations when applying AI to People data
- Ability to partner with both technical and non\-technical stakeholders to define and deliver solutions, document systems clearly, and translate evolving business needs into scalable technical solutions
Bonus Points
- Experience working with HR technology ecosystems such as Workday, Greenhouse, and learning platforms
- Background in People Analytics or workforce analytics platforms
- Experience building internal AI tools, copilots, or automation agents for business teams
- Familiarity with workflow automation platforms (Zenphi, n8n, Workato, Zapier, Make, etc.)
- Experience working in high\-growth SaaS or distributed organizations
- Contributions to AI governance frameworks, responsible AI standards, or open\-source AI tooling
In the United States, the base compensation range for this role is USD $174,986 - USD $209,983. Actual compensation may vary based on level, experience, and skillset as assessed throughout the interview process. All of our roles include Restricted Stock Units (RSUs), giving every team member ownership in Grafana Labs' success. We believe in shared outcomes; RSUs help us stay aligned and invested as we scale globally.
*Compensation ranges are country specific. If you are applying for this role from a different location than listed above, your recruiter will discuss your specific market’s defined pay range & benefits at the beginning of the process.*
Why You’ll Thrive at Grafana Labs:
- 100% Remote, Global Culture – As a remote-only company, we bring together talent from around the world, united by a culture of collaboration and shared purpose.
- Scaling Organization – Tackle meaningful work in a high\-growth, ever\-evolving environment.
- Transparent Communication – Expect open decision\-making and regular company\-wide updates.
- Innovation-Driven – Autonomy and support to ship great work and try new things.
- Open Source Roots – Built on community\-driven values that shape how we work.
- Empowered Teams – High trust, low ego culture that values outcomes over optics.
- Career Growth Pathways – Defined opportunities to grow and develop your career.
- Approachable Leadership – Transparent execs who are involved, visible, and human.
- Passionate People – Join a team of smart, supportive folks who care deeply about what they do.
- In-Person Onboarding – We want you to thrive from day 1 with your fellow new ‘Grafanistas’ to learn all about what we do and how we do it.
- Balance is Key – We operate a global annual leave policy of 30 days per annum. 3 days of your annual leave entitlement are reserved for Grafana Shutdown Days to allow the team to really disconnect. *We will comply with local legislation where applicable.*
Equal Opportunity Employer: We will recruit, train, compensate and promote regardless of race, religion, color, national origin, gender, disability, age, veteran status, and all the other fascinating characteristics that make us different and unique. We believe that equality and diversity build a strong organization, and we’re working hard to make sure that’s the foundation of our organization as we grow.
*Grafana Labs may utilize AI tools in its recruitment process to assist in matching information provided in CVs to job postings. The recruitment team will continue to review inbound CVs manually to identify alignment with current openings.*
#LI-Remote
*For information about how your personal data is used once you’ve applied to a job, check out our privacy policy.*
Salary Context
This $174K-$209K range is above the 75th percentile for AI/ML Engineer roles in our dataset (median: $100K across 15,465 roles with salary data).
Role Details
About This Role
AI/ML Engineers build and deploy machine learning models in production. They work across the full ML lifecycle: data pipelines, model training, evaluation, and serving infrastructure. The role has evolved significantly over the past two years. Where ML Engineers once spent most of their time on model architecture, the job now tilts heavily toward inference optimization, cost management, and integrating LLM capabilities into existing systems. Companies want engineers who can ship production systems, and the experimenter-only role is fading fast.
Day-to-day, you're writing training pipelines, debugging data quality issues, setting up evaluation frameworks, and figuring out why your model performs differently in staging than it did on your dev set. The best ML engineers are obsessive about reproducibility and measurement. They instrument everything. They know that a model is only as good as the data feeding it and the infrastructure serving it.
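As a rough illustration (not any specific company's tooling), the reproducibility habit described above can start as simply as fingerprinting the exact config and pinning every seed, so a logged metric can always be traced back to the run that produced it. The function and field names here are hypothetical:

```python
import hashlib
import json
import random

def run_experiment(config: dict) -> dict:
    """Toy 'training run': fingerprint the config and pin the seed,
    so the same config always reproduces the same logged result."""
    # Hash the exact config so any metric can be traced back to it.
    config_hash = hashlib.sha256(
        json.dumps(config, sort_keys=True).encode()
    ).hexdigest()[:12]

    random.seed(config["seed"])          # pin all randomness
    metric = round(random.random(), 4)   # stand-in for a real eval metric

    # Log the metric together with the fingerprint that produced it.
    return {"config_hash": config_hash, "metric": metric}

result_a = run_experiment({"seed": 42, "lr": 1e-3})
result_b = run_experiment({"seed": 42, "lr": 1e-3})
assert result_a == result_b  # identical config, identical result
```

Real stacks swap the toy pieces for experiment trackers (MLflow, Weights & Biases) and framework-level seeding, but the discipline is the same: no metric without the exact configuration that produced it.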
Across the 26,159 AI roles we're tracking, AI/ML Engineer positions make up 91% of the market. At Grafana Labs, this role fits into their broader AI and engineering organization.
Demand for AI/ML Engineers has been strong and consistent. Unlike some AI roles that spike with hype cycles, ML engineering is a foundational need. Every company deploying AI models needs people who can keep them running, and the gap between research prototypes and production systems keeps growing.
What the Work Looks Like
A typical week might include: debugging a data pipeline that's silently dropping 3% of training examples, running A/B tests on a new model version, writing documentation for a feature flag system that lets you roll back model deployments, and reviewing a junior engineer's PR for a new evaluation metric. Meetings tend to be cross-functional since ML touches product, engineering, and data teams.
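A hedged sketch of the first task above: the simplest defense against a pipeline silently dropping rows is a stage that counts records in and out and refuses to pass when the loss exceeds a threshold. The function name and 1% default are illustrative, not any particular team's standard:

```python
def check_row_drop(rows_in: int, rows_out: int, max_drop: float = 0.01) -> float:
    """Return the fraction of rows lost in a pipeline stage; raise if it
    exceeds max_drop. Silent drops often hide in join and filter steps."""
    if rows_in <= 0:
        raise ValueError("rows_in must be positive")
    drop = (rows_in - rows_out) / rows_in
    if drop > max_drop:
        raise RuntimeError(
            f"pipeline dropped {drop:.1%} of rows (limit {max_drop:.1%})"
        )
    return drop

# A 3% drop like the one described above would trip the check:
# check_row_drop(100_000, 97_000)  -> RuntimeError
check_row_drop(100_000, 99_500)  # a 0.5% drop passes
```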
Skills Required
Python and PyTorch dominate the requirements. Most roles expect experience with cloud platforms (AWS, GCP, or Azure) and familiarity with ML frameworks like TensorFlow or JAX. RAG (Retrieval-Augmented Generation) has become a top-3 skill requirement as companies integrate LLMs into their products. Docker and Kubernetes show up in about a third of postings, reflecting the production focus of the role.
Beyond the core stack, employers increasingly want experience with experiment tracking tools (MLflow, Weights & Biases), feature stores, and vector databases. Fine-tuning experience is valuable but less common than you'd think from reading Twitter. Most production LLM work is RAG and prompt engineering, not fine-tuning. If you have both, you're in a strong position.
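To make the RAG point concrete, here is a minimal sketch of the pattern: retrieve the most relevant documents, then ground the prompt in them. A toy word-overlap retriever stands in for the embeddings and vector database a production system would use, and the final LLM call is deliberately left out:

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercased word set; a crude stand-in for an embedding."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by word overlap with the query, highest first."""
    q = tokens(query)
    ranked = sorted(docs, key=lambda d: len(q & tokens(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the model's answer in the retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Grafana Mimir stores metrics at scale.",
    "Grafana Loki aggregates logs.",
    "Grafana Tempo handles distributed traces.",
]
prompt = build_prompt("Which component stores metrics?", docs)
# `prompt` would then be sent to an LLM API (OpenAI, Gemini, Claude, ...).
assert "Mimir" in prompt
```

Swapping the word-overlap scorer for embedding similarity against a vector database is what turns this toy into the production pattern the paragraph describes; the prompt-assembly step barely changes.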
Companies that are serious about AI/ML hiring tend to post specific infrastructure details in the job description: the frameworks they use, their model serving stack, their data pipeline tools. Vague postings that just say 'ML experience required' without specifics are often companies that haven't figured out what they need yet.
Compensation Benchmarks
AI/ML Engineer roles pay a median of $166,983 based on 13,781 positions with disclosed compensation. Senior-level AI roles across all categories have a median of $227,400. This role's midpoint ($192K) sits 15% above the category median. Disclosed range: $174K to $209K.
Across all AI roles, the market median is $184,000. Top-quartile compensation starts at $244,000. The 90th percentile reaches $309,400. For comparison, the highest-paying categories include AI Engineering Manager ($293,500) and AI Architect ($292,900). By seniority level: Entry: $76,880; Mid: $131,300; Senior: $227,400; Director: $244,288; VP: $234,620.
Grafana Labs AI Hiring
Grafana Labs has 2 open AI roles right now, spanning AI Software Engineer and AI/ML Engineer, both remote in the US. Compensation range: $185K - $209K.
Remote Work Context
Remote AI roles pay a median of $156,000 across 1,221 positions. About 7% of all AI roles offer remote work.
Career Path
Common paths into AI/ML Engineer roles include Data Scientist, Software Engineer, and Research Engineer.
From here, career progression typically leads toward ML Architect, AI Engineering Manager, or Principal ML Engineer.
The fastest path into ML engineering is through software engineering with a self-directed ML education. A CS degree helps, but production engineering skills matter more than academic credentials. Build something that works, deploy it, and measure it. That portfolio project is worth more than a Coursera certificate. For career growth, the fork comes around the senior level: go deep on technical complexity (staff/principal track) or move into managing ML teams.
What to Expect in Interviews
Expect system design questions around ML pipelines: how you'd build a training pipeline for a specific use case, handle data drift, or design A/B testing infrastructure for model deployments. Coding rounds typically involve Python, with emphasis on data manipulation (pandas, numpy) and algorithm implementation. Take-home assignments often ask you to build an end-to-end ML pipeline from raw data to deployed model.
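The data-drift question above has a standard first-pass answer: compare the distribution of a feature or score at training time against live traffic, for example with a Population Stability Index. This is an illustrative, stdlib-only sketch; the 0.2 threshold is a common rule of thumb, not a formal standard:

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 4) -> float:
    """Population Stability Index between a baseline distribution and
    live traffic. Rule of thumb (assumed here): PSI > 0.2 means drift."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def frac(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            idx = sum(v > e for e in edges)  # which bin v falls into
            counts[idx] += 1
        # A small floor avoids log(0) for empty bins.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train_scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
live_same    = [0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.8]
live_shifted = [0.7, 0.72, 0.75, 0.8, 0.78, 0.79, 0.77, 0.76]

assert psi(train_scores, live_same) < 0.2      # stable traffic
assert psi(train_scores, live_shifted) > 0.2   # drifted traffic
```

In an interview, being able to name the limitations (binning sensitivity, univariate only, no significance test) matters as much as producing the formula.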
AI Hiring Overview
The AI job market has 26,159 open positions tracked in our dataset. By seniority: 2,416 entry-level, 16,247 mid-level, 5,153 senior, and 2,343 leadership roles (Director, VP, C-Level). Remote roles make up 7% of the market (1,863 positions); the remaining 24,296 require on-site or hybrid attendance.
Highest-paying categories: AI Engineering Manager ($293,500 median, 28 roles); AI Architect ($292,900 median, 108 roles); AI Safety ($274,200 median, 19 roles).
The AI Job Market Today
The AI job market spans 26,159 open positions across 15 role categories. The largest categories by volume: AI/ML Engineer (23,752), AI Software Engineer (598), AI Product Manager (594). These three account for the majority of open positions, though smaller categories often have higher per-role compensation because of specialized skill requirements.
The seniority mix tells a story about where AI teams are in their maturity. Entry-level roles (2,416) are outnumbered by mid-level (16,247) and senior (5,153) positions, reflecting that most companies are past the 'build a team from scratch' phase and need experienced engineers who can ship production systems. Leadership roles (Director, VP, C-Level) total 2,343 positions, representing the bottleneck between technical execution and organizational strategy.
Remote work availability sits at 7% of all AI roles (1,863 positions), with the remaining 24,296 requiring on-site or hybrid attendance. The remote share has stabilized after the post-pandemic correction. Senior and specialized roles (Research Scientist, ML Architect) are more likely to be remote-eligible than entry-level positions, partly because experienced hires have more negotiating power and partly because these roles require less hands-on mentorship.
AI compensation is structured in clear tiers. The market median sits at $184,000. Top-quartile roles start at $244,000, and the 90th percentile reaches $309,400. These figures reflect disclosed base salary; total compensation (including equity, bonuses, and sign-on) runs 20-40% higher at companies that offer those components.
Category matters for compensation. AI Engineering Manager roles lead at $293,500 median, while Prompt Engineer roles sit at $122,200. The spread between highest and lowest-paying categories reflects the premium on specialized technical skills versus broader analytical roles.
The most in-demand skills across all AI postings: RAG (16,749 postings), AWS (8,932), Rust (7,660), Python (3,815), Azure (2,678), GCP (2,247), prompt engineering (1,469), OpenAI (1,269). RAG tops the list of explicitly tagged skills, reflecting the shift from traditional ML toward generative AI applications. Cloud platform experience (AWS, GCP, Azure) is the next most common requirement, and Python remains the baseline language across categories even when a posting does not call it out as a distinct skill.