AI/ML Engineer at Create Music Group
About This Role
Established in 2015, Create Music Group is a leading music and entertainment company. It operates as a record label, distribution company, and entertainment network that generates over 15 billion music streams each month on DSPs. Named #2 on the Inc. 5000 list of the fastest-growing companies in America in 2020, the company has grown exponentially by leveraging its owned IP with its media and technology platform. It works with superstar artists, major and independent record labels, and global media brands, and operates a number of companies including Label Engine, one of the largest independent music distribution platforms in the world, with over 75,000 artists and 5,000 label clients; and Flighthouse, a digital entertainment brand focused on Gen Z with more than 300 million followers across social media. Create Music Group is based in Hollywood, CA and has 400 employees worldwide.
Job Summary
We are building CreateOS — a next-generation operating system for modern record labels — and AI is at the center of it. As a Full Stack AI Engineer, you will own the end-to-end design and delivery of AI-powered features that make CreateOS intelligent. This means building the agentic workflows, APIs, and interfaces through which users interact with AI copilots, predictive tools, and automated pipelines — from first concept to production deployment.
This is a high-ownership role for someone who thinks in full systems. You won't hand work off at the API boundary — you'll own the experience from the data layer to the UI. You will work directly with the VP of AI & ML Engineering and sit at the intersection of product, engineering, and applied AI. Your immediate impact will be on three of CMG's highest-priority AI initiatives: M&A catalog valuation tooling, AI-driven A&R discovery surfaces, and marketing automation agents — each directly tied to revenue growth and competitive differentiation.
You'll work in close collaboration with the ML Engineer, who owns the intelligence layer (models, features, evaluation). You own the application and orchestration layer that brings that intelligence to users.
Responsibilities
Agentic AI & LLM Systems (*Primary Ownership*)
- Design, build, and maintain modular AI agents that automate multi-step workflows across CreateOS (contracts, accounting, distribution, metadata)
- Own RAG pipelines, retrieval architectures, and semantic search systems grounded in CreateOS's structured business data (contracts, royalty statements, catalog metadata, etc.)
- Implement guardrails, evaluation frameworks, and human-in-the-loop controls for agentic systems
- Integrate LLMs (OpenAI, Anthropic, or open-source models) into user-facing features across CreateOS modules
Full Stack Development
- Design, build, and maintain scalable, production\-grade applications across the frontend and backend
- Build intuitive, AI-native user experiences including chat interfaces, copilot-style tools, and workflow automation surfaces within CreateOS
- Own features end-to-end — from data modeling and API design to UI implementation and deployment
Platform & Infrastructure
- Deploy and maintain services using containerization and cloud platforms
- Ensure AI\-powered features are reliable, observable, and performant in production
- Collaborate with the ML Engineer to integrate model outputs and feature pipelines cleanly into product surfaces
- Maintain high code quality standards through unit and integration testing, code reviews, and CI/CD pipeline ownership
- Partner with Data Engineering (who owns pipeline infrastructure) to consume and integrate internal data pipelines (dbt, Airflow), third-party API feeds (DSPs, distributors), webhook and event-driven data flows, and ETL outputs into CreateOS product surfaces
Iteration & Product Thinking
- Rapidly prototype and evaluate new AI-powered features based on internal user feedback
- Contribute to technical architecture decisions with a bias toward shipping and learning
- Communicate tradeoffs clearly across engineering, product, and business stakeholders
- Other duties as assigned
Qualifications
- 5+ years of software engineering experience with a track record of shipping production applications
- Hands-on experience building and owning agentic or multi-step AI workflows in production
- Strong proficiency in a modern frontend framework (React, Next.js) and a backend language (Python or Node.js)
- Hands-on experience integrating LLMs or AI APIs into user-facing products
- Familiarity with RAG systems, vector databases, and embedding-based retrieval
- Experience designing and documenting RESTful APIs
- Proficiency in relational databases (PostgreSQL or similar); comfortable writing and optimizing SQL queries
- Solid understanding of Kubernetes, containerization (Docker), and DevOps practices — including CI/CD pipelines, observability, and deployment workflows
- Experience with AI evaluation practices — LLM output quality assessment, hallucination detection, and building eval frameworks for agentic systems
- Proficiency with AI-native development tools (Cursor, Claude Code, or similar)
- Ability to work independently and own features from concept to deployment
Preferred Qualifications
- Previous experience at a startup or as an early/founding engineer
- Portfolio of personal or professional AI projects — RAG systems, LLM agents, copilot-style tools (GitHub links welcome)
- Familiarity with the music industry, rights management, or royalties workflows
- Knowledge of data privacy, compliance, and responsible AI deployment considerations
Tech Stack
- Frontend: React, Next.js, TypeScript, Tailwind CSS
- Backend: Python (FastAPI), Node.js
- AI & Agent Frameworks: LangChain, LangGraph, DeepEval, MCP
- Vector & Retrieval: Pinecone, Weaviate, or similar
- Databases & APIs: PostgreSQL, Snowflake, RESTful API design
- Infrastructure: Docker, Kubernetes, GCP or AWS, Supabase
- Collaboration & Dev Tools: GitHub, Linear, Cursor / Claude Code
Pay Scale
- US Remote: $140,000 to $170,000 USD
- The final compensation within this range will be determined based on the candidate’s location, experience, skills, and overall fit for the role.
Fair Chance Policy
In accordance with the Los Angeles County Fair Chance Ordinance, we will consider employment for qualified applicants with criminal histories. We evaluate candidates based on their qualifications and the nature of the offense in relation to the job for which they are applying.
Role Details
AI/ML Engineers build and deploy machine learning models in production. They work across the full ML lifecycle: data pipelines, model training, evaluation, and serving infrastructure. The role has evolved significantly over the past two years. Where ML Engineers once spent most of their time on model architecture, the job now tilts heavily toward inference optimization, cost management, and integrating LLM capabilities into existing systems. Companies want engineers who can ship production systems, and the experimenter-only role is fading fast.
Day-to-day, you're writing training pipelines, debugging data quality issues, setting up evaluation frameworks, and figuring out why your model performs differently in staging than it did on your dev set. The best ML engineers are obsessive about reproducibility and measurement. They instrument everything. They know that a model is only as good as the data feeding it and the infrastructure serving it.
Across the 26,159 AI roles we're tracking, AI/ML Engineer positions make up 91% of the market. At Create Music Group, this role fits into their broader AI and engineering organization.
Demand for AI/ML Engineers has been strong and consistent. Unlike some AI roles that spike with hype cycles, ML engineering is a foundational need. Every company deploying AI models needs people who can keep them running, and the gap between research prototypes and production systems keeps growing.
What the Work Looks Like
A typical week might include: debugging a data pipeline that's silently dropping 3% of training examples, running A/B tests on a new model version, writing documentation for a feature flag system that lets you roll back model deployments, and reviewing a junior engineer's PR for a new evaluation metric. Meetings tend to be cross-functional since ML touches product, engineering, and data teams.
Skills Required
Python and PyTorch dominate the requirements. Most roles expect experience with cloud platforms (AWS, GCP, or Azure) and familiarity with ML frameworks like TensorFlow or JAX. RAG (Retrieval-Augmented Generation) has become a top-3 skill requirement as companies integrate LLMs into their products. Docker and Kubernetes show up in about a third of postings, reflecting the production focus of the role.
Beyond the core stack, employers increasingly want experience with experiment tracking tools (MLflow, Weights & Biases), feature stores, and vector databases. Fine-tuning experience is valuable but less common than you'd think from reading Twitter. Most production LLM work is RAG and prompt engineering, not fine-tuning. If you have both, you're in a strong position.
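To make "embedding-based retrieval" concrete, here is a toy sketch of the retrieval step at the heart of RAG. The bag-of-words "embedding" and the sample documents are stand-ins invented for this illustration; production systems use learned embeddings from an embedding model and a vector database (Pinecone, Weaviate, or similar):

```python
# Toy sketch of embedding-based retrieval: embed the query and the corpus,
# rank documents by cosine similarity, return the top k. A sparse
# bag-of-words vector stands in for a learned embedding so this runs
# without any dependencies.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in embedding: a sparse bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

# Hypothetical mini-corpus for illustration only.
docs = [
    "royalty statement for Q3 streaming revenue",
    "artist contract advance and recoupment terms",
    "tour merchandise inventory report",
]
print(retrieve("what were the streaming royalties", docs, k=1))
```

In a real RAG pipeline the retrieved passages are then injected into the LLM prompt as grounding context; the ranking step itself is exactly this shape, just with dense vectors and an index instead of a linear scan.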
Companies that are serious about AI/ML hiring tend to post specific infrastructure details in the job description: the frameworks they use, their model serving stack, their data pipeline tools. Vague postings that just say 'ML experience required' without specifics are often companies that haven't figured out what they need yet.
Compensation Benchmarks
AI/ML Engineer roles pay a median of $166,983 based on 13,781 positions with disclosed compensation. Mid-level AI roles across all categories have a median of $131,300.
Across all AI roles, the market median is $184,000. Top-quartile compensation starts at $244,000. The 90th percentile reaches $309,400. For comparison, the highest-paying categories include AI Engineering Manager ($293,500) and AI Architect ($292,900). By seniority level: Entry: $76,880; Mid: $131,300; Senior: $227,400; Director: $244,288; VP: $234,620.
Create Music Group AI Hiring
Create Music Group has 2 open AI roles right now, both AI/ML Engineer positions. Locations span Remote (US) and Los Angeles, CA.
Location Context
AI roles in Los Angeles pay a median of $178,000 across 1,695 tracked positions. That's 3% below the national median.
Career Path
Common paths into AI/ML Engineer roles include Data Scientist, Software Engineer, and Research Engineer.
From here, career progression typically leads toward ML Architect, AI Engineering Manager, or Principal ML Engineer.
The fastest path into ML engineering is through software engineering with a self-directed ML education. A CS degree helps, but production engineering skills matter more than academic credentials. Build something that works, deploy it, and measure it. That portfolio project is worth more than a Coursera certificate. For career growth, the fork comes around the senior level: go deep on technical complexity (staff/principal track) or move into managing ML teams.
What to Expect in Interviews
Expect system design questions around ML pipelines: how you'd build a training pipeline for a specific use case, handle data drift, or design A/B testing infrastructure for model deployments. Coding rounds typically involve Python, with emphasis on data manipulation (pandas, numpy) and algorithm implementation. Take-home assignments often ask you to build an end-to-end ML pipeline from raw data to deployed model.
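For the data-drift part of those system-design questions, one standard answer is the Population Stability Index (PSI): bin a baseline feature distribution, compare bucket frequencies against live data, and alert when the divergence crosses a threshold. The sketch below is a minimal, dependency-free illustration with made-up feature values; the ~0.1 (watch) and ~0.25 (act) thresholds are common rules of thumb, not universal standards:

```python
# Minimal Population Stability Index (PSI) drift check over one numeric
# feature. PSI = sum over buckets of (cur% - base%) * ln(cur% / base%).
import math

def psi(baseline: list[float], current: list[float], bins: int = 10) -> float:
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0  # avoid zero width on constant features

    def bucket_fracs(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            i = min(int((v - lo) / width), bins - 1)
            counts[max(i, 0)] += 1  # clamp values outside the baseline range
        # Smooth empty buckets so the log term stays finite.
        return [max(c / len(values), 1e-6) for c in counts]

    base, cur = bucket_fracs(baseline), bucket_fracs(current)
    return sum((c - b) * math.log(c / b) for b, c in zip(base, cur))

baseline = [i / 100 for i in range(100)]       # roughly uniform on [0, 1)
shifted = [0.5 + i / 200 for i in range(100)]  # mass pushed toward the top
print(f"PSI: {psi(baseline, shifted):.2f}")    # large value signals drift
```

In an interview, the follow-ups usually probe what you monitor (inputs, predictions, labels when they arrive), how often, and what automated action a breach triggers (alert, retrain, roll back).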
AI Hiring Overview
The AI job market has 26,159 open positions tracked in our dataset. By seniority: 2,416 entry-level, 16,247 mid-level, 5,153 senior, and 2,343 leadership roles (Director, VP, C-Level). Remote roles make up 7% of the market (1,863 positions). The remaining 24,200 roles require on-site or hybrid attendance.
The market median for AI roles is $184,000. Top-quartile compensation starts at $244,000. The 90th percentile reaches $309,400. Highest-paying categories: AI Engineering Manager ($293,500 median, 28 roles); AI Architect ($292,900 median, 108 roles); AI Safety ($274,200 median, 19 roles).
The AI Job Market Today
The AI job market spans 26,159 open positions across 15 role categories. The largest categories by volume: AI/ML Engineer (23,752), AI Software Engineer (598), AI Product Manager (594). These three account for the majority of open positions, though smaller categories often have higher per-role compensation because of specialized skill requirements.
The seniority mix tells a story about where AI teams are in their maturity. Entry-level roles (2,416) are outnumbered by mid-level (16,247) and senior (5,153) positions, reflecting that most companies are past the 'build a team from scratch' phase and need experienced engineers who can ship production systems. Leadership roles (Director, VP, C-Level) total 2,343 positions, the layer connecting technical execution to organizational strategy.
Remote work availability sits at 7% of all AI roles (1,863 positions), with 24,200 requiring on-site or hybrid attendance. The remote share has stabilized after the post-pandemic correction. Senior and specialized roles (Research Scientist, ML Architect) are more likely to be remote-eligible than entry-level positions, partly because experienced hires have more negotiating power and partly because these roles require less hands-on mentorship.
AI compensation is structured in clear tiers. The market median sits at $184,000. Top-quartile roles start at $244,000, and the 90th percentile reaches $309,400. These figures reflect base salary for positions with disclosed compensation. Total compensation (including equity, bonuses, and sign-on) runs 20-40% higher at companies that offer those components.
Category matters for compensation. AI Engineering Manager roles lead at $293,500 median, while Prompt Engineer roles sit at $122,200. The spread between highest and lowest-paying categories reflects the premium on specialized technical skills versus broader analytical roles.
The most in-demand skills across all AI postings: RAG (16,749 postings), AWS (8,932 postings), Rust (7,660 postings), Python (3,815 postings), Azure (2,678 postings), GCP (2,247 postings), Prompt Engineering (1,469 postings), OpenAI (1,269 postings). RAG tops the list, and cloud platform experience (AWS, GCP, Azure) is among the most common requirements. The newer entrants to the top skills list (RAG, vector databases, LLM APIs) reflect the shift from traditional ML toward generative AI applications.