About This Role
Job ID
324356
Job Title: Principal AI/ML Engineer (Large Language Model)
Job Category: Science
Time Type: Full time
Minimum Clearance Required to Start: TS/SCI
Employee Type: Regular
Percentage of Travel Required: Up to 10%
Type of Travel: Local
Anticipated Posting End: 8/31/2026
The Opportunity:
The Principal AI/ML Engineer will support the development of AI/ML algorithms across a multitude of disciplines, including object detection/classification, natural language processing, reinforcement learning, and large language models.
Responsibilities:
- Lead and mentor a multidisciplined team consisting of developers and researchers to implement machine learning algorithms to solve a broad set of challenges for our various customers
- Apply Large Language Models (LLMs) to a variety of applications within remote sensing such as tasking collections, identifying gaps in collection plans, analyzing patterns of life, and more.
- Fine-tune foundation models and build adapters for new applications (LLaMA-Factory, PEFT)
- Apply retrieval augmented generation (RAG) techniques to data to populate and query vector databases (e.g. Weaviate)
- Build custom applications with LLM frameworks such as LangChain, DSPy
- Deploy LLM solutions across cloud-based and local resources using Kubernetes (llama.cpp, vLLM, etc.)
- Analyze large multi-domain datasets such as images, text, and/or graph data to identify statistically relevant features and build models that provide analysts with actionable data
- Review relevant publications to understand and apply cutting edge concepts to defense and commercial applications
- Interface with both internal and external leadership to communicate technical status
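The adapter work described above (PEFT, LLaMA-Factory) centers on low-rank adaptation: training small added matrices while the base model's weights stay frozen. As a hedged illustration of that idea only — this is a toy NumPy sketch, not the `peft` library's API — the core of a LoRA-style layer looks like:

```python
import numpy as np

class LoRALinear:
    """Toy low-rank adapter: y = x @ (W + A @ B * scale), with W frozen.

    Only A and B (rank r) would be trained, which is the core idea
    behind PEFT-style adapters. Illustrative sketch, not the peft API.
    """

    def __init__(self, weight: np.ndarray, rank: int = 4, alpha: float = 8.0):
        d_in, d_out = weight.shape
        self.weight = weight                      # frozen base weight
        self.A = np.random.randn(d_in, rank) * 0.01
        self.B = np.zeros((rank, d_out))          # zero init: adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x: np.ndarray) -> np.ndarray:
        # Base projection plus the low-rank update, scaled by alpha/rank.
        return x @ self.weight + (x @ self.A @ self.B) * self.scale

rng = np.random.default_rng(0)
layer = LoRALinear(rng.standard_normal((16, 16)))
x = rng.standard_normal((2, 16))
# With B zero-initialized, the adapted layer matches the frozen layer exactly.
assert np.allclose(layer.forward(x), x @ layer.weight)
```

Because B starts at zero, fine-tuning begins from the base model's behavior and only the small A/B matrices need to be stored per application, which is what makes per-customer adapters cheap to build and swap.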
Qualifications:
*Required:*
- BS in machine learning, computer science, mathematics, or related fields.
- 10+ years of experience, preferably in software development or as a data scientist, with 2+ years of building LLM applications using some of the following:
+ Fine-tuning foundational models
  - Steering techniques (e.g., sparse autoencoders, representation tuning)
  - Building adapters to use foundational models (e.g., PEFT, LLaMA-Factory)
+ Prompt engineering / inference-time techniques (e.g., chain of thought, tree of thoughts)
+ Using retrieval-augmented generation (RAG) techniques to populate and query vector databases (e.g., Weaviate, Pinecone)
+ Using LLM frameworks (e.g., LangChain, DSPy)
+ Using AI APIs (e.g., AWS Bedrock, OpenAI)
+ Using LLM deployment frameworks (e.g., llama.cpp, vLLM, TGI)
+ Developing UIs with React
- Experience leading an interdisciplinary team of researchers and software developers, and working with a program manager to define project scope and schedule and ensure project milestones defined by our customers are met
- Experience with Python and data science / machine learning libraries (e.g., PyTorch, TensorFlow, Keras, OpenCV, NumPy, Pandas, Polars, scikit-learn, etc.)
- Active TS/SCI U.S. Government Security Clearance
*Desired:*
- MS or PhD in machine learning, computer science, mathematics, or related fields.
- Experience leading an interdisciplinary team of researchers and software developers
- Experience with any of the following AI/ML domains:
+ Large Language Models and experience identifying ways to incorporate them into new areas and applications
+ Applying Transformer-based architectures to domains outside of Natural Language Processing (NLP), such as computer vision
+ Object detection algorithms such as YOLO and Faster R-CNN
+ Natural Language Processing algorithms such as BERT
+ Generative Adversarial Networks and Variational Autoencoders
+ Reinforcement learning and familiarity with Gymnasium (formerly Gym), RLlib, and Stable Baselines
+ Applying clustering algorithms and/or deep neural networks to real-life problems
+ Implementing tracking and pattern-of-life algorithms
- Experience with Machine Learning libraries and frameworks such as HuggingFace and LangChain
- Experience with Computer Vision libraries such as OpenCV, Nerfstudio, FiftyOne, etc.
- Experience with Linux
- Familiarity with using AWS cloud computing resources such as EC2, S3, Lambda, etc.
- Experience with any of the following additional languages: Java, C++, Rust, Go, and/or C#
- Experience implementing algorithms on the GPU in Python or C++ using CUDA and other CUDA libraries
- Experience in application deployment, virtualization, and containerization (e.g. Podman, Docker, Kubernetes, Rancher)
- Experience working with various Remote Sensing datasets (e.g. EO/OPIR/SAR images, passive RF, etc.)
- Experience shaping and writing proposals
What You Can Expect:
A culture of integrity.
At CACI, we place character and innovation at the center of everything we do. As a valued team member, you’ll be part of a high\-performing group dedicated to our customer’s missions and driven by a higher purpose – to ensure the safety of our nation.
An environment of trust.
CACI values the unique contributions that every employee brings to our company and our customers, every day. You'll have the autonomy to take the time you need through a unique flexible time off benefit and have access to robust learning resources to make your ambitions a reality.
A focus on continuous growth.
Together, we will advance our nation's most critical missions, build on our lengthy track record of business success, and find opportunities to break new ground — in your career and in our legacy.
Pay Range:
There are a host of factors that can influence final salary including, but not limited to, geographic location, Federal Government contract labor categories and contract wage rates, relevant prior work experience, specific skills and competencies, education, and certifications. Our employees value the flexibility at CACI that allows them to balance quality work and their personal lives. We offer competitive compensation, benefits, and learning and development opportunities. Our broad and competitive mix of benefits options is designed to support and protect employees and their families. At CACI, you will receive comprehensive benefits such as healthcare, wellness, financial, retirement, family support, continuing education, and time off benefits.
Since this position can be worked in more than one location, the range shown is the national average for the position.
The proposed salary range for this position is:
$114,600 - $252,100

*CACI is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, age, national origin, disability, status as a protected veteran, or any other protected characteristic.*
Role Details
LLM Engineers specialize in building applications powered by large language models. They design RAG systems, fine-tune models, build agent frameworks, and optimize inference pipelines for cost and latency. This is the role that didn't exist three years ago and now has thousands of open positions.
The scope is broad. You might be building a customer support chatbot that needs to pull from a knowledge base of 50,000 documents, or designing an agent that can navigate a company's internal tools to complete multi-step tasks. The common thread is taking a foundation model and making it do something useful, reliably, at scale, without bankrupting the company on API costs.
Across the 26,159 AI roles we're tracking, LLM Engineer positions make up less than 1% of the market. At CACI International, this role fits into their broader AI and engineering organization.
LLM Engineer is one of the fastest-growing AI job titles. Every company building AI-powered products needs people who understand the full stack: from embedding models to vector stores to inference optimization. The supply of experienced LLM engineers is thin because the field is so new, which keeps compensation high and demand strong.
What the Work Looks Like
A typical week includes: building and testing RAG pipelines (chunking strategies, embedding models, retrieval evaluation), debugging why the agent took a wrong action path, optimizing inference costs (caching, batching, model selection), and working with the product team on new LLM-powered features. You'll context-switch between deep technical work and cross-functional collaboration.
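The cost-optimization work mentioned above often starts with something as unglamorous as memoizing repeated prompts before they ever reach a paid API. A minimal sketch of exact-match LRU caching — the `call_fn` here is a stand-in for a real provider call, not any specific API:

```python
import hashlib
from collections import OrderedDict

class PromptCache:
    """LRU cache keyed on a hash of (model, prompt).

    Exact-match caching only helps for repeated prompts; production
    systems often layer semantic (embedding-based) caching on top.
    """

    def __init__(self, max_entries: int = 1024):
        self.max_entries = max_entries
        self._store: OrderedDict[str, str] = OrderedDict()

    def _key(self, model: str, prompt: str) -> str:
        return hashlib.sha256(f"{model}\x00{prompt}".encode()).hexdigest()

    def get_or_call(self, model: str, prompt: str, call_fn) -> str:
        key = self._key(model, prompt)
        if key in self._store:
            self._store.move_to_end(key)          # mark as recently used
            return self._store[key]
        result = call_fn(prompt)                  # placeholder for the real API call
        self._store[key] = result
        if len(self._store) > self.max_entries:
            self._store.popitem(last=False)       # evict least recently used
        return result

calls = []
def fake_llm(prompt: str) -> str:              # hypothetical stand-in model
    calls.append(prompt)
    return prompt.upper()

cache = PromptCache()
cache.get_or_call("model-x", "hello", fake_llm)
cache.get_or_call("model-x", "hello", fake_llm)   # second call served from cache
assert len(calls) == 1
```

Hashing on both model name and prompt matters: the same prompt sent to two different models must not share a cache entry.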
Skills Required
RAG and vector databases are the most common requirements. Expect to work with LangChain or LlamaIndex, embedding models, and at least one vector store (Pinecone, Weaviate, Chroma). Python is non-negotiable. Understanding the cost/latency/quality tradeoffs between different model providers and architectures is what separates senior from junior engineers.
Fine-tuning experience is valuable for specific use cases but most production LLM work is RAG-based. Agent frameworks (LangGraph, CrewAI, custom orchestration) are increasingly important as companies move beyond simple chat interfaces. Evaluation and observability tools (LangSmith, Arize, custom dashboards) are essential for production deployments.
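The retrieval step those paragraphs describe reduces to ranking chunks by similarity to the query. As a toy sketch of that mechanic only — real systems use dense embedding models and a vector store like Weaviate or Pinecone, not the bag-of-words stand-in here:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real RAG uses dense embedding models."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by similarity to the query: the retrieval half of RAG."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "Weaviate is a vector database for similarity search",
    "Chunking splits documents into retrievable passages",
    "Kubernetes schedules containers across a cluster",
]
top = retrieve("vector database similarity", chunks, k=1)
assert top[0].startswith("Weaviate")
```

Everything else in a RAG pipeline — chunking strategy, embedding model choice, retrieval evaluation — exists to make this ranking step return the right passages before they're stuffed into the prompt.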
Compensation Benchmarks
LLM Engineer roles pay a median of $285,250 based on 4 positions with disclosed compensation. Senior-level AI roles across all categories have a median of $227,400. This role's midpoint ($183K) sits 36% below the category median. Disclosed range: $114K to $252K.
Across all AI roles, the market median is $184,000. Top-quartile compensation starts at $244,000. The 90th percentile reaches $309,400. For comparison, the highest-paying categories include AI Engineering Manager ($293,500) and AI Architect ($292,900). By seniority level: Entry: $76,880; Mid: $131,300; Senior: $227,400; Director: $244,288; VP: $234,620.
CACI International AI Hiring
CACI International has 26 open AI roles right now. They're hiring across AI/ML Engineer, Prompt Engineer, AI Software Engineer, and LLM Engineer roles. Positions span Remote (US), Ashburn, VA, and Denver, CO. Compensation range: $79K - $252K.
Location Context
Across all AI roles, 7% (1,863 positions) offer remote work, while 24,200 require on-site attendance. Top AI hiring metros: Los Angeles (1,695 roles, $178,000 median); New York (1,670 roles, $200,000 median); San Francisco (1,059 roles, $244,000 median).
Career Path
Common paths into LLM Engineer roles include Software Engineer, ML Engineer, Data Engineer.
From here, career progression typically leads toward AI Architect, Principal Engineer, AI Engineering Manager.
The fastest path is through software engineering. If you can build production systems and you understand LLM capabilities and limitations, you're already qualified for most roles. Build a portfolio project that demonstrates RAG implementation, evaluation, and cost optimization. Open-source contributions to LLM frameworks are strong signals to hiring managers.
What to Expect in Interviews
Technical screens cover RAG architecture design, embedding model selection, chunking strategies, and retrieval evaluation. Expect questions about cost optimization: how you'd reduce inference costs by 50% without degrading quality. System design rounds often present scenarios like 'design a customer support chatbot that can access 100K documents' and evaluate your understanding of the full stack from embedding to serving.
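The chunking-strategy questions above usually start from the fixed-size-with-overlap baseline, then probe when it breaks down. A minimal sketch of that baseline (word-level splitting here; production systems often chunk by tokens or semantic boundaries instead):

```python
def chunk_with_overlap(words: list[str], size: int = 200, overlap: int = 50) -> list[list[str]]:
    """Fixed-size word chunks with overlap, a common baseline strategy.

    Overlap keeps sentences that straddle a chunk boundary retrievable
    from either neighboring chunk.
    """
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    # Stop once the remaining tail is fully covered by the previous chunk.
    return [words[i:i + size] for i in range(0, max(len(words) - overlap, 1), step)]

doc = [f"w{i}" for i in range(450)]
chunks = chunk_with_overlap(doc, size=200, overlap=50)
# Chunks start at words 0, 150, and 300, each 50 words of overlap deep.
assert chunks[0][-1] == "w199" and chunks[1][0] == "w150"
```

In an interview, the follow-up is typically why overlap exists (boundary-straddling answers), and what fixed-size chunking loses (tables, headings, and semantic units that a structure-aware splitter would preserve).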
When evaluating opportunities: Look for roles that specify the production stack, mention specific use cases, and talk about cost optimization. Companies that understand LLM engineering will mention evaluation methodology, latency requirements, and scale targets. Vague 'build AI features' postings often mean they haven't figured out their architecture yet.
AI Hiring Overview
The AI job market has 26,159 open positions tracked in our dataset. By seniority: 2,416 entry-level, 16,247 mid-level, 5,153 senior, and 2,343 leadership roles (Director, VP, C-Level). Remote roles make up 7% of the market (1,863 positions). The remaining 24,200 roles require on-site or hybrid attendance.
The market median for AI roles is $184,000. Top-quartile compensation starts at $244,000. The 90th percentile reaches $309,400. Highest-paying categories: AI Engineering Manager ($293,500 median, 28 roles); AI Architect ($292,900 median, 108 roles); AI Safety ($274,200 median, 19 roles).
The AI Job Market Today
The AI job market spans 26,159 open positions across 15 role categories. The largest categories by volume: AI/ML Engineer (23,752), AI Software Engineer (598), AI Product Manager (594). These three account for the majority of open positions, though smaller categories often have higher per-role compensation because of specialized skill requirements.
The seniority mix tells a story about where AI teams are in their maturity. Entry-level roles (2,416) are outnumbered by mid-level (16,247) and senior (5,153) positions, reflecting that most companies are past the 'build a team from scratch' phase and need experienced engineers who can ship production systems. Leadership roles (Director, VP, C-Level) total 2,343 positions, representing the bottleneck between technical execution and organizational strategy.
Remote work availability sits at 7% of all AI roles (1,863 positions), with 24,200 requiring on-site or hybrid attendance. The remote share has stabilized after the post-pandemic correction. Senior and specialized roles (Research Scientist, ML Architect) are more likely to be remote-eligible than entry-level positions, partly because experienced hires have more negotiating power and partly because these roles require less hands-on mentorship.
AI compensation is structured in clear tiers. The market median sits at $184,000. Top-quartile roles start at $244,000, and the 90th percentile reaches $309,400. These figures include base salary with disclosed compensation. Total compensation (including equity, bonuses, and sign-on) runs 20-40% higher at companies that offer those components.
Category matters for compensation. AI Engineering Manager roles lead at $293,500 median, while Prompt Engineer roles sit at $122,200. The spread between highest and lowest-paying categories reflects the premium on specialized technical skills versus broader analytical roles.
The most in-demand skills across all AI postings: RAG (16,749 postings), AWS (8,932), Rust (7,660), Python (3,815), Azure (2,678), GCP (2,247), Prompt Engineering (1,469), OpenAI (1,269). Python remains a baseline expectation across categories, even where postings don't name it explicitly. Cloud platform experience (AWS, GCP, Azure) is among the most common requirements. The newer entrants to the top skills list (RAG, vector databases, LLM APIs) reflect the shift from traditional ML toward generative AI applications.