About This Role
Job Description
We are seeking a highly skilled and experienced Lead AI Engineer to join our dynamic team. The ideal candidate will excel at identifying and articulating complex business problems, and will develop innovative, scalable, and robust AI/ML solutions to address these challenges. Responsibilities will include designing, building, and deploying enterprise-grade AI systems, specifically focused on:
- Agentic AI solutions to automate operational processes (e.g., interpreting trouble tickets, performing basic troubleshooting, interacting with online portals, data entry).
- Retrieval-Augmented Generation (RAG) and ColBERTv2 pipelines for parsing, indexing, and querying enterprise documents to facilitate answers related to process guidelines, product knowledge, and training materials.
- Function calling solutions leveraging Large Language Models (LLMs) to automate and perform precise actions in enterprise workflows.
- Developing and applying reinforcement learning strategies to optimize and automate decision-making processes within enterprise operations.
This role requires hands-on expertise with model fine-tuning, training pipelines, post-training optimization techniques (e.g., model distillation), classification models, and integrating AI systems within complex enterprise environments.
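The function-calling pattern named above has a simple core: the LLM emits a structured tool call, and application code validates and dispatches it. A minimal sketch, using only the standard library, with a canned JSON string standing in for a real model's output and a hypothetical two-tool registry (`lookup_ticket`, `close_ticket`) standing in for real enterprise actions:

```python
import json

# Hypothetical tool registry; in production these would wrap real enterprise APIs
TOOLS = {
    "lookup_ticket": lambda ticket_id: {"id": ticket_id, "status": "open"},
    "close_ticket": lambda ticket_id: {"id": ticket_id, "status": "closed"},
}

def dispatch(llm_message: str):
    """Execute the function call an LLM emitted as structured JSON."""
    call = json.loads(llm_message)
    fn = TOOLS[call["name"]]          # KeyError here means an unknown tool
    return fn(**call["arguments"])    # arguments must match the tool's signature

# Canned string standing in for a real model's tool-call output
llm_output = '{"name": "lookup_ticket", "arguments": {"ticket_id": "T-1042"}}'
print(dispatch(llm_output))  # {'id': 'T-1042', 'status': 'open'}
```

Real frameworks add schema validation and error handling around this loop, but the dispatch step itself stays this small.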
Duties and Responsibilities
- Develop and implement AI solutions leveraging fine-tuned Large Language Models (e.g., OpenAI models, LLaMA, Mistral).
- Design, develop, and optimize Retrieval-Augmented Generation (RAG) pipelines using advanced vector databases (e.g., FAISS, Pinecone, Milvus).
- Build and enhance agentic AI systems utilizing frameworks like LangChain, AutoGPT, or similar automation frameworks.
- Deploy scalable ColBERTv2 architectures for semantic retrieval and classification.
- Create robust pre-processing and post-processing pipelines to enhance model performance, accuracy, and interpretability.
- Collaborate closely with cross\-functional teams, including product managers, business stakeholders, data scientists, and software engineers.
- Implement best practices in model distillation, quantization, and optimization for deployment in production environments.
- Ensure compliance with enterprise-grade security, privacy standards, and data governance practices.
- Provide leadership and mentorship to team members, supporting their technical development and career growth through coaching, training, and performance feedback.
- Drive timely and successful completion of AI/ML projects by setting clear milestones, tracking progress, removing blockers, and aligning resources.
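The RAG duties above rest on a dense-retrieval core: embed documents, index the vectors, and return the nearest neighbors for a query embedding. A minimal sketch, using numpy cosine similarity as a stand-in for a production vector store like FAISS or Pinecone, with toy random vectors standing in for a real embedding model:

```python
import numpy as np

def build_index(doc_embeddings: np.ndarray) -> np.ndarray:
    # Normalize rows so a dot product equals cosine similarity
    norms = np.linalg.norm(doc_embeddings, axis=1, keepdims=True)
    return doc_embeddings / norms

def retrieve(index: np.ndarray, query_embedding: np.ndarray, k: int = 2):
    q = query_embedding / np.linalg.norm(query_embedding)
    scores = index @ q
    top = np.argsort(scores)[::-1][:k]   # highest-similarity documents first
    return top.tolist(), scores[top].tolist()

# Toy corpus: 4 "documents" as random 8-dim vectors (stand-in for real embeddings)
rng = np.random.default_rng(0)
docs = rng.normal(size=(4, 8))
index = build_index(docs)

# A query identical to document 2 should rank document 2 first with similarity 1.0
ids, scores = retrieve(index, docs[2], k=2)
print(ids[0])  # 2
```

In a real pipeline the retrieved documents are then passed to the LLM as context; the index lookup itself is what FAISS accelerates at scale.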
Required Qualifications
- Bachelor's degree in Computer Science, Data Science, Machine Learning, AI, or related fields; advanced degree strongly preferred.
- 5+ years of proven experience developing and deploying production-grade AI/ML systems.
- Strong programming skills in Python, familiarity with libraries/frameworks such as PyTorch, TensorFlow, Hugging Face, and LangChain.
- Demonstrated expertise with LLM fine-tuning (e.g., LoRA, PEFT), distillation, and model optimization.
- Practical experience implementing RAG pipelines with embedding technologies and vector stores (e.g., FAISS, Pinecone).
- Proven track record building agentic AI systems capable of interacting with multiple enterprise applications and platforms.
- Solid understanding of NLP techniques, Transformer architectures, semantic search, and document retrieval technologies (e.g., ColBERT).
- Hands-on experience with reinforcement learning techniques, including designing, training, and deploying reinforcement learning models.
Preferred Qualifications
- Master's or Ph.D. in Computer Science, Machine Learning, Artificial Intelligence, or related field.
- Familiarity with cloud-based AI services (e.g., AWS SageMaker, Azure ML, Google Vertex AI).
- Experience with containerization (Docker, Kubernetes) and deployment pipelines (CI/CD).
- Knowledge of advanced AI frameworks and model inference engines such as Triton Inference Server, TensorRT, and ONNX.
- Familiarity with model monitoring, observability tools, and techniques to ensure long\-term reliability and performance.
- Strong communication and interpersonal skills with the ability to clearly articulate complex technical solutions to non\-technical stakeholders.
- Experience in regulated industries or environments requiring strict compliance and data governance standards.
About Granite
Granite delivers advanced communications and technology solutions to businesses and government agencies throughout the United States and Canada. We provide exceptional customized service with an emphasis on reliability and outstanding customer support, and our customers include over 85 of the Fortune 100. Granite has over $1.85 billion in revenue and more than 2,100 employees, and is headquartered in Quincy, MA. Our mission is to be the leading telecommunications company wherever we offer services, and to provide an environment where the value of each individual is recognized and where each person has the opportunity to further their growth and achieve success.
Granite has been recognized by the Boston Business Journal as one of the "Healthiest Companies" in Massachusetts for the past 15 consecutive years.
Our offices have onsite, fully equipped, state-of-the-art gyms for employees at zero cost.
Granite's philanthropy is unparalleled, with over $300 million donated to organizations such as the Dana-Farber Cancer Institute, the ALS Foundation, and the Alzheimer's Association.
We have been consistently rated a "Fastest Growing Company" by Inc. Magazine.
Granite was named to Forbes List of America's Best Employers 2022, 2023 and 2024.
Granite was recently named one of Forbes' Best Employers for Diversity.
Our company's insurance package includes health, dental, vision, life, disability coverage, 401K retirement with company match, childcare benefits, tuition assistance, and more.
If you are a highly motivated individual who wants to grow your career with a fast-paced and progressive company, Granite has countless opportunities for you.
EOE/M/F/Vets/Disabled
Role Details
AI/ML Engineers build and deploy machine learning models in production. They work across the full ML lifecycle: data pipelines, model training, evaluation, and serving infrastructure. The role has evolved significantly over the past two years. Where ML Engineers once spent most of their time on model architecture, the job now tilts heavily toward inference optimization, cost management, and integrating LLM capabilities into existing systems. Companies want engineers who can ship production systems, and the experimenter-only role is fading fast.
Day-to-day, you're writing training pipelines, debugging data quality issues, setting up evaluation frameworks, and figuring out why your model performs differently in staging than it did on your dev set. The best ML engineers are obsessive about reproducibility and measurement. They instrument everything. They know that a model is only as good as the data feeding it and the infrastructure serving it.
Across the 26,159 AI roles we're tracking, AI/ML Engineer positions make up 91% of the market. At Granite Telecommunications, this role fits into their broader AI and engineering organization.
Demand for AI/ML Engineers has been strong and consistent. Unlike some AI roles that spike with hype cycles, ML engineering is a foundational need. Every company deploying AI models needs people who can keep them running, and the gap between research prototypes and production systems keeps growing.
What the Work Looks Like
A typical week might include: debugging a data pipeline that's silently dropping 3% of training examples, running A/B tests on a new model version, writing documentation for a feature flag system that lets you roll back model deployments, and reviewing a junior engineer's PR for a new evaluation metric. Meetings tend to be cross-functional since ML touches product, engineering, and data teams.
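The "silently dropping 3% of training examples" scenario above is exactly the kind of bug that row-count reconciliation catches cheaply: record the count at each pipeline stage and alert when a stage sheds more than its expected share. A minimal sketch, with hypothetical stage names and a made-up 1% drop tolerance:

```python
# Hypothetical record counts captured at each pipeline stage
stage_counts = {"raw": 100_000, "parsed": 99_800, "filtered": 96_900}

def check_drop_rate(counts: dict, max_drop: float = 0.01) -> list:
    """Flag any stage that drops more than max_drop of its input rows."""
    alerts = []
    stages = list(counts.items())
    for (prev_name, prev_n), (name, n) in zip(stages, stages[1:]):
        drop = (prev_n - n) / prev_n
        if drop > max_drop:
            alerts.append((name, round(drop, 4)))
    return alerts

# 'parsed' drops 0.2% (within tolerance); 'filtered' drops ~2.9% and is flagged
print(check_drop_rate(stage_counts))  # [('filtered', 0.0291)]
```

The same pattern generalizes to null-rate and schema checks; the point is to assert on counts at every boundary instead of trusting the pipeline end to end.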
Skills Required
Python and PyTorch dominate the requirements. Most roles expect experience with cloud platforms (AWS, GCP, or Azure) and familiarity with ML frameworks like TensorFlow or JAX. RAG (Retrieval-Augmented Generation) has become a top-3 skill requirement as companies integrate LLMs into their products. Docker and Kubernetes show up in about a third of postings, reflecting the production focus of the role.
Beyond the core stack, employers increasingly want experience with experiment tracking tools (MLflow, Weights & Biases), feature stores, and vector databases. Fine-tuning experience is valuable but less common than you'd think from reading Twitter. Most production LLM work is RAG and prompt engineering, not fine-tuning. If you have both, you're in a strong position.
Companies that are serious about AI/ML hiring tend to post specific infrastructure details in the job description: the frameworks they use, their model serving stack, their data pipeline tools. Vague postings that just say 'ML experience required' without specifics are often companies that haven't figured out what they need yet.
Compensation Benchmarks
AI/ML Engineer roles pay a median of $166,983 based on 13,781 positions with disclosed compensation. Senior-level AI roles across all categories have a median of $227,400.
Across all AI roles, the market median is $184,000. Top-quartile compensation starts at $244,000. The 90th percentile reaches $309,400. For comparison, the highest-paying categories include AI Engineering Manager ($293,500) and AI Architect ($292,900). By seniority level: Entry: $76,880; Mid: $131,300; Senior: $227,400; Director: $244,288; VP: $234,620.
Granite Telecommunications AI Hiring
Granite Telecommunications has one open AI role right now: an AI/ML Engineer position based in Quincy, MA.
Location Context
Across all AI roles, 7% (1,863 positions) offer remote work, while 24,200 require on-site or hybrid attendance. Top AI hiring metros: Los Angeles (1,695 roles, $178,000 median); New York (1,670 roles, $200,000 median); San Francisco (1,059 roles, $244,000 median).
Career Path
Common paths into AI/ML Engineer roles include Data Scientist, Software Engineer, and Research Engineer.
From here, career progression typically leads toward ML Architect, AI Engineering Manager, or Principal ML Engineer.
The fastest path into ML engineering is through software engineering with a self-directed ML education. A CS degree helps, but production engineering skills matter more than academic credentials. Build something that works, deploy it, and measure it. That portfolio project is worth more than a Coursera certificate. For career growth, the fork comes around the senior level: go deep on technical complexity (staff/principal track) or move into managing ML teams.
What to Expect in Interviews
Expect system design questions around ML pipelines: how you'd build a training pipeline for a specific use case, handle data drift, or design A/B testing infrastructure for model deployments. Coding rounds typically involve Python, with emphasis on data manipulation (pandas, numpy) and algorithm implementation. Take-home assignments often ask you to build an end-to-end ML pipeline from raw data to deployed model.
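The data-drift question above is commonly answered with the Population Stability Index (PSI): bin a reference sample, compare bin proportions against live data, and flag when the divergence crosses a threshold (roughly 0.1 for "watch", 0.25 for "significant" by common convention). A minimal numpy sketch, with synthetic normal data standing in for real feature distributions:

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference and a live sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor tiny proportions to avoid log(0)
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(42)
train = rng.normal(0.0, 1.0, 10_000)     # reference (training) distribution
same = rng.normal(0.0, 1.0, 10_000)      # fresh sample, no drift
shifted = rng.normal(0.5, 1.0, 10_000)   # mean-shifted sample, drifted

print(round(psi(train, same), 3))     # near zero: distributions match
print(round(psi(train, shifted), 3))  # elevated: drift detected
```

Being able to sketch something like this on a whiteboard, and to discuss where the thresholds come from, covers a large share of the drift-monitoring discussion.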
AI Hiring Overview
The AI job market has 26,159 open positions tracked in our dataset. By seniority: 2,416 entry-level, 16,247 mid-level, 5,153 senior, and 2,343 leadership roles (Director, VP, C-Level). Remote roles make up 7% of the market (1,863 positions). The remaining 24,200 roles require on-site or hybrid attendance.
The market median for AI roles is $184,000. Top-quartile compensation starts at $244,000. The 90th percentile reaches $309,400. Highest-paying categories: AI Engineering Manager ($293,500 median, 28 roles); AI Architect ($292,900 median, 108 roles); AI Safety ($274,200 median, 19 roles).
The AI Job Market Today
The AI job market spans 26,159 open positions across 15 role categories. The largest categories by volume: AI/ML Engineer (23,752), AI Software Engineer (598), AI Product Manager (594). These three account for the majority of open positions, though smaller categories often have higher per-role compensation because of specialized skill requirements.
The seniority mix tells a story about where AI teams are in their maturity. Entry-level roles (2,416) are outnumbered by mid-level (16,247) and senior (5,153) positions, reflecting that most companies are past the 'build a team from scratch' phase and need experienced engineers who can ship production systems. Leadership roles (Director, VP, C-Level) total 2,343 positions, representing the bottleneck between technical execution and organizational strategy.
Remote work availability sits at 7% of all AI roles (1,863 positions), with 24,200 requiring on-site or hybrid attendance. The remote share has stabilized after the post-pandemic correction. Senior and specialized roles (Research Scientist, ML Architect) are more likely to be remote-eligible than entry-level positions, partly because experienced hires have more negotiating power and partly because these roles require less hands-on mentorship.
AI compensation is structured in clear tiers. The market median sits at $184,000. Top-quartile roles start at $244,000, and the 90th percentile reaches $309,400. These figures reflect base salary for roles with disclosed compensation. Total compensation (including equity, bonuses, and sign-on) runs 20-40% higher at companies that offer those components.
Category matters for compensation. AI Engineering Manager roles lead at $293,500 median, while Prompt Engineer roles sit at $122,200. The spread between highest and lowest-paying categories reflects the premium on specialized technical skills versus broader analytical roles.
The most in-demand skills across all AI postings: RAG (16,749 postings), AWS (8,932 postings), Rust (7,660 postings), Python (3,815 postings), Azure (2,678 postings), GCP (2,247 postings), Prompt Engineering (1,469 postings), OpenAI (1,269 postings). Python remains foundational across role descriptions regardless of category, and cloud platform experience (AWS, GCP, Azure) is a near-universal requirement. The newer entrants to the top skills list (RAG, vector databases, LLM APIs) reflect the shift from traditional ML toward generative AI applications.