About This Role
Machine Learning Engineer: Perception
Bedrock is bringing autonomy to the construction industry! We're a group of autonomous vehicle industry veterans passionate about bringing the benefits of automation to areas of construction currently underserved by the market.
We are looking for engineers with expertise in shipping production 3D perception systems at scale. Successful candidates have architected systems, trained models from scratch, understand the full stack (clustering, detection, classification, and tracking), and have shipped at scale. We use both camera- and lidar-based approaches, so knowledge of either or both is key. Models are only part of the system: you understand data and have good intuition about why models fail. You know how to evaluate corner cases, manage or build data pipelines, decide when (and when not) to use autolabels, and have a strong grasp of the statistical properties of these systems.
What You’ll Do:
- Design Early Fusion Architectures: Develop and train state-of-the-art models (e.g., BEV-based transformers) that fuse raw lidar and camera data for object detection and semantic segmentation.
- Tackle "Messy" Physics: Build perception systems robust enough to handle dynamic occlusion (seeing the robot’s own arm/bucket), particulates (dust, snow, rain), and high-vibration conditions.
- Deploy to the Edge: Optimize models for inference on embedded hardware. You will debug system-level issues, such as sensor calibration drift and latency bottlenecks.
- Collaborate Across Teams: Work with other teams to create state-of-the-art representations for downstream use cases.
What we're looking for:
- Production ML Experience: Experience taking deep learning models from research to real-world production using PyTorch.
- 3D Geometry & Calibration: You have a deep understanding of SE(3) transformations, homogeneous coordinates, and intrinsic/extrinsic sensor calibration. You understand the math required to project a 3D lidar point onto a 2D image pixel accurately.
- Early Fusion Expertise: Practical experience with architectures that fuse modalities at the feature level (e.g., BEVFusion, TransFuser, PointPainting) rather than just fusing final bounding boxes.
- SOTA Object Detection: Experience with modern transformer-based architectures (e.g., DETR, PETR) and their temporal extensions (e.g., PETRv2, StreamPETR).
- Systems Fluency: You are an expert in Python, but you are also comfortable reading and writing systems code in C++ or Rust. You understand memory management and real-time constraints.
- Data Intuition: You understand that in robotics, better data alignment often beats a bigger model. You are willing to dig into the data infrastructure to ensure ground truth quality.
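To make the 3D geometry requirement above concrete, here is a minimal NumPy sketch of projecting lidar points into an image via an SE(3) extrinsic and a pinhole intrinsic matrix. This is an illustrative example only, not Bedrock's actual pipeline; the function name, frame conventions (camera z-axis pointing forward), and zero-distortion assumption are all assumptions for the sketch.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_from_lidar, K):
    """Project Nx3 lidar points into pixel coordinates.

    points_lidar:     (N, 3) points in the lidar frame.
    T_cam_from_lidar: (4, 4) SE(3) extrinsic mapping lidar -> camera frame.
    K:                (3, 3) pinhole camera intrinsic matrix.
    Returns (M, 2) pixel coordinates for in-front points, plus the
    boolean mask (length N) of points with positive camera-frame depth.
    """
    # Lift to homogeneous coordinates: (N, 4).
    ones = np.ones((points_lidar.shape[0], 1))
    pts_h = np.hstack([points_lidar, ones])
    # Rigid SE(3) transform into the camera frame; drop the homogeneous row.
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]
    # Keep only points in front of the image plane (positive depth).
    in_front = pts_cam[:, 2] > 0
    # Perspective projection: u = fx*x/z + cx, v = fy*y/z + cy.
    uv_h = (K @ pts_cam[in_front].T).T
    pixels = uv_h[:, :2] / uv_h[:, 2:3]
    return pixels, in_front
```

A production version would also handle lens distortion, clip to image bounds, and account for rolling-shutter and motion compensation, but the core math is the homogeneous transform plus perspective division shown here.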
Ways to stand out:
- Bonus: Voxel/Occupancy Experience: Experience working with occupancy grids, NeRFs, or voxel-based representations for terrain mapping.
- Bonus: Top-Tier Research: Published work in conferences such as ICRA, IROS, CVPR, ECCV, ICCV, CoRL, or RSS.