Building Tesla's Factory Observability Video Player
At Tesla, I worked on the factory ML observability platform, building operator-facing features that made machine learning outputs usable in production.
I owned the development of many features, including intake and configuration flows, reactive dashboards, charts, and tables that let factory teams monitor and act on ML detections.
I also led the implementation of the platform's first reusable video player, a foundational component that enabled video-based ML workflows for safety, security, and factory operations.
Note: Due to NDA restrictions, I cannot discuss other projects I worked on during my time at Tesla. This case study focuses on the video player project, which I have permission to share.
Project Highlight
I built a production-ready, reusable video viewer for Tesla's factory ML observability platform, designed for operators working with sensitive, high-throughput video data.
The component was built using Video.js alongside an internal rendering library (Drawing Studio) and was responsible for the following (illustrative sketches of the overlay sync and step-through controls appear after this list):
- Reliable playback of sensitive factory footage under strict security and performance constraints
- Pixel-accurate, frame-synchronized ML bounding boxes, overlaid in real time
- Operator-first interactions (pause, step-through, fullscreen) aligned with real factory investigation workflows
- Reuse across safety, security, and operational contexts, avoiding one-off implementations
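
As a rough illustration of the overlay synchronization, the sketch below pairs a Video.js player with a plain canvas; Drawing Studio is Tesla-internal, so the canvas stands in for it here. The `Detection` shape, the `attachOverlay` helper, the normalized box coordinates, and the ~30 fps frame window are all assumptions for illustration, not details of the production implementation.

```typescript
import videojs from 'video.js';

// Hypothetical detection shape: normalized [0, 1] coordinates relative to the frame.
interface Detection {
  timestamp: number;                                    // seconds into the clip
  label: string;                                        // e.g. "person", "forklift"
  box: { x: number; y: number; w: number; h: number };  // top-left corner + size
}

// Illustrative helper (not the internal API): draws boxes on a canvas layered over the player.
function attachOverlay(playerId: string, detections: Detection[]): void {
  const player = videojs(playerId);
  const canvas = document.createElement('canvas');
  canvas.style.cssText = 'position:absolute;inset:0;pointer-events:none;';
  player.el().appendChild(canvas);
  const ctx = canvas.getContext('2d')!;

  const draw = (): void => {
    // Match the canvas to the rendered player size so boxes stay pixel-accurate.
    canvas.width = player.currentWidth();
    canvas.height = player.currentHeight();
    ctx.clearRect(0, 0, canvas.width, canvas.height);

    const t = player.currentTime() ?? 0;
    // Keep only detections within roughly one frame of the current time (assumed ~30 fps).
    const visible = detections.filter((d) => Math.abs(d.timestamp - t) < 1 / 30);

    ctx.strokeStyle = 'red';
    ctx.fillStyle = 'red';
    ctx.lineWidth = 2;
    ctx.font = '14px sans-serif';
    for (const d of visible) {
      const x = d.box.x * canvas.width;
      const y = d.box.y * canvas.height;
      ctx.strokeRect(x, y, d.box.w * canvas.width, d.box.h * canvas.height);
      ctx.fillText(d.label, x, y - 4);
    }
  };

  // requestVideoFrameCallback (Chromium/Safari) fires once per presented frame,
  // keeping overlays frame-synchronized; fall back to coarser timeupdate events.
  const video = player.el().querySelector('video') as any;
  if (video?.requestVideoFrameCallback) {
    const loop = (): void => {
      draw();
      video.requestVideoFrameCallback(loop);
    };
    video.requestVideoFrameCallback(loop);
  } else {
    player.on('timeupdate', draw);
  }
}
```

Driving the redraw from per-frame callbacks, where the browser supports them, keeps boxes pinned to the presented frame rather than the much coarser timeupdate events, which matters for fast-moving footage.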
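The operator-first controls could be wired up along these lines. Video.js exposes pause(), currentTime(), and requestFullscreen(), but it has no built-in frame-step API, so stepping is approximated by nudging currentTime by one frame duration. The `stepFrames` helper, the 30 fps constant, the keyboard bindings, and the `factory-player` element id are hypothetical.

```typescript
import videojs from 'video.js';

const FPS = 30; // assumed footage frame rate; the real value would come from clip metadata

// Illustrative helper: Video.js has no native frame stepping, so nudge currentTime instead.
function stepFrames(playerId: string, frames: number): void {
  const player = videojs(playerId);
  player.pause(); // stepping only makes sense while paused
  const t = player.currentTime() ?? 0;
  player.currentTime(Math.max(0, t + frames / FPS));
}

// Example wiring (hypothetical bindings): arrow keys step one frame, "f" enters fullscreen.
document.addEventListener('keydown', (e: KeyboardEvent) => {
  if (e.key === 'ArrowRight') stepFrames('factory-player', 1);
  if (e.key === 'ArrowLeft') stepFrames('factory-player', -1);
  if (e.key === 'f') videojs('factory-player').requestFullscreen();
});
```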
This video viewer became the primary interface for visualizing ML detections in the platform and was later promoted into our internal component library, standardizing video-based ML visualization across teams.
Technical Challenges
Below are the key technical challenges from building a production-ready video player for factory operations.