I built a real-time Guided Photo Capture prototype that runs object detection entirely in the browser to detect vehicle framing and provide live visual feedback.
As the sole engineer on the project, I was tasked with creating the experience from the ground up. The goal was loosely defined: enable users to take guided photos of vehicle damage at adjuster-defined angles, with visual feedback when the car is correctly framed within a boundary.
To support fast iteration and feedback, I used Storybook to prototype and share interactive states. After exploring multiple approaches, I landed on a browser-based solution that streams webcam video to a <canvas> and runs object detection on each frame inside a requestAnimationFrame loop.
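For illustration, here is a minimal sketch of what that loop can look like. The write-up doesn't name a specific model or library, so the use of TensorFlow.js's pretrained coco-ssd detector, the 0.6 confidence threshold, and the rear-facing camera constraint are all assumptions, not the actual implementation:

```ts
// A minimal sketch of the capture-and-detect loop. Assumptions (not from
// the original write-up): TensorFlow.js with the pretrained coco-ssd model,
// a 0.6 confidence threshold, and a rear-facing camera.
import * as cocoSsd from '@tensorflow-models/coco-ssd';
import '@tensorflow/tfjs';

async function startGuidedCapture(
  video: HTMLVideoElement,
  canvas: HTMLCanvasElement,
): Promise<void> {
  const ctx = canvas.getContext('2d')!;
  const model = await cocoSsd.load();

  // Stream the webcam into an off-screen <video> element.
  video.srcObject = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: 'environment' },
  });
  await video.play();

  const tick = async (): Promise<void> => {
    // Mirror the current video frame onto the visible <canvas>.
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);

    // Detect on the canvas so the bounding box is in canvas coordinates.
    const predictions = await model.detect(canvas);
    const car = predictions.find((p) => p.class === 'car' && p.score > 0.6);
    if (car) {
      const [x, y, w, h] = car.bbox;
      ctx.strokeStyle = 'lime';
      ctx.lineWidth = 4;
      ctx.strokeRect(x, y, w, h);
    }

    // Awaiting detection before scheduling the next frame keeps the loop
    // from piling up inference calls on slower devices.
    requestAnimationFrame(tick);
  };
  requestAnimationFrame(tick);
}
```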
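The "correctly framed" signal itself can be as simple as comparing the detected bounding box against the adjuster-defined boundary. This check is purely illustrative; the boundary data shape, tolerance, and fill-ratio values here are hypothetical:

```ts
// Illustrative framing check: the car counts as "correctly framed" when its
// bounding box sits inside the target boundary (within a tolerance) and
// fills most of it. Tolerance and fill-ratio values are hypothetical.
interface Box { x: number; y: number; width: number; height: number; }

function isFramed(car: Box, boundary: Box, tolerance = 0.1): boolean {
  const padX = boundary.width * tolerance;
  const padY = boundary.height * tolerance;
  return (
    car.x >= boundary.x - padX &&
    car.y >= boundary.y - padY &&
    car.x + car.width <= boundary.x + boundary.width + padX &&
    car.y + car.height <= boundary.y + boundary.height + padY &&
    // Require the car to fill most of the boundary, not just fit inside it.
    car.width * car.height >= 0.5 * boundary.width * boundary.height
  );
}
```

In the prototype UI, a boolean like this can drive the overlay state (for example, switching the boundary outline from red to green) so users get immediate confirmation that the shot is lined up.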
This was one of the most technically and creatively rewarding challenges I’ve worked on — blending real-time video processing, visual UX cues, and client-side machine learning in the browser.