Summary
Join Turing, a rapidly growing AI company, as an Applied Research Engineer specializing in video understanding and machine learning. You will enhance video datasets used in state-of-the-art AI models, collaborating with ML teams and QA leads. Responsibilities include designing annotation pipelines, contributing to model experiments, and improving labeling workflows. The ideal candidate possesses 3–5 years of experience in ML/AI and a strong understanding of video modeling techniques. Success involves creating scalable annotation pipelines, well-documented workflows, and contributions to model fine-tuning. Turing offers a collaborative work culture, competitive compensation, flexible hours, and a full-time remote opportunity.
Requirements
- 3–5 years of experience in computer vision, applied ML, or data-centric AI, especially involving video data or temporal modeling
- Working knowledge of video modeling techniques and benchmarks for tasks like tracking, segmentation, or action recognition
- Some hands-on experience with fine-tuning or evaluating small ML models using tools like PyTorch, TensorFlow, or Hugging Face
- Familiarity with video labeling tools (e.g., CVAT, VOTT, Labelbox, SuperAnnotate) or experience working with custom platforms
- Strong understanding of the ML data lifecycle, including synthetic data, annotation QA, and human-in-the-loop systems
- Ability to interpret relevant research papers and translate them into actionable annotation or modeling improvements
- Excellent communication skills: comfortable presenting ideas, gathering requirements, and collaborating across teams
Responsibilities
- Co-develop clear, structured guidelines for video annotation tasks, including:
  - Frame-level and segment-level classification
  - Temporal localization and gesture/action recognition
  - Multi-object tracking across frames and scenes
  - Human-object and multi-agent interaction labeling
- Work with ML stakeholders to align labeling specs with downstream use cases such as action classification, event detection, and object tracking
- Identify labeling gaps affecting model performance on public benchmarks (e.g., MVBench, LongVideoBench, Video-MME, AVA-Bench)
- Recommend guideline updates based on error analysis and metric improvements
- Support small-scale model fine-tuning efforts (e.g., vision transformers or temporal CNNs) under the guidance of senior engineers
- Run basic evaluation experiments to assess annotation quality and model impact
- Collaborate with QA leads to build gold sets, spot-check protocols, and error rubrics that improve consistency and reduce ambiguity
- Help close the loop on annotation feedback through structured escalation and review
- Act as a technical bridge between ML engineers, annotators, and QA reviewers
- Create clear documentation and communicate updates across technical and non-technical stakeholders
What Success Looks Like
- Scalable, reproducible annotation pipelines that improve model accuracy on benchmark tasks
- Well-documented workflows that reduce ambiguity and inter-annotator error
- Hands-on contributions to model fine-tuning and evaluation under mentorship
- Constructive collaboration across ML, data, and QA teams that accelerates iteration and deployment readiness
Benefits
- Amazing work culture (Super collaborative & supportive work environment; 5 days a week)
- Awesome colleagues (surround yourself with top talent from Meta, Google, LinkedIn, etc., as well as people with deep startup experience)
- Competitive compensation
- Flexible working hours
- Full-time remote opportunity