Voice AI Evaluation Lead

Deepgram

📍Remote - Worldwide

Summary

Join Deepgram as a Voice AI Evaluation Lead and take ownership of benchmarking and evaluating the performance of our voice AI models. You will build and maintain robust benchmarking pipelines, author clear model cards, and partner with Research, Data, Engineering, QA, Product, and Marketing teams to shape how models are measured, released, and improved. The role centers on designing impactful evaluations, aligning metrics with product goals, and translating data into actionable insights. The ideal candidate enjoys translating model outputs into human insights, is motivated by precision and transparency, and thrives on cross-functional collaboration. This position is central to how Deepgram defines quality and success in speech AI.

Requirements

  • Experience designing, executing, and iterating on evaluation pipelines for ML models
  • Proficiency in Python and data analysis libraries
  • Ability to develop automated evaluation systems—whether scripting analysis workflows or integrating with broader ML pipelines
  • Comfort working with large-scale datasets and crafting meaningful performance metrics and visualizations
  • Experience using LLMs or internal tooling to accelerate analysis, QA, or pipeline prototyping
  • Strong communication skills—especially when translating raw data into structured insights, documentation, or dashboards
  • Proven success working cross-functionally with research, engineering, QA, and product teams

Responsibilities

  • Build and maintain scalable benchmarking pipelines for model evaluations across STT, TTS, and voice agent use cases
  • Run regular evaluations of production and pre-release models on curated, real-world datasets
  • Partner with Research, Data, and Engineering teams to develop new evaluation methodologies and integrate them into our development cycle
  • Design, define, and refine evaluation metrics that reflect product experience, quality, and performance goals (one common example, word error rate, is sketched after this list)
  • Author comprehensive model cards and internal reports outlining model strengths, weaknesses, and recommended use cases
  • Work closely with Data Labeling Ops to source, annotate, and prepare evaluation datasets
  • Collaborate with QA Engineers to integrate model tests into CI/CD and release workflows
  • Support Marketing and Product with credible, data-backed comparisons to competitors
  • Track market developments and maintain awareness of competitive benchmarks
  • Support GTM teams with benchmarking best practices for prospects and customers
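
For context on the kind of metric work this role involves, the snippet below is a minimal, self-contained sketch of word error rate (WER), the standard metric for STT evaluation. It is illustrative only: the function name, example strings, and stdlib-only implementation are assumptions for clarity and do not reflect Deepgram's internal pipelines or tooling.

```python
# Minimal sketch of word error rate (WER), a standard STT evaluation metric.
# Illustrative only; names and examples are not Deepgram internals.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()

    # Word-level Levenshtein distance via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


if __name__ == "__main__":
    # One substitution out of four reference words -> WER = 0.25
    print(word_error_rate("turn the lights off", "turn the light off"))
```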

Preferred Qualifications

  • Prior experience evaluating speech-related models, especially STT or TTS systems
  • Familiarity with model documentation formats (e.g., model cards, eval reports, dashboards)
  • Understanding of competitive benchmarking and landscape analysis for voice AI products
  • Experience contributing to or owning internal evaluation infrastructure—whether integrating with existing systems or proposing new ones
  • A background in startup environments, applied research, or AI product deployment
