Model QA Engineer (Contract role)
Sanas
Quality Assurance
Bengaluru, Karnataka, India
Posted on Jun 6, 2025
Sanas is revolutionizing the way we communicate with the world’s first real-time algorithm, designed to modulate accents, eliminate background noise, and enhance speech clarity. Pioneered by seasoned startup founders with a proven track record of creating and steering multiple unicorn companies, our groundbreaking GDP-shifting technology sets a gold standard.
Sanas is a 200-strong team, established in 2020. In this short span, we’ve successfully secured over $100 million in funding. Our innovations have been supported by the industry’s leading investors, including Insight Partners, Google Ventures, Quadrille Capital, General Catalyst, Quiet Capital, and others. Our reputation is further solidified by collaborations with numerous Fortune 100 companies. With Sanas, you’re not just adopting a product; you’re investing in the future of communication.
We are seeking a Model QA Engineer with hands-on experience in evaluating real-time speech and language models. This position requires a deep understanding of machine learning model behavior, strong attention to detail, and a passion for building high-quality AI systems. In this role, you’ll act as the first line of defense for ensuring the performance, stability, and reliability of our deployed AI systems. You’ll work closely with Machine Learning Researchers, Data Analysts, and Product Managers to detect regressions, surface edge cases, and deliver actionable insights that help shape future models.
Key Responsibilities:
- Conduct end-to-end evaluation of ML models through structured listening tests, regression analysis, and failure case identification
- Analyze customer call recordings to replicate issues and validate model behavior across different environments and configurations
- Participate in internal dogfooding, model A/B testing, and cross-version comparisons with clear documentation of findings
- Collaborate with research teams to debug, interpret, and report model output anomalies
- Curate diverse, representative datasets for recurring QA evaluations and benchmarks
- Maintain quality and consistency in evaluation documentation, ensuring traceability and reproducibility of results
- Track ongoing model performance and provide timely reports highlighting areas of improvement or risk
- Assist in QA automation efforts to streamline model evaluation workflows over time
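The regression analysis described above can be sketched in Python. This is a minimal, hypothetical illustration, not Sanas's internal tooling: the clip IDs, score values, and drop threshold are all invented for the example, and the scores stand in for any per-clip quality metric (e.g., MOS).

```python
# Hypothetical sketch: flag per-clip quality-score regressions between two
# model versions. Clip IDs, scores, and the threshold are illustrative only.

def find_regressions(baseline, candidate, threshold=0.3):
    """Return clips whose score dropped by more than `threshold`.

    baseline / candidate: dicts mapping clip_id -> quality score (e.g., MOS).
    """
    regressions = {}
    for clip_id, base_score in baseline.items():
        new_score = candidate.get(clip_id)
        # Flag only clips scored by both versions whose score fell noticeably.
        if new_score is not None and base_score - new_score > threshold:
            regressions[clip_id] = round(base_score - new_score, 2)
    return regressions

baseline = {"call_001": 4.2, "call_002": 3.9, "call_003": 4.5}
candidate = {"call_001": 4.3, "call_002": 3.4, "call_003": 4.4}
print(find_regressions(baseline, candidate))  # {'call_002': 0.5}
```

In practice the per-clip drop list would feed the documented cross-version comparison reports rather than stand alone.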
Must-have qualifications:
- 2+ years of experience in a QA, Model QA, Speech Evaluation, or ML testing role
- Familiarity with speech/audio quality concepts (e.g., fidelity, intelligibility, latency, robustness)
- Experience working with audio listening tools (e.g., Audacity, Praat) and evaluation metrics (e.g., PESQ, STOI, MOS)
- Excellent communication skills and the ability to write structured, actionable feedback for technical teams
- Proven ability to identify subtle performance issues in AI model outputs
- Comfortable working with large datasets (CSV, JSON) and using tools like Excel, Google Sheets, or Python for analysis
- Highly detail-oriented with a proactive and investigative mindset
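As a flavor of the CSV/Python analysis mentioned above, here is a small standard-library sketch that summarizes quality scores per test condition. The column names, conditions, and score values are hypothetical, invented purely for illustration.

```python
# Hypothetical sketch: compute mean quality score per test condition from a
# CSV export of listening-test results. All names and values are made up.
import csv
import io
import statistics

CSV_DATA = """condition,clip_id,mos
quiet,clip_01,4.4
quiet,clip_02,4.1
noisy,clip_01,3.2
noisy,clip_02,3.6
"""

def summarize_by_condition(csv_text):
    """Return the mean score for each condition found in the CSV text."""
    scores = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        scores.setdefault(row["condition"], []).append(float(row["mos"]))
    return {cond: round(statistics.mean(vals), 2) for cond, vals in scores.items()}

print(summarize_by_condition(CSV_DATA))  # {'quiet': 4.25, 'noisy': 3.4}
```

The same per-condition breakdown is what makes subtle, environment-specific performance issues visible in recurring benchmarks.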
Preferred qualifications:
- Background in linguistics, phonetics, audio engineering, or speech science
- Experience working in cross-functional ML product teams
- Exposure to ASR, TTS, or real-time audio systems
- Familiarity with tools like Git, Jira, Notion, or Confluence
- Understanding of ML model deployment cycles and CI/CD pipelines
- Prior experience with annotation platforms or feedback loop systems for ML models
Joining us means contributing to the world’s first real-time speech understanding platform, revolutionizing contact centers and enterprises alike.
Our technology empowers agents, transforms customer experiences, and drives measurable growth. But this is just the beginning. You'll be part of a team exploring the vast potential of an increasingly sonic future.