Research Engineer
Mercor
Location: San Francisco
Employment Type: Full-time
Location Type: On-site
Department: Engineering
Compensation: $130K – $500K • Offers Equity
About Mercor
Mercor is at the intersection of labor markets and AI research. We partner with leading AI labs and enterprises to provide the human intelligence essential to AI development.
Our vast talent network trains frontier AI models in the same way teachers teach students: by sharing knowledge, experience, and context that can't be captured in code alone. Today, more than 30,000 experts in our network collectively earn over $1.5 million a day.
Mercor is creating a new category of work where expertise powers AI advancement. Achieving this requires an ambitious, fast-paced, and deeply committed team. You’ll work alongside researchers, operators, and AI companies at the forefront of shaping the systems that are redefining society.
Mercor is a profitable Series C company valued at $10 billion. We work in person five days a week in our new San Francisco headquarters.
About the Role
As a Research Engineer at Mercor, you’ll work at the intersection of engineering and applied AI research. You’ll contribute directly to post-training and RLVR (reinforcement learning with verifiable rewards), synthetic data generation, and large-scale evaluation workflows that meaningfully impact frontier language models.
Your work will be used to train large language models to master tool use, agentic behavior, and real-world reasoning in production environments. You’ll shape rewards, run post-training experiments, and build scalable systems that improve model performance. You’ll help design and evaluate datasets, create scalable data augmentation pipelines, and build rubrics and evaluators that push the boundaries of what LLMs can learn.
What You’ll Do
Work on post-training and RLVR pipelines to understand how datasets, rewards, and training strategies impact model performance.
Design and run reward-shaping experiments and algorithmic improvements (e.g., GRPO, DAPO) to improve LLM tool use, agentic behavior, and real-world reasoning.
Quantify data usability, quality, and performance uplift on key benchmarks.
Build and maintain data generation and augmentation pipelines that scale with training needs.
Create and refine rubrics, evaluators, and scoring frameworks that guide training and evaluation decisions.
Build and operate LLM evaluation systems, benchmarks, and metrics at scale.
Collaborate closely with AI researchers, applied AI teams, and experts producing training data.
Operate in a fast-paced, experimental research environment with rapid iteration cycles and high ownership.
What We’re Looking For
Strong applied research background, with a focus on post-training and/or model evaluation.
Strong coding proficiency and hands-on experience working with machine learning models.
Strong understanding of data structures, algorithms, backend systems, and core engineering fundamentals.
Familiarity with APIs, SQL/NoSQL databases, and cloud platforms.
Ability to reason deeply about model behavior, experimental results, and data quality.
Excitement to work in person in San Francisco five days a week (with optional remote Saturdays) and to thrive in a high-intensity, high-ownership environment.
Nice To Have
Industry experience on a post-training team (highest priority).
Publications at top-tier conferences (NeurIPS, ICML, ACL).
Experience training models or evaluating model performance.
Experience in synthetic data generation, LLM evaluations, or RL-style workflows.
Work samples, artifacts, or code repositories demonstrating relevant skills.
Benefits
Generous equity grant vesting over 4 years
A $20K relocation bonus (if moving to the Bay Area)
A $10K housing bonus (if you live within 0.5 miles of our office)
A $1K monthly stipend for meals
Free Equinox membership
Health insurance
Compensation Range: $130K – $500K