Contract Medical Safety Expert (Pharma MD) - AI-Driven Adverse Event Detection
Hippocratic AI
Location: United States
Employment Type: Contract
Location Type: Remote
Department: Engineering
About Us
Hippocratic AI has developed a safety-focused Large Language Model (LLM) for healthcare, resulting in the industry's only autonomous, patient-facing clinical agents. We are delivering abundance in healthcare for the first time by bringing deep clinical expertise to every human. No other technology has the potential for this level of global impact on health. Come join the most capitalized healthcare AI company, with the most deployed customers and the broadest platform of applications. Our highly mission-oriented team, together with innovative partners like Cleveland Clinic, Baylor Scott & White, Northwestern, WellSpan, HCA, and Ochsner, is building the most transformative company in the history of healthcare.
Why Join Our Team
Innovative mission: We are creating a safe, healthcare-focused LLM that can transform health outcomes on a global scale.
Visionary leadership: Hippocratic AI was co-founded by CEO Munjal Shah alongside physicians, hospital administrators, healthcare professionals, and AI researchers from top institutions, including El Camino Health, Johns Hopkins, Washington University in St. Louis, Stanford, Google, Meta, Microsoft, and NVIDIA.
Strategic investors: We have raised a total of $278 million in funding, backed by top investors such as Andreessen Horowitz, General Catalyst, Kleiner Perkins, NVIDIA’s NVentures, Premji Invest, SV Angel, and six health systems.
Team and expertise: We are working with top experts in healthcare and artificial intelligence to ensure the safety and efficacy of our technology.
For more information, visit www.HippocraticAI.com.
Overview
We are seeking an experienced Pharmaceutical Safety Medical Doctor with a proven background in health technology to join our cross-functional team as a Contract Medical Safety Expert. This role is central to developing and validating an AI-powered adverse event (AE) detection engine that can accurately identify and categorize FDA-reportable events from unstructured and structured medical data.
The contractor will play a key role in helping Hippocratic AI build a robust system that meets regulatory standards and clinical expectations, working at the intersection of pharmacovigilance, clinical safety, and cutting-edge artificial intelligence.
Key Responsibilities
Subject Matter Expertise: Provide expert clinical and regulatory guidance on pharmacovigilance, especially around US FDA requirements for AE reporting and MedWatch compliance.
AI Model Development Collaboration:
Partner with the AI/ML and Clinical Informatics teams to design annotation schemas and decision logic related to AE identification.
Define clinically validated criteria to distinguish reportable vs. non-reportable events.
Contribute to the design and refinement of classification taxonomies for adverse events, seriousness, and expectedness.
Quality & Safety Oversight:
Review model outputs and participate in error analysis of AI-generated AE classifications.
Help define “safe fail” protocols for uncertain or ambiguous cases.
Data Strategy Support:
Inform data collection strategies and guide labeling/annotation efforts for adverse event corpora.
Collaborate on test sets that simulate real-world AE reporting cases across various therapeutic areas.
Regulatory Alignment:
Ensure compliance with FDA pharmacovigilance standards (21 CFR Part 314, ICH E2A/E2D, etc.).
Liaise with internal quality and regulatory stakeholders to audit model readiness for future deployment.
Stakeholder Communication:
Summarize clinical safety perspectives for cross-functional leadership and potential external partners.
Contribute to documentation supporting FDA or regulatory body interactions, if required.
Required Qualifications
MD (Doctor of Medicine) required; board certification strongly preferred.
5+ years of experience in pharmacovigilance or drug safety within a pharma, biotech, or CRO environment.
At least 2 years of experience collaborating with or working at a health tech or AI/ML-focused company, ideally involving clinical data or regulatory technology products.
Strong familiarity with FDA AE reporting requirements, MedDRA, and ICH E2B guidelines.
Clinical expertise in therapeutic areas such as primary care, internal medicine, or neurology is a plus.
Experience working with or reviewing outputs from NLP/LLM or AI-based systems is highly desirable.
Exceptional attention to detail, with a strong sense of clinical judgment and ethical responsibility.
Nice to Have
Experience with AI model validation or human-in-the-loop review systems.
Familiarity with MedDRA, WHODrug, Argus Safety, or other pharmacovigilance and AE signal detection platforms.
Prior involvement in developing medical ontologies or rule-based decision frameworks.
Compensation
Competitive hourly or monthly consulting rate (commensurate with experience).
Flexible schedule, with an expected 10–20 hours per week depending on project phase.