Open to opportunities

Dr. Kalid Asrat

@drkalidasrat

Physician-scientist and clinical AI safety specialist evaluating LLM outputs for healthcare accuracy and safety.

Zimbabwe

What I'm looking for

I’m looking to lead rigorous, rubric-based LLM evaluation for healthcare—partnering with teams that prioritize safety alignment, HITL validation, and clinical governance so AI guidance is accurate, evidence-based, and reliably communicated.

I’m a physician-scientist and AI systems architect with 20+ years of clinical and public health leadership experience. I specialize in structured evaluation of large language model (LLM) outputs in healthcare, focusing on clinical reasoning validation, safety auditing, fact-checking against authoritative sources, and rubric-based model benchmarking.

In my work, I actively look for unsafe reasoning patterns, hallucinated medical claims, and incomplete patient guidance in AI-generated responses. I translate clinical-quality thinking into repeatable review methods that teams can use to measure performance, detect risk, and improve reliability.

As the Founder & AI Systems Architect of OmniHealthAI, I designed 100+ healthcare AI agents, including clinical reasoning assistants and evaluation tools. I developed standardized evaluation rubrics for diagnostic reasoning, triage decision-making, and patient communication—using authoritative evidence sources such as CDC, WHO, NICE, and peer-reviewed literature.

Experience

Work history, roles, and key accomplishments


Founder & AI Systems Architect

OmniHealthAI

Designed 100+ healthcare AI agents and evaluation tools, performing structured audits of LLM outputs for factual accuracy, clinical reasoning integrity, and safety alignment. Built standardized rubric-based benchmarking and risk-based review workflows, identifying hallucinated or unsafe recommendations and correcting them using authoritative evidence sources (e.g., CDC, WHO, NICE, peer-reviewed literature).

Education

Degrees, certifications, and relevant coursework


Not specified

Doctor of Medicine (MD), Medicine

Holds an MD; institution and years not provided. This medical training underpins clinical reasoning audits and healthcare LLM evaluation work.


Not specified

Master of Public Health (MPH), Public Health

Holds an MPH; institution and years not provided. This public health background supports evidence verification and risk-focused evaluation of healthcare AI outputs.

