
Medical-O1-Reasoning


Description

Medical-O1-Reasoning is an environment for evaluating medical reasoning capabilities. It contains 90,120 medical questions from the HuatuoGPT-o1 dataset covering clinical diagnosis, treatment, pathophysiology, procedures, pharmacology, and genetics in both English and Chinese.

Capabilities

  • Complex medical reasoning
  • Clinical diagnosis and treatment planning
  • Medical knowledge across multiple specialties
  • Bilingual medical question answering (English and Chinese)

Compute Requirements

Agents are given a standard environment with no sandbox or file system access.

License

Apache 2.0.

Tasks

There are four splits in this environment:

  • en: 19,704 tasks (English medical questions)
  • en_mix: 24,887 tasks (English medical + general instruction)
  • zh: 20,171 tasks (Chinese medical questions)
  • zh_mix: 25,358 tasks (Chinese medical + general instruction)

Reward Structure

This is a single-turn environment. The agent submits an answer via the submit_answer tool. An LLM grader (gpt-5-mini) evaluates the response against expert reference answers on three criteria: (1) whether the answer captures the key medical concepts from the reference, (2) whether the reasoning is clinically sound, and (3) whether the answer is free of factual errors or critical omissions. Reward is binary: 1.0 if correct, 0.0 if incorrect.
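The grading flow above can be sketched in a few lines. This is an illustrative sketch only: the prompt wording and the verdict-parsing helper are assumptions, not the environment's actual grader implementation.

```python
# Illustrative sketch of the binary LLM-grading flow described above.
# The prompt text and verdict parsing are assumptions, not the real grader.

def build_grader_prompt(question: str, reference: str, answer: str) -> str:
    """Assemble a grading prompt covering the three stated criteria."""
    return (
        "You are grading a medical answer against an expert reference.\n"
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Candidate answer: {answer}\n"
        "Judge whether the candidate (1) captures the key medical concepts, "
        "(2) reasons in a clinically sound way, and (3) is free of factual "
        "errors or critical omissions. Reply with exactly CORRECT or INCORRECT."
    )

def reward_from_verdict(verdict: str) -> float:
    """Map the grader's text verdict onto the binary reward."""
    return 1.0 if verdict.strip().upper().startswith("CORRECT") else 0.0
```

The prompt string would be sent to the grading model (gpt-5-mini); only the final CORRECT/INCORRECT verdict matters for the reward.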

Data

Data consists of four Parquet files sourced from the Hugging Face dataset FreedomIntelligence/medical-o1-reasoning-SFT. Each row contains a medical question and an expert reference answer. Data is stored on the OpenReward platform.
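A minimal sketch of pulling the source data directly from Hugging Face with the datasets library. The config names are assumed to mirror the split names listed above, and the schema served on the OpenReward platform may differ from the raw dataset.

```python
# Split sizes as listed above; they sum to the stated 90,120 questions.
SPLITS = {"en": 19_704, "en_mix": 24_887, "zh": 20_171, "zh_mix": 25_358}

def load_split(name: str):
    """Load one config of the source dataset (requires network access)."""
    if name not in SPLITS:
        raise ValueError(f"unknown split: {name}")
    from datasets import load_dataset  # pip install datasets
    return load_dataset("FreedomIntelligence/medical-o1-reasoning-SFT", name)
```

For example, load_split("en") would fetch the 19,704 English medical questions.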

Tools

  • submit_answer: Submit your medical answer for expert evaluation. Ends the episode.

Time Horizon

Single-turn. The agent reads the medical question and submits one answer.

Environment Difficulty

[Put environment difficulty statistics here]

Other Environment Requirements

An OpenAI API key is required for LLM-based grading. Pass it via secrets={"openai_api_key": "..."}.
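A minimal sketch of assembling the secrets mapping from an environment variable rather than hard-coding the key. The surrounding client call is omitted, since it depends on your OpenReward setup; only the dict shape shown above is taken from this document.

```python
import os

def build_secrets() -> dict:
    """Build the secrets mapping expected by the environment."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set; LLM grading will fail")
    return {"openai_api_key": key}
```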

Safety

Agents in Medical-O1-Reasoning answer medical questions in a standard environment. Models trained on this data should not be relied upon for medical advice.

Citation

@article{chen2024huatuogpto1,
  title={HuatuoGPT-o1, Towards Medical Complex Reasoning with LLMs},
  author={Chen, Junying and Cai, Zhenyang and Ji, Ke and Wang, Xidong and Liu, Wanlong and Wang, Rongsheng and Hou, Jianye and Wang, Benyou},
  journal={arXiv preprint arXiv:2412.18925},
  year={2024}
}