
From Physician Expertise to Clinical Agents: Preserving, Standardizing, and Scaling Physicians' Medical Expertise with Lightweight LLM

arXiv:2603.23520v1 Announce Type: new Abstract: Medicine is an empirical discipline refined through long-term observation and the messy, high-variance reality of clinical practice. Physicians build diagnostic and therapeutic competence through repeated cycles of application, reflection, and improvement, forming individualized methodologies. Yet outcomes vary widely, and master physicians' knowledge systems are slow to develop and hard to transmit at scale, contributing to the scarcity of high-quality clinical expertise. To address this, we propose Med-Shicheng, a general framework that enables large language models to systematically learn and transfer distinguished physicians' diagnostic-and-therapeutic philosophy and case-dependent adaptation rules in a standardized way. Built on Tianyi, Med-Shicheng consists of five stages. We target five National Masters of Chinese Medicine or distinguished TCM physicians, curate multi-source materials, and train a single model to internalize all five knowledge systems across seven tasks, including etiology-pathogenesis analysis, syndrome diagnosis, treatment principle selection, prescription generation, prescription explanation, symptom evolution with regimen adjustment, and clinical advice. Implemented on Qwen2.5-1.5B-Base, Med-Shicheng runs on resource-constrained GPUs while achieving performance comparable to DeepSeek-R1 and GPT-5. We also examine the reliability of LLM-as-a-judge versus physician evaluation: automated judging tracks overall trends but shows bias on fine-grained individualized distinctions, highlighting the need for physician involvement when ground truth is unavailable and for domain-adapted judge models.
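The seven tasks named in the abstract amount to a multi-task instruction-tuning setup over expert-annotated cases. A minimal sketch of how such records might be assembled (the task names come from the abstract; the field names and `build_record` helper are hypothetical, not the paper's actual data schema):

```python
# Sketch: wrapping expert-annotated cases as multi-task SFT records.
# Task list is from the abstract; record schema is illustrative only.

TASKS = [
    "etiology-pathogenesis analysis",
    "syndrome diagnosis",
    "treatment principle selection",
    "prescription generation",
    "prescription explanation",
    "symptom evolution with regimen adjustment",
    "clinical advice",
]

def build_record(physician: str, task: str, case_text: str, answer: str) -> dict:
    """Wrap one annotated case as an instruction-tuning example."""
    return {
        "instruction": f"As {physician}, perform {task} for the case below.",
        "input": case_text,
        "output": answer,
    }

record = build_record("Physician A", TASKS[1], "Chief complaint: ...", "Syndrome: ...")
```

Training one model over records from all five physicians across all seven tasks is what lets a single checkpoint internalize multiple knowledge systems at once.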

Executive Summary

This article proposes Med-Shicheng, a five-stage framework built on Tianyi that uses large language models (LLMs) to preserve, standardize, and scale physicians' medical expertise. By training a single model to internalize the diagnostic-and-therapeutic philosophies of five distinguished Traditional Chinese Medicine (TCM) physicians across seven clinical tasks, Med-Shicheng aims to address the scarcity of high-quality clinical expertise. Implemented on Qwen2.5-1.5B-Base, the model runs on resource-constrained GPUs while achieving performance comparable to DeepSeek-R1 and GPT-5. The study also probes the reliability of LLM-as-a-judge evaluation: automated judging tracks overall trends but shows bias on fine-grained individualized distinctions, underscoring the need for physician involvement. The findings have significant implications for the development of medical AI systems and the future of clinical practice.

Key Points

  • Med-Shicheng is a five-stage framework, built on Tianyi, that uses LLMs to standardize and scale distinguished physicians' medical expertise
  • Implemented on Qwen2.5-1.5B-Base, the model runs on resource-constrained GPUs while achieving performance comparable to DeepSeek-R1 and GPT-5
  • LLM-as-a-judge evaluation tracks overall trends but shows bias on fine-grained individualized distinctions, so physician evaluation remains necessary
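The claim that a 1.5B-parameter model fits on resource-constrained GPUs can be sanity-checked with a back-of-the-envelope weight-memory estimate (the bytes-per-parameter figures are standard precision sizes, not numbers from the paper, and the estimate excludes KV cache and activations):

```python
# Rough weight-only memory footprint of a 1.5B-parameter model.
# Excludes KV cache, activations, and framework overhead.

def inference_memory_gb(n_params: float, bytes_per_param: int) -> float:
    """Weights-only memory in GiB for a given numeric precision."""
    return n_params * bytes_per_param / 1024**3

params = 1.5e9                            # Qwen2.5-1.5B
fp16 = inference_memory_gb(params, 2)     # ~2.8 GiB in half precision
fp32 = inference_memory_gb(params, 4)     # ~5.6 GiB in full precision
```

Even in full precision the weights fit comfortably on a consumer GPU, which is consistent with the paper's emphasis on lightweight deployment.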

Merits

Strength

The framework demonstrates a novel approach to transferring distinguished physicians' individualized expertise into a single lightweight LLM, directly addressing the scarcity of high-quality clinical expertise.

Demerits

Limitation

The study relies on a small sample size of five physicians, potentially limiting the generalizability of the findings.

Expert Commentary

The article presents a compelling argument for the potential of LLMs in medical practice, highlighting the need for innovative solutions to address the scarcity of high-quality clinical expertise. However, the study's reliance on a small sample size and the limitations of LLMs in fine-grained individualized distinctions caution against over-reliance on these systems. Further research is necessary to fully realize the potential of Med-Shicheng and to address the complexities of clinical expertise.

Recommendations

  • Future studies should aim to replicate the Med-Shicheng results with a larger and more diverse cohort of physicians.
  • Domain-adapted judge models should be developed to mitigate the bias of general-purpose LLM judges on fine-grained individualized distinctions.

Sources

Original: arXiv - cs.CL