Surrogate modeling for interpreting black-box LLMs in medical predictions

Changho Han (Medical Big Data Research Center, Seoul National University Medical Research Center, Seoul National University College of Medicine, Seoul, Republic of Korea), Songsoo Kim (Department of Biomedical Systems Informatics, Yonsei University College of Medicine, Seoul, Republic of Korea), Dong Won Kim (Department of Biomedical Systems Informatics, Yonsei University College of Medicine, Seoul, Republic of Korea), Leo Anthony Celi (Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA, USA, Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA), Jaewoong Kim (Department of Biomedical Systems Informatics, Yonsei University College of Medicine, Seoul, Republic of Korea), SungA Bae (Department of Cardiology, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Republic of Korea, Center for Digital Health, Yongin Severance Hospital, Yonsei University Health System, Yongin, Republic of Korea), Dukyong Yoon (Department of Biomedical Systems Informatics, Yonsei University College of Medicine, Seoul, Republic of Korea, Institute for Innovation in Digital Healthcare, Severance Hospital, Seoul, Republic of Korea)

arXiv:2604.20331v1 Announce Type: new

Abstract: Large language models (LLMs), trained on vast datasets, encode extensive real-world knowledge within their parameters, yet their black-box nature obscures the mechanisms and extent of this encoding. Surrogate modeling, which uses simplified models to approximate complex systems, offers a path toward better interpretability of black-box models. We propose a surrogate modeling framework that quantitatively explains LLM-encoded knowledge. For a specific hypothesis derived from domain knowledge, the framework approximates the latent LLM knowledge space using observable elements (input-output pairs) collected through extensive prompting across a comprehensive range of simulated scenarios. Through proof-of-concept experiments in medical predictions, we demonstrate our framework's effectiveness in revealing the extent to which LLMs "perceive" each input variable in relation to the output. In particular, given concerns that LLMs may perpetuate inaccuracies and societal biases embedded in their training data, our experiments using this framework quantitatively revealed both associations that contradict established medical knowledge and the persistence of scientifically refuted racial assumptions within LLM-encoded knowledge. By disclosing these issues, our framework can act as a red-flag indicator to support the safe and reliable application of these models.
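As a concrete illustration of the loop the abstract describes, the sketch below enumerates a grid of simulated patient scenarios, queries a black-box LLM for a prediction on each, and fits an interpretable logistic-regression surrogate to the resulting input-output pairs, so that each coefficient quantifies how strongly the LLM "perceives" that variable in relation to the output. Everything here is an assumption for illustration, not the paper's implementation: `query_llm` is a stub standing in for the actual prompting-and-parsing step, and the variable grid and choice of surrogate are placeholders.

```python
# Minimal sketch of the surrogate-modeling loop, assuming a binary medical
# prediction task. Hypothetical throughout: query_llm() stands in for the
# real prompting step; the variable grid and logistic-regression surrogate
# are illustrative choices, not the paper's exact setup.

import itertools
import numpy as np
from sklearn.linear_model import LogisticRegression

# 1. Enumerate simulated clinical scenarios over a grid of input variables.
ages = [30, 50, 70]
sexes = ["male", "female"]
races = ["White", "Black", "Asian"]   # included to probe racial assumptions
systolic_bps = [110, 140, 170]
scenarios = list(itertools.product(ages, sexes, races, systolic_bps))

def query_llm(age: int, sex: str, race: str, sbp: int) -> int:
    """Stand-in for prompting the black-box LLM with a clinical vignette
    (e.g. 'A {age}-year-old {race} {sex} with systolic BP {sbp} mmHg ...
    will this patient develop complications? Answer yes or no.') and
    parsing the answer to 0/1. Stubbed with a toy rule so the sketch runs."""
    return int(sbp >= 140 or age >= 70)

# 2. Collect observable input-output pairs by extensive prompting.
X, y = [], []
for age, sex, race, sbp in scenarios:
    X.append([age,
              int(sex == "female"),
              int(race == "Black"),   # one-hot race, 'White' as reference
              int(race == "Asian"),
              sbp])
    y.append(query_llm(age, sex, race, sbp))

# 3. Fit an interpretable surrogate on the LLM's own predictions.
surrogate = LogisticRegression().fit(np.array(X), np.array(y))

# 4. Read off how strongly the surrogate says the LLM 'perceives' each
#    variable. A large coefficient on a race indicator, for example, would
#    red-flag a scientifically refuted racial assumption in the LLM.
features = ["age", "female", "race=Black", "race=Asian", "systolic_bp"]
for name, coef in zip(features, surrogate.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```

In practice one would replace the stub with calls to the target LLM, average over paraphrased prompts to reduce wording sensitivity, and choose a surrogate class suited to the specific hypothesis under test.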