When Diabetic Wounds Won’t Heal: How AI Helped Find a Promising Treatment 

8 May 2026

For the roughly 500 million people living with diabetes worldwide, even a small cut or blister on the foot can become a crisis. Diabetic wounds heal slowly, resist treatment, invite infection, and in severe cases lead to amputation. Doctors have limited options, and the search for better therapies is painfully slow.

Part of the reason is sheer scale. Thousands of existing drugs could, in theory, be repurposed to help wounds heal faster. And thousands of proteins in the body are involved in the healing process. Mapping every possible drug-protein interaction through traditional lab experiments would take years and cost millions. 

But what if you could teach a machine to read the scientific literature – thousands of papers, across decades of research – and figure out which drugs are most likely to help? 

A multidisciplinary team from the National University of Singapore has built an AI-powered system that mines the vast body of published biomedical research, connects the dots between drugs and the proteins they affect, and narrows the field to a handful of promising candidates – all before a single test tube is touched. Their top pick, folic acid, was then validated in laboratory experiments that showed it significantly accelerated wound healing.

The work, published in ACS Nano Medicine in March 2026, brought together researchers from the NUS School of Computing, the College of Design and Engineering, and the Department of Pharmacy and Pharmaceutical Sciences. It cut the time needed to go from literature review to lab testing by more than 70 per cent. 

Buried treasure in the scientific literature

The insight behind the project is deceptively simple: the answers might already be out there. Decades of biomedical research have produced tens of thousands of papers touching on wound healing, diabetes, drug effects, and protein behaviour. Scattered across these papers are clues about which drugs might promote tissue repair.

The problem is that no human can read it all. The findings are spread across journals, databases, and disciplines, written in different terminologies and framed within different experimental contexts. Piecing together the full picture – drug by drug, protein by protein – is beyond what any individual researcher, or even a team, can do manually.

Associate Professor Kan Min-Yen, who leads the Web Information Retrieval / Natural Language Processing Group (WING) at NUS Computing, recognised this as a problem that natural language processing was built to solve. The scientific insight already existed – scattered across thousands of papers, in fragments no single researcher could reassemble. What was needed was a way to connect those fragments and surface the therapeutic leads buried within them.

How the system works 

The team started by mapping the landscape: querying major biomedical databases to identify nearly 9,000 proteins linked to diabetic wound healing and close to 3,000 existing drugs that might interact with them. That is an enormous search space – millions of possible combinations, and far too many to test one by one. 

To make the problem manageable, they used computational techniques to group similar drugs together and select a representative set of proteins, trimming the field to 35 drugs and 50 proteins. 
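One way to picture this narrowing step: spread the shortlist across the space of candidates so that no region is over-represented. The sketch below uses farthest-point sampling on random stand-in feature vectors; the real study's grouping method and drug descriptors are not described in this article, so treat every detail here as an illustrative assumption.

```python
import random, math

random.seed(0)
N, D, K = 300, 8, 35   # candidate drugs (scaled down), descriptor dims, shortlist size
drugs = [[random.gauss(0, 1) for _ in range(D)] for _ in range(N)]

def dist(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Farthest-point sampling: repeatedly add the drug farthest from every
# representative chosen so far, so the shortlist spans the whole space
# rather than clustering in one corner of it.
reps = [0]
while len(reps) < K:
    best = max(range(N),
               key=lambda i: min(dist(drugs[i], drugs[r]) for r in reps))
    reps.append(best)

print(len(reps))  # 35 representative drugs
```

The same idea applies to trimming ~9,000 proteins to 50: pick representatives that cover the diversity of the full set, then test only those.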

Then the AI went to work. The team used large language models – the same family of technology behind tools like ChatGPT – to skim thousands of published papers and extract a specific piece of information for each drug–protein pair: does the evidence suggest this drug increases or decreases this protein’s activity? And is that effect good or bad for wound healing?

This is not as straightforward as it sounds. Scientific papers are dense, context-dependent, and sometimes contradictory. A protein discussed in one study might behave differently under different experimental conditions. The team tested several AI models and found that GPT-4, guided by carefully designed prompts, was the most accurate at the time for making these judgements.
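In outline, the extraction step pairs a structured prompt with strict parsing of the model's answer. The prompt wording, the `query_llm` stub, and the label set below are illustrative assumptions, not the team's actual prompts.

```python
def build_prompt(drug, protein, abstract):
    """Assemble a constrained question about one drug-protein pair."""
    return (
        f"Context: {abstract}\n"
        f"Question: Based on the context, does {drug} increase, decrease, "
        f"or have no clear effect on the activity of {protein}? "
        "Answer with exactly one word: increase, decrease, or unclear."
    )

def parse_effect(answer):
    """Map the model's free-text reply onto a fixed label set."""
    label = answer.strip().lower()
    return label if label in {"increase", "decrease", "unclear"} else "unclear"

def query_llm(prompt):
    """Stand-in for a real API call (e.g., to GPT-4); returns a canned reply here."""
    return "increase"

prompt = build_prompt("folic acid", "fibroblast growth factor",
                      "Folic acid supplementation elevated FGF expression in vitro.")
effect = parse_effect(query_llm(prompt))
print(effect)  # increase
```

Forcing a one-word answer and normalising anything unexpected to "unclear" is one common way to keep noisy model output machine-readable at scale.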

Drs Yanxia Qin and Jiaying Wu, and graduate researcher Yixi Ding, all from NUS Computing’s WING group, developed and refined the AI components – designing the prompts, evaluating the models, and building the system that extracted meaningful patterns from unstructured scientific text.

But knowing what a drug does to a protein is only half the story. You also need to know how strongly it binds. For that, the team turned to Associate Professor Yeow Chen Hua from the Department of Biomedical Engineering at NUS College of Design and Engineering, and Professor Giorgia Pastorin, along with graduate researchers Zhang Ziyang, Ram Pravin Kumar Muthuramalingam and Chng Wei Heng from the NUS Department of Pharmacy and Pharmaceutical Sciences. They used computer simulations at the molecular level to estimate how tightly each candidate drug binds its target proteins. Jiayi Liu from The Second Xiangya Hospital, Central South University, China, also contributed to the study.

The real power of the approach lay in combining both: the AI’s understanding of direction (does this drug help or hinder healing?) with the simulations’ measure of strength (how tightly does it bind?). Neither alone was sufficient. Together, they produced a ranked list of the most promising drug candidates.
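A minimal sketch of that combination step, with invented scores: a literature-derived direction score (net beneficial minus harmful effects across proteins) added to a sign-flipped binding energy (more negative energy means tighter binding). The candidate names, numbers, and equal weighting are all assumptions for illustration; the paper's actual scoring scheme may differ.

```python
# direction: net count of beneficial minus harmful effects across proteins
# affinity: simulated binding energy (more negative = tighter binding)
candidates = {
    "folic acid": {"direction": 6, "affinity": -9.1},
    "drug B":     {"direction": 4, "affinity": -6.2},
    "drug C":     {"direction": -2, "affinity": -8.5},
}

def score(c):
    # A strong candidate must both point the right way (positive direction)
    # and bind tightly (very negative affinity, flipped to a positive reward).
    return c["direction"] + (-c["affinity"])

ranked = sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)
print(ranked[0])
```

Note how "drug C" binds tightly but points the wrong way, and "drug B" points the right way but binds weakly: only a candidate strong on both signals rises to the top, which is the article's point that neither signal alone was sufficient.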

The result: a common vitamin, hiding in plain sight 

Folic acid – an inexpensive B vitamin best known for its role in prenatal health – topped the list. 

The AI had identified it as having beneficial effects on several wound-healing proteins. The molecular simulations confirmed a strong binding affinity with fibroblast growth factor, a protein critical to tissue repair. And when the team’s pharmacy collaborators tested folic acid in lab-based wound healing experiments, the results bore out the prediction: folic acid accelerated wound closure significantly, outperforming even mupirocin, a standard wound-healing agent used as the positive control.

Most of the other drugs the team tested in the lab also behaved as the system predicted, lending confidence to the approach. A few surprises emerged – one candidate performed better than expected, another worse – but that is the nature of biology. No computational model captures every variable in a living system.

Beyond the lab bench 

Drug discovery has always been slow. Most candidates fail before they ever reach a patient, and new compounds must clear stringent clinical trials for safety, which can add years before a treatment reaches the people who need it. This is why the team targeted drugs already cleared for human use, an approach known as drug repurposing. And the sheer volume of knowledge already sitting in published research – untapped, unconnected – suggests that some of the best leads may be hiding in plain sight, waiting for something that can read fast enough to find them.

That is what this system does. It turns decades of scattered findings into a shortlist that researchers can act on, cutting months of manual work down to a fraction of the time. And because it runs on publicly available databases and literature, the same approach could be pointed at other complex diseases – not just diabetic wounds.

The team is now looking to expand the platform to larger datasets and advance the most promising candidates toward further testing. For the millions of people living with chronic diabetic wounds, a faster path from knowledge to treatment cannot come soon enough.

Read the full research paper here: https://pubs.acs.org/doi/10.1021/acsnanomed.5c00180
