‘AI scientist’ helps suggest potential cancer drugs in research led by University of Cambridge
An ‘AI scientist’, working in tandem with human scientists, has identified hidden patterns buried in mountains of scientific literature that point to potential new cancer drugs.
A research team, led by the University of Cambridge, used the GPT-4 large language model (LLM) to uncover combinations of cheap and safe drugs, already used to treat conditions such as high cholesterol and alcohol dependence, that could be effective at treating cancer.
To test the approach, the researchers prompted GPT-4 to identify new drug combinations that could have a significant impact on a breast cancer cell line commonly used in medical research.
The LLM was instructed to avoid standard cancer drugs, identify drugs that would attack cancer cells while not harming healthy cells, and prioritise drugs that were affordable and approved by regulators.
The drugs suggested by GPT-4 were then tested by human scientists against breast cancer cells, both individually and in the suggested combinations.
Three of the 12 drug combinations suggested worked better than current breast cancer drugs in the first lab-based test. Having learned from these tests, the LLM suggested a further four combinations, three of which also showed promising results.
The results represent the first instance of a closed-loop system in which experimental results guided an LLM, and LLM outputs, interpreted by human scientists, guided further experiments.
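In outline, such a closed loop alternates between LLM proposals and wet-lab feedback. The minimal Python sketch below illustrates the idea only; the model name, prompt wording, number of rounds and the record_lab_result placeholder are assumptions for illustration, not the team’s actual pipeline.

```python
# Illustrative sketch of a closed loop: the LLM proposes drug combinations,
# human scientists test them in the lab, and the outcomes feed the next prompt.
# Assumptions (not from the study): the OpenAI Python SDK, the "gpt-4" model name,
# the prompt text, and a record_lab_result() stand-in for the wet-lab step.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CONSTRAINTS = (
    "Suggest non-standard, regulator-approved, affordable drug combinations "
    "that may kill breast cancer cells while sparing healthy cells. "
    "Return one combination per line with a brief mechanistic rationale."
)

def propose_combinations(feedback: str) -> list[str]:
    """Ask the LLM for candidate combinations, conditioned on prior lab results."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": CONSTRAINTS},
            {"role": "user", "content": f"Previous experimental results:\n{feedback}"},
        ],
    )
    text = response.choices[0].message.content
    return [line for line in text.splitlines() if line.strip()]

def record_lab_result(combination: str) -> str:
    """Placeholder for the wet-lab step: a human scientist runs the assay
    against the cancer cell line and types in a summary of the outcome."""
    return input(f"Result for: {combination}\n> ")

feedback = "No experiments run yet."
for _ in range(2):  # e.g. an initial round of suggestions plus one follow-up
    candidates = propose_combinations(feedback)
    results = [record_lab_result(c) for c in candidates]  # humans test each suggestion
    feedback = "\n".join(results)  # lab outcomes guide the next round of prompts
```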
“Supervised LLMs offer a scalable, imaginative layer of scientific exploration, and can help us as human scientists explore new paths that we hadn’t thought of before,” said Prof Ross King from Cambridge’s Department of Chemical Engineering and Biotechnology, who led the research. “This can be useful in areas such as drug discovery, where there are many thousands of compounds to search through.”
With this approach, hallucinations – results normally viewed as flaws – became a feature, generating unconventional combinations worth testing and validating in the lab.
The human scientists also inspected the mechanistic reasoning the LLM gave for each suggested combination, and fed the experimental results back into the system over multiple iterations.
“This is not automation replacing scientists, but a new kind of collaboration,” said co-author Dr Hector Zenil from King’s College London. “Guided by expert prompts and experimental feedback, the AI functioned like a tireless research partner – rapidly navigating an immense hypothesis space and proposing ideas that would take humans alone far longer to reach.”
The drugs identified would need extensive clinical trials before they could be used for cancer patients.
The findings are reported in the Journal of the Royal Society Interface.