The pharmaceutical industry faces many headwinds as we enter the third decade of this century, the greatest of which may be the continual loss of branded medications due to expiring patents.
As companies face this ‘patent cliff’, there is increased pressure to develop drugs faster and more cheaply in order to continue to replace products that contribute to top-line revenue.
It is well known that the average drug costs somewhere between $2bn and $5.5bn to develop, that bringing a new molecular entity (NME) to market can take upwards of 13 years, and that the failure rate along the way is exceptionally high.
With these facts in hand, applying machine learning (ML) and artificial intelligence (AI) to the problems of drug discovery speed, cost and lead candidate identification holds tremendous promise. But in today’s world of ML and AI there are three fundamental ‘blind spots’ that should be addressed.
Firstly, many of today’s ML/AI algorithms are designed for simple, linear tasks. In a linear system, the relationship between input and output is proportional and relatively easy to predict: if you smoke ten packs of cigarettes per day, you will likely suffer from emphysema, COPD or some other respiratory illness.
But the world is full of non-linear complexity. In these systems there exists no proportionality and no simple causality between the magnitude of responses and the strength of their stimuli: small changes can have striking and unanticipated effects, whereas large stimuli will not always lead to drastic changes in a system’s behaviour.
Non-linear systems often appear chaotic, unpredictable or counter-intuitive, and solving them requires tremendous effort. In mathematics and the physical sciences, a non-linear system is one in which the change in the output is not proportional to the change in the input. Consider the stock market, weather patterns or disease: a single word from the Governor of the Bank of England about fiscal policy can send the stock market into a spiral; a half-degree temperature change somewhere on the planet can have immeasurable effects for centuries to come; a single mutation, inversion or translocation of a gene can have devastating consequences for human health. These are all examples of non-linear complexity.
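A minimal sketch of this sensitivity is the logistic map, a textbook non-linear system (the parameter values below are illustrative, not drawn from any drug discovery model). Two starting points that differ by one part in ten thousand quickly produce entirely different trajectories:

```python
# Illustrative sketch: the logistic map, a classic non-linear system.
# A tiny change in the starting value produces wildly different
# trajectories — the disproportionate response described above.

def logistic_map(x0, r=3.9, steps=50):
    """Iterate x -> r * x * (1 - x) and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_map(0.5000)
b = logistic_map(0.5001)  # the input differs by only 0.0001

# Early on the trajectories are nearly identical; later they diverge.
print(f"step 1:  {a[1]:.4f} vs {b[1]:.4f}")
print(f"step 50: {a[-1]:.4f} vs {b[-1]:.4f}")
```

A linear model fitted to either trajectory would predict the other badly, which is exactly the failure mode linear-leaning ML/AI algorithms face on complex biological systems.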
All are examples of why ML/AI needs to be able to handle non-linearity. A second challenge with today’s ML/AI platforms is their dependence on ‘big data’. The cost and time involved in aggregating huge volumes of data and then ‘cleaning’ them are not insignificant, and may delay attempts at optimising drug discovery.
There are, however, platforms that can learn from as little as a few hundred data points, delivering insights equivalent or superior to those of alternative ML/AI predictive models and helping pharmaceutical companies reduce failures significantly across the drug development cycle.
These solutions predict the root causes of failure across the drug development cycle, including the identification of specific patient subpopulations and of placebo response. Finally, the issue of opacity is a troubling problem with some ML/AI solutions.
Known as ‘black box’ syndrome, this problem is exacerbated in heavily regulated industries like healthcare, where submitting a drug for regulatory approval requires the ability to show precisely how it acts on a particular disease pathway. It is the ability to understand what the system is doing, and the basis for its recommendations, that is proving crucial.
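To make the contrast concrete, here is a minimal sketch of a transparent model (the dose–response numbers are hypothetical, invented purely for illustration). Unlike a deep network, its entire ‘reasoning’ reduces to two numbers a reviewer can inspect:

```python
# Illustrative sketch with hypothetical toy data: a transparent model
# whose basis for every prediction can be read off directly, in
# contrast to a black-box network.

# Hypothetical dose-response pairs (not real data)
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]

# Ordinary least squares, fitted in closed form
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# The whole model is two inspectable numbers:
print(f"response = {intercept:.2f} + {slope:.2f} * dose")
```

A regulator can trace any prediction of this model back to its two coefficients; that is the kind of auditability the ‘black box’ critique says is missing from opaque ML/AI systems.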
Last year a team at Google used eye scans from over 125,000 patients to build an algorithm that could detect diabetic retinopathy, a leading cause of blindness in some parts of the world, with over 90% accuracy, on a par with board-certified ophthalmologists. These results came with a caveat: humans could not always fully comprehend why the models made the decisions they did.
Other such examples are also readily available. And, to be truthful, some are resisting these methods, calling for a complete ban on using ‘non-explainable algorithms’ in high-impact areas such as health because they may lead to forced, faulty or ‘unethical’ logic. Earlier this year, France’s minister of state for the digital sector flatly stated that any algorithm that cannot be explained should not be used.
So, there you have it. A promising technology, like many others before it, that has friction points that may prevent wider adoption. This isn’t the first time we’ve seen this, and it won’t be the last. But, as with all other such examples through the course of history, knowing what the pitfalls are and having our eyes wide open will help us unleash the true power of ML/AI in healthcare in the years to come.