Why studies that rely too much on drugs often fail to hold up

A pharmacologist argues that scientists shouldn't depend on pharmacological agents

Anastasia Gorelova

Molecular Pharmacology

University of Pittsburgh

Scientists constantly wrestle with internal conflicts. Mine? I’m a reluctant pharmacologist. Even though I’m trained to develop medications, I don’t like to use them in my own research. I prefer to focus on a more basic question: what unique features of diseased cells can we take advantage of to design better drugs? Researchers need to minimize uncertainty in their work, and using pharmacological agents to study cells only adds to it, because those drugs may have side effects we don’t yet know about.

The end goal of what medical researchers do may be to invent new pharmaceuticals, but the only way to do that cost-efficiently – and without shooting in the dark – is to first map out the intricate network of molecular processes inside the cells of our bodies. Getting to that starting point is not a trivial task, so scientists like me spend our waking hours trying to wrap our minds around this problem.

Every living organism is incredibly complex, and the only way to make sense of any part of its biology is to break it down, both literally and figuratively. To determine the order in which molecules interact with one another, a key to efficient drug development, scientists need to be able to switch molecules on and off at will. Unfortunately, because the tools available to us are imperfect, it is almost certain that some molecule other than the one we’re actually interested in will be inadvertently affected as well. And the chance of that happening is even higher when using pharmacological interventions – medications.

Poorly designed studies and failure to recognize methodological limitations harm everyone, from the public to scientists, from pharmaceutical companies to patients. On average, only 10 percent of drugs that enter clinical trials make it to the market. Very often, a seemingly great drug candidate that performed beautifully in initial screenings, worked well in cells, and showed fantastic effectiveness in animals reveals some undesired properties once administered to humans. History is full of examples of medical disasters, like thalidomide, that could've been prevented by more rigorous testing of adverse effects.

Thalidomide, which was commonly prescribed to women for morning sickness in Europe in the 1950s, caused birth defects in children. Frances Oldham Kelsey, an FDA reviewer, blocked the sale of the drug in the United States and received an award for her work from President John F. Kennedy.


But, in my mind, the stakes are as high in “basic” cell biology research as they are in translational “lab bench to bedside” studies. In both cases, an over-reliance on drugs leaves too much room for unpredictability, which can harm science.

The pharmacological approach sometimes used in basic research to switch off a molecule of interest relies on small-molecule inhibitors, which, in an ideal world, would be specific and selective for their chosen target. Sadly, that is never the case. The more a drug is studied, the less specific it turns out to be: the emergence of side effects and unaccounted-for interactions is only a matter of time.

Some of the molecules that were instrumental to scientific research in the 1970s have completely lost credibility in the last 30 years. Diphenyleneiodonium, or DPI, a compound once considered to specifically inhibit enzymes critical for maintaining cellular oxidative balance, has since been shown to affect a whole slew of other targets, so much so that using it in the lab in 2017 is pretty much considered a faux pas. Likewise, apocynin, the drug that replaced DPI when the latter fell out of favor, turned out upon careful examination to have no inhibitory effect on its alleged targets and to work via a completely unrelated mechanism. Thus, apocynin likely produced hundreds of false-positive results and led researchers astray.

Similarly, using drugs to mimic certain environmental conditions can be problematic and lead to wrong conclusions. Cobalt chloride, which was used in the early 2000s to chemically imitate oxygen deprivation in cells, was quickly shown to affect cells differently than growing them in a low-oxygen chamber does. Because of that, any study that draws conclusions from dumping cobalt chloride on a Petri dish might be flat-out wrong. After all, humans are rarely exposed to cobalt salts, but traveling to high altitudes, where oxygen is scarce, is quite common.

Another important problem is drug concentration. Even drinking too much water can kill, let alone taking too much of a far less benign substance. Past a certain point, when a cell’s buffering systems are overwhelmed, any pharmacological agent loses its specificity and starts affecting everything in its path indiscriminately, like a rolling snowball. Whenever I come across a study that uses drugs at very high concentrations, I become suspicious: can its results be trusted? And even if those results are real, the potential to translate them to the clinic is meager, simply because of how expensive it would be to use that much of a drug on every patient.

All that is to say: no approach is perfect. The other way to switch off molecules in vitro is to manipulate them genetically. This has unpredictable side effects as well, which is why using proper controls is critically important, regardless of which methods scientists choose.

As with any other engineering problem, be it building a skyscraper or designing a drug against aggressive cancers, a strong foundation is key to success. Rushing to conclusions doesn’t just stall progress – it can be dangerous. It’s impossible, and unnecessary, to avoid using drugs in research altogether, but it’s absolutely vital to be aware of their limitations.