The Tribulations of Clinical Trials
Efforts are afoot to improve the output of the drug research pipeline | By Susan Warner
A plain tablet in a no-name blister-pack. It could save a life.
Or maybe not.
Since 1994, the number of new candidate drugs the Food and Drug Administration has cleared for human testing in the United States has risen year over year, from 3,350 in 1996 to 3,900 in 2002. But the number of drugs that successfully negotiate the trial process and ultimately receive FDA approval is frustratingly low. Despite the research budgets of pharmaceutical companies and the National Institutes of Health doubling since 1993, the number of approvals for new drugs with a novel chemical structure fell from 53 in 1996 to 21 in 2003.
"Currently, a striking feature of this path [to market] is the difficulty, at any point, of predicting ultimate success with a novel candidate," states a recent FDA report.
And herein lies a large part of the problem: While new technologies have led to significant breakthroughs at the front end of the drug development process, the same hasn't happened at the drug-trial stage. "Not enough applied scientific work has been done in creating new tools to get fundamentally better answers about how the safety and effectiveness of new products can be demonstrated, in a faster time frame, with more certainty, and at lower costs. As a result, the vast majority of investigational products that enter clinical trials fail," the FDA states in its report.
Other issues that contribute to the paucity of new drug approvals include: poor trial design (see Feature | From P Values to Bayesian Statistics, It's All in the Numbers), the shortage of adequately trained clinicians, problems with surrogate endpoints, the industry's reluctance to move toward a more streamlined approach to data collection, and the difficulty of attracting enough patients to fill the trials.
BEYOND OBSERVATION Historically, physicians with an interest in research worked with drugs in small patient groups, developing treatment strategies based largely on experience and anecdotal evidence. In 1796, Edward Jenner determined that the cowpox virus could thwart smallpox, and about a century later, Bayer chemist Felix Hoffmann learned of aspirin's pain-relieving properties by giving it to his father, who had arthritis. The foundation of the modern clinical trial process was laid in 1938 with the Federal Food, Drug, and Cosmetic Act. In the early 1960s, the current system of clinical trials evolved, in which new drugs pass through three distinct phases of controlled testing for safety and efficacy, says Paul Herrling, head of global research at Novartis.
The three phases progressively involve more people who are tested over longer periods of time.
Phase I assesses how the compound affects the body: how it is absorbed, distributed, metabolized, and excreted. A typical study involves between 20 and 100 healthy volunteers; about 70% of new drugs pass this stage.
Phase II studies assess the effectiveness of the novel drug in treating patients. Several hundred subjects are generally involved in these proof-of-concept studies, and 33% of the drugs pass muster. Subsequent Phase II studies assess the most appropriate dose.
In Phase III, hundreds, maybe thousands, of patients take the drug or a placebo to confirm both efficacy and safety. Phase III studies frequently are longer in duration in order to provide additional reassurance regarding safety. Between 25% and 30% of the drugs clear this hurdle. Thus, only about 8% of drug submissions make it to market. Herrling says that today's more scientific process is necessary and appropriate, but it shifts much of drug development away from the clinic until the final stages. This remoteness from patients, he says, invites a higher attrition rate.
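Chaining the stage-wise pass rates quoted above gives a quick sanity check on the overall figure. This back-of-envelope sketch (not from the article; it simply multiplies the quoted rates, taking the midpoint of the Phase III range) lands in the same ballpark as the roughly 8% success rate cited:

```python
# Back-of-envelope check: multiply the quoted per-phase pass rates
# to estimate the overall probability that a candidate reaches market.
phase_pass_rates = {
    "Phase I": 0.70,     # ~70% of new drugs pass
    "Phase II": 0.33,    # ~33% pass proof-of-concept
    "Phase III": 0.275,  # midpoint of the quoted 25-30% range
}

overall = 1.0
for phase, rate in phase_pass_rates.items():
    overall *= rate
    print(f"Cumulative survival after {phase}: {overall:.1%}")

# The product (~6-7%) is the same order as the article's ~8% figure;
# the gap reflects rounding in the quoted per-phase rates.
```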
Mark B. McClellan, former head of the FDA and now administrator of the Centers for Medicare and Medicaid Services, agrees. In the 1980s and 1990s, researchers followed a simple path to a receptor target and then synthesized the molecule to fit it, he said in an interview with The Scientist last year. Now, the process is more complex. "Most product developers are testing compounds on glass slides doing microarray work, seeing if there is an impact on gene expression," McClellan said. "The problem is, [we're] not developing a whole lot of knowledge on what it means for patients." To help, the FDA outlines in its March report a goal of developing new, publicly available scientific tools such as assays, standards, computer-modeling techniques, biomarkers, and new clinical-trial endpoints that are designed to make the drug development and testing processes more effective.
A further confounding factor regarding the paucity of new drugs is what biotechnology analyst Brian Rye calls a poverty of riches: So many molecules to investigate, and so many of them miss their targets. "[Researchers are] having a hard time deciphering which of those targets will be validated," says Rye, of Janney Montgomery Scott in Philadelphia.
Consequently, it is not the drugs that are failing per se, says Eric Rowinsky, an oncology researcher who directs the Institute for Drug Development, University of Texas Health Science Center, San Antonio. "We have to select the disease and develop the methodologies to identify the tumors in which the target is really being turned on," he says. The molecular dissection of disease, to determine the importance of the target that is driving the cancer, is desperately needed; it goes hand in hand with the development of targeted therapies. "We should be conducting clinical trials in those tumors in which the target is functionally important, or in those tumors in which the target is driving the malignancy."
TIME, MONEY, AND TRIALS The stakes in drug development are high. It takes nearly eight years to develop a drug, almost twice as long as it took 20 years ago. The cost of failure can be mighty. Last fall, Merck & Co. shocked financial markets when it announced it would discontinue work on a promising new antidepressant, aprepitant, which had already received marketing approval as an antinausea drug, after it failed in Phase III trials. The company's shares tumbled 6.5% in a single day. Even the costs of success are staggeringly large: more than $1 billion per drug, from concept to market, according to a report by the Tufts Center for the Study of Drug Development, cited by the FDA. Rye says the number seems high, but adds that Pharma could be including the costs of failed molecules.
Despite this, the FDA identifies two key issues that will inevitably cost more in terms of money and time. The first is the testing of a wider range of doses of candidate drugs; the other is the detection of rare adverse effects.
FDA staff and some academics suspect that one issue contributing to Phase III failures may be that companies do not conduct adequate dose-response studies in Phase II. Typically, dosing is a trade-off. Companies want to use a high enough dose to guarantee effectiveness, but not so much that it triggers safety issues.
Robert Temple, who directs the Office of Medical Policy in the FDA's Center for Drug Evaluation and Research, says companies are sometimes reluctant to spend additional time and money testing dosage. "Sometimes if you just plunge forward and study a single dose in Phase III and you get it right, you can save yourself a couple years and be out there ahead of somebody else," he says.
Susan Ellenberg, director of the Office of Biostatistics and Epidemiology in the FDA's Center for Biologics Evaluation and Research, recalls an early AZT study that showed its effectiveness in treating HIV. The results were so good that the study was canceled. "Later on we found out the dose was twice as high as it needed to be," she says. "Everyone was glad we got it on the market quicker, but it proved so toxic a lot of people wouldn't take it."
The issue of safety could be dealt with better if drug trials were larger, the FDA says. "Clinical trials ... in most cases are neither large enough nor long enough to provide all information on a drug's safety. At the time of approval for marketing, the safety database of a new drug will often include 3,000 to 4,000 exposed individuals, an insufficient number to detect rare adverse events. For example, in order to have a 95% chance of detecting an adverse event with an incidence of 1 per 10,000 patients, an exposed population of 30,000 patients would be required."
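The FDA's 30,000-patient figure follows from simple probability: if an adverse event strikes each patient independently with incidence p, the chance of observing no cases among n patients is (1 − p)^n, so a 95% detection chance requires 1 − (1 − p)^n ≥ 0.95. A minimal sketch of that standard sample-size arithmetic (the calculation is not spelled out in the article):

```python
import math

def patients_needed(incidence, detect_prob=0.95):
    """Smallest n such that the probability of seeing at least one
    adverse event among n patients reaches detect_prob, assuming
    independent events: 1 - (1 - incidence)**n >= detect_prob."""
    return math.ceil(math.log(1 - detect_prob) / math.log(1 - incidence))

# Adverse event with incidence 1 per 10,000 patients:
n = patients_needed(1 / 10_000)
print(n)  # just under 30,000, matching the FDA's figure
```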
Considering the struggle that clinical researchers have now in trying to attract subjects, the FDA's position may be unrealistic. Rare side effects, and those that develop with chronic use or have a long latency period, are difficult or impossible to detect. Furthermore, drug trials rarely include special populations, such as children and the elderly, who are often taking several types of medications. And trials do not always represent the population that may be exposed to the drug after approval.
Put yourself in the shoes of a small-to-midsize biotech, Rye says. On the one hand is a promising drug; a large trial would be ideal. But the patent has a finite shelf life, and the costs of signing everyone up, including patients, doctors, and centers, are significant. "You have got to balance," says Rye.
A further potential hurdle: Congress recently ordered the FDA to give new guidance on safety procedures. The agency is considering mandatory so-called risk-management plans, which would require additional safety information, including larger safety studies to screen earlier for potential adverse reactions that may affect only a few people. Formal rulings on risk-management programs are expected this year.
Until now, these risk-management programs were negotiated as part of the approval process. Ira Loss, executive vice president at Washington Analysis, an equity research firm, points to Accutane, which has had additional safety requirements and warning labels attached to it because of its danger to pregnant women. Lotronex, the drug for irritable bowel syndrome that the FDA withdrew, was allowed back on the market only after its manufacturer, GlaxoSmithKline (GSK), complied with a risk-management program.
"Industry doesn't like it because it is more red tape and a burden, but I think we have come to a point where every product will have a risk-management program of some sort," says Loss.
TRIAL MANAGEMENT WOES A noted measure of the relative inefficiency of the drug industry is that it did not move toward electronic data management until recently. The pharmaceutical sector lags behind other industries in developing computer-data capabilities, mainly because it has been successful under the old paper-based system, says Glenn Gormley, vice president of US clinical development at the British pharmaceutical firm, AstraZeneca. "We've been doing clinical trials more or less the same way for 20 years. The technology is there; it's just a matter of adopting it."
Last year researchers wrote in The Journal of the American Medical Association that "although cost-benefit models of hypothetical computer-based patient record systems have shown a significant cost advantage, the entire healthcare industry continues to invest significantly less in information technology (IT) than any other information-intensive industry."
But there is movement. Novartis, for example, designed its data-management system and has been using it since December 2001. The company reported saving $65 million (US) in 2003.
Another problem is actual trial implementation. There are too few researchers to conduct them, and currently, only 8% of principal investigators conducting industry-sponsored clinical trials are younger than 40 years of age.
For the past 30 years, many top physicians have turned away from going into medical research, says David Korn, senior vice president for biomedical and health sciences research at the Association of American Medical Colleges. "Bright MDs and PhDs chose to go after the experimental models. Doing research on people is difficult and, one could say, heavily burdened by regulations."
And then there's the issue of trying to attract, and keep, trial participants in the first place; roughly 5 million or so people are enrolled currently. The American Cancer Society says that only 5% of adult patients with cancer participate in drug trials.
MOVING FORWARD Many anticipate that a mixture of new technologies and improved regulations will increase the efficiency of the clinical trials. Some companies are trying to improve the chances of success by further studying drugs before they reach the trial stage. "The companies who will be able to do this efficiently and fast will have a huge advantage over those who can't," says Gormley. AstraZeneca and other pharmaceutical companies are changing their drug-design strategies to include what's called front-loading risk: to evaluate risk early on and find ways to test for fatal flaws in a compound.
Ron Krall, GSK's senior vice president of global development, says the company is putting a new emphasis on improving the quality of lead drugs. Using new high-throughput screening equipment, GSK is sifting through more compounds to find those that generate a stronger response to disease targets. "One of the things that makes it difficult to do clinical trials is when the molecule is messy: It's not very selective or it has different mechanisms of action or produces a lot of effects," says Krall. "As we get more potency ... and more specificity in our molecules, that allows us to really narrow the focus of our clinical trials and improve their quality."
Like other companies, GSK is applying pharmacogenetics, the study of genetic variation underlying differential response to drugs, particularly genes involved in drug metabolism. "We don't know for sure that this is going to be successful in improving the efficacy and safety of our medicines, but we're confident it will pay off in identifying sets of patients who have improved efficacy and safety profiles," says Krall. He adds that the company is using large patient databases to narrow down the investigators with whom it wants to work. "A lot of the noise in clinical trials comes from the differences in patients that exist among the centers."
A controversial issue involves the use of surrogate endpoints and markers as proxies for the ultimate endpoint--disease eradication, or longer survival--which could take years to prove. Some patient activists say that these endpoints may ultimately hurt patients by creating useless new drug therapies. For example, postmenopausal hormones can reduce cholesterol, an endpoint that researchers hoped would prove the benefit of using hormone replacement therapy (HRT) to prevent heart disease. However, when large-scale trials were run through the Women's Health Initiative, it was shown that HRT led to increased cardiovascular problems.
Adopting a new biomarker or surrogate endpoint for effectiveness standards can speed up clinical development. For example, the adoption of CD4 cell counts and, subsequently, measures of viral load as surrogate markers for anti-HIV drug approvals, allowed the rapid clinical workup and approval of life-extending antiviral drugs, with time from first human use to market as short as 3.5 years.
Barbara Brenner, executive director of Breast Cancer Action in San Francisco, says that using surrogate endpoints to reach approval could be tantamount to guesswork. "The problem we're now seeing is a move to yet earlier stages in the biology of cancer and the use of surrogates for the endpoints we're looking for," she says. "It is mind-boggling that this is the direction of cancer-drug approval."
The FDA's Temple acknowledges that there are disagreements. "We approve some drugs based on endpoints that some people don't think are as good as we should have," he says. "But there are [others] who would argue that what we should be doing is [shooting for] survival ... We've not accepted that position." Endpoints other than survival were the basis for 73% of all cancer drug approvals (including accelerated approvals) between 1990 and 2002, and for 67% of all regular (nonaccelerated) cancer drug approvals during the same time period.
Another goal is to improve interactions between companies and the FDA. Companies complain about delays and multiple reviews, although most acknowledge the process sped up when the Prescription Drug User Fee Act of 1992 was passed. The law, which was reauthorized in 2002, requires companies to fund FDA staffing in return for guaranteed review times. Even with the new funds, the agency can be overwhelmed, pharmaceutical executives say. "What we would like to do is interact regularly with the agency, but they have many companies asking to do the same thing," says Gormley.
Former FDA director McClellan says that early contact can help in reducing cost and time. In trying to avoid multiple cycle reviews, one of the FDA's new initiatives is a test project in which the agency is meeting earlier with the drug companies and giving them earlier feedback on applications, even on ideas still in development.
Discovering viable, agreeable solutions to make the drugs flow more freely and quicker through the pipelines will involve a host of remedies, including tolerance. "I think we are going to have to be more patient," says Rowinsky. "We live in a short-term society. Investors are screaming for results right away, bean counters are screaming for big numbers right away; we're often slaves to investors. ... Patience is a virtue."
The Scientist. Volume 18 | Issue 8 | 20 | Apr. 26, 2004
Hepatitis C Virus Linked to Non-Hodgkin's Lymphoma
18 Oct 2004
Patients infected with the hepatitis C virus (HCV) are six times as likely to develop non-Hodgkin's lymphoma (NHL) as individuals who are virus-free, according to research presented today at the Third Annual Frontiers in Cancer Prevention Research meeting.
HCV-infected patients have a seventeenfold higher risk of developing diffuse large B-cell lymphoma, researchers from British Columbia reported. Diffuse large B-cell lymphoma is the most common variety of NHL, accounting for approximately 30 percent of all NHL cases.
Compared to Europe and Japan, the incidence of hepatitis C viral infection is fairly low in North America, and previous studies from Canada and the United States have not shown an association between the virus and development of NHL, said Agnes Lai, lead author of the research. The British Columbia study examined HCV status in 550 NHL cases and 205 population controls. The study had enough patients to detect an association between HCV and NHL, confirming the virus-cancer link suspected in studies from other areas of the world where the virus is more prevalent.
"People who have been exposed to the virus comprise a high-risk group for developing non-Hodgkin's lymphoma, particularly diffuse large B-cell lymphoma," said John Spinelli, a cancer researcher from the British Columbia Cancer Agency, Vancouver, BC, and principal investigator of the research study.
The spread of hepatitis C in the United States has dropped significantly since the 1980s. Currently, the number of new cases per year is around 25,000. Approximately 3.8 million Americans have been infected with the virus. In the past, the most common means of infection was blood transfusion; in recent years, most new infections have occurred among drug users who share needles.
Approximately 53,000 patients were diagnosed with NHL in the United States in 2003. There were 23,000 deaths from the disease that year.
Spinelli and Lai conducted their research with colleagues Randy Gascoyne, Joseph Connors, Pat Lee, Rozmin Janoo-Galani, and Richard Gallagher, BC Cancer Agency; Anton Andonov, Health Canada National Microbiology Laboratories, Winnipeg, Manitoba, and Darrel Cook, British Columbia Centre for Disease Control.