Cancer Forums and News by PhD's


#1 | 01-27-2013, 10:47 PM | gdpawel (Moderator)
Chemotherapy Recommendations Based on Published Reports of Clinical Trials

The American Society of Clinical Oncology (ASCO) says oncologists should make chemotherapy treatment recommendations on the basis of published reports of clinical trials and a patient’s health status and treatment preferences.

Rigorous clinical trials identify the best treatments for the “average” patient (do cancer cells prefer Coke or Pepsi?). But cancer is far more heterogeneous in its response to individual drugs than bacterial infections are. The tumors of different patients respond differently to the same chemotherapy.

http://www.ncbi.nlm.nih.gov/pubmed/21788567

How about published reports of clinical trials?

More chemotherapy is given for breast cancer than for any other form of cancer and there have been more published reports of clinical trials for breast cancer than for any other form of cancer.

According to NCI’s official cancer information website on state-of-the-art chemotherapy for recurrent or metastatic breast cancer, it is unclear whether single-agent chemotherapy or combination chemotherapy is preferable for first-line treatment. At this time, no data support the superiority of any particular regimen. So it would appear that published reports of clinical trials provide precious little in the way of guidance (1).

In the total absence of guidance from published reports of clinical trials, then, on what basis are treatment regimens selected? ASCO says the choice should be further based on a patient’s health status and treatment preferences.

So what is being done?

Published in the journal Health Affairs is a joint Harvard/Michigan study entitled “Does Reimbursement Influence Chemotherapy Treatment for Cancer Patients?” The authors documented a clear association between reimbursement to oncologists for the chemotherapy of breast, lung, and colorectal cancer and the regimens which the oncologists selected for their patients. In other words, oncologists tended to base their treatment decisions on which regimen provided the greatest financial remuneration to the oncologist (2).

A March 8, 2006 New York Times article described the study. One of the more interesting aspects of the story was a comment from an executive with ASCO, Dr. Joseph S. Bailes, who disputed the study’s findings, saying that cancer doctors select treatments only on the basis of clinical evidence (3).

So ASCO’s Dr. Bailes maintains that drugs are chosen only on the basis of clinical evidence. Yet Dr. Neil Love reported the results of a survey comparing breast cancer oncologists based in academic medical centers with community-based, private-practice medical oncologists. The former do not derive personal profit from administering infusion chemotherapy; the latter do, while deriving no profit from prescribing oral-dose chemotherapy.

The results of the survey could not have been more clear-cut. For first-line chemotherapy of metastatic breast cancer, 84-88% of the academic center-based oncologists (who are motivated to keep off-protocol patients out of their chemotherapy infusion rooms, to reserve those rooms for on-protocol patients) prescribed an oral-dose drug (capecitabine), while only 13% prescribed infusion drugs, and none of them prescribed the expensive, highly remunerative drug docetaxel. In contrast, among the community-based oncologists, only 18% prescribed the non-remunerative oral-dose drug (capecitabine), while 75% prescribed remunerative infusion drugs, and about 40% prescribed the expensive, highly remunerative drug docetaxel (4).

There are patients who have progressive disease after first-line therapy, only to enjoy a dramatic benefit from second- or even third-line therapy; these patients would have been much better served by receiving the most probably active treatment the first time around. When faced with a large number of otherwise equally acceptable therapies, oncologists select the treatments which generate the most income for private practices or the least inconvenience for clinical research institutions.

What needs to be done is to remove the profit incentive from the choice of chemotherapy treatments. Medical oncologists should be taken out of the retail pharmacy business and allowed to be doctors again.

Sources:

(1) http://www.cancer.gov/cancertopics/pdq/treatment/breast/HealthProfessional/page8#Section 297

(2) http://content.healthaffairs.org/cgi/content/abstract/25/2/437

(3) http://www.nytimes.com/2006/03/08/health/08docs.html?ex=1145160000en=584b5c2aa35995a3ei=5070

(4) http://patternsofcare.com/2005/1/editor.htm (figure 37, volume 2, issue 1, 2005)

Why Most Published Research Findings Are False

http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124
__________________
Gregory D. Pawelski

#2 | 01-27-2013, 11:02 PM | gdpawel (Moderator)
Measuring the Efficacy of Cancer Drugs

The existence of this profit motive in drug selection has been one of the major factors working against the individualization of cancer chemotherapy based on testing the cancer biology.

Two scientific studies give us a dose of reality: once a decision to give chemotherapy is made, oncologists receiving more-generous reimbursements used more-costly treatment regimens.

It’s not that all oncologists are bad people; it’s the SYSTEM which is rotten. Some oncologists must choose among chemotherapy drugs with equal efficacies and toxicities. I would imagine that some are influenced by the whole state of affairs, possibly without even entirely admitting it. There are so many ways for humans to rationalize their behavior. The solution is to take medical oncologists out of the retail pharmacy business and force them to be doctors again.

A good benchmark for assessing the efficacy of cancer drugs that come out of clinical trials could be functional cytometric profiling (Cell Function Analysis).

At present, clinical trials are highly empirical: they test drugs on general populations and then look for a clinical response and a treatment effect that is not likely to be a chance result. The side effect of this approach is inflexibility; some patients may be unnecessarily exposed to inferior experimental therapies.

A problem with the empirical approach is that it yields information about how large populations are likely to respond to a treatment. Doctors don't treat populations; they treat individual patients. Because of this, doctors give treatments knowing full well that only a certain percentage of patients will benefit from any given medicine. The empirical approach doesn't tell doctors how to personalize their care to individual patients.

The number of possible treatment options supported by completed randomized clinical trials provides increasingly vague guidance for physicians. Even the National Cancer Institute's official cancer information website states that no data support the superiority of any one of the more than 20 different regimens for metastatic breast cancer, a disease in which probably more clinical trials have been done than in any other type of cancer.

More clinical trials have not produced more clear-cut guidance, but more confusion in this situation. It is more difficult to carry out clinical trials in early stage breast cancer, because larger numbers of patients are needed, as well as longer follow-up periods. But it is likely that more trials would lead to the identification of more equivalent chemotherapy choices for the average patient in early stage breast cancer and in virtually all forms of cancer as well.

So, it would appear that published reports of clinical trials provide precious little in the way of "gold standard" guidance. Almost any combination therapy is acceptable in the treatment of cancer these days. Physicians are confronted on nearly a daily basis by decisions that have not been addressed by randomized clinical trial evaluation.

The needed change in the "war on cancer" will not be on the types of drugs being developed, but on the understanding of the drugs we have. The system is overloaded with drugs and underloaded with the wisdom and expertise for using them.

Patient tumors with the same histology do not necessarily respond identically to the same agent or dose schedule of multiple agents. Laboratory screening of samples from a patient's tumor can help select the appropriate treatment to administer, avoiding ineffective drugs and sparing patients the side effects normally associated with these agents.

It can provide predictive information to help physicians choose between chemotherapy drugs, eliminate potentially ineffective drugs from treatment regimens and assist in the formulation of an optimal therapy choice for each patient (not average populations). This can spare the patient from unnecessary toxicity associated with ineffective treatment and offers a better chance of tumor response resulting in progression-free survival.

Identifying patients with resistant neoplasms may not only spare them toxicity but may prolong their lives, by sparing them the life-shortening effects of ineffective chemotherapy.

Patients would certainly have a better chance of success had their cancer been chemo-sensitive rather than chemo-resistant, where it is more apparent that chemotherapy improves the survival of patients, and where identifying the most effective chemotherapy would be more likely to improve survival above that achieved with empiric chemotherapy.

With the absence of effective laboratory tests to guide physicians, many patients do not even get a second chance at treatment when their disease progresses. Spending six to eight weeks to diagnose treatment failure often consumes a substantial portion of a patient's remaining survival, not to mention toxicities and mutagenic effects.

Good review papers exist and are increasingly appreciated, understood, and applied by private-sector and European clinicians and scientists. This literature is not well understood by many NCI investigators or by NCI-funded university investigators. NCI studies have never determined whether fresh-tumor assays work. All of the considerable literature supporting the use of these assays in patient management has been based on true fresh-tumor (non-passaged) cell assays.

The NCI used "cell lines" because the major expertise of the investigators who carried out the study was in the creation of cancer cell lines, and they wanted to see if they could perform assays on these cell lines for use in patient therapy. The result was that they were able to test successfully only 22% of specimens received, including only 7% of primary lesions. This contrasts with a 75% overall success rate reported by earlier investigators who used the same assay system in fresh tumors, and a routinely obtained success rate of more than 95% using improved methods available today.

Cell culture analysis using the functional profiling method differs from other tests in that it assesses the activity of a drug upon combined effect of all cellular processes, using several metabolic (cell metabolism) and morphologic (structure) endpoints, at the cell "population" level (rather than at the "single cell" level), measuring the interaction of the entire genome.

I believe that improving cancer patient treatment through cellular-based testing will offer predictive insight into the nature of an individual's particular cancer and enable oncologists to prescribe treatment more in keeping with the heterogeneity of the disease. The biologies are very different and the response to given drugs is very different. Having a good tumor-drug match not only would improve survival rates, it would be cost-effective, and the high cost of the newer cancer therapies reinforces the necessity of choosing the right therapy the first time around.

In cases where there are several equivalent treatments available, patients can benefit from these assay-testing results as a supplement to other clinical data when deciding on a treatment option. There may be some resistance to this approach from some oncologists, but no one has ever shown that harm could result from the use of these technologies.

Why Community Oncology Can Benefit From Cell Function Analysis

http://cancerfocus.org/forum/showthread.php?t=574

World-renowned oncologists are challenging the cancer industry to recognize a chemo-screening test (CSRA) that takes the "guesswork" out of drug selection. One of the reasons medical oncologists don’t like in vitro chemosensitivity tests is that they may be in direct competition with the randomized controlled clinical trial paradigm.

http://vimeo.com/72389724
__________________
Gregory D. Pawelski

#3 | 02-02-2013, 07:06 PM | gdpawel (Moderator)
Chemotherapy Dosing

Silvana Martino, M.D.

In the August 10, 2012 issue of the Journal of Clinical Oncology, Dr. John Marshall and Dr. Otto Ruech from Georgetown University in Washington, DC, raised the question of how the approved and recommended dose is chosen for each chemotherapy drug, as well as for some of the newer non-chemotherapy drugs.

For each drug there is a recommended dose, which is generally multiplied by an individual’s meters-squared number. A person’s meters-squared (m2) number is a measure of body surface area calculated from their height and weight.

For example, in the Adriamycin + Cytoxan regimen, the dose of Adriamycin is 60 mg per m2. Therefore, if you are 1.5 m2, one multiplies 60 times 1.5 to calculate a total dose of 90 mg for that individual.
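
To make the arithmetic concrete, here is a minimal sketch of a body-surface-area (BSA) dose calculation. The post does not say how the m2 figure is computed; the Mosteller formula used below is one common method, and the 160 cm / 50 kg patient is an illustrative assumption.

```python
import math

def bsa_mosteller(height_cm: float, weight_kg: float) -> float:
    """Body surface area in m^2 by the Mosteller formula: sqrt(height_cm * weight_kg / 3600)."""
    return math.sqrt(height_cm * weight_kg / 3600.0)

def total_dose_mg(dose_mg_per_m2: float, bsa_m2: float) -> float:
    """Total dose = recommended per-m^2 dose multiplied by the patient's body surface area."""
    return dose_mg_per_m2 * bsa_m2

# Illustrative patient: 160 cm, 50 kg works out to roughly 1.49 m^2
print(round(bsa_mosteller(160, 50), 2))   # 1.49

# Worked example from the post: Adriamycin at 60 mg/m^2 for a 1.5 m^2 patient
print(total_dose_mg(60, 1.5))             # 90.0 mg
```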

How is that first number arrived at, the Adriamycin number? That number, which is different for every drug, is arrived at experimentally. When a drug is first studied in people, one begins at a low dose calculated from studies done in animals.

As several individuals are treated at the low dose, one observes and records side effects. If there are no intolerable side effects, the dose is increased and a new group of people are treated with the higher dose.

Again, if no intolerable side effects are noted, the dose is further increased for administration to another group. This process continues with increasing doses being used until a dose is reached where the toxicity is considered too high.

The dose just below this toxic level is then chosen as the maximum tolerated dose and recommended for use. This schema, which has been used for many years, is based on the concept that more is better: if we could only manage to give a higher dose, we would be more effective against the tumor.
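
As a rough illustration of the escalation logic just described (a deliberate simplification, not any formal trial design such as 3+3), the sketch below walks up a hypothetical dose ladder until toxicity becomes intolerable and reports the dose one step below as the MTD.

```python
def find_mtd(dose_levels, cohort_has_intolerable_toxicity):
    """Return the highest tolerated dose: the level just below the first intolerably toxic one.

    dose_levels -- ascending doses given to successive cohorts
    cohort_has_intolerable_toxicity -- callable(dose) -> True if that cohort's toxicity is too high
    """
    mtd = None
    for dose in dose_levels:
        if cohort_has_intolerable_toxicity(dose):
            break          # toxicity considered too high: stop escalating
        mtd = dose         # this dose was tolerated; it becomes the current candidate MTD
    return mtd

# Hypothetical ladder where toxicity becomes intolerable at 90 units and above
print(find_mtd([30, 45, 60, 75, 90, 105], lambda dose: dose >= 90))   # -> 75
```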

What may appear to be a logical conclusion may not actually be a correct biological conclusion. It is not clear that anything in nature really works this way.

For example, if you give a plant too little water, it will not do well. Likewise, if you give it too much water it will also not do well. There is a level between the two extremes that works best. It is likely that drug dosing works in the same manner.

The authors of this article question the method by which drug doses have been chosen in the past, and suggest that other approaches should be considered. Ideally, some aspect of how a drug interacts with its target in a cell should guide us as to what dose achieves the desired effect.

Though presently this is difficult to do with most drugs, it must be the ultimate goal. The concept of using toxicity to guide dose calculations cannot be the best solution.

Reference: Marshall JL, Ruech OJ. Maximum-Tolerated Dose, Optimum Biologic Dose, or Optimum Clinical Value: Dosing Determination of Cancer Therapies. Journal of Clinical Oncology, Vol 30, No 23, August 10, 2012: pp 2815-2816.
__________________
Gregory D. Pawelski
#4 | 02-15-2013, 02:50 AM | gdpawel (Moderator)
Chemotherapy Recommendations Based on Published Reports of Clinical Trials

Ralph W. Moss, Ph.D.

An important paper was published in the journal Clinical Oncology. This meta-analysis, entitled “The Contribution of Cytotoxic Chemotherapy to 5-year Survival in Adult Malignancies,” set out to accurately quantify and assess the actual benefit conferred by chemotherapy in the treatment of adults with the commonest types of cancer. Although the paper has attracted some attention in Australia, the native country of the paper’s authors, it has been greeted with complete silence on this side of the world.

All three of the paper’s authors are oncologists. Lead author Associate Professor Graeme Morgan is a radiation oncologist at Royal North Shore Hospital in Sydney; Professor Robyn Ward is a medical oncologist at University of New South Wales/St. Vincent’s Hospital. The third author, Dr. Michael Barton, is a radiation oncologist and a member of the Collaboration for Cancer Outcomes Research and Evaluation, Liverpool Health Service, Sydney.

Prof. Ward is also a member of the Therapeutic Goods Administration of the Australian Federal Department of Health and Ageing, the official body that advises the Australian government on the suitability and efficacy of drugs to be listed on the national Pharmaceutical Benefits Schedule (PBS), roughly the equivalent of the US Food and Drug Administration.

Their meticulous study was based on an analysis of the results of all the randomized, controlled clinical trials (RCTs) performed in Australia and the US that reported a statistically significant increase in 5-year survival due to the use of chemotherapy in adult malignancies. Survival data were drawn from the Australian cancer registries and the US National Cancer Institute’s Surveillance Epidemiology and End Results (SEER) registry spanning the period January 1990 until January 2004.

Wherever data were uncertain, the authors deliberately erred on the side of over-estimating the benefit of chemotherapy. Even so, the study concluded that overall, chemotherapy contributes just over 2 percent to improved survival in cancer patients.

Yet despite the mounting evidence of chemotherapy’s lack of effectiveness in prolonging survival, oncologists continue to present chemotherapy as a rational and promising approach to cancer treatment.

“Some practitioners still remain optimistic that cytotoxic chemotherapy will significantly improve cancer survival,” the authors wrote in their introduction. “However, despite the use of new and expensive single and combination drugs to improve response rates…there has been little impact from the use of newer regimens” (Morgan 2005).

The Australian authors continued: “…in lung cancer, the median survival has increased by only 2 months [during the past 20 years, ed.] and an overall survival benefit of less than 5 percent has been achieved in the adjuvant treatment of breast, colon and head and neck cancers.”

The results of the study are summarized in two tables in the paper: Table 1 shows the results for Australian patients; Table 2 shows the results for US patients. The authors point out that the similarity of the figures for Australia and the US makes it very likely that the recorded benefit of 2.5 percent or less would be mirrored in other developed countries as well.

Basically, the authors found that the contribution of chemotherapy to 5-year survival in adults was 2.3 percent in Australia, and 2.1 percent in the USA. They emphasize that, for reasons explained in detail in the study, these figures “should be regarded as the upper limit of effectiveness” (i.e., they are an optimistic rather than a pessimistic estimate).

How is it possible that patients are routinely offered chemotherapy when the benefits to be gained by such an approach are generally so small? In their discussion, the authors address this crucial question and cite the tendency on the part of the medical profession to present the benefits of chemotherapy in statistical terms that, while technically accurate, are seldom clearly understood by patients.

For example, oncologists frequently express the benefits of chemotherapy in terms of what is called “relative risk” rather than giving a straight assessment of the likely impact on overall survival. Relative risk is a statistical means of expressing the benefit of receiving a medical intervention in a way that, while technically accurate, has the effect of making the intervention look considerably more beneficial than it truly is. If receiving a treatment causes a patient’s risk to drop from 4 percent to 2 percent, this can be expressed as a decrease in relative risk of 50 percent. On face value that sounds good. But another, equally valid way of expressing this is to say that it offers a 2 percent reduction in absolute risk, which is less likely to convince patients to take the treatment.

It is not only patients who are misled by the overuse of relative risk in reporting the results of medical interventions. Several studies have shown that physicians are also frequently beguiled by this kind of statistical sleight of hand. According to one such study, published in the British Medical Journal, physicians’ views of the effectiveness of drugs, and their decision to prescribe such drugs, were significantly influenced by the way in which clinical trials of these drugs were reported. When results were expressed as a relative risk reduction, physicians believed the drugs were more effective and were strongly more inclined to prescribe than they were when the identical results were expressed as an absolute risk reduction (Bucher 1994).

Another study, published in the Journal of Clinical Oncology, demonstrated that the way in which survival benefits are presented specifically influenced the decision of medical professionals to recommend chemotherapy. Since 80 percent of patients choose what their oncologist recommends, the way in which the oncologist perceives and conveys the benefits of treatment is of vital importance. This study showed that when physicians are given relative risk reduction figures for a chemotherapy regimen, they are more likely to recommend it to their patients than when they are given the mathematically identical information expressed as an absolute risk reduction (Chao 2003).

The way that medical information is reported in the professional literature therefore clearly has an important influence on the treatment recommendations oncologists make. A drug that can be said, for example, to reduce cancer recurrence by 50 percent, is likely to get the attention and respect of oncologists and patients alike, even though the absolute risk may only be a small one – perhaps only 2 or 3 percent – and the reduction in absolute risk commensurately small.

http://www.icnr.com/articles/ischemotherapyeffective.html
__________________
Gregory D. Pawelski

#5 | 02-22-2013, 11:56 AM | gdpawel (Moderator)
Understanding Medical Studies

How can I know if a particular treatment will make a big or little difference in my life?

Understanding the numbers in a research study can be very confusing. They're based on all kinds of statistical calculations that you may not want, or need, to understand. There's one very important thing you should know, however, and that's the difference between relative and absolute statistical comparisons. Sound complicated? It is! Even doctors can get confused. But unless you know the difference, you may not be able to understand how helpful a particular treatment may be for you.

Relative vs. absolute comparisons in real life

Here's an everyday example of how these comparisons work: Let's say that your son's room will be messy seven days a week unless you do something. One thing you can try is to yell every day, "Clean up your room right now! Do you hear me? I said NOW!!"

With the "yelling treatment" under way, he cleans up his room three days a week. The result: Yelling compared to no yelling (the "control group") results in a RELATIVE drop in the risk of a messy room by 43%—from seven days a week down to four days a week (3 is 43% of 7).

The difference in the number of messy-room days before and after yelling is three days a week (7 − 4 = 3). This number represents the ABSOLUTE reduction in the risk of a messy room.

Was it really worth all that yelling for three more days of a clean room each week? To answer that, you have to consider the side effects: you're annoyed that you need to yell, your son is annoyed too, and your throat hurts!

Understanding the results of medical studies

Study design: Consider a hypothetical study, comparing the success of treatment A and treatment B in lowering blood pressure.
In the study, equal numbers of people were given treatment A and treatment B. Their blood pressure was measured every day over the two-year course of the study.

Study results: Researchers found that treatment A lowered people's blood pressure by an average of 10%. Treatment B lowered people's blood pressure by an average of 12%.

RELATIVE comparison of the results: In relative terms, treatment B was 20% better at lowering blood pressure than treatment A. How do we get 20%? The difference between 12 and 10 is 2, and 2 is 20% of 10.

Therefore, the makers of treatment B can report that treatment B is 20% better at lowering blood pressure than treatment A.

ABSOLUTE comparison of the results: In absolute terms, the difference between the two treatments in lowering blood pressure is 2%. How do we get 2%? Simply by subtracting 10% from 12%.

The makers of treatment A can report that if you take treatment B, your blood pressure will be only about 2% lower than if you take treatment A.

Both statements are true. They're just based on different ways of comparing the same numbers. That's what makes this so tricky. Generally, when people want to make results sound more impressive, they use the relative comparison. When they want to suggest that the difference between two treatments is small, they use the absolute numbers.

Here's another example of how relative and absolute comparisons work, this time with reference to the drugs' side effects:

More study results: Researchers found that people who took treatment A had a 0.5% chance of having a stroke, and people who took treatment B had a 1% chance.

RELATIVE comparison of side effects: In the study, the risk of getting a stroke with treatment A was 50% less than with treatment B. How do we get 50%? The difference between 1 and 0.5 is 0.5, which is 50% of 1.

Therefore, the makers of treatment A can rightfully claim that, according to the study, you'll have a 50% lower chance of having a stroke if you take treatment A than if you take treatment B.

ABSOLUTE comparison of side effects: In absolute terms, the difference between the two treatments in causing stroke is 0.5% (1%–0.5% = 0.5%).

Mixing relative and absolute results

The makers of treatment B will want to say that treatment B is 20% more effective in lowering blood pressure than treatment A (RELATIVE comparison) and that it increases your chance of having a stroke by only 0.5% (ABSOLUTE comparison).
This is a classic example of how relative and absolute comparisons can be used in a confusing way. The claim made about treatment B uses both relative and absolute comparisons in the same sentence.

To avoid the confusion, the claim would have to include either two relative comparisons, or two absolute ones. Or, it could include all four.

Two relative comparisons: "Treatment B is 20% better at lowering blood pressure, but if you take treatment A you have a 50% lower chance of having a stroke."

Two absolute comparisons: "If you take treatment B your blood pressure will probably go down by about 2% more than it would if you took treatment A, and your chance of having a stroke would increase by about 0.5%."
You can see how treatment B can look far superior when relative and absolute comparisons are mixed.
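
For readers who want to check the arithmetic, here is a minimal sketch that reproduces both kinds of comparison for the hypothetical treatment A/B numbers used above.

```python
def absolute_difference(a: float, b: float) -> float:
    """Absolute comparison: the plain difference, in percentage points."""
    return b - a

def relative_difference(a: float, b: float) -> float:
    """Relative comparison: the difference expressed as a percentage of the first value."""
    return (b - a) / a * 100.0

# Blood-pressure reduction: treatment A lowers it 10%, treatment B 12%
print(absolute_difference(10, 12))    # 2     -> B is better by 2 percentage points (ABSOLUTE)
print(relative_difference(10, 12))    # 20.0  -> B is 20% better than A (RELATIVE)

# Stroke risk: treatment A 0.5%, treatment B 1%
print(absolute_difference(1.0, 0.5))  # -0.5  -> A's risk is 0.5 percentage points lower (ABSOLUTE)
print(relative_difference(1.0, 0.5))  # -50.0 -> A's risk is 50% lower than B's (RELATIVE)
```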

Comparing oranges and oranges

In relative comparisons we're comparing the PERCENTAGES of oranges, and in absolute comparisons, we're comparing the NUMBER of oranges. Imagine two groups of oranges: group A has 10 oranges and group B has 12.

But what's even more confusing is that there are actually three ways to compare these two numbers: one absolute, and two relative.

It's easy to see that the ABSOLUTE DIFFERENCE between the two groups is 2. B is 2 greater than A, and A is 2 less than B.

What about the difference in percentages (the RELATIVE DIFFERENCE)?

2 is 20% of 10.

That means 12 is 20% greater than 10, or group B is 20% greater than group A.

2 is also 16.6% of 12.

That means 10 is 16.6% less than 12, or group A is 16.6% less than group B.

So here the absolute difference is 2, but we have two relative differences: 20% and 16.6%.

Comparing percentages

Now let's say each orange also represents the mathematical concept of 1%. Group A is 10%. Group B is 12%.

Again, the ABSOLUTE DIFFERENCE between the two groups is 12% minus 10%, which equals 2%. The calculation is the same as with the oranges. You just add a percent sign at the end.

In RELATIVE terms:

When the oranges are percentages, the comparison is exactly the same as when the oranges are just oranges: 2% is 20% of 10%, so group B is 20% greater than group A.

2% is also 16.6% of 12%. So when we compare 10% and 12%, group A is still 16.6% smaller than group B.

A good way to look at the relative comparisons is to ignore the percent signs. Think of the numbers as representing groups of oranges. The relative comparison answers the question: By what percentage is one group greater (or smaller) than the other?
The only difference is that before we were comparing oranges, and here we're comparing percentages.

Take-home message: It's extremely important to know how to figure out whether a particular treatment can help you, and how much help it can provide. Decisions about taking nothing, starting a new medication, or choosing one treatment over another all require an understanding of relative and absolute risk.

Oranges, percentages, comparisons—it might still seem impossibly confusing. It takes time and practice to really understand what's going on. But it's worth the effort. Once you "get it," you'll really be able to know whether a new treatment is much better than an old one, just slightly better, or no better at all.

Next time you come across a statement in a study about "absolute risk" or "relative risk," take a minute to return to this explanation. Read it through again and try to apply it to the numbers in the study. The more times you do it, the easier it will be.

http://healthjournalism.org/uploads/publications/Covering-Medical-Research.pdf
__________________
Gregory D. Pawelski
#6 | 03-17-2013, 04:59 PM | gdpawel (Moderator)
Plenty of pitfalls in reporting on medical studies

Paul Tullis is an independent journalist in Los Angeles. He attended Health Journalism 2013 on an Association of Health Care Journalists (AHCJ)-California Health Journalism Fellowship, which is supported by The California HealthCare Foundation.

Seventy percent of news articles on medical studies fail to discuss the costs of the treatments studied, quantify potential harms and benefits, or evaluate the quality of evidence, said Gary Schwitzer, publisher of Health News Review (HNR), which has reviewed 1,800 such stories over the past seven years.

“Seventy percent of articles make things look terrific, risk-free and without a price tag,” Schwitzer said at a panel on the first day of Health Journalism 2013. “It strikes me that we can do a better job helping to educate patients, health consumers, news consumers and voters.”

The criteria Schwitzer mentioned are among 10 that Health News Review presents as crucial elements to any article about medical studies.

HNR publishes “systematic, objective, criteria-driven reviews” of news articles and broadcast segments. Each is looked at by a journalist and someone with an advanced degree – a medical degree, doctorate of philosophy or a master’s in public health.

Another mistake that Schwitzer’s reviewers frequently cite is “idolatry of the surrogate marker.” This comes from not understanding, or not reporting, that surrogate outcomes, such as tumor shrinkage, do not always translate into meaningful outcomes, e.g. longer life. Others include failure to recognize that publication in a medical journal does not mean that the findings are important, or even true. Medical journals retract about 400 articles a year, said Ivan Oransky, M.D., executive editor of Reuters Health, co-founder of Retraction Watch and founder of Embargo Watch.

“Not all journals are created equal,” Oransky said. “If you’re a freelance writer, do you go to the worst-paying and least-visible outlet first? Of course not, and neither do scientists.”

Oransky’s talk was titled “How Not To Get It Wrong,” and he delivered a list of things reporters can do to avoid common mistakes. One is to always read the whole study, not just the abstract. Recognizing deadline pressures and the near-audible groans in the room, Oransky said that at Reuters, his reporters typically write two stories a day and when they report on studies, they always read the whole study and seek comments from experts. (Health News Review has a list of scientists who have averred that they have no conflict of interest and will offer quotes for reporters on study design and methodology.)

Only by reading the study, Oransky said, can a reporter learn whether a study was well-designed. Health News Review’s “hierarchy of evidence” places meta-analyses at the top, with randomized double-blind control trials below that, and cohort studies at number three. At the bottom are ideas, editorials and opinions.

(This reporter was reminded at this point of a quotation delivered to him by Dr. Robert Fox of the Cleveland Clinic, for a story written last year about patient-driven research in MS. “People without a scientific background,” he said, “often view all scientific papers with equal weight. Well, scientists don’t.”)

Another necessary question to ask is whether the study was on humans. “It’s remarkable there are any mice left with cancer, depression or restless leg syndrome,” Oransky wryly noted.

He also offered a couple of smart questions reporters can ask of study authors. “I try to impress sources that I’m going to ask them meaningful questions, and that I expect them to answer them, because as a reporter I’m skeptical but fair.” Do a “find” search in the paper for “power” to get the power calculation — a measure of whether the study was big enough to answer the question it’s researching. “If you don’t find one, ask,” Oransky said. “If they were unable to recruit all the subjects they wanted, why not?”
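
As a rough illustration of what a power calculation represents (not a tool from the article), here is a minimal sketch for a two-arm comparison of response rates using a normal approximation; the 30% vs. 45% rates, 100 patients per arm, and 0.05 alpha are illustrative assumptions.

```python
from math import sqrt
from scipy.stats import norm

def power_two_proportions(p1: float, p2: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided test comparing two proportions (normal approximation)."""
    z_crit = norm.ppf(1 - alpha / 2)                       # critical value for the chosen alpha
    se = sqrt(p1 * (1 - p1) / n_per_group + p2 * (1 - p2) / n_per_group)
    return norm.cdf(abs(p1 - p2) / se - z_crit)            # chance of detecting the true difference

# Illustrative: 30% vs 45% response rates with 100 patients per arm -> roughly 60% power
print(round(power_two_proportions(0.30, 0.45, 100), 2))
```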

Other smart questions include, “Were those your primary endpoints?” (i.e., did the researchers not find what they were looking for and instead just publish some data they found interesting?) And “Is that endpoint clinically significant?” He gave the example of a reduction of blood pressure from 120/80 to 119/79 as something that’s statistically significant, but unlikely to reduce the risk of stroke or heart attack.

“Journals were never meant to be a source of daily news, but as a conversation between scientists,” Schwitzer said. “Yet that’s what they’ve become.”

“If you cover health, do you only cover studies and clinical news?” he concluded. “Ask yourself: How many of your stories are about treatments, tests, products and procedures? Do you think you might be reporting too much of this?” He pointed to a session at AHCJ 2013 on covering health delivery and health insurance for the unemployed and homeless as an example of other stories reporters on the health beat might cover, to perhaps greater public service. “Have you spoken to your editor about this? Can we help? Let us know.”

A slim 70-page guide is available free to AHCJ members, and the association has a fast-growing core topic area with resources on covering medical research.

http://healthjournalism.org/medicalstudies
__________________
Gregory D. Pawelski
#7 | 05-21-2015, 11:00 PM | gdpawel (Moderator)
Maximum Tolerated Dose

Robert A. Nagourney, M.D.

Cancer patients and their physicians can find themselves at the wrong end of many scientific discoveries. For example, the drug capecitabine, sold commercially as Xeloda, was originally marketed at a daily dose of 2500 mg/m2 given for two weeks.

This schedule, developed by the pharmaceutical investigators, is known as the maximum tolerated dose (MTD), and it performed well against other regimens for breast and colon cancer. With an FDA approval in hand, oncologists began administering the drug on the recommended schedule.

It did not take long before physicians and their patients realized that 2500 mg/m2/day was more than many patients could tolerate. Hand-foot syndrome (an inflammation of the skin of the palms and soles), mucositis (oral ulcers), myelosuppression (lowered blood counts) and diarrhea were all observed. Clinical physicians immediately began to de-escalate doses. Soon these astute practitioners established more appropriate dose schedules, and the drug found its rightful place as a useful therapeutic in many diseases.

What was interesting was that activity continued to be observed. It appeared that the high dose schedule was simply toxic and that lower doses worked fine, with fewer side effects.

Modern targeted agents have been introduced over recent years with dose schedules reminiscent of capecitabine. The drug sunitinib, approved for the treatment of renal cell carcinoma, is given at 50 mg daily for four weeks in a row, followed by a two-week rest. Despite good activity, toxicities like mucositis and skin rash often set in by the third week. What remained unclear was whether these schedules were warranted. A recent report in the Annals of Oncology examined this very question. In a retrospective analysis of patients with kidney cancer, the physicians found that lowering the dose of sunitinib preserved activity but reduced toxicity.

As a practitioner, I have long reduced my patients' schedule of sunitinib to two weeks on, one week off, or even 11 days on, 10 days off. In one patient whom I treated for a gastrointestinal stromal tumor (GIST), I achieved a durable complete remission with just 25 mg/day, given seven days each month, a remission that persists to this day, seven years on.

We are in a new world of targeted therapy, one in which very few people understand the kinetics, pharmacodynamics and response profiles of patients for novel drugs. In our laboratory, favorable dose response curves often suggest that many agents could be administered at lower doses. More interestingly, some patients who do not carry the “targets” for these drugs nonetheless respond. This has broad implications for multi-targeted inhibitors like sunitinib that can influence multiple targets simultaneously.

As so often happens, it is the nimble clinical physicians, with their feet on the ground and confronting the very real needs of their patients, who can outmaneuver and outthink their academic colleagues. The trend toward consolidation in medicine, and the absorption of clinical practices into hospital groups all using standardized algorithms, risks stifling the very independence and creativity of practicing oncologists that has proven both effective and cost-effective for our patients and our medical system at large.
__________________
Gregory D. Pawelski
#9 | 05-21-2015, 11:06 PM | gdpawel (Moderator)
True Synergy

True synergy is rather uncommon in most adult solid tumors. Most drug combinations in diseases such as cancer are merely additive, where the whole equals the sum of its parts, and not synergistic.

In cases where drugs are only additive and not synergistic, nothing is learned by testing the drugs in combination over what is learned by testing them separately. So drugs are tested in combination only in cases where there is a realistic possibility of seeing true synergy.

The best combinations are those in which there is true synergy and in which the toxicities of the drugs in the combination are non-overlapping, so that full doses of each drug may be given safely.

The trouble with combination chemotherapy is that you often can't give full doses of all the drugs when you give them together. They have overlapping toxicities, which means you need to cut the doses, sometimes down to "homeopathic" dose levels.

Pharmaceutical companies have been attracted to studies looking at the maximum tolerated dose of any treatments. Cancer sufferers have been taking doses of expensive and potentially toxic treatments that are possibly well in excess of what they need.

Many of the highly expensive targeted cancer drugs may be just as effective and produce fewer side effects if taken over shorter periods and in lower doses. The search for minimum effective doses of treatments should be one of the key goals of cancer research.

Molecular testing methods detect the presence or absence of selected gene mutations which theoretically correlate with single-agent drug activity (e.g., Avastin or Sutent). These tests are performed using material from dead, fixed, or frozen cancer cells, which are never exposed to anti-cancer agents.

Cell culture methods assess the net effect of all inter-cellular and intra-cellular processes occurring in real-time when cells are exposed to anti-cancer agents. Tests are performed using intact, living cancer cells plated in microclusters.

Cell culture methods allow for testing of different drugs within the same class and drug combinations to detect drug synergy and drug antagonism.

Literature Citation: Eur J Clin Invest 37 (suppl. 1):60, 2007
__________________
Gregory D. Pawelski