Becoming Your Own Advocate

When the Answer Isn’t So Clear: Interpreting the Results of Medical Research

There are few things in life that are black and white, and medical research is certainly no exception. Being able to skillfully evaluate the authority, usefulness, and reliability of medical information is a crucial step toward making informed decisions in one’s own healthcare. While discerning what’s what and making sense of all the jargon can be intimidating, in this article we will address some of the important issues surrounding medical research and give you useful tools to help you review and draw conclusions from evidence-based literature. We’ll also describe some of the “red flags” to watch for.

A Daily Dilemma
We make thousands of decisions every day, big and small. In medical diagnosis and treatment, these decisions can have profound implications for a patient’s future health and longevity: “Which treatment do I choose?”, “Do those test results mean I should change my treatment?” and “Is eating that bacon a bad idea?” While each of these decisions is often influenced by factual medical data, the way that information is interpreted and presented can be unclear or misleading.

How information is reported: bad writing, bad reporting
Many people correctly perceive that the quantity of available medical information is overwhelming. It is therefore often desirable to have a succinct summary of a medical report that gives the purpose, approach, and result of the study in question. The abstract (summary) section of a published research article fulfills this purpose and is the section most likely to be read. But up to a third of the abstracts of medical studies “contain data that were either inconsistent with corresponding data in the article’s body (including tables and figures) or not found in the body at all,” even in major medical journals.(Pitkin, Branagan et al. 1999) Reading the full text of a study can usually clear up these discrepancies, but doing so requires significant time and experience.

Who’s paying for the research?
It has long been known that trials funded by a drug manufacturer are often biased.(Barden, Derry et al. 2006) C. Seth Landefeld, a researcher at the University of California, San Francisco, found that among abstracts presented at a national medical society meeting, industry support increased by 30-fold the odds that a study’s result would be favorable to the drug being researched.(Landefeld 2004)

Medical sensationalism
It should perhaps not come as a surprise that this financial bias is often not acknowledged by news media reporting on the results of clinical studies.(Hochman, Hochman et al. 2008) Media reports, perhaps in their zeal to grab attention with a great headline, often fail to mention important facts about studies on which they report. A review of 187 media reports found that 34% did not mention study size, 53% did not mention or were ambiguous about the study design, 40% did not quantify the main result, and only 29% mentioned the possibility of side effects.(Woloshin and Schwartz 2006) Given these problems in reporting the results of research, it’s often difficult to answer fundamental questions such as, “When is a number a correct number?” and “When is a number an important number?”

The biasing effect of commercial funding extends even further, to the prescribing of medications. Much of the prescribing information read by medical providers, along with the lavish conferences where they learn about these medications, is also funded by major pharmaceutical manufacturers.(Rutledge, Crookes et al. 2003) Fortunately, efforts are underway to remove commercial funding bias from the way drug information is presented to medical trainees.(McMahon, Neubauer et al. 2003) Both researchers and the World Health Organization have begun to call for the establishment of independent grant-making institutes to fund pharmaceutical research.(Hardon 2003)

Math and medical literacy
All too often, one hears the phrase “I’m not good at math.” Math skills aside, medical writing is still writing, and should follow basic principles of clarity and common sense. A well-written medical article presents its results plainly and avoids unnecessary technical jargon.

A considerable amount of research has been published that focuses not only on how research should ideally be presented, but also on how well readers actually understand what they’re reading (this is especially important among those who should know how to read research, such as practicing clinicians). Whether for patient or doctor, math illiteracy is unfortunately very common and can significantly undermine informed decision making.(Gaissmaier and Gigerenzer 2008) The results of a medical study on treatment effectiveness, for example, are much better understood when presented as frequencies (e.g. “4 out of 5 people”) rather than as probabilities (e.g. “80%”). Although the difference might seem trivial, the way research is presented matters and can be an example of “when framing influences judgment.”(Gigerenzer 2003)
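To see why framing matters, consider that the same result can be stated as a relative risk reduction, an absolute risk reduction, or a natural frequency. The sketch below uses made-up numbers, not figures from any real trial, to show how a modest effect can be made to sound dramatic:

```python
# Hypothetical numbers for illustration only -- not from any real study.
def framings(control_risk, treated_risk, denom=100):
    """Express the same treatment effect three different ways."""
    arr = control_risk - treated_risk   # absolute risk reduction
    rrr = arr / control_risk            # relative risk reduction
    freq = f"{arr * denom:g} fewer out of every {denom} people treated"
    return rrr, arr, freq

# Suppose 2 in 100 untreated people have the outcome, vs 1 in 100 treated.
rrr, arr, freq = framings(0.02, 0.01)
print(f"Relative risk reduction: {rrr:.0%}")   # sounds dramatic: 50%
print(f"Absolute risk reduction: {arr:.0%}")   # more modest: 1%
print(freq)                                    # plainest of all
```

A headline built on the 50% relative figure sounds far more impressive than “1 fewer out of every 100 people treated,” even though all three statements describe exactly the same data.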

Closely related to math literacy is fundamental medical literacy. A substantial volume of information is involved in any domain of medical care, and patients diagnosed with rare medical conditions often find themselves knowing more about their conditions than some of the medical providers they consult. This is understandable: who would be more motivated than the person diagnosed with the problem in question? Researchers have examined a concept they call the “minimum medical knowledge” needed to understand typical signs and risk factors of common medical problems such as heart attack, stroke, chronic obstructive pulmonary disease, and HIV/AIDS. They found that people with a university degree, some medical background, or personal illness experience had only moderately higher “minimum medical knowledge” than those without these advantages. Among other things, this suggests that no one person can possibly have all the answers, and it should encourage patients in a medical consultation to ask questions about their diagnosis and treatment.

Am I like those people?
When considering the results of a medical study, it is important to ask whom the study included as volunteers. Clinical trials often select a very narrow group of participants; quite often the age, sex, or other important characteristics of trial volunteers are representative neither of the general population nor of the specific individual facing a healthcare decision that the research is meant to inform.

In a strident call for equality in research, Dr. Somnath Saha, a researcher at the Portland VA Medical Center, has pointed out “a historical bias favoring white men”: “As with most other institutions in the United States, medical research no longer actively excludes women and minorities. But the history of these institutions, the way they were designed and built – predominantly by and for white men – slants them in a way that continues to limit access for other groups.”(Saha 2009) Dr. Saha is careful to point out that “the distrust of research is usually applied to the research enterprise – the people and institutions carrying out research – not to the concept of research per se.” People want the results of research, but they expect it to be conducted in a rational and sensible way.

Simple tools for presenting information clearly
With the vast sums of money spent on medical research, it’s reasonable to expect that considerable time and effort would go into making sure the results are clear and well-presented. Although this is not always the case, there is hope. For the past twenty years, Edward Tufte has been writing books advocating the clear presentation of information, including Visual Explanations, Envisioning Information, The Visual Display of Quantitative Information, and Data Analysis for Politics and Policy. His principles have slowly begun to improve the clarity with which the results of medical research are presented.(De Amici, Klersy et al. 1997)

Another important contributor to clearer scientific communication is Gerd Gigerenzer of the Max Planck Institute for Human Development in Germany, who has written numerous tutorials suggesting how researchers can present written information clearly. One such paper, addressing how doctors can “improve the presentation of statistical information so that patients can make well informed decisions,” is available on our website.(Gigerenzer and Edwards 2003)

Why is this important?
In the context of a clinical visit, a process of shared decision-making often leads to higher satisfaction with the visit.(Mathieu, Barratt et al. 2007; Nagle, Gunn et al. 2008) In order for that process to be effective, beneficial, and truly shared, it’s important that the information being used is understandable by both patient and practitioner alike.(Hoffrage, Lindsey et al. 2000; Gigerenzer and Edwards 2003)

Guidelines for evaluating research results for yourself
In evaluating the quality and reliability of medical research, the ideal is information that is objective and unbiased, so that the pros and cons of a particular test or treatment are clear. Medical studies are reported first in the primary scientific literature (conferences or journals) and second in the general media, and most people will hear of a study first in the general media. Here are some pointers for weighing that information:

» Is the article’s headline or writing sensationalized in a way that glosses over essential weaknesses or disadvantages of the study results?

» Does the article provide background information that is helpful in evaluating whether the new finding is relevant or important? For example, does the article address what’s already known about the topic and whether this new study represents progress or provides new cautions?

» Does the article being presented relate in some way to advertisers associated with the show, publication, or website? Is the article identified as an advertisement?

» Is the news presented in a balanced way that also highlights the new treatment’s side effects? Are alternate points of view presented in interviews with other researchers or clinicians?

When reading the original results of a study, for example in a medical journal, here are some additional guidelines to assess the usefulness of that information:

» What type of study is it: laboratory or cell culture, animal, or human? The results of laboratory and animal research do not always translate into success in treating people.

» Was the study presented at a meeting or published in a peer-reviewed medical journal? Although meeting results are more current, treatments reported at conferences have not yet been subjected to as much scrutiny as they would receive in a journal. If results are posted on the investigator’s web site or in a company’s brochure, the same caution is justified.

» Who conducted the study? Although researchers at well-known universities or research groups are typically considered to be more objective than individuals without institutional affiliations, this is by no means always the case. Many questionable relationships between university-based research programs and their funding sources have been reported. These relationships are often difficult to assess or clearly understand.

» How was the study funded? Funding from government, independent grant agencies, or foundations that rely on a peer-reviewed selection process to critically evaluate and select studies to be funded are preferred. Ideally, commercial sponsorship of research will be disclosed in the publication.

» How large was the study? Of the people originally recruited, how many completed the trial? Results of small studies do not always accurately predict how well the treatment or test will perform in larger groups of people.

» Who were the people who volunteered for the study, in terms of age, sex, ethnic group, medical history, etc.? This is important in extrapolating the results to a specific person.

» Is this a new treatment being first reported or has the approach been tested by other researchers as well? Results that are validated by repeat trials tend to be more reliable.

» In drug trials, were side effects mentioned and, if so, how do potential side effects compare to the potential benefits?
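One common yardstick for weighing benefits against side effects is the “number needed to treat” (NNT) and its counterpart, the “number needed to harm” (NNH). Here is a minimal sketch of the arithmetic, using invented event rates purely for illustration:

```python
# Illustrative arithmetic only -- the rates below are invented, not real data.
def nnt(control_event_rate, treated_event_rate):
    """How many people must be treated for one additional person to benefit."""
    return 1 / (control_event_rate - treated_event_rate)

def nnh(treated_side_effect_rate, control_side_effect_rate):
    """How many people must be treated for one additional person to be harmed."""
    return 1 / (treated_side_effect_rate - control_side_effect_rate)

# Suppose 10% of untreated patients have heart attacks vs 8% of treated ones,
# while 4% of treated patients get a side effect vs 1% of untreated ones.
print(round(nnt(0.10, 0.08)))  # 50 people treated to prevent one heart attack
print(round(nnh(0.04, 0.01)))  # 33 people treated for one extra side effect
```

Comparing the two numbers side by side gives a rough, intuitive sense of the trade-off that a percentage alone often obscures.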

Other useful study details to consider
Components of study design that help research meet these standards include randomization and blinding. Typically, randomized trials provide more reliable results. When participants are randomly assigned to a control group (receiving the known medication or procedure, or a placebo) and a treatment group (receiving the new medication, procedure, herbal formula, etc.), the assignment cannot be shaped by an individual’s or an institution’s biases. Similarly, when the volunteers don’t know which group they are in (a “single-blinded” study), or when neither the volunteers nor the researchers know (“double-blinded”), bias is further reduced and the results are typically more useful.
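Randomized assignment itself is simple enough to sketch. In this illustrative Python fragment (a toy model, not any trial’s actual protocol), chance alone decides which group each volunteer lands in:

```python
# A minimal sketch of randomized assignment (illustrative only).
import random

def randomize(participants, seed=None):
    """Randomly split a list of participants into control and treatment groups."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)          # chance, not preference, orders the list
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (control, treatment)

volunteers = [f"volunteer_{i}" for i in range(100)]
control, treatment = randomize(volunteers, seed=42)
# Neither the volunteers' characteristics nor anyone's expectations influence
# the split, so known and unknown traits tend to balance across the groups.
```

With large enough groups, this balancing is exactly why randomized trials carry more weight than studies where people choose, or are chosen for, their own treatment.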

Other types of studies you may encounter include cross-sectional studies, which take place at a single point in time (such as estimating how many people in the city of Seattle currently have ovarian cancer), and longitudinal studies, which involve a series of measurements taken over a long period (such as following a group of smokers for 10 years to see who develops cancer). Cross-sectional studies examine the relationship between different variables at a fixed point in time; however, because exposure and disease status are measured simultaneously, it may not be possible to tell whether the exposure preceded or followed the disease. A “cohort study,” in which a group is identified before the appearance of the disease under investigation, is often undertaken to test a suspected association between an exposure and a disease. Results from long-term cohort studies are considered higher quality than those from retrospective or cross-sectional studies.

As with any human endeavor, there is a wide range of quality and validity in the arena of medical research. Fortunately, in evaluating research results, the principles of what constitutes good research are easily accessible. We hope that the guidelines and perspectives raised in this article will be helpful to you in considering the veracity and relevance of medical information to your particular question.

Barden, J., S. Derry, et al. (2006). “Bias from industry trial funding? A framework, a suggested approach, and a negative result.” Pain 121(3): 207-18.

De Amici, D., C. Klersy, et al. (1997). “Graphic data representation in anaesthesiological journals: a proposed methodology for assessment of appropriateness.” Anaesth Intensive Care 25(6): 659-64.

Gaissmaier, W. and G. Gigerenzer (2008). “Statistical illiteracy undermines informed shared decision making.” Z Evid Fortbild Qual Gesundhwes 102(7): 411-3.

Gigerenzer, G. (2003). “Why does framing influence judgment?” J Gen Intern Med 18(11): 960-1.

Gigerenzer, G. and A. Edwards (2003). “Simple tools for understanding risks: from innumeracy to insight.” BMJ 327(7417): 741-4.

Hardon, A. (2003). “New WHO leader should aim for equity and confront undue commercial influences.” Lancet 361(9351): 6.

Hochman, M., S. Hochman, et al. (2008). “News media coverage of medication research: reporting pharmaceutical company funding and use of generic medication names.” JAMA 300(13): 1544-50.

Hoffrage, U., S. Lindsey, et al. (2000). “Medicine. Communicating statistical information.” Science 290(5500): 2261-2.

Landefeld, C. S. (2004). “Commercial support and bias in pharmaceutical research.” Am J Med 117(11): 876-8.

Mathieu, E., A. Barratt, et al. (2007). “Informed choice in mammography screening: a randomized trial of a decision aid for 70-year-old women.” Arch Intern Med 167(19): 2039-46.

McMahon, B. J., R. Neubauer, et al. (2003). “Developing and implementing a program of grand rounds for internists that is free of commercial bias.” Ann Intern Med 139(1): 77-8.

Nagle, C., J. Gunn, et al. (2008). “Use of a decision aid for prenatal testing of fetal abnormalities to improve women’s informed decision making: a cluster randomised controlled trial [ISRCTN22532458].” BJOG 115(3): 339-47.

Pitkin, R. M., M. A. Branagan, et al. (1999). “Accuracy of data in abstracts of published research articles.” JAMA 281(12): 1110-1.

Rutledge, P., D. Crookes, et al. (2003). “Do doctors rely on pharmaceutical industry funding to attend conferences and do they perceive that this creates a bias in their drug selection? Results from a questionnaire survey.” Pharmacoepidemiol Drug Saf 12(8): 663-7.

Saha, S. (2009). “Rectifying institutional bias in medical research.” Arch Pediatr Adolesc Med 163(2): 181-2.

Woloshin, S. and L. M. Schwartz (2006). “Media reporting on research presented at scientific meetings: more caution needed.” Med J Aust 184(11): 576-80.
