
Defending Doctor Google, Part 1: How to Be Your Own Health Expert

Updated: May 20


You don't really have a choice. You are the one who oversees your physicians and other health providers, as they relate to you. It is a power that you can't delegate away. You might as well be informed about what your providers are doing. The internet is a helpful tool for that. You can access medical journals and textbooks there. Websites provide opinions from doctors and researchers, and the information is usually footnoted.


When health care providers access this information on the web, it's called "study" or "research".


For some reason, when you do it, you are said to be "consulting Doctor Google."


Here is a link to my Medical Disclaimer. This reminds you that I’m not a health professional, that I’m not giving medical advice, and so on.


Some health professionals scoff at laypeople looking up health information for themselves. Some of these professionals seem to enjoy using the “Doctor Google” zinger on patients when they catch those patients "reading things". Such health professionals don't want you looking up your health issues on your own.


Self-diagnosis using the internet can be a problem. It should be done carefully. On average, health professionals will be better at it than laypersons. But health professionals can get it wrong, while a layperson can get it right. Health professionals can disagree strongly over a diagnosis. When two health professionals diagnose in contradictory ways, they cannot both be right. When that happens, you may be the one who has to decide. (The doctors can't do it for you.) Doctor Google might help.


Many health professionals are skeptical about the quality of health information on the web generally. They are right, in a way, but the problem isn't "the web" itself. As with questions of diagnosis, the problem is that well-credentialed health professionals hold conflicting views on major health issues (diets, nutrition, medicine safety and efficacy, interpretations of lab results).


The good news is that everyone agrees that about half of the health information on the web is garbage.

The bad news is that people can’t agree on which half.


It's a running joke today that people are confused by the conflicting health advice they hear in the news. Much of it is about food. The classic examples involve eggs and coffee. It goes back and forth; they're good for us until they're bad for us until they're good for us again. My mom commented about this over the years. In the mid-1950s, she heard raw egg yolks were good for kids, so she put raw egg yolks on my cereal when I was a toddler. It's probably because God is merciful that I don't have any conscious memories of raw egg yolk on my cereal. Over the years, Mom heard shifting pronouncements from the food police over whether eggs were good to eat. Whenever eggs would fall out of favor, she would feel a little bad about what she did to me. But when the prestige of eggs as nutritious food sources would rebound, she'd feel better. I still eat plenty of eggs.


Today’s major health topics are shrouded in debates between highly credentialed health experts. Medical doctors and other health authorities don’t even agree on whether you need to eat vegetables. They don’t agree on whether high cholesterol causes heart disease. Many doctors don’t even agree about whether counting calories helps to reduce weight. We live in a world where experts disagree over issues that might seem like they should be easily decidable. This isn’t the layperson’s fault. No matter what food or medicine choices a layperson makes, those choices will often involve rejecting the opinions of some experts. There may be no way to avoid it. When you make these choices, are you “playing doctor”? No.

Expert credentials cannot be a decisive basis for medical choices when experts disagree with each other.


 By “looking it up on the web” – that is, by consulting “Doctor Google” – you are simply informing yourself so you can make decisions that you have to make anyway.


This blog post focuses on highlighting some of the logic, math, and science skills you can apply in evaluating medical and other scientific information. Fully credentialed experts are human, like all of us, and they can make reasoning errors. Neither clinicians nor researchers (nor laypersons!) can always “think of everything” or do everything perfectly. Different angles are missed, bad assumptions are made, statistical tests are misapplied, pressure from professional organizations and drug companies can twist people’s opinions, and “groupthink” can stop people from arriving at better answers. Sometimes researchers and journalists lie. If you can catch these problems when they occur, then you’re ahead. Often, only a little bit of effort is needed to spot an error. If you can interpret medical information correctly, then you have some legitimate basis for elevating one medical opinion over another. This won’t turn you into a medical doctor, but it may help you to make a good decision for yourself about an issue that’s important.


The “Expensive Urine” Argument Doesn’t Hold Water: A Logical Error

Some doctors claim that if you take vitamins in excess of what the body can use (megadosing vitamins), the excess is just excreted in the urine (or other waste). Because of this, the doctor will say that when you paid for your vitamin supplements, all you did was pay for Expensive Urine. The claim is that you get no benefit from the excreted excess vitamins. Let's talk about this in the context of vitamin C, for convenience.

The “expensive urine” argument against massive doses of vitamin C (or other vitamins) is illogical. It depends on an unstated false premise. The fact that most of a megadose (1,000 mg/day to 100,000 mg/day) of vitamin C – the portion of the dose above about 250 mg/day – leaves in your urine as “waste” doesn’t mean that the megadose was a "waste" of money. There is a model that says that having a higher concentration of vitamin C in the body elevates the “partial pressure” of this nutrient in any small area of the body experiencing infection or other stress that vitamin C could alleviate. The higher concentration promotes a more rapid infusion of C into the tissues, which aids in healing. That tissue region might be very small, so it doesn’t “use up” much of the C. But the C makes a difference to that particular tissue area, even though most of it is excreted in the urine. This model has a lot of support, and it has to be refuted before the Expensive Urine argument can be accepted.


Your doctor may claim, “Well, it’s just common sense that if the body releases the excess, then that excess didn’t serve any purpose.” This is the unstated premise that I mentioned earlier. Your doctor may have gotten it from the internet, or he may have fabricated it on his own. You could respond with the following analogy about breathing air. The air we breathe is 21% oxygen. Exhaled air is 16% oxygen. So, did you only “need” 5% oxygen? No. You needed the other 16% to maintain the oxygen partial pressure so that the surfaces of your lung tissues could even absorb the 5 percentage points of oxygen that you “used” metabolically. People suffocate breathing air that is only 5% oxygen. The analogy to vitamin C concentrations in the blood and body is clear. As Sherlock Holmes once said in another context, “the parallel is exact”.


I shared this argument with one of my sons. He pointed out another, more glaring problem. If the Expensive Urine argument were valid, then if we ever drank even just enough water to cause us to urinate, we drank more than we “needed”. But drinking so little water that you never urinate would leave you in pretty bad health. The obvious health downside of drinking so little water shows that the implied premise that “excreted substances provide no benefits” is just false.


Regardless of the efficacy of megadoses of C, my point here is just to show that, if megadoses of C don’t help, it is not due to anything that the Expensive Urine argument has to say. You can ignore it.



Your Doctor Has a Good Answer: Check to Make Sure It Actually Matches Your Question


Years ago, I developed hyperparathyroidism. Soon after that, but before the HPTH was treated, I developed GERD, or acid reflux.  I asked my specialist doctor if he thought it was likely that the HPTH was the cause of my GERD. (I hoped the answer would be "yes" because that could suggest that the GERD would just disappear after the HPTH surgery.) He said “no”. He then explained that “only 5% of HPTH patients get GERD as a result”.  Do you see the problem with this answer? It’s a little subtle. But that’s why we love math. I believe the doctor thought he was answering me by saying that there was only a 5% chance that my GERD resulted from the HPTH. But that’s not really what he said. I wasn’t asking him, “Because I have HPTH, how likely is it that I’ll get GERD in the future?”. I knew that I already had GERD, so that wasn’t my question. I wanted to know the likelihood that the HPTH caused the GERD that I did have. That’s a different question. The answer to one question was 5% (the statistic that the doctor cited). The answer to my actual question could be anything, even given the truth of the doctor’s 5% statistic.
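To see how different the answers to the two questions can be, here is a sketch with hypothetical numbers. The 5% is the doctor's statistic; the background GERD rate is invented purely for illustration, not a real medical figure:

```python
# Hypothetical numbers for illustration only -- not real medical statistics.
p_gerd_from_hpth = 0.05   # doctor's statistic: 5% of HPTH patients develop GERD because of it
p_gerd_background = 0.20  # assumed background rate of GERD from unrelated causes

# Probability that an HPTH patient has GERD from any cause
# (assuming, for simplicity, that the two causes don't overlap)
p_gerd_any_cause = p_gerd_from_hpth + p_gerd_background

# The patient's actual question: given that I have GERD,
# how likely is it that the HPTH caused it?
p_hpth_caused_it = p_gerd_from_hpth / p_gerd_any_cause
print(round(p_hpth_caused_it, 2))  # 0.2
```

Under these made-up numbers, the answer to my actual question is 20%, four times the doctor's 5% figure. The point is not the specific result; it's that the two questions are mathematically distinct.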


I caught this misunderstanding during the office visit, but I didn’t press the issue. It turned out that after my HPTH surgery, the GERD went away on its own. I think the GERD was caused by the HPTH.


This matters because these kinds of statements of probabilities and likelihoods are often used by people to make decisions. We need to know what they mean.


OUCH and DOUBLE OUCH!

Getting Both the Denominator and the Numerator Wrong in the Double Jab Trials


Example 1: A recent alleged vaccine used a protocol of two injections spaced about a week or two apart. Regardless of the scientific validity of this protocol, the two-jab design was wrongly used as an excuse for miscounting adverse events. When the alleged vaccines were tested by at least one drug company, test subjects who developed adverse reactions after just the first injection (and who therefore immediately dropped out of the trial, never getting their second injections) were not counted toward the number of vaccinated people who had adverse events. A verbal trick was played: these people were said to have “not been ‘vaccinated’ ” just because they didn’t complete the full vaccination protocol with the second injection. And because they were "unvaccinated", it was pretended that their adverse side effects were not caused by the vaccine. This resulted in statements of side effect risks that were misleadingly low. If the alleged vaccines were to be used on the public at large, people would want to assess the risk of adverse effects after getting either of the two injections, not just the risk after getting the second injection.
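The counting problem is just arithmetic. Here is a sketch with made-up trial numbers (all figures hypothetical) showing how excluding first-injection dropouts deflates the reported rate:

```python
# All numbers are hypothetical, for illustration only.
n_enrolled = 10_000
adverse_after_first = 300   # reacted to the first injection and dropped out
adverse_after_second = 200  # completed both injections, then reacted

# Reported rate, counting only "fully vaccinated" subjects:
reported_rate = adverse_after_second / (n_enrolled - adverse_after_first)

# Rate a real-world patient cares about: a reaction after either injection
real_world_rate = (adverse_after_first + adverse_after_second) / n_enrolled

print(f"{reported_rate:.1%} vs {real_world_rate:.1%}")  # 2.1% vs 5.0%
```

Both the numerator (200 instead of 500) and the denominator (9,700 instead of 10,000) are wrong for the question a prospective patient is asking.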


Example 2: Many drug trials have “washout” periods during which people who have early side effects bad enough to make them drop out just aren’t counted; they are treated as never having been “in” the trial. Well, yeah, they were in the trial, from the standpoint of any rational person who, outside the context of the trial, might take the drug. In the real world, semantic games aren’t going to save you from side effects. Doctors have noticed that, for statin drugs, some of the side effect risks are quoted at only 2% to 5%; that was the risk for test subjects who stayed in the trials after the washout period. It turns out that for clinicians and patients -- in that nuisance of a place called the real world -- the risk is closer to 30%. Doctors have also pointed out that the pharmaceutical industry started admitting the higher side effect rates only after PCSK9 inhibitors came out and pharmaceutical companies wanted people to switch from statins to the new PCSK9-inhibiting drugs. Higher side effect rates for statins would promote that switch.


Senator Ted Cruz once asked me what I do for a living. I told him I’m a mathematician. He lit up just a little, saying that both of his parents are mathematicians. I said, “Wow. I’ll bet they hate the mathematics that’s done in Washington!”  Ted answered, “It’s not even so much the mathematics that bothers them as it is the arithmetic.”  The problem with miscounting adverse events for drugs is just a problem in arithmetic, involving addition and division. Medical degrees and official titles can’t convert bad arithmetic into good.


Your Medical Test Said What?!


Suppose you get tested for a disease and the test is positive. You don’t like the result, so you ask the doctor about the test accuracy. You’re hoping the test isn’t very reliable. The doctor bursts your hopes and says that the test has over 90% reliability in detecting the disease when it’s present, and it wrongly detects the disease (“false positive”) less than 5% of the time. This sounds like a pretty reliable test. This sounds like you have over a 90% chance of having the disease, because you tested positive. However, if the disease is rare enough, it may still be more likely that you don’t have it, even with a positive test result. It may be that the probability of a false positive is more than the probability of a correct positive. This is shown using Bayes' Theorem, a probability formula that we won’t try to teach here. It also involves the notions of “sensitivity” and “specificity” of a medical test. These are good things to look into. You may not memorize Bayes' formula, but in the back of your mind, you might at least remember that this could be an issue.
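Here is a quick sketch of the arithmetic. The sensitivity and false positive rate match the doctor's numbers above; the 1% prevalence is assumed for illustration:

```python
# The test figures match the text above; the 1% prevalence of the
# disease is a hypothetical assumption for illustration.
sensitivity = 0.90          # P(test positive | disease present)
false_positive_rate = 0.05  # P(test positive | disease absent)
prevalence = 0.01           # P(disease) in the tested population

# Bayes' Theorem: P(disease | positive test)
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"{p_disease_given_positive:.0%}")  # 15%
```

With a disease this rare, a positive result on this "pretty reliable" test still means only about a 15% chance that you actually have the disease, because the false positives among the healthy majority outnumber the true positives.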


In all fairness, I must point out that if your doctor wanted you to take the test for the disease, then you probably are presenting some symptoms that do raise your “prior probability” somewhat. This would affect the probability calculations.


I certainly am not saying that medical test results should be ignored. It’s just good to be aware that test results should be scrutinized.


Observational Studies and Surveys Can (Almost) Never Show That One Thing Causes Another


The most common health "studies" that we hear about in the headlines are the observational studies in which a large population of people is surveyed about what they eat. Then, disease statistics are calculated for the people who ate one way vs. the people who ate some other way. Suppose that an observational study reveals that, over ten years, 4% of the people who don't eat much red meat developed cancer. Also, suppose that over ten years, 4.2% of the people who do eat a lot of red meat developed cancer. (This is an observational study because there was no intervention with the test subjects, and there was no randomized selection of who ate red meat and who didn't. It's not an experiment.) With these numbers, researchers will observe that the 0.2% difference is 5.0% of the 4%, and they will claim (or insinuate) that "eating a lot of red meat will increase your 10-year risk of cancer by 5%". Stated less delicately, researchers or news reporters may also simply say that "eating red meat causes cancer".
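The arithmetic behind that headline number, using the hypothetical figures above:

```python
# The two ten-year cancer rates from the hypothetical study above.
risk_low_red_meat = 0.040   # 4.0% of low-red-meat eaters developed cancer
risk_high_red_meat = 0.042  # 4.2% of high-red-meat eaters developed cancer

# Absolute difference: 0.2 percentage points over ten years
absolute_increase = risk_high_red_meat - risk_low_red_meat

# Relative difference: the "5% increased risk" in the headline
relative_increase = absolute_increase / risk_low_red_meat

print(f"absolute: {absolute_increase:.1%}, relative: {relative_increase:.0%}")
```

Notice that the headline reports the relative figure (5%), not the absolute one (0.2 percentage points), because the relative figure sounds more dramatic.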


It is complete nonsense to interpret the above study as implying that red meat causes cancer. When things are correlated (cancer and eating red meat), that doesn't mean that one thing caused the other. In observational studies about red meat, confounding variables are ignored, such as the tendency for smokers to eat more red meat. Confounding factors like that change everything. I myself have implemented the very clever strategy of eating a lot of red meat, but just not smoking.


There is a set of guidelines called the Bradford Hill criteria that suggests when we should and shouldn't interpret observational studies as indicating causation. These criteria involve mathematical and scientific principles. I won't expand on them here, but Doctor Google can tell you about them. Anyway, most or all of the food headlines that you've been reading all your life don't satisfy the Bradford Hill criteria for observational studies showing causation.


The kerfuffle over cholesterol as it relates to cardiovascular disease also has its roots in badly done observational studies, but this is better discussed in a follow-up blog post.


There is a lot more to say here.

I have several more interesting, surprising observations about health research and medical information, along with some concluding remarks.

Don't miss Part 2 of Defending Doctor Google, which will be published soon.


