The Problem with “Evidence-Based” Fitness

27 May, 2023

When I was younger, still very natural, and still very underdeveloped, I often found myself completely dismissing the opinions, takes, and general experience of enhanced athletes. In some respects, I was absolutely right to do so… but, overall, I was missing the point. Their information wasn’t “bad” or even necessarily “wrong” — it just often didn’t apply very well to me because I was a member of a completely different population of lifters. Not only was I not enhanced, but I was weak and a noob.

Any experienced coach can tell you that the “best” approach for someone with minimal experience and development is completely different from the best approach for someone who has been training for over a decade. Realistically, you’re never going to put someone going for their first two-plate bench on the same exact program as someone trying to take their bench from 550lbs to 600lbs. It just doesn’t make a ton of sense. Their recovery capacities are entirely different, their needs are entirely different, and their bodies just aren’t going to respond to stress in the same way when the absolute loading difference is so significant. These people should not be doing the same number and type of sets, reps, training sessions, and exercises. Optimal programming requires more specificity than that.

Almost everyone will agree with what I’ve said so far here. So, if you do agree, I need you to consider that the scientific literature is even more unnaturally specific than the example I’ve listed above. 

The scientific process can generally only move forward by isolating the variables being tested as much as possible. For example, if we don’t equate training volume when comparing two exercises, we might falsely attribute any differing results to the exercises themselves when it is quite possible that the differences in outcomes could be explained entirely by the volume differences. That’s reasonable and logical enough.

Here’s the problem: fitness results don’t happen in isolation. Leg curls are easier to recover from than Romanian deadlifts. Trying to determine which is the better hamstring exercise in a very precise, controlled way is generally difficult and impractical. In the real world, you’d never try to do the same number of sets for these two exercises, nor would you try to choose one over the other in perpetuity. Moreover, when we try to isolate volume as the tested variable, we must standardize rates of progressive overload; otherwise, we might attribute to volume levels what could be explained by differing rates of strength progression. Again, though, in the real world, picking the approach that results in faster rates of progression is precisely the point. Isn’t it obvious that doing more volume will produce more gains than less volume if strength progression is held constant in both cases? That’s not what we wanted to know.

And, so, what is my point? People take hyperspecific outcomes from the literature and then try to generalize the findings far beyond their scope. In almost all cases, training studies are set up in such a way as to look nothing like real world training… because real world training involves the interplay of DOZENS of variables in order to work. Volume and intensity interact to create optimal frequency. Holding all but one variable constant is not something we do in the real world.

This is further complicated by the fact that the population studied in most of the literature does not actually consist of well-trained individuals. Study cohorts are often made up of young undergraduate students with a few years of lifting experience. Most studies that use “experienced lifters” will have the male subjects averaging out to roughly 200lbs/90kg bench presses… for a 1RM. And haven’t we already agreed that lifters with vastly different experience aren’t going to respond best to the same protocols?

Science is hyperspecific and study results are not as easily generalized as many would-be evidence-based research interpreters make them out to be. For example, in elderly diabetic patients who don’t lift, the use of metformin decreases net protein synthesis. As a result of that finding, some have claimed that hard-training individuals on anabolics should avoid the use of metformin. The issue, of course, is that metformin has a completely different effect in healthy, lifting individuals than it does in sedentary diabetics. The results aren’t generalizable to that degree. 

There are more examples. Human Growth Hormone (HGH) has been tested in isolation in athletes. While it appears to have a minor effect on body composition long-term, in isolation, it doesn’t increase sports performance or lead to higher levels of muscle mass. This actually led sports governing bodies to claim that it didn’t work in the 80s. Hilarious. We should all know there is a very different result when hard training bodybuilders who use anabolics add HGH to their drug regimen. It is quite literally a game-changer. 

That brings me to a huge issue for all enhanced athletes: we aren’t studied. You can’t make people take anabolics for a study; that wouldn’t be ethical because of the negative health repercussions. So, at best, just like the metformin and HGH research, for enhanced athletes, most of the literature can only be taken as a guidepost that is hopefully pointing in the right general direction. From there, it MUST be grounded in actual practice… which means real world, trench data. In simple terms, you’ve got to actually go and try stuff to see what works.

As a brief recap: one, we have studies being designed that don’t represent real world training very well at all. Consider Schoenfeld’s infamous volume studies, where individuals performed up to 30-50 sets of single-leg leg press training to failure on 60-90 seconds of rest. The studies ultimately couldn’t find an upper limit to productive volume with this setup. There’s just one small problem: no one does serious leg training to failure on 60 seconds of rest. Especially not for a dozen sets in a row. You’re guaranteeing junk volume. The design was absurd and so were the results.

Two, the populations being studied in most of the literature are, realistically, late-stage novices at best. These people have absolutely nothing in common with advanced naturals or any kind of enhanced athlete. In fact, the populations are so different that, in my opinion as a coach who has designed programming for almost ten years, the approaches I would use for the two groups have virtually nothing in common. Again, I want you to imagine the ideal program for someone with a 200lbs bench press. Now, imagine the ideal program for someone with a 500lbs bench press. Seriously, how different are they? The ones in my head have very little in common.

How can we realistically generalize any of the literature to enhanced athletes? Well, we are left to try things in the real world within the context of programming that is actually realistic. Pause. Wait… do you realize what I’ve just said? There’s no choice but to go out and try things to see what works. The research isn’t prescriptive; it helps elucidate principles.

Do you know what all the so-called meatheads who succeed do? They try things and see what works. Wow, guess what? With the current state of the literature, the most scientific thing you can actually do is trust your own n=1 results… especially when you get the same results multiple times. Again, at best, the literature can point us in the right general direction and help save some time on trial and error… but it can NEVER eliminate trial and error. It isn’t even meant to! You can’t copy research study designs in your real world training.

When you dismiss the results of people who have figured out what works best for them through trial and error, you’re literally dismissing the biggest and highest quality body of evidence that we have in training today: collective experience… also known as ANECDOTE. There is absolutely nothing scientific about dismissing anecdotes wholesale. All the training programs out there today are almost entirely derived from personal experience. They are NOT replicas of study designs.

Last, but certainly not least, if you do not read the research yourself, you are no different than the meathead bro who quotes Arnold’s training practices. You have merely traded one God for another. You are no different than someone who claims to be Christian, but doesn’t read the Bible and just relies on their Pastor to tell them what the book says. You believe more in “science” than in practitioner experience. Your psychology is being exploited by a clever marketing approach. Both groups are ultimately using programs derived from real world experience. 

What I set out to accomplish here was nothing more than to have people reconsider the value of anecdote. Science does not, and cannot, test training approaches holistically. You will not see a study on RIR training vs. HIT. Each piece must be tested one variable at a time according to the rigors of the scientific process. And, even in doing so, there are still confounding variables. There is no approach and no system that is entirely “scientific”. Even so-called “evidence-based” programs are still ultimately someone’s best guess of how to put it all together in the real world.

If we’re all putting this stuff together in the real world primarily through trial and error anyway, maybe we shouldn’t be so quick to dismiss the experience of so many people just because they don’t attempt to lionize science. Their experience is still incredibly valuable to those of us who DO value the contributions from the scientific literature. It doesn’t have to be an either-or situation.

Evidence-based practice still requires practice. The practice is the most important part. The practice is where the largest amount of the evidence comes from. The practice is where we confirm what works and what doesn’t. Without practice, you’re just a lab coat who is guessing. Never forget that.
