
Here is a good piece of writing that some of you might enjoy!

Arnold

If you have ten minutes, read this:


Exercise Research

Well, before posting this next series, I want to make a few comments. This eleven-part series (you read that right) was written about 4 weeks ago in a fit of something (anger, perhaps) at the current trend on misc.fitness. When I wrote this, I intended it as more of an informational piece to point out some of the problems associated with trying to relate exercise science studies to real world training, without targeting anyone deliberately. At least that was my intention. Well, as I look back on it, I realize that it was aimed at certain people (who should know who they are without my naming names). These people are the ones who are hell-bent on not accepting any information based purely on empirical observation. They seem to feel that if something does not have a basis in science (regardless of how well it might have been proven in the "real world"), it cannot possibly be acceptable. Some of you who have followed my posts might have gotten that idea about me, as many of my posts use studies and other scientifically based references to help support my point. However, I have tried in the past (and will try in the future, assuming that I stay on misc.fitness) to include more real world examples of training, etc... The points I'm going to try to make, in my typically long-winded fashion, are these.

Science is great but it does not have all the answers. All possible permutations and combinations of training, nutrition, etc have not and cannot be tested so trying to use science to "prove" that one concept is better than another is nebulous at best.
Research studies have innumerable problems associated with them. If you don't like a particular study (because it doesn't support your point of view), it is far too easy to criticise it on any of a multitude of levels. Thus, anyone can rationalize almost any study to be good or bad based on its statistics, protocol, or whatever. This point, perhaps more than any other, is aimed at certain unnamed people who like to discredit studies which don't support them. We all do it, but certain people seem to constantly crop up in this regard.
Empirical observation is useful but must be tempered with certain realities in mind. For example, it's easy and convenient to say that since most bodybuilders train in a high-volume type of way, this is the best way to train. Well, most pro bodybuilders also take 'roids, which is a major confounding variable. Using Dorian Yates to "prove" HIT concepts is as absurd as using Jim Quinn to "prove" high volume concepts. Both take steroids and would probably grow from just about anything.
These three points, and perhaps others, will be made much more clear to any of you bored enough to read all of my ramblings. If you want to make comments on what I've written or just say hi, drop me a line as I don't check into m.f more than once per week (my modeming definitely follows HIT principles in terms of frequency ;-). Enjoy.

Lyle McDonald, ACSM, NSCA certified (for those who are impressed with such things). I'm also CPR and First Aid certified but I digress. :-)

lylemcd@edge.edge.net

--------------------------------------------------------------------------------

As I continue to work on a series on fat and athletic performance, I would like to digress (as is my wont) and talk about something else. This is an article that I've had in mind since attending the NSCA annual meeting. It is also fueled by some recent e-mail debates that I have been a part of relative to some of my recent articles, namely on periodization and plyometric training. It is not meant as a personal attack on anyone, and I hope it is taken in the right way. As much as anything, it is a criticism of myself and my fellow colleagues, both on this usenet feed and in the "real" world of fitness training. In this series, I would like to editorialize, opine, and rant on the topic of "Exercise Science vs. Exercise Training".

The field of exercise training, whether it be for general fitness or athletic competition, has recently become the center of much scientific attention. As it is a new field, more and more research continues to be done into the varying aspects of exercise. This is not necessarily a bad thing. However, as with many other fields, sometimes this research can be misused or overused. Now, those of you who have followed my postings on this feed probably know that I am the first to cite this study or that reference to back up my opinions and info. So, if anything, I am criticizing myself as much as anyone else in my field. The problem that I see developing is a bit too much reliance on scientific data. Although I'll get into it in more detail, the problem seems to be this: many people have come under the impression that if something is not scientifically validated, it cannot be valid at all (i.e. empirical observation is useless), and by corollary, that if something is scientifically validated, it must be a valid concept (no matter how silly the study in question is, but I'm getting ahead of myself). This type of gross over-generalization takes things a bit too far in my opinion. Yes, scientific studies can lend some validity to certain concepts in the field of exercise science. However, I hope to argue (mostly non-scientifically, although I will use specific examples to get my point across) that exercise science is not the be-all and end-all of answering exercise related questions. I will break things up into discrete concepts to, hopefully, illuminate some of the problems with scientific research.

Science vs. empiricism:

First, let me address the idea that something must be scientifically validated before it can be thought of as "true", since this is really at the heart of some of my upcoming criticisms. Lately on misc.fitness, I have seen more and more people demanding references as if that is the only criterion by which an idea can be judged. Again, I'm one of the first to quote a given study as evidence, but I also sometimes fall into the trap of just saying "studies show" or something similar. This occurs for a couple of reasons. Usually, I'm just far too lazy to cite my references as I don't want to dig through my notebooks to find a particular study. Also, some of the information I have was presented in various classes or seminars where actual journal references weren't given (or I didn't write them down). Now, I totally understand that some type of rationale should probably be provided if a particular concept is new or controversial (for example, when talking about supplements, word of mouth is difficult to trust due to factors like bias and the placebo effect). However, for really basic concepts (i.e. lifting weights will increase strength, carbohydrate provides energy), I don't see that citing references is all that terribly necessary as these concepts are pretty well established.

Let's look at a concrete example of taking the need for scientific validation too far. One of my colleagues, a fellow personal trainer, has a Ph.D. in Exercise Physiology. Well, one day, I overheard this person talking about getting a full suspension mountain bike. I told her that I had heard bad things about power loss on this type of bike and that they were generally only good for downhill (based on the opinions of my friends who compete in Los Angeles), as they tend to bleed power on the uphills due to the pogo effect of the shock. She then asked if they (the ubiquitous "they") had controlled for rider position. She thought that I was referring to some study, as if that's the only way an idea or opinion can be valid. I'm sorry, but I trust my friends' empirical evidence and observation much more than any study in this case. My friends know mountain bikes, and I don't have to see a study to understand some of the problems endemic to full suspension bikes: i.e. the shock has no idea if the impact is from above or below, and any shock stiff enough not to pogo will probably be too stiff to absorb many bumps other than the really big ones. Anyway.

Let's look at another example to tie in this discussion with some comments about empirical evidence. I know of many exercise scientists who firmly believe that changing the angle of an exercise (i.e. flat bench press to incline bench) does not change the movement, and that there is no point in changing exercise angles as any movement across a joint will stress the muscles involved (i.e. triceps pushdowns and lying french presses are both elbow extension movements and thus identical). Now, I haven't seen any real research into this (although Shape magazine does test exercises with EMG, and there is a book called "Effective Bodybuilding" which purports to have MRI data showing that different exercises work the muscles differently). However, empirically, I think we should know better. If you look at a weight lifter who only does heavy flat benches, there tends to be an overdevelopment of the middle part of the chest (colloquially known as "benchers tits") with little or no development of the upper part of the pecs. Bodybuilders found out a long time ago that various movements are needed for full muscular development. Sure, for athletic training, it may not matter as much, but bodybuilders (and what I call "bodybuilding theory") have known this for years empirically. They tried different movements and watched what developed, to find out which exercises benefited which part of the muscle. This also makes sense biomechanically, as muscles with multiple heads (like the biceps and triceps) are more effective at different joint angles. I do agree, however, that you cannot change the shape of your muscles (i.e. no amount of preacher curls will put a peak where none exists), but there is little doubt in my mind that different exercises work different parts of the muscle differently (this statement brought to you by the Department of Redundancy Department).

So, what of empirical evidence? Can it be valid in its own right? IMHO, yes. Many of the recent research studies are supporting concepts that bodybuilders have known for a while. Recent studies into protein needs have discovered what bodybuilders have long believed: that protein needs increase with intense training. Even so, the American Dietetic Association and many nutritionists still do not believe that athletes need any more protein than the RDA. Bodybuilders and other strength athletes have always consumed high protein intakes, believing (somewhat correctly) that it was better for increasing strength/muscle mass.

Other research has found that the greatest growth hormone and testosterone response occurs with a program of moderate reps (in this case 10 rep max's), multiple sets and short rest periods (1 minute). This may be part of the reason that bodybuilders (who generally train in about the 8-12 rep range with many sets and short rest periods) show the hypertrophic response that they do: i.e. they get the best hormonal response from this type of training. Did the bodybuilders know this beforehand? No, they determined empirically what worked which was later supported by scientific evidence.

However, I think that reliance on empirical evidence can be taken too far. I recall an argument a while back on misc.fitness to the effect of "Well, I'll believe the bodybuilding advice of whoever has the largest arms". While somewhat useful, this criterion has problems: I've met very muscular people who grew almost in spite of their programs rather than because of them. They didn't have a knowledge base beyond what worked for them, which is what you need to train people who aren't competitive bodybuilders (and who should be trained somewhat differently). Hey, I'm a skinny little cyclist, but I like to think (perhaps incorrectly) that I know enough about bodybuilding type training to design that type of workout for someone. While I might not have the know-how or experience to train a Mr. Olympia, I think I could train someone to put on muscle. In any case, I feel that scientific data should be tempered with what people really do in the gym. But, I'm getting ahead of myself again.

Next time: scientists don't work out and other things.

References:

My own little brain. That's a bit of sarcasm.

--------------------------------------------------------------------------------

Alright, continuing with my long-winded opining, I'd like to address another topic which confounds the application of scientific studies to real life training.

Researchers often don't work out:
Based on some of the studies that I've seen, it's frequently hard for me to imagine that researchers have any idea of how people really train in the real world. Luckily, this is changing and more recent research is actually based on real world training programs. However, for a variety of reasons, it is sometimes difficult to apply much of the exercise research to real life training.

For example, at a recent seminar, the topic was possible problems with combining endurance exercise with strength training. Well, one of the problems with reaching a definitive conclusion is the incredible variance of program designs. Although I don't remember all of the studies presented, one in particular sticks out. The researchers had the athletes either do 1 minute of isokinetic leg extensions 3 times per week or 5 minutes (I think) of high intensity intervals 3 times per week, and a third group combined the two. While I'm sure the results were interesting (so interesting that I forgot them), so what? This is nothing like the way any athlete I know of would actually train. So, the results are pretty much moot and cannot be applied to any type of real training situation. It's obvious to me that these researchers need to get out of their lab for a little while and go see how people actually train before designing useless studies like this one.

Other studies examining this phenomenon (interference effects) combined high intensity endurance training with high rep weight training (which builds muscular endurance more than strength). Again, the results, while interesting, don't really reflect how most athletes regularly train.

Another example I recently saw looked at whether there were any benefits to putting beginners on a split routine vs. a standard twice per week full body routine. While interesting, the way the study was designed invalidates any type of useful result. Basically, one group performed a rather standard full body workout 2 times per week while the other did the Exact Same Workout except divided into four days, so they trained twice as often but with no higher volume or intensity. The result was that there was no difference in strength gains. Well, as most people know, the whole point of a split routine is that it allows you to work the bodyparts on any given day with greater volume (# of exercises) and intensity (since you've got more time and energy to devote to them). This study didn't follow a typical split routine, making the results, while academically interesting, fairly moot.

Too many variables:
While I'm talking about training programs, let me talk about another problem with trying to apply scientific research to the quest for the ultimate workout. Researchers have been looking for the ultimate workout for a while. Unfortunately, there are way too many variables to ever figure out what the best one is, not to mention the incredible variability that is seen among people on the same program. Some of the possible variables (some of which have been studied and some of which haven't) are number of sets, number of reps, rest period between sets, order of exercises, type of muscle action (concentric, eccentric, isometric, some combination), etc... There is a practically infinite number of programs which can be generated from all the possible variables, and it's impossible to test them all (see the quick sketch below). In actuality, there probably is no best workout program. I'm sure all the varying systems available (whether it be straight sets, pyramiding, super-setting, circuit training, etc...) have some potential to be useful. Looking through the mags, I have this month alone come across Power Factor Training, High Intensity Training (a la Mike Mentzer), the Ironman Training System, Leo Costa's Bulgarian Blast, PowerBuilding from Muscle Media 2000, etc. All claim to be the "best" system for increasing muscle mass/strength. Is one truly better than another? IMHO, maybe, maybe not. Sure, I don't really agree with certain of these systems, but I'm sure they can all be used at some point in one's training. As most of you may know, I am a big fan of periodization, but even it can have an infinite number of variations. What it probably comes down to is finding what works for you rather than listening to me or someone else telling you that this is the best path to success.
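To put some numbers on that combinatorial explosion, here is a quick sketch in Python. The option counts are made up and deliberately coarse (real programs also vary load, exercise selection, and more), so treat it as an illustration, not a census:

    # Made-up, coarse option counts for a handful of program variables.
    from itertools import product

    sets         = [1, 2, 3, 4, 5]
    reps         = [3, 6, 8, 10, 12, 15, 20]
    rest_minutes = [0.5, 1, 2, 3, 5]
    action       = ["concentric", "eccentric", "isometric", "mixed"]
    frequency    = [1, 2, 3, 4, 5, 6]   # sessions per week

    programs = list(product(sets, reps, rest_minutes, action, frequency))
    print(len(programs))   # 5 * 7 * 5 * 4 * 6 = 4200 programs already

Even this toy version yields 4200 distinct programs; at one 10-12 week study per program, no lab on earth is testing them all.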

Look at it another way: if there were one ultimate training program, and one only, then every genetically gifted athlete would look and perform the same, since they would all use the same ultimate program. Since this obviously isn't the case (despite what Mike Mentzer may say about the "one true scientific principle of training"), the take-home message is to find what works for you.

The only real absolute to increasing mass/strength is that the muscles must do more during this training session than they did the last. That is the basis of progressive resistance training. The particular means of accomplishing this is up to you.

Beginners vs. Trained individuals:
Another problem with applying research is that frequently the studies don't take into account the training status of the subjects. By status, I mean: are the subjects well trained athletes or complete beginners? As you probably know, the greatest and fastest gains in any program come in the first weeks to months of training for someone who's never worked out before. This is why many early studies found no difference between modes (for example isotonic vs. isometric training) for different programs. For a complete beginner, doing anything will probably increase strength significantly. Hell, someone who's never lifted before may increase strength considerably between sessions simply from learning how to lift properly. This tends to mask actual results. And, unfortunately, most studies are done on previously untrained individuals (usually college age). Well, why is this?

First off, most athletes don't want to give up any part of their training season to be in a scientific study in case the particular program being tested doesn't work. In fact, some early studies with Nautilus type machines had the results confounded by the fact that the subjects (college football players) were sneaking into the weight room on their own time for another weight workout since they didn't trust the machines they were supposed to be testing.

Because of this, most studies are done on graduate students or undergrads who generally aren't athletes. They are easy subjects since their grade may be jeopardized by not participating in the study. Incidentally, this is also why most training studies are 10-12 weeks long: that is the length of the quarter or semester during which the students are available. Again, 10-12 weeks may not be long enough to evaluate a particular program for its efficacy and worth.

Again, as I mentioned, untrained students will respond much differently than trained athletes on a training program so it may be hard to generalize the results from one group to another.

Next time: statistics, or how to lie with numbers and other things.


--------------------------------------------------------------------------------

Continuing on with my polemic.

Significance, shmignificance:
What I want to talk about now is the concept of statistical significance. Basically, this is a concept which exists (in theory) to make reasonable comparisons between data possible. Let me say now that I am no statistician. Although I am familiar with the terms, I couldn't honestly tell you what a t-test, an ANCOVA, or anything else actually is. These are statistical tests created to compare data. What I really want to talk about is how statistical data (and data in general) can be manipulated to yield a desired answer.

What is statistical significance? Basically, the term refers to this question: is the change from pre-test to post-test (or the difference between subject groups) large enough, statistically, for the two to be differentiated? If you ever read scientific data, you will always see this mentioned. For example, in a test of 1 RM squats, a subject with a max of 275 lbs and a subject with a max of 285 lbs might be considered to have the same max squat, as these values do not differ enough from each other. Now, perhaps statistically there is no difference, but we all know that in the real world this is a difference, especially if you are talking about a competition. This has to do with the error of measurement (as well as many other factors).

Here are some specific examples of how this idea of significance can be misused. About a year ago, I attended an ACSM certification seminar where some current data about the state of the field was presented. One of the topics covered was warming up. Now, I think most of us would agree that warming up is considered necessary for maximal performance. Well, the study presented argued against this point. The study took two groups, warmed one group up but not the other, and then ran them in a 100-meter sprint. The difference between groups was only about 0.1 seconds, which didn't achieve statistical significance. But, as we all know, 0.1 sec in a 100m run is the difference between first and last place. Now, admittedly, this difference is within the error for the timing equipment (which is at least one of the factors in determining significance), but the small scientific difference amounts to a huge real world difference. Likewise, we would all agree that a 2% increase in strength (which may not reach statistical significance) will make the difference between the winner and loser in a powerlifting contest.
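To see how a real but small difference gets swallowed by noise and small samples, here is a minimal sketch in Python (all numbers hypothetical: a true 0.1 s gap between groups, 0.15 s of timing/biological scatter, 10 runners per group):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 10                                  # small groups, typical of exercise studies
    warmed   = rng.normal(11.0, 0.15, n)    # hypothetical 100 m times, seconds
    unwarmed = rng.normal(11.1, 0.15, n)    # true difference: 0.1 s

    t, p = stats.ttest_ind(unwarmed, warmed)
    print(f"observed difference: {unwarmed.mean() - warmed.mean():.3f} s, p = {p:.2f}")

With numbers like these, p frequently lands above 0.05, so the study reports "no significant difference" even though the 0.1 s edge is completely real and decides races.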

Let's look at another study, which addressed the idea of metabolic rate and muscle mass. Again, the prevailing idea (which I agree with) is that increasing muscle mass will increase resting metabolic rate (the value I've heard is about 30-50 calories per pound of muscle per day or so). In this presentation, the speaker made the following two claims, in this order:

Larger people (i.e. higher weight) have higher metabolic rates
Gaining mass (as muscle or fat) does not raise metabolic rate.
This seems contradictory to me. The study he cited was this: two (or three) groups were taken; one lifted, one ran or something, and the other sat around. After 10 weeks, both exercise groups had gained 2-3 pounds of muscle or so. The groups' metabolic rates were measured before and after, and no significant difference was found between pre- and post-test. However, there was a small trend towards a higher metabolism in the exercise groups. Well, looking closer at the data he presented, I found what I consider a major flaw. The metabolic rate was measured for 15 minutes and expressed in calories per minute. Maybe the researchers brain-farted or maybe it was deliberate, but choosing this particular method of data expression would most certainly mask any real results.

Here's the problem as I see it: 3 pounds of muscle (according to popular values) should raise metabolic rate approximately 120 calories per day. If you divide that out by the number of minutes in a day (24 hours * 60 minutes = 1440 minutes), you get an increase of all of about 0.08 calories per minute. Well, across 15 minutes of measurement, there is no possible way that this would show up statistically (it works out to barely more than 1 calorie over the whole measurement period). I approached the presenter with this line of reasoning, but he had already made up his mind and would have none of it. Oh, well.
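The arithmetic, spelled out (assuming the popular figure of roughly 40 calories per pound of muscle per day):

    # 3 lb of new muscle at ~40 cal/lb/day (a popular, possibly generous figure)
    extra_per_day = 3 * 40                  # = 120 cal/day
    per_minute    = extra_per_day / (24 * 60)
    print(f"{per_minute:.3f} cal/min")      # ~0.083 cal/min
    print(f"{per_minute * 15:.2f} cal")     # ~1.25 cal over a 15-minute measurement

An effect of a calorie and change per 15-minute session is hopeless to detect against measurement noise, which is exactly why expressing the data in calories per minute buried the result.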

Don't get me wrong, statistics are necessary to give some objective means of comparing data. However, I think that sometimes it's easy to overlook the actual results because of too high a reliance on the statistical methods.

As an aside, this is one of the easiest ways to criticise a study (for example, if it doesn't support your particular stance). Merely state that the statistics were faulty so that it can be dismissed as invalid.

But, again, I digress and am getting ahead.


--------------------------------------------------------------------------------

Still going.

N = ?:
Along with the huge number of other factors which must be considered when looking at a study is the number of subjects used. This issue is frequently intertwined with the significance problems that I talked about before, as the number of subjects is one of the factors involved in determining significance (at least that's my understanding, albeit small, of statistics).

Having a study with a huge number of subjects is ideal but quite expensive. A study with too few subjects may invalidate the results due to significance problems. Also, a small study size may have other problems associated with it that I'll discuss in a moment.

If a small number of subjects is used, several questions must be asked. (Actually, these questions must be asked when examining every study).

Was the group used a random assortment of people (i.e. 10 people off the street) or some particular subset of people (female college aged runners)? If the group is very specific, the results might not be applicable to everyone at large.

For example, a couple of years ago, boron became the big anabolic supplement when a study was cited which found 300% increases in testosterone with 3 mg of boron supplementation. What was omitted from the citation is that the study was of post-menopausal, boron-deficient women. Bringing their boron status up to normal increased testosterone by 300%, but the result was still quite negligible as these women had exceptionally low testosterone levels to begin with. The companies (as ethical as ever) marketed boron to young athletes, especially males, for whom this particular study has no relevance (and the supplement no effect).

Another study, which I'm going to cover when I get to fats, looked at a variety of things in female runners, some of whom had amenorrhea (loss of menstruation). Without stealing my own thunder, the study found that the women who had lost their period had not only the highest bodyfat percentage but also the lowest dietary fat intake. The women with the lowest bodyfat had the highest fat intake. This led several people to theorize that a higher fat diet is better for bodyfat loss, which may or may not be true. And, it may be true in female runners who train 12 hours or more per week. However, the study size was small (5 women in each of three groups), so it's tough to generalize the results to other athletes. With such a small sample size, it could be a totally random finding. Until another similar study is done which reproduces this data, it's hard to take at face value.

Another example is this: it has been discovered that many of the various bodyfat prediction equations are very population specific (i.e. teenage, college aged male, asian, etc) due to differences in fat accumulation and bone density across various groups. Well, unless research is done on every individual sub-group, it may be hard to generalize some research. Most research is done on white, college-aged males. The last groups ever studied for anything are minorities and pregnant women, which explains in part the lack of available research into these groups. Yes, there is only a limited amount of research money available, but more research needs to be done on other groups so we can get more general data.

How many subjects were used?

As mentioned above, a small number of subjects increases the chance that the results are totally random in nature. This is why case studies (examples based on the results of one person) are not scientifically valid: results on one subject could be totally random. If you run an exercise protocol on 500 people and 450 get positive results, it's a lot easier to generalize the results and feel that you have found a real result than if 2 of 3 get positive results which support your hypothesis.
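Here is a minimal simulation (made-up setup) of why tiny samples produce "results" out of pure chance. Assume a treatment with no real effect, where each subject randomly "responds" with 50/50 odds:

    import numpy as np
    rng = np.random.default_rng(1)

    def responders(n):
        # each subject responds by coin flip -- the treatment does nothing
        return (rng.random(n) < 0.5).sum()

    trials = 10_000
    small = sum(responders(3) >= 2 for _ in range(trials))
    big   = sum(responders(500) >= 450 for _ in range(trials))
    print(f"2+ of 3 'respond' in {small / trials:.0%} of trials")    # ~50%
    print(f"450+ of 500 'respond' in {big / trials:.0%} of trials")  # ~0%

A useless treatment looks like a winner in 2 of 3 subjects about half the time; 450 of 500 essentially never happens by luck. That's the whole case for larger samples in one picture.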

Were the subjects even human?

There are a lot of problems with human research. Animal research is much easier to do, both for ethical reasons and for experimental design. It's a whole lot easier to control the diets of lab rats for nutritional studies than those of humans, unless you keep the humans in a ward and prepare all their foods, which is expensive.

Also, animals are easy to motivate to go to failure in certain studies (which measure this sort of thing on a treadmill). Simply put a shock plate behind the treadmill and the rat will definitely keep running. When it lies on the shocker without trying to move, it's truly exhausted. Unfortunately ;-), with humans, this isn't allowed. Some early nutritional research was done on prisoners, whose diets are easy to control. Nowadays, this cannot be done. Some recent studies have been done on military personnel, who are a little easier to motivate with threats for non-compliance and benefits for those who do comply (i.e. no 5 am reveille).

Animal research raises the question of whether it's applicable at all to humans. In some cases, it may provide a starting point for human research when new ideas are postulated, but in general it's difficult to generalize as there are major differences between us and most animals in terms of muscle type and general adaptations.

More next time.


--------------------------------------------------------------------------------

And going. And going. And going.

Cross-sectional vs. longitudinal

Another factor related to the subjects used is whether the study was longitudinal (meaning that the same people were tracked over a period of time) or cross-sectional (meaning that two hopefully similar groups are compared at one point in time and the differences examined). Most exercise studies are longitudinal in nature: two groups are used, they are tested at the beginning of the study, they undergo whatever protocol is being used, and they are tested at the end of the study. This is probably the more accurate of the two types but requires more time and money.

Cross-sectional studies are rife with problems. For example, just the other day, I saw a report on a morning show about a recent study which found that women under 40 who exercised 1-6 hours per week had a 30-60% lower chance of developing breast cancer. However, the way the study was done makes it difficult to be sure. Basically, about 1000 women were surveyed on a variety of things. The women were matched for age and race (I think) so that some type of reasonable comparison could be made. What the surveys turned up was that the women who reported exercising had a lower incidence of breast cancer. While it's attractive to conclude that exercise prevents cancer, this may not be possible (see the next section).

Correlation is not causation:

What this means is just because there is a correlation between two events (for example, the women above who exercised regularly got less breast cancer), you can't necessarily conclude that the exercise per se prevented the breast cancer. Perhaps there was some other contributing factor which the survey didn't ask about. For example, it's fairly well established that people who exercise regularly are more attentive to their diet and eat more healthily than people who don't. So, maybe in this study, the women who exercised ate less saturated fat in their diet which was the cause of the decreased incidence of breast cancer. Maybe exercise along with a low fat diet was the cause. It's just hard to say.
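A minimal simulation (all probabilities invented) shows how a hidden confounder manufactures exactly this kind of correlation. Here diet quality drives both exercise habits and disease risk, while exercise itself does nothing:

    import numpy as np
    rng = np.random.default_rng(2)

    n = 100_000
    healthy_diet = rng.random(n) < 0.5
    # better diets make exercise more likely (70% vs 30%)...
    exercises = rng.random(n) < np.where(healthy_diet, 0.7, 0.3)
    # ...and in this toy model, diet ALONE sets disease risk (5% vs 10%)
    disease = rng.random(n) < np.where(healthy_diet, 0.05, 0.10)

    for label, mask in [("exercisers", exercises), ("non-exercisers", ~exercises)]:
        print(label, f"{disease[mask].mean():.1%}")

The exercisers come out around 6.5% disease incidence vs. roughly 8.5% for non-exercisers: a "protective effect" of exercise that, in this toy world, is entirely the diet talking.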

Another example was a study a couple of years back that found a correlation between high serum ferritin (the body's storage form of iron) levels and heart disease. The researchers concluded that excess iron was dangerous and the cause of heart disease. This, they explained, is why pre-menopausal women (who lose blood and iron monthly) have a decreased incidence of heart disease, while post-menopausal women have about the same chance as men of developing it. Since iron is an oxidizing metal, they reasoned that it caused cholesterol in the blood to oxidize and cause heart disease (this may be true to a degree). But did the high iron levels directly cause the observed results? Maybe, maybe not. What the researchers didn't mention (to my knowledge) was the dietary source of the iron. Since iron is best absorbed from red meat, it's pretty safe to assume that the subjects were big meat eaters. Well, red meat is high in saturated fat, which is known to contribute to high cholesterol levels. A better study would take animals, feed one group lots of iron and the other not, and see if the first develops more heart disease. Also, maybe the subjects (they were Finns, I think) smoked or didn't exercise and that caused the heart disease. Maybe all the above factors came together synergistically to cause the heart disease.

A final example is the so-called "French Paradox". Basically, the French, whose diet is high in fat and alcohol, do not exhibit many of the diseases that we do (obesity, heart disease, etc) to the degree that we do in America. People have tried to postulate various causes (such as the red wine/HDL connection) with little result. Perhaps the results seen are because the French walk everywhere. Maybe it's because much of their fat comes from vegetable sources. Maybe it's because they eat more fresh vegetables. You can maybe yourself to death. Maybe it's some combination of all these factors. Ultimately, the problem is that any one factor cannot be differentiated from all the others.

The point is that you can't always know what's causing what. My 10th grade physics teacher always used to ask what made the wind blow? He told us it was the leaves flapping. He reasoned that since every time you see leaves flapping, the wind blows, the leaves cause the wind, not vice versa. Correlation is not causation.

These problems are merely a few dealing with subject sample size and type. Next time, some other stuff I want to talk about.


--------------------------------------------------------------------------------

Continuing on. Some other problems.

One against the masses:

One of the problems (especially with supplement and nutritional studies) is the use of a single study to reach a conclusion. This is completely un-scientific and invalid for the most part (although I'm guilty of this too). Here's why.

When only one study on a particular topic is done, it's tough to say if the results obtained were actual results or completely random. If, say, 10 similar studies show similar results, it's fairly safe to reach a definite stance or conclusion. Examples? There are tons. Here's a couple.

Recently a big brouhaha (great word isn't it?) was made over a study which found no effect from beta-carotene supplementation against cancer. This study (which the media immediately jumped on) goes against a ton of other studies documenting the beneficial effects of anti-oxidant supplementation. However, many criticisms have been made of the study to discredit it. The subjects were Finnish males (the Finns have a notoriously bad diet and most smoke) who had smoked an average of 22 years (you read that right). Well, the researchers administered 6 mg of beta-carotene (I think) daily for 6 years (research like this must be so boring :-)) and looked at the incidence of cancer down the road. They actually found a slight increase in the number of cancer occurrences with the supplementation. However, let's look a bit closer at this. One question to ask is "Is anything going to counteract 22 years of smoking?" Answer: probably not. Some criticism has also been made of the dosage used (fairly low compared to many other studies) and this study has basically been discounted.

Another example that I mentioned last time: A study was done with female ammenorheic runners that suggested that higher fat intake leads to lower bodyfat percentages. Another recent study suggests that a high fat intake may lead to better running performance in terms of VO2 max and time to exhaustion (I'll talk about both of these soon).

The problem is that both studies (in addition to having other flaws) had small sample sizes and go against tons of other research that says this isn't the case. Now, it may be that higher fat intakes (notice I didn't say high) improve performance, but the fat intake level in both of these studies (approx 38% of total calories) may be very dangerous in terms of heart disease and other problems.

As an aside, let's talk about the idea of proving a concept. As you may remember from high school, it is not possible to prove anything. In reality, even gravity is not "proven". Yes, we have an overwhelming amount of data that suggests that it works but it's still not proven, just accepted. You can only disprove something. That is, if even once, you were to drop something and it didn't fall down (as gravitational theory states), you would have to change the current model of gravity to include the new data. The same is true of research. Research may "suggest" certain things but it can never prove them. You can reach a consensus (i.e. if 100 studies are in agreement, you've got a pretty good idea that something is more or less correct but it's not proven).

But what about the studies that totally contradict each other? For example, one study finds that chromium works great; another discounts it as ineffective. Well, here the waters muddy a bit. Sometimes researchers have a hidden agenda (like, say, their research money is coming from the company selling the substance being studied) and design the study to show positive results. Or, for whatever reason, it's equally possible to design a study to give negative results.

This is where looking at the details of the study comes into play to see if one of the studies was done particularly poorly and should be discounted. However, herein lies yet another problem (i.e. bias). But, you'll have to wait for it.

Personally, to somewhat counteract the problem of single studies, I prefer to use review papers in many of my articles. Basically, someone with less of a life than me looked up all the available research on a topic and summarized it to try to reach some conclusions.

next, problems with nutritional studies


--------------------------------------------------------------------------------

Ok, nutritional studies. These may be the hardest of all studies to perform and interpret and are possibly the easiest to criticise. As we'll see, there are so many variables that must be considered (which can be potentially criticised) that I wonder if any nutritional studies can be trusted.

What I say I eat isn't what I eat:

Frequently, nutritional studies will rely on what's called a food record to determine various things like nutrient intake. The studies I mentioned about the high fat thing did this. Generally, these studies have the person write down their food intake (type and amount) for 3-7 days including at least one weekend day (where most people's diets tend to slip and change the most) and plug the foods into a computer program to determine intake of carbs, fats, proteins, etc. This particular method (while the cheapest) has several related problems.

1. Frequently, people's recollection of what they eat (in terms of portioning) is totally off. Studies of obese men and women found that, although they reported eating only 1000 calories per day, they were actually eating over 2000. There just seems to be some discrepancy between what we eat and what we think we eat. This has to do with portion reporting. Unless you measure out every morsel of food, it's hard to estimate how much of something you are eating. I mean, how many of us know how much 1/2 cup of broccoli or 3 oz of meat is by eyeballing it? Unless you measure your food regularly, I'm willing to bet very few.

Another recent study looked at the food record of college students and found that their perceived caloric intake was significantly different from their actual intake because most didn't include condiments (mayo, mustard, etc) in their records. The students thought that the calories in the condiments were inconsequential and ignored them. Well, as you know, many condiments (mayo especially) have lots of calories and fat and are not insignificant.

2. Also, it's been found that the mere act of recording what you eat may change what you eat. Basically, making people aware of their food intake causes them to diverge from their normal eating. Thus, it's tough to get an accurate picture of their "normal" dietary habits.

There are probably others but they escape me for now. What's the solution?

1. One is to house the people in food wards and basically prepare everything they eat so that it can be weighed and recorded. While accurate, this type of study is prohibitively expensive and requires that the person stay cooped up for a period of time. Early nutritional studies were done on prisoners (as I mentioned earlier) but this type of study can't be done anymore for ethical reasons.

2. Do animal studies. Animals are the easiest to control in terms of lifestyle and diet because you can feed them whatever you want to. But, again this leads to the problem of trying to apply animal research to humans.

What about other problems with nutritional research? Again, there are too many to list completely, but I'll touch on a few, especially as they relate to supplement studies.

Length of study:

One frequent criticism of supplement studies is that they weren't long enough for any real effect to occur. Some substances may take weeks to months to show any positive effect, while most nutritional studies last a few weeks at most, making it difficult to tell if something really works or not. Certain herbs (ginseng, for example) are reported to take months of constant use to exert any effect (however small).

How much stuff:

Another common criticism is that not enough of a substance was administered to see an effect. One example is the Finnish beta-carotene study I mentioned previously. Another is the studies with l-carnitine. Several studies have found effects with 1-6 grams of carnitine per day over several weeks to months of use. Others (using 0.5 grams per day or less) found no effect.

What kind of stuff:

I.e. what form of the nutrient was used. Most supplement studies use very high quality (frequently pharmaceutical-grade) nutrients, which can behave very differently from the stuff you buy at the local GNC.

Also, there are different forms of these substances available. For example, if you see a study which looked at vitamin C usage, was the vitamin C administered as ascorbic acid, ascorbyl palmitate, vitamin C with rose hips, etc.? Certain forms of nutrients are absorbed better than others, and this can have a profound effect on applying the data to the real world as, in some cases, the experimental substance used isn't even available to the public.

How was it administered:

Frequently, nutritional studies (especially on animals) use intravenous administration rather than oral administration. Arginine is a great example. With intravenous administration of 20 grams of arginine, there is a large, reproducible increase in growth hormone levels. Does this mean that taking arginine orally will have the same effect? Maybe, maybe not. Many nutrients are broken down in the stomach and liver (many steroids are in this category), so taking them orally may have little or no effect.

Initial status:

Another problem is that of initial nutritional status. Frequently, supplement studies find major effects for no other reason than that they are bringing the body's stores up to normal. However, in very few cases do levels above normal confer any positive effects. For example, a couple of years ago, magnesium was touted as the wonder supplement when a study found increases in strength when a supplement was given. However, the study compared one group taking more than the RDA to a group that took less than the RDA and was most likely deficient. Thus, you can't really say that mega-dosing magnesium will exert positive effects assuming you're not deficient. You might be able to say, however, that levels of magnesium below the RDA hinder strength.

N.B. Many studies have found that athletes (despite their higher food intake) may be deficient in many vitamins and minerals. Thus, despite the null effect of mega-dosing, taking additional vitamins to ensure normal levels is warranted in many cases. Also, it is not known if athletes require more of certain nutrients than sedentary people.

Compliance:

This is a big problem in nutritional and drug studies. If you do not monitor your subjects constantly, how can you know if they are taking the substance you tell them to take? You don't. You have to assume that they are and hope for the best.

With exercise studies, this generally isn't a problem, as most studies have the subjects supervised at all times while exercising.

However, if you tell someone to take 10 grams of arginine (which tastes like total shit) daily, odds are that they won't do it for very long. However, they may tell you they did for whatever reason.

As an aside, this is one of the problems with survey studies. Frequently, people tell you what they think you want to hear which skews the data in a big way. For example, some of the sex studies have generated data that just doesn't seem to jibe with real world observations (i.e. I saw a thing on tv about a study that found 75% of married people claimed they had never had an affair). Well, even anonymously, if someone asks you if you've ever cheated on your wife, are you going to tell the truth? Maybe, maybe not.

Placebo effect:

You've all heard about this. Is the result in question really being caused by the substance, or by the subject expecting to see an effect? There is considerable evidence that subject expectations will affect the outcome of the measurement. For example, if you tell someone who's sick that the pill you're giving them (which may just be a salt or vitamin pill) will make them better, they will frequently begin to get well because they expect to. Researcher expectations may have similar effects. That is, if the researcher is testing whether a substance will increase endurance or VO2 max, he may unconsciously urge on the subject receiving the substance more than the placebo control.

This is generally gotten around by making the study placebo-controlled and double-blind. This means that one group is given the active substance while the other is given a pill (or drink) with no physiological effects. Neither the subjects nor the researchers know which is which, so results can be measured without any bias (hopefully). At the end of the study, the packets are broken open to see who got the real stuff and who didn't.
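In code terms, double-blind assignment looks something like this minimal sketch (hypothetical subjects; in a real trial a third party, not the researchers, holds the code):

    import random
    random.seed(3)

    subjects = [f"subject_{i:02d}" for i in range(1, 21)]
    random.shuffle(subjects)
    half = len(subjects) // 2
    code = {s: "active" for s in subjects[:half]}      # held by a third party
    code.update({s: "placebo" for s in subjects[half:]})

    blinded = sorted(code)   # all the researchers see during the trial: names only
    # ...run the study, record measurements against names...
    for s in blinded[:3]:    # the code is broken only at the end
        print(s, "->", code[s])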

Also, frequently the subjects are tested under both conditions (i.e. real substance vs. placebo) in what's called a cross-over study which helps to validate the study. However, this brings up another whole slew of problems.

A. Was the trial design random? That is, was it random who got the substance first and who got it second?

B. Washout period. For supplement studies, there is usually a washout period of several days to weeks to allow whatever substance is being tested to clear the body. This is to avoid the following problem: let's say that you take 6 grams of carnitine for 3 weeks and are then tested for endurance and such. Then, you take a week with no supplement (real or placebo) and do three more weeks with the placebo before being tested (again, you don't know which period is the real stuff, but let's say that you got the carnitine first). Well, was the week with no supplementation (followed by three weeks of placebo) long enough for the effects of the supplement to wear off? If not, the results may be skewed.
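Put together, a randomized, counterbalanced crossover with a washout looks roughly like this sketch (subject names, arm lengths, and washout length all made up):

    import random
    random.seed(4)

    arms = ["carnitine", "placebo"]
    for subject in ["s1", "s2", "s3", "s4"]:
        order = random.sample(arms, 2)       # randomize who gets what first
        plan = [(order[0], "weeks 1-3"),
                ("washout", "week 4"),       # let the first arm clear the body
                (order[1], "weeks 5-7")]
        print(subject, plan)

Randomizing the order handles criticism A; the washout week handles criticism B (assuming a week is actually long enough for the substance in question).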

A great example of this (though for different reasons) was the study I alluded to which found that a high fat diet improved endurance performance. The study took runners and fed them a normal diet, a high fat diet, and a high carb diet, in that order, for one week each, and then tested them on a treadmill. They found the highest time to exhaustion on the high fat diet, even though no more fat was being burned during the test.

One of the big criticisms of this study was that it was neither randomized nor had a washout. Critics have suggested that the lower time in the high carb test came about because, by the third maximal test, the runners were simply burnt out.

There are a ton of other problems with nutritional studies, but this is a fairly good sample.

Next, more ranting.


--------------------------------------------------------------------------------

Well, in between writing the first eight parts of this ramble and posting it, a couple more things occurred to me to write about.

In vitro vs. in vivo (also in the lab vs in the real world):

Literally, in vitro refers to experiments done on a culture or tissue in a test tube or dish. In vivo refers to experiments done directly on a living organism. While in vitro testing is sometimes useful (especially in the beginning of researching a new topic), it isn't all that directly applicable to real world situations. Put it this way: an isolated piece of tissue which is subjected to some condition (say, the presence of a growth factor) will not respond the same way as tissue in the body, where many, many more complex interactions can occur.

What about the second part of the title of this section? As I've ranted about before, results obtained in a laboratory cannot always be directly applied to a real world situation. I bring this up again based on something I heard from a colleague (the same one from the mountain bike example in part 1). First, some background. As many of you know, carbohydrate is not stored as bodyfat as efficiently as fat is. In fact, the conversion given to me in class was that about 77% of excess carbohydrate calories will be stored as fat vs. 97% of excess fat calories.

Well, some recent research has questioned this idea. Basically, the researchers subjected the subjects (I don't know if they were animal or human) to an excess carbohydrate load to see what happened. What they observed is that the body turns up its oxidation of carbohydrate so that none is stored. So, what the researchers concluded (incorrectly, IMHO) is that excess carbohydrate cannot be stored as bodyfat no matter what. And, this may be true in the short term or even in this one case. Hell, what they were probably observing was the thermic effect of food: i.e. the phenomenon whereby the body raises its metabolism to counter the effects of excess food in the short term. But, if that increase were enough to completely counter the excess calories, there would be no obesity in the world. In any case, I think it's safe to say that overfeeding of calories, no matter what type, will result in excess bodyfat. Otherwise, how can you explain why people on a no-fat diet cannot lose weight and frequently gain? And why, if America is eating less fat, are we more obese?

In any case, what my colleague said to her client was that, since jellybeans have no fat, you can't gain weight from them. She is incorrectly assuming from the results above that excess carbohydrate, no matter what, cannot be stored. So, if I eat 10,000 calories worth of jellybeans a day, I can't get fat, right? Not in the least. Just another example of mis-applying research results to the real world.

How to read journals:

Throughout this series, I have talked primarily about the flaws inherent in research. Now, however, I want to talk directly about the journals in which said research is oftentimes printed. Without over-generalizing, I will divide the journals into two types: peer reviewed and non-peer reviewed.

By far, peer reviewed journals are the most reputable. Generally speaking, new studies and articles are submitted (anonymously, I think, to avoid any bias on the reviewers' part) and reviewed by committee to (hopefully) insure that the studies were well performed and legitimate in their methods and such. Yes, some questionable studies do get through, but they are all reviewed. Most scientific journals fall into this category.

Much less reputable are non-peer reviewed journals. Basically, the person can print whatever the hell he or she wants to print. Obviously, this is much more affected by author bias and other factors so it must be taken with a grain of salt. Bodybuilding magazines definitely fall into this category.

However, there is sometimes a middle category. One thing you have to be careful of when reading a well referenced article is to examine where the references came from. For example, one of my favorite "journals" (it's really more like a magazine) is the NSCA's Strength and Conditioning publication. Bi-monthly, it publishes articles of practical use for the S&C practitioner. However, it is notorious for referencing previous NSCA journal articles which, by definition, aren't peer reviewed articles. Also, authors will frequently (Colgan at Twinlab does this a lot) use a popular media source (like Time or Newsweek) as a reference. Now, he generally doesn't use these as scientific references (which they're not) but rather to make a point or example.

However, some people will publish articles and then reference stuff that they themselves published in another non-peer reviewed "journal". Many of the nutritional magazines sold at the health food stores do this. Or, they will reference a book which may or may not use scientific references in its own bibliography.

In the next to last part (I swear this will end soon), I will give my own brief reviews about some of the major publications in the field including both journals and popular magazines.


--------------------------------------------------------------------------------

Back on the topic of journals and the actual research, there are a couple more items to consider before I discuss the actual journals.

Data:

When most people read research, they tend to skip the data which is presented. This is understandable: the data is generally very dull, and it's much easier to read the conclusions than to wade through the raw numbers which led to those conclusions. This can lead to problems, though.

Frequently, by looking at the data, you can see some of the problems that may have occurred with the study that aren't discussed. As an aside (to help make my next example a bit clearer): frequently in studies, the number of people who begin the study isn't the same as the number who finish. This occurs for various reasons (injury, sickness, etc) and is usually mentioned somewhere in the study. Generally, the data from the drop-outs is not considered in the statistical analysis.

One of my favorite examples (this is a bit morbid, I admit, but funny in its morbidity) was this. In my heart disease class, we talked about the benefits that exercise has for post-heart surgery patients. In general, low-intensity, long duration type exercise has been found to confer some cardiovascular conditioning on a population which is in dire need of it. Well, a couple of studies looked at the impact of high intensity exercise (i.e. intervals) on this subject group.

In looking at the studies, one thing came up: some of the persons beginning the study didn't complete it. However, this fact was ignored in the results and concluding remarks sections. What happened was that some of the subjects, whose hearts were already weak, DIED during the study from the exercise. Yeah, that's right, died. The researchers, whose asses and jobs were probably on the line, chose to ignore this little fact when presenting the results of their study. It's not that little if you ask me.

Data manipulation:

One last thing that must be considered (but which is nearly impossible to ever be sure of) is whether the data was manipulated before statistical analysis. It's easy to manipulate, or even create, data.

My favorite example of this is an advertisement that Joe Weider submitted to Ironman magazine a few years ago. In the ad, Joe quoted a study which had supposedly been done at the University of California San Diego (or was it San Bernardino?) and cited data from that study showing the efficacy of a particular supplement. Well, Ironman has one of my teachers, Eric Sternlicht (who teaches at UCLA), on their board as nutrition consultant. And he knew that the study Weider cited had never been performed. It was a total fake.

Weider was taken to court for fraud and settled out of court paying UCSD to do the study. FYI, the product in question didn't work when the real study was performed. If you haven't figured it out yet, I don't like Weider very much.

Another example comes from a lab I worked in at one point. We were doing an exercise study on a particular group of people which measured breath data during exercise. Well, breath data is notoriously noisy; you're damn lucky if you get anything usable out of it. After the subjects had been run, we had a ton of raw data to curve-fit (that was my job, by the way). Some of the data looked great. Some of it looked really bad. Some of it looked fairly good with some small problems. In general, this last set of data looked like the researcher expected it to look, except that some of the points were way off the curve that seemed to exist. Her solution: delete the data that didn't conform to the curve she wanted.

Well, this is a tough call. It may be that you can statistically eliminate certain data points as random noise. However, to my knowledge, she just eyeballed it and deleted certain points before we curve-fit the rest. Her eyeballing method is going to be very much affected by her expectations of what the data "should look like". Not good. Not good at all.
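For contrast, here's a sketch of what a more defensible procedure might look like. This is a hypothetical Python example, not what that lab actually did: the rejection rule is stated up front, applied to the residuals of a first fit rather than to eyeballed points, and the number of discarded points gets reported.

    import numpy as np
    from scipy.optimize import curve_fit

    def mono_exp(t, a, tau, c):
        # A mono-exponential rise, a plausible shape for breath-by-breath data.
        return a * (1.0 - np.exp(-t / tau)) + c

    rng = np.random.default_rng(0)
    t = np.linspace(0, 300, 150)                    # time, in seconds
    y = mono_exp(t, 2.0, 35.0, 0.5)                 # underlying curve
    y += rng.normal(0, 0.08, t.size)                # breath-to-breath noise
    y[rng.choice(t.size, 5, replace=False)] += 0.6  # a few wild breaths

    # First pass: fit everything.
    p1, _ = curve_fit(mono_exp, t, y, p0=(2.0, 30.0, 0.5))
    resid = y - mono_exp(t, *p1)

    # Pre-stated rule: discard only points more than 3 residual standard
    # deviations from the first fit, then refit on what's left.
    keep = np.abs(resid) < 3.0 * resid.std()
    p2, _ = curve_fit(mono_exp, t[keep], y[keep], p0=p1)

    print(f"discarded {int(np.sum(~keep))} of {t.size} points")
    print(f"refit: a={p2[0]:.2f}, tau={p2[1]:.1f}s, c={p2[2]:.2f}")

Either way, the deleted points are documented, which at least lets a reader judge the call; eyeballing leaves no trail.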

Next time, various journals, magazines and stuff. Then, the wrap-up.


--------------------------------------------------------------------------------

This is the next to last part. I promise. Last time, I said that I would give a brief review of some of the major publications available in the field of exercise science and training.

Journals (peer-reviewed):

Medicine and Science in Sports and Exercise (Med Sci Sports Exerc): this is the monthly publication of the American College of Sports Medicine and is highly reputable in the field. It divides its space between original investigations and review articles, and frequently has themes for the studies it prints. This is a very info-dense publication. If you don't have the background (or inclination) to wade through some dense scientific research, don't bother with this one. Also, it only occasionally has articles oriented towards real sports practice. The most recent issue has articles ranging from "Perception of chest pain during exercise testing in patients with coronary artery disease" to "Carnitine supplementation: effect on muscle carnitine and glycogen content during exercise". Subscriptions are $82/year or free with a membership in ACSM ($120/year for professionals).

Sports Medicine: this is a great journal that offers several original investigations plus several review articles (my personal favorite) in each issue. However, sometimes the reviews are on some pretty obscure topics. Also, the cost ($300/year) is prohibitive unless you're very, very into this field. Read it at the library instead.

Physician and Sports Medicine: this is a bit more of a magazine than a journal, but the info is top notch and well researched. It is really aimed at the practicing sports physician more so than the active exerciser, and many of its articles are on rehab and things of that sort. However, it does have the occasional pertinent article on nutrition or training. Price is $46/year.

Journal of Strength and Conditioning Research: this is the NSCA's quarterly research journal (not to be confused with Strength and Conditioning which is a magazine) which prints original research pertinent to S&C professionals. While generally very good, the studies are occasionally out there and not useful except in rare cases (one a while back was a physiological analysis of different ice hockey positions). Subscriptions are $36 per year from Human Kinetics.

Others include the International Journal of Sports Medicine and many more (the names escape me now). If you have access to a biomed library, your best bet is to go check out individual issues rather than subscribe. Also, very interesting articles do sometimes turn up in journals like the Journal of Applied Physiology (JAP), Nature, and things of that sort.

Magazines:

Muscle & Fitness (M&F): while it has the occasional good article, this is basically a glossy catalog of Weider supplements and steroid-built athletes. I'm not a big fan of Weider, as you might guess. I read this one sometimes but believe it only occasionally. 1 star.

Muscular Development (MD): this one is a catalog for Twinlab supplements. However, I generally find its articles to be of a much higher caliber than M&F's. I read it mainly for the nutrition articles and recipes. Also, William Kraemer and Stephen Fleck (two of the foremost researchers in strength training) have monthly articles in this one on the science of strength training (not necessarily bodybuilding). 2.5 stars.

Ironman: while a great magazine, it's essentially the same each month. Basically, every article mentions the typical hardgainer and how he or she must train differently than the genetically blessed or drug assisted bodybuilder. This gets old after about two articles. Still, it does have good articles by Jerry Robinson (of HFL fame), Fred Koch, and others in addition to some good training routines. 3 stars

Muscle Media 2000: this may be my favorite mag of them all. Although it does push Met-Rx a bit too much for my taste, it isn't owned by any supplement company (tho' it does distribute some) and can be a bit more objective about their use. Also, they have what I feel is a good steroid stance. Unlike Weider (who won't admit most of his athletes use), they figure that since people will use them, they might as well get straight answers about them. 4 stars of 4.

Crosstrainer: a new one which tries to be all things to all people. It covers (or rather, tries to cover) every aspect of exercise from aerobics to anaerobics (intervals) to weight training to cross training to nutrition. Thus, it spreads itself a bit thin. I've only read a couple of issues, though, and it seems to be pretty good. 2.5 stars.

Strength and Conditioning: the other publication by the NSCA. This one is more of a practically written magazine about S&C. It has articles ranging over every topic from plyometrics, to power cleans, to how to make your own cheap medicine ball (in the most recent issue). 3 stars if you do S&C; 1 otherwise, as none of the articles will apply to you (i.e., if you're a bodybuilder or endurance athlete).

Powerlifting USA: this one is geared, as you would guess, towards powerlifting solely. I've only read a couple but it looks good if that is your focus. 2 stars.

For specific sports, there are many magazines. I read Bicycling (it has the occasional good article) and Speedskating Times (if you're into inline or speedskating). These all have the occasional article about physiology or whatever, but it's usually applied to the specific sport in question. Others are Runner's World, Triathlete, or whatever.

Newsletters: there are far too many newsletters to name them all. Hell, I've seen only a smattering of them. They range in quality from excellent (Penn State Sports Medicine Newsletter) to OK (Connelly Report, another way to sell more Met-Rx) to just plain silly (most health food store newsletters, which exist solely to help sell supplements and the other crap sold at health food stores). Another that I've just been made aware of is the High Intensity Newsletter. If Strength and Conditioning is the major proponent of NSCA dogma, then the HIT newsletter is its anti-publication. It follows primarily Nautilus/Ellington Darden-type training principles and has articles by Ken Leistner, Wayne Westcott and others. I've been told that it can be quite info-dense, and it does get a bit technical at times, but based on what I saw in the one issue I've read, it is an excellent little newsletter.

Next time, the wrap-up. Finally.

Wrapping up. When I began this rather long-winded editorial/polemic, I wanted to give a few examples of the problems associated with scientific exercise research. As usual, I wrote far more than I originally intended. It's time to wrap it up here, get on with my life, and quit boring you people.

I have come down pretty hard on exercise science (of which I am a practitioner and a big believer). Rather than attacking anyone specific, I wanted to point out some of the problems with research in general and exercise research in particular, which I feel I've done (albeit in a very long fashion). What can we conclude from this?

Am I saying that all research should be thrown out the door since it cannot be valid by any stretch? No. Not in the least.

Exercise research (and any research for that matter) carries with it certain problems and responsibilities for the researchers. It's far too easy to fall into the trap of "My study says this so I'm right". As has been proven time and time again on misc.fitness, there are lots of contradictory ideas (many supported by research, many supported by observation) floating around. Some are valid, some aren't.

Rather than dismissing all research as useless since it's so fraught with problems, we must simply realize the problems associated with it and move on. Research provides a good springboard for many things. Studies which contradict others may provide further insight into what is optimal or ideal. The studies on high-fat diets, rather than suggesting that getting 38% of total calories from fat is ideal, might suggest that athletes need more fat than we think.

Animal studies, while far from ideal, provide a start into determining certain effects within the human body. They can save valuable time and effort by providing a starting point for human research on exercise. However, care must be taken when interpreting the results or attempting to apply the research to humans.

Nutritional studies, perhaps the least accurate, are ultimately better than nothing for providing some decent guidelines. However, the results obtained range from the misleading to the flat-out misguided. The other option (which is to trust only athlete testimonials) is dangerous because athletes are frequently sponsored to say a product works. And the opinions of others are frequently tainted by the placebo effect. So, what do you do?

Should you (hell, can you) go to the library and look up every reference for every ad or idea that you see? If you have no life and like reading research (I do, but it can get pretty dense sometimes), perhaps. Even so, it's hard to have the critical eye for spotting flaws in research unless you're very well versed in it. At some point, it may just be easiest to trust the opinions of others. For supplement info, I like Muscle Media 2000. Although they do push Met-Rx quite heavily, I feel that they are a bit more dissociated from the business aspect of it since they don't manufacture any supplements. They do, however, distribute the ones that they believe work (this includes creatine monohydrate, GKG, Vanadyl and V2G, and of course Met-Rx).

For good training type info, Muscular Development, Ironman, and Muscle Media 2000 are all pretty good, IMHO.

Finally, this brings up my last point. I promise. As we have seen, it's easy to criticise studies on many grounds, whether statistical, design-wise, or any of many other criteria. It's easy to fall into the trap of criticising the studies which don't support our views (see just about any article by Michael Colgan or Brian Leibowitz in Muscular Development about nutrition and supplements: if a study doesn't support their opinions, it's immediately discounted on some ground or another) while at the same time praising the studies (no matter how poorly designed) that do support your idea. As an example, most of the research that I've read on carnitine says that it doesn't work (except for one study that found very small effects). This one paper is the one always cited in articles on carnitine, especially by Colgan. The reason companies push it so hard? My opinion is that it's so damn expensive: at about $1 per gram, the recommended dosage of 2-4 grams or more per day works out to $60-120 or more a month. Colgan and Leibowitz are on Twinlab's payroll. The Weider research group (whatever the hell that is) must answer to big Joe Weider. Thus it is in their best interest (despite their posturing about "truth" and "honesty" in the marketplace) to support this stuff. Now, don't get me wrong, I do follow Colgan and like many of his ideas. However, when reading any of his articles, you must remember that his loyalty is to his paycheck (as it is with all of us).

If you want Colgan's most recent advertisement (err, book), pick up a copy of "Optimum Sports Nutrition", published by Advanced Research Press (owned by Twinlab and the publishers of Muscular Development). While generally very good, it is almost laughable how Colgan will introduce a subject (say, BCAA supplementation or whatever), mention a few studies, and conclude with "Oh yeah, Twinlab makes an excellent version of supplement X". Well, of course they do, Mike, of course they do.

If you want a slightly less biased review of supplement studies, pick up a copy of Nutrients as Ergogenic Aids for Sports and Exercise by Luke Bucci from CRC Press. Unfortunately, it's $69.95, but it's available for a 30-day look-see from them.

And if you want the ultimate text on current thinking in sports nutrition, pick up a copy of "Nutrition in Exercise and Sport, 2nd edition" for a whopping $125.00, also from CRC Press. It's probably the most comprehensive book on sports nutrition available. It's also probably the most expensive.

Warning: sarcasm coming up next.

To counteract the problem of analyzing research, I think it should be mandatory on misc.fitness that whenever you present information, no matter how well accepted, you include not only a completely referenced bibliography but also the full text of the papers you are referencing.

GIF is probably the preferred format for tables and graphs. If you don't have a scanner, tough, type in the articles by hand and reproduce the graphs in your favorite drawing program. By providing the original papers, we on misc.fitness can more easily evaluate the information you've presented without going to the library and looking the shit up ourselves.

This is the end of the sarcasm. If you think I'm being serious, you need to get a life much worse than I do (which is hard to believe. I mean, hey, I wrote an 11-part series on why scientific research is useless when I'm one of its biggest proponents. How depressing. I really do need a life.).

So, the ultimate message of all of this (without turning it into a Public Service Announcement) is to educate yourself and think critically. If I write something and you question it, by all means let me know and I'll try to defend myself (and believe me, people have been of late, but that's okay, too). Hell, try it out if it seems valid. No one thing works for everyone. If I say that periodization is great, try it. If it doesn't work for you, try something else. No one, despite what they'd have you believe, knows everything. To end, I'll relate a quote I heard somewhere. Ahem.

"An expert is someone who knows one more thing than you do."

And if that isn't the God's honest truth, I don't know what is.

And on that note, I'll end this little rambling. Next time, fats and athletics, maybe. I'd promise but I'll probably come up with something to deter me from finishing it again and write about something wholly unrelated.

See ya' on the net,
Lyle McDonald

lylemcd@edge.edge.net




------------------
Just because the majority believes it, does not make it true!
 
Originally posted by Cenox
Seems nobody read it :)
Don't get me wrong, I tried, but after having to stop for meals 2 and 3 I sort of lost interest.....:lol: I'll be sure to finish it soon though!

Animal
 
Originally posted by Cenox
Seems nobody read it :)

That's way too long for me too:D

But seeing at the bottom that Lyle wrote it...I'll force rep this puppy out soon! :D

DP
 
Copy....Paste to word.....save for later.:yes:
 
While Lyle may be an expert in the field of exercise and fitness, his understanding of the scientific method is not great enough to refute its use. He is definitely right on in regards to the manipulation of data to prove one's point, however.
 
That was quite an informational article. I must admit that by the time I finished reading it, I had no way of knowing if anything I read about any protein, supplement, or drink is ever valid. Although I agree with his stance on the way scientific data can be slanted to support a specific point of view, I'd rather see at least some attempt at research for a product than just an article or magazine trying to push it with nothing to back it up. That would be the same as a company advertising "cocoa is the greatest supplement for getting ripped muscles" without anything to support it. Not sure if that all makes sense, but I have a hard time spending my cash on a product without data or at least some opinions from people that have tested/used the products.
 
You can tell by simply reading a study's abstract if it was done on the "up and up." At least, if you know what to look for.

A study showing a supplement's effectiveness in athletes, for example, is vague, unless you know whether that athlete was endurance trained or resistance trained, for example. A lot of mistakes have to do with the (mis-)interpretation of the person reading it.
 
ow!

i think i now need glasses.

i must say, that does really open your eyes about research in many ways. i never thought to look at any research so deeply as to what can cause the information to be wrong.

thnx for taking the time to type this out, and thanx to my job being so boring it gave me time to read it all.
 
Are these all cut and pasted from volleyweb.com?

I haven't read them, just scanned them, but if they are from volleyweb, then they are slightly dated, and Lyle has changed some of his ideas and opinions based on more recent studies and other influences.

Still some decent writing and worth the read, but if you spot any contradictions with other advice he offers it's probably because it's aged.

(I just noticed this sticky now, purely because the title is horrendously worded).
 
Copy and paste to Word, I agree.


Good material.
 
Well, as I look back on it, I realize that it was aimed at certain people (who should know who they are without my naming names). These people are the ones who are hell-bent on not accepting any information based purely on empirical observation. They seem to feel that if something does not have a basis in science (regardless of how well it might have been proven in the "real world") in cannot possibly be acceptable.

Hmm.. I get the feeling I know who that statement was directed towards and what the 'debate' was about.

Kc
 
Nice info!
 
lol, gotta love lyle.
 
Robert DiMaggio said:
. . .
Science vs. empiricism:
Science by its nature is necessarily empirical. It should read 'Science vs. Anecdote'.
This article cites anecdotal evidence, i.e., personal experience.
 
Geez. Robert, if anyone else had posted this I wouldn't have bothered. But your posts are almost always interesting and relevant, so I gave it a go. While I completely agree with the author, he could have made his point in 2 paragraphs instead of 200. Talk about beating a dead horse!
 
Day 1: Begun reading, have managed to progress onto 4th paragraph. Must stop for rest soon.

Day 2: After first light, proceeded to read past paragraph 4. Several interruptions throughout day prevented further reading, such as food, and work.

Day 3: Have somehow managed to progress onto the 1000000th paragraph, oxygen in short supply, may need relief soon.

....3 years pass

Finally finished reading article....It's been emotional.


GREAT article, Rob. WELL WORTH TAKING TIME TO READ, folks.
 