Irritable Bowel Syndrome and Digestive Health Support Forum
Status: Not open for further replies.
1 - 14 of 14 Posts

Registered · 34,955 Posts · Discussion Starter · #1
Scand J Gastroenterol. 2006 Feb;41(2):170-7.

Proximal and distal gut hormone secretion in irritable bowel syndrome.
Van Der Veek PP, Biemond I, Masclee AA. Department of Gastroenterology and Hepatology, Leiden University Medical Centre, Leiden, The Netherlands.

Objective: Sensory and motor dysfunctions of the gut are both important characteristics of irritable bowel syndrome (IBS). Several gut peptides contribute to the regulation of gastrointestinal function, but little is known about gut hormone secretion in IBS.

Material and methods: We evaluated perceptual thresholds and fasting and postprandial plasma levels of proximal (cholecystokinin (CCK), motilin) and distal (peptide YY (PYY)) gut peptides up to 1 h after ingestion of a high-caloric meal in 99 IBS patients and 40 age- and gender-matched healthy controls.

Results: Fasting plasma CCK levels were significantly elevated in patients (1.2+/-0.8 pM) compared with those in controls (0.8+/-0.7 pM, p=0.006), as was the incremental postprandial CCK response (72+/-73 versus 40+/-42 pM.60 min, respectively; p=0.003). No differences in fasting and postprandial motilin or PYY levels were found. The postprandial PYY response was significantly increased in hypersensitive compared to normosensitive patients (215+/-135 versus 162+/-169 pM, p=0.048). Patients with a diarrhoea-predominant bowel habit had higher fasting motilin levels compared to constipated or alternating-type IBS patients (82.1+/-36.5 versus 60.8+/-25.1 versus 57.5+/-23.9 pM, one-way ANOVA p=0.003).

Conclusions: IBS patients have increased fasting and postprandial plasma levels of CCK. Changes in plasma levels of motilin and PYY may contribute to the clinical expression of IBS, such as the presence of visceral hypersensitivity or a predominant bowel habit.
 

Banned · 160 Posts
Not sure what happened with this abstract, K, but take a close look at the stats, which don't show what they claim to show in the slightest. Errors in one might be ascribed to typos, but they're *all* non-significant.
 

Registered · 34,955 Posts · Discussion Starter · #3
OK, in case I forgot my stats class again. Statistically significant, at least for biological papers, usually means p less than 0.05. In this paper all of them are less than 0.05, right? What definition of statistically significant are you using?

IIRC, p values give the probability that a result like this would happen by chance alone. So a p of 1 means the observed difference is completely consistent with chance, and p=0.1 means there is a 10% probability of getting a result like this by chance alone. p=0.05 is the usual cut-off in the biological sciences, where there is only a 5% probability that the results are due to chance alone. I think for some things they use p=0.01 rather than 0.05 as the cut-off, and some of these meet that higher standard.

Or am I just really confused?
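To make the "by chance alone" idea concrete, here is a small simulation sketch (illustrative only, nothing from the paper; the group sizes of 99 and 40 are just borrowed from the abstract) showing that when two groups are drawn from the very same distribution, roughly 5% of two-sample t-tests still come out with p < 0.05:

```python
# Minimal sketch: with NO real difference between groups, about 1 test in 20
# (5%) still reaches p < 0.05 purely by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_experiments = 10_000
false_positives = 0

for _ in range(n_experiments):
    # Two groups drawn from the *same* distribution (no true difference).
    a = rng.normal(loc=1.0, scale=0.8, size=99)
    b = rng.normal(loc=1.0, scale=0.8, size=40)
    _, p = stats.ttest_ind(a, b, equal_var=False)  # Welch's two-sample t-test
    if p < 0.05:
        false_positives += 1

print(f"Fraction of chance 'significant' results: {false_positives / n_experiments:.3f}")
```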
K.
 

Registered · 34,955 Posts · Discussion Starter · #5
Now, in biological systems you can have overlapping error ranges where the p value is still less than 0.05; it depends on how many sample points you have. When samples have close, but different, means, you can still see statistical significance in ANOVA or t-tests. There is substantial variation in biological systems between one organism and another even when they are nominally identical. That is why geometric measures are sometimes used rather than arithmetic ones (mean and SD), especially since the arithmetic mean can be thrown way off by a few outliers.

The wide ranges on the errors probably mean they could have stood to report the geometric version rather than the arithmetic. A couple of outliers can really blow up the spread on the +/- even when the majority of the data is quite compact, yet the t-tests and ANOVA will still pick out the difference between the means. You are determining whether the means are statistically different with the test, not whether the ranges of the two samples are completely non-overlapping.

It would be nice if the data were cleaner, but it may take a look at the whole paper before you decide that the statistical tests cannot be right. (I assume they know how to run an ANOVA and a t-test on the means.)
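To illustrate that point (with made-up, right-skewed numbers, not the study's data), here is a sketch in which a long right tail makes the SD nearly as large as the mean in both groups, yet Welch's t-test can still separate the group means:

```python
# Sketch: skewed data with a long right tail can have an SD on the order of
# the mean, yet a Welch's t-test on the group means can still be significant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Two made-up, right-skewed (log-normal) groups; sizes echo the abstract's 99 vs 40.
group_a = rng.lognormal(mean=4.1, sigma=0.9, size=99)
group_b = rng.lognormal(mean=3.4, sigma=0.9, size=40)

for name, g in (("A", group_a), ("B", group_b)):
    print(f"group {name}: arithmetic mean {g.mean():.1f} +/- {g.std(ddof=1):.1f} (SD), "
          f"geometric mean {stats.gmean(g):.1f}")

# The +/- SD ranges overlap heavily, but the test compares the MEANS.
t, p = stats.ttest_ind(group_a, group_b, equal_var=False)  # Welch's t-test
print(f"Welch's t = {t:.2f}, two-tailed p = {p:.4f}")
```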
 

Banned · 160 Posts
I spent 15 years doing stats to bash my expts into shape for publication - and in the first couple of years we were doing the stats by hand, from first principles! I know what p<.05 means - it means that if you do an equivalent expt 20 times where there is no real difference, then, on average, one of those 20 will show an *apparent* effect where none exists. The point of stats, I learnt a long time ago, is *not* to generate a difference where common sense shows there clearly is not one, but to show where there is *not* a difference where you think, on the face of it, there is. In other words, they are important in that they show up the flaws in the data, not that they make something out of bad data.

Back to the abstract posted - remember we are not looking at matched pairs, or differences *within* individuals, but differences between two unmatched populations, so you can't do anything fancier than a simple mean +/- se. 1.2 +/- 0.8 pM is *never* significantly different from 0.8 +/- 0.7 pM, no matter what n is (as a rough rule of thumb, if the means differ by 2 se, you may see p<.05). 72 +/- 73 (!) is never significantly different from 40 +/- 42. 215 +/- 135 pM is never significantly different from 162 +/- 169 pM. 82.1 +/- 36.5 pM vs 60.8 +/- 25.1 pM vs 57.5 +/- 23.9 pM does not generate any significance either (and certainly not at p=0.003).

I am not having a go at you, K, you don't have time to analyse everything you post up, but there must be errors with these figures. The Scand J Gastro is a respected journal, not one of the joke ones, so I don't understand why these particular stats are so silly.
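Worth flagging for anyone following along: the abstract as posted never says whether its "+/-" figures are standard deviations or standard errors, and that single detail drives this whole disagreement. Here is a small sketch (not from the paper) that runs Welch's t-test from the summary numbers for the 72 +/- 73 (n=99) vs 40 +/- 42 (n=40) comparison under each reading:

```python
# Sketch: Welch's two-tailed t-test computed from summary statistics alone.
# Whether "+/-" is an SD or an SE changes the answer completely.
from math import sqrt
from scipy import stats

def welch_from_summary(m1, spread1, n1, m2, spread2, n2, spread_is_se=False):
    """Welch's t-test from means, spreads and group sizes.

    spread_is_se=True treats the spreads as standard errors,
    otherwise they are treated as standard deviations.
    """
    if spread_is_se:
        se1, se2 = spread1, spread2
    else:
        se1, se2 = spread1 / sqrt(n1), spread2 / sqrt(n2)
    se_diff = sqrt(se1**2 + se2**2)
    t = (m1 - m2) / se_diff
    # Welch-Satterthwaite approximation for the degrees of freedom.
    df = se_diff**4 / (se1**4 / (n1 - 1) + se2**4 / (n2 - 1))
    p = 2 * stats.t.sf(abs(t), df)
    return t, p

# Postprandial CCK response from the abstract: 72 +/- 73 (n=99) vs 40 +/- 42 (n=40).
for as_se in (False, True):
    t, p = welch_from_summary(72, 73, 99, 40, 42, 40, spread_is_se=as_se)
    print(f'"+/-" read as {"SE" if as_se else "SD"}: t = {t:.2f}, p = {p:.4f}')
```

Read as SDs, the gap between the means is roughly three standard errors wide and the p-value lands well below 0.01, much as the abstract reports; read as SEs, it is nowhere near significant, which appears to be the reading in the post above.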
 

Registered · 34,955 Posts · Discussion Starter · #8
Like I said, I'd have to see the full paper; I don't have it available on-line. I assume the people who peer-reviewed it looked over the numbers. Without the whole data set I can't comment too much. I have seen data sets that look like this and that generate this kind of statistics (not all data comes in nice, normally distributed packages). If you think it is so bad, write a letter to the editor? I can't fix their data here.

I have seen cases where a few data points way outside the mean throw the standard deviation off like that (we usually then go to geometric means and standard deviations for the X +/- Y). In fact, I just created a data set that does exactly that, which is the sort of thing we sometimes do see happen. First set: n=45, low=0, high=300, most of the data clustering near the mean of 70, and the standard deviation is 70. Second set: n=45, low=0, high=100, most clustering near the mean; the mean is 39 and the standard deviation is 32. The p on a t-test with unequal variances is 0.01, two-tailed. Like I said, especially in environmental samples we see this sort of thing. FWIW, the geometric versions (done after changing the zeros to ones so the logs work) came out in the +/- 5-6 range, which is also typical of the data we get where the standard deviation is about the same as the mean.

But I really don't want to go into every single number of every single abstract from every single peer-reviewed paper where someone else has the whole data set with you. I have seen data sets like this, and I understand how they arise (which is how I made up a data set that worked so quickly; I know what they look like when they give results like this). Some biological or environmental samples are highly variable. I'd rather have them report the whole data set, even when you get really funky standard deviations, than do the "just throw out all the zeros and all the high data to make it look better" thing that too many do.

It looks like a preliminary study, not a definitive one. The data sets look different in those cases, as I am sure you are well aware. It shows enough of a something that it may be worth doing the better study with the prettier numbers. You gotta do the rough and dirty ones before you can get the $$ (usually mega-$$) to get the solid data done.

Really, if you wanna go after everything I post and pick it apart even when I DIDN'T MAKE THE DATA SET, go ahead, but I'm not going to play anymore. Because, based on experience, you will always find some fault with everything I say or do, and it isn't worth it to me to get into something like this ever again. Have fun.
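K's made-up data set isn't posted, but a sketch along the same lines is easy to build: right-skewed values clipped to roughly the ranges described, n=45 per group. The exact means, SDs and p-value below won't match K's (the numbers are invented for illustration), but the pattern - a standard deviation about as big as the mean, together with a significant unequal-variance t-test - is the point:

```python
# Sketch of a K-style made-up data set: skewed samples whose SD is roughly
# the size of the mean, compared with Welch's (unequal-variance) t-test.
# Purely illustrative numbers; this is neither K's set nor the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Set 1: n=45, mostly small values with a few large outliers, capped at 300.
set1 = np.clip(rng.lognormal(mean=3.9, sigma=1.0, size=45), 0, 300)
# Set 2: n=45, same skewed shape but smaller overall, capped at 100.
set2 = np.clip(rng.lognormal(mean=3.3, sigma=0.9, size=45), 0, 100)

for name, s in (("set 1", set1), ("set 2", set2)):
    print(f"{name}: mean {s.mean():.0f}, SD {s.std(ddof=1):.0f}, max {s.max():.0f}")

t, p = stats.ttest_ind(set1, set2, equal_var=False)  # unequal-variance t-test
print(f"Welch's t = {t:.2f}, two-tailed p = {p:.4f}")
```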
K.
 

Registered · 9,141 Posts
Does this explain why the drug Motilium makes me worse?
 

Banned · 160 Posts
Like I said, it ain't personal, but the stats on the abstract as quoted are *####*, non-significant. You shouldn't have tried to defend the original paper without understanding stats, which, I'm sorry to say, you clearly don't. Try throwing 72 +/- 73 (n=99) vs 40 +/- 42 (n=40) at any competent mathematician or scientist over the age of 16 - seriously, K, do it, don't just walk away. Then tell me what they say. No "ah, yes, depends on the data set." Bollocks. I've played that game, trying to see how far the stats will take you. That's part of the reason why I said some time ago (to some disdain) that 95% of published science is corrupt, pointless or inaccurate. Maybe 10% of scientists understand stats? That's generous!!

You mentioned geometric means - but you need a good reason to do that, like evidence that your sample sets are logarithmically distributed, and you don't have that (a quick way to check is sketched below). In any case, that doesn't affect the mean +/- se as reported. By all means get angry with me, WRITE IN CAPITALS IF YOU MUST, but that doesn't change the laws of mathematics.

The paper was interesting at first glance, and it attracted my attention because I had once worked on developing a binding assay for CCK, in smooth muscle & CNS tissue. As always when I first look at a paper, I check one or two of the results against the raw data, just to see how significant the results are - in the same way as I would scan the methods to see whether any short cuts have been taken. In this particular case, it was apparent that the results as reported did not support the conclusions drawn.

But now your response is to walk away from a debate. That's fine when you were 6 & I pulled your pigtails, but we are now adults, and I say this is not a personal thing (it is flux who gets my goat, not you); you have always taken the time to research & answer my queries, & on *many* aspects of medicine & biology I bow to your experience and knowledge, and am grateful for your input. (Honesty Corner: you are a very helpful individual, but one of the tetchiest also.) But you are wrong on the stats, and my original question, before you flew off the handle, was: why are the stats on this abstract so out of line with the conclusions, and thus, can we trust the conclusions?

Peace, K, seriously, but serious debate also.
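For what it's worth, checking whether a geometric summary is even appropriate is straightforward when the raw data are in hand: test whether the log-transformed values look normal. A hedged sketch with made-up numbers (nobody in this thread has the study's raw values):

```python
# Sketch: deciding between arithmetic and geometric summaries by testing
# whether the LOG of the data looks normally distributed (Shapiro-Wilk).
# Made-up data; the study's raw values are not available here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
sample = rng.lognormal(mean=1.0, sigma=0.8, size=99)  # skewed, positive values

_, p_raw = stats.shapiro(sample)          # normality of the raw values
_, p_log = stats.shapiro(np.log(sample))  # normality of the log-transformed values

print(f"Shapiro-Wilk p (raw): {p_raw:.4f}  (log): {p_log:.4f}")
# A tiny p for the raw values but not for the logs suggests the data are
# closer to log-normal, which is when geometric means make sense.
```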
 

Registered · 34,955 Posts · Discussion Starter · #11
I just don't like someone getting what I consider "pissy" with me over someone else's data. If it were my paper and you were reviewing it, I'd get it. But it isn't even my data, and yet I feel attacked because I bothered you by posting it here. It seems you want to make it my problem that the stats are the way they are. I'm just the messenger here; I can't fix it. It got peer-reviewed, in a good journal, and I'm sure someone who looked over the whole data set knows better what is going on than you or me working from the abstract.

Maybe it is a personality conflict, or a communication style issue, but why keep debating when all I'm gonna get is slammed no matter what I do? I feel that I have slammed my head into a brick wall enough with you lately that I don't want to do that again. The definition of insanity is doing the same thing over and over and hoping you get something different, which is what I feel like here. You say nice things to try to rope me back in, and yet I always seem to come out the bad guy in every interaction lately. It isn't worth going round after round with you.

Yep, I've seen much better data in papers, but like I said, if you wanna argue the stats they presented, write the authors, write the journal; without the whole paper in front of me I'm not going to argue about it. You wanna go look up the whole paper and fire off letters to the editors about how they let that sort of thing get published, go ahead. You may call walking away childish. I call it not staying in what is becoming an increasingly toxic on-line relationship.

K.
 

Registered · 34,955 Posts · Discussion Starter · #12
quote: Originally posted by SpAsMaN*:
Does this explain why the drug Motilium makes me worse?

I really have no idea. The drug's name happens to sound like the name of the hormone they measured (motilin), but that similarity alone doesn't mean this study tells you why the drug does or doesn't work for you.
 

Registered · 34,955 Posts · Discussion Starter · #13
On the subject of why one shouldn't jump to conclusions based solely on a few numbers in an abstract, and why they most likely had non-normal distributions in the data:

Since the library is between my office and my car, I popped in to look at the whole article and skimmed the results section (I didn't have the hour or two it takes me to really dissect a paper completely) to see why they might have this wide variance in the data.

It seems the authors noticed that little fact and did an analysis of the data to figure it out. There are significant differences between men and women (the overall numbers in the abstract I posted have both men and women in them), some of which they thought were interesting because of the differences in the frequency of IBS in men and women (could be a clue here, folks). Also, these are hormones where the amount you produce changes significantly as you get older (they had mixed ages as well as genders in the overall numbers in the abstract).

This is why getting into extended debates about an abstract is often not worth it to me. Until I've read the article I don't have the whole story. Most of the time, things that read funky in the abstract were, in fact, noticed by the authors and reviewers, and there are things in the article that discuss them.

K.
 

Banned · 160 Posts
This is weird. There is no statistician in the world who could argue that 72 +/- 73 (n=99) is different from 40 +/- 42 (n=40). That doesn't require a letter to the editor to sort out. That's just - well, common sense. The stats just back up the common sense.

If you want me to take a tough line - then I think that being accused of contributing to "an increasingly toxic on-line relationship" is what I call debating. When you lose, you lose gracefully. If you feel, as you say here, that "getting into extended debates about an abstract is often not worth it to me", then why the f??? did you start debating? Oh, yes, as you say here, "Until I've read the article I don't have the whole story." And yet you have enough of the story to bash me, mekis. Wrongly.

You misread me. I like you. More importantly, I respect you. Really. Seriously. No smileys. You have helped me a lot. You have taught me a lot. You are good at what you do here. But (please bear with me) that paper is wrong as written. Ask a friend, re the stats. It's not your fault. This has got out of hand. How do I show my goodwill whilst maintaining the integrity of my analysis? There is no graemlin code which sends flowers from mekis to Kathleen M, Ph.D., otherwise I would use it!!!!

Peace, Friendship, Abject Obeisance!! I am only trying to debate facts, not diss you. Seriously!!! You misread me. Write to my email.
 