Ramazzini Study on RF effects in Rats

frankywashere

New Member
Friend of mine was trying to say ridiculous things like "5G causes cancer" and cited this article as proof:

Ramazzini Study on Radiofrequency Cell Phone Radiation: The World’s Largest Animal Study on Cell Tower Radiation Confirms Cancer Link
https://www.globalresearch.ca/ramaz...-tower-radiation-confirms-cancer-link/5695275


Does anyone have any information that debunks this? I remember reading that this study exposed rats at very close proximity, and that no humans would be exposed that way. Can't remember the details though.

Frank
 

Dingo

Member

Hi Frank,

Thanks for the link, I wasn't aware of this paper. I need more time to read through it myself, but I had a quick look around and found a fairly succinct rebuttal of this study:

https://betweenrockandhardplace.wor...-not-increase-risk-for-schwannoma-and-glioma/

The author mainly focuses on the differences between the Italian study and the NTP study I mentioned above, pointing out that if there was a genuine response at the very low doses given in the Italian study, the NTP study should have shown MUCH higher cancer rates than it did.
 

Mechanik

Active Member

You’re right @frankywashere, the exposure levels are ridiculous.

I tried to read the study, and found the first page at this link here. The summary says that they exposed the rats at 50 V/m to a 1.8 GHz signal, which is essentially the same frequency as a cordless home phone or an older WiFi unit. The key question is: how powerful is 50 V/m?

I couldn't find a handy guide to comparative RF field strengths, but the following article, from the National Institutes of Health, covers a field survey of RF strengths measured in different indoor locations: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6466609/

So the study exposed rats to roughly 50-380 times the highest levels found by the NIH, across the whole spectrum and not just at 1.8 GHz. The reason I looked at indoor studies is that they exposed the rats for 19 hours a day, and the only place you'll get that is at home, and even that's debatable.
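For anyone who wants to convert these numbers themselves: under the usual far-field plane-wave assumption (which may not strictly hold inside an exposure chamber), field strength converts to power density as S = E^2 / 377. A minimal sketch, with the 377-ohm free-space impedance as the only constant:

```javascript
// Far-field plane-wave approximation: power density S = E^2 / Z0,
// where Z0 ~ 377 ohms is the impedance of free space.
function powerDensity(eFieldVperM) {
    var Z0 = 377; // ohms
    return eFieldVperM * eFieldVperM / Z0; // W/m^2
}

console.log(powerDensity(50).toFixed(2)); // ~6.63 W/m^2 (the study's high-exposure group)
console.log(powerDensity(5).toFixed(3));  // ~0.066 W/m^2 (the low-exposure group)
```

Note that because power goes with the square of the field, the 10x spread in V/m between the dose groups is a 100x spread in power density.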
 

Mick West

Administrator
Staff member
I tried to read the study, and found the first page at this link here. The summary says that they exposed the rats at 50 V/m to a 1.8 GHz signal, which is essentially the same frequency as a cordless home phone or an older WiFi unit. The key question is: how powerful is 50 V/m?

This is kind of related to what I was looking at in the "Measuring 5G" thread: https://www.metabunk.org/threads/measuring-5g-emf-and-using-icnirp-guidelines.11178/



The "Incident E-field" measurement of V m-1 (V/m) is the measurement in the rat study. at 1.8 Ghz (1800 Mhz) the reference level is 1.375*sqrt(1800) or 58 V/m. So 50V/m is slightly less than this limit (the occupation limit, public limits are much lower).

But V/m isn't "power"; power would be the last column, which is the E-field multiplied by the H-field. As seen in note 5, above, the limits require all three of the values to exceed the reference values if they are in the "far field", but the definitions of the "fields" are confusing. See the 2020 standard.
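If I've read the 1998 ICNIRP table right (assumption: the 400-2000 MHz band formulas, with f in MHz), the E-field reference levels can be sketched like this:

```javascript
// ICNIRP (1998) E-field reference levels for 400-2000 MHz, f in MHz:
// general public 1.375*sqrt(f), occupational 3*sqrt(f).
function eFieldReferenceLevel(fMHz, occupational) {
    if (fMHz < 400 || fMHz > 2000) throw new RangeError("formula only valid for 400-2000 MHz");
    return (occupational ? 3 : 1.375) * Math.sqrt(fMHz);
}

console.log(eFieldReferenceLevel(1800, false).toFixed(1)); // 58.3 V/m
console.log(eFieldReferenceLevel(1800, true).toFixed(1));  // 127.3 V/m
```

So the study's 50 V/m sits just under the public reference level for that band.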
 

Mick West

Administrator
Staff member
I tried to read the study, and found the first page At this link here.
Attached https://www.metabunk.org/attachments/falcioni2018-pdf.40227/
[Screenshots of the paper's schwannoma incidence tables]

On the face of it, this looks like bullshit, but maybe someone can explain the statistics. There were 3/207 (1.4%) schwannomas in male rats at 50 V/m, but 0 in the control. However there were 9/410 (2.2%) in female rats at 5 V/m, vs 4/817 (1.0%) in the control, but that's not statistically significant?

And the 25V figures are "safer" than the 5V figures, and essentially the same as the control.

The Schwannoma incidence seems essentially random to me. What am I missing?
 

Attachments

  • falcioni2018.pdf (959.1 KB)

Mendel

Senior Member.
On the face of it, this looks like bullshit, but maybe someone can explain the statistics. There were 3/207 (1.4%) schwannoma in male rats at 50V/m, but 0 in the control. However there was 9/410 (2.2%) in female rat at 5V/m, vs 4/817 (1.0%) in the control, but that's not statistically significant?
If you have 0 Schwannoma in 412 rats, then you'd be 95% confident that the probability is lower than 0.89% (1.28% for 99% confidence). If you have a probability of 1.4% to see a Schwannoma, then the probability that you will see none in 412 samples is 0.3%. This means that the chance that the same basic probability of Schwannoma incidence underlies both groups is lower than 5%.

If you have 4 in 405, the 95% interval goes from 0.27% to 2.51%, which includes the observed 2.2%, so that is not significant at the p <= 0.05 level. If the base probability is 2.2%, the probability that you have 4 in 410 (as in the control) or less is 5.25%, which is not low enough. The basic probability here is more likely to be the same, at a likelihood of over 5%.
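Those numbers are easy to cross-check against the binomial distribution directly. A sketch (my own code, not the study's method; the 1.4% and 2.2% observed rates and the group sizes are taken from the table above):

```javascript
// Binomial pmf P(X = k) for n trials with per-trial probability p,
// computed via logs to stay numerically stable for n ~ 400.
function binomPmf(k, n, p) {
    var logC = 0;
    for (var i = 0; i < k; i++) logC += Math.log((n - i) / (k - i));
    return Math.exp(logC + k * Math.log(p) + (n - k) * Math.log(1 - p));
}

// Binomial CDF P(X <= k)
function binomCdf(k, n, p) {
    var total = 0;
    for (var i = 0; i <= k; i++) total += binomPmf(i, n, p);
    return total;
}

// If the true rate were 1.4%, the chance of seeing 0 schwannomas in 412 rats:
console.log(binomPmf(0, 412, 0.014)); // ~0.003
// If the true rate were 2.2%, the chance of seeing 4 or fewer in 410 rats:
console.log(binomCdf(4, 410, 0.022)); // ~0.052
```

The first number is well under 5%, the second just over it, which matches the significant/not-significant split described above.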
 

Mendel

Senior Member.
Now, the question is, how likely is it to see a 95% significant result if you run 6 tests? (Or if you run 20 tests and publish only 6?) That's where reproducibility comes in.

https://xkcd.com/882/ (xkcd 882: "Significant")
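The xkcd point is easy to quantify: with m independent tests each run at a 5% false-positive level, the chance of at least one spurious "significant" result is 1 - 0.95^m. A sketch (this assumes the tests are independent, which isn't quite true here):

```javascript
// Probability of at least one false positive among m independent
// tests, each run at significance level alpha.
function familywiseRate(m, alpha) {
    return 1 - Math.pow(1 - alpha, m);
}

console.log(familywiseRate(6, 0.05).toFixed(2));  // 0.26 - six tests
console.log(familywiseRate(20, 0.05).toFixed(2)); // 0.64 - the comic's 20 colors
```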
 

Rory

Senior Member.
The Schwannoma incidence seems essentially random to me.

I would say the same: the incidences among the different types of Schwannoma are very randomly scattered too. Sample size isn't large enough to deduce anything. Probably need to repeat it with at least 10,000 subjects to begin to see if there's any sort of effect.
 

Mechanik

Active Member
On the face of it, this looks like bullshit
My favorite quote so far, @Mick West. I want to post a longer note later with full references, but let me add a few things first. I have so many browser tabs open right now, I'm afraid I'm going to lose focus. With regards to the endocardial Schwannoma column only:

1. The only statistically significant finding in the report was an increase in endocardial Schwannoma in male rats at the highest exposure (3.2.1).
2. Sprague-Dawley rats are genetically prone to cancer, and this is more pronounced in females than males, even though more males get malignant Schwannoma. In fact, one important paper on cancer from glyphosate was withdrawn, in part, due to its use of Sprague-Dawley rats.
3. Also in 3.2.1 they mention that 20 years of male control rats have exhibited a 0.6% rate of Schwannoma. From the chart @Mick West pasted, malignant Schwannoma was 1%. If the background rate is 0.6%, then is the 0.4% difference statistically significant? I think not, at least not at the 95% confidence level.
4. They do not report error bars on their results. A 1% incidence rate in a small population could generate error bars greater than the result. The paper does not say this, but #3 above indicates that the result of 1% should be plus or minus some number (maybe 0.6, but my statistics are weak) which, again, may render the results meaningless. They're literally basing this paper on one rat.

Between the items listed, it seems like bullshit is the appropriate call. Don't get me started on the article in the original link from @frankywashere.
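Point 3 can be put in numbers. If the historical background rate really is 0.6%, the chance of 3 or more schwannomas turning up in a group of 207 males by luck alone works out to roughly 13%. (This is my own back-of-envelope binomial check, using the 3/207 group size from the table @Mick West posted, not anything from the paper itself.)

```javascript
// Chance of k or more cases in n rats at background rate p, using
// the binomial recurrence P(i+1) = P(i) * (n-i)/(i+1) * p/q.
function probAtLeast(k, n, p) {
    var q = 1 - p;
    var pmf = Math.pow(q, n); // P(X = 0)
    var below = 0;
    for (var i = 0; i < k; i++) {
        below += pmf;                         // accumulate P(X = i)
        pmf *= ((n - i) / (i + 1)) * (p / q); // step to P(X = i+1)
    }
    return 1 - below;
}

// 3+ schwannomas in 207 male rats at the 0.6% historical control rate:
console.log(probAtLeast(3, 207, 0.006).toFixed(2)); // ~0.13
```

So judged against the 20-year historical controls, even the "significant" group is unremarkable.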
 

Mick West

Administrator
Staff member
If you have 0 Schwannoma in 412 rats, then you'd be 95% confident that the probability is lower than 0.89% (1.28% for 99% confidence). If you have a probability of 1.4% to see a Schwannoma, then the probability that you will see none in 412 samples is 0.3%. This means that the chance that the same basic probability of Schwannoma incidence underlies both groups is lower than 5%.

If you have 4 in 405, the 95% interval goes from 0.27% to 2.51%, which includes the observed 2.2%, so that is not significant at the p <= 0.05 level. If the base probability is 2.2%, the probability that you have 4 in 410 (as in the control) or less is 5.25%, which is not low enough. The basic probability here is more likely to be the same, at a likelihood of over 5%.

Just going by the male rats, there are 1229 of them and 7 schwannomas, or 0.57%. If we say that 0.57% is the probability of an individual rat getting a schwannoma in all cases, then we can run a simulation of that study. Doing 100k trials will smooth out any variance. Here's some code:
Code:
// given a number of rats, and a probability percentage, return the number of rats that pass that probability for a single random incident
// e.g. pass in 1000 and 1% and you will get about 10 back, but maybe more
function someRats(N, probability) {
    var pass = 0;
    for (var i=0; i<N; i++) {
        if (Math.random()*100 < probability) pass++;
    }
    return pass;
}

var trials = 100000   // number of times we run the sim. 100k is fine
var Ns = [412,401,209,207] // number of rats in each individual group

var incidence = [0.57,0.57,0.57,0.57] // optional different incidence for each group (default 0.57)
var cut = 3;

var above = new Array(); // counter for 3 or above
var count_0_3 = 0;
var groups = Ns.length;
var sum = new Array(groups)
for (var i = 0; i < groups; i++) {
    sum[i] = new Array()
    above[i] = 0;
}
var highest = 0;
var hits = new Array(groups); // stores one run of hits for each group
for (var i = 0; i < trials; i++) {
    for (var g = 0; g < groups; g++) {
        hits[g] = someRats(Ns[g], incidence[g])
        if (sum[g][hits[g]] == undefined) sum[g][hits[g]] = 0;
        sum[g][hits[g]]++;
        if (hits[g] > highest) highest = hits[g];
        if (hits[g] >= cut) {
            above[g]++;
        }
    }
    if (hits[0] == 0 && hits[groups - 1] >= cut) count_0_3++;
}

console.log("" + trials + " trials")

for (var g = 0; g < groups; g++) {
    for (var i = 0; i < highest; i++) {
        if (sum[g][i] == undefined) sum[g][i] = 0;
    }
    console.log("Group " + g + ": (N=" + Ns[g] + ", incidence= "+incidence[g]+"%) " + (100 * sum[g][0] / trials).toFixed(2) + "% of trials had 0 hits, " + (100 * above[g] / trials).toFixed(2) + "% had >= "+cut+" hits")
}
console.log("percentage of trials where first group had 0 and last group had "+cut+" or more = " + (100 * count_0_3 / trials).toFixed(2))
You can run this in a browser console.


Results:
Code:
Group 0: (N=412, incidence= 0.57%) 9.56% of trials had 0 hits, 41.79% had >= 3 hits
Group 1: (N=401, incidence= 0.57%) 10.08% of trials had 0 hits, 40.14% had >= 3 hits
Group 2: (N=209, incidence= 0.57%) 30.47% of trials had 0 hits, 11.84% had >= 3 hits
Group 3: (N=207, incidence= 0.57%) 30.73% of trials had 0 hits, 11.54% had >= 3 hits
percentage of trials where first group had 0 and last group had 3 or more = 1.11

(You can copy this code into something like a https://playcode.io/ window if you want to play with it)

So about 10% of group 1 is expected to have 0 hits, and about 10% of group 4 is expected to have >=3 hits, so combined that's a rate of about 1% for both things to happen by chance. I'm pretty sure it's not quite as simple as that though.
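The simulation can be cross-checked in closed form: with p = 0.57%, P(0 hits in 412) and P(3+ hits in 207) come straight from the binomial formula, and their product is the combined chance (still assuming, as the simulation does, that the groups are independent and every rat carries the same risk):

```javascript
// Exact binomial version of the simulated numbers above, p = 0.57%.
var p = 0.0057, q = 1 - p;

function probNone(n) { return Math.pow(q, n); }

function probAtLeast3(n) {
    var p0 = Math.pow(q, n);
    var p1 = n * p * Math.pow(q, n - 1);
    var p2 = (n * (n - 1) / 2) * p * p * Math.pow(q, n - 2);
    return 1 - p0 - p1 - p2;
}

console.log(probNone(412).toFixed(4));     // 0.0949 - matches the ~9.56% from the sim
console.log(probAtLeast3(207).toFixed(4)); // 0.1157 - matches the ~11.54%
console.log((probNone(412) * probAtLeast3(207)).toFixed(4)); // 0.0110 - the ~1.11%
```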
 

Mendel

Senior Member.
So about 10% of group 1 is expected to have 0 hits, and about 10% of group 4 is expected to have >=3 hits, so combined that's a rate of about 1% for both things to happen by chance. I'm pretty sure it's not quite as simple as that though.
That's approximately it. (Did you run it for the female rats by comparison?)
The thing is, the statistical test that they used is agnostic of the long-term probability. The researchers probably input that table into a statistical package and had it compute the significance. (I'm not good enough at statistics to understand the Fisher exact test.) If you make P(cancer)=0.57% your null hypothesis, then both values are well within the 95% confidence margin, and the result is no longer significant (but then the control group doesn't really matter, except it does, because we know that there wasn't a confounding factor in the setup that caused all rats to develop cancer).

The xkcd argument (with the gummi bears) is that they examined two types of cancer, brain and heart (schwannoma), so given that they segmented the set into 6 groups and the cancer types into 3 groups each, and THEN also computed totals for the schwannomas, that gives them 24 spots in the tables to see a significant result. With a 95% confidence level, we'd expect to see a "significant" number at least once in these tables simply by random chance. This is somewhat oversimplified, since not all the numbers are independent of each other, but you get the idea.
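For what it's worth, the one-sided Fisher exact test is simple enough to sketch for the headline comparison (3/207 exposed males vs 0/412 controls). This is my own implementation under those assumed group sizes, so treat the exact value with caution:

```javascript
// One-sided Fisher exact test for a 2x2 table [[a, b], [c, d]]:
// p = chance of 'a' or more successes in the first group under the
// hypergeometric null hypothesis.
function logFactorial(n) {
    var s = 0;
    for (var i = 2; i <= n; i++) s += Math.log(i);
    return s;
}

// P(X = k): k successes among n1 draws, K total successes in n1+n2
function hypergeomPmf(k, n1, n2, K) {
    return Math.exp(
        logFactorial(n1) - logFactorial(k) - logFactorial(n1 - k) +
        logFactorial(n2) - logFactorial(K - k) - logFactorial(n2 - (K - k)) -
        (logFactorial(n1 + n2) - logFactorial(K) - logFactorial(n1 + n2 - K))
    );
}

function fisherOneSided(a, b, c, d) {
    var n1 = a + b, n2 = c + d, K = a + c;
    var p = 0;
    for (var k = a; k <= Math.min(n1, K); k++) p += hypergeomPmf(k, n1, n2, K);
    return p;
}

// 3 schwannomas in 207 exposed males vs 0 in 412 controls:
console.log(fisherOneSided(3, 204, 0, 412).toFixed(3)); // ~0.037
```

That comes out around p = 0.037, which is presumably why the paper could report the male 50 V/m group as significant; the multiple-comparisons point above is exactly why that shouldn't impress anyone on its own.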
 

Rory

Senior Member.
Fancy language aside: through both my paid work and some of the research projects I've done for hobbies, I've learned to be incredibly wary of the words "statistically significant" and of anyone trying to prove anything by typing "p<0.05". I've seen too many PhD papers, respectably presented and on the face of it convincing, that were simultaneously full of these things and completely flawed.

For me, sample size is everything. 10,000 rats would be the bare minimum. Ten or a hundred times that and then there might be something to take notice of.
 

Mendel

Senior Member.
For me, sample size is everything. 10,000 rats would be the bare minimum. Ten or a hundred times that and then there might be something to take notice of.
Sample size depends entirely on the size of the effect you're trying to prove. You can get good data from sample sizes as low as 30, if +/- 10% is sufficient accuracy for you. I expect with 15 dead rats, N=200 would have been quite sufficient to prove a "we're all going to die" scenario. But the smaller the effect you want to pin down, the greater your required accuracy, and the greater your sample size must be. Unfortunately, this scales quadratically: for 10 times the accuracy you need 100 times the rats.
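The square-root relationship is easy to see from the standard error of a proportion, sqrt(p*(1-p)/n). A sketch with a hypothetical 1% incidence rate (my example numbers, not the study's):

```javascript
// Standard error of an observed proportion p from n samples:
// accuracy improves only with the square root of n.
function stdError(p, n) {
    return Math.sqrt(p * (1 - p) / n);
}

var p = 0.01; // hypothetical 1% incidence
console.log(stdError(p, 200).toFixed(4));   // 0.0070 - error bar comparable to the effect itself
console.log(stdError(p, 20000).toFixed(4)); // 0.0007 - 100x the sample, 10x the precision
```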
 

Rory

Senior Member.
Interesting. What would be an example of a study that would return good data with a sample size of only 30?
 

Mendel

Senior Member.
Interesting. What would be an example of a study that would return good data with a sample size of only 30?
Every study returns good data with a sample size of 30. The idea is that if we observe a mean, we want to compute some kind of confidence that we found the correct mean; and if the sample size is at least 30, the means that we randomly get by running experiments follow a distribution that is approximately normal, even if the data we are looking at is not normally distributed.
So by having at least 30 samples, we can more easily compute an error bar and a confidence that the mean we found is close to the real mean, which is one definition of "good data". If you get a result but have no clue what its error margin is, you're looking at bad data or anecdotal evidence.
http://www.wormbook.org/chapters/www_statisticalanalysis/statisticalanalysis.html

There's a lot of handwaving and "judgment" involved here, of course, and 30 isn't always a suitable number.
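A quick way to see the n >= 30 idea in action: draw samples from a heavily skewed distribution (exponential, true mean 1), and the means of 30-observation samples still land symmetrically around 1. A toy simulation:

```javascript
// Means of n=30 samples from a skewed (exponential) distribution
// still cluster approximately normally around the true mean of 1.
function sampleMean(n) {
    var sum = 0;
    for (var i = 0; i < n; i++) sum += -Math.log(1 - Math.random()); // exponential(1) draw
    return sum / n;
}

var means = [];
for (var i = 0; i < 10000; i++) means.push(sampleMean(30));

var grandMean = means.reduce(function (a, b) { return a + b; }, 0) / means.length;
console.log(grandMean.toFixed(2)); // ~1.00, with a roughly normal spread
```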

Here is an example of a study with n=16:
"Outbreak of COVID-19 in Germany Resulting from a Single Travel-Associated Primary Case" https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3551335
The knowledge that Covid-19 is infectious even before the onset of symptoms has informed German contact tracing ever since February.
 

Rory

Senior Member.
I guess that's a different type of study, though - it's looking at behaviour in a very specific case, and answering the question "can x happen?" I think the type of study we're talking about here is one that seeks "statistical significance".
 

Mendel

Senior Member.
I guess that's a different type of study, though - it's looking at behaviour in a very specific case, and answering the question "can x happen?" I think the type of study we're talking about here is one that seeks "statistical significance".
Yes. And to determine how significant your result is, you need to be able to model the probability distribution that underlies it. Mick made a simulation that assumes all male rats have basically the same chance to get cancer. What if that is not true? Is our average chance still correct then? If you have looked up my first quote in the book you linked, you'll have seen the graphs right below it, where a skewed distribution is shown, and the averages look skewed as well. In that case, any idea we might have of how accurate the observed average is goes out the window.

But if you have 30 rats or more, you can usually assume that your average is going to behave much the same as if every rat had the same chance to get cancer, and the discussion we've been having becomes possible. Only then can we say, well, if the true chance is 0.57%, then there's an 11.49% chance that we see 3 or more sick male rats in that group, because that's a statement on how samples behave in normal distributions. You can then decide if that chance is small enough for you to attach significance to it, and it kinda tells you how high your chance is that it isn't, based on that set of data.

Now, by convention, in most sciences we'll say "significant" means 5% chance or less, and that makes results easy to compare and combats sensationalism. But what we really would want to know is not a binary yes/no, but the actual probability. If the chance to be wrong is 4.9% that's worse odds than if it was 0.3%, right? Significance is a choice that we assign to a specific level of confidence that we want to have in the result, and we can only do that if we can compute how probable it is that we got that result by random chance. And for that, we usually want the distribution to be like a normal distribution, so 30+ samples or rats or whatever.

If you have that, your study data (usually) tells you how accurate your result is.
And then the other thing I said earlier comes into play: if you need a lot of accuracy because you want to show a very small effect, you have to use a large sample size; but if you're ok with getting an approximate idea, a small sample is enough. The study I cited computed the average incubation length and the average serial interval, and because there were only 16 samples, the error was probably quite big; but since all they probably wanted was to check whether that lines up with the data we had from China, they did it anyway.

Long story short: to be able to assess the error inherent in the data, we need a minimal sample size. If we can do that, if we can talk about the error, then we have scientific data that can be significant if we judge it so; but if we can't, we only have anecdotes.
 

Rory

Senior Member.
Well, we shall have to agree to disagree on that. But, for me, after doing the "birth rate vs lunar cycle" and "menstrual cycle vs lunar cycle" projects - both of which featured university studies with sample sizes in the hundreds that claimed all sorts of "statistical significance" - all of which fell apart on closer investigation - I don't think I can be convinced on this one. :)
 

Mendel

Senior Member.
Well, yeah, just having numbers isn't enough. There's no magic in them. ;-)
If you're good at sociology but bad at statistics, you miss things.
 

Mechanik

Active Member
Finally found one of the references I was looking for:

This is in contrast to the standard 50 rats described elsewhere in the linked paper.

Link is here. Hope this works.

My point is that SD rats don't live as long as other strains, and the SD rats' predisposition to cancer sort of makes you question the use of SD rats at all. Granted, they used quite a few more than 65 rats/sex/group, but the researchers run the risk of producing null results if the observed incidence is low, as in this case where only one result was significant.
 

themaxednoob

New Member
Another reason not to focus simply on the frequencies themselves but rather on the other factors (field strength and levels of exposure). Many people look at this and see "OMG, 1.8 GHz is harmful because of this study" but fail to look at the exposure levels and power involved.
 