Ramazzini Study on RF effects in Rats

frankywashere

New Member
Friend of mine was trying to say ridiculous things like "5G causes cancer" and cited this article as proof:

Ramazzini Study on Radiofrequency Cell Phone Radiation: The World’s Largest Animal Study on Cell Tower Radiation Confirms Cancer Link
https://www.globalresearch.ca/ramaz...-tower-radiation-confirms-cancer-link/5695275


Does anyone have any information that debunks this? I remember reading something about this study exposing the rats at very close range, in a way that no human would ever be exposed. Can't remember though.

Frank
 

Hi Frank,

Thanks for the link, I wasn't aware of this paper. I need more time to read through it myself, but I had a quick look around and found a fairly succinct rebuttal of this study:

https://betweenrockandhardplace.wor...-not-increase-risk-for-schwannoma-and-glioma/

The author mainly focuses on the differences between the Italian study and the NTP study I mentioned above, pointing out that if there was a genuine response at the very low doses given in the Italian study, the NTP study should have shown MUCH higher cancer rates than it did.
 

You’re right @frankywashere, the exposure levels are ridiculous.

I tried to read the study, and found the first page at this link. The summary says that they exposed the rats at 50 V/m to a 1.8 GHz signal, which is essentially the same frequency as a wireless home phone or an older WiFi unit. The key question is: how powerful is 50 V/m?

I couldn't find a handy guide to comparative RF field strengths, but the following article, hosted by the National Institutes of Health, covers a field survey of RF strengths measured in different indoor locations. They say:
The highest maximum mean levels of the exposure considering the whole RF-EMF frequency band was found in offices (1.14 V/m) and in public transports (0.97 V/m), while the lowest levels of exposure were observed in homes and apartments, with mean values in the range 0.13–0.43 V/m.
Content from External Source
Link: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6466609/

So the study exposed rats to roughly 44 to 385 times the levels found in that survey, and those survey figures cover the whole RF band, not just 1.8 GHz. The reason I looked at indoor studies is that they exposed the rats for 19 hours a day, and the only place you'd get that kind of exposure is at home, and even that's debatable.
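
As a rough sanity check on those numbers (my own back-of-envelope, not from the study): for a far-field plane wave, power density scales with the square of the E-field (S = E²/Z₀, with Z₀ ≈ 377 Ω), so the gap is even bigger in power terms than in V/m:
Code:
// Convert an RMS E-field (V/m) to plane-wave power density (W/m^2): S = E^2 / Z0
function powerDensity(eFieldVperM) {
    var Z0 = 377; // approximate impedance of free space, in ohms
    return eFieldVperM * eFieldVperM / Z0;
}

var study = 50;    // V/m, highest exposure group in the Ramazzini study
var office = 1.14; // V/m, highest indoor mean from the survey quoted above
var home = 0.13;   // V/m, lowest home mean from the same survey

console.log("study: " + powerDensity(study).toFixed(2) + " W/m^2");                 // ~6.63
console.log("office: " + (1000 * powerDensity(office)).toFixed(2) + " mW/m^2");     // ~3.45
console.log("field strength ratio: " + (study / office).toFixed(0) + " to " + (study / home).toFixed(0)); // ~44 to ~385
console.log("power density ratio (vs office): " + (powerDensity(study) / powerDensity(office)).toFixed(0)); // ~1924
So roughly 44 times the field strength of the busiest office reading works out to roughly 1,900 times the power density.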
 
I tried to read the study, and found the first page at this link. The summary says that they exposed the rats at 50 V/m to a 1.8 GHz signal, which is essentially the same frequency as a wireless home phone or an older WiFi unit. The key question is: how powerful is 50 V/m?

This is kind of related to what I was looking at in the "Measuring 5G EMF" thread: https://www.metabunk.org/threads/measuring-5g-emf-and-using-icnirp-guidelines.11178/



The "Incident E-field" measurement of V m-1 (V/m) is the measurement in the rat study. at 1.8 Ghz (1800 Mhz) the reference level is 1.375*sqrt(1800) or 58 V/m. So 50V/m is slightly less than this limit (the occupation limit, public limits are much lower).

But V/m isn't "power"; power density would be the last column, which is the E-field multiplied by the H-field. As seen in note 5 of that ICNIRP table, the limits require all three of the values to exceed the reference values if they are in the "far field", but the definition of the "fields" is confusing. See the 2020 standard:

Different reference level application rules have been set for exposure in the far-field, radiative near-field and reactive near-field zones. The intention of ICNIRP's distinction between these zones is to provide assurance that the reference levels are generally more conservative than the basic restrictions. In so far as the distinction between the zones is concerned, the principle (but not only) determinant of this is the degree to which a field approximates plane wave conditions. A difficulty with this approach is that other factors may also affect the adequacy of estimating reference level quantities from basic restriction quantities. These include the EMF frequency, physical dimensions of the EMF source and its distance from the resultant external EMFs assessed, as well as the degree to which the EMFs vary over the space to be occupied by a person. Taking into account such sources of uncertainty, the guidelines have more conservative rules for exposure in the reactive and radiative near-field than far-field zone. This makes it difficult to specify whether, for the purpose of compliance, an exposure should be considered reactive near-field, radiative near-field or far-field without consideration of a range of factors that cannot be easily specified in advance. As a rough guide, distances > 2D²/λ (m), between λ/(2π) and 2D²/λ (m), and < λ/(2π) (m) from an antenna correspond approximately to the far-field, radiative near-field and reactive near-field, respectively, where D and λ refer to the longest dimension of the antenna and wavelength, respectively, in meters.
Content from External Source
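
To put rough numbers on those zone boundaries (my own back-of-envelope; the 0.3 m antenna dimension is just an assumed example, not something from the study or the guidelines):
Code:
// Rough field-zone boundaries from the ICNIRP "rough guide" quoted above,
// for a 1.8 GHz source. D = 0.3 m is an assumed antenna dimension for illustration.
var c = 299792458;   // speed of light, m/s
var f = 1.8e9;       // frequency, Hz
var D = 0.3;         // longest dimension of the antenna, m (assumption)

var lambda = c / f;                             // wavelength, ~0.167 m
var reactiveNearField = lambda / (2 * Math.PI); // within ~0.027 m of the antenna
var farField = 2 * D * D / lambda;              // beyond ~1.08 m

console.log("wavelength: " + lambda.toFixed(3) + " m");
console.log("reactive near-field: closer than " + reactiveNearField.toFixed(3) + " m");
console.log("far-field: beyond roughly " + farField.toFixed(2) + " m");

// And the reference-level formula used above:
console.log("1.375*sqrt(1800) = " + (1.375 * Math.sqrt(1800)).toFixed(1) + " V/m"); // ~58.3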
 
I tried to read the study, and found the first page at this link.
Attached: https://www.metabunk.org/attachments/falcioni2018-pdf.40227/
[Screenshots of the schwannoma incidence tables from the attached paper]

On the face of it, this looks like bullshit, but maybe someone can explain the statistics. There were 3/207 (1.4%) schwannomas in male rats at 50 V/m, but 0 in the control. However there were 9/410 (2.2%) in female rats at 5 V/m, vs 4/817 (1.0%) in the control, but that's not statistically significant?

And the 25 V/m figures are "safer" than the 5 V/m figures, and essentially the same as the control.

The Schwannoma incidence seems essentially random to me. What am I missing?
 

Attachments

  • falcioni2018.pdf (959.1 KB)
On the face of it, this looks like bullshit, but maybe someone can explain the statistics. There were 3/207 (1.4%) schwannomas in male rats at 50 V/m, but 0 in the control. However there were 9/410 (2.2%) in female rats at 5 V/m, vs 4/817 (1.0%) in the control, but that's not statistically significant?
If you have 0 Schwannoma in 412 rats, then you'd be 95% confident that the probability is lower than 0.89% (1.28% for 99% confidence). If you have a probability of 1.4% to see a Schwannoma, then the probability that you will see none in 412 samples is 0.3%. This means that the chance that the same basic probability of Schwannoma incidence underlies both groups is lower than 5%.

If you have 4 in 405, the 95% interval goes from 0.27% to 2.51%, which includes the observed 2.2%, so that is not significant at the p <= 0.05 level. If the base probability is 2.2%, the probability that you have 4 in 410 (as in the control) or less is 5.25%, which is not low enough. The basic probability here is more likely to be the same, at a likelihood of over 5%.
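
For anyone who wants to check those figures, here's a quick direct binomial calculation (my own sketch, plugging in the group sizes and percentages quoted above):
Code:
// Probability of zero schwannomas among n rats if each rat has probability p
function probZero(n, p) {
    return Math.pow(1 - p, n);
}

// Cumulative binomial probability P(X <= k) for n trials at probability p
function binomCdf(k, n, p) {
    var term = Math.pow(1 - p, n); // P(X = 0)
    var cdf = term;
    for (var x = 1; x <= k; x++) {
        term *= ((n - x + 1) / x) * (p / (1 - p)); // step from P(X = x-1) to P(X = x)
        cdf += term;
    }
    return cdf;
}

// Zero cases in the 412 control males if the true rate were 1.4%:
console.log((100 * probZero(412, 0.014)).toFixed(2) + "%");    // ~0.30%

// Four or fewer cases in 410 rats if the true rate were 2.2%:
console.log((100 * binomCdf(4, 410, 0.022)).toFixed(2) + "%"); // ~5.2%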
 
Now, the question is, how likely is it to see a 95% significant result if you run 6 tests? (Or if you run 20 tests and publish only 6?) That's where reproducibility comes in.

https://xkcd.com/882/
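
The rough arithmetic behind that point (my own sketch, treating the tests as independent, which they aren't quite): with a 5% false-positive rate per test, the chance of at least one "significant" result grows quickly with the number of tests.
Code:
// Chance of at least one false positive in k independent tests at the 5% level
function atLeastOneFalsePositive(k, alpha) {
    return 1 - Math.pow(1 - alpha, k);
}

console.log((100 * atLeastOneFalsePositive(6, 0.05)).toFixed(0) + "%");  // ~26%
console.log((100 * atLeastOneFalsePositive(20, 0.05)).toFixed(0) + "%"); // ~64%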
 
The Schwannoma incidence seems essentially random to me.

I would say the same: the incidences among the different types of Schwannoma are very randomly scattered too. Sample size isn't large enough to deduce anything. Probably need to repeat it with at least 10,000 subjects to begin to see if there's any sort of effect.
 
On the face of it, this looks like bullshit
My favorite quote so far, @Mick West. I want to post a longer note later with full references, but let me add a few things first. I have so many browser tabs open right now, I'm afraid I'm going to lose focus. With regards to the endocardial Schwannoma column only:

1. The only statistically significant finding in the report was an increase in endocardial Schwannoma in male rats at the highest exposure (section 3.2.1).
2. Sprague-Dawley rats are genetically prone to cancer, and this tendency is greater in females than males, even though more males get malignant Schwannoma. In fact, one important paper on cancer from glyphosate was withdrawn, in part, due to the use of Sprague-Dawley rats.
3. Also in 3.2.1 they mention that 20 years of male control rats have exhibited a 0.6% rate of Schwannoma. From the chart @Mick West pasted, malignant Schwannoma was 1%. If the background rate is 0.6%, is the 0.4% difference statistically significant? I think not, at least not at the 95% confidence level (see the quick check after this list).
4. They do not report error bars on their results. A 1% incidence rate in a small population could generate error bars greater than the result itself. The paper does not say this, but #3 above suggests that the 1% result should be plus or minus some number (maybe 0.6, but my statistics are weak), which may render the result meaningless. They're literally basing this paper on one rat.
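
A quick check on #3 (my own calculation, taking the 3 cases in 207 rats from the table discussed above against the 0.6% historical background rate):
Code:
// If the background schwannoma rate really is 0.6%, how often would a group of
// 207 male rats show 3 or more cases purely by chance?
function binomCdf(k, n, p) {
    var term = Math.pow(1 - p, n); // P(X = 0)
    var cdf = term;
    for (var x = 1; x <= k; x++) {
        term *= ((n - x + 1) / x) * (p / (1 - p));
        cdf += term;
    }
    return cdf;
}

var pAtLeast3 = 1 - binomCdf(2, 207, 0.006);
console.log((100 * pAtLeast3).toFixed(1) + "%"); // ~12.9%, nowhere near p < 0.05
Against the historical background rate, at least, 3 cases in a group that size would not be a surprising outcome.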

Between the items listed above, it seems like bullshit is the appropriate call. Don't get me started on the article in the original link from @frankywashere.
 
If you have 0 Schwannoma in 412 rats, then you'd be 95% confident that the probability is lower than 0.89% (1.28% for 99% confidence). If you have a probability of 1.4% to see a Schwannoma, then the probability that you will see none in 412 samples is 0.3%. This means that the chance that the same basic probability of Schwannoma incidence underlies both groups is lower than 5%.

If you have 4 in 405, the 95% interval goes from 0.27% to 2.51%, which includes the observed 2.2%, so that is not significant at the p <= 0.05 level. If the base probability is 2.2%, the probability that you have 4 in 410 (as in the control) or less is 5.25%, which is not low enough. The basic probability here is more likely to be the same, at a likelihood of over 5%.

Just going by the male rats, there are 1,229 of them and 7 schwannomas, or 0.57%. If we say that 0.57% is the probability of any individual rat getting a schwannoma, then we can run a simulation of the study. Doing 100k trials will smooth out any variance. Here's some code:
Code:
// given a number of rats and a probability percentage, return the number of rats that pass that probability for a single random incident
// e.g. pass in 1000 and 1% and you will get about 10 back, but maybe more
function someRats(N, probability) {
    var pass = 0;
    for (var i=0; i<N; i++) {
        if (Math.random()*100 < probability) pass++;
    }
    return pass;
}

var trials = 100000   // number of times we run the sim. 100k is fine
var Ns = [412,401,209,207] // number of rats in each individual group

var incidence = [0.57,0.57,0.57,0.57] // optional different incidence for each group (default 0.57)
var cut = 3;

var above = new Array(); // counter for 3 or above
var count_0_3 = 0;
var groups = Ns.length;
var sum = new Array(groups)
for (var i = 0; i < groups; i++) {
    sum[i] = new Array()
    above[i] = 0;
}
var highest = 0;
var hits = new Array(groups); // stores one run of hits for each group
for (var i = 0; i < trials; i++) {
    for (var g = 0; g < groups; g++) {
        hits[g] = someRats(Ns[g], incidence[g])
        if (sum[g][hits[g]] == undefined) sum[g][hits[g]] = 0;
        sum[g][hits[g]]++;
        if (hits[g] > highest) highest = hits[g];
        if (hits[g] >= cut) {
            above[g]++;
        }
    }
    if (hits[0] == 0 && hits[groups - 1] >= cut) count_0_3++;
}

console.log("" + trials + " trials")

for (var g = 0; g < groups; g++) {
    for (var i = 0; i < highest; i++) {
        if (sum[g][i] == undefined) sum[g][i] = 0;
    }
    console.log("Group " + g + ": (N=" + Ns[g] + ", incidence= "+incidence[g]+"%) " + (100 * sum[g][0] / trials).toFixed(2) + "% of trials had 0 hits, " + (100 * above[g] / trials).toFixed(2) + "% had >= "+cut+" hits")
}
console.log("percentage of trials where first group had 0 and last group had "+cut+" or more = " + (100 * count_0_3 / trials).toFixed(2))


Results:
Code:
Group 0: (N=412, incidence= 0.57%) 9.56% of trials had 0 hits, 41.79% had >= 3 hits
Group 1: (N=401, incidence= 0.57%) 10.08% of trials had 0 hits, 40.14% had >= 3 hits
Group 2: (N=209, incidence= 0.57%) 30.47% of trials had 0 hits, 11.84% had >= 3 hits
Group 3: (N=207, incidence= 0.57%) 30.73% of trials had 0 hits, 11.54% had >= 3 hits
percentage of trials where first group had 0 and last group had 3 or more = 1.11

(You can copy this code into something like a https://playcode.io/ window if you want to play with it)

So about 10% of trials have the first group (the control, N=412) with 0 hits, and about 11% have the last group (N=207, the 50 V/m males) with >= 3 hits, so combined that's a rate of about 1% for both things to happen by chance. I'm pretty sure it's not quite as simple as that though.
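
As a cross-check (my own sketch reusing the same 0.57% pooled incidence), the same ~1% figure drops straight out of the binomial probabilities without running a simulation:
Code:
// Analytic version of the simulation above, using the same 0.57% incidence:
// P(0 cases in the 412-rat first group) * P(3 or more cases in the 207-rat last group)
function binomPmf(x, n, p) {
    var term = Math.pow(1 - p, n); // P(X = 0)
    for (var i = 1; i <= x; i++) {
        term *= ((n - i + 1) / i) * (p / (1 - p));
    }
    return term;
}

var p = 0.0057;
var pZeroFirst = binomPmf(0, 412, p);                                                       // ~9.5%
var pThreePlusLast = 1 - (binomPmf(0, 207, p) + binomPmf(1, 207, p) + binomPmf(2, 207, p)); // ~11.6%

console.log((100 * pZeroFirst * pThreePlusLast).toFixed(2) + "%"); // ~1.10%, matching the simulated 1.11
Of course, whether multiplying those two probabilities is the right test is exactly the "not quite as simple" part.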
 
So about 10% of trials have the first group (the control, N=412) with 0 hits, and about 11% have the last group (N=207, the 50 V/m males) with >= 3 hits, so combined that's a rate of about 1% for both things to happen by chance. I'm pretty sure it's not quite as simple as that though.
That's approximately it. (Did you run it for the female rats by comparison?)
The thing is, the statistical test that they used is agnostic of the long-term probability. The researchers probably put that table into a statistical package and had it compute the significance. (I'm not good enough at statistics to understand the Fisher test.) If you make P(cancer) = 0.57% your null hypothesis, then both values are well within the 95% confidence margin, and the result is no longer significant. (But then the control group doesn't really matter, except that it does tell us there wasn't a confounding factor in the setup that caused all the rats to develop cancer.)

The xkcd argument (with the gummi bears) is that they examined two types of cancer, brain and heart (schwannoma). Given that they segmented the set into 6 groups and the cancer types into 3 groups each, and THEN also computed totals for the schwannomas, that gives them 24 spots in the tables to see a significant result. So with a 95% confidence value, we'd expect to see a "significant" number at least once in these tables simply by random chance. This is somewhat oversimplified since not all the numbers are independent of each other, but you get the idea.
 
Fancy language aside, through both my paid work and some of the research projects I've done as hobbies, I've learned to be incredibly wary of the words "statistically significant" and of anyone trying to prove anything by typing "p<0.05". I've seen too many PhD papers, respectably presented and on the face of it convincing, that were simultaneously full of these things and completely flawed.

For me, sample size is everything. 10,000 rats would be the bare minimum. Ten or a hundred times that and then there might be something to take notice of.
 
For me, sample size is everything. 10,000 rats would be the bare minimum. Ten or a hundred times that and then there might be something to take notice of.
Sample size depends entirely on the size of the effect you're trying to show. You can get good data from sample sizes as low as 30, if +/- 10% is sufficient accuracy for you. I expect that with 15 dead rats, N=200 would have been quite sufficient to prove a "we're all going to die" scenario. But the smaller the effect you want to pin down, the greater your required accuracy, and the greater your sample size must be. Unfortunately, if I remember correctly, this scales quadratically, so for 10 times the accuracy you'd need 100 times the rats.
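
A rough illustration of that scaling (my own numbers, using the standard 95% margin-of-error formula for an estimated proportion, with the worst case p = 0.5):
Code:
// Approximate 95% margin of error for an estimated proportion (worst case p = 0.5)
function marginOfError(n) {
    var p = 0.5;
    return 1.96 * Math.sqrt(p * (1 - p) / n);
}

[100, 10000, 1000000].forEach(function (n) {
    console.log("n = " + n + " -> about +/- " + (100 * marginOfError(n)).toFixed(2) + " percentage points");
});
// n = 100     -> about +/- 9.80
// n = 10000   -> about +/- 0.98
// n = 1000000 -> about +/- 0.10
Each 10-fold improvement in the margin costs roughly a 100-fold increase in sample size.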
 
Interesting. What would be an example of a study that would return good data with a sample size of only 30?
 
Interesting. What would be an example of a study that would return good data with a sample size of only 30?
Every study returns good data with a sample size of 30. The idea is that if we observe a mean, we want to compute some kind of confidence that we found the correct mean, and if the sample size is at least 30, the means we get by repeatedly running the experiment follow a distribution that is approximately normal, even if the data we are looking at is not normally distributed.
So by having at least 30 samples, we can more easily compute an error bar and a confidence that the mean we found is close to the real mean, which is one definition of "good data". If you get a result but have no clue what its error margin is, you're looking at bad data or anecdotal evidence.
As it happens, this lack of normality in the distribution of the populations from which we derive our samples does not often pose a problem. The reason is that the distribution of sample means, as well as the distribution of differences between two independent sample means (along with many other conventionally used statistics), is often normal enough for the statistics to still be valid. The reason is the Central Limit Theorem, a "statistical law of gravity", that states (in its simplest form) that the distribution of a sample mean will be approximately normal providing the sample size is sufficiently large. How large is large enough? That depends on the distribution of the data values in the population from which the sample came. The more non-normal it is (usually, that means the more skewed), the larger the sample size requirement. Assessing this is a matter of judgment. Figure 7 was derived using a computational sampling approach to illustrate the effect of sample size on the distribution of the sample mean. In this case, the sample was derived from a population that is sharply skewed right, a common feature of many biological systems where negative values are not encountered (Figure 7A). As can be seen, with a sample size of only 15 (Figure 7B), the distribution of the mean is still skewed right, although much less so than the original population. By the time we have sample sizes of 30 or 60 (Figure 7C, D), however, the distribution of the mean is indeed very close to being symmetrical (i.e., normal).
Content from External Source
http://www.wormbook.org/chapters/www_statisticalanalysis/statisticalanalysis.html

There's a lot of handwaving and "judgment" involved here, of course, and 30 isn't always a suitable number.
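
To see the effect described in the quote without the book's figure (a sketch of the same idea, drawing samples from an assumed right-skewed exponential population rather than the book's data):
Code:
// Sample means drawn from a strongly right-skewed (exponential, mean 1) population.
// With n around 30-60 the middle 95% of the sample means is already fairly
// symmetric around 1; with n = 5 it is noticeably lopsided towards higher values.
function exponentialSample() {
    return -Math.log(1 - Math.random()); // exponential with mean 1
}

function sampleMean(n) {
    var sum = 0;
    for (var i = 0; i < n; i++) sum += exponentialSample();
    return sum / n;
}

function describeMeans(n) {
    var reps = 10000;
    var means = [];
    for (var i = 0; i < reps; i++) means.push(sampleMean(n));
    means.sort(function (a, b) { return a - b; });
    var lo = means[Math.floor(reps * 0.025)];
    var hi = means[Math.floor(reps * 0.975)];
    console.log("n = " + n + ": middle 95% of sample means runs from " + lo.toFixed(2) + " to " + hi.toFixed(2));
}

[5, 15, 30, 60].forEach(function (n) { describeMeans(n); });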

Here is an example of a study with n=16:
"Outbreak of COVID-19 in Germany Resulting from a Single Travel-Associated Primary Case" https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3551335
The knowledge that Covid-19 is infectious even before the onset of symptoms has informed German contact tracing ever since February.
 
I guess that's a different type of study, though - it's looking at behaviour in a very specific case, and answering the question "can x happen?" I think the type of study we're talking about here is one that seeks "statistical significance".
 
I guess that's a different type of study, though - it's looking at behaviour in a very specific case, and answering the question "can x happen?" I think the type of study we're talking about here is one that seeks "statistical significance".
Yes. And to determine how significant your result is, you need to be able to model the probability distribution that underlies it. Mick made a simulation that assumes all male rats have basically the same chance to get cancer. What if that is not true? Is our average chance still correct then? If you have looked up my first quote in the book you linked, you'll have seen the graphs right below it, where a skewed distribution is shown, and the averages look skewed as well. In that case, any idea we might have of how accurate the observed average is goes out the window.

But if you have 30 rats or more, you can usually assume that your average is going to behave much the same as if every rat had the same chance to get cancer, and the discussion we've been having becomes possible. Only then can we say, well, if the true chance is 0.57%, then there's an 11.49% chance that we see 3 or more sick male rats in that group, because that's a statement on how samples behave in normal distributions. You can then decide if that chance is small enough for you to attach significance to it, and it kinda tells you how high your chance is that it isn't, based on that set of data.

Now, by convention, in most sciences we'll say "significant" means 5% chance or less, and that makes results easy to compare and combats sensationalism. But what we really would want to know is not a binary yes/no, but the actual probability. If the chance to be wrong is 4.9% that's worse odds than if it was 0.3%, right? Significance is a choice that we assign to a specific level of confidence that we want to have in the result, and we can only do that if we can compute how probable it is that we got that result by random chance. And for that, we usually want the distribution to be like a normal distribution, so 30+ samples or rats or whatever.

If you have that, your study data (usually) tells you how accurate your result is.
And then the other thing I said earlier comes into play: if you need a lot of accuracy because you want to show a very small effect, you need a large sample size; but if you're ok with getting an approximate idea, a small sample is enough. The study I cited computed the average incubation length and the average serial interval, and because there were only 16 samples, the error was probably quite big; but since all they probably wanted was to check whether that lines up with the data we had from China, they did it anyway.

Long story short, to be able to assess the error inherent in the data, we need a minimum sample size. If we can do that, if we can talk about the error, then we have scientific data that can be significant if we judge it so; but if we can't, we only have anecdotes.
 
Well, we shall have to agree to disagree on that. But, for me, after doing the "birth rate vs lunar cycle" and "menstrual cycle vs lunar cycle" projects - both of which featured university studies with sample sizes in the hundreds that claimed all sorts of "statistical significance" - all of which fell apart on closer investigation - I don't think I can be convinced on this one. :)
 
Well, yeah, just having numbers isn't enough. There's no magic in them. ;-)
If you're good at sociology but bad at statistics, you miss things.
 
Finally found one of the references I was looking for:

OECD further states that ‘‘for strains with poor survival such as SD rats, higher numbers of animals per group may be needed in order to maximize the duration of treatment (typically at least 65/sex/group).’
Content from External Source
This is in contrast to the standard 50 rats described elsewhere in the linked paper.

Link is here. Hope this works.

My point is that SD rats don't live as long as other strains, and the SD rats' predisposition to cancer sort of makes you question the use of SD rats at all. Granted, they used quite a few more than 65 rats/sex/group, but the researchers run the risk of producing null results if the observed incidence is low, as in this case where only one result was significant.
 
So the study exposed rats to roughly 44 to 385 times the levels found in that survey, and those survey figures cover the whole RF band, not just 1.8 GHz.
Another reason not to focus simply on the frequencies themselves but also on the other factors (power, field strength, levels of exposure). Many people look at this and think "OMG, 1.8 GHz is harmful because of this study" but fail to look at the exposure levels and power involved.
 