Claim: the greenhouse effect being logarithmic means global warming ain't a problem

TheNZThrower

A common claim among global warming contrarians is that since the greenhouse effect is logarithmic, meaning each equal rise in CO2 adds less warming than the one before it, it must follow that any warming we experience will not be significant, or at least not severe enough to cause serious harm. To quote fossil fuel stan Alex Epstein again:
While I’ve met thousands of students who think the greenhouse effect of CO2 is a mortal threat, I can’t think of ten who could tell me what kind of effect it is. Even “experts” often don’t know, particularly those of us who focus on the human-impact side of things. One internationally renowned scholar I spoke to recently was telling me about how disastrous the greenhouse effect was, and I asked her what kind of function it was. She didn’t know. What I told her didn’t give her pause, but I think it should have.

As the following illustration shows, the greenhouse effect of CO2 is an extreme diminishing effect–a logarithmically decreasing effect. This is how the function looks when measured in a laboratory.
This is the graph Epstein cites:
[Graph from Epstein's article: the warming effect of CO2 versus concentration, a logarithmically diminishing curve]

The first major issue is that Epstein fails to elaborate on the expertise of the scholar he mentioned. Is she a climatologist or meteorologist? Does she have any relevant expertise in climate issues? Even if she does, it is a hasty generalisation to conclude that the relevant experts often don't know just because one scholar of unverified expertise didn't. This is a rhetorical sleight of hand meant to cast doubt on the relevant climate experts by making it seem as though they don't even know the basic fact that the CO2 effect is logarithmic, and are therefore intellectually inferior to Epstein and his grand wisdom.

Even granting the claim for the sake of argument, it does not prove his thesis. The steep diminishing returns of the logarithm occur mostly at concentrations well below preindustrial levels; above roughly 280 ppm the curve flattens only slowly, so each further doubling of CO2 adds about the same amount of forcing. Over the range of concentrations we are actually heading towards, the extra flattening is small enough that the warming response is roughly linear.

But all that aside, the fact that greenhouse forcing is logarithmic does not mean the resulting warming won't have a significant, even adverse, effect on the climate. Suppose, purely for illustration, that raising CO2 from 280 to 560 ppm increased temperatures by 5 degrees, and the next doubling, from 560 to 1120 ppm, added a further 3 degrees. The effect would still be logarithmic in the sense of diminishing per step, yet it would add up to a very significant 8 degrees of warming. This also doesn't factor in that CO2 has been rising faster than exponentially: as SkepticalScience shows, even on a logarithmic scale, where a pure exponential would plot as a straight line, the CO2 curve still bends upward:
[Graph from SkepticalScience: atmospheric CO2 concentration plotted on a logarithmic scale, still curving upward]
So even if Epstein were right about the logarithmic greenhouse effect, it still would not follow that future warming won't be severe enough to harm human civilisation. Keep doubling the current CO2 level and the warming keeps stacking up.
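As a rough illustration of why "logarithmic" does not mean "small", here is a minimal sketch using the commonly cited simplified forcing formula ΔF ≈ 5.35 ln(C/C₀) W/m² and an assumed equilibrium warming of 3 °C per doubling; both the sensitivity figure and the concentration steps are my illustrative assumptions, not numbers from Epstein's article or the graphs above.

```python
import math

SENSITIVITY_PER_DOUBLING = 3.0  # assumed equilibrium warming per CO2 doubling, deg C (illustrative)
C0 = 280.0                      # preindustrial CO2, ppm

for doublings in range(1, 4):
    c = C0 * 2 ** doublings
    forcing = 5.35 * math.log(c / C0)               # logarithmic forcing, W/m^2
    warming = SENSITIVITY_PER_DOUBLING * doublings  # same extra warming for every doubling
    print(f"{c:6.0f} ppm: forcing ~{forcing:4.1f} W/m^2, equilibrium warming ~{warming:3.1f} C")
```

Each doubling adds the same roughly 3.7 W/m² of forcing and, under this assumption, the same few degrees of warming; the logarithm only means the warming grows per doubling rather than per ppm, it puts no cap on the total.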

So that's my take for today. Since Epstein makes a few more points in the article I linked, I'll be following up with another post in this thread soon. Did I miss any important details? Let me know.
 
You are correct.

We must take the Figure 4 logarithmic graph of CO2 emissions over time, which shows that CO2 emissions are currently growing exponentially, and feed that into the first graph of effect versus CO2 concentration (Epstein's). The exponential growth in CO2 cancels out the logarithmic effect, and the graph of effect versus time ends up as a straight line sloping upwards.

The effect is only going to get worse with time, and will not flatten out, unless we cut carbon emissions enough to flatten or bend down that curve.

The situation is in fact worse. Figure 4 shows faster-than-exponential growth: the CO2 curve is not a straight diagonal line even on a logarithmic scale, so the resulting effect-versus-time curve will not be straight either, it will bend upwards.
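A minimal sketch of that point, using a purely hypothetical CO2 trajectory of my own (the growth rates below are made up for illustration, not fitted to Figure 4): with exponential growth the logarithmic forcing rises linearly in time, and with faster-than-exponential growth it bends upward.

```python
import math

C0 = 280.0  # baseline CO2, ppm

def forcing(c):
    # simplified logarithmic forcing, W/m^2
    return 5.35 * math.log(c / C0)

exp_co2   = lambda t: C0 * math.exp(0.005 * t)                  # hypothetical steady ~0.5 %/yr growth
super_co2 = lambda t: C0 * math.exp(0.005 * t + 1e-5 * t ** 2)  # hypothetical growth rate that itself rises

for t in range(0, 201, 50):
    print(f"year {t:3d}: exponential -> {forcing(exp_co2(t)):5.2f} W/m^2, "
          f"faster-than-exponential -> {forcing(super_co2(t)):5.2f} W/m^2")
```

The first column climbs by the same amount every 50 years (a straight diagonal line in time); the second climbs by ever larger amounts, which is the "worse" case described above.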
 
There is no doubt about the logarithmic relation between radiative forcing and CO2 concentration. But Epstein's reasoning is a non sequitur. It is a bit like what you could hear from the merchants of doubt a few decades ago: that 400 ppm of CO2 is such a tiny amount that it could not possibly have any effect on temperature.
It doesn't matter mathematically how CO2 causes a given temperature rise (logarithmically, quadratically, or otherwise); the question should be what effects that temperature rise has on our lives.
The "scholar" (vague...) could have expertise in any of a wide range of specialisms that are all too easily lumped together under the name "climate science": astronomy, atmospheric physics, glaciology, geology, ecology, paleontology, chemistry, and so on. A glaciologist could tell us more about the effect of a temperature rise on sea level than an atmospheric physicist, whereas the latter knows more about how CO2 absorbs longwave radiation.
 
A misunderstanding arises when the concentration of CO2 is measured at the surface of the earth, when it is the layer miles up in the troposphere that forms the "greenhouse" layer. The gases at that altitude cannot readily mix or react with the surface without a lag time of decades to centuries. Bottom line: there is no quick fix, and whatever CO2 we've produced recently will still be around to affect the climate our grandchildren will have. And contrary to the claims of the rabid anti-warming crowd, "the trees will take up the CO2" is wishful thinking.
 
Moving on...

Epstein proceeds to claim that climate models haven't matched temperature observations as follows:
Here’s the summary of what has actually happened– a summary that nearly every climate scientist would have to agree with. Since the industrial revolution, we’ve increased CO2 in the atmosphere from .03 percent to .04 percent, and temperatures have gone up less than a degree Celsius, a rate of increase that has occurred at many points in history. Few deny that during the last fifteen-plus years, the time of record and accelerating emissions, there has been little to no warming– and the models failed to predict that. By contrast, if one assumed that CO2 in the atmosphere had no major positive feedbacks, and just warmed the atmosphere in accordance with the greenhouse effect, this mild warming is pretty much what one would get.

Thus every prediction of drastic future consequences is based on speculative models that have failed to predict the climate trend so far and that speculate a radically different trend than what has actually happened in the last thirty to eighty years of emitting substantial amounts of CO2.
The first issue with Epstein's claim is that he states the increase in CO2 as a percentage of atmospheric composition rather than in parts per million: 0.03 to 0.04 per cent is roughly 300 to 400 ppm, an increase of about a third. The percentage framing makes the rise seem insignificant, implicitly reiterating the common contrarian canard that ''CO2 is a tiny fraction of the atmosphere, so it can't have a significant effect.'' Though Epstein might accuse me of strawmanning him (if he even knows I exist), it isn't a strawman, because that is exactly the conclusion many readers will draw when the rise in CO2 is given as a percentage of the whole atmosphere.
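To put that framing in perspective, here is a minimal sketch of the forcing implied by Epstein's own figures, using the same simplified logarithmic formula as above (an approximation and my own choice of formula, not anything from his article):

```python
import math

C_BEFORE, C_AFTER = 300.0, 400.0  # ppm, i.e. Epstein's 0.03 % -> 0.04 % of the atmosphere
delta_f = 5.35 * math.log(C_AFTER / C_BEFORE)  # extra forcing, W/m^2
per_doubling = 5.35 * math.log(2)              # forcing from a full doubling, ~3.7 W/m^2
print(f"{C_AFTER / C_BEFORE - 1:.0%} more CO2 -> ~{delta_f:.1f} W/m^2, "
      f"about {delta_f / per_doubling:.0%} of a full doubling's forcing")
```

Framed in ppm rather than percentage points of the atmosphere, the rise Epstein describes already delivers roughly 40 per cent of the forcing of a full doubling, which is anything but negligible.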

The second fallacy Epstein commits is claiming that the same rate of increase we're currently observing has occurred at many points in the past. Even assuming that is true, it does not follow that the current warming won't accelerate as CO2 keeps rising, nor that it won't have, or isn't already having, adverse effects on the climate.

The third fallacy is his claim that warming was relatively stagnant over the preceding fifteen or so years. Even if that were true, expecting temperature trends to track CO2 emissions exactly, year by year, is unrealistic, and cherry-picking short periods in which warming stalled while CO2 kept rising is another tired old contrarian canard.

Epstein then reinforces his claims by citing this graph developed by known contrarian climate/atmospheric scientist John Christy:
[Graph: Christy's comparison of climate model projections with observed temperatures]
Seems pretty damning for the theory of global warming, right? Not so fast. Gavin Schmidt, climatologist and director of the NASA Goddard Institute for Space Studies, proceeds to explain what is wrong with Christy's graph in this RealClimate article.

Now I don't currently have the time or the grounding in climate science to go through the article, so I'mma leave the analysis of it, along with Christy's data, to you guys for now.
 

Gavin Schmidt, climatologist and director of the NASA Goddard Institute for Space Studies, proceeds to explain what is wrong with Christy's graph in this RealClimate article.
In the end, Schmidt produces his own graphs:
[RealClimate graphs: christy_new.png and christy_trend.png]
Content from External Source
These graphs show that there is uncertainty in the satellite data, but there's still a discrepancy between model projections and observations.

A commenter notes that a climate model makes assumptions about future events such as volcanic eruptions, and so to assess the validity of the model we should pick the projections whose assumptions match what actually happened, instead of comparing observations to the whole ensemble or its average.

Another point is that 15 years is "weather" and not climate, and that it's become warmer again after 2015.

From your Epstein quote:
temperatures have gone up less than a degree Celsius, a rate of increase that has occurred at many points in history.
Content from External Source
I think the point to be made here is that if these historic increases are consistent with and explained by our current climate science, then we should trust that science when it says that this time, it's different.
 
A commenter notes that a climate model makes assumptions about future events such as volcanic eruptions, and so to assess the validity of the model we should pick the projections whose assumptions match what actually happened, instead of comparing observations to the whole ensemble or its average.
So you're saying that the best way to assess the accuracy of a model is to select the models, or the projections thereof, that are the closest match to the observed temperature trends, as opposed to comparing the ensemble or average of the climate models to the real temperatures?
 
The first major issue is that Epstein fails to elaborate on the expertise of the scholar he mentioned. Is she a climatologist or meteorologist? Does she have any relevant expertise in climate issues?
In fact, pointing out that the “scholar” was actually in a relevant field would only strengthen his argument, so the fact that he declines to state what kind of scholar the person was casts immediate doubt on whether they were relevant.

He seems to hope that the mere juxtaposition of the word “scholar” in that sentence with the reference to “experts” in the previous sentence will cause the casual reader to conflate the two.
 
So you're saying that the best way to assess the accuracy of a model is to select the models, or the projections thereof, that are the closest match to the observed temperature trends, as opposed to comparing the ensemble or average of the climate models to the real temperatures?
If I want to assess the quality of models that were used in 2000 to predict the next 15 years, I would choose those model runs where the input parameters matched what actually happened in 2000-2015.
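A minimal sketch of that selection idea; the run names, forcing values, and trends below are entirely hypothetical, made up just to show the filtering step, and are not taken from any real ensemble:

```python
# Hypothetical ensemble: each run assumed a net volcanic/solar forcing for 2000-2015.
runs = [
    {"name": "run_a", "assumed_forcing": 0.00,  "trend_c_per_decade": 0.25},
    {"name": "run_b", "assumed_forcing": -0.10, "trend_c_per_decade": 0.18},
    {"name": "run_c", "assumed_forcing": -0.20, "trend_c_per_decade": 0.12},
]
observed_forcing = -0.18  # hypothetical: what volcanoes and the sun actually did over the period
observed_trend = 0.11     # hypothetical observed warming trend, deg C per decade

# Keep only the runs whose input assumptions match what actually happened, then compare outputs.
matching = [r for r in runs if abs(r["assumed_forcing"] - observed_forcing) < 0.05]
for r in matching:
    print(f'{r["name"]}: modelled {r["trend_c_per_decade"]} vs observed {observed_trend} deg C/decade')
```

The point is that the runs are selected on their inputs (what they assumed would happen), not on their outputs (whichever run happened to land closest to the observations), which is what distinguishes this from cherry-picking.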
 
Another point I'd like to make about Epstein's claim that the current warming is insignificant because it is less than a degree: even a warming of slightly under a degree is already a sizeable fraction, roughly 15 per cent, of the roughly 5 °C of warming that brought the Earth out of the last ice age. To quote the 2009 book ''Climate Change: The Science, Impacts and Solutions'' by Albert Barrie Pittock, a ''climate researcher'' (for lack of a better term, as I wasn't able to find good information on his precise expertise):
Three things are notable about these IPCC conclusions. First, it shows that a warming of at least 0.56ºC almost certainly occurred. Second, the most likely value of 0.74ºC, while it may appear to be small, is already a sizeable fraction of the global warming of about 5ºC that took place from the last glaciation [ice age] around 20,000 years ago to the present interglacial period (which commenced some 10,000 years ago). Prehistoric global warming led to a complete transformation of the Earth's surface, with the disappearance of massive ice sheets, and continent-wide changes in vegetation cover, regional extinctions and a sea level rise of about 120 metres.

- ''Climate Change: The Science, Impacts and Solutions'', p. 3
In case anyone needed further data on the temperature trends of the past 20,000 years, here is a good graph from Wikipedia, with sources attached.
[Graph: Wikipedia's "All palaeotemps" composite of palaeotemperature reconstructions]
As you can see in the fifth segment, the temperature rise from the last glaciation to the present interglacial does appear to match Pittock's figures. In addition, all the proxy data and temperature records were published before Epstein's article.

But all that aside, the rate of warming matters just as much as the amount of warming. On this, Pittock has the following to say:
Most importantly, the average rate of warming at the end of the last glaciation was about 5ºC in some 10,000 years, or 0.05ºC per century, while the observed rate of warming in the last 50 years is 1.3ºC per century and the estimated rate could be more than 5ºC per century, which is 100 times as fast as during the last deglaciation. Such rapid rates of warming would make adaptation by natural and human systems extremely difficult or impossible.
However, the warming rate over such a long period isn't uniform: the Greenland ice core data in the graph above show several episodes of extremely rapid warming during the last deglaciation (periods termed Dansgaard-Oeschger events). So could Pittock be cherry-picking when he compares the average warming rate of the past 50 years with an average taken over roughly 10,000 years, especially when the rate back then was far from constant?
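For reference, a quick check of the rates quoted from Pittock; the 5 °C over 10,000 years and the 1.3 °C-per-century figures are his, the arithmetic is just unit conversion:

```python
deglacial_warming_c = 5.0      # total warming out of the last glaciation, deg C (Pittock)
deglacial_years = 10_000
recent_rate_per_century = 1.3  # observed rate over the last 50 years, deg C per century (Pittock)

deglacial_rate_per_century = deglacial_warming_c / deglacial_years * 100
print(f"deglacial average: ~{deglacial_rate_per_century:.2f} C/century, "
      f"recent: ~{recent_rate_per_century} C/century, "
      f"ratio ~{recent_rate_per_century / deglacial_rate_per_century:.0f}x")
```

The observed rate works out to roughly 25 times the deglacial average; Pittock's "100 times" refers to the projected rate of more than 5 °C per century, not the observed one. Whether a 10,000-year average is the fair comparison, given the abrupt warming episodes within it, is exactly the question raised above.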
 
To quote the 2009 book ''Climate Change: The Science, Impacts and Solutions'' by Albert Barrie Pittock, a ''climate researcher'' (for lack of a better term, as I wasn't able to find good information on his precise expertise):
Article:
(Albert) Barrie Pittock worked at the Aspendale laboratory, CSIRO Marine And Atmospheric Research, from 1965 until officially retiring in 1999. Since then he worked as a CSIRO honorary Fellow until 2017. Barrie did research in Meteorology, Geochemistry and Climatic change and impacts, and has a continuing interest in renewable energy and policy re climate change. He is still writing and archiving re his career and interests, and received an Order of Australia medal in 2019 for his voluntary work on Aboriginal affairs since the late 1950s, including renewable energy as a major resource for Aboriginal communities.

That page also lists a number of his publications. I think it's fair to call him a climate scientist without the scare quotes.
 
In case anyone needed further data on the temperature trends of the past 20,000 years, here is a good graph from Wikipedia, with sources attached.
[truncated copy of the graph]
Edit to add: cut-n-paste unfortunately truncated the salient portion of the graph. Darn it! Try again:
https://upload.wikimedia.org/wikipedia/commons/f/f5/All_palaeotemps.png

I find it illuminating - and chilling - that almost all of what we refer to as civilization occurs during that ten thousand years of relatively stable temperature. Perhaps it's because that's the period when most people no longer had to spend all their time on sheer survival.

Nice world. Be a shame to lose it...
 
To continue with the logarithmic effect of CO2, wattsupwiththat claims the following:
In 2006, Willis Eschenbach posted this graph on Climate Audit showing the logarithmic heating effect of carbon dioxide relative to atmospheric concentration:
[graph omitted: warming effect of CO2 versus atmospheric concentration]
And this graphic of his shows carbon dioxide’s contribution to the whole greenhouse effect:
[graph omitted: CO2's contribution to the total greenhouse effect]
I recast Willis’ first graph as a bar chart to make the concept easier to understand to the layman:
[graph omitted: the same data recast as a bar chart]
Lo and behold, the first 20 ppm accounts for over half of the heating effect to the pre-industrial level of 280 ppm, by which time carbon dioxide is tuckered out as a greenhouse gas...
[graph omitted: bar chart of warming per 20 ppm increment, natural (blue bars) versus IPCC-projected anthropogenic (red)]
The natural heating effect of carbon dioxide is the blue bars and the IPCC projected anthropogenic effect is the red bars. Each 20 ppm increment above 280 ppm provides about 0.03° C of naturally occurring warming and 0.43° C of anthropogenic warming. That is a multiplier effect of over thirteen times. This is the leap of faith required to believe in global warming.

The whole AGW belief system is based upon positive water vapour feedback starting from the pre-industrial level of 280 ppm and not before. To paraphrase George Orwell, anthropogenic carbon dioxide molecules are more equal than the naturally occurring ones. Much, much more equal.
Content from External Source
So how valid is the representation of the logarithmic graph as a bar chart?
 

So how valid is the representation of the logarithmic graph as a bar chart?
Mostly valid, as long as you use it to compare like with like. However, including the "no plants survive" part of the data could be considered deliberately adding off-the-scale data in order to shrink the visibility of the relevant part. Adding noise to decrease the signal, as it were.

However, if you're not comparing like with like, then you're committing an error, or just plain lying, and none of the charts you include go that far...

... but the final chart, which overlays the IPCC projection on the per-20 ppm bars, is a perfect example of attempting to deceive.

One of the sets of data is deltas, the other is cumulative; you simply can't compare them like that. They have different dimensions, so it's quite literally an apples-to-oranges comparison. If you want 15 deltas that sum to 6, then on average those bars should be 0.4 in height. But that wouldn't look as impressive as something all the way up at 6.

Of course, 6 °C is the highest of the IPCC models; they cherry-picked that one because their more moderate ones wouldn't look as impressive.
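A minimal sketch of the apples-to-oranges problem, using the simplified logarithmic forcing relation and an assumed 3 °C of warming per doubling (both are my illustrative assumptions, not WUWT's or the IPCC's exact numbers): the per-20 ppm increments are individually tiny, but by construction they sum to the very total the cumulative curve shows.

```python
import math

S = 3.0            # assumed equilibrium warming per doubling of CO2, deg C (illustrative)
C0, C1 = 280, 560  # preindustrial concentration and its doubling, ppm

def warming(c):
    # cumulative warming above preindustrial for concentration c, deg C
    return S * math.log(c / C0) / math.log(2)

deltas = [warming(c + 20) - warming(c) for c in range(C0, C1, 20)]  # per-20 ppm increments
print(f"{len(deltas)} increments, ~{sum(deltas) / len(deltas):.2f} C each on average, "
      f"summing to {sum(deltas):.1f} C -- the same number the cumulative curve reaches")
```

Plot each ~0.2 °C bar next to the full cumulative value and the bars look negligible, even though they add up to exactly that value.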
 
CO2 isn't the only greenhouse gas, and it does matter where in the atmosphere you release it.
For example, permafrost soil thawing out releases methane.

So if someone argues that the increasing near-ground CO2 concentration doesn't fully account for the projected global warming by itself, I think everyone will agree. It's a nothingburger based on a straw man. It's not enough to replace an in-depth critique of the prevailing climate models; its only purpose is to rile up underinformed laypeople.
 
One of the sets of data is deltas, the other is cumulative; you simply can't compare them like that.
What does delta mean in this context?
 
What does delta mean in this context?
It means "change" or "difference", sorry. It comes from the traditional use of the Greek delta symbol in equations to represent such quantities, e.g. ( https://wikimedia.org/api/rest_v1/media/math/render/svg/515ac019ee8fc51a74404e14c3e09714d32ee9d4 from https://en.wikipedia.org/wiki/Delta_(letter)#Uppercase , alas SVG images don't seem to get rendered inline here). So if your readings are 100, 110, 115, 118, 120, 119, then your deltas are 10, 5, 3, 2, -1.
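If it helps, the same example in code; the readings are just the hypothetical numbers from the sentence above:

```python
readings = [100, 110, 115, 118, 120, 119]
deltas = [b - a for a, b in zip(readings, readings[1:])]  # difference between each pair of consecutive readings
print(deltas)  # -> [10, 5, 3, 2, -1]
```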

[edit - polished it a bit]
 
One of the sets of data is deltas, the other is cumulative; you simply can't compare them like that.
Which one of the graphs is cumulative, and which is the delta? I find your language confusing, so can you explain the issues with the graphs in layperson's terms?
 
Which one of the graphs is cumulative, and which is the delta? I find your language confusing, so can you explain the issues with the graphs in layperson's terms?
the red line is cumulative, the bar graphs are deltas

the issue, in layman's terms, is that the junk pile in Bob's back yard grows bigger year after year (cumulative), but Bob can't understand the issue because the amount of junk he throws away each year stays the same (delta).

If you have a graph that shows the size of the junkpile vs. what Bob throws away, year by year, it'll look similar to that CO2 graph.
 
Which of the two CO2 graphs is cumulative VS delta?
 
I still don’t understand. Is it that the warming graph from the IPCC is a time series graph, while the logarithmic graph doesn’t have a time series, so such an overlapping of graphs is incorrect and deceptive?

I just think it would be better if you explain why the graph comparison is incorrect without having to use jargon I don’t understand well such as “delta”.
 
I still don’t understand. Is it that the warming graph from the IPCC is a time series graph, while the logarithmic graph doesn’t have a time series, so such an overlapping of graphs is incorrect and deceptive?
Now I don't get which graphs you are talking about.
I just think it would be better if you explain why the graph comparison is incorrect without having to use jargon I don’t understand well such as “delta”.
Have you understood my post about Bob's junk pile?
 
The graph to which @FatPhil responded.
There's no time series there?
I understood your post about the junk pile. I just didn't understand how it applies to the graph.
the bars say, "if you have X amount of CO2, and increase it by 20 ppm, that throws Y amount of heat on the pile". (That's a delta.)

the red line says, "if you have X amount of CO2, your heat pile is Z big." (that's cumulative)

those are conceptually different things.

I recast Willis’ first graph as a bar chart to make the concept easier to understand to the layman:
Content from External Source
very obviously this recasting did not make the matter any easier to understand
 
AFAIK, the chart is saying that, based on the logarithmic nature of the CO2 effect, the actual temperature increase for a doubling of the pre-industrial CO2 level is only about 1 degree C according to the bar chart, yet the IPCC predicts a larger increase than the effect alone suggests.

I don't know where they got the IPCC graph from. But I do know that the latest IPCC report available at the time the article was written says that the expected increase for a doubling of CO2 is 2-4.5 degrees:

Exec summary: https://www.ipcc.ch/site/assets/uploads/2018/02/ar4-wg1-spm-1.pdf
The equilibrium climate sensitivity is a measure of the climate system response to sustained radiative forcing. It is not a projection but is defined as the global average surface warming following a doubling of carbon dioxide concentrations. It is likely to be in the range 2°C to 4.5°C with a best estimate of about 3°C, and is very unlikely to be less than 1.5°C.
Content from External Source
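For context on how that 2-4.5 °C range relates to the roughly 1 °C figure implied by the bar chart, here is a minimal sketch using the standard simplification ΔT = S · ln(C/C₀) / ln 2, where S is the equilibrium climate sensitivity per doubling; the ~1.2 °C value for the no-feedback case is a commonly quoted approximation, and the 415 ppm "today" figure is my own round number, not something from the quoted report:

```python
import math

C0 = 280.0  # preindustrial CO2, ppm
scenarios = {"today (~415 ppm)": 415.0, "doubling (560 ppm)": 560.0}

for s in (1.2, 3.0, 4.5):  # ~1.2 C = no-feedback response; 3 C best estimate and 4.5 C upper end of the AR4 range
    for label, c in scenarios.items():
        dt = s * math.log(c / C0) / math.log(2)
        print(f"S = {s} C/doubling, {label}: ~{dt:.1f} C of equilibrium warming")
```

The gap between the roughly 1 °C no-feedback response and the 2-4.5 °C range comes from the feedbacks (water vapour, ice-albedo, clouds), which is precisely what the WUWT piece waves away as a "leap of faith".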
 
I still don’t understand. Is it that the warming graph from the IPCC is a time series graph, while the logarithmic graph doesn’t have a time series, so such an overlapping of graphs is incorrect and deceptive?

I just think it would be better if you explain why the graph comparison is incorrect without having to use jargon I don’t understand well such as “delta”.

If you're looking at cars, which is more impressive - having a top speed of 230 km/h, or having a fully-charged range of 650 km?

Neither - the question is *meaningless*. You cannot compare things that are not in the same unit type (technically called "dimension"). There is a "per hour" difference in the above dimensions.

Which is the same type of error as in the graph I highlighted. One is just the value of a measurement, and the other is the difference in value of that measurement over the course of a year - i.e. the change per year.

If my car analogy doesn't persuade you that such comparisons are just plain errors, and nor does @Mendel's example above, let's look at how we can use the same data to get completely different answers. Were you to change the graph to be monthly, the deltas (I will keep using that term; it's the correct term, and a very useful one to add to both your passive and your active vocabulary) would be twelve times shorter, as each would be the change over only one month, barely visible against the axis. Were you to change the graph to be decadal, the deltas would be ten times taller, because each would be ten years' worth of change summed, and would therefore look far more significant. This is not surprising: change per month, change per year, and change per decade will obviously be about an order of magnitude apart from each other. So which one is the correct one to compare against the absolute reading: their yearly number, my smaller monthly one, or my larger decadal one? There's only one sensible answer: none of them. Any time interval could have been chosen, so the numbers can be made arbitrarily smaller or larger.
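A minimal sketch of that interval dependence, using a made-up CO2-like series rising at about 2 ppm per year (the numbers are mine, purely for illustration): the size of each "delta" depends entirely on how you slice time, while the underlying series is unchanged.

```python
# Hypothetical CO2-like series rising ~2 ppm per year, sampled monthly for 20 years.
monthly = [400 + 2 * (m / 12) for m in range(241)]

def deltas(series, step):
    # differences between consecutive samples taken every `step` entries
    return [b - a for a, b in zip(series[::step], series[step::step])]

for label, step in (("per month", 1), ("per year", 12), ("per decade", 120)):
    d = deltas(monthly, step)
    print(f"{label}: each delta ~{d[0]:.2f} ppm ({len(d)} of them)")
```

Same data, and the bars end up roughly 0.17 ppm, 2 ppm, or 20 ppm tall depending only on the chosen interval.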

OT, but closely related, for the hardcore to ponder over: the Buffett Indicator isn't a dimensionless number, it is actually a measure of time! Right now the BI isn't "2", it's "2 years". Market capitalisation is measured in $, GDP is measured in $/yr, therefore the Buffett Indicator is $/($/yr), which is yr. To treat it as a dimensionless value, which every economic pundit seems to do, is an oversight. It's the time it would take for money, accumulating at the rate of GDP, to equal the total market capitalisation.
 
I get what you're referring to now.

Isn't the IPCC data cited in the graph also an indicator of the warming effect for a doubling of CO2 above pre-industrial levels? How did you conclude that the data was on a per-year basis?
 

Good catch, I may have extrapolated or interpolated around their statements too much and dragged time into it accidentally, since it features elsewhere in the arguments. Their abscissa (the input to the function they are trying to represent, the "x axis") in that graph is the CO2 level itself. It's still the same data-misrepresentation error, though: you can differentiate (find the rate of change) with respect to any variable; there's nothing special about time. If F is forcing and C is the CO2 level, then they're effectively plotting dF(C)/dC and F(C) on the same graph. My mistake, sorry; I reacted so quickly on seeing what I knew was a mistake that I didn't work out exactly which instance of the mistake it was. The thing to look out for is whether the bars would change in height if they had been made wider or narrower. Anything that doesn't stay exactly the same has a dependency on exactly how they've sliced the data, and that's the warning sign.
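A minimal sketch of that warning sign, using the simplified logarithmic forcing formula with two different bar widths of my choosing: the bars roughly double in height when the bins are made twice as wide, while the cumulative total doesn't change at all, which is why dF/dC bars and the F curve don't belong on the same axis.

```python
import math

C0 = 280.0  # preindustrial CO2, ppm

def forcing(c):
    # simplified logarithmic forcing, W/m^2
    return 5.35 * math.log(c / C0)

for width in (20, 40):  # bar width in ppm
    bars = [forcing(c + width) - forcing(c) for c in range(280, 560, width)]
    print(f"{width} ppm bars: first {bars[0]:.2f} W/m^2, tallest {max(bars):.2f} W/m^2, "
          f"sum {sum(bars):.2f} W/m^2 (the cumulative total, unchanged)")
```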
 
Can you give a tl;dr of the issue?
 