I don't want to believe that scientists don't want to advance science anymore. Destroying the temperature record by altering it would be just that.
You have seen ample evidence of just that. Like it or not, some climate scientists are willing to alter the record in order to gain money, fame, and political power.
You have shown nothing but that adjustments were made to the data - and explanations were given for those adjustments. You have NOT shown that those adjustments weren't called for and you have NOT provided any evidence that those making the adjustments had ulterior motives of any sort. You see the adjustments and simply assume that they were done for deceptive and malicious purposes when you have no evidence indicating that at all. They call that PREJUDICE. In severe cases, they call it BIGOTRY. In either case, it is the result of the application of ignorance.
Really? Let's hear a rational scientifically sound reason for altering temperatures prior to 1960.
From: Tom Wigley <[email protected]>
To: Phil Jones <[email protected]>
Subject: 1940s
Date: Sun, 27 Sep 2009 23:25:38 -0600
Cc: Ben Santer <[email protected]>
<x-flowed>
Phil,
Here are some speculations on correcting SSTs to partly
explain the 1940s warming blip.
If you look at the attached plot you will see that the
land also shows the 1940s blip (as I'm sure you know).
So, if we could reduce the ocean blip by, say, 0.15 degC,
then this would be significant for the global mean -- but
we'd still have to explain the land blip.
I've chosen 0.15 here deliberately. This still leaves an
ocean blip, and i think one needs to have some form of
ocean blip to explain the land blip (via either some common
forcing, or ocean forcing land, or vice versa, or all of
these). When you look at other blips, the land blips are
1.5 to 2 times (roughly) the ocean blips -- higher sensitivity
plus thermal inertia effects. My 0.15 adjustment leaves things
consistent with this, so you can see where I am coming from.
Removing ENSO does not affect this.
It would be good to remove at least part of the 1940s blip,
but we are still left with "why the blip".
Let me go further. If you look at NH vs SH and the aerosol
effect (qualitatively or with MAGICC) then with a reduced
ocean blip we get continuous warming in the SH, and a cooling
in the NH -- just as one would expect with mainly NH aerosols.
The other interesting thing is (as Foukal et al. note -- from
MAGICC) that the 1910-40 warming cannot be solar. The Sun can
get at most 10% of this with Wang et al solar, less with Foukal
solar. So this may well be NADW, as Sarah and I noted in 1987
(and also Schlesinger later). A reduced SST blip in the 1940s
makes the 1910-40 warming larger than the SH (which it
currently is not) -- but not really enough.
So ... why was the SH so cold around 1910? Another SST problem?
(SH/NH data also attached.)
This stuff is in a report I am writing for EPRI, so I'd
appreciate any comments you (and Ben) might have.
Tom.
</x-flowed>
Attachment Converted: "c:\eudora\attach\TTHEMIS.xls"
Attachment Converted: "c:\eudora\attach\TTLVSO.XLS"
You are the one with the burden of proof.
However, since you will very likely make no effort in that regard, being satisfied with your prejudices:
Time of Observation Bias Adjustments
Next, monthly temperature values were adjusted for the time-of-observation bias (Karl, et al. 1986; Vose et al., 2003). The Time of Observation Bias (TOB) arises when the 24-hour daily summary period at a station begins and ends at an hour other than local midnight. When the summary period ends at an hour other than midnight, monthly mean temperatures exhibit a systematic bias relative to the local midnight standard (Baker, 1975). In the U.S. Cooperative Observer Network, the ending hour of the 24-hour climatological day typically varies from station to station and can change at a given station during its period of record. The TOB-adjustment software uses an empirical model to estimate and adjust the monthly temperature values so that they more closely resemble values based on the local midnight summary period. The metadata archive is used to determine the time of observation for any given period in a station's observational history.
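The mechanism behind the TOB is easy to demonstrate with synthetic data. The sketch below is illustrative only - it is not the actual TOB-adjustment software, and the temperatures are made up - but it shows how a climatological day ending at 5 p.m. double-counts warm afternoons, biasing the monthly mean warm relative to the local-midnight standard.

```python
import math

# Synthetic hourly temperatures for 30 days: a sinusoidal daily cycle peaking
# at 3 p.m., on alternating warm (+5) and cool (-5) days. Illustrative only.
day_anom = [5.0 if d % 2 == 0 else -5.0 for d in range(30)]
temps = [10 + day_anom[h // 24] + 8 * math.sin(2 * math.pi * ((h % 24) - 9) / 24)
         for h in range(30 * 24)]

def monthly_mean(temps, end_hour):
    """Mean of daily (max + min) / 2 where each 24-hour 'climatological day'
    ends at `end_hour` local time (0 = the local-midnight standard)."""
    days, start = [], end_hour
    while start + 24 <= len(temps):
        window = temps[start:start + 24]
        days.append((max(window) + min(window)) / 2)
        start += 24
    return sum(days) / len(days)

# A 5 p.m. observer re-counts a warm day's late afternoon as the maximum of
# the following climatological day, so the bias comes out positive (warm).
bias = monthly_mean(temps, 17) - monthly_mean(temps, 0)
print(round(bias, 2))
```

The same effect with a morning observation time runs the other way (re-counted cold mornings, a cool bias), which is why the adjustment depends on the station's documented time of observation.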
and
The USHCN version 2 "pairwise" homogenization algorithm addresses these and other issues according to the following steps, which are described in detail in Menne and Williams (2009). At present, only temperature series are evaluated for artificial changepoints.
First, a series of monthly temperature differences is formed between numerous pairs of station series in a region. Specifically, difference series are calculated between each target station series and a number (up to 40) of highly correlated series from nearby stations. In effect, a matrix of difference series is formed for a large fraction of all possible combinations of station series pairs in each localized region. The station pool for this pairwise comparison of series includes USHCN stations as well as other U.S. Cooperative Observer Network stations.
Tests for undocumented changepoints are then applied to each paired difference series. A hierarchy of changepoint models is used to distinguish whether the changepoint appears to be a change in mean with no trend (Alexandersson and Moberg, 1997), a change in mean within a general trend (Wang, 2003), or a change in mean coincident with a change in trend (Lund and Reeves, 2002). Since all difference series are comprised of values from two series, a changepoint date in any one difference series is temporarily attributed to both station series used to calculate the differences. The result is a matrix of potential changepoint dates for each station series.
The full matrix of changepoint dates is then "unconfounded" by identifying the series common to multiple paired-difference series that have the same changepoint date. Since each series is paired with a unique set of neighboring series, it is possible to determine whether more than one nearby series share the same changepoint date.
The magnitude of each relative changepoint is calculated using the most appropriate two-phase regression model (e.g., a jump in mean with no trend in the series, a jump in mean within a general linear trend, etc.). This magnitude is used to estimate the "window of uncertainty" for each changepoint date since the most probable date of an undocumented changepoint is subject to some sampling uncertainty, the magnitude of which is a function of the size of the changepoint. Any cluster of undocumented changepoint dates that falls within overlapping windows of uncertainty is conflated to a single changepoint date according to
a known change date as documented in the target station's history archive (meaning the discontinuity does not appear to be undocumented), or
the most common undocumented changepoint date within the uncertainty window (meaning the discontinuity appears to be truly undocumented)
Finally, multiple pairwise estimates of relative step change magnitude are re-calculated (as a simple difference in mean) at all documented and undocumented discontinuities attributed to the target series. The range of the pairwise estimates for each target step change is used to calculate confidence limits for the magnitude of the discontinuity. Adjustments are made to the target series using the estimates for each shift in the series.
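A toy version of the pairwise idea can be sketched in a few lines. This is NOT the Menne and Williams (2009) code: a single split-in-mean statistic stands in for their hierarchy of changepoint models, and three deterministic synthetic "neighbors" stand in for up to 40 real ones. It only shows why differencing against neighbors isolates an artificial step in the target.

```python
import math

def best_split(diff):
    """Index and size of the mean shift that best splits the series in two
    (a crude stand-in for the real hierarchy of changepoint models)."""
    best_i, best_shift = None, 0.0
    for i in range(12, len(diff) - 12):            # at least a year per side
        left = sum(diff[:i]) / i
        right = sum(diff[i:]) / (len(diff) - i)
        if abs(right - left) > abs(best_shift):
            best_i, best_shift = i, right - left
    return best_i, best_shift

# Synthetic monthly series: a shared seasonal signal, plus a +1.5 degC step
# in the target at month 120 (say, a station move). Neighbors have no step.
n = 240
shared = [0.5 * math.sin(2 * math.pi * t / 12) for t in range(n)]
target = [c + (1.5 if t >= 120 else 0.0) for t, c in enumerate(shared)]
neighbors = [[c + 0.1 * ((t * p) % 7 - 3) for t, c in enumerate(shared)]
             for p in (1, 2, 3)]

# Step 1: difference series between the target and each neighbor. The shared
# climate signal cancels, leaving the target's artificial step plus noise.
# Steps 2-3: the same changepoint date shows up in every difference series,
# implicating the one series they all share -- the target.
estimates = [best_split([t - v for t, v in zip(target, nb)]) for nb in neighbors]

# Step 4: the spread of the pairwise estimates bounds the step size; adjust
# the target by the median estimate from the detected date onward.
shifts = sorted(s for _, s in estimates)
step = shifts[len(shifts) // 2]
date = min(i for i, _ in estimates)
adjusted = [v - step if t >= date else v for t, v in enumerate(target)]
```

The point of the exercise: the adjustment falls out of agreement among many independent station pairs, not out of anyone's opinion about what the trend "should" be.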
and
Estimation of Missing Values
Following the homogenization process, estimates for missing data are calculated using a weighted average of values from highly correlated neighboring stations. The weights are determined using a procedure similar to the SHAP routine. This program, called FILNET, uses the results from the TOB and homogenization algorithms to obtain a more accurate estimate of the climatological relationship between stations. The FILNET program also estimates data across intervals in a station record where discontinuities occur in a short time interval, which prevents the reliable estimation of appropriate adjustments.
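The gap-filling step can also be sketched. The correlation-based weights and climatological offsets below are an illustrative stand-in for the actual SHAP/FILNET weighting scheme, which the quoted documentation does not spell out; only the shape of the idea (weighted average of adjusted neighbor values) is taken from the text.

```python
def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def fill_missing(target, neighbors):
    """Fill None entries in `target` with a correlation-weighted mean of
    neighbor values, each shifted by its mean offset from the target."""
    known = [i for i, v in enumerate(target) if v is not None]
    t_known = [target[i] for i in known]
    weights, offsets = [], []
    for nb in neighbors:
        nb_known = [nb[i] for i in known]
        weights.append(max(correlation(t_known, nb_known), 0.0))
        offsets.append(sum(t - v for t, v in zip(t_known, nb_known)) / len(known))
    filled = list(target)
    for i, v in enumerate(target):
        if v is None:
            filled[i] = (sum(w * (nb[i] + off)
                             for w, off, nb in zip(weights, offsets, neighbors))
                         / sum(weights))
    return filled

# A target with one missing month and two well-correlated neighbors:
target = [1.0, 2.0, None, 4.0, 5.0]
neighbors = [[1.5, 2.5, 3.5, 4.5, 5.5],   # runs 0.5 degC warmer
             [0.8, 1.8, 2.8, 3.8, 4.8]]   # runs 0.2 degC cooler
print(round(fill_missing(target, neighbors)[2], 2))
```

Each neighbor "votes" for the missing value after its systematic offset from the target is removed, so a consistently warmer or cooler neighbor does not drag the estimate.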
and your favorite
Urbanization Effects
In the original USHCN, the regression-based approach of Karl et al. (1988) was employed to account for urban heat islands. In contrast, no specific urban correction is applied in USHCN version 2 because the change-point detection algorithm effectively accounts for any "local" trend at any individual station. In other words, the impact of urbanization and other changes in land use is likely small in USHCN version 2. Figure 2 - the minimum temperature time series for Reno, Nevada - provides anecdotal evidence in this regard. In brief, the black line represents unadjusted data, and the blue line represents fully adjusted data. The unadjusted data clearly indicate that the station at Reno experienced both major step changes (e.g., a move from the city to the airport during the 1930s) and trend changes (e.g., a possible growing urban heat island beginning in the 1970s). In contrast, the fully adjusted (homogenized) data indicate that both the step-type changes and the trend changes have been effectively addressed through the change-point detection process used in USHCN version 2.
An example:
Figure 1. (a) Mean annual unadjusted and fully adjusted minimum temperatures at Reno, Nevada. Error bars indicating the magnitude of uncertainty (±1 standard error) were calculated via 100 Monte Carlo simulations that sampled within the range of the pairwise estimates for the magnitude of each inhomogeneity; (b) difference between minimum temperatures at Reno and the mean from its 10 nearest neighbors.
http://cdiac.ornl.gov/epubs/ndp/ushcn/monthly_doc.html#steps
What is it you think this (I assume stolen) email says about the adjustments to the various temperature records?
Physics equations that show CO2, water vapor, and methane are greenhouse gases
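For what it's worth, one such equation actually exists in simple closed form: the standard logarithmic fit for the radiative forcing from a CO2 increase, dF = 5.35 ln(C/C0) W/m2 (Myhre et al., 1998, the expression used in the IPCC reports). A quick sketch, using the conventional ~278 ppm preindustrial baseline:

```python
import math

def co2_forcing(c_ppm, c0_ppm=278.0):
    """Simplified logarithmic fit for CO2 radiative forcing (Myhre et al.,
    1998): dF = 5.35 * ln(C / C0), in W m^-2, relative to baseline C0."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Doubling CO2 from the preindustrial baseline:
print(round(co2_forcing(2 * 278.0), 2))  # ~3.71 W m^-2
```

The logarithm is why each successive doubling adds roughly the same forcing rather than an ever-growing amount.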
Science also shows that wood is combustible. And that the world is covered in trees. Uh-oh, global burning!!
And that passes your expert judgement in logical argumentation as a valid and meaningful response?
Abraham still has not proven that man made global warming is a fact. There is no consensus in the scientific community. Period.
Okay. Done. Now how does that modify the author's statement that Rossby waves are driven by the delta T between the tropics and the poles and that the reason the US is getting such shit weather is the warming of the Arctic?
"The outflow from the cell creates harmonic waves in the atmosphere known as Rossby waves. These ultra-long waves play an important role in determining the path of the jet stream, which travels within the transitional zone between the tropopause and the Ferrel cell. By acting as a heat sink, the Polar cell also balances the Hadley cell in the Earth’s energy equation."
and
Ferrel cell
The Ferrel cell, theorized by William Ferrel (1817–1891), is a secondary circulation feature, dependent for its existence upon the Hadley cell and the Polar cell. It behaves much as an atmospheric ball bearing between the Hadley cell and the Polar cell, and comes about as a result of the eddy circulations (the high and low pressure areas) of the mid-latitudes. For this reason it is sometimes known as the "zone of mixing." At its southern extent (in the Northern hemisphere), it overrides the Hadley cell, and at its northern extent, it overrides the Polar cell. Just as the Trade Winds can be found below the Hadley cell, the Westerlies can be found beneath the Ferrel cell. Thus, strong high pressure areas which divert the prevailing westerlies, such as a Siberian high (which could be considered an extension of the Arctic high), could be said to override the Ferrel cell, making it discontinuous.
While the Hadley and Polar cells are truly closed loops, the Ferrel cell is not, and the telling point is in the Westerlies, which are more formally known as "the Prevailing Westerlies." While the Trade Winds and the Polar Easterlies have nothing over which to prevail, their parent circulation cells having taken care of any competition they might have to face, the Westerlies are at the mercy of passing weather systems. While upper-level winds are essentially westerly, surface winds can vary sharply and abruptly in direction. A low moving polewards or a high moving equatorwards maintains or even accelerates a westerly flow; the local passage of a cold front may change that in a matter of minutes, and frequently does. A strong high moving polewards may bring easterly winds for days.
The base of the Ferrel cell is characterized by the movement of air masses, and the location of these air masses is influenced in part by the location of the jet stream, which acts as a collector for the air carried aloft by surface lows (a look at a weather map will show that surface lows follow the jet stream). The overall movement of surface air is from the 30th latitude to the 60th. However, the upper flow of the Ferrel cell is not well defined. This is in part because it is intermediary between the Hadley and Polar cells, with neither a strong heat source nor a strong cold sink to drive convection and, in part, because of the effects on the upper atmosphere of surface eddies, which act as destabilizing influences.
Atmospheric circulation - Wikipedia, the free encyclopedia
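The quoted passage doesn't give it, but the claim that Rossby-wave behavior depends on the strength of the westerlies can be put on a formula: the classic barotropic phase speed is c = U - beta/k^2, where beta is the meridional gradient of the Coriolis parameter. The numbers below are illustrative, not taken from the article:

```python
import math

OMEGA = 7.292e-5   # Earth's rotation rate, s^-1
A = 6.371e6        # Earth's radius, m

def rossby_phase_speed(u_mean, wavelength_m, lat_deg):
    """Barotropic Rossby wave phase speed, c = U - beta / k^2 (standard
    textbook result): the wave propagates westward relative to the mean
    westerly flow U."""
    beta = 2 * OMEGA * math.cos(math.radians(lat_deg)) / A
    k = 2 * math.pi / wavelength_m
    return u_mean - beta / k ** 2

# A wavenumber-5 wave at 50 N: wavelength is one fifth of that latitude circle.
wavelength = 2 * math.pi * A * math.cos(math.radians(50)) / 5
c = rossby_phase_speed(15.0, wavelength, 50.0)   # a few m/s eastward
```

If the mean westerly U weakens toward beta/k^2 (about 10 m/s for these numbers), the wave slows toward stationary - which is the mechanism being claimed when a reduced tropics-to-pole temperature difference is blamed for stuck weather patterns.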
You're going to have to convince me that I'm wrong when I suspect that prior to this thread you'd never heard of Rossby waves or Ferrel cells and that you looked this article up and pulled the term from the text. Now I hadn't either, but I made it pretty clear I was just passing on an article I'd read. This would be an attempt to falsify your creds - a failure to give credit where credit was due (Wikipedia) and I may just have to neg you for it. ;-)
Nothing. You seemed to be attempting to better your knowledge, so, as any good professor would, I pointed you in the direction of more information.
To reiterate: a cooling trend is likely to be "corrected" even if it is authentic, while a warming trend is likely to be accepted even if it is spurious. The homogenization process adds a large increase to the warming trend, but in a non-transparent way that is difficult to track down or remove.
Have you ever heard anyone say that AGW was a process that has taken place throughout the "overall history of the Earth"? No. AGW is a process that has been taking place since the beginning of the Industrial Revolution and became a serious issue in the 20th century. ... Once again, the climate is not a "geological activity". It does not operate on a geological scale.
The oldest ice cores ever taken go back 800,000 years.