Dec 10 2009

How Not To Create A Historic Global Temp Index

I was having this minor debate on the fallacies of alarmists’ science in the comment section of Air Vent and decided I needed to post on this because the infant climate science (and it is in its infancy) is making some fundamental mistakes.

The momentum that built up behind the man-made global warming fad (and it is nothing more than an unproven hypothesis surrounded by a silly fan club) has not allowed the basic approach to be tested or challenged. You had a movement build up around an idea, which launched the idea into ‘established fact’ before the idea was validated. It probably will never be validated because the methodology is flawed to its core – as I will explain in painful detail.

To create a global temperature index for the past 30 years – and then project that back to the 1880’s (when global temperature records began) and then project it back centuries before that – is not trivial. And in my opinion the current approach is just plain wrong.

I take this position as someone who works ‘in space’ – where we have complex and interrelated models of all sorts of physical processes. And yet we have to keep refining the models to fit the data to do what we do. Climate science naively (and ignorantly in my mind) does just the opposite; it keeps adjusting the data (for no good reason) until they get the result they want!

For this comparative exercise Al Gore, a genius in his own mind, provides the perfect analogy – gravity. Yes Al, it’s there. But we still can’t predict how a body will travel through the atmosphere or space to an accuracy that is stable beyond a few seconds (for the atmosphere) or days (for Earth orbits). Our window of certainty is not months, seasons, years, decades, centuries or millennia. And yet gravity is very well understood and simple mathematically.

Add to that the fact that our measurement systems for space systems blow away those being used by alarmists, who claim a science fiction level of accuracy in measurement and prediction. Maybe that is why they have a cult following instead of scientific proof?

In Earth’s orbit a satellite is pretty much free of atmosphere and can fly for 10-20 years. But it cannot keep its position (for GEO birds), and we cannot know its position (for GEO, MEO and LEO birds), without constantly taking measurements and correcting the models. (Sorry, geeked out there for a moment: LEO = Low Earth Orbit, MEO = Medium Earth Orbit, GEO = Geosynchronous Earth Orbit, which stays above one spot on Earth at very high altitude.)

While gravity is well understood, it is not the only factor working on the satellite’s flight path. After nearly half a century of exploring space we have unraveled some of the factors (sunshine pushing on the satellite, heat escaping causing a small thrust, an atmosphere which expands and contracts daily and seasonally, solar flares, etc). We cannot accurately model these beyond a few days. After about 7 days these forces build up enough change in the orbit in completely random ways that we have to remeasure the orbit and compute another prediction.

Gravity is simple, but we cannot predict out beyond a week with any accuracy.

For satellite orbits it would make no sense at all to ‘adjust’ the data to fit a curve as the alarmists do for temperature. If a data point is bad it is either consumed inside a sea of good data points or rejected because we have a sea of good data points to use. If there are sufficient data points you don’t adjust the data – bottom line. Either you have enough data to draw a conclusion or you don’t. You don’t make up data to fill your need either.

If the rocket scientists can only predict the path of an object orbiting the globe for 7 days, what sane person thinks a hodge-podge of randomly accurate and aging sensors around the globe can measure a global index, let alone predict the future or unravel the past? It cannot. But what ‘scientists’ do to the data to pretend they can is downright silly.

They make adjustments, homogenize stations, or fill in grids with pretend stations. A totally unscientific joke. The measurement is the measurement. It has a fixed accuracy and uncertainty. Each station has a unique accuracy and uncertainty due to its siting, technology and the accuracy with which its readings are made each day. (Geeked out again: if there is a drift in the time at which you read the sensor, or that sensor’s reference to UTC (the worldwide time reference) is unknown or dynamic, then you increase the error bars on the measurement.) If the sensor is sited wrong or has problems you extend the error bars. You DON’T adjust the data!

(Sorry, geeked out again. Error bars are the range in which the real-world value could be. If I measure an orbit to an accuracy of +/- 50 meters, then I know whatever number comes out of my system is not real. Reality is in that 100 meter range around the value. Statistically, all I know is the satellite is in that range; the value I compute only centers the box it can be found in.)

You cannot adjust data to remove error or increase accuracy on a single sensor! If you run a regular regime of calibrating the sensor against a known source, you can remove BIAS. But that is all. What we do with sensor nets, when we combine them, is remove some error by comparing measurements that overlap in time and space. But that again is something totally different from what the global temperature scientists think they are doing.
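Here is a minimal sketch of that point, with made-up numbers: a calibration run against a known reference can estimate and remove a sensor’s bias, but the random scatter on any single reading stays right where it was.

    import numpy as np

    # Toy illustration (invented numbers): calibration can remove a sensor's BIAS,
    # but the random noise on any single reading remains.
    rng = np.random.default_rng(0)

    true_temp = 15.0      # the real value we are trying to measure (deg C)
    bias      = 0.8       # systematic offset of this hypothetical sensor
    noise_sd  = 0.5       # random error of a single reading

    # Calibration: read a known reference many times to estimate the bias.
    reference = 10.0
    cal_readings = reference + bias + rng.normal(0.0, noise_sd, size=200)
    estimated_bias = cal_readings.mean() - reference

    # A field measurement, corrected for the estimated bias:
    raw_reading       = true_temp + bias + rng.normal(0.0, noise_sd)
    corrected_reading = raw_reading - estimated_bias

    print(f"estimated bias: {estimated_bias:+.2f} (true {bias:+.2f})")
    print(f"raw reading:       {raw_reading:.2f}")
    print(f"corrected reading: {corrected_reading:.2f}  (still carries ~{noise_sd} deg C of random error)")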

Another example: moving stations. When a temperature station is moved it should simply become a new station at that point in time, with a new set of siting errors (and a new accuracy, if the sensor is upgraded). It has a different time window than its previous incarnation – it is a new data set. When I see crap like this I realize these people are just not up to this kind of complex analysis.

Before:

After:

You don’t ‘homogenize’ neighboring stations into a mythical (and fictional) virtual station. That is just clueless! And there is no need to. When that happens, start a new data set. Those stations measured real temperatures, as shown in the top graph. They are three independent data sets with fixed attributes for the locale. Whatever that mess is in the bottom graph, it is nothing more than shoddy modeling. It destroys the historic record and replaces it with someone’s poor mathematical skills or scientific understanding.
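The bookkeeping I am arguing for is dead simple. Here is a hypothetical sketch (my own illustrative structure, not any agency’s actual format) of what ‘a move closes one data set and opens another’ looks like:

    from dataclasses import dataclass, field

    # Toy sketch: a station move closes one data set and opens a new one with its
    # own error bar, instead of 'adjusting' the old readings. All values invented.

    @dataclass
    class StationSeries:
        station_id: str
        siting_uncertainty_c: float                   # error bar from siting/instrument, deg C
        readings: list = field(default_factory=list)  # (date, temp_c) tuples

    # Pre-move record: closed at the move date and never rewritten.
    alpha_old = StationSeries("ALPHA-1", siting_uncertainty_c=0.7)
    alpha_old.readings.append(("1957-06-01", 14.2))

    # Post-move record: a brand-new, independent data set (sensor upgraded, so a
    # tighter error bar), not a 'homogenized' continuation of the old one.
    alpha_new = StationSeries("ALPHA-2", siting_uncertainty_c=0.4)
    alpha_new.readings.append(("1957-06-02", 13.9))

    print(alpha_old, alpha_new, sep="\n")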

I mean, think of what that graph says in my world. If I had measurements of the Moon’s position in the night sky from these three points I could reproduce the Moon’s orbit. But what happens in that second ‘adjusted’ graph is silly. I would be changing the measured position of the Moon for two ‘adjusted’ stations to make it closer to the first station – while not moving the two stations physically! They would produce a lunar vector similar to the first station’s, but did I really move the Moon? Of course not; all I did was insert a lot of error. Now my calculation of the Moon’s position over that period does not reflect reality (or the established gravitational model). The question is, does it fit someone’s half-cocked new theory of gravity – yet unproven!

Basically, what alarmists needed to do was not adjust data; they needed to create a thermal atmosphere model which would take into account siting characteristics both local and large-scale. This would include distance from large bodies of water, altitude, latitude, etc. A three-dimensional model that would explain why various stations have their unique siting profiles and temperature records. It would explain why temperatures near oceans fluctuate less than stations inland 100-200 miles. It would show how a global average increase of 1°C would result in a 0.6°C increase at high latitudes or altitudes. It would EXPLAIN the data variations in the measurements.
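Just to make the shape of that idea concrete, here is a toy sketch (entirely synthetic numbers and an assumed linear relationship; not a real climate model by any stretch) of fitting station temperatures against siting factors and then judging the fit on stations held out of the training set:

    import numpy as np

    # Entirely synthetic toy of the siting-model idea: explain each station's
    # temperature from latitude, altitude and distance to the coast, then test
    # the fit on stations held out of the training set.
    rng = np.random.default_rng(1)

    n = 60
    lat_deg  = rng.uniform(25, 65, n)
    alt_km   = rng.uniform(0.0, 2.5, n)
    coast_km = rng.uniform(0, 400, n)

    # Fake 'observed' annual means generated from an assumed relationship plus noise.
    temp_c = 30 - 0.5 * lat_deg - 6.0 * alt_km + 0.004 * coast_km + rng.normal(0, 0.8, n)

    X = np.column_stack([np.ones(n), lat_deg, alt_km, coast_km])
    train, test = slice(0, 45), slice(45, None)

    coef, *_ = np.linalg.lstsq(X[train], temp_c[train], rcond=None)
    pred = X[test] @ coef
    rms_error = np.sqrt(np.mean((pred - temp_c[test]) ** 2))
    print(f"held-out RMS error: {rms_error:.2f} deg C")  # how well siting alone explains the data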

But we don’t have this model. Alarmists cannot explain with accuracy why stations 10 miles apart show different temperature profiles each and every day of the year. So they pretend to know how to ‘adjust’ the data and their groupies applaud them for their brilliance. Yet the result, like my Moon example, is they simply lost sight of reality.

Another irresponsible gimmick is creating mythical stations in grids without measurements. As we all know, the temperature for a town or city 20 miles away can be totally different from that of our home town. Just bring up a local weather map. As weather fronts move through, the dynamics over a region are dramatic. These changes happen all year, at any time of day. 20 miles down the road things can be totally different.

Yet the CRU and others create fictional stations 750 kilometers away from the nearest data point – as if that makes any sense at all. There is no data in these regions – don’t make up data and call it truth! No data means no measurement.

In my world we can interpolate a trend forward to fill measurement gaps. For example we don’t measure each point on the orbit, we measure a couple of times a day to get a set of points on the orbit from which we can derive an accurate orbit curve. Because gravity is so damn simple we have incredibly high confidence in those computed positions in the measurement gaps. But as I said, they decay over time (5-7 days). If we don’t remeasure, the errors increase with time.
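To make that concrete, here is a toy sketch (the numbers are illustrative, not our actual tolerances) of why an orbit predict has a shelf life:

    # Toy sketch of why we re-measure orbits: predicted position error grows with
    # time since the last fit, so past a tolerance we take new measurements and
    # recompute rather than trust the old predict. All numbers are illustrative.

    FIT_ERROR_M  = 50.0      # uncertainty right after an orbit determination
    GROWTH_M_DAY = 150.0     # assumed growth rate from unmodeled forces
    TOLERANCE_M  = 1000.0    # error we are willing to live with

    def predicted_error(days_since_fit: float) -> float:
        return FIT_ERROR_M + GROWTH_M_DAY * days_since_fit

    for day in range(0, 11):
        err = predicted_error(day)
        flag = "  <-- re-measure and re-fit" if err > TOLERANCE_M else ""
        print(f"day {day:2d}: +/- {err:6.0f} m{flag}")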

Global climate is nowhere near as simple as gravity. Its predictability decays rapidly over distance and time, just as ballistic flight through the atmosphere is unpredictable over distance and time, no matter how predictable gravity is. If they wanted to validate that 3-D model I proposed, they would predict temperatures in regions without measurements and then go measure them to see if their model was right. You don’t mix models and raw data – that is just wrong (though that is the essence of Michael Mann’s “Nature Trick”).

Another disturbing problem with climate science is identification of error and uncertainty. In the alarmists’ world there is no degradation over distance or time – which makes their results pure science fiction. If they had a reviewed, verified and defendable error budget they could move from fiction to science. They would also understand why their conclusions are standing on seriously shaky ground.

OK, going full geek here. An error budget shows how much error is added to the final computed number at each stage of its processing, starting from the base measurement. For the global warming problem this budget covers everything from the point a temperature measurement is taken at a station to the point a global annual index is derived, and it must contain the following error steps:

  1. Measurement error: all the errors associated with taking the raw measurement. This includes the sensor accuracy and biases (if unknown or unmeasured, these become noise), siting-induced errors, and time-of-measurement induced errors (one must take the measurement at the same time every day, to within a certain tolerance, to create a consistent historic or annual record).
  2. Local geographic error: A sensor measurement like temperature is only good for a certain distance. The farther you move from it the more the accuracy degrades as the error increases. No one has demonstrated the distance a single station’s measurement can be considered valid. At this stage we have a raw station temperature set from specified times of day.
  3. Station integration error: when you take data from two or more stations to create a regional index, you must integrate the first two error sources described above and carry that to the integrated station level for a region or grid. In some systems combining sensors can increase accuracy. Land temperature sensor nets are not one of those kinds of systems. There are too many factors due to siting and distance (the temp decay problem) to increase accuracy. To do that you would need to have sensors located geographically close (under 5 miles I would estimate) to actually remove sensor and siting errors. At this stage we have a local regional data set (more than one station).
  4. Day-to-month integration errors: Temperatures are taken daily at fixed times and then integrated to make a daytime and nighttime index. Then these daily indices are integrated into monthly indices. The error from the daily computations must then be integrated and added onto the monthly index.
  5. Month-to-year integration errors: The AGW alarmists need to create a historic record, so they look at a yearly index (CRU actually looks at the 4 seasons first, then integrates). Whatever the methodology, there will be additional error introduced to create an annual index for a geographic region. At this stage we have a local regional data set for a single year.
  6. Large geographic integration errors: Finally, you integrate mid sized regions from step 3 above into data sets for countries or hemispheres or the globe. Again, we are compounding the errors from the previous steps – some offset, some don’t. They all have to be accounted for – no hand waving!

Each local region going into step 3 has a very unique set of errors due to the unique nature of the errors defined in steps 1 & 2. From step 3 on you have a homogenized set of local regions, which have errors integrated in a consistent manner as we move from daily measurements to monthly and annual (steps 4 & 5). Finally we have a consistent method to capture additional errors as we integrate up to cover the globe (step 6).
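Here is a bare-bones sketch of that roll-up, with invented numbers. Independent error terms combine in quadrature (root-sum-square) at each stage; correlated or systematic terms would have to be carried separately, which is exactly the bookkeeping the steps above force you to write down.

    import math

    # Minimal error-budget roll-up sketch. Every number is invented for
    # illustration; a real budget would be derived term by term.

    def rss(*terms_c):
        """Combine independent error terms in quadrature (deg C)."""
        return math.sqrt(sum(t * t for t in terms_c))

    # Steps 1-2: a single station's daily reading
    sensor      = 0.3   # instrument accuracy
    siting      = 0.5   # exposure / shelter / urban effects
    time_of_obs = 0.2   # drift in observation time
    distance    = 0.6   # how far the reading is assumed to represent
    station_daily = rss(sensor, siting, time_of_obs, distance)

    # Steps 3-6: each integration stage adds its own term to what it inherits
    regional = rss(station_daily, 0.4)   # station-to-grid combination error
    monthly  = rss(regional, 0.2)        # day-to-month integration error
    annual   = rss(monthly, 0.15)        # month-to-year integration error
    global_  = rss(annual, 0.5)          # region-to-globe integration error

    print(f"station daily : +/- {station_daily:.2f} C")
    print(f"global annual : +/- {global_:.2f} C")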

A defendable error budget is an obvious requirement for any number spewed from any alarmist. Without it the numbers mean nothing. In my business we use these budgets to fly rockets (atmosphere) and spacecraft safely. We use them to understand when we need to remeasure and recompute a new predict. If space programs did not have a handle on this we could not fly through Al Gore’s gravity field and Earth’s atmosphere. The fact is, during launch and ascent, because the error in position can increase so quickly, we measure and adjust the guidance at incredible rates to make it into orbit safely.

We have to. We cannot adjust the data, we have to adjust to the data.

Now what I presented above is just the errors in making a measurement today for one year. What happens when you go back in time? Well you have to recompute the error budget for each station for each year. What you should see (if done correctly) is rapidly increasing error bars as the technological accuracy is lost as we go back in time. The errors in step 1 would start to increase by orders of magnitude.
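What a correctly recomputed budget should look like, as a quick sketch (the per-era errors here are hypothetical, just to show the shape):

    import math

    # Illustrative sketch: the step-1 measurement/siting error grows going back
    # in time, so the error bar on the derived annual index must grow with it.
    # All values are hypothetical.

    step1_error_by_era = {          # assumed per-reading error, deg C
        2000: 0.3,   # automated electronic sensors
        1950: 0.7,   # liquid-in-glass thermometers, mixed siting
        1900: 1.5,   # sparse, manually read, poorly documented
        1880: 2.5,   # earliest records
    }

    INTEGRATION_ERROR = 0.5         # assumed fixed error added by steps 3-6

    for year, e1 in step1_error_by_era.items():
        annual = math.sqrt(e1 ** 2 + INTEGRATION_ERROR ** 2)
        print(f"{year}: annual index +/- {annual:.2f} C")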

But if you look at the silly claims of NCDC, GISS and CRU you see very small changes in uncertainty going back in time  – proof positive they screwed up their error budget. One of my first posts on Climategate was on errors in climate estimates over time, and I used space exploration again as the example.

I used the accuracy known as ‘image resolution’ as the example everyone can relate to (more pixels, more detail, less error or blur). I used two pictures of Mars to demonstrate the state-of-the-art capabilities of humankind separated by ~50 years. First was an image taken in 1956 from the Mt Wilson Observatory:

Second was taken by the Hubble Space Telescope in 2001.

We can all see the effect going back in time has on accuracy and error. In 1880, when the global temperature record began, humans were drawing Mars, not photographing it.

The CRU data dump made public a very interesting document. It was an early attempt at an error budget, though it does not show the steps, just their initial estimate of a bottom line. Interestingly enough, they computed it for 1969. The following graph is from that document and proves (per CRU) that the current temperature reconstructions are way too imprecise to confirm the warming claims of the IPCC and other alarmists:

The title of this graph indicates this is the CRU-computed sampling (measurement) error in °C for 1969. It clearly shows that much of the globe’s 1969 temperatures carry an uncertainty of +/- 1°C or more. Which means that until our current temperature rises well above 1°C over that computed for 1969, we are statistically experiencing the same temps as back then. And we know these error estimates will have to grow as we go back another 80 years to the 1880’s, let alone even farther back in time.
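Put another way, here is the trivial check that should be applied before claiming a detectable rise (the numbers are purely illustrative, not CRU’s):

    import math

    # A measured difference only means something if it is larger than the
    # combined uncertainty of the two values being compared. Numbers invented.

    t_1969, err_1969 = 14.0, 1.0     # value and +/- uncertainty, deg C (hypothetical)
    t_now,  err_now  = 14.6, 0.5

    difference   = t_now - t_1969
    combined_err = math.sqrt(err_1969 ** 2 + err_now ** 2)

    if abs(difference) > combined_err:
        print(f"{difference:+.2f} C exceeds +/- {combined_err:.2f} C: distinguishable")
    else:
        print(f"{difference:+.2f} C is inside +/- {combined_err:.2f} C: statistically the same")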

I don’t even think the CRU estimates are right or complete, but I do know they alone disprove the IPCC claim that there has been a 0.8°C rise in temperature over the last century, mostly due to human activities. The data cannot make that determination.

Prior to 1880 there are no real global temperature records, so scientists tried to find proxies. One good proxy is ice cores, which capture the chemical composition of the snow and ice going back thousands and thousands of years. Chemical signatures are very accurately tied to temperature since these are physical processes. No surprise but the ice cores show no significant warming today. Instead, these ice cores show many warmer periods in the history of humankind. Update: WUWT has more ice core perspective. – end update

Therefore Mann and Jones and other alarmists went to a much less reliable measure of historic temperature – tree rings. Tree rings are affected by a lot of factors, the least of which is temperature (after a certain minimum has been attained to activate the tree’s growth processes). Tree growth depends on sunshine, nutrients, water and the number of days above the optimal temperature. A tree ring should show the same growth under 30 days of 40°F temps with plenty of moisture (afternoon showers) and nutrients as it would under 30 days of 55°F temps and the same conditions. Trees are not thermometers.

Using a living organism to measure temperature is dodgy compared to the physical chemistry used with ice cores. The error bars on a tree ring mapped to a temperature range (and it can only be mapped to a range, not a value) are huge. But the alarmists don’t do proper science; they run statistics until they get the answer they like, then throw out the error bars as if they are meaningless. There is no way for tree rings to define any historic temperature value. Therefore claims that the MWP or Roman Period were a degree or two warmer or colder based on tree rings are all bunk.
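If you want to see how little a ring width pins down, here is a toy sketch with purely synthetic data; even after a calibration fit, the scatter left over puts error bars of several degrees on any reconstructed value:

    import numpy as np

    # Synthetic toy: calibrate ring-width index against known temperatures, then
    # note how wide the leftover scatter makes any reconstructed temperature.
    rng = np.random.default_rng(2)

    cal_temp = rng.uniform(8, 16, 40)                      # calibration-era temps, deg C
    ring_idx = 0.1 * cal_temp + rng.normal(0, 0.4, 40)     # growth driven by much besides temp

    slope, intercept = np.polyfit(ring_idx, cal_temp, 1)   # the 'reconstruction' fit
    residual_sd = np.std(cal_temp - (slope * ring_idx + intercept))

    old_ring = 1.3                                         # a hypothetical pre-instrument ring
    estimate = slope * old_ring + intercept
    print(f"reconstructed temp: {estimate:.1f} C, but only within roughly +/- {2 * residual_sd:.1f} C")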

This post is too long and too geeky already, but I need to note that there are few people in the world capable of discerning which scientific argument is more sound. No journalist or politician can discern whether I am right or wrong. Al, give it up baby. You did not invent the internet (I know those who did) and you have a 3rd grade grasp of science (and I have two 4th graders who can prove it). I suggest you stay away from any debates outside talking to journalists. They never know when you drop one of your classic whoppers of ignorance.

The sad fact is the science behind man-made global warming is not good science. It is rather pathetic actually. I work with premier scientists and in fact review their missions for feasibility to return the results advertised. I would fail this mess without a second thought.

In addition, you cannot leave the verification of man-made global warming (AGW) in the hands of those whose careers and credibility rest on AGW being proven true. When you do, you get those questionable ‘adjustments’ that turn raw temp data (processed to at least step 5 in the error budget above) into something completely different. For example, here is my version of a classic graph now making its rounds on the internet:

What it shows, via the blue dots and lines, is the raw temperature data for Northern Australia. This overlays perfectly with what the UN IPCC’s Global Climate Models predict the temperature record for the region would be without AGW (the blue region in the underlying graph). The red dots and lines are what is produced by the alarmists’ ‘adjustments’, and unsurprisingly these line up with the Global Climate Models’ predictions if there is AGW (the reddish region).

Alarmists adjust temp data that magically proves alarmists’ theories, based on alarmists’ models. Impressed? I’m not. I tend more towards disgusted.

The science of global warming is a mess. They have no error budget that proves they can detect the warming they say they have detected. Their tree data is applied wrong by assuming a temp value when all you can estimate is a range (and the recent divergence of tree rings from current temps just proves trees are lousy indicators of temperature anyway). The alarmists have made all sorts of bogus and indefensible site adjustments and station combinations, while regularly making up stations out of thin air to alter (or hide) the real temperature record.

Instead of explaining the data, they adjust the data to meet their explanations. Global climate research has not made it to the professional level of scientific endeavor we see in more established areas of science. If their science was so settled, the supporters could answer these challenges without lifting a finger. But they cannot; instead they play PR games and smear their opponents. Houston, they have a problem.


24 Responses to “How Not To Create A Historic Global Temp Index”

  1. crosspatch says:

    I don’t consider myself a “denier”, I consider myself a “skeptic”. I don’t deny that Earth warmed from 1976 to the early 2000’s. That is obvious. But I am skeptical of the claims by some labs that the warming has been as extreme as claimed.

    Take the Brisbane example in the previous posting. You have the raw data that show cooling. You have adjusted data that show warming. The crux of the entire skeptic “argument” can be made with one simple question:

    “May be please see the adjustment process?”

    The answer is always “no”. That they refuse to show their work and allow others to replicate their results goes against a very fundamental ethic of science.

    But it is when this is placed in a larger perspective that one becomes REALLY skeptical.

    For example, when one looks at this graph of Greenland ice cap temperatures over the past 5000 years, one sees a rather startling trend since about 0 AD. Greenland icecap temperatures have been generally trending downwards for about the last 2000 years. Now that graph ends at around 1900 and temperatures have warmed about another 0.5 degrees since the end of that graph, but even so, if you draw a line at -31 degrees, we are still cooler than it has been for the past 5000 years.

    But wait, it gets worse.

    If you look at this graph of the past 10,000 years or so, you can see that for most of that time the temperatures have been well above -31 degrees. We are STILL in a relatively cold period.

    So the basis of the skepticism is:

    We are not in a period of unusual warmth in a historical sense. Temperatures have risen farther and faster in the relatively recent past (on a geological timescale). There is apparently nothing unusual about today’s temperature value or behavior of its change.

    The people who would alarm us about CO2 and tell us that we are causing the warming produce data that is quite different from the actual temperature readings. They refuse to explain how they arrived at those figures. They issue ad hominem attacks and threats to anyone who questions them.

    But if you notice the trend since about 0 AD, temperatures are falling overall at a quite rapid rate. While there are periods of warming in the overall trend down, we see that each warm period tops out a little cooler than the one before.

    The message of the raw data is actually quite scary for a different reason. We might actually already be well into the slide down toward re-glaciation and it may have started 2000 years ago at around 0 AD. Even with the modern warming, we are in one of the coldest periods of the current interglacial.

    Glaciation or “ice age” is the NORMAL condition over the past few million years. We spend about 100,000 years in glacial conditions and a relatively short 10,000 years or so in warmer “interglacial” conditions such as we enjoy today. Time is just about up.

    So taken in context with this interglacial getting “long in the tooth”, temperatures showing a clear accelerated cooling trend since 0 AD, current temperatures being some of the coolest of the current interglacial period despite the modern warming cycle … it doesn’t look good.

    Chances are better that the next several hundred years will see continued cooling than they will see continued warming if the past record is any indication of the future (and the past record is rather cyclical).

    We have every reason to be “skeptical” of their claims. They refuse to show how they arrive at their doomsday scenario, the raw data don’t reflect any such thing, and the longer term raw data show rather dramatic cooling on the millennial scale.

    They can put an end to the skepticism once and for all. Simply release their data and methods. And still they refuse.

    Why?

  2. Paul_In_Houston says:

    FYI – Sir: I just recommended this post in one of my own ( A True Professional’s Perspective on Climate Change Data… )

    It’s quite possible that this could eventually net you two or three new readers.

    You’re quite welcome. 🙂

  3. AJStrata says:

    Hey thanks Paul!

  4. crosspatch says:

    I don’t know what it is about this blog but I can never seem to make a comment without a gross typo:

    “May be please see the adjustment process?”

    Should be:

    “May we please see the adjustment process?”

    Sheesh.

  5. Neo says:

    They grow these models as “genetic algorithms”

    There was a story a while back about their use on Wall Street. What was so interesting, or maybe not, was that in the end the developed algorithms worked against the training data they were fed, but nobody knew what they were doing. When the algorithm said buy Apple, they did .. and usually they made money. What made the “genetic algorithm” flag Apple for a particular day? … not a clue.
    They had no idea what data was important and what data was totally ignored. This means that you have to pour in the “kitchen sink” and hope the “trigger mechanism” was in there somewhere. It becomes science by induction.

  6. Paul_In_Houston says:

    # crosspatch on 10 Dec 2009 at 1:21 pm

    I don’t know what it is about this blog but I can never seem to make a comment without a gross typo:.

    Sir: I doubt it’s the blog.

    Just the perversity of the universe; the same one that makes the typo glaringly obvious just after you hit the “Submit Comment” button.

    (Were I a conspiratorialist, I would suspect AJ of setting the button to introduce random typographical errors, just to relieve the boredom. But that would conflict with his reverence for data, so that cannot be possible.) 🙂

  7. Redteam says:

    Very well written, good details as usual. Will it change minds? Probably not. Why not? You can’t get the AGW crowd into the room to hear the discussion or arguments.

    You’ve put together an excellent ‘scientific’ argument, but now management is needed, and written words that people are free to read or disregard usually don’t change too many “made up” minds.

    I spent the majority of my career as a manager. A manager usually has to decide between different positions. Many times he can get the differing parties into the same room and let them present their evidence, much like a courtroom, and then using his intelligence, reasoning ability and the ‘true’ facts, he can come up with the truth. If he’s ‘usually wrong’ he won’t stay a manager very long.

    In this case, you can’t require the ‘warmers’ to come into the room, or present their evidence and methods and they are not required to abide by what the truth really is.

    I don’t even need all this ‘argument’; there are too many places to get historical data, such as ice core temps, sea temps, etc., and if you view those graphs, no argument is necessary.
    But AJ, keep up the great work on this subject.

    NEO:
    Interesting. I recall many years ago when computer controls were being put into operation in most manufacturing processes. One process that was resistant to ‘scientific’ methods was the temp control of a lime kiln.
    Attempts to control it using just scientifically derived algorithms would not work 100% of the time. So an alternate method called ‘fuzzy logic’ was tried. Simply put, what it did was use a combination of algorithms and ‘human’ intelligence. If the temp began to deviate using the algorithms, then the controller (computer) simply asked itself “what would the operator do under this circumstance?” and that is the control that would be applied. The combination of methods was very satisfactory for that day and time (back in the early ’80s).

    That sounds much like the logic that is/or was used to buy Apple stock. There was no ‘scientific’ reason to do it, but what the heck, it works.

  8. […] This post was mentioned on Twitter by Malcolm Bell, AJ Strata. AJ Strata said: new: How Not To Create A Historic Global Temp Index http://strata-sphere.com/blog/index.php/archives/11824 […]

  9. blueguitarbob says:

    Another irresponsible gimmick is creating mythical stations in grids without measurements.

    The technique of infilling missing data is not irresponsible by itself; however, accepted statistical methods require a great deal of care and transparency with the treatment of missing data, and these standards do not seem to have been followed in any of the temperature data sets.

    Most statistical methods fail when applied to data sets with missing data points. Therefore, it is extremely common – even routine – to infill missing data using a variety of accepted techniques. Without filling in the missing data, the statistical analysis cannot be performed, so this is a compromise in the name of expediency. Decades of papers and books have analyzed the various techniques for infilling missing data, so researchers almost always choose one of these accepted methods. These methods have one thing in common, though: they become more suspect when the proportion of missing data increases. If more than 10% of your data is missing and needs to be infilled, then you need to step cautiously. More than 20%, and you need to hang a red flag on your analysis to warn other researchers. Those are general rules of thumb that may change with the discipline, but you get the idea.
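    As a toy illustration of those rules of thumb (the data and cutoffs below are just the generic thresholds mentioned above, nothing discipline-specific):

        # Hypothetical sketch of the missing-data rules of thumb described above.
        def missing_data_flag(values):
            n_missing = sum(v is None for v in values)
            frac = n_missing / len(values)
            if frac > 0.20:
                return f"{frac:.0%} missing: red-flag the analysis"
            if frac > 0.10:
                return f"{frac:.0%} missing: infill with caution and document the method"
            return f"{frac:.0%} missing: routine infilling is defensible"

        print(missing_data_flag([14.1, None, 13.8, None, None, 14.5, 13.9, 14.0, None, 14.2]))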

    In the case of the temperature data sets, the infilling of grids that have missing instrumental measurements is an accepted technique among climate scientists; however, I don’t know if they are following accepted statistical practice from the larger statistical community. My research has never involved that type of data structure, so I can’t say for sure. It seems that the technique was born out of expediency. They needed to fill in the missing data, or they would not have been able to use the gridded data sets in their climate models. The technique needs to be validated, though; I have not found where they have done that.

    Moreover, the amount of missing data should be more transparently explained. It seems that a great deal of infilling is taking place, and the missing data is not randomly distributed in the data set. The effects of that much missing data should be analyzed, and I have not found where they have done that either.

    The techniques used for infilling the gridded instrumental data may be perfectly valid. They should, however, give any statistician pause. Skepticism is definitely warranted until these techniques are addressed.

  10. blueguitarbob says:

    Another disturbing problem with climate science is identification of error and uncertainty

    A shorter comment, this time… this is an “engineer v. scientist” thing. As an ex-engineer (instrumentation design, 10 yrs) and a social scientist/statistician, I can testify that engineers do end-to-end error analyses, while scientists do not. They just don’t. I do not understand why.

    It may have something to do with the fact that engineers are tasked with building things that have the potential to kill people when they fail. Scientists simply suffer professional embarrassment when they fail. The greater stakes of failure forces engineers to be more cautious and aware of error. I find that scientists tend to be completely unaware of the compounding nature of error and uncertainty in complex systems. It’s a blind spot, and not just for climate science.

  11. Bishop Hill says:

    The map of the error budget is fascinating. Note however that this is only for the land record (CRUTEM). Presumably the error bounds on the sea surface temps are wider still.

  12. Paul Dennis says:

    AJ this is an excellent dissection of much that is wrong with the global instrumental and proxy temperature record. Well done.

    I’m an isotope geochemist and have done, and continue to do, research on palaeoclimates. I’m constantly amazed that there is such a lack of rigour when assessing and propagating errors in measurements. I constantly refer students and colleagues to the NIST guide on estimating errors in measurements and how to propagate these errors.

    It is a truism that many palaeoscientists have very little background in the physical sciences per se and come from what I would call the soft sciences. They understand some statistics but have very little appreciation of measurement errors.

  13. AJStrata says:

    Thanks Paul!

    Yes, these people show all the signs of a bush league outfit. And now that the Emperors are seen to have no clothes, it is going to be a very tough tumble for them.

  14. AJStrata says:

    Bishop – many thanks for stopping in. Yes, that document was a bit of an eye opener for me.

  15. Researcher says:

    Global Warming is not science. An engineer must prove his science since anything he builds must work as designed. But there is no way to verify anything being discussed as Global Warming Science.

    Think of the people who put forth these methods. What were they thinking? How does a scientist act this way? Why would someone change temperature proxies in the middle of a project, or substitute actual recorded temperatures when the proxies being used fail to follow current recorded temperatures?

    I suggest they have a problem with a simple phenomenon discovered when it caused mental breaks for office workers. The cubicle was designed to deal with it by 1968. There it is believed to cause a harmless temporary episode.

    Do you think these people use Cubicle Level Protection where they work with computers? I would bet they are completely unaware of Subliminal Distraction and the problems it can cause.

    What else would explain how they thought they could get away with this farce?

    VisionAndPsychosis.Net has other cases including mental problems on scientific expeditions going back a hundred years. There was a full psychotic break on Soyuz-21. The Cosmonaut recovered when he returned to Earth, away from the too-small space capsule and his fellow Cosmonaut’s movement in his peripheral vision.

  16. fader says:

    If it looks like a turd and smells like a turd, you can’t pick out the peanuts and make it a rose. That message needs to be heard by the so-called scientists preaching for the cult of AGW.

    Excellent blog — one I can absolutely agree with as I have first hand experience with much of what you have discussed. While my concerns are with electron flow and not spacecraft dynamics, I know to be true your examples relating to space. Been there, done that (at least six times, not counting anomalies!)

    What a wonderfully clear explanation and comparison. Your blog is now on my favorites list!

  17. Paul_In_Houston says:

    AJ, this may well be one of the most read and most popular posts you’ve written.

    What’s my data (of course, it may need “adjusting”:-) )?

    I have recently started a very modest blog of my own, that only had a handful (as in literally the number of fingers on your hands) of readers.

    Way up near the top of this post’s comments, I commented that I had recommended this post in a post of my own. My post quoted a very small part of yours and basically boiled down to “You gotta see this” (meaning your post).

    I have gotten hundreds of hits on my site, from all over the world, and over 95% of them show as coming from your post.

    So, if actual commenters are only a small fraction of the number of people who read this post, the readership must be HUGE.

    FWIW

  18. Allan Gay says:

    This retired computer programmer finds your rigor refreshing and your clarity delightful. Here is science I can respect.
    Thank you.
    The three thermometers in my house have not enabled me to compute the temperature of Northamptonshire.

  19. M. Simon says:

    One critically important point you leave out is that the land temperature measurements (until electronic instruments came along) are mostly min/max which are added together and divided by 2 to come up with an “average” daily temperature for a site. The error from this has got to be huge.

    For the less technical (this is a hypothetical for illustration) suppose a site has a reading of 50 deg F for 23 hours and 80F for one hour. Is the actual average 65 deg F or 51.25 deg F?

    With radiation going up as T^4 this is going to cause some errors in the radiation if it is computed from such numbers.
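    For anyone who wants to check the arithmetic, a quick sketch using the hypothetical numbers above:

        # Back-of-the-envelope check of the hypothetical above (illustrative numbers only).
        hours = [50.0] * 23 + [80.0]                  # 23 hours at 50 F, one hour at 80 F

        minmax_avg   = (min(hours) + max(hours)) / 2  # 65.0 F
        timeweighted = sum(hours) / len(hours)        # 51.25 F
        print(minmax_avg, timeweighted)

        # And since radiation scales as T^4 (in absolute temperature), averaging the
        # temperatures first is not the same as averaging the radiated power.
        def rankine(f):
            return f + 459.67

        power_of_minmax_avg = rankine(minmax_avg) ** 4
        avg_of_power        = sum(rankine(t) ** 4 for t in hours) / len(hours)
        print(f"ratio: {power_of_minmax_avg / avg_of_power:.3f}")  # not 1.0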

    But the models fit the “adjusted” averaged data.

    “give me four adjustable parameters, and I can fit an elephant, give me five, and I can fit the tail”

  20. M. Simon says:

    AJ.,

    I have a bit on adjusting temps and the min/max problem at:

    http://powerandcontrol.blogspot.com/2009/12/hope-and-change-data.html

    Which is why I knew the numbers without calculating them this time.

    BTW engineers are not scrupulously honest. As a group they are more honest than most and have come up with social institutions (design reviews) to raise the level of honesty and critical thinking.

    What science needs is not better peer review (no obvious mistakes or unwarranted deviation from accepted knowledge) but something more akin to design review. In theory the need for replication is the check on the system. But it takes time.

    Follow the whole Cold Fusion Fiasco:

    1. Eureka!!!!! It works. Free Energy!!!!!!!!!!!!!!!!
    2. It is a fraud – no one can replicate it.
    3. There is something there. No one knows what it is for sure. We have a hypothesis or thirty.

    We might be 20 years farther along in answering the question “is it big time useful or just interesting or is it good for something we previously have not thought of”.

    BTW I remember when Anthony Watts started doing his Stevenson Screen experiments and that morphed into SurfaceStations. Along with McIntyre they have done excellent work first on the analysis (McIntyre) and then on the underlying data (Watts).

    There was a long discussion at Climate Audit about engineers vs Climate “Scientists”.