This is a wee story of days gone by, imparted to me in the
seventies.
Once upon a time there was a small laboratory on the edge of
the great unknown. Not too far from Birmingham, to be more precise. This little
laboratory carried out basic tests on waste water such as sewage effluent,
including a test for ammonia.
A simple chemical test was used in which a colour develops in
the test cell, the intensity of the colour indicating how much ammonia is present in the water sample. The amount of light
absorbed by the coloured solution allows the ammonia concentration to be read off a previously
prepared calibration graph.
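For anyone curious about what such a calibration involves, here is a minimal sketch, assuming a straight-line (Beer-Lambert style) relationship between absorbance and concentration; the standards and readings below are hypothetical, purely for illustration.

```python
# A minimal sketch of a colorimetric calibration, assuming a linear
# relationship between absorbance and ammonia concentration.
# All numbers here are made up for illustration only.
import numpy as np

# Absorbance measured for standards of known ammonia concentration
standard_conc = np.array([0.0, 1.0, 2.0, 4.0, 8.0])       # mg/l
standard_abs  = np.array([0.00, 0.11, 0.22, 0.45, 0.89])  # absorbance units

# Fit the calibration line: absorbance = slope * concentration + intercept
slope, intercept = np.polyfit(standard_conc, standard_abs, 1)

def concentration_from_absorbance(absorbance):
    """Invert the calibration line to read concentration off the 'graph'."""
    return (absorbance - intercept) / slope

# A sample reading of 0.33 absorbance corresponds to roughly 3 mg/l here
print(concentration_from_absorbance(0.33))
```

The relevance to the story: if the slope of the calibration line is wrong by a factor of two, every concentration read off it is wrong by the same factor.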
In this case the calibration graph was kept in a drawer, where
it had been stored for so long that it had become tatty and disreputable, as
laboratory paperwork usually did in those days. It had never been checked either,
until that fateful day when some keen person decided to recalibrate the test and draw
a brand new graph on a fresh sheet of graph paper.
Oh dear.
The old calibration graph turned out to be wrong by a factor
of two. For years, ammonia concentrations in the effluent had been reported as
twice what they actually were.
What to do?
During the following few months, the scientists concerned
made a series of small adjustments to their calibration graph, eventually
bringing it into line with reality. Nobody was any the wiser, although a
welcome improvement in the reported ammonia levels of the effluent did not go unnoticed.
Not a typical episode in scientific history, I should add. It
took place in the sixties too, not the most reliable of periods, yet by analogy it
provokes a question. Do climate modellers intend to do something similar if the
climate continues to misbehave? Of course, in a sense, the Met Office already
has.
4 comments:
I think they part solved their problem by 'harmonising' (or some such lexical magicery) the datasets of past readings downwards.
Woodsy - yes, I'm suspicious of harmonising. In my field, data was data.
The analysis can only be as good as the data allows it to be. So there is always the temptation to declare or insist that the data is good, even when it is difficult for it to be good. My problem is that there have been times when you have to look for what cannot be seen and also to see what is not there.
Demetrius - I'm reluctant to go beyond the data. Usually, if it has something to tell, then it should be obvious from a graph.