UK temperatures since 1956 – physical models and interpretation of temperature change

In this post we present evidence that suggests 88% of temperature variance and one-third of net warming observed in the UK since 1956 can be explained by cyclical change in UK cloud cover. The post is co-authored by Clive Best and builds on an earlier post that described the UK Met Office climate station data from 1933 to present (links given below).

A copy of a manuscript submitted to and rejected by Nature can be downloaded here. This post is also based on a seminar given at The University of Aberdeen on 12th November that can be downloaded here (4.1MB).


The objective of this study is to explain an observed cyclical relationship between sunshine hours and temperature from 23 UK Met Office weather stations (Figures 1 and 2) [1]. The relationship (R2 = 0.8 on 5-year means) is observed in data from 1956 to 2012. The pre-1956 data are believed to be affected by air pollution, as previously described on Energy Matters.

Figure 1 Tmax and sunshine hours averaged for 23 UK weather stations. The UK Met Office report monthly data. The first stage of data management was to compute annual means. The above chart shows a 5 year running mean through the annual data.

Figure 2 Data from Figure 1 cross plotting Tmax and sunshine hours, 1956-2012.

We recognised that the temperature trend could be controlled partly by dCloud and partly by dCO2, and wanted to determine the relative importance of these two forcing variables. Other variables, such as dCH4, are of secondary importance and have not been included in our analysis.

The CO2 radiative forcing model

Line by line radiative transfer codes calculate the forcing of CO2 in the atmosphere. CO2 absorbs infrared (IR) photons from the surface in tight bands of quantum excitations of vibrational and rotational states of the molecule; on Earth the 15 micron band is dominant. The central region is saturated at current CO2 levels, so the enhanced greenhouse effect is mainly due to increases in the side lines. The net effect is that CO2 forcing increases logarithmically with concentration. This dependence has been parameterised by Myhre et al. (1998) [2] as:

S = 5.3 ln(C/C0) watts/m2

where C is the new level of CO2 relative to a start value C0. Climate Sensitivity is defined as the temperature increase following a doubling of CO2 levels in the atmosphere. The change in forcing is:

5.3 ln(2) = 3.67 watts/m2

so applying the value of the Planck response (3.5 watts/m2/˚C) we get a CO2 climate sensitivity of 1.05˚C. Global circulation models (GCMs) include multiple feedback effects from H2O, clouds and aerosols, resulting in larger values of (equilibrium) climate sensitivity ranging from 1.5˚C to 4.5˚C (AR5) [3].

Our CO2 forcing model applied to the UK is simply:

CS x 5.3 ln(C/C0)

where CS represents a “feedback” factor to be determined by the data.

We use the annually averaged Mauna Loa measurements of CO2 [4] and assume these values apply to the UK. Then the annual change in temperature due to Anthropogenic Global Warming (AGW) from year y-1 to year y is given by:

DT = 5.3 x ln(CO2(y)/CO2(y-1)) / 3.5
Tcalc(y) = Tcalc(y-1) + DT
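
This recursion is simple enough to sketch in a few lines of Python. The CO2 series below is illustrative rather than the actual Mauna Loa record, and cs is the feedback factor introduced above (cs = 1 reproduces the no-feedback Planck-response case of the equation above):

```python
import math

PLANCK = 3.5  # Planck response, W/m2 per deg C

def co2_model(co2, t0, cs=1.0):
    """Temperature recursion driven by annual CO2 changes.
    co2: list of annual mean concentrations (ppm); t0: start temperature;
    cs: feedback factor (cs = 1 gives the bare Planck response)."""
    tcalc = [t0]  # initialise to the observed start-year temperature
    for y in range(1, len(co2)):
        dt = cs * 5.3 * math.log(co2[y] / co2[y - 1]) / PLANCK
        tcalc.append(tcalc[-1] + dt)
    return tcalc

# Illustrative input: 1% per year CO2 growth from 315 ppm
co2 = [315 * 1.01 ** i for i in range(10)]
print(round(co2_model(co2, 12.0)[-1] - 12.0, 2))  # → 0.14 (deg C of modelled warming)
```

Because the logarithms telescope, the cumulative CO2 term depends only on the ratio of the final to the initial concentration, not on the path taken between them.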

For non-physicists, the graphic picture of the CO2 forcing model (Figure 3) may help visualise how it works.

Figure 3 The CO2 radiative forcing model outputs. The model is initiated by setting Tcalc = Tmax in 1956. Model outputs are plotted for transient climate response (TCR) = 1, 2, 3 and 4˚C. The contribution of CO2 with high TCR in the range 2 to 4˚C can explain some of the warming trend but little of the structure of the temperature record.

The sunshine–surface temperature-forcing model

Clouds have two forcing effects on climate. First, they reflect incoming solar radiation back to space, providing an effective cooling term. Second, they absorb IR radiation from the surface while emitting less IR radiation from cloud tops, thereby increasing the greenhouse effect (GHE). The interplay between these two effects is complex and depends on latitude and cloud height. Recent CERES satellite measurements have determined that globally the net cloud radiative effect is negative (-21 W/m2) [9] – a net cooling of the Earth. UK climate is dominated by low cloud, which increases the net cooling effect. We define the Net Cloud Forcing (NCF) factor in the UK to be the ratio of solar forcing for cloudy skies to that for clear skies. Then for a given station with average solar radiation S0 (taken from NASA climatology) [5] and fractional cloud cover CC (where hours of cloud is defined as daylight hours without sunshine) we find for year y:

CC(y) = (4383-sunshine(y))/4383

the effective solar forcing

Seff(y) = (1-CC(y)).S0 + NCF.CC(y).S0

Thus we see that an increase in the radiative forcing for a given UK station due to decreasing cloud cover will change the surface temperature to balance the change through the so-called Planck response. The Planck response (4.sigma.Teff^3) is about 3.5 Watts/m2/deg.C, which is the increase in outgoing IR for a 1˚C rise in surface temperature. So the change in average temperature DT between one year and the next is given by:

DT(y) = (Seff(y) – Seff(y-1))/3.5

The model therefore predicts the average temperature Tcalc based only on CC (cloud cover) and NCF (net cloud forcing factor).

Tcalc(y) = Tcalc(y-1) +DT(y)

For each station we normalise the Tcalc(1956) to the actual average temperature Tmax(1956) and then calculate all future temperatures based only on CC (sunshine hours). The only variable in the model is NCF. Finally, all stations are averaged together to compare the model with the actual temperature record.
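
A minimal sketch of this procedure for a single station, with illustrative placeholder values for S0 and the sunshine series (4383 is the approximate number of daylight hours in a year, as used in the CC equation above):

```python
DAYLIGHT_HOURS = 4383  # approximate daylight hours per year
PLANCK = 3.5           # Planck response, W/m2 per deg C

def cloud_model(sunshine, t0, s0, ncf):
    """Tcalc driven only by fractional cloud cover CC and the net cloud
    forcing factor NCF.  sunshine: annual sunshine hours for one station;
    t0: observed start-year temperature; s0: average solar radiation (W/m2)."""
    cc = [(DAYLIGHT_HOURS - s) / DAYLIGHT_HOURS for s in sunshine]
    seff = [(1 - c) * s0 + ncf * c * s0 for c in cc]  # effective solar forcing
    tcalc = [t0]  # normalise to the observed start-year temperature
    for y in range(1, len(seff)):
        tcalc.append(tcalc[-1] + (seff[y] - seff[y - 1]) / PLANCK)
    return tcalc
```

With NCF = 1 the effective forcing is the same regardless of cloud cover, so Tcalc is a flat line; with NCF = 0 the full cloud signal appears in the temperature. These are the two limiting cases described in Figure 4.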

For those who don’t quite follow the physics the graphic output shown in Figure 4 should help visualise how the model works.

Figure 4 Output from the sunshine–surface temperature-forcing model for net cloud forcing (NCF) factors of 0.3, 0.4, 0.5 and 0.6. The model is initialised by setting Tcalc = Tmax in 1956. All subsequent years are calculated using only dSunshine (i.e. dCloud). By way of reference, NASA report mean cloud transmissibility of 0.4 for the latitude of interest [5]. NCF values >0.4 in our model incorporate a component of the greenhouse warming effect of clouds. NCF = 1 (total opacity of cloud: all radiation reflected) would be represented by a flat line on this chart; NCF = 0 (total transmissibility of cloud: all radiation reaches the surface) would be represented by a high-amplitude curve.

From Figure 4 it can be seen that none of the NCF values provide a perfect fit of model to measured data. NCF=0.6 fits the front end but not the back end of the time temperature series. NCF=0.3 fits the back end but not the front end of the time temperature series. It was apparent to us that an NCF value close to 0.6 could provide a good fit if temperatures were lifted at the back end by increasing CO2. The next stage, therefore, was to combine the CO2 radiative forcing and sunshine surface temperature forcing models.

Optimised combined model output

The optimised combined model output should satisfy the following criteria:

Gradient of Tmax v Tcalc = 1
Intercept = 0
R2 = 1
Sum of residuals = 0
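
Given a Tcalc series, these four criteria are straightforward to compute from an ordinary least-squares fit of Tcalc against Tmax. A minimal sketch (the function name is ours):

```python
def fit_diagnostics(tmax, tcalc):
    """Return (gradient, intercept, r2, residual_sum) for an ordinary
    least-squares fit of tcalc (y) against tmax (x), plus the summed
    residuals Tmax - Tcalc used as the fourth criterion."""
    n = len(tmax)
    mx = sum(tmax) / n
    my = sum(tcalc) / n
    sxx = sum((x - mx) ** 2 for x in tmax)
    sxy = sum((x - mx) * (y - my) for x, y in zip(tmax, tcalc))
    syy = sum((y - my) ** 2 for y in tcalc)
    gradient = sxy / sxx
    intercept = my - gradient * mx
    r2 = sxy * sxy / (sxx * syy)  # squared Pearson correlation
    residual_sum = sum(x - y for x, y in zip(tmax, tcalc))
    return gradient, intercept, r2, residual_sum
```

A perfect model would return (1, 0, 1, 0), the target values listed above.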

The model is optimised with NCF = 0.54 and TCR = 1.28˚C as shown in Figures 5, 6 and 7. This provides:

Gradient = 1.0002
Intercept = +0.01
R2 = 0.85
Sum of residuals = -0.71˚C
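
The combined model and its optimisation can be sketched as follows. This assumes the two annual increments simply add, scales the CO2 term so that a doubling of CO2 yields TCR degrees, and substitutes a simple least-squares grid search for the four criteria listed above; the function names and station values are ours, and a real run would use the station Tmax, sunshine and Mauna Loa CO2 series:

```python
import math
from itertools import product

DAYLIGHT_HOURS = 4383  # approximate daylight hours per year
PLANCK = 3.5           # Planck response, W/m2 per deg C

def combined_model(sunshine, co2, t0, s0, ncf, tcr):
    """Annual temperature increment = cloud term + CO2 term, the CO2 term
    scaled so that a doubling of CO2 yields TCR deg C."""
    def seff(s):  # effective solar forcing for one year's sunshine hours
        cc = (DAYLIGHT_HOURS - s) / DAYLIGHT_HOURS
        return (1 - cc) * s0 + ncf * cc * s0
    tcalc = [t0]
    for y in range(1, len(sunshine)):
        d_cloud = (seff(sunshine[y]) - seff(sunshine[y - 1])) / PLANCK
        d_co2 = tcr * math.log(co2[y] / co2[y - 1]) / math.log(2)
        tcalc.append(tcalc[-1] + d_cloud + d_co2)
    return tcalc

def fit(sunshine, co2, tmax, s0):
    """Brute-force grid search for the (NCF, TCR) pair minimising the
    sum of squared residuals against the observed Tmax series."""
    ncf_grid = [i / 100 for i in range(101)]  # 0.00 .. 1.00
    tcr_grid = [i / 100 for i in range(401)]  # 0.00 .. 4.00
    best = None
    for ncf, tcr in product(ncf_grid, tcr_grid):
        tc = combined_model(sunshine, co2, tmax[0], s0, ncf, tcr)
        sse = sum((a - b) ** 2 for a, b in zip(tmax, tc))
        if best is None or sse < best[0]:
            best = (sse, ncf, tcr)
    return best  # (sse, ncf, tcr)
```

With constant sunshine the cloud term vanishes and a doubling of CO2 warms the model by exactly TCR, which is a useful sanity check on the scaling.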

Figure 5 Comparison of model (Tcalc) with observed (Tmax) data. The model is initialised by setting Tcalc=Tmax in 1956. Thereafter Tcalc is determined by variations in sunshine hours and CO2 alone.

Figure 6 Cross plot of the model versus actual data plotted in Figure 5.

Figure 7 Residuals calculated by subtracting Tcalc from Tmax. Not only is the sum of residuals for the optimised model close to zero but they are also evenly distributed along the time series.

Model example using TCR = 3

In order to illustrate a different output, let’s assume that there was “unequivocal evidence” that TCR = 3. How would our combined model cope? Setting TCR = 3˚C, we have adjusted NCF to produce the best possible fit as illustrated in Figures 8, 9 and 10. The optimised parameters are as follows:

Gradient = 1.004
Intercept = +0.15
R2 = 0.84
Sum of residuals = -11.1˚C

Notably, it is possible to get a good fit on three out of four of our criteria, but a quick examination of Figures 8 and 10 shows that the fit is visibly poorer than the optimised model. The extent to which this precludes a TCR as high as 3˚C is for the reader to decide.

Figure 8 Setting TCR=3˚C, the model is optimised with NCF=0.72. This provides reasonable m, c and R2 (Figure 9) but a clearly poor fit, as evidenced by the sum of residuals = -11.1˚C (Figure 10).

Figure 9 Cross plot of the model versus actual data plotted in Figure 8.

Figure 10 Residuals calculated by subtracting Tcalc from Tmax. With TCR set to 3, the Tcalc model produces temperatures that are consistently too high producing heavily biased negative residuals along the time – temperature series.

If one accepts that cyclical changes in sunshine / cloud contribute to the net warming of the UK since 1956, then this must reduce the contribution to warming from CO2. Hence it becomes impossible to produce a good fit of model to observations if CO2 is given a larger role than the data can accommodate.

Relative contributions to the optimised model

Setting the combined model parameters so that there is zero effect from CO2 and zero transmissibility of cloud to incoming radiation, we discovered that the output was not a flat line (Figure 11). The reason is that the data inputs from the 23 weather stations are discontinuous (Figure 12), and this imparts some structure to the averaged data stack (Figure 11). Taking this into account, the percentage contributions of dCO2, dCloud and dArtifacts add up to 100% along our time series, as shown in Figure 11.

Figure 11 The relative contributions to the optimised model from dCO2, dCloud and data artifacts. It can be seen that along the time series CO2 makes the greatest contribution followed by cloud followed by artifacts.

Figure 12 The opening and closing of weather stations imparts some structure to the Tmax and sunshine data that needs to be taken into account in this and all other interpretations of such data series.

Integrating the modulus of the curves for the optimised model shown in Figure 11 along the time series and calculating the percentage contribution to the temperature record (gross dT) provides the following result:

dCO2 – 5%
dCloud – 88.5%
dArtifact – 6.5%

However, looking at the overall final contribution of each component between 1956 and 2012 (net dT; Figure 11) produces this result:

dCO2 – 49%
dCloud – 32%
dArtifact – 19%

In other words, variance in cloud cover accounts for nearly all of the structure in the UK temperature record, but somewhat less than half of the total temperature rise since 1956.
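
The gross versus net distinction can be made concrete with a short sketch. The component series here are hypothetical, chosen only to show how a strongly cyclical component can dominate gross dT while contributing little net dT:

```python
def contributions(deltas):
    """Percentage contribution of each component to gross dT (integrating
    the modulus of the annual increments) and to net dT (the magnitude of
    the signed change over the whole period).  deltas maps component name
    to its list of annual temperature increments."""
    gross_tot = sum(sum(abs(d) for d in s) for s in deltas.values())
    net_tot = sum(abs(sum(s)) for s in deltas.values())
    gross = {k: 100 * sum(abs(d) for d in s) / gross_tot
             for k, s in deltas.items()}
    net = {k: 100 * abs(sum(s)) / net_tot for k, s in deltas.items()}
    return gross, net

# Hypothetical: an oscillating "cloud" component versus a steady "CO2" trend
gross, net = contributions({"dCloud": [0.5, -0.5, 0.5, -0.4],
                            "dCO2": [0.1, 0.1, 0.1, 0.1]})
```

In this toy example the oscillating component dominates the gross budget while the steady trend dominates the net change, which is the same qualitative pattern as the dCloud and dCO2 figures above.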


The data and conclusions presented here apply only to the UK, a small island group off the West coast of Europe that currently occupies the northern end of the temperate climatic belt in a western maritime climatic setting. The polar jet stream is typically overhead and has a profound impact upon the weather regime in the UK. The NCF value of 0.54 derived from our optimised model will apply only to the UK. Other geographic locations should yield different values since they will occupy different latitudes and have different mean cloud geometries – that will fluctuate with time.

However, other localities on the Earth’s surface may be expected to display cyclical change in cloud cover that impacts surface temperature evolution. Perhaps some localities show a negative correlation between sunshine and temperature, in which case the net globally averaged effect may converge upon zero. But our analysis of global cloud cover and temperature evolution, currently out for review, suggests this is not the case [6]. Global cloud cover has fluctuated over the past 40 years and has imparted structure to the temperature record in a manner similar to that described here for the UK.

Global circulation models (GCM) that do not take into account cyclical change in cloud cover have little chance of producing accurate results. Since the controls on dCloud are currently not understood there is a low chance that GCMs can accurately forecast future changes in cloud cover and as a consequence of this they cannot forecast future climate change on Earth.

Professor Dave Rutledge of Caltech reviewed an early version of the manuscript sent to Nature and pointed out that the optimised TCR from our model (1.28˚C) was identical to the value reported by Otto et al. (2013) [7]. The Otto et al. work was based on observational energy budget constraints and applies globally. In the UK, we need to call upon increasing CO2 to produce a transient response resulting in higher temperatures to explain the observed temperature record.

Conclusions and consequences

  • UK sunshine records suggest that cloud cover fluctuates in a cyclical manner. This imparts structure to the UK temperature record (confidence = very high)
  • A combined CO2 radiative forcing and sunshine – surface temperature forcing model is optimised with NCF = 0.54 and TCR = 1.28˚C (confidence = medium; uncertainty unquantified)
  • Our empirically constrained value for TCR = 1.28˚C is identical to the value of 1.3˚C reported by Otto et al [7]
  • Our model aggregates dT over a 56 year period and provides a good fit of calculated versus observed temperature based on dCloud and dCO2 alone.
  • The consequences of the above are quite profound, especially when combined with the findings of Otto et al. It removes the urgency but does not remove the long-term need to deal with CO2 emissions.
  • Global cloud cover as recorded by the International Satellite Cloud Climatology Project (ISCCP) [8] also shows cyclical change that helps explain the global temperature record.
  • The cause of temporal changes in cloud cover remains unknown.


[1] Met Office. Historic station data (2013). <>
[2] Myhre, G., Highwood, E. J., Shine, K. P. & Stordal, F. New estimates of radiative forcing due to well mixed greenhouse gases. Geophysical Research Letters 25, 2715–2718 (1998).
[3] IPCC AR5 Summary for Policymakers (2013).
[4] Keeling, C. D. et al. Atmospheric carbon dioxide variations at Mauna Loa Observatory, Hawaii. Tellus 28, 538–551 (1976).
[5] Kusterer, J. M. NASA Langley Atmospheric Science Data Center (Distributed Active Archive Center) (2008). <>
[6] Best, C. H. & Mearns, E. W. Effect of Cloud Radiative Forcing on Climate between 1983 and 2008 (under review).
[7] Otto, A. et al. Energy budget constraints on climate response. Nature Geoscience 6, 415–416 (2013).
[8] The International Satellite Cloud Climatology Project (ISCCP). <>
[9] Allan, R. P. Combining satellite data and models to estimate cloud radiative effects at the surface and in the atmosphere. Meteorological Applications 18, 324–333 (2011).


72 Responses to UK temperatures since 1956 – physical models and interpretation of temperature change

  1. Hugh Sharman says:

    Euan thanks.

    Brilliant as ever. As you say, this superb analysis is for a tiny patch of land. But it also covers a geologically insignificant period. Your fellow Aberdonian oil geologist wrote about this recently on LinkedIn, as follows:


    Chartered Petroleum Engineer

    Hi Peter,

    I provided several links in the original post.

    But just to illustrate how muddied the waters are on this subject here is a link from BGS which clearly states that during the Eocene “The sea temperature rose between 5–8 degrees celsius in just a few thousand years.”

    and from the same BGS page…

    “One of the warmest periods in the Earth’s history was the Cretaceous, from 140 to 65 million years ago.
    The Earth was then several degrees warmer than today and is described as having a ‘greenhouse’ climate (see Greenhouse Earth).
    Britain enjoyed warm, tropical conditions at this time.
    The poles were warm and at times there may have been no ice on them at all.
    There is even evidence of temperate forests growing in the Arctic and Antarctic.
    As ‘greenhouse’ temperatures were reached, the world’s ice melted, which caused significant sea level rise.”

    Here using your source again, BGS discusses the late cretaceous period…

    “sea surface temperatures around Britain probably reaching 28°C. Even deep ocean water seems to have been warm, perhaps as high as 15°C, compared with the near-freezing deep ocean water of today.”

    So even the deep ocean was as much as 15°C warmer?
    Why was it so much warmer?

    “Analysis of fossil soil horizons and fossil plant remains suggests that Late Cretaceous atmospheric carbon dioxide levels peaked at between four and 18 times current levels.”

    Our old friend CO2.

    My original point is that the mainstream debate never has any context, the climate is volatile as is atmospheric CO2, we should say how volatile it was before humans started influencing it, I don’t want to get into a nitty gritty row over semantics and minutia, to me it doesn’t matter if it was 5°C warmer or 15°C warmer or 50°C warmer.

    What matters is putting the famous hockey stick into context. It suggests a very stable past, whereas the past climate was volatile and still is, and will continue to be long after humans are gone.

    Meanwhile the whole debate is still discussed without context. It has become a hugely artificially narrow and subjective debate about theoretical future scenarios that do not address the realities of the extended past or the likelihood of intervening events.

    The quality of the debate needs to improve if it is to achieve anything long term.

  2. mididoctors says:

    Where are we going with all this? I am not sure it is really that relevant anymore… I mean, do we ignore CC and burn all the carbon, or do we restrict carbon for whatever reason, real or imagined, and effectively ration what’s left?

    You see the problem: if there is no CC due to fossil fuels, what difference does it make?

    • Ed Keo says:

      I have always been “realistic” about climate change and climate sensitivity in particular, and suspicious that it could be much lower than the climate models in the IPCC reports assume. They are just complex computer models of the atmosphere and totally dependent upon us understanding the relationship between the various parameters used to characterize the atmosphere. The length of time that has passed without any significant warming, according to all of the temperature reconstructions, suggests that the sensitivity input to the models will be revised downwards sooner rather than later. Thus climate change may not be the main driver of energy/fossil fuel restrictions in the medium term. However, that still leaves us with dwindling fossil fuel reserves to drive energy efficiency etc. Unconventional resources may have postponed this problem for at least a few decades, but fossil fuels remain a finite resource that we are consuming at an ever increasing rate, so the inevitable crash will follow one day.

      In the main the difference in the policy drivers of CC and resource scarcity is slight, however there are occasions when they diverge significantly. CCS, for instance, may help reduce CO2 emissions but it will introduce a huge need for additional energy resources which will increase the rate of depletion of fossil fuels. This will be a policy disaster if climate sensitivity turns out to be nearer 1.5 than 4.

    • TinyCO2 says:

      What would you be rationing it for if not AGW?

      • Euan Mearns says:

        Ed Keo hits the nail on the head. Get rid of the CO2 agenda and we get rid of CCS, probably biofuels, carbon taxes, carbon trading etc. This should clarify thinking on what needs to be done to provide the UK with affordable, secure and reliable energy. The current debate is chaotic. Bet ya we hear about shale gas with CCS in the not too distant future. The most expensive energy ever invented, where all the energy goes to the process itself.

        At TinyCO2 – energy is already being rationed by high prices, which are a symptom of scarcity. Until the scarcity issue is properly addressed, prices are just going to get higher and higher.

  3. Madrigaul says:

    What were the details of ‘Nature’s’ rejection?

    • Euan Mearns says:

      It was rejected because :

      In this case, we have no doubt that your analysis of the relationship between sunshine hours, cloud cover and temperature variance in the UK will be of interest to fellow specialists. However, we are not persuaded that your findings represent a sufficiently outstanding advance in our general conceptual understanding of climate change and its causes to justify publication in Nature Climate Change.

  4. Roger Andrews says:

    Here are a couple of existing cloud cover time series for the UK covering the period of interest, both from KNMI Climate Explorer:

    No guarantees given as to accuracy.

  5. Joules Burn says:

    Underlying all this is the assumption that what happens in the UK originates (and stays) in the UK. That is, the only contribution to the temperature is the local solar flux modulated by the cloud cover. In reality, weather comes from somewhere else. Although the effects over large distances are complex:

    it is not negligible. Warmer (or colder) air that blows in would seem to be important in determining Tmax and Tmin. In Seattle, our winter is replete with relatively warm and moist (and cloudy) storms blowing in from Hawaii (termed a “pineapple express”), which is usually displaced by a clear (sunny) cold blast which freezes said moisture on roads.

    But let’s go with your conclusion, that CO2 forcing is overestimated. This is one of the most straightforward parts of the climate model, since we can measure absorption and emission spectra and concentrations, and things get mixed rather well. What part of the physics is wrong? Or could it be that your model suggests the presence of a “dark” forcing to be discovered later, which negates part of the CO2 forcing?

    • Euan Mearns says:

      Underlying all this is the assumption that what happens in the UK originates (and stays) in the UK. That is, the only contributions to the temperature is the local solar flux modulated by the cloud cover. In reality, weather comes from somewhere else. Although the effects over large distances are complex:

      Joules, thanks for showing up and actually challenging what we have written. The conventional way of thinking about temperature variance is larger scale circulation. It’s warmer one day because a juicy depression rolls in off the Atlantic, colder the next because the wind turns to the north. But the incremental effect with cloud is that if the southwesterly brings some clear spells and the sun comes out, then it gets even warmer. Cyclic change in cloud overprints (but doesn’t overrule) the larger circulation picture to which you refer.

      I’d point you at the posts of Roger Andrews here and in our earlier post. And also at the correlation between Tmax and Tmin in our earlier post, both of which point towards high level of temperature memory that is not wiped out by circulation.

      I’m afraid that your last paragraph is uncharacteristically rather silly. The magnitude of CO2 forcing is dependent upon the level of feedbacks, and AR5 shows clearly a huge range of disagreement within the IPCC community as to exactly what the magnitude of the feedbacks is (1.5 to 4.5˚C). Our work, which simply seeks to explain the observations, indicates that we need to call on CO2 plus a small level of net feedbacks (or perhaps other forcing like CH4). There is not much room to accommodate multiple large feedbacks. No dark forces at work here, just the absence of theoretical feedbacks that are unsupported by observations. E

      • Joules Burn says:

        Well, then identify which (“theoretical”) feedbacks are suspect. Water vapour pressure increasing with temperature?

        The differences between climate models (and the biggest uncertainty) arises from the cloud problem, and forecasting this is certainly complicated by a lack of good data for the past (both the %cloud cover and type of cloud). Nonetheless, treating a relatively small area (UK) as a test case for models involving global-scale convection and energy transport seems rather pointless to me.

        As for the idea (etched in AGW skepticism lore) that a recent “pause” in warming requires these models to be completely revised, there is also the possibility that said “pause” is exaggerated:

        • Euan Mearns says:

          Well, then identify which (“theoretical”) feedbacks are suspect.

          Water vapour Clive has a number of posts on water vapour, one link below. The satellite data show variance in water vapour – that I believe is neither forecast nor incorporated into models.

          Convection Convection (not radiative heat loss) is the main mechanism for removing heat from surface towards tropopause. This is likely a strong negative feedback where any warming leads to more convection. Any model should incorporate dConvection – past and future

          But it is understanding variations in natural forcings that are most wanting:

          The Sun Variations in spectral out put are only beginning to be understood.

          Clouds The ISCCP data clearly show that the recent warming flat spot may be linked to a trend in cloud cover. Rather than embrace this, the IPCC has instead questioned the veracity of the data.

          Ocean currents Natural cyclic change in ocean currents simply not understood in the past, and therefore impossible to forecast into the future.

          Let’s look at this another way. AR5 increased the range of ECS to 1.5 to 4.5˚C. This in itself shows a massive range of “opinion” on feedbacks within the mainstream.

          Let me put the boot on the other foot. Do you believe the observations shown in Figure 1 are valid? And if so, do you believe this will impact surface temperature development over the UK? And if so, how would you go about quantifying the effect? Or do you think it’s best to simply ignore it? We are careful to point out that our observations apply only to the UK. Best E

        • Clive Best says:

          What is certainly true is that warmer sea surface temperatures will lead to more evaporation, following the Clausius-Clapeyron equation. More water vapour at low levels will lead to more clouds and higher precipitation. However, what really matters for positive feedbacks would be a consequent increase in water vapour at high altitudes, where IR radiates to space. This is because photons emitted at higher altitudes have less energy due to the lapse rate. There is very little evidence that this is happening, and perhaps even the contrary. On the other hand, more clouds at low levels will give a negative feedback to warming, both through increased albedo and a reduction in the lapse rate.

          The pause in warming is really due to a slight increase in global cloud cover. The paper you quote has nothing to do with physics. Instead it plays the same game as skeptics do by finding biases in HadCRUT4. I could use exactly the same arguments as they do to show that global temperatures in 1850 were actually 1˚C higher, due to HadCRUT4 sampling biases for Europe and the US caused by smoke pollution.

          • Roger Andrews says:

            If I may be permitted to put my ten cents’ worth in here.

            Joules Burn says: “As for the idea (etched in AGW skepticism lore) that a recent “pause” in warming requires these models to be completely revised, there is also the possibility that said “pause” is exaggerated”. He then links to an article claiming that the “pause” could be a result of HadCRUT4 underestimating recent warming because of incomplete global coverage.

            Taking these points in sequence:

            First, the need to revise the models is based on concerns that are a lot more fundamental than the inability of the models to replicate the “pause”.


            Second, the problems with HadCRUT4 are also a lot more fundamental than a few minor distortions after 1998. (I wrote this a few years ago, so it deals with HadCRUT3 and HadSST2 rather than HadCRUT4 and HadSST3, but the analysis remains valid.)


          • Joules Burn says:

            A. E. Dessler, M. R. Schoeberl, T. Wang, S. M. Davis, and K. H. Rosenlof, Stratospheric water vapor feedback, PNAS 2013 110: 18087–18091.

            “We show here that stratospheric water vapor variations play an important role in the evolution of our climate. This comes from analysis of observations showing that stratospheric water vapor increases with tropospheric temperature, implying the existence of a stratospheric water vapor feedback. We estimate the strength of this feedback in a chemistry–climate model to be +0.3 W/(m2·K), which would be a significant contributor to the overall climate sensitivity. One-third of this feedback comes from increases in water vapor entering the stratosphere through the tropical tropopause layer, with the rest coming from increases in water vapor entering through the extratropical tropopause.”

          • Clive Best says:

            Dessler argues that there is strong evidence of positive water vapour feedback of around 2 W/m2/˚C. However, he also admits that

            The only way that a large warming will not occur in the face of these radiative forcing is if some presently unknown negative feedback that cancels the water vapor feedback. My opinion is that the cloud feedback is the only place where such a large negative feedback can lurk. If it is not there, and the planet does not reduce emissions, then get ready for a much warmer climate.


            There are two negative feedbacks from H2O. The first is clouds and the second is a decrease in the lapse rate towards the moist lapse rate. If the temperature at the emission height of photons from CO2 molecules increases, the greenhouse effect is reduced. The signature of enhanced water vapour in the tropics should be visible as the hot spot in the upper troposphere. This is not observed.

            Also recently NVAP-M released new data which showed a drop in water vapour after 2000 see:

            So yes – the net feedback from H2O (water vapour + clouds + lapse rate) is the crucial factor determining future warming. However, I have another argument as to why the sum of all H2O feedbacks cannot be positive, and that is the faint sun paradox. The sun has increased output by 30% over the last 4 billion years and during all this time liquid oceans have existed on Earth. CO2 levels have been as high as 4000 ppm, so if H2O feedbacks were linearly positive the oceans would have boiled away long ago.

            See: Evidence for Negative Water Feedback –

  6. Roger Andrews says:


    I’ve found a data set that contains both monthly temperature and cloud cover data since 1960 for 21 UK stations, so you can compare temperatures at these stations directly with cloud cover instead of using sunshine hours as a proxy if you want to. The data are at:

    The data set also includes stations from elsewhere in Europe, so you can spend more happy hours extending your comparisons to countries on the other side of the Channel if you feel so inclined 🙂

    As to how much difference it might make, the Leuchars sunshine hours vs. cloud cover comparison below shows that sunshine hours are indeed a reasonably good cloud cover proxy (R = minus 0.68, note that the sunshine hours scale is inverted so that both plots move in the same sense) so I suspect maybe not much overall. The XY plot shows a decrease of 3.16 sunlight hours for every 1% increase in cloud cover, but I’m not sure what that’s telling us.

    Incidentally, the data set confirms that there is no significant seasonal range in Leuchars cloudiness, meaning that the one-month temperature lag applies only to sunshine hours.

    • Euan Mearns says:

      Roger, thanks very much for these links. My main interest is energy and I have about 20 topics lined up to write about, so I’m not about to dive in:-) The work presented here took 3 to 6 months to compile. If someone wanted to bung me a £50K research grant then I may consider re-prioritising. I sent the link to Clive who is much more able than I at interrogating large data sets. Maybe you want to have a go yourself for a country like Germany or France?

      Using sunshine as an inverse proxy for cloud is imperfect. Cloud cover data will be defined in a vertical sense – how much cloud there is looking straight down. Using sunshine is imperfect but perhaps also superior. It tells you if you have line of sight between a point on the surface and the Sun, so when the Sun is low in the sky its light may never penetrate fairly light cloud cover. I’m guessing this may explain the gradient in your chart. Also note comments and observations made about diurnal cloud variations. 24 hour cloud measurements will differ from daytime ones – how are these cloud cover measurements made?

      • Roger Andrews says:


        I probably will have a go at some other countries, but concentrating on sunshine hours because they’re a more robust metric and better correlated overall with temperatures than cloud cover. And while I’m doing it I’ll take a closer look at the sunshine-temperature lag to see what might drop out. Stay tuned.

        The basic problem in doing anything with cloud cover is the lack of reliable data. You’ve plotted ISCCP in one example and I’ve plotted it in another, but there are a lot of other satellite cloud amount measurements that can be compared against ISCCP and plotting all of them together gives us spaghetti, as shown in the plot below. (The conclusion of the WCRP people who produced this plot was that you can’t conclude anything about cloud cover from the ISCCP data:)

        The only long-term cloud cover series we have (ICOADS sea and CRU TS 3.10 land) are based on eyeball estimates from ground observers, and CRU actually fits the UK sunshine data quite well, although temperature not so well. Globally, however, the two series look nothing like each other.

        Finally re your plot in your comment below comparing ISCCP with UK sunshine. I noted earlier that sunshine hours show seasonality while cloud cover doesn’t, and this seasonality is what you see when you subtract the two.

        Look forward to your posts on energy.

        • Euan Mearns says:

          Finally re your plot in your comment below comparing ISCCP with UK sunshine. I noted earlier that sunshine hours show seasonality while cloud cover doesn’t, and this seasonality is what you see when you subtract the two.

          Roger, this sent me scurrying to my chaotic hard drive in a panic. Pleased to say that I have applied a daylight correction to the sunshine data. A bit crude, using the lat of the centre of Britain and applying daylight variance from that one point to all stations. I did this before linking up with Clive. So can’t guarantee I’ve done this correctly.

          But there may be other adjustments that need to be taken into account such as angle of sun above horizon etc.

          There are also geometric issues with the ISCCP data matrix, where I have not made adjustments for latitude – over a 10˚ separation at 60N the effect shouldn’t be too large? Need to see if Clive is interested in tackling this.


          • Roger Andrews says:


            I really don’t see a problem here. Your analysis uses 5-year means that will suppress the seasonal cyclicity in sunshine hours whether you’ve applied “daylight corrections” or not, so you can use these means to define long-term trends.

            I also don’t find anything surprising about the fact that UK sunshine hours show seasonal and diurnal cyclicity while UK cloud cover doesn’t (I have some cloud data from Leuchars that show no significant diurnal variations and trust you will take my word for it so I don’t have to put up another graph). You wouldn’t expect to see much in the way of seasonal or diurnal cloud cover variation in the UK anyway because UK clouds are dominantly frontal in origin and fronts don’t much care whether it’s night or day, or for that matter what time of year it is.

            But if you still want to play around with solar latitude corrections you may find this site helpful:


  7. Euan Mearns says:

    tchannon over on Tallbloke’s Talkshop had a question about diurnal cloud effect. Our analysis using sunshine as a cloud proxy is applicable to day time cloud only. It is well established that cloud cover at night time is significantly different to cloud cover during the day.

    Since we have also been looking at global cloud, one of the things I thought it would be cool to do is to compare the ISCCP satellite data over the UK with our ground based inferences. The results are shown in the chart below. There are 8 satellite nodes over the UK.

    I wasn’t sure what to make of the data at first – compare the red and blue lines. Cloud cover was in the same range and seemed to be showing a degree of co-variance. Then I subtracted one curve from the other and was pretty surprised at the result. The satellite data is 24 hour and is clearly biased towards higher cloud levels for most of the year – i.e. more cloud at night (24 hour) than during the day (sunshine based), which is expected. In the winter months, the relationship is reversed.

    • tchannon says:

      I’m slow responding, busy with writing software, which is kind of to do with this stuff here.

      I’ll be showing things soon, I hope, real data on cloud cover.

      The UK is a meteorological borderland with widely varying conditions by time and locality. It is also sensitive to general modal change. Air conditions change very fast; even daily data will be misleading. In consequence, combining stations is highly risky.

      What does “cloud” do anyway?

      Mountain of stuff, too much to detail now. Afraid this might come across as my being “short”, tired, will have to do.

  8. Euan Mearns says:

    @ Joulesburn Thanks for ref Brian (can you send me a pdf?). I should add at this point that I have learned much through civil discourse with Brian and others on various lists over the years.

    Googled your link and found this:

    Who say:

    The new results suggest that the stratospheric water vapor feedback may be an important component of our climate system. The researchers estimated that at a minimum this feedback adds another ~5-10% to the climate warming from the addition of greenhouse gases, and is possibly substantially more than this amount.


    • Clive Best says:

      I answered this above. Here I would like to make one point. The stratosphere is warmer than the top of the troposphere. More water vapour in the stratosphere will cool the planet because it increases radiative cooling. There is a beautiful example of this happening with CO2. The central line in the 15 micron band is so saturated that it radiates high up in the stratosphere. Here the temperature is much higher – so it radiates more than in the troposphere. Look at any satellite IR spectrum and you will see the sharp uptick in the middle of the band.

  9. Greg Goodman says:

    Hi Euan, a very interesting paper.

    I would just like to suggest some areas where data processing could be improved and may improve correlations and your results.

    Firstly, my old friend the runny mean. This is quick and dirty as filters go and should have been banned from use in science decades ago. In short, it does a bad job of smoothing, introduces unexpected distortions (like bending peaks sideways, which I suspect we see in your graphs) and can even invert peaks. There are several far better options.

    I will invite you to read my article on this rather than listing the issues and solutions in detail here.

    Second is data re-sampling.

    “The first stage of data management was to compute annual means. ”

    What is the aim here? To remove the annual seasonal variation? Then use a suitable 12 month filter and retain full data time resolution. The (12, 9, 7 mo) triple running mean is my usual choice for this task; see the article.

    If you need to sub-sample for some reason you _must_ apply an anti-alias filter of twice the re-sampled period before re-sampling, otherwise you risk further distortion and spurious artefacts being introduced into the data.

    I’m not clear why you sub-sampled, apart from to remove the seasonal variation, so I would just suggest filtering rather than re-sampling anyway.
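The (12, 9, 7) triple running mean Greg describes is simple to sketch. A minimal Python illustration with synthetic monthly data (the window widths are his suggested values; the test series is invented):

```python
import numpy as np

def running_mean(x, w):
    """Centred running mean of window w (loses w-1 samples overall)."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

def triple_running_mean(x, windows=(12, 9, 7)):
    """Cascade of three running means (12, 9, 7 months for monthly data).
    The shorter stages suppress the side lobes that make a single
    running mean distort, shift or even invert peaks."""
    for w in windows:
        x = running_mean(x, w)
    return x

# Synthetic example: a 12-month cycle riding on a slow linear trend
t = np.arange(240)                      # 20 years of monthly samples
x = np.sin(2 * np.pi * t / 12) + 0.01 * t
smoothed = triple_running_mean(x)
# The annual cycle is removed (the 12-month stage has an exact zero
# at the annual period); the linear trend passes through unchanged.
```

Note that the seasonal cycle is killed while full monthly resolution is retained, which is the point of filtering rather than decimating to annual means.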

    Finally, doing OLS on a scatter plot will generally under-estimate the slope, since its derivation assumes an independent variable where err_x << err_y. How serious the error in the slope is depends upon the data. A quick check is to invert the axes, fit the slope the other way around and compare the results. m2 = 1/m1 ??

    By eye, your slopes look about right but it's worth checking.

    Here is an example where it works well and there's only a small difference. Generally, the broader the spread of the data the worse it gets, and it can be very significant.

    This is one of the reasons many papers have over-estimated CS, since they regress radiative forcing against temp, get an incorrectly low slope and invert it to find CS. I've lost count of the number of times I've seen this done, and not just in climatology.
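The quick check described above – fit the slope both ways round and compare – can be sketched as follows (synthetic data; the noise levels are invented purely to show the attenuation effect):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
true_slope = 2.0
x_true = rng.normal(size=n)
y = true_slope * x_true + rng.normal(scale=0.1, size=n)
x = x_true + rng.normal(scale=0.5, size=n)   # noisy x violates err_x << err_y

m1 = np.polyfit(x, y, 1)[0]            # regress y on x: biased low
m2 = np.polyfit(y, x, 1)[0]            # regress x on y ...
m2_inv = 1.0 / m2                      # ... and invert the slope
# m1 under-reads the true slope of 2.0 because x is noisy;
# the gap between m1 and m2_inv flags the problem.
```

If m1 and m2_inv agree, the OLS assumption is probably fine; if they differ a lot, the single fitted slope should not be trusted.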

    I see nothing critically wrong , but I think addressing these points will likely improve your results.

    Best regards, Greg.

    • Euan Mearns says:

      Hi Greg, thanks very much for this insightful comment. First up, comments here are set so that your first comment is held for moderation – hence it did not appear. And the moderator tried to relieve the world of its surplus supply of red wine yesterday evening;-) Hence this delayed response.

      Running means are popular because even geologists can do em in XL. I’m aware that the procedure followed here could be open to criticism – that’s why we wanted to get a proper review. I don’t fully grasp the statistical significance of what you are saying (i.e. I don’t intuitively understand) but I do accept that there is probably a better way to manage the data. If this improves the results and confidence in the conclusions then all the better. The problem I have right now is time. I spent months taking this to this point. If you check out my posting record on The Oil Drum you’ll see that a year ago I was obsessively compiling global energy statistics and the plan was to take that to a marketable product – and I’m now back following that course.

      Finally, doing OLS on a scatter plot will generally under-estimate the slope since its derivation assumes an independent variable where err_x << err_y

      Yes, we’ve been there and done that. One of these things I got caught with one day, re-plotted a plot, accidentally inverted the axes and got a different result – led to much anxiety at the time. I think Clive suggested that Tmax on x-axis was the correct way to do this – but will have to leave this point for Clive to answer.

      With 240 comments over on Climate etc and a full in box, I’m swamped right now, but would like to follow up on your suggestions.

      • Greg Goodman says:

        ” I think Clive suggested that Tmax on x-axis was the correct way to do this – but will have to leave this point for Clive to answer.”

        Sadly, there’s no “right way”. The best you can do is put the variable with the least error/noise on x. Temperature time series are rarely low in noise or error. Roughly speaking, the error in the slope goes with the ratio of the errors. If you look into the derivation of ordinary least squares regression, it assumes err_x / err_y is negligible. If you ignore the rules, or, as most people, are simply ignorant of them, then you get the wrong answer.

        If you’re lucky it’s obvious to the eye that it’s not right and, as is your case, you go to work out why.

        If you get an answer you expect (like a high value of climate sensitivity for some, for example) you probably won’t notice.

        There are several ways to choose how to split the difference but all require further knowledge of the noise structure of the data, which very often we don’t have, or some wild arsed guessing. The main thing is to know the problem and fit in both directions to bracket the range where the right answer lies. If the range is really big you need to work out what to do about it.

        “Running means are popular because even geologists can do em in XL.”

        That is so very true. The other reason for OLS abuse is Excel’s right-click : “fit trend”. The user is then rock sure his result is right “because the computer did it”. This is also a gross failure of science education to teach the basics, but since most PhDs teaching seem unaware of it, I’m not sure it’s going to get better coverage any time soon.

        You will see at the end of the article I linked an example of how to do the triple RM in a spreadsheet.

      • Greg Goodman says:

        “I don’t fully grasp the statistical significance of what you are saying (i.e. I don’t intuitively understand) but I do accept that there is probably a better way to mange the data. ”

        Take the time to read running mean distortion article. I think it covers it fairly thoroughly. I’ll be glad to elaborate if you have questions.

        The Yeovilton data below shows how averaging and running means will shift one of the peaks by a year in that data. That sort of thing will either give you false positives or degrade a true result. The problem occurs due to data inversion at the window width / 1.3371. In the case of your 5 years that’s 3.7 years.

        Unfortunately circa 3.5 years is a dominant frequency in SST and CO2 data !!

        Allan MacRae and Ole Humlum found lags of around 9-10 months comparing SST to CO2; that is basically the 90 degree phase lag of a 3.5 year oscillation, because we need to be looking primarily at d/dt(CO2) vs SST, which are in phase.

        So a 5 year running mean fails badly here.

        Remedy: use 12mo filters as suggested above.

  10. Greg Goodman says:

    Another thing that would be worth investigating is the lag correlation. There will probably be about a two month lag between any long term tendency in sun hours and the warming of the ground. This will probably be reflected in air temps, hence Tmax.

    Again this will degrade your correlation and spread the width in scatter plots.

    Joining the dots in scatter plots can help you see this sort of effect, which produces loops.

    It may be worth detecting any such lag and adjusting for it to get the best correlations and the strongest match in the model.

    • Euan Mearns says:

      Time lags between sunshine and temperature response are both fascinating and, I believe, extremely important. If I can digress for a moment, this oft-cited paper by Lockwood seems to expect an instantaneous impact of solar activity on temperatures at the global scale, where I believe time lags are the norm. Grand solar maximum around 1986, and temperatures stop rising about 12 years later.

      Lockwood, M and Frohlich, C. Recent oppositely directed trends in solar climate forcings and the global mean surface air temperature. Proc. R. Soc. A doi:10.1098/rspa.2007.1880

      Roger Andrews has been posting charts showing 1 to 2 month time lags and this is something else I’d like to follow up on. What we found at the seasonal level was excellent correlations between sun and temp in JJA, zero correlation in DJF, with intermediate results for spring and autumn. I will be intrigued to see the impact of time shifting on this data. I’m away to post the charts we have in comments so they are in the public domain, open for discussion. Clive I believe will post on the JJA data shortly.

      • Greg Goodman says:

        “zero correlation in DJF …”

        You may need to get beyond “charting” in Excel, here.

        Again you need to look at how corr.coef changes with lag. cosine has zero correlation with sine with zero lag, even though they are identical in form.
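The sine/cosine point is easy to demonstrate with a lagged correlation sweep (a minimal sketch with synthetic monthly series; the 12-sample period is just an illustrative annual cycle):

```python
import numpy as np

period = 12
t = np.arange(1200)                     # 100 "years" of monthly samples
a = np.sin(2 * np.pi * t / period)
b = np.cos(2 * np.pi * t / period)      # identical form, a quarter cycle apart

def lag_corr(x, y, lag):
    """Pearson correlation of x(t + lag) against y(t)."""
    if lag > 0:
        x, y = x[lag:], y[:len(y) - lag]
    elif lag < 0:
        x, y = x[:len(x) + lag], y[-lag:]
    return np.corrcoef(x, y)[0, 1]

r0 = lag_corr(a, b, 0)    # ~0 despite the two series being identical in form
r3 = lag_corr(a, b, 3)    # ~1 once the quarter-cycle (3-month) lag is applied
```

So a single zero-lag correlation coefficient can completely miss a relationship that a lag sweep reveals immediately.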

        Plot these scatter plots with smaller markers and join the dots. My guess is that the low slope in DJF may mean temp is very sensitive to a small change in sunlight hours, though night time cloud will be a big factor here.

        Also the d.p. issues I raised will be lowering your corr. coef.

        You seem to have a clear effect, I’m fairly confident it will become clearer.

      • Roger Andrews says:

        Greg Goodman is quite correct; you can indeed distort your signal by applying inappropriate filtering. I have a case involving the Pinatubo eruption and CO2 where using different filters to remove the seasonal CO2 signal gives opposite results.

        But there are also cases where simple approaches do a better job. An example is Arctic ice concentration, where subtracting monthly means from running 12-month means removes the seasonal signal through the entire length of the record while the harmonic analysis approaches currently in vogue conspicuously fail to do so after 2006:

        You can in fact do a pretty good job of isolating the seasonal component by subtracting 12-month running means if the seasonal cycle amplitude is much larger than the changes in the annual mean, which is the case for most of the variables of interest in the UK. The ~one-month time lags in the UK records, however, are based on unadjusted monthly means and can therefore be considered robust. More sophisticated treatment may be desirable in the future but I doubt that it would change the lag times significantly.

        And to give you fair warning it’s a different ball game in mainland Europe. More on that shortly.

        • Greg Goodman says:

          “An example is Arctic ice concentration, where subtracting monthly means from running 12-month means removes the seasonal signal through the entire length of the record while the harmonic analysis approaches currently in vogue conspicuously fail to do so after 2006:”

          Well, I’m not sure what you mean by harmonic analysis approaches, but running means are a crock, always.

          You may get lucky if the timing and frequency of the data do not fall foul of the distortions and data inversions of the runny mean. But when you do the analysis, how do you ascertain whether you ‘got lucky’ or not? So you never know how reliable the result is.

          Please read my article on running mean distortion linked above.

          A triple running mean can deal with the differences in the magnitude of the annual cycle with no problem, it’s a decent filter with a precise zero to target the annual signal.

          Since you bring up Arctic ice here’s my copy of an article Judith published recently;

          The KNMI link you provide is the usual lame climatology crap of subtracting a monthly “climatology”. This is just the usual Mickey Mann, make-it-up-as-you-go-along signal processing. They seem to forget that data processing knowledge has been part of science and engineering for centuries.

          I don’t know what you refer to about Pinatubo but if you got opposite results I’ll bet one of the methods involved a runny mean.

          Oh, and since you mentioned arctic ice , you may be interested in a little amplitude modulation triplet associated with the lunar perigee, that I dug out.

          I’m very skeptical of how they extract ice coverage from satellite passive microwave measurements, but when I find that sort of detail I cannot fail to be impressed.

          To get back to the point: running means must DIE! Seriously.

        • Greg Goodman says:

          “The ~one-month time lags in the UK records, however, are based on unadjusted monthly means and therefore can be considered robust.”

          Even taking monthly means is not correct.

          What is the object of the exercise? Probably two fold: reducing short-term variability (filtering) and reducing the volume of the data (decimation).

          Averaging is a valid means to reduce random gaussian (or “normal”) distributed noise. There may be some of that , so fine.

          However, in the presence of any periodic or repetitive signal any such averaging will fall on arbitrary intervals and take unrepresentative samples of the signal. This causes what is called aliasing: false signals. This is avoided by filtering (not averaging!) before resampling the data. The filter needs to remove anything faster than the resampling period. A month in this case, so a two month filter as a minimum.

          That may not be an issue in the current study but I would not describe a one month lag derived from one month resolution, poorly processed data as “robust”. It may be more like 1mo +/- 1mo.
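The aliasing-versus-filtering point above can be illustrated with a toy example (all numbers invented; the two-times-period moving average is just a stand-in for a proper anti-alias filter):

```python
import numpy as np

# Toy example: a "daily" series with a fast 5.2-day cycle, decimated to
# one sample every 30 days
t = np.arange(3600)
fast = np.sin(2 * np.pi * t / 5.2)      # period shorter than the new spacing

naive = fast[::30]                      # point-sample every 30th value: aliased
kernel = np.ones(60) / 60               # 60-day (2x new period) moving average
filtered = np.convolve(fast, kernel, mode="same")[::30]

# The naive decimation keeps large spurious variance (a false slow signal);
# low-pass filtering before resampling removes it almost entirely.
```

The naive samples carry an entirely false low-frequency oscillation, which is exactly the kind of spurious artefact being warned about.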

          It is another illustration that climate science is still in data processing kindergarten. These kind of issues are the very basics in most branches of the hard sciences and engineering, it’s not some black art.

          So go with the monthly data but realise it too may contain processing errors.

    • Greg Goodman says:

      JJA here looks like a candidate for doing OLS in both directions. My optical regression tool 😉 puts the slope as passing through the extreme right-hand point. I suspect the reverse fit will pass the other side of those two points.

      BTW a friend was doing some post-grad work on earthquake prediction and was taught to ignore OLS and fit by eye because it never gave correct results on their data.

      The eye is actually quite good at this, but the fact that her supervisor did not know why OLS did not work on noisy scatter-plot data shows the degree to which basic d.p. training is lacking in the Earth Sciences.

      • Greg Goodman says:

        A point on R2 values, which has been raised on C.E.: there is no fixed scale of what is “good” or significant; this needs to be determined in relation to the number of data points.

        As some have said, filtering will (usually) increase R2 because it reduces the ‘degrees of freedom’ in the data; that raises the bar on what can be considered a ‘significant’ R2 value.

        Averaging (decimation) rather than a running average (filter) reduces the number of data points so the usual calculation of significant R values will be much higher.

        There are several variations to find significance values for corr. coefficient such as:

        which for the 354 monthly data points at Yeovilton gives 0.090
        if you decimate to annual means it becomes 0.332

        You need to provide this information when quoting R2 values. (Of course Excel does not tell you this, it just gives you the arms with which to shoot yourself in the foot).
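For reference, the way the significance threshold scales with the number of points can be sketched like this (a stdlib sketch using the normal approximation to the t-test on r; the exact 0.090 and 0.332 quoted above presumably come from a slightly different convention or table):

```python
import math

def r_crit(n, z=1.96):
    """Approximate smallest |r| significant at ~95% (two-tailed) for n
    points, from the t-test on r (t = r*sqrt(n-2)/sqrt(1-r^2)), with the
    normal value z standing in for the t quantile; for small n the exact
    threshold is slightly larger."""
    return z / math.sqrt(n - 2 + z * z)

monthly = r_crit(354)    # ~0.10 for 354 monthly points
annual = r_crit(30)      # ~0.35 for ~30 annual means: a far higher bar
```

The point stands either way: decimating from monthly to annual data more than triples the correlation needed before R2 means anything.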

        I would expect that the same ratio is applicable if you retain monthly resolution but pass it through a 12mo low-pass filter as I recommended.

        Lance Wallace has said he has a friend who has a formula to correct for the effect of filtering, so let’s hope he can provide it.

        This does not really affect comparing, say, winter to summer on the same dataset with the same number of points. The comparison will still be in the same sense.

  11. Greg Goodman says:

    Someone on CE raised the question of circular logic, suggesting there was a degree of induction in the way you do this.

    Probably the objective way would be to let NCR and TCS be the free parameters in a multivariate linear regression. (Sounds pretty hairy but it’s just like doing an OLS against two things at once.)

    Regress NCR*sun and TCS * 3.66 W/m2 against Tmax and see what comes out. I think this is what you are in effect doing by hand in a rather crude way.

    The result will likely be similar, but the induction defect is removed.
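The multivariate regression suggested here is a one-liner with least squares. A hypothetical sketch with invented numbers (the names NCR and TCS, the sunshine values and the forcing ramp are all stand-ins, not the study's actual data):

```python
import numpy as np

# Hypothetical sketch: solve Tmax = NCR*sun + TCS*forcing + c in one step,
# letting both coefficients float. All numbers below are invented.
rng = np.random.default_rng(1)
n = 57                                   # e.g. the years 1956-2012
sun = rng.normal(1450.0, 80.0, n)        # annual sunshine hours (made up)
forcing = np.linspace(0.0, 1.5, n)       # CO2 forcing ramp in W/m2 (made up)
tmax = 0.004 * sun + 0.8 * forcing + 5.0 + rng.normal(0.0, 0.1, n)

A = np.column_stack([sun, forcing, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, tmax, rcond=None)
ncr, tcs, const = coef
# Both sensitivities are recovered simultaneously, so neither has to be
# fixed by assumption while the other is tuned by hand.
```

Because both coefficients come out of one fit, the circularity of fixing one and tuning the other disappears.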

    If you want to get this published in a serious journal (rather than the politicised rag that Nature has become) this would be a good and probably necessary step to get through a proper peer review process.

  12. Greg Goodman says:

    scatter plot example:

    which slope is “right” , 10 or 16 ?

  13. Greg Goodman says:

    The second example shows features common to a lot of these sites: Lissajous figure loops that indicate phase lag. Also, there are parts of the data where each of the fitted slopes seems good.

    This is not a general feature of this kind of plot and suggests that there are times when the error in sun hours is smallest and others when the error in Tmax is smallest, and that these periods display notably different ratios of sun/Tmax.

    A portion of negative slope links the two, presumably winter. This can easily be checked.

  14. Greg says:

    Euan, could you give a precise reference to the solar data you used?

    The LaRC homepage has datasets for everything under the sun (sic) and it’s hard to know where to look to find the data you used.


    • Euan Mearns says:

      Greg, following your comments up until now: “The larc homepages has datasets for everything under the sun” – what’s this? Clive is on a train crossing Australia for 3 days and totally out of touch. I’m inclined to send you our spread sheets but want to chat with Clive (who I’ve never met) first.

      There seems to be a pre-occupation with R2. One of the things I found is that R2 tends to stay high no matter what we do. And so our focus has been on gradient (Tcalc = Tmax) and intercept (which tends to be zero when gradient =1) and sigma res which like intercept tends to fall in line when the model = reality.

      We’ll be in touch. And thanks for all your interest and input.


  15. Greg says:

    The LaRC comment was about your ref [5]; it is not clear what the data source is.

    A spreadsheet would be good if you can OK it with Clive.

    Here is an explanation why your ‘winter’ period DJF has near zero correlation.

    It is probably coincidental, because it’s not really a feature of the OLS vs inverse OLS discussion, but the usual OLS slopes you are deriving are close to the late-year cooling period, Sept-Dec inclusive.

  16. Greg says:

    1 month phase lag in Nov-March (approx) ; nearer two months in warmer end of cycle.

    • Euan Mearns says:

      Greg, Thanks for all this stuff – I get a feel for what you are doing. Lets wait until Clive gets back and take it from there – I’m trying to write one post / week, so right now my focus is shale. Re ref 5 – that is one for Clive to answer.

      • Greg says:

        OK, there’s plenty to chew on there, and this is really Clive’s part of the effort, it’s true. I’m sure he’ll have something to say about it when he catches up.

        I think this graph is the bottom line as far as the annual cycle goes.

        My impression is that it’s common causality upstream (probably jet stream / AO related). I don’t think this negates your finding but it may need rewording (assuming you both draw the same conclusion from this).

        Inter-annual change is the residual of changes in this relationship, so unless you can invoke a separate mechanism on that time scale the two should tie in in some way.

        I’ll throw in a link to this as well, which helps in understanding the relationship between in-phase and orthogonal components. I’m not sure if this applies to the physical processes here (because I don’t think we know what they are!); however, it is possible that dT/dt will dominate the short term response and T(t) at a longer scale.

        I don’t know if that helps but it gives some idea of how the relationships can seem contradictory on different scales. I’m sure that will mean something to Clive.

        I’ve made lots of suggestions on how to strengthen your results and I hope they are useful and that you manage to find a journal, more serious than Nature, to publish it.

        Best regards. Greg

  17. Greg Goodman says:


    AO is a pressure-related index: the average _height_ of the 1000 mbar isobar, then something like the first EOF.

    I’m a bit wary of the physical reality of EOFs but this one does seem to correlate with a lot of other stuff, so I guess that suggests a good indication of something real. (Climatologists love ’em.)

    It’s fairly well accepted meteorology that the AO (or what it reflects) affects the depth of the Rossby waves in the jet stream, i.e. how far down these waves come into the temperate zones. These are the deep swirls of cloud you see progressing across the N. Atlantic on the weather satellite photos, so it’s no surprise it affects sun hours.

    Gross simplification: low arctic air pressure (high AO) means warmer Britain.

    I chose Durham because as I scanned all the stations in the scatter plots this was one (of several) that stood out as having a nice clean (relatively speaking) repeated pattern without too much messy noise. That suggested it had a better S/N ratio which would help with identifying structure.

    Most stations show something similar so subject to checking I don’t think this is a special case.

    I’m looking at the freq spectrum of the cross-correlation at the moment. That’s looking interesting too. More on that later when I’ve dug into it a bit better. I’ll bite my tongue for now.

    • Euan Mearns says:

      Greg, this is all fascinating stuff. I’m following along, but need time to get my head around exactly what you are doing. I understand charts like this one 🙂 Keep going…..

      • Greg Goodman says:

        Look up Lissajous figures to understand what they show about phase relationships.

        Just plotting points (as many people do) removes the time element, which is silly because it is an essential part of the data when looking at “time series”. That is why I suggested linking the points in your scatter plots.

        From the lag-regressions plots I did yesterday you can see that Tmax vs sun will give max corr. coeff values if you lag sun data by one month ( nearest to the 0.8 months in the lag plot). At least that was the case for Durham, other stations could be checked since I was under the impression it was more like two for average temp.

        Lag-regression finds the lag at which one series can be calculated as a scalar multiple of the other , with minimum least squared errors.

        The lag result in itself would be consistent with a simple causal relationship, however, the phase plot provided by linking the scatter plot shows a more complex phase relationship with the phase lag varying notably throughout the year.

        By joining the dots we see that it is not just a spread due to noise but that there is a clear annual loop. This loopiness reflects the phase relationship and can often be collapsed by adjusting the lag.

        Doing the same thing with d/dt(Tmax) shows a max corr. when dTmax _leads_ sun by about 2.45 months.
        This is again consistent with simple causation, as it is almost a quarter cycle (3 mo) earlier than the Tmax lag.

        Adjusting the lag to find the most consistent phase relationship produces dTmax leading sun-hours by 1.5 months: This makes it clearer how the relationship changes throughout the year. Not surprisingly it is not a simple, fixed linear relationship.

        The other bit of understanding we can get from the phase diagrams is that JJA, which Clive uses, is a period of rising temperature and falling sun-hours. If you are seeking an annual ‘index’ to use in year-to-year comparisons, it may be worth considering whether the three months where each variable peaks would be better than using the same three months where the data are going in opposite directions.

        Again, a suggestion that may sharpen the relationship and inter-annual correlation.

        Hopefully all this will be more directly meaningful to Clive, but if you have questions about what the graphs show, I’ll try to help.

    • Greg Goodman says:

      I have done a spectral density plot of Tmax cross-correlated with sun-hours for Durham.

      Lagged cross-correlation determines the similarity in the structure of the two variables and shows repetitive patterns the two have in common.

      Chirp-z frequency analysis was used to obtain the spectrum.

      There are two sets of three frequencies, which suggests amplitude modulation: a symmetric split either side of a central peak is the spectral pattern created when one frequency is multiplied by another.
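      The triplet pattern is easy to reproduce synthetically: multiplying a carrier by a slower cycle splits its spectral line into symmetric sidebands at f_c ± f_m (illustrative frequencies, not the Durham data):

```python
import numpy as np

# Synthetic illustration of amplitude-modulation sidebands: a carrier at
# f_c modulated at f_m produces lines at f_c - f_m, f_c and f_c + f_m.
fs = 12.0                            # samples per year (monthly)
t = np.arange(0, 64, 1 / fs)         # 64-year record
f_c, f_m = 1.0, 0.125                # annual carrier, 8-year modulation
x = (1 + 0.5 * np.cos(2 * np.pi * f_m * t)) * np.cos(2 * np.pi * f_c * t)

spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
peaks = sorted(freqs[np.argsort(spec)[-3:]])   # three largest spectral lines
print(peaks)                         # ~ [0.875, 1.0, 1.125] cycles/year
```

      The frequencies are chosen to land exactly on FFT bins so there is no leakage; on real data, windowing smears the lines but the symmetric split either side of the carrier survives.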

      It is possibly a coincidence but one of the modulation frequencies is almost exactly the orbital period of Jupiter.

      • Greg Goodman says:

        I was a bit wary of getting too close to the mess around 1 year, since there are a lot of windowing artefacts in the spectral analysis from the over-powering 12-month cycle. However, expanding that region, I have just noticed a strong peak at 1.185 years.

        1.186 a is the Chandler nutation, a periodic wobble in the Earth’s axis. A very close peak (1.199 a) is found in ‘oxforddata’.

        Oxford does not have the same “Jupiter” triplet but it does have 11.89 a directly as a peak. The other two peaks in the 10-20 year region are 9.36 and 18.9, both suggestive of Earth-Sun-Moon alignments reflected in the repetition of the solar eclipse cycle. Since sun and moon are the main drivers of tides, it’s perhaps not that improbable that they could affect cloud over a sea-locked country downwind of the N. Atlantic.

        Still, it is interesting. I did not expect this sort of detail to come out of simple station data.

        • Euan Mearns says:

          Greg, a few months back I asked Clive if he could develop a view on the underlying cause of the cyclicity, and I seem to recall that, without doing too much analysis, he saw some tidal structure in the data.

        • clivebest says:

          I was wondering whether the 18.6-year precession of the lunar declination could possibly be the cause of cloud variation over the UK. The Jet Stream has fluctuations called Rossby waves, meanders that change its position over the UK and America. There have been several studies in the past which showed a correlation between droughts/wet weather and the 18.6 year cycle.

          When the declination is high, the tidal bulge moves further north and accentuates the tidal strength at full/new moon for northern latitudes. There are also atmospheric lunar tides, although they are rather small. Could they be large enough to affect the Rossby waves?
          Then I found this paper published in 2011.

          Monthly lunar declination extremes’ influence on tropospheric circulation patterns.
          Daniel S. Krahenbuhl, Matthew B. Pace, Randall S. Cerveny, Robert C. Balling Jr. DOI: 10.1029/2011JD016598

          “Short-term tidal variations occurring every 27.3 days from southern (negative) to northern (positive) maximum lunar declinations (MLDs), and back to southern declination of the moon have been overlooked in weather studies. These short-term MLD variations’ significance is that when lunar declination is greatest, tidal forces operating on the high latitudes of both hemispheres are maximized. We find that such tidal forces deform the high latitude Rossby longwaves. Using the NCEP/NCAR reanalysis data set, we identify that the 27.3 day MLD cycle’s influence on circulation is greatest in the upper troposphere of both hemispheres’ high latitudes. The effect is distinctly regional with high impact over central North America and the British Isles. Through this lunar variation, mid-latitude weather forecasting for two-week forecast periods may be significantly improved.”

          Over the 18.6-year lunar precession, the maximum declination to the equator varies between about 18 and 28 degrees. This shifts the spring tidal bulge northwards. The cloud data do indeed show some matching until about 1990.


          • Greg Goodman says:

            short version:

            High time resolution, careful data processing and an understanding of A.M. triplets are essential to identifying cyclic drivers.

            Here is clear evidence of a 27.55 d periodicity in Arctic ice coverage:

            Adding the energy of the three peaks in the triplet makes this a very significant signal, whereas individually they may be dismissed as noise.

            This record, at least, suggests perigee cycle rather than declination.

  18. Greg Goodman says:

    Thanks for the ref.
    Richard Holle has also been looking at the effect of declination angle.

    I suspect we may be missing the true cause here, or at least half of it. There is a lunar cycle extremely close to the declination cycle: the perigee cycle, with a period of 27.55 rather than 27.3 days.

    That is so close that it will need some very careful processing to distinguish which it is. One indication I have found is from spectral analysis of daily Arctic ice data:

    Again, an understanding of amplitude modulation and triplets is essential. Combining the three peaks, which show almost perfect symmetry, gives a very significant combined energy peak. In that record, it is clearly the perigee, not the declination.

    In weather generally it may be a combination of both.

    Perigee precession is about 8.85 years. This again is close to 18.6/2 = 9.3, so more possibilities for false attribution on the long time scales.

    There is also possible indication of perigee in trade wind data:

    N. Scafetta has published findings of a 9.1 +/- 0.1 year lunar-attributable periodicity. A similar figure is found in the recent BEST project paper on NH land surface air temperatures.

    I think this is a failure to resolve 8.85 and 9.3 (18.6/2). The average frequency of those two corresponds to a period of 9.06 years, which is nicely within Scafetta’s error margin. This tends to suggest both acting in some way on a hemispheric scale.

    I discussed, some time back, the possibility of perigee variation causing long-term tidal movements towards and away from the equator/tropics. This mass transport implies heat transport, and may account for the 4.43 year period found in trade winds.

    This kind of thing really requires daily data sets, since monthly averaging will remove most of both cycles and alias what remains!
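    A quick synthetic check of how much of a 27.55-day cycle survives 30-day binning (illustrative only, not real station data):

```python
import numpy as np

# Synthetic check: how much of a pure 27.55-day cycle survives 30-day binning?
days = np.arange(365 * 40)                         # 40 years of daily samples
x = np.cos(2 * np.pi * days / 27.55)

# Crush the daily series into consecutive 30-day "monthly" means.
monthly = x[: (x.size // 30) * 30].reshape(-1, 30).mean(axis=1)

# Roughly 90% of the amplitude is removed; what little remains no longer
# oscillates at 27.55 days but drifts at the much longer aliased period.
print(x.std(), monthly.std())
```

    The residual in the binned series is the aliased signal: weak, and at a spurious long period, exactly as described above.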

    Re 1990: this was the reversal in trend of the AO (also reflected in the length of the Arctic melting season). I have also shown strong correlation between d/dt(CO2) and the AO since 1995, less so earlier, when SST seems to dominate.

    Be aware that there are different patterns during warming and cooling phases of climate, so do not be too ready to dismiss the pre-1950 data as purely a clean air act effect. That may have been a factor, but things can show different patterns, especially in phase lag, during cooling.

    Circa 9 y is sadly one of the frequencies that gets disrupted by Hadley processing. Again, this may simply be the effect of working with monthly averages, since most other frequency peaks do not seem notably affected.

    The irony is that the simplistic choice of "monthly" as an averaging period may have been masking/disrupting some of the fundamental natural cycles in climate.

    We inherit the system of months because ancient civilisations recognised the importance of the moon. Then, when doing climate analysis, we chose it as a convenient 'round number' over which to average. There is no scientific reason to choose a month, which is actually a rather bad choice because it is also irregular.

    I've been have a protracted discussion on Judith's site trying to explain the signal processing defects of monthly averaging and incorrect sub-sampling procedures.

    Sadly, monthly averages are pretty much the norm in climatology, which may explain the failure to detect a lunar influence (as well as not looking 😉 ).

    We would likely get more from daily station data if that can be accessed.

  19. Greg Goodman says:

    Euan, could you please add this email to your white list too. WordPress keeps inserting it for me and if I don’t notice, my posts get held back. Thanks.

  20. Greg Goodman says:

    Ok, it’s not the address. Probably too many links, I assume it’s in moderation.

  21. Euan Mearns says:

    Greg, I’ve increased the number of permissible links to 10. The site was also down for an hour this morning – a problem with servers at iPage. This is all fascinating, but let’s not lose sight of our primary objectives, which are 1) to have better averaging of the time-data series and 2) to have a better understanding of the underlying cyclicity.

    • Greg Goodman says:

      ” 2) to have better understanding of the underlying cyclicity”

      Well, for that I fear we need to dig into all the phase plots, cross-correlation and frequency analysis stuff. Since many of these effects in climate will modulate each other, it requires some spectral detective work, like recognising spectral splitting.

      I threw in a few examples of this kind of thing I’ve found because, after all, it’s all linked.

      The AO seems to be linked to both Tmax and sun-hours in the Durham data, as well as to atmospheric d/dt(CO2) at MLO (in a causal sense).

      Sorry it’s not simpler, but I think the main point is not to try and refute a simplistic CO2 driver with a simplistic cloud driver (even though the latter is much more credible IMO).

      There clearly are fairly strong cyclic components, but characterising them (let alone explaining them) requires some proper signal-processing effort.

      This is essentially what climatology has lacked so far.

  22. Greg Goodman says:

    Thanks Euan.
    I may have been a bit verbose, but those issues are at the heart of what I’m showing. If there is a lunar signal in there (and I think there is a strong possibility), then monthly data will do a lot to remove it, or to transform it into some spurious long-period cycle by aliasing. (Search anti-alias filter for detail on what that means.)

    You won’t find the 27.55 d cycle by looking at monthly averages! By inference that may also destroy, or at least severely attenuate, the 8.85 and 18.6 year signals linked to monthly lunar drivers.

    If you want better “averaging” – I assume by that you want to filter out the year-to-year variation in JJA – then try a 5,4,3 triple running mean. It is a bit approximate but will be a lot less distorting than a simple 5 y RM.

    If you are talking about the full time series of monthly data and you want to remove the seasonal variation, use a 12,9,7 month triple RM. It is a well-behaved filter with zero 12-month signal getting through (and can easily be done in your spreadsheet 😉 ).
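    The 12,9,7 triple running mean can be sketched in a few lines (a minimal illustration; function names are mine):

```python
import numpy as np

def running_mean(x, w):
    """Centred running mean of window w ('valid' region only)."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

def triple_rm(x, windows=(12, 9, 7)):
    """Cascaded 12,9,7-month running means, as suggested above."""
    for w in windows:
        x = running_mean(x, w)
    return x

# Sanity check: a pure 12-month cycle is removed to numerical precision,
# because the first (12-point) stage averages over exactly one period.
t = np.arange(600)                                 # 50 years, monthly
annual = np.cos(2 * np.pi * t / 12)
print(np.abs(triple_rm(annual)).max())             # effectively zero
```

    The 9- and 7-month stages then suppress the side lobes that a single running mean would let through.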

  23. Greg Goodman says:

    Judith has posted my article on running-mean distortion, and something has just come out of trying to explain it to someone.

    678 days = 1.855 years.
    1/[(1/27.56 – 1/30)/2] = 678

    Bingo! The 1.85 year peak is what you get when you average data containing a 27.55 d cycle.

    Aliasing with 31 days gives 496 d = 1.36 a.

    On the same reckoning, 27.3 d will give 1.66 years.
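    The quoted arithmetic can be checked directly (periods in days; the helper name is just for illustration):

```python
# Checking the quoted arithmetic: period of the half-difference frequency.
def half_diff_period(p1, p2):
    """1 / [(1/p1 - 1/p2) / 2], the expression used in the comment above."""
    return 1.0 / ((1.0 / p1 - 1.0 / p2) / 2.0)

print(round(half_diff_period(27.56, 30)))   # 678 d = 1.855 a
print(round(half_diff_period(27.56, 31)))   # 497 d = 1.36 a
print(round(half_diff_period(27.3, 30)))    # 607 d = 1.66 a
```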

    Subject to verification, but this looks like a strong peak even after being mostly averaged out.

    • Greg says:

      Oops, good job I put in the caveat. I used the calculation for the modulation frequency, not aliasing; the alias period is half the value quoted.

      Since the averaging periods are very close to the lunar cycles, most of the signal will be removed anyway and the aliased frequency will be weak unless the signal is very pronounced. We really need daily data to determine whether there is a significant lunar signal here.

      Sadly, John Kennedy has not yet replied to my enquiry about accessing daily station data.

      Perhaps you would like to enquire of the Met. Office yourselves.

      The cross-correlation with the AO looks interesting, and that index is available in daily format. Showing that Tmax and cloud are linked to long-term changes in the AO would nicely back up the central thrust of your paper.

      Looking at the individual station data was a good idea. We get too bogged down in these statistically dubious global means of monthly means of temperature anomalies.

      Grave problems of over-processing. Getting back to the raw data would be preferable.

  24. clivebest says:


    The moon does affect climate in at least three ways:
    – It changes the Earth-Sun distance every month by about 8000 km, as the Earth-Moon system orbits its centre of mass.
    – Moonshine on Earth – reflected solar radiation at night.
    – Tidal effects on the atmosphere and oceans.

    The first two are rather small – about 0.02 C change in surface temperature per month.
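    That the distance effect is small can be seen with a rough inverse-square estimate, taking the ~8000 km figure above as given (round-number values assumed):

```python
# Rough order-of-magnitude check on the Earth-Moon barycentre effect.
R_KM = 1.496e8            # mean Earth-Sun distance, km
D_KM = 8.0e3              # quoted monthly change in Earth-Sun distance, km
S0 = 1361.0               # total solar irradiance, W/m^2

# Inverse-square law: fractional change in irradiance ~ 2 * dR / R
d_s = S0 * 2 * D_KM / R_KM
print(d_s)                # ~0.15 W/m^2 - tiny against S0 = 1361 W/m^2
```

    A swing of order 0.1 W/m^2 in irradiance is indeed negligible next to the seasonal and cloud-driven variations discussed in the post.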
    Tidal effects in the atmosphere are much stronger at the poles and at high altitudes. The positioning of tides with latitude depends on the 18.6 year cycle, and the strength of tides depends on the perigee cycle. It is clear that there must be some sort of effect on weather systems at high latitudes; the question is whether it is significant or not.

    There are some old studies which found a long-term correlation of droughts in N. America and China with the 18.6 year cycle. The tidal effect on the polar air mass might affect the jet stream and accentuate monthly variations when the declination is highest. This might then affect storm systems.

    I also found a 9.6 year signal in the HadCRUT3 monthly data.

Comments are closed.