# The Half Life of CO2 in Earth’s Atmosphere – Part 1

• Sequestration of CO2 from the atmosphere can be modelled using a single exponential decay constant of 2.5% per annum. There is no modelling need to introduce a multi time constant model such as the Bern Model.
• The Bern model, favoured by the IPCC, uses four different time constants; combining these produces a decay curve that is not exponential but that can also be made to match emissions to the atmosphere.
• The fact that both single exponential decline and multi-time constant models of emissions can be made to fit atmospheric evolution of CO2 means that this approach does not provide proof of process. Either or neither of these models may be correct. But combined, both of these models do provide clues as to the rate of the CO2 sequestration processes.

Single time constant, 2.5% per annum, exponential decline model gives an excellent fit between emissions (LH scale, coloured bands) and actual atmosphere (RH scale, black line). This confirms Roger Andrews' assertion from a couple of weeks ago that it was possible to model sequestration of CO2 from the atmosphere using a single decline constant. The blue wedge at bottom is the pre-1965 emissions stack, which is also declined at 2.5% per annum. The half life of ~27 years is equivalent to a residence time for CO2 of 39 years, slightly but not significantly different from Roger's result.

A couple of weeks ago Roger Andrews had a post called The residence time of CO2 in the atmosphere is …. 33 years? that stimulated a lot of high quality debate, and for me a lot of new information came to light. This is the first part of an X part series of posts aimed at summarising what we know and zeroing in on “the truth” about CO2 sequestration rates from the atmosphere.

In this post (Part 1) I look at a single time constant exponential decline model, compare this with Roger’s model and the Bern model favoured by the IPCC and the climate science community. The Bern model uses 4 different time constants from fast to slow and infinity and this post illustrates how this works. I am not a mathematician and prefer visual illustration of mathematical equations.

This post has been significantly delayed because I could not get my XL model to produce the same results as Roger's. Our models now broadly agree, though they may still produce slightly different results.

In Part 2 I will discuss the Atomic bomb 14C data and the proportional reservoir growth model put forward by Phil Chapman.

Why half life is important

We hear all the time from the climate science community that even if we stop burning fossil fuels (FF) today, the CO2 we have already produced will still be in the atmosphere for centuries to come. We have already lit a long slow burning fuse that will lead us to climate Armageddon. How much of this is actually true?

What we kind of know for sure is that in 1965 the mass of CO2 in the atmosphere was roughly 2400 Gt (billion tonnes) and today (2010) it is roughly 2924 Gt. That is an increase of 524 Gt. And we also know that we have added roughly 1126 Gt of CO2 to the atmosphere through burning FF and deforestation (emissions model from Roger Andrews). And so while Man’s activities may have led to a rise in CO2 the rise is only 46% of that expected from our emissions. Earth systems have already removed at least 54%. How is this reconciled with the warnings of climatic meltdown?

To understand this requires understanding of the very complex carbon cycle, but in short, some of our emissions have been dissolved in ocean water and some have been taken up by enhanced forest and plant growth. Both of these enhanced uptakes are brought about by the increased partial pressure of CO2 in the atmosphere.

Understanding exponential decline and half life

In the context of atmospheric CO2, imagine a slug of CO2 added to the atmosphere, like manmade FF emissions, and how it may decline via sequestration into the oceans and trees. If the initial slug is absorbed by 5% in the first year, and 5% of the remaining 95% the following year, and so on, then the decline curve would be like that shown in Figure 1. The half life is the time it takes for 50% of the initial slug to be removed. In the case of 5% per annum decline it turns out that the half life is about 13 years (t1 in Figure 1). After another 13 years (t2) another 50% of what was there after t1 is removed, and so on. As a rule of thumb, after 5 half lives have passed there is hardly anything left of the original slug.

Figure 1 This chart illustrates how a pulse of 13.3 billion tonnes (Gt) of CO2 injected into the atmosphere in 1965 would decay if 5% of the remaining CO2 is removed each successive year. After 13 years (t1) 50% of the pulse has been sequestered. 50% of the remainder is sequestered in the following 13 years (t2) and so on.

The residence time is defined as follows:

Half life / 0.693 = Residence time (where 0.693 = ln 2)

My XL spread sheet model has the exponential decline rate as the main input variable where:

P2 = P1*r

P1 = initial amount
P2 = amount remaining after 1 year
r = the annual decline rate. For example, for annual decline of 5% r=0.95

My spread sheet gives essentially the same result as the general continuous decline formula:

P(t) = P0*e^-rt

P0 = initial amount
P(t) = the amount remaining after time t
t = time in years
r = the decay rate (strictly r = -ln(0.95) ≈ 0.051 for a 5% annual decline, which is close to 0.05 for small rates)

It also allows me to estimate half life from the output. Therefore in this discussion I will stick to using decline rate and half life as illustrated in Figure 1 and where possible avoid using the more abstract residence time term.
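These relationships can be sketched in a few lines of Python (an illustration only; the post's own model lives in an XL spreadsheet):

```python
import math

def half_life(annual_decline):
    """Half life in years for a constant fractional annual decline.
    annual_decline=0.05 means 5% of the remaining CO2 is removed each year."""
    r = 1.0 - annual_decline              # fraction remaining after one year
    return math.log(0.5) / math.log(r)

def residence_time(t_half):
    """Residence time = half life / ln(2), i.e. half life / 0.693."""
    return t_half / math.log(2)

print(round(half_life(0.05), 1))                    # ~13.5 y, the t1 of Figure 1
print(round(half_life(0.025), 1))                   # ~27.4 y
print(round(residence_time(half_life(0.025)), 1))   # ~39.5 y
```

For a 2.5% decline this returns the ~27 year half life and ~39 year residence time quoted below.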

Single time constant, multi pulse model for the atmosphere

This may sound complicated but I hope to make it simple to understand. Roger already laid the groundwork with a chart showing the same thing as Figure 2. In Figure 2, the single pulse declining at 5% per annum (Figure 1) is the layer labelled (1) in 1965. The next year there is a new pulse (2) that declines at the same rate, the next year another pulse (3), and so on. The size of each annual pulse equals emissions for that year. So we have multiple pulses but they all decline at the same rate of 5% per annum. After 13 years, half of pulse one is gone, and so forth. In the 16 years shown in Figure 2 a total of 289 Gt of CO2 is added to the atmosphere but sequestration has removed 79 Gt, meaning that only 210 Gt remain; that is the height of the 1980 column.

Figure 2 In his earlier post, Roger produced a chart near identical to this and one reason for reproducing this here is to show that we are both singing from the same spread sheet. The pulse shown in Figure 1 is that labeled as number (1) on the chart. The next year there is a new pulse, scaled to the emissions model, that also decays at 5% and so forth. Because of sequestration into the oceans and biosphere the amount of CO2 left in the atmosphere is always much lower than the amount we have added.
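The multi pulse bookkeeping is just one line of arithmetic per year: decay last year's stock, then add this year's pulse. A sketch in Python (the emissions ramp here is hypothetical, standing in for Roger's emissions model):

```python
def atmosphere_stock(emissions, annual_decline):
    """Multi-pulse, single time constant model: every year's emission pulse
    decays at the same fractional rate. Returns the CO2 remaining in the
    atmosphere at the end of each year."""
    r = 1.0 - annual_decline
    stock, series = 0.0, []
    for pulse in emissions:
        stock = stock * r + pulse   # decay last year's stock, add this year's pulse
        series.append(stock)
    return series

# Hypothetical emissions ramp (Gt CO2/yr) standing in for the 1965-1980 data
emissions = [13.3 + 0.4 * i for i in range(16)]
stock = atmosphere_stock(emissions, 0.05)
print(round(sum(emissions), 1), round(stock[-1], 1))  # cumulative additions vs remaining
```

However the input series is shaped, the stock always ends up well below the cumulative additions, which is the point of Figure 2.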

We can now expand this model to a full time series, 1965 to 2010 and adjust the exponential decline rate that the model uses to produce a best fit between the model and the observed evolution of CO2 in the atmosphere (Figure 3). The atmosphere model is based on 750 Gt C in the atmosphere in 1998 (IPCC Grid Arendal) when the atmosphere had 367 ppm CO2. The C content of the atmosphere is then projected backwards and forwards from that date in proportion to annual CO2 concentrations from Mauna Loa. The data are then converted to Gt CO2 by multiplying by 44/12 (the molecular weight of CO2 / atomic weight of carbon).

Figure 3 The model is now expanded to include all years from 1965 to 2010. The black line (right hand scale) is the atmosphere based on observed CO2 at Mauna Loa. The decline rate was adjusted to give this “best fit”. Notably, 1126 Gt CO2 has been added but only 516 Gt remains, which fits the overall observation of sequestration removing ~54% of emissions. In detail, the fit of the emissions to the atmosphere is not as good as that achieved by Roger.
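The atmosphere conversion described above can be sketched as follows (the 320 ppm value for 1965 is my illustrative assumption, not a quoted datum):

```python
# Anchor: 750 Gt of carbon in the atmosphere in 1998, when CO2 was 367 ppm
GT_C_1998 = 750.0
PPM_1998 = 367.0
C_TO_CO2 = 44.0 / 12.0   # molecular weight of CO2 / atomic weight of carbon

def ppm_to_gt_co2(ppm):
    """Scale an observed CO2 concentration (ppm) to Gt CO2 via the 1998 anchor."""
    return GT_C_1998 * (ppm / PPM_1998) * C_TO_CO2

print(round(ppm_to_gt_co2(367.0)))   # 2750 Gt CO2 in 1998
print(round(ppm_to_gt_co2(320.0)))   # roughly the ~2400 Gt quoted for 1965
```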

It was at this point that I encountered the first problem trying to reconcile my model with Roger's. The fit of additions to atmosphere is nothing like as good as that achieved by Roger, and the half life yields a residence time of 18.8 years, somewhat different from Roger's 33 years. The problem with the model shown in Figure 3 is that it is built on a flat baseline that does not account for the decline (sequestration) of pre-1965 emissions.

Building in decline to the pre-1965 emissions produces an excellent fit between emissions and the actual evolution of the atmosphere (black line), with a half life of 27 years, equivalent to a residence time of 39 years: closer to, but not exactly the same as, Roger's result (Figure 4).

Figure 4 Single time constant, 2.5% per annum, exponential decline model gives an excellent fit between emissions (LH scale, coloured bands) and actual atmosphere (RH scale, black line). This confirms Roger Andrews' assertion from a couple of weeks ago that it was possible to model sequestration of CO2 from the atmosphere using a single decline constant. The blue wedge at bottom is the pre-1965 emissions stack, which is also declined at 2.5% per annum. The half life of ~27 years is equivalent to a residence time for CO2 of 39 years.

A major conclusion of this post, therefore, is that emissions can be fitted to the observed evolution of the atmosphere using a single time constant model with a 2.5% per annum decline rate. Credit for this really has to go to Roger Andrews, if no one else has achieved it before. To achieve this fit, it is essential to have a model where the longer term emissions also decline. In my model, the emissions are initiated in 1910.

At this point it is important to stress that matching emissions to observations assumes that all of the rise in atmospheric CO2 comes from emissions. As we shall see in Part 2, the atomic bomb 14C data suggest a much more rapid decline of 7% per year, yielding a half life of ~10 years, which creates the need for some of the increase in CO2 to come from other sources. I hope to show why the bomb data give a false picture.

This leads into consideration of the Bern Model, which has multiple time constants. If it is possible to get a good model fit using a single time constant, why use four? Doing so has led to much debate on sceptic blogs, since it is difficult to conceptualise why different processes should discriminate between different parts of the overall CO2 budget. For example Willis Eschenbach, writing on WUWT:

So my question is, how do the sinks know the difference? Why don’t the fast-acting sinks just soak up the excess CO2, leaving nothing for the long-term, slow-acting sinks? I mean, if some 13% of the CO2 excess is supposed to hang around in the atmosphere for 371.3 years … how do the fast-acting sinks know to not just absorb it before the slow sinks get to it?

The Bern Model

I have not found it easy to find information on the Bern model simply through Google. And it is worth declaring that until a few weeks ago I had barely heard of it. I have this from Clive Best via email:

AR4 page 213 of WG1 defines the BERN model as

a0 + sum(i=1,3)(ai.exp(-t/Taui)) , Where a0 = 0.217, a1 = 0.259, a2 = 0.338, a3 = 0.186, Tau1 = 172.9 years, Tau2 = 18.51 years, and Tau3 = 1.186 years.

The term a0 is the fraction which remains for ever in the atmosphere ( tau = infinity) – roughly 22%

Of course it doesn’t stay in the atmosphere for ever. It is eventually removed through rock weathering and the build-up of sediments on the ocean floor. Tau > 1000 years.

This is translated into the following:

| Time constant (Tau) | % of annual pulse removed at that rate |
|---|---|
| 1.2 y | 18% |
| 18.5 y | 34% |
| 173 y | 26% |
| ∞ | 22% |

The Bern model may therefore be described as a multi pulse, multi time constant model. In real terms it says that certain processes sequester CO2 emissions very quickly, for example solution into ocean water; some act more slowly, for example removal by tree growth and soils; and some act very slowly, for example removal of surface water CO2 into the deep oceans.
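For reference, the AR4 formula quoted above codes up directly; Figures 5 to 8 effectively plot the four terms of this sum separately (a sketch, using the coefficients from Clive Best's email):

```python
import math

# Bern impulse response as quoted from AR4: a0 + sum of three decaying exponentials
A   = [0.217, 0.259, 0.338, 0.186]
TAU = [172.9, 18.51, 1.186]   # a0 has no time constant (tau = infinity)

def bern_remaining(t):
    """Fraction of an emission pulse still airborne t years after emission."""
    return A[0] + sum(a * math.exp(-t / tau) for a, tau in zip(A[1:], TAU))

print(round(bern_remaining(0.0), 3))     # 1.0: the coefficients sum to unity
print(round(bern_remaining(100.0), 3))   # ~0.36 still airborne after a century
```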

Figures 5, 6, 7 and 8 show what these different time slices look like, weighted according to the % of emissions each applies to. Note the variable Y-axis scales; the very fast slice accounts for virtually none of the accumulated CO2 growth. These models are built on a flat baseline and so are for illustrative purposes only.

Figure 5 The super fast time constant removes most of annual CO2 additions within 5 years. This is represented by the thin yellow band in Figure 9.

Figure 6 The second time constant of 18.5 years applied to 34% of emissions is the only one that resembles the single time constant, single pulse model (Figure 3). Note this is slightly convex up while the next two charts are concave up and combining the two provides a way of producing the observed linear increase in CO2.

Figure 7 With the spread sheet model I’m using it is difficult to model the T173 slice, so I set the decline to 0.1%. Over the 45 year time scale from 1965 to 2010 this makes little to no difference. While Figure 5 shows little carry over from one year to the next, the T173 slice shows virtually 100% carry over. The concave up style of this slice cancels the convex up style of the T18.5 slice.

Figure 8 The T∞ slice has decline set to 0 and is virtually the same as the T173 model shown in Figure 7.

Adding the 4 time constant slices together provides the picture shown in Figure 9.

Figure 9 Combining the slices shown in Figures 5, 6, 7 and 8 produces the picture shown here. It is important to understand that this is a picture of what remains, not what went in. By tweaking the input variables of the Bern model or the atmosphere model it should be quite straightforward to produce a better fit. The pre-1965 emissions (underlying decline) are modelled as a single exponential, which is an approximation since Bern is not an exponential decline model. It would be a lot of work to adapt my spread sheet to handle the underlying decline in the proper way. Doing so would likely improve the fit. The purpose here is to illustrate how the model works.

So where does this leave our understanding? Since CO2 emissions can be matched to atmosphere evolution using different modelling approaches it is clear that the approach of matching model to observations provides no proof of the underlying process. In the Bern model, 48% of emissions remain in the atmosphere for a long time. I do not believe that the model itself provides any evidence that this is the case.

In this comment to Roger’s post Phil Chapman presented an idea of redistribution of a slug of CO2 between the fast reservoirs. The atmosphere began with 21% of the fast reservoir CO2 and Phil argued that following multiple slugs, once equilibrium was reached, the atmosphere would end up with 21% of the increased amount of CO2 circulating between the fast reservoirs (assuming linear processes). In other words, 21% of emissions will remain in the atmosphere until the slow fluxes have time to remove it. This idea rhymes with the Bern model and I am currently thinking along the lines of a model that does not decline to zero but to 21% above the original baseline.

What about Willis Eschenbach’s enigma? At the moment I think it may be helpful to think about this from a different angle. We seem to know that different processes remove CO2 at different rates. They are all acting simultaneously. Hence, maybe some of the slow processes get to CO2 emissions before the faster processes grab them? But it is possible that sequestration is dominated by fast processes, just that at equilibrium 21% of emissions may remain.

In part 2 I hope to illustrate using simple models why the bomb 14C cannot be used to model CO2 sequestration rates. And I currently believe that for the same reasons natural variations in d13C are unlikely to be useful tracers either. I will also take a closer look at Phil Chapman’s idea (I do not know if this is originally Phil’s idea) and see how this may be incorporated into a refined model.


### 58 Responses to The Half Life of CO2 in Earth’s Atmosphere – Part 1

1. Euan Mearns says:

This was surprisingly easy to do. The model would say that 79% of emissions are removed by fast processes with mean Thalf of 17 years, all gone after about 85 years. 21% hang around for a long time until a new equilibrium is reached.

2. Joe Public says:

Wow Euan. I didn’t expect to continue my Physics / Chemistry / Maths lessons today. I’ll need to re-read it to assimilate the facts & principles.

[PS one phrase made me smile – “What we kind of know for sure is …”. Humour is always welcome, even when unintentional. ;-)]

3. Euan Mearns says:

Wow so many comments! But thanks Joe. Thought I’d air this explanation of why bomb 14C can’t be used for a bit of peer review before I write the post on it. Hopefully it is self explanatory 😉

What happens is that atmosphere enriched in 14C gets inhaled by forest or ocean where it mixes with CO2 depleted in 14C. The CO2 that gets exhaled is not all the same CO2 that got inhaled and is consequently depleted in bomb 14C. The model is very simplified 😉 But at T2 I’m imagining the atmosphere has the same CO2 ppm as at T1. But it is depleted in bomb 14C. The depletion has nothing to do with the rate of CO2 sequestration but results from dilution of the atmosphere’s bomb 14C in the fast reservoirs by two component mixing.

• This is a very important conclusion. I can’t wait for the post.

• Euan Mearns says:

Well Roger, it was you that said the bomb decline rate must be wrong. It was easy to accept the bomb data especially when folks wanted to believe it. So then you look to see if it might be wrong and then quickly conclude it can’t be right. Same will apply to d13C data.

• Willem Post says:

Euan,
A Dutch scientist, Dr Priem, wrote a very long article about the CO2 impact on the geological record over the life of the earth.
It was published in Energy and Environment.
It was sent to me as a PDF, not as a URL.
I could send it to you as an attachment to an email.

• Euan Mearns says:

Willem, I will mail you tomorrow. Euan

4. Hi Euan:

Good summary!

I think the key question on the validity of the four-component Bern model is the one posed by Eschenbach:

how do the sinks know the difference? Why don’t the fast-acting sinks just soak up the excess CO2, leaving nothing for the long-term, slow-acting sinks?

I’m still wrestling with this question without so far having come up with an answer, but will be back if and when I come up with one 😉

• Euan Mearns says:

Roger, I’d be interested to know if you can replicate my 2 Tau model

21% @ ∞
79% T 1/2 ~ 17y

You will likely have to tweak the 79% exponential part since your model handles the underlying decline differently / better than mine. Incidentally, I don’t believe the 2 Tau model will have a half life since the aggregate decline is non-exponential. The same applies to Bern.
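A minimal sketch of this 2 Tau model (assuming the 21% / 79% split and ~17 year half life given above):

```python
import math

def two_tau_remaining(t, inf_frac=0.21, t_half=17.0):
    """Fraction of a pulse remaining after t years: inf_frac persists
    indefinitely, the rest decays exponentially with the given half life."""
    k = math.log(2) / t_half
    return inf_frac + (1.0 - inf_frac) * math.exp(-k * t)

print(round(two_tau_remaining(0.0), 3))    # 1.0 at the moment of emission
print(round(two_tau_remaining(85.0), 3))   # after 5 half lives: close to the 21% floor
```

As noted, the aggregate curve is non-exponential, so it has no single half life of its own.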

• Dennis Coyne says:

Hi Roger,

Most of the fast decline in CO2 is into the surface layer of the ocean, which becomes saturated over short (1 year) time frames. It takes time for the ocean circulation to mix the upper layer with lower layers, and there are seasonal plankton blooms which are a big part of the flux of CO2 into and out of the ocean. Some of the dead plankton that is not consumed by scavengers and does not fully decompose before sinking to the bottom of the ocean will be sequestered. The cycle of carbon into (and out of) deep soil and the deep ocean is on the order of thousands of years.

I have attempted to use a single time constant (exponential decline) to extend your analysis from 1751 to 2013, where we have anthropogenic emissions estimates from the CDIAC (http://cdiac.ornl.gov/). The results are not very good. If we extend the analysis back to 1600 and assume emissions at 1751 levels from 1600 to 1750, the results are worse.

If the fast acting sinks soaked up all the CO2 in the manner that Eschenbach suggests, there would not have been the relatively constant levels of CO2 in the atmosphere (from 270 to 280 ppm over the period from 3000 BC to 1800 AD).

• Euan Mearns says:

Dennis, I don’t think you can do what you say you have done. Our model has an input called “Roger Emissions Model”. We apply an exponential decline to the emissions. Back in 1751 there were no emissions and so our model will simply return what the atmosphere actually was back then based on Mauna Loa.

As for fast sinks, see my comment at foot of thread.

• Dennis Coyne says:

Hi Euan,

I use emissions from the CDIAC; they agree pretty closely with what is in your post. There are emissions data back to 1751, see
http://cdiac.ornl.gov/ftp/ndp030/global.1751_2010.ems

I did the same basic model as you did from 1751 to the present. The model does not match reality very well using a simple exponential decline. Note that your 2.5% estimate works best if we add an extra 58 Gt per year to the anthropogenic emissions for 1900 to 2013, but fails as we go farther back in time (1751 to 1900). In fact, if we fit the two periods separately we get very different models.

• Euan Mearns says:

Dennis, here is Roger’s emissions model. So I find it difficult to understand your comment still. Works OK from 1900 to 2013 which is when all the emissions took place. But not during the time when there were virtually no emissions. If it doesn’t work pre-1900 then either the emissions model is wrong or there’s another process causing atmosphere variability that gets swamped by emissions later on. Who was counting how many trees got burned in China way back then?

I might also add that ice core CO2 is just a proxy for atmosphere.

5. Yvan Dutil says:

There are many time constants because you have many reservoirs, each acting on its own time constant. You have the biosphere (fast), the ocean (slow) and geological processes (very slow). If you put in a pulse of CO2, each one will absorb it until it becomes saturated.

• Euan Mearns says:

Yvan, I agree. But the broader philosophical question is what the atmosphere sees. And also, is there any physical justification for Bern having such a large portion allocated to very slow and infinitely slow processes?

Regarding my first point, imagine rapids in a river. There could be some very fast laminar flow in the middle. Less fast turbulent flow either side, and at the edges, eddies and bays with slow flow. Mathematically you could create a very complex model to describe and integrate all these different flow regimes. Or you could describe it very simply with a single equation – the average flow rate.

6. Yvan Dutil says:

It is my understanding that the slow fraction is determined from the observation of geological weathering processes. Bern is a collage of various inputs from other research fields and is constrained by observations. You cannot do that only by fitting the CO2 curve over time.

Actually, this is why you get almost the same result with your simple fitting. You don’t have enough energy at either high or low temporal frequencies to constrain the model.

• Euan Mearns says:

Yvan, I am a simple geologist and isotope geochemist. I have yet to fully grasp the very slow geological weathering sink for CO2. What I do know is that subaerially exposed limestones must be dumping significant quantities of C into the atmosphere.

You say I don’t have enough high or low temporal frequency. Which model and how do you know that? Links to peer reviewed literature (preferably public domain pdfs) are acceptable.

Best Euan

• Yvan Dutil says:

Just do the FFT of the CO2 rise curve and the FFT of the emissions. There is no signal beyond 150 years because the rise had not yet started. Also, you have little short term signal because there is little high-frequency modulation in the emissions. This limitation comes from the nature of the data used.

• Euan Mearns says:

Just do the FFT of the CO2 rise curve and the FFT of the emission.

Yvan, a major objective here is to educate and to build bridges. You say I should just do a fast Fourier transform of the CO2 rise curve… I already told you I can’t do math and I am a fairly simple geologist seeking truth. So can you perhaps elaborate with some simple explanations that everyone will understand.

A simplistic view of Bern is that about 50% of emissions are sequestered and 50% hang around a bit longer on geological time. There must be some simple and easy to understand and bomb proof explanation for Bern that everyone can understand.

I gotta sign off for tonight. Hopefully resume tomorrow.

• Sam Taylor says:

Euan,

I believe he’s trying to say that by doing the FFT of the CO2 data will allow you to see the contribution of signals of various frequencies to the overall data. Analysis of this would allow you to figure out how big of a contribution to the CO2 data fast (high frequency) and slow (low frequency) contributions are making.

His point is basically that the time series isn’t long enough to resolve the low frequency components adequately, and the faster processes are hidden because the data is fairly low resolution/smoothed.

• Yvan Dutil says:

Exactly, Sam.

• Euan Mearns says:

This is one for Clive Best if he calls back. But if you see my other comments you’ll see that I don’t believe the slow processes have any relevance at these human time scales. If it were possible to do a spectral analysis to resolve a number of fast processes that would be interesting.

• Euan Mearns says:

Yvan here is a chart made by Roger. Are you saying that applying the FFT to these emissions data can be used to predict the future?

• Yvan Dutil says:

No, but you will see that you have almost no energy at high frequencies or at low frequencies. This means that you are blind to fast and slow processes.

Surely CO2’s residence time in the atmosphere is partly controlled not just by the size of the sinks, but by the speed at which each sink can absorb CO2, and by the potential saturation of the various sinks?

With a body of water, for example, the surface layer saturates quickly, and it takes time for turbulence and subduction currents to move that saturated water below the surface to be replaced by unsaturated water again, until the whole water body reaches equilibrium with atmospheric CO2’s partial pressure (and all that) at the then-prevailing temperatures. A process of minutes to decades. Stronger surface winds (a product of a more energetic climate) may increase exposure of unsaturated water to the atmosphere (hence increasing the rate of the sink), but I also understand it may be causing upwelling of already over-saturated water in some areas, leading to the ocean releasing rather than absorbing CO2 there. Not a simple thing to model on a global basis.

With carbonating rocks, the CO2 is trapped on the exposed surface of the rock and that rock has to be broken up by weathering, mechanical and chemical processes or transported into the ocean to reveal fresh rock surfaces where further capture of CO2 can occur. That’s a process of decades to millennia, wherein eventually the CO2 and other gasses in the subducted rock are taken into the mantle to eventually be returned to the atmosphere via volcanoes and vents principally along upwelling tectonic plate boundaries.

So every sink has its own characteristics, and (without transport of some kind or some other mechanism) most sinks will saturate fairly quickly. Consideration of this effect is (I suggest) a necessary component of any useful model of the life of CO2 in the atmosphere.

I think simply modelling a 27 year half life across all sinks is far too simplistic. In the very short term that approach may fit the observed curves, but unless you properly reflect the longer term characteristics of each sink and its replacement or rejuvenation rate I don’t think your extrapolation will provide us with any useful information for scales beyond a year or so.

This is not an obscure art, and I’m sure there are many scientists who have their heads around this far better than this poor blogger. My conclusion tho, is that I don’t buy your approach as it entails perpetual full-rate absorption by all sinks, and to my mind that is not a sufficient or even a correct explanation of the behaviour of this aspect of our atmosphere.

You could perhaps ask Dr Hansen for a comment – I believe he may have done some work on this.

8. Euan Mearns says:

The fact that both single exponential decline and multi-time constant models of emissions can be made to fit atmospheric evolution of CO2 means that this approach does not provide proof of process. Either or neither of these models may be correct.

So we agree on that point 🙂 And so when you say you are not buying my post, what specifically is it that you are not buying?

I agree with most of what you say about the dynamics and capacity of different sinks. But I do wonder if the fast sinks cannot all be modelled as one. The problem there would be non-linearity with time. I have nowhere made any prediction.

The bit I disagree strongly with, in fact I think it is utter rubbish for at least two reasons, is what you say about rock weathering. What on earth is a “carbonating rock”? Let’s start with an image of weathered rocks on their way to the sea:

The main carbon processes observable in this image are forests and soils. The notion that the surface of the mountains being weathered is some kind of C sink is a crazy notion to me. The same goes for the clastic sediments being rolled around in the braided river system. Most of the surface rocks on Earth are silicates, though they often contain subordinate amounts of carbonate minerals, and weathering these will release CO2 to the atmosphere, not absorb it. The other sinks we normally talk about are forests (the biosphere), which eventually become oil, gas and coal, and the oceans, where both biological and chemical processes sequester CO2 in carbonate minerals. The one important sequestration process I never hear about is diagenesis, the process of forming minerals in sediments buried in sedimentary basins. That is a very slow process and not really relevant to the debate, which is about where 50% of Man’s emissions have gone. It would not surprise me if Dr Hansen has confused diagenetic processes with weathering processes; they are in fact at opposite ends of the sedimentary cycle.

Anyway, if rock weathering is a sink can you please point me at the sunk CO2. I can see coal and oil and limestone but I ain’t ever seen CO2 that has been sequestered by weathering.

http://www2.cnrs.fr/en/1995.htm

Now it may well be the case that the surface of the Earth has some adsorbed CO2 but that does not make it a sink. Even if the amount of adsorbed CO2 changes with for example a change in temperature, that does not make it a sink. The ocean is a sink because it can absorb CO2 rapidly and that CO2 can be removed by mixing with deeper water and by sea creatures. The only way that surface weathering products can be removed is in sediments by rivers. Even if they contain some CO2 they are not relevant to this debate, they are simply a part of the slowly changing background flux.

The second thing wrong with the concept of rock weathering being a slow sink requiring some CO2 to remain in the atmosphere for a very long time is the exact argument made by Willis Eschenbach. If we stopped emitting CO2 today, do you really believe that a portion of the existing atmosphere is going to hang around for hundreds of years waiting to be removed by weathering processes? What will happen is that the fast processes will continue to remove it. But these might not continue to reduce the atmosphere to pre-industrial levels since we now have more CO2 (and derivatives) circulating in the fast sinks. Thus we may settle at a new baseline, perhaps 21% above pre-industrial, until the slow processes are able to remove that surplus.

Bern has:

26% @ 173Y
22% @ ∞

I can buy into the latter as being equivalent to Phil Chapman’s 22% (though I would question the timescale). But if we stopped emitting today I can see no reason why fast processes would not remove that 26% for so long as the sinks have capacity. Anyone?

• Euan Mearns says:

No one spotted the CO2 sink yet? I think the idea is that the “sequestered CO2” occurs as bicarbonate ions in the river water.

But we also have the reversible reaction CO2 + H2O = HCO3- + H+

I’m not sure, but I think that as the concentration of bicarbonate ions in solution increases, some automatically goes back to CO2. In any case, this is just another way of dumping CO2 into the ocean. It is not a sink but a steady slow flux. And there is surprisingly little info out there on weathering of limestones.

• dennis coyne says:

Hi Euan,

The deeper layers of the ocean turn over much more slowly than you realize. There is a limit to how quickly the upper layers of the ocean can take up the excess CO2.

9. clivebest says:

There is an interesting bit of maths you can do assuming a net exponential decay model for annual CO2 pulses. Let’s assume that mankind ignores the IPCC and continues emitting 5.5 Gtons of CO2 each year forever (say the next 50 million years). One might imagine that the atmosphere would become 100% CO2, but this is not the case, as follows.

The atmosphere contains 750 Gtons of CO2, of which 42 Gtons (5.5%) is due to the burning of fossil fuels since 1750.

Man-made emissions of fossil fuels are currently running at 5.5 Gtons per year.

We assume that once a year a pulse of N0 = 5.5 Gtons of CO2 is added to the atmosphere due to fossil fuel emissions. This then decays away with a lifetime Tau. The accumulation of fossil CO2 in the atmosphere for year n is then simply given by:

CO2(n) = N0 (1 + sum(i=1, n-1) exp(-i/Tau))

If we assume that n is very large (eg. 50 million) then we can treat this sum as an infinite series and the atmosphere will eventually saturate at a certain value of ‘anthropogenic’ CO2 concentration.

Multiplying both sides by exp(1/Tau) we can derive that the sum in the limit as n → ∞ is

CO2(∞) = N0 / (1 − exp(−1/Tau))

Taking some possible values for Tau we can calculate:

Tau (years)   Fossil limit (Gtons)   Fraction of 750 Gtons
5              30.3                    4.0%
7              41.3                    5.5%
10             57.8                    7.75%
14             74.3                   10%
50            272.3                   36%
100           547.2                   73%
200          1103                    147%

So for Tau = 27 years the atmosphere would saturate at a level about 25% above current levels. This would not be a disaster!
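Clive’s limiting sum can be checked numerically (a quick sketch; the table entries for the longer lifetimes differ slightly from this formula, consistent with a slightly different discretisation in the original calculation):

```python
import math

def fossil_limit(n0, tau):
    """Saturation level when an annual pulse n0 decays with lifetime tau:
    the geometric series n0 * sum_{i>=0} exp(-i/tau) = n0 / (1 - exp(-1/tau))."""
    return n0 / (1.0 - math.exp(-1.0 / tau))

N0 = 5.5  # Gtons emitted per year, assumed constant forever
for tau in (5, 7, 10, 14, 50, 100, 200):
    limit = fossil_limit(N0, tau)
    print(f"Tau={tau:>3}: limit {limit:7.1f} Gtons = {100 * limit / 750:5.1f}% of 750 Gtons")
print(f"Tau=27: limit {fossil_limit(N0, 27):.0f} Gtons")
```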

However if the Bern model is correct and there is a 22% component that remains in the atmosphere for many thousands of years, then levels would keep rising.

• The atmosphere contains 750 Gtons of CO2, of which 42 Gtons (5.5%) is due to the burning of fossil fuels since 1750.

Man-made emissions of fossil fuels are currently running at 5.5 Gtons per year.

Clive, I think you meant C, not CO2.

Expressed as CO2 the approximate numbers are:

Atmosphere contains ~3,000 GT CO2

Of which ~900 GT is anthropogenic, assuming that all of the 120 ppm rise in atmospheric CO2 since 1750 is man-made. (The number will of course decrease if we assume that some of it is “natural”.)

FF emissions are now about 40GT CO2 a year.

I don’t think the atmosphere will saturate at any given level. More likely what will happen is that the terrestrial and oceanic sinks will eventually saturate and any more emitted CO2 will remain in the atmosphere, having nowhere else to go. Always assuming, of course, that we don’t run out of FF first, 😉

10. Euan Mearns says:

Normally we plot what is left in the atmosphere. This plot shows where sequestered emissions have gone. You can see that the T173 and T∞ processes are entirely irrelevant on the time scale considered (note I think there may be something to Phil Chapman’s 21%). I think it is bogus to have these long Taus in the model at all, since the only relevant sinks are those that are actually removing CO2. But I do agree that slow processes may be important for removing CO2 from the fast sinks, and saturation of the fast sinks is something that has to be considered.

My model does not manage the underlying (pre-1965) emissions very well, but these too should be coloured yellow and blue.

Adam and Yvan, I’m happy to be corrected if there is a flaw in my logic. I don’t think I’ve seen a chart like this one before. I plotted a few more for QC purposes which are also interesting. Will be interested to hear what Roger has to say.
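For anyone wanting to reproduce the bookkeeping, the single-time-constant model can be sketched as follows (a minimal illustration with made-up emissions numbers, not the actual dataset behind the chart):

```python
DECLINE = 0.025  # 2.5% of each emission pulse sequestered per annum

def atmosphere_surplus(emissions, years_after_end=0):
    """CO2 remaining airborne: each year's pulse decays exponentially at
    DECLINE per year from the year it was emitted.
    Half-life = ln(2) / -ln(1 - DECLINE), i.e. ~27 years."""
    n = len(emissions) + years_after_end
    return sum(e * (1.0 - DECLINE) ** (n - 1 - year)
               for year, e in enumerate(emissions))

# Purely illustrative emissions series (Gt/yr), not the real data:
emissions = [5.0 + 0.1 * i for i in range(50)]
print(atmosphere_surplus(emissions))      # surplus in the final emission year
print(atmosphere_surplus(emissions, 27))  # 27 years after emissions stop
```

Running a single 10 Gt pulse through this shows the ~27-year half-life directly: after 27 years about 5 Gt is still airborne.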

A broken hand pump would never be used to explain the sinking of the Titanic.

• Euan Mearns says:

The unsequestered emissions (that is, the gap between the sequestered stack and the black line) have nothing to do with the long Taus that underpin Bern logic. They have to do with the fact that the fast Taus do not have the capacity to pump the emissions away fast enough. Stop emitting and CO2 in the atmosphere will drop like a stone.

11. dcoyne says:

Hi Euan,

The fact that CO2 was in a relative equilibrium during much of the Holocene indicates that there must be other processes besides the fast process you are modelling. A proper model should work under both circumstances. If the correct model has such a fast drop in CO2, how is it that an equilibrium at about 280 ppm of CO2 was established for 10,000 years or so? How does the fast process know that it should slow down when it approaches 280 ppm? Try your model with no man-made emissions and run it for 10,000 years, starting at 300 ppm of CO2 or even at 280 ppm.

What is the result?

• Euan Mearns says:

Dennis,

The fact that CO2 was in a relative equilibrium during much of the Holocene indicates that there must be other processes, besides the fast process you are modelling.

Why?

All this shows is that the distribution of CO2 between the fast reservoirs was in equilibrium and the fluxes between them were in balance. The fast processes don’t know when to stop; they are an integral part of a complex system that was in equilibrium.

The slow, permanent sequestration processes – burial of organic matter in sediments and the making of limestones – of course need to be fed somehow. Rock weathering, as carbonate minerals are dissolved, has to be one of the main natural sources of CO2, along with volcanic sources and leakage of natural gas from deep reservoirs to the surface. The surface of the Earth leaks CO2 and methane continuously.

• Dennis Coyne says:

Euan,

In order for the model to work we have to explain why there was a change in the rate at which carbon was sequestered. For a period of time there was a balance between natural emissions of CO2 and sequestration, so that the level of CO2 in the atmosphere was relatively constant; then man started adding increasing quantities of CO2 to the atmosphere. You need to do a sanity check on your model. I think you will find that if you assume zero man-made emissions, you will get results that do not match reality.

• Euan Mearns says:

Dennis, reality is that we are burning lots of FF and this is causing CO2 in the atmosphere to rise. And so if I assume zero manmade emissions the model will NOT match reality which is the black line representing the ACTUAL atmosphere.

• Dennis Coyne says:

Hi Euan,

Start your model in the year 1000 AD and assume 2200 Gt of CO2 in the atmosphere at the start and zero anthropogenic emissions from 1000 AD to 1500 AD (or you can make it some low level like 0.01 Gt/a).
The model should remain at about 2200 Gt over this period. My understanding of your simple one rate model is that the CO2 will “drop like a stone” over the period in question. Reality was a relatively stable level between 2160 and 2240 Gt of CO2 over the 1000 to 1500 AD period, can the same model reproduce both this period and the 1900 to 2010 period? I believe the Bern model works for both.

• If I assume zero man-made emissions my model flatlines at ~280ppm. Euan’s model would too.

• Dennis Coyne says:

Hi Roger,

Try the following. Start your model with 2190 Gt of CO2 in 1600 and assume zero anthropogenic emissions and a fast decay of 2.5% or 5% per year.

What do you get for a CO2 level in 1900?
Hint: at a 2.5% decline rate we get 179 Gt of CO2 left in the atmosphere in 1850 or 23 ppm CO2, in 1900 the atmospheric CO2 would be 50 Gt (6 ppm).

I guess the question would be, how do these fast processes know they should shut off at 280 ppm (or 2190 Gt) of CO2? 🙂

• Dennis: My model assumes that the carbon cycle is in balance with atmospheric CO2 at the “pre-industrial” mean of ~280ppm, so applying a decay of 2.5 or 5% a year after 1600 isn’t valid.

As to how the processes know how to shut off at 280ppm, that’s another question 🙂
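One way to formalise Roger’s balance assumption (a sketch of my own, not either commenter’s actual model): let the fast sinks remove a fixed fraction of the surplus above the pre-industrial baseline rather than of the whole atmosphere. CO2 then decays toward 280 ppm and flatlines there, so no process needs to “know” when to shut off.

```python
BASELINE = 280.0  # ppm, assumed pre-industrial equilibrium
RATE = 0.025      # fast-sink removal rate per year, applied to the surplus only

def step(ppm):
    """One year: sinks remove 2.5% of the surplus above the baseline."""
    return BASELINE + (ppm - BASELINE) * (1.0 - RATE)

ppm = 400.0
for _ in range(200):
    ppm = step(ppm)
print(round(ppm, 1))  # decays toward 280 ppm and never undershoots it
```

At exactly 280 ppm the model is stationary, which is the behaviour Dennis is asking the single-rate model to reproduce.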

• Dennis Coyne says:

Hi Roger,

A proper model does not have to assume a balance at 280 ppm of CO2.

The fact that your model only applies for 1965 to the present shows its shortcoming.

If the fast processes cause CO2 to decline at 2.5% and anthropogenic emissions stop because fossil fuels are depleted we get silly results.

For example let’s say in 2100 anthropogenic emissions of CO2 are zero and atmospheric CO2 has risen to 4300 Gt CO2 (550 ppm). If CO2 declines at 2.5% per year then atmospheric CO2 falls to 1200 Gt (155 ppm) by 2150.

You cannot just magically assume “natural balance” at 280 ppm, it has to be incorporated into the model.
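Dennis’s arithmetic is straightforward to check (a quick sketch; the Gt-per-ppm conversion is inferred from the 550 ppm ≈ 4300 Gt figures above):

```python
GT_PER_PPM = 4300.0 / 550.0  # ~7.8 Gt CO2 per ppm, from the figures above

gt = 4300.0             # atmospheric CO2 in 2100; emissions zero thereafter
for _ in range(50):     # 2100 -> 2150
    gt *= 1.0 - 0.025   # whole atmosphere declining at 2.5% per year
print(round(gt), "Gt =", round(gt / GT_PER_PPM), "ppm")  # roughly 1200 Gt, ~155 ppm
```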

Hi Euan! Thanks for your efforts. My main point (which you agree with above) related to the saturation of sinks. For example it has been found (in papers I have read ‘out there’) that the response of biomass to increasing CO2 levels is non-linear, and beyond certain CO2 concentrations it is virtually unknown how or whether biomass will act as a sink at all. So this saturation and non-linearity of all the sinks involved in the carbon balance of our atmosphere will be key components of any model which usefully replicates the likely response of the system to any given forcing.

So at best I suggest your approach to modelling the process may curve fit to the hindcast, but its predictive ability is moot.

I think the biggest concern is that (even tho China has just instituted rules on its coal imports allowing only cleaner coals to be used) in the run to the end of oil enough carbon will be emitted to shove the planet over some tipping point from where the sinks are temporarily overwhelmed and temperatures (which respond promptly to forcings) move beyond habitable levels for most of humanity. The dear old unknown unknown lurking in the wings.

• Euan Mearns says:

Adam, we know that over 500 Gt of CO2 has been sequestered somewhere. We don’t know how quickly and how large trees might grow (or maybe we do?). But I do firmly believe that we should not be harvesting mature forests to burn in power stations. The largest sink by far is the oceans, and I think models that show mixing of surface with deeper water to be a slow process are likely oversimplified. The Gulf Stream sequesters a vast amount of surface water – there is an interesting calculation to be done there – Roger?

13. Euan Mearns says:

A couple of very good comments here. I gotta go to bed and so will get back in the morning.

14. Euan Mearns says:

My post for tomorrow amounts to an assassination of the Bern model. Here is the abstract. I’m rather nervous about this since thousands of fine minds have been over this ground before. Any comments welcome.

In modelling the growth of CO2 in the atmosphere from emissions data it is standard practice to model what remains in the atmosphere, since after all it is the residual CO2 that is of concern in climate studies. In this post I turn that approach on its head and look at what is sequestered. This gives a very different picture, showing that the Bern T1.2 and T18.5 time constants account for virtually all of the sequestration of CO2 from the atmosphere on human timescales (see chart below). The much longer T173 and T∞ processes are doing virtually nothing. Their principal action is to remove CO2 from the fast sinks, not from the atmosphere, in a two-stage process that should not be modelled as a single stage. Given time, the slow sequestration processes will eventually sequester 100% of human emissions and not 48% as the Bern model implies.

If emissions were switched off today the fast processes would continue to pump down CO2 quickly until a new equilibrium between the fast sinks is reached where the eventual CO2 concentration of the atmosphere may still contain 19% of total emissions over and above the pre-industrial baseline, that is until the slow sinks have time to pump that residual CO2 away.

The chart shows the amount of annual emissions removed by the various components of the Bern model. Unsurprisingly the T∞ component, with a decline rate of 0%, removes zero emissions, and the T173 slow sink is not much better. Arguably, these components should not be in the model at all. The fast T1.2 and T18.5 sinks are doing all the work. The model does not handle the pre-1965 emissions decline perfectly, shown as underlying, but these too will be removed by the fast sinks and should also be coloured yellow and blue. Note that year on year the amount of CO2 removed has risen as the partial pressure of CO2 has gone up. The gap between the coloured slices and the black line is that portion of emissions that remained in the atmosphere.
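For concreteness, the Bern-type decay being discussed can be sketched as an impulse response (the coefficients below are the widely quoted Bern-SAR/TAR values, assumed here; other published sets differ slightly — the 22% permanent and 26% @ 173-year figures quoted earlier correspond approximately to the a0 and 172.9-year terms):

```python
import math

# Assumed Bern impulse-response parameters: fraction A[i] of a CO2 pulse
# decays with time constant TAU[i]; A[0] is the permanent (tau = infinity) part.
A = [0.217, 0.259, 0.338, 0.186]
TAU = [float("inf"), 172.9, 18.51, 1.186]

def airborne_fraction(t):
    """Fraction of a 1-unit CO2 pulse still airborne after t years."""
    return sum(a * math.exp(-t / tau) for a, tau in zip(A, TAU))

for t in (1, 10, 50, 100, 500):
    print(f"t = {t:>3} yr: {airborne_fraction(t):.3f} still airborne")
```

On century timescales the fast terms have already decayed away, and the curve flattens toward the ~22% permanent fraction, which is the behaviour the abstract above is disputing.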

• Dennis Coyne says:

Hi Euan,

Again I would run the model from 1500 to the present to see if you get sensible results.
Do you expect that anthropogenic emissions will be discontinued in the near future?
I have examined your 2.5% decline model and found it does not give sensible results for 1751 to 1900 or for periods earlier than this. I have not tried it, but my guess is that the Bern model works over a longer time frame, so that the model matches with reality (as we understand it for the past 5000 years or so). About 5000 years before the present there were about 2000 Gt of CO2 in the atmosphere; let’s assume the fast processes caused this level of carbon to drop like a stone at 2.5% per year for 3000 years, where we will assume anthropogenic emissions were near zero. Does your model work in this case?

If not, could you explain what has changed since 1750, have the fast processes changed somehow?

• Euan Mearns says:

Dennis, I still don’t understand what you are driving at. On my charts there is a line = atmosphere, and there are areas = emissions. We are exploring how best to model the emissions so that they match the atmosphere. The atmosphere model is scaled to Mauna Loa. Mauna Loa is the input, and so I don’t need to go back 300 years: the model will output the input, which is actual global CO2 ppm. It is relevant to ask if the emissions decline model works for a different time period. It’s a pretty huge task to rebuild the model so it begins back in 1800 or so, but I’ll think about it.

• Dennis Coyne says:

Hi Euan,

The model is very simple. In 1500 the CO2 was about 280 ppm, which is roughly 2200 Gt of CO2 in the atmosphere. Assume anthropogenic emissions are zero from 1500 to 1750 and that the CO2 declines by 2.5% per year. It takes about 5 minutes on a spreadsheet to do this (I have done it already). The model gives nonsensical results relative to reality.

To have a viable model it has to work for both 5000 years before the present and for the last 50 years as well.

• dennis coyne says:

Hi Euan,

An alternative is to make some assumptions about emissions and run your model forward.

I tried the following simple model: CO2 at 550 ppm in 2100, and fossil fuels depleted to the point that anthropogenic emissions are zero from 2100 to 2200 (probably not realistic, but simple to model). I assume CO2 declines at 2.5% per year; after about 30 years we get back to pre-industrial levels, and after 50 years we are at less than ice age levels of CO2 (155 ppm in 2150).

• Dennis Coyne says:

Hi Euan,

There are estimates of CO2 levels in the atmosphere going back about 800,000 years from ice core data. I am suggesting that your model for 1965 to 2014 is pretty limited and does not flatline, as Roger assumes, at 2200 Gt of CO2 when anthropogenic emissions are zero.

• Yvan Dutil says:

Might I suggest simply reading the scientific literature. The terms of the Bern model do not come out of thin air; they are based on observed processes. Also, I note that the slow term is supported by the PETM CO2 recovery.

15. itzman says:

Something worries me. This is another of those ‘all other things being equal’ scenarios.

But all other things won’t be equal.

– the warmists contend that more CO2 will mean a warmer world, with warmer oceans and less sequestered CO2.
– others contend that a warmer, CO2-rich world will lead to species that are really good at absorbing CO2 flourishing in hotter climates.
– or perhaps, if AGW doesn’t exist, in colder climates too.

– If C14 is being taken out of the air and not re-emitted, what is emitting? The warming oceans perhaps? 100-year-old plants finally turning into methane?

We know a carbon cycle exists, but as with all things climate and bio, it is a damned sight more complicated than a single exponential.

It’s nice to explore this scientifically, but the worst of AGW is to announce the One Truth prematurely.

16. Peter F Gill says:

Consider what happens if it is sea and land out-gassing that is in control and not anthropogenic emissions. In this case, if mankind burns fossil fuels and contributes to the partial pressure of carbon dioxide, then to an approximation (mixing rates in particular) a similar total increase in atmospheric carbon dioxide content results as if mankind had emitted no carbon dioxide at all!

As regards residence times there is a problem relating to definition. Crudely one may define residence time by the ratio total amount in the atmosphere by the total amount sequestrated in one year. Some 30 plus experiments using quite different techniques give a range of 4 to 25 years with 5-8 years representing most results. Of course since over the past 200 years carbon dioxide has been increasing in our atmosphere, the simple ratio for residence times is only approximate but nevertheless quite good. IPCC comes to its very different conclusions on the topic because it uses a lower steady state atmospheric content as a base to which to which it assumes we should plan to return. It both implicitly and explicitly assumes that the increase from a past steady state is due to mankind. There are actually two hypotheses here: the steady state for thousands of years hypothesis (reliant on ice core date) and the increases are all man-made relying on the fact that both atmospheric carbon dioxide and anthropogenic emissions have been increasing.