Plazaeme preservado


A theoretical physicist's view of "global warming"

Preliminary note: I find this interesting because it is a physical explanation of something anyone can grasp intuitively. The alarmists put forward a theory with surprising consequences. The idea is that a small direct warming due to CO2 produces a large warming through the system's reaction to that small initial warming. This is what is called "feedback". The thesis must also work in reverse: a small cooling produces, through feedback, a much larger cooling. In numbers, what the warmists claim is that an effect causing a direct warming of the global mean temperature of 1.2 °C turns, through feedback, into a warming of 5 °C or 6 °C.

But of course, there are so many things that can cause that small warming or cooling in the short term (a drop in cloudiness, one of the really big volcanoes, a super El Niño or other changes in the oceans, a difference in the amount of snow in the northern hemisphere, desert dust in the air), not to mention the long term, that in that case the climate would have to be far more variable, brutally variable. And it isn't.

And in theory there could be both positive feedbacks (which amplify the initial effect) and negative ones (which reduce it). That is, in fact, what the climate debate is really about: what is the "net feedback" (they call it the climate sensitivity)? Nobody knows, let alone has anyone proved anything.

There is no time for translations today. But if you can't follow it in English, bookmark it and come back in a few days.

At the end I include a short bio of Motl so you can get an idea of who he is [–>].

Post stolen from Luboš Motl

--

Why the feedbacks cannot be positive and large

--

When no feedbacks are included, the greenhouse effect caused by CO2 adds about 1.2 °C per doubling of the CO2 concentration. This is a result of a rather clean physics problem. There’s no real “complexity” in this problem: we reduce the Earth to a pretty manageable differential equation.

The doubling from the pre-industrial concentration of 280 ppm to 560 ppm of CO2 in the atmosphere will occur slightly before 2100, assuming business as usual. If the figure 1.2 °C were the total answer, and assuming that mankind has caused the whole 0.6-0.8 °C of warming we may have seen in the last century or so, it would mean that 0.4-0.6 °C of man-made warming would be left by 2100 - less than the innocent 20th century change.
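As a trivial check of that subtraction, a minimal Python sketch (the 1.2 °C no-feedback figure and the 0.6-0.8 °C observed range are the ones quoted in the paragraph above):

    # Warming left by 2100 if 1.2 °C per doubling were the whole story and
    # the observed 20th-century warming were entirely man-made.
    no_feedback_total = 1.2                 # °C per CO2 doubling, no feedbacks
    observed = (0.6, 0.8)                   # °C already seen
    remaining = [round(no_feedback_total - x, 1) for x in observed]
    print(remaining)                        # [0.6, 0.4] -> the 0.4-0.6 °C range above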

That’s a completely unspectacular change. So this elementary greenhouse effect is not enough for the “applications” of the physical effect in policymaking. The advocates of carbon regulation and threats depend on some amplification of the man-made greenhouse effect, i.e. positive feedbacks. The IPCC would like the warming per CO2 doubling to go as high as 5 °C, and some people would be thrilled to see even higher figures - figures that seem to completely disagree with the small rate of the recent warming.

Feedbacks: geometric series

Imagine that you add some CO2. That changes the temperature by the “bare mechanism” of the greenhouse effect. But the modified temperature also changes some other things in the climate that may change the temperature again. These “second round” effects are called the feedbacks and they may change the temperature in both directions.

If the “simply calculated” bare temperature change was “ΔT” and the new increment is “f·ΔT”, where “f” is a dimensionless coefficient, this extra “f·ΔT” of warming must itself be fed into the feedback as an input once again. That adds an additional “f²·ΔT” of warming. And so on. The total warming is ΔT(total) = ΔT (1 + f + f² + f³ + …) = ΔT / (1-f). Yes, it’s called the geometric series. While the total warming depends on “f” nonlinearly, it is the coefficient “f” itself whose distribution should be kind of uniform. After all, the feedback “f” is a sum of many diverse effects. It’s “f” that behaves as an additive quantity, not “1/(1-f)”.
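As a quick numerical illustration of that series, a minimal Python sketch (the 1.2 °C bare warming is the figure used in this post; the value of “f” is just an example):

    # Sum the feedback series ΔT·(1 + f + f² + ...) and compare with ΔT/(1-f).
    dT_bare = 1.2    # °C, bare (no-feedback) warming per CO2 doubling
    f = 0.5          # example feedback coefficient; |f| < 1 so the series converges

    truncated = sum(dT_bare * f**n for n in range(1000))
    closed_form = dT_bare / (1 - f)
    print(truncated, closed_form)   # both ~2.4 °C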

The alarming scenarios depend on the assumption that “f” is really close to one, something like “f=0.8” if not “f=0.9”, and the corresponding total warming is correspondingly high. For example, for “f=0.8”, we obtain “ΔT(total) = 1.2 °C/0.2 = 6 °C”. This is the type of result that people like James Hansen would love to be true (or at least believed to be true).
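The amplification factor 1/(1-f) blows up quickly as “f” approaches one; a short sketch tabulating the total warming for a few values of “f” (0.8 and 0.9 are the cases mentioned above, the others are only for comparison):

    # Total warming per CO2 doubling as a function of the feedback coefficient f.
    dT_bare = 1.2    # °C, no-feedback warming per doubling
    for f in (0.0, 0.5, 0.8, 0.9):
        print(f, round(dT_bare / (1 - f), 1))
    # 0.0 -> 1.2 °C, 0.5 -> 2.4 °C, 0.8 -> 6.0 °C, 0.9 -> 12.0 °C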

However, the values of “f” above one are almost strictly ruled out because the geometric series above is actually divergent. That would physically mean that any initial perturbation would be amplified exponentially: the deviation from the would-be equilibrium would be exponentially increasing with time. (The normal behavior is that you approach an equilibrium in the future, and your distance from the equilibrium is exponentially shrinking.) The Earth’s temperature would soon (in a logarithmic time) escape from a hospitable interval. Everything would freeze over or evaporate.
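What the divergence means dynamically can be seen in a toy iteration - a sketch of the feedback bookkeeping only, not any climate model from the post - where each round adds “f” times the previous increment:

    # Toy iteration: each round of feedback adds f times the previous increment.
    # For |f| < 1 the total settles down; for f > 1 it runs away.
    def iterate(dT_bare, f, steps=50):
        total, increment = 0.0, dT_bare
        for _ in range(steps):
            total += increment
            increment *= f
        return total

    print(iterate(1.2, 0.8))   # approaches 6 °C
    print(iterate(1.2, 1.1))   # keeps growing without bound (runaway)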

This arguably hasn’t happened for billions of years.

It follows that “f” can’t exceed one, at least not too often. It can be positive - feedbacks can be positive - but they can’t be too positive. However, we may make a much stronger statement than this one. Why?

Because physical mechanisms make it pretty inevitable that “f” is not a universal dimensionless constant. For different “quasi-equilibriums”, different chemical compositions of the atmosphere and the biosphere, the amount of ice in the Arctic, positions of the continents, and so on, i.e. for various changes that the Earth has seen during its history, the values of the total feedback coefficient “f” must have been different. The coefficient “f” is inevitably variable. (The “f” is also dependent on the location, but let’s look at the global mean temperature only.)

By the central limit theorem, we may assume that for a random moment of the Earth’s history, “f” took values from a normal distribution with the central value “f_0” and the standard deviation “SD”. Because “f” is approximately the sum of contributions from many effects, there’s no way “f” could be “automatically” prevented from exceeding one.

So by looking at the statistical distribution, we may determine the percentage of the Earth’s history when “f” actually exceeded “1”. Whenever this occurred, if it ever occurred, the temperature was exponentially running away from the equilibrium value. So within a few decades, it would be reaching the boiling point or dropping well below the freezing point. Life would die out. The geology would be very different.
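Under that normal-distribution assumption, the fraction of the Earth’s history with “f” above one is just a Gaussian tail probability; a minimal Python sketch (the values of “f_0” and “SD” below are only examples, the post discusses plausible values further down):

    from statistics import NormalDist

    # Probability that the feedback coefficient f exceeds 1, if f ~ Normal(f_0, SD)
    # at a randomly chosen moment of the Earth's history.
    def prob_f_above_one(f_0, SD):
        return 1 - NormalDist(mu=f_0, sigma=SD).cdf(1.0)

    print(prob_f_above_one(0.4, 0.1))   # ~1e-9, a "6 sigma"-like tail
    print(prob_f_above_one(0.8, 0.1))   # ~0.02, i.e. a few percent of the history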

Let’s assume that such an uncontrollable exponential runaway, with “f” exceeding one, would destroy life on Earth within 47 years, to make the numbers simpler. (I was approximately inspired by a stupid movie, Age of Stupid, when I chose this figure.) The Earth is 4.7 billion years old, so its life contains 100 million periods whose length is 47 years.

Because none of those 100 million periods has contained the deadly exponential runaway behavior we are just discussing, it follows that the probability that “f” exceeds one should be lower than “one in 100 million”.
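The counting behind that bound, as a short sketch:

    # 4.7 billion years of the Earth's history sliced into 47-year windows.
    n_windows = 4.7e9 / 47
    print(n_windows)       # 1e8, i.e. 100 million windows
    print(1 / n_windows)   # 1e-8 upper bound on the probability of f > 1 per window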

Inserting the numbers

But we had an explicit formula for the probability that “f” exceeded one. We said that “f” was distributed according to the normal distribution around “f_0” as the mean value, with the standard deviation of “SD”.

The maths is complicated, so let’s be surprised by the power or lack of power in this argument. (I haven’t made any calculation before I wrote this text: this is being written from scratch.)

My estimate for the fluctuations of “f” depending on the “regime” of the Earth is “SD=0.1” (for feedbacks “f” comparable to one, this is something like a 10% error). I think it’s unlikely that “f” is determined much more accurately than that: it’s much more likely that the uncertainty of “f” is higher than that. Now, what is the mean value “f_0” such that the probability that “f” exceeds one, given the standard deviation “SD=0.1”, is lower than “1 in 100 million”?

Well, it’s simple. If you look at the numbers describing confidence intervals, you will see that “1 in 100 million” is approximately equivalent to a “6 sigma” deviation from the mean. So the mean value must be at least 6 standard deviations below 1. But because I decided that “SD=0.1”, it follows that “f_0” must be at most “1 - 6×0.1 = 0.4”, which leads to the total warming of 1.2 °C / 0.6 = 2 °C per CO2 doubling. About 1 °C would be left for the 21st century.

If you managed to show that the standard deviation for “f” is “SD=0.2”, the maximum allowed mean value of “f” would be “f_0 = 1 - 6×0.2 = -0.2”. If you could demonstrate that the deviations are as big as “SD=0.2”, that would prove that the (average over time and space) feedback coefficient “f” actually has to be negative!
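The whole “inserting the numbers” step fits in a few lines; a minimal Python sketch using the standard normal quantile (the exact quantile for a 1-in-100-million tail is about 5.6 sigma, which the post rounds up to 6; SD = 0.1 and 0.2 are the two cases discussed above):

    from statistics import NormalDist

    dT_bare = 1.2                          # °C, no-feedback warming per doubling
    p_max = 1e-8                           # allowed probability of f > 1 per 47-year window
    z = NormalDist().inv_cdf(1 - p_max)    # ~5.6 standard deviations ("about 6 sigma")

    for SD in (0.1, 0.2):
        f0_max = 1 - z * SD                # largest mean feedback compatible with the bound
        dT_max = dT_bare / (1 - f0_max)    # corresponding maximum total warming
        print(SD, round(f0_max, 2), round(dT_max, 1))
    # SD=0.1 -> f0_max ~ 0.44 and ~2.1 °C; SD=0.2 -> f0_max ~ -0.12, i.e. negative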

Now, I don’t know how large “SD” actually is. One would have to look at the typical changes of the water vapor variability and the variability of cloudiness in different epochs of the paleoclimatological and geological history. But whatever the exact numbers are, I think that this argument is very powerful and largely excludes the values of “f” - and distributions for “f” - that are too close to “f=1”. Also, note that the normal distribution decreases very quickly: if I used a distribution with heavier tails - one that is nonzero everywhere but falls off more slowly than the Gaussian - I would get even stricter conditions for “f_0”!
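To illustrate that last point, a sketch comparing the Gaussian with a heavier-tailed alternative (the Student-t with 5 degrees of freedom is an arbitrary choice for illustration, not anything from the post; it needs scipy):

    from scipy.stats import norm, t

    p_max = 1e-8    # allowed tail probability of f > 1
    SD = 0.1        # assumed spread of f

    # How far below 1 must the mean f_0 sit so that P(f > 1) < p_max?
    gap_normal = norm.isf(p_max) * SD      # ~0.56
    gap_heavy = t.isf(p_max, df=5) * SD    # several times larger
    print(round(1 - gap_normal, 2), round(1 - gap_heavy, 2))
    # the heavy-tailed case forces the allowed f_0 far below the Gaussian bound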

I feel that the argument above is a quantitative explanation for the intuition that feedbacks in systems without a runaway behavior are much more likely to be negative than positive: they must be “repelled” from the unphysical runaway region of the parameter space. The argument above is no “rigorous proof” that the feedbacks can’t be high but I think it is a sensible starting point to choose the “priors” for different values of “f” that are a priori conceivable. The priors should follow a natural distribution that should be pretty much negligible at “f=1”. That mostly excludes any significant amplification of the bare greenhouse effect.

Of course, I have no doubts that the alarmists will deny the existence of general theoretical arguments that make similar “catastrophes” very unlikely. But others may want to look at arguments in both directions.

And that’s the memo.

Notes:

Lubos Motl's blog:

Luboš Motl (born 1973) is a Czech theoretical physicist who works on string theory and on the conceptual problems of quantum gravity. He was born in Pilsen. He did his master's degree at Charles University in Prague and his PhD at Rutgers University, and he was a Harvard Junior Fellow (2001-2004) and assistant professor (2004-2007) at Harvard University.

Together with Robbert Dijkgraaf, Erik Verlinde and Herman Verlinde, he is a co-founder of Matrix string theory, a non-perturbative definition of string theory. More recently he has worked on the pp-wave limit of the AdS/CFT correspondence; twistor theory and its applications to gauge theories with supersymmetry; black-hole thermodynamics and the conjectured relevance of quasinormal modes for loop quantum gravity; and other topics. He has a strong presence on the Internet, where he often takes part in heated debates favouring string theory over loop quantum gravity. Together with Urs Schreiber and Arvind Rajaraman, he is a co-founder and moderator of the newsgroup sci.physics.strings.


  • McValen 2010-02-26 14:46:17
    Hi, here is a link to an interesting interview with another great theoretical physicist, Freeman Dyson, on the same subject: http://www.terceracultura.net/tc/?p=1432
    • plazaeme 2010-02-26 14:52:38
      Thanks, McValen. We had already talked about Dyson around here and brought up some of his things. But it never hurts to be reminded.
  • viejecita 2010-02-26 16:46:37
    Well, the piece, with all those formulas, is completely over my head. But I have sent my son in Seattle the link to Lubos Motl's blog. He loves all that stuff about string theory, theoretical physics and, above all, mathematics. Let's see what he says. With the video the other day ( the one of the minority's representative in the US Senate ), he replied that that gentleman is one of those who use the Bible as an argument and as a weapon. But at least he always tells me we need more science and less politics...