Final climate sensitivity
Substituting in Eqn. (1) the revised values derived for the three factors in ΔTλ, our re-evaluated central estimate of climate sensitivity is their product –
ΔTλ = ΔF2x · κ · f ≈ 1.135 × 0.242 × 2.095 ≈ 0.58 K   (30)
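Purely as an arithmetic check, the product in Eqn. (30) can be reproduced directly. The units noted in the comments (forcing in W m⁻², κ in K W⁻¹ m², f dimensionless) follow the usual convention for these quantities and are an assumption here, since the text does not restate them.

```python
# Minimal arithmetic check of Eqn. (30), using the three central values quoted above.
dF2x = 1.135   # revised CO2-doubling forcing (assumed W m^-2)
kappa = 0.242  # revised base, no-feedback sensitivity parameter (assumed K W^-1 m^2)
f = 2.095      # revised feedback multiplier (dimensionless)

dT = dF2x * kappa * f
print(f"Climate sensitivity ≈ {dT:.2f} K")  # ≈ 0.58 K, as stated in Eqn. (30)
```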
Theoretically, empirically, and in the literature that we have extensively cited, each of the values we have chosen as our central estimate is arguably more justifiable – and is certainly no less justifiable – than the substantially higher value selected by the IPCC. Accordingly, it is very likely that in response to a doubling of pre-industrial carbon dioxide concentration TS will rise not by the 3.26 K suggested by the IPCC, but by <1 K.
We have set out and then critically examined a detailed account of the IPCC's method of evaluating climate sensitivity. We have made explicit the identities, interrelations, and values of the key variables, many of which the IPCC does not explicitly describe or quantify. The IPCC's method does not provide a secure basis for policy-relevant conclusions. We now summarize some of its defects.
The IPCC's methodology relies unduly – indeed, almost exclusively – upon numerical analysis, even where the outputs of the models upon which it so heavily relies are manifestly and significantly at variance with theory or observation or both. Modeled projections such as those upon which the IPCC's entire case rests have long been proven impossible when applied to mathematically-chaotic objects, such as the climate, whose initial state can never be determined to a sufficient precision. For a similar reason, those of the IPCC's conclusions that are founded on probability distributions in the chaotic climate object are unsafe.
Not one of the key variables necessary to any reliable evaluation of climate sensitivity can be measured empirically. The IPCC's presentation of its principal conclusions as though they were near-certain is accordingly unjustifiable. We cannot even measure mean global surface temperature anomalies to within a factor of 2; and the IPCC's reliance upon mean global temperatures, even if they could be correctly evaluated, itself introduces substantial errors in its evaluation of climate sensitivity.
The IPCC overstates the radiative forcing caused by increased CO2 concentration at least threefold because the models upon which it relies have been programmed fundamentally to misunderstand the difference between tropical and extra-tropical climates, and to apply global averages that lead to error.
The IPCC overstates the value of the base climate sensitivity parameter for a similar reason. Indeed, its methodology would in effect repeal the fundamental equation of radiative transfer (Eqn. 18), yielding the impossible result that at every level of the atmosphere ever-smaller forcings would induce ever-greater temperature increases, even in the absence of any temperature feedbacks.
The IPCC overstates temperature feedbacks to such an extent that the sum of the high-end values that it has now, for the first time, quantified would cross the instability threshold in the Bode feedback equation and induce a runaway greenhouse effect that has not occurred even in geological times despite CO2 concentrations almost 20 times today's, and temperatures up to 7 °C higher than today's.
The Bode equation, furthermore, is of questionable utility because it was not designed to model feedbacks in non-linear objects such as the climate. The IPCC's quantification of temperature feedbacks is, accordingly, inherently unreliable. It may even be that, as Lindzen (2001) and Spencer (2007) have argued, feedbacks are net-negative, though a more cautious assumption has been made in this paper.
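For readers unfamiliar with the Bode relation invoked above, a minimal sketch of the standard closed-loop gain formula shows why a feedback sum approaching the instability threshold produces arbitrarily large, and ultimately runaway, amplification. The loop-gain values below are illustrative only; they are not figures taken from the IPCC or from this paper.

```python
# Illustrative only: the standard Bode closed-loop gain G = 1 / (1 - g),
# where g is the dimensionless loop gain formed by the feedback sum.
# As g approaches 1 the gain diverges; g >= 1 is the instability threshold.
def bode_gain(g: float) -> float:
    if g >= 1.0:
        raise ValueError("loop gain >= 1: past the instability threshold (runaway)")
    return 1.0 / (1.0 - g)

for g in (0.3, 0.6, 0.9, 0.99):
    print(f"loop gain g = {g:4.2f} -> amplification G = {bode_gain(g):6.2f}")
```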
It is of no little significance that the IPCC's value for the coefficient in the CO2 forcing equation depends on only one paper in the literature; that its values for the feedbacks that it believes account for two-thirds of humankind's effect on global temperatures are likewise taken from only one paper; and that its implicit value of the crucial parameter κ depends upon only two papers, one of which had been written by a lead author of the chapter in question, and neither of which provides any theoretical or empirical justification for a value as high as that which the IPCC adopted.
The IPCC has not drawn on thousands of published, peer-reviewed papers to support its central estimates for the variables from which climate sensitivity is calculated, but on a handful.
On this brief analysis, it seems that no great reliance can be placed upon the IPCC's central estimates of climate sensitivity, still less on its high-end estimates. The IPCC's assessments, in their current state, cannot be said to be "policy-relevant". They provide no justification for taking the very costly and drastic actions advocated in some circles to mitigate "global warming", which Eqn. (30) suggests will be small (<1 °C at CO2 doubling), harmless, and beneficial.
Even if temperature had risen above natural variability, the recent solar Grand Maximum may have been chiefly responsible. Even if the sun were not chiefly to blame for the past half-century's warming, the IPCC has not demonstrated that, since CO2 occupies only one-ten-thousandth part more of the atmosphere than it did in 1750, it has contributed more than a small fraction of the warming. Even if carbon dioxide were chiefly responsible for the warming that ceased in 1998 and may not resume until 2015, the distinctive, projected fingerprint of anthropogenic "greenhouse-gas" warming is entirely absent from the observed record. Even if the fingerprint were present, computer models have long been proven inherently incapable of providing projections of the future state of the climate that are sound enough for policymaking. Even if per impossibile the models could ever become reliable, the present paper demonstrates that it is not at all likely that the world will warm as much as the IPCC imagines. Even if the world were to warm that much, the overwhelming majority of the scientific, peer-reviewed literature does not predict that catastrophe would ensue. Even if catastrophe might ensue, even the most drastic proposals to mitigate future climate change by reducing emissions of carbon dioxide would make very little difference to the climate. Even if mitigation were likely to be effective, it would do more harm than good: already millions face starvation as the dash for biofuels takes agricultural land out of essential food production – a warning that taking precautions, "just in case", can do untold harm unless there is a sound, scientific basis for them. Finally, even if mitigation might do more good than harm, adaptation as (and if) necessary would be far more cost-effective and less likely to be harmful.
In short, we must get the science right, or we shall get the policy wrong.
To create it, Silvera and Dias squeezed a tiny hydrogen sample at 495 gigapascals, or more than 71.7 million pounds per square inch – greater than the pressure at the center of the Earth. At those extreme pressures, Silvera explained, solid molecular hydrogen – which consists of molecules on the lattice sites of the solid – breaks down, and the tightly bound molecules dissociate into atomic, metallic hydrogen. Theoretical predictions suggest metallic hydrogen could act as a superconductor at room temperature.
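As a quick sanity check on the figures quoted above, converting 495 GPa to pounds per square inch with the standard conversion factor reproduces the "more than 71.7 million psi" value; this is simply illustrative arithmetic, not data from the experiment.

```python
# Unit-conversion check of the quoted pressure (illustrative only).
PA_PER_PSI = 6894.757        # pascals per pound-force per square inch
pressure_gpa = 495.0         # pressure quoted in the article, in gigapascals

pressure_psi = pressure_gpa * 1e9 / PA_PER_PSI
print(f"{pressure_gpa} GPa ≈ {pressure_psi / 1e6:.1f} million psi")  # ≈ 71.8 million psi
```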
Among the holy grails of physics, a room-temperature superconductor, Dias said, could radically change our transportation system, making magnetic levitation of high-speed trains possible, as well as making electric cars more efficient and improving the performance of many electronic devices.
The material could also provide major improvements in energy production and storage: because superconductors have zero resistance, energy could be stored by maintaining currents in superconducting coils and then be used when needed.
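To make the storage idea concrete, the energy held by a persistent current in a lossless coil is the usual magnetic energy E = ½LI². The inductance and current below are arbitrary example values chosen for illustration, not figures from the article.

```python
# Illustrative sketch: energy stored in a persistent current in a lossless coil.
# E = 0.5 * L * I^2 (joules), with L in henries and I in amperes.
L_coil = 5.0        # inductance in henries (example value, not from the article)
current = 2000.0    # persistent current in amperes (example value)

energy_joules = 0.5 * L_coil * current**2
print(f"stored energy ≈ {energy_joules / 1e6:.1f} MJ")  # ≈ 10.0 MJ
```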
"It takes a tremendous amount of energy to make metallic hydrogen," Silvera explained. "And if you convert it back to molecular hydrogen, all that energy is released, so it would make it the most powerful rocket propellant known to man, and could revolutionize rocketry."
The most powerful fuels in use today are characterized by a "specific impulse" - a measure, in seconds, of how fast a propellant is fired from the back of a rocket - of 450 seconds. The specific impulse for metallic hydrogen, by comparison, is theorized to be 1,700 seconds.
"That would easily allow you to explore the outer planets," Silvera said. "We would be able to put rockets into orbit with only one stage, versus two, and could send up larger payloads, so it could be very important."
'Proven one-step process to convert CO2 and water directly into liquid hydrocarbon fuel
A team of University of Texas at Arlington chemists and engineers has proven that concentrated light, heat and high pressures can drive the one-step conversion of carbon dioxide and water directly into usable liquid hydrocarbon fuels.
This simple and inexpensive new sustainable fuels technology could potentially help limit global warming by removing carbon dioxide from the atmosphere to make fuel. The process also reverts oxygen back into the system as a byproduct of the reaction, with a clear positive environmental impact, researchers said.
"Our process also has an important advantage over battery or gaseous-hydrogen powered vehicle technologies as many of the hydrocarbon products from our reaction are exactly what we use in cars, trucks and planes, so there would be no need to change the current fuel distribution system," said Frederick MacDonnell, UTA interim chair of chemistry and biochemistry and co-principal investigator of the project.
In an article published today in the Proceedings of the National Academy of Sciences titled "Solar photothermochemical alkane reverse combustion," the researchers demonstrate that the one-step conversion of carbon dioxide and water into liquid hydrocarbons and oxygen can be achieved in a photothermochemical flow reactor operating at 180 to 200 °C and pressures up to 6 atmospheres.'
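As context for the reaction named in the paper's title, the generic balanced equation for turning CO2 and water back into an alkane (the reverse of complete combustion) is n CO2 + (n+1) H2O → CnH(2n+2) + (3n+1)/2 O2. The sketch below simply checks that atom balance for an example alkane; it is not drawn from the paper's actual product distribution.

```python
# Sketch of the stoichiometry behind "alkane reverse combustion":
#   n CO2 + (n+1) H2O -> CnH(2n+2) + (3n+1)/2 O2
def reverse_combustion(n: int) -> str:
    co2, h2o, o2 = n, n + 1, (3 * n + 1) / 2
    # Atom-balance check: carbon, hydrogen and oxygen must match on both sides.
    assert co2 == n                      # carbon
    assert 2 * h2o == 2 * n + 2          # hydrogen
    assert 2 * co2 + h2o == 2 * o2       # oxygen
    return f"{co2} CO2 + {h2o} H2O -> C{n}H{2 * n + 2} + {o2} O2"

print(reverse_combustion(8))  # octane: 8 CO2 + 9 H2O -> C8H18 + 12.5 O2
```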
'The approach works by surrounding the hot object with special nanophotonic structures that spectrally filter the emitted light, meaning that they let the light reflect or pass through based on its color. Because the filters are not in direct physical contact with the emitter, temperatures can be very high.
The researchers also redesigned the incandescent filament from scratch. In this case, they turned it into a piece that was laser-machined out of a flat sheet of tungsten, which makes it completely planar. Since a planar filament has a large area, it's efficient at re-absorbing the light that was reflected by the filter.
In the new-concept light bulb prototype, the efficiency approaches that of some fluorescent and LED bulbs. This could be huge for the future of light bulbs.'
The rumour of this possible detection was first mentioned in The Guardian on 7 December by Paul Davies. The story has now taken on a life of its own, thanks to a tweet by the physicist and author Lawrence Krauss.'
My earlier rumor about LIGO has been confirmed by independent sources. Stay tuned! Gravitational waves may have been discovered!! Exciting.
— Lawrence M. Krauss (@LKrauss1) January 11, 2016
See if you can follow me on this ...
The amount of carbon on planet Earth by definition remains pretty much the same. Man has been burning fossil fuels, which puts carbon into the atmosphere. Where did the carbon in the fossil fuels come from? It mostly came from plants and bacteria that got buried underground due to geological processes. Over millions of years natural processes turned the plants and bacteria into fossil fuels. Where did the plants and bacteria get their carbon from? They got it from the atmosphere. The carbon that we are now putting into the atmosphere originally came from the atmosphere.
To better understand this, we have to understand the complete history of atmospheric carbon dioxide on planet Earth. The original earth atmosphere was an amazing 43% carbon dioxide, compared with the roughly 0.04% that we have now. That original atmosphere had so much pressure that it could crush a man flat. About 2.5 billion years ago, cyanobacteria began using photosynthesis to convert carbon dioxide into free oxygen, which led to the creation of our oxygen-rich "third atmosphere" 2.3 billion years ago. At that time the carbon dioxide level was about 7,000 parts per million, but it then went into a somewhat steady yet uneven decline because geological processes would sequester carbon underground. The decline was uneven because, as part of the "carbon dioxide cycle", geological processes like volcanoes would sometimes release massive amounts of carbon dioxide back into the atmosphere.
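Because this history mixes percentages and parts per million, a quick conversion (1% = 10,000 ppm) puts the figures quoted in this essay on a single scale; the numbers are the ones stated in the text, only re-expressed.

```python
# Convert the CO2 figures quoted in this essay onto one scale (1% = 10,000 ppm).
figures_percent = {
    "original atmosphere": 43.0,
    "at the rise of oxygen (~2.3 Gya)": 0.7,      # the 7,000 ppm figure
    "recent low quoted later in the essay": 0.02,
    "today": 0.04,
}
for label, pct in figures_percent.items():
    print(f"{label}: {pct}% = {pct * 10_000:,.0f} ppm")
```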
Thirty million years ago, during the Oligocene Epoch, the average temperature of the earth was about 7 degrees Celsius warmer than it is now. There was no ice at the poles, but the amount of carbon dioxide in the atmosphere was in rapid decline during this epoch. About 23 million years ago, at the beginning of the Neogene period, ice began to form at the poles. About ten million years ago, a series of intermittent ice ages began that continues to this day. I found one source that said that we are still technically in an ice age because we still have ice at the poles.
These ice ages helped drive human evolution. The ice ages caused Africa to dry up, which led to some deforestation. This forced some arboreal (tree-dwelling) apes to venture onto land. About 7 million years ago, the first apes that could comfortably walk upright appeared. They had evolved a new type of pelvis that allowed upright locomotion, which is about three times more efficient when crossing land.
The first tool-making ape that resembled modern humans, Homo habilis, arose 2.5 million years ago. It was soon followed by Homo erectus, and then, about 200,000 years ago, modern humans, Homo sapiens, arose. However, Homo sapiens almost died out. About 50,000 years ago an ice age in Europe caused Africa to dry up almost completely. The total human population dropped to about 7,000 individuals living on the southern coast of Africa. During this period humans learned how to fish, make new tools, and create permanent dwellings. When the ice age abated, these humans with their new tools spread out to the rest of the world at a pace of about a mile per year. This was the beginning of the Upper Paleolithic (Late Stone Age) period.
More ice ages would follow, and during each ice age the human population would decline. It is no coincidence that all of human civilization (i.e. agriculture, the use of metals) arose during a "brief" warm period between two ice ages starting about 10,000 years ago. I have heard that no matter what we do, we will enter a new ice age about 10,000 years from now, but I have also heard speculation that the next ice age will be delayed by global warming. Delaying it should actually be our goal, since humans have always declined during the ice ages and always prospered during the intermittent warm periods.
Over the earth's geological history, the amount of carbon dioxide in the atmosphere has been in an uneven decline and has mostly disappeared. Atmospheric carbon dioxide is necessary for plant growth, and I have read that we were running dangerously low on atmospheric carbon dioxide, about 0.02%, before mankind at least temporarily reversed the trend. I just read a Wikipedia article that said that atmospheric carbon dioxide will eventually get so low that all plants and animals will die off. What mankind has done is put carbon dioxide back into the atmosphere that was previously there, thus possibly delaying the next ice age. Currently the amount of carbon dioxide in the atmosphere is about 0.04%.
Carbon dioxide by itself cannot cause significant global warming. There are diminishing returns: carbon dioxide has to double again to produce the same effect as the last doubling. The effect is not linear but logarithmic. What the alarmists are worried about, and they could be correct, is positive feedback. The warming of the earth causes more water vapor to enter the atmosphere, and water vapor is a much stronger greenhouse gas than carbon dioxide, thus causing more warming. If this were true, however, the last warming period around the year 2000 should have caused a continuous positive feedback, a runaway greenhouse, which didn't happen. Instead temperatures went into a major decline and hit a really big low point in 2007.
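The "diminishing returns" point can be made concrete with the logarithmic approximation for CO2 forcing that is commonly cited in the wider literature, ΔF ≈ 5.35 ln(C/C0) W/m²; that coefficient is an outside assumption, not a number from this essay, but it illustrates why each doubling adds the same forcing and each extra increment adds less than the one before.

```python
import math

def co2_forcing(c_ppm: float, c0_ppm: float = 280.0) -> float:
    """Commonly cited logarithmic approximation: dF ≈ 5.35 * ln(C/C0), in W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Each doubling adds the same ~3.7 W/m^2, so the marginal effect of each
# additional ppm of CO2 keeps shrinking.
for c in (280, 560, 1120, 2240):
    print(f"{c:5d} ppm -> dF ≈ {co2_forcing(c):5.2f} W/m^2")
```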
The skeptics believe that increased cloud cover reflects sunlight back into space, thus causing a negative feedback. The skeptics are not "global warming deniers", which is a pejorative phrase used by global warming theorists to make the skeptics sound like holocaust deniers. These skeptics actually believe in global warming. At least, the legitimate skeptical scientists do. They just think that global warming is happening at a rate slower than predicted by the theorists. I can point you to an article that shows that the positive feedback models have been contradicted by the actual temperature data, which in reality has been closer to the negative feedback models.
The worst case scenario is that the polar ice caps will melt. If that happens we will lose some coastlines and all of Florida due to sea level rise. However, according to what I just read, it will take 5,000 years for the polar ice caps to melt. In other words, these are processes that take a very long time to happen. In this century we are only looking at modest temperature increases. In the meantime, humans are very adaptable. We are only five to ten years away from creating the first workable prototypes of nuclear fusion. It might take 25 years for this to be practical, but at that point if we wanted to get rid of fossil fuels altogether, we could. I think that we will also see advances in solar power, which is already happening, and battery technology to store the energy created by solar. In other words, we have it within our means to avoid any possible disasters that might be coming.