Date Joined: Aug 24, 2017 14:02:33 GMT -5
|
Post by 🦍MAXX>ⓤ on Apr 13, 2022 17:58:57 GMT -5
Humanity's expansion across the globe is inextricably tied to the environmental conditions that our early ancestors faced. On Wednesday, a research team from South Korea's Pusan National University revealed supercomputer-modeling research that suggests just how much of humanity's rise is owed to changes in prehistoric weather.

The Pusan team, led by climate physicist Axel Timmermann, used an "unprecedented transient Pleistocene-coupled general circulation model simulation in combination with an extensive compilation of fossil and archaeological records to study the spatiotemporal habitat suitability for five hominin species over the past 2 million years," per the study published in Nature. That 2-million-year model, which the team refers to as the 2ma simulation, "reproduces key palaeoclimate records such as tropical sea surface temperatures, Antarctic temperatures, the eastern African hydroclimate and the East Asian summer monsoon in close agreement with paleo-reconstructions," ensuring a realistic representation of how rain patterns in Southern Africa were likely shifting at the time.

Basically, the team was looking at how the 41,000-year cyclical patterns of precipitation and temperature change caused by the Earth's axial wobble affected the availability of resources for early humans and our close cousins. By combining the synthetic data generated by the 2ma simulation with the hard evidence of fossil and archaeological findings, the team puzzled out the places Homo sapiens and our genetic offshoots were most likely to inhabit.

The Pusan team noted a few surprising trends emerging from the data. For example, the researchers found that around 700,000 years ago, Homo heidelbergensis (suspected to be the progenitor of both Neanderthals and modern humans) began expanding from its traditional range.
They were able to do so because our planet's elliptical orbit created wetter, more habitable climate conditions at that time to support the expansion. The simulation projected the movement of these wet spots across the Earth, and the researchers found evidence in the fossil record that tracked along with them.

finance.yahoo.com/news/supercomputer-simulations-show-climate-changes-role-in-early-human-migration-203531329.html
|
|
Date Joined: Apr 5, 2018 3:27:17 GMT -5
|
Post by sb on May 10, 2022 12:11:19 GMT -5
GIGO.
|
|
Deleted
Deleted Member
Posts: 0
Date Joined: May 9, 2024 15:58:44 GMT -5
|
Post by Deleted on May 10, 2022 12:45:16 GMT -5
GIGO indeed. This is the science behind the continuous failure of climate models and their dire predictions. The atmosphere is about 0.8˚ Celsius warmer than it was in 1850. Given that the atmospheric concentration of carbon dioxide has risen 40 percent since 1750 and that CO2 is a greenhouse gas, a reasonable hypothesis is that the increase in CO2 has caused, and is causing, global warming.
But a hypothesis is just that. We have virtually no ability to run controlled experiments, such as raising and lowering CO2 levels in the atmosphere and measuring the resulting change in temperatures. What else can we do? We can build elaborate computer models that use physics to calculate how energy flows into, through, and out of our planet’s land, water, and atmosphere. Indeed, such models have been created and are frequently used today to make dire predictions about the fate of our Earth.
The problem is that these models have serious limitations that undermine their value in making predictions and in guiding policy. Specifically, three major problems exist. They are described below, and each one alone is enough to cast doubt on the predictions. All three together deal a devastating blow to the forecasts of the current models.
Measurement Error
Imagine that you're timing a high school track athlete running 400 meters at the beginning of the school year, and you measure 56 seconds with a handheld stopwatch that reads to ±0.01 seconds. Imagine also that your reaction time adds an error of ±0.2 seconds. With this equipment, you can measure an improvement to 53 seconds by the end of the year: the difference between the two times is far larger than the resolution of the stopwatch combined with your imperfect reaction time, allowing you to conclude that the runner is indeed faster. To gauge the runner's improvement, you calculate a trend of 0.1 seconds per week (3 seconds in 30 weeks). But if you try to retest the runner after half a week, hoping to measure the expected 0.05-second improvement, you will run into a problem. Can you measure such a small difference with the instrumentation at hand? No. There's no point in even trying, because you'll have no way of discovering whether the runner is faster: the size of what you are trying to measure is smaller than the size of the errors in your measurements.
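The stopwatch reasoning can be sketched numerically. This Python snippet is illustrative only: the times and error figures are the ones from the example above, and it assumes the standard practice of combining independent errors in quadrature.

```python
import math

# Sketch of the stopwatch example: can a measured time difference be
# distinguished from measurement error? (Figures from the example above.)
resolution = 0.01   # stopwatch resolution, seconds
reaction   = 0.20   # timer reaction-time error, seconds

# Uncertainty of a single reading (dominated by reaction time)
single = math.hypot(resolution, reaction)

# Uncertainty of the DIFFERENCE between two readings
# (independent errors combine in quadrature)
diff_err = math.hypot(single, single)

def detectable(difference):
    """A difference smaller than the combined error cannot be resolved."""
    return abs(difference) > diff_err

print(round(diff_err, 3))   # ~0.283 s combined error on any time difference
print(detectable(3.0))      # season-long 3 s improvement: True
print(detectable(0.05))     # half-week 0.05 s improvement: False
```

The season-long improvement dwarfs the error; the half-week improvement drowns in it.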
Scientists present measurement error by describing the range around their measurements. They might, for example, say that a temperature is 20˚C ±0.5˚C. The temperature is probably 20.0˚C, but it could reasonably be as high as 20.5˚C or as low as 19.5˚C.
Now consider the temperatures that are recorded by weather stations around the world.
Patrick Frank is a scientist at the Stanford Synchrotron Radiation Lightsource (SSRL), part of the SLAC National Accelerator Laboratory at Stanford University. Frank has published papers that explain how the errors in temperatures recorded by weather stations have been incorrectly handled. Temperature readings, he finds, have errors over twice as large as generally recognized. Based on this, Frank stated, in a 2011 article in Energy & Environment, "…the 1856–2004 global surface air temperature anomaly with its 95% confidence interval is 0.8˚C ± 0.98˚C." The error bars are wider than the measured increase. It looks as if there's an upward temperature trend, but we can't tell definitively. We cannot reject the hypothesis that the world's temperature has not changed at all.
The Sun's Energy
Climate models are used to assess the CO2-global warming hypothesis and to quantify the human-caused CO2 "fingerprint."
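Frank's confidence-interval claim can be checked with one line of arithmetic. A minimal Python sketch, using only the figures quoted above (not independently verified):

```python
# Frank's figures as quoted: an 1856-2004 anomaly of 0.8 C with a
# 95% confidence interval of +/-0.98 C. If the interval straddles zero,
# "no change at all" cannot be ruled out at that confidence level.
anomaly = 0.8       # measured warming, degrees C
half_width = 0.98   # 95% CI half-width, degrees C

lower, upper = anomaly - half_width, anomaly + half_width
print((round(lower, 2), round(upper, 2)))   # (-0.18, 1.78)
print(lower < 0.0 < upper)                  # True: zero is inside the interval
```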
How big is the human-caused CO2 fingerprint compared to other uncertainties in our climate model? For tracking energy flows in our model, we use watts per square meter (Wm–2). The sun's energy that reaches the Earth's atmosphere provides 342 Wm–2 (an average of day and night, poles and equator), keeping it warm enough for us to thrive. The estimated extra energy from excess CO2 (the annual anthropogenic greenhouse gas contribution) is far smaller, according to Frank, at 0.036 Wm–2, or 0.01 percent of the sun's energy. If our estimate of the sun's energy were off by more than 0.01 percent, that error would swamp the estimated extra energy from excess CO2. Unfortunately, the sun isn't the only uncertainty we need to consider.
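The 0.01 percent figure follows from simple division. A Python sketch (both numbers are the ones quoted in the post, taken as given):

```python
solar = 342.0        # W/m^2, average solar input cited above
co2_annual = 0.036   # W/m^2, annual CO2 forcing increment cited by Frank

fraction = co2_annual / solar
print(f"{fraction:.4%}")   # 0.0105%, i.e. roughly 0.01 percent
```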
Cloud Errors
Clouds reflect incoming radiation and also trap outgoing radiation. A world entirely encompassed by clouds would have dramatically different atmospheric temperatures than one devoid of clouds. But modeling clouds and their effects has proven difficult. The Intergovernmental Panel on Climate Change (IPCC), the established global authority on climate change, acknowledges this in its most recent Assessment Report, from 2013:
The simulation of clouds in climate models remains challenging. There is very high confidence that uncertainties in cloud processes explain much of the spread in modelled climate sensitivity.[bold and italics in original]
What is the net effect of cloudiness? Clouds lead to a cooler atmosphere by reducing the sun's net energy by approximately 28 Wm–2. Without clouds, more energy would reach the ground and our atmosphere would be much warmer.

Why are clouds hard to model? They are amorphous; they reside at different altitudes and are layered on top of each other, making them hard to discern; they aren't solid; they come in many different types; and scientists don't fully understand how they form. As a result, clouds are modeled poorly. This contributes an average uncertainty of ±4.0 Wm–2 to the atmospheric thermal energy budget of a simulated atmosphere during a projection of global temperature. This thermal uncertainty is about 110 times as large as the estimated annual extra energy from excess CO2. If our climate model's calculation of clouds were off by just 0.9 percent (0.036 is 0.9 percent of 4.0), that error would swamp the estimated extra energy from excess CO2. The total combined errors in our climate model are estimated to be about 150 Wm–2, which is over 4,000 times as large as the estimated annual extra energy from higher CO2 concentrations. Can we isolate such a faint signal?
In our track athlete example, this is equivalent to having a reaction time error of ±0.2 seconds while trying to measure a time difference of 0.00005 seconds between any two runs. How can such a slight difference in time be measured with such overwhelming error bars? How can the faint CO2 signal possibly be detected by climate models with such gigantic errors?
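The ratios used in the two paragraphs above can be reproduced with a few lines of arithmetic. This Python sketch only re-derives the quoted numbers; the underlying figures (0.036, 4.0, and 150 Wm–2) are taken from the post as given, not independently verified:

```python
co2   = 0.036   # W/m^2, annual CO2 forcing increment
cloud = 4.0     # W/m^2, cloud-related thermal uncertainty
total = 150.0   # W/m^2, claimed total combined model error
reaction = 0.2  # s, reaction-time error in the stopwatch analogy

print(round(cloud / co2))                # 111: cloud error vs CO2 signal (~110x)
print(f"{co2 / cloud:.1%}")              # 0.9%: CO2 signal as share of cloud error
print(round(total / co2))                # 4167: total error vs CO2 signal (>4,000x)
print(round(reaction * co2 / total, 6))  # 4.8e-05 s: the analogous time difference
```

The last line scales the ±0.2-second reaction error by the signal-to-error ratio, giving the roughly 0.00005-second difference quoted in the running analogy.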
Other Complications
Even the relationship between CO2 concentrations and temperature is complicated.
The glacial record shows geological periods with rising CO2 and global cooling and periods with low levels of atmospheric CO2 and global warming. Indeed, according to a 2001 article in Climate Research by astrophysicist and geoscientist Willie Soon and his colleagues,“atmospheric CO2 tends to follow rather than lead temperature and biosphere changes.”
A large proportion of the warming that occurred in the 20th century occurred in the first half of the century, when the amount of anthropogenic CO2 in the air was one quarter of what it is now. The rate of warming then was very similar to the recent rate of warming. We can't have it both ways: the current warming can't be unambiguously attributed to anthropogenic CO2 emissions if an earlier period experienced the same kind of warming without the offending emissions.
Climate Model Secret Sauce
It turns out that climate models aren’t “plug and chug.” Numerous inputs are not the direct result of scientific studies; researchers need to “discover” them through parameter adjustment, or tuning, as it is called. If a climate model uses a grid of 25x25-kilometer boxes to divide the atmosphere and oceans into manageable chunks, storm clouds and low marine clouds off the California coast will be too small to model directly. Instead, according to a 2016 Science article by journalist Paul Voosen, modelers need to tune for cloud formation in each key grid based on temperature, atmospheric stability, humidity, and the presence of mountains. Modelers continue tuning climate models until they match a known 20th century temperature or precipitation record. And yet, at that point, we will have to ask whether these models are more subjective than objective. If a model shows a decline in Arctic sea ice, for instance—and we know that Arctic sea ice has, in fact, declined—is the model telling us something new or just regurgitating its adjustments?
Climate Model Errors
Before we put too much credence in any climate model, we need to assess its predictions. The following points highlight some of the difficulties of current models.
Vancouver, British Columbia, warmed by a full degree in the first 20 years of the 20th century, then cooled by two degrees over the next 40 years, and then warmed through the end of the century, ending almost where it started. None of the six climate models tested by the IPCC reproduced this pattern. Further, according to Patrick Frank in a 2015 article in Energy & Environment, the projected temperature trends of the models, which all employed the same theories and historical data, diverged by as much as 2.5˚C.
According to a 2002 article by climate scientists Vitaly Semenov and Lennart Bengtsson in Climate Dynamics, climate models have done a poor job of matching known global rainfall totals and patterns.
Climate models have been subjected to "perfect model tests," in which a model is used to project a reference climate and then, after minor tweaks to the initial conditions, asked to recreate temperatures in that same reference climate. This is basically asking a model to do the same thing twice, a task for which it should be ideally suited. In these tests, Frank found, the results in the first year correlated very well between the two runs, but years 2–9 showed such poor correlation that the results could have been random. Failing a perfect model test shows that the results aren't stable and suggests a fundamental inability of the models to predict the climate.
The ultimate test for a climate model is the accuracy of its predictions. But the models predicted that there would be much greater warming between 1998 and 2014 than actually happened. If the models were doing a good job, their predictions would cluster symmetrically around the actual measured temperatures. That was not the case here; a mere 2.4 percent of the predictions undershot actual temperatures and 97.6 percent overshot, according to Cato Institute climatologist Patrick Michaels, former MIT meteorologist Richard Lindzen, and Cato Institute climate researcher Chip Knappenberger. Climate models as a group have been “running hot,” predicting about 2.2 times as much warming as actually occurred over 1998–2014. Of course, this doesn’t mean that no warming is occurring, but, rather, that the models’ forecasts were exaggerated.
Conclusions
If someone with a hand-held stopwatch tells you that a runner cut his time by 0.00005 seconds, you should be skeptical. If someone with a climate model tells you that a 0.036 Wm–2 CO2 signal can be detected within an environment of 150 Wm–2 error, you should be just as skeptical.
As Willie Soon and his coauthors found, "Our current lack of understanding of the Earth's climate system does not allow us to determine reliably the magnitude of climate change that will be caused by anthropogenic CO2 emissions, let alone whether this change will be for better or for worse."

www.hoover.org/research/flawed-climate-models
|
|
Deleted
Deleted Member
Posts: 0
Date Joined: May 9, 2024 15:58:44 GMT -5
|
Post by Deleted on May 10, 2022 13:29:13 GMT -5
|
|
Deleted
Deleted Member
Posts: 0
Date Joined: May 9, 2024 15:58:44 GMT -5
|
Post by Deleted on May 10, 2022 13:37:40 GMT -5
The physicist's math view of AGW. I'm posting my view just once on this.
|
|
Date Joined: Aug 24, 2017 14:02:33 GMT -5
|
Post by 🦍MAXX>ⓤ on May 10, 2022 18:35:31 GMT -5
|
|
Deleted
Deleted Member
Posts: 0
Date Joined: May 9, 2024 15:58:44 GMT -5
|
Post by Deleted on May 10, 2022 18:51:50 GMT -5
I'm still waiting for my home to become beachfront that I should have had 20 years ago according to Al Gore. You know... the guy who invented the internet.
|
|
Deleted
Deleted Member
Posts: 0
Date Joined: May 9, 2024 15:58:44 GMT -5
|
Post by Deleted on May 10, 2022 21:57:01 GMT -5
"I'm still waiting for my home to become beachfront that I should have had 20 years ago according to Al Gore. You know... the guy who invented the internet."

Don't get your hopes up; it's not likely by a long shot.
|
|
Deleted
Deleted Member
Posts: 0
Date Joined: May 9, 2024 15:58:44 GMT -5
|
Post by Deleted on May 11, 2022 14:08:30 GMT -5
LMAO, some folks must be really bored... to find that after almost a month and feel the need to comment. When I see things like this, it makes me wonder if people know there is an outside world other than a computer screen. My old published papers are hardcopy; it took some time to find a colleague's coauthored online version following the same principle. AGW discussions aren't time sensitive; the research and peer review continue.
|
|