Our old friend Sheldon Walker showed up again, asking what it would take to convince me there was a “pause” (or “hiatus” or even “slowdown”). Evidently he didn’t like my assertion of the absence of a “pause” in my latest video. He ended by asking whether there’s no evidence I would ever accept — I think that was his way of planting the idea.
If he had paid attention to this post, or read this paper, he’d already know the answer. What it would take is evidence that actually passes muster, statistically.
There isn’t. I’ve tried, damn hard, to find some statistical test which would establish a change in trend recently. Result: not.
What does not pass muster is “looks like.” Statistics is littered with the carcasses of ideas based on data that “looked like” something, but didn’t pass muster statistically and, when further data arrived, were abundantly, clearly wrong. It’s a classic and crucial lesson in statistics that “looks like” just don’t cut it, but it is an excellent way to deceive yourself.
You wanna know what would convince me? Statistics, done right.
I’ve made that plain, often. I’m sick of comments asking what it would take, when I’ve already answered that question many times over. Pay attention.
This blog is made possible by readers like you; join others by donating at Peaseblossom’s Closet.
When confronted by claims of a “pause” I generally point out that it’s possible to read all sorts of things out of a graph, but in order to understand what the graph shows, it is essential to know the underlying mechanics that brought about the numbers. So in this case the question is rather simple: what could have caused a pause? What is the physical change that could make temperature measurements seem like there was a pause?

We know from physics that when you increase the amount of CO2 in the atmosphere, one would expect a rise in temperature, given that there is no change in the amount of energy entering the system (from the sun). Since we know the sun has not varied in output (by much), and that the planet’s core is not the source of the heat (the oceans are heating from the top), the only change in the radiative behaviour of the system affecting surface temperatures is the amount of greenhouse gases. So, as the science tells us, we would expect a temperature rise; any “pause” that could be read out of the numbers would be a random variation that would be overtaken by the expected rise. The past few years have shown this to be the case, and the same principle can be demonstrated at many different time intervals in the past 3 to 4 decades.

If anything could contribute to anything resembling a pause, it would not be a lack of warming from greenhouse gases but the amount of aerosols from, e.g., volcanoes. Surely, science also tells us that when there is high volcanic activity we would expect less warming, or none, or even cooling. Although cooling is getting less likely with each ppm rise in CO2 in our atmosphere (methane is also important).
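The point that short “pauses” are exactly what a steady warming trend plus noise produces can be illustrated with a toy simulation. The trend rate, noise level, and year range below are illustrative assumptions, not fitted values:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic series: a steady 0.018 °C/yr trend plus white noise.
# Both numbers are assumptions chosen only for illustration.
years = np.arange(1975, 2016)
temps = 0.018 * (years - years[0]) + rng.normal(0.0, 0.10, years.size)

def ols_slope(x, y):
    """Ordinary least-squares slope of y against x."""
    return np.polyfit(x, y, 1)[0]

# Slope over the full record vs. slopes of every 10-year window.
full_slope = ols_slope(years, temps)
window_slopes = [ols_slope(years[i:i + 10], temps[i:i + 10])
                 for i in range(years.size - 10 + 1)]

print(f"full-record slope: {full_slope:.4f} °C/yr")
print(f"decade slopes range from {min(window_slopes):.4f} "
      f"to {max(window_slopes):.4f} °C/yr")
```

The decade-length slopes scatter widely around the true rate, and some look flat, while the full record pins the trend down tightly. That is the “random variation overtaken by the expected rise” in miniature.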
I like the tests that show the same data presented in a different context (e.g., as crop yields over time), which people then see as definitely showing an increasing trend. It makes me appreciate medical TRIPLE BLIND testing, which goes the extra step of having the collected data statistically analyzed with its labels removed.
Excellent physics-based rebuttal of the ‘pause’ nonsense.
The cherry of 1998 is gone. Approx 30 percent of all the warming we’ve seen globally since the 1880s has happened since 1998.
I think the most that can be said is that there could have been a statistically significant pause in warming had the conditions seen in the Eastern Pacific from the last half of 2007 to 2013 persisted, but they did not.
Maybe by “what would it take” he honestly meant “what sort of time series of data would pass appropriate statistical tests” to show a pause. That’s probably a very charitable interpretation, and you’re certainly under no obligation to spend effort constructing a fake data set that would show a pause, but it might be a useful illustration of how much larger an effect would have to be to pass statistical muster than for the motivated human eyeball to pick it out as possible.
The motivated human eyeball is why the gambling industry exists. Denialists are gamblers and losers.
What would it take ? I think you answered this as clearly as you possibly could in https://tamino.wordpress.com/2016/09/02/you-bet/
Maybe he wants you to reset the baseline to 2016 and try again ?
Yes, that’s exactly what Tamino should point this guy to. He laid out a clear definition of what it would take … and then reality had its say.
It is interesting to look at what happened after Tamino set this bet. 6 of the next 7 temperatures were well under the trend. If we assume a 50% chance of the temperature being below the trend, then this gives us a probability of about 0.015625 of this happening by chance. Well under the normal p-value of 0.05, which means that we should reject the null hypothesis, that there has been no slowdown.
Doesn’t a temporary slowdown look more likely?
[Response: This is why you fail statistics.
The “6 out of 7” doesn’t give a binomial probability of 0.015625 (that would be 6 out of 6), it gives a p-value of 0.0625, which fails 95% significance. And that’s for a *one-sided* test; a two-sided test raises the p-value to 0.125, which not only fails statistical significance, it fails miserably.]
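The binomial arithmetic in that exchange is easy to check directly; a minimal sketch using only the standard library:

```python
from math import comb

def binom_tail(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# 6 of 7 years below the trend line, under a fair 50/50 null:
p_one_sided = binom_tail(6, 7)   # 8/128 = 0.0625
p_six_of_six = binom_tail(6, 6)  # 1/64  = 0.015625 (the figure quoted above)
p_two_sided = 2 * p_one_sided    # 0.125

print(p_one_sided, p_six_of_six, p_two_sided)
```

As the response says: 0.015625 is the probability for 6 of 6, not 6 of 7, and neither the one-sided (0.0625) nor the two-sided (0.125) p-value comes close to 95% significance.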
Re. “Doesn’t a temporary slowdown look more likely?” Besides what Tamino has already pointed out, the test for whether a slowdown is likely or not is NOT to examine only the last 6 data points. The test, as Tamino has shown in many previous entries over many years, is to test whether the 6 additional points differ significantly from the prior trend.
Your willingness to completely ignore all data which does not show what you want is an absolute hallmark of denial.
yes, a daft question, hardly worthy of a response
“what would convince you the world is flat”
well walking of the edge would be a start – but it won’t happen
It is daft to compare the claim of a slowdown, to the claim of a flat earth!
Do you truly think that? Do you truly consider it daft to compare the idea of a flat earth (something that is difficult not to see as ridiculous) with the idea of a “pause” (or “hiatus” or even “slowdown”) (something that is still ridiculous but easier for some to accept)? To me, there is no question. It appears to be a most useful analogy. Of course, those who believe in a “pause” (or “hiatus” or even “slowdown”) may be uncomfortable with the idea of such an analogy. They may even be unnerved enough by it to dismiss it and brand it “daft!!”
After throwing a fairly random and even mix of heads and tails for 60 throws of a coin, a run of 6 consecutive heads won’t convince me that my coin has broken.
It may “look like” that to someone who only watched the last 6 throws though.
I would agree with you, if it was YOU tossing the coin, and you knew that it was the same coin being tossed every time.
But if you were betting on the toss of a coin, and somebody else was throwing the coin, then you might want to check that they haven’t substituted a double-headed coin after 60 throws, and then got 6 heads in a row.
Also, there can be a slowdown without the climate being “broken”.
[Response: And when the stats say that significant evidence of your being cheated just isn’t there, will you go to WUWT and post about how the guy doing the flips *might* have substituted a double headed coin? Maybe he’s part of a conspiracy of two-headed coiners to ruin our economy and institute world government based on socialism?]
As an experiment in my High School chemistry class I have students flip 100 coins. Tails are removed and then heads thrown again. There are 10 groups so a thousand coins. Six classes a day so 6 thousand coins. It is common to have a coin in one class flip 10 heads in a row, and I have seen 14 in a row. When all the coins in the day are added together it comes very close to what you expect from the statistics. The main divergences from statistics is at the end when only a few coins are left.
A group with a coin that took 13 throws usually has only that one coin left for many throws, and it “looks” like it is not a fair coin. When all the numbers are added at the end, it always comes out close to the statistics. The odd coin was exactly what Tamino expected; he just cannot pick which coin in advance.
As an experiment, give 100 pennies each to 10 of your friends. Cast them all on a table and remove all the tails. Cast again and remove. Count the casts to see how many it takes to remove all the pennies. The last coin was heads for 9 or more throws. The example cited of six in a row is common in two hundred throws; you are just not educated on the statistics.
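A rough simulation of that elimination experiment (the coin count and random seed here are arbitrary choices) shows the last surviving coin routinely racks up a long run of heads:

```python
import random

random.seed(1)

def rounds_until_gone(n_coins):
    """Flip n_coins each round, removing the tails, until none remain;
    return how many rounds it took."""
    rounds = 0
    while n_coins > 0:
        n_coins = sum(random.random() < 0.5 for _ in range(n_coins))
        rounds += 1
    return rounds

rounds = rounds_until_gone(1000)
print(f"the last coin survived {rounds} rounds, i.e. came up heads "
      f"{rounds - 1}+ times in a row")
```

With 1000 coins the last one typically shows heads eight to ten or more times in a row before it is eliminated, exactly as described, yet nothing about any individual coin is unfair.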
Your suspicion of an exchanged coin just reflects on your honesty. That is why you do not understand the science. Scientists report what they measure. You have to trust each other or you cannot advance knowledge.
Lol, Conspiracy Ideation
“the data/evidence is rigged” [yawn]
Flat Easters feel the same – if it helps
The probability of getting a run of at least 6 heads somewhere in your first 60 throws is fairly high as well. (A naive count of the 2^60 flip sequences, tallying roughly 54*2^54 position-and-run combinations, gives about 0.84, but that counts sequences containing more than one such run repeatedly; an exact recursion puts the probability near 0.36, still better than a one-in-three chance.)
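That probability can be computed exactly by counting the flip sequences that avoid any such run, a standard recursion; the sketch below, with run length and flip count as parameters, puts the 6-in-60 figure near 0.36:

```python
def prob_run(n, k=6):
    """P(at least one run of k heads in n fair coin flips), computed
    by counting the length-m sequences that avoid such a run."""
    avoid = [0] * (n + 1)
    for m in range(n + 1):
        if m < k:
            # too short to contain a run of k heads
            avoid[m] = 2 ** m
        else:
            # condition on the position of the first tail: it sits at
            # one of the first k positions, preceded only by heads
            avoid[m] = sum(avoid[m - j] for j in range(1, k + 1))
    return 1 - avoid[n] / 2 ** n

print(f"P(run of 6+ heads in 60 flips) = {prob_run(60):.3f}")
```

Small cases are easy to verify by hand: in exactly 6 flips the probability is 1/64, and in 7 flips it is 3/128 (HHHHHHH, HHHHHHT, THHHHHH).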
The problem with cherrypickers is they blindly ignore the fact that in doing post hoc stats the normal alpha level simply does not apply. You must adjust the alpha to control for the fact you are selecting values before analyzing them. But then the denier world would fall very quickly if they didn’t allow themselves to cherrypick.
I’ve been asked that before re the ‘pause’ idea. Lately I’ve been saying that I completely accept there was a pause in global warming in the mid-20th century, and that it is statistically distinct from the long-term trends before and after (probably got that idea from here).
Yes, Virginia, there is a statistical test for a ‘pause’ that would pass muster. We already have an example. I’m not immune to the possibility, just not persuaded by the recent kerfuffle.
Well, I think one thing which might legitimately cause a change of mind would be discovery of a new bias in the temperature records, leading to a change in the temperature record. It takes time to detect these things, so the last decade of the temperature record will often be less reliable than the preceding ones.
“The estimated TCR of ~ 1.35 (see Nicholas Lewis) is confirmed by the adjusted temperatures of the recent blogpost by Tamino. He stresses the physical importance of his statistical operation with the evaluation of his model.”
This is just Nick’s usual Energy balance crap. Nothing new.
It is actually not Nit-Picking Nick Lewis. The analysis is by some denizen of the deni-o-sphere called Frank Bosse. He simply takes a temperature record and plots it against the Forcing data from IPCC AR5 A2. His most recent efforts use the data from Tamino’s adjustments to Sol, Vol & ENSO 1951-2015. Using OLS, Bosse then determines the linear relationship between the two & so calculates a climate sensitivity that he then calls TCR.
His assertion that “The estimated TCR of ~ 1.35 (see Nicholas Lewis) is confirmed by the adjusted temperatures of the recent blogpost by Tamino.” is a bit odd as the GISS data gives a result of 1.57, a “divergence” that Bosse does note.
He also tries to tell us that there is a 0.2K wobble in the temperature series which he attributes to AMO. This AMO guff is now roaming into Nick Lewis territory (& Judy Curry’s, as I remember) but they were eager to pick the start & end points of their regressions to ensure their AMO wobble didn’t screw up the OLS analysis (although they were utterly blind to all the other stuff staring at them from their data series).
Yes, but the methodology for deriving the sensitivity is energy balance, complete with its assumption of short response time and continual near-equilibrium stasis–both shown to be incorrect.
I did actually do a quick check on Mr Bosse’s GISS result and got 1.56ºC for his ‘TCR’. Such a number is a little difficult to communicate given the usual use of ECS, so it does really need converting. With TCR in the range 1ºC to 2.5ºC and ECS in the range 1.5ºC to 4.5ºC, the quickest conversion would suggest ECS = 2.6ºC.
Of course, the climate forcing data from IPCC does not give the constant 1% annual rise assumed by TCR, certainly not since the 1960s. It was 4% through the 1970s and has fallen slowly since then to today’s 2%. This does rather make a nonsense of Bosse’s use of regression, as he is ascribing an annual temperature to an accumulated annual forcing with no accounting for the age of that accumulated annual forcing.
If, for instance, you were to take the temperature in a year to be more a result of forcing over the previous decade (so you plot T(y) against FORCEave(y-11 to y-1)), the result given is 1.75ºC and, with the quickest conversion, that yields an ECS = 3.00ºC, a value which I’m sure is familiar from somewhere.
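For what it’s worth, the “quickest conversion” used here appears to be a straight linear rescaling of the canonical ranges (my reading of the comment, not an established formula); a minimal sketch:

```python
def ecs_from_tcr(tcr):
    """Map TCR onto ECS by linearly rescaling the canonical ranges:
    TCR 1.0-2.5 °C onto ECS 1.5-4.5 °C (the 'quickest conversion'
    described in the comment above -- an assumption, not physics)."""
    return 1.5 + (tcr - 1.0) * (4.5 - 1.5) / (2.5 - 1.0)

print(ecs_from_tcr(1.56))  # the GISS-based 'TCR' quoted above
print(ecs_from_tcr(1.75))  # the lagged-forcing result
```

This reproduces both numbers in the comments: TCR 1.56 maps to roughly ECS 2.6, and TCR 1.75 maps to ECS 3.00.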
The other strange thing that happens when you consider temperature relates to last-decade forcing is that Bosse’s silly AMO wobble disappears & the forcing-temperature relationship straightens out.
Of course, it would be good if we could identify the best fit for this forcing-temperature relationship, a bit like the adjustment to GISS that Tamino and others have provided. Okay, it is a task that should really employ climate models, but it would show the impact of ignoring the forcing profile of preceding years, and that might put a bit of a brake on mealy-mouthed Nick Lewis.
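The mechanics of regressing temperature against trailing-decade forcing, i.e. T(y) against FORCEave(y-11 to y-1), can be sketched with entirely synthetic data; every number below (forcing growth, noise levels, the 0.5 °C per W/m² sensitivity) is an assumption for illustration only, not a fit to any real series:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-ins for an annual forcing series and its
# temperature response -- illustrative assumptions throughout.
years = np.arange(1900, 2016)
forcing = 0.02 * (years - 1900) + rng.normal(0, 0.05, years.size)

# Suppose temperature responds to the mean forcing of the preceding
# decade, with an assumed sensitivity of 0.5 °C per W/m^2:
trailing = np.array([forcing[i - 10:i].mean()
                     for i in range(10, years.size)])
temps = 0.5 * trailing + rng.normal(0, 0.03, trailing.size)

# Regressing T(y) against the trailing-decade mean forcing recovers
# the assumed sensitivity:
slope_lagged = np.polyfit(trailing, temps, 1)[0]
print(f"recovered sensitivity: {slope_lagged:.3f} °C per W/m^2")
```

The point is only the bookkeeping: if the response really depends on the forcing of preceding years, regressing against same-year forcing attributes the temperature to the wrong quantity.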
I’m not quite sure what TCR is (as opposed to ECS and ESS) but, according to Michael Mann, we’re at 1.2C, at least, above pre-industrial, for a 40% rise in CO2. Another 40% rise on top of that would get us to about a 100% rise since pre-industrial. So sensitivity appears to be at least 2.4C. However, this doesn’t take into account other GHGs, which have probably increased CO2e above 40%, so the short-term sensitivity could potentially be lower, but we certainly haven’t had the full response yet from what we’ve already put up there.
It seems to me that clinging to the possibility of a low climate sensitivity is clutching at straws.
Mike, the number is at least 2.33 °C for the transient climate response.
Aside from water CO₂ is still the major contributor to warming. This puts the equilibrium climate sensitivity well into the 3 °C+ region, which Hansen has been pointing at for decades.
Curry’s post is a disgraceful misrepresentation of the underlying science of the magnitude of the effect of ‘greenhouse’ gases, and she’s admitted that she has no expertise in ecology (and hence of the ecosystem repercussions of global warming), so her participation in this obfuscation of the issue is seriously appalling. She’ll no doubt have a fresh gust of wind in her climate change denialist sails though, now that president-elect Trump has his ticket to the oval office.
Just so we don’t shout past each other, define what you mean by “slowdown.” If all you mean is a temporarily slower rate, that is one thing–that is to be expected with some frequency in noisy data. If you mean that the physics have somehow changed, or that there is some probabilistic significance to the short time sequence, that is something quite different.
It does not help that there is “Making sense of the early-2000s warming slowdown”, co-authored by Meehl, Santer, Mann, and Hawkins, as well as Fyfe and Gillett, whose work on the question I have written about. (Part 2 is here. Additional comments by Hawkins are here.) Of course, they are talking about a slowdown, not a pause, even if, like Tamino, I see little evidence of such a thing.
The article’s comment on analyses like Tamino’s and like mine, even if we have taken the same data, looked at it in very different ways, and gotten the same result, is:
I can’t speak for Tamino, but I raise eyebrows at that “… to move beyond purely statistical aspects of the slowdown, and to focus instead on improving process understanding …”.
It is possible, in principle, that some variation treated as error statistically could actually be predicted through some finer grained analysis under certain conditions. For example, it is possible, in principle, that through high speed analysis, oh, coin flips might be predicted at better than 50-50 once the coin was in the air, at least. So the physicists are correct in principle, I guess.
However, I am far from convinced in that paper they were correct in practice. God may very well play dice–and the physicists may well be chasing a mirage–with climate variation for all practical purposes at the levels of measurement and analytic power available today.
I heard someone kid around that “Physicists hate unexplained variation”, which may go some of the way to set context for the Fyfe, et al paper.
I learned a great deal about this issue from a guest post on RealClimate last year by Stephan Lewandowsky titled Hiatus or Bye-atus?. Admittedly the title’s kind of corny, but the main point is not:
The first two are statistical questions. Using rigorous methods, Tamino and others have decisively demonstrated that statistically, any hypothetical pause is within the short-term noise around a consistent trend.
The last two questions are about climate physics. In a previous RealClimate post, Gavin Schmidt answered “yes” to the third question, when observed GMST between 2002 and 2012 is compared with the mean trend hindcasted for the period by the CMIP5 coupled-GCM ensemble. Gavin addressed the 4th question by referencing a 2014 Nature Geoscience commentary by himself and two others titled Reconciling warming trends, which showed that the divergence can be explained by adjusting the mix of greenhouse gases in the models to match observations, and quantifying the effects of ENSO, anthropogenic and volcanic aerosols, and solar irradiance. This is science at its best, working to refine our understanding of the physical basis of climate by resolving short-term “noise” to forcings.
Drat, link-tag closing fail. Since Tamino’s not my editor, please mentally fix the closing tag yourselves. There’s a properly-closed link around “Gavin Schmidt” in the 2nd sentence of the last paragraph.
[Response: In this case I’ll make an exception. But everyone be advised, I’m not your editor.]
My layman’s understanding of this so-called pause is that:
_ it is based on data from a satellite dataset which Spencer has admitted was wrong due to orbital drift.
_ the data was cherry-picked to start from the high point of the ’98 El Nino to deceive people about the true trend.
_ it only measures the atmosphere, and so doesn’t represent the true energy balance of the planet.
Is this roughly correct?
If it is correct, then any talk of a pause is totally spurious. It’s like a prosecutor presenting false evidence for a crime _ evidence known by everyone including the jury, to be false _ and asking what it would take to believe that a crime took place.
A more correct legal analogy of cherrypicking is when the police and prosecutor fail to disclose to the defense exculpatory evidence thereby never allowing the whole truth to be considered.
While some deniers blatantly lie, it is more common to see what Sheldon does here: Present only a misleading fragment of information out of context and imply it explains the whole context. A fool might be fooled. But the strategy runs up against a small problem: Tamino is not a statistical fool.
There is a statistically significant slowdown in the GISTEMP yearly temperature series. It begins in 2002, ends in 2013, and has a length of 11 years.
The table below shows that the difference between the warming rate of the slowdown, and the warming rates of the warming trends, is statistically significant.
The upper limit of the 95% confidence interval for the slowdown is 0.0132
The lower limits of the 95% confidence intervals for the warming trends are all greater than 0.0132
This means that there is no overlap between the 95% confidence interval of the slowdown, and the 95% confidence intervals of the warming trends. This means that they are statistically significantly different.
This proves that there has been a statistically significant slowdown.
Trend Type Start Year End Year Length(Years) Slope Lower 95% Upper 95%
========== ========== ======== ============= ===== ========= =========
Slowdown 2002 2013 11 0.0035 -0.0063 0.0132
Warming 1975 2002 27 0.0182 0.0134 0.0231
Warming 1975 2003 28 0.0186 0.0141 0.0231
Warming 1975 2004 29 0.0183 0.0141 0.0226
Warming 1975 2005 30 0.0189 0.0149 0.0229
Warming 1975 2006 31 0.0189 0.0152 0.0226
Warming 1975 2007 32 0.0189 0.0154 0.0225
Warming 1975 2008 33 0.0183 0.0149 0.0217
Warming 1975 2009 34 0.0182 0.0150 0.0214
Warming 1975 2010 35 0.0183 0.0153 0.0213
Warming 1975 2011 36 0.0178 0.0150 0.0207
Warming 1975 2012 37 0.0175 0.0148 0.0203
Warming 1975 2013 38 0.0173 0.0146 0.0199
Warming 1975 2014 39 0.0173 0.0148 0.0198
Warming 1975 2015 40 0.0177 0.0153 0.0201
[Response: Read this.]
Where do you get those numbers from? https://www.skepticalscience.com/trend.php gives 0.0027 -0.0179 0.0233 for GISTEMP from 2002 to 2013
Your mean trend is probably OK but your uncertainty range is way too small.
.027 ℃/decade is the GISS trend for the 11 years ended on December 31, 2012.
.032 ℃/decade is the GISS trend for the 12 years starting Jan 1, 2002 and ending Dec 31, 2013.
I can’t reconcile either with his .035 ℃/decade.
You have fallen foul of the SKS calculator’s little foible. An ‘end date’ of 2013 puts the end of the regression to 2013.0. The Jan 2002 to Dec 2013 period would require the ‘end date’ to be 2014 (or seemingly at least 2013.96). That returns a trend of 0.032ºC/decade (+/- 0.117ºC/decade). With monthly data, I calculate an OLS 2sd range of +/- 0.054ºC/decade if autocorrelation is ignored. Sheldon Walker may be running his regression using annual data or some other adjustment method to gain his 2sd range.
I am well aware of the SKS calculator’s little “foible”. That’s why I said his mean trend is “probably OK”. However, my point was that his uncertainty range is WAY too small, and variations to the “foible” of SKS’s calculator make absolutely no significant difference whatsoever to that point.
I’m sure. But it’s up to him to explain himself.
I don’t know how many different trends you tested to find this one, but a Bonferroni correction for 14 multiple comparisons would widen the required confidence interval from 95% to 1-0.05/14=99.6% for each of the single intervals.
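A sketch of that Bonferroni arithmetic, for anyone who wants the numbers:

```python
# To keep a 5% family-wise error rate across the m = 14 trend
# comparisons in the table, each individual comparison must use
# alpha/m (the Bonferroni correction).
m = 14
alpha_family = 0.05
alpha_each = alpha_family / m    # 0.05 / 14, about 0.0036
conf_each = 1 - alpha_each       # about 0.996, i.e. 99.6%

print(f"per-comparison confidence level: {conf_each:.1%}")
```

So each of the 14 intervals would need to be a 99.6% interval, not a 95% one, before any single “significant” difference could be claimed, and that is before even considering how many candidate start and end years were searched to find the one comparison presented.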
Of course, at the heart of Sheldon Walker’s analysis 2002-13 is the period 2005-08. The rate of warming 2005-08 doesn’t just demonstrate a slowdown, not even a mere hiatus: it demonstrates that global warming is actually in reverse!!! And it is statistically significant too!!!! The rate is a whopping -0.43ºC/decade (+/- 0.38ºC/decade), which is twice as fast a rate of cooling as this so-called AGW’s rate of warming!!!! And I achieve this flat-earth analysis using a quarter of the data that Sheldon Walker uses.
Wow. I never knew statistics could be such fun when you didn’t know or care what you’re doing.
Statistics as competitive pastime: How little data can you use to reach the preferred conclusion? ;-)
(Sadly, it’s probably scuppered by being too easy to be entertaining for very long. But for some, I suppose, that’s just an ancillary part of the true game–which involves finding as many suckers as possible to believe the ‘analysis.’)
All data do is inhibit your creativity. If you want to see real creativity, look what people say about the unknowable.
“All data do is inhibit your creativity. If you want to see real creativity, look what people say about the unknowable.”
Yes. Sadly, it might even get you elected President.
SkS calculator gives -0.38ºC/decade (+/- 0.91ºC/decade) for 2005.0 to 2009.0. That is not statistically significant cooling, not even close (and you won’t make it close by varying the interpretation of SkS’s “foible” either).
If you use monthly GISS data 2005-08 in a simple OLS calculation with no allowance for autocorrelation, the output is -0.378/decade with an sd of 0.139/decade. So the monthly data does provide a negative trend that is statistically significant if autocorrelation is ignored. However it was annual data that fell to hand (or more correctly that I chose to cherry-pick from, providing data 0.6925, 0.6375, 0.6608, 0.5400) and using OLS on that data yields the -0.43ºC/decade (sd 0.1886ºC/decade) with autocorrelation ignored.
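For reference, plain OLS on those four annual values (autocorrelation ignored, as stated) does reproduce the quoted figures:

```python
import numpy as np

# The four annual anomalies quoted above (2005-2008), regressed with
# ordinary least squares; autocorrelation deliberately ignored.
y = np.array([0.6925, 0.6375, 0.6608, 0.5400])
x = np.arange(len(y))

n = len(x)
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
se_slope = np.sqrt((resid @ resid) / (n - 2) / ((x - x.mean())**2).sum())

# slope and sd are per year; multiply by 10 for per-decade rates
print(f"trend: {slope * 10:+.2f} °C/decade (1 sd = {se_slope * 10:.4f})")
```

This returns the -0.43 ºC/decade trend with sd 0.1886 ºC/decade, illustrating just how easy (and how meaningless) such a cherry-picked “significant cooling” is.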
I’m sure Sheldon Walker has made a careless choice like the one you suggest. Being a global warming denialist, he’ll probably ignore it but I just wanted to put the question out there in the unlikely event that he is honest enough to answer it.
I am not sure what the question is, that you are putting out there. If you tell me what you want to know, then I will try to answer.
It is amusing to be called a “denialist”. Which of your cherished beliefs have I denialized?
You missed the obvious question in reply to your comment above which I quote “Where do you get those numbers from?”
None of them.
You are denying empirical facts about global warming.
The word is “denied” by the way.
That you struggle to identify “the question” does not bode well. I only see one question to choose from.
You were asked by Chris O’Neil up-thread ” Slowdown 2002 2013 11 0.0035 -0.0063 0.0132 Where do you get those numbers from?”
The answer appears to be by applying OLS to the GISTEMP annual data but that would yield (with currently published figures – 2 sig pts) 0.0035 -0.0053 0.0122 so it looks like you also employed an arithmetical error. But, hey, what do I know?!
As I stated very clearly in the post where I claimed that there was a statistically significant slowdown:
Quote: “There is a statistically significant slowdown in the GISTEMP yearly temperature series. It begins in 2002, ends in 2013, and has a length of 11 years.” End-quote.
So the data is from GISTEMP, and it was ANNUAL data.
All of the calculations were done using the Excel – Data Analysis – Regression function. Here is the raw output for the slowdown trend:
Multiple R 0.2435
R Square 0.0593
Adjusted R Square -0.0348
Standard Error 0.0521
df SS MS F Significance F
Regression 1 0.0017 0.0017 0.6305 0.4456
Residual 10 0.0272 0.0027
Total 11 0.0289
Coefficients Standard Error t Stat P-value Lower 95% Upper 95%
Intercept -6.3149 8.7518 -0.7215 0.4871 -25.8152 13.1854
X Variable 1 0.0035 0.0044 0.7940 0.4456 -0.0063 0.0132
Note that the critical t value for significance level = 0.05 and 10 degrees of freedom is 2.228
Lower 95% = 0.0035 – 2.228 * 0.0044 = -0.0063
Upper 95% = 0.0035 + 2.228 * 0.0044 = 0.0132
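The same interval can be reproduced outside Excel; a sketch using the rounded slope and standard error quoted above (which is why the upper bound lands at 0.0133 rather than the 0.0132 the unrounded values give):

```python
from scipy import stats

# Reproducing the Excel regression output above: slope and standard
# error (rounded, as quoted), with 10 degrees of freedom.
slope, se, df = 0.0035, 0.0044, 10

t_crit = stats.t.ppf(0.975, df)   # about 2.228, as stated above
lower = slope - t_crit * se
upper = slope + t_crit * se

print(f"t = {t_crit:.3f}, 95% CI = ({lower:.4f}, {upper:.4f})")
```

Note that this confidence interval assumes independent residuals; as pointed out repeatedly in this thread, annual temperature data are autocorrelated, so the interval is too narrow regardless of how carefully the t critical value is applied.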
[Response: You’ve gone from “There is a statistically significant slowdown” to “I am not evaluating the trends in terms of being statistically significant,” and now back to “There is a statistically significant slowdown.” Thanks for proving yet again what a hypocrite you are.
The test you performed to establish that significance is not the right test. I already proved that. You won’t accept it. You are a denier.]
Yeah I know that but I wasn’t asking you to restate your claim. I asked you where you got the data for the claim from. Eventually you told us. Took a while though.
I think the “while” it takes Sheldon Walker to present his methods is because his work has a rather strange status. It is both vitally important to the well-being of humanity (given how greatly it will impact the science of AGW) but it must also be given due respect as it is so precious to Sheldon Walker.
Sheldon Walker does indeed seem a little too possessive of his precious findings which is of course not how it should be. I note in the following OP thread he tells our host
But surely, if his precious “evidence” is so important, should he be waiting for folk to “desire” a butcher’s? There is a whole interweb out there. Where is his precious “evidence”?
As I say – strange!!
Now we have a handle on the methods being employed by Sheldon Walker, they can be addressed properly. We have already established that no account has been made for autocorrelation so all assertions of there being a statistically-significant difference between the alleged pause and other time periods was always nonsensical. Now we know that annual data is used along with Student’s T correction for the resulting low data quantity, that does not change that situation. It is nonsensical.
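Why ignoring autocorrelation is fatal to such claims can be shown with a small simulation: when the residuals are AR(1)-correlated, the naive OLS 95% confidence interval for a trend covers the true value far less often than 95% of the time. The sample size, AR coefficient, and trial count below are illustrative assumptions, not fitted to any temperature series:

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1(n, phi):
    """Generate an AR(1) noise series with coefficient phi."""
    eps = rng.normal(0.0, 1.0, n)
    x = np.empty(n)
    x[0] = eps[0]
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

n, phi, trials = 50, 0.6, 2000
t_crit = 2.01                  # roughly t(0.975, 48)
x = np.arange(n)
sxx = ((x - x.mean())**2).sum()

covered = 0
for _ in range(trials):
    y = ar1(n, phi)            # true trend is exactly zero
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    se = np.sqrt((resid @ resid) / (n - 2) / sxx)
    if abs(slope) <= t_crit * se:
        covered += 1

print(f"naive coverage: {covered / trials:.1%} (nominal 95%)")
```

With moderate autocorrelation the “95%” interval typically covers the true (zero) trend only around two-thirds of the time, which is exactly why differences that look “statistically significant” under a naive OLS treatment are nothing of the sort.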
Even so, Sheldon Walker does tread a nonsensical path that is also based purely on cherry-picking. His analysis, flawed as it is, is also sensitive to his chosen (or cherry-picked) start date of 1975. His analysis is not robust.
So, for instance, if this method is employed with a start date of 1960, an anti-pause significant at 95% (inc Student’s T but not addressing autocorrelation) appears during the years 1992-2003, a phenomenon that is almost identically significant to Walker’s 2002-13 pause when a 1980 start date is cherry-picked, although in both cases with a 1980 start they fail the 95% test. And the existence of an anti-pause (even though statistically insignificant) within the period 1992-2003 does provide a potential physical basis for the existence of a pause (statistically insignificant) in 2002-2013. I wonder if Sheldon Walker would be honest enough to admit this situation. He talks of such extensive analysis (in the next OP thread Walker talks of 97,000 to 300,000 trends analysed in a single analysis) which he has seemingly pursued for some months (back in February he was bragging on Wattsupia that he used 16,970 trends for his analysis), which suggests that he should be very familiar with what I have termed the anti-pause.
The truest answer would be to the question “What would prove the IPCC wrong?”, which would be a trend statistically different from the IPCC predictions. His query is more like “What would it take to convince you that there is no Santa Claus?”. If Santa doesn’t exist, and there’s no proof he does, then it’s not possible to prove Santa doesn’t exist except by noting that there’s no proof he does.