tool/ has recently been updated so that it computes and draws trends (the work was done by me and Nick Barnes). Here are some recent comparisons, redrawn with trends:

The “before 1992 / after 1992 stations” from “The 1990s station dropout does not have a warming effect”:

The short trends are done with the last 30 years of data for each series (which, since one series ends in 1991, is a different period for each). Notice how similar the recent trends are.
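The tool's own code isn't reproduced here, but the "last 30 years of each series" calculation can be sketched as an ordinary least-squares fit. (`recent_trend` is a hypothetical helper for illustration, not the actual ccc-gistemp tool code.)

```python
import numpy as np

def recent_trend(years, anomalies, span=30):
    """Least-squares trend, in degrees C per century, over the
    last `span` years of a series.  NaN entries are ignored."""
    years = np.asarray(years, dtype=float)
    anomalies = np.asarray(anomalies, dtype=float)
    cutoff = years[-1] - span + 1
    sel = (years >= cutoff) & ~np.isnan(anomalies)
    slope = np.polyfit(years[sel], anomalies[sel], 1)[0]
    return slope * 100.0  # per year -> per century

# A toy series warming at 0.02 C/year, i.e. 2 C/century.
yrs = np.arange(1900, 1992)
vals = 0.02 * (yrs - 1900)
print(recent_trend(yrs, vals))
```

Because the fit uses each series' own last 30 years, a series ending in 1991 gets a 1962–1991 trend while one running to the present gets a different window.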

Reprising the Urban Adjustment post:

I don’t think I’ve done a combined land and ocean chart comparing hemispheres for the blog before, but here it is now:

Nick Barnes added the calculation of R² whilst I was writing this post, causing me to redraw all the charts.

Nick has also been exploiting ccc-gistemp's new module, and did a run with the somewhat experimental 250km smoothing rather than the traditional 1200km smoothing. The parameter is named gridding_radius and it affects gridding in Step 3; setting it to 250km essentially reduces each station's influence to very roughly the size of the cell used in gridding.
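In the published GISTEMP description, a station's contribution to a cell falls off linearly with distance, reaching zero at the gridding radius. A rough sketch of that weighting (much simplified; the real Step 3 code also handles subboxes and record combination):

```python
import math

EARTH_RADIUS_KM = 6371.0

def surface_distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    c = (math.sin(p1) * math.sin(p2) +
         math.cos(p1) * math.cos(p2) * math.cos(dlon))
    return EARTH_RADIUS_KM * math.acos(max(-1.0, min(1.0, c)))

def station_weight(d_km, gridding_radius=1200.0):
    """Linear distance weight used in gridding: 1 at the cell
    centre, falling to 0 at the gridding radius."""
    return max(0.0, 1.0 - d_km / gridding_radius)
```

With gridding_radius set to 250, any cell whose nearest station is more than 250km away gets zero total weight and comes out empty, which is why sparsely observed regions drop out of the 250km run.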

The effect on the trends is most visible in the Northern Hemisphere:

Trends are just one minor example of the way in which the ccc-gistemp code can be continuously improved. We don’t just draw trends for one graph, we improve the code so that all graphs can have trends.

17 Responses to “Trendy!”

  1. steven mosher Says:


    That’s one of the great things about having the code.

    So if I read this chart right what happens to the trend when you
    increase Smoothing from 250km to 1200km?

    Does the choice of 1200km make a significant difference to the warming trend?

  2. lucia Says:

    Maybe the 250 km leaves off more of the arctic? Just a guess.

  3. Nick.Barnes Says:

    @Steven: The standard GISTEMP algorithm uses a 1200km gridding_radius, so my experiment was to decrease that to 250km, not vice versa. You can see the effect in the last graph in the post: the standard 1200km algorithm is plotted in black, and the experimental 250km algorithm is plotted in green. Reading off the graph, the effect on the 30-year trend in the northern hemisphere is to decrease it from 2.5 K/century to 2.08 K/century.
    Without looking at the individual cell trends, I assume that this is because of the number of Arctic grid cells to which no station can contribute with a 250km radius: sparsity of stations means that more of the Arctic region will come out with no data. Generally speaking, warming of Arctic stations is high (as predicted by models, and apparently confirmed by sea ice trends and phenology), so Arctic grid cells tend to have high warming. So if the proportion of the Arctic with valid gridded data decreases, so does the mean trend. I’d expect GISTEMP with a low radius to be more like HadCRUT.
    [edited to fill in some more of the logical steps]
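The effect Nick describes can be illustrated with made-up numbers: if the cells that drop out of the gridding are the ones with the highest trends, the area-weighted hemispheric mean falls. (All figures below are invented for illustration, not taken from any run.)

```python
import numpy as np

# Toy zonal-band trends (C/century) and their area fractions of
# the northern hemisphere; the Arctic band warms fastest.
band_trend = np.array([1.5, 2.0, 2.5, 5.0])   # low latitudes -> Arctic
band_area  = np.array([0.50, 0.27, 0.16, 0.07])

def hemispheric_mean(trends, areas, valid):
    """Area-weighted mean over the bands that have data."""
    t, a = trends[valid], areas[valid]
    return float((t * a).sum() / a.sum())

all_bands = np.array([True, True, True, True])
no_arctic = np.array([True, True, True, False])
print(hemispheric_mean(band_trend, band_area, all_bands))
print(hemispheric_mean(band_trend, band_area, no_arctic))
```

Dropping the (small but fast-warming) Arctic band pulls the mean trend down, in the same direction as the 250km run in the last graph.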

  4. steven mosher Says:


    The only problem with that explanation is that: 1) CRU doesn’t extrapolate, and the difference is not that big; 2) the area extrapolated over is not so large as to cause that big a difference; 3) look at the difference for the southern hemisphere; and lastly 4) GISS extrapolation leaves much to be desired.

    As you can see from this:

    “Areas covered occasionally by sea ice are masked using a time-independent mask.”

    So if there is sea ice coverage for any part of the year, GISS will not use SST values to cover those cells for the entire year. Those cells must be covered by extrapolations from land for that year. This means that whether the area is covered with ice, with water, or with part ice and part water, it will have its anomaly extrapolated from land, regardless. HadCRUT, on the other hand, does not extrapolate its coverage, but it will use SST values for a cell when SST values are available for part of the year. If the area is covered with ice for the entire year, HadCRUT will not assign it a value. Therefore we get polar areas that are covered by extrapolation in GISS and not covered at all by HadCRUT.

  5. steven mosher Says:

    Nick, I know the 1200km is the default, but that figure has bothered me ever since I read Hansen 87.

    Further, the 1200km figure is not, to my mind, entirely justified by the underlying study. At 64–90° Hansen 87 shows that correlations at 1200km run from −0.1 to around 0.8, centred at 0.5. At low latitudes the performance is worse, averaging 0.33. Studying the sensitivity of such a number has always been on my wish list.

    The sensitivity to the averaging radius is important. But I suppose we can leave that to a study of the proper way of calculating an error due to spatial coverage; that error will be a function (at least in the maths I’ve seen) of the spatial correlation, which varies considerably.

  6. Nick.Barnes Says:

    I personally think that smoothing anomalies from the land to the sea is one of the more surprising aspects of the GISTEMP algorithm, and it’s on my list to run some code to calculate the correlation between SST anomalies and nearby land anomalies. Note that where there is SST data, step 5 of GISTEMP will discard the land data (if there are more than 240 months of SST data and the nearest contributing surface station is more than 100km away: see parameters.subbox_min_valid and parameters.subbox_land_range). So although a map of land-only data will show smoothed data far out into the oceans, in fact a land-ocean dataset will have discarded this when combining, wherever SST data is available.

    I’m not sure about the “time-independent mask” quote from GISS; there isn’t any code in GISTEMP to apply such a mask (the closest thing is that when processing the monthly SST data, any monthly temperature below -1.77 C is treated as missing data for that degree-grid cell). Possibly it refers to some processing in the source datasets: in particular if the SST climatology file incorporates the mask then that would have the effect. In any case, very patchy SST records will be discarded.

    The general question is this: how ought one to calculate global mean anomalies, when areas such as the high Arctic have very little data from either SSTs or surface stations? Given that there is some observational, theoretical, statistical, and model-based support for the general anomaly-smoothing method, it seems reasonable to use it. Alternatives might include using proxy measures, or non-GHCN data (e.g. satellites, other weather stations). Or simply missing the area out, as HadCRUT and JMA do.

    [edited to add:]
    I think Hansen’s article at RC covers this fairly well, although more data would be welcome. Using a model, he calculates a two-sigma error bar in the global mean anomaly, due to incomplete coverage, of 0.05 C since 1900.
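The Step 5 behaviour described above (use the ocean series only when it is long enough and no land station is nearby, and treat sub-freezing SSTs as missing) can be sketched as follows. This is a paraphrase of what is described in this thread, not the actual ccc-gistemp code:

```python
SEA_ICE_THRESHOLD_C = -1.77   # SSTs below this treated as missing
SUBBOX_MIN_VALID = 240        # months of SST data needed
SUBBOX_LAND_RANGE_KM = 100.0  # prefer land inside this range

def usable_sst_months(ssts):
    """Count SST months, treating likely sea ice as missing."""
    return sum(1 for t in ssts
               if t is not None and t > SEA_ICE_THRESHOLD_C)

def prefer_ocean(ssts, nearest_station_km):
    """Step-5-style choice: take the ocean series only when it is
    long enough and no surface station is close by."""
    return (usable_sst_months(ssts) >= SUBBOX_MIN_VALID
            and nearest_station_km > SUBBOX_LAND_RANGE_KM)
```

Under this logic a frequently ice-covered cell never accumulates 240 usable SST months, so it falls back to (smoothed) land data, which matches the GISS behaviour discussed above.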

  7. steven mosher Says:


    The land/sea thing has always been a nit to pick. Jones handles it differently, and in discussions with RomanM dating back to 2007 it’s clear that even Jones’s method leaves something to be desired. It’s not wrong, just not the best.

    In any case, have a read of Tilo’s article.

    I think there is a way to compromise between the HadCRUT approach, which doesn’t extrapolate, and the GISTEMP approach, which extrapolates too liberally, if I read Tilo right. Again, the final answer will fall in between HadCRUT and GISTEMP.

    Anyway, as always appreciate your work.

    It might be interesting to try some parameters in between the 250km (which seems over-restrictive) and the 1200km.

    I know that with the TOBS work in CONUS the limit was 750km. That is, beyond that distance from a reference site, Karl et al. did not think there was enough information in the distant site to correct the local site. But again, this figure is geographically dependent. Which is to say, there are some places where 1200km is too liberal.

  8. LDLAS Says:

    The “before 1992 / after 1992 stations” from “The 1990s station dropout does not have a warming effect”

    What about the first forty years?

  9. drj Says:

    LDLAS: What “first forty years” are you talking about? The first forty years on the graph, 1880 to 1920? If it is that period that you’re talking about, I doubt that there’s enough data to draw meaningful trends. If it’s something else, please expand.

  10. Ibrahim Says:


    I don’t think drj knows what drj just said :-)

  11. drj Says:

    @Ibrahim: Which bit do you take objection to? Clearly you can draw trends on the graph between 1880 and 1920, but are they meaningful? You’d have to consult error bars, which we don’t draw on our graphs (yet?).

    It’s well known that surface temperature reconstructions have far fewer stations for the early 20th century and the 19th century, and this makes the errors in the estimate larger. So it would be unwise to blithely draw trends for those periods.
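For what it's worth, a naive error bar for a least-squares trend is easy to compute. The sketch below assumes independent residuals (real temperature series are autocorrelated, which widens the true error bars) and is not code from the tool:

```python
import math

def trend_with_error(years, anomalies):
    """OLS trend and a naive 2-sigma error, both in degrees C per
    century, assuming independent residuals."""
    n = len(years)
    xbar = sum(years) / n
    ybar = sum(anomalies) / n
    sxx = sum((x - xbar) ** 2 for x in years)
    sxy = sum((x - xbar) * (y - ybar)
              for x, y in zip(years, anomalies))
    slope = sxy / sxx
    resid = [y - ybar - slope * (x - xbar)
             for x, y in zip(years, anomalies)]
    s2 = sum(r * r for r in resid) / (n - 2)
    se = math.sqrt(s2 / sxx)
    return slope * 100.0, 2.0 * se * 100.0
```

With few stations the scatter in the anomalies grows, so even this optimistic error bar would be wide for the 1880–1920 period.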

  12. Ibrahim Says:

    Well, you use them for the long “trends”.
    Draw the “trends” between 1880 and 1980.

  13. Ibrahim Says:

    I’m still waiting.

  14. Nick.Barnes Says:

    @Ibrahim: What are you waiting for? Are you expecting one of us to do something? It’s hard to tell from your comments.

    If you want someone to do something, I suggest that you write, clearly and in detail: who it is you want to do it; what it is you want them to do; and why you think they should do it. That way, they might either do it, or reply to say why they won’t.

    All of your contributions to this thread so far have been extremely brief and ambiguous, and have had a snippy tone. This isn’t that sort of blog. Please be more clear, and more clearly polite, in future.

  15. Ibrahim Says:

    If you draw the “trend” between 1880 and 1980 (pretending you have no data after that) for the post-cutoff global land index, you will see no warming during that period.

    Best regards.

  16. drj Says:

    It is a little more tricky to “pretend we have no more data after 1980” (data for later years can affect the anomaly for earlier years, although I fully expect the effect to be negligible in this case), but we can compute trends for the pre- and post-cutoff series from 1880 to 1980. The trends are 0.15 °C and 0.51 °C per century, with the post-cutoff series having the smaller value.

    So perhaps one could argue that those stations that continue to report are those which have a lower trend from 1880 to 1980.

    We are not in the business of drawing trend lines wherever anyone pleases. The data are there (in the URL!) for people to draw their own trend lines (or people can generate the data themselves using our code).

    The trend line feature was added on a whim and currently has a bug (when drawing only one series); much more discussion of where to draw trend lines will probably result in me removing the code for simplicity’s sake.

  17. Ibrahim Says:

    “We are not in the business of drawing trend lines wherever anyone pleases.”

    I just wanted to show you how tricky it is drawing “trendlines”.

    That was all. Thank you.

    Best regards.
