NSPCC ‘How Safe Are Our Children 2015’ report on UK child abuse and neglect — Evidence-based advocacy, or advocacy-based evidence-making?

I HAVE REVISED THIS POST FROM THE ORIGINAL ON JUNE 18th TO MAKE IT MORE ACCURATE: THE NSPCC IS CONSISTENT IN PRESENTING ITS UNDER-16 AND UNDER-18 DATA. (I HAD SUGGESTED THAT IT WAS NOT, THE BETTER TO MAKE MY CASE.) MY OVERALL CONCERN, EVIDENCED IN THE NON-USE OF A MOVING AVERAGE, REMAINS THE SAME.

The new NSPCC report, ‘How Safe Are Our Children? (2015)’, was all over the news yesterday morning. It is the third in an annual series of reports which seek to monitor and interpret the position in the UK in respect of child abuse and neglect. The NSPCC is a major UK charity with an illustrious history, working on one of the UK’s most important and troubling social problems. All my instincts are to be supportive. Some close, deeply caring and very expert friends of mine work for, or have worked for, the NSPCC. I have used their guidance when leading other charities. I have donated in the past. However, I cannot throw off some unease with how the NSPCC uses statistics. The child abuse problem in 2015 is bad enough and important enough without the NSPCC opening themselves and the case to challenge through their approach to the use of data. Furthermore, they report at a time when trust in charities, especially those largely dependent on public donation as the NSPCC is, is being challenged: for example, in the light of the suicide of an elderly charity supporter in Bristol, and the RSPB’s proposed building on land bequeathed on condition of no building.

The headlines yesterday and today are everywhere, informed no doubt by the NSPCC’s own vigorous press work, emphasising that their study finds a big rise in reported sexual abuse of under-16s: up a third (or, as sometimes quoted, 38%) in a year. This is indeed a large increase, but, as they point out, it may reflect increased reporting and improved recording. My suspicions of spin, and of advocacy-based evidence-making, are aroused by a small but significant detail of the reporting.

For the first few ‘indicators’ in the report, the picture (e.g. for homicides) is presented and interpreted using a moving average, presumably because the NSPCC recognises that this is, prima facie, an appropriate form of indicator for a trend, better than year-on-year data. Indeed, this particular indicator encourages some optimism about the decline in deaths through child abuse; the NSPCC themselves comment that “…it is heartening that key outcome indicators of child deaths continue to point in the right direction, as the number of children dying as a result of homicide or assault remain in long term decline.”

[Chart: NSPCC 2015 homicide data]

It is a surprise to me, therefore, that the next section, on abuse reported by the police, does not use the moving average. It changes to use absolute figures and rates for each year. (The note about ‘trend’ on the chart speaks misleadingly of a one-year change as a trend.)

[Chart: 2015 NSPCC data on abuse recorded by the police]

Had this section used the moving average to give a picture of the trend, the ‘increase’ (for the rate per 1,000) would have been around 10%, not the much larger, much publicised rise of 38% (in the absolute number) between this year and last. Bad enough, but a much less dramatic headline number.
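To illustrate the difference, here is a minimal sketch in Python. The counts are placeholders chosen only to mimic the contrast described above (a roughly 38% year-on-year rise against a roughly 10% rise in the 5-year moving average); they are not the NSPCC’s actual figures:

```python
# Sketch: how a trailing 5-year moving average damps a one-year jump.
# The counts below are illustrative placeholders, not the NSPCC's data.

def moving_average(series, window=5):
    """Trailing moving average over the last `window` values."""
    return [
        sum(series[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(series))
    ]

# Hypothetical recorded-offence counts for six consecutive years,
# ending with a large single-year rise.
counts = [23_500, 25_000, 25_500, 26_500, 26_000, 36_000]

year_on_year = (counts[-1] - counts[-2]) / counts[-2]  # the 38% headline
ma = moving_average(counts)                            # two 5-year averages
smoothed = (ma[-1] - ma[-2]) / ma[-2]                  # ~10%: less dramatic

print(f"Year-on-year change:   {year_on_year:.0%}")
print(f"Moving-average change: {smoothed:.0%}")
```

The same underlying series reads very differently depending on which presentation is chosen, which is exactly why the choice matters.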

As I have noted, the problems of child abuse and neglect are indeed large and significant. More cases are coming to the notice of the police; this may or may not indicate an underlying increase in child abuse itself in the present. The problems must be tackled vigorously, by prevention work as well as intervention.

In making the NSPCC’s case for increased provision at the conference launching the report, the NSPCC Director Lisa Harker emphasised that “Compiling this data is part of (NSPCC’s) commitment to evidence”. There is, however, a step between compiling data and making evidence, which is interpretation. To help us all make sound ‘evidence’ and derive well-grounded conclusions, the NSPCC (and all in the charity world) should make a parallel commitment to appropriate interpretation, and accurate, consistent presentation.


6 thoughts on “NSPCC ‘How Safe Are Our Children 2015’ report on UK child abuse and neglect — Evidence-based advocacy, or advocacy-based evidence-making?”

  1. No. The reason for the 5-year moving average is to iron out variations due to random noise, resulting from a dozen or so additional cases that happen to have occurred without any change in the underlying risk.

    Where the data set is already large (tens of thousands) as in the case of sexual offences reporting data, a random variation of a dozen or two (or even a hundred or two) is not going to affect the trends. With a data set that large, a 38% increase is in fact a 38% increase.

    Make a comparison with tossing coins. If you toss 20 coins, you aren’t going to be all that surprised if on one occasion you get 8 heads and on another you get 12. Even though that’s a 50% increase in the number of heads you got in two successive tests, it provides no significant evidence that there is anything odd about the coins. You would need to run the test more times to see whether there was an actual change in the coins or whether the variation is random. That’s the essence of the use of the 5 year moving average.

    But if you toss 20,000 coins, then a variation from 8,000 heads to 12,000 heads in successive series of 20,000 coin-tosses means something has definitely changed about the coins! You already have a large sample, you don’t need to do a moving average in order to reach this conclusion.

    There’s plenty of mathematics behind this, and you can read up on it if you wish. Any AS statistics syllabus textbook available in Smiths will provide you with all the confirmation you need.

    So to summarise, the five-year moving average is only needed where you need to aggregate the number of incidents to the point where trends are discernible through the random noise.
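    A quick simulation makes the coin-toss point concrete (a Python sketch; the run counts are illustrative):

    ```python
    import random

    # Sketch of the coin-toss point above: the same proportional swing that
    # is unremarkable in 20 tosses would be astonishing in 20,000.
    random.seed(1)

    def heads(n_tosses):
        """Count heads in n_tosses fair coin flips."""
        return sum(random.random() < 0.5 for _ in range(n_tosses))

    # Small samples: counts like 8 vs 12 heads out of 20 occur routinely.
    print("10 runs of 20 tosses:     ", [heads(20) for _ in range(10)])

    # Large samples: totals cluster within a few hundred of 10,000, so a
    # swing from 8,000 to 12,000 heads cannot plausibly be random variation.
    print("10 runs of 20,000 tosses: ", [heads(20_000) for _ in range(10)])
    ```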

    • Thank you again, and for persisting in my education. I shall read further too.

      I entirely understand the coin tossing example. However, an issue remains for me:
      How do you tell, statistically, that a percentage jump is significant in more than a ‘common sense’ way? If 38% is ‘evidently large’ and something is going on, at what point does a percentage change in one year become ‘significant’ in statistical terms? How do you tell?

      And does not the moving average (five years may be too long, especially if changes in policy and practice, or in the culture, work through more quickly than that) help with interpretation? A sudden jump against a downward trend might indicate something different from a further jump in an already notably upward trend.

      • Whether you can tell that a change is what is called “statistically significant” (i.e. is probably not just down to random variation) depends on two things: the size of the change and the size of the data set. The bigger the change, the more likely it is to be significant, and with a bigger dataset smaller changes will be significant. Again, there’s a fair bit of mathematics for determining statistical significance.
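        To make this concrete, here is a rough sketch of one common check, treating each yearly count as a Poisson count (the counts used are illustrative assumptions, not the published figures):

        ```python
        from math import sqrt

        # Rough sketch: is a change between two yearly counts statistically
        # significant? Under the null hypothesis of no underlying change,
        # z = (b - a) / sqrt(a + b) is approximately standard normal, and
        # |z| > 2 is the usual informal threshold.
        # The counts below are illustrative, not the published figures.

        def z_for_change(a, b):
            return (b - a) / sqrt(a + b)

        # Homicide-scale counts: a rise of 10 on ~50 (a 20% jump) is
        # nowhere near significant.
        print(f"50 -> 60:       z = {z_for_change(50, 60):.2f}")          # ~0.95

        # Offence-scale counts: a comparable proportional rise on tens of
        # thousands is overwhelmingly significant.
        print(f"26000 -> 36000: z = {z_for_change(26_000, 36_000):.2f}")  # ~40
        ```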

        If I had any criticism of the NSPCC charts, it would be that I doubt that for the child homicide figures even a 5-year moving average gives you statistically significant trends. I would have to run the numbers, but you need an awfully big change to be significant in such a small data set.

  2. There are so few child homicides in any particular year that a rise of a few might be just a matter of random variation without any underlying cause. We are talking of a few dozen such deaths in the entire country in any one year. A 5-year moving average is not an unreasonable thing to use in the circumstances.

    However, when we are talking of the number of sexual offences recorded by the police, we are talking in the tens of thousands per year, that is, a few dozen such reports per day. A 5-day moving average would be a reasonable way to iron out the noise at such small daily sample levels, but there is no need when we are already aggregating over a much longer period to get a much larger dataset. There is nothing sinister in the NSPCC’s approach to the statistics in the two instances you have given. The different approach is perfectly proper because of the different number of incidents being counted.

    If there were something wrong in the NSPCC’s statistical approach here, I would be more than ready to join your criticisms; I have criticised the NSPCC plenty in the past.

    • Thanks for the comment. But isn’t a moving average still relevant for mapping a trend in a large dataset as much as in a small one? Isn’t a one-year jump up or down, in a big figure or a small one, still not necessarily evidence of a trend?

      • No. If you had daily totals of sexual crime reports, they would be of the order of a few dozen a day, about the same number as you have child homicides per year. So you might want to do a 5-day moving average on them to discover whether there was a trend from week to week.

        But in fact what has been done is to combine the figures into yearly totals, which iron out the daily random variations far more thoroughly than a 5-day moving average does, and so give you your year-on-year trend directly, and with far more accuracy.
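        A small simulation illustrates how thoroughly yearly totals iron out the daily noise (a Python sketch assuming, purely for illustration, an average of 80 reports per day):

        ```python
        import random
        from statistics import mean, pstdev

        # Sketch: yearly totals built from a few dozen noisy daily counts
        # are already so large that chance variation between years is tiny.
        # The daily mean of 80 is an assumption for illustration only.
        random.seed(2)

        def yearly_total(daily_mean=80):
            """Sum 365 days of noisy counts (normal approx. to Poisson)."""
            return sum(
                max(0, round(random.gauss(daily_mean, daily_mean ** 0.5)))
                for _ in range(365)
            )

        totals = [yearly_total() for _ in range(10)]
        print("10 simulated yearly totals:", totals)

        # The spread is well under 1% of the mean, so a genuine 38% rise in
        # such totals cannot be put down to random noise.
        print(f"mean {mean(totals):.0f}, sd {pstdev(totals):.0f} "
              f"({pstdev(totals) / mean(totals):.2%} of the mean)")
        ```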

        You’ll either have to trust me on this or read up on the relevant mathematics.

If you want to comment, please do so here … and thank you!
