UPDATE: I HAVE REVISED THIS POST FROM THE ORIGINAL ON JUNE 18TH TO MAKE IT MORE ACCURATE: THE NSPCC IS CONSISTENT IN PRESENTING ITS UNDER-16 AND UNDER-18 DATA. (I HAD SUGGESTED THAT IT WAS NOT, THE BETTER TO MAKE THE CASE.) MY OVERALL CONCERN REMAINS THE SAME, EVIDENCED IN THE NON-USE OF A MOVING AVERAGE.
The new NSPCC report, ‘How Safe Are Our Children? (2015)’, was all over the news yesterday morning. It is the third in an annual series of reports that seek to monitor and interpret the position in the UK in respect of child abuse and neglect. The NSPCC is a major UK charity with an illustrious history, working on one of the UK’s most important and troubling social problems. All my instincts are to be supportive. Close, deeply caring and very expert friends of mine work, or have worked, for the NSPCC. I have used their guidance when leading other charities. I have donated in the past. However, I cannot throw off some unease about how the NSPCC uses statistics. The child abuse problem in 2015 is bad enough and important enough without the NSPCC opening themselves, and the case, to challenge through their approach to the use of data. Furthermore, today they report at a time when trust in charities, especially those largely dependent on public donation, as the NSPCC is, is being challenged: witness the suicide of an elderly charity supporter in Bristol, and the RSPB’s proposal to build on land bequeathed on condition that it not be built on.
The headlines yesterday and today are everywhere, informed no doubt by the NSPCC’s own vigorous press work, emphasising that the study finds a big rise in reported sexual abuse of under-16s: up a third (or, as sometimes quoted, 38%) in a year. This is indeed a large increase, although, as the NSPCC points out, it may reflect increased reporting and improved recording. My suspicions of spin, and of advocacy-based evidence-making, are aroused by a small but significant detail of the reporting.
For the first few ‘indicators’ in the report, the picture (e.g. for homicides) is presented and interpreted using a moving average, presumably because the NSPCC recognises that this is, prima facie, an appropriate form of indicator for a trend, better than year-on-year data. Indeed, this particular indicator encourages some optimism about the decline in deaths through child abuse; the NSPCC themselves comment that “…it is heartening that key outcome indicators of child deaths continue to point in the right direction, as the number of children dying as a result of homicide or assault remain in long term decline.” It is a surprise to me, therefore, that the next section, on abuse reported by the police, does not use the moving average, switching instead to absolute figures and rates for each year. (The note about ‘trend’ on the chart misleadingly describes a one-year change as a trend.)
Had this section used the moving average to give a picture of the trend, the ‘increase’ (in the rate per 1,000) would have been about 10%, not the much larger, much-publicised rise of 38% (in the absolute number) between this year and last. Bad enough, but a much less dramatic headline number.
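To make the contrast concrete, here is a minimal Python sketch using made-up illustrative rates (these are not the report’s figures; the point is the mechanics, not the data). A single-year spike produces a dramatic year-on-year percentage, while a trailing three-year moving average of the same series, which dilutes any one year with the two before it, moves far less.

```python
# A minimal sketch with made-up illustrative figures; these are NOT the
# rates from the NSPCC report, just numbers chosen to show the mechanics.
# Hypothetical annual rate of recorded offences per 1,000 children:
rates = [1.00, 1.02, 1.05, 1.45]  # a sharp jump in the final year

# Year-on-year change compares only the last two points, so a
# single-year spike dominates the headline figure.
yoy = (rates[-1] - rates[-2]) / rates[-2] * 100

def moving_average(xs, window=3):
    """Trailing moving average: each point averages the last `window` years."""
    return [sum(xs[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(xs))]

# The moving average dilutes one unusual year with the years before it,
# so the change in the smoothed series is far less dramatic.
ma = moving_average(rates)
ma_change = (ma[-1] - ma[-2]) / ma[-2] * 100

print(f"Year-on-year change:   {yoy:.0f}%")        # ~38% with these figures
print(f"Moving-average change: {ma_change:.0f}%")  # ~15% here; the report's
                                                   # actual rates give ~10%
```

This damping of one-off fluctuations is precisely why the moving average, which the report itself adopts for its earlier indicators, is the natural choice for describing a trend.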
As I have noted, the problems of child abuse and neglect are indeed large and significant. More cases are coming to the notice of the police; this may or may not indicate an underlying increase in child abuse itself. The problems must be tackled vigorously, by prevention work as well as intervention.
In making the NSPCC’s case for increased provision at the conference launching the report, the NSPCC Director Lisa Harker emphasised that “Compiling this data is part of (NSPCC’s) commitment to evidence”. There is, however, a step between compiling data and making evidence: interpretation. To help us all make sound ‘evidence’ and derive well-grounded conclusions, the NSPCC (and all in the charity world) should make a parallel commitment to appropriate interpretation and accurate, consistent presentation.