Looking at the report methodology reveals an amusing methodological conceit: they z-score their indicators onto a common scale. So however close the raw scores are on an indicator, they will be stretched across the same standardised scale, with one country right at the bottom and one right at the top, and this distribution will be given the same weight as another distribution with massive disparities. That is, a country at the top of the distribution for, say, immunisation rates, even if those rates are all very similar (a range of 80-100%), but which scores badly on, say, infant mortality (2-16/1000), where there is a wide range of outcomes, will come out the same as a country where the converse is the case; eyeballing it, this seems to be the case for Russia or Poland versus Austria. If we look at this section, the Health & Safety of Children measure shows the Netherlands (#2) at 112-113 and Ireland (#19) at 91; this represents infant mortality rates of 4.9 and 5/1000, low birth weight of 5.3% and 5%, immunisation rates of 96% and 81%, and deaths from accidents of 9 and 14/100,000. Now obviously Ireland is worse than the Netherlands, but the difference in ranking (#2 vs #19) does not convey the message that the Irish have a similar rate of low birth weight, similar infant mortality, 20% worse immunisation rates, and 50% worse accidental death in the under-19s; it makes it look like children in Ireland are a diseased subclass (rather than my first thought, which is that all their kids are dying in road accidents due to the stupid provisional licence system and the generally unsafe roads).
- Throughout this Report Card, a country’s overall score for each dimension of child well-being has been calculated by averaging its score for the three components chosen to represent that dimension. If more than one indicator has been used to assess a component, indicator scores have been averaged. This gives an equal weighting to the components that make up each dimension, and to the indicators that make up each component. Equal weighting is the standard approach used in the absence of any compelling reason to apply different weightings and is not intended to imply that all elements used are considered of equal significance.
- In all cases, scores have been calculated by the ‘z scores’ method – i.e. by using a common scale whose upper and lower limits are defined by all the countries in the group. The advantage of this method is that it reveals how far a country falls above or below the average for the group as a whole. The unit of measurement used on this scale is the standard deviation (the average deviation from the average). In other words a score of +1.5 means that a country’s score is 1.5 times the average deviation from the average. To ease interpretation, the scores for each dimension are presented on a scale with a mean of 100 and a standard deviation of 10.
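The mechanism quoted above is easy to sketch. Below is an illustrative calculation (the data are made up, except for the Netherlands/Ireland immunisation figures quoted earlier): z-scoring maps any set of raw values onto the same mean-100, standard-deviation-10 scale, so a narrow 80-100% immunisation band gets exactly as much stretch as a wide infant mortality spread.

```python
# Illustrative sketch of the report's z-score rescaling. The country
# values below are hypothetical, chosen only to show the effect.
def z_scores(values, mean=100.0, sd=10.0):
    """Rescale raw values onto a common scale via z-scores."""
    n = len(values)
    mu = sum(values) / n
    sigma = (sum((v - mu) ** 2 for v in values) / n) ** 0.5
    return [mean + sd * (v - mu) / sigma for v in values]

# Immunisation rates (%): a narrow 80-100% band.
immunisation = [96, 81, 94, 92, 88, 85, 90, 95]
# Infant mortality (per 1000): a much wider spread of outcomes.
infant_mortality = [4.9, 5.0, 3.5, 7.2, 16.0, 2.1, 6.0, 9.5]

# Both come out on an identical mean-100 / sd-10 scale, so the tight
# immunisation range carries the same weight as the wide mortality range.
print([round(z, 1) for z in z_scores(immunisation)])
print([round(z, 1) for z in z_scores(infant_mortality)])
```

However similar the raw values, the rescaled scores always span the same spread, which is exactly the complaint above.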
There are also some elements that seem a bit unwise, such as taking immunisation rates as measuring
- the comprehensiveness of preventative health services for children. Immunization levels also serve as a measure of national commitment to primary health care for all children
they note that
- Vaccination is cheap, effective, safe, and offers protection against several of the most common and serious diseases of childhood (and failure to reach high levels of immunization can mean that 'herd immunity' for certain diseases will not be achieved and that many more children will fall victim to disease).
But the very phenomenon of herd immunity means that percentage immunisation should not be treated as a uniform linear scale, where a 10% gain in coverage always matters as much as a 10% loss. If you need 90% coverage for herd immunity, then an improvement from 90% to 95% coverage is not as significant as an improvement from 85% to 90%. An additional factor is that, as an indirect measure of health services for children, immunisation is a lousy measure in countries (such as the UK) with a strong recent history of anti-immunisation campaigns: the UK vaccination rate peaked at 90%+ after MMR was introduced before dipping to under 85% after the MMR controversy around 1998.
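The threshold argument can be made concrete with a toy model (purely illustrative: the 90% threshold, case rates, and the tenfold drop above threshold are all assumptions, not epidemiological estimates). Below the herd-immunity threshold, infections scale roughly with the unprotected fraction; above it, transmission largely stops, so further coverage gains buy much less.

```python
# Toy model: suppose herd immunity for some disease kicks in at ~90%
# coverage. The threshold and rates are illustrative assumptions only.
def expected_cases(coverage, threshold=0.90, base_rate=1000):
    """Rough expected cases per cohort under a step-change herd-immunity model."""
    if coverage >= threshold:
        # Above threshold: transmission mostly broken, only sporadic cases.
        return (1 - coverage) * base_rate * 0.1
    # Below threshold: cases scale with the unprotected fraction.
    return (1 - coverage) * base_rate

# The same 5-point coverage gain matters far more below the threshold
# than above it, so coverage is not a linear scale of outcomes.
for a, b in [(0.85, 0.90), (0.90, 0.95)]:
    print(f"{a:.0%} -> {b:.0%}: cases fall by "
          f"{expected_cases(a) - expected_cases(b):.0f}")
```

Under these assumptions the 85%-to-90% step averts far more cases than the 90%-to-95% step, which is the nonlinearity the z-score treatment ignores.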
More later; perhaps I will get JP to discuss why the UK figures for the "Percentage of 15-19 year-olds not in education, training or employment" are actually an artefact of poor population statistics, and how reliable subjective survey reports really are when compared between countries.
Update: It doesn't look like JP is going to post, but basically the figure for 15-19-year-olds not in education is calculated by subtracting the number in education from the latest population estimate. You may remember the controversy over the 'missing' young men in the last census; the ONS was forced to fiddle the figures and arbitrarily add in thousands more of them. By doing this they have, of course, massively inflated the number not in education (since these extra young men are purely nominal and there is no further evidence for their existence, it is hardly likely they'd be enrolled in school!).
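The residual method described above is worth spelling out, because the arithmetic is what does the damage (all numbers here are made up for illustration): "not in education" is estimated as population minus enrolment, so any upward revision to the population estimate lands entirely in the "not in education" residual.

```python
# Sketch of the residual method: NEET count = population estimate minus
# enrolment. All figures below are hypothetical, for illustration only.
population_estimate = 380_000   # estimated 15-19 population
enrolled = 300_000              # counted in education or training
census_adjustment = 20_000      # nominal young men added after the census row

neet_before = population_estimate - enrolled
neet_after = (population_estimate + census_adjustment) - enrolled

# The adjustment flows one-for-one into the residual, since none of the
# purely nominal additions can appear on an enrolment register.
print(neet_before, neet_after)
```

Because the enrolment count is fixed, every nominal person added to the population estimate is, by construction, classified as "not in education".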