Lists of informal comments on the article:
"Oakland's Speed Hump Program: Is It Really Working?"
Journal of American Public Health
- This report starts off by saying that "pedestrian injuries caused by automobile collisions are a leading
cause of death among children aged 5 to 14 years". It gives a reference for that, and although I could not
trace the exact article, the following page gives the actual numbers:
In fact there were 3,388 deaths of children aged 5 to 14 in the USA in 1996 due to accidents, of which
only 498 were "pedestrian, traffic related". Motor vehicle occupant deaths were much higher and the figures
for drowning were comparable, so the opening statement is somewhat misleading to begin with.
- The methodology is deeply suspect in that the researchers themselves selected which patients to include,
and this selection was apparently not done on a blinded basis. In other words, the researchers did the data
selection themselves, which is an unscientific approach and a well-known source of significant bias. In fact
they selected only 100 out of 236 individuals who met the initial criteria (see the first paragraph following
the heading "Results"), which is a very high degree of selection for this kind of study.
- One particular bias is that they eliminated 84 patients who lived on an "artery street" or were injured
outside their neighbourhood. If your definition of an "artery street" is one without speed humps (as it
could be), then this introduces major bias: they have probably just eliminated injured people who live on
roads without humps. Nowhere do they define what an "artery street" is or why it differs from any other
street without speed humps. They say they identified such roads using a street map, but that hardly
explains it.
- A further problem is that some cities have more speed humps in wealthier, more "middle class"
neighbourhoods - typically because the residents of such neighbourhoods have more influence on public
expenditure and tend to be more vocal in asking for what they want. I don't know how areas are selected
for speed humps in Oakland, but this could introduce significant bias. Such neighbourhoods are also likely
to have parents who are less inclined to let their children play in the streets, particularly in an
unsupervised manner.
In fact, the authors point out that in their study "SES (ie household income) was not a significant
independent predictor of child pedestrian injury", when it was a very major factor in other studies, as one
would expect. That tends to confirm that the data have been biased in some way. They did not control for
household income in their "pair" selections when they could have done so.
- Their table of "unadjusted odds ratios" deserves closer scrutiny. For example, they give an odds ratio
of 0.50 (14% versus 23%) with a 95% confidence interval of 0.27 to 0.89. Strictly, those limits mean the
data are consistent with a true odds ratio anywhere between 0.27 and 0.89; because the interval excludes
1.0 the result is nominally "significant" at the 5% level, but the interval is very wide, so the size of any
effect is poorly pinned down. Unfortunately they do not explain this further, and do not give the raw data
(as any good scientific study would) to enable one to verify the statistics, so the weight their claimed
"significance" can bear is open to question.
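For reference, the conventional reading of a 95% confidence interval is that the result is significant at the 5% level precisely when the interval excludes an odds ratio of 1.0. A minimal Python sketch shows how an unadjusted odds ratio and its Wald interval are computed; the counts below are hypothetical, chosen only to mirror the reported 14% versus 23% exposure split (the article does not publish its raw numbers):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Wald confidence interval from a 2x2 table.

    a = exposed cases,    b = unexposed cases
    c = exposed controls, d = unexposed controls
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts (NOT the study's actual raw numbers):
# 28/200 cases exposed (14%) vs 46/200 controls exposed (23%)
or_, lo, hi = odds_ratio_ci(28, 172, 46, 154)
print(f"OR = {or_:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")

# Usual reading: significant at the 5% level iff the interval excludes 1.0
print("significant at 5%:", hi < 1.0 or lo > 1.0)
```

With these made-up counts the interval comes out below 1.0, so the point estimate would count as nominally significant; the real question raised above is whether the underlying counts themselves can be trusted.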
- Another bias is that they include all injured children, but children of wealthier parents with minor
injuries are, I suspect, more likely to be taken to a local doctor or other private medical facility than to a
public hospital. Again the data selection is biased, in that it would tend to select poorer children for the
study, so the sample may be quite unrepresentative of the real world.
- In summary, this looks like pretty poor science, and the number of patients involved in the study is
quite small. The researchers quite likely had a preconceived expectation of what they would find (this data
is not the by-product of some other, more general study, or an unexpected result), and if this study were
replicated on a more rigorous basis in another locality in different circumstances, I think the results
obtained would likely be very different.
- They mention the "confounding" factor of sidewalks. It is very likely that roads without speed humps
also lack sidewalks (you cannot normally put a speed hump on a road without sidewalks, as drivers then
drive around the side of it). If roads with sidewalks are safer for pedestrians (as they suggest is the case,
citing one study), then this would undermine the results to a large extent.
Incidentally, they say they could not go back and check whether sidewalks were present at the accident
sites, but I don't see why not - that would surely be a fairly easy thing to do.
- It is also worth noting that the person who led this study was a medical student at the time it was
undertaken. Most medical students have very little training in scientific method or statistics, so although
they apparently had someone with such training review it later, I think the design of the study is clearly
deficient.
- One thing they could possibly have done to show that their selection of subjects, and of the control
pairs, was not biased is to check whether some other variable was balanced between the two groups. For
example, they could have looked at pre-existing medical conditions in the children. Again, this is a
standard element of good "experimental design" that was not done.
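One simple way to run such a balance check is a two-proportion z-test on a baseline variable between the case and control groups. A small Python sketch, using entirely hypothetical counts (a pre-existing condition in 12 of 100 cases versus 15 of 100 controls) and the usual normal approximation:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for equality of two proportions (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                        # pooled proportion
    se = math.sqrt(p * (1 - p) * (1/n1 + 1/n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    pval = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, pval

# Hypothetical: condition present in 12/100 cases vs 15/100 controls
z, pval = two_proportion_z(12, 100, 15, 100)
print(f"z = {z:.2f}, p = {pval:.2f}")
# A large p-value is consistent with the groups being balanced on this
# variable; a small one would suggest the selection itself was biased.
```

A battery of such checks on variables that should be unrelated to the exposure is a cheap safeguard that the study, as reported, does not appear to include.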
In summary, this is "bad" science, done by an amateur who was motivated to produce a positive result -
there would be no prize for producing a negative answer in this case!
You might also suggest that in future they introduce proper "peer review" of articles before publication,
as most good scientific journals do. At least I assume they don't at present, because anyone with some
basic scientific training would see that this article's treatment of "confidence limits" and basic statistics
is neither transparent nor convincing.
* * *
- The experiment is poorly designed, with very small sample sizes and several possibilities for bias.
- The statistical evidence is weak: the confidence intervals are wide and the raw data are not given, so
the claim that "....speed humps were associated with significantly lower odds...." cannot be independently
verified.
The original article was published in the April 2004 issue of the Journal of American
Public Health (JAPH). The above comments listed were submitted to RADA on
April 5, 2004 (List 1), April 6, 2004 (List 2) and April 7, 2004 (List 3).
Rada Project, 2004
May 24, 2004 (Version 1)