Do 'Five-Star' or 'Good' Ratings Really Mean It's a Safer Ride?
Study Backs Efforts to Quantify Vehicle Safety
Wall Street Journal - January 29, 2007
Part of shopping for a car is, or should be, checking out the crash-test scores for the vehicle you have in mind. But do those government "star" ratings or the Insurance Institute for Highway Safety's "poor" to "good" rankings indicate anything meaningful about how you'll fare in a real-world collision? A new study suggests the answer is yes, but only for cars, not sport-utility vehicles and pickup trucks.
David Harless and George Hoffer, economics professors at the Virginia Commonwealth University School of Business, took a look at the relationship between the crash-test scores of various vehicles going back more than 20 years and the government's records of fatal accidents involving those models.
An important choice Messrs. Harless and Hoffer made when analyzing this data was to focus on what happened to the fatality rates for a specific nameplate as the model was redesigned and retested over time, according to a draft of the soon-to-be-published study.
In other words, what happened to the fatality rates for a Cadillac Deville as that car evolved from the 1988 model that got a "one-star" rating in National Highway Traffic Safety Administration crash tests to the 1999 model that achieved a four-star rating? This method attempts to address one of the persistent problems experts cite in evaluating real-world crash rates: A lot depends on the driver.
A car targeted at young drivers or hot rodders tends to have higher fatality rates than a minivan -- regardless of the lab crash-test ratings. The VCU researchers reasoned that a Cadillac Deville would be targeted at roughly the same audience from one generation to the next -- allowing for a clearer view of whether the car's safety attributes made a difference in driver crash survival.
Sifted in this way, and controlling for various other factors, here is what the data Messrs. Harless and Hoffer collected show: A passenger car with a one-star NHTSA crash rating has an 18% higher driver death rate in all types of crashes than the same model redesigned to achieve a five-star rating. Cars with two-star NHTSA ratings actually fared worse, with driver death rates 36% higher than the five-star level, while four-star cars had 7% higher death rates.
The VCU researchers looked just at head-on collisions and found a similar pattern. Cars with one- and two-star ratings had higher death rates than cars with five-star ratings.
Using the Insurance Institute for Highway Safety's crash-test results for passenger cars produces a roughly similar result. Cars rated poor by the IIHS have a 43% higher death rate than cars rated good, the VCU researchers found.
But when Messrs. Harless and Hoffer tried their analysis on sport-utility vehicles and pickups, they came up empty. Statistically, there's no meaningful difference in the death rates for a light truck with a one-star rating or a five-star rating. They got the same random results using the IIHS crash-test data. Mr. Harless, in an interview, says he doesn't know why this is so.
Representatives of NHTSA and the IIHS say they haven't seen the final study by Messrs. Harless and Hoffer. Other studies, including a study of Insurance Institute data by IIHS researcher Charles M. Farmer, have pointed to the basic conclusion that a vehicle with a good or five-star score does a better job in the real world of protecting drivers and occupants. But some studies, done using different data or methods, have failed to find a strong correlation between test results and safety on the roads.
To a lay person, it stands to reason that a high crash-test score means a safer car. But until relatively recently, car makers (especially those whose vehicles scored poorly) would sometimes complain that staged crash tests didn't do a good job of predicting real-world safety.
Maybe the argument about whether crash tests are meaningful isn't over. But it's beginning to look as though it is. NHTSA (www.nhtsa.dot.gov) earlier in January announced it plans to undertake a wide-ranging review of its current New Car Assessment Program crash-test methods, known to the industry by its acronym, NCAP. Now that 95% of new vehicles are scoring five stars on the current tests, crash tests could suffer from the "Lake Wobegon" effect -- all the children will be above average. That isn't useful for consumers trying to decide what's really best. The government's stated aim is to make new tests for rollover, head-on collisions and side impacts that are more rigorous than the old ones.
The Department of Transportation is seeking comments on how to change its tests, and the process will certainly involve an enormous amount of technical research and technical arguments. One of the more difficult issues will be squaring the Bush administration's aim of boosting fuel economy -- which generally means making vehicles lighter -- with the goal of improving crashworthiness, which in the past has usually meant making vehicles heavier.
The VCU professors' study represents one more piece of evidence that the efforts by the government and the Insurance Institute to shine a bright light on car makers whose vehicles did poorly in simulated tests have probably saved lives. Consumers, meanwhile, have learned to demand five-star or "good" rated cars. It turns out their instincts were right.
Ditto.
If you base your decision on crashing into things, I'd say the ratings are worth your time. If you base your decision on a vehicle's ability to avoid crashing into things, you'd have a hard time convincing me a Dodge Caravan can be flung around freeway debris, or handle emergency maneuvers better than, say, a 3-star '85 5.0 or a barely go-kart sized Miata.
I'd certainly rather avoid an accident than just "survive" one, but I think we all know that not every accident can be avoided, no matter how agile the vehicle is and no matter how skilled the driver happens to be.
It's also true that when you talk about avoiding, you do bring driver ability into the picture, and I think we all know that most drivers simply aren't as good as they think they are.
So... it seems to me that crashworthiness does have an important function for all of us, regardless of what kind of vehicle we happen to drive and what our individual skill level happens to be.
I wouldn't buy a vehicle simply because of a good crash rating, but other things being equal, I'd rather have one with a good rating than one with a bad one.

I'd rather have all the above, although I don't want an electronic stability program unless I have the option of turning it off, such as when I'm running in Solo2.
nah, just a hypothetical.
In reality, it will become harder and harder to find cars without all three technologies in the very near future.
Insurance statistics seem to prove the value of stability control, though most drivers still have no idea what ABS actually does, much less how to benefit from it.