Yesterday, I commented on a New York Times story that appeared Wednesday, June 2, attacking the Dartmouth research. The work that Dartmouth has done over the past two decades suggests that hospitals in some parts of the country are over-treating patients. Overtreatment means that patients who didn’t need to be in the hospital in the first place are exposed to the side effects of treatment as well as gruesome hospital-acquired infections, medication mix-ups and a host of other medical errors. Unnecessary care thus puts patients at risk while helping to drive health care bills heavenward, which suggests that we could rein in Medicare spending by squeezing some of that hazardous waste out of the system. But according to the Times: “Data [from Dartmouth] Used to Justify Health Savings Effort Is Sometimes Shaky.”
In Part 1 of this post I discussed how, according to two of the Times’ own sources, the reporters misrepresented what they said. Both Harvard economist David Cutler and Yale’s Dr. Harlan M. Krumholz complained that the story made it seem that they are critics of the research, when in fact they agree with Dartmouth on the basic message of the data, and see the work as, in Krumholz’s words, “pivotal to moving us forward . . . we all agree that there is lots of waste and it is unevenly distributed across the country.”
A third source in Washington D.C. who talked to the Times reporters confided that they seemed to have a clear agenda: “to take down Dartmouth.”
Today, I received evidence from yet another unhappy source—the Wisconsin Collaborative for Healthcare Quality, a voluntary consortium of organizations working to improve the quality and cost-effectiveness of healthcare in Wisconsin. Chris Queram, the Collaborative’s president, and Jack Bowhan, who guides the development of value metrics for the group, report that they tried to caution New York Times reporter Gardiner Harris that he was misusing their data, “comparing apples to grapefruits,” and “jumping to a conclusion that you just can’t make.” Harris ignored their warnings.
As proof, they produced a series of e-mails that they sent to Harris, and with their permission, I’m quoting from those messages. But first, an excerpt from the Times’ story talking about the Collaborative’s data.
“Last June, as Mr. Obama campaigned for his health care overhaul, he visited Green Bay, Wis., praising the city for getting “more quality out of fewer health care dollars than many other communities.”
“Two of Green Bay’s hospitals, Bellin and St. Mary’s Hospital Medical Center, rank fourth and 11th within Wisconsin on the Dartmouth list.
“But again, Dartmouth ranks hospitals only by costs and number of treatments and procedures. A different picture emerges from work done by the Wisconsin Collaborative for Healthcare Quality, a voluntary group of health care organizations that uses both price and quality of care measures. In an analysis of heart attack care, for example, it ranks Bellin second, and St. Mary’s 15th, among the 22 hospitals in the state.
“And a Medicare ranking based on its own data that shows how many people die after treatment for certain conditions — statistics that exclude costs entirely — puts Bellin fifth, but drops St. Mary’s to second-to-last: 67th of the 68 hospitals statewide that were measured by both Dartmouth and Medicare.
“Do the Green Bay hospitals favored by Dartmouth really offer better care? Maybe not.”
The E-mail Trail
Here is the statement from the Collaborative that I received today:
The Wisconsin Collaborative for Healthcare Quality was contacted by Gardiner Harris (GH) on March 30th seeking information about the comparison of Dartmouth Atlas rankings versus WCHQ quality rankings. That is: “do the same systems that show positive performance on Dartmouth data show the same performance on your data? Put another way, are these systems as good as Dartmouth says they are using other measures?”
There are two series of email exchanges with GH [Gardiner Harris]: one set on March 30-31st and another on April 19-20th. Throughout the emails, GH was cautioned not to use WCHQ’s data and methodology for comparison to ranking results generated by the Dartmouth Atlas. Examples of those cautions from the Collaborative’s Chris Queram (CQ) and Jack Bowhan (JB) include:
CQ, 3/31/10, 10:47AM – “our data relate to physician groups, Dartmouth's relates to hospitals. And, the conditions being measured are different in many cases. So, the comparisons are very limited and should not be used to cast aspersions on the Dartmouth data.”
JB, 3/31/10, 1:10 PM – “There really is no way to reasonably compare the WCHQ metrics against Dartmouth and its process.”
CQ, 3/31/10, 1:53 PM – “I think you are raising an important issue, but want to be sure not to cast the [Dartmouth] Atlas in an unfair light.”
JB, 3/31/10, 5:33 PM – “I think you are jumping to a conclusion you cannot make. Here is why – I don’t think you can say Dartmouth is, or is not, a good proxy when you are trying to compare apples to grapefruits. We need an apples-to-apples comparison…”
CQ, 3/31/10, 6:59 PM – “It does make me nervous that decades worth of research by the team at Dartmouth might be impugned by the differences show[n] in our hospital quadrants. Our methods, while sufficient to enable our members to feel comfortable reporting this data and using it to guide internal improvement efforts, ha[ve] not been the subject of or withstood rigorous scientific evaluation to confirm the association / correlation between the data reported on the two ax[e]s.”