
Perspectives in Biology and Medicine 46.1 (2003) 21-23




The Quality of Quality Measurements
Response to Chin and Muramatsu

Michael J. Koetting


MARSHALL CHIN AND NAOKO MURAMATSU distinguish between the application of quality measures for accountability and for quality improvement purposes. While there are many examples of quality measures that have resulted in quality improvement, it may be difficult to develop quality measures that are linked to accountability. I will briefly outline three reasons for this difficulty and will then suggest some ways in which quality assessment could be applied to develop better health policies.

Why Quality Measures Are Difficult to Link to Accountability

Unit of Analysis

Is the unit of analysis the physician, the hospital, or the health plan—which is ostensibly responsible for the overall management of health services for a given beneficiary? And how does one take into account the elements of before- and after-care, the total "episode of care"? Each part of the health care process has a different clinical and functional purpose, which may or may not integrate effectively with the needs of the next step in the chain. Each element in the chain may have different cultures, different ways of describing the service being offered, and, indeed, different information systems that barely communicate with each other.

Mathematics of Analysis

This is both a special case of the unit of analysis problem and a problem in its own right. Many of the issues raised about defining the unit of analysis could be resolved if there were only enough degrees of freedom. Unfortunately, the number of cases required to resolve these issues is so large as to preclude their use for holding any specific business unit accountable. For instance, I once calculated that to get sufficient power to detect a significant difference in mortality rates in orthopedics would require a cell size of 13,000 cases. No physician—indeed, no hospital—can ever hope to actually generate that number of cases. Furthermore, one can argue that mortality is an inappropriate quality measure for a field of care such as orthopedics. While using some less dramatic measure of surgical complication would dramatically reduce the number of cases required to get power, it would create new problems of data collection and standardization. In general, it is just too hard to make the arithmetic work for most life-size accountability units.
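The arithmetic behind this point can be sketched with the standard two-proportion sample-size formula. The rates below are hypothetical, chosen only for illustration — they are not the author's actual figures, and the resulting number depends heavily on the assumed baseline mortality and the difference one wants to detect:

```python
# Illustrative sample-size calculation for detecting a difference between
# two proportions (e.g., mortality rates) with a two-sided z-test.
# Uses only the Python standard library (statistics.NormalDist, Python 3.8+).
from math import ceil
from statistics import NormalDist

def required_cell_size(p1, p2, alpha=0.05, power=0.80):
    """Cases needed per group to detect p1 vs. p2 at the given alpha/power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for the test
    z_beta = z.inv_cdf(power)            # quantile delivering the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical rates: a 1.0% baseline mortality versus an elevated 1.5%.
# Small absolute differences in rare events drive the required n into the
# thousands per group -- the core of the author's point.
n = required_cell_size(0.010, 0.015)
```

With these assumed inputs the formula demands several thousand cases per group, far beyond what any single physician — and most hospitals — could generate, which is consistent with the order of magnitude the author reports.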

Difficulty of Aggregation

The issue here is whether, even if it were possible to get it right at some "atomistic" level, one can aggregate these measures. Establishing some degree of quality in diabetes scarcely guarantees quality in cardiology. Indeed, on further inspection, it turns out that in academic medical centers, cardiology isn't even a single entity but a collection of programs (electrophysiology, heart failure, interventional cardiology), each of which has some independent quality axes. How then, for instance, does a hospital board of trustees know if overall quality is getting better or worse? Despite many years of trying, I have not been able to come up with a real-world way of grappling with this issue without presenting a smorgasbord of measures for a wide range of programs—which causes the trustees to instantly lose interest in quality of care in favor of a financial statement with a crisp bottom line they can understand.

Using Quality Measures to Develop Health Policy

If quality measures can't be used to establish accountability, what should be the policy response? I would suggest four things.

First, I think the Joint Commission on Accreditation of Healthcare Organizations has struck on at least one essentially right answer: worry less about finding quality measures for establishing accountability, and more about whether an organization has a system and culture for using quality measures in those situations for which quality measures work. When that assessment is supplemented with analysis of where things clearly go wrong, one can make meaningful—if broad—distinctions among programs or institutions.

As a corollary, invest more in structure and process improvements. The Leapfrog Group's idea that...


Additional Information

ISSN: 1529-8795 (online); 0031-5982 (print)
Pages: pp. 21-23
Launched on MUSE: 2003-02-11
Open Access: No