
IF THERE IS A BRIDGE from research to practice, I've been crossing against the traffic. In 2000, I moved from the increasingly tough world of health care management practice (thinking I'd done my share and it was someone else's turn) and took up teaching and research (hoping I'd have time to reflect on and learn more about the intractable problems I'd been battling). Unfortunately I failed to consider, even though I "knew", that universities are also operating in an increasingly tough world, and academic work has also been intensified.
But my crossing did coincide with an emerging focus on health services research, which "examines how a variety of factors - from financing systems to medical technologies to personal behaviors - affect health care costs, quality, and access".1 It is a broad field, spanning research on the macro policy settings (how to fund health care, who gets access) through to the micro level of health care practice (for example, how clinicians might work with patients who have chronic conditions as partners in the management of their care).
Often, the meso level (where health care management lives) takes a pretty low profile. It has to be said that every sector of this still very small field, from the health economists to the clinical researchers, would also argue that their areas of interest are neglected. But health care management doesn't really feature in the definition above, and neither does it loom large in the literature emerging from health services research. This may be an obvious point - management of health services is, after all, a means to an end, and should be judged in terms of its effectiveness in supporting the work of clinicians and other health care practitioners. But then, you could say the same about policy. As many a frustrated clinician has reminded me over the years, good policy and good management are prerequisites for good clinical care.
Managers share with clinicians an obligation to ensure that their practice is oriented to achieving the best possible results for patients - a serious undertaking which needs to be based on evidence. We have seen that clinicians have no monopoly on failing to implement evidence. For example, we know that decision making about patient care is more likely to be effective when it is conducted as close as possible to the clinical interaction by practitioners who have adequate skills and who accept authority and accountability. Yet managers have been slow to operationalise this knowledge in organisational structures and processes. Thus the research-to-practice transfer mystery requires attention from managers as much as from clinicians.
So, why aren't the issues of health care management (as distinct from policy) more prominent in health services research? Perhaps one of the problems is that health management is understood to be based on a body of knowledge arising from general management research, and doesn't really need to reinvent the wheel. Again, this is partly true - perhaps we learn too little from general management theory. However, if the attrition rate of general managers brought into the US health system in the 1990s is anything to go by, the general management body of knowledge may be necessary but not sufficient for practising health care managers.
I'd like to explore the potential future development of more and better research about health management through briefly considering the problem of making research relevant to practice, and then advocating for the development of the field known as implementation research.
Can we do industry-led research?
As many who have pondered the challenge of research transfer have pointed out, finding ways to ensure that research is relevant to industry is not a simple proposition (see Brehaut and Juzwishin2 for an overview of some of this work). From the point of view of policy makers and practitioners, research is not a useful method for resolving many of their problems. Or worse, research is seen as a product that should be able to be picked off the shelf when needed.3
The methods adopted by the Cooperative Research Centre for Aboriginal Health may be useful. The CRC is a broad grouping of "industry partners" (ie, providers of Aboriginal health care and their funding bodies) and universities. It was something of a revelation to me, working within this grouping, to realise the implications of the CRC goal that industry, in this case the Aboriginal health care sector, should largely set the research agenda. Perhaps my experience at Flinders Medical Centre, where the medical leadership positions were uniformly filled by grey-haired professors, influences my perspective. But it seemed to me that a real reversal was under way: in the past, it was accepted that researchers, rather than industry, determined what should be researched and how.
It came as no surprise that this reversal had implications for how research funding decisions would be made, and that it would be difficult to make it work. The CRC has struggled to find a way to operationalise the goal of industry-led research. It has developed a process where the Board endorses priority areas from among those put forward by the industry partners, and researchers are then invited to form teams to design and conduct research projects in response. The first run of this process threw up many challenges, but the lessons learned are making it work better in subsequent rounds.
Health authorities around the country are also experimenting with a broad range of ways to ensure that research effort is directly relevant to their dilemmas. The initial focus on linkages is probably necessary, but practical changes in the way business is done, on both sides of the bridge, are also needed.
Implementation research is a priority
The call by clinicians for evidence-based management is understandable, particularly when managers and health authorities are critical of failures of evidence uptake in clinical practice. But the questions are very different, and neither the paradigms nor the methods of clinical research are directly transferable to management research.
I have been ruminating for a few years now on the question of how to generate answers to the questions that keep managers awake at night, and I think there are some clear priorities. Pressman and Wildavsky, in a book with the longest title ever seen in English - Implementation: How Great Expectations in Washington Are Dashed in Oakland: Or Why It's Amazing that Federal Programs Work at All, This Being a Saga of the Economic Development Administration as Told by Two Sympathetic Observers who Seek to Build Morals on a Foundation of Ruined Hopes4 - demonstrate convincingly the impotence that results from a lack of shared commitment to program goals, values or incentives when the efforts of many players are required. Given that most of the difficult problems in health care management, and the innovation opportunities they engender, depend on multiple players with divergent goals, the implications are highly relevant for health care management.
Pressman and Wildavsky laid foundations for the field now known as implementation research, a label which is variously defined. From a more clinical perspective, it is seen as "the scientific study of methods to promote the uptake of research findings, and hence to reduce inappropriate care".5 In social policy, it is described more broadly as "research that focuses on the question 'What is happening?' in the design, implementation, administration, operation, services and outcomes of social programs."6 Implementation research is a subset of evaluation research, focused on the process not simply the impact, and seeks to answer questions about what is happening, whether it is what was expected, and, importantly, why things are happening as they are. It may use both quantitative and qualitative methods, but is generally directed towards problems that do not have a simple, sharply focused or numerical answer.
Implementation research addresses areas that are the domain of management, and may be the best way of approaching problems at the meso level. The frequency with which imported good ideas fail is alone sufficient reason to focus on the question of implementation. It seems that there is no method of improving the effectiveness of health care delivery that is so good that it will work wherever it is tried. And conversely, it sometimes seems that almost any tool will work in some particularly blessed teams. When I was a manager, I spent a fair bit of time grappling with the question "What is happening?" when innovations seemed to be making a difference, or even more urgently, when they failed. Anything that throws light on the question of why a method that works in one setting falls short when tried in another would be welcomed. A lot is already known about what makes the difference (see, for example, National Institute of Clinical Studies7), but the need for continuing enquiry as to how managers and organisations can improve their effectiveness remains strong, as does the need for inclusion of such enquiry in evaluations of clinical practice change.
Much enquiry that can be labelled implementation research in health is necessarily small-scale and localised, because of the sporadic, institution-specific way things are often done in health care (especially innovation). Implementation research is not attempting to uncover the laws of nature or of disease processes - rather it seeks to deal with the complex contextual factors that influence the ways things are done by human actors in sociotechnical systems. We need to accept that the appropriate methods of enquiry are different, that rigour in this field looks different, and that the meaning of life will never be 42.
Conclusion
Over the years, I have reviewed several papers written by doctors raging against the lack of evidence to support policy and management decisions. Now, more effort is going into research transfer, and that effort has identified some very good reasons why a focus at the meso level may be the best way to generate knowledge for improving management, and feeding back to policy. And if it seems a bit forlorn to suggest that evidence for decision making needs to be generated post hoc - that is, in the implementation phase - one could perhaps be consoled by the thought that decision making is an iterative, never-ending process. Evidence about the downsides of last year's management decisions just might help shape next year's.
One of the things I have really enjoyed as an editor of Australian Health Review is reading the significant proportion of the work it publishes that is located at what I have called the meso level, and is about implementation. The Journal's authors are a broad-based group of managers, practitioners and researchers, and they contribute much of interest for those who want to know what might work in their health care setting, and under what circumstances. For me, this is the Journal's real contribution to health care in our region, and to the difficult work its managers face. Co-editing the Journal has been a great experience. I have learned a lot from my co-editor Dr Sandra Leggat, and from AHR's authors, and would like to thank them for their company on the crossing.
References
1 AcademyHealth. Placement, coordination, and funding of health services research within the Federal Government. Washington: AcademyHealth, 2005. Available at: http://www.chsr.org/placementreport.pdf (accessed Sep 2006).
2 Brehaut JD, Juzwishin D. Bridging the gap: the use of research evidence in policy development, HTA Initiative #18. Edmonton: Alberta Heritage Foundation for Medical Research, 2005.
3 Lomas J. Improving research dissemination and uptake in the health sector: beyond the sound of one hand clapping. Hamilton: Centre for Health Economics and Policy Analysis.
4 Pressman JL, Wildavsky A. Implementation. 2nd edition. Berkeley: University of California Press, 1979.
5 Walker AE, Grimshaw J, Johnston M, et al. PRIME - PRocess modelling in ImpleMEntation research: selecting a theoretical basis for interventions to change clinical practice. BMC Health Services Research 2003; 3:22 [online journal]. Available at: http://www.biomedcentral.com/1472-6963/3/22 (accessed Sep 2006).
6 Werner A. A guide to implementation research. Washington: Urban Institute Press, 2004: 1.
7 National Institute of Clinical Studies. Factors supporting high performance in health care organisations [Literature Review Series]. Prepared by the health management group at La Trobe University. Melbourne: NICS, 2003. Available at: http://www.nicsl.com.au/asp/index.asp?page=materials/materials_year_article&cid=5212&id=415 (accessed Sep 2006).
Judith Dwyer, Director, Health Service Management Development Unit
Flinders University, Adelaide, SA.
Correspondence: Prof Judith Dwyer, Mark Oliphant Bldg, Level 2B Flinders University, Bedford Park, SA 5042. judith.dwyer@flinders.edu.au