Abstracts
Abstract
Evidence-based practice, first developed in medicine as a response to the exponential growth in the knowledge required to exercise clinical judgment, is now seen as a way to rationalize professional practice in the fields of health and social services. However, correctly assessing its contributions requires understanding its limitations, which are the result of a series of methodological reductions. These reductions are analyzed in relation to the more or less voluntary introduction of the evidence-based practice perspective into a new philosophy of public service management.
Keywords:
- Evidence-based management,
- evidence-based practice,
- professions
Résumé
En provenance d’un monde médical en butte à une croissance exponentielle des savoirs nécessaires au jugement clinique, la perspective des pratiques fondées sur les résultats probants, mieux connue sous le label Evidence Based Practice, se présente comme une solution pour rationaliser les pratiques professionnelles dans le secteur de la santé et des services sociaux. Cette perspective comporte cependant quelques limites découlant notamment d’une série de réductionnismes méthodologiques qu’il est nécessaire d’élucider pour bien identifier ce qu’elle peut apporter. Ces réductionnismes seront par la suite analysés au regard de l’inscription plus ou moins volontaire de la perspective Evidence Based Practice dans une nouvelle philosophie de gestion des services publics.
Mots-clés :
- Gestion par données probantes,
- pratiques fondées sur des données probantes,
- professions
Resumen
Proveniente de un universo médico confrontado con un crecimiento exponencial de conocimientos necesarios al juicio clínico, la perspectiva de las prácticas basadas en la evidencia (Evidence Based Practice), se presenta como una solución para racionalizar las prácticas profesionales en el sector de la salud y los servicios sociales. Esta perspectiva conlleva sin embargo algunos límites, derivados principalmente de una serie de reduccionismos metodológicos que es necesario elucidar para identificar mejor lo que puede aportar. Dichos reduccionismos serán analizados a continuación en relación con la inserción más o menos voluntaria de la perspectiva Evidence Based Practice en una nueva filosofía de gestión de los servicios públicos.
Palabras clave:
- Gestión basada en la evidencia,
- prácticas basadas en la evidencia,
- profesiones
Body of the article
Introduction[1]
Since the early 1990s, efforts to introduce protocols into professional practices have been supported by reference to the evidence-based approach (Couturier and Carrier, 2005). This approach has led to a synthesis of current knowledge in intervention protocols, and its development can be seen directly in the increasing number of clinical guides and guidelines available to practitioners and indirectly in developments in technologies and tools – particularly assessment tools and systems whose foundation is more or less explicitly aligned with evidence-based practice. This effort to introduce protocols has both virtues (Wennberg, 2004) and risks (Webb, 2002). To make it possible to evaluate these effects, we look first at the epistemological limits of evidence-based practice and then evaluate it in terms of the transformation it has led to in the way public services are managed. We are not suggesting that protocols developed on the basis of evidence-based research should be abandoned but rather that it is important to understand their limitations in order to respond to criticisms of the approach.
Evidence-based or reductive results
The initial intention of those promoting an evidence-based approach was to establish rigorous mechanisms for the collection and rapid dissemination of research results as a way to support clinical practice (Davidoff et al., 1995). Given the rapid growth in knowledge, protocols were developed as a way to help overwhelmed practitioners deal with crises more efficiently, an approach that has a number of virtues. How can we ask professionals to meet the ethical and practical requirement that they take into account the latest knowledge in their fields when the databases containing such knowledge have become too large for even the most enthusiastic to master? The evidence-based perspective was developed as a way to support clinical practice by providing rigorously and continuously aggregated knowledge using information processing and standardized results.
Sackett and his colleagues have defined evidence-based practice in medicine as "the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients" (Sackett et al., 1996: 71). This perspective was first conceptualized and promoted in Anglo-Saxon medicine in the early 1970s, particularly by Archibald Cochrane, who hoped to find a way to standardize medical practices that did not align with current scientific knowledge (Axelsson, 1998). The concept became increasingly popular after 1992, when it began to be mentioned explicitly in medical databases, and vast networks of researchers began to gather data to be used in meta-analyses (e.g., the Cochrane Collaboration[2]). The methodology developed for such analyses involves aggregating comparable research results to increase their statistical power, making it possible to settle debates in a particular domain, at least temporarily. For example, there would be less debate over the effectiveness of some drug-based treatments for aggression if clinical decisions were based on evidence-based results (Huband et al., 2008). According to Popper’s principle, which underlies this perspective, a scientific theory or statement remains valid as long as it has not been invalidated by new research. The evidence-based perspective is thus a way to continually produce and revise guidelines for professional practice. It is not intended to freeze practices through standardization, although the relationship it creates between knowledge and practice may occasionally result in increased rigidity. In a context where evidence-based results are increasingly easy to access, and where complaint mechanisms (administrative or judicial) are becoming more numerous, following practices prescribed in a guide becomes increasingly inviting, even if the guide explicitly states that practitioners have the final say in clinical decisions.
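As a minimal illustration of the aggregation logic (not drawn from the article), a fixed-effect meta-analysis pools k comparable effect estimates by inverse-variance weighting:

$$\hat{\theta}_{\text{pooled}} = \frac{\sum_{i=1}^{k} w_i\,\hat{\theta}_i}{\sum_{i=1}^{k} w_i}, \qquad w_i = \frac{1}{\operatorname{Var}(\hat{\theta}_i)}, \qquad \operatorname{Var}(\hat{\theta}_{\text{pooled}}) = \frac{1}{\sum_{i=1}^{k} w_i}.$$

Because the variance of the pooled estimate is smaller than that of any single study, the aggregate gains the statistical power that individual trials lack, which is what allows a meta-analysis to settle a debate, at least provisionally.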
Two central methodological principles determine whether knowledge aggregation will be successful. The first is that the studies being considered must have compatible results. In practice, there has been a strong preference for studies whose results are derived from randomized clinical trials (Trinder and Reynolds, 2000), which has led to the exclusion of research involving other methodologies. Even randomized clinical trials thought to have errors may be excluded, despite having been evaluated previously by sponsors, ethics committees, scholarly reviewers, etc. However, decisions made by external observers about the quality of complex research projects risk being affected by misinterpretations or even conflicts of interest.
Once the meta-analyzers have separated the wheat from the chaff, the second methodological principle establishes a confidence threshold, which varies according to the particular area being considered or the goals of those directing the research. Even in the most rigorous approaches[3], it is rare for the threshold to be greater than 90%, meaning that, even in a best-case scenario, the confidence level eliminates 10% of the data as "aberrant" – roughly the share of the evidence[4] that supported the Copernican view of heliocentrism, a view dismissed in favor of the one backed by the other 90% of the evidence in the field, produced by its religiously orthodox opponents. More seriously, the logic of cumulative knowledge that underlies evidence-based practice, and that is supported by a social context favorable to the resulting prescriptive guidelines, conflicts with one of the fundamental principles of research – methodical doubt – which requires taking into account even information whose importance is not immediately apparent. There is thus an epistemological tension between the principle of methodical doubt and knowledge accumulation.
Given that, it is clear that while the processes used in creating aggregates are usually of high quality, they do not involve all forms of knowledge or all results, and the reduction process introduces a series of epistemological issues. For instance, the hierarchy used in determining the quality of evidence reveals the value structure that anchors the epistemology of evidence-based practice. While theorists officially acknowledge the epistemological value of experiential knowledge (Bensing, 2000), seeing it, in principle, as essential to determining correct professional practices, it is recognized as evidence only in the earliest stages (Couturier and Carrier, 2004). Experience-based data collected by clinicians are considered to be empirically true only if they are agreed on through consensus (Castel and Merle, 2002). The resulting clinical consensus then needs to be validated by recognized experts using the Delphi method[5] and a pre-established confidence target. For example, 90% of experts have to recognize the value of a particular consensus for it to be acknowledged as evidence based. The validity of this collected information thus rests on agreement among experts as well as a somewhat naive faith that experts are capable of determining which contingencies can be overlooked.[6] The reasoning behind evidence-based practice leads to the belief that expert consensus will (or will not) be validated through randomized clinical trials, the ultimate source of proof. Knowledge aggregation thus presupposes that there is a hierarchy of kinds of evidence (Mullen, 2002), with objective knowledge at the top of the scale (Tonelli, 1998). Given this, it cannot be argued that evidence-based practice rejects out of hand other types of knowledge, such as those originating from qualitative research or clinical experience. Instead, it deals with this sort of knowledge differently, classifying the various types of knowledge within a value structure that makes it clear to readers, and particularly to decision makers, that there is "little evidence" for the value of knowledge derived from experience and qualitative work and "much evidence" for the value of research from randomized clinical trials. This classification amounts to a de facto marginalization of forms of knowledge recognized by epistemologists (Granger, 2001), by classical theories of knowledge (Charlot, 1997; Vergnaud, 2001), and by the emerging field of professional didactics (Pastré, 2005).
Evidence-based practice is anchored in Popper’s epistemology, in which knowledge is considered to be true until falsified; experience-based knowledge is relegated to being an unacknowledged part of any clinical act (Welsh and Lyons, 2001). In a number of "client-oriented" professions, the force of the idea of evidence-based practice leads to an epistemological dissonance between what is promoted as "good practice" and what clinical activities, which rest on a constructivist foundation, reveal about the axiological foundations of an individual’s practice, and even identity (Bensing, 2000; Webb, 2001). From this perspective, evidence-based practice is more than a method whose only purpose is to provide the overwhelmed practitioner with an easily understood way to deal with a large and entangled set of research results. It is also an attempt to impose a more or less totalitarian epistemological position (Holmes et al., 2006). This insight follows the canonical work of Freidson (1984), who demonstrated that doctors were able to define themselves as a profession by convincing closely related professionals of the truth of a system of interprofessional relations in the social and health sectors. Accepting the truth of this system also leads those in medicine to adopt the same epistemological position. The perspective discussed here thus subjects doctors to a twofold control: first internally, through the control one ideological faction in the medical field exercises over the others (Bensing, 2000), and second externally, by the state in its attempt to rebalance its own powers in relation to what it sees as an overly powerful medical field (Harrison, 1998).
While there are clearly internal debates among doctors, such debates are even more visible among those in other people-oriented professions, such as social work or criminology. Internal battles over epistemology in these professions have increased following the expansion of evidence-based practice, which has been seen as an opportunity to reorient professions according to the tenets of medically inspired scientific grandeur (Sheldon, 2001). This has led to a militant attitude toward the discretionary power that is seen as inherent in the unwritten and dubious nature of overly constructivist practices (Gambrill, 1999; Proctor, 2002; Wennberg, 2004). Increasing the scientific nature of these professions is a response, according to proponents of evidence-based practice, to internal and external skepticism about their effectiveness and scientific basis, and therefore their professionalism (Sheldon & Chilvers, 2000). For others, it is a fundamental epistemological threat that undermines the very foundations of their profession by imposing a medical perspective (Webb, 2001). In the epistemological debate over monopolization of the identity of a professional group, criminology has demonstrated one way to deal with such problems, with its marriage of positivism (apparent in the importance it gives to criminal etiology) and constructivism (its position of normative relativism) (Debuyst et al., 1998).
The evidence-based practice perspective has introduced three forms of reductionism. The first is epistemological reductionism – the exclusion of other forms of knowledge, other methods, and other epistemologies, which are considered to be at best "soft" professional practices, primitive forms of evidence that a good researcher will know how to domesticate or civilize. The second is a reductionism of action, which is rendered comatose by being reduced to indicators, variables, and epidemiological categories. Health care that operates within an evidence-based perspective minimizes patient uniqueness by failing to take sufficient account of individual values, needs, and preferences (Bensing, 2000). The patient’s personal and idiosyncratic characteristics, which are not considered to be relevant variables in randomized clinical trials, can sometimes be reduced to the level of "nuisance," to statistical noise that endangers the internal validity of a study (Bensing, 2000). Third is the cultural reductionism that results from the de facto exclusion of the majority of works produced in languages other than English, which are seldom or never indexed in databases, a problem that, unfortunately, cannot be discussed further here.
These epistemological reductionisms shift the locus of truth from its relationship to clinical findings toward a more abstract reality, an aggregation of research results (Timmermans and Kolker, 2004). The resulting practical guides interfere with the therapeutic relationship between professional and client, inducing a normative logic that tells the practitioner, more or less directly, "what ought to be done" (Harrison, 1998: 20). The influence these guides have on technical measures (assessment tools, expert systems, etc.) discreetly supports a change in the way issues are looked at, moving from the clinical to the epidemiological (Harrison, 1998: 21), reflecting a shift in power from practitioners to researchers, to epidemiologists, and, lastly, to public policy decision makers who, depending on their needs, appropriate the research that seems to present the shiniest truth.
The evidence-based practice perspective and combined health and social approaches
While the rationalization of medical practice advocated by Cochrane (Cochrane Collaboration, 2003) was originally intended only to increase the scientific basis of clinical practice, the exceptionally wide audience for his ideas among decision makers and those responsible for creating policy in health and social programs was central to their rapid and wide dissemination. The use of evidence-based practice in policy management came into its own in the United Kingdom under the Labour government, as evidenced by a speech by then-Prime Minister Tony Blair: "What counts is what works" (quoted in Sanderson, 2004: 367). Since then, state actions in the fields of health and social services have had to be justified by reference to scientific proof of their efficacy.
This interpretation of evidence-based practice has led to the extraordinary level of support it has received from managers and funders (Niessen et al., 2000). It made it possible for them to satisfy their incessant desire for quantification by introducing it into practices that had previously been to some degree outside managerial control because they required knowledge acquired by working in the field, were based on clinical judgment, or were protected by professional autonomy. The result of this convergence between the most positivist fringe of the medical field and management was an evidence-based context (Colyer and Kamath, 1999), evidence-based policies (Mullen, 2002), and evidence-based management (Walshe and Rundall, 2001) that demanded evidence of cost-effectiveness (Niessen et al., 2000). In some cases, the outcome was the totalitarian introduction (Holmes et al., 2006) of an evidence-based paradigm (Gill et al., 1996).
This paradigm is reflected in the trend toward "New Public Management" (Merrien, 1999), a neoliberal-inspired movement aimed at ensuring better use of state resources through tighter control over clinical practices. This managerial trend seeks to rationalize organizational structures, make local administrations more accountable for the efficiency of services, and control costs through tight budgeting, while monitoring results through reappraisals, impact measurement, performance contracts, or scorecards.
In countries such as the United States, where private insurance plays an important role in financing services, insurance plan managers see practice guides as tools that make it possible to base decisions about service provision on stable and validated economic criteria. This has led, for example, to some private insurers being willing to fund only one treatment for certain mental disorders (Tanenbaum, 2005: 168). What began as a scientific and clinical concern for increasing the quality of care is gradually shifting toward enabling better control of cost efficiency as understood from the perspective of public affairs management.
Embedding these ideas in public management has led to a reconceptualization of the quality of services and the development of technologies to monitor it, such as quality audits based on indicators taken from well-regarded studies (Power, 1997). Quality is thus understood as the amalgamation of treatment efficiency and user satisfaction. The user is considered to be a consumer (market logic[7]), making it important to monitor customer needs[8] (requests) and the way these are dealt with professionally (responses). The gap between requests and responses then becomes central to assessments of efficiency (Merrien, 1999; Larivière, 2005). Measuring the gap also helps decide how to orient services and, eventually, the mechanisms used to ration supply or demand. Historically, such rationing has been based on hospital waiting lists and the clinical judgment of practitioners (Harrison, 1998).
Evidence-based practice gives decision makers the opportunity and the means to control costs, preserve quality as they define it, and justify the rationalization of expenses according to principles such as equity of treatment in the context of resource scarcity (Dopson et al., 2003: 41). This perspective relies on an analysis of risk factors and involves new strategies for organizing services and controlling their delivery that emphasize population groups rather than individual cases: low-risk groups are referred to low-cost services, that is, services provided in the community, notably by families, while high-risk groups are directed to institutions (Webb, 2002). This contradicts the original intent behind evidence-based practice, which was to standardize practices rather than regulate them according to economic rationales (Dopson et al., 2003).
Dopson and collaborators (2003) noted that the scientific nature of evidence-based practice appears to make it easier for politicians to make difficult decisions and to assume less responsibility when controlling physician practices. From the point of view of public decision makers, it provides explicit and transparent criteria that legitimize the decision-making process around issues in healthcare. For some, its aura of scientific rationality makes the process seem transparent, apolitical, and rational (Harrison, 1998), while for others it is tainted with neoliberal ideology (Larivière, 2005) and is an attempt to impose a single view of truth (Holmes et al., 2006).
Evidence-based practice thus does more than aggregate "data": it imposes a system that, by defining what is true, attempts to manage risk and uncertainty in healthcare policies (Trinder and Reynolds, 2000: 12). In this sense, for Webb, this perspective is part of a "performance culture" (2001: 60) whose goal is to control quality, optimize efficiency, and reduce the risks in healthcare. This performance culture is part of a paradigm that privileges instrumental rationality, particularly in a context where politicians seek to manage economic and social issues scientifically and apolitically in a way that makes social policies an exercise in control (Sanderson, 2004). Evidence-based practice leads to a shift in where trust is located, privileging public trust, which concerns relationships between individuals and systems, to the detriment of private trust, which concerns relationships between people, in this case between practitioner and client (Harrison, 1998). Trust is transferred from a fallible individual to a supposedly neutral quality-control system (Lohr et al., 1998; Trinder and Reynolds, 2000). This process creates a change in how services are understood – they are no longer seen as "individual acts" but instead as "products of an organized system" (Castel and Merle, 2002: 339).
This convergence between a positivist epistemology and a managerial utilitarianism (Ogien, 1995) has been concretely embodied in a managed-care approach that encourages the use of only those practices whose effectiveness can be demonstrated (Lecomte, 2003: 14), which has sometimes limited the choice of practices (Tanenbaum, 2005). From a legal perspective, stricter legislation concerning professional responsibility has restricted the clinical decisions of practitioners, who must comply with certain standards to avoid legal action (Howard et al., 2003; Raines, 2004). Practical guides are the least restrictive form of managed care, while standardized assessment tools and expert systems are the most restrictive. According to Rosen and his colleagues (2003: 210), practical guides are less useful for directing practice than for identifying lacunae in practitioner knowledge, promoting continuing education, and reducing differences between medical practices while publicly demonstrating the value of professional interventions.
Conclusion
The initial intent to provide practitioners with aggregated knowledge through practical guides, guidelines, or standardized tools has, in addition to its undeniably positive pragmatic effects, led to epistemic effects related to various methodological reductionisms. Among these effects is a partial shift in the decision-making locus of professional action from the clinical relationship to one orchestrated by abstract authorities that combine scientific positivism, the managerial spirit (Ogien, 1995), and the medical “episteme” characteristic of the times. Despite the obvious and strong connections between these three dominant forms of reason, evidence-based practice, which assumes that there will be an almost mechanical causal relationship between the availability of evidence-based results and their integration into practitioners’ decisions – their dissemination will occur "like water flowing through a pipe" (Dawson quoted by Greenhalgh, 2005: 65) – overestimates the ability of such findings to affect practice. We know that knowledge that could affect protocols can, like water, slip through the cracks, and even the most well-designed conduit can become outmoded. The protocol is not the practice, just as the pipe is not the water.

While the perspective behind evidence-based practice can certainly be criticized, particularly given its totalitarian tendencies, it is important to recognize the opportunities it presents. In the coming years, a nuanced reflection on this perspective should emerge. To achieve this, we need a better understanding of how practitioners use evidence-based practices, which involves documenting both the level at which they are implemented and the amount of resistance, in the Foucaultian sense of the term, that they meet. It is this conjunction of integration and resistance that makes the undertaking complex.
Appendices
Notes
[1] Original article published in Criminologie, L’intervention à l’aune des données « probantes », vol. 42, no 1, Spring 2009. Couturier, Y., Gagnon, D., & Carrier, S. (2009). Management des conduites professionnelles par les résultats probants de la recherche. Une analyse critique. Criminologie, 42(1), 185–199. https://doi.org/10.7202/029812ar
[2] The Campbell Collaboration (http://www.campbellcollaboration.org/) and the Center for Evidence Based Social Services (http://www.ripfa.org.uk/aboutus/archive/) do the same for the social sciences.
[3] It is 66% in certain ISO protocols (www.iso.ch/iso/en/stdsdevelopment/whowhenhow/proc/proc.html).
[4] We adopt the francophone sense of the term, in which "evidence" relates to what is accepted as common sense as opposed to scientific proof.
[5] This method is an excellent example of how a strictly qualitative approach can be concealed by a statistical apparatus. Qualitative judgments are transformed into an objective value, the rate of agreement. The notion of the "expert" merits more serious deconstruction than we can undertake here. It suffices to note that the "expert" is a social construct and that attributing expertise includes all the risks involved in the political use of status, here related to scientific position (Bourdieu, 2001).
[6] Work in the sociology of science, however, provides a strong argument against such naiveté (Callon, 1986; Latour, 1988; Knorr-Cetina, 1996; Bourdieu, 2001).
[7] This clientelism has paralleled a reduction in citizen participation in public service governance.
[8] Need is then determined less by what patients say than by etiological categories in accepted studies.
References
- Axelsson, R. (1998). Towards an Evidence Based Health Care Management. International Journal of Health Planning and Management, 13(4), 307-317.
- Bensing J. (2000). Bridging the Gap. The Separate Worlds of Evidence Based Medicine and Patient-Centered Medicine. Patient Education and Counseling, 39(1), 17-25.
- Bourdieu, P. (2001). Science de la science et réflexivité. Paris: Raisons d’agir.
- Callon, M. (1986). Éléments pour une sociologie de la traduction. La domestication des coquilles St-Jacques et des marins-pêcheurs dans la baie de St-Brieuc. L’année sociologique, 36, 169-208.
- Castel, P., & Merle, I. (2002). Quand les normes de pratiques deviennent une ressource pour les médecins. Sociologie du travail, 44, 337-355.
- Charlot, B. (1997). Du rapport au savoir, éléments pour une théorie. Paris: Anthropos.
- Cochrane Collaboration (2003). Cochrane Reviewers’ Handbook 4.1.6. [Online] http://www.cochrane.org/
- Colyer, H., & Kamath, P. (1999). Evidence-Based Practice. A Philosophical and Political Analysis : Some Matters for Consideration by Professional Practitioners. Journal of Advanced Nursing, 29, 188-193.
- Couturier, Y., & Carrier, S. (2004). Pratiques fondées sur les données probantes en travail social : un débat émergeant. Nouvelles Pratiques Sociales, 16(2), 68-79.
- Couturier, Y., & Carrier, S. (2005). Scientificité et logiques de preuve en contexte de pratiques fondées sur les données probantes (Evidence Based Practice). In D. Morin (ed.), Les pratiques professionnelles fondées sur les résultats probants : questionnons à nouveau ce paradigme. Cahiers de l’ACFAS, 101, 9-26.
- Davidoff, F., Haynes, B., Sackett, D., & Smith, R. (1995). Evidence-based Medicine. British Medical Journal, 310, 1085-1086.
- Debuyst, C., Digneffe, F., Labadie, J., & Pires, A. (1998). Histoire des savoirs sur le crime et la peine. Tome II : La rationalité pénale et la naissance de la criminologie. Montréal: Les Presses de l’Université de Montréal, Les Presses de l’Université d’Ottawa, De Boeck Université.
- Dopson, S., Locock, L., Gabbay, J., Ferlie, E., & Fitzgerald, L. (2003). Evidence-based medicine and the implementation gap. Health, 7(3), 311-330.
- Freidson, E. (1984). La profession médicale. Paris: Payot.
- Gambrill, E. (1999). Evidence-Based Practice: An Alternative to Authority-Based Practice. Families in Society, 4, 341-350.
- Gill, P., Dowell, A. C., Neil, R. D., Smith, N., Heywood, P., & Wilson, A. E. (1996). Evidence-based general practice: a retrospective study of interventions in one training practice. British Medical Journal, 312(7034), 819-821.
- Granger, G. (2001). Sciences et réalité. Paris: Odile Jacob.
- Greene, G., Jensen, C., & Jones D. (1996). A Constructivist Perspective on Clinical Social Work Practice with Ethnically Diverse Clients. Social Work, 37(6), 172-180.
- Greenhalgh, T. (2005). Diffusion of innovations in service organizations: A systematic literature review. Oxford: Blackwell publishing.
- Harrison, S. (1998). The politics of evidence-based medicine in the United Kingdom. Policy & Politics, 26(1), 15-31.
- Holmes, D., Murray, S., Perron, A., & Rail, G. (2006). Deconstructing the evidence-based discourse in health sciences: truth, power and fascism. International Journal of Evidence Based Health Care, 4, 180-186.
- Howard, M., McMillen, C., & Pollio, D. (2003). Teaching evidence-based practice: Toward a new paradigm for social work education. Research on Social Work Practice, 13(2), 234-259.
- Huband, N., Ferriter, M., Nathan, R., & Jones, H. (2008). Antiepileptics for aggression and Associated impulsivity (Protocol). Cochrane Database of Systematic Reviews, 4, CD003499.
- Walshe, K., & Rundall, T. G. (2001). Evidence-Based Management: From Theory to Practice in Health Care. Milbank Quarterly, 79(3), 429-457.
- Knorr-Cetina, K. (1996). Le “souci de soi” ou les “tâtonnements”: ethnographie de l’empirie dans deux disciplines scientifiques. Sociologie du travail, 3, 311-330.
- Latour, B. (1988). La vie de laboratoire. La production des faits scientifiques. Paris: La Découverte.
- Larivière, C. (2005). Les risques de la nouvelle gestion publique pour l’intervention sociale. [Online] http://www.rqiiac.qc.ca/pub/lariviere.htm (consulted 7 March 2006).
- Lecomte, Y. (2003). Développer de meilleures pratiques. Santé mentale au Québec, 18(1), 9-36.
- Lohr, K., Eleazer, K., & Mauskopf, J. (1998). Health policy issues and applications for evidence-based medicine and clinical practice guidelines. Health Policy, 46, 1-19.
- Merrien, F.-X. (1999). La Nouvelle Gestion Publique: un concept mythique. Lien social et politiques-RIAC, 41, 95-103.
- Morin, D. (Ed.) (2005). Les pratiques professionnelles fondées sur les résultats probants: questionnons à nouveau ce paradigme. Cahiers de l’ACFAS, 101, 9-26.
- Mullen, E. J. (2002). Evidence-based social work – theory & practice: historical and reflective perspective. 4th International Conference on Evaluation for Practice, University of Tampere. Tampere, Finland. [Online] http://www.uta.fi/laitokset/sospol/eval2002/CampbellContext.PDF
- Niessen, L., Grijseels, M., & Rutten, F. (2000). The Evidence-Based Approach in Health Policy and Health Care Delivery. Social Science & Medicine, 51, 859-869.
- Ogien, A. (1995). L’esprit gestionnaire. Une analyse de l’air du temps. Paris: É.H.E.S.S.
- Pastré, P. (2005). Apprendre par la simulation: de l’analyse du travail aux apprentissages professionnels. Toulouse: Octarès.
- Power, M. (1997). The audit society. Oxford: Oxford University Press.
- Proctor, E. (2002). Social Work, School Violence, Mental Health, and Drug Abuse: a Call for Evidence-based practices. Social Work Research, 2, 67-69.
- Raines, J. C. (2004). Evidence-based practice in school social work: A process in perspective. Children & Schools, 26(2), 71-85.
- Rosen, A., Proctor, E. K., & Staudt, M. (2003). Targets of change and interventions in social work: an empirically based prototype for developing practice guidelines. Research on Social Work Practice, 13(2), 208-233.
- Sackett, D., Rosenberg, W., Gray, J., Haynes, R., & Richardson, W. (1996). Evidence-Based Medicine: What It Is and What It Isn’t. British Medical Journal, 312, 71-72.
- Sanderson, I. (2004). Getting evidence into practice: perspectives on rationality. Evaluation, 10(3), 366-379.
- Sheldon, B. (2001). The Validity of Evidence-Based Practice in Social Work: a Reply to Stephen Webb. British Journal of Social Work, 31, 801-809.
- Sheldon, B., & Chilvers, R. (2000). Evidence-Based Social Care: a Study of Prospects and Problems. Lyme Regis: Russell House Publishing.
- Tanenbaum, S. (2005). Evidence-Based Practice as Mental Health Policy: three Controversies and a Caveat. Health Affairs, 24(1), 163-173.
- Timmermans, S., & Kolker, E. (2004). Evidence-Based Medicine and the Reconfiguration of Medical Knowledge. Journal of Health and Social Behavior, 45 (extra issue), 177-193.
- Tonelli, M. (1998). The Philosophical Limits of Evidence-Based Medicine. Academic Medicine, 73(12), 1234-1240.
- Trinder, L., & Reynolds, S. (2000). Evidence-based practice: a critical appraisal. Oxford: Blackwell Science.
- Vergnaud, G. (2001). Les sciences de l’éducation. Paris: La Découverte.
- Webb, S. (2001). Some Considerations on the Validity of Evidence-Based Practice in Social Work. British Journal of Social Work, 31, 57-79.
- Webb, S. (2002). Evidence-Based Practice and Decision Analysis in Social Work. An Implementation Model. Journal of Social Work, 2(1), 45-63.
- Welsh, I., & Lyons, C. (2001). Evidence-Based Care and the Case for Intuition and Tacit Knowledge in Clinical Assessment and Decision Making in Mental Health Nursing Practice. Journal of Psychiatric and Mental Health Nursing, 4, 299-305.
- Wennberg, J. E. (2004). Practice Variations and Health Care Reform: connecting the dots. Health Affairs, Web Exclusive, 140-144.