Across common indicators of academic success, students in the US who belong to historically marginalized racial groups and disadvantaged socioeconomic groups consistently achieve less than their more socially advantaged counterparts. These achievement gaps indicate unequal educational opportunities. Further, social scientists find that gaps in test scores translate into wage gaps between racial and socioeconomic groups (Brighouse et al., 2018; Duncan & Murnane, 2014). An important aim of US education policy is to address achievement gaps by facilitating improvement among disadvantaged students.[1] At the same time, it aims to raise achievement in the US overall by facilitating improvement among all students. For nearly 20 years, the US has employed an evidence-based approach to education (hereafter “US-EBE”) modelled on evidence-based medicine to pursue these goals. Essentially, US-EBE is a strategy for improving the quality of schools nationwide by sponsoring experimental research that identifies generally effective, replicable interventions (e.g., programs and instructional practices) and creating incentives for practitioners to use them in their classrooms, schools, or districts. This paper offers an analysis of US-EBE as a policy strategy that prompts us to consider the relative moral importance of these two goals under present circumstances.

For US-EBE to narrow gaps while also increasing achievement across student groups, disadvantaged students at the low end of the gap would have to improve at faster rates than advantaged students at the high end.[2] I argue that US-EBE does not support the relative improvement needed to narrow gaps. By aiming to identify interventions that are effective for the general student population, the research agenda driving US-EBE supports improvement in absolute terms for all student groups. Thus, US-EBE is attractive if we want to prioritize raising overall achievement by facilitating improvement for all student groups. I argue that US-EBE would be a better strategy for narrowing achievement gaps if the research agenda shifted its focus from the general student population to interventions that facilitate improvement among disadvantaged students directly. However, shifting the research agenda is likely to vitiate US-EBE’s contributions to overall achievement across student groups.

My analysis of US-EBE in the second section indicates that we can reasonably expect it to either raise achievement across groups or narrow gaps, but not both.[3] This raises a normative question: which goal should we pursue using US-EBE? In the third section, I explore moral considerations that bear on this question, focusing on the costs and benefits that each option can be expected to place on students. I argue that insofar as we care about improving the prospects of disadvantaged students post-schooling or creating equal opportunity through education, we ought to use US-EBE to narrow gaps. Further, I argue that the costs associated with doing so are morally justifiable whereas those associated with the alternative are not.

Note that this is not primarily an argument about how individual schools should treat different students; it concerns the federal policy goals that drive educational research and funding for evidence-based school improvement efforts and shape education policy and practice at the state and local levels. Importantly, the normative portion of this paper is an exercise in non-ideal theory.[4] As such, I accept the stated aims of current policies as policy goals rather than defending or arguing for them. And, given our vast investments in US-EBE and broad support for evidence-based policy generally, I take for granted that we will continue using some version of it and thus focus on how it could be improved to better serve its goals.

Aims of Educational Justice in the US

Currently, the US conceives of educational justice primarily in terms of academic outcomes (i.e., achievement) and their distribution across students. It strives to “provide all children with a significant opportunity to receive a fair, equitable, and high-quality education, and to close educational achievement gaps” (ESSA, Title I, 2015). Existing achievement gaps are morally objectionable because they are unjustified inequalities in the distribution of academic outcomes. We can think of achievement gaps in terms of various indicators: the rates at which group members reach a minimum proficiency threshold, graduation rates, rates at which they attend and finish college, grade point averages, admission and participation in “gifted” programs, and average scores on standardized tests.[5] Along all these dimensions, White students outperform Black, Latinx, and Native American students, and students with high socioeconomic status outperform students with low socioeconomic status. These disparities indicate that students who belong to socially disadvantaged groups tend to have worse opportunities for academic achievement than students in advantaged groups.[6] Closing achievement gaps, then, requires closing opportunity gaps, which are often attributed to disparities in the quality of the education students receive (Goldhaber et al., 2015; Isenberg et al., 2013).

Achievement gaps refer to inequalities between groups. The equality that is realized by eliminating gaps is either equality in the rates at which groups reach a goal (e.g., proficiency or high school graduation) or in groups’ average achievement of academic outcomes (usually indicated by test scores). Thinking of equality in terms of group-level rates or averages allows the distribution of outcomes within each group to reflect a range of individual characteristics, like levels of effort and ability. Because these individual characteristics do not track social categories like race, socioeconomic status, and ethnicity, we can expect them to be distributed similarly across groups.[7] Absent the influence of these social factors on academic achievement, then, groups should reach the proficiency threshold at equal rates and their average test scores should be (approximately) equal.

To address achievement gaps, US policy prioritizes improvement among disadvantaged students. In addition to improving schools serving high concentrations of disadvantaged students, it strives to increase achievement overall by improving schools nationwide. This goal is motivated in part by the fact that the performance of American students lags behind that of students in many other countries (Desilver, 2017; OECD, 2016). Even top achievers in the US tend to be outperformed by their international peers (Jacobs & Wolbers, 2018). To fare well in global competition, the US “needs all students to excel, including the very best” (Grubb, 2012, p. 81). For that reason, the Every Student Succeeds Act (ESSA) “hold[s] all schools – including relatively high performers – responsible for the continuous progress of all groups of students” (Haycock, 2017; see also Obama, 2015).

However, its commitment to prioritize improvement among disadvantaged students constrains efforts to raise national averages by specifying that increases cannot stem solely from improvement among advantaged students – disadvantaged students must also improve. Likewise, the commitment to raise achievement overall by facilitating improvement across student groups constrains efforts to narrow achievement gaps. Strategies targeting gaps must be consistent with improvement for all groups of students. In other words, policies should aim to level up achievement among disadvantaged students rather than level down achievement among the advantaged. To summarize these goals, we can say that federal education policy aims to:

  1. Raise overall achievement as much as possible by improving academic outcomes across students nationwide, provided disadvantaged students are among those who improve.

  2. Narrow achievement gaps between socially advantaged and disadvantaged groups by levelling up achievement among disadvantaged students.

By focusing on absolute, non-comparative levels of academic achievement (e.g., increased test scores), the first goal addresses the problem of low achievement among disadvantaged students, but not unequal achievement between groups. The second goal is responsive to both low achievement and achievement gaps; it focuses on achievement gains for students at the low end of achievement gaps relative to students at the high end.

These goals are advanced by federal education policies that constrain state policies and shape practices within schools and districts. If achievement is largely dependent on school quality, as policymakers tend to assume,[8] then improving schools should improve achievement. US-EBE is supposed to improve schools by introducing more effective interventions, thereby advancing both goals. But, as we shall see, US-EBE, in its current form, makes the most sense if we prioritize the first goal over the second.

The Evidence-Based Approach to School Improvement in the US

US-EBE grows out of a “scientific revolution within education” that advocates the development and use of “replicable programs and practices with strong evidence of effectiveness” (Slavin, 2002, p. 18; Slavin, 2008, p. 125). Such evidence is to be procured using rigorous experimental methods – namely, randomized controlled trials (RCTs). While these commitments are typical of the evidence-based education approach, US-EBE’s funding and incentive structure and its accountability provisions distinguish it from models used elsewhere (e.g., Norway, the United Kingdom).

Roughly, US-EBE consists of a research stage that involves testing interventions to determine “what works” and an implementation stage in which research-backed interventions are used in practice. Intermediary organizations like the What Works Clearinghouse (WWC) connect research to practice by vetting studies, summarizing evidence in a user-friendly format, and disseminating results to practitioners (e.g., teachers, administrators, decision-makers, curriculum planners) so they can use reliable evidence to decide what to implement in their schools or classrooms. Schools and educators at various levels are accountable for their students’ outcomes (e.g., standardized test scores). The general idea is that we can increase the effectiveness of teachers by supplying them with and encouraging them to adopt effective practices, thereby improving schools. School quality is largely measured by students’ performance on standardized tests (Koretz, 2008; 2017). Better test scores or greater improvement in test scores are supposed to indicate the quality of students’ learning opportunities.

Because I am focusing on US-EBE as a strategy, I de-emphasize potential or actual problems with its implementation. For example, some advocates claim that the potential effects of US-EBE are diminished because schools often fail to use evidence-based interventions properly (e.g., Slavin, 2008). While getting the most out of US-EBE requires overcoming such barriers, we can set them aside to examine the logic of US-EBE as a policy strategy. After all, whether we should use resources to improve the implementation of a policy strategy depends on whether that strategy is equipped to advance its goals.

The Structure of US-EBE

The two goals identified in the previous section specify a pattern of educational outcomes to be achieved. To understand how US-EBE is supposed to produce that pattern, we must take a closer look at its overall structure. On the research side, the Institute of Education Sciences (IES) funds research that tests educational interventions to generate evidence of their effectiveness. Following the standard evidence-based approach used in medicine and other policy areas, it favours RCTs because they are considered the best method for establishing causal claims within study settings (i.e., “it worked”) that generalize (i.e., “it works”) to other settings (Mosteller & Boruch, 2002; Slavin 2002, 2008, 2020).

RCTs involve randomly assigning units of study (e.g., schools, classes, or students) to treatment and comparison groups to balance confounding factors between them, so they differ only in that one receives the intervention and the other does not. Balancing groups rules out alternative explanations for the difference in observed effects between the treatment and comparison group, allowing researchers to attribute that difference to the intervention. Because the goal is to identify what works generally, across different students and settings, IES encourages running multiple RCTs to test the same intervention in different locations. The common assumption is that student populations are similar enough to infer that what has worked in some places will work in others – especially if demographic characteristics like race, socioeconomic status, or ethnicity in the target setting are similar to those in the studies.[9]
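To make the causal logic concrete, here is a minimal simulation sketch in Python with invented numbers; the data-generating process (a uniform 5-point effect) is a hypothetical stand-in for the unknown process a real trial investigates.

```python
import random

random.seed(0)

# Hypothetical baseline scores; in a real trial these reflect confounders
# (prior instruction, resources, family circumstances, etc.).
students = list(range(1000))
baseline = {s: random.gauss(50, 10) for s in students}

# Random assignment balances confounders between arms in expectation,
# so the arms differ only in whether they receive the intervention.
random.shuffle(students)
treatment, comparison = students[:500], students[500:]

def outcome(student, treated):
    # Assumed data-generating process, invented for illustration:
    # the intervention adds 5 points on top of baseline, plus noise.
    return baseline[student] + (5.0 if treated else 0.0) + random.gauss(0, 5)

avg_t = sum(outcome(s, True) for s in treatment) / len(treatment)
avg_c = sum(outcome(s, False) for s in comparison) / len(comparison)

# Because assignment was random, the difference in means estimates the
# average treatment effect for this study population ("it worked").
print(f"Estimated average effect: {avg_t - avg_c:.1f} points")
```

Note that the estimate is an average over this study population; nothing in the design itself guarantees that the same effect will appear in other populations or settings, an issue taken up below.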

On the implementation side, educators are encouraged to adopt evidence-based interventions. Since the study design supposedly produces results that they can expect to travel from the study settings to their own, practitioners are encouraged to focus on implementing evidence-based interventions with fidelity to replicate results. To facilitate this division of labour, the WWC vets studies and disseminates “educator-friendly reviews of research” that indicate “which specific programs and practices have been proven to work in rigorous evaluations” (Slavin, 2008, p. 125). The aim is to provide access to an array of “proven, replicable programs that are ready for them to use” (Slavin, 2020, p. 25). Decision-makers must determine which of these can be implemented, with fidelity, in their school, district, or classroom. Practitioners (e.g., administrators, teachers, staff) are then tasked with implementing them.

Since the experimental research central to US-EBE aims to identify interventions that are generally effective in that they can be expected to produce certain effects (e.g., increase students’ percentile rank by 8–10%) in most educational settings, research outputs offer potential benefits for students in various demographic groups (ideally, for the US student population). US-EBE prioritizes improvements for disadvantaged students at the implementation stage through funding and accountability. Since US-EBE was created by the No Child Left Behind Act (NCLB), Title I (the federal provision for allocating supplemental federal resources to schools serving high concentrations of disadvantaged students) has required that schools use evidence-based interventions that meet the WWC’s standards (Eisenhart & Towne, 2003; Walters et al., 2009).[10] Making federal funding conditional on using evidence-based methods provides a strong incentive for adopting them in Title I schools. Several states provide additional incentives using state funding. Under ESSA, all states are required to assist schools that consistently underperform, in part by supporting their use of evidence-based strategies. Grant programs also offer opportunities for qualifying schools to compete for extra funds to implement evidence-based programming, including whole-school reforms (Duncan, 2018).

Additionally, holding schools and teachers accountable for students’ outcomes is considered “an important tool in the effort to raise achievement and close gaps” (Hall, 2013, p. 1). Mechanisms like standardized tests measure students’ achievement levels and progress. Within schools, each student subgroup is expected to show improvement. For subgroups that do not, schools must develop evidence-based improvement plans.

Assessing US-EBE as a Policy Strategy

Looking at its overall structure, we can see how US-EBE is supposed to improve the quality of schools and effectiveness of teachers, especially those serving disadvantaged students. Thus, it seems well-suited to the first policy goal identified above: improving achievement overall and among disadvantaged students. However, I will argue that its focus on identifying interventions that are generally effective makes it less promising as a strategy for meeting the second goal: narrowing achievement gaps.

To raise achievement across groups while simultaneously narrowing gaps, students in disadvantaged groups would have to catch up to their advantaged peers by making greater gains in achievement. But while using generally effective evidence-based interventions seems to support absolute gains for disadvantaged students, it may not support the relative gains needed to narrow gaps. Adopting interventions that produce positive effects, on average, for the general population of students could contribute to greater gains among disadvantaged students on average, but it could just as easily lead to comparable or lesser gains.

As we saw, the research central to US-EBE promotes improvement across all students by investigating what works generally for a broad range of students in the US. It prioritizes improvement among disadvantaged students at the implementation stage by providing special incentives and financial resources to support the use of evidence-based interventions. However, there is no reason to think that using interventions that are generally effective will always lead to greater gains among disadvantaged students. That holds true even if we assume that the supplementary funds allocated to Title I schools are sufficient for implementing them with fidelity, because schools that do not receive supplementary federal funding – including charter, private, and some public schools – can also use evidence-based interventions to improve their students’ outcomes. Indeed, various factors provide incentives for them to do so.

Federal, state, and local policies that base teacher and school evaluations partly on their contributions to students’ performance encourage all schools to use “proven” interventions that promise significant positive effects. Because policies regarding school choice put schools in competition with one another, indicators of school quality are important. Test scores are usually central to parents’ assessments of school quality,[11] which provides an incentive for using interventions that can be expected to improve scores according to rigorous experimental research. Additionally, taking an evidence-based approach to education may itself be seen as an indicator of school quality, given the general popularity of evidence-based policy and practice across policy domains. So, we can expect both Title I and non-Title I schools to use evidence-based interventions.

As previously noted, placing evidence of general effectiveness at the centre of the school improvement strategy implies that evidence-based interventions can be expected to work, to some extent, in all schools that implement them. Indeed, describing interventions as replicable suggests that interventions tested by multiple RCTs are likely to produce comparable average effects wherever they are used – barring settings that are highly atypical. If that were true, using the same intervention in schools serving advantaged students and disadvantaged students would sustain existing achievement gaps.[12] However, RCT results, alone or combined, do not show that the same effects can be replicated across sites (Joyce & Cartwright, 2020; Morrison, 2021).[13] Average effect sizes are estimates based on particular groups of students (i.e., study populations) in particular settings. They are only indicative of what effects are likely to occur in other sites that are sufficiently similar in terms of population and contextual factors relevant to the performance of the intervention (Deaton & Cartwright, 2018; Joyce, 2019; Morrison, 2021).

Proponents who accept that interventions deemed generally effective are unlikely to produce identical effects across sites often maintain that they can be expected to produce positive effects. Indeed, Robert Slavin, a prominent advocate of basing decisions on evidence from RCTs, acknowledges that the effects typically vary for subgroups but claims that it is unlikely that an intervention “found to be effective on average would have zero impact, or even a negative impact, for any subgroup with significant representation in the schools” (2020, p. 28). Let us suppose for now that this more moderate claim is true. Since effects will vary, using the same intervention in Title I and non-Title I schools will not necessarily sustain gaps – gaps could narrow, but they could also widen. It is possible that disadvantaged students in one school will benefit more than advantaged students in another school. Within heterogeneous schools, it is possible that disadvantaged students will gain more than their advantaged peers. But, if all we know is that generally effective interventions will have an (unspecified) positive impact wherever they are used, the reverse seems just as likely. So, the claim that evidence-based interventions are likely to have a positive impact on all groups of students is only comforting if we prioritize absolute gains across groups rather than relative gains for students at the low end of the achievement gap.
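A toy calculation with invented numbers makes the point vivid: an intervention can have a positive effect for every group and still widen the gap whenever the advantaged group happens to gain more.

```python
# Invented numbers for illustration only: positive effects for every
# group are compatible with a widening achievement gap.
groups = {
    "advantaged":    {"baseline": 60.0, "gain": 8.0},  # larger positive effect
    "disadvantaged": {"baseline": 50.0, "gain": 3.0},  # smaller positive effect
}

gap_before = groups["advantaged"]["baseline"] - groups["disadvantaged"]["baseline"]
gap_after = (groups["advantaged"]["baseline"] + groups["advantaged"]["gain"]) - (
    groups["disadvantaged"]["baseline"] + groups["disadvantaged"]["gain"]
)

print(f"Gap before: {gap_before:.0f} points")  # 10
print(f"Gap after:  {gap_after:.0f} points")   # 15: everyone improved,
                                               # yet the gap widened
```

Had the larger gain fallen to the disadvantaged group instead, the gap would have narrowed to 5 points; evidence of a positive average effect alone does not tell us which of these cases to expect.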

For US-EBE to be a strategy for narrowing gaps, evidence-based interventions would have to reliably contribute to larger positive effects among disadvantaged students. Positive RCT results do not give us a reason to think that will be the case. The idea that disadvantaged students in Title I schools will reliably gain more from generally effective interventions assumes that they attend ineffective, low-quality schools. More technically, it assumes that practices or curricula used in Title I schools are always worse than the comparison intervention used in RCTs while practices at schools serving mostly advantaged students are always better than the comparison intervention. If that were the case, then, ceteris paribus, we would expect the same intervention to produce larger effects in Title I schools just as we would expect acetaminophen to have greater effects when compared to a placebo and smaller effects when compared to aspirin.
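The point about comparison conditions can also be put numerically; the figures below are invented solely to illustrate how the same intervention yields different measured effects against different baselines.

```python
# Invented scores to illustrate that a measured "effect" is always
# relative to the comparison condition, as in the analgesic analogy.
intervention_score = 70.0   # outcome under the tested intervention
weak_comparison = 60.0      # e.g., business-as-usual in a low-quality school
strong_comparison = 68.0    # e.g., business-as-usual in a high-quality school

print(f"Effect vs weak comparison:   {intervention_score - weak_comparison:+.0f}")   # +10
print(f"Effect vs strong comparison: {intervention_score - strong_comparison:+.0f}") # +2
```

The argument in the text is precisely that we cannot assume Title I schools always supply the weak comparison condition and non-Title I schools the strong one.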

There is little reason to think this assumption holds across Title I and non-Title I schools.[14] But, even if we grant that Title I schools are less effective, we would have to assume that students in disadvantageous circumstances who have putatively received worse-than-average instruction in an area (e.g., reading or math) are in a position to gain more from an intervention than students in advantageous circumstances who have putatively received better instruction in that area. This seems implausible as a general assumption because it disregards contextual factors that bear on how effective interventions are. A significant literature demonstrates that various factors in and out of school affect learning (e.g., Ladd, 2012). The extent to which students benefit from an intervention often depends on their background knowledge and proficiencies, for example. Socioeconomic status and family dynamics also affect learning readiness (e.g., Rowan, 2011). It seems that students who are more academically advanced and ready to learn will often be in a better position to reap the benefits of an intervention. So, we cannot simply assume that disadvantaged students attending lower-quality schools will gain more from interventions that are generally effective for the broad student population.

Some recent contributions to the literature on evidence-based education call for evidence from other research methods that speaks to how RCT-tested interventions are likely to perform in specific target settings (e.g., Joyce & Cartwright, 2020). Such information gives educators a better sense of how “generally effective” interventions might affect their students. But importantly, evidence that helps all educators choose interventions that will produce the best results for their students in their setting supports absolute gains across groups – not relative gains for disadvantaged students. Indeed, the explicit purpose of this shift is to provide evidence that is useful for all educators (Cartwright & Hardie, 2012; Joyce & Cartwright, 2018). So, educators in schools serving disadvantaged students would be able to identify the interventions that will produce the best results for their (disadvantaged) students, but educators in schools serving advantaged students would also be able to identify the interventions that produce the best results for their (advantaged) students. To support relative gains for disadvantaged students, all educators would have to choose interventions that produce greater benefits for disadvantaged students regardless of their student population. While possible, this is unlikely given the current incentive structure and the aversion to a national curriculum in the US. Further, doing so would require teachers to be less responsive to the needs of their own students.

The upshot is that an evidence-based approach to school improvement that focuses on the identification and implementation of generally effective interventions may increase overall achievement by facilitating improvement across groups (including disadvantaged groups), but we should not expect it to narrow gaps. Trends in achievement since US-EBE was introduced support this assessment.[15] Despite improved academic outcomes across groups, achievement gaps have remained largely intact over the past two decades. Gaps persist because students in advantaged racial and socioeconomic groups have improved, on average, at least as much as students in disadvantaged groups (Haycock, 2017; Rothstein, 2011; Timar & Maxwell-Jolly, 2012).[16] So, the current model is attractive if we want to prioritize the first policy goal over the second.

Using US-EBE to Narrow Achievement Gaps

To narrow achievement gaps, US-EBE must support improvement among disadvantaged students relative to advantaged students. So, I propose prioritizing improvement among disadvantaged students at the research stage in addition to the implementation stage (i.e., providing supplemental funding to enable Title I schools to adopt evidence-based interventions). Prioritizing their improvement at the research stage would mean producing research outputs that support improvement among disadvantaged students without simultaneously supporting comparable or greater improvement among advantaged students.

A research agenda focused on improvement among disadvantaged students could take different shapes. A promising option would be to conduct research that provides information that educators can use to offset the effects of social disadvantages on their students’ outcomes. However, this is not just a matter of figuring out which evidence-based interventions tend to work best, on average, for students in disadvantaged groups by disaggregating the average treatment effect for the general population. Again, that information could lead to increased gains for disadvantaged students but would not ensure relative gains. Instead, I suggest focusing the research agenda on devising interventions specifically designed to help disadvantaged students.

However, I want to caution against emulating the current US-EBE model. An approach centred on experimental research that aims to identify general effectiveness for disadvantaged groups could reproduce problems that arise with research that aims to identify general effectiveness for the general population of students. To see this, consider evidence-based interventions like Success for All (SFA) and the Knowledge is Power Program (KIPP), which are designed to help students who are educationally disadvantaged in certain ways.

The information available through the WWC indicates that both SFA and KIPP effectively improve outcomes, on average, among non-White students and students with low socioeconomic status.[17] This is because each intervention produced positive average effects in the combined study populations, which included a high percentage of students from these groups. For example, KIPP’s effects on achievement in the language arts have been observed in four experimental studies, for a combined sample of 20,804 students. Of those students, 59% were Black, 36% were Hispanic, and 84% qualified for the free and reduced-price lunch program.[18] KIPP caters to students from these educationally disadvantaged groups by lengthening the school day, emphasizing discipline, and creating a school culture that prioritizes academic success (Thernstrom & Thernstrom, 2004). Similarly, SFA is a highly structured, multifaceted program designed to help disadvantaged students by, among other things, addressing nonacademic issues and providing one-on-one tutoring for students who fall behind (Lingenfelter, 2016).

Testing such interventions in an effort to figure out which “work” or are generally effective for disadvantaged students seems like a step forward, but it may not be the best strategy for facilitating relative gains. That is because we should not expect the same intervention to work across (disadvantaged) schools and students (Joyce, 2019; Morrison, 2021). Since that is the case, knowing that these interventions produce positive effects among disadvantaged students on average does not tell educators how they will affect the particular (disadvantaged) students they serve in their classroom, school, or district. Indeed, research on the implementation of SFA indicates that educators find it necessary to significantly adjust the program so their students can benefit from it (e.g., Lingenfelter, 2016; Peurach, 2011). Likewise, analyses of KIPP indicate that it cannot be replicated at scale (e.g., Rothstein, 2004). Additionally, KIPP tends to work best for some members of disadvantaged groups – often those who are most advantaged (e.g., Schmidtz & Brighouse, 2020).

While broad categories of disadvantage tend to bear on academic performance across the US, there is reason to think that disadvantageous social circumstances differ qualitatively between locations in ways that impact achievement (Ladd, 2012). This may be partly because the relevant factors can overlap and intersect in various ways. A recent longitudinal study that compares US school districts finds that socioeconomic- and race-based achievement gaps exist within each district but vary in character (Fahle & Reardon, 2018; Reardon, 2016). Educational disadvantage tracks the same broad social categories everywhere, but the dimensions, causes, and experiences of disadvantage differ, as do the mechanisms through which they affect students’ performance in school. So, we should not expect interventions designed to address disadvantage in general to be similarly effective across students and settings.

For example, certain kinds of advantage in early childhood are stronger predictors of academic success in some locations than they are in others. Increased parental involvement and reading at home, for instance, have had a significant impact on students’ performance in some districts but made little difference in others (García & Weiss, 2017; Tárraga García et al., 2018). Similarly, the impact of Head Start, a national early childhood education program for low-income preschoolers, varies significantly from place to place. Participants meet the same criteria, but Head Start has been highly effective for some and insignificant for others (Morris et al., 2018). Likewise, research investigating neighbourhood effects on school outcomes found that neighbourhoods in Chicago that shared broad demographic characteristics – poor and predominantly non-White – differed significantly in terms of factors that influenced school improvement efforts, for example, social capital (Bryk et al., 2010; Lingenfelter, 2016).

Although it would be useful to find discrete interventions that ameliorate educational disadvantage across settings despite local variation, the variability and complexity of the contributing factors make that unlikely in practice. Indeed, Paul Lingenfelter’s analysis of evidence-based programs like SFA leads him to conclude that “No crisp intervention, no ‘proven program’ can solve a complex problem at scale, in different settings, and over time as situations change” (2016, p. 42). So, instead of privileging experimental research that attempts to identify “what works” for disadvantaged students, it seems more promising to adopt a research agenda that focuses on identifying and offsetting local sources of educational disadvantage.[19] That way, research outputs would help educators improve outcomes in their own settings rather than telling them which interventions tend to work for unidentified members of disadvantaged groups. This is especially important if we care about the distribution of outcomes within disadvantaged groups as well as the average and if we want to improve learning opportunities for all disadvantaged students (Brighouse, 2014).

The research agenda I have in mind would use various methods to study particular places and devise interventions that are likely to be effective there, given the problems students face, available (or accessible) resources, and causal pathways afforded by the educational setting (Cartwright & Hardie, 2012). For example, the Stanford Center for Opportunity Policy in Education and partnerships between University of California researchers and local school districts have helped particular groups of disadvantaged students improve by studying local impediments and creating tailored strategies for addressing them (Quartz et al., 2017; Stanford Center for Opportunity Policy in Education, 2014). Additionally, advocates of “improvement science” in education (e.g., Bryk, 2010) call for this kind of research.

While the solutions that helped students improve in one setting may be helpful for devising solutions in others, the goal of this locally focused research is to understand and address complex problems as they exist in particular places. In cases where educators are considering interventions that have worked elsewhere, they must still consider the kind of information this research produces to support predictions about how those interventions are likely to perform in their particular setting (Joyce & Cartwright, 2020).

Importantly, because research outputs would primarily benefit local populations of disadvantaged students, they would support gains among disadvantaged students (e.g., at Title I schools) without simultaneously supporting gains among advantaged students at other schools (e.g., non-Title I). Advantaged students at heterogeneous schools might improve from interventions that cater to the needs of their disadvantaged peers, but there is no reason to think they will reliably improve as much as or more than their peers, for whom the interventions are tailored. Generally speaking, interventions designed to address or counteract local impediments that put students at a disadvantage will primarily help the disadvantaged. For example, interventions that support increased attendance will help those who would otherwise be absent, not those who will attend school either way. Likewise, hiring social workers who help students access external resources only helps those who need those resources. Thus, assuming advantaged students do not improve dramatically for other reasons, this adjusted model could be expected to help disadvantaged students catch up.

The Normative Question

I have argued that because the current US-EBE model focuses on identifying educational interventions that are generally effective across settings, it supports improvement for all students (including disadvantaged students) but is ill-equipped to narrow achievement gaps. I then outlined an alternative model that seems more promising for narrowing gaps because its research outputs would support relative improvement among disadvantaged students. These descriptive claims raise a normative question: should we use US-EBE to raise overall achievement or to narrow achievement gaps? In this section, I will argue, provisionally, that we ought to use US-EBE to narrow gaps.

While many considerations are relevant to evaluating policy options, I focus on the costs and benefits each option imposes on students. I assume that we care about academic achievement in large part because it significantly affects students’ prospects for obtaining valuable outcomes post-schooling. This consideration can help guide our evaluation of costs and benefits for disadvantaged students, which I take to be particularly weighty – not because of my own prioritarian commitments but because US federal education policy explicitly commits to prioritizing improvement for disadvantaged students. This commitment is motivated by the fact that members of disadvantaged groups currently lack sufficient access to the benefits of education, which significantly impacts their life prospects.

Importantly, educational achievement has both positional and non-positional value for students. It is a positional good in that the value of one’s achievement depends on how much others have achieved. Students who reach higher levels of achievement enjoy a competitive advantage over those who achieve less. For example, a high school diploma has greater positional value for graduates when few others earn one. Graduates are more likely to get jobs when they compete with non-graduates rather than fellow graduates or candidates with college degrees. Many academic outcomes also have non-positional value. Achieving them can contribute to someone’s wellbeing in many ways apart from improving their competitive prospects for desirable positions and occupations. In assessing costs and benefits, then, we must consider both the positional and non-positional value of educational achievement.

Prioritizing the first policy goal (raising overall achievement) amounts to prioritizing improvement among disadvantaged students in terms of non-positional educational goods. Recall that this goal aims to address the problem of low achievement among students from disadvantaged groups, which can translate into unsatisfying life prospects. Prioritizing the second policy goal (narrowing achievement gaps) amounts to prioritizing both the positional and non-positional value of education for disadvantaged students. Recall that this goal aims to address the problem of unequal educational opportunities and outcomes, which translate into unequal life prospects post-schooling, along with the problem of low achievement among disadvantaged students.

All else equal, education policies that maximize benefits while minimizing costs are highly desirable. If we think of costs and benefits in terms of the quantity of outcomes students achieve, using US-EBE to advance the first goal appears preferable in that it provides benefits for students across groups. The current US-EBE model supports improved outcomes for disadvantaged students without sacrificing improvement among advantaged students. Ideally, as many students as possible in all groups would improve their outcomes, yielding significantly higher achievement overall than if students in only some groups were to improve. By contrast, adjusting US-EBE to narrow gaps in the way I have suggested does potentially sacrifice improvement for advantaged students, thereby imposing greater costs on them and yielding smaller overall gains in achievement.

Thinking just in terms of academic achievement, then, the current US-EBE model seems better than the alternative. However, focusing on quantities of outcomes in absolute terms disregards the positional value of improved academic outcomes for those who achieve them. Given existing gaps, advantaged students’ achievement has high positional value while disadvantaged students’ achievement has low positional value.

It seems that, under present circumstances, we should prioritize improving the positional value of education for disadvantaged students because the positional benefits of education are very important to overall life prospects. Indeed, social scientists find that achievement gaps contribute significantly to wage gaps between racial and socioeconomic groups (e.g., Brighouse et al., 2018; Duncan & Murnane, 2014). Moreover, in the US, one’s job and access to social positions largely determine one’s quality of life (Ci, 2013; Ilies et al., 2019). The jobs available to those with greater educational achievement are not only more lucrative but often more rewarding; they allow greater autonomy and garner more respect (Walker & Bantebya-Kyomuhendo, 2014). Despite their potential contributions to wellbeing, knowledge or skills with only non-positional value (e.g., the enjoyment one gets from playing a musical instrument during leisure time) seem to be insufficient substitutes for those with positional value. If rewards were distributed differently (not primarily through employment), positional benefits might be less important. But, at present, insofar as we are concerned with improving disadvantaged students’ prospects through schooling, we should place greater weight on positional educational benefits.

Importantly, the distribution of costs and benefits associated with a US-EBE model that aims to narrow gaps is justifiable because, in terms of positional value, socially advantaged students currently enjoy benefits to which they are not entitled. They are not entitled to the high positional value of their achievement because it stems from injustice for disadvantaged students. Thus, adopting policies designed to remove those (positional) benefits does not require moral justification.

However, policies like the current US-EBE model that produce (undeserved) benefits for advantaged students do require moral justification. The discussion above, along with the fact that education is considered an important tool for creating fair equality of opportunity among adults, suggests that disadvantaged students are entitled to the positional benefits of their improvement under current circumstances. Since benefits to which advantaged students are not entitled are less morally weighty than benefits to which disadvantaged students are entitled, such policies seem unjustifiable.

Using US-EBE to narrow gaps also decreases the non-positional value of advantaged students’ achievement, compared to the current model, because it does not facilitate their improvement. But these costs also seem justifiable. Given the size of achievement gaps, they are necessary to benefit disadvantaged students in positional terms – for the positional value of their education only increases if they improve relative to advantaged students. It is also worth noting that this adjusted US-EBE model aims to level up achievement. So, while it carries an opportunity cost for advantaged students (i.e., it compromises their opportunity to benefit from US-EBE), it does not impede their progress. The expectation is that their achievement levels would remain more or less constant while US-EBE helps disadvantaged students catch up.

Advantaged students may incur costs beyond those considered here. If so, we should ask whether they are necessary for improving disadvantaged students’ prospects within schooling and post-schooling, and if they contribute to fair equality of opportunity. Even costs that are objectionable under some circumstances may be acceptable now given the urgency of helping the disadvantaged and addressing existing achievement gaps. A US-EBE model that directly addresses educational disadvantage may help break the persistent connection between social circumstances and academic achievement, thereby improving social mobility.[20] If that is the case, the additional costs it imposes might be justifiable because they bring us closer to achieving educational or social justice.

Although the costs I have considered are justifiable, we should ask whether imposing them on advantaged students is ultimately better for disadvantaged students. The current US-EBE model stands to increase overall achievement in the US to a greater extent than the alternative, which could benefit society as a whole. As Elizabeth Anderson observes, “[m]ore highly educated people are better able to serve others in demanding jobs and volunteer service positions” (2007, p. 615). Having an educated public is beneficial within democratic societies and developing talents increases our competitive advantage in the global market. By contrast, using US-EBE to narrow gaps could stall growth among top achievers and among advantaged students generally, undermining their ability to make valuable social contributions that benefit all members of society.

Indeed, creating greater equality domestically may diminish competitive prospects for the US in the global economy. Because the US lags behind many other countries, even if advantaged students maintain their levels of achievement while disadvantaged students catch up, the gap between top-performing students in the US and its global competitors will likely increase. How the US competes internationally affects all citizens, including those who are disadvantaged.

These broader social benefits clearly bear on disadvantaged students’ prospects post-schooling, perhaps more so than non-positional benefits that students receive from their own academic achievement, considered in isolation. So, while non-positional benefits do not outweigh positional benefits, we can ask whether social benefits make up for the loss of positional benefits that disadvantaged students would gain from narrowing achievement gaps. That depends in part on the nature and magnitude of the social benefits they can expect.

As resources are currently distributed in the US, wealth accrues primarily to beneficiaries of existing inequalities. There is little reason to think that greater wealth will significantly benefit disadvantaged members of society rather than sustaining social stratification and economic inequality. The mere possibility of progressive taxation that directly benefits the disadvantaged is insufficient for justifying the current US-EBE model. For US-EBE to be justified by its positive effects on the nation’s wealth, benefits of a certain kind must be directed to disadvantaged members of society. Unless greater national wealth contributes to fulfilling social obligations, weighing the social benefits of academic achievement more heavily than positional benefits for disadvantaged students prioritizes economic success over securing justice entitlements. Perhaps that is a defensible trade-off if it enlarges the smallest share of the pie, but creating a system geared to do that requires political will that is currently lacking in the US.

Let us assume the disadvantaged would benefit as much or more, in terms of resources, from others’ social or economic contributions than from their own participation. Does that speak in favour of the current US-EBE model? The answer is unclear, but it is worth considering the character of the benefit each option provides in light of what American citizens value. In addition to material wealth and resources, they purport to value autonomy, individual success, personal responsibility, and social mobility. Redistribution that secures better social services and healthcare would improve the quality of life for those who cannot afford these essentials. But will providing those social benefits contribute to realizing these other values better than narrowing gaps in educational achievement? If not, structuring society so contributions from the advantaged provide greater benefits to the disadvantaged, at least by providing services, may not justify weighing overall achievement more heavily than narrowing gaps.

Conclusion

This essay clarified and explored normative considerations relevant to pressing moral problems in education policy and practice. I identified two goals central to federal education policy in the US. I argued that US-EBE, a central strategy for pursuing them, is unlikely to advance both simultaneously. This descriptive point motivates a normative question regarding the relative moral importance of these goals. To begin addressing it, I examined the values at stake and evaluated some of the trade-offs associated with each option. I argued, provisionally, that narrowing gaps is more morally important than raising achievement in absolute terms because unequal achievement imposes high costs on disadvantaged students while providing unjustified benefits for advantaged students.