Bringing “Behavioral” Fully into Behavioral Public Administration

Abstract: Behavioral economics is an increasingly influential field across the social sciences, including public administration. But while some behavioral economics ideas have spread rapidly in public administration research, we argue that a broader range of behavioral economics concepts can and should be applied. We begin by outlining some central models and concepts from behavioral economics to fix ideas, including the rational model and the “behavioral” response. We then discuss how a variety of heretofore underutilized behavioral economics concepts can be applied to a specific area of work in public administration – bureaucratic decision making. Our aim in doing so is two-fold. First, we hope to provide fresh food for thought for researchers and practitioners working in the broader behavioral public administration space. Second, we hope to demonstrate that there is substantial scope for expanding behavioral economics’ influence on public administration research.

prefer one or the other (or find them precisely equal in appeal). Preferences are also assumed to be "transitive": if option A is preferred to option B, and option B is preferred to option C, then option A is preferred to option C. Transitivity rules out preference cycles, such as option C being preferred to option A.
Other assumptions pertain to the decision-maker's ability to process information and the consistency of their behavior. When evaluating choices across time, individuals are assumed to be able to forecast and commit to their optimal future consumption or work plan by calculating the correct present discounted value of the costs and benefits associated with each choice. One implication is that once a decision is made, individuals follow through with that plan, meaning procrastination would not occur (more on this later). Finally, in evaluating decisions, individuals are assumed to be able to accurately process large amounts of potentially complex information, and supposedly irrelevant contextual factors (such as one's current emotional state, the physical environment, or how options are presented on a menu) are assumed not to sway the decision, because they do not affect the utility value of each option, and it is utility that determines choices. Ultimately, these assumptions imply that one can distill each choice problem to its objective qualities, perform an accurate cost-benefit analysis, and choose the utility-maximizing option perfectly. Rational models do allow this utility maximization procedure to be imperfect, but only in special cases, such as incomplete information or rational inattention, and even these cases impose strong assumptions. Under incomplete information, the rational model assumes that decision-makers can assign accurate probabilities to every alternative outcome and weigh the likelihood of each state of the world when comparing alternatives. Under rational inattention, decision-makers are assumed to accurately assess the time and effort needed to gather and process all the information required for an informed decision and to weigh these costs against the cost of making an optimization error.
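The rational intertemporal calculus described above can be sketched concretely: the agent computes the present discounted value (PDV) of each feasible plan and commits to the one with the highest PDV. The discount factor and payoff streams below are hypothetical illustrations, not parameters from any study.

```python
# Sketch of rational intertemporal choice: pick the plan whose stream of
# per-period payoffs has the highest present discounted value (PDV).
# The discount factor and payoff streams are hypothetical.

def present_discounted_value(payoffs, delta):
    """PDV of a payoff stream, where payoffs[t] accrues t periods from now."""
    return sum(payoff * delta**t for t, payoff in enumerate(payoffs))

def rational_choice(plans, delta):
    """A rational agent commits to the PDV-maximizing plan and follows through."""
    return max(plans, key=lambda name: present_discounted_value(plans[name], delta))

plans = {
    "act_now":   [-10, 6, 6, 6],   # pay a cost today, reap benefits later
    "act_never": [0, 0, 0, 0],
}
print(rational_choice(plans, delta=0.95))  # with delta near 1, future benefits dominate
```

Because the agent discounts every future period by the same geometric factor, the plan chosen today remains optimal tomorrow; this time consistency is precisely what the behavioral models discussed below relax.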
The power of the rational model lies in its simplicity: it allows researchers to map behaviors back to preferences. The logic is straightforward: if I observe you choosing one option over all other available options, and I assume that you are a rational optimizer, I can make a statement about your preferences, namely that you preferred the chosen option to every other available option (Bernheim & Rangel, 2009; Richter, 1966). This concept of "revealed preference" is valuable in the process of assessing social welfare. If, for example, a bureaucrat can learn something about what citizens prefer from their behavior, it is easier for the bureaucrat to evaluate whether an administrative program is meeting citizens' needs.
Despite its popularity, the rational model imposes a level of consistency in human behavior that has been shown not to hold in a variety of empirical settings. Prominent work in behavioral economics has shown cracks in the ability to explain human behavior with rational models in both lab and field settings (Simon, 1990; Kahneman & Tversky, 1979; Thaler & Shefrin, 1981; Laibson, 1997; Fehr & Schmidt, 1999; DellaVigna, 2009). Behavioral models update the rational model by allowing decision makers to care about relative outcomes (Koszegi & Rabin, 2006; Kahneman & Tversky, 1984), to procrastinate (Thaler & Shefrin, 1981; Laibson, 1997; O'Donoghue & Rabin, 1999), to care about others beyond what can be justified by genetics and repeated interactions (Fehr & Schmidt, 1999; Frey & Oberholzer-Gee, 1997), to have limited attention (Chetty et al., 2009; Bordalo et al., 2012), and to rely on quick, though imperfect, heuristics to make decisions (Tversky & Kahneman, 1973; Tversky & Kahneman, 1974; Thaler, 1999; Kahneman, 2003). While some of this work models decision making as a multi-system process in which the "rational mind" competes with emotion and instinct (Kahneman, 2011; Thaler & Shefrin, 1981), other work retains the notion of individual decision makers seeking to maximize utility but expands the utility function to incorporate behavioral components such as present bias (Laibson, 1997), preferences for equity and fairness (Fehr & Schmidt, 1999), or social identity considerations (Benjamin et al., 2010). Many of these psychological inputs can lead to behavior that is inconsistent and unpredictable according to a rational choice framework but that can be anticipated and explained by formal models in behavioral economics.
Because of its intuitive appeal, behavioral economics has recently caught on in policy circles (e.g., the Behavioural Insights Team in the U.K. and the Social and Behavioral Sciences Team in the United States). Much of the applied work in behavioral economics centers on using "nudges" (Thaler & Sunstein, 2008) to encourage behavior change in policy-relevant domains (Madrian, 2014; Sunstein, 2020). The term "nudge" captures the idea that the decision making of non-rational agents can be significantly altered by small modifications to the decision environment that would not influence a rational agent. This idea provides a tool for anyone seeking to alter others' behavior: by changing the "choice architecture," one can encourage people to choose a specific option without removing any options. 1 While work on nudges has had a significant impact in the policy world, we believe a path forward in behavioral public administration necessarily involves expanding the range of behavioral ideas that are brought into public administration. Simply put, behavioral economics is a conceptually rich field, and public administration researchers and practitioners have only scratched the surface in mining it for valuable insights.
We next provide specific examples of how behavioral economics models and theories that are less used in public administration apply to (and could enhance) research on bureaucratic decision making, a broad area of interest for public administration scholars. We hope this exercise leaves the reader convinced that there is significant room to expand the role of behavioral economics in public administration and inspired to pursue work that brings the full spectrum of behavioral economics tools to bear on important public administration topics.

Bureaucratic Decision Making: Broadening the Scope for Behavioral Research in Public Administration
In a sense, the application of behavioral economics to how bureaucrats behave is a natural one. 2 A bureaucrat's behavior (and that of the citizens with whom bureaucrats interact) is guided not just by rational motives (incentives and cost-benefit optimization, etc.), but also by "behavioral" factors that form the psychological underpinnings of the behavioral economics models we described (Lee & Clark, 2017 and Schubert, 2017 make related arguments). Yet, there are core theories in public administration, from the framework in Downs' canonical Inside Bureaucracy (Downs, 1967), to the bureau-shaping model (Dunleavy, 2014), to public choice theory (discussed in Boyne, 1998), that assume that public servants act in accordance with rational choice models. For example, assuming that results-focused management will improve outcomes in bureaucratic structures implicitly requires the assumption that bureaucrats act as rational optimizers (Moynihan, 2005). However, assuming rationality in this way can prevent us from developing important insights and resolving empirical puzzles; for example, the "behavioral" tendency to stick with the status quo and not to use all available information could help explain the empirical finding that merit-based pay (one type of results-focused management) is minimally effective in the public sector (Choi & Whitford, 2013). With this context in mind, in this section we outline a few examples of how an expanded view of behavioral economics, which incorporates aspects of the field that remain relatively underutilized in public administration, has the potential to enrich our understanding of bureaucratic decision making.

Mental models with availability and confirmation biases
The public sector has limited resources to tackle important social problems, which means in theory (and ideally in practice) that politicians and bureaucrats must objectively evaluate public programs for efficacy. The process of evaluating programs is often simplified (consciously or not) by assuming rationality on the part of the key players in the policy landscape, an implication of which is that simply providing full information would be sufficient for accurate program evaluation. A natural way forward may be to focus on the behavioral biases of citizens; however, bureaucrats themselves are no less susceptible to irrationalities. When designing or evaluating a public program, a bureaucrat necessarily (and perhaps implicitly) starts with an existing mental model, or set of priors, about the program, and that set of priors about what types of policies are most likely to succeed may be overly influenced by examples that are highly available, or easy to call to mind (availability heuristic, Tversky & Kahneman, 1973). That mental model guides what data the bureaucrat will use to design and judge the program. Recent research suggests that when our mental models are wrong, access to copious data is not enough to fix our mistaken views for a few reasons. One is that self-serving or self-justification biases can push individuals to conclude that they were right all along. Another is that individuals who start with an incorrect mental model tend not to pay attention to the right data and therefore do not learn and optimize correctly (Hanna et al., 2014). For example, a bureaucrat may have the ex-ante belief that improving the quality of a school's facilities is the most effective way to improve overall student academic performance. When tasked with improving performance at underperforming schools, this bureaucrat will likely push for investments in physical infrastructure. 
If these investments lead to some improvement in academic performance, the bureaucrat will use this as evidence of the accuracy of his view. The problem, of course, is that this is not proof that infrastructure investments are the most efficient use of scarce resources--perhaps the money would have had a greater effect had it been dedicated to incentives for teachers to improve student outcomes. If these alternative options are not part of the bureaucrat's mental model, it is likely that no amount of time, experience, or data will lead the bureaucrat to optimize properly.
A second reason why objective information may fail to overcome the wrong mental model is that bureaucrats may be biased towards interpreting evidence as confirming their existing views, even if the data are objectively ambiguous. This phenomenon is known as "confirmation bias" (Nickerson, 1998; Charness & Dave, 2017; Charness et al., 2021). Confirmation bias can lead two people with different views to interpret the same information in different ways, hardening beliefs and hampering arrival at objectively accurate evaluations. In our previous example, a bureaucrat who believes that spending on school facilities improves student performance may not only focus on evidence that supports his view but may also interpret neutral information (such as students spending more time in a newly-constructed computer classroom) as supporting his view. (In the example of a new computer classroom, more time spent does not necessarily mean that the quantity or quality of the student's knowledge has increased.) Unfortunately, the solution to the problem of inaccurate mental models coupled with confirmation bias is not simple. It is tempting to think that forcing bureaucrats to confront evidence that is contradictory to their existing views would suffice; however, humans are adept at justifying inconsistencies to themselves and tend to double down on prior views when confronted in this way (Festinger, 1957; Charness et al., 2021; Mobius et al., 2022).
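One stylized way to formalize the belief-hardening dynamic just described is asymmetric updating: signals that confirm the prior move beliefs fully, while disconfirming signals are discounted. The updating rule, the discount factor `kappa`, and all numbers below are hypothetical illustrations, not a model drawn from the cited papers.

```python
# Hypothetical sketch of confirmation bias as asymmetric belief updating:
# confirming signals get full weight, disconfirming signals are shrunk by
# kappa < 1. All parameter values are illustrative.

def update_belief(belief, signal, rate=0.2, kappa=0.3):
    """Move belief toward signal; shrink the step when the signal disconfirms.

    belief, signal in [0, 1]: strength of the view 'facilities drive performance'.
    """
    step = rate * (signal - belief)
    if (signal - 0.5) * (belief - 0.5) < 0:  # signal points the other way...
        step *= kappa                         # ...so it is heavily discounted
    return belief + step

b = 0.8                          # strong prior that facilities matter
for signal in [0.2, 0.2, 0.2]:   # repeated disconfirming evidence
    b = update_belief(b, signal)
print(round(b, 3))               # belief barely moves despite contrary data
```

Under symmetric updating (`kappa = 1`) the same three signals would pull the belief much closer to the evidence; the asymmetry is what lets two evaluators with different priors diverge on identical data.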
Thus, confirmation bias and inaccurate mental models can pose a persistent challenge to objective program evaluation by biasing how public agencies tackle social problems. Behavioral economics theories suggest that psychological path dependence matters. However, this is an area at the intersection of behavioral economics, psychology, and public administration where research (especially field work) is limited and needed.

Revealed Preference with Limited Attention and Present Bias
An implication of assuming rationality is that citizens' true preferences can be inferred from observing their choices through the argument of revealed preference, as discussed earlier. Inferring citizens' preferences from their actions assumes that citizens process and grasp all facets of public sector policies and programs – that is, they are perfectly attentive all the time. There are good reasons to doubt this assumption. The decision-making environments citizens encounter in the public sector are often complex. It is quite likely that citizens only pay attention to a limited amount of information when making decisions, and this information is likely to be that which is most salient (Chetty et al., 2009; Finkelstein, 2009; Hossain & Morgan, 2006; DellaVigna, 2009; Gabaix, 2014; Gabaix & Laibson, 2006; Bordalo et al., 2013; Caplin et al., 2011). 3 Consider, for example, the challenges that bureaucrats face in deciding how best to communicate information about a program or product to citizens. If they assume that citizens are perfectly attentive, bureaucrats might provide all possible information publicly and leave citizens with the responsibility to sort through and digest all the information. However, in the real world, citizens appear far more responsive to simplified than to more complex information, even when the latter is technically more informative (see for example Bhargava & Manoli, 2015 and Liebman & Luttmer, 2015). This suggests that: 1) the assumption that citizens can perfectly process complex information is incorrect; and 2) increasing the information available to citizens is not necessarily an effective way to obtain information about their true preferences.
Even if a citizen is fully attentive, administrative burdens (bureaucratic processes, extensive paperwork, etc.) add an additional confounding factor that can hinder the ability to infer preferences from actions (Moynihan et al., 2015). Citizens may struggle to overcome administrative burdens to access programs they value not due to imperfect attention but due to procedural obstacles, driving a wedge between preferences and behavior. Relevant to this discussion is that individuals can be myopically focused on the present (Laibson, 1997; O'Donoghue & Rabin, 1999), a phenomenon known as present bias. The seminal models of present bias alter the way decision makers discount or weight utility between today and the future in a way that places excessive emphasis on utility today when compared to utility in the future. As a result, these models can generate substantial procrastination; agents with present bias often choose to "do something tomorrow" as the optimal option, only to change their mind when tomorrow arrives. In the case of program take-up, present-biased citizens might overweight the present burden and frustration from administrative processes and underweight the future benefits of a program, and thus fail to utilize a program that is net beneficial to them from a rational perspective. While the field of research on administrative burden is growing richer and more substantive over time, more work that brings this subfield together with behavioral economics work on inconsistent time preferences seems likely to generate new insights and practical tools for overcoming take-up challenges. 4
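The procrastination mechanism described above can be made concrete with a minimal sketch of quasi-hyperbolic (β-δ) discounting, in the spirit of Laibson (1997) and O'Donoghue & Rabin (1999); the parameter values and the paperwork-cost and benefit numbers are hypothetical.

```python
# Quasi-hyperbolic (beta-delta) discounting: utility t periods ahead is
# weighted by beta * delta**t for t >= 1, and by 1 for t == 0.
# Parameter values and payoffs are hypothetical illustrations.

def discounted_utility(flows, beta, delta):
    """flows[t] = utility received t periods from the agent's 'now'."""
    return flows[0] + beta * sum(u * delta**t for t, u in enumerate(flows[1:], start=1))

beta, delta = 0.5, 1.0   # strong present bias, no long-run discounting
cost, benefit = -6, 8    # effort of the paperwork today vs. program benefit later

# Plan made today: "do the paperwork tomorrow, get the benefit the day after."
do_tomorrow = discounted_utility([0, cost, benefit], beta, delta)  # 0 + 0.5*(-6 + 8) = 1.0
do_today    = discounted_utility([cost, benefit, 0], beta, delta)  # -6 + 0.5*8 = -2.0
assert do_tomorrow > do_today  # today, the agent sincerely prefers to wait

# When "tomorrow" arrives, the cost is no longer discounted while the benefit
# still is, so the same comparison repeats and the paperwork is postponed again.
```

The key feature is that the β factor applies to every future period but not to today: each morning the paperwork cost looms large while the program benefit stays discounted, so "tomorrow" remains the plan indefinitely even for a program that is net beneficial.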

The Process of Innovation: Status Quo Bias, Social Identity, and Self-Preservation Motives
Behavioral economics concepts can also inform research on individual- and group-level forces that can either encourage or stifle change and innovation in bureaucracies. Important work in behavioral economics finds that individuals are often influenced by status quo bias, or the tendency to stick with the current option or plan (Madrian & Shea, 2001). Because bureaucracies are composed of individuals who have this tendency, status quo bias can surface in decisions within a bureaucratic system. This can mean that policies and programs "already on the books" are hard to jettison, even when they are underperforming. The tendency to stick with the status quo enters behavioral economics models in a few ways, but generally involves altering the decision maker's utility function from one in which utility is derived from absolute levels of the inputs to one in which utility is derived from the level of those inputs relative to a reference point, often the status quo (see, for example, Koszegi & Rabin, 2006; O'Donoghue & Sprenger, 2018; and Masatlioglu & Ok, 2005). Such models are sometimes combined with a particular functional form for utility relative to that reference point in which outcomes below the reference point (losses) harm utility more than gains of the same magnitude help it. This idea that losses psychologically hurt more than same-sized gains help, called loss aversion, is well documented (Tversky & Kahneman, 1991; Kahneman & Tversky, 1979; Kahneman et al., 1991). To give an example relevant to public administration, when evaluating whether to change a program, eliminating an underperforming element may produce a feeling of "loss" relative to the status quo (the reference point), which may discourage bureaucrats from cutting that aspect of the program (Adams et al., 2021 and Klotz, 2021 are relevant recent works in this area as well). Applying the lens of behavioral economics models with reference-dependent preferences and loss aversion to such situations can shed light on how bureaucrats make these types of choices.
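A minimal sketch of the reference-dependent value function just described, using the piecewise-linear form common in the loss aversion literature; the loss aversion coefficient and the program-scope numbers are hypothetical.

```python
# Reference-dependent utility with loss aversion: outcomes are evaluated
# relative to a reference point r, and losses loom larger than gains by a
# factor lam > 1. The lam value and program numbers are hypothetical.

def gain_loss_utility(x, r, lam=2.25):
    """Piecewise-linear utility of outcome x relative to reference point r."""
    return (x - r) if x >= r else lam * (x - r)

# A bureaucrat weighs trimming an underperforming program element; the status
# quo scope of 100 units is the reference point.
status_quo = 100
cut = gain_loss_utility(90, status_quo)   # cutting 10 units is coded as a loss
add = gain_loss_utility(110, status_quo)  # adding 10 units is a same-sized gain
assert abs(cut) > abs(add)                # the loss hurts more than the gain helps
```

Because the 10-unit cut is weighted by `lam`, a bureaucrat maximizing this utility will resist symmetric reallocations that a rational optimizer would treat as a wash, which is one formal route to the status quo bias discussed above.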
Innovation can also be stifled if bureaucrats recognize flaws in existing policies or processes but choose not to advocate for change for reasons related to the preservation of self-image. A psychological reason for inaction here is that if a bureaucrat's proposed action fails to solve the problem or makes it worse, colleagues can easily assign blame for the proactive error. In contrast, staying quiet when one should have spoken up is nearly impossible to detect and punish, and humans tend to resist errors of commission more than errors of omission (see Coffman, 2014, which discusses hesitancy to contribute ideas more generally). Behavioral economics models that allow anticipated regret (Bell, 1982) and one's social image (Bursztyn & Jensen, 2017) to affect utility, and thus decisions, help bring this insight from psychology into the behavioral economics toolbox.
Furthermore, individuals, bureaucrats included, do not make decisions in a vacuum, and the influence of others can sway decision makers in ways that are not captured by classical economic models. The model of social identity in Benjamin et al. (2010), which builds on prior work on identity economics (Akerlof & Kranton, 2000), provides a key intuition: if we model individuals as identifying with a social group that has associated typical preferences and actions for group members, a tension can arise between what individuals personally prefer and what individuals know to be the accepted preference of those in their social group. This means that while an individual might personally prefer a certain action or decision, the individual may ultimately decide on a compromise position or behavior that represents a weighted average of what the individual thinks is best and what their group identity encourages them to think is best. This behavior is a departure from standard economic models where the only relevant preferences are those of the decision-maker and not what the decision-maker perceives to be the preferences of others.
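The weighted-average intuition above can be sketched in a few lines, in the spirit of Benjamin et al. (2010); the weight `w` and the action values are hypothetical illustrations, not calibrated parameters.

```python
# Sketch of the social-identity intuition: the chosen action is a weighted
# average of the individual's personally preferred action and the action
# prescribed by the group identity. The weight and actions are hypothetical.

def identity_choice(personal_ideal, group_norm, w):
    """w = weight on personal preference; (1 - w) = pull of group identity."""
    return w * personal_ideal + (1 - w) * group_norm

# Example: a bureaucrat personally favors a bold reform (action = 1.0), but the
# "bureaucratic identity" prescribes the status quo (action = 0.0).
chosen = identity_choice(personal_ideal=1.0, group_norm=0.0, w=0.4)
print(chosen)  # a compromise tilted toward the group norm
```

In a standard model, only `personal_ideal` would matter (`w = 1`); the departure is that the perceived group prescription enters the choice directly, which is what allows identity to mute individually preferred innovations.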
If social identity influences individual-level behavior in a bureaucracy, this could limit innovation and discourage change. That is, if acquiescence to authority is part of the "bureaucratic identity," innovative ideas could be silenced in the minds of bureaucrats, before they are even voiced. For example, if a manager's request for feedback on a program is met with silence, that silence may not be due to a lack of objections to aspects of the program, but rather to a dominant bureaucratic identity that favors passive acceptance at the individual level.
Another related group-level force that can stifle innovation is the social environment of a bureaucracy. Research across the social sciences has consistently shown that peer pressure and social influence are powerful forces (for example, see Asch, 1951; Gerber et al., 2008; and Bursztyn & Jensen, 2015). One notable example germane to the public sector is groupthink: the tendency for groups of like-minded individuals to collectively overlook important elements of a problem. Indeed, the canonical example of groupthink comes from the public policy world: the famously botched Bay of Pigs invasion in 1961 (as outlined first in Janis, 1972). Groupthink is in part driven by the individual-level desire for a strong social reputation, and bureaucrats (like people in any social environment) may feel the need to fit in with their peers and adhere to the norms of their workplace environments. Thus, groupthink might keep bureaucracies from implementing changes and innovations, even in the presence of a strong collective desire to do good.
In this piece, we presented several sketches of how insights from behavioral economics might be applied to the study of bureaucratic decision making. Many of these topics are being studied in parallel by behavioral economists and by public administration scholars, but there is much to be gained by joining forces across fields. Not only could this generate important new conceptual and theoretical insights, but it could also allow for testing behavioral economics in the field more rigorously in real world policy environments. We believe the returns to expanding the scope of behavioral public administration research in this manner are enormous, and we look forward to seeing more research in this space doing this critical work.

Notes
1. Some researchers have pointed out that nudges maintain a certain degree of paternalism because nudges are intended to guide behavior in a direction determined by the designer. This research questions aspects of the way Thaler and Sunstein (2008) define, explain, and justify the idea of nudges. A discussion of these critiques is largely outside the scope of this paper, but see Sugden (2017) as an example.
2. Note that by "bureaucrat," we mean any official who works at a government agency within a procedural chain-of-command structure.
3. A related strand of literature focuses on models where this limited attention can lead individuals to consider only a subset of the options available to them (though in some cases not necessarily through a rational reaction to the limits of human cognition). See, for example, Sims, 2003 and 2006; Caplin & Dean, 2015; Masatlioglu et al., 2012; and Lleras et al., 2017.
4. Some examples of this type of work on administrative burden include Christensen et al. (2020), Herd & Moynihan (2019), Moynihan et al. (2015), and this journal's recent symposium on administrative burden (e.g., Lopoo et al., 2020; Hock et al., 2021; Ali & Altaf, 2021).