COMPARE AND CONTRAST DECISION MAKING

ATTACHED FILE(S)
Compare and contrast two different decision making models used in public sector decision making. Which one do you feel is better? Explain why.
Read “Bridging the Divide between Evidence and Policy in Public Sector Decision Making: A Practitioner’s Perspective” by Arinder from Public Administration Review (2016).
Read “Crisis Decision-Making: Understanding the Decision-Making Process during Emergencies” by Goldberg from Journal of Business & Behavioral Sciences (2013).
Journal of Business and Behavioral Sciences
Vol 25, No.2; Fall 2013


CRISIS DECISION-MAKING: UNDERSTANDING THE
DECISION-MAKING PROCESS DURING
EMERGENCIES

Kenneth I. Goldberg
National University

ABSTRACT: Organizations going through emergencies have to work with a
variety of stakeholders, or a system of stakeholders, as they prepare for and
recover from emergencies and return to normalcy. As in any organization, decisions made by one
stakeholder can have consequences on other stakeholders. The challenge facing
emergency and business continuity managers is developing procedures that allow
system stakeholders to better understand the decisions being made and thereby
mitigate the impact of unintended consequences. This paper reviews the related
literature on three theories that can be applied to organizational decision-making
and how they can help leaders better understand the decisions organizations
make during emergencies. The paper concludes with a model that can be
generalized to any organization or system for minimizing unintended
consequences and improving the transparency of decision-making during
emergency situations.

INTRODUCTION

As organizations become increasingly complex, with ever-growing
stakeholder interests, a challenge facing emergency and business continuity
managers is addressing the interconnectedness of organizations and the impact it
has on decision making. Palmberg (2009) and Ng (2009) describe this
interconnectedness as a complex adaptive system where dynamic and
interdependent connections exist between agents. On an international scale, such
emergencies can include technological incidents, terror-related risks, food safety
crises and infectious diseases. The same can be said for disasters on a national or
regional level, such as oil spills and flooding, and, on a more local scale, earthquakes and
tornados. Similarly, the interdependence of decision-making can also be applied
to the private sector in business continuity responses for events such as
information security breaches, computer hacking and terrorist acts. Due to the
complexity of these systems, the decisions one stakeholder makes will have
consequences for other organizations as well.

The challenge emergency and business continuity managers have during these
situations is accurately determining the interconnectedness and consequences of
actions when attempting to return to a sense of normalcy. Although it is easy to
see the interconnectedness of actions on a major scale, it also occurs on a smaller
scale between local or regional governments, small businesses and nonprofits.

For example, one of the most common lessons learned from looking back on
emergency actions has to do with communication efforts. Kettl (2006) described
the various systems of communication between federal, state and local agencies
as being a “wicked problem” (p. 273) that prevented essential support from being
provided to communities along the Gulf Coast of the United States during
Hurricane Katrina. Similar “wicked problems” (Kettl, 2006) often arise on a
more local level with businesses and governments trying to respond to
emergencies.

Developing methodologies to identify the interrelatedness of decisions between
organizations can be challenging during normal operations. However, identifying
relationships and their intended and potentially unintended consequences during
crisis situations can be particularly challenging. By developing a model to
analyze decision-making inputs from a variety of perspectives, emergency and
business continuity professionals may be able to better predict the outcomes of
their decision-making, reduce unintended consequences and more quickly return
to normalcy.

DISCUSSION

Complexity Theory: Complexity theory attempts to explain how organizations
behave. Complexity theory suggests there are underlying assumptions of
organizational behavior and external forces that drive decision-making.
According to Morrison (2005), organizations, like society, are dynamic open
systems that are sensitive to forces. They are influenced by feedback and their
interconnectedness to other organizations.

According to Stacey, Griffin and Shaw (2000), individual actions play a major
role in how an organization will react in times of emergencies. The authors
suggest that one must understand how individuals will act during a crisis to
understand how and what decisions may be made in responding to emergencies
or disasters.

Similarly, Wheatley (1999) discusses how change can cause chaos in
organizations. Wheatley suggests that in leading through change one must
understand the underlying principles and vision of the organization to accurately
determine how it will act in a crisis. To better understand decision-making during
emergencies, Wheatley (1999) and Stacey, Griffin and Shaw (2000) suggest that
one has to also understand the underlying assumptions (values and shared vision)
of an organization to understand crisis decision-making.

Structuration Theory: Structuration theory attempts to explain decisions
through the lens of organizational values and culture. Morrison (2005) suggests
that organizational routines, similar to organizational culture, can be powerful
influences in organizations.

Stones (2005) suggests decisions made by individuals in organizations are
influenced by the values and culture an organization practices. According to
Stones (2005) organizational culture can replace the individual values in
decision-making. As a result, understanding organizational culture and values
can help predict decision-making during an emergency.

In the field of nursing, research applies structuration theory to explain a
culture of safety. According to Groves, Meisenbach and Scott-Cawiezell (2011), a
culture of safety in nursing strongly influences practices. The authors suggest that
there can be competing cultures in the medical facility system that may
compromise one culture over another resulting in unintended consequences when
decisions are made. Applying the concept of competing cultures to emergency
and disaster management, one can suggest that competing cultures may exist
between interrelated governmental or business systems. Understanding the
complex relationships between system stakeholders may mitigate unintended
consequences of decisions.

Systems Theory and Systems Thinking: Systems theory attempts to explain the
causal relationships of actions taken within and on a system. Checkland (2006),
in his study of Soft Systems Thinking, suggests that an organization has a “view”
of itself that can influence how it reacts to the internal and external forces of the
system in which it operates. Checkland argues that each organization interprets
situations differently when trying to solve a problem or react to an influence.

Other social science research supports Checkland’s work on Soft Systems
Thinking. Zexian and Xuhui (2010) suggest that organizations are self-organizing
and adaptive to internal and external forces when responding to influences like
emergencies. Skarzauskiene (2010) suggests that leaders need to understand
reasons for change and the needs of others in their system when new influences
are thrust on them. Similarly, Cundill, Cumming, Biggs and Fabricius (2011)
and Mella (2008) suggest that change is contextually driven and creates new
needs and variables that have to be addressed by an organization. Research
suggests that systems theory can explain the causal relationships within and
between organizations so that needs can be addressed from a holistic perspective
rather than as independent actions.

Systems thinking is the study of the causal relationships on a system (Senge,
1994; Senge, Smith, Kruschwitz, Laur and Schley, 2010). It is one way for
emergency and business continuity managers to begin to understand how actions
taken by one organization can impact stakeholders when responding to
emergencies. Senge (1994) and Senge et al. (2010) argue that by studying the
system of an organization (both its internal procedures and operations in the
external environment), one can understand the interrelatedness of decisions.
Senge (1994) and Senge et al. (2010) suggest that understanding the

interrelatedness of actions within an organization and among stakeholders can
help managers provide services that are coordinated, intended and sustainable.

Using the principles of systems thinking, Mitchell (2006) discusses how two
other concepts can help clarify coordination efforts, quicken response times and
promote returning to normalcy. Referring to the disciplines of the Learning
Organization (Senge 1994), Mitchell (2006) suggests that understanding an
organization’s Mental Models and Shared Vision can help one understand
decisions made by organizations in times of crisis. Mental Models are the
defensive mechanisms of individuals that prevent seeing the need for change
(Senge 1994). These models can prevent an organization from addressing the
need for change during emergencies. As a result, an organization tries to respond
to an emergency with its routine business operations, which may no longer be
appropriate. Shared Vision is the common vision or guiding purpose of the
organization that is shared among its members (Senge 1994). Similar to Mental
Models, an organization’s Shared Vision can prevent it from seeing the need to
change given new circumstances during emergencies. Understanding how Mental
Models and Shared Vision can impact decision-making in an emergency can help
overcome resistance to necessary change.

Similarly, Flood (1999) suggests that understanding systems from four
perspectives can further define interrelationships within and among
organizations. Flood (1999) uses systems thinking as a way of solving
organizational issues or dilemmas (p. 6). According to Flood (1999), systems
thinking develops a deeper understanding of the interrelatedness of
organizational actions.

Flood’s proposed systems thinking model consists of the following four perspectives:
(1) Systems Process – the efficiency and reliability of the system
(2) Systems Structure – the effectiveness of the system
(3) Systems of Meaning – does the system do what we want it to do?
(4) Systems of Knowledge-Power – how is knowledge transmitted within the system?
In addressing the interrelatedness of emergency or business continuity actions, one
can suggest that Flood’s Systems of Meaning could help in understanding how
organizations act in times of disasters.

Understanding what drives an organization’s decision-making process can help
develop responses that best support system-wide efforts during emergencies.
Even more importantly, one can also identify potentially unintended
consequences of decisions that, left unaddressed, could cloud transparency in
decision-making and delay an organization’s return to normalcy.

Systems Thinking/Complexity/Structuration Decision-Making Model: By
combining the concepts of Systems Thinking, Complexity Theory, and
Structuration Theory, one can envision a model for decision-making in complex

environments like emergencies and disasters. As a result, decisions can be made
that reduce unintended consequences, provide better coordinated responses and
improve decision-making transparency.

As indicated in Figure 1, in the Systems
Thinking/Complexity/Structuration Decision-Making Model, organizations:
(1) identify decision-making inputs such as organizational values, vision,
mental models, underlying assumptions and culture;
(2) identify business processes and procedures;
(3) identify potential system variables that can be impacted by a change
event;
(4) identify decision-making inputs that can impact other stakeholders
through intended and unintended consequences;
(5) collaborate with system stakeholders to develop processes that best
support each other; and
(6) evaluate responses and incorporate them into new business processes
where appropriate.
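As a toy illustration (my own sketch, not from the paper), the model's early steps can be represented as data so that step 4 becomes a concrete check: any emergency-introduced variable that touches more than one stakeholder is flagged as likely to carry cross-stakeholder consequences, and becomes the agenda for the collaboration of step 5. All stakeholder names and variables here are hypothetical.

```python
# Toy sketch of steps 1-5 of the decision-making model; names and data
# are hypothetical, invented for illustration only.
from dataclasses import dataclass, field


@dataclass
class Stakeholder:
    name: str
    values: set[str]                     # step 1: values, vision, culture
    processes: set[str]                  # step 2: business processes
    emergency_variables: set[str] = field(default_factory=set)  # step 3


def flag_shared_variables(stakeholders):
    """Step 4: flag emergency variables that touch more than one
    stakeholder, i.e., decisions likely to carry consequences beyond
    the deciding organization."""
    seen: dict = {}
    for s in stakeholders:
        for v in sorted(s.emergency_variables):
            seen.setdefault(v, []).append(s.name)
    return {v: names for v, names in seen.items() if len(names) > 1}


county = Stakeholder("County OES", {"support the community"},
                     {"coordinate first responders"},
                     {"surge medical capacity"})
hospital = Stakeholder("Hospital", {"patient safety"},
                       {"inpatient care"},
                       {"surge medical capacity", "protect existing patients"})

# Shared variables become the agenda for step 5 (collaborative responses).
print(flag_shared_variables([county, hospital]))
# → {'surge medical capacity': ['County OES', 'Hospital']}
```

In this sketch, a variable unique to one stakeholder (the hospital's duty to its existing patients) is not flagged, while the shared "surge medical capacity" variable is, mirroring the paper's triage-tent example below.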
By recognizing a system’s underlying assumptions, such as
organizational values, vision and culture; its business processes and procedures; and
the potential variables that can be impacted by a change event, collaborative actions can then
be developed to best support the needs of the system and its members. The result
will be improved responses and transparency of decision-making that take into
account the underlying assumptions of organizations.


Figure 1
Systems Thinking/Complexity/Structuration Decision-Making Model

Goldberg, 2013

Model Application: One can see this process taking place during table-top
exercises of an emergency. In a recent exercise, a county office of emergency
services representative was discussing the role that local hospitals would need to
play in providing medical support to the community and first responders. The
emergency service representative suggested that the hospitals would provide the
necessary beds and space to care for the emergent needs of the community. The
hospital representative said that during normal operations, they would be able to
do this. However, they might not be able to provide the support immediately after
an emergency. The hospital’s initial response had to be ensuring the care of
existing patients. In this case, the emergency services representative made a
decision based on routine operations and did not take into account the adaptive
changes required of the system given the new variables of the emergency. These
new variables included the hospital’s requirement to ensure the care and safety
of its existing patients before accepting potentially large numbers of new
patients from the emergency. After a period of collaboration, it was determined
that the emergency services’ need to support the community (established by
their culture, values and vision) and the hospital’s new requirement to first
support its existing patients (a new variable introduced by the emergency) could
both be met. One possible solution was to set up a temporary triage tent that
addressed the emergent medical needs arising from the

emergency while the hospital ensured the safety of its existing patients. The
information about the temporary triage tent was then fed back into the
decision-making process and incorporated into emergency plans. As a result, the
intended consequence of providing the necessary emergency medical support
was achieved with transparency in the decision-making process and with no
expected unintended consequences.

CONCLUSION

During emergencies and disasters, organizations strive to recover and return to
normalcy within complex systems environments. To assist in the decision-making
process, an understanding of the system in which an organization operates, and
of the values, assumptions and cultures of its stakeholders, can help maximize
intended consequences, reduce unintended consequences and improve
transparency. By examining systems processes together with the values,
assumptions and cultures of stakeholders, organizations can more effectively and
expeditiously return to normalcy after an emergency or disaster.

REFERENCES

Checkland, P.B., and Poulter, J. (2006). Learning for action, a short definitive
account of soft systems methodology and its use for practitioners,
teachers and students. Chichester: Wiley.

Cundill, G., Cumming, G. S., Biggs, D., and Fabricius, C. (2011). Soft Systems
Thinking and Social Learning for Adaptive Management. Conservation
Biology, 26, 13-20.

Flood, R. (1999). Rethinking the fifth discipline, learning within the unknown.
New York: Routledge.

Groves, P. S., Meisenbach, R. J., and Scott-Cawiezell, J. (2011). Keeping
Patients Safe in Healthcare Organizations: A Structuration Theory of
Safety Culture. Journal of Advanced Nursing, 67, 1846-1855.

Kettl, D. (2006). Is the Worst Yet to Come? The ANNALS of the American
Academy of Political and Social Science, 604, 273-287.

Mella, P. (2008). Systems Thinking: The Art of Understanding the Dynamics of
Systems. The International Journal of Learning, 15, 79-88.

Morrison, K. (2005). Structuration Theory, Habitus and Complexity Theory:
Elective Affinities or Old Wine in New Bottles? British Journal of
Sociology of Education, 26, 311-326.


Ng, P. (2009). Examining the Use of New Science Metaphors in Learning
Organization. The Learning Organization, 16, 168-180.

Palmberg, K. (2009). Complex Adaptive Systems as Metaphors for
Organizational Management. The Learning Organization, 16, 483-498.

Skarzauskiene, A. (2010). Managing Complexity: Systems Thinking as a
Catalyst of the Organization Performance. Measuring Business
Excellence, 14, 49-64.

Senge, P. (1994). The fifth discipline: the art and practice of the learning
organization. New York: Currency Doubleday.

Senge, P., Smith, B., Kruschwitz, N., Laur, J., and Schley, J. (2010). The
necessary revolution: working together to create a sustainable world.
New York: Broadway Books.

Stacey, R., Griffin, D., and Shaw, P. (2000). Complexity and Management: Fad
or Radical Challenge to Systems Thinking. New York: Routledge.

Stones, R. (2005). Structuration Theory. New York: Palgrave Macmillan.

Wheatley, M. (1999). Leadership and the new science. San Francisco: Berrett-
Koehler.

Zexian, Y., and Xuhui, Y. (2010). A revolution in the field of systems thinking: a
review of Checkland’s system thinking. Systems Research and
Behavioral Science, 27, 140-155.
Public Administration Review • May | June 2016

Bridging the Divide between Evidence and Policy in
Public Sector Decision Making:
A Practitioner ’ s Perspective
Max K. Arinder recently retired after
34 years of service to the Mississippi Joint
Legislative Committee on Performance
Evaluation and Expenditure Review, having
served 15 years as chief analyst for planning
and support and 19 years as executive
director. He holds a PhD in experimental
psychology from the University of Southern
Mississippi and has served as staff chair
of the National Conference of State
Legislatures and the National Legislative
Program Evaluation Society.
E-mail: max.arinder@att.net
Abstract: While policy advocates can help bridge the divide between evidence and policy in decision making by
focusing on ambiguity and uncertainty, policy makers must also play a role by promoting and preserving deliberative
processes that value evidence as a core element in leveling raw constituent opinion, ultimately resulting in a
better-informed electorate. Building on existing research and analytic capability, state legislatures can increase the demand
for and delivery of relevant information, giving the institution the capacity to keep abreast of research in critical public
policy areas. By implementing data- and time-conscious evaluative frameworks that emphasize evidence-based decision
making and longitudinal cost–benefit analytics at critical policy-making junctures, the institutional culture can
become less unpredictable and the “rules of the game” can be more transparent. In 2015, Mississippi’s legislative leaders
created a system to review requests for new programs and funding using such an evidence screen.
Evidence in Public Administration
Kimberley R. Isett, Brian W. Head, and Gary VanLandingham, Editors
As a longtime nonpartisan legislative staffer, I agree with Paul Cairney, Kathryn Oliver, and Adam Wellstead’s assertion that there are significant
differences in academic and political cultures and in how academics
and policy makers view and use “evidence” in decision making.
Their article “To Bridge the Divide between Evidence and Policy:
Reduce Ambiguity as Much as Uncertainty” provides a thought-provoking
perspective on how the scientific community could
use elements of public policy theory to better communicate in
a political culture, thus giving more weight to their empirical
observations in a deliberative process.
While I agree that there needs to be a culture shift in the way
scientists advocate for policy, experience tells me that there also
needs to be a culture shift in the policy-making institutions
themselves. Yes, policy makers often balance evidence, emotions,
and beliefs in making a decision. The question is whether we
can create an environment in which an optimum balance can be
achieved among these elements for a given policy question. The
answer can be found in an exploration of the “culture” of a given
policy-making body.
The conclusions drawn by Cairney, Oliver, and Wellstead would
not be foreign in many of the legislative program evaluation shops
around the country. One of the ongoing challenges they face as
nonpartisan legislative staff is to get their work recognized and
used in policy debate. Most provide regular in-service training
on ways to achieve that goal through improved written and oral
communication. I see this as a parallel goal to those espoused for
scientists by Cairney, Oliver, and Wellstead.
A State Legislature as a Policy Culture
Professionals in the legislative arena see daily the swirl of facts,
opinions, feelings, and arguments surrounding policy makers. Amid
this swirl, some legislators routinely rely on empirical evidence in
key policy areas, while others use emotions and beliefs based largely
on pressing constituent concerns. “My people tell me” is often heard
around the capital, reflecting the importance of constituent opinion
in any decision-making process. This may well be the natural state
of any politically based public environment. Constituent opinion
is a strong voice and will certainly continue to be a part of any
legislative arena. But how well informed is that voice? True, we are
a society awash in information. But how good is that information?
A significant element of a sound policy culture should be the
promotion and preservation of a deliberative process that values
evidence as a core element in leveling raw constituent opinion.
Failure may well negate one of the strengths of representative
democracy itself: controlling tyranny by the majority.
Understanding the deliberative core of a particular political
institution is a critical first step in bridging any divide between
evidence and policy in decision making. The opportunities for
change can be found within the bounds of the long-standing
rules, values, and practices of legislative bodies. Generally, these
institutions will have both expectations for empirical rigor and
latitude for constituent opinion, whether that opinion be rigorously
developed or not. While legislatures can be, as Cairney, Oliver, and
Wellstead describe, “unpredictable policy-making environment[s]
in which attention can lurch from issue to issue,” they do possess
a basic culture that allows them to function fairly efficiently to
keep government service structures operational and to address
perceived needs. If we can understand this natural balance in a
given legislature, we can better see its strengths and understand
its weaknesses regarding the nature and utility of different types
of evidence in the decision-making process and perhaps find
opportunities for constructive change.
Changing the Policy Culture of a State Legislature
While the scientific community can provide an important impetus
to change through actions referenced in the work of Cairney,
Oliver, and Wellstead, a core change in a legislative culture must
ultimately be accomplished by the membership of the institution
itself, optimally with assistance from a sound technical staff
and supported by an informed constituency. Just as constituent
opinion is important in policy making, it is also an important
element in determining the policy culture of a legislature.
Paradoxically, achieving the needed constituent support for change
may depend on the will of policy makers to take the first steps
in developing a transparent, evidence-informed policy process
that increases public knowledge in critical ways. To do so, elected
officials must acknowledge that, while they are elected to represent
their constituents’ opinions on important policy issues, they are
also chosen to help determine whether those opinions, regardless
of how formed, can withstand empirical debate—certainly a
political risk, but one worth taking in the interest of both the
common good and the health of the representative democratic
process.
The wider processes of debate, coalition formation, and persuasion
to reduce ambiguity could be at home in such an environment
because the culture itself prescribes the method of framing policy
questions, demanding scientific evidence as a basic element when
needed. In such an environment, policy advocates would be
expected to meet established standards for rigor and completeness
if they expect their proposal or program to be seriously considered.
Such an environment would help with the demand for and delivery
of relevant information as well as give the institution the capacity to
keep abreast of research in critical public policy areas.
Can Such an Ambitious Culture Shift Be Achieved?
Since the 1970s, legislatures around the country have recognized
their lack of the research evaluation skills needed to identify
and appropriately use robust evidence. In response to that
need, legislatures have established internal audit and evaluation
capacities that enable them to carry out their role as a coequal
branch of government through legitimate research and analytic
capability. Over the years, this capacity has reached different levels
of development in different legislatures, but all are marked by
more scientifically trained teams that can, as recommended by
Cairney, Oliver, and Wellstead, help build research capacity in
government, reduce the loss of institutional memory, and generate
a clearer research question when policy makers commission
evidence.
In addition, the bipartisan National Conference of State Legislatures
continually highlights the challenges legislatures face in developing
sound responses to pressing public needs. This work is underpinned
by an informal multistate agreement on the core values that should
1—Program Purpose
1.a. What public problem is this program seeking to address?
1.b. Briefly stated, how will this program address the public problem identified in Question 1.a?
1.c. Does this proposed program effort link to a statewide goal or benchmark identified in Building a Better Mississippi: The
Statewide Strategic Plan for Performance and Budgetary Success? (yes or no)
1.d. If the answer to Question 1.c was “yes,” specify the statewide goal or benchmark to which the proposed program links.
1.e. Explain where this program fits into your agency’s strategic plan; i.e., specify the agency goal, objective, and strategy
that the proposed program seeks to address.
2—Needs Assessment
What is the statewide extent of the problem identified in Question 1.a, stated in numerical and geographic terms?
3—Program Description
3.a. What specific service efforts/activities will you be carrying out to achieve the outcomes identified in Question 5.a?
3.b. Describe all start-up activities needed to implement the program.
3.c. Provide a time line showing when each start-up activity will take place and the date when you expect the program to be fully operational.
3.d. Over the time period for which you are requesting funding:
i. How many of each of the service efforts and activities identified in Question 3.a do you intend to provide and in which
geographic locations?
ii. How many individuals do you intend to serve?
4—Return on Investment
4.a. What are the estimated start-up costs for this program, by each start-up activity described in Question 3.b?
4.b. Once the program is fully operational, what is the estimated cost per unit of service?
4.c. List each expected benefit of this program per unit of service provided. If known, include each benefit’s monetized value.
4.d. What is the expected benefit to cost ratio for this program, i.e., total monetized benefits divided by total costs?
5—Measurement and Evaluation
5.a. What specific outcomes do you expect to achieve with this program? Each outcome must be stated in measurable terms
that include each of the five elements specified in the following example:

Required Elements of a Measurable Outcome | Example of a Measurable Outcome
1 Targeted Outcome | Infant mortality rate (number of deaths of children less than one year of age per 1,000 live births)
2 How the Outcome Is Calculated | Number of deaths of children less than one year of age during a specified time period [generally one calendar year, unless otherwise noted] divided by the number of live births during the same period, multiplied by 1,000
3 Direction of Desired Change (increase, decrease or maintain) | Decrease
4 Targeted % Change | 18.5% decrease in the infant mortality rate per 1,000 live births
5 Date Targeted to Achieve Desired Change | One year from full implementation of program

5.b. In order to establish a performance baseline, for each outcome measure reported in answer to Question 5.a, report the
most recent data available at the time of your request and the reporting period for the data.
5.c. For each outcome measure reported in answer to Question 5.a, explain how you arrived at the expected rate of change
by the target date.
5.d. How often will you measure and evaluate this program?
5.e. What specific performance measures will you report to the Legislature for this program? At a minimum, you should
include measures of program outputs, outcomes, and efficiency.
6—Research and Evidence Filter
6.a. As defined in MISS. CODE ANN. Section 27-103-159 (1972), if there is an evidence base, research base, promising
practice or best practice on which your agency is basing its proposed new program, attach a copy of or online link to
the relevant research.
6.b. If there is no existing research supporting this program, describe in detail how you will evaluate your pilot program with
sufficient rigor to add to the research base as required by MISS. CODE ANN. Section 27-103-159 (1972).
7—Fidelity Plan
7.a. For programs with an existing research base, explain the specific steps that you will take to ensure that the program is
implemented with fidelity to the evidence/research/best practice on which it is based.
7.b. If there is no existing research base for this program, explain the key components critical to the success of your pilot
program and how you will ensure that these components are implemented in accordance with program design.

Figure 1 Seven Elements of Quality Program Design
mark any truly representative political body and, if followed, should
establish a solid foundation for implementing a more evidence-
informed policy culture.
Finally, with the impetus of multistate participation in the
Pew-MacArthur Results First Initiative, a number of states are
implementing data and time-conscious evaluative frameworks
that emphasize evidence-based decision making and longitudinal
cost–benefit analytics as an important element in the policy-making
process. With Pew-MacArthur's technical assistance, participating
states are able to better use their own technical staffs in employing
methods that will help meet an evolving demand for clearer evidence
at critical points in the policy-making process, thus providing policy
makers with the information needed to justify decisions based on
evidence that can be balanced with constituent opinion.
Developing an Evidence-Sensitive Policy Culture
Through the years, there have been many efforts at introducing
“scientific” management principles into governmental
administrative and budgetary practices. Sound in principle, these
efforts were embraced with great expectations but often fell short,
not least because they were tied more to political cycles and
personalities than to foundational changes in the way we think about
program and budgetary accountability.
This was certainly true in Mississippi 20-plus years ago when
the legislature passed the Performance Budgeting and Strategic
Planning Act of 1994. The act itself is sound, but a retrospective
look at implementation reveals critical failures in follow-through
that compromised its utility as a budgetary tool that could be used
to build a priority and data-driven budget. For example, elements
of the act that would have provided the resources to analyze
and perfect strategic planning and performance data were not
implemented. As a result, legislators were presented with raw data
that often was not helpful in legislative deliberation.
However, Mississippi's current legislative leadership has
acknowledged this long-standing flaw, has embraced the importance
of data analytics to sound policy processes, and has adopted
a strategic view of budgeting that can fundamentally alter the
budgetary culture of the state. Backed by a professional staff
dedicated to establishing and maintaining a framework of evidence,
performance, and cost-based data to support sound policy debate,
the budget and appropriations committees will now be able to
develop clearer budgetary recommendations to fund those programs
and services that help the state reach its overall policy objectives and
eliminate those that do not.
The key elements of this system can be summarized as follows: a
legislatively developed and maintained statewide strategic planning
effort; a comprehensive statewide program inventory; and mastery
of the Pew-MacArthur Results First Initiative as a tool for bringing
data-driven decision making and cost–benefit analytics to bear on
the state's budgetary process.
These three deceptively simple key elements contain other supporting
elements that also need to be developed or implemented. Examples
include a system for keeping the state-level strategic planning process
relevant across election cycles and responsive to executive branch
initiatives without losing its strategic value; a transparent, longitudinal
tracking system to monitor progress in achieving state-level outcomes;
increased capacity to identify program-level costs and monetize
relevant benefits; a needs assessment process that allows cost–benefit
analytics to be better utilized in selecting programs; a strategy for
assessing the efficiency and effectiveness of administrative support and
other nonintervention programs; an evidence-based research focus for
newly proposed intervention programs; routine use of cost–benefit
analytics and performance-based outcomes to identify programs for
possible elimination and resource redirection; and expanded capacity
for fidelity studies and performance evaluations, to name a few.
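One of the supporting elements listed above, the routine use of cost–benefit analytics to identify programs for possible elimination and resource redirection, reduces at its simplest to comparing monetized benefits against program-level costs. The following is an illustrative sketch only; the program names and dollar figures are invented, and the article does not prescribe any particular implementation:

```python
# Hypothetical program inventory: (name, annual cost, monetized annual benefit)
programs = [
    ("Program A", 2_000_000, 5_400_000),
    ("Program B", 1_500_000, 1_200_000),
    ("Program C",   800_000, 2_000_000),
]

# Benefit-cost ratio (BCR); a ratio below 1.0 flags a candidate for review
ranked = sorted(
    ((name, benefit / cost) for name, cost, benefit in programs),
    key=lambda item: item[1],
    reverse=True,
)
for name, bcr in ranked:
    flag = "" if bcr >= 1.0 else "  <- review for elimination/redirection"
    print(f"{name}: BCR = {bcr:.2f}{flag}")
```

The point of the sketch is the transparency the author argues for: once costs and monetized benefits are on the table, the ranking and the flag are mechanical and visible to all parties, even though the final funding decision remains political.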
Creating such a framing process allows the institutional culture to
become less unpredictable and the “rules of the game” to be more
transparent.
While the thoughts by Cairney, Oliver, and Wellstead on the
multilevel, unpredictable nature of policy environments are certainly
noteworthy, critical junctures exist in the policy process where a
disciplined approach to screening policy proposals can become
pivotal in a budget culture, determining in large part the future of
the initiative or program being advocated. It is at these junctures,
most critically in the appropriations committees, that political
leaders have an opportunity to insist that the rules of the game
require vetting of every proposal (regardless of politics) against a
core of critical questions designed to assess the potential and cost of
the program against anticipated benefits relative to the need being
addressed.
For example, in 2015, Mississippi's legislative leaders created a
system to review requests for new programs and funding using an
evidence screen. This process, labeled the Seven Elements of Quality
Program Design (see figure 1), requires agencies to meet certain
criteria to qualify for funding. For instance, agencies must report
whether a requested program has “an evidence base, research base,
promising practice or best practice” model, describe the monitoring
system that will be used to ensure that evidence-based programs
are implemented with fidelity, and explain how they will measure
the results the program is achieving. While this does not guarantee
removal of politics from the process, it does create a point where the
quality of evidence and sound cost–benefit analytics can take center
stage for all to see.
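The evidence screen described above can be thought of as a gate: a funding request qualifies for consideration only when each required element is addressed. A toy sketch of that gating logic follows; the element names are paraphrased from Figure 1, and the pass/fail mechanics are an illustration, not the state's actual procedure:

```python
# Paraphrased criteria from the Seven Elements of Quality Program Design
REQUIRED_ELEMENTS = [
    "evidence_or_research_base_reported",  # 6.a/6.b: evidence base, or a rigorous pilot evaluation
    "fidelity_monitoring_described",       # 7.a/7.b: fidelity to the underlying research
    "outcome_measurement_explained",       # 5.a-5.e: baselines, targets, reporting to the Legislature
]

def passes_screen(request: dict) -> bool:
    """A request qualifies only if every required element is addressed."""
    return all(request.get(element, False) for element in REQUIRED_ELEMENTS)

request = {
    "evidence_or_research_base_reported": True,
    "fidelity_monitoring_described": True,
    "outcome_measurement_explained": False,  # one missing element blocks qualification
}
print(passes_screen(request))  # prints False
```

As the article notes, such a screen does not remove politics from the process; it only guarantees that the evidentiary questions get asked, and answered, in the open.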
This analytic tool reflects the spirit of the times in Mississippi and is
a clear example of how the evidence/policy gap can be reduced in a
real-world decision-making process. With tools like this—and this
is only one example—policy makers now have the option of seeking
advice from an independent broker that they can trust, using the
information called for by the Seven Elements. Such an approach
allows legislative policy makers to ask key questions that are directly
in line with their political and public policy interests.
Conclusion
While agreeing that empirical facts cannot be completely separated
from human values, ongoing changes in the policy culture of state
legislatures over the last 40-plus years lead the author to believe
that we can make research-sensitive strategies a more important
part of our core value system in the selection and funding of policy
initiatives and programs. To do so, we—scientists, independent
reviewers, and policy makers alike—must all work
to support any efforts made in policy-making
institutions to shape the policy environment in
such a way that the demand for empirical evidence
becomes a required, appropriately targeted part of
the policy-making culture. While decisions may
still be made in light of other considerations, the
actual supporting data in such an environment will
be available and transparent for all to see, with one
desired outcome being a better informed electorate.
Public Administration Review,
Vol. 76, Iss. 3, pp. 394–398. © 2016 by
The American Society for Public Administration.
DOI: 10.1111/puar.12572.
