
Sunday, 9 March 2014

Strategic Management In Heritage, Sport and Culture Sector Agencies


Part 2. Impact Assessment
October 2010
  
Greg Claridge
Bakker Maniparathy Claridge Limited

Strategic Management

The cultural well-being of New Zealanders is defined as the “vitality that communities and individuals enjoy through participation in recreation, creative and cultural activities”.[1] Popular support for state development of these three sectors remains strong: a 2008 survey confirmed a continuing high level of public interest in culture since 1997, with more than four in five people describing themselves as at least “quite interested”.[2]

Government plays a range of roles in fostering the development of the cultural, heritage and sporting sectors. It delivers services that span the main “interventions” of government: regulation, education, service delivery, funding and, in some areas, enforcement. The public see government as a key supporter of culture in New Zealand.[3]

Strategic management of government agencies in these roles across the three main sectors requires a focus on the strategic goals of the agency (or “outcomes”) whilst delivering sustainable, “value for money” services and activities. Balancing these two foci is rarely easy; more difficult still is gathering and interpreting the sector feedback that reveals the impact of the agency within its sector and the “value for money” of its services.

During 2010, the Ministry embarked upon a programme of capability assistance with the culture, heritage and sport sector agencies. The first phase of this programme sought to further develop agency “intervention logic” linking outputs to outcomes, including the specification of indicators that provide management with useful information about the effectiveness of each agency’s activities and services.

The second phase is the production of this paper on impact assessment. The objective of this phase is to provide the management of sector agencies with a simple tool for good impact assessment, and guidance on how to use it in management decision-making. This builds upon the model presented to sector agencies during the first half of 2010 in a series of sector workshops and individual meetings.

The Basic Model


Good management in government agencies is based upon an outcome-focused approach to planning, management, and reporting. The model that describes the concepts of agency performance is shown in Figure 1 below. “Impact” analysis specifically addresses the links between outputs and outcomes. Understanding and using impact data lies at the heart of the strategic management of State sector agencies, which exist to make a difference in the lives of New Zealanders.
Impact analysis that reveals “what works and what does not” is central to good management. Without this analysis and information, management cannot make funding or operational decisions based upon knowledge of the relative effectiveness of funding or service choices.



Figure 1. Model of Good Public Management. © Bakker Maniparathy Claridge

Step 1. Defining Outcomes

The biggest hurdle to developing a stable intervention logic and performance measures is clarity of the outcome definitions. Until outcomes are described properly, i.e. as a measurable state of a target group, the risk remains that outcome definitions change annually and are never embedded. Without stable outcome definitions, impact data cannot be defined, collected, and analysed.

Outcomes must be defined in a manner which enables clear, unambiguous measurement of the impact or consequences of government services upon society. Agencies should limit the number of outcomes to the “vital few” outcomes that are:
·         well aligned with the agency’s mission or purpose;
·         linked to services, outputs and inputs (the things agencies manage);
·         supported by knowledge of the influences driving outcomes;
·         collectively representative of the major outcomes from or across all dominant output classes;
·         measures of the benefits experienced by target groups;
·         timely; and
·         supportive of critical business decisions, including resource decisions.
Impact assessment focuses on the links between the outputs of an agency’s services or its funding decisions and the outcomes it is trying to achieve. Impact analysis requires a succinct description of the linkage (“Intervention Logic”) between an agency’s outputs and outcomes, grounded in knowledge of the influences driving the outcomes and specifying the desired action or behaviour of each component of the “system”.

Step 2. Building an Outcome Framework (Intervention Logic)

Once an agency has defined its outcomes correctly, developing the intervention logic should not be difficult.

A generic process for building an intervention logic is:

1.    Confirm Outcomes. Simple definitions only.
2.    Specify the “System” or “Process”.
3.    Specify target group(s).
4.    Identify any intermediate steps. Keep these to a MINIMUM.
5.    Only use steps that measure a “state” of the target group.
6.    Link outcomes to specific activities (outputs).
7.    Test for logical causality (IF/THEN).
8.    Specify how outcome data will be used in decision-making.
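
As an illustration only, the process above can be captured in a simple data structure that records each step of the logic and its IF/THEN links. This is a minimal sketch in Python; all names (the heritage outcome, the measures, the target groups) are hypothetical examples, not any agency’s actual framework.

# Minimal sketch of an intervention logic. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Node:
    name: str          # an output, intermediate result, or strategic outcome
    measure: str       # how the "state" of the target group is measured
    target_group: str  # whose state this node describes

@dataclass
class Link:
    cause: Node   # IF this improves...
    effect: Node  # ...THEN this should improve

# 1. Confirm the outcome (simple definition only).
outcome = Node("Heritage structures preserved",
               "% of identified structures protected to standard",
               "identified heritage structures")

# 4-5. One intermediate step, expressed as a state of the target group.
intermediate = Node("Structures legally protected",
                    "count of structures with protection in place",
                    "identified heritage structures")

# 6. Link the outcome back to a specific activity (output).
output = Node("Advocacy in resource consent processes",
              "number of consent processes with agency advocacy",
              "district councils")

logic = [Link(output, intermediate), Link(intermediate, outcome)]

# 7. The IF/THEN causality test can be read straight off each link.
for link in logic:
    print(f"IF '{link.cause.name}' THEN '{link.effect.name}'")
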
The final product should be a simple framework that:

1.    Links the agency’s services to a set of intermediate results and then to the long-term, strategic outcomes. An example of a useful outcome framework is shown in Figure 2 below.
2.    Demonstrates the alignment between the agency’s strategic outcomes and its mission. The simple test for a useful set of strategic outcomes is to ask the question: “if these outcomes improve over time, will the agency be achieving its mission?”
3.    Groups related agency activities under common intermediate results. In this example, protecting archaeological sites and protecting heritage structures are two separate processes that together form the protection services for the agency.
4.    Identifies useful impact measures. Any change in the mix or quantity of heritage protection services and activities should result in a measurable impact on the intermediate results, and a long-term impact on the strategic outcomes.

[Figure 2. Example outcome framework: heritage protection services]

Step 3. Measuring and Using Impact Data

The purpose of building an outcome framework is to demonstrate a reasoned analysis of the agency’s services and activities: one that specifies the long-term strategic outcomes, and the useful intermediate results that provide agency management and government with data on the relative effectiveness of policy and management options in the mix of services funded by Government.

Measuring impact specifically refers to measuring changes to intermediate results or strategic outcomes. Intermediate results provide agency management with useful information on the effectiveness of one or more related services or activities.

Changes to the long-term strategic outcomes reveal the overall effectiveness of the system as a whole. These are sector-level indicators that normally reflect the combined impact of several different agencies, as well as other external and uncontrolled influences (drivers). Strategic outcome indicators are useful for reporting overall system performance, and may be appropriate for public reporting purposes, whereas intermediate results provide agency management with specific performance feedback from decisions on services and activities.

The intermediate results should all be measurable. Continuing with the example (Figure 2), the agency’s activities for protecting heritage structures could be measured as:
·         a count of total heritage structures protected, or
·         the % of identified structures that are protected to a specified standard.
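
A minimal sketch of both indicators, assuming a register of identified structures with two hypothetical fields (is_protected and meets_standard); the figures are invented for illustration.

# Hypothetical register of identified heritage structures.
structures = [
    {"id": "HS-001", "is_protected": True,  "meets_standard": True},
    {"id": "HS-002", "is_protected": True,  "meets_standard": False},
    {"id": "HS-003", "is_protected": False, "meets_standard": False},
]

# Indicator 1: a count of total heritage structures protected.
protected_count = sum(s["is_protected"] for s in structures)

# Indicator 2: the % of identified structures protected to the specified standard.
protected_pct = 100 * sum(s["is_protected"] and s["meets_standard"]
                          for s in structures) / len(structures)

print(f"Protected: {protected_count} of {len(structures)}")
print(f"Protected to standard: {protected_pct:.1f}%")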

Using either or both indicators as a common measure, agency management could then change their output mix of influencing District Plans, informing owners of heritage structures, or advocating in the resource consent process to maximise the impact on protecting heritage structures. If cost information is also included, then management should be able to demonstrate the relative value for money of investing in different activities.

Example of Using Impact Indicators

If the agency currently focusses on advocating in the resource consent process, it could trial influencing several Council District Plans and then observe the impact of this change on the heritage protection indicators in those districts over several years.

Management could then compare the protection rates of districts where only resource consent advocacy was used against districts where the agency influenced District Plans. If, for example, the results in Figure 3 were obtained and the costs were proportionally equal, then management could consider changing the service mix in other districts.
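
A minimal sketch of that comparison, assuming several years of protection-rate data per district tagged by the intervention used there; every figure below is invented purely for illustration.

# Hypothetical protection rates (%) per district over three years,
# grouped by the intervention used in that district.
rates = {
    "resource consent advocacy": {"District A": [62, 63, 64],
                                  "District B": [58, 59, 61]},
    "District Plan influence":   {"District C": [60, 66, 73],
                                  "District D": [57, 64, 70]},
}

for intervention, districts in rates.items():
    # Average annual change in the protection rate, a crude impact estimate.
    changes = [(series[-1] - series[0]) / (len(series) - 1)
               for series in districts.values()]
    avg_change = sum(changes) / len(changes)
    print(f"{intervention}: {avg_change:+.1f} percentage points per year")

If the costs were proportionally equal, a consistently larger improvement under one intervention would support shifting the service mix in other districts, which is exactly the decision described above.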

[Figure 3. Example comparison of district protection rates under the two interventions]

Bringing It All Together

Measuring the impact of an agency’s services or activities is best achieved by measuring the change in social benefit. A successful strategy has a measurably positive impact on the social benefit of the service.

The ideal state is for management, the Board and the Minister of an agency to be making decisions after considering impact data – knowledge about what works, what does not, and how services could be changed to improve outcomes.

Getting to this ideal state requires:

1.    Measurable definitions and data for outcomes/intermediate results,
2.    Alignment of activities and services to specific outcomes, and
3.    Reliable output and cost data.

Using the HPT heritage protection outcome example, the ideal state would require:

1.    Data on the preservation status of protected heritage structures
2.    Data on:
a.    Heritage provisions of District Plans for all Councils
b.    Proposed building developments by owners of protected heritage structures
c.    Council resource consent decisions
3.    Cost data for HPT activities to support this outcome
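
For illustration only, these datasets could be held as a handful of linked record types; every field name below is a hypothetical sketch, not HPT’s actual data model.

from dataclasses import dataclass
from datetime import date

@dataclass
class HeritageStructure:
    structure_id: str
    district: str
    preservation_status: str   # 1. e.g. "intact", "at risk", "lost"

@dataclass
class DistrictPlanProvision:
    district: str
    heritage_provision: str    # 2a. the Council plan's heritage rules

@dataclass
class ConsentDecision:
    structure_id: str
    proposed_development: str  # 2b. owner's proposed building development
    decision: str              # 2c. Council resource consent outcome
    decided: date

@dataclass
class ActivityCost:
    activity: str              # 3. e.g. "District Plan influence"
    year: int
    cost_nzd: float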

The process of defining the measurable outcome, the system data, and cost data normally reveals immediate data issues for most agencies. In some cases, required data is held by other agencies and is not easily accessed or provided. If this data is crucial to analysis of impacts then these data gaps should be part of the change agenda for management.

Finally, once the data required for impact analysis has been properly specified and collated, then many small agencies will have to access the skills necessary to complete the impact analysis and provide management with performance information that makes a material difference to the delivery of services to New Zealanders. 

Using Performance Data

Once an agency has collected expenditure, input, output, and outcome data it is then in a position to prepare reports based on the evaluative measures described in Figure 1. Trend data for economy, efficiency, and effectiveness measures reveals useful information about the performance of an organisation.

This performance information is useful for many purposes:

·         Management decision-making: This data enables management to test decisions that affect service delivery and the impact of these changes on society (outcomes).
·         Board reporting: The same data sets that management use to make decisions about service delivery are also useful to Boards in monitoring performance and discussions on the strategic direction of the agency.
·         Departmental Overview: The expenditure, input, output, and outcome data, and the evaluative measures in particular, also meet the reporting requirements of Departments and Ministries responsible for monitoring Crown agencies.

The data generated by the model of good public management (Figure 1) normally meets the information and reporting requirements of key stakeholders. This should materially reduce the costs of accountability reporting for most agencies.

Economy data enables management to monitor the costs of inputs. Decisions arising from economy data are normally effective over a one or two year time frame, as employees are normally a fixed cost rather than a variable cost. Decisions on the number or unit cost of employees are infrequent for most agencies.

Efficiency data has a more immediate use for management. Sudden or unexpected changes in the outputs/inputs ratio normally draw immediate management attention. A material decrease in efficiency should always be investigated immediately. A steady increase in efficiency should, ideally, be the consequence of previous strategic decisions; such an increase normally confirms good implementation of an agency’s strategic plans, whereas an unexpected increase in efficiency is always worthy of investigation.

Effectiveness data reveals the overall performance of the organisation in achieving its desired outcomes (goals or objectives). Trends of improving or declining effectiveness are normally revealed over years. Most agencies are focussed on changing the behaviour of some (or even all) New Zealanders. This change is rarely visible in the one or two years of data that are normally reported; ten or more years of effectiveness data are normally required to reveal long-term systemic behavioural or performance change.
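
As a minimal sketch of how the three evaluative measures from Figure 1 can be computed from collected data, with all figures invented for illustration: economy tracks input costs, efficiency is outputs per unit of input, and effectiveness is the change in an outcome indicator over a long series.

# Hypothetical ten-year series for one service.
input_cost  = [4.0, 4.1, 4.1, 4.2, 4.3, 4.3, 4.4, 4.5, 4.5, 4.6]  # inputs, $m
outputs     = [200, 210, 215, 230, 240, 250, 255, 265, 270, 280]  # units delivered
outcome_pct = [55, 56, 56, 58, 59, 61, 62, 64, 65, 67]            # outcome indicator, %

# Economy: the cost of inputs over time (slow-moving, as staff are a fixed cost).
# Efficiency: outputs per unit of input; sudden changes warrant investigation.
efficiency = [o / c for o, c in zip(outputs, input_cost)]

# Effectiveness: change in the outcome indicator, visible only over a long series.
effectiveness_change = outcome_pct[-1] - outcome_pct[0]

print(f"Efficiency: {efficiency[0]:.0f} -> {efficiency[-1]:.0f} units per $m of input")
print(f"Effectiveness over ten years: {effectiveness_change:+d} percentage points")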



[1] “What is Cultural Well-being?”, http://www.culturalwellbeing.govt.nz/node/1
[2] “How Important Is Culture? New Zealanders’ Views In 2008 – An Overview”, p. 17, http://www.mch.govt.nz/publications/how-important-is-culture/HowImportantIsCulture.pdf
[3] “How Important Is Culture? New Zealanders’ Views In 2008 – An Overview”, p. 6



