Value & Impact toolkit first steps: impact

As an introduction to evaluation, this section provides some essential first steps for evaluation planning. It has been adapted from Schuh and Upcraft’s ‘framework for evaluation’ (2001) and further developed from the experiences of the universities involved in the pilot phase of the value and impact project. It uses questions to probe the purpose, type, scope, setting and methods of an evaluation project, and defines the steps to follow in planning and implementing evaluation.

Background

The literature review noted that the majority of the literature found on impact originates in the United States, and much of it focuses on student learning outcomes-based evaluation. This section focuses on outcomes-based evaluation, drawing on and adapting that US literature, together with literature from elsewhere (notably SCONUL’s work on the impact evaluation of university library services) and the experiences of the universities involved in the pilot phase of the value and impact project.

Outcomes-based impact evaluation may produce ‘unexpected’ or ‘unintended’ outcomes. Whilst these may come as a surprise, they are nevertheless equally valuable and important, and will contribute to any improvements that may be required to a student service, an intervention or activity.

Pilot comments

"It helped to discuss amongst ourselves what was satisfaction and what was impact and using everyday analogies such as going shopping to discuss this amongst ourselves. In terms of measuring impact it was useful to have in your mind that in some way you will need to be able to measure before and after the event of having received the service."

University of Brighton

Outcomes

Evaluating outcomes is a specific type of evaluation that gauges the impact of student services, programmes and facilities on learning, development, academic achievement and other intended outcomes.

In evaluating outcomes, the following types of questions will need to be asked (Bresciani, 2004):

  • What are we trying to do and why?
  • What is the programme trying to accomplish?
  • How do we do it?
  • How well are we doing it?
  • How do we know?
  • How do we use the information to improve / celebrate success?
  • Do the improvements we make work?

Student services practitioners will also need to ask themselves: ‘Can institutional interventions (for example, programme, services and policies) be isolated from other variables that may influence outcomes, such as background and entering characteristics, and other collegiate and non-collegiate experiences?’ (Upcraft and Schuh, 1996). The same authors (2001) also note that this is the most important and most difficult type of evaluation.

The development of outcomes must be driven by the specific context of the activity / service being evaluated (see generating outcomes). Outcomes must also be measurable and meaningful in order to demonstrate the impact of programmes or services. The following points need to be considered when drafting outcomes (Bresciani, 2004):

  • Is it measurable? Is it identifiable rather than countable? Can you identify or observe when the outcome has been met?
  • Is it meaningful? Does it mean something to students and providers?
  • Is it manageable? Outcomes have to be incorporated into the day-to-day life of a programme, but not all outcomes have to be evaluated / measured every year.
  • Who will I be gathering evidence from, and who would know, if my outcome has been met? Students might not be the only source of evidence; other parties, such as academics and administrators, might need to be involved in the process.
  • How will I know if it has been met? What does meeting the outcome look like? How do you know the intended outcome / development has occurred? Will it provide me with evidence that will lead me to make a decision for continuous improvement? Outcomes need to be articulated in such a way as to lead to improvement.

Generating outcomes

Before outcomes can be evaluated, they need to exist. This section looks at an approach to generating outcomes, which involves a number of steps concerning the identification of objectives, outputs and intended outcomes, and impact indicators. The example illustrated is of an orientation programme to support international students, and it has been adapted from work undertaken by SCONUL on the impact of university library services.

Objectives

Objectives drive the whole process of outcomes-based or impact evaluation. Objectives need to be specific (precise about what you intend to achieve) and time-limited (achievable within a specific period). An example of an objective for a particular service / activity is:

  • Service / activity: supporting international students who are new to the country and university through an orientation programme.
  • Objective: to provide international students with the information and socialisation opportunities to ensure a successful transition into university life.

Once objectives have been defined, the outputs can be identified. These are deliverables - what students receive from the service / activity.

Outputs

Identifying outputs:

  • Service / activity: supporting international students who are new to the country and university through an orientation programme.
  • Objective: to provide international students with the information and socialisation opportunities to ensure a successful transition into university life.
  • Output: international students attend a programme.

The next step is to determine the intended outcomes so that the impact indicators can be developed. Intended outcomes are the changes that are meant to happen, in line with the objectives, as a result of students’ participation in the service / activity. Intended outcomes can be identified by asking the following questions:

  • What sort of changes are we looking for in the students that we want to reach with this service objective?
  • How can we tell if the service is making a difference?

The answers to these questions become the intended outcomes, which then need to be turned into impact indicators. Impact is usually about effecting change in people. Changes in people may take the following forms:

  • Affective (attitudes, perceptions, levels of confidence, satisfaction with the service).
  • Behavioural – people do things differently (for example, doing something more or less often, asking different types of questions, being more critical or more independent).
  • Knowledge-based (for example, knowing where to go and who to approach for relevant information and support).
  • Competence-based – people do things more effectively (for example, able to find and access appropriate information and support).
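
As a rough illustration of how these categories can be used in practice, the sketch below tags draft impact indicators with a change type, which makes it easy to check that a set of indicators covers more than one form of change. This is a minimal Python sketch; the labels and example indicators are illustrative assumptions, not part of the toolkit.

```python
# Minimal sketch: tag each draft impact indicator with one of the four
# change types described above. Labels and indicators are illustrative.
CHANGE_TYPES = {"affective", "behavioural", "knowledge", "competence"}

indicators = [
    ("Students are confident about where to go to find information", "affective"),
    ("Students ask different types of questions at drop-in sessions", "behavioural"),
    ("Students know who to approach for information and support", "knowledge"),
    ("Students are able to find and access appropriate support", "competence"),
]

for text, change_type in indicators:
    assert change_type in CHANGE_TYPES, f"unknown change type: {change_type}"
    print(f"[{change_type}] {text}")
```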

Pilot comments

"The solution was to have in our minds that an output was something which is delivered and received (more tangible), an outcome could be what we would expect to happen as a result of receiving the service, and an impact indicator would be how an individual(s) might feel having received the service or what an individual might do or now know as a result of receiving the service."

University of Brighton

"Our first activity was mapping our objectives of the pilot study against the evaluation template within the toolkit. This allowed us to determine which areas we would prioritise our focus on and what research methodology we would adopt. We would recommend any institution undertaking this sort of exercise to sit down and fully comprehend these aspects from the outset."

University of East London

"Although we knew we wanted to concentrate on each of the areas we had selected, it was a bit disconcerting to think through how we would achieve it. What helped with this was meeting with the pilot researcher and discussing this, and then using the toolkit's impact study template to iron out the objective(s), outputs, intended outcomes, impact indicators and source of data."

University of Brighton

"The necessity of managing expectations regarding scope and scale, and marshalling appropriate resources is a key learning point."

University of Surrey

Impact indicators

Impact indicators translate intended outcomes into pieces of information that indicate whether or not change has taken place. Indicators should be:

  • Directly linked to what you are trying to achieve - the key areas in which you are trying to make a difference.
  • Clear and understandable - check them with colleagues.
  • Generally accepted as reasonable and fair - you need to agree that the indicators really do tell you about the impact of your service.
  • Valid - evaluate what they say they are evaluating.
  • Informative, providing significant information - illuminating your successes, highlighting changes/trends, throwing up warning signs.
  • As few in number as possible – three or four are ideal for any part of the service.

Identifying intended outcomes and impact indicators:

  • Service / activity: supporting international students who are new to the country and university through an orientation programme.
  • Objective: to provide international students with the information and socialisation opportunities to ensure a successful transition into university life.
  • Output: international students attend a programme.
  • Intended outcome: international students are equipped with sufficient information for a successful transition into university life.
  • Impact indicators: international students have registered with a doctor; students are informed about and aware of a range of services, such as student services, opening a bank account, visas, living in the UK; international students are confident about where to go to find information; students are informed about and aware of socialisation opportunities at the university.
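
For teams that record their evaluation plans electronically, the chain above (service / activity, objective, output, intended outcome, impact indicators) maps naturally onto a simple structured record. The following is a minimal sketch in Python; the field names are illustrative assumptions, not an official toolkit schema.

```python
from dataclasses import dataclass, field

@dataclass
class ImpactStudyRecord:
    """One entry in an impact study plan; field names are illustrative."""
    service: str
    objective: str
    outputs: list[str]
    intended_outcomes: list[str]
    impact_indicators: list[str] = field(default_factory=list)

orientation = ImpactStudyRecord(
    service="Orientation programme for new international students",
    objective=(
        "Provide international students with the information and "
        "socialisation opportunities to ensure a successful transition "
        "into university life"
    ),
    outputs=["International students attend a programme"],
    intended_outcomes=[
        "Students are equipped with sufficient information for a "
        "successful transition into university life"
    ],
    impact_indicators=[
        "Students have registered with a doctor",
        "Students are informed about and aware of a range of services",
        "Students are confident about where to go to find information",
        "Students are aware of socialisation opportunities at the university",
    ],
)

print(orientation.service, "->", len(orientation.impact_indicators), "indicators")
```

Keeping the plan in a structured form like this makes it straightforward to check that every intended outcome has at least one associated impact indicator.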

The next step in the process is to gather appropriate evidence and decide how much is needed to enable good decisions to be made. A balance needs to be struck between what you need to know, how well your evidence is telling you this, and how much of your resources you can afford to commit to this work. The information to be collected needs to show progress in delivering the objectives / intended outcomes and their associated impact indicators.

Steps to take

These steps assume that the generating outcomes process above has been followed. They adapt the Schuh and Upcraft framework for an evaluation of outcomes.

A quantitative and/or qualitative approach can be taken to evaluate outcomes. The example given is an evaluation of the impact of a university’s student volunteering scheme using a qualitative approach.

Example of the steps to take when planning an evaluation of outcomes:

  • Define the problem: credible empirical evidence is required to demonstrate student development outcomes from a student volunteering scheme and its impact on student achievement.
  • Determine the purpose of the evaluation: to find out the extent to which students who are taking advantage of the scheme have benefited from the experience, and whether it has impacted on achievement levels in comparison with those who are not involved in the scheme.
  • Determine the appropriate approach to evaluation: this example takes a qualitative approach.
  • Determine the objectives / intended outcomes / impact indicators: these will depend on the service provided or the programme offered. For example, a student volunteering scheme might have outcomes aimed at students’ personal development in terms of increased confidence, social and communication skills, ability to lead others, feelings of being part of a community, and greater cultural awareness and understanding of diverse groups of people.
  • Select the evaluation tool: determine whether individual interviews or focus groups are the most appropriate tool for collecting the data.
  • Determine the population to be studied and the sample: a purposive or random sample of those involved in the activity, from a particular cohort or number of cohorts.
  • Determine how the information will be collected: when is the most effective time to collect the data (pre- and/or post-experience?), and who should conduct the interviews / focus groups (issues of experience and objectivity)?
  • Determine how the information will be codified / analysed: much information will be gathered, so decisions about data organisation and sequencing need to be made to enable meaningful analyses.
  • Conduct the appropriate analyses: the challenge is to make sense of the data, identify patterns, and construct a framework for communicating what the data reveal. Have the outcomes been met? What improvements are needed?
  • Evaluate the analyses for policy and practice implications: what do the findings reveal in terms of evidence for policy development and practice review?
  • Report the results effectively: are different reports needed for different audiences?

Derived from Schuh and Upcraft, 2001.
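
As a rough illustration of the codify / analyse steps above, interview or focus group transcripts are often coded against a set of themes derived from the intended outcomes, and theme occurrences are then tallied across participants. The sketch below is a minimal Python example with hypothetical themes and coded data, not output from a real study.

```python
from collections import Counter

# Themes derived from the intended outcomes of the volunteering scheme
# (illustrative labels only).
themes = ["confidence", "communication skills", "leadership",
          "community", "cultural awareness"]

# Hypothetical coded transcripts: participant id -> themes the coder
# identified in that participant's interview.
coded = {
    "P01": ["confidence", "community"],
    "P02": ["confidence", "communication skills", "cultural awareness"],
    "P03": ["leadership", "community", "confidence"],
}

# Tally how many participants mentioned each theme (each theme appears at
# most once per participant in this data).
tally = Counter(theme for assigned in coded.values() for theme in assigned)
for theme in themes:
    print(f"{theme}: {tally[theme]} of {len(coded)} participants")
```

A simple tally like this does not replace qualitative interpretation, but it helps to identify patterns and to communicate whether the intended outcomes appear to have been met.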
