Evaluation capacity building in widening participation practice
Key: A = Activity, I = Information, P = Presentation, W = Website

Return to 'Toolkit' Structure: Ten features of evaluation

4 Evaluation Impact Indicators

CSET uses a range of planning tools to assist practitioners in developing a framework for evaluation and indicators of performance; as discussed in the evaluation planning section, we have used RUFDATA for identifying, planning and collecting evaluation data. Importantly, interventions evolve over time and evaluation needs to accommodate this. The Aimhigher guidance also emphasises this (‘the importance of progressive programmes of co-ordinated activities that will contribute to a learner progression framework’), and evaluation as a ‘cycle of activity’.

To obtain a more nuanced understanding, and to gather evidence with the richness and robustness of a good evaluation, it is necessary to think about a number of other features, namely impact indicators, and to consider how your evaluation will connect with other stakeholders’ evaluation plans or activity.

In this toolkit we outline two types of impact indicator:

  • the EPO framework helps to provide the context and explanation for progression and attainment data, as well as for other measurable outputs regarding participation
  • the levels of impact encourage the evaluator to move beyond the event feedback form and to explore changes in individual attitudes and behaviours, as well as institutional or systemic changes in policy and practice, which also help to contextualise more quantitative measures.

Things to do

Review the material relating to Enabling, Process and Outcome (EPO) indicators, which is available in presentation 4B.

NB It is important to think about how you will collect evidence of the different indicators, and to recognise that the level of the evaluation will influence what evidence you will obtain and how it might be used.

As the quantitative measures and outputs are often those most readily captured, identify them first and think about how you will report them with respect to the ‘core participant data’ you are collecting (see the discussion of evaluation practicalities in section 6).

Having identified the outcomes, try to generate some possible enabling and process indicators. You might generate these based on your own knowledge, or on ideas arising from other people’s reports.

Thinking about the possible EPO indicators at the beginning is helpful in identifying questions for semi-structured interviews or focus group discussions, and can ensure that your evaluation remains focused and that you make best use of the time you have available; one way of recording such indicators is sketched below.
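The toolkit prescribes no particular format for recording EPO indicators. Purely as an illustration, the short Python sketch below shows one way a practitioner might record candidate indicators alongside the evidence to be collected for each, and print them as prompts for an interview schedule; all indicator and evidence entries are hypothetical examples, not taken from the reports referenced here.

```python
# Illustrative planning structure: candidate EPO indicators paired with the
# evidence to be collected for each. All entries are hypothetical examples.
epo_plan = {
    "enabling": [
        {"indicator": "WP co-ordinator appointed",
         "evidence": "staffing records; interview with co-ordinator"},
        {"indicator": "Aimhigher funding allocated to the activity",
         "evidence": "budget documents"},
    ],
    "process": [
        {"indicator": "Summer school delivered to the target group",
         "evidence": "attendance registers; observation notes"},
    ],
    "outcome": [
        {"indicator": "Participants produce a UCAS personal statement",
         "evidence": "count of completed statements"},
        {"indicator": "Changed perceptions of university study",
         "evidence": "semi-structured interviews"},
    ],
}

# Print the plan as prompts, e.g. as a starting point for an interview schedule.
for indicator_type, items in epo_plan.items():
    print(f"{indicator_type.capitalize()} indicators:")
    for item in items:
        print(f"  - {item['indicator']} (evidence: {item['evidence']})")
```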

For examples of EPO indicators that emerged during an evaluation, see the extract from the Up2uni Evaluation Report in 4A.


I Evaluation Impact Indicators: EPO – A Case Study 4A (pdf slides 130kB)
This is an extract from an evaluation of Aiming4uni in Furness that identifies enabling, process and outcome indicators. For the full report, see the Up2uni Evaluation Report, 2007.
P Evaluation Impact Indicators: Enabling, Process and Outcome Indicators 4B
(pdf slides 420kB) (pdf Handout 205kB)
This PowerPoint presentation includes the headings and questions used in the RUFDATA planning framework; it is useful as a basis for workshops or group discussion.

Enabling, Process and Outcome Indicators

Enabling indicators refer to dimensions which need to be set up or to be in place, such as:

  • frameworks for action
  • policies
  • protocols
  • space
  • time
  • people
  • resources

Enabling indicators are concerned with the structures and support which need to be set up or provided to better enable the desired processes to take place. These may include the establishment of an institutional, departmental or group policy for widening participation, appointment of WP co-ordinators or working parties, allocation of funds/resources (e.g. WP and disability premium or Aimhigher funding), timetable changes, or the provision of professional development to enable staff to support targeted students. The degree to which items under this heading are provided is highly relevant to any evaluation of the outcomes: it provides an explanatory context that is valuable in explaining why an activity might or might not be successful. It is often these features that are missing from performance measurement systems that focus on quantitative participant data.


Process indicators refer to dimensions which are concerned with actions:

  • Ways of doing things e.g. admission procedure, information, advice and guidance provision
  • Styles of learning or working
  • Behaviours
  • Experiences e.g. open days, summer schools, or induction of students into university
  • Narratives e.g. how teachers and future learners perceive learning in University

Process indicators are concerned with what needs to happen in practice with the ‘target group’ in order to embody or achieve the desired outcomes. To assess the effects of a strategy, the experience of the targeted learners should be attributable to the strategic interventions sponsored by institutional policy on widening participation; the issue of attribution is critical here.

Outcome indicators refer to dimensions which are concerned with ‘end points’:

  • Goals e.g. learning outcomes on a learner progression framework
  • Desired products or achievements e.g. production of a personal statement for UCAS form
  • Numbers e.g. participants involved in an event, achieving desired goal
  • Changes e.g. in perception, intentions
  • New practices e.g. increased motivation, use of a new skill

Outcome indicators are concerned with the intermediate or longer-term outcomes of the activities or programme of activities, and are tied to impact goals. Since widening participation strategies are ultimately about effecting institutional change to facilitate positive changes in student behaviour, the most critical outcome indicators tend to refer to student-based outcomes. However, it is perfectly possible to identify intermediate outcomes which refer to shifts in departmental or subject cultures and teaching styles that could be positively attributed to widening participation activity.

All these indicators can be addressed through evidence gained from standard instruments [questionnaires, interviews of key stakeholders, informants and participants, etc.] and by the inspection of documentary evidence. Indicative evidence gathered through case studies of change may also be a useful tool. See section 7, evaluation data collection.


Levels of Evaluation Focus for impact indicators

There is a type of evaluation tool which is designed to organise an evaluation focus for planning purposes. It understands possible evaluation foci in terms of levels. Essentially, these levels correspond to the trajectory taken by an intervention, from the quality of the target group’s experience of the initial activity (for example a workshop, a seminar, a campus visit or a mentoring project) through to the extent to which the activity creates longer-term changes in the individual, strategic effects on stakeholders working within institutions and, ultimately, impacts on the whole system. It therefore moves its focus from individual participant experience, to staff and their practice within organisations, concluding with changes at a macro level.

The original CSET model identified 5 ‘levels of evaluation focus in planning approaches to the evaluation of widening participation’. This ‘toolkit’ contains a modified version of the ‘levels model’ that emerged through discussion and feedback. The version below combines levels 1 and 2 (the experience of the intervention and awareness/aspiration outcomes) and re-presents each level with some illustrative examples.


Things to do

  • Review the material relating to levels of impact in presentation 4D.
  • Think about the type of evidence you might collect to demonstrate the effects or impact at each level.
  • Although the quantitative measures and outputs that provide evidence of learning may be more easily captured for whole year groups, think about how you will report on cohort results and how you might use the ‘core participant data’ you collect to support you when analysing assessment results.
  • Having identified the individual outcomes at levels 1 and 2, try to identify and capture evidence of some of the enabling and process indicators that may help explain how or why the participants demonstrate specific outcomes. The enabling and process indicators often relate to level 3 indicators, which look at effects or impact on the institution.
  • You can generate enabling or process indicators based on your own knowledge of the context, or on ideas arising from other people’s reports.
  • Thinking about the possible EPO indicators at the beginning of the evaluation process, before you collect your data, can help you to generate questions for semi-structured interviews or focus group discussions, and can ensure that your evaluation remains focused and that you make best use of the time you have available.
  • For examples of EPO indicators that emerged during evaluation of an Aimhigher Summer School, see 4A.

Level 1: Quality of the experience and immediate effects or situated learning

In many cases evaluation at this level is used as a quality check, customer service tool or ‘happy sheet’. It is important as a diagnostic tool for the quality of the delivery of an engagement strategy, or for estimations of awareness about the topic under consideration.

If the engagement had specific learning or knowledge-based outcomes in mind on the part of the target group, this level is concerned with measuring what these may be. In the widening participation environment it might be associated with new information acquired by the target group (courses and routes into HE they were not aware of before), attitudinal changes, or the development of new horizons. These outcomes are important in their own right; while they do not correspond to new behaviour or practice on the part of the target group, they might be considered a necessary condition for such change.

Questions

  • How was the intervention experienced by the target group ‘at the time’? – quality of resources, space, timing and relevance of the engagement activity.
  • What new information / skills has the target group gained? – changes in their awareness, confidence, aspirations and knowledge of HE.

Methods

Participant feedback through questionnaires and focus groups.
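Purely as an illustration of how such feedback might be summarised once collected, here is a minimal Python sketch assuming responses have been entered with one row per participant and simple 1–5 ratings; all column names and figures are hypothetical:

```python
import pandas as pd

# Hypothetical Level 1 feedback: one row per participant, 1-5 ratings plus a
# yes/no question on whether the participant learned of new routes into HE.
feedback = pd.DataFrame({
    "activity": ["campus visit", "campus visit", "summer school", "summer school"],
    "rating_relevance": [4, 5, 3, 4],
    "rating_resources": [5, 4, 4, 4],
    "aware_of_new_he_routes": [True, True, False, True],
})

# Mean ratings per activity: a quick quality check on delivery.
print(feedback.groupby("activity")[["rating_relevance", "rating_resources"]].mean())

# Proportion reporting new awareness of routes into HE (an immediate effect).
print(feedback.groupby("activity")["aware_of_new_he_routes"].mean())
```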

back to top of page

Level 2: Quality of transfer or reconstructed learning to new environments and practices

This level of impact can be addressed by quantitative indicators with relatively little diagnostic potential, but might also include indications of the target group’s experience of university life and its support services, involving more narrative inquiry techniques. This is probably the ‘gold standard’ in terms of widening participation, and is a direct reference to the extent to which strategies have produced more routine, longer-term changes in the attitudes, capacities, behaviours, confidence and identities of the target group.

Questions

  • What changes are there in capacities, as well as confidence, evidenced by quantitative indicators, e.g. SATs, GCSE attainment rates, staying-on rates, and applications and entry to HE based on areas and/or target schools?
  • How are these changes, for example in motivation and confidence, recognised by the learner and evident to others such as teachers and parents/carers?

Methods

External and institutional data on attainment, progression, applications and admissions, which need to be linked to the ‘core participant data’; for the more qualitative aspects, semi-structured interviews and focus group discussions. A sketch of such a linkage follows below.
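As a minimal sketch of this linkage, assuming both datasets share a common learner identifier (all file, column and variable names here are hypothetical), attainment data for a whole cohort can be joined to the core participant data so that participants can be compared with non-participants:

```python
import pandas as pd

# Hypothetical core participant data: which learners took part in what.
participants = pd.DataFrame({
    "learner_id": [101, 102, 103],
    "activity": ["summer school", "campus visit", "summer school"],
})

# Hypothetical institutional attainment/progression data for the whole cohort.
cohort = pd.DataFrame({
    "learner_id": [101, 102, 103, 104, 105],
    "gcse_points": [52, 40, 47, 38, 44],
    "applied_to_he": [True, False, True, False, True],
})

# Link the two datasets on the shared learner identifier; learners with no
# matching participant record form the comparison group.
linked = cohort.merge(participants, on="learner_id", how="left")
linked["participated"] = linked["activity"].notna()

# Compare attainment and HE applications for participants vs non-participants.
print(linked.groupby("participated")[["gcse_points", "applied_to_he"]].mean())
```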


Level 3: Quality of institutional or sector impacts

This level shifts the focus from the experience of individual learners to the extent to which strategies are promoting ‘new ways’ of doing things at the institutional level, in terms of new systems, routine systemic practices and assumptions framed by the widening participation agenda. Institutional (or sector) impacts include changes to the way schools, colleges and HEIs engage with the widening participation agenda; commitment to particular practices or projects; and the experience of teachers, parents and HEI staff, their views of WP interventions, and the evidence they offer of the effects of such interventions on the learning cultures and practices of schools, colleges and HEIs.

As an evaluation focus, other key stakeholders (undergraduate and postgraduate officers, learning and teaching committees, teams engaged in learner support practices, and teachers engaged in routine teaching and learning practices, not just the Aimhigher co-ordinator) will form the source of evaluative evidence. In effect, the focus at this level is on institutional change, which involves those whose primary remit may not be widening participation.

Questions

  • How has the institution responded to the ideas or ‘new ways’ of doing things introduced by the intervention?
  • What changes to policy or practices have happened that stakeholders associate with the intervention?

Methods

Questionnaires, focus groups, and semi-structured or dialogic interviews help gather evidence; further sources of evidence are documents (e.g. a School Development Plan or newsletter) and artefacts (e.g. press releases, websites).

Level 4: Quality of impact on macro or long-term strategic objectives

This level is more relevant to HEFCE, DIUS and others interested in the macro context. One way of evaluating at this level will be to make use of Higher Education and Aimhigher Partnership evaluations, in particular those at levels 2 and 3, to help develop a meta-perspective on how the policy is achieving positive effects overall. For this to be an option, it is crucial that individual evaluations make explicit the context of their evaluation, using common ‘core participant data’ and descriptors of the categories of activity and levels of experience.


Level 5: Changes in sector wide and macro practices

This level concerns macro or long-term strategic objectives: the way local trends connect with (illustrate, reinforce or contradict) longer-term national trends. Some HEIs and Aimhigher partnerships will have tracking schemes in place that will enable them to begin to comment at this level; it is, however, assumed that bringing evidence together at this level will be undertaken by the funding council.

P Evaluation Impact Indicators: Moving beyond the feedback form 4D
(pdf slides 420kB) (pdf Handout 205kB)
This PowerPoint presentation includes slides that outline the different headings and shows how evaluation needs to move beyond the event ‘satisfaction sheet’; it includes references to other Aimhigher resources.
I Evaluation Impact Indicators: Examples of evaluation at different levels 4E
This handout outlines examples of evaluation reports that have drawn on different levels of evaluation data.

Return to 'Toolkit' Structure: Ten features of evaluation
