
Return to 'Toolkit' Structure: Ten features of evaluation

2 Evaluation Preparation


Things to do

Assemble the core group of people – at this initial meeting, the wider the group the better, as each person will bring a different perspective. Depending on how well you know each other, you may find activity 2A useful for establishing existing connections.

Undertake some of the situational analysis activities. See activities 2B, 2D, 2E and 2F, and presentation 2C, which provides an overview of the different types of situational analysis.

Undertaking a situational analysis together, capturing information in PowerPoint or on a flip chart in front of everyone, allows you to clarify the points that you all agree on or subscribe to, and saves typing up notes at the end.

Agree who will collect and collate information about existing data and evaluation activity for the next meeting; alternatively, invite colleagues to bring this information with them. If you adopt the latter approach, ensure that it does not limit what you discuss and that it allows you to build in opportunities to add other activities, data or evaluation activity after the meeting.

NB It is possible to merge the general situational analysis with the initial evaluation audit; however, to make sense of and take decisions about the evaluation audit, you need all the information collated so that you are working with all the pieces of the jigsaw, not just a few of them.

Reasons to undertake a situational analysis

The first requirement is a contextual, or ‘situational’, analysis that understands the position of key stakeholders and is able to establish agreed priorities and outcomes. A contextual analysis is likely to reveal that, in most cases, evaluation is not the central core of [WP practitioners’] working practice and that their interests and expertise may lie elsewhere. As such, the participants brought together to develop the evaluation plan will almost certainly have the ‘practicality ethic’ of people working in organisational settings. The evaluation must be manageable, and for that reason it is neither possible nor desirable to try to evaluate everything. One of the benefits of having a plan is that it gives you a clear idea of what you are evaluating and a justification for your decisions about what is being evaluated and when.

When bringing together colleagues with different roles, responsibilities and professional interests, terminology can be both problematic and fascinating. Context influences how and what individuals understand, and some words and phrases have come to mean particular things to practitioners in the field of Widening Participation. Clarifying what you all mean can be extremely illuminating and avoids unnecessary confusion and misunderstanding in the future.

Evaluation Preparation: Gathering the Team and Identifying Connections 2A (pdf 60kB)
This activity includes a list of questions and offers one way of establishing the existing working patterns and institutional structures that might become the basis of future evaluations, for example the relationship between the widening participation practitioners and the disability officer, or the committee that will monitor and receive future evaluation findings.
Evaluation Preparation: What is Widening Participation? 2B (pdf 90kB)
This activity will allow you to clarify and develop a shared understanding of widening participation. It is valuable when preparing or reviewing the content of a widening participation strategy, as well as for generating a list of issues that you might prioritise in the evaluation plan itself.


Audit of existing evaluation practice

When producing (or, in some cases, refining) an evaluation plan it is useful to audit existing evaluation practice and identify gaps and/or a new focus of attention. These initial meetings are vital to the whole process because understanding the context, and working within the parameters it sets, is as important as sharing ideas about the technical aspects of actually doing evaluation. The discussion in the Aimhigher ‘evidence good practice guide’ is very helpful in thinking about the reasons for doing this and offers questions to think about.

The institutional or partnership context in which you work will determine which type of mapping activity is most suitable for your audit of existing evaluation practice. The context includes your history (for example, previous widening participation strategy and activities), institutional structures and existing work, especially monitoring and reporting mechanisms, as well as the staff you assemble to audit your evaluation practice. Each type of map allows you to plot existing activity and to produce a visual representation or summary of work; you can then build on this by including the ‘core participant data’ and other evaluative data you already collect.

We have identified four possible mapping activities that can support this process. For an overview of the different approaches see presentation 2C; for a summary of each approach see below, and use the relevant activity sheet to support you in undertaking the audit.

Evaluation Preparation: Mapping Activities – an Overview 2C (pdf 370kB)
This PowerPoint presentation includes examples of the different types of mapping used in the consultancy case studies, one or more of which might be a useful approach for recording and summarising your existing data.

 

Mapping onto Student Lifecycle

The focus of this map is the student; it allows different stakeholders to map the data they collect at different stages of the lifecycle. This enables you to identify duplication of effort and gaps in the evaluative data that is collected.

The number and names of the stages in the student lifecycle will vary depending on organisational and partnership structures. The challenge is to achieve a balance between the number and specificity of stages. Too few stages results in data sources that refer to very different issues, which makes the data difficult to analyse; too many stages makes it difficult to take more strategic decisions because the information is too specific.

Evaluation Preparation: Mapping onto Student Lifecycle 2D (pdf 80kB)
An explanation of the process of mapping onto a student lifecycle and examples of this approach showing how it can highlight gaps or duplication in data collection.


Mapping onto a Timeline

The focus of this map is time; it allows you to identify not only when activities are being delivered and when data is available, but also when data might be needed and by whom. For instance, you can plot the dates of major committee meetings that will make decisions about funding future work or require evaluation data for reporting purposes, as well as key events including holidays, exams and so on. You can also use this information to inform your decisions about when to undertake different phases of an ongoing evaluation, and to help plan more precise activities within a shorter, more tightly focused evaluation. This type of map enables you to ensure that there is enough time to undertake the evaluation, including analysis and timely dissemination of findings. It helps to avoid a number of practical obstacles, such as gaining access to participants and securing their active involvement.

The duration of the map will determine the level of detail you should include. You may want several maps: one for the full three-year period of your evaluation plan, and others for each year or for individual evaluation projects. If you are using the timeline to map the range of activities available to pupils and other participants as they progress through education, you may want to develop a timeline covering the various years in each key stage.

Whoever is undertaking the evaluation, it is vital that decisions about adjustments to the timing of one project are made with respect to the wider timeline, as slippage in one project can affect the overall implementation of the plan.

A timeline map can also help you to stay flexible, minimising the inconvenience caused by unexpected events and maximising the benefits of unexpected opportunities.

Evaluation Preparation: Mapping onto a Timeline 2E (pdf 90kB)
An explanation of the process of mapping onto a timeline (annual, learner progression pathway or core evaluation activities), showing how it can support the planning of timely data collection so that there is time for analysis and reporting to internal and external strategic groups. For examples of timelines see 2C.

 

Mapping onto a Chain of Events

This form of mapping allows multiple stakeholders to contribute to the mapping exercise and encourages those with responsibility for developing the evaluation plan to discuss the sequence and focus of different activities. The chain can consist of activities relating to different groups of participants, for instance specific pupil cohorts, or pupils, teachers, parents and carers.

Alternatively, the chain can consist of activities delivered by different stakeholders. Building up the chain of events, deciding on their sequence and adding subsequent events is a very flexible step that some may find useful for longitudinal evaluations associated with multiple interventions that might feature within a learner progression framework. The mapping chain can help capture the interventions that will feature in a particular evaluation and provide a useful basis for discussion. It also has potential as a basis for discussion with participants about the relative impact of specific activities.

Evaluation Preparation: Mapping Chain of Events 2F (pdf 80kB)
An explanation of the process of mapping events undertaken by specific groups of participants and/or delivered by different stakeholders to complement progression pathways.


Mapping onto the existing evidence base

As a minimum, previous evaluation reports offer a basis for generating questions for future evaluations. However, mapping onto the existing evidence base has further possible benefits when preparing for future evaluations; in particular, it can help to identify key and common concerns amongst stakeholders.

This form of mapping allows multiple stakeholders to identify opportunities for sharing the data they collect, as well as the evidence base that already exists. It is suitable for identifying who already collects and reports on the individual pieces of participant profile data. The current HEFCE guidance for Aimhigher Partnerships means that from August 2008 all partnerships will be collecting the same participant profile data. It therefore seems sensible for HEIs to collect, as a minimum, the same core data, enabling them to map onto what will become a growing evidence base generated by Aimhigher and, conversely, to maximise the likelihood of their evaluations being used by Aimhigher and HEFCE.

Mapping onto the existing evidence base can also capture some of the different types of impact indicator (see evaluation impact indicators) that are essential for developing a rich and robust body of evidence. Notably, this mapping exercise lists or tabulates previous evaluations and reports, which provide a valuable context and starting point. The extent to which you can refer to existing evidence and comment on its connection with your own future evaluative reports will depend on the rigour of previous evaluations and the similarities or differences between the data collected.

