County South, Lancaster University, Bailrigg, Lancaster LA1 4YD, United Kingdom. Tel: +44 (0) 1524 592907
Ten features of evaluation, 2: Evaluation Preparation

Reasons to undertake a situational analysis
The first requirement is a contextual, or 'situational', analysis: one that understands the position of key stakeholders and can establish agreed priorities and outcomes. A contextual analysis is likely to reveal that, in most cases, evaluation is not the central core of [WP practitioners'] working practice and that their interests and expertise may lie elsewhere. As such, the participants brought together to develop the evaluation plan will almost certainly have the 'practicality ethic' of people working in organisational settings. The evaluation must be manageable, and for that reason it is neither possible nor desirable to try to evaluate everything. One of the benefits of having a plan is that it gives you a clear idea of what you are evaluating, and a justification for your decisions about what is being evaluated and when.

When bringing together colleagues with different roles, responsibilities and professional interests, terminology can be both problematic and fascinating. The context influences how and what individuals understand, and some words and phrases have come to mean particular things to practitioners in the field of Widening Participation. Clarifying what you all mean can be extremely illuminating and avoids unnecessary confusion and misunderstanding in the future.
Audit of existing evaluation practice
When producing (or, in some cases, refining) an evaluation plan it is useful to audit existing evaluation practice and identify gaps and/or a new focus of attention. These initial meetings are vital to the whole process, because understanding the context, and working within the parameters it sets, is as important as sharing ideas about the technical aspects of actually doing evaluation. The discussion in the Aimhigher 'evidence good practice guide' is very helpful in thinking about the reasons for doing this, and offers questions to consider. The institutional or partnership context in which you work will determine which type of mapping activity is most suitable for your audit of existing evaluation practice. The context includes your history (for example, previous widening participation strategy and activities), institutional structures, existing work (especially monitoring and reporting mechanisms), and the staff you assemble to audit your evaluation practice. Each type of map allows you to plot existing activity and to have a visual representation or summary of work; you can then build on this by including the 'core participant data' and other evaluative data you already collect. We have identified four possible mapping activities that can support this process: for an overview of the different approaches see presentation 2C; for a summary of the approaches see below; and use the relevant activity sheet to support you in undertaking this audit.
Mapping onto the Student Lifecycle
The focus of this map is the student: it allows different stakeholders to map the data they collect at different stages of the lifecycle, which enables you to identify duplication of effort and gaps in the evaluative data that is collected. The number and names of the stages in the student lifecycle will vary depending on organisational and partnership structures. The challenge is to achieve a balance between the number and specificity of stages. Too few stages results in data sources that refer to very different issues, which makes the data difficult to analyse; too many makes it difficult to take more strategic decisions, because the information is too specific.
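For readers who keep their audit data electronically, the lifecycle mapping above can be sketched as a small script. This is a minimal illustration only: the stage names, stakeholders and data items below are invented for the example, not drawn from any real partnership.

```python
# Illustrative sketch of a student-lifecycle audit: record which
# stakeholder collects which evaluative data at each stage, then
# surface gaps (stages with no data) and duplication of effort
# (the same data collected by more than one stakeholder).
# All stage/stakeholder/data names are hypothetical.
from collections import defaultdict

# stage -> list of (stakeholder, data collected at that stage)
lifecycle_map = {
    "aspiration-raising": [("schools team", "pupil attitude survey"),
                           ("Aimhigher partnership", "pupil attitude survey")],
    "pre-entry": [("admissions", "applicant profile data")],
    "first year": [],
    "progression": [("faculty", "retention statistics")],
}

# Gaps: stages where no evaluative data is collected at all.
gaps = [stage for stage, entries in lifecycle_map.items() if not entries]

# Duplication: the same data item collected by multiple stakeholders.
collectors = defaultdict(set)
for stage, entries in lifecycle_map.items():
    for stakeholder, data in entries:
        collectors[data].add(stakeholder)
duplicated = {data: sorted(who) for data, who in collectors.items()
              if len(who) > 1}

print("Gaps:", gaps)
print("Duplication:", duplicated)
```

Run on this invented data, the script flags "first year" as a gap and the pupil attitude survey as duplicated effort, which is exactly the kind of summary the mapping exercise is meant to produce for discussion.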
Mapping onto a Timeline
The focus of this map is time. It allows you to identify not only when activities are being delivered and when data is available, but also when data might be needed and by whom. For instance, you can plot the dates of major committee meetings (who will make decisions about funding future work, or require evaluation data for reporting purposes) and key events including holidays, exams and so on. You can also use this information to inform your decisions about when to undertake different phases of an ongoing evaluation, as well as to plan more precise activities within a shorter, more tightly focused evaluation. This type of map enables you to ensure that there is enough time to undertake the evaluation, including analysis and timely dissemination of findings, and it helps to avoid practical obstacles such as gaining access to participants and securing their active involvement. The duration of the map will determine the level of detail you should include. You may want several maps: one for the full three-year period of your evaluation plan, and others for each year or for individual evaluation projects. If using the timeline to map the range of activities available to pupils and other participants as they progress through education, you may want a timeline covering the various years in each key stage. Whoever is undertaking the evaluation, it is vital to make decisions about adjusting the timing of one project with respect to the wider timeline, as slippages in one project can impact on the overall implementation of the plan. A timeline map can also help you stay flexible, minimising the inconvenience caused by unexpected events and maximising the benefits of unexpected opportunities.
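A timeline map of this kind can likewise be kept as a simple dated list and checked automatically, for example to confirm that findings are disseminated before the meetings that need them. Again, this is only a sketch: every date and event name below is invented for illustration.

```python
# Illustrative sketch of a timeline map: milestones as dated entries,
# with a check that no decision-making committee meets before the
# evaluation findings have been disseminated.
# All dates and event names are hypothetical.
from datetime import date

milestones = [
    (date(2008, 10, 1), "baseline data collection"),
    (date(2009, 3, 15), "follow-up data collection"),
    (date(2009, 5, 1), "analysis complete"),
    (date(2009, 5, 20), "findings disseminated"),
    (date(2009, 6, 10), "partnership funding committee"),
]

# Sort chronologically, then flag any committee meeting that falls
# before the dissemination date.
milestones.sort()
disseminated = next(d for d, name in milestones
                    if name == "findings disseminated")
decisions_at_risk = [name for d, name in milestones
                     if "committee" in name and d < disseminated]

print("Decisions at risk:", decisions_at_risk)
```

In this invented example the list comes back empty because the committee meets after dissemination; moving the committee date earlier would flag it, mirroring the slippage problem described above.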
Mapping onto a Chain of Events
This form of mapping allows multiple stakeholders to contribute to the exercise and encourages those responsible for developing the evaluation plan to discuss the sequence and focus of different activities. The chain can consist of activities relating to different groups of participants, for instance specific pupil cohorts, or pupils, teachers, parents and carers. Alternatively, it can consist of activities delivered by different stakeholders. Building up the chain of events, deciding on their sequence and adding subsequent events is a flexible approach that some may find useful for longitudinal evaluations associated with multiple interventions within a learner progression framework. The chain can help capture the interventions that will feature in a particular evaluation and provide a useful basis for discussion. It also has potential as a basis for discussion with participants about the relative impact of specific activities.
Mapping onto the existing evidence base
As a minimum, previous evaluation reports offer a basis for generating questions for future evaluations. Mapping the existing evidence base offers further benefits when preparing for future evaluations, and can be particularly useful in identifying key and common concerns amongst stakeholders. This form of mapping allows multiple stakeholders to identify opportunities for sharing the data they collect, as well as the evidence base that already exists. It is suitable for identifying who already collects and reports on the individual pieces of participant profile data. Under the current HEFCE guidance for Aimhigher Partnerships, from August 2008 all partnerships will be collecting the same participant profile data. It therefore seems sensible for HEIs to collect, as a minimum, the same core data: this gives them the option of mapping onto what will become a growing evidence base generated by Aimhigher, and conversely maximises the likelihood of their evaluations being used by Aimhigher and HEFCE. Mapping onto the existing evidence base can also capture some of the different types of impact indicator (see evaluation impact indicators) that are essential for developing a rich and robust body of evidence. Notably, this mapping exercise lists or tabulates previous evaluations and reports, which provide a valuable context and starting point. The extent to which you can refer to existing evidence, and comment on its connection with your own future evaluative reports, will depend on the rigour of previous evaluations and the similarities or differences between the data collected.