4. Evaluation Impact Indicators

CSET use a range of planning tools to assist practitioners in developing a framework for evaluation and indicators of performance; as discussed in the evaluation planning section, we have used RUFDATA for identifying, planning and collecting evaluation data. Importantly, interventions evolve over time and evaluation needs to accommodate this. The Aimhigher guidance also emphasises this ('the importance of progressive programmes of co-ordinated activities that will contribute to a learner progression framework'), and describes evaluation as a 'cycle of activity'. To obtain a more nuanced understanding, and to gather evidence with the richness and robustness of a good evaluation, it is necessary to think about a number of other features, namely impact indicators, and to consider how evaluations will connect with other stakeholders' evaluation plans or activity. In this toolkit we outline two ways of thinking about impact indicators:
Enabling, Process and Outcome Indicators

Enabling indicators refer to dimensions which need to be in place:
Enabling indicators are concerned with the structures and support which need to be set up or provided to better enable the desired processes to take place. These may include the establishment of an institutional, departmental or group policy for widening participation, the appointment of WP co-ordinators or working parties, the allocation of funds/resources (e.g. WP and disability premium or Aimhigher funding), timetable changes, or the provision of professional development to enable staff to support targeted students. The degree to which items under this heading are provided is highly relevant to any evaluation of outcomes: it provides an explanatory context that is valuable in explaining why an activity might be successful or not. It is often these features that are missing from performance measurement systems that focus on quantitative participant data.

Process indicators refer to dimensions which are concerned with actions:
Process indicators are concerned with what needs to happen within the 'target group' practice in order to embody or achieve desired outcomes. In order to assess the effects of a strategy, the experience of the targeted learners should be attributable to strategic interventions sponsored by institutional policy on widening participation. The issue of attribution is critical here.

Outcome indicators refer to dimensions which are concerned with 'end points':
Outcome indicators are concerned with the intermediate or longer-term outcomes of the activities or programme of activities, and are tied to impact goals. Since widening participation strategies are ultimately about effecting institutional change to facilitate positive changes in student behaviour, the most critical outcome indicators tend to refer to student-based outcomes. However, it is perfectly possible to identify intermediate outcomes which refer to shifts in departmental or subject cultures/teaching styles which could be positively attributed to widening participation activity. All these indicators can be addressed through evidence gained from standard instruments (questionnaires; interviews of key stakeholders, informants, participants, etc.) and by the inspection of documentary evidence. Indicative evidence gathered through case studies of change may also be a useful tool. See section 7, evaluation data collection.
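The three indicator types can also be treated as a simple planning structure. As a purely illustrative sketch (in Python; the class and field names are hypothetical, not part of the toolkit), an evaluation plan might record each indicator's type alongside its intended evidence sources:

```python
from dataclasses import dataclass, field

# The three indicator types described above.
INDICATOR_TYPES = {"enabling", "process", "outcome"}

@dataclass
class Indicator:
    """One indicator in an evaluation plan (hypothetical structure)."""
    description: str
    kind: str                                  # one of INDICATOR_TYPES
    evidence_sources: list[str] = field(default_factory=list)

    def __post_init__(self) -> None:
        if self.kind not in INDICATOR_TYPES:
            raise ValueError(f"unknown indicator type: {self.kind!r}")

# Example entries drawn from the discussion above.
plan = [
    Indicator("WP co-ordinator appointed and resourced", "enabling",
              ["policy documents", "funding allocations"]),
    Indicator("Targeted learners take part in mentoring sessions", "process",
              ["attendance records", "participant interviews"]),
    Indicator("Applications to HE from the target group increase", "outcome",
              ["admissions data linked to core participant data"]),
]

for ind in plan:
    print(f"[{ind.kind}] {ind.description}: {', '.join(ind.evidence_sources)}")
```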
Levels of Evaluation Focus for Impact Indicators

There is a type of evaluation tool which is designed to organise an evaluation focus for planning purposes. It frames possible evaluation foci in terms of levels. Essentially, the levels correspond to the elements of the trajectory taken by an intervention: from the quality of the target group's experience of the initial activity (for example a workshop, a seminar, a campus visit, or a mentoring project), through the extent to which the activity creates longer-term changes in the individual and strategic effects on stakeholders working within institutions, and ultimately to impacts on the whole system. It therefore moves its focus from individual participant experience, to staff and their practice within organisations, concluding with changes at a macro level. The original CSET model identified five 'levels of evaluation focus in planning approaches to the evaluation of widening participation'. This toolkit contains a modified version of the 'levels model' that emerged through discussion and feedback. The version below combines levels 1 and 2 (the experience of the intervention and awareness/aspiration outcomes) and re-presents each level with some illustrative examples.
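For planning purposes the levels can be written down as a simple lookup, so that each planned activity is tagged with the level(s) of focus it is meant to address. The following sketch (Python; the names are hypothetical and the wording paraphrases the level descriptions below) is one minimal way of doing this:

```python
# Hypothetical summary of the modified 'levels of evaluation focus' model;
# the wording paraphrases the level descriptions in this toolkit.
LEVELS = {
    1: "Quality of the experience and immediate effects (situated learning)",
    2: "Quality of transfer or reconstructed learning to new environments",
    3: "Quality of institutional or sector impacts",
    4: "Quality of impact on macro or long-term strategic objectives",
    5: "Changes in sector-wide and macro practices",
}

# A planner might tag each activity with the level(s) it is evaluated at.
activity_focus = {
    "campus visit feedback sheets": [1],
    "tracking applications and admissions": [2, 4],
    "interviews with learning and teaching committees": [3],
}

for activity, levels in activity_focus.items():
    for level in levels:
        print(f"{activity} -> Level {level}: {LEVELS[level]}")
```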
Level 1: Quality of the experience and immediate effects or situated learning

In many cases this diagnostic tool is used as a quality check, customer-service tool or 'happy sheet'. It is important as a diagnostic tool for the quality of the delivery of an engagement strategy, or for estimations of awareness about the topic under consideration. If the engagement had specific learning or knowledge-based outcomes in mind for the target group, this level is concerned with measuring those. In the widening participation environment it might be associated with new information acquired by the target group (courses and routes into HE they were not aware of before), attitudinal changes, or the development of new horizons. These outcomes are important: while they do not correspond to new behaviour or practice on the part of the target group, they might be considered a necessary condition for such change.

Questions
Methods

Participant feedback through questionnaires and focus groups.

Level 2: Quality of transfer or reconstructed learning to new environments and practices

This is probably the 'gold standard' in terms of widening participation: a direct reference to the extent to which strategies have produced more routine, longer-term changes in the attitudes, capacities, behaviours, confidence and identities of the target group. This level of impact can be addressed by quantitative indicators with relatively little diagnostic potential, but might also include indications of the target group's experience of University life and its support services, involving more narrative inquiry techniques.

Questions
Methods

External and institutional data on attainment, progression, applications and admissions, which need to be linked to 'core participant data'; for the more qualitative aspects, semi-structured interviews and focus group discussions.

Level 3: Quality of institutional or sector impacts

This level shifts the focus from the experience of individual learners to the extent to which strategies are promoting 'new ways' of doing things at the institutional level, in terms of new systems, routine systemic practices and assumptions which are framed by the widening participation agenda. Institutional (or sector) impacts include changes to the way schools, colleges and HEIs engage with the widening participation agenda; commitment to particular practices or projects; and the experience of teachers, parents and HEI staff, their views of WP interventions and the evidence they offer of the effects of such interventions on the learning cultures and practices of schools, colleges and HEIs. As an evaluation focus, other key stakeholders (undergraduate and postgraduate officers, learning and teaching committees, teams engaged in learner support practices, teachers engaged in routine teaching and learning practices, not just the Aimhigher co-ordinator) will form the source of evaluative evidence. In effect, the focus at this level is on institutional change, which involves those whose primary remit may not be widening participation.

Questions
Methods

Questionnaires, focus groups, and semi-structured or dialogic interviews help gather evidence; other sources of evidence are documents (e.g. a School Development Plan or newsletter) and artefacts (e.g. press releases, websites).

Level 4: Quality of impact on macro or long-term strategic objectives

This level is more relevant to HEFCE, DIUS and others interested in the macro context. One way of evaluating at this level is to make use of Higher Education and Aimhigher Partnership evaluations, in particular those at levels 2 and 3, to help develop a meta-perspective on how the policy is achieving positive effects overall. For this to be an option, it is crucial that individual evaluations make explicit the context of their evaluation, using common 'core participant data' and descriptors of the categories of activity and levels of experience.

Level 5: Changes in sector-wide and macro practices

This level concerns macro or long-term strategic objectives: the way local trends connect with (illustrate, reinforce, or contradict) longer-term national trends. Some HEIs and Aimhigher partnerships will have tracking schemes in place that will enable them to begin to comment at this level; it is, however, assumed that bringing evidence together at this level will be undertaken by the funding council.
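Levels 2 and 4 both depend on linking local evaluation records to external attainment and progression data through common 'core participant data'. As a minimal sketch of what that linkage might look like, assuming a shared participant identifier and hypothetical field names (this uses the pandas library and is not part of the toolkit):

```python
import pandas as pd

# Hypothetical local evaluation records keyed by a shared participant ID
# (the 'core participant data' this toolkit recommends agreeing in common).
participants = pd.DataFrame({
    "participant_id": ["p001", "p002", "p003"],
    "activity": ["campus visit", "mentoring", "summer school"],
    "focus_level": [1, 2, 1],
})

# Hypothetical institutional data on applications and admissions.
progression = pd.DataFrame({
    "participant_id": ["p001", "p002", "p004"],
    "applied_to_he": [True, True, False],
    "admitted": [True, False, False],
})

# A left join keeps every evaluated participant; missing rows show where
# tracking data is not yet available for a participant.
linked = participants.merge(progression, on="participant_id", how="left")
print(linked)

# A simple outcome indicator: HE application rate among evaluated
# participants (untracked participants counted as not having applied).
applied = linked["applied_to_he"].fillna(False).astype(bool)
print(f"HE application rate: {applied.mean():.0%}")
```

The design point is simply that a common identifier agreed in advance makes later meta-level analysis possible; the actual fields, datasets and tools will vary by institution and partnership.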