The project will experiment with, and evaluate, three approaches to measuring learning gain among cohorts of students across five disciplines in two institutions. It will assess the institutional factors that may help or hinder the adoption of each approach.
Partners: City College Norwich
Project methodologies: Grades; Surveys; Mixed methods; Other qualitative methods
The trial of three different learning gain measures at University of East Anglia: what do the approaches offer?
The overall aim of the project is to experiment with, and evaluate, three different approaches to identifying and measuring learning gain using data from cohorts of students across different discipline areas during 2015-16 and 2016-17. The three approaches are:
- student marks and GPA
- self-efficacy assessments
- concept inventories.
Aims and objectives
- To test each approach to learning gain (student marks/GPA; self-efficacy assessments; concept inventories) on at least two cohorts of students
- To identify strengths and weaknesses of each approach to identifying and measuring learning gain
- To assess institutional factors that may either serve as barriers to, or help facilitate, the adoption of each approach to identifying and measuring learning gain
- To understand the suitability and scalability of each of the measures.
Experiences and outcomes
The institutional perspective:
One distinct benefit of these approaches - in particular, student marks/GPA and self-efficacy assessment (SEA) - is that they are embedded in the pedagogy and so do not require additional student buy-in or engagement.
For example, SEA has been actively championed within the School of Economics for a number of years, and there has subsequently been wider interest in other schools in implementing this type of formative assessment. For concept inventories, student engagement was also greater where these were embedded as part of the course or module, as in Chemistry.
In terms of differences in average marks and banded GPA, there is considerable variation in the apparent distance travelled between different discipline areas at UEA. Expressed as marks, the gap between the cohort with the greatest distance travelled (average student mark 5.52% higher in the final year than in the first year) and the cohort with the lowest (average student mark 4.58% lower) is over 10 percentage points. The reasons behind these differences among cohorts have been examined through interviews. The emerging findings, which highlight some inconsistencies in marking cultures, are that:
- While a generic marking scale is applied across the university, several academics have developed more subject-based marking rubrics
- The nature of a subject gives rise to a distinct marking profile, with mathematical subjects producing a different (bimodal) distribution of marks compared to essay-based subjects, whose marks tend to be more clustered
- There is an acceptance of the subjectivity of the marking process in some subjects, especially when it comes to small differences (for example 2%) in marks awarded
- The nature of assessment design varies from course to course, with some students having to produce different numbers of assessments for modules of the same credit size
- Opportunities to discuss marking and assessment approaches between schools are limited.
Positive learning gain is associated with confidence gain. When students learn from each other in the classroom, their confidence in tackling similar problems in the future also increases (at both class level and student level).
Using concept inventories, it appears that conceptual learning gain attributable to the module took place. The regression analysis indicates that students who performed worse in the first sitting exhibited larger absolute improvements in their conceptual understanding than their better-performing peers.
Quotes about the impact of the project
From staff in direct contact with students:
"Being part of the project team has been particularly interesting in surfacing the issue of differing marking mind-sets."
"The learning gain project has brought together a team of pedagogic scholars and professional support staff to focus on operationalising a system of learning gain measures for the University. The insights derived have already proved valuable in informing approaches to learning enhancement in a range of subject areas." Prof Neil Ward, PVC Academic Affairs, UEA
In developing and implementing learning gain initiatives generally, strong staff buy-in is needed to ensure success. This was the case at UEA, which helped to facilitate the implementation of the project reported in this case study. Existing capacity, together with a strong teaching-focused culture alongside the research culture, also acted as an enabler and helped to stimulate staff interest. The fact that the UEA project was not set up hierarchically, but followed a bottom-up, inclusive process design, was a further enabler and helped to fuel interest in the initiative.
When it comes to analysing and using student marks as a way to measure learning gain, existing systems and units in place that deal with student marks are important. At UEA, the role of the University’s Business Intelligence Unit (BIU) and their existing systems for data analysis regarding student performance facilitated the use of student marks and GPA as measures of learning gain.
Any initiative to develop quantitative measures around the efficacy of learning and teaching may be interpreted by some stakeholders as part of wider surveillance and performance management cultures. Another issue to consider is research ethics, which can add an additional burden to pursuing these types of research initiatives. This raises the general question of where evaluating teaching ends and doing research begins – institutions need to consider this issue carefully.
These are some of the issues that HE institutions interested in implementing learning gain approaches may wish to consider.
Student marks and GPA
This approach compares a standard measure of actual percentage marks awarded at two points in time (undergraduate marks only). The methodology compares the average mark per student cohort, by school and then by route (standard, with foundation year, or with integrated master's year). An average mark was calculated using the last five years of student cohorts' marks at the end of year 1 and compared to the average mark (calculated in the same way) at the end of year 3. This was then converted to a raw GPA score, and an amended form of the HEA GPA scale was used to give each student a banded GPA.
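As a rough sketch of this comparison in code: the example data below is invented, and the band thresholds are hypothetical placeholders, not the actual amended HEA scale used in the project.

```python
# Illustrative sketch of the cohort marks/GPA comparison.
# All data and band thresholds here are assumptions for demonstration.

def cohort_average(marks):
    """Mean percentage mark for a cohort of students."""
    return sum(marks) / len(marks)

def banded_gpa(mark, bands=((70, 4.0), (60, 3.0), (50, 2.0), (40, 1.0))):
    """Map a percentage mark to a banded GPA (placeholder bands only)."""
    for threshold, gpa in bands:
        if mark >= threshold:
            return gpa
    return 0.0

year1 = [55, 62, 48, 71, 58]  # end-of-year-1 marks (example data)
year3 = [60, 66, 52, 74, 61]  # end-of-year-3 marks, same cohort

gain = cohort_average(year3) - cohort_average(year1)
print(f"average mark change: {gain:+.2f} points")
print("banded GPA, year 3:", [banded_gpa(m) for m in year3])
```

The cohort-level average change, rather than individual trajectories, is what the methodology above compares between schools and routes.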
The three learning gain indicators in the self-efficacy assessments used in this project are:
- self-efficacy, measured as student self-reported confidence in formative assessment performance
- self-assessment skills, measured as the statistical association of student confidence and student performance
- peer-instruction learning gains, measured as the difference in the proportion of correct responses to questions before and after peer-instruction.
Simple concept inventory tests were administered at two points during each of the academic years 2015-16 and 2016-17 in Chemistry, Biology and Pharmacy to examine this approach to measuring learning gain across a period of study. Based on these tests, a 'normalized gain' was then determined between a first and second sitting of the concept inventory instrument, which is interpreted as a measure of conceptual learning gain associated with a course.
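Normalized gain can be read as the fraction of the available improvement a student actually achieves between sittings. The case study does not spell out its exact formula; the sketch below assumes the standard Hake-style formulation, with invented example scores.

```python
# A minimal sketch of normalized gain, assuming the common
# Hake-style formulation: g = (post - pre) / (max - pre).
# Scores below are invented examples.

def normalized_gain(pre, post, max_score=100):
    """Fraction of the available improvement actually achieved."""
    if pre >= max_score:
        return 0.0  # no headroom left to improve
    return (post - pre) / (max_score - pre)

# A weaker first sitting leaves more headroom, so the same effort can
# show up as a larger absolute improvement, echoing the regression
# finding reported above.
print(normalized_gain(30, 72))  # absolute improvement of 42 points
print(normalized_gain(80, 88))  # absolute improvement of 8 points
```

Because the denominator shrinks as the first-sitting score rises, normalized gain partly corrects for ceiling effects when comparing students who started at different levels.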
Publications and forums
Arico, F., Gillespie, H., Lancaster, S., Ward, N. and Ylonen, A. (2017) Lesson in Learning Gain: insights and critique, Higher Education Pedagogies, Special Issue on Learning Gain (forthcoming)
- Annamari Ylonen, School of Education and Lifelong Learning, UEA; email firstname.lastname@example.org
- Helena Gillespie, School of Education and Lifelong Learning, UEA; email email@example.com
Authors of case study: Arico, F.; Gillespie, H.; Lancaster, S.; Ward, N. and Ylonen, A.