Tag Archives: feedback

Feedback is a gift

In academic life you get used to receiving feedback, including plenty of negative feedback when your grant proposal is declined by a funding agency or your manuscript is rejected by the editor of a journal.  We are also subject to annual performance reviews, which can be difficult if all of your proposals and manuscripts have been rejected.  So, how should we respond to negative feedback?

The Roman emperor and Stoic philosopher Marcus Aurelius is credited with the saying ‘Everything we hear is an opinion, not a fact’, which perhaps implies that we should not take negative feedback too seriously, or at least that we should look for some evidence before accepting it.

Tasha Eurich has suggested that we should mine negative feedback for insight and harness it for improvement, without incurring collateral damage to our self-confidence.  She recommends a five-point approach, based on empirical evidence:

  1. Don’t rush to react.
  2. Gather more evidence.
  3. Find a harbinger.
  4. Don’t be a lonely martyr but engage in dialogue.
  5. Remember that change is not the only option; you can accept your weaknesses, share them and work around them.

If you are the one giving the negative feedback, then it is worth remembering that the stages of response to bad news, often described by the Kübler-Ross model, are denial, anger, bargaining, depression and acceptance.  Hopefully, your feedback will not induce the full range of responses but, when it does, you should not be surprised.

See the earlier posts on giving feedback [‘Feedback on feedback’ on June 28th, 2017] and on receiving student feedback [‘Deep long-term learning’ on April 28th, 2018].


Source: Tasha Eurich, ‘The right way to respond to negative feedback’, Harvard Business Review, May 31st, 2018.

Deep long-term learning

About six months ago I wrote about providing feedback to students [see post entitled ‘Feedback on feedback’ on June 28th, 2017].  I wrote that students tend to find negative feedback unfair and unhelpful, even when it is carefully and politely worded, but that they like clear, unambiguous, instructional and directional feedback [1].  I suspect the same could be said for their teachers, many of whom fear [2] or are anxious about [3] their next student evaluation of teaching (SET) report, even though they tend to see SET data as useful [2].  Some university teachers are devastated by negative feedback [4], with inexperienced and female teachers being more sensitive and more likely to experience negative feelings [5].  What follows is a brief review (though a long blog post) of the usefulness of student evaluation of teaching, with the bottom line being: student evaluations of teaching have serious limitations when the goal is to instill deep long-term learning in a culture that values teachers.

Student evaluations of teaching are widely used in higher education because collecting the data from students at the end of each term is easy, and because the data is useful for improving teaching quality, providing input to appraisal exercises and providing evidence of institutional accountability [2].  However, the unresolved tension between the dual use of the data for teacher development and as a management tool [2, 6] has led to much debate about the appropriateness and usefulness of student evaluation of teaching, with strong advocates on both sides of the argument.

For instance, there is evidence that students’ perception of a lecturer significantly predicts teaching effectiveness ratings, with the charisma of the lecturer explaining between 65% [7] and 69% [8] of the variation in ‘lecturer ability’, so that student evaluations of teaching have been described as ‘personality contests’ [9].  Some have suggested that this leads to grading leniency, i.e. lecturers marking students more leniently in order to attract a higher rating, though this argument has been largely refuted [7]; however, several studies [10-12] report a negative association between a pessimistic attitude about future grades and ratings of teacher effectiveness.

However, of more concern is the evidence of student fatigue with teaching evaluations, with response rates declining during the academic year and from year 1 to year 4, even when adjusted for class size and grades [6].  Student completion rates for end-of-term teaching evaluations are influenced by student gender, age, specialisation, final grade, term of study, course of study and course type, which means that the respondent pools do not fully represent the distribution of students in the courses [6].  Hence, knowledge of the characteristics of the respondents is required before modifications can be made to a course in the best interests of all students; but such knowledge is rarely available for SET data.  In addition, the data is usually not normally distributed [13], implying that common statistical practices cannot be deployed in its interpretation, with the result that the lack of statistical sophistication amongst those using SET information for appraisal and promotion leads to concerns about the validity of their conclusions [8].
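To illustrate that last point, here is a minimal sketch (in Python, using entirely hypothetical rating data rather than data from any of the cited studies) of why skewed, ordinal SET scores are better summarised with medians and rank-based tests than with means and normal-theory tests:

```python
# Hypothetical 1-5 Likert-scale ratings for two teachers; SET data of this
# kind is typically discrete and skewed, not normally distributed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
a = rng.choice([1, 2, 3, 4, 5], size=200, p=[0.05, 0.05, 0.10, 0.30, 0.50])
b = rng.choice([1, 2, 3, 4, 5], size=200, p=[0.05, 0.15, 0.40, 0.30, 0.10])

# Means treat ordinal ratings as interval data; medians are more robust.
print("means:  ", a.mean(), b.mean())
print("medians:", np.median(a), np.median(b))

# A t-test assumes approximately normal data; a rank-based test such as
# Mann-Whitney U is the safer default for skewed, ordinal ratings.
print(stats.ttest_ind(a, b))
print(stats.mannwhitneyu(a, b))
```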

However, recent research creates much more fundamental doubts about the efficacy of SET data.  When learning was measured with a test at the end of the course, the teachers who received the highest SET ratings were the ones who contributed most to learning; but when learning was measured as performance in subsequent courses, the teachers with relatively low SET ratings appeared to have been most effective [14-16].  This is because making learning more difficult can cause a decrease in short-term performance, as well as in students’ subjective rating of their own learning, but can increase long-term learning.  Examples of these ‘desirable’ difficulties include spacing out practice, interleaving topics and testing rather than re-studying [16].  So, if the aim is to instill deep long-term learning within a culture that values its teachers, then student evaluations of teaching have serious limitations.

References

[1] Sellbjer S, “Have you read my comment? It is not noticeable. Change!” An analysis of feedback given to students who have failed examinations. Assessment & Evaluation in HE, 43(2):163-174, 2018.

[2] Spooren P, Brockx B & Mortelmans D, On the validity of student evaluation of teaching: the state of the art, Review of Educational Research, 83(4):598-642, 2013.

[3] Flodén J, The impact of student feedback on teaching in higher education, Assessment & Evaluation in HE, 42(7):1054-1068, 2017.

[4] Arthur L, From performativity to professionalism: lecturers’ responses to student feedback, Teaching in Higher Education, 14(4):441-454, 2009.

[5] Kogan LR, Schoenfeld-Tacher R & Hellyer PW, Student evaluations of teaching: perceptions of faculty based on gender, position and rank, Teaching in Higher Education, 15(6):623-636, 2010.

[6] Macfadyen LP, Dawson S, Prest S & Gasevic D, Whose feedback? A multilevel analysis of student completion of end-of-term teaching evaluations, Assessment & Evaluation in Higher Education, 41(6):821-839, 2016.

[7] Spooren P & Mortelmans D, Teacher professionalism and student evaluation of teaching: will better teachers receive higher ratings and will better students give higher ratings? Educational Studies, 32(2):201-214, 2006.

[8] Shevlin M, Banyard P, Davies M & Griffiths M, The validity of student evaluation of teaching in Higher Education: love me, love my lectures? Assessment & Evaluation in HE, 25(4):397-405, 2000.

[9] Kulik JA, Student ratings: validity, utility and controversy, New Directions for Institutional Research, 27(5):9-25, 2001.

[10] Feldman KA, Grades and college students’ evaluations of their courses and teachers, Research in Higher Education, 18(1):2-124, 1976.

[11] Marsh HW, Students’ evaluations of university teaching: research findings, methodological issues and directions for future research, IJ Educational Research, 11(3):253-388, 1987.

[12] Millea M & Grimes PW, Grade expectations and student evaluation of teaching, College Student Journal, 36(4):582-591, 2002.

[13] Gannaway D, Green T & Mertova P, So how big is big? Investigating the impact of class size on ratings in student evaluation, Assessment & Evaluation in HE, 43(2):175-184, 2018.

[14] Carrell SE & West JE, Does professor quality matter? Evidence from random assignment of students to professors. Journal of Political Economy, 118:409-432, 2010.

[15] Braga M, Paccagnella M & Pellizzari M, Evaluating students’ evaluation of professors, Econ. Educ. Rev., 41:71-88, 2014.

[16] Kornell N & Hausman H, Do the best teachers get the best ratings? Frontiers in Psychology, 7:570, 2016.

Feedback on feedback

Feedback on students’ assignments is a challenge for many in higher education.  Students appear to be increasingly dissatisfied with it, and academics are frustrated by its apparent ineffectiveness, especially when set against the effort required to provide it.  In the UK, the National Student Survey results show that satisfaction with assessment and feedback is increasing, but it remains the lowest-ranked category in the survey [1].  My own recent experience has been of students’ insatiable hunger for feedback on a continuing professional development (CPD) programme, despite their receiving detailed written feedback and one-to-one oral discussion of their assignments.

So, what is going wrong?  I am aware that many of my academic colleagues in engineering do not invest much time in reading the education research literature; perhaps because, like the engineering research literature, much of it is written in a language that is readily appreciated only by those immersed in the subject.  Here, then, is an accessible digest of the research on providing effective feedback, i.e. feedback that meets students’ expectations and realises the potential improvement in their performance.

It is widely accepted that feedback is an essential component of the learning cycle [2] and there is evidence that feedback is the single most powerful influence on student achievement [3, 4].  However, we often fail to realise this potential because our feedback is too generic or vague, not sufficiently timely [5], and transmission-focussed rather than student-centred or participatory [6].  In addition, our students tend not to be ‘assessment literate’, meaning that they are unfamiliar with assessment and feedback approaches and do not interpret assessment expectations in the same way as their tutors [5, 7].  Students’ reactions to feedback are strongly related to their emotional maturity, self-efficacy and motivation [1]; so, for a student with low self-esteem, negative feedback can be annihilating [8].  Emotional immaturity and assessment illiteracy, such as are typically found amongst first-year students, form a toxic mix that, in the absence of a supportive tutorial system, leads to student dissatisfaction with the feedback process [1].

So, how should we provide feedback?  I provide copious detailed comments on students’ written work, following the example of my own university tutor, who I suspect was following the example of his tutor, and so on.  I found these comments helpful but at times overwhelming.  I also remember a college tutor who made, what seemed to me, devastatingly negative comments about my writing skills, which destroyed my confidence in my writing ability for decades.  It was only restored by a Professor of English who recently complimented me on my writing, although I still harbour a suspicion that she was just being kind to me.  So, neither of my tutors got it right, although one was clearly worse than the other.  Students tend to find negative feedback unfair and unhelpful, even when it is carefully and politely worded [8].

Students like clear, unambiguous, instructional and directional feedback [8].  Feedback should provide a statement of student performance and suggestions for improvement [9], i.e. it should identify the gap between actual and expected performance and provide instructive advice on closing the gap.  This implies that specific assessment criteria are required that explicitly define the expectation [2].  The table below lists some of the positive and negative attributes of feedback based on the literature [1, 2].  However, deploying the appropriate attributes does not guarantee that students will engage with feedback; sometimes students fail to recognise that feedback is being provided, for example in informal discussion and dialogic teaching, and hence it is important to identify the nature and purpose of feedback every time it is provided.  We should reduce our over-emphasis on written feedback and make more use of oral feedback and one-to-one, or small-group, discussion.  We need to take care that the receipt of grades or marks does not obscure the feedback, perhaps by delaying the release of marks.  You could ask students about the mark they would expect in the light of the feedback, and you could require students to show in future work how they have used the feedback – both of these actions are likely to improve the effectiveness of feedback [5].

In summary, feedback that is content-driven rather than process-driven is unlikely to engage students [10].  We need to strike a better balance between positive and negative comments, with a focus on appropriate guidance and motivation rather than on justifying marks and diagnosing shortcomings [2].  For most of us, this means learning a new way of providing feedback, which is difficult and potentially arduous; however, the likely rewards are more engaged, higher-achieving students who might appreciate their tutors more.

References

[1] Pitt E & Norton L, ‘Now that’s the feedback that I want!’ Students’ reactions to feedback on graded work and what they do with it. Assessment & Evaluation in HE, 42(4):499-516, 2017.

[2] Weaver MR, Do students value feedback? Student perceptions of tutors’ written responses.  Assessment & Evaluation in HE, 31(3):379-394, 2006.

[3] Hattie JA, Identifying the salient facets of a model of student learning: a synthesis of meta-analyses.  IJ Educational Research, 11(2):187-212, 1987.

[4] Black P & Wiliam D, Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1):7-74, 1998.

[5] O’Donovan B, Rust C & Price M, A scholarly approach to solving the feedback dilemma in practice. Assessment & Evaluation in HE, 41(6):938-949, 2016.

[6] Nicol D & Macfarlane-Dick D, Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Studies in HE, 31(2):199-218, 2006.

[7] Price M, Rust C, O’Donovan B, Handley K & Bryant R, Assessment literacy: the foundation for improving student learning. Oxford: Oxford Centre for Staff and Learning Development, 2012.

[8] Sellbjer S, “Have you read my comment? It is not noticeable. Change!” An analysis of feedback given to students who have failed examinations. Assessment & Evaluation in HE, 43(2):163-174, 2018.

[9] Sadler DR, Beyond feedback: developing student capability in complex appraisal. Assessment & Evaluation in HE, 35(5):535-550, 2010.

[10] Hounsell D, Essay writing and the quality of feedback. In J Richardson, M. Eysenck & D. Piper (eds) Student learning: research in education and cognitive psychology. Milton Keynes: Open University Press, 1987.

Traditionalist tendencies revealed

Thank you for the supportive comments in response to my post on January 4th about whether to blog or not to blog [see ‘A tiny contribution to culture?’].  They dispelled any lingering doubts about continuing to write every week.  When I first started writing this blog, I didn’t have an editor.  Then, for a while, an English literature graduate, whom I know well, acted as my editor.  He didn’t run off with the butler, but his enthusiasm waned, and I am very grateful to my current editor, who ensures that my narrative threads are not severed or [too] tangled and my sentences are complete.

Feedback is a tricky thing because often it comes only from a small but vocal minority; so, how much notice should one take of it?  We live in a world where the ‘customer’ is always right and a response to feedback is often an expectation.  I felt some pressure to respond to last week’s comments even though they were positive – responding becomes almost an imperative when the comments are negative, even when they are expressed by a tiny minority of ‘customers’.  This might be appropriate if you are running a hotel or an automotive service department, but it seems inappropriate in other settings, such as education.  Engineering students need to develop creative problem-solving skills, and research shows that students tend to jump straight into algebraic manipulation whereas experts experiment to find the best approach.  This means that engineering students need to become comfortable with the slow and uncertain process of creating representations and exploring the space of possibilities, which is achieved through extensive practice, according to Martin and Schwartz.  Not surprisingly, most students find this difficult but are uncomplaining; however, for some it is not to their liking and they provide, often vocal, feedback along these lines.  This is fine and to be expected.  However, in the post-truth world of higher education, many administrators and governments appear to value the views of these vocal students more highly than those of the experts delivering the education – at least so it seems much of the time.

I am not suggesting that we shouldn’t evaluate the quality of educational provision, but perhaps it would be more appropriate to ask our students after they have had the opportunity to experience the impact of their education on their post-university life, as well as considering the impact of our students on society.  Of course, this would be much more difficult for administrators than collating a set of on-line questionnaires each term.  However, it would have a longer time constant, which would be more conducive to evolutionary rather than revolutionary changes in curricula and pedagogy.  Now I sound like a traditionalist when I have been trying so hard to be a post-modernist!

References

Martin L & Schwartz DL, A pragmatic perspective on visual representation and creative thinking, Visual Studies, 29(1):80-93, 2014.

Martin L & Schwartz DL, Prospective adaptation in the use of external representations, Cognition and Instruction, 27(4):370-400, 2009.