Author Archives: Eann Patterson

Deep long-term learning

About six months ago I wrote about providing feedback to students [see post entitled ‘Feedback on feedback’ on June 28th, 2017].  I wrote that students tend to find negative feedback unfair and unhelpful, even when it is carefully and politely worded, but that they like clear, unambiguous, instructional and directional feedback [1].  I suspect the same could be said of their teachers, many of whom fear [2] or are anxious about [3] their next student evaluation of teaching (SET) report even though they tend to see SET data as useful [2].  Some university teachers are devastated by negative feedback [4], with inexperienced and female teachers being more sensitive and more likely to experience negative feelings [5].  What follows is a brief review (though a long blog post) of the usefulness of student evaluation of teaching, with the bottom line being: student evaluations of teaching have serious limitations when the goal is to instill deep long-term learning in a culture that values teachers.

Student evaluations of teaching (SET) are widely used in higher education because collecting the data from students at the end of each term is easy and because the data is useful in: improving teaching quality; providing input to appraisal exercises; and providing evidence of institutional accountability [2].  However, the unresolved tension between the dual use of the data for teacher development and as a management tool [2, 6] has led to much debate about the appropriateness and usefulness of student evaluation of teaching with strong advocates on both sides of the argument.

For instance, there is evidence that students’ perception of a lecturer significantly predicts teaching effectiveness ratings, with the charisma of the lecturer explaining between 65% [7] and 69% [8] of the variation in ‘lecturer ability’; so that student evaluations of teaching have been described as ‘personality contests’ [9].  Some have suggested that this leads to grading leniency, i.e. lecturers marking students more leniently in order to attract a higher rating, though this argument has been largely refuted [7]; but there are several studies [10-12] that report a negative association between a pessimistic attitude about future grades and ratings of teacher effectiveness.

However, of more concern is the evidence of student fatigue with teaching evaluations: response rates decline during the academic year and from year 1 to year 4, when adjusted for class size and grades [6].  Student completion rates for end-of-term teaching evaluations are influenced by student gender, age, specialisation, final grade, term of study, course of study and course type, which means that the respondent pools do not fully represent the distribution of students in the courses [6].  Hence, knowledge of the characteristics of the respondents is required before modifications can be made to a course in the best interests of all students; but such knowledge is rarely available for SET data.  In addition, the data is usually not normally distributed [13], implying that common statistical practices cannot be deployed in its interpretation, with the result that the lack of statistical sophistication amongst those using SET information for appraisal and promotion leads to concerns about the validity of their conclusions [8].
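The point about non-normal distributions can be shown with a toy example (the numbers below are invented for illustration, not taken from any of the cited studies): SET ratings typically bunch at the top of the scale with a few low outliers, so the mean is dragged around by a handful of disgruntled respondents while the median is not.

```python
# Illustrative sketch: why the mean can mislead for skewed SET data.
from statistics import mean, median

# Hypothetical ratings for one course on a 1-5 scale, bunched at the
# top with two low outliers -- a common shape for SET responses.
ratings = [5, 5, 5, 5, 4, 4, 4, 5, 5, 1, 2]

print(round(mean(ratings), 2))  # 4.09 -- pulled down by two low ratings
print(median(ratings))          # 5 -- robust to the outliers
```

Ranking two teachers by mean scores that differ by a few tenths of a point, as promotion committees sometimes do, treats such skewed data as though it were normally distributed.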

However, recent research creates much more fundamental doubts about the efficacy of SET data.  When learning was measured with a test at the end of the course, the teachers who received the highest SET ratings were the ones who contributed most to learning; but when learning was measured as performance in subsequent courses, then the teachers with relatively low SET ratings appeared to have been most effective [14-16].  This is because making learning more difficult can cause a decrease in short-term performance, as well as in students’ subjective rating of their own learning, but can increase long-term learning.  These ‘desirable’ difficulties include spacing study sessions over time, interleaving different topics rather than blocking them, and testing students rather than simply re-presenting material.  So, if the aim is to instill deep long-term learning within a culture that values its teachers, then student evaluations of teaching have serious limitations.

References

[1] Sellbjer S, “Have you read my comment? It is not noticeable. Change!” An analysis of feedback given to students who have failed examinations. Assessment & Evaluation in HE, 43(2):163-174, 2018.

[2] Spooren P, Brockx B & Mortelmans D, On the validity of student evaluation of teaching: the state of the art, Review of Educational Research, 83(4):598-642, 2013.

[3] Flodén J, The impact of student feedback on teaching in higher education, Assessment & Evaluation in HE, 42(7):1054-1068, 2017.

[4] Arthur L, From performativity to professionalism: lecturers’ responses to student feedback, Teaching in Higher Education, 14(4):441-454, 2009.

[5] Kogan LR, Schoenfeld-Tacher R & Hellyer PW, Student evaluations of teaching: perceptions of faculty based on gender, position and rank, Teaching in Higher Education, 15(6):623-636, 2010.

[6] Macfadyen LP, Dawson S, Prest S & Gasevic D, Whose feedback? A multilevel analysis of student completion of end-of-term teaching evaluations, Assessment & Evaluation in Higher Education, 41(6):821-839, 2016.

[7] Spooren P & Mortelmans D, Teacher professionalism and student evaluation of teaching: will better teachers receive higher ratings and will better students give higher ratings? Educational Studies, 32(2):201-214, 2006.

[8] Shevlin M, Banyard P, Davies M & Griffiths M, The validity of student evaluation of teaching in Higher Education: love me, love my lectures? Assessment & Evaluation in HE, 24(4):397-405, 2000.

[9] Kulik JA, Student ratings: validity, utility and controversy, New Directions for Institutional Research, 27(5):9-25, 2001.

[10] Feldman KA, Grades and college students’ evaluations of their courses and teachers, Research in Higher Education, 18(1):2-124, 1976.

[11] Marsh HW, Students’ evaluations of university teaching: research findings, methodological issues and directions for future research, International Journal of Educational Research, 11(3):253-388, 1987.

[12] Millea M & Grimes PW, Grade expectations and student evaluation of teaching, College Student Journal, 36(4):582-591, 2002.

[13] Gannaway D, Green T & Mertova P, So how big is big? Investigating the impact of class size on ratings in student evaluation, Assessment & Evaluation in HE, 43(2):175-184, 2018.

[14] Carrell SE & West JE, Does professor quality matter? Evidence from random assignment of students to professors. Journal of Political Economy, 118:409-432, 2010.

[15] Braga M, Paccagnella M & Pellizzari M, Evaluating students’ evaluation of professors, Economics of Education Review, 41:71-88, 2014.

[16] Kornell N & Hausman H, Do the best teachers get the best ratings? Frontiers in Psychology, 7:570, 2016.

Massive engineering

Last month I was at the Photomechanics 2018 conference in Toulouse in France.  Photomechanics is the science of using photons to measure deformation and displacements in anything, from biological cells to whole engineering structures, such as bridges or power stations [see for example: ‘Counting photons to measure stress’ posted on November 18th, 2015].  I am interested in the challenges created by the extremes of scale and environmental conditions; although on this occasion we presented our research on addressing the challenges of industrial applications, in the EU projects INSTRUCTIVE [see ‘Instructive update’ on October 4th, 2017] and MOTIVATE [see ‘Brave New World’ posted on January 10th, 2018].

It was a small conference without parallel sessions and the organisers were more imaginative than usual in providing us with opportunities for interaction.  At the end of the first day of talks, we went on a guided walking tour of old Toulouse.  At the end of the second day, we went to the Toulouse Aerospace Museum and had the chance to go onboard Concorde.

I stayed an extra day for an organised tour of the Airbus A380 assembly line.  Only the engine pylons are made in Toulouse.  The rest of the 575-seater plane is manufactured around Europe and arrives in monthly road convoys after travelling by sea to local ports.  The cockpit, centre and tail sections of the double-deck fuselage travel separately on specially-made trucks, with each 45m-long wing section following on its own transporter.  It takes about a month to assemble these massive sections.  This is engineering on a huge scale performed with laser precision (laser systems are used to align the sections).  The engines are also manufactured elsewhere and transported to Toulouse to be hung on the wings.  The maximum diameter of the Rolls-Royce Trent 900 engines, being attached to the plane we saw, is approximately the same as the fuselage diameter of an A320 airplane.

Once the A380 is assembled and its systems tested, then it is flown to another Airbus factory in Germany to be painted and for the cabin to be fitted out to the customer’s specification.  In total, 11 Airbus factories in France, Germany, Spain and the United Kingdom are involved in producing the A380; this does not include the extensive supply chain supporting these factories.  As I toured the assembly line and our guide assailed us with facts and figures about the scale of the operation, I was thinking about why the nuclear power industry across Europe could not collaborate on this scale to produce affordable, identical power stations.  Airbus originated from a political decision in the 1970s to create a globally-competitive European aerospace industry that led to a collaboration between national manufacturers which evolved into the Airbus company.  One vision for fusion energy is a globally dispersed manufacturing venture that would evolve from the consortium that is currently building the ITER experiment and planning the DEMO plant.  However, there does not appear to be any hint that the nuclear fission industry is likely to follow the example of the European aerospace industry to create a globally-competitive industry producing massive pieces of engineering within a strictly regulated environment.

There was no photography allowed at Airbus so today’s photograph is of Basilique Notre-Dame de la Daurade in Toulouse.

On vacation

I am on vacation and off-grid, probably getting cold and wet in the English Lake District.  If you have withdrawal symptoms from this blog then follow the links to find out why you need a vacation too!

Gone walking posted on April 19th, 2017.

Digital detox with a deep vacation posted on August 10th, 2016.

Deep vacation posted on July 29th, 2015.


Some changes to Realize Engineering

The advertising industry is becoming a pervasive influence on us – telling us how we should eat, dress, travel, vacation, borrow, bank, insure, think and vote.  We are constantly bombarded with messages designed to induce us to buy goods or services that we don’t really need and that undermine progress towards a sustainable society [see my post ‘Old is beautiful‘ on May 1st, 2015].

Many services are offered to us for free in order to expose us to advertisements and to collect data about our habits and interests that are put to uses about which we know little.  These issues became prominent last week with the allegations about the inappropriate use of data from Facebook by Cambridge Analytica [see for example The Guardian on March 25th, 2018].  A number of organisations have reacted by closing down their Facebook pages [see for example Reuters on March 23rd, 2018] and a #deletefacebook movement has started [see for example The Guardian on March 25th, 2018, again].  I have joined them and deleted my Facebook page as well as disconnecting this blog from Facebook.  Also, in a couple of weeks I plan to stop using Twitter to disseminate this blog; so, if you receive this blog via Twitter then please start to follow it directly.

Finally, the advertisements at the bottom of my blog posts will disappear because I am paying to use WordPress instead of allowing advertising to cover the costs.  A side-effect of this change is a new url: realizeengineering.blog. So please update your bookmarks, if it doesn’t happen automatically!