Learning & Teaching

A short scramble in the Hindu Kush

I have been resolving an extreme case of Tsundoku [see ‘Tsundoku’ on May 24th, 2017] over the last few weeks by reading ‘A short walk in the Hindu Kush’ by Eric Newby, which I bought nearly forty years ago but never read, despite taking it on holiday a couple of years ago.  Although it was first published in 1958, it is still in print and in its 50th edition; so, it has become something of a classic piece of travel writing.  It is funny, understated and very English, or at least early to mid-20th century English.

It felt quite nostalgic for me because about thirty years ago I took a short walk of my own in Gilgit-Baltistan, which lies in northern Pakistan on the border with China, to the west of Nuristan in Afghanistan where Eric Newby and Hugh Carless took their not-so-short walk.  I went for a scramble up a small peak to get a better view of the mountains of the Hindu Kush after a drive of several days up the Karakoram Highway.  We were driven from Islamabad to about a mile short of the border with China on the Khunjerab Pass at 4730 m [compared to Mont Blanc at 4810 m].

I was there because the Pakistani Government supplied a small group of lecturers with a mini-bus and driver to take us up the Karakoram Highway [and back!] in exchange for a course of CPD [Continuing Professional Development] lectures on structural integrity.  We delivered the course in Islamabad to an audience of academics and industrialists during the week before the trip up the Karakoram Highway.  So, Eric Newby’s descriptions of whole villages turning out to greet them, and of apricots drying in the sun on the flat roofs of the houses, brought back memories for me.

Everyday examples contribute to successful learning

Some weeks ago I quoted Adams and Felder [2008], who said that the ‘educational role of faculty [academic staff] is not to impart knowledge; but to design learning environments that support…knowledge acquisition’ [see ‘Creating an evolving learning environment’ on February 21st, 2018].  A correspondent asked how I create a learning environment and, in response, this is the first in a series of posts on the topic that will appear every third week.  The material is taken from a one-day workshop that Pat Campbell [of Campbell-Kibler Associates] and I have given periodically in the USA [supported by NSF] and the UK [supported by HEA] for engineering academics.

Albert Einstein is reputed to have said that ‘knowledge is experience, everything else is just information’.  I believe that a key task for a university teacher of engineering is to find the common experiences of their students and use them to illustrate engineering principles.  This is relatively straightforward for senior students, because they will have taken courses or modules delivered by your colleagues; however, it is more of a challenge for students entering the first year of an engineering programme.  Everyone is unique and a product of their formative conditions, which makes it tricky to identify common experiences that can be used to explain engineering concepts.  The Everyday Engineering Examples, which feature on a page of this blog [https://realizeengineering.blog/everyday-engineering-examples/], were developed to address the need for illustrative situations that fall within the experience of most, if not all, students.  Two popular examples are using the splits in sausages, when you cook them, to illustrate two-dimensional stress systems in pressure vessels [see lesson plan S11] and using a glass to extinguish a birthday candle on a cupcake to explain combustion processes [see lesson plan T11].
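
For readers who want the numbers behind the sausage example, here is a minimal sketch of the thin-walled pressure vessel formulas that lesson plan S11 builds on; the pressure and dimensions below are illustrative guesses, not values taken from the lesson plan.

```python
# Thin-walled cylindrical pressure vessel: why a sausage splits lengthwise.
# Hoop (circumferential) stress: sigma_h = p*r/t
# Longitudinal (axial) stress:   sigma_l = p*r/(2*t)
# The hoop stress is twice the longitudinal stress, so the skin fails
# first on the plane perpendicular to it, i.e. along the sausage's length.

p = 20e3    # internal pressure, Pa [illustrative guess for a cooking sausage]
r = 0.01    # radius, m
t = 0.1e-3  # skin thickness, m

sigma_hoop = p * r / t
sigma_long = p * r / (2 * t)

print(f"hoop stress:         {sigma_hoop / 1e6:.1f} MPa")
print(f"longitudinal stress: {sigma_long / 1e6:.1f} MPa")
print(f"ratio: {sigma_hoop / sigma_long:.0f} -> the split runs lengthwise")
```

The factor of two is independent of the particular numbers chosen, which is why every sausage tells the same story.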

Everyday Engineering Examples were developed as part of an educational research project, funded by the US National Science Foundation [see ENGAGE], which demonstrated that this approach to teaching works.  The project found that significantly more students rated their learning with Everyday Engineering Examples as high or significant than in the control classes, independent of the level of difficulty involved [Campbell et al. 2008].  So, this is one way in which I create a learning environment that supports knowledge acquisition.  More in future posts…

References

Adams RS & Felder RM, Reframing professional development: A systems approach to preparing engineering educators to educate tomorrow’s engineers. J. Engineering Education, 97(3):230-240, 2008

Campbell PB, Patterson EA, Busch-Vishniac I & Kibler T, Integrating Applications in the Teaching of Fundamental Concepts, Proc. 2008 ASEE Annual Conference and Exposition, (AC 2008-499), 2008


CALE #1 [Creating A Learning Environment: a series of posts based on a workshop given periodically by Pat Campbell and Eann Patterson in the USA supported by NSF and the UK supported by HEA]

Deep long-term learning

About six months ago I wrote about providing feedback to students [see ‘Feedback on feedback’ on June 28th, 2017].  I wrote that students tend to find negative feedback unfair and unhelpful, even when it is carefully and politely worded; but they like clear, unambiguous, instructional and directional feedback [1].  I suspect the same could be said of their teachers, many of whom fear [2], or are anxious about [3], their next student evaluation of teaching (SET) report, even though they tend to see SET data as useful [2].  Some university teachers are devastated by negative feedback [4], with inexperienced and female teachers being more sensitive and more likely to experience negative feelings [5].  What follows is a brief review [though a long blog post] of the usefulness of student evaluation of teaching, with the bottom line being: student evaluations of teaching have serious limitations when the goal is to instil deep long-term learning in a culture that values teachers.

Student evaluations of teaching (SET) are widely used in higher education because collecting the data from students at the end of each term is easy and because the data is useful in: improving teaching quality; providing input to appraisal exercises; and providing evidence of institutional accountability [2].  However, the unresolved tension between the dual use of the data for teacher development and as a management tool [2, 6] has led to much debate about the appropriateness and usefulness of student evaluation of teaching with strong advocates on both sides of the argument.

For instance, there is evidence that students’ perception of a lecturer significantly predicts teaching effectiveness ratings, with the charisma of the lecturer explaining between 65% [7] and 69% [8] of the variation in ‘lecturer ability’; consequently, student evaluations of teaching have been described as ‘personality contests’ [9].  Some have suggested that this leads to grading leniency, i.e. lecturers marking students more leniently in order to attract a higher rating, though this argument has been largely refuted [7]; however, several studies [10-12] report a negative association between a pessimistic attitude about future grades and ratings of teacher effectiveness.
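
To unpack what ‘explaining 65% of the variation’ means, here is a minimal sketch of the underlying statistic, the coefficient of determination R², computed for some invented charisma and ability ratings; the data are purely illustrative, not drawn from the studies cited.

```python
import numpy as np

# Invented data: charisma scores and 'lecturer ability' ratings
# for eight hypothetical lecturers, both on 1-5 scales.
charisma = np.array([2.1, 3.4, 4.5, 3.0, 4.8, 2.6, 3.9, 4.2])
ability  = np.array([2.4, 3.1, 4.4, 3.3, 4.6, 2.5, 3.5, 4.3])

# Least-squares fit of ability on charisma.
slope, intercept = np.polyfit(charisma, ability, 1)
predicted = slope * charisma + intercept

# R^2 = 1 - (residual variance / total variance): the fraction of the
# variation in ability ratings that the charisma scores account for.
ss_res = np.sum((ability - predicted) ** 2)
ss_tot = np.sum((ability - ability.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"R^2 = {r_squared:.2f}")
```

When a single personality measure accounts for two-thirds of the variation in the rating, little room is left for anything else the lecturer does, which is exactly the ‘personality contest’ concern.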

However, of more concern is the evidence of student fatigue with teaching evaluations, with response rates declining during the academic year and from year 1 to year 4, when adjusted for class size and grades [6].  Student completion rates for end-of-term teaching evaluations are influenced by student gender, age, specialisation, final grade, term of study, course of study and course type.  This means that the respondent pools do not fully represent the distribution of students in the courses [6].  Hence, a knowledge of the characteristics of the respondents is required before modifications can be made to a course in the best interests of all students; but such knowledge is rarely available for SET data.  In addition, the data is usually not normally distributed [13], implying that common statistical practices cannot be deployed in its interpretation, with the result that the lack of statistical sophistication amongst those using SET information for appraisal and promotion leads to concerns about the validity of their conclusions [8].
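
As a concrete illustration of that last point, here is a hedged sketch of the kind of check one might run before treating SET scores with parametric statistics; the ratings are invented, and the tests shown are standard SciPy routines rather than anything prescribed by the studies cited.

```python
import numpy as np
from scipy import stats

# Invented end-of-term ratings (1-5) for two hypothetical modules;
# SET data is typically skewed towards the top of the scale like this.
module_a = np.array([5, 5, 4, 5, 4, 5, 3, 5, 4, 5, 5, 2, 5, 4, 5])
module_b = np.array([4, 5, 3, 4, 5, 4, 4, 2, 4, 5, 3, 4, 5, 4, 4])

# Shapiro-Wilk test: a small p-value says the sample is unlikely to be
# normally distributed, so comparing raw means with a t-test is unsafe.
for name, data in [("A", module_a), ("B", module_b)]:
    statistic, p = stats.shapiro(data)
    print(f"module {name}: Shapiro-Wilk p = {p:.3f}")

# A rank-based test such as Mann-Whitney U makes no normality assumption.
u_statistic, p = stats.mannwhitneyu(module_a, module_b)
print(f"Mann-Whitney U p = {p:.3f}")
```

If the normality test fails, the rank-based comparison is the safer default; yet appraisal and promotion committees routinely compare raw SET means.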

However, recent research creates much more fundamental doubts about the efficacy of SET data.  When learning was measured with a test at the end of the course, the teachers who received the highest SET ratings were the ones who contributed most to learning; but when learning was measured as performance in subsequent courses, the teachers with relatively low SET ratings appeared to have been most effective [14-16].  This is because making learning more difficult can cause a decrease in short-term performance, as well as in students’ subjective rating of their own learning, but can increase long-term learning.  Such ‘desirable’ difficulties include spacing out practice, interleaving topics and testing rather than re-studying.  So, if the aim is to instil deep long-term learning within a culture that values its teachers, then student evaluations of teaching have serious limitations.

References

[1] Sellbjer S, “Have you read my comment? It is not noticeable. Change!” An analysis of feedback given to students who have failed examinations. Assessment & Evaluation in HE, 43(2):163-174, 2018.

[2] Spooren P, Brockx B & Mortelmans D, On the validity of student evaluation of teaching: the state of the art, Review of Educational Research, 83(4):598-642, 2013.

[3] Flodén J, The impact of student feedback on teaching in higher education, Assessment & Evaluation in HE, 42(7):1054-1068, 2017.

[4] Arthur L, From performativity to professionalism: lecturers’ responses to student feedback, Teaching in Higher Education, 14(4):441-454, 2009.

[5] Kogan LR, Schoenfeld-Tacher R & Hellyer PW, Student evaluations of teaching: perceptions of faculty based on gender, position and rank, Teaching in Higher Education, 15(6):623-636, 2010.

[6] Macfadyen LP, Dawson S, Prest S & Gasevic D, Whose feedback? A multilevel analysis of student completion of end-of-term teaching evaluations, Assessment & Evaluation in HE, 41(6):821-839, 2016.

[7] Spooren P & Mortelmans D, Teacher professionalism and student evaluation of teaching: will better teachers receive higher ratings and will better students give higher ratings? Educational Studies, 32(2):201-214, 2006.

[8] Shevlin M, Banyard P, Davies M & Griffiths M, The validity of student evaluation of teaching in Higher Education: love me, love my lectures? Assessment & Evaluation in HE, 24(4):397-405, 2000.

[9] Kulik JA, Student ratings: validity, utility and controversy, New Directions for Institutional Research, 27(5):9-25, 2001.

[10] Feldman KA, Grades and college students’ evaluations of their courses and teachers, Research in Higher Education, 18(1):2-124, 1976.

[11] Marsh HW, Students’ evaluations of university teaching: research findings, methodological issues and directions for future research, IJ Educational Research, 11(3):253-388, 1987.

[12] Millea M & Grimes PW, Grade expectations and student evaluation of teaching, College Student Journal, 36(4):582-591, 2002.

[13] Gannaway D, Green T & Mertova P, So how big is big? Investigating the impact of class size on ratings in student evaluation, Assessment & Evaluation in HE, 43(2):175-184, 2018.

[14] Carrell SE & West JE, Does professor quality matter? Evidence from random assignment of students to professors. J. Political Economy, 118:409-432, 2010.

[15] Braga M, Paccagnella M & Pellizzari M, Evaluating students’ evaluation of professors, Econ. Educ. Rev., 41:71-88, 2014.

[16] Kornell N & Hausman H, Do the best teachers get the best ratings? Frontiers in Psychology, 7:570, 2016.

Some changes to Realize Engineering

The advertising industry is becoming a pervasive influence on us – telling us how we should eat, dress, travel, vacation, borrow, bank, insure, think and vote.  We are constantly bombarded with messages designed to induce us to buy goods or services that we don’t really need and that undermine progress towards a sustainable society [see my post ‘Old is beautiful‘ on May 1st, 2015].

Many services are offered to us for free in order to expose us to advertisements and to collect data about our habits and interests that are put to uses about which we know little.  These issues became prominent last week with the allegations about the inappropriate use of data from Facebook by Cambridge Analytica [see for example The Guardian on March 25th, 2018].  A number of organisations have reacted by closing down their Facebook pages [see for example Reuters on March 23rd, 2018] and a #deletefacebook movement has started [see for example The Guardian on March 25th, 2018, again].  I have joined them and deleted my Facebook page, as well as disconnecting this blog from Facebook.  Also, in a couple of weeks I plan to stop using Twitter to disseminate this blog; so, if you receive this blog via Twitter, then please start to follow it directly.

Finally, the advertisements at the bottom of my blog posts will disappear because I am paying to use WordPress instead of allowing advertising to cover the costs.  A side-effect of this change is a new url, realizeengineering.blog/, so please update your bookmarks if it doesn’t happen automatically!