
Learning problem-solving skills

Inukshuk: meaning ‘in the likeness of a human’ in the Inuit language. A traditional symbol meaning ‘someone was here’ or ‘you are on the right path’.

One definition of engineering given in the Oxford English Dictionary is ‘the action of working artfully to bring something about’.  This action usually requires creative problem-solving, a skill common to all engineers regardless of their field of specialisation.  In many universities, students acquire this skill through solving example problems set by their instructors, supported by examples classes and/or tutorials.

In my lectures, I solve example problems in class using pen and paper combined with a visualiser, and then give the students a set of problems to solve themselves.  The answers, but not the solutions, are provided, so that students know when they have arrived at the correct answer but not how to get there.  Students find this difficult and complain, because I am putting the emphasis on their learning of problem-solving skills, which requires considerable effort from them.  There are no short-cuts – it’s a process of deep learning [see ‘Deep long-term learning’ on April 18th, 2018].

Research shows that students tend to jump straight into algebraic manipulation of equations, whereas experts experiment to find the best approach to solving a problem.  The transition from student to skilled problem-solver requires students to become comfortable with the slow and uncertain process of creating representations of the problem and exploring possible approaches to its solution [Martin & Schwartz, 2014].  It takes extensive practice to develop these problem-solving skills [Martin & Schwartz, 2009].  For instance, it is challenging to persuade students to sketch a representation of the problem that they are trying to solve [see ‘Meta-representational competence’ on May 13th, 2015].  Working in small groups with a tutor or a peer-mentor is an effective way of supporting students in acquiring these skills.  However, it is important to ensure that the students remain engaged in the problem-solving, with the tutor acting as a consultant or guide who is not directly involved in solving the problem but can give students confidence that they are on the right path.

[Footnote: a visualiser is the modern equivalent of an overhead projector (OHP); instead of projecting optically, it uses a digital camera and projector.  It probably deserves to be on the Mindset List, since it is one of those differences between a professor’s experience as a student and our students’ experience [see ‘Engineering idiom’ on September 12th, 2018].]


Martin L & Schwartz DL, A pragmatic perspective on visual representation and creative thinking, Visual Studies, 29(1):80-93, 2014.

Martin L & Schwartz DL, Prospective adaptation in the use of external representations, Cognition and Instruction, 27(4):370-400, 2009.


CALE #9 [Creating A Learning Environment: a series of posts based on a workshop given periodically by Pat Campbell and Eann Patterson in the USA (supported by NSF) and in the UK (supported by HEA)] – although this post is based on an introduction to tutorials given to new students and staff at the University of Liverpool in 2015 & 2016.

Photo: ILANAAQ_Whistler by NordicLondon (CC BY-NC 2.0) https://www.flickr.com/photos/25408600@N00/189300958/

Deep long-term learning

About six months ago I wrote about providing feedback to students [see ‘Feedback on feedback’ on June 28th, 2017].  I wrote that students tend to find negative feedback unfair and unhelpful, even when it is carefully and politely worded, but that they like clear, unambiguous, instructional and directional feedback [1].  I suspect the same could be said of their teachers, many of whom fear [2] or are anxious about [3] their next student evaluation of teaching (SET) report, even though they tend to see SET data as useful [2].  Some university teachers are devastated by negative feedback [4], with inexperienced and female teachers being more sensitive and more likely to experience negative feelings [5].  What follows is a brief review (though a long blog post) of the usefulness of student evaluation of teaching, with the bottom line being: student evaluations of teaching have serious limitations when the goal is to instill deep long-term learning in a culture that values teachers.

Student evaluations of teaching (SET) are widely used in higher education because collecting the data from students at the end of each term is easy and because the data is useful in: improving teaching quality; providing input to appraisal exercises; and providing evidence of institutional accountability [2].  However, the unresolved tension between the dual use of the data for teacher development and as a management tool [2, 6] has led to much debate about the appropriateness and usefulness of student evaluation of teaching with strong advocates on both sides of the argument.

For instance, there is evidence that students’ perception of a lecturer significantly predicts teaching-effectiveness ratings, with the charisma of the lecturer explaining between 65% [7] and 69% [8] of the variation in ‘lecturer ability’; consequently, student evaluations of teaching have been described as ‘personality contests’ [9].  Some have suggested that this leads to grading leniency, i.e. lecturers marking students more leniently in order to attract a higher rating, though this argument has been largely refuted [7]; however, several studies [10-12] report a negative association between a pessimistic attitude about future grades and ratings of teacher effectiveness.

Of more concern is the evidence of student fatigue with teaching evaluations, with response rates declining during the academic year and from year 1 to year 4, when adjusted for class size and grades [6].  Student completion rates for end-of-term teaching evaluations are influenced by student gender, age, specialisation, final grade, term of study, course of study and course type, which means that the respondent pools do not fully represent the distribution of students in the courses [6].  Hence, knowledge of the characteristics of the respondents is required before modifications can be made to a course in the best interests of all students; but such knowledge is rarely available for SET data.  In addition, the data is usually not normally distributed [13], implying that common statistical practices cannot be deployed in its interpretation, with the result that the lack of statistical sophistication amongst those using SET information for appraisal and promotion leads to concerns about the validity of their conclusions [8].
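The point about non-normal distributions can be illustrated with a small sketch in Python, using entirely synthetic ratings rather than real SET data: when ratings cluster near the top of the scale, the mean is pulled down by a few extreme low ratings while a robust summary such as the median barely moves, so reporting SET scores by their mean alone can mislead.

```python
# Sketch with synthetic data (not real SET results): ratings on a 1-5 scale
# often pile up near the top, giving a skewed rather than normal distribution.
# In that case the mean is sensitive to a handful of extreme low ratings,
# while the median is comparatively stable.
from statistics import mean, median

# Hypothetical ratings for one course: most students rate it 4 or 5
ratings = [5] * 10 + [4] * 10 + [3] * 2

# The same course after three disgruntled students add the minimum rating
ratings_with_outliers = ratings + [1] * 3

print(round(mean(ratings), 2), median(ratings))          # 4.36 4.0
print(round(mean(ratings_with_outliers), 2),
      median(ratings_with_outliers))                     # 3.96 4
```

Three low ratings shift the mean by 0.4 of a point on a five-point scale, enough to change an appraisal outcome, while the median is unchanged; this is one reason why applying normal-theory summaries to skewed SET data is questionable.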

However, recent research raises more fundamental doubts about the efficacy of SET data.  When learning was measured with a test at the end of the course, the teachers who received the highest SET ratings were the ones who contributed most to learning; but when learning was measured as performance in subsequent courses, the teachers with relatively low SET ratings appeared to have been most effective [14-16].  This is because making learning more difficult can cause a decrease in short-term performance, and in students’ subjective rating of their own learning, but can increase long-term learning; such obstacles have been termed ‘desirable’ difficulties.  So, if the aim is to instill deep long-term learning within a culture that values its teachers, then student evaluations of teaching have serious limitations.


[1] Sellbjer S, “Have you read my comment? It is not noticeable. Change!” An analysis of feedback given to students who have failed examinations. Assessment & Evaluation in HE, 43(2):163-174, 2018.

[2] Spooren P, Brockx B & Mortelmans D, On the validity of student evaluation of teaching: the state of the art, Review of Educational Research, 83(4):598-642, 2013.

[3] Flodén J, The impact of student feedback on teaching in higher education, Assessment & Evaluation in HE, 42(7):1054-1068, 2017.

[4] Arthur L, From performativity to professionalism: lecturers’ responses to student feedback, Teaching in Higher Education, 14(4):441-454, 2009.

[5] Kogan LR, Schoenfeld-Tacher R & Hellyer PW, Student evaluations of teaching: perceptions of faculty based on gender, position and rank, Teaching in Higher Education, 15(6):623-636, 2010.

[6] Macfadyen LP, Dawson S, Prest S & Gasevic D, Whose feedback? A multilevel analysis of student completion of end-of-term teaching evaluations, Assessment & Evaluation in Higher Education, 41(6):821-839, 2016.

[7] Spooren P & Mortelmans D, Teacher professionalism and student evaluation of teaching: will better teachers receive higher ratings and will better students give higher ratings? Educational Studies, 32(2):201-214, 2006.

[8] Shevlin M, Banyard P, Davies M & Griffiths M, The validity of student evaluation of teaching in Higher Education: love me, love my lectures? Assessment & Evaluation in HE, 24(4):397-405, 2000.

[9] Kulik JA, Student ratings: validity, utility and controversy, New Directions for Institutional Research, 27(5):9-25, 2001.

[10] Feldman KA, Grades and college students’ evaluations of their courses and teachers, Research in Higher Education, 18(1):2-124, 1976.

[11] Marsh HW, Students’ evaluations of university teaching: research findings, methodological issues and directions for future research, IJ Educational Research, 11(3):253-388, 1987.

[12] Millea M & Grimes PW, Grade expectations and student evaluation of teaching, College Student Journal, 36(4):582-591, 2002.

[13] Gannaway D, Green T & Mertova P, So how big is big? Investigating the impact of class size on ratings in student evaluation, Assessment & Evaluation in HE, 43(2):175-184, 2018.

[14] Carrell SE & West JE, Does professor quality matter? Evidence from random assignment of students to professors. J. Political Economy, 118:409-432, 2010.

[15] Braga M, Paccagnella M & Pellizzari M, Evaluating students’ evaluation of professors, Econ. Educ. Rev., 41:71-88, 2014.

[16] Kornell N & Hausman H, Do the best teachers get the best ratings? Frontiers in Psychology, 7:570, 2016.