
So how do people learn?

Here’s the next in the CALE series.  When designing a learning environment that supports the acquisition of knowledge by all of our students, we need to think about the different ways in which people learn.  In the 1970s, Kolb developed his learning style inventory, which is illustrated in the diagram above.  Approaches to learning are plotted on two axes: the horizontal axis runs from learning by watching at one end to learning by doing at the other, while the vertical axis runs from learning by feeling at one end to learning by thinking at the other.  Kolb proposed that people tend to learn through a pair of these attributes, i.e. by watching and feeling, watching and thinking, doing and thinking, or doing and feeling, so that an individual can be categorised into one of four quadrants.  Each type of learning is given a title, as shown in the quadrants: Divergers (feeling and watching), Assimilators (watching and thinking), Convergers (thinking and doing) and Accommodators (doing and feeling).

In practice, it seems unlikely that many of us remain in one of these quadrants, though we might have a preference for one of them.  Honey and Mumford [1992] proposed that learning is most effective when we rotate around the learning modes represented in the quadrants, as shown in the diagram below: we start in the doing and feeling quadrant by having an experience as an Activist; move to the feeling and watching quadrant to review the experience as a Reflector; then, in watching and thinking mode, draw conclusions from the experience as a Theorist; and culminate by planning the next steps as a Pragmatist in the thinking and doing quadrant, before repeating the rotation.
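Since the diagrams are not reproduced here, it may help to see the two models spelled out.  Below is a minimal sketch in Python (my own illustration, not part of the original workshop material) that pairs each combination of axes with Kolb’s quadrant name and walks Honey and Mumford’s cycle in order.

```python
# Kolb's four learning styles, keyed by the pair of axis attributes that
# defines each quadrant (feeling/thinking on one axis, watching/doing on the other).
KOLB_QUADRANTS = {
    ("feeling", "watching"): "Diverger",
    ("watching", "thinking"): "Assimilator",
    ("thinking", "doing"): "Converger",
    ("doing", "feeling"): "Accommodator",
}

# Honey & Mumford's learning modes visit the same quadrants in rotation.
HONEY_MUMFORD_CYCLE = [
    ("Activist", "have an experience", ("doing", "feeling")),
    ("Reflector", "review the experience", ("feeling", "watching")),
    ("Theorist", "draw conclusions from the experience", ("watching", "thinking")),
    ("Pragmatist", "plan the next steps", ("thinking", "doing")),
]

for mode, activity, axes in HONEY_MUMFORD_CYCLE:
    print(f"{mode}: {activity} "
          f"({axes[0]} & {axes[1]} quadrant; Kolb's {KOLB_QUADRANTS[axes]})")
```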

There are other ideas about how we learn but these are two of the classic theories, which I have found useful in creating a learning environment that is dynamic and involves cycling students around Honey and Mumford’s learning modes.

References:

Kolb DA, Learning Style Inventory: Technical Manual, McBer & Co., Boston, MA, 1976.

Honey P & Mumford A, The Manual of Learning Styles, 3rd Ed., Peter Honey Publications Limited, Maidenhead, 1992.


CALE #3 [Creating A Learning Environment: a series of posts based on a workshop given periodically by Pat Campbell and Eann Patterson in the USA, supported by NSF, and in the UK, supported by HEA]

Formative experiences

A few weeks ago, I wrote about how we all arrive in the classroom with different experiences that are strongly influenced by the conditions in our formative years.  When I talk about this process in workshops on teaching, I invite attendees to tell us about something that has influenced their approach to learning.  However, I kick off by sharing one of mine: I joined the Royal Navy straight from school, and so I arrived at university having painted the white line down the centre of the flight deck of an aircraft carrier, but also having flown a jet.  This meant that my experience of dynamics was somewhat different to that of most of my peers.  It’s amazing what life experiences are revealed when we go around the room at these workshops.  Feel free to share your experiences, and how they influence your learning, using the comments section below.

CALE #2 [Creating A Learning Environment: a series of posts based on a workshop given periodically by Pat Campbell and Eann Patterson in the USA, supported by NSF, and in the UK, supported by HEA]

Photo by Pedro Aragao [Creative Commons Attribution-Share Alike 3.0 Unported]

Everyday examples contribute to successful learning

Some weeks ago I quoted Adams and Felder [2008], who said that the ‘educational role of faculty [academic staff] is not to impart knowledge; but to design learning environments that support…knowledge acquisition’ [see ‘Creating an evolving learning environment’ on February 21st, 2018].  A correspondent asked how I create a learning environment and, in response, this is the first in a series of posts on the topic that will appear every third week.  The material is taken from a one-day workshop that Pat Campbell [of Campbell-Kibler Associates] and I have given periodically in the USA [supported by NSF] and the UK [supported by HEA] for engineering academics.

Albert Einstein is reputed to have said that ‘knowledge is experience, everything else is just information’.  I believe that a key task for a university teacher of engineering is to find the common experiences of their students and use them to illustrate engineering principles.  This is relatively straightforward for senior students, because they will have taken courses or modules delivered by your colleagues; however, it is more of a challenge for students entering the first year of an engineering programme.  Everyone is unique and a product of their formative conditions, which makes it tricky to identify common experiences that can be used to explain engineering concepts.  The Everyday Engineering Examples, which feature on a page of this blog [https://realizeengineering.blog/everyday-engineering-examples/], were developed to address the need for illustrative situations that fall within the experience of most, if not all, students.  Two popular examples are using the splits in sausages when you cook them to illustrate two-dimensional stress systems in pressure vessels [see lesson plan S11] and using a glass to extinguish a birthday candle on a cupcake to explain combustion processes [see lesson plan T11].
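The sausage example works because of a standard result for thin-walled cylindrical pressure vessels: the hoop (circumferential) stress is twice the longitudinal stress, so the skin splits along the length of the sausage first.  The short sketch below illustrates the calculation; the pressure, radius and wall thickness are invented sausage-like values, not figures from lesson plan S11.

```python
def vessel_stresses(p, r, t):
    """Stresses in a thin-walled cylindrical pressure vessel:
    hoop stress sigma_h = p*r/t, longitudinal stress sigma_l = p*r/(2*t),
    for internal pressure p, radius r and wall thickness t."""
    return p * r / t, p * r / (2 * t)

# Hypothetical sausage-like numbers: 20 kPa internal pressure, 10 mm radius,
# 0.5 mm skin thickness (SI units throughout).
hoop, longitudinal = vessel_stresses(p=20e3, r=0.01, t=0.5e-3)
print(f"hoop = {hoop/1e3:.0f} kPa, longitudinal = {longitudinal/1e3:.0f} kPa")
# hoop = 400 kPa, longitudinal = 200 kPa: the hoop stress governs, so the
# split runs lengthwise, just as in a cooked sausage.
```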

Everyday Engineering Examples were developed as part of an educational research project, funded by the US National Science Foundation [see ENGAGE], which demonstrated that this approach to teaching works.  The project found that significantly more students rated their learning with Everyday Engineering Examples as high or significant than did students in the control classes, independent of the level of difficulty involved [Campbell et al. 2008].  So, this is one way in which I create a learning environment that supports knowledge acquisition.  More in future posts…

References

Adams RS & Felder RM, Reframing professional development: a systems approach to preparing engineering educators to educate tomorrow’s engineers, J. Engineering Education, 97(3):230-240, 2008.

Campbell PB, Patterson EA, Busch-Vishniac I & Kibler T, Integrating applications in the teaching of fundamental concepts, Proc. 2008 ASEE Annual Conference and Exposition, (AC 2008-499), 2008.


CALE #1 [Creating A Learning Environment: a series of posts based on a workshop given periodically by Pat Campbell and Eann Patterson in the USA, supported by NSF, and in the UK, supported by HEA]

Deep long-term learning

About six months ago I wrote about providing feedback to students [see post entitled ‘Feedback on feedback’ on June 28th, 2017].  I wrote that students tend to find negative feedback unfair and unhelpful, even when it is carefully and politely worded, but that they like clear, unambiguous, instructional and directional feedback [1].  I suspect the same could be said of their teachers, many of whom fear [2] or are anxious about [3] their next student evaluation of teaching (SET) report, even though they tend to see SET data as useful [2].  Some university teachers are devastated by negative feedback [4], with inexperienced and female teachers being more sensitive and more likely to experience negative feelings [5].  What follows is a brief review, though a long blog post, of the usefulness of student evaluations of teaching.  The bottom line: student evaluations of teaching have serious limitations when the goal is to instil deep long-term learning in a culture that values teachers.

Student evaluations of teaching (SET) are widely used in higher education because collecting the data from students at the end of each term is easy and because the data is useful in: improving teaching quality; providing input to appraisal exercises; and providing evidence of institutional accountability [2].  However, the unresolved tension between the dual use of the data for teacher development and as a management tool [2, 6] has led to much debate about the appropriateness and usefulness of student evaluation of teaching with strong advocates on both sides of the argument.

For instance, there is evidence that students’ perception of a lecturer significantly predicts teaching effectiveness ratings, with the charisma of the lecturer explaining between 65% [7] and 69% [8] of the variation in ‘lecturer ability’; hence, student evaluations of teaching have been described as ‘personality contests’ [9].  Some have suggested that this leads to grading leniency, i.e. lecturers marking students more leniently in order to attract a higher rating, though this argument has been largely refuted [7]; however, several studies [10-12] report a negative association between a pessimistic attitude about future grades and ratings of teacher effectiveness.
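For readers unfamiliar with the phrase ‘explaining X% of the variation’, it refers to the coefficient of determination, R², of a regression of effectiveness ratings on charisma scores.  The sketch below illustrates the statistic with synthetic data; it is not the analysis or data of [7] or [8].

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
charisma = rng.uniform(1, 5, 100)                   # hypothetical charisma scores
ratings = 0.8 * charisma + rng.normal(0, 0.5, 100)  # ratings tracking charisma plus noise

result = linregress(charisma, ratings)
# rvalue**2 is the proportion of the variance in ratings accounted for by charisma.
print(f"R^2 = {result.rvalue ** 2:.2f}")
```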

Of more concern, however, is the evidence of student fatigue with teaching evaluations, with response rates declining during the academic year and from year 1 to year 4, when adjusted for class size and grades [6].  Student completion rates for end-of-term teaching evaluations are influenced by student gender, age, specialisation, final grade, term of study, course of study and course type, which means that the respondent pools do not fully represent the distribution of students in the courses [6].  Hence, knowledge of the characteristics of the respondents is required before modifications can be made to a course in the best interests of all students; but such knowledge is rarely available for SET data.  In addition, the data is usually not normally distributed [13], implying that common statistical practices cannot be deployed in its interpretation; coupled with the lack of statistical sophistication amongst those using SET information for appraisal and promotion, this leads to concerns about the validity of their conclusions [8].
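To make the statistical point concrete: rating data are bounded, ordinal and typically skewed, so rank-based methods are usually safer than tests that assume normality.  The sketch below uses invented ratings purely for illustration; it checks normality and then compares two cohorts with a distribution-free test.

```python
from scipy.stats import shapiro, mannwhitneyu

# Hypothetical end-of-term ratings on a 1-5 scale for two course cohorts.
course_a = [5, 5, 4, 5, 3, 5, 4, 5, 5, 2]
course_b = [4, 3, 4, 2, 5, 3, 4, 3, 2, 4]

# Shapiro-Wilk tests the normality assumption that a t-test would rely on...
print("normality p-values:", shapiro(course_a).pvalue, shapiro(course_b).pvalue)

# ...so use the rank-based Mann-Whitney U test, which makes no such assumption.
stat, p = mannwhitneyu(course_a, course_b, alternative="two-sided")
print(f"Mann-Whitney U = {stat}, p = {p:.3f}")
```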

However, recent research raises more fundamental doubts about the efficacy of SET data.  When learning was measured with a test at the end of the course, the teachers who received the highest SET ratings were the ones who contributed most to learning; but when learning was measured as performance in subsequent courses, the teachers with relatively low SET ratings appeared to have been most effective [14-16].  This is because making learning more difficult can decrease short-term performance, as well as students’ subjective ratings of their own learning, while increasing long-term learning.  Some of these ‘desirable’ difficulties are listed below.  So, if the aim is to instil deep long-term learning within a culture that values its teachers, then student evaluations of teaching have serious limitations.

References

[1] Sellbjer S, “Have you read my comment? It is not noticeable. Change!” An analysis of feedback given to students who have failed examinations, Assessment & Evaluation in Higher Education, 43(2):163-174, 2018.

[2] Spooren P, Brockx B & Mortelmans D, On the validity of student evaluation of teaching: the state of the art, Review of Educational Research, 83(4):598-642, 2013.

[3] Flodén J, The impact of student feedback on teaching in higher education, Assessment & Evaluation in Higher Education, 42(7):1054-1068, 2017.

[4] Arthur L, From performativity to professionalism: lecturers’ responses to student feedback, Teaching in Higher Education, 14(4):441-454, 2009.

[5] Kogan LR, Schoenfeld-Tacher R & Hellyer PW, Student evaluations of teaching: perceptions of faculty based on gender, position and rank, Teaching in Higher Education, 15(6):623-636, 2010.

[6] Macfadyen LP, Dawson S, Prest S & Gasevic D, Whose feedback? A multilevel analysis of student completion of end-of-term teaching evaluations, Assessment & Evaluation in Higher Education, 41(6):821-839, 2016.

[7] Spooren P & Mortelmans D, Teacher professionalism and student evaluation of teaching: will better teachers receive higher ratings and will better students give higher ratings? Educational Studies, 32(2):201-214, 2006.

[8] Shevlin M, Banyard P, Davies M & Griffiths M, The validity of student evaluation of teaching in higher education: love me, love my lectures? Assessment & Evaluation in Higher Education, 24(4):397-405, 2000.

[9] Kulik JA, Student ratings: validity, utility and controversy, New Directions for Institutional Research, 27(5):9-25, 2001.

[10] Feldman KA, Grades and college students’ evaluations of their courses and teachers, Research in Higher Education, 18(1):2-124, 1976.

[11] Marsh HW, Students’ evaluations of university teaching: research findings, methodological issues and directions for future research, Int. J. Educational Research, 11(3):253-388, 1987.

[12] Millea M & Grimes PW, Grade expectations and student evaluation of teaching, College Student Journal, 36(4):582-591, 2002.

[13] Gannaway D, Green T & Mertova P, So how big is big? Investigating the impact of class size on ratings in student evaluation, Assessment & Evaluation in Higher Education, 43(2):175-184, 2018.

[14] Carrell SE & West JE, Does professor quality matter? Evidence from random assignment of students to professors, J. Political Economy, 118:409-432, 2010.

[15] Braga M, Paccagnella M & Pellizzari M, Evaluating students’ evaluation of professors, Econ. Educ. Rev., 41:71-88, 2014.

[16] Kornell N & Hausman H, Do the best teachers get the best ratings? Frontiers in Psychology, 7:570, 2016.