Tag Archives: knowledge

Knowledge is power

Pitt Rivers Museum, Oxford

“The list of things that I believe is, if not infinite, virtually endless. And I am finite. Though I can readily imagine what I would have to do to obtain evidence that would support any one of my beliefs, I cannot imagine being able to do this for all of my beliefs. I believe too much, there is too much relevant evidence (much of it available only after extensive, specialized training); intellect is too small and life is too short.”

These words are a direct quote from the opening paragraph of an article by John Hardwig published in the Journal of Philosophy in 1985. He goes on to argue that we can have good reasons for believing something if we have good reasons for believing that others have good reasons to believe it. So, it is reasonable for a layperson to believe something that an expert also believes, and it is even rational to refuse to think for ourselves in these circumstances, because life is too short and there are too many other things to think about.

This implies a high level of trust in the expert, as well as a concept of knowledge that is held collectively by the community: someone, somewhere has the evidence to support the knowledge. For instance, as a professor, I am trusted by my students to provide them with knowledge for which I have the supporting evidence, or for which I believe someone else has the evidence. This trust is reinforced, to a very small extent, by replicating the evidence in practical classes.

More than 30 years ago, John Hardwig concluded his article by worrying about the extent to which wisdom is based on trust, and about the threat to “individual autonomy and responsibility, equality and democracy” posed by our dependence on others for knowledge. Today, the internet has given us access to, if not infinite, virtually endless information. Unfortunately, much of the information available is inaccurate, incomplete and biased, sometimes due to self-interest. Our problem is sifting the facts from the fabrications, and identifying who the experts are and whether they can be trusted as sources of knowledge. This appears to be leading to a crisis of trust in both experts and what constitutes the body of knowledge known by the community, which is threatening our democracies and undermining equality.

Source:

Hardwig J, Epistemic dependence, J. Philosophy, 82(7):335-349, 1985.

Deep long-term learning

About six months ago I wrote about providing feedback to students [see post entitled ‘Feedback on feedback’ on June 28th, 2017]. I wrote that students tend to find negative feedback unfair and unhelpful, even when it is carefully and politely worded, but that they like clear, unambiguous, instructional and directional feedback [1]. I suspect the same could be said of their teachers, many of whom fear [2] or are anxious about [3] their next student evaluation of teaching (SET) report, even though they tend to see SET data as useful [2]. Some university teachers are devastated by negative feedback [4], with inexperienced and female teachers being more sensitive and more likely to experience negative feelings [5]. What follows is a brief review (though a long blog post) of the usefulness of student evaluation of teaching, with the bottom line being that student evaluations of teaching have serious limitations when the goal is to instill deep long-term learning in a culture that values its teachers.

Student evaluations of teaching (SET) are widely used in higher education because collecting the data from students at the end of each term is easy and because the data is useful in: improving teaching quality; providing input to appraisal exercises; and providing evidence of institutional accountability [2].  However, the unresolved tension between the dual use of the data for teacher development and as a management tool [2, 6] has led to much debate about the appropriateness and usefulness of student evaluation of teaching with strong advocates on both sides of the argument.

For instance, there is evidence that students’ perception of a lecturer significantly predicts teaching effectiveness ratings, with the charisma of the lecturer explaining between 65% [7] and 69% [8] of the variation in ‘lecturer ability’, so that student evaluations of teaching have been described as ‘personality contests’ [9]. Some have suggested that this leads to grading leniency, i.e. lecturers marking students more leniently in order to attract higher ratings, although this argument has been largely refuted [7]. Nevertheless, several studies [10-12] report a negative association between a pessimistic attitude about future grades and ratings of teacher effectiveness.

However, of more concern is the evidence of student fatigue with teaching evaluations, with response rates declining both during the academic year and from year 1 to year 4, when adjusted for class size and grades [6]. Student completion rates for end-of-term teaching evaluations are influenced by student gender, age, specialisation, final grade, term of study, course of study and course type, which means that the respondent pools do not fully represent the distribution of students in the courses [6]. Hence, knowledge of the characteristics of the respondents is required before modifications can be made to a course in the best interests of all students, but such knowledge is rarely available for SET data. In addition, the data is usually not normally distributed [13], implying that common statistical practices cannot be deployed in its interpretation, with the result that the lack of statistical sophistication amongst those using SET information for appraisal and promotion leads to concerns about the validity of their conclusions [8].
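
As an aside, here is a minimal sketch of that statistical point, using entirely hypothetical ratings rather than any real SET data set: with skewed, ordinal responses on a five-point scale, a rank-based test such as the Mann-Whitney U is usually a safer way to compare two cohorts than a t-test that assumes normally distributed data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical five-point SET ratings for two cohorts, heavily skewed
# towards the top of the scale, as end-of-term evaluations often are.
cohort_a = rng.choice([1, 2, 3, 4, 5], size=120, p=[0.02, 0.05, 0.13, 0.30, 0.50])
cohort_b = rng.choice([1, 2, 3, 4, 5], size=80, p=[0.05, 0.10, 0.20, 0.35, 0.30])

# A t-test assumes approximately normal, interval-scaled data.
t_stat, t_p = stats.ttest_ind(cohort_a, cohort_b, equal_var=False)

# A rank-based test makes no normality assumption and respects the ordinal scale.
u_stat, u_p = stats.mannwhitneyu(cohort_a, cohort_b, alternative="two-sided")

print(f"Welch t-test p-value:   {t_p:.3f}")
print(f"Mann-Whitney U p-value: {u_p:.3f}")
```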

However, recent research raises much more fundamental doubts about the efficacy of SET data. When learning was measured with a test at the end of the course, the teachers who received the highest SET ratings were the ones who contributed most to learning; but when learning was measured as performance in subsequent courses, the teachers with relatively low SET ratings appeared to have been most effective [14-16]. This is because making learning more difficult can cause a decrease in short-term performance, as well as in students’ subjective ratings of their own learning, but can increase long-term learning; such obstacles are known as ‘desirable’ difficulties. So, if the aim is to instill deep long-term learning within a culture that values its teachers, then student evaluations of teaching have serious limitations.

References

[1] Sellbjer S, “Have you read my comment? It is not noticeable. Change!” An analysis of feedback given to students who have failed examinations. Assessment & Evaluation in HE, 43(2):163-174, 2018.

[2] Spooren P, Brockx B & Mortelmans D, On the validity of student evaluation of teaching: the state of the art, Review of Educational Research, 83(4):598-642, 2013.

[3] Flodén J, The impact of student feedback on teaching in higher education, Assessment & Evaluation in HE, 42(7):1054-1068, 2017.

[4] Arthur L, From performativity to professionalism: lecturers’ responses to student feedback, Teaching in Higher Education, 14(4):441-454, 2009.

[5] Kogan LR, Schoenfeld-Tacher R & Hellyer PW, Student evaluations of teaching: perceptions of faculty based on gender, position and rank, Teaching in Higher Education, 15(6):623-636, 2010.

[6] Macfadyen LP, Dawson S, Prest S & Gasevic D, Whose feedback? A multilevel analysis of student completion of end-of-term teaching evaluations, Assessment & Evaluation in Higher Education, 41(6):821-839, 2016.

[7] Spooren P & Mortelmans D, Teacher professionalism and student evaluation of teaching: will better teachers receive higher ratings and will better students give higher ratings? Educational Studies, 32(2):201-214, 2006.

[8] Shevlin M, Banyard P, Davies M & Griffiths M, The validity of student evaluation of teaching in Higher Education: love me, love my lectures? Assessment & Evaluation in HE, 24(4):397-405, 2000.

[9] Kulik JA, Student ratings: validity, utility and controversy, New Directions for Institutional Research, 27(5):9-25, 2001.

[10] Feldman KA, Grades and college students’ evaluations of their courses and teachers, Research in Higher Education, 18(1):2-124, 1976.

[11] Marsh HW, Students’ evaluations of university teaching: research findings, methodological issues and directions for future research, IJ Educational Research, 11(3):253-388, 1987.

[12] Millea M & Grimes PW, Grade expectations and student evaluation of teaching, College Student Journal, 36(4):582-591, 2002.

[13] Gannaway D, Green T & Mertova P, So how big is big? Investigating the impact of class size on ratings in student evaluation, Assessment & Evaluation in HE, 43(2):175-184, 2018.

[14] Carrell SE & West JE, Does professor quality matter? Evidence from random assignment of students to professors. J. Political Economy, 118:409-432, 2010.

[15] Braga M, Paccagnella M & Pellizzari M, Evaluating students’ evaluation of professors, Econ. Educ. Rev., 41:71-88, 2014.

[16] Kornell N & Hausman H, Do the best teachers get the best ratings? Frontiers in Psychology, 7:570, 2016.

Creating an evolving learning environment

A couple of weeks ago, I wrote about marking examinations and my tendency to focus on the students that I had failed to teach rather than those who excelled in their knowledge of problem-solving with the laws of thermodynamics [see my post ‘Depressed by exams’ on January 31st, 2018]. One correspondent suggested that I shouldn’t beat myself up because ‘to teach is to show, to learn is to acquire’, and that I had not failed to show but that some of my students had failed to acquire. However, Adams and Felder have stated that the ‘educational role of faculty is not to impart knowledge; but to design learning environments that support knowledge acquisition’. My despondency arises from my apparent inability to create a learning environment that supports and encourages knowledge acquisition for all of my students. People arrive in my class with a variety of formative experiences and different ways of learning, which makes it challenging to generate a learning environment that is effective for everyone. It is an ongoing challenge, due to the ever-widening cultural gap between students and their professors, which is large enough to have warranted at least one anthropological study (see My Freshman Year by Rebekah Nathan). So, my focus on the weaker exam scripts has a positive outcome, because it causes me to think about evolving the learning environment.

Sources:

Adams RS, Felder RM, Reframing professional development: A systems approach to preparing engineering educators to educate tomorrow’s engineers. J. Engineering Education, 97(3):230-240, 2008.

Nathan R, My freshman year: what a professor learned by becoming a student, Cornell University Press, Ithaca, New York, 2005.

Entropy on the brain

‘It was the worst of times, it was the worst of times. Again. That’s the thing about things. They fall apart, always have, always will, it’s in their nature.’ These are the opening lines of Ali Smith’s novel ‘Autumn’. Ali Smith doesn’t mention entropy but that’s what she is describing.

My first-year lecture course has progressed from the first law of thermodynamics to the second law; and so I have been stretching the students’ brains by talking about entropy. It’s a favourite topic of mine but many people find it difficult. Entropy can be described as the level of disorder present in a system or its environment. Ludwig Boltzmann derived his famous equation, S = k ln W, which can be found on his gravestone – he died in 1906. S is entropy, k is a constant of proportionality named after Boltzmann, and W is the number of ways in which the system can be arranged without changing its energy content (ln means natural logarithm). So, the more arrangements that are possible, the larger the entropy.
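
For anyone who likes to see numbers, here is a minimal sketch in Python of Boltzmann’s formula (a toy example of my own, using the standard value of the Boltzmann constant; the values chosen for W are arbitrary):

```python
import math

# Boltzmann constant in joules per kelvin
K_BOLTZMANN = 1.380649e-23

def boltzmann_entropy(arrangements: int) -> float:
    """Return S = k ln W for a system with W equally likely arrangements."""
    if arrangements < 1:
        raise ValueError("W must be a positive integer")
    return K_BOLTZMANN * math.log(arrangements)

# The more possible arrangements, the larger the entropy;
# a system with only one possible arrangement has zero entropy.
for w in (1, 10, 1_000_000):
    print(f"W = {w:>9,}: S = {boltzmann_entropy(w):.3e} J/K")
```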

By now the neurons in your brain should be firing away nicely with a good level of synchronicity (see my posts entitled ‘Digital hive mind’ on November 30th, 2016 and ‘Is the world comprehensible?’ on March 15th, 2017). In other words, groups of neurons should be showing electrical activity that is in phase with other groups to form large networks. Some scientists believe that the size of such a network is indicative of the level of your consciousness. However, scientists in Toronto, led by Jose Luis Perez-Velazquez, have suggested that it is not the size of the network that is linked to consciousness but the number of ways that a particular degree of connectivity can be achieved. This begins to sound like the entropy of your neurons.
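
To make that idea concrete, here is a toy sketch of my own (not the Toronto group’s analysis): if a network has n possible pairwise connections and exactly c of them are active, the number of distinct ways of achieving that degree of connectivity is the binomial coefficient C(n, c), and its natural logarithm plays the role of ln W in Boltzmann’s expression.

```python
import math

def configuration_count(possible_connections: int, active_connections: int) -> int:
    """Number of distinct ways of choosing which connections are active."""
    return math.comb(possible_connections, active_connections)

def configurational_entropy(possible_connections: int, active_connections: int) -> float:
    """ln W, where W is the number of ways of achieving this degree of connectivity."""
    return math.log(configuration_count(possible_connections, active_connections))

# A toy network with 10 possible connections: an intermediate degree of
# connectivity can be realised in far more ways than a very sparse or a
# fully connected one, so its configurational entropy is the highest.
for active in (0, 2, 5, 10):
    ways = configuration_count(10, active)
    print(f"{active:>2} active connections: {ways:>3} ways, ln W = {configurational_entropy(10, active):.2f}")
```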

In 1948 Claude Shannon, an American electrical engineer, stated that ‘information must be considered as a negative term in the entropy of the system; in short, information is negentropy‘. We can extend this idea to the concept that the entropy associated with information becomes lower as it is arranged, or ordered, into knowledge frameworks, e.g. laws and principles, that allow us to explain phenomena or behaviour.
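
As a rough illustration of that idea (a sketch of my own, using Shannon’s formula for the entropy of a probability distribution, H = -Σ p log2 p, rather than anything taken from the sources below): the more sharply a set of possibilities is organised, the lower the entropy of the corresponding distribution.

```python
import math

def shannon_entropy(probabilities) -> float:
    """Shannon entropy in bits: H = -sum(p * log2(p)) over the non-zero probabilities."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Eight possibilities that are all equally likely: maximum disorder.
unstructured = [1 / 8] * 8

# The same eight possibilities, but organised so that one of them
# accounts for almost everything: much lower entropy.
structured = [0.93] + [0.01] * 7

print(round(shannon_entropy(unstructured), 3))  # 3.0 bits, i.e. log2(8)
print(round(shannon_entropy(structured), 3))    # well under 1 bit
```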

Perhaps these ideas about the entropy of information and of neurons are connected; because when you have mastered a knowledge framework for a topic, such as the laws of thermodynamics, you only need to deploy a small number of neurons to understand new information associated with that topic. However, when you are presented with unfamiliar situations, you need to fire multiple networks of neurons and try out millions of ways of connecting them in order to understand the unfamiliar data being supplied by your senses.

For diverse posts on entropy see: ‘Entropy in poetry’ on June 1st, 2016; ‘Entropy management for bees and flights’ on November 5th, 2014; and ‘More on white dwarfs and existentialism’ on November 16th, 2016.

Sources:

Ali Smith, Autumn, Penguin Books, 2017.

Consciousness is tied to ‘entropy’, say researchers, Physics World, October 16th, 2016.

Handscombe RD & Patterson EA, The Entropy Vector: Connecting Science and Business, Singapore: World Scientific Publishing, 2004.