
In Einstein’s footprints?

Grand Hall of the Guild of Carpenters, Zurich

During the past week, I have been working with members of my research group on a series of papers for a conference in the USA that a small group of us will be attending in the summer.  Dissemination is an important step in the research process; there is no point in doing the research if we lock the results away in a desk drawer and forget about them.  Nowadays, the funding organisations that support our research expect to see a dissemination plan as part of our research proposals; and hence, we have an obligation to present our results to the scientific community as well as to communicate them more widely, for instance through this blog.

That’s all fine; but nevertheless, I don’t find most conferences a worthwhile experience.  Often, there are too many uncoordinated sessions running in parallel, containing presentations that describe tiny steps forward in knowledge and understanding and fail to compel your attention [see ‘Compelling presentations’ on March 21st, 2018].  Of course, they can provide an opportunity to network, especially for researchers in the early stages of their careers; but, in my experience, they are rarely the location for serious intellectual discussion or debate.  This is more likely to happen in small workshops focussed on a ‘hot topic’, with a carefully selected, eclectic mix of speakers interspersed with chaired discussion sessions.

I have been involved in organising a number of such workshops in Glasgow, London, Munich and Shanghai over the last decade.  The next one will be in Zurich in November 2019 at the Guild Hall of Carpenters (Zunfthaus zur Zimmerleuten), where Einstein lectured to the Zurich Physical Society in November 1910 ‘On Boltzmann’s principle and some of its direct consequences’.  Our subject will be different: ‘Validation of Computational Mechanics Models’; but we hope that the debate on credible models, multi-physics simulations and surviving with experimental data will be as lively as in 1910.  If you would like to contribute, then download the pdf from this link; and if you would just like to attend the one-day workshop, we will be announcing registration soon and there is no charge!

We have published the outcomes from some of our previous workshops:

Advances in Validation of Computational Mechanics Models (from the 2014 workshop in Munich), Journal of Strain Analysis, vol. 51, no. 1, 2016.

Strain Measurement in Extreme Environments (from the 2012 workshop in Glasgow), Journal of Strain Analysis, vol. 49, no. 4, 2014.

Validation of Computational Solid Mechanics Models (from the 2011 workshop in Shanghai), Journal of Strain Analysis, vol. 48, no. 1, 2013.

The workshop is supported by the MOTIVATE project and further details are available at http://www.engineeringvalidation.org/4th-workshop

The MOTIVATE project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 754660.

Knowledge is power

Pitt Rivers Museum, Oxford

“The list of things that I believe is, if not infinite, virtually endless. And I am finite.  Though I can readily imagine what I would have to do to obtain evidence that would support any one of my beliefs, I cannot imagine being able to do this for all of my beliefs.  I believe too much; there is too much relevant evidence (much of it available only after extensive, specialized training); intellect is too small and life is too short.”

These words are a direct quote from the opening paragraph of an article by John Hardwig published in the Journal of Philosophy in 1985.  He goes on to argue that we can have good reasons for believing something if we have good reasons for believing that others have good reasons to believe it.  So, it is reasonable for a layperson to believe something that an expert also believes, and it may even be rational to refuse to think for ourselves in these circumstances, because life is too short and there are too many other things to think about.

This implies a high level of trust in the expert, as well as a concept of knowledge as something held by the community: someone, somewhere has the evidence to support it.  For instance, as a professor, I am trusted by my students to provide them with knowledge for which I have the supporting evidence, or for which I believe someone else has the evidence.  This trust is reinforced, to a very small extent, by replicating the evidence in practical classes.

More than 30 years ago, John Hardwig concluded his article by worrying about the extent to which wisdom is based on trust, and about the threat to “individual autonomy and responsibility, equality and democracy” posed by our dependence on others for knowledge.  Today, the internet has given us access to, if not infinite, virtually endless information.  Unfortunately, much of the information available is inaccurate, incomplete or biased, sometimes due to self-interest.  Our problem is sifting the facts from the fabrications, and identifying who the experts are and whether they can be trusted as sources of knowledge.  This appears to be leading to a crisis of trust in both experts and in what constitutes the body of knowledge held by the community, which is threatening our democracies and undermining equality.

Source:

Hardwig J, Epistemic dependence, J. Philosophy, 82(7):335-349, 1985.

Deep long-term learning

About six months ago, I wrote about providing feedback to students [see post entitled ‘Feedback on feedback’ on June 28th, 2017].  I wrote that students tend to find negative feedback unfair and unhelpful, even when it is carefully and politely worded; but they like clear, unambiguous, instructional and directional feedback [1].  I suspect the same could be said of their teachers, many of whom fear [2] or are anxious about [3] their next student evaluation of teaching (SET) report, even though they tend to see SET data as useful [2].  Some university teachers are devastated by negative feedback [4], with inexperienced and female teachers being more sensitive and more likely to experience negative feelings [5].  What follows is a brief review (though a long blog post) of the usefulness of student evaluation of teaching, with the bottom line being: student evaluations of teaching have serious limitations when the goal is to instill deep long-term learning in a culture that values teachers.

Student evaluations of teaching (SET) are widely used in higher education because collecting the data from students at the end of each term is easy and because the data is useful in: improving teaching quality; providing input to appraisal exercises; and providing evidence of institutional accountability [2].  However, the unresolved tension between the dual use of the data for teacher development and as a management tool [2, 6] has led to much debate about the appropriateness and usefulness of student evaluation of teaching with strong advocates on both sides of the argument.

For instance, there is evidence that students’ perception of a lecturer significantly predicts teaching effectiveness ratings, with the charisma of the lecturer explaining between 65% [7] and 69% [8] of the variation in ‘lecturer ability’; so student evaluations of teaching have been described as ‘personality contests’ [9].  Some have suggested that this leads to grading leniency, i.e. lecturers marking students more leniently in order to attract a higher rating; although this argument has been largely refuted [7], there are several studies [10-12] that report a negative association between a pessimistic attitude about future grades and ratings of teacher effectiveness.

However, of more concern is the evidence of student fatigue with teaching evaluations, with response rates declining during the academic year and from year 1 to year 4, when adjusted for class size and grades [6].  Student completion rates for end-of-term teaching evaluations are influenced by student gender, age, specialisation, final grade, term of study, course of study and course type, which means that the respondent pools do not fully represent the distribution of students in the courses [6].  Hence, knowledge of the characteristics of the respondents is required before modifications can be made to a course in the best interests of all students; but such knowledge is rarely available for SET data.  In addition, the data is usually not normally distributed [13], implying that common statistical practices cannot be deployed in its interpretation, with the result that the lack of statistical sophistication amongst those using SET information for appraisal and promotion leads to concerns about the validity of their conclusions [8].
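To illustrate why the shape of SET data matters, here is a minimal sketch in Python (using entirely hypothetical ratings and standard scipy routines; none of the numbers come from the studies cited above) of the kind of check that would be needed before comparing courses: test the ratings for normality and, if they are skewed, use a non-parametric comparison rather than a comparison of means.

```python
# Illustrative sketch only: hypothetical 1-5 SET ratings for two course cohorts.
# The skewed, ordinal nature of such ratings is why a normality check matters
# before applying mean-based comparisons such as a t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
scale = [1, 2, 3, 4, 5]

# Hypothetical respondent pools; ratings cluster near the top of the scale (ceiling effect).
course_a = rng.choice(scale, size=120, p=[0.02, 0.05, 0.13, 0.35, 0.45])
course_b = rng.choice(scale, size=80, p=[0.03, 0.07, 0.20, 0.40, 0.30])

# Shapiro-Wilk test: a small p-value indicates the ratings are not normally distributed.
for name, ratings in [("course A", course_a), ("course B", course_b)]:
    statistic, p = stats.shapiro(ratings)
    print(f"{name}: median = {np.median(ratings):.1f}, Shapiro-Wilk p = {p:.3g}")

# A non-parametric test (Mann-Whitney U) is safer here than comparing means with a t-test.
u_statistic, p_value = stats.mannwhitneyu(course_a, course_b, alternative="two-sided")
print(f"Mann-Whitney U = {u_statistic:.0f}, p = {p_value:.3g}")
```

Even a simple check of this kind does not address the unrepresentative respondent pools described above; it merely shows why summarising SET scores with means, and ranking teachers on them, is statistically questionable.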

However, recent research raises much more fundamental doubts about the efficacy of SET data.  When learning was measured with a test at the end of the course, the teachers who received the highest SET ratings were the ones who contributed most to learning; but when learning was measured as performance in subsequent courses, the teachers with relatively low SET ratings appeared to have been the most effective [14-16].  This is because making learning more difficult can cause a decrease in short-term performance, and in students’ subjective rating of their own learning, but can increase long-term learning; such approaches are sometimes described as ‘desirable difficulties’.  So, if the aim is to instill deep long-term learning within a culture that values its teachers, then student evaluations of teaching have serious limitations.

References

[1] Sellbjer S, “Have you read my comment? It is not noticeable. Change!” An analysis of feedback given to students who have failed examinations. Assessment & Evaluation in HE, 43(2):163-174, 2018.

[2] Spooren P, Brockx B & Mortelmans D, On the validity of student evaluation of teaching: the state of the art, Review of Educational Research, 83(4):598-642, 2013.

[3] Flodén J, The impact of student feedback on teaching in higher education, Assessment & Evaluation in HE, 42(7):1054-1068, 2017.

[4] Arthur L, From performativity to professionalism: lecturers’ responses to student feedback, Teaching in Higher Education, 14(4):441-454, 2009.

[5] Kogan LR, Schoenfeld-Tacher R & Hellyer PW, Student evaluations of teaching: perceptions of faculty based on gender, position and rank, Teaching in Higher Education, 15(6):623-636, 2010.

[6] Macfadyen LP, Dawson S, Prest S & Gasevic D, Whose feedback? A multilevel analysis of student completion of end-of-term teaching evaluations, Assessment & Evaluation in Higher Education, 41(6):821-839, 2016.

[7] Spooren P & Mortelmans D, Teacher professionalism and student evaluation of teaching: will better teachers receive higher ratings and will better students give higher ratings? Educational Studies, 32(2):201-214, 2006.

[8] Shevlin M, Banyard P, Davies M & Griffiths M, The validity of student evaluation of teaching in Higher Education: love me, love my lectures? Assessment & Evaluation in HE, 24(4):397-405, 2000.

[9] Kulik JA, Student ratings: validity, utility and controversy, New Directions for Institutional Research, 27(5):9-25, 2001.

[10] Feldman KA, Grades and college students’ evaluations of their courses and teachers, Research in Higher Education, 18(1):2-124, 1976.

[11] Marsh HW, Students’ evaluations of university teaching: research findings, methodological issues and directions for future research, IJ Educational Research, 11(3):253-388, 1987.

[12] Millea M & Grimes PW, Grade expectations and student evaluation of teaching, College Student Journal, 36(4):582-591, 2002.

[13] Gannaway D, Green T & Mertova P, So how big is big? Investigating the impact of class size on ratings in student evaluation, Assessment & Evaluation in HE, 43(2):175-184, 2018.

[14] Carrell SE & West JE, Does professor quality matter? Evidence from random assignment of students to professors, J. Political Economy, 118:409-432, 2010.

[15] Braga M, Paccagnella M & Pellizzari M, Evaluating students’ evaluation of professors, Econ. Educ. Rev., 41:71-88, 2014.

[16] Kornell N & Hausman H, Do the best teachers get the best ratings? Frontiers in Psychology, 7:570, 2016.

Creating an evolving learning environment

A couple of weeks ago, I wrote about marking examinations and my tendency to focus on the students that I had failed to teach rather than those who excelled in their knowledge of problem-solving with the laws of thermodynamics [see my post ‘Depressed by exams‘ on January 31st, 2018].  One correspondent suggested that I shouldn’t beat myself up because ‘to teach is to show, to learn is to acquire‘; and that I had not failed to show but that some of my students had failed to acquire.  However, Adams and Felder have stated that the ‘educational role of faculty is not to impart knowledge; but to design learning environments that support knowledge acquisition‘.  My despondency arises from my apparent inability to create a learning environment that supports and encourages knowledge acquisition for all of my students.  People arrive in my class with a variety of formative experiences and different ways of learning, which makes it challenging to generate a learning environment that is effective for everyone.   It’s an on-going challenge due to the ever-widening cultural gap between students and their professors, which is large enough to have warranted at least one anthropological study (see My Freshman Year by Rebekah Nathan). So, my focus on the weaker exam scripts has a positive outcome because it causes me to think about evolving the learning environment.

Sources:

Adams RS, Felder RM, Reframing professional development: A systems approach to preparing engineering educators to educate tomorrow’s engineers. J. Engineering Education, 97(3):230-240, 2008.

Nathan R, My freshman year: what a professor learned by becoming a student, Cornell University Press, Ithaca, New York, 2005.