Category Archives: Soapbox

Nauseous blogging?

In his novel ‘Nausea’, Jean-Paul Sartre suggests that at around forty, experienced professionals ‘christen their small obstinacies and a few proverbs with the name of experience, they begin to simulate slot machines: put a coin in the left hand slot and you get tales wrapped in silver paper, put a coin in the slot on the right and you get precious bits of advice that stick to your teeth like caramels’.  When I first read this passage a few weeks ago, it seemed like an apt description of a not-so-young professor writing a weekly blog.

I am on vacation combining the positive effects of reading [see ‘Reading offline’ on March 19th, 2014] and walking [see ‘Gone walking’ on April 19th, 2017] with a digital detox [see ‘In digital detox’ on July 19th, 2017]; but, through the scheduling facilities provided by WordPress, I am still able to dispense my slot machine homily. I will leave you to decide which posts are from the left and right slots.

Source:

Jean-Paul Sartre, Nausea, translated by Lloyd Alexander, New York: New Directions Pub. Co., 2013.

La Nausée was first published in 1938 by Librairie Gallimard, Paris.

Fourth industrial revolution

Have you noticed that we are in the throes of a fourth industrial revolution?

The first industrial revolution occurred towards the end of the 18th century with the introduction of steam power and mechanisation.  The second industrial revolution took place at the end of the 19th and beginning of the 20th century and was driven by the invention of electrical devices and mass production.  The third industrial revolution was brought about by computers and automation at the end of the 20th century.  The fourth industrial revolution is happening as a result of combining physical and cyber systems.  It is also called Industry 4.0 and is seen as the integration of additive manufacturing, augmented reality, Big Data, cloud computing, cyber security, the Internet of Things (IoT), simulation and systems engineering.  Most organisations are struggling with the integration process and, as a consequence, are only exploiting a fraction of the capabilities of the new technology.  Revolutions are, by their nature, disruptive, and those organisations that embrace and exploit the innovations will benefit, while the existence of the remainder is under threat [see ‘The disrupting benefit of innovation’ on May 23rd, 2018].
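
To make ‘combining physical and cyber systems’ a little more concrete, here is a minimal sketch, in Python, of the loop at the heart of a cyber-physical system: a sensor on a physical asset streams readings, a digital model is updated with each reading, and a decision is fed back to the asset.  Everything here is hypothetical and illustrative; real Industry 4.0 platforms involve far richer models, networks and security layers.

```python
import random

# Hypothetical names throughout: a minimal digital model ('twin') of a
# physical asset, updated from simulated sensor readings.
class DigitalTwin:
    """Keeps a running estimate of the asset's temperature."""
    def __init__(self, alpha=0.3):
        self.alpha = alpha      # smoothing factor for the estimate
        self.estimate = None    # current state of the digital model

    def update(self, reading):
        # An exponentially weighted moving average stands in for a
        # physics-based simulation of the asset.
        if self.estimate is None:
            self.estimate = reading
        else:
            self.estimate = self.alpha * reading + (1 - self.alpha) * self.estimate
        return self.estimate

def read_sensor():
    # Stand-in for an IoT sensor: nominal 70 degC with measurement noise.
    return 70.0 + random.gauss(0.0, 2.0)

twin = DigitalTwin()
for step in range(10):
    estimate = twin.update(read_sensor())
    # The cyber side closes the loop by feeding a decision back.
    action = 'throttle back' if estimate > 72.0 else 'continue'
    print(f'step {step}: estimate {estimate:.1f} degC -> {action}')
```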

Our work on the Integrated Nuclear Digital Environment, on Digital Twins, in the MOTIVATE project and on hierarchical modelling in engineering and biology is all part of the revolution.

Links to these research posts:

‘Enabling or disruptive technology for nuclear engineering?’ on January 28th, 2015

‘Can you trust your digital twin?’ on November 23rd, 2016

‘Getting Smarter’ on June 21st, 2017

‘Hierarchical modelling in engineering and biology’ on March 14th, 2018

Image: Christoph Roser at AllAboutLean.com from https://commons.wikimedia.org/wiki/File:Industry_4.0.png [CC BY-SA 4.0].

The disrupting benefit of innovation

Most scientific and technical conferences include plenary speeches that are intended to set the agenda and to inspire conference delegates to think, innovate and collaborate.  Andrew Sherry, the Chief Scientist of the UK National Nuclear Laboratory (NNL), delivered a superb example last week at NNL SciTec 2018, held at the Exhibition Centre Liverpool on the waterfront.  With his permission, I have stolen his title and one of his illustrations for this post.  He used a classic 2×2 matrix to illustrate different types of change: creative change in the newspaper industry, which has constantly redeveloped its assets from manual typesetting and printing to on-line delivery via your phone or tablet; progressive change in the airline industry, which has incrementally tested and adapted so that modern commercial aircraft look superficially the same as the first jet airliner but represent huge advances in economy and reliability; and inventive change in Liverpool’s Albert Dock, which was made redundant by container ships but has been reinvented as a residential, tourism and business district.  The fourth quadrant he reserved for the civil nuclear industry in the UK, which requires disruptive change because its core assets are threatened by the end-of-life closure of all existing plants and because its core activity, supplying electrical power, is threatened by cheaper alternatives.

At the end of last year, NNL brought together all the prime nuclear organisations in the UK with leaders from other sectors, including aerospace, construction, digital, medical, rail, robotics, satellite and shipbuilding, at the Royal Academy of Engineering to discuss the drivers of innovation.  They concluded that innovation is not just about technology, but that successful innovation is driven by five mutually dependent themes that are underpinned by enabling regulation:

  1. innovative technologies;
  2. culture & leadership;
  3. collaboration & supply chain;
  4. programme & risk management; and
  5. financing & commercial models.

SciTec’s focus was ‘Innovation through Collaboration’, i.e. tackling two of these themes, and Andrew tasked delegates to look outside their immediate circle for ideas, input and solutions [to the existential threats facing the nuclear industry]; the words in square brackets are mine.

Innovative technology presents a potentially disruptive threat to all established activities and we ignore it at our peril.  Andrew’s speech was a wake-up call to an industry that has been innovating at an incremental scale and largely ignoring the disruptive potential of innovation.  Are you part of a similar industry?  Maybe it’s time to check out the threats to your industry’s assets and activities…

Sources:

Sherry AH, The disruptive benefit of innovation, NNL SciTec 2018 (including the graphic & title).

McGahan AM, How industries change, Harvard Business Review, October 2004.

Deep long-term learning

About six months ago I wrote about providing feedback to students [see post entitled ‘Feedback on feedback’ on June 28th, 2017].  I wrote that students tend to find negative feedback unfair and unhelpful, even when it is carefully and politely worded, but that they like clear, unambiguous, instructional and directional feedback [1].  I suspect the same could be said for their teachers, many of whom fear [2] or are anxious about [3] their next student evaluation of teaching (SET) report, even though they tend to see SET data as useful [2].  Some university teachers are devastated by negative feedback [4], with inexperienced and female teachers being more sensitive and more likely to experience negative feelings [5].  What follows is a brief review, though a long blog post, of the usefulness of student evaluation of teaching; the bottom line is that student evaluations of teaching have serious limitations when the goal is to instil deep long-term learning in a culture that values teachers.

Student evaluations of teaching (SET) are widely used in higher education because collecting the data from students at the end of each term is easy and because the data is useful in: improving teaching quality; providing input to appraisal exercises; and providing evidence of institutional accountability [2].  However, the unresolved tension between the dual use of the data, for teacher development and as a management tool [2, 6], has led to much debate about the appropriateness and usefulness of student evaluation of teaching, with strong advocates on both sides of the argument.

For instance, there is evidence that students’ perception of a lecturer significantly predicts teaching effectiveness ratings, with the charisma of the lecturer explaining between 65% [7] and 69% [8] of the variation in ‘lecturer ability’, so that student evaluations of teaching have been described as ‘personality contests’ [9].  Some have suggested that this leads to grading leniency, i.e. lecturers marking students more leniently in order to attract a higher rating, though this argument has been largely refuted [7].  However, several studies [10-12] report a negative association between a pessimistic attitude about future grades and ratings of teacher effectiveness.
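
For readers unfamiliar with the statistics, ‘explaining 65% of the variation’ loosely refers to the share of the variance in the outcome (the effectiveness rating) that a predictor (here, a charisma score) accounts for, often quantified as the coefficient of determination, R², of a regression model.  The short Python sketch below illustrates the calculation on made-up numbers; it is not data from the studies cited above.

```python
import numpy as np

# Illustrative only: synthetic charisma scores and effectiveness ratings.
rng = np.random.default_rng(1)
charisma = rng.uniform(1, 5, size=100)
rating = 0.8 * charisma + rng.normal(0, 0.5, size=100)  # noisy linear link

# Fit a one-predictor linear model, then compute R^2: the share of the
# variance in the ratings that the fitted model accounts for.
slope, intercept = np.polyfit(charisma, rating, 1)
predicted = slope * charisma + intercept
r_squared = 1 - np.var(rating - predicted) / np.var(rating)
print(f'share of variance explained: {r_squared:.0%}')
```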

However, of more concern is the evidence of student fatigue with teaching evaluations, with response rates declining during the academic year and from year 1 to year 4, when adjusted for class size and grades [6].  Student completion rates for end-of-term teaching evaluations are influenced by student gender, age, specialisation, final grade, term of study, course of study and course type.  This means that the respondent pools do not fully represent the distribution of students in the courses [6].  Hence, a knowledge of the characteristics of the respondents is required before modifications can be made to a course in the best interests of all students; but such knowledge is rarely available for SET data.  In addition, the data is usually not normally distributed [13], implying that common statistical practices cannot be deployed in its interpretation, with the result that the lack of statistical sophistication amongst those using SET information for appraisal and promotion leads to concerns about the validity of their conclusions [8].
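
To illustrate the point about non-normality: ratings on a bounded one-to-five scale tend to pile up near the top, so tests that assume approximate normality, such as the t-test, can be misleading, and a rank-based alternative such as the Mann-Whitney U test is usually the safer default.  The sketch below uses synthetic ratings, not SET data from any of the studies cited.

```python
import numpy as np
from scipy import stats

# Synthetic 1-5 ratings for two teachers; like real SET data, the scores
# are discrete, bounded and skewed towards the top of the scale.
rng = np.random.default_rng(0)
teacher_a = np.clip(np.round(rng.normal(4.4, 0.8, 200)), 1, 5)
teacher_b = np.clip(np.round(rng.normal(4.1, 0.8, 200)), 1, 5)

# The t-test assumes roughly normal data; skewed, bounded ratings violate
# that assumption, so compare it with a rank-based (non-parametric) test.
t_stat, t_p = stats.ttest_ind(teacher_a, teacher_b)
u_stat, u_p = stats.mannwhitneyu(teacher_a, teacher_b)
print(f't-test p = {t_p:.3f}, Mann-Whitney p = {u_p:.3f}')
```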

However, recent research creates much more fundamental doubts about the efficacy of SET data.  When learning was measured with a test at the end of the course, the teachers who received the highest SET ratings were the ones who contributed most to learning; but when learning was measured as performance in subsequent courses, the teachers with relatively low SET ratings appeared to have been most effective [14-16].  This is because making learning more difficult can cause a decrease in short-term performance, as well as in students’ subjective ratings of their own learning, but can increase long-term learning.  Such ‘desirable’ difficulties include, for example, spacing out practice, interleaving different topics and testing rather than restudying.  So, if the aim is to instil deep long-term learning within a culture that values its teachers, then student evaluations of teaching have serious limitations.

References

[1] Sellbjer S, “Have you read my comment? It is not noticeable. Change!” An analysis of feedback given to students who have failed examinations, Assessment & Evaluation in Higher Education, 43(2):163-174, 2018.

[2] Spooren P, Brockx B & Mortelmans D, On the validity of student evaluation of teaching: the state of the art, Review of Educational Research, 83(4):598-642, 2013.

[3] Flodén J, The impact of student feedback on teaching in higher education, Assessment & Evaluation in Higher Education, 42(7):1054-1068, 2017.

[4] Arthur L, From performativity to professionalism: lecturers’ responses to student feedback, Teaching in Higher Education, 14(4):441-454, 2009.

[5] Kogan LR, Schoenfeld-Tacher R & Hellyer PW, Student evaluations of teaching: perceptions of faculty based on gender, position and rank, Teaching in Higher Education, 15(6):623-636, 2010.

[6] Macfadyen LP, Dawson S, Prest S & Gasevic D, Whose feedback? A multilevel analysis of student completion of end-of-term teaching evaluations, Assessment & Evaluation in Higher Education, 41(6):821-839, 2016.

[7] Spooren P & Mortelmans D, Teacher professionalism and student evaluation of teaching: will better teachers receive higher ratings and will better students give higher ratings? Educational Studies, 32(2):201-214, 2006.

[8] Shevlin M, Banyard P, Davies M & Griffiths M, The validity of student evaluation of teaching in Higher Education: love me, love my lectures? Assessment & Evaluation in Higher Education, 24(4):397-405, 2000.

[9] Kulik JA, Student ratings: validity, utility and controversy, New Directions for Institutional Research, 27(5):9-25, 2001.

[10] Feldman KA, Grades and college students’ evaluations of their courses and teachers, Research in Higher Education, 18(1):2-124, 1976.

[11] Marsh HW, Students’ evaluations of university teaching: research findings, methodological issues and directions for future research, International Journal of Educational Research, 11(3):253-388, 1987.

[12] Millea M & Grimes PW, Grade expectations and student evaluation of teaching, College Student Journal, 36(4):582-591, 2002.

[13] Gannaway D, Green T & Mertova P, So how big is big? Investigating the impact of class size on ratings in student evaluation, Assessment & Evaluation in Higher Education, 43(2):175-184, 2018.

[14] Carrell SE & West JE, Does professor quality matter? Evidence from random assignment of students to professors, Journal of Political Economy, 118(3):409-432, 2010.

[15] Braga M, Paccagnella M & Pellizzari M, Evaluating students’ evaluation of professors, Economics of Education Review, 41:71-88, 2014.

[16] Kornell N & Hausman H, Do the best teachers get the best ratings? Frontiers in Psychology, 7:570, 2016.