A student journalist contacted me yesterday and asked how I feel about the accuracy of feedback on a popular crowd-sourced website. They approached me because I had the highest number of student evaluations (206 at this moment). My feelings are complicated. I wrote in 2014 about how the “easiness” of a class and student evaluations of teacher performance do not correlate with learning in the research, and how the metrics of RateMyProfessor may short-change learning.
When the college asks me to remind and encourage students to respond to the emailed administrative evaluations of faculty, I also ask students to fill out evaluations for other students on RateMyProfessor. I think I have more evaluations because I teach Comp 1 in computer classrooms. Responding to both evaluation tools in the moments after class takes less effort for students in my computer-aided classes.
I do question the accuracy. Profs who teach electives or higher-level degree requirements have more motivated students who WANT to be in class. I teach developmental and gateway classes. KBOR, and the college, require students to take these classes – so these classes are more likely to have students who do NOT want to be in class. It’s a tougher crowd.
I solicit and discuss feedback on my teaching throughout the semester, because I see misconceptions. Some students say attendance is not mandatory, but I take attendance regularly and practice administrative drop for non-attendance (after 5 absences, with verbal and written warnings). I want to think I give good feedback, as apparently 31% of my students voluntarily note, and that my teaching entertains students (22% say I’m hilarious) while inspiring them (6%), but class is project-driven and inquiry-based. I try to keep lectures to 10 minutes at the start of class, and yet 10 students think the class is lecture heavy. Granted, that’s 5% of respondents, but as with the attendance feedback, it makes me wonder: is some feedback simply wrong? Or maybe some feedback should not be acted on and should be ignored.
To test the accuracy, I’m asking my students to rate me on the same scale and criteria RateMyProfessor uses, at a Google form I created here. NOTE: I could not bring myself to ask students to rate my “hotness.” I suddenly realized why a female colleague explained that collecting that data is problematic and compromises the assessment and the use thereof. I understood before, but I get it differently now.
Furthermore, and in full disclosure: I started out in marketing right out of college, and I have always seen RateMyProfessor as a marketing site asking to be gamed. I have asked students to rate me as hard, to filter more serious students into my classes. That may have had unintended effects, since difficulty may result in a lower ranking. That didn’t bother me until I learned that counselors and administrators use RateMyProfessor as well as students, and getting a reputation as challenging might compromise enrollment. The ethics give me pause, but someone else created the game. I’m just trying to figure out what winning looks like. Does popularity equate with good teaching? Some research suggests not, but I’m going to have to look for the links again.