Each semester, thousands of faculty evaluations are quickly filled out or bubbled in, only to be tossed into an envelope and immediately sent to department heads.
To acquire more accurate ratings for its professors, Texas A&M University-Commerce has decided to introduce a new evaluation program developed by the Individual Development and Educational Assessment (IDEA) Center.
The program will give students a chance to evaluate their professors in more depth, and will also allow them to describe which teaching methods were useful in the classroom.
“I never thought the evaluations we did at the end of semesters were very thorough or even that accurate. The questions are always too general, and I hate marking lower than the highest rating possible without being able to say why I chose a lower rating,” said Paige DeFelice, a sophomore psychology major.
According to the IDEA Center, student ratings of instruction have been the subject of over 2,000 published research studies. The majority of these studies offer reassurance that the evaluations schools conduct are accurate if at least ten people participate in the evaluation.
But even though studies suggest evaluations are mostly accurate, some still argue that no rating form can cover every possible course objective or teaching method, and that ratings are inseparable from the attitudes of the students who fill the forms out.
“When I fill out the evaluations at the end of the semester, it usually depends on if I like the professor or not. If I like them, I usually check the highest rating for every category. If I don’t like them much, I actually take the time to read the questions and bubble in the ovals accordingly,” said Jasmine Reno, a freshman biology major.
On the other hand, researchers have found that if most students take a class only to meet requirements, rather than out of genuine interest, the professor is likely to receive negative evaluations regardless of teaching method.
The IDEA program attempts to offset these negative contributors by taking such extraneous factors into account through adjusted ratings.
According to the IDEA Center, one of the several weaknesses of student evaluations and ratings is the “halo effect,” so called because raters tend to form a general opinion of the person being rated and then let that opinion drive every other rating category.
In a classroom, if students have a positive impression of the professor and his or her teaching methods, then this “halo effect” has a positive outcome for that professor’s ratings; but if the general consensus is negative, then ratings are unfavorable.
Researchers have also found that most people have a tendency to avoid the extremes (very high and very low) in deciding ratings. This is called the “error of central tendency,” and results in more ratings piling up toward the middle of the rating scale than should be there.
“I never choose the lowest ratings on evaluations, even if I don’t like the professor. I figure that if a person has been hired to teach at a college, then their teaching methods can’t be bad enough to deserve the lowest rating,” said James Ingland, a junior agriculture major.
By using these new evaluation methods, A&M-Commerce hopes to pinpoint areas that need improvement in the classroom, as well as gain a better judgment of the quality of instruction it is providing its students.