Seven breakthrough solutions
Taylor Wolken: The Texas Public Policy Foundation’s solutions: Part One
Published: Wednesday, June 8, 2011
Updated: Wednesday, July 25, 2012 22:07
Last month, faculty members circulated an open letter to Richard Box, Chairman of the Texas A&M University System Board of Regents, expressing concern over the seven solutions and garnering over 800 signatures of support. Finally, on May 26 Professor Jaime Grunlan gave an impassioned speech to the Board of Regents questioning the SLATE program, which grants professors cash rewards for the best student evaluations; his speech was met with thunderous applause.
With all the concern over these "Seven Breakthrough Solutions," we will spend the next several days looking at each solution.
Breakthrough solution one is to "Measure Teaching Efficiency and Effectiveness and Publicly Recognize Extraordinary Teachers."
The goal is "to improve the quality of teaching by providing legislators and governing boards with a simple tool to measure faculty teaching performance and to publicly recognize excellent teachers."
At face value, the solution and goal appear bland and uncontroversial; however, there are some red flags. How does one measure "efficiency and effectiveness" accurately with "a simple tool" when the subject matter is complex and subjective?
Step one is to "gather the data and measure teaching efficiency and effectiveness."
Step A is compiling salary and benefit costs, total students taught in the last year, average student satisfaction rating and average percentage of A's and B's awarded.
Step B is dividing total employment cost for each professor by the number of students taught and "force rank from highest cost per student taught to lowest cost per student taught."
This would be an excellent metric if every class were the same size, could be taught the same way and required the same expertise from each professor.
Evaluating "efficiency and effectiveness" using class size has significant drawbacks. Core curriculum courses generally have the largest class sizes, followed by mandatory classes in each major, with upper-level courses being the smallest. Meanwhile, professors and grad students with the least expertise often teach the lower-level courses while those with the most expertise teach the upper-level courses. By this metric, a grad student teaching an intro course appears more valuable than a seasoned professor teaching an upper-level course. The effect is then compounded because the seasoned professor makes more money while teaching a smaller class.
Class sizes also vary significantly by subject. While a POLS 206 class may have 200 students, MATH 141 will be significantly smaller. Subjects like math and English, where hundreds of students can't be tested with Scantrons, would be considered less efficient.
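Steps A and B amount to a simple cost-per-student ranking. A minimal sketch, using hypothetical salary and enrollment figures (none of these numbers come from the TPPF document), shows how class size dominates the result:

```python
# Hypothetical figures illustrating Step B: divide each professor's total
# employment cost by students taught, then "force rank from highest cost
# per student taught to lowest cost per student taught."
faculty = [
    # (description, total employment cost, students taught in the last year)
    ("grad student teaching an intro course", 30_000, 400),
    ("seasoned professor teaching an upper-level course", 120_000, 25),
]

ranked = sorted(
    ((cost / students, name) for name, cost, students in faculty),
    reverse=True,  # highest cost per student first
)

for cost_per_student, name in ranked:
    print(f"${cost_per_student:,.0f} per student -- {name}")
# The professor lands at the "inefficient" top of the list ($4,800 per
# student) while the grad student looks like a bargain ($75 per student),
# even though the metric says nothing about teaching quality.
```

Under these assumed numbers, the ranking rewards large enrollments and low salaries, which is exactly the distortion described above.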
Step C is to "compare student satisfaction ratings and grade distributions."
This step is vague. Would high satisfaction ratings and high grades be preferable? High satisfaction ratings and low grades? Low satisfaction and high grades, or low satisfaction and low grades? Perhaps the TPPF will enlighten us with a guest column.
Since it is unclear what the preferred outcome is, let's simply address the viability of using student satisfaction ratings.
What does student satisfaction measure? Critics call it a popularity contest, but that is a bit unfair. The truth is we don't really know what student satisfaction measures.
Was the class highly rated because the professor was likeable? Was it because the student got an A? Was it well taught? Did the course meet the student's expectations? What were those expectations? Does a student evaluate a blow-off class the same as one in their major? Did the student learn a lot from the class? Were they mad the professor had a strict attendance policy? Who knows?
Student satisfaction is subjective, and we don't know the criteria.
Step D is to collect and read all research articles published in the last twelve months for high cost faculty.
This step is also vague. Who are the high-cost faculty? Are all high-cost faculty researchers? Is this a measure of how often they publish, or are there criteria to evaluate the quality of research?
Many professors see the seven solutions as an attack on research. In regard to research, TPPF spokesman David Guenthner lamented, "You can talk about the double helix on one end of the spectrum, but on the other end of the spectrum you have the professor who does the study on Texas barbecue."
Guenthner doesn't seem to value barbecue research, yet countless restaurant chains spend billions of dollars perfecting their food. According to the National Restaurant Association, the restaurant industry had $580 billion in sales and employed 13 million people in 2010. All research may not be equal, but who gets to decide which research is better?
Step two is to "Publicly post the student satisfaction ratings and number of students taught for each teacher in several prominent locations at their respective colleges."
This final step is fair game and a matter of transparency, even if it has a scarlet-letter feel. If anyone should be able to see the results of a student satisfaction survey, it's the students. If a rewards program continues, the surveys will need to be kept in-house, but it should be noted that websites like Pick-A-Prof already offer students an opportunity to evaluate their professors.
An interesting part of the seven solutions is that they also address possible shortcomings of their policy.
One argument states, "Some may seek to substitute tenured faculty committees for rating faculty effectiveness or use such committees to adjust student satisfaction ratings."
Their response: "Research shows that student satisfaction ratings remain one of the best measures of teaching effectiveness, especially when coupled with student-teacher contracts that describe what students should expect to learn and limits on grade inflation."