Experienced teachers know that “How do I want class to go?” and “How is class actually going?” are two different questions, and both are worth asking. To grow as a teacher, you need evidence of how you’re doing; you need feedback, whether from student evaluations, a CNDLS Mid-Semester Teaching Feedback session, assessments you develop yourself, or peers.
Faculty receive feedback on their teaching every semester in the form of student evaluations. These can be a valuable tool for growth as a teacher, if they’re approached constructively and with a hunger to make them useful.
Most schools (Georgetown included) have a standard form that they use for evaluations, whether online or on paper; many schools (Georgetown again included) also allow you to ask additional questions. (Georgetown faculty can find instructions on the OADS Course Evaluations page.) This is a great opportunity to make sure you get the feedback you want. Maybe you’re trying a new text and you want students’ specific feedback on it; maybe you have a teaching assistant and you’re curious about whether the students felt you worked well together; maybe you’re thinking of making a change next time and you’d like their thoughts on the idea. Whatever it is, don’t miss the chance to focus students’ feedback on areas of particular interest to you.
When you administer evaluations, start by letting students know how important they are. Tell them that you’re genuinely interested in their feedback and that it helps you grow as a teacher. That’s the key message to send: these evaluations matter to you, and their views matter to you. You can explain how you make use of student evaluations and give examples of how they’ve affected your teaching in the past. If the evaluation forms allow for it, urge students to go beyond the numbered rating scales and write detailed responses.
In fact, the impact of evaluations goes beyond your personal growth; you’re not their only audience. Your department and school probably see them and take them seriously as well. Furthermore, at some institutions future students may have access to these evaluations, or to parts or summaries of them. For example, at Georgetown, “The results of some questions (Section I, question 5; Section II, questions 2, 3, 5 & 6; Section III, questions 2, 3, & 5) are published online in the schedule of classes, available to students through MyAccess, unless otherwise specified by faculty through the Course Evaluation Request Form on MyAccess.”
Most schools (including Georgetown) require that you not be in the room when evaluations are being filled out; this makes it easier for students to be candid, so it can be a good idea to leave the room even if you’re not required to.
Student evaluations are controversial (see “Can the Student Course Evaluation Be Redeemed?” in The Chronicle of Higher Education), with a number of people raising questions about their validity and value. In particular, there are consistent concerns that evaluations may be subject to gender and racial bias (see “Is Gender Bias an Intended Feature of Teaching Evaluations?” in Inside Higher Ed). Some teachers, however, find them quite useful, and in any case there’s no way around the fact that they remain a significant part of professors’ lives, and are used by department chairs and other administrators as they make decisions about retention, tenure, promotion, and so on.
The content of evaluations varies from school to school, but many (including Georgetown) include both a quantitative component (i.e., rating scales) and a qualitative component (i.e., short answers). Both can provide valuable information, and both need thoughtful interpretation.
When considering the quantitative results, look for patterns. At the most basic level, your highest scores identify areas of strength and your lowest scores identify areas of weakness. But do go beyond a consideration of average scores. If your school provides the information, look at the distribution of scores (i.e., how many people gave a rating of 5, how many a 4, and so on). In a small class, one or two extreme ratings (unusually high or unusually low) can skew the whole average up or down. (This is one reason you want a lot of students responding; it lessens the impact of outliers.) And though the average doesn’t mathematically discount these scores, you should discount their importance yourself; one or two unusual responses are not good indicators of how the class went overall.

Also look to see whether the class breaks into groups; perhaps there’s a cluster of high scores and a cluster of low ones, with little middle ground. In that case, the class produced two very different experiences, and you’ll want to think about how that happened. (Did people take the class for different reasons? Were there different levels of previous experience among the students? Did the students have different TAs?)

It can also be helpful to compare your scores to average scores in your department and/or university (when available), to give you some context for your own scores. Finally, if you’ve taught the class before, have your scores improved or fallen, or does it vary by question? Can you attribute those changes to anything new to the class (e.g., changes in course structure, activities, or assessments; changes in material covered; a different student population; a different time of day; a different room)?
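To make the distribution point concrete, here is a minimal sketch in Python; all of the ratings below are hypothetical, invented purely for illustration.

    from collections import Counter
    from statistics import mean, median

    # Hypothetical 1-5 ratings from a small class of 12 students.
    ratings = [5, 5, 4, 5, 4, 5, 4, 5, 1, 5, 4, 5]

    print(sorted(Counter(ratings).items()))  # [(1, 1), (4, 4), (5, 7)]
    print(round(mean(ratings), 2))           # 4.33 -- pulled down by a single outlier
    print(median(ratings))                   # 5    -- closer to the typical experience

    # A hypothetical split class: two clusters, little middle ground.
    split = [5, 5, 5, 2, 2, 5, 1, 5, 2, 1, 5, 2]
    print(sorted(Counter(split).items()))    # [(1, 2), (2, 4), (5, 6)]
    print(round(mean(split), 2))             # 3.33 -- an average that hides two camps

In the first class, the average alone understates how well things went; in the second, a middling average conceals two very different experiences. Counting responses per rating, rather than stopping at the mean, surfaces both stories.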
The qualitative results are especially helpful in setting context for the quantitative results. However, these are subject to skewing, too; if only one or two people write answers, they may not be representative of the group as a whole. (This is why it’s important to urge the whole class to take the time to fill out these sections.) It’s also why you should look for patterns here. Is there an issue raised, positively or negatively, by multiple students? Do several people agree on a strength or weakness of the course? If so, you’ll want to take that feedback in. Or perhaps several people bring up the same thing (a particular text, for example), but some liked it and others disliked it. In that case, you’ll have to wrestle with why the same course element had different levels of success with different students. Maybe it depended on whether the students had previous experience in the discipline. (Advanced students like advanced material more than less advanced students do.) Or perhaps it depended on how interested people were in the material in the first place: were some of the students there for a requirement, and others out of personal interest? Are there ways that you can, in future courses, meet the needs of both groups? As with the quantitative results, if comments this semester differ from the comments you received in previous semesters, ask yourself what changed. Finally, do focus on both weaknesses and strengths. If you’re passionate about teaching, you may find yourself dwelling exclusively on the criticism, but you need to know your strengths in order to bring them to bear in all relevant areas of the course.
One last thought: occasionally, depending on context, negative feedback can be a positive sign. Maybe you’re teaching a class that has a reputation for being easy and attractive to disengaged students, and you want to change that reputation; in that case, feedback that the workload is too heavy or too hard might indicate that you’re moving the course in the desired direction. Or maybe the feedback confirms your heretofore-unconfirmed suspicion that a particular text or exercise doesn’t work well. Or maybe you tried something new that produced a negative reaction pointing in a good direction: say you added a community engagement component to the class and students found that it brought up big, uncomfortable issues. You might in fact want those issues to come up; next time you would keep the community engagement but add a component that helps students get comfortable discussing the uncomfortable.
Overall, the point is to think in a more complex way than “These evaluations are positive” or “These evaluations are negative.” If we approach them with the same complexity of mindset that we bring to our scholarship, student evaluations can help us grow as teachers.
As we discuss in the Assessments section of this site, there are a number of both high-stakes and low-stakes ways to find out what exactly students are learning. CATs (Classroom Assessment Techniques) are short in-class activities like the One-Minute Paper, in which students spend a minute writing a short answer to a prompt of your choice related to the day’s lesson. These should give you a good sense of what students understand and, indirectly, how effectively you’re teaching. (Here is a good list of possible CATs.)
You can also design your own feedback instrument, with students completing it either quickly in class or at their leisure outside class. Mid-semester can be a good time to do this: by then, the course has been running long enough for students to have informed opinions, and there’s still enough time to implement some changes going forward. Some faculty members have had success with surveys, in person and online, posing open-ended questions such as: What aspects of this course are helping you learn? What aspects are making learning difficult? What changes, on your part or the professor’s, would improve your learning?
Questions like these prompt students to reflect on their own learning and also assure them that you take their needs into account when crafting your teaching.
Of course, you don’t have to wait until mid-semester to gather feedback. Some faculty members use ongoing strategies such as a suggestion box, or the selection of a class representative to whom students can voice their concerns and who can be counted on to share those concerns with the professor anonymously and in a timely fashion.
Once you have gathered students’ feedback, it’s a good idea to report back to them, addressing what they’ve said and letting them know which suggested changes you can implement immediately, which you’ll consider for future editions of the course, and which are unfeasible for various reasons. Showing them that you take their views seriously should encourage them to continue providing helpful feedback.
Student feedback can be enormously useful, but of course it represents only one kind of perspective. As faculty development specialists Rebecca Brent and Richard Felder note, “students are not qualified to evaluate an instructor’s understanding of the course subject, the currency and accuracy of the course content, the appropriateness of the level of difficulty of the course and of the teaching and assessment methods used in its delivery, and whether the course content and learning objectives are consistent with the course’s intended role in the program curriculum.” Peers do have this kind of expertise, do understand the position you’re in and its many challenges, and can bring their own experience to bear on an evaluation. They can speak clearly to your goals and situation as a teacher. For all these reasons, and also because of the importance of this kind of feedback in tenure and promotion decisions, faculty generally need some form of peer teaching assessment as part of a comprehensive evaluation. However you structure that assessment, here are some tips:
Before you start this process with a colleague, you have to decide the purpose of the assessment: is it formative (feedback meant to help the teacher reflect and improve) or summative (an evaluation that will become part of the formal record and inform decisions about retention, tenure, and promotion)?
Knowing which kind of assessment you’re looking for may well shape what you choose to do and how you choose to do it.
Then, Brent and Felder recommend significant preparation before beginning a peer assessment. In fact, they recommend that each year departments form pools of faculty who are trained at the beginning of the semester and made available to visit colleagues’ classes throughout the year. Whether or not you organize something this structured, it’s a good idea to get clear on the criteria you’re applying to the evaluation. Rather than trying to find a pre-existing teaching evaluation checklist, we recommend a more purposeful and individual process:
Successful peer evaluations generally have several stages:
Some considerations:
After the class visit(s) and review of materials, there are two more very important steps: a debriefing meeting with the professor, and then the creation of the formal record.
Many summative assessments skip right from the class visit(s) to the formal record, but this misses a crucial opportunity. A meeting between the professor and the assessor(s) allows questions to be answered and teaching techniques to be clarified, and it opens an exploration of the context and thinking that led to particular decisions. The professor’s responses in this debriefing might themselves provide information relevant to the record. The follow-up meeting also makes it much more likely that the assessment, even if its intentions are largely summative, will help the teacher grow.
At Georgetown, the Center for New Designs in Learning and Scholarship (CNDLS) offers faculty Mid-Semester Teaching Feedback sessions. We meet with faculty to hear their impressions of the class and their questions about how it’s going; then we go into the classroom without the professor to hear from the students, orally and in writing; and then we meet with the faculty member again to discuss the students’ feedback and to offer thoughts, suggestions, and strategies. This combines student input with the perspective that a faculty peer can offer. One note: these sessions are not designed to be used in tenure and promotion decisions; the process provides information for the faculty member’s use only. (That is, it doesn’t end up in your file.)
Brent, R., and R.M. Felder. 2004. “A Protocol for Peer Review of Teaching.” In Proceedings of the 2004 American Society for Engineering Education Annual Conference & Exposition. American Society for Engineering Education.
Ory, John C. n.d. “Faculty Thoughts and Concerns About Student Ratings.” Office of Instructional Resources, University of Illinois at Urbana-Champaign.
Chism, Nancy Van Note. 2007. Peer Review of Teaching. San Francisco, CA: Anker Publishing.
Please reach out to us at cndls@georgetown.edu if you'd like to have a conversation with someone at CNDLS about these or other teaching issues.