Student evaluations are here to stay. And that’s the way it should be. I think universities owe it to students to provide a structured opportunity to give feedback on classroom experiences. It’s not a matter of “customer service,” but of respecting students and hearing what they have to say. But the way evaluations are typically structured, they facilitate inappropriate application and interpretation, and they don’t ask what we should be asking.
How do undergraduate students wind up in labs doing research? What’s the best way to identify students to bring into the lab?
I want to share a quick story about something slightly stupid that I did some years ago, while teaching.
I’ve noticed that junior scientists tend to be really picky about conflicts of interest, whereas senior scientists don’t tend to be sticklers.
Do you love it when students waste office hours with questions that don’t help them learn? Do you want to cultivate anxious emails from students sent at 3 in the morning? Do you want your students to wager their grades by guessing what you think is the most important material?
Then don’t tell your students what is going to be on the exam.
As we train the next generation of STEM professionals, we use a filter that selects against marginalized folks, on account of their ethnicity, income, gender, and other aspects of identity. This, I hope you realize, is an ethical and pragmatic problem, and constrains a national imperative to maintain competitiveness in STEM.
When we are working for equity, this usually involves working to remediate perceived deficiencies relative to the template of a well-prepared student — filling in gaps that naturally co-occur with the well-established inequalities that are not going away anytime soon. These efforts at mitigation are bound to come up short, as long as they’re based on our current Deficit Model of STEM Recruitment.
Authorship disputes are not uncommon. Even when there are no actual disputes over who did what on a project, there may be lots of authorship resentments. That’s because a lot of folks — by no mere coincidence, junior scientists more often — end up not getting as much credit as they think they deserve when a paper comes out.
This fits my experience so so well. I am first gen American, started at community college, transferred to a good public university and struggled but ultimately graduated with a 3.2 GPA and did OK on GREs. Had zero “social capital” (and had no idea what that was). I was lucky to have a TA (PhD student) who took me under her wing and had me volunteer in her lab a few hours a week and an excellent professor in my last quarter who informed me about internships and helped me secure one specifically targeting minority students (and it was paid!). Anyhow, after gaining a lot of experience through field jobs, I applied and was rejected from many PhD programs and ended up going to a small CSU, racking up student loans and working full time while getting my Master’s. I then applied to one of the better ecology programs with excellent letters of reference and was flatly denied. Again, luckily I had a great supervisor at a govt agency who was very supportive and together we published a couple of manuscripts. I re-applied to that same ecology program and was offered a multi-year fellowship (no TAing, no RAing). The only difference in my application was the publications. Now that I am in the program, I look around at a sea of white faces and most of them I have come to find out are straight out of undergrad, no pubs, very little experience, just great grades and test scores and a lot of social capital and opportunity (paid internships, semester at a field station, paid field methods courses, etc.). What a load of crap.
The last couple weeks have posed a challenge, as several people have contacted me (mostly out of the blue), asking me for ideas about specific steps they can take to improve the recruitment of minority students. This isn’t my field, but, I realize I’ve put myself in this position, because it’s a critical issue and I discuss it frequently. I’m just one of many who work in minority-serving institutions.
I realize that most of the suggestions I’ve given to people (suggestions, not advice) are generalized. If several folks are writing to me, I imagine there are many more of y’all out there who might be thinking the same thing but not writing. Hence this post, with my suggestions.
I’m about to make some statements that I think should be obvious. In fact, everything I say in this post about travel awards will probably be obvious, but I feel moved to write about it since these obviously bad travel awards exist.
Grad students are typically on very tight budgets.
Grad students are expected to attend and present their work at conferences (usually at least one per year).
Departments or schools often have funds available (as conference travel grants or similar) to students to help cover the costs of attending conferences, which is good.
Some of these grants require students to wait until after the conference is over and include all receipts for their expenses before they can apply, which is bad.
My experiences are leading me to worry that strident attitudes against religion are harming efforts to diversify our scientific communities.
This post grows out of a conversation I was having about how scientists purchase supplies and equipment at smaller institutions. It would be helpful if you could leave comments with information and experiences you have.
We should have double blind grant reviews. I made this argument a couple weeks ago, which was met with general agreement. Except for one thing, which I now address.
Some readers said that double-blind reviews can’t work, or are inadvisable, because of the need to evaluate the PI’s track record. I disagree with my whole heart. I think we can make it work. If our community is going to make progress on diversity and equity like we keep trying to do, then we have to make it work.
We can’t just put up our hands and say, “We need to keep it the same because the alternative won’t work” because the status quo is clearly biased in a way that continues to damage our community.
In some academic fields, double-blind review of manuscripts for peer-reviewed publication is the norm. It’s no surprise that people who study human behavior use double-blind review. They must be on to something that most of us in the “hard” sciences haven’t picked up yet.
When I was a tween, a cutesy feel-good book was a bestseller: All I Really Need to Know I Learned in Kindergarten. If we learn to solve problems as kids, that should help us solve similar problems as adults.
Let’s do a kindergarten-level exercise in math and pattern recognition. Can you figure out what shape comes next?
If you said star, you’re right! Congrats!
Let’s do another one. What shape do you expect to find next?
If you said star again, then that means you’re two for two. Good job!
Let’s look for another pattern:
What do you think comes next? If you guessed , then you’re right! Your pattern recognition skills are fantastic!
“Open Science” is an aggregation of many things. As a concept, it’s a single movement. The policy changes necessary for more Open Science, however, are a conglomerate of unrelated parts.
I appreciate, and support, the prevailing philosophy of Open Science: “the movement to make scientific research, data and dissemination accessible to all levels of an inquiring society.” Transparency is often, though not always, good.
Do you provide attribution for images in your lectures and presentations? If you don’t, here are some reasons why you should.
They say a picture is worth a thousand words. Apparently that’s not enough for a citation.
This is a question for both the people requesting letters of recommendation, and those who are signing the letters of recommendation.
About a month ago, a blog post-ish thing was published in Science, griping about a not-rare phenomenon. Sometimes when junior scientists ask for letters of recommendation, they’re asked to write a first draft of the letter. This is, allegedly, “minor fraud.”
While navigating the unemployment system in Sweden, I’ve discovered that I need to report every month what I’ve been doing to find a job. It includes applying for jobs of course but also training. I should also include working on my CV, networking and other activities that improve my employability. I’ve also been warned that one shouldn’t “work” during this time and all work has to be reported (you can work for up to 75 days and keep your unemployment status).
All of this has me reflecting on what work is in academia.
It seems to me that few other professions have the same structure as academic research.
Do you think giving students “participation” points is a good idea? I don’t.
I’m on vacation. But while I was posting a few photos on social media (amazing National Parks and a wooden carving of bigfoot drinking a beer), I stumbled on some extended silliness among fellow scientists that I want to discuss. Luckily, I woke up early and my family is sleeping in, so here goes.
A very routine event has somehow caused great worry: A famous person said something rather hideous. This hideous opinion was put in quotes and got circulated on twitter. A storm of righteous indignation built on twitter, and spilled over onto facebook and other media outlets. Within a few days, this famous person got “in trouble,” insofar as a famous and powerful person can genuinely get in trouble for voicing a contemptuous opinion.
This is a very common story. It’s a little different because of the specifics:
Imagine this scene: A professor at work gets a phone call.
Phone Voice: Hi, I’m the parent of Bill Smith, a student in your intro class.
Professor: Um, hi..?
Phone Voice: Bill was upset about the score he got on a quiz last week, and he thought some of the questions were unfair.
Professor: I’m sorry, but I’m prevented from discussing a student’s academic records under the protection of FERPA [the Family Educational Rights and Privacy Act].
Phone Voice: But I am his parent and Bill told me it was okay to speak with you about it.
Professor: That might be true, but without evidence of a FERPA waiver signed by the student, I can’t have this conversation.
Phone Voice: Oh, we had that waiver form signed at orientation.
Phone Voice: During an orientation session together with our son, the university presented him with a form to sign, waiving his FERPA privacy rights. It’s on record. I can email a copy if you want.
Professor: I prefer the student talk to me about his own grades.
Phone Voice: I realize that, but I have the right to discuss his grades with you and I’d like to talk about question three on the quiz.
Chatting with people at La Selva Biological Station in Costa Rica, the topic from a recent post came up: that journals have cut back on “accept with revisions” decisions.
There was a little disagreement in the comments. Now, on the basis of some conversations, I have to disagree with myself. Talking with three different grad students, this is what I learned:
Some journals are, apparently, still regularly doing “accept-with-revisions.” And they also then are in the habit of rejecting those papers after the revisions come in.
This is a guest post by Susan Letcher, Assistant Professor of Environmental Studies at Purchase College in New York.
A recent job posting at Cocha Cashu caught my eye:
What: Co-Instructor for the Third Annual Course in Field Techniques and Tropical Ecology
Where: Cocha Cashu Biological Station, Manu National Park, Peru
When: September 1 (arrive a few days earlier) - November 30, 2015
Oh cool, I thought. A field course based at a premier research station. Sounds great. But as I read further, a sinking horror took over:
Students who did their undergraduate work at elite universities are dominating access to federally funded graduate fellowships in the sciences. I pointed out this obvious fact at the beginning of this month, which to my surprise caught quite a bit of attention. I also got a lot of email (which I discuss here — it’s more interesting than you might expect).
A common response was: Okay, that’s the problem, what about solutions? Hence, this post. First, here are some facts that are germane to the solutions.
This is a post by Catherine Scott.
I am TAing a first year introductory Ecology/Evolution course this semester, and the laboratory exam is coming up on Tuesday. I’m spending a lot of time this weekend emailing the entire class list messages that start, “Dear students, a member of the class asked…” I go on to list the (anonymized) question, and my answer. I copied this technique from a great professor I had for an invertebrate zoology course. As an extremely shy undergraduate student who never once went to an office hour or emailed a professor or TA with a question, I really appreciated this approach.
Here’s an idea for a new way to fund science: We can just create websites about our projects, and then ask taxpayers to vote for competing research proposals, based on which ones they see on social media.
I didn’t say it was a good idea. This is, essentially, what crowdfunding is.
As we start up the new semester, this is an apt time to evaluate, and update or change, our grading schemes.
I don’t like giving grades. I wouldn’t assign grades if I didn’t have to, because grades typically are not a good measure of actual learning.
Over the last year, I’ve heard more about a new approach to assigning grades that has a lot of appeal: “standards-based grading,” in which students get grades based on how well they meet a detailed set of very clearly defined expectations. This is apparently a thing in K-12 education, and now some university instructors are following suit.
Student evaluations are the main method used to evaluate our teaching. These evaluations are, at best, an imperfect measuring tool.
Lots of irrelevant stuff affects evaluation scores. If you’re attractive or well dressed, this helps your scores. If you are a younger woman, you have to reckon with a distinct set of challenges and biases. If the weather is better out, you might get better evaluations, too. So, don’t feel bad about doing things to help your scores, even if they aren’t connected to teaching quality.
My university aptly calls these forms by their acronym, “PTE”: Perceived Teaching Effectiveness. Note the word: “perceived.” Actual effectiveness is moot.
People are aware whether or not they learned. However, superficial things can really affect perception. What our students think about the classroom experience is important. But evaluation forms are not really measuring teaching effectiveness. These evaluations measure student satisfaction more than learning outcomes. Since we are being held accountable for classroom performance based on student satisfaction, it is in our interest to pay attention to the things that can improve satisfaction.
Here are some ways I’ve approached evaluations with an effort to avoid getting bad ones.
- I try to teach effectively. The best foundation of perception is reality. I put some trust in my students’ ability to assess performance. If I’m doing a good job, my students should know it.
- I work hard to demonstrate that I respect my students. It’s easy to give in to the conceit that my time is more valuable than the time of my students. When I see myself going down that dark pathway, I try to follow the golden rule, and treat the time of my students with as much concern as I would like my own time to be treated. For example, I make sure class always ends on time.
- I emphasize fairness. On the first day of class, I let students know that life isn’t fair, but I try hard to make sure that my class is run as fairly as possible. Students often volunteer gripes about their other classes, and unfairness is always the common thread in these discussions. Even if students perform poorly in a class, if they think that it was conducted fairly, then they are still usually satisfied.
- I recall Hanlon’s Razor: “Never ascribe to malice that which is adequately explained by incompetence.” None of my students are out to get me. Ideally, they’re out for themselves. Sometimes, I’m not clear enough about expectations. When a student needs something, I approach the interaction with the default assumption that it’s my fault. And if it’s not my fault, it’s not an intentional flaw, so I can’t give students a hard time about the shortcoming.
- I don’t engage in debates about graded assignments. I tell my students that if there is a very simple mathematical error or something I missed, they can bring it to me immediately after class. Any other errors need to be addressed with a written request by the start of the next class meeting. I’ve only gotten a few of these, and in all cases, the students were correct.
- When a student is persistent about points, I avoid the argument whenever possible. I don’t concede unearned credit, but I don’t dismiss the concern either. Nearly all requests for grade changes are so tiny that they have a negligible effect on the final grade. I show them, numerically, that it doesn’t seem to make a difference. I tell them that if they are right on the borderline at the end of the semester, I’ll make a note of it and we can talk about it at that time. This prevents the student from waging a futile argument, and keeps me out of the business of catering to minutiae.
- I run a tight ship. I can get annoyed by inappropriate behavior, but the students are usually even more annoyed. When someone is facebooking in the front row or monopolizes discussion, the rest of the class is usually super-pleased that I shut it down, as long as I do it with respect. Classroom management is a fine art that we are rarely taught. (I’ve learned some from education faculty and K-12 teachers.) I think establishing the classroom environment in the first few days is critical. I don’t enforce rules, but I develop accepted norms of behavior collaboratively on the first day of class. When things happen outside the norm, I address them promptly and, I hope, gently. When anybody (including myself) is found to be outside the norm, we adjust quickly because we agreed to the guidelines on the first day of class. I’ve botched this and have been seen as too severe on occasion, but I’d prefer to err on that side than to have an overly permissive environment in which students don’t give one another the respect of their attention.
- A classic strategy is to start out the term with extreme rigor, and lighten up as time goes on. I don’t do this, at least not intentionally, but I don’t think it’s a bad idea as long as you finish with high expectations. In any circumstance, I imagine it would be a disaster to increase the perceived level of difficulty during the term.
- I use midterm evaluations, using the university form partway through the semester, for my own use. This gives me early evidence about perceptions with the opportunity to change course, if necessary. I am open and transparent about changes I make.
- I often use a supplemental evaluation form at the end of the term. There are two competing functions of the evaluation. The first is to give you feedback for course improvement, and the second is to assess performance. What the students might think is constructive feedback might be seen as a negative critique by those not in the classroom. It’s in our interest to separate those two functions onto separate pieces of paper. Before we went digital, I used to hold up the university form and say: “This form [holding up the scantron] is being used by the school as a referendum on my continued employment. I won’t be able to access these forms until after the next semester already starts, so they won’t help me out that much.” Then I held up another piece of paper [an evaluation I wrote with specific questions about the course] and said, “This one is constructive feedback about what you liked and didn’t like about the course. If you have criticisms of the course that you want me to see, but don’t think that my bosses need to see them, then this is the place to do it. Note that this form has specific questions about our readings, homework, tests and lessons. I’m just collecting these for myself, and I’d prefer if you don’t put your names on them.” I find that students are far more likely to evaluate my teaching in broad strokes in the university form when I use this approach, and there are fewer little nitpicky negative comments.
- I try to avoid doing evaluations when students are more anxious about their grade, like on the cusp of an exam or when I return graded assignments. When I hand out the very helpful final exam review sheet, which causes relief, then I might do evaluations.
- I don’t bring in special treats on the day I administer evaluations. At least with my style, my students would find it cloying, and they wouldn’t appreciate a cheap bribe attempt. Once in a long while, I may bring in donuts or something else like that, but never on evaluation day.
- I’ve had some sections with chronic attendance problems, in which some students would skip or show up late. On those occasions, I made a point to administer evaluations at the start of class on a day that had low attendance. I imagined that the students who weren’t bothering to attend class were less likely to give a stellar rating. Moreover, the absent students weren’t as well qualified to evaluate my performance as those sitting in class. (Of course, those attendance problems indicated that I had a bigger problem on my hands.)
- Being likable and approachable. Among all the things that influence evaluations, I think this is the biggest one. There are many ways to be liked by your students, as a human being, but I think being liked is a prerequisite to really good scores. Especially with our students who face a lot of structural disadvantages, approachability is important for the ability to do the job well. I’m not successful enough on this front. It hasn’t tanked me in evaluations, because by the end of the semester the students are comfortable with me, but that comfort doesn’t emerge as quickly as I’d like. This is the area I need to work on the most. If I am to do all the professorly things with the students with the greatest needs, they need to be able to talk to me.
Of course, some of these tips don’t apply if the evaluations are being administered online. This is a growing trend, and my university made the switch a couple years ago. (Thoughts and experiences with paper vs. online evaluations are in the ever-growing queue for future posts.)
Are there different or additional approaches that you use for the non-teaching-performance related aspects of student evaluations?
[update: be sure to read this comment. I think everything in this post is relevant to professors of both genders, but there are additional issues involving student biases that female professors need to deal with that I haven’t addressed. Professors need to be approachable to do their jobs. If students can’t talk to us, then that puts a low ceiling on what we can help our students achieve. However, what it means to be professional and “approachable” for a younger female professor might look really different than for an older guy. As I don’t have experience being a younger female professor, I’m not as well qualified to address this as some others. Another good reason to cruise over to Tenure, She Wrote.]