Active learning is flexible and designed to reach the reticent

I’ve gotten positive feedback about a post in which I explain how it’s not that much work for me to do active learning in the classroom. However, a couple of entirely reasonable misgivings seem to crop up, and I’d like to give my take on those causes for reluctance to start up with active learning approaches.

Where do you eat lunch? And does it matter?

Lunch culture seems to vary a lot from place to place.

I will admit to sometimes eating lunch at my desk, even though it seems to be a highly unusual thing at European universities. But these days it is rare for me to do that, partly because most people aren’t and partly because it is just nicer to take a moment and eat properly.

Remetaphoring the “academic pipeline”

We need to ditch the “academic pipeline” metaphor. Why?

The professional destinations of people who enter academic science are necessarily varied.

We do not intend or plan for everybody training in science to become academic researchers.

The pipeline metaphor dehumanizes people.

Respectful conversation at academic conferences

You’re probably familiar with this scene from academic conferences:

Person A and Person B have been chatting for a few minutes. Person C strolls by and makes eye contact with Person A. Person C gives a big smile to Person A, which is reciprocated, perhaps with a hug. Both A and C enthusiastically ask one another about their lab mates, families, and life in general.

At this moment, Person B is feeling awkward.

Huge problems during research are totally normal

At the moment, I have the great pleasure of working with a bunch of students at my field site in Costa Rica. Which means that I’m really busy — especially during the World Cup too! — but I’m squirreling away a bit of time before lunch to write about this perennial fact that permeates each field season.

We are used to stuff working. When you try to start your car, it turns on. When we set alarms to wake us up, they typically wake us up. You take a class, work hard and study, and earn a decent grade. Usually these things happen. And when they don’t happen, it’s a malfunction and a sign of something wrong.

Preparing a talk for a conference

I distinctly recall a little non-event at a conference: I was scooting to catch a friend’s talk on time. I found him sitting in the hallway outside the room, slide carousel* in his lap, grabbing a bunch of slides and putting them into his carousel. He was picking out slides on the fly, at literally the last minute, figuring out both his content and his sequence.

The interplay of science and activism

This is a guest post by Lirael.

I’m a grad student in the sciences. I’m also an activist. I spend most of my time doing one of those two things. So Amy’s recent post got me thinking about science and activism and how they mix. What do you do when much-needed public attention to some issue in your field turns out to be lacking in scientific literacy (or understanding of the business of science)? How, in general, do science and politics interact? What are the implications of those interactions?

When we talk about making things political, what we typically mean is reducing them to soundbites or partisan battles. But if you think of politics in a broader sense, most things are political, or at least have political implications, including science. Research on the speed and potential effects of climate change is political. Which diseases get prioritized in research funding is political. How diseases are defined is political (look up the controversy around women, the CDC, and AIDS in the early 1990s). Robotics, with its wide variety of applications, including politically charged ones like defense, manufacturing, and agriculture, is political. Politics is about people’s lives, and science affects people’s lives – isn’t that one of the reasons that many of us got into it?

Funding isn’t apolitical either. Within some fields of math and computer science, there’s significant controversy about NSA funding. One reason that I stopped working for defense contractors was my discomfort with how the funding source affected how we were thinking about the potential applications of our own work – for instance, thinking of, and presenting, work on fast pattern classification in terms of its usefulness in missile guidance rather than its usefulness in classifying ventricular arrhythmias or diagnosing retinal disorders. Or thinking of and presenting work on indoor robot navigation in terms of military applications rather than civilian emergency assistance, room cleaning, etc. And concerns about funding and conflicts of interest, especially in a biotech context, leading to bad/biased science aren’t limited to people who are clueless about how science funding works.

The Avaaz campaign pitch has a lot of cluelessness, as we’ve noted. They don’t seem to get how science is done – they think they’re going to change the field by funding a single study? They don’t consider how their funding might be just as likely to introduce bias into the science that they fund as any other funding. They don’t explain how they’re going to find a lab to fund. They don’t appear to know the state of bee research very well, or if they do, they’ve sacrificed that in the name of a more easily accessible funding pitch. So what do scientists do with this? What do activists do with this? What’s a better model than this campaign? What can we do to help ensure that people who are committed to action on the problems that affect their lives understand the science behind them?

I’ve been active in my state’s climate justice movement over the last year, and one of the things that struck me was how many scientists there were at the protests. I’ve participated in a lot of social movements, and let me tell you, that is not something that you usually see. There was a time when I was in a group doing jail support (where you wait outside a jail for arrestees to be released so that you can give them food and water and first aid and emotional support), and a tenured physicist, who was also part of the jail support group, gave an impromptu lecture on introductory thermodynamics to a group of interested fellow protesters to pass the time. I’ve gotten rides to events with people who tell me about their research in geology or atmospheric science. One of the major local organizers is on leave from a math PhD program. I’ve been on a six-day march where one evening, after we got to our camp site, we sat around and had a Q&A with a photovoltaics guy about the current state of solar energy. These people add a lot to the movement. They participate in a variety of ways, and they also educate. The climate justice movement has been bringing scientists to teach-ins, to improve other participants’ scientific literacy, for years.

Another interesting model is the Union of Concerned Scientists, which started as a project of MIT students and faculty in 1969. They produce layperson-friendly issue briefings (including on science funding as well as on relevant science and engineering issues themselves), produce original research and analysis, run scientifically-literate petition campaigns, and much more. Many of their issue briefings contain “What You Can Do” pages for laypeople, and they have an activist toolkit on their site.

Layperson activists can play important roles in the politics of science. In the first example that pops into my head, the often-confrontational AIDS action group, ACT UP, which was not exactly known for its nuance, won lower prices per patient for AIDS treatments, accelerations in FDA review of treatments, and improved NIH guidelines for clinical trials. Were there ACT UP participants who reduced complex issues of funding, safety, and research pace, to simplistic talking points? Yes. Did they sometimes say things that were unfair to well-meaning scientists? Probably. But they got results – working together with scientist-activist mentors like organic chemist Dr. Iris Long.

If you think that a movement is trying to support a worthy cause but is missing important points (or making wrong ones) with their sloganeering, help them come up with better slogans (yes, you need slogans, not just journal articles, in any form of activism) and better talking points.

Our expert advice remains unheeded

Once in a while, tropical biologists get bot flies. We sometimes find this out while we are in the field. But on five occasions, my students have returned to the US, and then discovered that they are hosting a bot. They all contacted me for advice. I told them a few things, but the most important one was:

Whatever you do, don’t go see a doctor. That could be disastrous.

Nonetheless, three of these students went to the doctor.

A mature bot fly larva, Dermatobia hominis, that emerged from my student’s arm while he was sleeping. He intentionally reared this one out and allowed it to pupate. Pencil is for scale. Photo: T. McGlynn

This has always troubled me. Without any additional context, it looks like the students just didn’t trust me, and thought that I was stupid. At the very least, it shows that they trusted their own intuition over my recommendation based on a long history of experience. It shows that they followed the misinformed advice of family and friends over the judgment of the person who was responsible for the trip to the rainforest.

It shows that when it really really really counts, my guidance ain’t worth much at all to my own students.

I don’t give students this instruction without an explanation. I tell them that nearly every doctor in the US will want to cut the creature out. History shows that bot fly larvae are smarter than doctors. If you present yourself to a US doctor with a bot inside you, the predictable result is that you leave the doctor with your bot inside you. You will also leave without a large chunk of flesh that the doctor removed in a futile attempt to get the bot. Sometimes the bot is killed in the surgery, but not excised, which leads to a rotting carcass and infection, and the need for serious antibiotics. I tell them that, if they can’t get it out using the variety of techniques we’ve discussed, and they feel compelled to go to a medical professional, they must go to a vet and not to a doctor. (The students who did the opposite of my recommendation came to regret their choice, if you’re wondering.)

These bot fly incidents are convergent with a recurring incident in a non-majors laboratory that I have taught. The week before an exam, I hand out a review sheet that specifies the scope of the exam. I then tell the class:

Check out item number three on the review sheet. This is a straightforward question about osmosis. The answer is that the volume of water in the tubing will “increase.” The correct answer to this question is “increase.” Just circle the word “increase” and do not circle the word “decrease.” I’m letting you know the answer to this question now and I guarantee — the odds of this question being on the exam next week are 100%. I promise to you, with all of my heart, that this question will be on the exam word for word, and this one question will be worth 20% of your grade on this exam. You don’t want to get this question wrong, and I’m telling you about it right now. So, be sure to write down in your notes that this question will be on the exam and be sure to remember the correct answer when you see it.

The reason that I’m being really obvious about telling you about this question is that in the past, half of the class has gotten the answer to this question wrong. It’s a simple question, and it addresses the main point of the lab we conducted for more than two hours last week, but still, a lot of people got it wrong last semester.

You should know that those students also were told in advance what would be on the exam. Just like I’m telling you right now. They knew that 20% of their exam hinged on remembering one word, “increase,” and still the majority of them got it wrong. I’m telling you this now because I don’t want you to suffer the same fate as those other students. DON’T BE LIKE THE STUDENTS FROM LAST SEMESTER WHO WERE FED THE ANSWER AND THEN GOT IT WRONG THE FOLLOWING WEEK. Just remember that “increase” is correct and the other word is not correct. I’d like you to remember the physical mechanism that explains this osmosis, but more than anything else I’d like you to demonstrate that you can be prepared for the exam and remember this small fact which I am hand-feeding to you right now. I promise you this exact question will be on the exam. Learn from your predecessors; don’t make their mistake. I’m giving you 20% of the exam for free right now, so write this down.

As I give this slightly overwrought speech, the students are paying attention. There is eye contact. There might be note-taking activity. Nobody’s on their phone, and nobody’s chitchatting.

When I administer the exam, more than half of the class circles “decrease” instead of “increase.” This has happened four times, and each time it happens a little piece of my heart dies.

As you can imagine, many of the students in our non-majors class are as disengaged as humanly possible. By no means is this a difficult course, even with low standards, but the fail rate for the corresponding lecture course is about 50%. The students who fail are clearly doing so because they aren’t even making the slightest effort. The reason that I keep giving students that same question over and over, and give them the correct answer over and over, is to give me some reassurance that the wretched performance by so many of the students is not my fault. I do this to grant myself absolution.

In these labs, each week is designed to give students the opportunity to develop their own experiments, find new information on their own, and work together to solve problems. This happens to some degree. But half of the students do not exert the tiniest amount of thought about doing what it takes to pass the exam. Why don’t they try even the slightest bit, despite my best efforts to both inspire and feed them the right answers?

The students who fail these exams trust their own intuition, or some other model of behavior, instead of my own advice. If anybody is the person to tell you how to pass the exam, it should be the professor who is telling you the answers to the exam. But in this case, the students weren’t even bothering to look at their notes for five seconds before stepping into the exam. They’ve presumably heard from other people that work is not required for this class whatsoever, or perhaps they don’t care for some other reason. All I know is that no matter what I do, I can’t get these students to care about their grade on the exam. Some are excited about the labs, but not necessarily about passing the exam.

So, what do the bot fly story and the osmosis story have in common? No matter how hard we try, sometimes our students won’t follow our recommendations. At least, not mine.

We are fancy-pants PhD professors, with highly specialized training. We’re paid to be the experts and to know better. That doesn’t mean that our words are prioritized over other words. Anything we might say just ends up in a stream of ideas, and most of those ideas flow out as easily as they flow in. It’s no accident that my teaching philosophy is “you don’t truly learn something unless you discover it on your own.” This is why I focus on creating opportunities for self-discovery in teaching. This is the only way in which people truly learn.

No matter what we professors might say or do about bot flies, or studying for exams, or anything else, other people will rely on their own judgment over ours. Even smart people often use misguided intuition when making important decisions, even when they are obviously wrong on the facts and the experts are overtly correct.

It’s easier to listen to other people than it is to heed their words. As a professor and research mentor, I’ve given up on the expectation of being heeded. I just work to speed up the process of self-discovery of important ideas. But, for the most part, I still don’t know how to do that. I think it’s an acquired skill, and a craft, and I think I still have a ways to go.

Why I prefer anonymous peer reviews

Nowadays, I rarely sign my reviews.

In general, I think it’s best if reviews are anonymous. This is my opinion as an author, as a reviewer, and as an editor. What are my reasons? Anonymous reviews might promote better science, facilitate a more even playing field, and protect junior scientists.

The freedom to sign reviews without negative repercussions is a manifestation of privilege. The use of signed reviews promotes an environment in which some have more latitude than others. When a tenured professor such as myself signs reviews, especially those with negative recommendations, I’m exercising liberties that are not as available to a PhD candidate.

To explain this, here I describe and compare the potential negative repercussions of signed and unsigned reviews.

Unsigned reviews create the potential for harm to authors, though this harm may be evenly distributed among researchers. Arguably, unsigned reviews allow reviewers to be sloppy and get away with a less-than-complete evaluation, which will cause the reviewer to fall out of the good graces of the editor, but not those of the authors. Also, reviewer anonymity allows scientific competitors or enemies to write reviews that unfairly trash (or more strategically sabotage) the work of one another. Junior scientists may not have as much social capital to garner favorable reviews from friends in the business as senior researchers. But on the other hand, anonymous reviews can mask the favoritism that may happen during the review process, conferring an advantage to senior researchers with a larger professional network.

Signed reviews create the potential for harm to reviewers, and confer an advantage to influential authors. It would take a brave, and perhaps foolhardy, junior scientist to write a thorough review of a poor-quality paper coming from the lab of an established senior scientist. This could harm the odds of landing a postdoc, getting a grant funded, or getting a favorable external tenure evaluation. Meanwhile, senior scientists may have more latitude to be critical without fear of direct effects on the ability to bring home a monthly paycheck. Signed reviews might allow more influential scientists to experience a breezier peer review experience than unknown authors.

When the identity of reviewers is disclosed, that information may enable novel game-theoretical strategies that further subvert the peer-review process. For example, I know there are some reviewers out there who seem to really love the stuff that I do, and there is at least one (and maybe more) who appears to have it in for me. It would only be rational for me to list the people who give me negative reviews as non-preferred reviewers, and those who give positive reviews as recommended reviewers. If I knew who they were. If everybody knew who gave them more positive and more negative reviews, some people would make choices to help them exploit the system to garner more lightweight peer review. The removal of anonymity can open the door to corruption, including tit-for-tat review strategies. Such a dynamic in the system would further exacerbate the asymmetries between the less experienced and more experienced scientists.

The use of signed reviews won’t stop people from sabotaging others’ papers. However, signed reviews might allow more senior researchers to use their experience with the review system to exploit it in their favor. It takes experience receiving reviews, writing reviews, and handling manuscripts to anticipate how editors respond to reviews. Of course, let’s not undersell editors, most of whom I would guess are savvy people capable of putting reviews in social context.

I’ve heard a number of people say that signing their reviews forces them to write better reviews. This implies that some may use the veil of anonymity to act less than honorably or at least not try as hard. (If you were to ask pseudonymous science bloggers, most would disagree.) While the content of the review might be substantially the same regardless of identity, a signed review might be polished with more varnish. I work hard to be polite and write a fair review regardless of whether I put my name on it. But I do admit that when I sign a review, I give it a triple-read to minimize the risk that something could be taken the wrong way (just as I do whenever I publish a post on this site). I wouldn’t intentionally say anything different when I sign, but it’s normal to take negative reviews personally, so I try to phrase things so that the negative feelings aren’t transferred to me as a person.

I haven’t always felt this way. About ten years ago, I consciously chose to sign all of my reviews, and I did this for a few years. I observed two side effects of this choice. The first one was a couple of instances of awkward interactions at conferences. The second was an uptick in the rate at which I was asked to review stuff. I think this is not merely a correlative relationship, because a bunch of the editors who were hitting me up for reviews were authors of papers that I had recently reviewed non-anonymously. (This was affirmation that I did a good job with my reviews, which was nice. But as we say, being a good reviewer and three bucks will get you a cup of coffee.)

Why did I give up signing reviews? Rejection rates for journals are high; most papers are rejected. Even though my reviews, on average, had similar recommendations as other reviewers, it was my name as reviewer that was connected to the rejection. My subfields are small, and if there’s someone who I’ve yet to meet, I don’t want my first introduction to be a review that results in a rejection.

Having a signed review is different than being the rejecting subject editor. As subject editor, I point to reviews to validate the decision, and I also have my well-reasoned editor-in-chief, who to his credit doesn’t follow subject editor recommendations in a pro forma fashion. The reviewer is the bad guy, not the editor. I don’t want to be identified as the bad guy unless it’s necessary. Even if my review is affirming, polite, and as professional as possible, if the paper is rejected, I’m the mechanism by which it’s rejected. My position at a teaching-focused institution places me on the margins of the research community, even if I am an active researcher. Why the heck would I put my name on something that, if taken the wrong way, could result in further marginalization?

When do I sign? There are two kinds of situations. First, some journals ask us to sign, and I will for high-acceptance rate journals. Second, if I recommend changes involving citations to my own work, I sign. I don’t think I’ve ever said “cite my stuff” when uncited, but sometimes a paper cites me and follows up on something in my own work, and I step in to clarify. It would be disingenuous to hide my identity at that point.

The take home message on peer review is: The veil of anonymity in peer review unfairly confers advantages to influential researchers, but the removal of that veil creates a new set of more pernicious effects for less influential researchers.

Thanks to Dezene Huber whose remark prompted me to elevate this post from the queue of unwritten posts.

Maybe I am a writer after all

I’ve been head down, focusing on writing grants lately. These days I spend a good deal of my time writing and thinking about writing, which isn’t what I imagined life as a scientist to be.

When I was much younger, I wanted to be a writer. I read voraciously. Mainly fantasy novels and classics like Jane Austen and Lucy Maud Montgomery. I spent a lot of time out in the fields and woods around the places we lived and in my head in worlds far from my own. Being a writer sounded so romantic. But along the way that idea faded. Writing in my English classes was uninspiring and the one thing I didn’t do was write, which is of course what makes one a writer. I continued to read, with my tastes broadening (but I still enjoy a good fantasy novel when I get the chance), but honestly I didn’t write that much and most of what I did write was because I had to.

Fast-forward to my first undergraduate research project: I was working on sex allocation in plants. The measurements came fairly easily (aside from all the time they took) but once I had a complete and analyzed dataset, then came the writing. It was my first experience writing and rewriting and rewriting something. And then there was submitting it to a journal and rewriting again. I had never worked so hard at writing something, but I have definitely done so since then.

As my career in science has progressed, I’ve needed to take writing seriously. As an undergrad, I really had no idea how much writing was involved in most scientific fields. Unfamiliar with such things as peer review, I was ignorant about the process between doing research and publishing papers.

These days I’ve published a modest number of papers but the stories behind them have really helped me grow as a writer. There was the paper from which we decided to cut a significant number of words (I can’t remember the number but maybe a quarter of the paper) to try for a journal with a strict word limit (which rejected it anyway). It meant looking at every single sentence to see if every word was truly necessary. The process was kind of fun and became a little like a game or puzzle. I’m still overly wordy at times but now I’m better at slashing in the later drafts. Then there was the time our paper kept getting rejected and we realized (read: my co-author realized, because I didn’t even want to think about it anymore) that the entire introduction needed to be reframed. So we basically tossed the intro and discussion and started again. It was painful but ultimately what needed to be done. What was there before wasn’t bad writing but was setting up expectations that weren’t fulfilled by our data.

Through all of this and especially writing here, I realized that I became a writer without even realizing it. My science has taught me more about the craft of writing than any of the English classes I took ever did (but to be fair I stopped taking these after the first year of my undergraduate degree). I’m not sure if I’ll ever tackle a fiction story, and that is ok. I turned into a different kind of writer than my childhood self imagined. And I know there is a whole other craft of understanding how to construct a story, which is very different than writing a paper or a grant proposal or a blog post. I’m not arrogant enough to think my writing is a universal skill but if I did want to write a novel I now have a better idea of what that might take (writing and rewriting and rewriting and repeat).

There are lots of scientists who also write books for more general audiences, suggesting that the transition from scientist to what most would consider a writer isn’t that far-fetched. This Christmas I enjoyed the writing of one of my favourite people from my graduate school days, Harry Greene. “Tracks and Shadows” is a lovely, often poetic read about life as a field biologist, snakes and much more. And I haven’t picked it up yet but another Cornellian I knew has gone on to do science television and write “Mother Nature is Trying to Kill You”. It looks fun. These examples of scientists I know writing books also speak to the possibility of writing beyond scientific papers. And as the Anne Shirley books taught me, you should write what you know.

Maybe someday I’ll decide to write a book, but for now, back to those grants.

Scientists know how to communicate with the public

I bet that most of us are steady consumers of science designed for the public. Books, magazines, newspapers, museum exhibits, radio, the occasional movie. The people who bring science to the masses are “science communicators.” (The phrase “science communication” is a newish one, and arguably better than “science writing,” as a variety of media involve more than just writing.)

Nearly everything I’ve seen in science communication shares a common denominator: scientists. Science communication doesn’t amount to much without researchers. Science is a human endeavor, and it’s rarely possible to tell a compelling story without directly involving the people who did the science. As restaurant servers bring food to the table and cooks typically stay in the kitchen, science communicators bring the work of scientists to the public while scientists typically focus on publishing scientific papers.

I interact with practitioners of this craft on the uncommon occasions when my research gets notice beyond the scientific community. (My university doesn’t send out press releases when my cooler papers come out, so the communicators need to find me.)

When I listen to what science communicators have said to us scientists, there are two items that are a heavy and steady drumbeat:

  1. It is the duty of scientists to spend some of our time doing science communication, and it’s also in our interests.

  2. Most scientists don’t yet know how to communicate with the public.

I’m not so sure about #1. I have decided the second one is off the mark, or at least so overgeneralized that it’s either wrong or useless.

It may or may not be our duty to share science with the public. (Yes, I know the arguments, reviewed here, for example.) Regardless, the last interest group that I’d look to for impartial advice on this matter would be science communicators. This would be like learning about the need for propane grilling from a propane grilling salesperson. It would be like learning about K-12 energy education from a workshop funded by a petroleum company (sadly, this is happening this week in my city). Of course science communicators think that science communication is important!

For most scientists, the division of labor between cooks and servers is just fine. (Of course there is nothing about being a technical scientist that disqualifies someone from being an effective public communicator.) There are many important things in this world, and some of us choose other things. (This next month, for what it’s worth, I’m talking to three community organizations, volunteering for an all-day science non-fair, and writing a blog post about my lab’s latest paper.) My funding agency places science communication as one potential component of broader impacts, and I’m definitely listening to them. Scientists, if we want to engage the broader public, that’s great! But it would be disingenuous for me to tell you that it’s your duty. We all owe many things to society, and I’m cool with it if you choose, or don’t choose, to put science communication on your plate. I’m not going to be that person who is telling you what your duties are with respect to your own career. It’s up to us to forge our own trajectories and priorities.

So we all agree that scientists that don’t spend time on science communication either are, or are not, selfish bastards.

But, is it really true that most of us scientists aren’t capable of sharing our science effectively? I call BS on this canard.

If there happens to be a stray professional science communicator reading this, I imagine that I just induced a few chuckles and a shake of the head. Let me write some more to clarify.

Most of us are wholly capable of sharing our science with the public in an understandable and even interesting fashion. However, that doesn’t mean that, when interacting with the media, we are always willing to play along. We might not want to provide the sound bite you’re looking for. We might be resisting a brief interpretation because we don’t have enough confidence that the science would end up correct in the final product. Nearly every time some scientific finding is presented to the public, it happens along with some form of a generalization. If you’re familiar with the genre of peer reviewing, you’ll know that scientists typically disdain generalizations.

How is it that we can resist the digestion of our work for public consumption? When someone claims that one of us “doesn’t know how to communicate with the public,” I propose that this overgeneralized diagnosis can almost always be broken down into two distinct categories which might apply.

  1. We don’t want to discuss our science in broad terms for the public because we feel that we are unqualified for the task. While the popular image of the arrogant know-it-all scientist plays well, most of us are driven by the fact that we don’t understand enough about our fields of expertise. We are resistant to analogies or general statements of findings in lay terminology because it involves a generalization from our very specific findings that may be unwarranted. And, if it is warranted, then it falls outside our expertise to comment on such a broad topic. While our experiments were designed to advance knowledge on some general topic, we feel that it is not up to us to make the decision that our findings are informative on that general topic in a way to be digested outside the scientific community.
  2. We actually aren’t doing an experiment that has any general relevance to the public at large. We actually are working on minutiae that will not have any broad relationship to the scientific endeavor at large. We are having trouble making a generalization about its scientific importance because it lacks a broad scientific importance.

The prescription for diagnosis #1 is for us to become more arrogant and think that we are qualified to speak with the media about broader issues in science. For us to think that, as scientists sensu lato, we are able to speak broadly about scientific issues. Just as we teach about all kinds of scientific topics in the university classroom, we can interact with the media in the same way. And this is the kind of stuff that scientists who communicate with the public do all the time. They often talk about things outside the realm of their research training and expertise and get away with it. If we’re going to be doing science communication as practicing scientists, then we need to own the fact that we can talk about a whole bunch of scientific topics even though we’re not top experts in a subfield. For example, Richard Feynman once wrote a book chapter about ants. (I thought it was a horrible way to illustrate his main point about doing amateur science, actually.)

The prescription for diagnosis #2 is to be a better scientist. If you’re conducting an experiment that, at its roots, lacks a purpose that can be explained to a general audience, then what is the science really worth? I can explain that I work on really obscure stuff (the community ecology of litter-dwelling ants, how odors affect nest movements of ants, how it is that some colonies of ants control the production of different kinds of ants, and how much sunlight and leaf litter ants like, for starters). But I’m working on this obscure stuff to build to a generalized understanding of biodiversity, the role of predators in the evolution of defensive behavior, how ecology and evolution result in optimized allocation patterns, and responses to climate change. I am sometimes reluctant to claim that my results can be generalized to entire fields (I need to get more arrogant in that respect), but I recognize the fact that my work is designed to ask these broad questions. If you don’t have these broad questions in mind while running the experiment, I recommend a sabbatical and a visit to the drawing board. I don’t know how often this phenomenon happens, but I have met some scientists who, when asked for the broadest possible application of their work, can only talk about the effect on a subfield of a subfield that would only influence a few people. If a project, at its greatest success, can only influence a few other scientists in the whole world, then, well, you get the idea.

Yes, scientists are good communicators. And we know how to talk to the public. We just might not think we’re the right people for the job, or we might think that our science isn’t built for the task.

Negotiating for a faculty position: An anecdote, and what to do

This post is about a revoked job offer at a teaching institution that was in the news, and is also about how to negotiate for a job. I’ve written about negotiation priorities before, but this missive is about how to discuss those priorities with your negotiating partner.

Part A: That rescinded offer in the news

Last week, a story of outrage made the rounds. The capsule version is this: A philosopher is offered a job at a small teaching school. She tries to negotiate for the job. She then gets immediately punished for negotiating, by having the offer rescinded.

This story first broke on a philosophy blog, then into Inside Higher Ed, and some more mainstream media, if that’s what Jezebel is. There are a variety of other posts on the topic including this, and another by Cedar Reiner.

Some have expressed massive shock and appall. However, after reading the correspondence that caused the Dean to rescind the job offer, I’m not surprised at all. After initial conversations, the candidate wrote to the Dean:

As you know, I am very enthusiastic about the possibility of coming to Nazareth. Granting some of the following provisions would make my decision easier.

1) An increase of my starting salary to $65,000, which is more in line with what assistant professors in philosophy have been getting in the last few years.

2) An official semester of maternity leave.

3) A pre-tenure sabbatical at some point during the bottom half of my tenure clock.

4) No more than three new class preps per year for the first three years.

5) A start date of academic year 2015 so I can complete my postdoc.

I know that some of these might be easier to grant than others. Let me know what you think.

Here is what the Dean thought, in her words:

Thank you for your email. The search committee discussed your provisions. They were also reviewed by the Dean and the VPAA. It was determined that on the whole these provisions indicate an interest in teaching at a research university and not at a college, like ours, that is both teaching and student centered. Thus, the institution has decided to withdraw its offer of employment to you.

Thank you very much for your interest in Nazareth College. We wish you the best in finding a suitable position.

There has been a suggestion of a gendered aspect. That viewpoint is expressed well here, among other places. (There doesn’t seem to be a pay equity problem on this campus, by the way.) I wholly get the fact that aggressive negotiation has been seen as a positive trait for men and a negative trait for women. I think it is possible that gender played a role, but in my view, the explanation offered by the Dean is the most parsimonious one. (Now, my opinion will be dismissed by some because of my privilege as a tenured white dude. Oh well.) Given the information that we’ve been provided, and interpreted in light of my experiences at a variety of teaching campuses, I find the “fit” explanation credible, even if it’s not what I would have done.

A job offer is a job offer, and once an offer is made the employer should stand behind the offer. Then again, if some highly extraordinary events unfold before an agreement is reached, the institution can rescind the job offer. In this circumstance, is the candidate’s email highly extraordinary?

Did this attempt at “negotiation” communicate so many horrible things about the candidate that the institution should have pulled its offer? The Dean’s answer to that question was, obviously, “Yes.”

I would have answered “no.” Many others have done the yeoman’s blog work of explaining exactly how and why that was the wrong answer to the question. I’m more interested in attempting to crawl inside the minds of the Dean and the Department that withdrew the offer. What were they thinking?

The blog that first broke this story called these items “fairly standard ‘deal-sweeteners.’” I disagree. If I try to place myself in the shoes of the Dean and the Department, then this is how I think I might have read that request:

I am not sure if I really want this position. If you are willing to stretch your budget more than you have for any other job candidate in the history of the college, then I might decide to take the job, because accepting it is not an easy decision.

1) I realize that your initial salary offer was about what Assistant Professors make at your institution, but I want to earn 20% more, as much as your Associate Professors, because that’s what new faculty starting at research universities get.

2) I know that 6 months of parental leave is unofficial policy and standard practice, but I want it in writing.

3) I’d like you to hire adjuncts for an extra sabbatical before I come up for tenure. By then I’m sure I’ll need a break from teaching, even though everybody else waits until after tenure to take a sabbatical.

4) Before I take this special extra sabbatical, I want an easier teaching schedule than everybody else in my department.

5) I want to stay in my postdoc for an extra year, because I’d rather do more research somewhere else than teach for you. I realize that you advertised the position to fill teaching needs, but you can hire an adjunct.

While some of these requests are the kind that I’d expect to be fulfilled by a research institution, I’m hoping that you are able to treat me like a professor from a research institution. Now that you’ve offered me this teaching job, I want my teaching obligations to be as minimal as possible. Let me know what you think.

And the Dean did exactly that: she let her know what she thought. I’m not really joking: that’s really how I think it could be seen, inside the context of a teaching- and student-centered institution.

Here is a more unvarnished version of what I imagine the Dean was thinking:

Holy moly! Who do you think we are? Don’t you realize that we want to hire you to teach? I didn’t pull the salary out of thin air, and it was aligned with what other new Assistant Professors earn here. And if you want to teach here, why the heck do you want to stay in your postdoc which presumably pays less money? If you wanted to stay in your postdoc for 18 months earning a postdoc salary, instead of coming to teach for us at a faculty-level salary, then why would you even want this job at all? Also, didn’t you realize that we advertised for the position to start this year because we need someone to teach classes in September? If you have such crazy expectations now, then I can only imagine what a pain in the butt you might be for us after you get tenure. I think it’s best if we dodge this bullet and you can try to not teach at a different university. We’re looking for someone who’s excited about teaching our students, and not as excited about finding ways to avoid interacting with them.

The fact remains that the candidate is actually seeking a teaching-centered position. However, she definitely was requesting things that an informed candidate would only ask from a research institution. I don’t think that she necessarily erred in making oversized requests, but her oversized requests were for the wrong things. They are focused on research, and not on teaching. While it might be possible that all of those requests were designed to improve the quality of instruction and the opportunities to mentor students, it clearly didn’t read that way to the Dean. We know it didn’t read that way, because the Dean clearly wrote that she thought the candidate was focused too heavily away from teaching and students. I’m not sure if that’s true, but based on the email, that perspective makes a heckuvalotta sense to me.

I would be more inclined to chalk the unwise requests up to some very poor advice about how to negotiate. I would have given the candidate a call and tried to figure out her reasons, and if the answers were student-centered, then I’d continue the negotiation. But I can see how a reasonable Dean, Department, and Vice President of Academic Affairs could read that email and decide that the candidate was just too risky.

New tenure-track faculty hires often evolve into permanent commitments. You need to make the most of your pick. Hiring a dud is a huge loss, and it pays to be risk averse. If someone reveals that they might be a dud during the hiring process, the wise course of action is to pick someone who shows a lower probability of being a dud. However, once an offer is made, the interview is over.

But according to Nazareth College, this candidate showed her hand as a total dud, and a massive misfit for institutional priorities. Though I wouldn’t have done it, I have a hard time faulting them for pulling the offer. If they proceeded any further, they would have taken the chance that they’d wind up with an enthusiastic researcher who would have been avoiding students at every opportunity. Someone who might want to bail soon after starting. Or maybe someone who would get a better job while on the postdoc and not show up the next year. The department only has four tenure-track faculty, and would probably like to see as many courses taught by tenure-line faculty as possible.

Having worked in a few small ponds like Nazareth, I don’t see the outrageousness of these events. We really have no idea, though, because there is a lot of missing context. But we know that the Dean ran this set of pie-in-the-sky requests by the Department and her boss. They talked about it and made sure that they weren’t going to get into (legal) hot water and also made sure that they actually wanted to dump this candidate. It’s a good bet that the Department got this email and said, “Pull up, pull up! Abort!” They may have thought, “If we actually are lucky enough to fill another tenure-track line, we don’t want to waste it on someone who only wants to teach three preps before taking a pre-tenure sabbatical while we cover their courses.” I don’t know what they were thinking, of course, but this seems possible.

Karen Kelsky pointed out that offers are rescinded more often at “less prestigious institutions.” She’s definitely on to something. Less prestigious institutions have more weighty teaching loads and fewer resources for research (regardless of the cost of tuition). These are the kinds of institutions that are most likely to find faculty job candidates who are wholly unprepared for the realities of life on the job.

When an offer gets pulled, I imagine it’s because the institution sees that they’ve got a pezzonovante on their hands and they get out while they still can.

At teaching institutions, nobody wants a faculty member who shies away from the primary job responsibility: teaching.

In a research institution, how would the Dean and the Department feel if a job candidate asked the Dean for reduced research productivity expectations and a higher teaching load for the first few years? Wouldn’t that freak the Department out and show that they didn’t get a person passionate for research? Wouldn’t the Dean rethink that job offer? Why should it be any different for someone wanting to duck teaching at a teaching institution?

I don’t know what happened on the job interview, but that email from the candidate to the Dean is a huge red flag embroidered with script that reads: “I don’t want to teach” and “I expect you to give me resources just like a research university would.” Of course everybody benefits when new faculty members get reassigned time to stabilize. But these requests were not just over the top, they were in orbit.

If I were the Dean at a teaching campus, what kinds of things would I want to see from my humanities job candidates? How about a guarantee of the chance to teach a specialty course? Funds to attend special conferences and funds to hire students as research assistants? Someone wanting to start early so that they could begin curriculum development? Someone wanting a summer stipend to do research outside the academic year?

Here’s the other big problem I have with the narrative that has dogpaddled around this story. It’s claimed that the job offer was rescinded because she wanted to negotiate. But that’s not the case. The job candidate was not even negotiating.

Part B: What exactly is negotiation and how do you do it with a teaching institution?

A negotiation is a discussion of give and take. You do this for me, I do this for you. You give me the whip, and I’ll throw you the idol.

In the pulled offer at Nazareth College, the job candidate was attempting to “negotiate” like Satipo (the dude with the whip), but from the other side of the gap.

What the Dean received from the candidate wasn’t even a start to a negotiation. It was, “Here is everything I want from you, how much can you give to me?” That is not a negotiation. A negotiation says, “Here are some things I’m interested in from you. If you give me these things, this is what I have to offer.”

How should this candidate have started the negotiation? Well, actually, the email should have been a request to schedule a phone conversation. What should the content of that conversation have been? How could the candidate have broached the huge requests (pre-tenure sabbatical, starting in 18 months, very few preps, huge salary)? By acknowledging that, in return for these huge requests being granted, huge output would come back.

“Once I get a contract for my second book, could you give me a pre-tenure sabbatical to write this book?”

“I’m concerned I won’t be able to balance my schedule if I have too many preps early on. If you can keep my preps down to three per year, I’ll be more confident in my teaching quality and I should be able to continue writing manuscripts at the same time.”

“Right now, I am working on this exciting project during my postdoc, which is funded for another year. If it’s possible for me to arrive on campus after I finish my postdoc, this work will really help me create an innovative curriculum for [a course I will be teaching]. During this postdoc, I’d be glad to host some students from the college for internships and help them build career connections.” Of course, it’s very rare a teaching institution wants to wait a whole extra year. They want someone to teach, after all! It couldn’t hurt much to ask, if you phrase it like this, verbally.

“After running the numbers, I see that a salary of $65,000 is standard on the market for new faculty at sister institutions. But from what I’ve seen from the salary survey, this is well above the median salary for incoming faculty. If you can find the funds to bring me in at this salary, I’m okay if you trim back moving expenses. Being paid at current market rate in my field is important to me, and if you let me know what level of performance is tied to that level of compensation, I’ll deliver.”

By no means am I a negotiation pro. What I do know comes mostly from the classic book, “Getting to Yes.” The main point of this book is that “positional negotiation” is less likely to be successful. This approach involves opposite sides taking extreme positions and then finding a middle ground. Just like asking for a huge salary, and lots of reassigned time and easy teaching.

Getting to Yes explains how to do “principled negotiation.” In this case, you have a true negotiating partner, and you understand and respect one another’s interests. So, instead of haggling over salary like buying a used piece of furniture at a swap meet, you discuss the basis for the salary and what each of you will get out of it.

If you are asking for a reduced teaching load, then you explain what you will deliver with this reduced teaching load (higher quality teaching and more scholarship), and what the consequences will be if you don’t get it (potential struggle while teaching and fear that you won’t have time to do scholarship). And so on. The quotes I suggested above are what you’d expect to see in a principled negotiation. The book is a bit long but there are some critical ideas in there, and I’m really glad I read it before I negotiated my current position. When it was done, both the Dean and I thought we had won, and we reached a fair agreement.

If you are in the position of receiving an academic job offer, negotiating for the best starting position is critical. You don’t have to be afraid of having the offer withdrawn as long as you’re negotiating in good faith. That means you communicate an understanding of the constraints and interests of your negotiating partner. And it means being sure that when you ask for something, your reason is designed to fulfill the interests of your partner as much as yourself. So, asking for a bunch of different ways to get out of teaching responsibilities is a non-starter when your main job responsibility is teaching.

It’s not only acceptable to negotiate when you are starting an academic job, it’s expected. The worst lesson to take from this incident at Nazareth is that there is peril in negotiation. I suggest that the lesson is that you must negotiate. And, keep in mind that negotiation is a conversation and a partnership towards a common goal. Even when it comes to money, there is a common goal: You want to be paid enough that you’ll be happy and stay, and they want you to be paid enough that you’ll stay.

You won’t have anybody pull a job offer from you if you’re genuinely negotiating. It’s okay to ask for things that your negotiating partner can’t, or may not want to, deliver. However, what you ask for should reflect what you really truly want, and at the moment you’re asking, provide a clear rationale, so that you appear reasonable. If you’re interviewing for jobs, then I recommend picking up a copy of Getting to Yes.

I own my data, until I don’t.

Science is in the middle of a range war, or perhaps a skirmish.

Ten years ago, I saw a mighty good western called Open Range. Based on the ads, I thought it was just another Kevin Costner vehicle. But Duncan Shepherd, the notoriously stingy movie critic, gave it three stars. I not only went, but also talked my spouse into joining me. (Though she needs to take my word for it, because she doesn’t recall the event whatsoever.)

The central conflict in Open Range is between fatcat establishment cattle ranchers and a band of noble itinerant free grazers. The free grazers roam the countryside with their cows in tow, chewing up the prairie wherever they choose to meander. In the time the movie was set, the free grazers were approaching extirpation as the western US was becoming more and more subdivided into fenced parcels. (That’s why they filmed it in Alberta.) To learn more about this, you could swing by the Barbed Wire Museum.

The ranchers didn’t take kindly to the free grazers using their land. The free grazers thought, well, that free grazing was a well-established practice and that grass out in the open should be free.

If you’ve ever passed through the middle of the United States, you’d quickly realize that the free grazers lost the range wars.

On the prairie, what constitutes community property? If you’re on loosely regulated public land administered by the Bureau of Land Management, then you can use that land as you wish, but for certain uses (such as grazing), you need to lease it from the government. You can’t feed your cow for free, nowadays. That community property argument was settled long ago.

Now to the contemporary range wars in science: What constitutes community property in the scientific endeavor?

In recent years, technological tools have evolved such that scientists can readily share raw datasets with anybody who has an internet connection. There are some who argue that all raw data used to construct a scientific paper should become community property. Some have the extreme position that as soon as a datum is collected, regardless of the circumstances, it should become public knowledge as promptly as it is recorded. At the other extreme, some others think that data are the property of the scientists who created them, and that the publication of a scientific paper doesn’t necessarily require dissemination of raw data.

Like in most matters, the opinions of most scientists probably lie somewhere between the two poles.

The status quo, for the moment, is that most scientists do not openly disseminate their raw data. In my field, most new papers that I encounter are not accompanied with fully downloadable raw datasets. However, some funding agencies are requiring the availability of raw data. There are a few journals of which I am aware that require all authors to archive data upon publication, and there are many that support but do not require archiving.

Access to other people’s data, without the need to interact with the people who created them, is becoming more prevalent. As the situation evolves, folks on both sides are getting upset at the rate of change – either it’s too slow, too quick, or in the wrong direction.

Regardless of the trajectory of “open science,” the fact remains that, at the moment, we are conducting research in a culture of data ownership. With some notable exceptions, the default expectation is that when data are collected, the scientist is not necessarily obligated to make these data available to others.

Even after a paper is published, there is no broadly accepted community standard that the data that resulted in the paper become public information. On what grounds do I assert this? Well, last year I had three papers come out, all of which are in reputable journals (Biotropica, Naturwissenschaften, and Oikos, if you’re curious). In the process of publishing these papers, nobody ever even hinted that I could or should share the data that I used to write these papers. This is pretty good evidence that publishing data is not yet standard practice, though things are slowly moving in that direction. As evidence, I just got an email from Oikos as a recent author asking me to fill out a survey to let them know how I feel about data archiving policies for the journal.

As far as the world is concerned, I still own the data from those three papers published last year. If you ask me for the data, I’d be glad to share them with you after a bit of conversation, but for the moment, for most journals it seems to be my choice. I don’t think any of those three journals have a policy indicating that I need to share my dataset with the public. I imagine this could change in the near future.

I was chatting with a collaborator a couple weeks ago (working on “paper i”) and we were trying to decide where we should send the paper. We talked about PLOS ONE. I’ve sent one paper to this journal before, actually one of my best papers. Then I heard about the journal’s new policy requiring public archiving of the datasets behind every paper it publishes.

All of a sudden, I’m less excited about submitting to this journal. I’m not the only one to feel this way, you know.

Why am I sour on required data archiving? Well, for starters, it is more work for me. We did the field and lab work for this paper during 2007-2009. This is a side project for everybody involved and it’s taken a long time to get the activation energy to get this paper written, even if the results are super-cool.

Is it my fault that it’ll take more work to share the data? Sure, it’s my fault. I could have put more effort into data management from the outset. But I didn’t, because it would have been more effort, and it would have kept me from doing as much science as I have done. It comes with temporal overhead. Much of the data were generated by an undergraduate researcher, a solid scientist with decent data management practices. But I was working with multiple undergraduates in the field in that period of time, and we were getting a lot done. I have no doubts about the validity of the science we are writing up, but I am entirely unthrilled about cleaning up the dataset and adding the details to the metadata for the uninitiated. And our data are a combination of behavioral bioassays, GC-MS results from a collaborator, all kinds of ecological field measurements, weather over a period of months, and so on. To get these numbers into a downloadable and understandable condition would be, frankly, an annoying pain in the ass. Anybody working on these questions wouldn’t want the raw data anyway, and there’s no way these particular data would be useful in anybody’s meta-analysis. It’d be a huge waste of my time.
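
Just to make that overhead concrete, here is a minimal sketch of the kind of tidying and documentation that archiving demands. Everything in it is hypothetical — the column names, values, and files are invented for illustration, not drawn from my actual dataset:

```python
import pandas as pd

# Stand-ins for the raw records; in reality these would come from years-old
# lab spreadsheets. All names and numbers here are made up.
assays = pd.DataFrame({
    "colony_id": ["C01", "C02"],
    "date": ["2008-06-12", "2008-06-13"],
    "response": [2, 0],            # behavioral response score, 0-3 scale
})
weather = pd.DataFrame({
    "date": ["2008-06-12", "2008-06-13"],
    "rain_mm": [14.2, 0.0],
    "temp_c": [24.8, 26.1],
})

# Harmonize dates, then join each trial to the weather on the day it was run
assays["date"] = pd.to_datetime(assays["date"])
weather["date"] = pd.to_datetime(weather["date"])
tidy = assays.merge(weather, on="date", how="left")

# The data dictionary is the part outsiders actually need -- and the part
# that takes the time to write for a heterogeneous, years-old dataset.
data_dictionary = pd.DataFrame([
    {"column": "colony_id", "description": "Unique ant colony identifier",        "units": "none"},
    {"column": "response",  "description": "Behavioral response score",           "units": "ordinal 0-3"},
    {"column": "rain_mm",   "description": "Rainfall on trial date",              "units": "mm"},
    {"column": "temp_c",    "description": "Mean air temperature on trial date",  "units": "deg C"},
])

tidy.to_csv("tidy_bioassays.csv", index=False)
data_dictionary.to_csv("data_dictionary.csv", index=False)
```

The two-column toy example takes minutes; doing the same for bioassays, GC-MS runs, field measurements, and months of weather, all collected by different people years ago, is where the real cost sits.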

Considering the time it takes me to get papers written, I think it’s cute that some people promoting data archiving have suggested a 1-year embargo after publication. (I realize that this is a standard timeframe for GenBank embargoes.) The implication is that within that one year, I should be able to use that dataset for all it’s worth before I share it with others. We may very well want to use these data to build a new project, and if I do, then it probably would be at least a year before we head back to the rainforest again to get that project done. At least with the pace of work in my lab, an embargo for less than five years would be useless to me.

Sometimes, I have more than one paper in mind when I am running a particular experiment. More often, when writing a paper, I discover the need to write a different one involving the same dataset. (Shhh. Don’t tell Jeremy Fox that I do this.) I do research at a teaching institution, and things often happen at a slower pace than at the research institutions which are home to most “open science” advocates. Believe it or not, there are some key results from a 15-year-old dataset that I am planning to write up in the next few years, whenever I have the chance to take a sabbatical. This dataset has already been featured in some other papers.

One of the standard arguments for publishing raw datasets is that the lack of full data sharing slows down the progress of science. It is true that, in the short term, more and better papers might be published if all datasets were freely downloadable. However, in the long term, would everybody be generating as much data as they are now? Speaking only for myself, if I realized that publishing a paper would require the sharing of all of the raw data that went into that paper, then I would be reluctant to collect large and high-risk datasets, because I wouldn’t be sure to get as large a payoff from that dataset once the data are accessible.

Science is hard. Doing science inside a teaching institution is even harder. I am prone to isolation from the research community because of where I work. If I made my data available online, to be used without any communication with me, what would the effect be? I could either become more integrated with my peers, or more isolated from them. If I knew that making my data freely downloadable would increase interactions with others, I’d do it in a heartbeat. But when my papers get downloaded and cited, I’m usually oblivious to it until the citing paper comes out. I can only imagine that the same thing would happen with raw data, though the rates of download would be lower.

In the prevailing culture, sharing data, along with some other substantial contribution, is standard grounds for authorship. While most guidelines indicate that providing data to a collaborator is not supposed to be grounds for authorship, the current practice is that it is. One can argue that this isn’t fair or right, but that is what happens. Plenty of journals require specification of individual author contributions and require that all authors had a substantial role beyond data contribution. However, that does not prevent the people who provide data from becoming authors.

In the culture of data ownership, the people who want to write papers using data in the hands of other scientists need to come to an agreement to gain access to these data. That agreement usually involves authorship. Researchers who create interesting and useful data – and data that are difficult to collect – can use those data as a bargaining chip for authorship. This might not be proper or right, and this might not fit the guidelines that are published by journals, but this is actually what happens.

This system is the one that “open science” advocates want to change. There are some databases with massive amounts of ecological and genomic data that other people can use, and some people can go a long time without collecting their own data and just use the data of others. I’m fine with that. I’m also fine with not throwing my data into the mix.

My data are hard-won, and the manuscripts are harder-won. I want to be sure that I have the fullest opportunity to use my data before anybody else has the opportunity. In today’s marketplace of science, having a dataset cited in a publication isn’t much credit at all. Not in the eyes of search committees, or my Dean, or the bulk of the research community. The discussion about the publication of raw data often avoids tacit facts about authorship and the culture of data ownership.

To be able to collect data and do science, I need grant money.

To get grant money, I need to give the appearance of scientific productivity.

To show scientific productivity, I need to publish a bunch of papers.

To publish a bunch of papers, I need to leverage my expertise to build collaborations.

To leverage my expertise to build collaborations, I need to have something of quality to offer.

To have something of quality to offer, I need to control access to the data that I have collected. I don’t want that to stop after publication.

The above model of scientific productivity is part of the culture of data ownership, in which I have developed my career at a teaching institution. I’m used to working amicably and collaboratively, and the level of territoriality in my subfields is quite low. I’ve read the arguments, but I don’t see how providing my data with no strings attached would somehow build more collaborations for me, and I don’t see how it would give me any assistance in the currency that matters. I am sure that “open science” advocates are wholly convinced that putting my data online would increase, rather than constrict, opportunities for me. I am not convinced yet, though I’m open to being convinced. I think what will convince me is seeing a change in the prevailing culture.

There is one absurdity to these concerns of mine that I’m sure critics will have fun highlighting: I doubt many people would be downloading my data en masse. But it’s not that outlandish, either — people have written papers following up on my own work after communicating with me. I work at a field site where many other people work; a new paper comes out from this place every few days. I already pool data with others for collaborations. I’d like to think that people want to work with me because of what I can bring to the table other than my data, but I’m not keen on testing that working hypothesis.

Simply put, in today’s scientific rewards system, data are a currency. Advocates of sharing raw data may argue that public archiving is like an investment with this currency that will yield greater interest than a private investment. The factors that shape whether the yield is greater in a public or private investment of the currency of data are complicated. It would be overly simplistic to assert that I have nothing to lose and everything to gain by sharing my raw data without any strings attached.

While good things come to those who are generous, I also have relatively little to give, and I might not be doing myself or science a service if I go bankrupt. Anybody who has worked with me will report (I hope) that I am inclusive and giving with what I have to offer. I’ve often emailed datasets without people even asking for them, without any restrictions or provisions. I want my data to be used widely. But even more, I want to be involved when that happens.

Because I run a small operation in a teaching institution, my research program experiences a set of structural disadvantages compared to colleagues at an R1 institution. A requirement to share data imposes that disadvantage disproportionately on researchers like myself, and on others who lack the funding to rapidly capitalize on the creation of quality data.

To grow a scientific paper, many ingredients are required. As grass grows the cow, data grows a scientific paper.

In Open Range, the resource in dispute is not the grass, but the cows. The bad guy ranchers aren’t upset about losing the grass, they just don’t want these interlopers on their land. It’s a matter of control and territoriality. At the moment, the status quo is that we run our own labs, and the data growing in these labs are also our property.

When people don’t want to release their data, it’s not the data themselves they care about. They care about the papers that could result from these data. I don’t care whether people have the numbers I collect. What I care about is that these numbers are scientifically useful, and that I wish to get scientific credit for that usefulness. Once the data are public, there is scant credit for that work.

It takes plenty of time and effort to generate data. In my case, lots of sweat, and occasionally some venom and blood, is required to generate data. I also spend several weeks per year away from my family, which any parent can relate to. Many of the students who work with me have also made tremendous personal investments in the work. Generating data in my lab often comes at great personal expense. Right now, if we publicly archived the data that went into a new paper, we would not get appropriate credit in a currency of value in the academic marketplace.

When a pharmaceutical company develops a new drug, the structure of the drug is published. But the company has a twenty-year patent and five years of exclusivity. It’s widely claimed – and believed – that without the potential for recouping the costs of developing medicines, pharmaceutical companies wouldn’t jump through all the regulatory hoops to get new drugs on the market. The patent provides the incentive for drug production. Some organizations might make drugs out of the goodness of their hearts, but the free market is driven by dollars. An equivalent argument could be made for scientists wishing for a very long time window to reap the rewards of producing their own data.

In the United States, most meat that people consume doesn’t come from grass on the prairie, but from corn grown in an industrial agricultural setting. Likewise, most scientific papers that get published come from corn-fed data produced by a laboratory machine designed to crank out a high output of papers. Ranchers stay in business by producing a lot of corn, and maximizing the amount of cow tissue that can be grown with that corn. Scientists stay in business by cranking out lots of data and maximizing how many papers can be generated from those data.

Doing research in a small pond, my laboratory is ill equipped to compete with the massive corn-fed laboratories producing many heads of cattle. Last year was a good year for me, and I had three papers. That’s never going to be able to compete with labs at research institutions — including the ones advocating for strings-free access to everybody’s data.

The movement towards public data archiving is essentially pushing for the deprivatization of information. It’s the conversion of a private resource into a community resource. I’m not saying this is bad, but I am pointing out this is a big change. The change is biggest for small labs, in which each datum takes a relatively greater effort to produce, and even more effort to bring to publication.

So far, what I’ve written is predicated on the notion that researchers (or their employers) actually have ownership of the data that they create. So, who actually owns data? The answer to that question isn’t simple. It depends on who collected the data, who funded their collection, and where they were published.

If I collect data on my own dime, then I own these data. If my data were collected under the funding support of an agency (or a branch of an agency) that doesn’t require the public sharing of the raw data, then I still own these data. If my data are published in a journal that doesn’t require the publication of raw data, I still own these data.

It’s fully within the charge of NIH, NSF, DOE, USDA, EPA and every other agency to require the open sharing of data collected under their support. However, federal funding doesn’t automatically entail public ownership (see this comment on Erin McKiernan’s blog for more on that). If my funding agency, or some federal regulation, requires that my raw data be available for free download, then I no longer own these data. The same is true if a journal has a similar requirement. Also, if I choose to give away my data, then I no longer own them.

So, who is in a position to tell me when I need to make my data public? My program officer, or my editor.

If you wish, you can make it your business by lobbying the editors of journals to change their practices, and you can lobby your lawmakers and federal agencies for them to require and enforce the publication of raw datasets.

I think it’s great when people choose to share data. I won’t argue with the community-level benefits, though the magnitude of those benefits varies with the type of data. In my particular situation, when I weigh the scant benefit to the community against the greater cost (and potential losses) to my research program, the decision to stay the course is mighty sensible.

There are some well-reasoned folks who want to increase the publication of raw datasets and who understand my concerns. If you don’t think you understand my concerns, you really need to read this paper. The authors make four recommendations for the scientific community at large, all of which I love:

  1. Facilitate more flexible embargoes on archived data
  2. Encourage communication between data generators and re-users
  3. Disclose data re-use ethics
  4. Encourage increased recognition of publicly archived data.

(It’s funny, in this paper they refer to the publication of raw data as “PDA” (public data archiving), but at least here in the States, that acronym means something else.)

And they’re right: those things will need to happen before I consider publishing raw data voluntarily. Those are the exact items that I brought up as my own concerns in this post. The embargo period would need to be far longer, I’d want some reassurance that the people using my data will actually contact me about it, and if the data get re-used, I’d want a genuine opportunity for collaboration as long as my data are a big enough piece of the project. And, of course, if I don’t collaborate, then the form of credit in the scientific community will need to be greater than what happens now, which is merely getting cited.

The Open Data Institute says that “If you are publishing open data, you are usually doing so because you want people to reuse it.” And I’d love for that to happen. But I wouldn’t want it to happen without me, because in my particular niche in the research community, the chance to work with other scientists is particularly valuable. I’d prefer that my data be reused less often rather than more often, as long as that restriction gave me more chances to work directly with others.

Scientists at teaching institutions have a hard time earning respect as researchers (see this post and read the comments for more on that topic). By sharing my data, I realize that I can engender more respect. But I also open myself up to being used. When my data are important to others, my colleagues contact me. If anybody feels that contacting me isn’t necessary, then apparently my data aren’t that necessary either.

Is public data archiving here to stay, or is it a passing fad? That is not entirely clear.

There is a vocal minority that has done a lot to promote the free flow of raw data, but most practicing scientists are not on board this train. I would guess that the movement will grow into an establishment practice, but science is an odd mix of the revolutionary and the conservative. Since public data archiving takes extra time and effort, and publishing already takes a lot of work, the only way it will catch on is if it is required. If a particular journal or agency wants me to share my data, then I will do so. But I’m not yet convinced that it is in my interest.

I hope that, in the future, I’ll be able to write a post in which I’m explaining why it’s in my interest to publish my raw data.

The day may come when I provide all of my data for free downloads, but that day is not today.

I am not picking up a gun in this range war. I’ll just keep grazing my little herd of cows in a large fragment of rainforest in Sarapiquí, Costa Rica until this war gets settled. In the meantime, if you have a project in mind involving some work I’ve done, please drop me a line. I’m always looking for engaged collaborators.

Avoiding bad teaching evaluations: Tricks of the trade

Standard

Student evaluations are the main method used to evaluate our teaching. These evaluations are, at best, an imperfect measuring tool.

Lots of irrelevant stuff affects evaluation scores. If you’re attractive or well dressed, this helps your scores. If you are a younger woman, you have to reckon with a distinct set of challenges and biases. If the weather is better out, you might get better evaluations, too. So, don’t feel bad about doing things to help your scores, even if they aren’t connected to teaching quality.

My university aptly calls these forms by their acronym, “PTE”: Perceived Teaching Effectiveness. Note the word: “perceived.” Actual effectiveness is moot.

People are aware whether or not they learned. However, superficial things can really affect perception. What our students think about the classroom experience is important. But evaluation forms are not really measuring teaching effectiveness. These evaluations measure student satisfaction more than learning outcomes. Since we are being held accountable for classroom performance based on student satisfaction, it is in our interest to pay attention to the things that can improve satisfaction.

Here are some ways I’ve approached evaluations with an effort to avoid getting bad ones.

  • I try to teach effectively. The best foundation of perception is reality. I put some trust in my students’ ability to assess performance. If I’m doing a good job, my students should know it.
  • I work hard to demonstrate that I respect my students. It’s easy to give in to the conceit that my time is more valuable than the time of my students. When I see myself going down that dark pathway, I try to follow the golden rule, and treat the time of my students with as much concern as I would like my own time to be treated. For example, I make sure class always ends on time.
  • I emphasize fairness. On the first day of class, I let students know that life isn’t fair, but that I try hard to run my class as fairly as possible. Students often volunteer gripes about their other classes, and unfairness is always the common thread in these discussions. Even if students perform poorly in a class, if they think that it was conducted fairly, then they are usually still satisfied.
  • I recall Hanlon’s Razor: “Never ascribe to malice that which is adequately explained by incompetence.” None of my students are out to get me. If anything, they’re out for themselves. Sometimes, I’m not clear enough about expectations. When a student needs something, I approach the interaction with the default assumption that it’s my fault. And if it’s not my fault, it’s not an intentional flaw, so I can’t give students a hard time about the shortcoming.
  • I don’t engage in debates about graded assignments. I tell my students that if there is a very simple mathematical error or something I missed, they can bring it to me immediately after class. Any other errors need to be addressed with a written request by the start of the next class meeting. I’ve only gotten a few of these, and in all cases, the students were correct.
  • When a student is persistent about points, I avoid the argument whenever possible. I don’t concede unearned credit, but I don’t dismiss the concern either. Nearly all requests for grade changes are so tiny that they have a negligible effect on the final grade. I show them, numberwise, that it doesn’t make a difference. I tell them that if they are right on the borderline at the end of the semester, I’ll make a note of it and we can talk about it at that time. This prevents the student from waging a futile argument, and keeps me out of the business of catering to minutiae.
  • I run a tight ship. I can get annoyed by inappropriate behavior, but the students are usually even more annoyed. When someone is facebooking in the front row or monopolizes discussion, the rest of the class is usually super-pleased that I shut it down, as long as I do it with respect. Classroom management is a fine art that we are rarely taught. (I’ve learned some from education faculty and K-12 teachers.) I think establishing the classroom environment in the first few days is critical. I don’t enforce rules so much as develop accepted norms of behavior collaboratively on the first day of class. When things happen outside the norm, I address them promptly and, I hope, gently. When anybody (including myself) is found to be outside the norm, we adjust quickly because we agreed to the guidelines on the first day of class. I’ve botched this and have been seen as too severe on occasion, but I’d prefer to err on that side than to have an overly permissive environment in which students don’t give one another the respect of their attention.
  • A classic strategy is to start out the term with extreme rigor, and ease up as time goes on. I don’t do this, at least not intentionally, but I don’t think it’s a bad idea as long as you finish with high expectations. In any circumstance, I imagine it would be a disaster to increase the perceived level of difficulty during the term.
  • I use midterm evaluations, using the university form partway through the semester, for my own use. This gives me early evidence about perceptions with the opportunity to change course, if necessary. I am open and transparent about changes I make.
  • I often use a supplemental evaluation form at the end of the term. There are two competing functions of the evaluation. The first is to give you feedback for course improvement, and the second is to assess performance. What the students might think is constructive feedback might be seen as a negative critique by those not in the classroom. It’s in our interest to separate those two functions onto separate pieces of paper. Before we went digital, I used to hold up the university form and say: “This form [holding up the scantron] is being used by the school as a referendum on my continued employment. I won’t be able to access these forms until after the next semester already starts, so they won’t help me out that much.” Then I held up another piece of paper [an evaluation I wrote with specific questions about the course] and said, “This one is constructive feedback about what you liked and didn’t like about the course. If you have criticisms of the course that you want me to see, but don’t think that my bosses need to see them, then this is the place to do it. Note that this form has specific questions about our readings, homework, tests and lessons. I’m just collecting these for myself, and I’d prefer if you don’t put your names on them.” I find that students are far more likely to evaluate my teaching in broad strokes in the university form when I use this approach, and there are fewer little nitpicky negative comments.
  • I try to avoid doing evaluations when students are more anxious about their grade, like on the cusp of an exam or when I return graded assignments. When I hand out the very helpful final exam review sheet, which causes relief, then I might do evaluations.
  • I don’t bring in special treats on the day I administer evaluations. At least with my style, my students would find it cloying, and they wouldn’t appreciate a cheap bribe attempt. Once in a long while, I may bring in donuts or something else like that, but never on evaluation day.
  • I’ve had some sections with chronic attendance problems, in which some students would skip or show up late. On those occasions, I made a point to administer evaluations at the start of class on a day that had low attendance. I imagined that the students who weren’t bothering to attend class were less likely to give a stellar rating. Moreover, the absent students weren’t as well qualified to evaluate my performance as those sitting in class. (Of course, those attendance problems indicated that I had a bigger problem on my hands.)
  • Being likable and approachable. Among all the things that influence evaluations, I think this is the biggest one. There are many ways to be liked by your students, as a human being, but I think being liked is a prerequisite for really good scores. Especially with our students who face a lot of structural disadvantages, approachability is important for the ability to do the job well. I’m not successful enough on this front. It hasn’t tanked me in evaluations, because by the end of the semester the students are comfortable with me, but that comfort doesn’t emerge as quickly as I’d like. This is the area I need to work on the most. If I am to do all the professorly things for the students with the greatest needs, they need to be able to talk to me.

Of course, some of these tips don’t apply if the evaluations are being administered online. This is a growing trend, and my university made the switch a couple years ago. (Thoughts and experiences with paper vs. online evaluations are in the ever-growing queue for future posts.)

Are there different or additional approaches that you use for the non-teaching-performance related aspects of student evaluations?

[update: be sure to read this comment. I think everything in this post is relevant to professors of both genders, but there are additional issues involving student biases that female professors need to deal with that I haven’t addressed. Professors need to be approachable to do their jobs. If students can’t talk to us, then that puts a low ceiling on what we can help our students achieve. However, what it means to be professional and “approachable” for a younger female professor might look really different than for an older guy. As I don’t have experience being a younger female professor, I’m not as well qualified to address this as some others. Another good reason to cruise over to Tenure, She Wrote.]

Putting faces to names: meeting fellow academics

Standard

I just got back from a tour of North America, including a stop to visit my family in Nova Scotia and a conference in California. It was a great trip and a reminder of how lucky I am these days. Not only did my daughter and I get spoiled by my parents, but I also had the opportunity to meet and interact with many of the leaders and up-and-coming researchers of my field*. As we recover from jet lag and get back to the routine, I have a chance to reflect on my travels.

One of the benefits of traveling for conferences is, of course, the chance to meet people. Seeing talks on the forefront of everyone’s research is definitely good for learning and stimulating new ideas, but I often find the most valuable parts of any conference are the casual conversations you end up having. It can also be pretty interesting to put faces (and characters) to the names you know from the literature.

Although it’s not unique to academia, you often ‘know’ people before meeting them through their work. I find that I don’t often have a particular preconceived picture of the authors I read, but meeting someone in person or seeing them talk does change the way I interact with the literature to some extent. For one thing, the more people I meet, the more human the literature feels. I can put faces to author names and pictures to their study systems (if I’ve seen a talk). As a student, in some ways the primary literature felt so, well, scientific and perhaps a bit cold. These days, that is less of an issue and science feels much more like an endeavour that I belong to. However, as you become more a part of the community doing science, there is the potential for things to swing the other way. I’m probably more likely to notice a publication on a list if I’ve met the author. It is always nice to see people I went to grad school with pop up in journal alerts, for example. And although I try not to be biased by my impressions of a person when I read a paper, I’m only human after all. I wouldn’t say it stops me from appreciating good work (I hope!), but personal interactions do colour whether I would want to invite a person for a talk, for example. And interactions at conferences and the like definitely influence who I want to work with. Of course, I’m more likely to collaborate with people I hit it off with than those I don’t. I wonder if that is also true for citations and the like. Are we more likely to read and cite people we’ve met? How about those we like? I’m not sure I want to know the answers to those questions, and I certainly try not to let biases like that enter my work, but science is a human activity after all.

I think it is always interesting to meet/see people in person who you know from other means. In academics, that used to be meeting or seeing someone give a talk at a conference whose papers you’ve read. Maybe their papers are seminal to yours, and especially as a grad student, seeing people behind the work can be very eye opening. I once was at a famous ecologist’s talk at a big conference. The room was packed but it was one of the poorer talks I’d ever seen. The slides were directly transferred from papers and impossible to read. Pointing from the lectern to a screen meters away also did not help (‘as you can clearly see…’ was a memorable quote). A friend and I sat at the back trying to figure out the main tenets of the classic theory from this person because it was the keystone of the talk but never directly described (we were of course all expected to be familiar with it, I suppose). The experience taught me that great thinkers don’t necessarily make great presenters. But I’ve also seen wonderful talks by some big names too.

Over the last few weeks, I’ve gotten to see old friends and put faces to more names I’m familiar with. I also got a chance to hear from and meet people I might have never have known otherwise. And seeing what the grad students are up to is always interesting. Communicating science and hearing about people’s studies is part of what I find fun in this job.

Interestingly, this blog and twitter have also opened up my scientific community beyond the borders of my research. So whereas before, putting faces to names was all about meeting people I had read in the literature, this time it included a chance to meet up with Small Pond’s very own leader, Terry. We were lucky to overlap in the LA area for a day and were able to see each other face to face. I have to admit, it felt a bit like an academic version of on-line dating or something. I was nervous to meet. What if it was awkward? What if we didn’t like each other? I’d been having fun posting on this blog, but if our in-person interaction didn’t work, I wasn’t sure what that would mean. I’m happy to report that we had a good time and a fruitful discussion about blogging, twitter and this new-to-me on-line community. I hope it is only the first of many meetings with those that I am getting to know through their blogs and tweets. I’m sure it will mean that I will also pop in on talks far removed from my research if we happen to be at the same conference in the future. I think that is a good thing.

*being a bit of a generalist, the conference was in one of my fields of interest, plant volatiles.

Novels, science, and novel science

Standard

I was chatting with a friend in a monthly book group. A rare event happened this month: everybody in the group really liked the book. It turns out that most of the books they read are not well-liked by the group. How does that happen? Well, this is a discriminating group, and there are a lot of books on the market; many books aren’t that good.

We speculated about why so many non-good books are sold by publishers. The answer is embedded within the question: those books sell.

Let me overgeneralize and claim that there are two kinds of novels: First, there are those that were brought into the world because the author had a creative vision and really wanted to write the book. Second, there are novels that are written with the purpose of selling and making money. Of course, some visionary works of art also sell well, but many bestselling books aren’t great works of art. (No Venn diagram should be required.) Some amazingly great novels don’t sell well, and weren’t created to be sold easily in the marketplace.

Most novels were never intended for greatness. The authors and the publishers know this, but they have designed them to be enjoyed and to have the potential to sell well. When someone is shopping for a certain kind of book, they’ll be able to buy that kind of book. Need a zombie farce? A spy thriller? A late-20s light-hearted romance? I have no problem with people writing and selling books that aren’t great. Books can be a commodity to be manufactured and sold, just like sandwiches or clothing. A book that is designed to sell fits easily into a predetermined category, and then does its best to conform to the expectations of the category, to deliver to the consumer what was expected.

I think a similar phenomenon happens when we do experiments and write scientific papers.

First, some research happens because the investigators are passionately interested in the science and have a deeply pressing creative urge to solve problems and learn new things.

On the other hand, some research is designed to be sold on the scientific marketplace.

To advance in our careers, we need to sell our science well. The best way to do this, arguably, is not to aspire to do great science. We can sell science by taking the well-trod path on a theoretical bandwagon, instead of blazing our own paths.

If you want a guarantee that your science will sell well, you need to build your research around questions and theories that are hot at a given moment. If you do a good set of experiments on a trendy topic, then you should be able to position your paper well in a well-regarded journal. If you do this a dozen times, then your scientific career is well on its way.

On the other hand, you could choose a topic that you are passionately interested in. You might think that this is an important set of questions that has the potential to be groundbreaking, but you don’t know if other people will feel the same way. You might be choosing to produce research that doesn’t test a theory-of-the-moment, but you think will be of long-term use to researchers in the field for many years to come. However, these kinds of papers might not sell well to high-profile journals.

Just like a novelist attempting to write a great novel instead of one that will sell well, if you are truly attempting to do great science, there is no guarantee that your science will sell. Just like there are all kinds of would-be-great novelists, there are some would-be-great scientists who are not pursuing established theories but are going off in more unexplored directions.

Of course, some science created for the marketplace is great science, too. But the secrets to creating science that sells are very different from the secrets to doing great science.

After all, most papers in Science and Nature are easily forgettable, just like the paperbacks for sale at your local chain bookstore.

Update: For the record, y’all, I’m not claiming that I am above doing science to be sold. That’s mostly what I do. I’m just owning that fact. There’s more on this in the comments.

Faith, knowledge, respect and science education

Standard

People sometimes make decisions and solve problems without using reason. It’s part of our nature. People seek understanding through a variety of modalities. It’s normal.

I don’t use reason and science to deal with everything I encounter in the world, but I rely heavily on evidence. Faith remains perplexing to me, and not for the lack of education about a variety of religious traditions. Faith is the choice to believe that something is true without evidence. I won’t choose to use faith about anything of real consequence. I am not a religious person, and I choose against faith.

I am aware that my approach to understanding remains a minority view. Remembering this fact is an important part of my job, if I am to be an effective science educator.

Last year, the blog Sci-Ed (I’m a fan of the site) ran a piece by Adam Blankenbicker arguing that we should not “believe” in science, because belief requires faith, whereas knowledge is gained through evidence and investigation. With respect to the facts and the concepts, I agree with Mr. Blankenbicker wholeheartedly.

However, I never would attempt to sell his concept, as written, on a blog devoted to science education. Science is about evidence, but just because science educators put an emphasis on evidence does not mean that we need to go out of our way to insult belief.

The first concern about this post was expressed by Holly Dunsworth, who wrote that an interview with her for that piece was taken out of context.

In contemporary culture, the prevailing view is that faith is a virtue rather than a vice. On the other hand, many scientists have gone to great trouble to point out that faith more often leads to bad behavior. But, as a science educator, that’s never an argument I want to actively seek out. That conversation will not be resolved anytime soon, and if you bring it to the forefront of science education, the conversation will promptly stall.

One cannot win the argument that faith is a vice, if the definition of winning includes earning respect from people of all backgrounds. In my book, science education wins when everybody learns and loves evidence-based science, and that includes people of faith.

Some science educators, such as Mr. Blankenbicker, attempt to convince others that the use of faith is a vice. I may agree with him, but delivering that argument would hobble my own efforts as a science educator. Once a person who has strong religious faith sees the “faith = bad” idea coming from a science educator, the analytical part of the brain turns off.

Too much science education involves preaching to the converted, in which people who are already interested in science learn even more about science. A different approach is required when informal education efforts target an audience that arrives with both scientific ignorance and suspicion of the motives of the science educator. With some topics that are (allegedly) connected to religious doctrine, such as the origin of life on Earth and the diversification of biodiversity, lessons involving facts, knowledge, and evidence won’t be accepted if the same lessons simultaneously attack faith.

To bring new people over to science, we can’t start by insulting them. No matter how many fan emails Dawkins publishes, this basic fact remains: whenever a science educator argues that religious faith is a delusion, the receptivity of the target audience shrivels.

To put it more simply, when someone feels that an educator just insulted their beliefs, they’re not going to consider the content of that educator’s science lesson. Ever since Sci-Ed published a piece insulting the use of faith, I imagine that religious readers of the site, if any remain, will be less receptive to the science content within. I find it dismaying that some science educators have written off the majority of the US population because they are religious. That religious population is the one that informal science educators need to reach the most, if we are to reverse the nation’s decline in science education.

When people don’t trust science educators for information, they’re not necessarily leaning heavily on Descartes either. Lots of people simply make decisions without any useful evidence. Most people who reject facts generated by science don’t necessarily see their views as a product of “faith” or “belief.” Some people use faith about empirical matters in which it is often useless, when knowledge would serve them better. But most people who use faith for spiritual matters don’t have the theological or philosophical training to understand which kinds of decisions are better solved with knowledge instead of faith.

Here is a small story, to illustrate how people use faith when knowledge and reason is required. When my son was in kindergarten, he was having a friend over, and they were playing with some toys. The friend was struggling mightily to join together two pieces in a puzzle, even though these pieces weren’t designed to connect to one another. Literally, one piece had a square peg and the other had a round hole. When the friend was told that the pieces would not fit together, the child replied, “They will fit. I have faith that they’ll fit.” Then he continued to twist and push, but the pieces never joined.

If you know typical 5-year-olds, that conversation is perfectly normal except for the fact that the child specifically explained that he made his decision based on faith. This child learned, at home, to use faith to solve an everyday problem to which knowledge was suited. It so happens that one of his parents was being trained as an evangelical minister. I have no idea if the parents would have been proud of the child’s faith in this circumstance. I don’t know how the parents would have handled the situation if they were present. I’m sure that he eventually figured out that spatial problems using puzzles are solved using reason, and not with faith.

When it comes to more complicated problems that take a little more than round holes and square pegs, I don’t know if he’ll learn to drop faith and pick up knowledge. Will he use the same reasoning as biologists to measure natural selection and reconstruct evolutionary histories? Will he use the same logic and evidence that geologists and physicists use when seeking to understand the age of the Earth? Many adult Americans inappropriately apply faith instead of reason to these topics. Or, they use poor quality reasoning from lines of inquiry that originate from faith-based assumptions.

To get to the factually correct answers, faith must be set aside. Effective science education doesn’t require that the entire audience reject the use of faith for everything. It just requires that the audience uses reason when it comes to matters of science. Emphasizing that knowledge is useful and appropriate is a positive, but emphasizing that faith is useless and inappropriate is a negative. People rarely learn, or adopt constructive approaches, by focusing on the negative.

As far as I’m concerned, as a science educator, it’s beyond my job description to judge other people if they use faith about matters that are not informed by science. Moreover, if I do judge other people because they use faith, then I’ve just made my job impossible because I have cut myself off from my target audience. Some science educators don’t worry so much about teaching science content, but instead primarily argue that it’s stupid to be religious. This approach is not going to solve the science education crisis in the United States.

I want everybody to use the knowledge gained from science to make factual decisions about the natural world. If I can demonstrate that knowledge provides answers, then others will be able to conclude that faith is not suited to scientific matters. There are a small number of people who insist on using faith to directly controvert factual evidence. These people have no interest in knowledge, and these people are lost to science education efforts.

If science educators focus heavily on the small minority of the uber-faithful and anti-factual, we alienate nearly everybody else: the people who use faith at some times in their lives but are open to knowledge. Effective outreach begins with respecting the notion that some people use faith and religion in some aspects of their lives. Any science educator who can’t respect the fact that some of the audience is religious and uses faith at times is in the wrong line of business.

Science and religion may or may not be compatible. But much of the country is religious, and it’s in all of our interests for this majority to use reason to understand and accept facts that have been established through science. It’s the job of the science educator to convince the faithful that science requires reason and knowledge. You can’t do it successfully if you start by insulting the faithful for their faith.

Why I write with my own name

Standard

This post was written in concert with four others on the same topic, which can be found at this link on Hope Jahren’s site.

When you click on “about,” you see my unveiled face and my real name. Some of my credibility – and the lack thereof – comes from who I am and what I have done. It’s self-evident that the identity of the messenger affects how the message is received.

It is my hope that my identity gives more credence to my words. If I talk the talk, then I’d like to show that I walk the walk. If I write about research productivity, then I need to show that I actually, you know, publish. If I write about mentorship, then a cranky person can track down my students. Of course, any writer should be judged by one’s words and not by one’s credentials. So, the credence that I might get from my identity would only be temporarily bought, from the population that is unfamiliar with the mores of the pseudonymous science “blogosphere.” That turns out to be most people.

Before I started blogging, I did a little bit of amateur sociological fieldwork. I learned that most people don’t read blogs on a regular basis. I learned that a visit to a blog can be like arriving at an intimate party where you don’t know anybody. In contrast, I want my blog to be approachable to everybody. I want to be the guy who walks over to the front door, says “Hi, I’m Terry. Come on in. Can I get you something to drink? Let me introduce you to these folks.” I want every single blog post to be able to stand on its own, and to not make any references to other people or other blogs that aren’t fully understandable to a novice.

And I want people to know who is addressing them. It’s more approachable to guests who have just stepped through the front door. I’ve written more here about my approach to running my site so that it is transparent, professional, and inclusive. I’m not claiming that my approach is better than others, but I try to be different in a way that, I hope, broadens the audience.

Compared to most other bloggers, it’s easy for me to be public: I represent the trifecta of privilege as a tenured white man. And I’m straight. I don’t have to worry about the job market anymore, and I won’t be attacked because of my gender or ethnicity, like some of my colleagues.

It’s my duty to use this relative comfort to agitate for change. It’s the best and most important part of my job.

I am the great grandchild of wops and micks who immigrated into a low-income ethnic enclave of New York City. I fight a similar battle as my great grandparents, not for myself but on behalf of my students. My lab is mostly composed of students from traditionally underrepresented groups, from low-income backgrounds, who are often the first in their families to attend college. Every day, I work to ameliorate the mountain of prejudice and disadvantage facing my students.

I can stick my neck out on occasion. I can press for student rights, call out bias, and encourage practices that make sure that the future generation of scientists looks like the American populace. My privilege doesn’t come without minor challenges. I need to be clear about my awareness of power differentials and where privilege lies. While I have been working very hard to declare myself as an ally and advocate, I’ve heard far too often that I’m not the right person to advocate for my students. But I won’t shut up, and it’s a challenge that I’m up to, because these things matter.

It’s rare that people accuse me of being out of touch because of my tenured-white-dudeness, but it happens. The last time I touched on the topic of pseudonymity, I got burned. A formerly-pseudonymous colleague posted my name and picture, right next to a picture of herself with a black eye from a vicious assault, suggesting that attitudes like mine were partly to blame. One commenter remarked that I am a danger to children. My crime was ignorance of the fact that some people have good reasons to use pseudonyms. Like I didn’t know that or something. I was also guilty of not doing a literature search on the history of writing about pseudonymity in the “blogosphere.” You know you’ve been shamed when the author has to write a caveat that you’re not actually being shamed.

I’m okay with the occasional potshot because risks are necessary to make change. The real risk is that I am a highly flawed model for the change I wish to see. I write about being an effective professor, but I was denied tenure. I push for more and better mentorship of minority students and women, but I’m a white guy. I write a regular set of posts on efficient teaching but I’m not winning any teaching awards. I write about time management and how to do research with a heavy teaching load, but lately I’ve been in the classroom much less than my departmentmates.

I’d like to help change the environment so that more people find it possible and worthwhile to write with their own names. For some, that environment already, tenuously, exists. This post by tressiemc about her choice to use her own identity is powerful and inspirational. I applaud her courage, and I believe we all stand to gain from it.

Based on the volume of what I’ve written, there is no shortage of people who consider me to be a rube, buffoon, blowhard, or a narcissist. That’s a chance I’ve taken. But these challenges and worries are infinitesimal compared to the truckloads of bunk that my students, and many of my junior colleagues, have to face every day. Because I am capable of using my own name while writing on their behalf, I am.

Why students don’t raise hands in my classroom

Standard

People learn when they are engaged. So, then, what is engagement? Don’t hold me to this definition, but I think it’s when students are actively thinking about a topic. Engagement is not just paying attention. It happens when concepts are evaluated, synthesized, compared, and all that. Engagement is when the mind is actively churning on the topic at hand.

Effective teaching happens when we do things that promote engagement, and ineffective teaching is when we do things that allow students to disengage.

Whenever I’ve had any kind of professional development about engagement, one of the first topics that gets mentioned is raising hands in class.  And the message is always the same:

No raise hands good. Raise hands bad.

And I agree with this.

It’s what K-12 teachers are taught when they’re being trained as teachers, and the concept is just as relevant for university faculty as well.

When we ask the class a question, we shouldn’t do it in a way that requires only a small number of students to volunteer an answer. That gives everyone else an opportunity to put their brains into cruise control. Some students, regardless of engagement, will never raise their hands. Over time, they’ll know that they aren’t required to engage, and they might not.

Good teaching keeps everybody on their toes and requires everyone to think. Calling out questions and asking people to raise their hands with the answer is the opposite of requiring everybody to think.

I want my students to emerge from the classroom thinking that they’ve had an exhausting mental workout. A gym for the brain. Zumba instructors don’t call on only certain members of the class to participate. Likewise, in my classroom, everybody has to dance.

There is a variety of ways to ask questions and make sure that everybody is engaged. One great way is a think-pair-share. I know some people who use a set of index cards and draw student names randomly for every question. If I don’t want to make a big deal about a question, sometimes I just call on a student arbitrarily. Sometimes I make a point of calling on a student who doesn’t appear to be engaged, though if this happens too often then some students might (correctly) think that they’re being singled out unfairly.
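
For anyone who likes the index-card approach but keeps a laptop at the lectern, here is a minimal sketch of the same idea in Python. It is purely illustrative — the roster is hypothetical — and it simply shuffles the class list and cycles through it, so nobody can be called twice before everyone has been called once:

```python
import random

# Hypothetical roster; in practice this would come from the course list
roster = ["Ana", "Bo", "Carmen", "Dev", "Elena", "Farid"]

def cold_call_order(names):
    """Yield names in a random order, reshuffling once everyone has been called."""
    while True:
        shuffled = names[:]          # copy so the original roster stays intact
        random.shuffle(shuffled)
        for name in shuffled:
            yield name

caller = cold_call_order(roster)

# Each time a question comes up in class, draw the next name from the cycle
for question in range(8):
    print(f"Question {question + 1}: {next(caller)}")
```

The deck of index cards does the same thing physically; the only difference here is that the reshuffle happens automatically once the deck runs out.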

And, of course, the entire rationale behind clickers is that they prevent the disengagement that happens when only a handful of students are asked to raise their hands. You can engage students just as well without clickers; you just don't get the data in a digital format. In a very large lecture, clickers could be an effective strategy when group work is difficult to manage.

Another reason not to ask students to raise hands is that there is a clear gender bias at work. Men are far more likely to raise their hands, and with many instructors, men are more likely to be called on. So, by asking students to raise hands, men are more likely to be engaged by the instructor than women. This, obviously, is no good.

Do you have other ways of asking questions to the class that keep everyone engaged?

The rejection that wasn’t

Standard

I remember when I got the reviews back from the first big paper that I submitted. I was mad to have to deal with a rejection after such petty reviews.

Then I showed the editor’s letter to my advisor. He said, “Congratulations!” It turns out it was not a rejection, but a minor revision. Who would have thought that a request for a minor revision would have had the word “reject” in the decision letter?

I think editors are clearer about their decisions nowadays. That incident was a while ago. That was an actual letter, which arrived via postal air mail from another continent.

More recently, in 2007, I got another rejection I found annoying. I inadvertently unburied the decision letter last week, when I was forced to clean up my lab before the holidays (because work crews need all surfaces clear for work being done in the building). Here’s what the letter said:

Enclosed is your manuscript entitled “Moderately obscure stuff about ants” and the reviews. Based upon these reviews, in its present form the manuscript is not accepted for publication in the Journal of Moderately Obscure Stuff.

Significant work/re-write will be needed before the manuscript can be resubmitted.

The reviews were not bunk, but were simply prescriptive and didn't require massive changes. I realize now, years later, that this was another rejection that wasn't a rejection! I was fooled again! It was a pretty straightforward "major revision." This paper is still sitting on my hard drive, unpublished, and down low in the queue. I just forgot about it because I was occupied with stuff that was more interesting at the time. The coauthor on the paper, who was a postdoc at the time, now has tenure. So there's no rush to get this paper out to enhance his career.

The moral of the story for authors is: If you're not an old hand at reading decisions from editors, be sure to have senior colleagues read and interpret them. When in doubt about what you need to do for a revision, it's okay to ask the editor.

The moral of the story for editors is: We need to construct decision letters so that less experienced authors can tell, without any doubt, whether a revision is welcome and, if so, what needs to be done to make the revision acceptable.

Teaching Tuesday: talking about teaching

Standard

When I did a survey of ecology teachers earlier this year*, I left a space for further comments on teaching in ecology. Here, I got perhaps some of the most interesting opinions. One respondent took the time to practically write a post themselves, which I have pondered quite a bit. Instead of commenting on bits and pieces, I decided to post it in full:

There is a big difference between large lecture hall sophomore courses (Introductory) and upper division courses.  My approach to these is almost totally in opposition.  In the upper division course I do many of the new fangled things you mention above including- think-pair-share, multiple drafts of written work, in class presentations, etc.  In the lower division course, though, this kind of activity is nearly impossible to execute- and the students, many of whom are uninterested, don’t WANT any of that.  So it becomes a pure waste of time.  I have tried many of these techniques in the large lecture hall setting and it becomes mayhem and nothing is accomplished.  So I settled back into pretty straight lecturing, which seems to work just fine- students are happy, they seem to get it, and my time is not wasted.

My Upper division courses are the opposite end of the spectrum.  Sometimes students enroll in my upper division course because they LIKED my large lecture hall technique, and they end up displeased with all the group interactions, presentations, class participation, etc. that happens in the upper division course.  I actually have a little trouble in my reviews from students RESISTING those techniques (that we all think are student friendly).

I approach upper division courses like a “workshop” and I tell them that before we begin at the start of the semester.  Interestingly, some of my smartest students have told me personally, and also in evaluations- ” YOU are the expert in this field- I don’t want my time wasted by listening to the novice opinions of other students.” I think that is an interesting perspective, although most of the students like a more participatory setting.

Finally, I have been involved in a number of teaching workshops and I think it is important to point out that those kinds of settings can become akin to moralizing.  Preachy, in fact.  And, I have excellent data to support the notion that sometimes the strongest advocates of new, “modern,” student friendly, engaging, technologically innovative, etc. are also people who have terrible natural rapport with students!  I have had advisees come into my office and complain bitterly about how terrible faculty member X is, and how everyone tries to avoid their sections of the class, when I know for a fact that faculty member X is the leading advocate on campus for all of these supposedly student – friendly techniques.  In contrast, I know faculty members who have been around for a long time who just use chalk and a chalk board- that is it- 100% lecture, no AV at ALL, who the students love and get a ton from.

E.g., I went to a session once all about how students these days are “Millennials” and they expect to have information delivered in small packages etc.  Have you ever spelled out that tripe to actual students?  I did in my class a couple of times and the students themselves think this is absolutely ridiculous.  They are not a simple “they” and “they” don’t fit into pigeonholes easily, and they don’t want you stereotyping them this way.

There is a high-horse mentality, and even taking this survey I could feel it a little bit… I expect to see some report from this survey bemoaning how ecology teaching is “behind the times” or missing opportunities for real “student engagement.”

I urge extreme caution before making any kind of statements of this sort.  What is missing from any of this discussion is actual OUTCOMES for students!  Has there been content delivery?  We watched some Youtube clips, had a scientific debate on twitter, used clickers, paired and shared, etc—-so what?  Did they get more than would have been accomplished through use of chalk?  Data on this are VERY scanty in my view- and, unfortunately, a lot of our critique of teaching has absolutely no rigor when it comes to measuring OUTCOMES.

As outlined above, I use many of these techniques, and appreciate them- and I will vocally support anyone who choses to use them.   But, I think they are mostly irrelevant to success in teaching.  In my experience, teaching is pretty simple:

(1) Bring good material to the classroom
(2) Be organized, have a plan for the semester- explain the plan- and stick to it.
(3) Demonstrate that you care about the students- you are not there to battle them or prove them stupid, that you really do want them to “get it”
(4) Be transparently fair in grading and other forms of evaluation.
(5) Demonstrate passion for the topic.

There are things I agree with and many I don’t in this commentary, but I want to be careful to not simply argue with what is written here. Instead, the comments have got me thinking about many of the assumptions, biases and difficulties around talking about teaching. Some of those are highlighted above, some not. Mainly I want to use the comments as a springboard. What follows are the somewhat random thoughts that this reading inspired…

First, should we be concerned with whether techniques are “student-friendly” or not? Or what the students want? I keep coming back to this one. Ultimately, as the commenter suggests, it is the outcomes that are important. So regardless of what the students think they want or are comfortable with, I believe we should be doing what helps them to learn.

That leads me to the purpose of teaching in the first place. What are our goals? Do we want students to pass our tests or to take the fundamentals learned in our courses with them for life? Are we exposing students to ideas or do we want them to understand them? Is the main thing to get students to be passionate about, or at least respect, the natural world around them? None of these are mutually exclusive, of course, but the goals we have as teachers will determine the kind of teaching we do. And for some, teaching is just the price for working at a university, and the goal is to get by doing as little as possible. But in general, it seems to me that we as teachers should be mindful of our goals and do what is best able to achieve them. It seems to me that there is a fair amount of evidence that straight lecturing isn't the best way to achieve learning. However, there are many different ways to engage students.

Another assumption is that technology = engagement. Students can be just as engaged with chalk as with clickers. A YouTube video is just as passive as a lecture. What I find interesting is that using some forms of technology such as clickers can force you as a teacher to be more purposeful with engagement. Maybe it doesn’t come naturally to you to get students engaged, so directly incorporating activities aimed at engagement will make that happen. But one of the things I’ve taken from my teaching is that for anything to be successful, you need to think through what you’re trying to achieve.

Are the data truly scant? It seems to me that there is a lot of research on teaching and learning. I’ve only dipped my toe in the literature but it is its own discipline.  I don’t think I’m really qualified to assess whether there is enough data on particular techniques, etc. I’d have to read much more. But it seems to me that we as teachers could benefit a lot from knowing more about what has been studied. Some of the best exams I ever took as an undergraduate were in a psychology class called simply “Memory”. Now that prof knew how to cut through our crap and ask a multiple choice question that actually tested our understanding. Although I didn’t realise it at the time, that course impressed upon me that understanding how our minds work could lead to better teaching and testing materials.

But one of the big questions I am left with is: why can teaching be so difficult to talk about? I worked hard to ask questions in the survey in a very neutral tone. I was curious, but not coming from a place of judgement. I wanted to know what people were doing but am a far cry from knowing what the best practises are or should be. But despite that, even asking about teaching leads some to assume that the results will lead to critical conclusions about the field, before the outcome of the questions is even known. What are we so protective of? If the data exist that we're doing it 'wrong', shouldn't we change? And what if we're doing it 'right'? How can we know without investigating both the teaching practice and the learning outcomes? And does discussing teaching techniques always come off as moralizing or preachy? I've certainly had different experiences. But I wonder where the preachy overtones come from—is it the presenters, or the perceptions of the receivers of the information? I'm sure it varies from situation to situation. But why is it there at all?

Honestly, I was a bit nervous to send out the survey broadly in the first place. I wasn’t sure how people would respond and it was a new kind of data collection for me. Overall, I got a lot of very positive responses to my doing the survey and sharing it on this blog. But I still wonder why resistance to discussing teaching exists. Are we so sure that we know what it takes to be a good teacher? I know I’m not. I certainly look for feedback on my research from experts in the field—why should teaching be any different?

What are your thoughts? Do you think teaching seminars/workshops are too preachy? Are we paying enough attention to the outcomes or getting caught up with flashy new technologies? Should there be more data on what works? Do we pay enough attention to the data that exists?

*for those interested there are some other posts on the results of the survey to be found: here (and links within)

Conflicting interests of faculty and administrators

Standard

Motives of faculty and administrators can be highly variable. But even though many administrators were once faculty themselves, I can only imagine that things inevitably change when you put on that suit.

What are the ranges of possible interests of faculty members and administrators?

Administrators aren't monolithic. Here are various priorities that you might identify in an administrator, all of which might be mutually compatible. Of course I'm leaving plenty out, and of course many of these might not apply to any given administrator.

  • An administrator wanting to climb the ladder will need to keep a balanced budget, carry out the vision of higher-ups, and be well-liked.
  • An administrator who wants to make the university successful will also want to balance the budget, work to promote the visibility of the institution, and try to get the most work out of everyone as possible.
  • An administrator working to promote student success will support faculty efforts to teach and support students, will allocate resources to the individuals who best enhance the education of students, and will not be overly focused on carrying out the nonsensical orders of higher-up administrators.
  • An administrator who just wants to collect the salary of the position until retirement will want to do as little as possible and delegate tasks without much thought. This administrator won’t allocate resources in a way that will require additional management or accountability.
  • An administrator who wants to directly support faculty interests will experience conflicts with higher levels of administration that have different expectations.

What do faculty want? This group is even more heterogeneous than the administrators. Only a small, non-random subset of faculty move into administration, after all.

  • Some faculty will do anything to teach effectively and want resources allocated towards classroom resources, student experiences, professional development of faculty and staffing to support student needs.
  • Some faculty are focused heavily on research, and want resources allocated towards the equipment and time required for research to get done, as well as support for a campus-wide emphasis on research, including support for students conducting research.
  • Some faculty are focused on things away from the university (a.k.a. retired on the job), and want resources allocated to minimize their efforts towards the job, so that they can ride horses and play with their dogs. They’ll want more staff, lower and easier teaching loads and no service commitments. They might want teaching technology that lets them be on campus less frequently.
  • Some faculty want to be accorded with respect and perceived to have prestige. These faculty members will want resources allocated to their pet interests and in ways that they may be able to exert direct control over these resources, often in a way that maximizes their visibility.
  • Some faculty want a faculty job at a different university because they are not fulfilled or do not feel that they are being treated fairly. They are looking for resources that are allocated in a way that will help them strengthen their CV and make them more competitive on the job market.
  • Some faculty want to become administrators. They'll spend lots of time doing service on campus and aren't picky about how resources are allocated, so long as they'll have the ability to do the allocating in the future. These faculty don't have much overt conflict with administrators, though the administrators might be annoyed that these faculty are pretending to run things instead of focusing on their actual jobs of teaching and research.

Note that when faculty goals come in direct conflict with the goals of administrators, or of other faculty members, that’s when junior faculty members demonstrate the mythically poor “fit” that sinks tenure bids.

It’s no wonder that faculty and administrators can get into intense, and frequently petty, disagreements. Both the faculty and administration are diverse groups that can’t even agree on their own interests and priorities. As a result, productive cooperation with administrators is unlikely to emerge because there is a complex mélange of conflicts that define the structure of the relationship. The only thing that everyone has (or, you would hope, should have) in common is the interest in bettering the lives of our students.

I am consistently surprised at how many faculty members don't perceive how poorly their interests match those of other faculty and administrators. As a result, some individuals consistently rail about one pet priority of theirs, which falls on deaf ears all around. Some people are widely known for their pet issues. Pet-issue people are never in a position to convince others to make change happen.

Here is an attempt at a grand summary about conflict-cooperation between faculty and administration:

Admins and faculty have different priorities. Even within the faculty, there are often conflicts that prevent cooperation. Everybody is better off if the non-essential conflicts are overlooked and the benefits of shared cooperation are emphasized. Conflict wastes resources and lowers productivity for everyone.

I’m not advising faculty to roll over when administrators tell them what to do, but it might be wise to simply ignore the things that administrators tell you to do that are not mutually beneficial. Instead, we should focus on things that deliver for both the administration and faculty. There are only so many hours in the day, and if any of that time is spent arguing about something that isn’t in one’s mutual interest, it better be important enough to outweigh the lost benefits that could emerge from cooperation.

By the way, this happens to be the last installment of a 5-part series on conflict and cooperation between faculty and administration. Here are parts one, two, three and four.

Could twitter have saved the lives of seven astronauts?

Standard

When the space shuttle Challenger launched on the morning of 28 January 1986, Roger Boisjoly couldn’t muster the fortitude to watch the launch of the shuttle, as its engines ignited on the launch pad. Moments later, the crew was lifted through the sky to their deaths. Boisjoly and some of his colleagues had spent the preceding night petitioning and pleading, in vain, to avert this tragedy.

Boisjoly was an engineer at Morton Thiokol, the Utah-based NASA contractor, where he worked on the design of the solid rocket boosters for the space shuttle program. (Morton Thiokol received $800 million in contracts for its work on the shuttle program, equivalent to almost $1.5 billion today.) Boisjoly and his colleagues were terrified about the prospect of a disaster on this particular launch because of the weather forecast for Cape Canaveral. The cold temperature triggered events resulting in the loss of the entire vehicle in the span of a couple of heartbeats.

The explosion of the Space Shuttle Challenger in 1986.

Is Boisjoly complicit in the deaths of the shuttle crew? Not at all – he was a true hero. He did everything he could, ultimately sacrificing his own career.

This disaster may be blamed on those who failed to heed the specific and detailed warnings offered by Boisjoly over the year preceding the avoidable tragedy. However, this might not be what you will read in the Rogers Commission report issued in the wake of the disaster.

The Challenger disaster occurred because of a failure of leaders who did not think that public knowledge would, or could, have any bearing on the life-or-death decisions happening in NASA headquarters.

A lot has changed since 1986. The veil that separates the public from governmental and industrial organizations has been partially lifted, through the distributed access to information through social media. When the public has access to technical information about government operations, then the mechanisms of accountability may change.

In the media environment of 2013, is it possible that Boisjoly could have prevented a disaster like the loss of the Challenger? Could Twitter have saved the lives of the Challenger astronauts?

Imagine these tweets, if they came out 24 hours before a predictably fatal shuttle launch:

https://twitter.com/RogBoisjoly/status/400856882463535104

https://twitter.com/RogBoisjoly/status/400857411348467712

Why was Boisjoly so fearful that the shuttle was going to blow up? One component of the design of the solid rocket boosters was an O-ring that became predictably unsafe when launching in cold temperatures. The forecast on that fatal morning was for conditions colder than any previous launch — below freezing — and below the temperature threshold that Boisjoly knew was required for the elastic O-ring seal to perform safely. (If you're older than 40, I bet you remember hearing a lot about the O-ring.)

Three weeks after the disaster, in an interview with NPR, Boisjoly reflected:

I fought like Hell to stop that launch. I’m so torn up inside I can hardly talk about it, even now.

One year later, in a subsequent interview, he explained how close he could have been to stopping the launch, if he could have been more convincing:

We were talking to the people who had the power to stop that launch.

Is it possible that the people — the taxpaying public — could have been the ones with that power? I don't know the answer, but it's an interesting question. If Boisjoly had been an active twitter user, with followers who were fellow engineers able to evaluate and validate his claims, wouldn't they have amplified his concerns on twitter and other social media? Wouldn't it be possible that, in just an eight-hour period, a warning presaging the explosion of the Challenger would be retweeted so many times that the mass media, and perhaps even NASA, would have to take notice?

Wouldn’t aggregator sites like The Huffington Post and Drudge pick up a tweet like Boisjoly’s warning, if it got retweeted several thousand times?

Wouldn’t the decision-makers at NASA have to include very public warnings about a disaster in their calculation about whether to greenlight or delay a launch? Don’t you think they’d get even more anxious about the repercussions of overlooking the engineers’ concerns?

Wouldn't the risk of a disaster, after warnings by an engineer who worked on the project, alter the cost/benefit calculus in the minds of the people who could have delayed the shuttle launch? Even if they didn't believe the claims of Boisjoly and his colleagues, maybe they would have chosen to delay the launch anyway, if engineers using social media were claiming it would explode? Just maybe?

Social media has altered the power relationships among large agencies, the media, and the public. Individuals with substantial issues may have their voices heard, worldwide, over a very short period of time. Is it possible that information sharing on social media could have prevented the loss of the Challenger?

Even though Boisjoly was, obviously and without any doubt, in the right, he was shuffled out of the industry because he dared to challenge authority in order to save lives. He should have been lauded as a hero, but I only heard of his heroics when I read his obituary last year in the LA Times.

If Boisjoly had succeeded in delaying the launch with a rogue social media campaign, he still would have been blackballed by the industry as a whistleblower. And if such a plea had been successful, none of us would ever have known for certain whether his actions prevented a tragedy. All of us, including the crew of the Challenger, would have been able to live with that uncertainty.

Richard Feynman was a member of the Rogers Commission investigating the loss of the Challenger. He issued personal observations as an appendix to the official report, and it’s not surprising that they deal with technical details with accurate conversational aplomb, while also cutting to the heart of the matter:

NASA owes it to the citizens from whom it asks support to be frank, honest, and informative, so that these citizens can make the wisest decisions for the use of their limited resources. For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.

The crew of mission STS-51L: Francis R. Scobee, Michael J. Smith, Judith A. Resnik, Ellison S. Onizuka, Ronald E. McNair, Gregory B. Jarvis and Sharon Christa McAuliffe. Image from NASA.

Teaching Tuesday: Interviewing–the teaching test lecture

Standard

This week I've been a bit distracted by instructions I've been given for a demonstration teaching lecture. It is for a permanent position in my department, so the interview is stressful, important, and the outcome far from certain. There are three others interviewing for the spot, all colleagues and/or collaborators*, all friends, and all deserving of the position. It is also a little strange in that you know the CVs of your fellow candidates exactly, and that all of us will show up for work after the interview, regardless of the result of the job search. The only difference is that one of us will have a permanent job and the others will not (still). I have talked a bit about the Swedish interview process previously and the upcoming one will function in a similar way. One major difference is that in addition to a short research lecture, we've been asked to give a 20-minute teaching lecture. The topic is outside everyone's expertise (Ecology of Plant-Pathogen Interactions), so in some senses it's an even playing field.

I have taught classes previously but not on this particular topic. But given that I’ve never done a demonstration lecture, I’ve been thinking a lot about how to tackle the task. Unfortunately, teaching talks don’t seem to be a common feature of the interview process, so unlike the research seminars and chalk talks, there isn’t so much out there (see Meg Duffy’s post on links for tenure-track job searches, for example).

However, I did find this helpful post about giving test lectures with a focus on those given to actual students in an on-going class (yikes!). It would be tough to drop in on a class that has already established a rhythm between the students and teacher, although I think it would be a good test of your teaching. It might not be fair to the students in the course, however, if they are continually interrupted by different interviewees. The teaching talks I’ve heard of are more commonly to faculty and maybe grad students. Anurag Agrawal compiles some advice on finding an academic job with this bit of wisdom on the teaching lecture (you can find more advice here; HT: Meg):

Teaching talks: Many places will have you give a teaching talk—they may give you a topic or let you choose one from a list. Some will want a sample lecture—others may actually want a verbal statement of your teaching philosophy. In general, ask those around you that actually teach those subjects for outlines or notes. It is usually fine to have notes for your teaching talk. They will probably ask you to not use slides, but overheads and handouts may be very useful. The faculty may interrupt you during your talk and pretend to be students asking questions. Try not to get flustered by them, but rather have fun with them.

Even before reading this, I began my canvassing of people for lectures on plant-pathogen interactions. So far I haven't found it to be a common topic in ecology courses (if you lecture on the topic and are willing to share, yes please!). So after researching for this interview, I might also advocate for including the lecture in one of our ecology courses (I have funding for two more years regardless of the outcome of the interview).

I've only had one experience with this sort of interview requirement, and that was indirect. When I was a masters student, my department was hiring a number of people to expand, and we were also moving to an Integrative Biology model from an organismal division (merging departments). So there were a lot of positions (~6) and likely a lot of opinions on how to best fill them from colleagues who hadn't worked together before. In any event, I got to witness a bunch of job talks and meet with a lot of candidates. It was a useful lesson as a grad student, but the one portion that was closed was the test lectures. I'm guessing these were meant to distinguish the abilities of people from very different fields, but I don't know what the exact instructions were. We (the grad students) did hear rumours that some people's talks were terrible, so it clearly doesn't do to blow teaching talks off. But how to do it well?

Turning to advice on how to give lectures can give some clues. Improving lecturing has a bunch of hints and tips for generally improving your lectures. Another list of practical pointers for good lectures is focused mainly on the classroom but can also be helpful in thinking about how to demonstrate your teaching. I had to link this good talk advice for the hilarious nostalgia it created for the overhead strip tease (advice: don’t do it, and I think this also applies to powerpoint reveals).

From the Columbia University Graduate School of Arts & Sciences Teaching Center (many useful pdfs here including one on giving effective talks), it is better to:

  1. Talk than read
  2. Stand than sit
  3. Move than stand still
  4. Vary your voice’s pitch than speak in a monotone
  5. Speak loudly facing your audience rather than mumble and speak into your notes or blackboard
  6. Use an outline and visual aids than present without them
  7. Provide your listeners with a roadmap than start without an overview

There is also this simple and eloquent advice from a twitter friend:

https://twitter.com/labroides/status/398443862529941504

My plan is to demonstrate how I would give a lecture in a course, including emphasizing where I would stop lecturing and turn things over to the students. As I move away from straight lecturing, it feels a little strange to demonstrate my teaching through lecturing only. But I only have 5 minutes to describe the structure of the course, where this lecture would fit in, and how I would evaluate learning, followed by the first 15 minutes of the lecture. Given all that needs to be packed into 20 minutes, this teaching talk is really a demonstration rather than a lecture. I won't prepare for it as I would for a regular course lecture, and given my unfamiliarity with the topic, it is also going to take a fair amount of research. This is a job interview, so I know it isn't really a teaching lecture; it is a performance. One I'm hoping will convince the committee to let me get on with actual teaching for years to come.

I’d love to hear from anyone who’s done a teaching lecture as a part of their interview! Advice on how to nail this will be greatly appreciated by me but I’m sure others on the TT job search will also appreciate pointers.

* relationships

Efficient teaching: Rubrics for written assignments

Standard

I’ve often emphasized the importance of transparency and fairness in teaching. The evaluation of written assignments is an inherently subjective activity, at least from the perspective of students. The grading of written assignments is most prone to the appearance of unfairness. When students think they’re being treated unfairly, they are not inclined to focus on learning.

Moreover, in the grading of written assignments we are most likely to be inadequately transparent and unfair. By using rubrics to grade writing, we can mitigate, or perhaps even eliminate, this problem.

Some folks don’t like using rubrics because they think that written assignments should be evaluated holistically or by gestalt. As experts in our field, we can tell apart a B paper from a C paper based on reading without the use of a rubric, and we can explain to students in our evaluation how this distinction is made without resorting to over-simplified categories. We can reward deep insight without being captive to a point-making system.

Even if the concepts in the preceding paragraph were factually correct, the choice to formulate such an argument indicates a lack of focus on student learning. Rubrics should be used to grade written assignments not only because they lend themselves to the appearance of fairness in the eyes of students, but because they actually result in more fairness.

Grading written assignments without a rubric is unfair. Why is that? It’s very simple: when an assignment is graded without a rubric, students do not know the basis upon which their writing is to be evaluated. Fairness requires that students know in advance the basis upon which their grade is being assigned.

There are many different components to good writing, and presumably someone who grades holistically takes all of these into account in an integrated fashion and then assigns a grade. However, if the purpose of the assignment is to learn about writing, then the student needs to know which components are important constituents of good writing. And then the student needs to receive credit for including these components, and to not receive credit when they are missing.

If a professor wishes to reward students for making “deep insights,” then these deep insights can be placed as a category on the rubric. And, when handing out the rubric when assigning work to students, the professor can then explain in writing on the rubric what constitutes deep insights that are worthy of receiving points in the rubric.

Rubrics don’t rob professors of flexibility in grading written assignments; they only prevent professors from ambushing students with criticisms that the students would not have been able to anticipate. They also prevent professors from unfairly rewarding students who are able to perform feats that satisfy the professor’s personal tastes even though these feats are not a required part of the assignment.

Is bad grammar something that deserves points off? Put it on the rubric.

Should it be impossible to get an A without a clearly articulated thesis and well supported arguments? Build that into the rubric.

Does citation format matter to you? Put it on the rubric. Don't care about citation format? Then don't put it on the rubric.

When you're grading, you should know what you are looking for. So, just put all of those things on the rubric, and assign the appropriate number of points to each of them. Of course any evaluation of "clear thesis" and "well supported argument" is to some degree subjective. However, when students know that the clarity of their theses and the quality of their arguments are a big part of their grade, then they will be aware that they need to emphasize that up front, and focus on writing well. This point might be obvious to faculty, but it's not necessarily obvious to all of the students. To be fair, every student needs to know these kinds of things up front and in an unbiased fashion.
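To make that concrete, here is a minimal sketch of a rubric expressed as an explicit point breakdown, with a tiny function that totals a student's score against it. The categories and weights are invented for illustration, not a recommendation for any particular assignment.

    # Hypothetical rubric: category -> maximum points (weights are invented).
    rubric = {
        "Clear thesis": 20,
        "Well-supported arguments": 30,
        "Organization": 20,
        "Grammar and mechanics": 15,
        "Citation format": 15,
    }

    def total_score(earned):
        """Cap each category at its maximum and sum the points."""
        possible = sum(rubric.values())
        points = sum(min(earned.get(category, 0), maximum)
                     for category, maximum in rubric.items())
        return points, possible

    # One student's paper, scored category by category.
    points, possible = total_score({
        "Clear thesis": 18,
        "Well-supported arguments": 22,
        "Organization": 15,
        "Grammar and mechanics": 12,
        "Citation format": 15,
    })
    print(f"{points}/{possible}")    # 82/100

The arithmetic is trivial; the value is that every category and its weight is written down and handed to the students before they write a single word.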

There are several other reasons to use rubrics:

Rubrics help reduce the unconscious effects of cultural biases. Students who write like we do are more likely to come from cultural backgrounds similar to our own, and students who write well, but differently than we do, are more likely to come from different cultural backgrounds. If grading is holistic, it is likely that professors will favor writing that reflects their own practices. Without the use of a rubric, professors are more likely to assign higher grades to students from cultural backgrounds similar to their own.

Rubrics save your time before grading. Students often are demanding about their professors’ time when they are anxious about whether they are doing the right thing. The more specific information students receive about what is expected of them, the more comfortable they are with fairness and transparency in grading, the less often instructors are bothered with annoying queries about the course, and the more often they’ll contact instructors about substantial matters pertaining to the course material.

Rubrics save your time while grading. If you grade holistically without using a rubric, and it takes you appreciably less time than it takes with a rubric, I humbly suggest that you're not performing an adequate evaluation. The worst-case scenario, with respect to time management while grading, is that a complete evaluation happens without a rubric, and then it takes only a few moments for the professor to assign numbers on a rubric after finishing the holistic evaluation.

Rubrics save your time after grading. If students are displeased with a grade on a written assignment, and all they have to go on is a holistic assessment and written comments – regardless of verbosity – they are far more likely to bother you to ask for clarification or more points. If they see exactly where on the rubric they lost points, they are far more likely to use their own time to figure out what they need to do to improve their performance rather than hassle you about it.

Most importantly, rubrics result in better writing practices from your students. It is a rare student who relishes receiving a draft of an assignment with massive annotations and verbose remarks about what can be done better. Those remarks are, of course, very useful, and students should get detailed remarks from us. When fixing the assignment, students will be focused on getting a higher grade than they received on their draft. The way to promote student success is to show them the specific categories on which they lost points. This kind of diagnosis, along with any written comments that professors wish to share, is more likely to result in a constructive response, and is less likely to terrify students who are unclear how to meet the expectations of a professor who gave a bad grade without providing a specific breakdown of how that bad grade was assigned. If a student wonders, "what can I do to produce excellent writing?" all they'll need to do is look at where they lost points on the rubric. That's a powerful diagnostic tool. If you think the use of a rubric in your course cannot be a great diagnostic tool, then you haven't yet designed an adequate rubric.

Of course, it’s okay to disagree with me about writing rubrics. If you do, I’d be really curious about what your students think. The last time I graded a written assignment (a take-home exam), I asked my students if they wanted to receive a copy of a grading rubric before I handed out the exam. They all wanted it, and they all used it. By choosing carefully what I put on the rubric, I was sure that their efforts were allocated in the best way possible.

Teaching Tuesday: Writing in Ecology

Standard

In my continuing series on teaching ecology, I am going to focus on using writing in ecology classes. The following is a lot of my opinion, some of the results related to writing from a survey of ecology teachers and a few links to writing resources that I find helpful. If you are interested in exploring past posts stemming from the survey I did of ecology teachers you can read them here (intro, difficulties, solutions, and practice).

Writing is a particular interest of mine, stemming from when I taught a 'writing in the majors' section of ecology as a graduate student. Students applied for this section and attended two sections a week with me, with their grades based on my section rather than exams. I was given an amazing amount of freedom to run the section, and both times it was incredibly fun. I didn't need to give lectures (they attended those with the rest of the class), but I had my first opportunity to organise a syllabus and be in charge as a teacher. It was a wonderful experience as a graduate student. In conjunction with teaching a writing-intensive section, teaching assistants for these classes also took a short course on how to teach writing. I learned an incredible amount by taking the course and by doing the teaching myself. My advice to any PhD students out there: if you have the opportunity to do something like this, do it! The skills I learned teaching these sections have been invaluable to me as a teacher.

I think that learning to write, and specifically to write scientifically, is an important skill. Of course, writing is crucial if you want to go on in science, but scientific writing is also something that students can benefit from regardless of what they ultimately do. So I'm showing my colours and biases here. I think writing is essential, and if we haven't made an effort to teach students to be better writers, then I think we have failed as university teachers. Of course, it is possible to divide the responsibility of teaching writing skills across classes in a program, and there are places where it is easier to do (fewer students, for example). However, I always find it disappointing when I see upper-level undergraduates who have been able to get by without being able to write well. I know that some think that their subject should take precedence over skills like writing (they should have learned that elsewhere!). Given how important the ability to write is for science careers and so many others, I think we need to have some focus on writing in every course. After all, what is the use of knowing an answer if you can't communicate it?

Maybe we ecologists are just a communicative bunch, but 62% of respondents said that writing is essential for teaching ecology.

[Figure: survey responses on the importance of writing for teaching ecology]

So how many use writing assignments in their courses? Well, a quarter rarely or never assign research papers or proposals as writing assignments. So there seems to be a bit of a contradiction here. It could also be that teachers are using different forms of writing assignments in their courses, or setting exams that emphasize writing as well as content. Being a skill, writing takes practice, so if we want students to learn to write we need to give them the opportunity to do so. I think that with effective time management and teaching, writing can be incorporated into any class. For example, I've had students write exam questions and figure captions as very short writing assignments. Of course, one of the best ways to learn how to write, as well as how 'real writing' works, is to produce multiple drafts. I was lucky enough to be exposed to forced multiple drafts as an undergrad. Without the forced part, I wasn't really learning how to improve my writing, but that is only something I realised after the fact. For an upper-level plant ecology class I took, Elizabeth Elle had a clever way to use her time efficiently by doing not-quite-multiple drafts of the same work. We had a report early on in the class that was heavily commented on, and then a larger paper towards the end. Even though these papers weren't on the same topic, she capitalized on the fact that students tend to make many of the same general mistakes again and again: we had to show in the final paper that we had fixed the issues flagged in the first report. Later, working with Elizabeth and my masters advisor, Chris Caruso, really helped me hone my writing. I am still appreciative of their patience. It was only working through many drafts of my writing that got me to think directly about the writing, rather than just the content I needed to include. For me, writing is an on-going learning process. However, multiple drafts are time-consuming for students and teachers, and only 15% of ecology teachers always use them. The general trend is that fewer of those who assign writing also have students do multiple drafts, but the difference isn't large. To me this suggests that many who emphasize writing in class are also using feedback on drafts to help students learn the skill.

[Figure: survey responses on how often writing assignments and multiple drafts are used]

So if writing is important, then how should we teach it? I've gathered a few sources that are mostly directed towards professional scientific writing, but I think they contain lots of good tips that can be adapted for use in classes as well.

Here's a detailed post on clear writing, including a macro that detects your most verbose sentences. Honestly, I'm a little afraid to use it: I tend towards long and involved sentences where I include lots of information that I end up needing to break up into smaller pieces in the revision process, but I would probably benefit from getting those run-on sentences highlighted in red straight away. Here are some more tips on how to write a scientific paper and on the beginning, middle and end of scientific papers. There is also this simple intro to writing for scientific journals and, as mentioned by Brian McGill in his post about clear writing, the Duke scientific writing site is also useful.
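I haven't tried to reproduce the macro from that post, but as a rough illustration, a few lines of Python can flag the sentences in a draft that run past a chosen word count. The sample text and the 20-word threshold here are arbitrary, so treat this as a sketch rather than a polished tool.

    import re

    def flag_verbose_sentences(text, max_words=20):
        """Return sentences longer than max_words, longest first."""
        # Naive split on ., ! or ? -- good enough for a quick self-check.
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        long_ones = [s for s in sentences if len(s.split()) > max_words]
        return sorted(long_ones, key=lambda s: len(s.split()), reverse=True)

    draft = ("I tend towards long and involved sentences where I include lots of "
             "information that I end up needing to break into smaller pieces. "
             "Short sentences are fine.")

    for sentence in flag_verbose_sentences(draft):
        print(len(sentence.split()), "words:", sentence)

Students could run something like this on their own drafts before handing them in, which shifts the proofreading burden from the grader back to the writer.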

Writing in ecology assignments can also include summarizing existing research, so this plain language summaries post might give you some useful tips for students. It is written for scientists who want to communicate their findings more broadly but it seems that this is a good way to also assess if students really understand the literature they are reading.

Further guidance for writing detailed research proposals can be found as an example in TIEE (teaching issues and experiments in ecology). Here the students build upon data they collect and then create proposals but it also provides lots of good tips on helping students to come up with ideas and write proposals.

Finally, a list of common writing errors.

Up next week: ? I have a few more posts in mind from the survey results, including getting into the demographics and potential biases of the answers. I also haven’t included all the questions thus far and there are a few interesting things to discuss from the comments section. I want to reflect a bit more on what I’ve already written about and what might be left that is interesting to say. If you have anything in particular you want me to address, just leave it in the comments and I’ll see if I can include it.

The conflict-cooperation model of faculty-admin relations, Part 4: Consequences of our social interactions

Standard

This is the penultimate piece in a series on faculty-admin relations. Here are parts one, two, and three. You don’t need to get caught up to appreciate the set of tips inferred from prior observations:

  • Faculty are the ones who really run the show at universities. This is true as long as there is tenure, and especially as long as there is collective bargaining. Universities exist to let us do our research and teaching jobs, and any service on campus is designed to facilitate that core function. Any administrator that runs afoul of the faculty as a group will not be able to implement their vision with any kind of fidelity.
  • Administrators cannot be effective at serving students unless the faculty are on board.
  • In a university of adjuncts without tenure, the show is run by regional accreditors, because they can get administrators fired. This is why places run almost entirely by adjunct labor, such as “University” of Phoenix, have curricula that follow the prescriptions of regional accrediting agencies, without anything above or beyond what is required.
  • Faculty and administrators need one another. The more they can get along to meet shared goals, the better things are. When individuals pursue their own goals that don't contribute to the shared goal, conflict results. When there is cooperation toward shared goals, then all sides will be more able to fulfill their individual interests.
  • Good administrators and faculty share one common interest – serving students – but they also have many conflicting interests, and these are highly variable and shaped by the environment.
  • Professors typically want vastly different things from one another, so organization around a common interest is uncommon. This may result in administrators having their own interests met more often than the faculty.
  • Administrators can spend money on any initiatives they wish, but unless faculty choose to carry out the work in earnest, it will fail.
  • Conflict with your direct administrators over things that they are unable to change harms everybody. Individuals who can successfully minimize the costs of conflict are in a position to experience the greatest gain at the individual level, and these actions also serve to increase the group-level benefits of cooperation.
  • Administrators who don’t cooperate with their faculty will be ineffective, and faculty who don’t find common ground with administration don’t get what they need.
  • Universities have often evolved to take advantage of the faculty, even though the faculty collectively are the machine that runs the show. Adjuncts have little power to individually control what happens in the university, and are highly subject to manipulation by administration and other faculty. If they wish to be a part of the system, then they have little choice but to carry out the will of the administration.

What happens when you don’t know anything about the subject you’re teaching?

Standard
Biologie & Anatomie & Mensch, via Wikimedia commons

Like many grad students in Ecology and Evolutionary Biology, I made my living through grad school as a TA.

One semester, there were open positions in the Human Anatomy cadaver lab. I was foolish enough to allow myself to be assigned to this course. What were my qualifications for teaching an upper-division human anatomy laboratory? I took comparative vertebrate anatomy in college four years earlier. We dissected cats. I barely got a C.

You can imagine what a cadaver lab might be like. The point of the lab was to memorize lots of parts, as well as the parts to which those parts were connected. More happened in lecture, I guess, but in lab nearly the entire grade for students was generated from practical quizzes and exams. These assessments consisted of a series of labeled pins in cadavers.

My job was to work with the students so that they knew all the parts for quizzes and exams. (You might think that memorizing the names of parts is dumb, when you could just look them up in a book. But if you’re getting trained for a career in the health sciences, knowing exactly the names of all these parts and what they are connected to is actually a fundamental part of the job, and not too different from knowing vocabulary as a part of a foreign language.)

The hard part about teaching this class is: once you look inside a human being, we’ve got a helluva lotta parts, all of which have names. I was studying the biogeography of ants. Some of the other grad student TAs spent a huge amount of time prepping, to learn the content that we were teaching each week. Either I didn’t have the time, or didn’t choose to make the time. I also discovered that the odors of the preservatives gave me headaches, even when everything was ventilated properly. Regardless of the excuse that I can invent a posteriori, the bottom line is that I knew far less course material than was expected of the students.

Boy howdy, did I blow it that semester! At the end, my evaluation scores were in the basement. Most of the students thought I sucked. The reason that they thought I sucked is because I sucked. What would you think if you asked your instructor a basic question, like “Is this the Palmaris Longus or the Flexor Carpi Ulnaris?” and your instructor says:

I don’t know? Maybe you should look it up? Let’s figure out what page it is in the book?

The whole point of the lab was for students to learn where all the parts were and what they were called. And I didn’t know how to find the parts and didn’t know the names. I lacked confidence, and my students were far more interested in the subject. It was clear to the students that I didn’t invest the time in doing what was necessary to teach well. They could tell, correctly, that I had higher priorities.

Even though students were in separate lab sections, a big chunk of the grade was based on a single comprehensive practical exam that was administered to all lab sections by the lecture instructor. Even though I taught them all semester – or didn’t teach them at all – their total performance was measured against all other students, including those who were lucky enough to be in other lab sections taught by anatomy groupies. Even I at the time realized that my students drew the short straw.

One of my sections did okay, and was just above the average lab section. The other section – the first of the two – had the best score among all of the lab sections! My students, with the poor excuse of an ignoramus instructor, kicked the butts of all other sections. These are the very same students that gave me the most pathetic evaluation scores of all time. They aced the frickin’ final exam.

What the hell happened?

I inadvertently was using a so-called “best practice” called inquiry-based instruction. That semester, I taught the students nothing, and that’s why they learned.

Now, I know even less human anatomy than I did back then. (I remember the Palmaris Longus, though, because mine is missing.) I bet my students now would learn even more than my students did then, and I also bet that I'd get pretty good evals, to boot. Why is that?

I’d teach the same way I taught back then, but this time around, I’d do it with confidence. If a student asked me to tell the difference between the location of muscle A and muscle B, I’d say:

I don’t know. You should look it up. Find it in the book and let me know when you’ve figured it out.

The only difference between the hypothetical now, and the actual then, is confidence. Of course, there’s no way in heck that I’ll ever be assigned to teach human anatomy again, because the instructors really should have far greater mastery than the students. In this particular lab, I don’t think mastery by the instructor really mattered, as the instructor only needed to tell the students what they needed to know, and the memorization required very little guidance. (For Bloom’s taxonomy people this was all straight-up basic “knowledge.”)

I do not recommend having an ignorant professor teach a course. If a class requires anything more than memorizing a bunch of stuff, then, obviously, the instructor needs to know a lot more than the students. Aside from a laboratory in anatomy, few if any other labs require (or should require) only straight-up memorization of knowledge. Creating the most effective paths for discovery requires an intimate knowledge of the material, especially when working with underprepared students.

For a contrasting example, when I've taught about the diversity, morphology and evolutionary history of animals, I tell my students the same amount of detail that I told my anatomy students back then: nothing. I provide a framework for learning, and it's their job to sort it out. If a student asks about the differences between an annelid and a nematode, I refrain from busting into hours of lecture. But I don't just lead them to specimens and a book. I need to provide additional lines of inquiry that put their question into context. It's not just memorizing a muscle. In this case, it's about learning bigger concepts about evolutionary history and how we attempt to reconstruct the evolutionary trees of life. I ask them to make specific comparisons, and I ask leading questions to make sure that they're considering certain concepts as they conduct their inquiry. That takes expertise and content knowledge on my part.

To answer the non-rhetorical question that is the title of this post, I guess the answer is: it will be a disaster.

But if you act with confidence and don’t misrepresent your mastery, then it might be possible to get by with not knowing so much and still have your students learn. Then again, if you’re teaching anything other than an anatomy lab that involves only strict memorization, I’d guess that both you and your students are probably up a creek if you don’t know your stuff.

The semester after I was the TA for human anatomy, I taught the Insect Biology lab. That was better for everybody.