Science moves faster than curriculum. This means we are leaving some of our students behind.
The times have changed, and our curriculum is not keeping up.
In the various majors offered by our Department of Biology, I’m convinced we’re not providing our students the most useful set of quantitative skills. After browsing the catalogs of a variety of other universities, I think we’re not alone.
Question: When you’re teaching, how much should you cover?
I propose a couple of answers:
Answer A: You shouldn’t cover much, because the more you cover, the less they learn.
Answer B: Trick question! You’re not supposed to “cover” anything! If you teach a topic by just making sure it gets covered in a lecture, then you’re not really teaching it.
I just completed my last lecture of my first year as a Visiting Assistant Professor at a liberal arts university. Each semester I got to design my own course and teach three lab sections of a general biology course called Ecology, Evolution, and Diversity. Having graduated in August 2013, I was designing and teaching my own courses for the first time, and it was absolutely amazing.
I did stumble a bit at the beginning though. In the fall I taught Plant Physiology, a junior-level course of my own design, and had a bumpy start trying to figure out how to teach. Given that all of my post-secondary education was at Research I universities, I defaulted to the most familiar teaching format I knew – standing in front of students, PowerPoint up, throwing information and numbers at them. That was my first lecture. I blew through what I thought would take me three lectures in one hour.
Then I did what anyone in my position would have done: sought advice from fellow faculty. This is a top-notch liberal arts university after all, and I am surrounded by teaching gurus. Within a couple of hours and several meetings with different faculty after that first lecture, I completely changed how I thought about teaching. On their advice, I abandoned my PowerPoints (except for complicated images and figures) and returned to the most basic method of teaching: the chalkboard.
In my second lecture, I asked what they had learned from my first and, after many mumbles and looks of confusion, I decided to start from scratch and re-teach it. I was honest and open about it and told them that if I was doing something that confused them, I wanted them to let me know. I used the Socratic method and got them engaged and involved by asking questions constantly. I used the chalkboard to write and explain key concepts. The classroom transformed into an open and engaged learning environment. I was happier, my students were happier, and my teaching was way better. The learning curve wasn’t just steep, it was 180°!
Through my master’s and Ph.D., I had so many opportunities to TA courses that my teaching skills were already well developed for running labs. So the lab sections of the biology course that I ran were much smoother than my Plant Phys course. I shadowed the faculty member who coordinated the course, by which I mean I went to every MWF lecture and to her Monday lab so that my Tuesday, Thursday, and Friday afternoon labs went smoothly. Although it took quite a large time commitment, I learned a lot by doing this and incorporated the same questioning, engaging teaching methods from my classroom into the labs.
With new skills in hand and great feedback from my students in the fall, I designed a CORE science course on agriculture called Food for Thought this spring. By far, this has been my most rewarding teaching experience. The class is for freshmen and sophomores in any discipline. Only three of my students are biology majors; the rest come from departments across campus – political science, economics, philosophy, English, and sociology. Students discovered biology through the history of agriculture and current farming practices. We examined environmental impacts of farming, GMOs, and had a continuous debate about the global food crisis and how to feed the world. This class (again!) taught me how to be an effective teacher because of the new challenge of teaching non-biology students. The course went so well that I have students knocking on my door asking if I could teach it again in the fall so they could take it. I am so touched.
I am so grateful to have had this experience. I am a much more effective and creative teacher and would recommend this job to anyone looking to better their teaching skills. I liked it so much that I have decided to stay for another year.
Here’s an incident, or really just a conversation, that left a little scar on me.
Around the time I was finishing up my PhD, I was given the opportunity to give a seminar at my alma mater. I had sit-down conversations with some of my undergraduate professors. As I was somewhere in the process of starting a faculty position, this was a mind-bending role change, no longer a student but now a junior colleague of my former professors.
I took three excellent courses with one professor, whose courses were well designed, all with engaging and creative labs, and with lecture content well grounded in the primary literature. I worked somewhat hard and I learned a helluva lot, especially in Evolutionary Biology and Biogeography. He definitely had his theoretical biases (a la Gould & Lewontin), but this didn’t stop him from being an excellent instructor.
When we were chatting, he was interested in learning what I was up to, and how I ended up working on the ecology of ants. I told him that my interest started with the evolution of sociality, and that I was curious about the Hamiltonian predictions for colony structure. (When I started my dissertation, people had only just abandoned using allozymes to look at relatedness inside colonies.) He tilted his head and asked me a bit more. Before I realized what was happening, he let me know he wasn’t familiar with kin selection theory. He said that he hadn’t heard of any of the W.D. Hamilton papers from the 1960s. He didn’t seem to think this was a big deal – just an inside-baseball discussion among social insect experts. We moved on to a new topic.
But it was a huge deal. If you’re not a biologist, you might not recognize this. But if you are a biologist, you’ll recognize that my professor openly volunteered (to his credit?) that he was ignorant of something really foundational in his field. Frankly, nobody teaching evolutionary biology at the college level, at the time, should have been unfamiliar with the concept of kin selection.
This blew my mind in three ways. First, it’s bizarre to think that the man who started me on the path to Ecology and Evolutionary Biology didn’t have an adequate map of the territory. Second, he was a top-notch instructor and it was clear to me that we didn’t suffer much (if at all) for his lapses of awareness in the field. Third, I suddenly realized why the supposed controversies that I learned in college were actually tired arguments among everybody in grad school. My professor was merely out of date.
In hindsight, I see he was a classic example of driftwood. But not deadwood. He was a dedicated teacher and an engaged evolutionary biologist, but his research was not well connected to the field beyond campus.
The stereotype of the professor who teaches outdated material is one who is retired-on-the-job, uses the same powerpoints over and over, appears bored, and uses old textbooks because they can’t bother to update the course. That stereotype was not embodied by my Evolutionary Biology professor. But the content itself was not only stale, but it wasn’t even up to date at the outset. And he was an evolutionary biologist!
Ironically, I think the content of the course would have been more representative of introductory evolutionary biology if it had been taught by someone who was not an evolutionary biologist. Such an instructor would have relied more heavily on a textbook and covered the major topics as decided by the textbook authors.
So, which one would have been better for me? The professor who was an amazing teacher and specialist, but aware of only some topics, or the non-specialist who covered all of the bases? I think that’s an unfairly dichotomous question, so I won’t answer it. But it’s fuel for thought.
If I had to list three undergraduate-level course titles that would be in my field of expertise, they would be ecology, insect biology, and tropical biology. I would clearly choose to amplify some topics over others, and these decisions would result in a course that would look very different than if it were structured by a non-specialist who was merely assigned these courses.
For example, I’m not a cracker-jack population biologist, and I don’t build life tables for my work. This is, however, bread and butter for introductory ecology courses. Since I don’t regularly work in population biology, I can’t honestly tell you whether this is an actual skill that every undergraduate biology major needs to know. (I’m not sure it is, though some of the embedded concepts are very important.) Would I include it in my class? You bet I would, because it’s in every textbook and it’s expected of everyone who finishes introductory ecology, and I wouldn’t want to be responsible for underpreparing my students. Even though I’m an ecologist, I wouldn’t teach only the parts of ecology that are my specialty. But I can see how some others could be tempted to leave life tables out of an ecology course. And I wonder whether they need to be among the 30 lessons we get to teach each semester.
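For readers outside ecology, the core of a cohort life table is simple arithmetic on counts of survivors. Here’s a minimal sketch, using made-up cohort numbers, of the two quantities every introductory course derives: survivorship (l_x) and age-specific mortality (q_x):

```python
# Cohort life table basics, with hypothetical data.
# n[x] is the number of individuals alive at the start of age class x.
n = [1000, 500, 200, 50, 0]

# Survivorship l_x: proportion of the original cohort still alive at age x.
l = [nx / n[0] for nx in n]

# Age-specific mortality q_x: of those alive at age x,
# the fraction that die before reaching age x + 1.
q = [(n[x] - n[x + 1]) / n[x] for x in range(len(n) - 1)]

for x in range(len(n) - 1):
    print(f"age {x}: l_x = {l[x]:.2f}, q_x = {q[x]:.2f}")
```

With these numbers, survivorship falls 1.00, 0.50, 0.20, 0.05, while mortality rises from 0.50 to 1.00 in the final age class – the kind of pattern students then classify into the classic Type I/II/III survivorship curves.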
Overall, I have no idea how the community collectively decides what concepts are truly important. I don’t think the K-12 approach of statewide standards is the way to go for higher education, and the culture of Assessment is still leaving us plenty of latitude, which is good. But why do we teach some things as canon, and overlook others?
I get that some topics are important. But what makes them important? What defines a field? Is it the people actively doing research, or the people looking at it from a distance? When we define the topics of lessons in our syllabi, what criteria do we use when making our choices? I haven’t thought much about this other than “I think it’s important,” but I realize that’s not good enough.
Sometimes I hear questions like, “Why is academic freedom so important? Why should university professors have total control over what they teach?”
Let me answer those questions with a cautionary tale.
Last semester, a shortage of academic freedom in one department at my university caused what can only be characterized as a tragic boondoggle. This is causing an entire cohort of students to graduate one year late.
Over fifty Biology majors were enrolled in the second semester of General Chemistry. An adjunct lecturer planned and taught this course. The tenure-track faculty in Chemistry implemented their own common internal exam to be administered to all General Chemistry students. The instructor was not privy to the contents of this exam while she was teaching this course. Consequently, over the entire semester, the lectures and homework assignments did not correspond to the material that the students were tested on at the end of the course.
The students, who had been performing well throughout the semester, were blindsided with an exam that looked nothing like what they had been studying for all semester. This class historically has a pass rate exceeding 80%. Last semester, however, more than 80% of the students failed. The instructor of record for this course, who taught the whole semester, apparently did not have authority over the grading of the exams, nor final authority over the grades that she was directed to submit to the university. This sounds outrageous, but it also sounds like the only sensible explanation for what transpired.
Most of these students clearly did not deserve to fail. They did not deserve an exam that did not reflect the content of the course itself. They deserved an instructor who has the authority to control the grades assigned in the course.
The chair of the department is not making any accommodation for the students who got screwed over in her department. The chair claims that the students simply weren’t prepared for the exam. I don’t dispute that fact, but in this circumstance the lack of student preparation is the fault of the Chemistry department, not the students. The students fulfilled the academic expectations of the instructor, but that had no connection to their grade. That is flat-out unethical.
The consequences of this F go well beyond this single course. None of the students can retake the course this semester, because those sections were filled by those who passed the preceding course in the sequence.
The soonest these victims can retake the course is one year after they were originally enrolled, but now we have twice as many students trying to take this course and the Chemistry Department refused to offer any additional sections to its victims from last semester.
This course is a prerequisite to Organic Chemistry, which is a prerequisite for other courses. Nearly all of our majors in this section – more than fifty students – are now going to graduate at least one year later than they had planned.
What’s the worst part of all this? It happened two months ago, and as far as I can tell, the only people who are aware and troubled are the ones who have no power to change anything.
If any of our students had families donating large sums of money to the school, this situation would have been resolved lickety-split. If anybody with authority in Chemistry actually cared about the students, this would have been fixed before the semester ended. If the department had any confidence in its trained contingent faculty, then this unjust situation wouldn’t have emerged.
The students can file a grade grievance, but that won’t fix the problem. It takes at least a year for that process to go through the system. (I served once as a “preliminary investigator” for a grade grievance claim, and the incident happened three semesters earlier.)
You might ask, “Aren’t common exams an effective way to make sure that there is consistency in grading when sections are taught by different instructors?” The answer to that question is yes. However, that consistency has a price. In this case, the price is reasonable academic progress for scores of students. Keep in mind that most of our students work long hours in addition to a full class load, and also have substantial family concerns at home. Being in school is a great challenge, and we just made the climb to graduation even steeper.
The required use of common exams deprives instructors of the academic freedom to evaluate their own students.
If similar events had taken place in any of the three private institutions in which I’ve taught (as adjunct, visiting, and tenure-track), this disgrace would be unthinkable and scandalous. There would be mass protest. But at this disadvantaged university, it’s just one more injustice.
At this point, I’m not even sure if our administrators are aware of this incident. I have a huge amount of confidence in the Dean and the President, who I imagine would do everything they can to resolve this situation, insofar as it is possible. The fact that this problem wasn’t a howling and yelling crisis at their doorstep at the end of last semester is a sad testament to the fact that our students are just accustomed to being disempowered, and they just roll with being wronged. It’s our job, as faculty, to prevent these wrongs at the outset. That starts with giving all instructors academic freedom over their own workload.
If any instructor is good enough to be hired to teach for the university, then they’re good enough to be trusted by the university to carry out their job independently. Any department that lacks the faith that its own instructors can teach appropriately has huge problems that can’t be fixed by imposing a top-down exam.
As a postscript, I should note that common exams are not always a disaster, though I think they are inadvisable. In grad school, I used to teach three sections in a class that had more than 40 sections. All of the TAs gave the same exam, and we had little control over it. We didn’t even get to see it until a few days before we taught, because it was a practicum set up at the last moment. I see the need for consistency among sections taught by graduate students with little to no teaching experience. I don’t see the need, however, for this particular solution.
How the heck was I supposed to know what to teach when I didn’t know the basis on which students were going to be evaluated? This was obviously a problem for students. (I also lacked the experience and professionalism to deal with this situation effectively.) This was mostly an annoyance, though, and the students did just fine in the end as best as I can recall. The lab was not overly detailed, and the exams weren’t overly idiosyncratic. As a novice instructor, I found the system to be unfair to both myself and the students. If instructors are teaching a course, they should be able to construct or choose their own evaluation. If for some reason that doesn’t happen, at the very least the faculty need to know exactly what is in exams before the start of the semester.
This is a guest post by Lirael.
I’m a PhD student in computer science at a university where most of the undergrads come from pretty affluent, educationally privileged backgrounds (as I did myself, back in my undergrad days). I’m a teaching assistant and/or tutor for a couple of different programs that we have for students who are not from such backgrounds. One is for students who are motivated but have been educationally disadvantaged in some way (whether this was poverty, major illness in high school, an unstable housing situation, war in their home country, or any other life circumstance that would have left them at a disadvantage in their schooling), who take catch-up classes as a cohort and get extensive advising in order to prepare them for a full undergrad program. The other is for students who are first-generation college students or who come from families with incomes below 150% of the poverty line, and gives them free tutoring, extensive advising, career prep, and leadership development. Some students are in both programs. Neither program is exclusively for students of color or poor students, but in practice, most of my students are both.
Computer science has unusual status compared to most science, social science, and humanities programs, because so many people associate it so strongly with a quick and direct path to good jobs. There is some truth to this association – when I graduated from college at 22 and started my first industry job, I had a salary that put me in the top 20% of all US wage earners, plus excellent benefits and good working conditions. This gives computer science obvious appeal for my students (and for other marginalized groups — I have a friend, a trans woman, who teaches at a program to economically empower other trans people by teaching them to code). It also makes it very popular at, for example, many community colleges.
My concern, though, is what sort of computer science marginalized and underrepresented groups are learning in the name of economic advantage.
Some community colleges have excellent offerings, of the sort that will prepare their students well for upper-level classes. In others, the curriculum seems to be dominated by courses that could be described as “How to use a currently-popular technological tool for immediate commercial applications.” Sometimes they are “Intro to a currently-popular computer language.” There’s generally a data structures class, but not much else on the more foundational side of CS. Some four-year departments like this approach too. The thing is that in the tech world most of these skills and languages are likely to be archaic in a few years – I don’t often see job listings asking for people who know Pascal or BASIC or who can hand-write websites in HTML or make an eye-catching GeoCities site, all of which were in the currently-popular category when I was in high school. The CS programs, much more than, say, the biology or history programs, stress the idea that this is vocational training. Again, I don’t want to imply that every community college or state non-flagship is doing this, but I have noticed that plenty do, especially community colleges.
At schools where the idea that learning specific current tools = employability doesn’t drive the curriculum quite so hard – which includes affluent schools with affluent student bodies – students focus on subjects like AI, algorithms, operating systems, robotics, computational biology, distributed computing, software design. They learn specific currently-popular skills in class projects or paid industry internships where they apply, say, AI to creating Android apps, or software design to creating a new video game. They don’t seem to have a problem getting good tech jobs after they graduate. Meanwhile, if a student from a vocationally-focused school wants to transfer to a prestigious one, will they be prepared for the classes at the new school? Will their credits from the vocationally-focused classes transfer?
Are there tech jobs where hiring managers care mostly that applicants have a list of buzzword Skills O’ the Day, and will seriously consider candidates whose whole CS education is an associate’s degree? Yep. What kinds of tech jobs, in general, are those? The crappy tech jobs. The code monkey jobs. The ones that pay less. The ones with less prestige and less respect. The ones that get outsourced to developing countries.
I think it’s incredibly important that people be able to get jobs after they graduate from college. It’s often more important for students from poor or working-class backgrounds, who don’t have family money to fall back on if they don’t get a job right away, so I understand why schools with many such students would be very concerned about employability. But I worry that focus on vocational training will ironically lead to less employability, and less upward mobility, for the people who need it the most. I also worry that increased focus on college as preparation for the workforce, which has had consequences already for the humanities and social sciences, will push computer science in the direction of vocational training.
I am not saying that there should be no vocational focus at all in computer science (indeed, some affluent schools have been criticized for not having enough of one), only that there needs to be balance. The course that I TA is an intro to computer science course focused on game design. Students learn basic computing and engineering concepts along with skills like how to create their own webpage and how to use game-creation software. I make a point of talking about how they can use what they’re learning in other fields as well, like biology or public health or economics, since after all not all of them want to go into computer science. My hope is that they’ll get something out of it no matter what field they go into, and that if they do want to continue in computer science, they’ll be well-prepared to do so.
The university curriculum evolves, and is a creature that is shaped by a variety of environmental forces tugging at it in different directions. Just like any other organism.
The curriculum is pulled by budgetary swings, administrative agendas, educational fads, and the politics of interdepartmental relations. Changes to GE happen, but are rarely optimal because they always are forged in compromise.
There is always the weight of past precedent, from prior circumstances, that weighs down the GE.
As a biologist, I see this as a university-scale example of genetic load.
J.B.S. Haldane coined the term “genetic load,” and mathematically expressed it. In a nutshell, for non-biologists, genetic load is the evolutionary baggage that you carry along with you as the result of natural selection on something else. As evolution improves on some traits, other non-adaptive ones often get packed for the trip as well. (No population of organisms is optimal in all respects, and deleterious mutations creep into the gene pool. An older post about genetic load is over at Sandwalk.)
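For readers who want the math behind the metaphor: in its standard textbook form, genetic load is expressed as the proportional shortfall of a population’s mean fitness from the fittest possible genotype,

\[
L = \frac{w_{\max} - \bar{w}}{w_{\max}},
\]

where \(w_{\max}\) is the fitness of the optimal genotype and \(\bar{w}\) is the mean fitness of the population. A perfectly optimal population would have \(L = 0\); everything above zero is the baggage.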
At most universities with which I’m familiar, the General Education curriculum is weighed down with superfluous courses that were inserted at some point in the past but have lost their relevance or effectiveness. However, once these courses make it into the GE, they stay there for good, because they become pets of the departments and faculty teaching them.
Eliminating a course from the GE is way harder than adding one. So, more and more courses get stacked on top of one another, often independent of their relevance or redundancy.
How do we fix this problem? Well, don’t tinker with GE unless it’s broken. And when it is broken, rebuild it from the ground up. I realize this is pretty much an impossible task. If someone knew how to fix GE, then GE wouldn’t be messed up at so many universities.
What do I mean by useless pap in GE? I’m a big supporter of a classic liberal arts education and I greatly value breadth. But, most “computing” requirements are out of date, and the implementation of writing courses sometimes doesn’t result in more or better writing. My university has some upper-division general education requirements in the sciences that make no sense to me at all, and the students seem to agree with me. Some courses are allowed for GE credit, while others would be great for GE but for political or historical reasons aren’t included.
Whenever someone wants to fix GE by mutating it, all the other stuff from decades ago sticks along for the ride. It’s a huge stinking mess, overloaded with units but short on a genuine broad-based education. At least, that’s how I see it at my place.
By the way, JBS Haldane was a top-flight raconteur, and has taken to tweeting from the grave. He doesn’t tweet often, but he’s worth a follow.
As a scientist, I am often doing science. It’s my job. I know science. By any measure, I’m as much a scientist as any other scientist.
But if you look at what I do on a day to day basis, it looks absolutely nothing like what people think science should look like. Fixing misconceptions about science requires much more than correcting stereotypes of what scientists look like, though that’s a great start. (By the way, here’s my entry to This is what a scientist looks like.)
Science is taught in school as a linear process. In practice, it never is a linear process. It’s not even a linear process in the labs in which we teach the scientific method.
I was in a high school classroom last week, and on the whiteboard of this classroom was that odiously wrong conception that we see everywhere. I see this all the time, and if I were doing my job better I would openly contest it every time I see it.
In its stead, let me share with you what it looks like when I am doing science:
What science really looks like is a little more complex than how it is marketed by publishing companies to our children in school. They’re not training kids to be scientists, they’re selling textbooks to teachers who are not scientists. Many of these teachers are reluctant to teach science because they are not adequately prepared, and because their bosses are making them overdose on math and English to maintain test scores.
Teaching science isn’t easy for those who aren’t used to doing science, so this cute linear process that kids see in school is what publishers have done to make teaching science as simple and boring as possible.
How do our kids really learn what science is? By actually doing science. By having teachers that understand science and do real science with them. What is often missing is the red arrow in the figure above. Even those who buy into the linear model of science need to realize that it is cyclical, that answers lead to questions. It’s not a plodding march of progress. It’s a messy tumble and jumble forward in which new information leads to even more confusion, but with broader horizons. Science expands the circumference of our ignorance.
How can teachers get our kids to do science if they don’t even know what real science looks like? We need to teach real science to the teachers. We can do this in college, but if you look at the science coursework that is required of future elementary- and middle-school teachers, you’d be either dismayed or outraged. This is the starting point in fixing the science education crisis in the US. We need elementary and middle-school teachers who understand, enjoy, and prioritize science.
As scientists in science departments, we have the latitude to seize this curriculum and make sure these classes are taught the right way, by the right people. And we can make sure that people don’t leave our classes without understanding and being excited about science. We can make sure that they’ve been involved in a genuine science experience. We can use genuine inquiry in our teaching.
We also can skip the middle man and do science with current teachers.
This summer I’m taking one of many small steps. I’m having an experienced master teacher at the middle school level join my group in Costa Rica for a month. He should go home with a better idea of what science looks like, I expect. If you want a teacher in your lab, and you’re one of those (declining few) with federal funding, just call up your program director and you probably could get hooked up mighty quickly once you find your teacher. To find a teacher, just ask around, and many will jump at the chance as long as they’re getting paid. Even if it’s just a lot of pipetting. Having a teacher in your lab can change science education for hundreds of kids in a short period of time.
To be clear, I’m not the only one who has this idea in mind. The more of us working to explode the notion that science is linear, the more opportunity kids have to get to do real science.