Lessons from serving on NSF panels


Last year, I served on a couple of NSF panels*, and I’d like to share some thoughts. Instead of a coherent narrative, I’ll just give a bulleted set of observations and ideas.

Panels are diverse, and this diversity matters. I was rather surprised by the strong representation of researchers from all kinds of institutions on the panels, including teaching-focused institutions. (My sample size is not infinite, though — and if a panel were excluding such folks, then I wouldn’t have been on it, of course.) There were folks from liberal arts colleges, regional comprehensives, other research organizations, and of course R1s. (There was also solid representation across gender and ethnicity.) I witnessed so many conversations where this inclusion made a difference. Everybody in the room was working to understand institutional context when relevant (with respect to reassigned time, mentorship, the role of student researchers, institutional expectations and resources, and so on). There was an earnest interest in being fair and informed. Having people who work in all kinds of institutions was important for keeping discussions grounded in reality. Even before serving, I had long thought that panels are about as fair as they can get (because of the detailed and well-balanced reviews I’ve gotten, both when funded and unfunded), and everything I saw validated this view.

The ethos of a proposal is most critical. This is a vague and intangible point, but it reflects the vague and intangible nature of the phenomenon. Some proposals are just very well written, have a nice sound to them, and make a more compelling case. Yes, badly written proposals might get ranked highly despite the writing, and amazingly written proposals will tank if fundamental problems with the science are detected. But a well-composed and compelling read has a huge leg up. Even on a panel of specialists, people come to the table from different academic specializations. A great proposal can simultaneously convince a specialist and a person in a different subfield that the work is interesting and cool. As a novice grant writer many moons ago, I thought that appealing to the generalist or the specialist was a tradeoff, and that by convincing one group you would lose the other — and that managing this tradeoff was the key to a good grant. As soon as I realized no such tradeoff exists, I started to get funded.

A well-written grant happens when the ideas are very clear to the PI, and it’s written in an equivalently clear fashion. It can simultaneously appeal to the generalist and the specialist. You can make a case with a minimum of jargon and still be compelling to the experts. Every proposal in the world can get seriously dinged for missing something. Every proposal could be 100 pages. There are always things that don’t get included. When proposals get dinged because something is missing, the subtext is that the person writing the review is concerned that the thing is missing. Those kinds of concerns tend to fade away when the proposal has an authoritative but realistic ethos. A well-written proposal tacitly says, “You and I both know that I can’t explain every little thing in here, but check out how gorgeous this is, and you and I both know that I can pull this off.” This requires writing skill, an understanding of human nature, a lot of experience reading and editing, and also a bit of magic. Look at all of the famous scientists in your midst. They have that bit of magic salespersonship about them. A less great proposal is a little desperate to fill in the blanks and cover all of the bases, instead of making a proactive case for the importance of the work. In research proposals, the best defense is a good offense.

Proposals get ranked highly when one or more panelists think the proposal is really great and want to see it funded. As a grant writer, you want to deliver for those advocates by giving them something to work with, and by not providing other people material that can be used to sink the proposal. How do you figure out how to do this? That’s the bit of magic, I think. One good way to figure this out, if you have the opportunity, is to serve on a panel. You also can find friends who write great proposals (you know this because they get funded once in a while) and ask them to edit your proposals and provide guidance.

Panels don’t make funding decisions, program directors do. When panelists finish their job, they don’t know what’s going to be funded. Those decisions are made by program directors, who need to have a balanced portfolio, dealing with a great variety of issues that don’t enter into the discussions of panelists. The panels rank, evaluate, and discuss the merits and the flaws, and that’s it.

Once funding rates drop below 30%, it’s all a crapshoot. I think the panels do their job really well. They separate out the stuff that isn’t worth the support, and then the really great science is separated out from the good stuff. If one out of three proposals gets funded, drawing that line seems kind of doable. But if you have to draw that line at one out of ten, well, then, that’s asking a panel to do the undoable. (This idea isn’t new; it’s one I’ve heard from many people many times over the years, especially as funding rates have declined of late.)

Different NSF programs have different cultures. The two panels I served on were different from one another. The environment of open discussion and respect among colleagues was a common thread. In the end, the job is to produce a ranking along with panel summaries that are helpful to both the organization and the PI. But there are slightly different ways of getting to that end, and the workflow can vary quite a bit. Chatting with other panelists, it seems that the culture of every panel is a bit different, though I can’t imagine how this would affect how you would write a proposal.

Don’t get too hung up on a weird remark out of the blue in a review. Not all reviews are created equal. In many of the programs, there will be ad hoc reviews from specialists, some of whom may have been named as potential reviewers by the PI. Then there are reviews from each of the panelists. If the grant has the misfortune of being cross-reviewed, then there’ll be reviews from more than one set of panelists. The panelists know who wrote all of them. The discussion of the proposal is led by the panelist assigned the primary role, and the two secondary reviewers, who have also written reviews, discuss it with the primary. It’s the opinion of these three people (and other folks in the room, more or less depending on the context and details) that drives the ranking of the proposal. The ad hoc reviews can influence the process if they bring detailed information to light that the panelists had missed. In theory, the ad hocs come from someone who really knows the science in this particular proposal, perhaps more than any of the panelists.

The panelists submit their reviews without knowing what the other reviews said, and a good panelist is prepared to change their mind (either up or down) in light of conversation and additional reviews. It’s possible for all of the reviews to come in lukewarm and yet, after the discussion, for the panel to see a proposal as really important and amazing; it’s also possible for strong reviews to end up with a lowish ranking if the panel identifies major problems that barely appeared in the reviews. Essentially, what happens in the panel discussion is what matters. The reviews structure this conversation, of course, but it’s how the panelists think about the reviews and the proposal that really matters. If there’s a whacked-out line in one of the reviews that somehow is off the mark, then odds are the panel also knows it’s off the mark.

Do read the panel summary with extreme care. Then again, information in one of the reviews can heavily influence a panel. How do you know if this is the case? Read the panel summary very carefully. Let me say this again because this is how important it is: If you want to know what happened in the panel discussion of your proposal, read the panel summary. If you want to know what it is that brought your proposal down a notch or three, then read the panel summary. Please trust me, it’s in there. If there’s a reason the panel is excited and wants to fund the project, then the reason is in the summary. If there are sources of reservation and things that you need to change in the proposal on the next go-round, then the reasons are in the panel summary. Panel summaries are written with much care, and then wordsmithed with even more care to be clear, accurate, and fair, and to provide information that will be useful to the PI. If anything critical is mentioned in the panel summary, you can be sure that item came up in the panel discussion. The panel summary is a consensus document, and everybody signed off on it. Nothing gets mentioned, positive or negative, unless it matters.

Broader impacts matter a lot. Proposals that lack good science won’t get funded. Period. However, weak broader impacts can bring a proposal with great science down — and not just a little bit, but potentially a lot. Likewise, a proposal that has sound science, but nothing particularly earth-shattering, can become a lot more exciting and earn a higher priority because the broader impacts are great. When NSF says that proposals are evaluated on the quality of both the intellectual merit and the broader impacts, they aren’t joking, and in my experience the panelists take this just as seriously. If you build strong broader impacts, this will definitely help your proposal. And if you don’t, it’ll definitely hurt your proposal.

The training of scientists from underrepresented groups is a high priority for everybody in the room. I suppose there are folks out there who don’t want to broaden representation in our own field, but they either aren’t on panels or they’re so outnumbered that they keep it to themselves. Seriously, it’s clear that we’re all on board with this priority. I mean, panelists get excited about this. I don’t think we can pin inadequate progress in this realm on a lack of buy-in from panel members or a lack of urgency felt by the NSF program officers who run each panel. If you tack this component on without much planning or substance, it looks shallow and pandering. But proposals that do it really well are uncommon, and this is highly valued.

A lot of people get broader impacts wrong. In my opinion, the majority of folks just don’t take broader impacts seriously and give them just enough lip service to avoid being called out on giving them lip service. Among the folks who do take broader impacts seriously, the majority don’t go about it as well as they could (I’m guessing it’s due to some combination of being underinformed, overambitious, underambitious, and overconfident). Here’s my single piece of advice for broader impacts: treat them like you treat your science. What do I mean by that? Design broader impacts for success based on best practices that are substantiated by peer-reviewed literature, with a clear rationale, methods, measures of success, and mechanisms for accountability.

Folks often seem to confound research training with actual mentorship. Rarely do proposals provide any specific information about how they will mentor students, even though mentorship is often cited as a key component. Also, when folks say they’re going to recruit students from underrepresented groups, they might point to some other organization that they will use to recruit students, but there are rarely outright statements that lead the reader to understand that students from the target populations actually will be recruited. Almost nobody specifies the metrics that will be used to evaluate the success of the mentorship efforts in the final report. If lesson plans are going to be developed for schools, often it’s not clear whether anybody really wants these lessons, and there is rarely an indication that they’ll be designed to meet the standards of instruction used by local schools. (I’m not violating any kind of confidentiality here, because in all sincerity, this describes the bulk of the proposals. If you’re reading this and imagine it might be specifically about you, let me reassure you it’s only because this is something I saw from most folks.)

While there are folks bending over backward to provide people with information about how to make broader impacts, um, impactful, it doesn’t look like the people writing the proposals are paying much actual attention. And it harms their prospects for funding. (By the way, a big warning sign that I just discovered while writing this post is that the National Alliance for Broader Impacts appears to be entirely steered by people at R1 institutions. If you’re reading this over at NABI, hey, feel free to drop me a line; I can give you some leads on how to reach the institutions that really need broader impacts and get these folks involved in your leadership. Right now, the situation doesn’t bode well for effective minority recruitment, which in fact rarely happens in NSF-funded projects in Biology.)

Preliminary proposals result in double jeopardy. This is just me editorializing here: both the DEB and IOS programs have been running an “experiment” with the preproposal system, in which you send in a 5-pager that gets reviewed, about 75% or so get triaged, and the rest get an invite to submit a 15-pager. I hope this experiment ends, because it’s rarely good for the applicant when the panel for the preproposal disagrees with the panel for the full proposal. I have high confidence in what NSF is doing, but with preproposals, the process seems more than a bit haphazard when funding rates are so low. Another “experiment,” in Geosciences, was to eliminate deadlines. This effectively doubled funding rates because the number of submissions was cut in half. I think a lot of people submit because they feel like they have to, to hit a particular deadline rather than wait a whole year. How about just submitting when you’re ready? I do hope they expand this “experiment” more widely, including to the programs that I submit to.

Call your program director. The program directors make a point of not inserting themselves into the discussion of proposals, to allow the panelists to do their thing. They’ll help move the discussion along with relevant information, but the views of the program officer don’t really influence the outcome, because those views aren’t expressed. Nonetheless, program officers obviously really know what they’re doing, and they can give you information that will help you build a more competitive proposal. That’s not an unfair stacking of the deck — anybody can ask a program officer questions about what NSF wants and doesn’t want. I’m not saying to call just for the purpose of schmoozing, because that’s a waste of everybody’s time. But if you have a real question that you think the PO could address for you, it’s a good idea to ask. And if you happen to be in town, feel free to schedule a visit for a short conversation. Dealing with PIs isn’t a distraction from their job; it is their job.

Here’s a surefire method to piss off your panel: Use the minimum allowable font size, spacing, and margins.

You can register your availability to serve on a panel. I recommend starting with the online panels for the Graduate Research Fellowships. Here is the link for that. Otherwise, feel free to let a program officer know you’re interested, with a link to a CV and a short description of what you do. (I imagine I hadn’t been on disciplinary panels until now because my stuff doesn’t readily fall into a category, which explains why the panel invites I’ve gotten have been for entirely different things.)

If you submit without reading the NSF blogs, you’re crazy. Biology has several blogs, which are super-duper informative. There is one for DEB, another for IOS, one for MCB, and one for DBI, as well as an umbrella blog for all of BIO under the assistant director for Biological Sciences. If memory serves, the DEB blog was the pioneer, and the others saw how useful it was and created their own. It’s not a surprise that all of the posts on these sites are internally vetted at multiple levels, and this obviously constrains what can go into them. But visitors like me can blog a little more freely, of course. Some folks in NSF are very familiar with academic blogs. (And of course, folks, if I’m violating any confidentiality or discussing things I shouldn’t be, please do give me a heads up here. I think I’m following all of the rules I said I’d follow, but of course I want to do things right.)

To get more funding for NSF, please pick up the phone and call your own congressperson, especially if they’re in the party that controls the House. The only way that NSF will have more money is from congressional allocation, and that will come from citizens — like you and me — getting all up in the business of our congressional representatives. (I am convinced that my own rep, the truly honorable Judy Chu, is in our corner. But until her party takes back the House, she’s not controlling the purse strings.) NSF is an apolitical organization that exists to support the development of science in the US. But politicians are the ones who fund NSF, so scientists need to advocate with our politicians to maintain a thriving scientific community.

I hope this has been helpful, and if you have any observations to add, please do so in the comments!


*In the past, I’ve had some folks try to NSFsplain me as if I’m a newbie, so let me spare you the trouble. I’m not naive to what happens at NSF; I’m well versed in the game. But I hadn’t been asked to serve on a panel until this year (and, oddly enough, I was asked to serve on four different panels this year!), and I actually once interviewed for a rotator position more than a decade ago. So while I’m new to panel service, I’m not a rube when it comes to NSF.

17 thoughts on “Lessons from serving on NSF panels”

  1. Great post! Thanks for putting in a plug for the NSF BIO blogs – we’re hoping to use them to reach out and demystify things a bit.

  2. Thanks for this useful info. I’ve been invited as a reviewer for the GRFP and am eager to do a good job. I’ve been wondering if reviewers’ performance gets assessed by the POs to inform future invites. It would make sense to me. Thanks.

  3. Eunjin, I’ve heard thirdhand that panelists get a rating of some sort, and that this influences future panel invites (like how manuscript management systems have a mechanism for editors to rate reviewers). I think they’d be silly not to keep track of such a thing, both for quality control and to be able to bring back the people who were willing and who served well. (Also, within weeks of serving on my first panel, I got two new invites! So, I’m guessing that I did just fine on my first panel and now my name is in the system.)

  4. Cheers for this Terry, useful and interesting even to someone like me with no direct reason to be interested.

    I appreciate your emphasis on the thoroughness and fairness of the process and on the fact that one random reviewer comment never derails a proposal. When success rates are so low, there’s a bit of an unfortunate tendency for some PIs to start questioning the fairness of the process, or to misattribute the reasons why they didn’t get funded.

    I also really appreciate your remark that there’s no trade-off between writing for experts in your sub-sub-field vs. writing for a broad audience. That’s my experience as well. The strongest proposals I’ve seen (both for NSF and other funding agencies like NSERC) read well to both experts on the topic and to those working on quite different topics. Conversely, proposals that read badly tend to read badly to everyone.

    I’d only add my sense (which I’m guessing you share) that presentation is a pretty honest signal. Yes, there are cases where poor presentation sinks what’s fundamentally a really exciting proposal. But in my experience, it’s more common for presentation to reflect the content. As you say, when the PI really knows what they’re doing and why it’s worth doing, that usually shines through.

  5. I doubt there is an association between the ability to write a compelling grant and the ability to design and carry out groundbreaking research. I think there are a bunch of scientists out there who, if they had the funds, would be doing some great stuff, but they just don’t have the knack for sales. Then again, the same sales problem emerges when trying to publish manuscripts in high-profile journals. I think one example of this is Mendel. While I think that clear and convincing writing is evidence of clear thought, treating that kind of writing as a prerequisite skill is a drawback. I don’t have any suggestions to fix this situation, though.

  6. Thanks for this piece of information. In a month I am heading to my first panel meeting. Hopefully I will come away with a similarly positive opinion.

  7. As another recent NSF panelist, I’ll echo Terry’s thoughts, almost to the word. It is extremely gratifying to have the opportunity to serve on a panel, and it is quite eye-opening the extent to which everyone tries their utmost to make the review process actually work in the fair and careful fashion that we all hope that it does.

    The one place where I’d offer a slight difference of opinion is with respect to panel summaries and comments from individual reviewers.

    I would suggest that it’s wise to read between the lines of panel summaries, and compare what appears in them very carefully with what appears in the individual reviewer’s comments.

    Panel summaries are inherently “safe”, and by their nature avoid stressing the sometimes heated nature of panel discussions. While their content is “approved” by all the panelists, this does not mean that all of the panelists would have chosen the words of the summary to express their opinion of the proposal.

    As a result, you’ll often see panel summaries that mention 2 or 3 weaknesses that reduced enthusiasm, where one of these was the singular issue that killed the proposal in the discussion, and the others were “add-on” issues that some reviewer didn’t want to see go unmentioned but that really were not tipping points for the proposal. The difference in the importance of those issues to the discussion is unlikely to appear in the panel summary, but likely to come out in the text of the individual reviewers’ comments.

  8. Hi Terry: you describe a very fair process… for normal/solid science… but one that seems unlikely to fund ‘risky research’, which I define as the stuff that will possibly lead the next generation. I am curious about your remark: ‘I doubt there is an association between the ability to write a compelling grant and the ability to design and carry out groundbreaking research.’ Does this mean that the panel really does not try very hard to spot the really creative, novel stuff? I am personally terrible at writing grants, but my experience is that panels are generally terrible at spotting really novel research in any event, regardless of the grant. Optimal foraging, sex allocation (particularly for plants), and evolutionary approaches to child development are the three examples I know best.
    Discussions among researchers about ‘how to get funded’ usually caution one to avoid much real creativity, but to always claim it’s groundbreaking!

  9. Terry: Is the Mendel in your above comment ‘Gregor Mendel’ or another person in the Biology community?

    Your comment about NSF’s blogs would obviously apply to those directorates (!BIO!) that have them. But, a broader comment: “If you submit without reading the NSF [site]” would be just as applicable. I.e., don’t use your colleague next door as your source for information: the NSF site WILL have the answer, if you read the appropriate areas.

  10. @Eric – I was expecting Terry to take a crack at this, but since he hasn’t, I’ll chime in with my thoughts:

    I would absolutely /not/ characterize the panels that I’ve participated in as unlikely to fund ‘risky research’ – for certain definitions of risky. In fact, the panels I’ve sat on are usually explicitly looking for risk as a feature of the project, and are unequivocally unwilling to fund work that the investigator, or the community, has definitively proven can be done. Risk and a potentially groundbreaking payoff are near-requirements for success.

    That being said, when I say ‘risky research’ here, I do not mean crackpot silliness. The proposed science does not need to be mainstream (actually, it has an advantage if it’s not mainstream), but it absolutely does need to be proposed in terms of well-designed experiments or developments that can be properly tested and evaluated.

    Unfortunately, the large majority of the non-mainstream ideas that come through are proposed by people who appear to possess absolutely no concept of how to design experiments to test hypotheses, how the scientific method works, etc. Panels that I’ve been involved in are /not/ going to fund projects, mainstream or not, where the investigator is proposing to do work that won’t, by dint of poor design, produce any interpretable results.

    Propose something non-mainstream, with at least a little evidence to support the idea that it’s worth spending money on, and show us a research design that demonstrates that you’re actually going to do science and learn something, and the committees I’ve served on would trip over themselves trying to fund you.

  11. Yes, I was referring to Gregor Mendel. An archetypal example of someone who did groundbreaking research but didn’t sell it to a high profile venue.

  12. I’ll also comment on:

    “Does this mean that the panel really does not try very hard to spot the really creative, novel stuff?”

    Again speaking for panels that I’ve served on, the panel definitely tries to spot creative, novel stuff, BUT only in so far as the proposal writer /tells us/ about really creative, novel stuff.

    There is, of course, a practical “there’s only so much time in the day, both to pore through the proposals and try to learn about them, and in the panel sessions to discuss and deliberate them” aspect to the limitations on how deeply the panel can delve into any proposal.

    Much, much more importantly, though, it is NOT the panel’s job, or prerogative, to put words or ideas into the proposal writer’s mouth. Whatever aspects of a proposal are novel and creative had better be spelled out, with adequate evidence supporting their novelty and creativity, in the proposal. We’re not going to take something where a writer has said something that might be interesting and cast our own interests, ideas, and fantasies about where the research might lead into our evaluation of what the proposal actually says is going to be done.

    It’s not our job, and moreover, if someone has done a terrible job of conveying the science, novelty, and creativity in their proposal, then we have very little confidence that they’ll ever do an effective job of conveying the results of their work, even if they do the really fantastic things we’ve imagined for them. If they’re such a terrible writer that they can’t effectively convey their results, then any money spent on their science, good science or not, is essentially flushed down the toilet.

    Funding is a tightly limited resource, and there are far too many outstanding proposals with novel ideas and creativity to spare, that aren’t going to get funded simply because there is not enough to go around. Expending resources on the proposals that are poorly written and likely to fail at conveying results, even if their science works, would be a serious fiscal irresponsibility.
