Wednesday, December 9, 2009

Fund that crap

This week has been a bit crazy already, so if you're looking for the NFL weekly summary, I think DGT got the important points across yesterday.

Well, the reviews are back from one of my pending grants, and guess what. Our friend Preliminary Data is back to rear its ugly head. In this high-stakes game of whack-a-mole, each new piece of data we add seems to make a criticism of something further down the pipeline crop up. Pretty soon I will have performed about 80% of my initial proposal, and that estimate might even be low. I only wish I were kidding about that.

The good news is that we have a large amount of pending data that will basically demonstrate that we can do all of this stuff (and will have done almost all of it before the next deadline) and that our predictions are borne out by the data. I won't rehash the stupidity of calling the required data "preliminary", because I and others have covered that pretty thoroughly. What was particularly glaring in this round was how saying something should be funded at the end of a review means absolutely nothing if there are criticisms in your review.

I know all the war horses will say "no shit" and talk about the need to be at the very top to get funded and blab about Care Bears and tea parties, but people who have not gone through a few grant cycles or reviewed many grants might find it interesting to realize how tight things are between funded and not, and the importance of squeaky-clean reviews. The breakdown of the 5 reviews I got back was pretty straightforward. Two listed the proposal as excellent, had reviewed it previously and actually expressed frustration that it had not already been funded. I appreciate it, you two, don't ever change. The other three reviewers had not previously seen the proposal; two rated it as "Very Good" whereas the third used the dreaded "Good". Saying a proposal is "Good" is like telling someone their haircut is "different" or "interesting". You may as well poop on the proposal and send it back, because that's essentially what you're doing. And you thought rating inflation was just for grades.

But I digress.

The frustrating thing is to read through a review that does not use the "excellent" tag but goes on at length about how good the PI's record is, how well thought out the proposal is, and how it should be funded. Listen up, folks: you either feel the proposal should be funded or you don't. By giving a proposal any rating below "Very Good/Excellent" (I love how NSF lets reviewers choose two categories to split their rating across. Why bother? Just go to a less coarse scale.) you are pretty much saying that you don't think it's good enough to cross the threshold of the fundable, so don't poop on something and say it's worth framing. This is the kind of stuff that drives us young people crazy. I can take the criticism and doubt. I'm getting used to having to complete something before I can get money for it. I even have no problem with someone telling me my ideas are crap if that's what they think. But saying something should be funded, while ranking it in a category that says the opposite, is just plain stupid.

Now we wait for the data that should be coming soon and analyze the shit out of it before the next deadline. Having this proposal declined is a lot less painful when heaps of data are about to arrive.

5 comments:

  1. Yeah, it's the "candid" part of me that wishes the funding agencies would put aside their bullshit lingo- if something is bad, don't say it is "good". If something is excellent, you shouldn't have to say it is "turbo mega awesome, best thing I have ever read". It's like vanity sizing, except for academics.

    LET'S JUST BE HONEST, PEOPLE.

    ReplyDelete
  2. At least your system is clear, JC. And I've handed out a couple of poors in my life, it happens.

    Looking back, this post is kinda all over the place, but you get the idea I think. I guess that's what happens when one blogs at 4:00am.

    ReplyDelete
  3. I'm sorry to hear you didn't get it this time through. It sounds as if you came close. One of two things has worked against you: the reviewer who gave you the "good" was on the panel (and not the ones who gave you excellents), and/or your PO decided not to fund you. There's not much you can do about the first situation, but you can try to find out what your PO is thinking. Drop them an email asking to set up a time to call and chat about your reviews. Ask them what you need to do to bring it into the funding range. You'll likely get some spiel about responding to the reviewers, but listen carefully - the PO will probably drop hints as to what they think is really important.

    ReplyDelete
  4. Odyssey, it's actually pretty clear what I need to do. They want to see that the expensive part of the project will work. I was previously avoiding doing the work required for that section because, well, it's expensive. But, a little while back I thought I might end up seeing reviews like this so I went ahead and did it. We're waiting on the data which should be here before the holidays and that should solve the problem if I can get some analyses done before early Jan. It does mean that much of the data will be collected for the proposal as it stands already, so I guess I will need to modify the focus accordingly.

    I've also contacted the PO, who is away until next week. We'll see what they say when they return.

    ReplyDelete
  5. Verily, the tight squeeze for grant money necessitates some amount of dishonesty. Use all the data from unpublished (or almost-accepted) manuscripts from your postdoc for your first grant as faculty. The papers that result are great for showing progress. Meanwhile, use the money you receive for new projects. I have done it and so have most of my junior faculty colleagues. Talk to your program officer to find out what they think is essential to get the proposal funded. Find out where you landed (how high in the medium-priority category or how low in the high-priority one). That can help you tailor your revision into the fundable range. That said, NSF panels have limited "institutional memory", unlike NIH study sections, which makes a resubmission to NSF harder. Good luck.

    ReplyDelete