A few weeks ago someone wrote in, concerned that five of Rattle's six Pushcart Nominations this year were men, wondering if that said anything about our editorial tendencies. In fact, only four of the six were male — Hayden Saunier is very much female, despite sharing a first name with the famous Carruth — but the question is a good one. The editorship of Rattle was gender-balanced (just Alan and Stellasue) until I arrived in 2004 and tipped the scales. Megan replaced Stellasue two years ago, and the balance remained the same: two-thirds of our opinion is male. Might that lead to a bias on our part? And maybe even more interesting is a broader question: who writes more poetry, anyway, men or women?*
Let’s address the former first. Honestly, gender is something we never think of at our editorial meetings. It just doesn’t occur to us, and having read hundreds of thousands of poems (literally) over the last few years, there’s not a noticeable aesthetic difference between male and female poets, at least when you’re only looking at individual poems. The mislabeling of Hayden Saunier above is a perfect example — if the gender of a poet’s name is ambiguous, there’s no way to tell whether a poem was written by a man or a woman.
Another good example is our first Postman Award winner, Cullen Bailey Burns. Read “We Just Want it to Be Healthy” and try to guess the poet’s gender. I can’t tell you the answer, though — I got it wrong the first time, and now I can’t remember which was my bad guess and which the truth.
This also comes up fairly often with the Rattle Poetry Prize, where submissions are blinded. When we get to the great unveiling of winners, looking them up in the database, we sometimes try to guess the poet’s gender, just for fun. And our margin of error is so high that it really shows you how irrelevant gender is to poetry.
But that’s just anecdotal evidence, the worst kind, though usually the most persuasive. I’m a numbers guy — it’s one of the things that makes me like baseball — and I’ve got several data sets at my disposal.
The first set is the smallest: Pushcart Nominees. Believe it or not, I don’t have records of our nominations prior to 2005, but here are the last four years:
Year – Male/Female
2008 – 4/2
2007 – 3/3
2006 – 3/3
2005 – 2/4
Tot. – 12/12
It doesn’t get more balanced than that, but the sample size is so small that it doesn’t mean much.
A slightly larger sample is the Rattle Poetry Prize winners. In three years we’ve selected 33 poems, having no idea who wrote what: 18 were by men, 15 by women. Not perfectly even, but still well within the margin of error of a 50-50 split.
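For what it's worth, here's a quick back-of-the-envelope sketch (my own, nothing formal) of just how wide the 95% margin of error is on a sample of only 33 poems:

```python
import math

# Back-of-the-envelope check: were the 33 blind-judged Poetry Prize
# winners (18 by men, 15 by women) consistent with a 50/50 split?
wins_men, wins_women = 18, 15
n = wins_men + wins_women

p_hat = wins_men / n                   # observed male share
moe = 1.96 * math.sqrt(0.5 * 0.5 / n)  # 95% margin of error under a true 50/50 split

print(f"observed male share: {p_hat:.1%}")
print(f"95% margin of error: +/-{moe:.1%}")
print("within margin" if abs(p_hat - 0.5) <= moe else "outside margin")
```

With only 33 poems, the margin is roughly ±17 points, so an 18/15 split is nowhere near a meaningful skew.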
Next come the interviewees. There are two confounding factors influencing the data here. First off, it should be said that our conversations are the only section of Rattle where we pay attention to gender. There are only two slots for interviews in each issue, and the names go right on the cover — two men or two women stand out as imbalanced, so we actively try (though not too hard) to have one of each. Also, the interviewees have to be popular — the whole point is to have conversations with poets our readers are familiar with and interested in — which means they’re established — which means we’re relying on the establishment to establish them.
With those caveats in mind, we’ve published 42 interviews: 27 men and 15 women. Not so good. Only 36% of interviewees have been women, and the sample size is large enough that the trend, at least, is meaningful. We’ve been at the exact same 36%, too, since I’ve been an editor, so I can’t deny a hand in it. It seems to me that a higher percentage of “prominent” poets are male, due to the slow pace of real institutionalized equality — but we could definitely do more to reverse the trend.
The most telling statistic for Rattle would be the poets in our open section. The sample is large, and there aren’t any outside factors influencing the results. It’s all our decision, which makes me nervous. Here we go:
Issue – Male/Female – %F
#30 – 24/13 – 35%
#29 – 37/21 – 36%
#28 – 34/31 – 48%
#27 – 27/34 – 56%
#26 – 31/22 – 42%
#25 – 21/22 – 51%
#24 – 34/27 – 44%
#23 – 30/23 – 43%
#22 – 33/30 – 48%
Tot. – 271/223 – 45%
I stopped at 9 issues back, because apparently I don’t have a copy of #21 with me. The margin of error, if I remember how to calculate it correctly, is +/-4.1%. Fairly balanced in the long run, but trending in a bad direction. If you graphed this, it would look a lot like the Global Warming hockey stick. I had no idea. We’ll have to take a close look at this summer’s issue to see if it happens again. Nothing changed within our editorial staff between our high of 56% and our low in the most recent issue, so I’m hoping that this is just an anomaly. I’ll update in April, once the next issue goes to press.
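Since I admitted I'm fuzzy on that calculation, here's a minimal sketch of the textbook normal-approximation margin of error for a proportion, using the totals from the table above (223 women out of 494 poets). By this math it comes out closer to ±4.4 points, a shade wider than the ±4.1% I quoted:

```python
import math

# Sanity check on the margin-of-error figure, using the standard
# normal-approximation formula for a proportion: 1.96 * sqrt(p(1-p)/n)
females, total = 223, 494  # totals from the open-section table
p = females / total        # observed female share, ~45%

moe = 1.96 * math.sqrt(p * (1 - p) / total)
print(f"female share: {p:.1%}, 95% margin of error: +/-{moe:.1%}")
```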
If we ignore the recent effects of CO2, it seems men are still slightly favored on our pages. This isn’t the result I was hoping for, nor the one predicted by qualitative experience. If we back up a step and look at the system more broadly, we’re still at the mercy of the submissions we receive. We don’t solicit work; we can only publish what people send us. If slightly more men submit than women, it will save me from feeling like I’m biased.
Saving the most complicated analysis for last, I’m going to take a random sample of 200 submissions from two reading periods — one from the poorly performing issue #30, for which we read in spring and summer of 2008, and one from the female-friendly #27, for which we read in the fall/winter of 2006. Thanks to the unlimited storage of Gmail, I still have every email submission on file, dating back to 2005. I’m gathering all of this information as I go, so I have no idea what the results will be, but this breakdown should be able to tell us two things — what the ratio of male to female submitters actually is, and how much that influences what we end up publishing.
Issue – Male/Female – %F-submitted – (%F-published)
#30 – 113/87 – 44% – (35%)
#27 – 106/94 – 47% – (56%)
#25 – 109/91 – 46% – (51%)
I wasn’t satisfied with just the two issues, so I threw in issue #25 as a kind of control group. As you can see, submissions seem to come from men slightly more often, corresponding closely with our overall publication numbers. The percentage of female poets in a specific issue trends with the submission ratio for its reading period, but not strongly enough to rise above the statistical noise.
I don’t feel like brushing up on standard deviations and the like, but even without doing that, I think it’s safe to conclude that our m/f publication ratio could be plotted as a fairly shallow bell curve, with its statistical mean tied to the gender ratio of the submissions themselves. Fluctuations between issues may be large, but they’ll always average out to match the submissions we receive.**
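To make the "shallow bell curve" claim concrete, here's a toy Monte Carlo sketch. The assumptions are mine, not anything rigorous: roughly 55 open-section poems per issue (about our average over the 9 issues), each with an independent 45% chance of being by a woman. It shows how much the female share of a single issue can swing by pure chance:

```python
import random
import statistics

# Toy Monte Carlo: simulate many issues and look at how widely the
# female share fluctuates when the true underlying rate is fixed.
random.seed(0)
ISSUE_SIZE = 55   # assumption: ~494 poems over 9 issues
P_FEMALE = 0.45   # the submission-pool ratio

shares = []
for _ in range(10_000):
    women = sum(random.random() < P_FEMALE for _ in range(ISSUE_SIZE))
    shares.append(women / ISSUE_SIZE)

mean = statistics.mean(shares)
sd = statistics.pstdev(shares)
print(f"mean female share: {mean:.1%}, std dev: {sd:.1%}")
print(f"typical 2-sigma range: {mean - 2*sd:.0%} to {mean + 2*sd:.0%}")
```

Under those assumptions, the typical two-sigma range runs from about 32% to 58%, which comfortably covers everything from issue #30's 35% to issue #27's 56% — no editorial bias required to explain the swings.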
In the end, I think we’re not biased when it comes to gender, though there are a few red flags to keep an eye on. These findings raise another question for me, which will have to wait until later this week — if men are submitting more poetry, does that mean they’re writing more, too?
Edited to add: Read Part 2 here.
*Please note that for the purposes of this study I’m working under the old-fashioned assumption that there are two distinct genders. The reality is that human sexuality is much more fluid, hence the graphic accompanying this post. Unfortunately it’s too difficult not to oversimplify in an exercise like this.
**For all the baseball fans out there, the relationship between the gender ratio of submissions and that of what we publish reminds me a lot of BABIP — Batting Average on Balls in Play. Sabermetricians only recently realized that no one — not the batters nor the pitchers — seems to have much control over what percentage of balls put in play (any outcome other than strikeouts, home runs, and walks) become hits. The hitters with the highest batting average are simply the ones who tend to either strike out the least or hit a lot of home runs (which go into the stands and are thus uncatchable by the defense). BABIP remains the same, on average, no matter how much it might fluctuate between seasons. The average BABIP is around .295, so for example, if someone like Gary Matthews, Jr., hit .313 in 2006 because of a .349 BABIP, you don’t sign him to a big contract expecting him to repeat that next year. Instead, you wait for the Angels to sign him, and giggle as he regresses to his career mean of .259. In this analogy, our m/f submission ratio of 45%f is the BABIP, and our publication ratio in each issue (our season) can be expected to regress to that mean. Which is why I’m not so worried about the hockey stick graph, at this point.