Who Is Qualified?

The WSFS Constitution (section 3.3.9) requires a candidate be "The editor of at least four (4) anthologies, collections or magazine issues (or their equivalent in other media) primarily devoted to science fiction and / or fantasy, at least one of which was published in the previous calendar year."







Who Are the Editors?

To determine eligibility, we used our own records, and we consulted the Internet Speculative Fiction Database and Goodreads. As we did last year, we estimated the equivalent of one magazine issue to be 45,000 words. Accordingly, we qualified an editor if he/she edited at least 45,000 words in 2016 and at least 180,000 words lifetime, even if those words didn't appear in anything approaching a volume or an issue.
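The thresholds above amount to a simple eligibility test. As a rough sketch of that arithmetic (the function and parameter names are our own invention, not part of any official tool):

```python
def qualifies(units_edited, words_2016, words_lifetime):
    """Rough eligibility check per WSFS section 3.3.9, using RSR's
    45,000-word equivalence for one magazine issue.

    units_edited:   anthologies/collections/issues (or equivalents) edited, lifetime
    words_2016:     words of qualifying short fiction edited in the previous calendar year
    words_lifetime: words of qualifying short fiction edited overall
    """
    ISSUE_WORDS = 45_000  # RSR's estimate of one magazine issue

    # Four units lifetime, or the word-count equivalent (4 x 45,000 = 180,000).
    enough_lifetime = units_edited >= 4 or words_lifetime >= 4 * ISSUE_WORDS
    # At least one unit (or equivalent) in the previous calendar year.
    active_last_year = words_2016 >= ISSUE_WORDS
    return enough_lifetime and active_last_year
```

For example, an editor with no discrete issues but 45,000 words edited in 2016 and 180,000 lifetime would qualify under this word-count interpretation, while one with a long career but under 45,000 words in 2016 would not.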

For the ten SFF magazines besides Tor.com which Rocket Stack Rank reviews, we've identified the eleven editors-in-chief (Uncanny has two) as the individuals who should receive the award. That makes sense because the editor-in-chief is the person who makes the decision on what gets printed and what does not. All eleven easily meet the qualifications.



Tor.com isn't exactly a magazine, and it lists specific editors for its novellas and shorter fiction, so we've treated them as individuals. Out of seventeen editors whom Tor credited in 2016, only seven meet the qualifications.



RSR read and reviewed original stories from eleven anthologies in 2016 which credited a total of twelve editors, but only seven were eligible for the 2016 Hugos. Of course there were more than 100 SFF anthologies published in 2016, and we do not cover content from ones that didn't attract our attention or the attention of other reviewers.



Two individuals met the qualifications, but RSR didn't review enough of their work to make any rational judgment. Mur Lafferty qualifies easily as editor of Mothership Zeta, but we only read two issues of it. Paula Guran qualifies as editor of at least 25 anthologies, including two in 2016, but we only read a single story in Tor.com that she edited. Accordingly, we dropped both names.




Since Jonathan Strahan is both a Tor.com editor and an anthology editor, we ended up with a list of 22 people:





Ways to Rate the Editors



We've identified three reasonable ways to evaluate the quality of editors:

How much work (in words) did they publish?

What percentage of their work was good?

How much work did they publish by new writers?

The charts below divide published works into four categories:

Recommended means stories with at least one recommendation by a prolific reviewer. Not Recommended means stories that RSR thought had problems that reflected poorly on the editor. Not SF/F means stories that weren't science fiction or fantasy. Ordinary is everything else. It's important to keep in mind that "Recommended" is computed from the scores of many different reviewers, but "Not Recommended" and "Not SF/F" represent the opinion of Rocket Stack Rank alone.
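Taken together, these rules form a small decision procedure. Here is a sketch in Python; the function and parameter names are our own invention (the actual tooling isn't published), but the logic follows the rules described in this article, including the tie-breaking rule that a story RSR panned but someone else recommended counts as ordinary:

```python
def classify(rsr_stars, rsr_says_sff, other_recommendations):
    """Assign a story to one of the four chart categories.

    rsr_stars:             RSR's 1-5 star rating for the story
    rsr_says_sff:          False if RSR judged the story not to be SF/F
    other_recommendations: recommendations from the other prolific reviewers
    """
    if not rsr_says_sff:
        return "Not SF/F"            # RSR's call, regardless of other opinions

    recommended = other_recommendations > 0 or rsr_stars >= 4
    if recommended and rsr_stars <= 2:
        return "Ordinary"            # conflicting signals cancel out
    if recommended:
        return "Recommended"
    if rsr_stars <= 2:
        return "Not Recommended"     # RSR pan, and nobody else recommended it
    return "Ordinary"                # 3 stars, no recommendations
```

Every story lands in exactly one category, so the word counts in the charts add up with no overlap.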



NOTE: In the charts below, "Lynne M. Thomas and Michael Damian Thomas" is abbreviated to just "Lynne M. Thomas" in the interest of space. Anyone nominating one should nominate both.

Rate By Quantity

One measure of a good editor is how much work he/she got published; that is, how many words of original fiction did he/she get into print. The following table shows how many words of original fiction each of the above editors produced in 2016, color-coded to show the quality.





Comparison of Editors by Number of Words of Original Fiction Published in 2016

By sheer quantity, Trevor Quachri (Analog) and Sheila Williams (Asimov's) are ahead of everyone--the only editors to publish more than half a million words of original short fiction in 2016.

When you exclude bad stories and stories that weren't SFF in the first place, Sheila Williams (Asimov's), Trevor Quachri (Analog), and C.C. Finlay (F&SF) are the only ones over 400,000 words.

When you only include recommended stories, Sheila Williams (Asimov's), C.C. Finlay (F&SF), and Trevor Quachri (Analog) are the only ones over 300,000 words, although Neil Clarke (Clarkesworld) comes very close.

If you think any of those measures is valuable, use them to decide which of the editors in your list should really be nominated for best short-form editor.

Rate By Quality

A different way to evaluate an editor is to measure what fraction of his/her published works are any good. Or at least how much isn't bad. One of the biggest jobs of any editor is protecting readers from bad stories, after all.

Comparison of Editors by Percentage of Words of Original Fiction Recommended in 2016

Carl Engle-Laird (Tor.com) distinguished himself by being the only editor in the list who produced no stories that RSR recommended against and none that weren't SFF at all. Liz Gorinsky (Tor.com) and C.C. Finlay (F&SF) both managed better than 90%.



In terms of percentage of stories recommended, Sheila Williams (Asimov's), C.C. Finlay (F&SF), and Jonathan Strahan ("Bridging Infinity," "Drowned Worlds," and Tor.com) all exceeded 70%, and Neil Clarke (Clarkesworld) came close at 69.7%.



As above, take a look at the editors on your list and see how they compare.

Rate by New Authors Promoted

A key function of editors is developing new talent. This last chart shows how many words of fiction each editor published that were by writers who were eligible for the John W. Campbell Award for Best New Writer, which means their first professional publication was either 2015 or 2016.






Scott H. Andrews (Beneath Ceaseless Skies), Niall Harrison (Strange Horizons), and John Joseph Adams (Lightspeed) all published over 70,000 words of original fiction by new writers, and Neil Clarke (Clarkesworld) was close.



When we exclude bad and non-genre stories, Scott H. Andrews (Beneath Ceaseless Skies) and Niall Harrison (Strange Horizons) are the only ones at or above 50,000 words, followed by Carl Engle-Laird (Tor.com) above 40,000 words, while John Joseph Adams (Lightspeed), Neil Clarke (Clarkesworld), Trevor Quachri (Analog), and C.C. Finlay (F&SF) are above 30,000.



It's probably unreasonable to expect first-time writers to get recommendations, and that measure doesn't appear to cast any light on the question of who the best editor might be. Likewise, we didn't do the percentage analysis for new authors because we didn't think it was illuminating.



Note that six editors did not publish any fiction by new authors at all.

Source Data

All the above charts were generated from this table. If you click on the black arrows in the column headings, you can sort by those values. Or copy and paste into Excel if that's easier.





| Editor | Reco | OK | Total | %Reco | %OK | Campbell Reco | Campbell OK | Campbell Total |
|---|---|---|---|---|---|---|---|---|
| Andy Cox | 126,621 | 173,462 | 203,068 | 62.4% | 85.4% | 4,907 | 16,928 | 21,929 |
| Ann VanderMeer | 13,031 | 36,886 | 58,934 | 22.1% | 62.6% | 0 | 5,497 | 5,497 |
| Athena Andreadis | 48,096 | 92,265 | 116,503 | 41.3% | 79.2% | 0 | 0 | 0 |
| Bryan Thomas Schmidt | 36,400 | 54,900 | 81,800 | 44.5% | 67.1% | 0 | 0 | 0 |
| C.C. Finlay | 342,455 | 432,220 | 468,703 | 73.1% | 92.2% | 24,679 | 31,211 | 33,050 |
| Carl Engle-Laird | 57,155 | 130,641 | 130,641 | 43.7% | 100.0% | 0 | 44,275 | 44,275 |
| Dominik Parisien | 70,831 | 95,610 | 109,956 | 64.4% | 87.0% | 0 | 0 | 0 |
| Ellen Datlow | 110,563 | 146,326 | 170,825 | 64.7% | 85.7% | 0 | 0 | 0 |
| Ian Whates | 69,890 | 141,616 | 182,053 | 38.4% | 77.8% | 0 | 8,165 | 20,670 |
| Jason Sizemore | 71,739 | 136,858 | 175,509 | 40.9% | 78.0% | 5,570 | 22,170 | 22,170 |
| John Joseph Adams | 179,047 | 236,174 | 294,177 | 60.9% | 80.3% | 22,192 | 37,800 | 73,185 |
| Jonathan Strahan | 202,727 | 241,896 | 277,171 | 73.1% | 87.3% | 7,956 | 7,956 | 7,956 |
| Lee Harris | 99,712 | 269,482 | 420,967 | 23.7% | 64.0% | 0 | 0 | 0 |
| Liz Gorinsky | 21,256 | 45,600 | 48,465 | 43.9% | 94.1% | 0 | 0 | 0 |
| Lynne M. Thomas and Michael Damian Thomas | 48,517 | 110,585 | 154,023 | 31.5% | 71.8% | 0 | 5,119 | 5,119 |
| Neil Clarke | 293,048 | 325,774 | 420,376 | 69.7% | 77.5% | 25,012 | 37,578 | 65,511 |
| Niall Harrison | 48,926 | 133,184 | 196,955 | 24.8% | 67.6% | 20,368 | 50,053 | 77,671 |
| Patrick Nielsen Hayden | 39,186 | 48,653 | 83,786 | 46.8% | 58.1% | 5,634 | 5,634 | 27,694 |
| Scott H. Andrews | 159,903 | 331,268 | 380,080 | 42.1% | 87.2% | 27,983 | 56,371 | 81,397 |
| Sheila Williams | 470,522 | 533,198 | 600,977 | 78.3% | 88.7% | 20,301 | 26,490 | 39,659 |
| Trevor Quachri | 306,147 | 447,633 | 611,612 | 50.1% | 73.2% | 19,521 | 32,367 | 42,163 |



Here's what the column headings mean:

All Stories covers all the original short fiction first published in 2016 which was attributed to each editor and reviewed by Rocket Stack Rank.

Campbell-Only covers the subset of All Stories which were written by authors eligible for the John W. Campbell Award for Best New Writer. That is, authors whose first professional publication was in 2015 or 2016.

Reco is the number of words "recommended" in the charts above.

OK is the sum of words "recommended" and "average" in the charts above.

Total is the total number of words of original stories that editor edited.

%Reco is Reco divided by Total.

%OK is OK divided by Total.
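The two percentage columns are straightforward derived values, so you can check any row yourself. For example, using the figures from the Andy Cox row:

```python
# Derived columns: %Reco = Reco / Total and %OK = OK / Total,
# rounded to one decimal place. Figures from the Andy Cox row.
reco, ok, total = 126_621, 173_462, 203_068

pct_reco = round(100 * reco / total, 1)
pct_ok = round(100 * ok / total, 1)

print(pct_reco, pct_ok)  # 62.4 85.4, matching the table
```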

Other Factors

Here are examples of things that arguably should count towards an editor's rating but which the charts above do not include. You should take a look and see if any of these is important to you.

Pushing the Envelope: Did the editor publish anything special or daring this year?

Editorial Stance: Did the editor use his/her position to try to effect positive change in the SF community, either through editorials, interviews, or speeches?

Reprints: Reviewers only recommend original fiction, but many readers probably enjoy the reprints just as much.

Art: Some magazines have far better cover art and illustrations than others.

Nonfiction: All the magazines have content ranging from science articles to book/movie reviews to author interviews to editorials. They take time and effort to produce, and they further separate magazines from anthologies. We didn't count them here, but you might want to reward an editor for quality nonfiction articles.

Author Service: Things like how long it takes for a magazine to accept or reject a manuscript, how promptly they pay authors, whether their contracts are reasonable, etc.

Publication Quality: Are there formatting errors in the print or electronic formats? Do they produce podcasts? Do they include word counts or some other way to identify the short fiction category? Can you buy back issues easily? Is the web site easy to navigate?

Blurbs: Do the blurbs before the stories give too much away?

Advancement: Did the editor elevate his/her publication from semi-prozine to professional this year?

Management: Did the editor attract well-known film critics, book critics, science writers, etc. to write nonfiction columns?

Where Did This Data Come From?

So where did we come up with all these numbers, and what do they really mean? From here down, we'll discuss the decisions we made and we'll offer some of the raw data.

What does an editor do?





The Hugo Award for Best Editor (Short Form) is for the people who select the stories that go into magazines and/or anthologies. These people generally have the title "editor-in-chief," as a little time spent reading their editorials and blog posts makes clear. (They constantly say things like "I accepted this story" or "I saved you from stories like that.") The award has nothing to do with copy editors, who correct spelling and grammar.

The exception is Tor.com, which assigns (and credits) different editors to different stories.

Given that, judging them by the quality of the stories that were printed in their magazines and anthologies makes good sense.

Why exclude the smaller publications?

Except for one publication, everything we review is an SFWA-qualifying market. That means that authors who publish stories there are eligible to apply for membership in the Science Fiction & Fantasy Writers of America. These publications pay more and, hence, attract the better stories.

We also rule out publications that do not focus on SFF, those which focus on horror, and those that focus on stories under 2,000 words, although we do review such stories when they occur in other venues or when they turn up in "Year's Best" anthologies or major recommendation lists.

That just leaves between three and five publications (depending on how you apply those rules) that we're missing, and we almost never see anyone else recommending anything from them. On the other hand, we added one publication even though it's not SFWA-qualifying, simply because so many excellent stories are published there and keep getting recommendations. None of the omitted publications jumps out at us like that.

Accordingly, we think the eleven magazines and eleven anthologies we read and reviewed this year were more than enough.

How do you evaluate an editor?

A. Story-based Metrics

Ultimately editors are gatekeepers responsible for bringing us good stories. We should credit them for good stories and penalize them for bad ones. To do this, we defined four quality levels.

1. Not SFF

These are all stories that RSR rated as not being science fiction or fantasy, regardless of whether anyone else liked them (see list).



It's worth discussing why we chose to ignore the other reviewers on this one. The truth is, almost all of the non-SFF stories were excellent, well-written stories. The trouble is, it isn't fair to include them with SFF. Once you drop the requirement that a story have a speculative element, it becomes far easier to write. You eliminate the need for infodumps. You greatly reduce the amount of suspension of disbelief you require from the reader. You make it easier for the reader to identify with your characters, and you can use the time and words you saved to focus on making the reader care about the characters.




A good editor should understand this and not get so excited about the quality of a story that he/she fails to notice that it's not an SFF story at all.

2. Not Recommended

A slightly different measure of an editor is how many bad stories they print. Bad stories range from things that never should have left the slushpile (1-star stories) to stories that make it hard for readers to suspend disbelief (2-star stories).

Currently, RSR is the only reviewer that tries to identify bad stories. In cases where we gave a story 1 or 2 stars but any of the other reviewers recommended it, we counted that story as ordinary: neither recommended nor not recommended.

3. Recommended

These are stories that were recommended by any of the six reviewers we surveyed (see list). These include both 4 and 5-star stories. As mentioned above, when RSR recommended against a story that anyone else recommended, we treated it as ordinary.

4. Ordinary

These are all the rest of the stories. Either RSR gave them 3 stars and no one else recommended them (see list), or else someone did recommend them but RSR gave them 1 or 2 stars (see list).

Every story falls into one and only one of these four categories; there is no overlap.

What Do You Think? What's the Best Way to Help People Make Best Editor (short form) Nominations?

Best Editor is a notoriously difficult category to nominate for, and while a number of people like our approach, it has also drawn some criticism. We're definitely interested in other ways to attack the problem, so please share your thoughts in the comments below.

Most fans have a hard time making Hugo nominations for the Best Editor (Short Form) category. People tend to remember individual stories or publications, not the names of the editors involved. Even when they know the editor, they aren't sure if the editor is qualified, and it takes a bit of research to figure that out. Finally, even if you have a few names in mind, it's not easy to see how they compare with other editors.

To make this easier for people nominating for the 2017 Hugo Awards, we've analyzed all the recommendations by six prolific reviewers of 813 original works of short fiction from the top publications of 2016, and we've distilled a list of editors, verified that they're qualified, and included charts making it easy to compare the work they did last year. This should make it a lot easier for people to nominate for the category with confidence.

The point of this document is to help people identify the editors of works they liked and help them confirm that they're qualified and worthy. There is no intent to create a slate of names for people to nominate blindly.

The editor-publication table should be enough for you to identify which editors' work you've seen, except for Tor.com. To do a proper job for the Tor.com editors, start with the Tor.com magazine page. Look down that list to spot any stories you especially liked and open their mini-reviews in separate tabs (CTRL-click on the mini-review links). Those include the name of the editor.

Based on your reading during the year, this may be enough to give you a list of just five people. However, just because they're qualified doesn't mean they're actually good enough to deserve an award. You should read at least a bit further to make sure.

If you still have more than five people on your list, definitely keep reading to find criteria to trim your list.