Introduction

The characters and their relations can be seen as the backbone of any story, and explicitly creating and analysing a network from these relationships can provide insights into the community structures and social interactions portrayed in novels (Moretti, 2013). Quantitative approaches to social network analysis, which examine the overall structure of these social ties, are borrowed from modern sociology and have found their way into many other research fields such as computer science, history, and literary studies (Scott, 2012). Elson, Dames & McKeown (2010), Lee & Yeung (2012), Agarwal, Kotalwar & Rambow (2013), and Ardanuy & Sporleder (2014) have all proposed methods for automatic social network extraction from literary sources. The most commonly used approach for extracting such networks is to first identify characters in the novel through Named Entity Recognition (NER) and then to identify relationships between the characters, for example by measuring how often two or more characters are mentioned in the same sentence or paragraph.

Many studies use off-the-shelf named entity recognisers, which are not necessarily optimised for the literary domain and do not take into account the surrounding cultural context. Furthermore, to the best of our knowledge, such studies focus on social network extraction from 19th and early 20th century novels (which we refer to as classic novels).1 Typically, these classic novels are obtained from Project Gutenberg (http://gutenberg.org/), where such public domain books are available for free. While beneficial for the accessibility and reproducibility of the studies in question, more recent novels may not resemble these classic novels with respect to structure or style. It is therefore possible that the social networks of classic novels are structured very differently from those of more recent literature. They might differ, for example, in their overall number of characters, in the typical number of social ties any given character has, in the presence or absence of densely connected clusters, or in how closely connected any two characters are on average. Moreover, changes along dimensions such as writing style, vocabulary, and sentence length could prove to be either beneficial or detrimental to the performance of natural language processing techniques, which may lead to different results even if the actual network structures remained the same. Vala et al. (2015) did compare 18th and 19th century novels on the number of characters that appear in the story, but found no significant difference between the two. Furthermore, an exploration of extracted networks can also be used to assess the quality of the extracted information and to investigate how social ties are expressed in a novel.

Thus far, we have not found any studies that explore how NER tools perform on a diverse corpus of fiction literature. In this study, we evaluate four different tools on a set of classic novels which have been used for network extraction and analyses in prior work, as well as on more recent fiction literature (henceforth referred to as modern novels). We need such an evaluation to assess the robustness of these tools to variation in language over time (Biber & Finegan, 1989) and across literary genres. Comparing social networks extracted from corpora consisting of classic and modern novels may give us some insights into which characteristics of literary text aid or hinder automatic social network extraction, and may provide indications of cultural change.
As previous work (Ardanuy & Sporleder, 2014) has included works from different genres, we decided to focus on the fantasy/science fiction domain to smooth potential genre differences among our modern books. In our evaluation, we devote extra attention to the comparison between classic and modern fantasy/science fiction in our corpus. We define the following research questions:

(1) To what extent are off-the-shelf NER tools suitable for identifying fictional characters in novels?

(2) Which differences or similarities can be discovered between social networks extracted for different novels?

To answer our first research question, we evaluate four named entity recognisers on 20 classic and 20 modern fantasy/science fiction novels. In each of these novels, the first chapter is manually annotated with named entities and coreference relations. The named entity recognisers we evaluate are: (1) BookNLP (Bamman, Underwood & Smith, 2014; https://github.com/dbamman/book-nlp, commit: 81d7a31), which is specifically tailored to identify and cluster literary characters and has been used to extract entities from a corpus of 15,099 English novels. At the time of writing, this tool was cited 80 times. (2) Stanford NER version 3.8.0 (Finkel, Grenager & Manning, 2005), one of the most popular named entity recognisers in the NLP research community, cited 2,648 times at the time of writing. (3) Illinois Named Entity Tagger version 3.0.23 (Ratinov & Roth, 2009), a computationally efficient tagger that uses a combination of machine learning, gazetteers,2 and additional features extracted from unlabelled data. At the time of writing, the system was downloaded over 10,000 times. Our last system, (4) IXA-Pipe-NERC version 1.1.1 (Agerri & Rigau, 2016), is a competitive classifier that employs unlabelled data via clustering and gazetteers, and that outperformed other state-of-the-art NER tools in its authors' in-domain and out-of-domain evaluations.

To answer the second research question, we use the recognised named entities to create a co-occurrence network for each novel. Network analysis measures are then employed to compare the networks extracted from the classic and modern novels, to investigate whether the networks from the two sets of novels exhibit major differences.

The contributions of this paper are: (1) a comparison and an analysis of four NER systems on 20 classic and 20 modern novels; (2) a comparison and an analysis of social network analysis measures on networks automatically extracted from 20 classic and 20 modern novels; (3) experiments and recommendations for boosting performance on recognising entities in novels; and (4) an annotated gold standard dataset with entities and coreferences for 20 classic and 20 modern novels.

The remainder of this paper is organised as follows. We first discuss related work in the section 'Related Work'. Next, we describe our approach and methods in the section 'Materials and Data Preparation'. We present our evaluation of four different NER systems on 20 classic and 20 modern novels in the section 'Named Entity Recognition Experiments and Results', followed by the creation and analysis of social networks in the section 'Network Analysis'. We discuss issues that we encountered in the identification of fictional characters and showcase some methods to boost performance in the section 'Discussion and Performance Boosting Options'. We conclude by suggesting directions for future work in the section 'Conclusion and Future Work'. The code for all experiments as well as the annotated data can be found at https://github.com/Niels-Dekker/Out-with-the-Old-and-in-with-the-Novel.

Materials and Data Preparation

For the study presented here, we are interested in the recognition and identification of persons mentioned in classic and modern novels for the construction of the social network of these fictitious characters. We use off-the-shelf state-of-the-art entity recognition tools in an automatic pipeline without manually created alias lists or similar techniques. For the network construction, we follow Ardanuy & Sporleder (2014) and apply their co-occurrence approach for the generation of the social network links, with weighted edges that indicate how often two characters are mentioned together. We leave the consideration of negative weights and sentiments for future work. Before explaining the details of the entity recognition tools used, how they compare on the given task, and how their results can be used to build and analyse the respective social networks, we first describe our selected corpus, how we preprocessed the data, and how we collected the annotations for the evaluation.

Corpus selection

Our dataset consists of 40 novels—20 classic and 20 modern novels—the specifics of which are presented in Table A2 in the Appendix. Any selection of sources is bound to be unrepresentative in terms of some characteristics, but we have attempted to balance breadth and depth in our dataset. Furthermore, we based our selection on choices made by other researchers for the classics and on compilations by others for the modern books. For the classic set, the selection was based on the Guardian's Top 100 all-time classic novels (McCrum, 2003). Wherever possible, we selected books that were (1) analysed in related work (as mentioned in the subsection 'Coreference Resolution') and (2) available through Project Gutenberg (https://www.gutenberg.org/). For the modern set, the books were selected by reference to a list compiled by BestFantasyBooks.com (http://bestfantasybooks.com/top25-fantasy-books.php, last retrieved: 30 October 2017). For our final selection of these novels, we deliberately made some adjustments to obtain a wider selection. That is, some of the books in this list are part of a series, and if we were to include all the books of the upvoted series, our list would consist of only four different series. We therefore chose to include only the first book of each such series. As the newer books are unavailable on Gutenberg, these were purchased online. These digital texts are generally provided in .epub or .mobi format. In order to reliably convert these files into plain text format, we used Calibre (https://calibre-ebook.com/, version 2.78), a free and open-source e-book conversion tool. This conversion was mostly without hurdles, but some issues were encountered in terms of encoding, as discussed in the next section. Due to copyright restrictions, we cannot share this full dataset, but our gold standard annotations of the first chapter of each novel are provided on this project's GitHub page. The ISBN numbers of the editions used in our study can be found in Table A2 in the Appendix.

Data preprocessing

To ensure that all the harvested text files were ready for processing, we first ensured that the encoding of all documents was the same, in order to avoid issues down the line. In addition, all information that is not directly relevant to the story of the novel was stripped. Even though peripheral information in some books—such as appendices or glossaries—can provide useful information about character relationships, we decided to focus on the story content and thus discarded this information. Where applicable, the following peripheral information was manually removed: (1) reviews by fellow writers, (2) dedications or acknowledgements, (3) publishing information, (4) table of contents, (5) chapter headings and page numbers, and (6) appendices and/or glossaries.

During this clean-up phase, we encountered some encoding issues that came with the conversion to plain text files. Especially in the modern novels, some texts used inconsistent or odd quotation marks. This issue was addressed by replacing the inconsistent quotation marks with neutral quotation marks that are identical in form regardless of whether they are used as opening or closing marks.
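A minimal sketch of this kind of normalisation is given below; the exact set of quotation mark variants is an assumption for illustration, as the marks encountered depend on the source file and its original encoding.

```python
import re

# Illustrative mapping from typographic quotation mark variants to neutral
# ASCII marks; the actual set of odd marks varies per source file.
QUOTE_MAP = {
    "\u2018": "'", "\u2019": "'",   # single curly quotes
    "\u201c": '"', "\u201d": '"',   # double curly quotes
    "\u201a": "'", "\u201e": '"',   # low-9 quotes
}

def normalise_quotes(text: str) -> str:
    """Replace opening/closing quotation mark variants with neutral marks."""
    pattern = re.compile("|".join(map(re.escape, QUOTE_MAP)))
    return pattern.sub(lambda match: QUOTE_MAP[match.group()], text)

print(normalise_quotes("\u2018Go on\u2019, Robb told him"))  # 'Go on', Robb told him
```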
Annotation

Because of limitations in time and scope, we only annotated approximately one chapter of each novel. In this subsection, we describe the annotation process.

Annotation data

To evaluate the performance on each novel, a gold standard was created manually. Two annotators (not the authors of this article) were each asked to annotate 10 books from each category. For each document, approximately one chapter was annotated with entity co-occurrences. Because the length of the first chapter fluctuated between 84 and 1,442 sentences, we selected for each book a span of roughly 300 sentences that ended close to a chapter boundary. For example, for Alice in Wonderland, the third chapter ended on the 315th sentence, so the first three chapters were extracted for annotation. While not perfect, this approach attempts to strike a balance between comparable annotation lengths for each book and not cutting off mid-chapter.

Annotation instructions

For each document, the annotators were asked to annotate each sentence for the occurrence of characters; that is, for each sentence, to identify all the characters in it. To describe this process, an example containing a single sentence from A Game of Thrones is included in Table 1. The id of the sentence is later used to match the annotated sentence to its system-generated counterpart for performance evaluation. The focus sentence is the sentence that corresponds to this id, and is the sentence for which the annotator is supposed to identify all characters. As context, the annotators are provided with the preceding and subsequent sentences. In this example, the contextual sentences could be used to resolve the 'him' in the focus sentence to 'Bran'. To indicate how many persons are present, the annotators were asked to fill in the corresponding number (#) of people—with a maximum of 10 characters per sentence. Depending on the number of people identified, subsequent fields became available to the annotator to fill in the character names.

Table 1: Annotation example from A Game of Thrones (DOI: 10.7717/peerj-cs.189/table-1).

| Id  | Preceding context           | Focus sentence         | Subsequent context  | # | Person 1   | Person 2   |
|-----|-----------------------------|------------------------|---------------------|---|------------|------------|
| 541 | Bran reached out hesitantly | 'Go on', Robb told him | 'You can touch him' | 2 | Robb Stark | Bran Stark |

To speed up the annotation, an initial list of characters was created by applying the BookNLP pipeline to each novel. The annotators were instructed to map the characters in the text to the provided list to the best of their ability. If an annotator assessed that a person appears in a sentence but was unsure of this character's identity, the character was marked as default.
In addition, the annotators were encouraged to add characters if they were certain that a character did not appear in the pre-compiled list but occurred in the text nonetheless. Such characters were given a specific tag to ensure that we could retrieve them later for analysis. If an annotator was under the impression that two characters in the list referred to the same person, they were instructed to pick one and stick to that. Lastly, the annotators were provided with the peripheral annotation guidelines found in Table 2.

Table 2: Peripheral annotation guidelines (DOI: 10.7717/peerj-cs.189/table-2).

| Guideline                          | Example                                     |
|------------------------------------|---------------------------------------------|
| Ignore generic pronouns            | 'Everyone knows; you don't mess with me!'   |
| Ignore exclamations                | 'For Christ's sake!'                        |
| Ignore generic noun phrases        | 'Bilbo didn't know what to tell the wizard' |
| Include non-human named characters | 'His name is Buckbeak, he's a hippogriff'   |

While this identification process did include anaphora resolution of singular pronouns—such as resolving 'him' to 'Bran'—the annotators were instructed to ignore plural pronoun references. Plural pronoun resolution remains a difficult topic in the creation of social networks, as family members may sometimes be mentioned individually and sometimes as their family as a whole. Identifying group membership and modelling it in the social network structure is not covered by any of the tools we include in our analysis, nor by the related work referenced in the section 'Related Work', and is therefore left to future work.

Named Entity Recognition Experiments and Results

We evaluate the performance of four different NER systems on the annotated novels: BookNLP (Bamman, Underwood & Smith, 2014), Stanford NER (Finkel, Grenager & Manning, 2005), Illinois Tagger (Ratinov & Roth, 2009), and IXA-Pipe-NERC (Agerri & Rigau, 2016). The BookNLP pipeline internally uses the 2014-01-04 release of the Stanford NER tagger (Finkel, Grenager & Manning, 2005) with the seven-class OntoNotes model. As there have been several releases since, and we focus on entities of type Person, we also evaluate the 2017-06-09 Stanford NER four-class CoNLL model. The results of the different NER systems are presented in Table 3 for the classic novels and Table 4 for the modern novels. All results are computed using the evaluation script from the CoNLL 2002 and 2003 NER campaigns in the phrase-based evaluation setup (https://www.clips.uantwerpen.be/conll2002/ner/bin/conlleval.txt, last retrieved: 30 October 2017). The systems are evaluated according to micro-averaged precision, recall, and F1 measure. Precision is the percentage of named entities found by the system that are correct. Recall is the percentage of named entities present in the text that are retrieved by the system. The F1 measure is the harmonic mean of the precision and recall scores. In a phrase-based evaluation setup, the system only scores a point if the complete entity is correctly identified; thus, if only two out of three tokens of a multi-token named entity are correctly identified, the system does not obtain any points for that entity. A sketch of this scoring logic is given after the tables below.

Table 3: Results (P / R / F1) of the four NER systems on the classic novels. ⊙ marks novels written in the first person. (DOI: 10.7717/peerj-cs.189/table-3)

| Title | BookNLP (P / R / F1) | Stanford NER (P / R / F1) | Illinois NER (P / R / F1) | IXA-NERC (P / R / F1) |
|---|---|---|---|---|
| 1984 | 92.31 / 70.59 / 80.00 | 89.29 / 73.53 / 80.65 | 93.55 / 85.29 / 89.23 | 93.55 / 85.29 / 89.23 |
| A Study in Scarlet⊙ | 25.00 / 30.77 / 27.59 | 22.22 / 30.77 / 25.81 | 14.29 / 15.38 / 14.81 | 20.00 / 23.08 / 21.43 |
| Alice in Wonderland | 89.13 / 55.78 / 68.62 | 83.33 / 57.82 / 68.27 | 87.07 / 87.07 / 87.07 | 84.30 / 69.39 / 76.12 |
| Brave New World | 82.93 / 60.71 / 70.00 | 7.50 / 5.36 / 6.25 | 7.69 / 5.36 / 6.32 | 2.63 / 1.79 / 2.13 |
| David Copperfield⊙ | 29.41 / 35.71 / 32.26 | 54.02 / 67.14 / 59.87 | 58.82 / 71.43 / 64.52 | 14.47 / 15.71 / 15.07 |
| Dracula⊙ | 5.00 / 20.00 / 8.00 | 4.00 / 20.00 / 6.67 | 12.50 / 60.00 / 20.69 | 10.53 / 40.00 / 16.67 |
| Emma | 86.96 / 93.02 / 89.89 | 25.90 / 27.91 / 26.87 | 26.81 / 28.68 / 27.72 | 30.22 / 32.56 / 31.34 |
| Frankenstein⊙ | 52.00 / 76.47 / 61.90 | 37.93 / 64.71 / 47.83 | 30.77 / 47.06 / 37.21 | 34.62 / 52.94 / 41.86 |
| Huckleberry Finn | 86.84 / 98.51 / 92.31 | 81.08 / 89.55 / 85.11 | 77.92 / 89.55 / 83.33 | 79.71 / 82.09 / 80.88 |
| Dr. Jekyll and Mr. Hyde | 86.36 / 82.61 / 84.44 | 18.18 / 17.39 / 17.78 | 21.74 / 21.74 / 21.74 | 13.64 / 13.04 / 13.33 |
| Moby Dick⊙ | 67.65 / 74.19 / 70.77 | 63.89 / 74.19 / 68.66 | 68.42 / 83.87 / 75.36 | 37.84 / 45.16 / 41.18 |
| Oliver Twist | 85.61 / 94.44 / 89.81 | 36.30 / 42.06 / 38.97 | 44.32 / 33.62 / 38.24 | 34.69 / 40.48 / 37.36 |
| Pride and Prejudice | 79.26 / 94.69 / 86.29 | 32.33 / 38.05 / 34.96 | 29.37 / 32.74 / 30.96 | 33.87 / 37.17 / 35.44 |
| The Call of the Wild | 80.65 / 30.49 / 44.25 | 86.36 / 46.34 / 60.32 | 89.47 / 82.93 / 86.08 | 88.14 / 63.41 / 73.76 |
| The Count of Monte Cristo | 78.22 / 89.77 / 83.60 | 67.95 / 60.23 / 63.86 | 79.80 / 89.77 / 84.49 | 72.31 / 53.41 / 61.44 |
| The Fellowship of the Ring | 73.39 / 72.15 / 72.77 | 66.12 / 68.35 / 67.22 | 56.52 / 38.40 / 45.73 | 63.33 / 56.12 / 59.51 |
| The Three Musketeers | 65.71 / 29.49 / 40.71 | 63.64 / 35.90 / 45.90 | 45.45 / 25.64 / 32.12 | 73.68 / 35.90 / 48.28 |
| The Way We Live Now | 73.33 / 92.77 / 81.91 | 49.52 / 62.65 / 55.32 | 28.18 / 37.35 / 32.12 | 43.30 / 50.60 / 46.67 |
| Ulysses | 76.74 / 94.29 / 84.62 | 70.10 / 97.14 / 81.44 | 71.28 / 95.71 / 81.71 | 72.29 / 85.71 / 78.43 |
| Vanity Fair | 67.30 / 65.44 / 66.36 | 32.46 / 34.10 / 33.26 | 32.61 / 34.56 / 33.56 | 53.12 / 47.00 / 49.88 |
| Mean μ | 70.16 / 68.95 / 67.72 | 52.03 / 53.00 / 51.13 | 51.37 / 55.98 / 52.26 | 49.26 / 48.29 / 47.61 |
| Standard deviation σ | 24.03 / 26.27 / 24.25 | 27.27 / 25.24 / 24.93 | 28.68 / 30.16 / 29.17 | 29.70 / 24.71 / 26.50 |

Table 4: Results (P / R / F1) of the four NER systems on the modern novels. ⊙ marks novels written in the first person. (DOI: 10.7717/peerj-cs.189/table-4)

| Title | BookNLP (P / R / F1) | Stanford NER (P / R / F1) | Illinois NER (P / R / F1) | IXA-NERC (P / R / F1) |
|---|---|---|---|---|
| A Game of Thrones | 97.98 / 62.99 / 76.68 | 92.73 / 66.23 / 77.27 | 93.51 / 93.51 / 93.51 | 92.08 / 60.39 / 72.94 |
| Assassin's Apprentice⊙ | 63.33 / 38.38 / 47.80 | 61.19 / 41.41 / 49.90 | 61.45 / 40.40 / 48.78 | 53.12 / 34.34 / 41.72 |
| Elantris | 82.00 / 89.78 / 85.71 | 76.97 / 92.70 / 84.11 | 83.12 / 97.08 / 89.56 | 76.52 / 64.23 / 69.84 |
| Gardens of the Moon | 35.29 / 34.29 / 34.78 | 39.02 / 45.71 / 42.11 | 40.43 / 54.29 / 46.34 | 44.44 / 45.71 / 45.07 |
| Harry Potter | 83.80 / 90.36 / 86.96 | 61.24 / 65.66 / 63.37 | 58.43 / 58.43 / 58.43 | 54.94 / 53.61 / 54.27 |
| Magician | 72.92 / 42.17 / 53.44 | 65.57 / 48.19 / 55.56 | 77.67 / 96.39 / 86.02 | 63.10 / 63.86 / 63.47 |
| Mistborn | 96.46 / 81.95 / 88.62 | 93.22 / 82.71 / 87.65 | 90.07 / 95.49 / 92.70 | 94.05 / 59.40 / 72.81 |
| Prince of Thorns | 69.23 / 62.07 / 65.45 | 64.29 / 62.07 / 63.16 | 60.00 / 51.72 / 55.56 | 72.73 / 55.17 / 62.75 |
| Storm Front⊙ | 65.00 / 65.00 / 65.00 | 68.42 / 65.00 / 66.67 | 64.71 / 55.00 / 59.46 | 63.16 / 60.00 / 61.54 |
| The Black Company⊙ | 77.27 / 96.23 / 85.71 | 29.41 / 9.43 / 14.29 | 67.39 / 58.49 / 62.63 | 60.87 / 26.42 / 36.84 |
| The Black Prism | 90.29 / 90.29 / 90.29 | 88.35 / 88.35 / 88.35 | 88.68 / 91.26 / 89.95 | 87.21 / 72.82 / 79.37 |
| The Blade Itself | 62.50 / 71.43 / 66.67 | 71.43 / 71.43 / 71.43 | 52.63 / 71.43 / 60.61 | 55.56 / 35.71 / 43.48 |
| The Colour of Magic | 83.33 / 37.50 / 51.72 | 84.00 / 52.50 / 64.62 | 71.43 / 25.00 / 37.04 | 77.78 / 35.00 / 48.28 |
| The Gunslinger | 64.71 / 100.00 / 78.57 | 64.71 / 100.00 / 78.57 | 61.76 / 95.45 / 75.00 | 59.38 / 86.36 / 70.37 |
| The Lies of Locke Lamora | 86.16 / 74.05 / 79.65 | 87.58 / 76.22 / 81.50 | 86.79 / 74.59 / 80.23 | 88.19 / 68.65 / 77.20 |
| The Name of the Wind | 85.88 / 74.49 / 79.78 | 87.36 / 77.55 / 82.16 | 78.82 / 68.37 / 73.22 | 85.92 / 62.24 / 72.19 |
| The Painted Man | 87.02 / 71.70 / 78.62 | 86.47 / 72.33 / 78.77 | 80.81 / 87.42 / 83.99 | 83.09 / 71.07 / 76.61 |
| The Way of Kings | 80.72 / 87.01 / 83.75 | 75.82 / 89.61 / 82.14 | 70.10 / 88.31 / 78.16 | 66.67 / 49.35 / 56.72 |
| The Wheel of Time | 66.67 / 45.86 / 54.34 | 70.93 / 77.71 / 74.16 | 58.05 / 87.26 / 69.72 | 66.67 / 57.32 / 61.64 |
| Way of Shadows | 53.85 / 77.78 / 63.64 | 48.72 / 70.37 / 57.58 | 45.45 / 92.59 / 60.98 | 42.86 / 44.44 / 43.64 |
| Mean μ | 75.22 / 69.67 / 70.86 | 70.87 / 67.76 / 68.17 | 69.57 / 74.12 / 70.09 | 69.42 / 55.30 / 60.54 |
| Standard deviation σ | 15.34 / 20.73 / 15.86 | 17.53 / 20.95 / 18.08 | 15.12 / 21.57 / 16.67 | 15.63 / 15.02 / 13.50 |

The BookNLP and IXA-Pipe-NERC systems require that part-of-speech tagging is performed prior to NER; for this, we use the modules included in the respective systems. For Stanford NER and the Illinois NE Tagger, plain text is offered to the NER systems.
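To make the phrase-based setup concrete, the following is a minimal sketch of its scoring logic; the actual evaluation uses the conlleval script linked above, and the span representation here is an assumption for illustration.

```python
def phrase_prf(gold_entities, predicted_entities):
    """Phrase-based micro-averaged precision, recall, and F1.

    Entities are represented as (start_token, end_token, type) tuples; a
    prediction only counts as correct if its complete span and type match
    a gold entity exactly.
    """
    gold, pred = set(gold_entities), set(predicted_entities)
    correct = len(gold & pred)
    precision = correct / len(pred) if pred else 0.0
    recall = correct / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# A multi-token name that is tagged only partially scores no points at all:
gold = [(0, 1, "PER")]           # tokens 0-1: a two-token name
pred = [(1, 1, "PER")]           # only the second token was tagged
print(phrase_prf(gold, pred))    # (0.0, 0.0, 0.0)
```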
As the standard deviations on the bottom rows of Tables 3 and 4 indicate, the results on the different books vary greatly. However, the different NER systems generally do perform similarly on the same novels, indicating that difficulty in recognising named entities in particular books is a characteristic of the novels rather than of the systems. An exception is Brave New World, on which BookNLP performs quite well but the others underperform. Upon inspection, we find that the annotated chapter of this book contains only five different characters, among which 'The Director', which occurs 19 times. This entity is consistently missed by the systems, resulting in a high penalty. Furthermore, the 'Mr.' in 'Mr. Foster' (occurring 31 times) is often not recognised, as some NE models exclude titles. A token-based evaluation of the Illinois NE Tagger on this novel, for example, yields an F1-score of 51.91. The same issue is at hand with Dr. Jekyll and Mr. Hyde and Dracula. Although the main NER module in BookNLP is driven by Stanford NER, we suspect that additional domain adaptations in this package account for this performance difference.

When comparing the F1-scores of the 1st person novels to the 3rd person novels in Tables 3 and 4, we find that the systems perform significantly worse on the 1st person novels than on their 3rd person counterparts, at p < 0.01. These findings are in line with the findings of Elson, Dames & McKeown (2010). In the section 'Discussion and Performance Boosting Options', we delve further into particular difficulties that fiction presents to NER and showcase solutions that do not require retraining the entity models.

As the BookNLP pipeline outperforms the other systems in the majority of cases, and includes coreference resolution and character clustering, we use this system to create our networks. The results of the BookNLP pipeline including coreference and clustering are presented in Table A4. One of the main differences in that table is that popular entities that are not recognised by the system are penalised more heavily, because their coreferent mentions are then also not recognised and linked to the correct entities. This results in scores that are generally somewhat lower, but the task that is measured is also more complex.
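The 1st vs 3rd person comparison above can be reproduced with a standard two-sample test. Below is a minimal sketch, assuming per-novel BookNLP F1-scores grouped by narration perspective; the paper does not specify which significance test was used, and Welch's t-test is one reasonable choice.

```python
from scipy.stats import ttest_ind

# Per-novel BookNLP F1-scores from Tables 3 and 4, grouped by the narration
# perspective marked there; the third-person list is a subset for brevity.
f1_first_person = [27.59, 32.26, 8.00, 61.90, 70.77, 47.80, 65.00, 85.71]
f1_third_person = [80.00, 68.62, 70.00, 89.89, 92.31, 84.44, 89.81, 86.29,
                   76.68, 85.71, 86.96, 88.62, 90.29, 78.57, 79.65, 79.78]

# Welch's t-test does not assume equal variances between the two groups.
t_stat, p_value = ttest_ind(f1_first_person, f1_third_person, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```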

Discussion and Performance Boosting Options

In analysing the output of the different NER systems, we found that some types of characters were particularly difficult to recognise. Firstly, we found a number of unidentified names that are so-called word names, i.e. names that also occur in dictionaries as regular words, such as Grace or Rebel. We suspected that this might hinder the NER, which is why we collected all such names in our corpus in Table A1, highlighting word names with a †. This table shows that approximately 50% of all unidentified names in our entire corpus consist at least partially of a word name, which suggests that this issue is potentially widespread. In order to verify this, we replaced all potentially problematic names in the source material with generic English names. We made sure not to use names that were already assigned to other characters in the novel, and we ensured that these names were not also regular nouns. An example of these changed character names can be found in Table 6, which shows all names affected for The Black Company.

Table 6: Word names in The Black Company and their generic replacements (DOI: 10.7717/peerj-cs.189/table-6).

| Original | Adjusted |
|----------|----------|
| Blue     | Richard  |
| Croaker  | Thomas   |
| Curly    | Daniel   |
| Dancing  | Edward   |
| Mercy    | Charles  |
| One-Eye  | Timothy  |
| Silent   | James    |
| Walleye  | William  |

Secondly, we noticed that persons with special characters in their names can prove difficult to retrieve. For example, names such as d'Artagnan in The Three Musketeers or Shai'Tan in The Wheel of Time were hard for the systems to recognise. To test this, we replaced all such names in our corpus, turning d'Artagnan and Shai'Tan into Dartagnan and Shaitan.

By applying these transformations to our corpus, we found that the performance could be improved, uncovering some of the issues that plague NER. As can be observed in Fig. 2, not all of the novels were affected by these transformations: out of the 40 novels used in this study, we were able to improve the performance for 14. While the issue of the apostrophed affix was not as recurrent in our corpus as the word names, its impact on performance is troublesome nonetheless. Two novels are clearly more affected by these transformations than the others, namely The Black Company and The Three Musketeers. To further sketch these issues, we delve a bit deeper into these two specific novels below.

Figure 2: Effect of the transformations on the F1-scores of all affected classic and modern novels, using the BookNLP pipeline (includes coreference resolution).

These name transformations show that word names and names with special characters were indeed problematic, and they pose a problem for future studies to tackle. As illustrated by Fig. 2, the aforementioned issues are also present in the classic novels typically used in related work (such as The Three Musketeers). This raises the question of how widespread these problems are. To the best of our knowledge, similar works have not reported this issue as affecting their performance, but we have shown that the performance can be drastically improved with a relatively simple workaround. It would thus be interesting to evaluate how much these studies suffer from the same issue. Lastly, as manually replacing names is clearly far from ideal, we would like to encourage future work to find a more robust approach to resolve this issue.
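A minimal sketch of these two transformations is given below, using the Table 6 mapping for the word names; the regular expressions are illustrative assumptions rather than the exact rules we applied.

```python
import re

# Word-name mapping for The Black Company, taken from Table 6.
WORD_NAME_MAP = {
    "Blue": "Richard", "Croaker": "Thomas", "Curly": "Daniel",
    "Dancing": "Edward", "Mercy": "Charles", "One-Eye": "Timothy",
    "Silent": "James", "Walleye": "William",
}

def replace_word_names(text: str, mapping: dict) -> str:
    """Replace whole-word occurrences of word names with generic names,
    so lower-case common nouns such as 'mercy' are left untouched."""
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, mapping)) + r")\b")
    return pattern.sub(lambda m: mapping[m.group()], text)

def strip_name_apostrophes(text: str) -> str:
    """Rewrite apostrophed names: d'Artagnan -> Dartagnan, Shai'Tan -> Shaitan."""
    def join(m):
        return (m.group(1) + m.group(2).lower()).capitalize()
    # Only fires when the apostrophe is followed by a capitalised part, so
    # possessives such as "Croaker's" are not rewritten.
    return re.sub(r"\b([A-Za-z]+)'([A-Z][a-z]+)\b", join, text)

print(strip_name_apostrophes("d'Artagnan rode out to face Shai'Tan."))
# Dartagnan rode out to face Shaitan.
```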
The Black Company

This fantasy novel describes the dealings of an elite mercenary unit—The Black Company—and its members, all of whom go by code names such as the ones in Table 6. With a preliminary F1-score of 6.85 (see Table A4), The Black Company did not do very well; we found that this book had the highest percentage of unidentified characters in our collection. Out of the 14 characters found by our annotators, only five were identified by the pipeline. Interestingly enough, eight out of the nine unidentified characters in this novel have names that correspond to regular nouns. By applying our name transformation alone, the F1-score rose from 6.85 to 90.00, the highest in our collection.

The Three Musketeers

This classic piece recounts the adventures of a young man named d'Artagnan after he leaves home to join the Musketeers of the Guard. With an F1-score of 13.91 (see Table A4), The Three Musketeers performs second worst in our corpus, and worst in its class. By simply replacing names such as d'Artagnan with Dartagnan, the F1-score rose from 13.91 to 53, suggesting that the apostrophed name was indeed the main issue. To visualise this, we have included figures of both The Three Musketeers networks—before and after the fix—in Figs. 3 and 4. As can be observed in Fig. 3, the main character of the novel is hardly represented in this network, which is not indicative of the actual story. The importance of resolving the issue of apostrophed names is made clear in Fig. 4, where the main character is properly represented.

Figure 3: Social network of The Three Musketeers without adjustment for apostrophed names.

Figure 4: Social network of The Three Musketeers with adjustment for apostrophed names.
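Networks such as those in Figs. 3 and 4 follow the co-occurrence approach described in 'Materials and Data Preparation'. Below is a minimal sketch, assuming one list of character names per sentence (e.g. derived from the BookNLP output) and using networkx; the input format is an assumption for illustration.

```python
from itertools import combinations
import networkx as nx

def cooccurrence_network(sentences):
    """Build a weighted co-occurrence network from per-sentence character
    lists; edge weights count how often two characters appear in the same
    sentence, following the approach of Ardanuy & Sporleder (2014)."""
    graph = nx.Graph()
    for characters in sentences:
        for a, b in combinations(sorted(set(characters)), 2):
            weight = graph.get_edge_data(a, b, default={}).get("weight", 0)
            graph.add_edge(a, b, weight=weight + 1)
    return graph

sentences = [
    ["Athos", "d'Artagnan"],
    ["Athos", "Porthos", "d'Artagnan"],
    ["Porthos"],
]
g = cooccurrence_network(sentences)
print(g["Athos"]["d'Artagnan"]["weight"])       # 2
print(nx.density(g), nx.average_clustering(g))  # example network measures
```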

Conclusion and Future Work

In this study, we set out to close a gap in the literature concerning the evaluation of NER for the creation of social networks from fiction literature. In our exploration of related work, we found no other studies that attempt to compare networks from classic and modern fiction. To fill this gap, we attempted to answer the following two research questions:

(1) To what extent are off-the-shelf NER tools suitable for identifying fictional characters in novels?

(2) Which differences or similarities can be discovered between social networks extracted for different novels?

To answer our primary research question, we evaluated four state-of-the-art NER systems on 20 classic and 20 modern science fiction/fantasy novels. We found no significant difference in the performance of the named entity recognisers between classic and modern novels. We did find that the systems perform significantly better on novels written in the 3rd person than on those written in the 1st person, which is in line with findings in related studies. In addition, we observed a large amount of variance within each class, even though we limited the modern novels to the fantasy/science fiction genre.

We also identified some recurring problems that hindered NER. We delved deeper into two particularly problematic novels, and found two main issues that span both classes. Firstly, we found that word names such as Mercy are more difficult for the systems to identify. We showed that replacing problematic word names with generic placeholders can increase performance on affected novels. Secondly, we found that apostrophed names such as d'Artagnan also prove difficult to identify automatically. With fairly simple methods that capture some cultural background knowledge, we circumvented these two issues to drastically increase the performance of the pipeline used. To the best of our knowledge, none of the related studies discussed in the section 'Related Work' acknowledge the presence of these issues. We would thus like to encourage future work to evaluate the impact of these two issues on existing studies, and we call for the development of a more robust approach to tackle them in future studies.

To answer our secondary research question, we created social networks for each of the novels in our collection and calculated several network features with which we compared the two classes. As with the NER experiments, no major differences were found between the classic and modern novels. Again, we found that the distribution of network measures within a class was subject to high variance, which holds for both our classic and our modern novels. We therefore recommend that future work first determine the particular characteristics that can influence these analyses, and then perform comparative analyses between subsets to see whether this similarity between classes holds when the variance is reduced. Future studies could, for example, compare classic and modern novels in the same genre or narration type (e.g. first-person vs third-person perspective). Lastly, different types of networks, for example ones that collapse characters occurring under different names (cf. Dany and Daenerys), as well as the handling of plural pronouns and group membership (e.g. characters sometimes mentioned individually and sometimes as part of a group), involve problems that are currently unsolved in language technology and knowledge representation. These issues point to a strong need for more culturally aware artificial intelligence.