Our neotoma package is part of the rOpenSci network of packages. Wrangling data structures and learning some of the tricks we’ve implemented wouldn’t have been possible without help from them throughout the coding process. Recently Scott Chamberlain posted some code for an R package to interface with ORCiD, the rorcid package.

To digress for a second, the neotoma package started out as rNeotoma, but I ditched the ‘r’ because, well, just because. I’ve been second-guessing myself ever since, especially as it became more and more apparent that, in writing proposals and talking about the package and the database, I’ve basically created a muddle. Who knows, maybe we’ll go back to rNeotoma when we push up to CRAN. Point being, stick an R in it so that you don’t have to keep clarifying the differences.

So, back on point. A little while ago I posted a network diagram culled from my CV using a BibTeX parser in R (the bibtex package by Romain François). That was kind of fun (obviously worth blogging about), and I stuck a newer version into a job application, but I’ve really been curious what the network would look like at the second order: what happens when we combine my publication network with the networks of my collaborators?

Enter ORCiD. For those of you not familiar, ORCiD provides a unique identity code to an individual researcher. The researcher can then identify all the research products they may have published and link these to their ID. It’s effectively a DOI for the individual. Sign up and you are part of the Internet of Things. In a lot of ways this is very exciting. The extent to which ORCiDs can be linked to other objects will be the real test of their staying power. And even there, it’s not so much whether the IDs can be linked (they’re unique identifiers, so they’re easy to use), it’s whether other projects, institutions and data repositories will create a space for ORCiDs so that they can be linked across a web of research products.

Given the number of times I’ve been asked to add an ORCiD to an online profile or account it seems like people are prepared to invest in ORCiD for the long haul, which is exciting, and provides new opportunities for data analysis and for building research networks.

So, let’s see what we can do with ORCiD and Scott’s rorcid package. This code is all available in a GitHub repository so you can modify it, fork, push or pull as you like.

The idea is to start with a single ORCiD, mine in this case (0000-0002-2700-4605). With the ORCiD we then discover all of the research products associated with the ID. Each research product with a DOI can be linked back to each of the ORCiDs registered for co-authors using the ORCiD API. It is possible to find all co-authors by parsing some of the bibtex files associated with the structured data, but for this exercise I’m just going to stick with co-authors who have ORCiDs.

So, for each published article we get the DOI, find all co-authors on each work who have an ORCiD, and then track down each of their publications and co-authors. If you’re interested you can go further down the wormhole by coding this as a recursive function. I thought about it, but since this was basically a lark I figured I’d think about it later, or leave it up to someone to add to the existing repository (feel free to fork & modify).
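If someone does want to take the recursive route, it might look something like the sketch below. This is a hypothetical shape for the idea, not code from the repository: `get_network` and its `max.depth` argument are made up here, and it leans on the `get_papers` helper defined later in this post, so it’s untested against the live API.

```r
# Hypothetical recursive crawl (not in the repository); get_papers() is the
# helper defined later in the post, and max.depth caps the recursion so we
# don't wander off across the whole co-authorship graph.
get_network <- function(orcid, max.depth = 2, seen = character(0)){
  # Stop when we've gone deep enough or already visited this author:
  if(max.depth == 0 | orcid %in% seen) return(NULL)

  papers <- get_papers(orcid_id(orcid = orcid, profile = "works"))
  seen <- c(seen, orcid)

  # Recurse into each co-author's record, one level shallower:
  deeper <- lapply(unique(na.omit(papers$orcid)),
                   function(x) get_network(x, max.depth - 1, seen))

  do.call(rbind.data.frame, c(list(papers), deeper))
}
```

The `seen` vector is the important part: without it, two authors who share a paper would bounce the recursion back and forth forever.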

In the end I coded this all up and plotted it using the igraph package (I used the network package for my last graph, but wanted to try out igraph because it’s got some fun interactive tools):

library(devtools)
install_github('ropensci/rorcid')

You need devtools to be able to install the rorcid package from the rOpenSci GitHub repository.

library(rorcid)
library(igraph)

# The idea is to go into a user and get all their papers,
# and all the papers of people they've published with:
simon.record <- orcid_id(orcid = '0000-0002-2700-4605', profile = "works")

This gives us an ‘orcid’ object, returned using the ORCiD Public API. Once we have the object we can go in and pull out all the DOIs for each of my research products that are registered with ORCiD.

get_doi <- function(x){
  # This pulls the DOIs out of the ORCiD record:
  list.x <- x$'work-external-identifiers.work-external-identifier'

  # We have to catch a few objects with NULL DOI information:
  do.call(rbind.data.frame,
          lapply(list.x, function(x){
            if(length(x) == 0 | (!'DOI' %in% x[,1])){
              data.frame(value = NA)
            } else {
              data.frame(value = x[which(x[,1] %in% 'DOI'), 2])
            }
          }))
}

get_papers <- function(x){
  all.papers <- x[[1]]$works # this is where the papers are.
  papers <- data.frame(title = all.papers$'work-title.title.value',
                       doi   = get_doi(all.papers))

  paper.doi <- lapply(1:nrow(papers), function(x){
    if(!is.na(papers[x,2])) return(orcid_doi(dois = papers[x,2], fuzzy = FALSE))
    # sometimes there's no DOI; if that's the case then just return NA:
    return(NA)
  })

  your.papers <- lapply(1:length(paper.doi), function(x){
    if(is.na(paper.doi[[x]])){
      data.frame(doi = NA, orcid = NA, name = NA)
    } else {
      data.frame(doi   = papers[x,2],
                 orcid = paper.doi[[x]][[1]]$data$'orcid-identifier.path',
                 name  = paste(paper.doi[[x]][[1]]$data$'personal-details.given-names.value',
                               paper.doi[[x]][[1]]$data$'personal-details.family-name.value',
                               sep = ' '),
                 stringsAsFactors = FALSE)
    }
  })

  do.call(rbind.data.frame, your.papers)
}

So now we’ve got the functions, we’re going to get all my papers, make a list of the unique ORCiDs of my colleagues, and then get all of their papers using the same ‘get_papers’ function. It’s a bit sloppy I think, but I wanted to try to avoid duplicate calls to the API since my internet connection was kind of crummy.

simons <- get_papers(simon.record)

unique.orcids <- unique(simons$orcid)

all.colleagues <- list()

for(i in 1:length(unique.orcids)){
  all.colleagues[[i]] <- get_papers(orcid_id(orcid = unique.orcids[i], profile = "works"))
}

So now we’ve got a list with a data.frame for each author that has three columns: the DOI, the ORCiD and their name. We want to reduce this to a single data.frame and then fill a square matrix (each row and column represents an author) where each row × column intersection represents co-authorship.

all.df <- do.call(rbind.data.frame, all.colleagues)
all.df <- na.omit(all.df[!duplicated(all.df),])

all.pairs <- matrix(ncol = length(unique(all.df$name)),
                    nrow = length(unique(all.df$name)),
                    dimnames = list(unique(all.df$name), unique(all.df$name)),
                    0)

unique.dois <- unique(as.character(all.df$doi))

for(i in 1:length(unique.dois)){
  doi <- unique.dois[i]
  authors <- all.df$name[all.df$doi %in% doi]
  all.pairs[authors, authors] <- all.pairs[authors, authors] + 1
}

all.pairs <- all.pairs[rowSums(all.pairs) > 0, colSums(all.pairs) > 0]

diag(all.pairs) <- 0

Again, probably some lazy coding in the ‘for’ loop, but the point is that each row and column has a dimname representing each author, so row 1 is ‘Simon Goring’ and column 1 is also ‘Simon Goring’. All we’re doing is incrementing the value for the cell that intersects co-authors, where names are pulled from all individuals associated with each unique DOI. We end by plotting the whole thing out:
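To make the matrix-filling step concrete, here’s a toy version with made-up authors and DOIs (base R only; ‘Alice’, ‘Bob’, ‘Carol’ and the DOI strings are invented for illustration):

```r
# Fake co-authorship table standing in for all.df:
# 10.1/a is co-authored by Alice and Bob; 10.1/b by Alice and Carol.
all.df <- data.frame(doi  = c('10.1/a', '10.1/a', '10.1/b', '10.1/b'),
                     name = c('Alice', 'Bob', 'Alice', 'Carol'),
                     stringsAsFactors = FALSE)

authors <- unique(all.df$name)
all.pairs <- matrix(0, nrow = length(authors), ncol = length(authors),
                    dimnames = list(authors, authors))

# Same logic as the loop above: for each DOI, bump every
# author-by-author intersection for the people on that paper.
for(doi in unique(all.df$doi)){
  who <- all.df$name[all.df$doi == doi]
  all.pairs[who, who] <- all.pairs[who, who] + 1
}
diag(all.pairs) <- 0

all.pairs['Alice', 'Bob']   # 1: one shared paper
all.pairs['Bob', 'Carol']   # 0: never co-authored
```

Alice ends up connected to both Bob and Carol with weight 1, while Bob and Carol stay unconnected, which is exactly the structure the plot below draws.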

author.adj <- graph.adjacency(all.pairs, mode = 'undirected', weighted = TRUE)

# Plot so that the width of the lines connecting the nodes reflects the
# number of papers co-authored by both individuals.
# This is Figure 1 of this blog post.
plot(author.adj, vertex.label.cex = 0.8, edge.width = E(author.adj)$weight)
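One of those fun interactive tools I mentioned is igraph’s tkplot(), which opens a window where you can drag the vertices around by hand. A minimal sketch on a toy matrix standing in for all.pairs (the tkplot() call itself is commented out since it needs an interactive session):

```r
library(igraph)

# Toy symmetric co-authorship matrix standing in for all.pairs:
# A and B share two papers, A and C share one, B and C share none.
toy <- matrix(c(0, 2, 1,
                2, 0, 0,
                1, 0, 0), nrow = 3, byrow = TRUE,
              dimnames = list(c('A', 'B', 'C'), c('A', 'B', 'C')))

g <- graph.adjacency(toy, mode = 'undirected', weighted = TRUE)

# tkplot() opens an interactive window where vertices can be dragged:
# tkplot(g, edge.width = E(g)$weight)
```

The layout you fiddle together by hand can be pulled back out with tkplot.getcoords() and passed to plot() as a fixed layout, which is handy once a network gets as tangled as a second-order co-authorship graph.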