To account for variation in population density across species, we also use individual size as a second proxy for Nc. Body size has been validated as a proxy for local population density across a wide variety of systems and taxa (e.g., [42–44]) and is readily available from common databases such as the Encyclopedia of Life and Animal Diversity Web (S9 Table). While it would be ideal to obtain quantitative estimates of population densities (e.g., by using extensive mark-recapture methods [58]), reliable direct estimates of population density are not available for the majority of species we studied.

We recognize that GBIF occurrence data reflects current range and does not account for historical range; however, as accurate long-term historical ranges are not known for most species, we are limited to the data that is available. We also note that for the five domesticated species, plus humans, occurrence data is of little use for estimating range; in these cases, we have attempted to approximate either the historical range of the species (humans, turkey, and clementine) or the current range of the heritage breeds from which the polymorphism samples were obtained (chicken, cow, and millet).

While ideally we would obtain estimates of actual census population sizes, even moderately accurate estimates are rarely available. As an alternative, we used species range and individual size as proxies for census population size. To determine range, we used occurrence data available from GBIF (http://www.gbif.org/) or from the published literature (when no occurrence data was available in GBIF) to estimate species distributions as follows. First, for each species, we obtained and then filtered all occurrence data stored at GBIF. In general, we filtered to require a known source (basis of observation) and to exclude fossil records; we also removed clearly erroneous points, such as those well outside the known species range (often arising, e.g., from transposition of longitude and latitude during data entry in museum records) or those falling in oceans for terrestrial organisms and vice versa. Specific filtering steps for each species are documented in the associated R code (available at GitHub). After filtering, we fit an alpha-hull [57] to estimate the species range, which we then trimmed to remove area overlapping ocean for terrestrial species and overlapping land for oceanic species, and converted to area by projecting from GPS (WGS84) coordinates to a cylindrical equal area projection using the spTransform function in the R package rgdal (http://cran.r-project.org/web/packages/rgdal/). R scripts to replicate our analysis are provided at the GitHub page associated with this manuscript.
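To illustrate the projection-and-area step, the following Python sketch (the published pipeline used rgdal's spTransform in R) projects polygon vertices to a cylindrical equal-area projection (x = R·λ, y = R·sin φ) and applies the shoelace formula. The function name and the spherical-Earth simplification are ours for illustration, not the authors' implementation.

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius; a simplifying assumption

def equal_area_km2(lonlat_vertices):
    """Area (km^2) of a polygon given as (lon, lat) pairs in degrees,
    computed by projecting vertices to a cylindrical equal-area projection
    (x = R*lambda, y = R*sin(phi)) and applying the shoelace formula."""
    pts = [(EARTH_RADIUS_KM * math.radians(lon),
            EARTH_RADIUS_KM * math.sin(math.radians(lat)))
           for lon, lat in lonlat_vertices]
    area = 0.0
    # shoelace formula over consecutive vertex pairs, wrapping around
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0
```

As with the R approach, only the vertices are projected; edges are treated as straight lines in projected space, which is adequate for densely sampled hull boundaries.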

We required that each species have available a pedigree-based genetic map, generated from markers that could be mapped to the reference genome by either ePCR or Basic Local Alignment Search Tool (BLAST), and with an average intermarker spacing (after filtering unmapped and mismapped markers; see below) of no more than 10 cM. For species with recombination in both sexes, we used sex-averaged genetic distances where possible, although in two cases (Anopheles gambiae and Bos taurus), maps were only available for a single sex, and so for those species we used a single-sex map by necessity. For species with recombination in only one sex (e.g., Drosophila), we corrected genetic distances to represent a sex-averaged value by dividing estimated recombination rates in the sex with recombination by two. In nine cases where genetic maps from the same species as the polymorphism data were unavailable or of insufficient quality (Zea mays, Prunus persica, Papio anubis, Oryza sativa, Ovis aries, Mus musculus, Equus caballus, Canis lupus familiaris, and Citrus clementina), we used genetic maps from a closely related taxon, typically the reference genome species (see S1 Table for full details).

We required that each species be represented by random-shearing Illumina short read sequence data for at least two chromosomes derived from unrelated individuals within the same population. For four species (Bos taurus, Lepisosteus oculatus, Prunus persica, and Papio anubis), we used a single outbred diploid individual. If samples were intentionally inbred or if the species is known to engage in frequent self-fertilization in natural populations, we required data from at least two separate individuals. The number of individuals included and the number of unrelated chromosomes (ploidy in shorthand) of the sequenced individuals are reported in S1 Table. For six species, we used polymorphism data from a taxon very closely related to the species sequenced to produce the reference genome (S1 Table). In particular, we attempted to avoid using polymorphism data from domesticated species where possible; in many cases, we were able to use polymorphism data from wild ancestors or close relatives of domesticated plants and animals. Nonetheless, for five species (Gallus gallus, Bos taurus, Meleagris gallopavo, Setaria italica, and Citrus reticulata), we could not identify suitable data from wild populations, and we instead elected to use polymorphism data from heritage breeds and strains (S1 Table).

Next, we manually checked the quality of the genome assembly of each species remaining on our list by inspecting assembly reports available from NCBI, Ensembl, Phytozome, or species-specific databases. Any species without a chromosome-scale assembly was removed, as was any species without an available annotation of coding sequence. In two cases (Heliconius melpomene and Gasterosteus aculeatus), chromosome-scale assemblies were available, but annotations were only available for the scaffold-level (or a previous, lower-quality chromosome-level) assembly. In these cases, we updated the coordinates of the coding sequence annotations using custom Perl scripts (available from the GitHub page associated with this manuscript; see the data accessibility section for details on how to obtain source code and data).

To identify suitable species for our analysis, we started from the lists of genome projects available at the Genomes OnLine Database (GOLD; http://www.genomesonline.org/) and the National Center for Biotechnology Information (NCBI; ftp://ftp.ncbi.nlm.nih.gov/genomes/GENOME_REPORTS/), both accessed 6 October 2013. We removed all noneukaryotes from both sets. We then further filtered the GOLD set to remove all projects whose status was not either “draft” or “complete” or whose project type was not “Whole Genome Sequencing,” and filtered the NCBI set to keep only projects with MB > 0 and status equal to “scaffold,” “contigs,” or “chromosomes.” Finally, we merged both lists, removed duplicate species, and removed all species without an obligately sexual lifestyle. We required that species have an obligately sexual portion of their life cycle to ensure that some amount of recombination can be expected in natural populations.

From these filtered files, we computed genetic diversity as π, the average number of pairwise differences [65], at 4-fold degenerate sites in nonoverlapping windows of 100 kb, 500 kb, or 1,000 kb. In all cases, we excluded from our analysis windows with fewer than 500 sequenced 4-fold degenerate sites. We also excluded all windows on sex chromosomes to avoid the complicating effects of hemizygosity on patterns of polymorphism.
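The per-window computation can be sketched as follows in Python (the authors worked in R; the function name and the (n_ref, n_alt) input representation are ours). Each site contributes its expected pairwise difference, 2·n_ref·n_alt / (n·(n−1)), and windows with too few sequenced 4-fold sites are discarded, mirroring the 500-site filter described above.

```python
def pi_per_site(allele_counts, min_sites=500):
    """Average pairwise differences (pi) per 4-fold degenerate site in one
    window. `allele_counts` is a list of (n_ref, n_alt) chromosome counts,
    one tuple per sequenced 4-fold site; monomorphic sites contribute zero.
    Returns None for windows with fewer than `min_sites` sites."""
    if len(allele_counts) < min_sites:
        return None
    total = 0.0
    for n_ref, n_alt in allele_counts:
        n = n_ref + n_alt
        if n < 2:
            continue
        # expected number of differences between two randomly drawn chromosomes
        total += 2.0 * n_ref * n_alt / (n * (n - 1))
    return total / len(allele_counts)
```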

For the human data, rather than recomputing variant calls, we obtained Variant Call Format (VCF) files for the Yoruban population from [38]. We elected to do this because these data are exceedingly well curated and because the size of the raw human variation data presents a practical computational challenge. The VCF files were treated as described below in all cases.

We sought to exclude low-confidence sites by filtering our genotype data using several basic criteria. First, we required that every 4-fold degenerate site have a minimum phred-scaled probability of 20 that there is a segregating site within the sample. To ensure robustness of our results, we also applied a more stringent Q30 genotype quality filter and performed otherwise identical analyses using these data. Second, for every 4-fold degenerate site, we computed the mean depth for each sample. We then required that each sample have at least half as many reads as its mean depth at a site for that position to be included in the analysis. For variable sites, we further required that the phred-scaled strand bias be below 40. This quantity is based on an exact test of how often alternate alleles are called by reads aligned to the + versus the − strand of the reference genome; a large bias might be expected if, for example, a transposable element insertion near the site (relative to the reference genome) influenced read alignments on one strand, making the genotypes at that site unreliable. We further required that the absolute values of the Z-scores associated with the read position rank sum, the mapping quality rank sum, and the base quality rank sum be below four. These statistics quantify how biased the alignments of reference alleles are relative to those of nonreference alleles. For example, the first filter (read position rank sum) quantifies whether nonreference alleles are generally found closer to one end of a short read; this filter may also reflect errors due to systematic differences in alignments of reads bearing nonreference alleles (e.g., due to indels on one of the chromosomes present in an individual). See the GATK [62] documentation for in-depth descriptions of the relevant filters. We applied these criteria to both DNA- and RNA-based libraries.
Summaries of sites aligned and filtered for each genome are available in S11 Table, and a schematic of our pipeline is presented in S1 Fig.
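The site-level filters can be summarized in a Python sketch (the actual pipeline used GATK annotations; the dictionary keys and function name here are hypothetical stand-ins for the corresponding VCF fields, and thresholds follow the text):

```python
def passes_site_filters(site, mean_depths, min_qual=20):
    """Decide whether a 4-fold degenerate site passes the quality filters
    described above. `site` is a dict with (hypothetical) keys:
      'variant'   - True if the site is polymorphic in the sample
      'qual'      - phred-scaled probability of a segregating site
      'depths'    - per-sample read depth at this site
      'fs'        - phred-scaled strand bias (variant sites only)
      'rank_sums' - (read position, mapping quality, base quality) Z-scores
    `mean_depths` gives each sample's mean depth at 4-fold sites."""
    # each sample needs at least half its mean depth at this position
    for depth, mean in zip(site['depths'], mean_depths):
        if depth < 0.5 * mean:
            return False
    if site['variant']:
        if site['qual'] < min_qual:   # segregating-site confidence
            return False
        if site['fs'] >= 40:          # phred-scaled strand bias must be < 40
            return False
        # extreme rank-sum Z-scores indicate allele-specific alignment bias
        if any(abs(z) > 4 for z in site['rank_sums']):
            return False
    return True
```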

We genotyped all samples using GATK v2.4-3 [62]. If samples were intentionally inbred, or if the species is known to primarily reproduce through self-fertilization in natural populations, we used the “-ploidy” option to set the expected number of chromosomes per individual to 1 (see S1 Table for the ploidy settings used for each species). We then extracted polymorphism data from 4-fold degenerate synonymous sites. While there is mounting evidence that these sites do not evolve under strictly neutral processes (e.g., [63,64]), 4-fold degenerate sites are a widely accepted approximation for neutral markers in the genome, and importantly, they are available in both RNA and DNA sequencing efforts.

We acquired short read data from the NCBI short read trace archive. All accession numbers for short read data used in this analysis are listed in S10 Table. We aligned these data to their respective reference genomes (reference genome versions and relevant citations are listed in S1 Table). For libraries prepared from genomic DNA, we used bwa v0.7.4 [59] with default options. For libraries prepared from RNA, we aligned reads initially using tophat2 v2.0.7 [60] with default options, except that we specified “--no-novel-juncs” and “--no-coverage-search” and supplied tophat2 a general feature format (GFF) file (version indicated in S1 Table) to speed up alignment. For both DNA and RNA, we then realigned reads that failed to align confidently using Stampy v1.0.21 [61] with default options. After this, putative polymerase chain reaction (PCR) duplicates were removed from both RNA- and DNA-based libraries using the “MarkDuplicates” function in Picard v1.98 (http://broadinstitute.github.io/picard/). For DNA libraries, we next used the “IndelRealigner” tool in the Genome Analysis Toolkit (GATK) v2.4-3 [62] to realign reads surrounding likely indel polymorphisms. These GATK and Picard functions were run with default command line options.

First, we estimate coding sequence density in each window as the fraction of the window represented by exonic sites, extracted from the same GFF files used to identify 4-fold degenerate sites for each species. We then estimate Kendall's τ between recombination rate and genetic diversity across windows after correcting for coding sequence density.

To estimate the strength of the association between recombination rate and genetic diversity, we use partial correlations that account for variation in coding sequence density across the genome. In many species [26, 40], recombination rate and/or neutral diversity is correlated with gene density, and thus we need to account for this confounding variable; we do so using partial correlations implemented in the ppcor package in R.
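For a single covariate, the partial Kendall correlation can be computed with the standard first-order recursion on the three pairwise τ values, which is what a first-order partial rank correlation (as in ppcor with one conditioning variable) amounts to. The Python sketch below is illustrative (the authors used ppcor in R) and assumes no ties:

```python
import math

def kendall_tau(x, y):
    """Kendall's tau (tau-a; no ties assumed): (concordant - discordant)
    pairs divided by the total number of pairs."""
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            cx = (x[j] > x[i]) - (x[j] < x[i])
            cy = (y[j] > y[i]) - (y[j] < y[i])
            s += cx * cy  # +1 concordant, -1 discordant
    return 2.0 * s / (n * (n - 1))

def partial_kendall(x, y, z):
    """First-order partial Kendall correlation of x and y given z:
    (t_xy - t_xz * t_yz) / sqrt((1 - t_xz^2) * (1 - t_yz^2))."""
    txy, txz, tyz = kendall_tau(x, y), kendall_tau(x, z), kendall_tau(y, z)
    return (txy - txz * tyz) / math.sqrt((1 - txz ** 2) * (1 - tyz ** 2))
```

Here z would be coding sequence density, with x and y the per-window recombination rate and diversity.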

Our basic approach to recombination rate estimation is to fit a continuous function to the Marey map relating genetic position to physical position for each chromosome. We use two different approaches that result in different degrees of smoothing: a polynomial fit and a linear B-spline fit. In both cases, we start by optimizing the polynomial degree or spline degrees of freedom using a custom R function that minimizes the AIC of the model fit. For the polynomial fit, we optimize between degree 1 and degree max(3, min(20, # markers / 3)). For the B-spline fit, we optimize degrees of freedom between 1 and min(100, max(2, # markers / 2)). In each case, we retain the value with the lowest AIC. To compute recombination rates in cM/Mb, we then take the derivative of the fitted function, evaluated at the midpoint of each window. To bound extreme estimates, we set all estimated recombination rates below zero to zero and all values above the 99th percentile to the 99th percentile. While the two estimates tend to be highly correlated with each other, the polynomial fit appears to perform better for low-quality maps, and the B-spline fit for high-quality maps. Therefore, unless otherwise noted, we use the polynomial estimates of recombination rate for maps with intermarker spacing greater than 2 cM and the B-spline estimates for maps with intermarker spacing less than or equal to 2 cM. All estimation was done in R; code is available at the GitHub page associated with this manuscript.
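The polynomial branch of this procedure can be sketched in Python (the authors worked in R; the function name, the least-squares AIC form, and the omission of the B-spline branch and the 99th-percentile truncation are our simplifications):

```python
import numpy as np

def marey_recomb_rates(phys_mb, gen_cm, midpoints_mb, max_degree=None):
    """Fit a polynomial Marey map (genetic position in cM vs. physical
    position in Mb), selecting the degree that minimizes AIC, then return
    the derivative of the fit (cM/Mb) at window midpoints."""
    n = len(phys_mb)
    if max_degree is None:
        # degree bound from the text: max(3, min(20, # markers / 3))
        max_degree = max(3, min(20, n // 3))
    best = None
    for deg in range(1, max_degree + 1):
        coef = np.polyfit(phys_mb, gen_cm, deg)
        resid = gen_cm - np.polyval(coef, phys_mb)
        rss = max(float(np.sum(resid ** 2)), 1e-12)  # guard log(0)
        aic = n * np.log(rss / n) + 2 * (deg + 1)    # least-squares AIC
        if best is None or aic < best[0]:
            best = (aic, coef)
    # recombination rate = derivative of the Marey map at window midpoints
    rates = np.polyval(np.polyder(best[1]), midpoints_mb)
    return np.clip(rates, 0.0, None)  # negative rates truncated to zero
```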

To improve the quality of our recombination rate estimation, we designed a masking filter to exclude regions of chromosomes where the fit between the genetic map and the physical positions of markers is particularly poor, defined as a run of at least five poorly fitting markers (for chromosomes with at least 25 markers) or a run of at least 0.2 times the number of markers on the chromosome, rounded up (for chromosomes with fewer than 25 markers). We also completely mask any chromosome with fewer than five markers in total. The final map quality and various filtering results are summarized in S13 Table.
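The run-length rule above can be sketched as follows (an illustrative Python version of logic the authors implemented in R; the function name and the boolean-flag input are ours):

```python
import math

def mask_bad_runs(bad_flags):
    """Given per-marker booleans marking poorly fitting markers (in map
    order), return [start, end) index intervals of runs long enough to mask:
    runs of >= 5 bad markers when the chromosome has >= 25 markers, else
    runs of >= ceil(0.2 * n_markers). Chromosomes with fewer than five
    markers are masked entirely."""
    n = len(bad_flags)
    if n < 5:
        return [(0, n)]
    min_run = 5 if n >= 25 else math.ceil(0.2 * n)
    runs, start = [], None
    # appended False acts as a sentinel that closes a trailing run
    for i, bad in enumerate(list(bad_flags) + [False]):
        if bad and start is None:
            start = i
        elif not bad and start is not None:
            if i - start >= min_run:
                runs.append((start, i))
            start = None
    return runs
```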

For consistency, we assume that the reference genome is correctly assembled, and we correct the order and orientation of genetic maps to be consistent with the sequence assembly. To remove incongruent markers, we find the longest common subsequence (LCS) of the ranked genetic and physical positions and define as incongruent all markers that are not part of the LCS. After removing incongruent markers, we filtered each map to retain only the single most congruent mapping position for markers with multiple possible genomic locations. Functions to perform this analysis in R are available at the GitHub page associated with this manuscript.
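When markers are ordered by physical position, the LCS of the two rankings reduces to the longest strictly increasing subsequence of genetic-map ranks. A Python sketch of this reduction (the authors' functions are in R; the function name is ours):

```python
def incongruent_markers(genetic_ranks):
    """Markers are listed in physical-position order, each labeled with its
    rank in the genetic map. The LCS of the two orderings is the longest
    strictly increasing run of genetic ranks; indices outside it are
    returned as incongruent. O(n^2) dynamic program."""
    n = len(genetic_ranks)
    best_len = [1] * n   # length of best increasing run ending at i
    prev = [-1] * n      # backpointer for reconstruction
    for i in range(n):
        for j in range(i):
            if genetic_ranks[j] < genetic_ranks[i] and best_len[j] + 1 > best_len[i]:
                best_len[i] = best_len[j] + 1
                prev[i] = j
    # backtrack from the end of the longest run
    i = max(range(n), key=lambda k: best_len[k])
    keep = set()
    while i != -1:
        keep.add(i)
        i = prev[i]
    return [k for k in range(n) if k not in keep]
```

For example, a marker whose genetic rank jumps ahead of its physical neighbors is flagged while the rest of the map is retained.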

We used three basic approaches to link markers from genetic maps to sequence coordinates. In some cases, sequence coordinates are available from the literature, in which case we use previously published values (in some cases updated to the latest version of the genome reference). For cases where primer information (but not full sequence information) is available, we used ePCR [66] with options -g1 -n2 -d50-500, keeping all successful mappings except where noted. For cases where locus sequence information is available, we used blastn with an e-value cutoff of 1 × 10^−8 and retained the top eight hits for each marker, except where noted. In both of the latter cases, we retained only positions where the sequence chromosome and the genetic map chromosome are identical. Specific curation and data cleaning steps for individual species are summarized in S12 Table and described in more detail in S1 Text.

Our approach to estimating recombination rates is to first obtain sequence information and genetic map positions for markers from the literature, map markers to the genome sequence where necessary, filter duplicate and incongruent markers, and finally estimate recombination rates from the relationship between physical position and genetic position. Specific details of map construction for each species are described in S1 Text .

4. Modeling the Joint Effects of Background Selection and Hitchhiking on Neutral Diversity

We begin with the very general selective sweep model derived by Coop and Ralph [41], which captures a broad variety of HH dynamics. To include the effects of BGS, we rely on the fact that, to a first approximation, BGS can be thought of as reducing the effective population size and therefore increasing the rate of coalescence. This effect can be incorporated through a relatively simple modification of equation 16 of [41]. Specifically, we scale N by a BGS parameter, exp(−G), in equation 16, which leads to a new expectation for average pairwise genetic diversity (π): (1) where α = 2N * Vbp * J2,2 (per [41]) and rbp is the recombination rate per base pair. This is very similar to previously published models of the joint effects of background selection and HH (e.g., [39]). To account for variation in the density of targets of selection, we build upon the approach of Rockman et al. [40] and Flowers et al. [26], which derives from the work of Hudson, Kaplan, Charlesworth, and others who originally described models of background selection in recombining genomes [17,18]. Specifically, we fit the following model to estimate G for each window i: (2) where U is the total genomic deleterious mutation rate, fd_i is the functional density of window i, sh is a compound parameter capturing both dominance and the strength of selection against deleterious mutations, M_k and M_i are the genetic positions in Morgans of window k and window i, respectively, and P is the index of panmixis, which allows us to account for the effects of selfing. We estimate functional density as the fraction of exonic coding sites in the genome that fall within the window in question. We focus on exonic coding sites as a proxy for targets of selection because they are the only functional measure uniformly available for all the species in our study.
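Because the body of Equation 2 is not reproduced in this excerpt, the sketch below assumes the classic Hudson/Kaplan/Charlesworth-style form, with genetic distances scaled by the index of panmixis P; the exact published expression may differ, and this Python version (the authors' implementation is in C++) is illustrative only:

```python
def bgs_G(i, functional_density, genetic_pos_morgans, U, sh, P):
    """Background-selection exponent G for window i, assuming the classic
    form (an assumption, not the paper's verbatim equation):
        G_i = sum_k  U * fd_k * sh / (sh + P * |M_k - M_i|)**2
    Diversity in window i is then reduced by a factor exp(-G_i)."""
    Mi = genetic_pos_morgans[i]
    G = 0.0
    for fd_k, Mk in zip(functional_density, genetic_pos_morgans):
        # each window k contributes in proportion to its functional density,
        # discounted by its (selfing-scaled) genetic distance from window i
        G += U * fd_k * sh / (sh + P * abs(Mk - Mi)) ** 2
    return G
```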

Because P, U, and sh are not known, we fit this BGS model with a variety of parameter combinations. As U is generally unknown, and estimating U is difficult in most cases (e.g., [67,68]), we fit our models with three different values: Umin, where we assume U is equal to the mutation rate times the number of exonic protein-coding bases in the genome; Uconst, where we assume that U is equal to one for all species; and Umax, where we assume that U is equal to the lesser of the mutation rate times five times the number of exonic protein-coding bases in the genome or the mutation rate times the genome size. Umin and Umax are multiplied by two to convert to diploid estimates. We believe that these estimates of U should roughly span the reasonable range for most species. Umin is likely to underestimate the true deleterious mutation rate, as the number of exonic protein-coding bases will typically underestimate the number of evolutionarily conserved bases in a genome. Umax assumes that 20% of conserved bases are exonic coding bases and 80% are noncoding, which we admit is a relatively arbitrary assumption, but it is likely close to the maximum plausible U.
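The Umin/Umax arithmetic can be made concrete (the function name is ours; the formulas follow the text, including the doubling to diploid rates):

```python
def deleterious_U_bounds(mu, exonic_bp, genome_bp):
    """Diploid deleterious mutation-rate bounds described in the text:
    Umin counts only exonic protein-coding bases as selected; Umax assumes
    selected bases are five times the exonic count, capped at genome size.
    Both are doubled to convert to diploid estimates. `mu` is the per-base,
    per-generation mutation rate."""
    u_min = 2 * mu * exonic_bp
    u_max = 2 * min(mu * 5 * exonic_bp, mu * genome_bp)
    return u_min, u_max
```

For instance, with mu = 1e-8, 30 Mb of exonic coding sequence, and a 100 Mb genome, Umin = 0.6 and Umax = 2.0 (the cap binds, since 5 × 30 Mb exceeds the genome size).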

For P, we assume a value of one for all vertebrates, insects, and obligate outcrossers among plants; 0.04 for highly selfing species; and 0.68 for partial selfers. These values correspond to selfing rates of 0%, ∼98%, and ∼50%, respectively. Estimates of selfing rates are available in S14 Table. For a few species of plants, we were unable to obtain reliable estimates of selfing rate (indicated by NA in S14 Table); in these cases, we include all three values of P in our model selection approach below. For sh, we fit a range of values evenly spaced (on a log scale) between 1e-5 and 0.1. Code to estimate G_i was implemented in C++ and is available from the GitHub repository associated with this manuscript.

To incorporate functional density into the HH component of the model, we make the simplifying assumption that sweeps targeting selected sites outside a window will have little effect on neutral diversity within the window, and that sweeps occur uniformly within a window. Under this assumption, we can treat functional density as a scaling factor on the rate of sweeps, Vbp. Specifically, we reparameterize the rate of sweeps, Vbp, as V, the total number of sweeps per genome, and then take the fraction of sweeps that occur in a particular window i as V*fd_i. This results in a simple scaling of α in Equation 1. While we note that this assumption is likely to be violated in practice, it allows us to use the homogeneous sweep model of [41] with different rates of sweeps for each window across the genome. Ultimately, of course, it would be preferable to derive a nonhomogeneous sweep model that directly incorporates variation in functional density, but doing so is beyond the scope of this work. However, we believe that our simplifying assumption is likely adequate, as the largest reduction in diversity associated with a sweep is localized to the window containing the swept site (e.g., [41]).

Incorporating the effects of functional density in both BGS and HH, our final model for the expectation of neutral diversity in window i is: (3)

To obtain an estimate of the effect of selection for each species, we fit this model for estimates of G_i derived from different parameter combinations (see above), using the nlsLM() function from the minpack.lm package in R. In addition, we fit three simpler models: a BGS-only model (in which α is 0 and thus the second part of the denominator is 0), an HH-only model (in which G is 0 for all i, and thus the first part of the denominator is 1), and a neutral model in which both G and α are 0, and thus the model predicts that neutral genetic diversity is equal to mean genetic diversity across the genome. Together, we refer to these four models as model set 1. Finally, we fit a second set of models (model set 2) in which we use the same approach to model background selection but use the homogeneous HH model of [41] without modification to allow for variation in functional density across the genome, and thus remove the fd_i term from Equation 3.

From each model fit, we estimate θ_neutral for all four models (full, BGS-only, HH-only, and neutral) and extract the likelihood of the fit. We then compute the AIC for each parameter combination, extract the fit with the best (lowest) AIC for each model, and use that AIC to estimate the Akaike weight (relative likelihood) of each model j as exp((AIC_min − AIC_j)/2), (4) which we then normalize so that the weights for all four models for a species sum to one. We focus on AIC because it provides a straightforward way to compare non-nested models.
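The weight computation is a two-line transform; a Python sketch (function name ours) makes the normalization explicit:

```python
import math

def akaike_weights(aics):
    """Akaike weights: the relative likelihood exp((AIC_min - AIC_j)/2)
    of each model j, normalized so the weights sum to one."""
    a_min = min(aics)
    rel = [math.exp((a_min - a) / 2.0) for a in aics]  # best model gets 1.0
    total = sum(rel)
    return [r / total for r in rel]
```

For example, two models with AICs 10 and 12 receive weights of about 0.73 and 0.27.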

We estimate expected neutral genetic diversity in the absence of selection (θ_neutral) for each species as the parameter value obtained by the model with the best AIC. We then compute average observed genetic diversity for each species and report the magnitude of the impact of selection on linked neutral diversity as 1 − (observed/neutral); values below zero are replaced by zero. This value can be interpreted as the proportion of neutral variation removed by selection acting on linked sites, averaged across the genome.

This modeling approach has some important limitations: in particular, our approach calculates the effects of BGS and HH in windows across the genome instead of per base, and we use the single compound parameter sh instead of integrating across the distribution of fitness effects (as is done in, e.g., [48,50]). Additionally, we do not use information such as the locations of amino acid fixations, as is used by [49]. We fully acknowledge that these simplifying assumptions will, to a certain extent, degrade the accuracy of our modeling approach compared to other possible approaches. We argue, however, that these assumptions are necessary for this work: more sophisticated models typically require additional data (e.g., the distribution of fitness effects of new mutations or the locations of recent amino acid fixations) or significantly increased computational time (e.g., by computing the effects of background selection at each base instead of in windows). For most of the species we studied, the additional data necessary to fit more complex models are not available, and the increased computational time required to fit per-base models would rapidly make our analysis computationally intractable. Thus, we believe that we have made reasonable tradeoffs among modeling complexity, data availability, and taxonomic breadth.