While the PCA works well to identify the number of significant components, a potential weakness of this method is that the unrotated task-component loadings are liable to be formed from mixtures of the underlying factors and are heavily biased toward the component that is extracted first. This weakness necessitates the application of rotation to the task-component matrix; however, rotation is not perfect, as it identifies the task-component loadings that fit an arbitrary set of criteria designed to generate the simplest and most interpretable solution. To deal with this potential issue, the task-functional network loadings were recalculated using independent component analysis (ICA), an analysis technique that exploits the more powerful properties of statistical independence to extract the sources from mixed signals. Here, we used ICA to extract two spatially distinct functional brain networks using gradient ascent toward maximum entropy (code adapted from). The resultant components were broadly similar, although not identical, to those from the PCA (Table 1). More specifically, all tasks loaded positively on both independent brain networks but to highly varied extents, with the short-term memory tasks loading heavily on one component and the tasks that involved transforming information according to logical rules loading heavily on the other. Based on these results, it is reasonable to conclude that MD cortex is formed from at least two functional networks, with all 12 cognitive tasks recruiting both networks but to highly variable extents.
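
The gradient-ascent procedure can be illustrated with a minimal Infomax-style sketch (the Bell-Sejnowski natural-gradient rule) in Python/NumPy. This is not the adapted code used in the study; the toy sources, mixing matrix, learning rate, and iteration count below are arbitrary choices for demonstration only.

```python
import numpy as np

def infomax_ica(Xw, n_iter=1000, lr=0.05, seed=0):
    """Infomax ICA sketch: natural-gradient ascent toward maximum entropy
    of logistic-transformed unmixed signals. Xw must be whitened,
    shape (n_components, n_samples). All parameters are demo choices."""
    rng = np.random.default_rng(seed)
    n, m = Xw.shape
    W = np.eye(n) + 0.01 * rng.standard_normal((n, n))
    for _ in range(n_iter):
        U = W @ Xw                                 # current source estimates
        Y = 1.0 / (1.0 + np.exp(-U))               # logistic squashing
        # batch-averaged natural-gradient entropy-ascent update
        W += lr * (np.eye(n) + (1.0 - 2.0 * Y) @ U.T / m) @ W
    return W

# toy demonstration: unmix two independent super-Gaussian sources
rng = np.random.default_rng(1)
S = rng.laplace(size=(2, 5000))                    # true independent sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])             # arbitrary mixing matrix
X = A @ S
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))                   # whitening transform
Xw = E @ np.diag(d ** -0.5) @ E.T @ X
U = infomax_ica(Xw) @ Xw                           # recovered sources
```

For super-Gaussian sources, maximizing the entropy of the logistic-transformed outputs is equivalent to maximizing their statistical independence, which is the property the text appeals to.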

The question of how many functionally distinct networks were apparent within MD cortex was addressed using exploratory factor analysis. Voxels within MD cortex (Figure 1A) were transformed into 12 vectors, one for each task, and these were examined using principal components analysis (PCA), a factor analysis technique that extracts orthogonal linear components from the 12-by-12 matrix of task-task bivariate correlations. The results revealed two “significant” principal components, each of which explained more variability in brain activation than was contributed by any one task. These components accounted for ∼90% of the total variance in task-related activation across MD cortex (Table S1). After orthogonal rotation with the Varimax algorithm, the strengths of the task-component loadings were highly variable and easily comprehensible (Table 1 and Figure 1B). Specifically, all of the tasks in which information had to be actively maintained in short-term memory, for example, spatial working memory, digit span, and visuospatial working memory, loaded heavily on one component (MDwm). Conversely, all of the tasks in which information had to be transformed in mind according to logical rules, for example, deductive reasoning, grammatical reasoning, spatial rotations, and color-word remapping, loaded heavily on the other component (MDr). When factor scores were generated at each voxel using regression and projected back onto the brain, two clearly defined functional networks were rendered (Figure 1D). Thus, the insula/frontal operculum (IFO), the superior frontal sulcus (SFS), and the ventral portion of the anterior cingulate cortex/presupplementary motor area (ACC/preSMA) had greater MDwm component scores, whereas the inferior frontal sulcus (IFS), inferior parietal cortex (IPC), and the dorsal portion of the ACC/preSMA had greater MDr component scores.
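
The pipeline just described (retain components that explain more variance than any one standardized task, Varimax-rotate the loadings, then estimate voxelwise factor scores by regression) can be sketched on toy data. The loadings, noise level, and voxel count below are hypothetical; this illustrates the method, not the study's code.

```python
import numpy as np

def varimax(L, n_iter=100, tol=1e-8):
    """Orthogonal Varimax rotation of a loading matrix L (tasks x components)."""
    p, k = L.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(n_iter):
        Lr = L @ R
        M = L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(axis=0)) / p)
        U, s, Vt = np.linalg.svd(M)
        R = U @ Vt
        if s.sum() - var_old < tol:
            break
        var_old = s.sum()
    return L @ R

# toy data: 2,000 "voxels" x 12 tasks driven by two latent networks
rng = np.random.default_rng(0)
true = np.zeros((12, 2))
true[:6, 0], true[6:, 1] = 0.8, 0.8           # 6 "WM" and 6 "reasoning" tasks
X = rng.standard_normal((2000, 2)) @ true.T + 0.5 * rng.standard_normal((2000, 12))
Z = (X - X.mean(axis=0)) / X.std(axis=0)
corr = np.corrcoef(Z, rowvar=False)           # 12 x 12 task-task correlations
evals, evecs = np.linalg.eigh(corr)
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]
k = int((evals > 1.0).sum())                  # components beating any one task
L = evecs[:, :k] * np.sqrt(evals[:k])         # unrotated loadings
Lrot = varimax(L)                             # simple-structure loadings
scores = Z @ np.linalg.solve(corr, Lrot)      # regression factor scores
```

With standardized tasks, "explained more variance than any one task" reduces to the eigenvalue-greater-than-one criterion, and the regression factor scores are what would be projected back onto the brain.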
When the PCA was rerun with spherical regions of interest (ROIs) centered on each MD subregion, with radii that varied from 10 to 25 mm in 5 mm steps and excluding voxels that were on average deactivated, the task loadings correlated with those from the MD mask at r > 0.95 for both components and at all radii. Thus, the PCA solution was robust against variations in the extent of the ROIs. When data from the whole brain were analyzed using the same method, three significant components were generated, the first two of which correlated with those from the MD cortex analysis (MDr r = 0.76, MDwm r = 0.83), demonstrating that these were the most prominent active-state networks in the brain. The factor solution was also reliable at the individual subject level. Rerunning the same PCA on each individual’s data generated solutions with two significant components in 13/16 cases. There was one three-component solution and two four-component solutions. Rerunning the two-component PCA with each individual’s data set included as 12 separate columns (an approach that did not constrain the same task to load on the same component across participants) demonstrated that the pattern of task-component loadings was also highly reliable at the individual subject level (Figure 1C). In order to test the reliability of the functional networks across participants, the data were concatenated instead of averaged into 12 columns (an approach that does not constrain the same voxels to load on the same components across individuals), and component scores were estimated at each voxel and projected back into two sets of 16 brain maps. When t contrasts were calculated against zero at the group level, the same MDwm and MDr functional networks were rendered (Figure 1E).
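
The spherical ROI analysis rests on a simple construction: a boolean mask of voxels within a given radius of a peak coordinate. A minimal sketch follows, with a hypothetical peak coordinate and an assumed 2 mm isotropic voxel size; a small 5 mm sphere is used here for compactness rather than the 10-25 mm radii described above.

```python
import numpy as np

def sphere_mask(shape, center, radius_mm, voxel_size_mm=2.0):
    """Boolean mask of voxels within radius_mm of a center voxel coordinate."""
    grid = np.indices(shape).reshape(3, -1).T
    dist = np.linalg.norm((grid - np.asarray(center)) * voxel_size_mm, axis=1)
    return (dist <= radius_mm).reshape(shape)

# toy 30 x 30 x 30 volume with an activation hot spot at a hypothetical peak
vol = np.zeros((30, 30, 30))
peak = (10, 15, 12)
mask = sphere_mask(vol.shape, peak, radius_mm=5.0)
vol[mask] = 1.0
mean_act = vol[mask].mean()                     # mean activation within the ROI
```

Rerunning the PCA within such masks at varying radii (and intersecting with an above-baseline activation mask to exclude deactivated voxels) is all the robustness check requires.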

Sixteen healthy young participants undertook the cognitive battery in the MRI scanner. The cognitive battery consisted of 12 tasks, which, based on well-established paradigms from the neuropsychology literature, measured a range of the types of planning, reasoning, attention, and working memory skills that are considered akin to general intelligence (see Supplemental Experimental Procedures available online). The activation level of each voxel within MD cortex was calculated separately for each task relative to a resting baseline using general linear modeling (see Supplemental Experimental Procedures), and the resultant values were averaged across participants to remove between-subject variability in activation—for example, due to individual differences in regional signal intensity.

The loadings of the tasks on the MDwm and MDr networks from the ICA were formed into two vectors. These were regressed onto each individual’s set of 12 standardized task scores with no constant term. When each individual’s MDwm and MDr beta weights (representing component scores) were estimated in this manner, they centered close to zero, showed no positive correlation (MDwm mean beta = 0.05 ± 1.78; MDr mean beta = 0.11 ± 2.92; MDwm-MDr correlation r = −0.20), and, importantly, accounted for 34.3% of the total variance in performance scores. For comparison, the first two principal components of the behavioral data accounted for 36.6% of the variance. Thus, the model based on the brain imaging data captured close to the maximum amount of variance that could be accounted for by the two best-fitting orthogonal linear components. The average test-retest reliability of the 12 tasks, collected in an earlier Internet cohort (Table S2), was 68%. Consequently, the imaging ICA model predicted >50% of the reliable variance in performance. The statistical significance of this fit was tested against 1,000 permutations, in which the MDwm and MDr vectors were randomly rearranged both within and across vectors prior to regression. The original vectors formed a better fit than the permuted vectors in 100% of cases, demonstrating that the brain imaging model was a significant predictor of the performance data relative to models with the same fine-grained values and the same level of complexity. Two further sets of permutation tests were carried out in which one vector was held constant and the other randomly permuted 1,000 times. When the MDwm vector was permuted, the original vectors formed a better fit in 100% of cases. When the MDr vector was permuted, the original vectors formed a better fit in 99.3% of cases. Thus, both the MDwm and the MDr vectors were significant predictors of individual differences in behavioral performance.
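
The permutation logic can be sketched as follows: fit the two loading vectors to each individual's 12 task scores with no constant term, then compare that fit against refits in which the loading values are shuffled within and across the two vectors. The loadings and scores below are simulated stand-ins, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical stand-ins for the two task-network loading vectors (12 tasks)
mdwm = np.r_[np.full(6, 0.8), np.full(6, 0.2)]
mdr = np.r_[np.full(6, 0.2), np.full(6, 0.8)]
X = np.column_stack([mdwm, mdr])
# simulate 16 individuals whose standardized task scores track the loadings
B_true = rng.standard_normal((2, 16))
Y = X @ B_true + 0.5 * rng.standard_normal((12, 16))

def fit_r2(X, Y):
    """Proportion of variance in Y explained by no-constant least squares."""
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return 1.0 - ((Y - X @ B) ** 2).sum() / (Y ** 2).sum()

observed = fit_r2(X, Y)
wins = 0
for _ in range(1000):
    # shuffle loading values both within and across the two vectors
    Xp = rng.permutation(X.ravel()).reshape(X.shape)
    wins += fit_r2(Xp, Y) < observed
p_value = 1.0 - wins / 1000
```

Because the permuted models reuse the same fine-grained loading values and have the same number of free parameters, beating them isolates the specific alignment of tasks to networks as the source of predictive power.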

A critical question is whether the loadings of the tasks on the MDwm and MDr functional brain networks form a good predictor of the pattern of cross-task correlations in performance observed in the general population. That is, does the same set of cognitive entities underlie the large-scale functional organization of the brain and individual differences in performance? It is important to note that factor analyses typically require many measures. In the case of the spatial factor analyses reported above, measures were taken from 2,275 spatially distinct “voxels” within MD cortex. In the case of the behavioral analyses, we used scores from ∼110,000 participants who logged in to undertake Internet-optimized variants of the same 12 tasks. Of these, ∼60,000 completed all 12 tasks and a post-task questionnaire. After case-wise removal of extreme outliers, null values, nonsense questionnaire responses, and exclusion of participants above the age of 70 and below the age of 12, exactly 44,600 data sets, each composed of 12 standardized task scores, were included in the analysis (see Experimental Procedures).

Exploratory factor analysis was carried out on the behavioral data using PCA. There were three significant behavioral components that each accounted for more variance than was contributed by any one test (Table S3) and that together accounted for 45% of the total variance. After orthogonal rotation with the Varimax algorithm, the first two components showed a marked similarity to the loadings of the tasks on the MDwm and MDr networks (Table 2). Thus, the first component (STM) included all of the tasks in which information was held actively on line in short-term memory, whereas the second component (reasoning) included all of the tasks in which information was transformed in mind according to logical rules. Correlation analyses between the task-to-functional-network loadings and the task-to-behavioral-component loadings confirmed that the two approaches generated broadly similar solutions (STM-MDwm r = 0.79, p < 0.001; reasoning-MDr r = 0.64, p < 0.05). The third behavioral component was also readily interpretable, accounting for a substantial proportion of the variance in the three tasks that used verbal stimuli (Table 2), these being digit span, verbal reasoning, and color-word remapping. A relevant question is why there was no third network in the analysis of the MD cortex activation data. One possibility was that a spatial equivalent of the verbal component did exist in MD cortex but that it accounted for less variance than was contributed by any one task in the imaging analysis. Extracting three-component PCA and ICA solutions from the imaging data did not generate an equivalent verbal component, a result that is unsurprising, as a defining characteristic of MD cortex is its insensitivity to stimulus category. A more plausible explanation was that the third behavioral component had a neural basis in category-sensitive brain regions outside of MD cortex.
In line with this view, the task-factor loadings from the third behavioral component correlated closely with those from the additional third component extracted from the PCA of all active voxels within the brain (r = 0.82, p < 0.001). In order to identify brain regions that formed a likely analog of the verbal component, the task-component loadings were standardized so that they had unit deviation and zero mean and were used to predict activation unconstrained within the whole brain mass (see Experimental Procedures). Regions including the left inferior frontal gyrus and the bilateral temporal lobes were significantly more active during the performance of tasks that weighed on the verbal component (Figure 2). This set of brain regions had little overlap with MD cortex, an observation that was formalized using t tests on the mean beta weights from within each of the anatomically distinct MD cortex ROIs. This liberal approach demonstrated that none of the MD ROIs were significantly more active for tasks that loaded on the verbal component (p > 0.05, uncorrected and one tailed).

When task-component loadings for the verbal factor from the behavioral analysis were standardized and used as a predictor of activation within the whole brain, a left-lateralized network was rendered, including the left inferior frontal gyrus and temporal lobe regions bilaterally (p < 0.05, FDR corrected for the whole brain mass).

The results revealed a significant conformity (Table S4) between the task-to-component loadings from the PCA models of simulated data and the Internet behavioral data (simulated-to-real correlations: 2F model STM r = 0.56, p < 0.05 and reasoning r = 0.74, p < 0.005; 3F model STM r = 0.64, p < 0.05, reasoning r = 0.77, p < 0.005, and verbal r = 0.53, p < 0.05). More importantly, the sizes of the correlations between the obliquely oriented first-order components derived from the PCA of Internet data and data simulated based on task-functional network activation levels were almost identical for the 2F model (MDr-MDwm real r = 0.47, simulated r = 0.46, SD ±0.01) and highly similar for the 3F model (Figure 3), despite the underlying factors in the simulated data set being completely independent. Consequently, there was little requirement for a diffuse higher-order “g” factor once the tendency for tasks to corecruit multiple functional brain networks was accounted for.

Two simulated data sets were generated: one based on the loadings of the tasks on the MDwm and MDr functional networks (2F) and the other also including task activation levels for the verbal network (3F). Each of the 44,600 simulated “individuals” was assigned a set of either two (2F) or three (3F) factor scores using a random Gaussian generator. Thus, the underlying factor scores represented normally distributed individual differences and were assumed to be completely independent in the simulations. The 12 task scores were assigned for each individual by multiplying the task-functional network loadings from the ICA of the neuroimaging data by the corresponding, randomly generated, factor scores and summing the resultant values. The scores were then standardized for each task, and noise was added as the product of randomly generated Gaussian noise, the test-retest reliabilities (Table S2), and a noise level constant. A series of iterative steps was then taken, in which the noise level constant was adjusted until the summed communalities from the simulated and behavioral PCA solutions were closely matched, in order to ensure that the same total amount of variance was explained by the first-order components. This process was repeated 20 times to generate a standard deviation. (Note that matching the total variance explained by the first-order components in this manner does not bias the result; for example, if each task loaded on just one first-order component, then the first-order components would not be correlated.)
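
A stripped-down version of the 2F simulation illustrates the central point: even when the underlying factor scores are drawn completely independently, task mixing alone produces a positive manifold of cross-domain correlations. The loadings and the fixed noise constant below are hypothetical, and the iterative communality-matching step is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ind = 5000                                    # scaled down from 44,600
# hypothetical task-network loadings: every task recruits both networks
load = np.column_stack([
    np.r_[rng.uniform(0.6, 0.9, 6), rng.uniform(0.2, 0.4, 6)],  # "MDwm"-like
    np.r_[rng.uniform(0.2, 0.4, 6), rng.uniform(0.6, 0.9, 6)],  # "MDr"-like
])
F = rng.standard_normal((n_ind, 2))             # independent factor scores
scores = F @ load.T                             # 12 task scores per "individual"
scores = (scores - scores.mean(axis=0)) / scores.std(axis=0)
scores += 0.8 * rng.standard_normal(scores.shape)  # fixed noise constant
corr = np.corrcoef(scores, rowvar=False)
cross = corr[:6, 6:].mean()                     # mean cross-domain correlation
```

Despite the factor scores being independent by construction, `cross` comes out clearly positive; this is the positive manifold that a hierarchical factor analysis would summarize as a second-order “g.”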

Based on this evidence, it is reasonable to infer that the behavioral factors that underlie correlations in an individual’s performance on tasks of the type typically considered akin to intelligence have a basis in the functioning of multiple brain networks. This observation allows novel insights to be derived regarding the likely basis of higher-order components. More specifically, in classical intelligence testing, first-order components generated by factor analyzing the correlations between task scores are invariably correlated positively if allowed to rotate into their optimal oblique orientations. A common approach is to undertake a second-order factor analysis of the correlations between the obliquely oriented first-order components. The resultant second-order component is often denoted as “g.” This approach is particularly useful when tasks load heavily on multiple components, as it can simplify the task to first-order component weightings, making the factor solution more readily interpretable. A complication for this approach, however, is that the underlying source of this second-order component is ambiguous. More specifically, while correlations between first-order components from the PCA may arise because the underlying factors are themselves correlated (for example, if the capacities of the MDwm and MDr networks were influenced by some diffuse factor like conductance speed or plasticity), they will also be correlated if there is “task mixing,” that is, if tasks tend to weigh on multiple independent factors. In behavioral factor analysis, these accounts are effectively indistinguishable as the components or latent variables cannot be measured directly. Here, we have an objective measure of the extent to which the tasks are mixed, as we know, based on the functional neuroimaging data, the extent to which the tasks recruit spatially separated functional networks relative to rest.
Consequently, it is possible to subdivide “g” into the proportion that is predicted by the mixing of tasks on multiple functional brain networks and the proportion that may be explained by other diffuse factors (Figure 3).

A cognitive task can measure a combination of noise, task-specific components, and components that are general, contributing to the performance of multiple tasks. In the current study, there were three first-order components: reasoning, short-term memory (STM), and verbal processing. In classical intelligence testing, the first-order components are invariably correlated positively when allowed to rotate into oblique orientations. A factor analysis of these correlations may be undertaken to estimate a second-order component, and this is generally denoted as “g.” “g” may be generated from two distinct sources: task mixing (the tendency for tasks to corecruit multiple systems) and diffuse factors that contribute to the capacities of all of those systems. When simulations were built based on the brain imaging data, the correlations between the first-order components from the behavioral study were entirely accounted for by tasks corecruiting multiple functional networks.

These results suggest that the cognitive systems that underlie the STM, reasoning, and verbal components should have largely independent capacities. We sought to confirm this prediction by examining the correlations between the behavioral components (STM, reasoning, and verbal) and questionnaire variables that have previously been associated with general intelligence. An in-depth discussion of the relationship between biological or demographic variables and components of intelligence is outside the scope of the current article and will be covered elsewhere. Here, these correlations were used to leverage dissociations, and the question of whether they are mediated by unmeasured biological or demographic variables is not relevant. The extents to which the questionnaire responses predicted individual mean and component scores were estimated using generalized linear models. In such a large population sample, almost all effects are statistically significant because uncertainty regarding the proximity of sample means to population means approaches zero. Consequently, the true measure of significance is effect size, and here we conformed to Cohen’s notion that an effect of ∼0.2 SD units represents a small effect, ∼0.5 a medium effect, and ∼0.8 a large effect. The STM, reasoning, and verbal component scores were highly dissociable in terms of their correlations with questionnaire variables. Age was by far the most significant predictor of performance, with the mean scores of individuals in their sixties ∼1.7 SD below those in their early twenties (Figure 4A). (Note that in intelligence testing, 1 SD is equivalent to 15 IQ points.) The verbal component scores showed a relatively late peak and subtle age-related decline relative to the other two components. In this respect, the STM and reasoning components can be considered dissociated from the verbal component in terms of their sensitivity to aging.
Similarly, the mean score and the STM and reasoning component scores showed small-medium positive relationships with the frequency with which individuals played computer games (∼0.32 SD, ∼0.2 SD, and ∼0.3 SD, respectively) (Figure 4B), whereas the relationship with the verbal component was negligible. Conversely, while level of education (calculated from those aged 20+) showed a small-medium-sized positive relationship with the mean score (∼0.33 SD) and the verbal score (0.32 SD), the STM score showed a smaller relationship (0.23 SD), while the relationship with reasoning (0.12 SD) was of negligible scale (Figure 4C). The STM and reasoning components were also dissociated from each other. For example, individuals who regularly suffer from anxiety (Figure 5A) had significantly lower mean scores (0.21 SD), a relationship that was most pronounced for the STM component (0.35 SD), with negligible reasoning (0.06 SD) and verbal (−0.16 SD) effect sizes. Similarly, while the differences between male and female participants’ mean (0.1 SD), verbal (0.03 SD), and reasoning scores (−0.03 SD) were negligible, males showed a small advantage over females on the STM component score (0.2 SD) (Figure 5B). Other significant factors included amount smoked (Figure 6A), with smokers performing worse than nonsmokers on the mean score (∼0.19 SD), a difference that was most pronounced for the STM component (STM ≈ 0.19 SD, reasoning ≈ 0.09 SD, and verbal ≈ 0.05 SD). By contrast, alcohol consumption and caffeine intake showed negligible effect sizes for mean and component scores. Finally, geographical origin (grouped by country of birth) showed small-medium-sized relationships with the mean score (0.37 SD) that primarily favored individuals from countries in which English is the first language (Figure 6B).
The largest relationship between component score and geographical origin was for the verbal component, which spanned ∼0.52 SD units, with smaller relationships evident for the reasoning (0.40 SD) and STM (0.23 SD) scores. (Note that rerunning the behavioral PCA and including only individuals for whom English was the first language produced the same three-component solution.) Taken together, this combination of co-relationship and dissociation of the STM, reasoning, and verbal scores supports the view that these components have a basis in relatively independent systems, while demonstrating how a multifactor model can provide a more informative and balanced account of population differences in intelligence.

Data from a pilot study were examined in order to confirm that the cognitive battery generated scores that correlated with “g” as measured by classic IQ testing. Thirty-five young healthy right-handed participants undertook the 12 cognitive tasks under controlled laboratory conditions followed by one of the most commonly applied classic pen and paper IQ tests—the Cattell Culture Fair (scale II). Scores were standardized so that each of the cognitive tasks had zero mean and unit deviation across participants. For each participant, the standardized scores were then averaged across the tasks. A significant bivariate correlation was evident between the mean standardized scores and performance on the Cattell Culture Fair intelligence test (r = 0.65, p < 0.001). Component scores were calculated for the 35 pilot participants using regression with the test-component loadings from the orthogonal PCA of the Internet cohort’s data. Both the STM and the reasoning component scores correlated significantly with the Cattell Culture Fair score, whereas the verbal component showed a positive subthreshold trend (STM r = 0.52, p < 0.001; reasoning r = 0.34, p < 0.05; verbal r = 0.26, p = 0.07). Numerically, the strongest correlation was generated by averaging the STM and reasoning component scores (STM and reasoning r = 0.65, p < 0.001; STM and verbal r = 0.54, p < 0.001; verbal and reasoning r = 0.377, p < 0.05). When second-order component scores were generated for the pilot participants using the obliquely oriented factor model from the Internet cohort, they also correlated significantly with Cattell Culture Fair score (r = 0.64, p < 0.001). These results suggest that the STM and reasoning components relate more closely than the verbal component to “g” as defined by classic IQ testing.

General Discussion

The results presented here provide evidence to support the view that human intelligence is not unitary but, rather, is formed from multiple cognitive components. These components reflect the way in which the brain regions that have previously been implicated in intelligence are organized into functionally specialized networks and, moreover, when the tendency for cognitive tasks to recruit a combination of these functional networks is accounted for, there is little evidence for a higher-order intelligence factor. Further evidence for the relative independence of these components may be drawn from the fact that they correlate with questionnaire variables in a dissociable manner. Taken together, it is reasonable to conclude that human intelligence is most parsimoniously conceived of as an emergent property of multiple specialized brain systems, each of which has its own capacity.

Historically, research into the biological basis of intelligence has been limited by a circular logic regarding the definition of what exactly intelligence is. More specifically, general intelligence may sensibly be defined as the factor or factors that contribute to an individual’s ability to perform across a broad range of cognitive tasks. In practice, however, intelligence is typically defined as “g,” which in turn is defined as the measure taken by classical pen and paper IQ tests such as Raven’s matrices (Raven, 1938) or the Cattell Culture Fair (Cattell, 1949). If a more diverse set of paradigms is applied and, as a consequence, a more diverse set of first-order components is derived, the conventional approach is to run a second-order factor analysis in order to generate a higher-order component. In order for the battery to be considered a good measure of general intelligence, this higher-order component should correlate with “g” as measured by a classical IQ test. The results presented here suggest that such higher-order constructs should be used with caution. On the one hand, a higher-order component may be used to generate a more interpretable first-order factor solution, for example, when cognitive tasks load heavily on multiple components. On the other hand, the basis of the higher-order component is ambiguous and may be accounted for by cognitive tasks corecruiting multiple functionally dissociable brain networks. Consequently, to interpret a higher-order component as representing a dominant unitary factor is misleading.

Nonetheless, one potential objection to the results of the current study could be that while the 12 tasks load on common behavioral components, by the most commonly applied definition, these components do not relate to general intelligence unless they generate a second-order component that correlates with “g.” From this perspective, only the higher-order component may truly be considered intelligence, with the first-order components being task specific. In the current study, this objection is implausible for several reasons. First, a cognitive factor that does not relate to such general processes as planning, reasoning, attention, and short-term memory would, by any sensible definition, be a very poor candidate for general intelligence. Furthermore, many of the tasks applied here were based on paradigms that either have been previously associated with general intelligence or form part of classical intelligence testing batteries. In line with this view, analysis of data from our pilot study shows that when a second-order component is generated, it correlates significantly with “g,” and yet, based on the imaging data, that higher-order component is greatly reduced, as it may primarily be accounted for by tasks corecruiting multiple functionally dissociable brain networks. Moreover, MD cortex, which is both active during and necessary for the performance of classic intelligence tests, was highly activated during the performance of this cognitive battery but was divided into two functional networks.
Thus, the tasks applied here both recruited and functionally fractionated the previously identified neural correlates of “g.” It should also be noted that this battery of tasks is, if anything, more diverse than those applied in classical IQ tests and, in that respect, may be considered at least as able to capture general components that contribute to a wide range of tasks. For example, Raven’s matrices (Raven, 1938) employ variants on one class of abstract reasoning problem, the Cattell uses just four types of problem, while the WAIS-R (Wechsler, 1981) employs 11 subtests. Thus, it is clearly the case that by either definition, the tasks applied here are related to general intelligence.

Another potential objection is that the functional brain networks may not have been defined accurately enough because they form clearly defined clusters and, therefore, are negatively correlated across space. Perhaps the ICA underestimated this spatial segregation, causing voxels from one network to distort the task-component loadings from the other and masking the contribution of a diffuse higher-order “g” factor. This objection is highly unlikely for several reasons. First, while ICA seeks to maximize independence, it does not necessarily derive completely independent components. For example, in the current study, the MDwm and MDr components did show the expected negative correlation across voxels (r = −0.19). Second, such a close conformity between the second-order correlations from the simulated and behavioral models would have been highly unlikely to occur by chance alone if the ICA had failed. Furthermore, if the networks are spatially separable, then it should be possible to take relatively unmixed measures of their task-related activations by examining the centers of each cluster, where there is minimal network overlap. For example, when mean task activation levels were extracted from 5 mm spherical ROIs centered on peak IFO and IFS coordinates within the MDwm and MDr networks bilaterally, a marked double dissociation was evident across tasks. Specifically, there was either strong coactivation of regions or strong activation in one region and virtually no activation in the other dependent on the task context (Table S5). This is clearly the pattern of results that would be expected if the ROIs were placed exclusively within functionally dissociable and spatially separable networks. Nonetheless, when the 2F simulations were rerun based on these IFS and IFO activation levels, the second-order correlation between the estimated oblique components was not diminished but, rather, formed a precise match to the Internet behavioral data (r = 0.47, SD ± 0.02).
Thus, while the contribution of diffuse factors should not be entirely discounted, the results accord particularly closely with the view that the higher-order “g” component is primarily accounted for by cognitive tasks recruiting multiple functionally dissociable brain networks.

Indeed, from a phenomenological perspective, the idea that tasks tend to corecruit multiple functional brain networks makes intuitive sense, as generating a task that depends on any single cognitive process is likely to be rather intractable. Consider a simple working memory task, in which the spatial locations of a sequence of flashes must be observed, maintained, and repeated (spatial span). Even in this simple context, the participant must comprehend the written instructions; otherwise, they may report the correct locations but in the incorrect sequence. More importantly, people often apply chunking strategies when encoding information in short-term memory in order to generate a more efficient memory trace. For example, they may note that the flashes form the outline of a geometric shape. Such “chunking” strategies are a form of logical transformation and are known to recruit the IFS (Bor et al., 2001). Thus, even in the simplest of task contexts, all three of the cognitive systems identified in the current study would play a role but to varying extents.

This interplay of processes raises an interesting point regarding what exactly is meant by the term “functional network.” It is no doubt the case that the functional networks identified here often interact closely during the performance of complex cognitive tasks and, consequently, could be considered to form specialized subcomponents of a broader cognitive system. Indeed, from this perspective, the higher-order “g” factor that may be generated from hierarchical analysis of the behavioral data may be described as representing a higher-order functional network, formed from the corecruitment of the MDwm and MDr subnetworks. Such nested architecture is likely to form an accurate description of the functional organization of the brain (Bullmore and Sporns, 2009). Nonetheless, activity across the MDwm and MDr brain regions was not positively correlated ( Table S5 ). More importantly, the combination of corecruitment and strong double dissociation across task contexts is in close concordance with the proposed criteria for qualitatively dissociable brain systems (Henson, 2006). Furthermore, the fractionation of MD subregions reported here is highly replicable and, consequently, is unlikely to be specific to the choice of tasks. For example, similar functional networks have recently been reported when spontaneous fluctuations in resting-state activity are analyzed using ICA and graph theory (Dosenbach et al., 2008). Moreover, the conformity between the behavioral and imaging factor solutions supports the view that the networks make independent contributions to cognitive ability. In further support of this view, previous studies have demonstrated that functional activation within the IFO/preSMA and IFS/IPC, and their associated cognitive processes, are differentially affected by neurological disorders, pharmacological interventions, and genotype (Hampshire and Owen, 2010; Williams-Gray et al., 2007).
Thus, the MDr and MDwm networks are also dissociable with respect to their sensitivity to biological factors that modulate individual differences in cognition.

One of the reviewers of this paper suggested that an additional “g” network might exist within MD cortex but would only be recruited at the highest levels of demand; perhaps activation at lower levels of demand masks this unitary high-load network? This interpretation is unlikely, as the tasks were specifically designed to be taxing. More specifically, they used a combination of speeded/response-driven designs and dynamically adapting difficulty algorithms that kept participants working at a high cognitive load, yet only 10% of the cross-task variance within MD cortex remained unexplained by the two-component model. Moreover, the subdivision of MD cortex into functionally specialized networks accords particularly well with results from previous studies that have systematically varied difficulty within task by manipulating specific cognitive demands. For example, when the number of concurrent rules was manipulated in a challenging nonverbal reasoning task, there was a disproportionate increase in the response of the IFS (Hampshire et al., 2011). Conversely, when the difficulty of a target-distractor decision was manipulated in a task that required morphed stimuli to be compared with maintained target objects, there was a disproportionate increase in the response of the IFO (Hampshire et al., 2008). Cross-study comparisons of this type may be quantified more precisely using factor analysis. When brain maps depicting difficulty effects from these previous studies were added as extra columns in the PCA of task-related activations, the rule complexity manipulation loaded selectively on the MDr network (MDr = 0.79, MDwm = 0.06), whereas the object discrimination manipulation loaded selectively on the MDwm network (MDr = 0.18, MDwm = 0.64). Thus, when specific cognitive demands are systematically varied, MD cortex fractionates into the same two functional networks.
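The style of analysis described here — varimax-rotating the task-component loadings and then reading off where a new contrast map loads — can be sketched as follows. This toy uses synthetic data; the standard varimax routine, the matrix sizes, and the least-squares factor scores are illustrative assumptions rather than the study's exact pipeline:

```python
import numpy as np

def varimax(L, tol=1e-8, max_iter=500):
    # Orthogonal varimax rotation of a loading matrix L (variables x factors).
    p, k = L.shape
    R = np.eye(k)
    d_old = 0.0
    for _ in range(max_iter):
        B = L @ R
        U, s, Vt = np.linalg.svd(L.T @ (B ** 3 - B * (B ** 2).sum(axis=0) / p))
        R = U @ Vt
        if s.sum() - d_old < tol:
            break
        d_old = s.sum()
    return L @ R, R

# synthetic voxels-by-tasks data generated from two latent network maps
rng = np.random.default_rng(0)
net = rng.normal(size=(1000, 2))
X = net @ rng.normal(size=(2, 12)) + 0.1 * rng.normal(size=(1000, 12))
Xz = (X - X.mean(0)) / X.std(0)

# PCA loadings for the first two components, then varimax rotation
U, s, Vt = np.linalg.svd(Xz, full_matrices=False)
load = Vt[:2].T * (s[:2] / np.sqrt(len(Xz)))
rot_load, R = varimax(load)

# least-squares component score maps; a new contrast map's loadings are
# then its correlations with those score maps
score_maps = Xz @ np.linalg.pinv(rot_load.T)
new_map = net[:, 0] + 0.1 * rng.normal(size=1000)
new_load = [np.corrcoef(new_map, score_maps[:, j])[0, 1] for j in range(2)]
```

Because the rotation is orthogonal, it redistributes but does not change the total variance the loadings carry, so a new map that tracks one latent network should load selectively on the corresponding rotated component.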

It is particularly interesting that the mental transformation of spatial, object, and verbal information shares a common resource within a network of brain regions that includes the IFS. Previous neuroimaging studies that have focused on varying demands within any one of these domains accord well with this finding. For example, dorsolateral prefrontal cortex activation is evident during spatial planning (Williams-Gray et al., 2007) and deductive reasoning (Hampshire et al., 2011). The results here confirm this relationship in a more direct manner, as the planning, rotations, deductive reasoning, and verbal reasoning tasks all loaded heavily on the same component in both the behavioral and the neuroimaging analyses. Thus, on a process level, it seems sensible to conclude that the MDr network forms a module that is specialized for the transformation of information in mind according to logical rules but that is insensitive to the type or source of the information that is transformed. This view is compatible with the idea that the IFS is recruited during more complex executive processes (Petrides, 2005) and accords well with a two-stage model of working memory in which dorsolateral frontal lobe regions are recruited when information is reordered in mind (Owen et al., 1996). A major challenge for future studies will be to determine the neural mechanism by which the MDr network supports such diverse logical processes.