In this study, we implemented a connectome‐based predictive modeling approach to predict individual variations in trust propensity from whole‐brain RSFC. Based on previous evidence, we hypothesized that individual differences in trust propensity would be predicted by the functional interplay of a wide array of functional connections across distributed networks, particularly those implicated in valuation (RWN: e.g., caudate), aversive emotional processing (SAN: e.g., amygdala, AI), context‐based strategy (CEN: e.g., lateral PFC), and relationship‐based trustworthiness (DMN: e.g., TPJ, dmPFC).

A multivariate machine learning approach is particularly suitable for examining RSFC underlying complex and multifaceted constructs such as trust propensity. For example, previous evidence has shown that a multivariate predictive model combining the multi‐round TG with electroencephalography (EEG)‐based RSFC predicts only initial trust as measured during the first round (i.e., trust propensity) but not trust as measured across multiple rounds (i.e., trust dynamics; Hahn et al., 2014 ). Further, the machine learning approach allows for the prediction of unseen participants, offering information at the individual level rather than the group level (Cui, Su, Li, Shu, & Gong, 2018 ; Cui, Xia, Su, Shu, & Gong, 2016 ; Dubois & Adolphs, 2016 ; Feng et al., 2017 ; Gabrieli, Ghosh, & Whitfield‐Gabrieli, 2015 ; Shen et al., 2017 ; Yarkoni & Westfall, 2017 ). A machine learning approach typically implements cross‐validation procedures to estimate the model with training samples and to test the performance of the model with independent samples (i.e., test samples).

The neuropsychological underpinnings of trust have been studied predominantly by utilizing task‐based fMRI, but it remains unclear whether trust propensity can be predicted by task‐free fMRI based on resting‐state functional connectivity (RSFC). As a task‐independent measure, RSFC is free from confounds associated with ongoing task demand and different experimental designs across studies (Kable & Levy, 2015 ; Nash, Gianotti, & Knoch, 2015 ; Nash & Knoch, 2016 ). Moreover, fMRI‐RSFC has emerged as a widely‐used network‐level approach that has significantly advanced our understanding of individual variations in cognitive functions, personality traits, and behaviors (Feng, Yuan, et al., 2018 ; Feng, Zhu, et al., 2018 ; Finn et al., 2015 ; Gianotti, Lobmaier, Calluso, Dahinden, & Knoch, 2017 ; Gianotti, Nash, Baumgartner, Dahinden, & Knoch, 2018 ; Hsu, Rosenberg, Scheinost, Constable, & Chun, 2018 ; Jung, Lee, Lerman, & Kable, 2018 ; Rosenberg et al., 2016 ; Wang et al., 2017 ).

The past decades have witnessed numerous neuroimaging studies on trust (Delgado, Frank, & Phelps, 2005 ; King‐Casas et al., 2005 ; Krueger et al., 2007 ; McCabe, Houser, Ryan, Smith, & Trouard, 2001 ). In light of previous findings, a neuropsychoeconomic model (Krueger & Meyer‐Lindenberg, 2018 ) suggests that trust arises through the interplay of psychological components (i.e., motivation, affect, cognition) that engage key brain regions anchored in domain‐general large‐scale brain networks: the reward network (RWN: e.g., striatum); salience network (SAN: e.g., amygdala; anterior insula, AI); central‐executive network (CEN: e.g., lateral prefrontal cortex, lateral PFC); and default‐mode network (DMN: e.g., temporoparietal junction, TPJ; and dorsomedial prefrontal cortex, dmPFC). In particular, the anticipation of reward (motivation, RWN) contrasted with the aversive feeling toward the risk of betrayal (affect, SAN) creates uncertainty about the vulnerability of trusting another person. To resolve this uncertainty, a context‐based strategy (cognitive control, CEN) or an evaluation of relationship‐based trustworthiness (social cognition, DMN) can be employed, thereby transforming the risk of betrayal into positive expectations of reciprocity.

People exhibit some degree of trust by sharing a portion of money with strangers (Johnson & Mislin, 2011 ), but they are heterogeneous regarding their propensity to trust (Cesarini et al., 2008 ; Krueger et al., 2012 ). Individual variations in propensity to trust have been related to many factors, including genetic polymorphisms (Krueger et al., 2012 ; Nishina, Takagishi, Inoue‐Murayama, Takahashi, & Yamagishi, 2015 ), plasma oxytocin levels (Zhong et al., 2012 ), and personality attributes (Alarcon et al., 2018 ; Unoka, Seres, Áspán, Bódi, & Kéri, 2009 ), such as individual‐level collectivism and individualism (Shin & Park, 2005 ; Van Lange, Rockenbach, & Yamagishi, 2017 ; Zeffane, 2017 ). Here, we examined whether individual variations in trust propensity can be predicted from intrinsic brain imaging measures.

Trust pervades human social life and plays a critical role in numerous social interactions, from facilitating family relations and friendship to promoting and maintaining social exchange among strangers. Representing a social dilemma, trust entails a trustor's willingness to be vulnerable to the risk of betrayal based on the expectation that the action of a trustee will produce some anticipated reward through future reciprocity (Seppänen, Blomqvist, & Sundqvist, 2007 ). In a laboratory setting, a person's propensity to trust can be measured with the one‐shot version of the trust game (TG; Berg, Dickhaut, & McCabe, 1995 ; Camerer & Weigelt, 1988 ). In the TG, trustors decide how much of their monetary endowment to share with trustees. The shared money is then tripled in value by the experimenter and sent to the trustees. The trustees then decide how much to return to the trustors, but can also decide to return nothing. The amount of money shared by trustors measures trust propensity since trustors make themselves vulnerable to the potential betrayal of trustees (Fehr, 2009 ).

To further examine whether the brain‐behavior relationship was specific to trust‐related preferences, similar RVR models were implemented to reveal whether RSFC profiles could also predict altruistic behaviors in the DG. In the first model, all RSFC features were employed; accordingly, this model is the same as the prediction model for the TG, except that the prediction target was altruism in the DG. In the second model, features with the top 10% weight values selected from the prediction model of trusting preferences were forwarded to the model predicting altruistic preferences; therefore, this model specifically examined whether trust‐related predictive networks were also associated with altruistic preferences.

The association between trust propensity and social preferences (i.e., collectivism/individualism) as well as trait impulsiveness was examined at both behavioral and neural levels. At the behavioral level, the correlations between trust propensity and the four subscales of the horizontal and vertical individualism–collectivism scale (Singelis et al., 1995 ) as well as the BIS Brief (Steinberg et al., 2013 ) were assessed. Since trust propensity exhibited a reliable correlation only with the horizontal collectivism subscale (see Section 3 ), it was further investigated whether network features contributing to the prediction of trust propensity could predict horizontal collectivism at the neural level. For this analysis, features with the top 10% weight values selected from the prediction model of trusting preferences were forwarded to the model predicting horizontal collectivism. That is, another RVR model was trained to predict horizontal collectivism based on the top 10% predictive edges derived from the trust preferences prediction analysis.

A control analysis was implemented to further examine whether the predictions of our models remained significant after accounting for the potential confounds of altruistic preference, motion, age, and gender. In the control analysis, the association between actual and predicted trust propensity was recomputed after adjusting for these confounding variables.
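Such an adjustment is commonly implemented by regressing the confounds out of both the actual and the predicted scores and correlating the residuals. A minimal Python sketch of this idea follows; the function name and synthetic data are illustrative, not taken from the study:

```python
import numpy as np

def adjusted_correlation(actual, predicted, confounds):
    """Correlate actual and predicted scores after regressing out confounds."""
    X = np.column_stack([np.ones(len(actual)), confounds])  # design matrix with intercept
    # residualize both variables against the confounds via least squares
    resid_a = actual - X @ np.linalg.lstsq(X, actual, rcond=None)[0]
    resid_p = predicted - X @ np.linalg.lstsq(X, predicted, rcond=None)[0]
    return np.corrcoef(resid_a, resid_p)[0, 1]

# synthetic example: 89 participants, 4 confound columns
rng = np.random.default_rng(0)
n = 89
confounds = rng.normal(size=(n, 4))          # e.g., altruism, motion, age, gender
actual = rng.normal(size=n)
predicted = actual + 0.5 * rng.normal(size=n)  # a "good" predictor plus noise
r_adj = adjusted_correlation(actual, predicted, confounds)
```

Because the confounds here are random, the adjusted correlation stays close to the unadjusted one; with real covariates the two can differ substantially.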

A potential disadvantage of the LOOCV is that it may yield unstable estimates of predictive performance, given the high variance across its single‐participant test sets. Accordingly, a 10‐fold cross‐validation was applied to validate the prediction results, since this scheme may provide more stable estimates of predictive performance (Varoquaux et al., 2017 ). All participants were divided into 10 subsets, of which nine were used as the training set and the remaining one as the testing set. The training set was scaled and used to train an RVR prediction model, which was then used to predict the scores for the scaled testing data. The scaling of the testing data used parameters acquired from the training data. This procedure was repeated 10 times so that each subset was used as a testing set once. Finally, the correlation r and MSE between the true and predicted scores were calculated across all participants. Since the full dataset was randomly divided into 10 subsets, performance might have depended on the data division. Therefore, the 10‐fold cross‐validation was repeated 100 times, and the results were averaged to produce the final prediction performance. A permutation test with 5,000 permutations was applied to test the significance of the prediction performance.
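The repeated 10‐fold scheme with train‐derived scaling can be sketched as follows; ridge regression stands in for RVR here, and all names and data are illustrative assumptions rather than the study's implementation:

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression, used here as a simple stand-in for RVR."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def repeated_kfold_cv(X, y, k=10, n_repeats=100, seed=0):
    """Repeated k-fold CV; test data are scaled with train-derived parameters."""
    rng = np.random.default_rng(seed)
    n = len(y)
    rs, mses = [], []
    for _ in range(n_repeats):
        order = rng.permutation(n)              # random division into k subsets
        folds = np.array_split(order, k)
        y_pred = np.empty(n)
        for test_idx in folds:
            train_idx = np.setdiff1d(order, test_idx)
            mu = X[train_idx].mean(0)
            sd = X[train_idx].std(0) + 1e-12    # scaling parameters from train only
            w = ridge_fit((X[train_idx] - mu) / sd, y[train_idx])
            y_pred[test_idx] = (X[test_idx] - mu) / sd @ w
        rs.append(np.corrcoef(y, y_pred)[0, 1])
        mses.append(np.mean((y - y_pred) ** 2))
    return np.mean(rs), np.mean(mses)           # averaged final performance
```

In practice the repeated loop is cheap relative to model fitting; the averaging over repeats is what stabilizes the estimate against any particular random data division.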

To further characterize the neural substrates of the contributing network, the network was first defined as the set of edges with the top 10% weight values described above. Afterwards, the 268 nodes were grouped into 10 macroscale brain regions, including the prefrontal lobe (46 nodes), motor lobe (21 nodes), insula lobe (7 nodes), parietal lobe (27 nodes), temporal lobe (39 nodes), occipital lobe (25 nodes), limbic lobe (36 nodes), cerebellum lobe (41 nodes), subcortical lobe (17 nodes), and brainstem lobe (9 nodes; Finn et al., 2015 ; Rosenberg et al., 2016 ). The number of edges with the top 10% weight values between each pair of macroscale regions or canonical networks was then calculated. Finally, the importance of individual nodes was measured as the number of their connections, as in previous studies (Beaty et al., 2018 ; Rosenberg et al., 2016 ), and the connectivity patterns of the top 10 most highly connected nodes were illustrated.
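The tallying of selected edges by macroscale region, and of node degree, might look like the following sketch in Python (region assignments and weights are synthetic; only the bookkeeping is the point):

```python
import numpy as np

# Illustrative sketch: count top-weight edges within/between macroscale regions
# and rank nodes by their number of selected connections (degree).
n_nodes = 268
rng = np.random.default_rng(1)
region_of = rng.integers(0, 10, size=n_nodes)   # node -> macroscale region label
iu = np.triu_indices(n_nodes, k=1)              # unique edges (upper triangle)
weights = rng.normal(size=iu[0].size)           # per-edge model weights (synthetic)

n_top = int(0.10 * weights.size)                # top 10% by absolute weight
top = np.argsort(np.abs(weights))[-n_top:]

edge_counts = np.zeros((10, 10), dtype=int)     # region-pair tally
degree = np.zeros(n_nodes, dtype=int)           # per-node connection count
for e in top:
    i, j = iu[0][e], iu[1][e]
    a, b = sorted((region_of[i], region_of[j]))
    edge_counts[a, b] += 1
    degree[i] += 1
    degree[j] += 1

top10_nodes = np.argsort(degree)[-10:][::-1]    # most highly connected nodes
```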

The absolute value of the RVR weight of each feature (edge) quantifies its contribution to the model (Cui & Gong, 2018 ; Gong et al., 2014 ). It is noteworthy that RVR calculates weights for samples. As RVR is a sparse model in the sample space, most weights will be zero; the remaining samples with nonzero weights were used to fit the model. The regression coefficients of all features were determined as the weighted sum of the feature vectors of the nonzero‐weighted samples (see also Cui & Gong, 2018 ; Gong et al., 2014 ). A larger absolute value of a weight indicates a greater contribution of the corresponding feature to prediction, in the context of all other features (Cui & Gong, 2018 ; Erus et al., 2015 ; Gong et al., 2014 ). A feature was selected for visualization if the absolute value of its weight was within the top 10% of the absolute weight values (Ecker et al., 2010 ; Gong et al., 2014 ). This threshold was applied to eliminate noise components for a better visualization of the most discriminating regions (Ecker et al., 2010 ; Gong et al., 2014 ).
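For a 268‐node parcellation there are 268 × 267 / 2 = 35,778 unique edges, so a top‐10% cut retains roughly 3,578 of them (the network size reported in Section 3). A minimal sketch of the thresholding step, using synthetic coefficients:

```python
import numpy as np

# Sketch: retain edges whose |weight| falls in the top 10% (90th percentile cut).
rng = np.random.default_rng(2)
beta = rng.normal(size=35778)                # regression coefficients, one per edge
thr = np.percentile(np.abs(beta), 90)        # 90th percentile of absolute weights
selected = np.abs(beta) >= thr               # boolean mask of visualized features
```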

The performance of the prediction was assessed with two frequently used statistics (Franke et al., 2010 ; Gong et al., 2014 ): (a) the correlation coefficient (i.e., r ) between real and predicted trusting behaviors and (b) the mean squared error ( MSE ). A permutation test was applied to determine whether the obtained metrics were significantly better than those expected by chance. Trust propensity scores across training samples were permuted without replacement 5,000 times, and each time the above LOOCV prediction procedure was reapplied. The permutation resulted in a distribution of r and MSE values reflecting the null hypothesis that the model did not exceed chance level. The number of times that the permuted value was greater than (or, for MSE values, less than) the true value was then divided by 5,000, providing an estimated P ‐value for each statistic.
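The permutation procedure can be sketched generically as below, where `predict_fn` stands for re-running the full cross-validated prediction on (permuted) targets; the function names and structure are illustrative assumptions, not the study's code:

```python
import numpy as np

def permutation_pvalues(y, predict_fn, n_perm=5000, seed=0):
    """Permutation test for prediction r and MSE.

    predict_fn(targets) should re-run the whole cross-validated prediction
    pipeline on the given targets and return predicted scores.
    """
    rng = np.random.default_rng(seed)
    y_pred = predict_fn(y)
    r_true = np.corrcoef(y, y_pred)[0, 1]
    mse_true = np.mean((y - y_pred) ** 2)
    r_null, mse_null = [], []
    for _ in range(n_perm):
        y_perm = rng.permutation(y)              # shuffle targets without replacement
        y_p = predict_fn(y_perm)                 # refit and predict under the null
        r_null.append(np.corrcoef(y_perm, y_p)[0, 1])
        mse_null.append(np.mean((y_perm - y_p) ** 2))
    p_r = np.mean(np.array(r_null) >= r_true)    # null r at least as large
    p_mse = np.mean(np.array(mse_null) <= mse_true)  # null MSE at least as small
    return p_r, p_mse
```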

A leave‐one‐out cross‐validation (LOOCV) was used to evaluate the out‐of‐sample prediction performance. N − 1 participants ( N is the number of participants) were used as the training set, with the remaining individual used as the testing sample. During the training procedure, an RVR prediction model was constructed using the training set. During the testing procedure, the RVR prediction model was used to predict the unseen testing participants' trusting preferences (Gong et al., 2014 ). The training and testing procedures were repeated N times such that each subject was used once as the testing subject.
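A generic LOOCV loop of this kind might be written as follows; the `fit`/`predict` callables are placeholders for the RVR training and prediction steps, not PRoNTo's implementation:

```python
import numpy as np

def loocv_predict(X, y, fit, predict):
    """Leave-one-out CV: train on N-1 participants, predict the held-out one."""
    n = len(y)
    y_pred = np.empty(n)
    for i in range(n):
        train = np.delete(np.arange(n), i)    # all participants except the i-th
        model = fit(X[train], y[train])       # train on N-1 samples
        y_pred[i] = predict(model, X[i:i + 1])[0]  # predict the unseen participant
    return y_pred
```

Any regression model can be plugged in through the two callables, e.g., an ordinary least‐squares fit for testing the loop itself.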

Thus, RVR calculates the weights (i.e., the $\alpha_i$ in the formula) for the data points. As RVR is a sparse model in the sample space, most weights would be zero, and the remaining samples with nonzero weights were used to fit the model. These samples were called "relevance vectors." Then, the weight of each feature (i.e., the $w_j$ in the formula) can be calculated using the following formula:

$$w_j = \sum_{i=1}^{M} \alpha_i x_{ij},$$

where $x_{ij}$ is the value of the $j$th feature for the $i$th relevance vector.

RVR is a sparse kernel learning multivariate regression method set in a fully probabilistic Bayesian framework (Tipping, 2001 ). In this framework, a zero‐mean Gaussian prior is introduced over the model weights and is governed by a set of hyper‐parameters, one for each weight. The most probable values for these hyper‐parameters are then iteratively estimated from the training data, with sparseness achieved because the posterior distributions of many of the weights peak sharply around zero. Those training vectors associated with nonzero weights are referred to as "relevance vectors." The optimized posterior distribution of the weights can then be used to predict the target value (e.g., trusting preferences) for a previously unseen feature vector by computing the predictive distribution (Tipping, 2001 ). In particular, the equation for the RVR model with a linear kernel can be represented as below (see also Cui & Gong, 2018 ; Tipping, 2001 ):

$$y(\mathbf{x}) = \sum_{j=1}^{N} w_j x_j = \sum_{i=1}^{M} \alpha_i \left( \mathbf{x}_i \cdot \mathbf{x} \right),$$

where $\mathbf{x}$ is a high‐dimensional feature vector $(x_1, \ldots, x_N)$, $N$ is the number of features, $w_j$ is the regression coefficient of the $j$th feature, $M$ is the quantity of "relevance vectors" (defined above), and $\alpha_i$ is the weight for each data point.
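Under a linear kernel, the per‐feature coefficients thus follow directly from the sample weights of the relevance vectors. A small numerical sketch of this bookkeeping (the sparse weights here are synthetic, not estimated by RVR):

```python
import numpy as np

# Sketch: with a linear kernel, per-feature regression coefficients follow from
# the sample weights of the relevance vectors (synthetic alpha values here).
rng = np.random.default_rng(3)
n_samples, n_features = 80, 500
X = rng.normal(size=(n_samples, n_features))

alpha = np.zeros(n_samples)                  # sample weights: sparse in sample space
rv = rng.choice(n_samples, size=12, replace=False)
alpha[rv] = rng.normal(size=12)              # only "relevance vectors" are nonzero

# w_j = sum_i alpha_i * x_ij: weighted sum over relevance-vector features
w = X[rv].T @ alpha[rv]                      # shape: (n_features,)

# prediction for a new sample is then the linear model x . w
x_new = rng.normal(size=n_features)
y_hat = x_new @ w
```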

The relationship between trust propensity and the functional connectome was examined using multivariate relevance vector regression (RVR) as implemented in PRoNTo ( http://www.mlnl.cs.ucl.ac.uk/pronto /) and in‐house scripts running under MATLAB (Mathworks, 2016 release). This algorithm was employed for the following reasons: (a) our recent study showed that this method works well with high‐dimensional features in prediction analyses (Feng, Cui, Cheng, Xu, & Gu, 2018 ); (b) this algorithm is relatively simple to implement since there are no hyperparameters to be optimized; and (c) previous studies have shown that RVR outperforms other predictive models (Chu, Ni, Tan, Saunders, & Ashburner, 2011 ; Franke, Ziegler, Klöppel, Gaser, & Initiative, 2010 ; Wang, Fan, Bhatt, & Davatzikos, 2010 ). Notably, we did not apply other prediction algorithms or compare them with the RVR model, since this comparison is beyond the scope of the current work and has been addressed in a recent study (Cui & Gong, 2018 ).

For each participant, a time course was first computed for each node by averaging the BOLD signal for all the voxels within the node at each time point. Second, network edges were defined as the functional connectivity between each pair of nodes, calculated as the Pearson correlation coefficient between the time courses of each pair of nodes. Fisher's r ‐to‐ z transformation was then implemented to improve the normality of the correlation coefficients, resulting in a 268 × 268 symmetric connectivity matrix that represented the set of edges/connections in each participant's resting‐state connectivity profile. These edges/connections (i.e., connectivity strength) measured as Fisher‐transformed correlations were used as features in the predictive models described below.
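This feature‐construction step can be sketched in a few lines, with synthetic time courses in place of real node‐averaged BOLD signals:

```python
import numpy as np

# Sketch: node time courses -> Pearson correlations -> Fisher z -> edge features.
rng = np.random.default_rng(4)
n_nodes, n_tr = 268, 140
ts = rng.normal(size=(n_tr, n_nodes))      # node-averaged BOLD time courses

corr = np.corrcoef(ts.T)                   # 268 x 268 Pearson correlation matrix
np.fill_diagonal(corr, 0)                  # ignore self-connections
z = np.arctanh(corr)                       # Fisher r-to-z transform

iu = np.triu_indices(n_nodes, k=1)         # unique edges above the diagonal
features = z[iu]                           # 35,778 edge features per participant
```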

Neuroimaging data analyses were performed with the DPABI software plug‐in package ( http://rfmri.org/dpabi ; Yan, Wang, Zuo, & Zang, 2016 ) based on SPM ( http://www.fil.ion.ucl.ac.uk/spm ). The first 10 volumes of the functional images were discarded to allow for signal equilibration and participants' adaptation to the scanning noise. The images were then realigned for head movement correction. Thirteen participants (9 males) were excluded from further analysis under the criteria of head motion exceeding 2.5 mm maximum translation, 2.5° rotation, or mean frame‐wise displacement exceeding 0.2 mm throughout scans (Power, Barnes, Snyder, Schlaggar, & Petersen, 2012 ; Yan et al., 2013 ). To normalize the functional images, participants' structural brain images were first co‐registered to their mean functional images and subsequently segmented. The parameters derived from segmentation were used to normalize each participant's functional images into the standard Montreal Neurological Institute space (MNI template, resampling voxel size 3 × 3 × 3 mm 3 ). Afterwards, the linear trends of the time courses were removed, and band‐pass filtering (0.01–0.1 Hz) was applied to the time series of each voxel to reduce the effect of low‐frequency drifts and high‐frequency physiological noise (Biswal, Zerrin Yetkin, Haughton, & Hyde, 1995 ; Zuo et al., 2010 ). Subsequently, the images were spatially smoothed with a Gaussian filter (full width at half maximum, 4 mm) to decrease spatial noise. Finally, common nuisance variables were regressed out, including the white matter signal, the cerebrospinal fluid signal (Fox et al., 2005 ; Snyder & Raichle, 2012 ), and 24 movement regressors derived from an autoregressive model of motion (the 6 head motion parameters, the same 6 parameters one time point before, and the 12 corresponding squared items; Friston, Williams, Howard, Frackowiak, & Turner, 1996 ).
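As an illustration of the 24‐parameter motion model described above, the regressor matrix can be assembled from the 6 realignment parameters as follows (a sketch, not the DPABI implementation; zero‐padding of the first lagged row is our assumption):

```python
import numpy as np

def friston24(motion):
    """Build the 24-parameter motion model from 6 realignment parameters:
    current parameters, parameters at the previous time point, and both squared."""
    lagged = np.vstack([np.zeros((1, 6)), motion[:-1]])   # one-TR lag, zero-padded
    return np.hstack([motion, lagged, motion ** 2, lagged ** 2])  # (n_volumes, 24)
```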

Images were acquired with a Siemens Trio 3‐Tesla scanner at the Beijing Normal University Imaging Center for Brain Research, Beijing, China. All participants first completed a 5‐min resting‐state fMRI scan, during which they were instructed to close their eyes, keep still, remain awake, and not think about anything systematically. The resting‐state scan consisted of 150 contiguous echo‐planar imaging volumes using the following parameters: axial slices, 33; slice thickness, 3.5 mm; gap, 0.7 mm; TR, 2,000 ms; TE, 30 ms; flip angle, 90°; voxel size, 3.5 × 3.5 × 3.5 mm 3 ; and FOV, 244 × 244 mm 2 . In addition, high‐resolution structural images were acquired through a 3D sagittal T1‐weighted magnetization‐prepared rapid acquisition with gradient‐echo sequence, using the following parameters: sagittal slices, 144; TR, 2,530 ms; TE, 3.39 ms; slice thickness, 1.33 mm; voxel size, 1 × 1 × 1.33 mm 3 ; flip angle, 7°; inversion time, 1,100 ms; and FOV, 256 × 256 mm 2 .

Participants were asked to complete (a) the horizontal and vertical individualism–collectivism scale (Singelis, Triandis, Bhawuk, & Gelfand, 1995 ), measuring the subdimensions of vertical individualism (i.e., conception of an autonomous individual and acceptance of inequality), vertical collectivism (i.e., perceiving the self as a part of a collective and accepting inequalities within the collective), horizontal collectivism (i.e., perceiving the self as a part of the collective, but seeing all members of the collective as the same), and horizontal individualism (i.e., conception of an autonomous individual and emphasis on equality), and (b) a brief version of the Barratt Impulsiveness Scale‐11 (BIS Brief; Patton, Stanford, & Barratt, 1995 ; Steinberg, Sharp, Stanford, & Tharp, 2013 ), representing a unidimensional trait impulsiveness measure (Steinberg et al., 2013 ).

Participants came to the lab only once to complete the experiment, playing the DG first (as the baseline) followed by the TG. They were informed that they would be paid a week later, after putative trustees had made their decisions. Unbeknownst to the participants, no other players were recruited for this experiment. To encourage real decisions, however, it was emphasized that the MUs they earned in the game would be converted into their final monetary payoffs. However, participants did not know the exact exchange rate from MUs to monetary payouts, and each participant was paid a fixed fee (50 CNY). Before leaving the laboratory, participants completed a debriefing questionnaire designed to examine their beliefs about the experimental setup, and none of the participants expressed doubts.

We employed participants' initial behaviors (i.e., the first round) in the TG and DG as their measures of trust and altruistic preferences, respectively (note that we also report the results derived from the behaviors averaged across all rounds for the sake of completeness). This choice was based on the study by Hahn et al. ( 2014 ), which found that resting‐state brain‐electrical connectivity only predicted individual variations in trust measured during the first round (i.e., trust propensity) but not trust measured across multiple rounds (i.e., trust dynamics).

In the DG, participants acted as dictators and decided how to split a sum of money (12 MUs) between themselves and the other player as a passive recipient. Participants' behaviors in the DG reflected generosity or altruistic preferences (Benenson, Pascoe, & Radmore, 2007 ; Zak, Stanton, & Ahmadi, 2007 ).

In the TG, participants acted in the role of investor (i.e., trustor) and started with an endowment of 9 monetary units (MUs). They needed to decide whether to trust or not by passing any portion of the endowment to the trustees and keeping the remainder of the endowment. The shared money would be tripled in value and passed to the anonymous partners, who would putatively participate in the experiment later as trustees and decide how much of the received money to return. The amount of money shared by trustors measured trust propensity since trustors made themselves vulnerable to the potential betrayal of trustees (Fehr, 2009 ).
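The payoff structure of the TG can be illustrated with a toy computation (the specific amounts below are hypothetical examples, not data from the study):

```python
# Toy illustration of the trust game payoff structure (amounts are hypothetical).
ENDOWMENT = 9            # trustor's starting monetary units (MUs)

def trust_game(shared, returned):
    """Payoffs given the amount the trustor shares and the trustee returns."""
    assert 0 <= shared <= ENDOWMENT
    tripled = 3 * shared                 # the shared amount is tripled in transit
    assert 0 <= returned <= tripled      # trustee may return anything, even 0
    trustor = ENDOWMENT - shared + returned
    trustee = tripled - returned
    return trustor, trustee
```

For example, sharing 6 MUs yields 18 MUs for the trustee; if 9 MUs are returned, both players end up better off than under no trust, which is what makes the shared amount a measure of trust propensity.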

Participants played multiple rounds ( n = 12) of the one‐shot TG (Berg et al., 1995 ; Camerer & Weigelt, 1988 ) and one‐shot dictator game (DG; Kahneman, Knetsch, & Thaler, 1986 ) with different putative anonymous partners without feedback between rounds. Before the games, participants were given written instructions on the payoff and rules for TG and DG. Afterwards, participants answered several questions designed to assess their understanding of the TG and DG.

The whole sample included 168 healthy right‐handed college students (109 males; 21.99 ± 2.38 years old, range: 18–30 years old) with no history of neurological or psychiatric disorders. Among them, 89 participants (46 males) played the economic games (i.e., measuring trust propensity), completed survey measures (see below), and underwent a neuroimaging scan. The other participants ( n = 79, 63 males) only completed survey measures and underwent a neuroimaging scan. The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of Beijing Normal University. Written informed consent was obtained from all participants.

Participants' initial altruistic preferences could not be predicted by either whole‐brain functional connectivity profiles (unadjusted for covariates: r = −0.24, p = .92; MSE = 5.03, p = .96, permutation tests; adjusted for covariates: r = −0.24, p = .91; MSE = 4.93, p = .96, permutation tests, Supporting Information Figure S2a,b ) or the trust‐related predictive network (unadjusted for covariates: r = −0.24, p = .92; MSE = 5.28, p = .95, permutation tests; adjusted for covariates: r = −0.25, p = .92; MSE = 5.22, p = .96, permutation tests, Supporting Information Figure S2c,d ). Finally, participants' altruistic behaviors averaged across multiple rounds could not be predicted by either whole‐brain functional connectivity profiles (Supporting Information Figure S3a,b ) or the trust‐related predictive network (Supporting Information Figure S3c,d ).

To further assess the association between horizontal collectivism and the network features contributing to the prediction of trust propensity, it was examined whether these network features could predict horizontal collectivism. The results revealed that trust‐related network features (i.e., the 3,578 edges with the top 10% weight values selected from the predictive model of trust propensity) predicted horizontal collectivism ( r = 0.31, p = .0018; MSE = 41.18, p = .0044, permutation tests), even after adjusting for motion, age, and gender ( r = 0.31, p = .0014; MSE = 41.56, p = .0046, permutation tests, Figure 5 b,c). In line with the behavioral findings, the trust‐related predictive network could not reliably predict individual differences in the other subscale scores ( rs < 0.10).

Performance of the prediction model for horizontal collectivism and trait impulsiveness based on trust‐related network features, assessed by leave‐one‐out cross‐validation. (a) Correlation between horizontal collectivism and trusting preference. (b) Correlation between actual and predicted horizontal collectivism scores adjusting for motion, gender, and age. (c) Permutation distribution of the correlation coefficient (r) for the prediction analysis of horizontal collectivism. The value obtained using real scores is indicated by the blue dashed line. (d) Correlation between trait impulsiveness and trusting preference. (e) Correlation between actual and predicted trait impulsiveness scores adjusting for motion, gender, and age. (f) Permutation distribution of the correlation coefficient (r) for the prediction analysis of trait impulsiveness. The value obtained using real scores is indicated by the blue dashed line [Color figure can be viewed at wileyonlinelibrary.com]

The correlational analyses revealed that trust propensity was positively correlated only with the horizontal collectivism subscale ( r = 0.32, p = .003), even after adjusting for altruistic preference, age, and gender ( r = 0.39, p < .0005), but not with the other subscales ( rs < 0.1, ps > .39; Figure 5 a). However, predicted trust propensity was not correlated with the horizontal collectivism subscale ( r = −0.032, p = .77).

Performance of the prediction model assessed by 10‐fold cross‐validation. (a) Correlation between actual and predicted trust preferences adjusting for altruistic preferences, motion, gender, and age. (b) Permutation distribution of the correlation coefficient (r) for the prediction analysis. The value obtained using real scores is indicated by the blue dashed line. (c) Consistency between actual and predicted trust preferences adjusting for altruistic preferences, motion, gender, and age. (d) Permutation distribution of the mean squared error. The value obtained using real scores is indicated by the blue dashed line [Color figure can be viewed at wileyonlinelibrary.com]

Functional connections predicting trust preferences. (a) The connectivity patterns selected by the prediction model, plotted as the number of connections within each macroscale region. (b) Connections plotted as the number of edges within and between each pair of macroscale regions. L, left; R, right; PFC, prefrontal; Mot, motor; Ins, insula; Par, parietal; Tem, temporal; Occ, occipital; Lim, limbic; Cer, cerebellum; Sub, subcortical; Bsm, brainstem [Color figure can be viewed at wileyonlinelibrary.com]

The network consisting of the features with the top 10% weight values included 3,578 edges. Based on macroscale regions, (a) connections within the prefrontal, subcortical, limbic, parietal, and temporal lobes, (b) connections between the prefrontal lobe and these lobes, (c) connections between the subcortical and the temporal and limbic lobes, and (d) connections between the limbic and the parietal, temporal, and occipital lobes were the primary predictors of trust propensity (Figure 2 a,b).

Performance of the prediction model assessed by leave‐one‐out cross‐validation. (a) Correlation between actual and predicted trust preferences adjusting for altruistic preferences, motion, gender, and age. (b) Permutation distribution of the correlation coefficient (r) for the prediction analysis. The value obtained using real scores is indicated by the blue dashed line. (c) Consistency between actual and predicted trust preferences adjusting for altruistic preferences, motion, gender, and age. (d) Permutation distribution of the mean squared error. The value obtained using real scores is indicated by the blue dashed line [Color figure can be viewed at wileyonlinelibrary.com]

A LOOCV approach was implemented to examine whether the relationship between connectivity strength (i.e., Fisher‐transformed correlations) and trust preferences (i.e., participants' decisions in the first round of the TG) generalized to unseen individuals. RSFC strength predicted trust propensity in unseen individuals ( r = 0.26, p = .024; MSE = 2.23, p = .019, permutation test), even after adjusting for altruistic preference, motion, age, and gender ( r = 0.27, p = .019; MSE = 1.89, p = .0012, permutation test, Figure 1 ). RSFC strength did not predict trust averaged across multiple rounds ( r = 0.1, p = .082; MSE = 2.15, p = .16, permutation tests, Supporting Information Figure S1 ).

4 DISCUSSION

In this study, we applied a connectome‐based predictive modeling framework to predict individual variations of trust propensity (measured with the TG) from intrinsic whole‐brain functional connectivity. Overall, our results showed that the functional interplay across multiple neural networks enabled prediction of trust propensity independently of confounding variables such as altruistic preferences, age, gender, and head motion. Inter‐individual variations in trust propensity were primarily predicted by RSFC rooted in the functional integration of distributed key nodes within and between domain‐general large‐scale networks—RWN (caudate), SAN (amygdala), CEN (lateral PFC), and DMN (TPJ, TP)—for which neural activity has been previously associated with psychological components (i.e., motivation, affect, and cognition) of trust (Krueger & Meyer‐Lindenberg, 2018). Moreover, trust propensity and its neural underpinnings were modulated according to the extent to which a person emphasizes general social preferences (i.e., horizontal collectivism) rather than general risk preferences (i.e., trait impulsiveness). Finally, our findings showed that the brain‐behavior associations were only evident for trust propensity but not for altruistic preferences.

Our findings bolster the assertion that trust propensity is a complex social construct emerging from the interactions among large‐scale networks (Bellucci, Chernyak, Goodyear, Eickhoff, & Krueger, 2017; Fehr, 2009). In line with this assumption, a recent study employing resting‐state brain‐electrical connectivity revealed that trust propensity could be predicted by connections of electrodes located over the frontal and parietal regions (Hahn et al., 2014). However, the EEG‐based approach does not allow for precise localization of the brain systems contributing to the prediction of trust propensity. In this regard, our findings provide evidence delineating the contributions of specific networks in shaping an individual's trust propensity.

Based on a systems neuroscience view (Bressler & Menon, 2010), trust emerges from the interactions of psychological systems (i.e., motivation, affect, cognition) that engage key nodes anchored in domain‐general large‐scale brain networks (Krueger & Meyer‐Lindenberg, 2018). The anticipation of reward (motivation, RWN) competes with the risk of betrayal (affect, SAN), which induces uncertainty about the vulnerability of trusting another person. The removal of uncertainty requires the adoption of a context‐based strategy (cognition, CEN) or an evaluation of relationship‐based trustworthiness (social cognition, DMN), thereby transforming the risk of betrayal into positive expectations of reciprocity.

First, the motivational system of trust involves the RWN, which builds on dopaminergic pathways to determine the anticipated reward of trusting another person. As a key node densely interconnected with numerous frontal regions (Yeterian & Van Hoesen, 1978), the caudate is involved in the production of movement, thereby influencing future planning and decision selection (Joel & Weiner, 1999). Signals from the caudate are predictive of people's decisions to cooperate or to trust (King‐Casas et al., 2005; Rilling et al., 2002). The caudate also registers other valuation‐related information important for trust decisions, including the trustworthiness of others (Dimoka, 2010) and positive feelings of reciprocated trust (Fareri, Chang, & Delgado, 2015; Rilling et al., 2002; Sripada, Angstadt, Liberzon, McCabe, & Phan, 2013).

Second, the affective system of trust engages the SAN to process aversive feelings associated with the risk of betrayal by another person. The SAN has been consistently implicated in self‐related, bottom‐up saliency detection for regulating social behavior (Bressler & Menon, 2010). The amygdala, necessary for appropriate social functioning (Adolphs, Tranel, & Damasio, 1998), signals the threat of betrayal by promoting social vigilance and encoding emotional salience (Engell, Haxby, & Todorov, 2007). Trust increases after damage to the amygdala (Koscik & Tranel, 2011; van Honk, Eisenegger, Terburg, Stein, & Morgan, 2013). The amygdala evaluates incoming social information to enhance (diminish) trust‐related behaviors for positive (negative) evaluations—consistent with the literature on the opposite effects of the hormones oxytocin (Baumgartner, Heinrichs, Vonlanthen, Fischbacher, & Fehr, 2008) and testosterone (Boksem et al., 2013) in balancing trust.

Third, the cognitive system of trust involves the CEN (cognitive control system) in adopting context‐based strategies for transforming the risk of betrayal into positive expectations of reciprocity. The lateral PFC is a key node of the CEN and has been consistently associated with top‐down cognitive control in adopting goal‐directed behavior under changing contexts (Miller & Cohen, 2001). The lateral PFC has also been implicated in emotion regulation through the modulation of bottom‐up processes in limbic and subcortical regions (Kober et al., 2010; Lee, Heller, Van Reekum, Nelson, & Davidson, 2012; Wager, Davidson, Hughes, Lindquist, & Ochsner, 2008). These regulatory processes thus might play a key role in transforming the risk of betrayal into positive expectations of reciprocity (Fouragnan et al., 2013).

Finally, the social‐cognitive system of trust involves the DMN (social cognition system) to evaluate relationship‐based trustworthiness for transforming the risk of betrayal into positive expectations of reciprocity. The TP and TPJ are key nodes of the DMN, which have been consistently implicated in mentalizing about others to facilitate cooperative decision‐making (Frith & Frith, 2006; Lieberman, 2007; Rilling & Sanfey, 2011; Saxe & Kanwisher, 2003). The TP is linked with the representation of abstract social knowledge about other people, such as whether they are trustworthy, which is important for our ability to mentalize (Frith & Frith, 2006; Saxe & Kanwisher, 2003; Zahn et al., 2007). The TPJ is closely linked with social cognitive functions, including self‐other distinction, perspective‐taking, and inferring the intentions of others (Van Overwalle, 2009), making it an essential region for attributing intentions to others in order to evaluate relationship‐based trustworthiness (Engelmann, Meyer, Ruff, & Fehr, 2018). Trustors with higher perspective‐taking tendencies demonstrate greater trust toward others and decrease their trust more drastically after being betrayed (Fett et al., 2014). Sophisticated trustors show higher TPJ activity than naive trustors, consistent with the assumption that they build better mental models of their partners' intentions (Xiang, Ray, Lohrenz, Dayan, & Montague, 2012). Moreover, TPJ activity when trusting others increases with age—demonstrating a growing sensitivity toward other people's social signals (Fett, Gromann, Giampietro, Shergill, & Krabbendam, 2014). Finally, recent task‐based fMRI evidence has shown that the left TPJ is functionally connected with the amygdala and IFG—nodes similar to those identified in the current study—to mediate human trust behaviors in a context‐dependent manner (Engelmann et al., 2018). Notably, both the task‐based approach implemented by Engelmann et al. (2018) and the current RSFC approach identified an association of left (but not right) TPJ connectivity profiles with trust behaviors.

Our findings further indicated that trust and its associated neural substrates were modulated according to individual differences in general social preferences (i.e., horizontal collectivism) rather than general risk preferences (i.e., trait impulsiveness). Horizontal collectivism refers to beliefs emphasizing social harmony and interdependence with others (Singelis et al., 1995). These beliefs are closely associated with one's propensity to trust, because trust represents an integral component of positive interpersonal interactions and relationships. Previous evidence has shown that horizontal collectivism is positively associated with self‐reported tendencies to trust and cooperate with others (Wagner III, 1995; Wong & Tjosvold, 2006; Zeffane, 2017). Finally, trust propensity was not predicted by trait impulsiveness—which is closely related to risk preferences in nonsocial contexts (Lauriola, Panno, Levin, & Lejuez, 2014; Romer, 2010; Steinberg, 2010)—supporting previous evidence that trust involves social risk and is distinct from nonsocial risk (Aimone, Ball, & King‐Casas, 2015; Baumgartner et al., 2008; Kosfeld, Heinrichs, Zak, Fischbacher, & Fehr, 2005; Zheng et al., 2017).

Although our findings serve as a starting point, several limitations remain that future investigations will need to address to improve the prediction performance and generalizability of RSFC‐based findings. First, our results indicated that the performance of the predictive models was significantly better than chance level. However, future investigations should improve the accuracy of these models by combining data from different levels of measurement (e.g., behavioral and brain measures), modalities (e.g., functional and structural connectivity), and statistical indicators (e.g., mean and variance).
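In predictive-modeling studies of this kind, "better than chance" is typically established with a permutation test: behavioral scores are shuffled to destroy any true brain‐behavior mapping, and model performance on the shuffled data forms the null distribution. The following is a minimal sketch with synthetic data; for brevity the predictions are held fixed across permutations, whereas a rigorous test re-runs the full cross‐validation pipeline for each shuffle.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic stand-ins for observed scores and a model's predictions (40 subjects).
observed = rng.standard_normal(40)
predicted = observed + rng.standard_normal(40)   # predictions carrying real signal
r_observed = stats.pearsonr(predicted, observed)[0]

# Null distribution: shuffling the scores breaks any true brain-behavior link.
null = np.array([stats.pearsonr(predicted, rng.permutation(observed))[0]
                 for _ in range(1000)])
# One-sided p value with the +1 correction for a valid permutation test.
p_value = (np.sum(null >= r_observed) + 1) / (len(null) + 1)
print(f"observed r = {r_observed:.2f}, permutation p = {p_value:.4f}")
```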

Second, our findings showed that intrinsic functional connectivity profiles predicted individual differences only in trust measured during the first round (i.e., trust propensity) but not in trust measured across multiple rounds (i.e., trust dynamics). A similar result was reported in a recent prediction study—employing resting‐state brain‐electrical connectivity—in which participants played multiple rounds with the same partners and received feedback after each round (Hahn et al., 2014). The authors argued that trust behavior in the first round could be predicted because trust propensity is considered a trait reflected in the underlying RSFC, whereas trust behavior averaged across rounds reflects trust dynamics—a state influenced, for example, by social learning (based on the received feedback) that is not reflected in RSFC. In our study, participants played with different partners and were instructed that feedback would be provided a week later, after the putative trustees had made their decisions; accordingly, trust behavior in the first round would reflect their propensity to trust as captured by the identified RSFC. In contrast, average trust behavior across rounds could reflect trust dynamics, with participants, for example, strategizing how to play the game differently to maximize their outcomes across all games. Future studies are needed to disentangle these different trust dynamics (e.g., social learning, strategizing) and their neural underpinnings based on task‐based and task‐free functional connectivity.

Finally, we investigated whether RSFC predicts individual variations in trust propensity at a single time point; future studies should further investigate whether the identified RSFC pattern is a temporally stable index of trust propensity over longer time intervals—representing the neural pattern underlying an individual's phenotype (Peysakhovich, Nowak, & Rand, 2014).
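The temporal stability of a connectivity‐based index is commonly quantified with test‐retest reliability, for example an intraclass correlation between network‐strength scores from two scanning sessions. The sketch below uses synthetic data; the two‐session design, sample size, and noise level are illustrative assumptions rather than a design tested here.

```python
import numpy as np

rng = np.random.default_rng(2)

n_subjects = 30
trait = rng.standard_normal(n_subjects)                    # stable individual component
session1 = trait + 0.3 * rng.standard_normal(n_subjects)   # network-strength index, time 1
session2 = trait + 0.3 * rng.standard_normal(n_subjects)   # same index at retest

def icc_oneway(x, y):
    """One-way random-effects ICC(1,1) for two sessions per subject."""
    data = np.stack([x, y], axis=1)
    subject_means = data.mean(axis=1)
    grand_mean = data.mean()
    # Between-subject and within-subject mean squares (k = 2 sessions).
    ms_between = 2 * np.sum((subject_means - grand_mean) ** 2) / (len(x) - 1)
    ms_within = np.sum((data - subject_means[:, None]) ** 2) / len(x)
    return (ms_between - ms_within) / (ms_between + ms_within)

print(f"test-retest ICC = {icc_oneway(session1, session2):.2f}")
```

An index with high ICC across sessions would support the interpretation of the identified RSFC pattern as a trait‐like marker of trust propensity.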

Despite these limitations, our study provides the first evidence that the functional interplay among distributed networks predicts individual variations in trust propensity. The contributing nodes and edges of the predictive networks—representing the motivational, affective, and cognitive components of trust—do not work separately but extensively interact with each other. In this regard, our data‐driven approach provides a more holistic measure of trust and offers a novel tool to characterize its neural underpinnings, highlighting its potential use as an objective neuromarker (Kristensen & Sandberg, 2017; Kropotov, 2016) for future investigations of trust in mental health disorders characterized by mistrust in close social relationships (Fett et al., 2012; Sripada et al., 2009; Unoka et al., 2009).