People analytics – the fast-growing practice of analyzing large amounts of workplace data to quantify employee performance – has the potential to revolutionize the workplace and vastly improve how all of us are rewarded for our efforts. But used the wrong way, people analytics can be just as blind and biased as human beings have always been.

One of the most well-established findings in social psychology is the “fundamental attribution error,” which describes how observers over-attribute the causes of behavior to “the person” and under-attribute them to “the situation.” In careers and the workplace, this means that credit or blame for performance is likely to be assigned to an individual based more on his or her perceived character, personality, intentions, or effort than on the situation, context, opportunities, or constraints within which that individual is working. This cognitive bias explains why salespeople lucky enough to be selling the right products to the right market at the right time get credit and are viewed as talented “A players,” while those with the misfortune of selling the wrong products to the wrong market at the wrong time receive blame and are branded mediocre “B” or “C” players. The same bias explains why a CFO may be perceived as cheap by disposition, or why a team might attribute its internal conflicts to incompatible personalities rather than to organizational incentives that reward competing rather than collaborating.

In this evolving age of data, the latest manifestation of the fundamental attribution error arises in the rapidly growing field of people analytics. Organizations can now conduct large-scale analyses with all kinds of variables to try to predict which employees are likely to succeed and which are not. Companies use variables such as college or graduate school grades, SAT or GMAT scores, years of work experience, and the results of cognitive ability or personality tests to predict turnover, promotions, sales volume, and other performance outcomes. With the explosion of data that companies collect and compile about their employees, it is tempting both to classify the employees who currently work at the organization and to build profiles of ideal candidates – people likely to perform well, remain at the organization, get promoted, and be satisfied with their jobs.

But just as people are susceptible to the fundamental attribution error, organizations risk making what might be called the fundamental analytic error: in many instances, critical information is missing from human capital or people analytics – namely, situational or contextual variables. An argument can be made that for the purposes of predicting, explaining, and improving variance in performance, situational variables might actually prove better predictors than individual variables. To return to the example of salespeople, more of the variation in sales volume may be attributable to product or territory than to which salesperson happens to be selling a given product in a given location. What remains indisputable, however, is that individual and situational variables combined will explain much more of the variance in performance or employee engagement than either set could ever explain alone.
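The point about combined variables can be made concrete with a small simulation. The sketch below uses entirely synthetic data and hypothetical variable names (test scores, territory demand, and so on are assumptions for illustration, not anything measured in a real company): when situational factors genuinely drive outcomes, a model using only individual variables explains little of the variance, and a model combining both sets explains the most.

```python
# Minimal sketch, synthetic data: compare the variance in "sales" explained
# by individual-only, situational-only, and combined regression models.
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Hypothetical individual variables: a cognitive-test score and experience.
test_score = rng.normal(0, 1, n)
experience = rng.normal(0, 1, n)

# Hypothetical situational variables: territory demand and product strength.
territory_demand = rng.normal(0, 1, n)
product_strength = rng.normal(0, 1, n)

# Simulated sales: in this toy world, the situation matters more than
# the individual (coefficients 0.8/0.7 vs. 0.3/0.2), plus noise.
sales = (0.3 * test_score + 0.2 * experience
         + 0.8 * territory_demand + 0.7 * product_strength
         + rng.normal(0, 1, n))

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_individual = r_squared(np.column_stack([test_score, experience]), sales)
r2_situational = r_squared(np.column_stack([territory_demand,
                                            product_strength]), sales)
r2_combined = r_squared(np.column_stack([test_score, experience,
                                         territory_demand,
                                         product_strength]), sales)
```

In a run of this sketch, `r2_combined` exceeds both single-set values, and the situational-only model beats the individual-only model – the "fundamental analytic error" in miniature: an analysis restricted to individual variables would conclude that performance is barely predictable at all.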

The fundamental analytic error tempts organizations for several reasons. It’s often much more politically expedient to blame individuals when things aren’t going well than to search for the underlying organizational causes of their difficulties. If the talents or efforts of individuals get credited or blamed for performance, then performance that doesn’t meet expectations does not raise tough questions about whether culture, product, strategy, incentives, or technologies might be improperly configured or misaligned with one another. In other words, poor performance can readily be attributed to employees rather than to their leaders, who are directly or indirectly responsible for creating the conditions that should enable people to succeed. If products are not selling, it may be very appealing to initiate an analytics project examining salespeople’s attributes instead of getting customer feedback about the company’s products. If turnover is high among entry-level employees, it could be much more politically palatable to analyze the personality, style, education, experience level, and referral source of the employees who leave the organization than to analyze the capabilities or managerial skills of their supervisors. And no amount or kind of human capital analytics is going to save an organization in denial about disruptive changes occurring in its industry or markets.

There is a better way. The most valuable human capital or people analytics initiatives are deployed in a scientific manner: hypotheses, nested in some kind of conceptual framework, are formulated and tested, and both theories and hypotheses remain subject to falsification. So, if some associates in a law firm are performing well while others perform poorly, it is reasonable to hypothesize that their law school grades, LSAT scores, and whether or not they clerked for a judge might help predict, and partially explain, their performance as attorneys. If the associates with high grades and scores who clerked for a judge are high performers, and associates with low grades and scores who did not clerk are poor performers, a researcher could hypothesize that this correlation reflects causation: that intelligence and motivation are reflected in the lawyers’ resumes, and that higher intelligence and motivation in turn cause higher performance.
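Why that causal leap is risky can be shown with another synthetic sketch. Everything below is a hypothetical construction for illustration: grades are given zero causal effect on performance, but better-credentialed associates are assigned, more often, to partners who mentor well. Grades then correlate with performance anyway – and the correlation vanishes once the situational variable is controlled for.

```python
# Hedged sketch, synthetic data: grades have NO causal effect on performance,
# yet correlate with it, because grades drive assignment to supportive partners.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

grades = rng.normal(0, 1, n)
# Hypothetical assignment process: high-grade associates tend to land
# partners who coach and mentor well.
partner_support = 0.8 * grades + rng.normal(0, 0.6, n)
# Performance is driven entirely by the partner, not by grades.
performance = partner_support + rng.normal(0, 1, n)

# Naive analysis: grades look predictive of performance.
naive_corr = np.corrcoef(grades, performance)[0, 1]

def residualize(y, x):
    """Remove the part of y linearly explained by x (with an intercept)."""
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Partial correlation: grades vs. performance, controlling for the partner.
partial_corr = np.corrcoef(residualize(grades, partner_support),
                           residualize(performance, partner_support))[0, 1]
```

In this toy world `naive_corr` is substantial while `partial_corr` is close to zero – exactly the pattern a purely individual-level analysis would misread as evidence that credentials cause performance.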

But before conclusions can be drawn, other explanations need to be considered and alternative analyses need to be conducted. The hypothetical law firm, for example, might look beyond human capital analysis of its associates and consider the impact of practice area, geographic location, and types of cases on associate retention and performance. A courageous investigator, willing to risk stirring up organizational politics, might even suggest that the law firm conduct analyses to learn whether associates who work for some partners outperform associates who work for others. These additional analyses might determine that variance in associate performance is a function of whether the partners they happen to be working for provide coaching, mentoring and support, and not a result of the associates’ grades, scores or personalities.
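The partner-level analysis suggested above amounts to a simple variance decomposition. The sketch below, again on synthetic data with hypothetical numbers, splits total variance in associate performance into a between-partner and a within-partner component; when partner effects are strong, the share of variance attributable to which partner an associate happens to work for is large.

```python
# Toy illustration, synthetic data: how much variance in associate
# performance is attributable to the supervising partner?
import numpy as np

rng = np.random.default_rng(7)
n_partners, per_partner = 10, 20

# Each partner has a stable effect (e.g. quality of coaching and support).
partner_effect = rng.normal(0, 1.5, n_partners)

# Associate performance = partner effect + individual-level noise.
performance = np.concatenate([
    effect + rng.normal(0, 1.0, per_partner) for effect in partner_effect
])
groups = np.repeat(np.arange(n_partners), per_partner)

grand_mean = performance.mean()
group_means = np.array([performance[groups == g].mean()
                        for g in range(n_partners)])

# One-way ANOVA-style decomposition: between-partner sum of squares
# as a share of total sum of squares.
ss_between = per_partner * ((group_means - grand_mean) ** 2).sum()
ss_total = ((performance - grand_mean) ** 2).sum()
between_share = ss_between / ss_total
```

If `between_share` comes out high, the data are telling the firm that the situation – who an associate works for – drives performance, which is precisely the kind of finding that an analysis restricted to grades, scores, and personalities could never surface.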

Human psychology and organizational politics are both biased towards attributing too much causality to people and their individual attributes and not enough causality to situations and organizational context. Human capital and people analytics, despite their big data-fueled power, can easily get misused in ways that serve only to justify existing organizational systems and to unfairly scapegoat individuals who are not performing well in no small measure because of the weaknesses and constraints of those systems. Only by taking a broader, more open, less biased and less political approach to conducting analyses about the factors that predict and explain performance can organizations hope to improve it over the long term.