1. Introduction

Besides perhaps water, energy is the most important contributor to life on our planet. Over time, natural selection has optimized towards the most efficient methods for energy capture, transformation, and consumption [1–3]. In order to survive, each organism needs to procure at least the amount of energy it consumes. For example, cheetahs that repeatedly expend more energy chasing a gazelle than they receive from eating it will not survive. Further, in order to maintain and repair its body, reproduce, and raise offspring, the cheetah needs to obtain significantly more calories from its prey than it expends chasing it. The amount of energy left over after the calories used to locate, harvest (kill), refine and utilize the original energy are accounted for is termed “net energy”. The same concept applies in the human sphere. Energy sources need to return more energy than is used in their retrieval, and in order to secure an average modern human lifestyle, including shelter, amenities, leisure activities and many other benefits beyond the bare necessities, such an energy surplus needs to be significant [4].

Human history has been one of transitions in energy quantity and quality. The value of any energy transformation process to society is proportional to the amount of surplus energy it can produce in excess of what it needs for self-replication [5]. Over time, our trajectory from using sources like biomass and draft animals, to wind and water power, to fossil fuels and electricity has enabled large increases in per capita output because of increases in the quantity of fuels available to produce non-energy goods. This transition to higher energy gain fuels also enabled social and economic diversification as less of our available energy was needed for the energy securing process itself, thereby diverting more energy towards non-extractive activities [6].

As fossil fuels become more difficult to retrieve and thus more expensive, a move from higher to lower energy gain fuels will have important implications for how our societies are both powered and structured. As illustrated in Figure 1, declines in aggregate Energy Return on Energy Invested (EROI) mean more energy will be required by the energy sector (the light gray), leaving less energy available for other areas of an economy (the dark gray). Declines in amounts of surplus energy have been linked to collapses of animal societies and historical human civilizations [7]. Precisely how much net energy we might need to sustain human civilization is an interesting and important question, but one not frequently addressed [3].

In the past few decades, a number of concepts have been introduced to measure this relationship between energy inputs and energy gains for energy sources, for example energy profit ratio, EROI, energy payback period, net energy, and energy yield. These biophysical statistics all describe the amount of energy procured for human use relative to the amount expended. Every energy system incurs initial energy expenditures during its own construction. The facility then produces an energy output for a number of years until the end of its effective lifetime is reached. Over time, additional energy costs are incurred in the operation and maintenance of the facility. The simplest statistic to measure these energy flows is “energy gain”, which is the sum of the total energy output less the sum of the total energy input over the life of the investment. A variation of this is EROI, which divides the total energy output by the total energy input to arrive at a ratio indicative of the energy harnessing potential of the particular technology (Table 1). EROI is sometimes also referred to as the “Energy Profit Ratio”. Another popular statistic is the “energy payback period”, which is the time it takes an energy procuring technology to “pay back”, or produce an amount of energy equivalent to, that invested in its construction. This method is limited in that it does not account for the total remaining energy output after the initial payback period, which might differ significantly for technologies with the same payback time. In this paper, we will use the output/input ratio EROI, though the concepts presented here will be applicable to any biophysical statistic measuring net energy.
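The limitation of the payback period noted above can be made concrete with a short numerical sketch (all figures are invented for illustration): two technologies repay their construction energy in the same time, yet deliver very different lifetime returns.

```python
# Hypothetical illustration (all numbers invented): two technologies that
# repay their construction energy in the same time but differ in lifetime EROI.
def eroi(total_output, total_input):
    """Energy Return on Energy Invested: total energy output / total energy input."""
    return total_output / total_input

def payback_period(annual_output, construction_input):
    """Years until cumulative output equals the construction investment."""
    return construction_input / annual_output

# Both plants cost 100 units of energy to build and yield 50 units per year,
# but plant A lasts 30 years while plant B lasts only 10.
construction = 100.0
annual_out = 50.0
for name, lifetime in [("A", 30), ("B", 10)]:
    print(name,
          payback_period(annual_out, construction),   # 2.0 years for both
          eroi(annual_out * lifetime, construction))  # 15.0 vs 5.0
```

Despite identical two-year payback periods, the longer-lived technology returns three times as much energy per unit invested, which only the EROI statistic captures.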

Net energy is central to an energy theory of value, which asserts that natural resources, particularly energy, as opposed to dollars, are what we have to budget and spend [10]. This mode of analysis was viewed as so fundamental that in 1974 the U.S. Congress required every government sponsored technology for procuring energy to be subject to net energy analysis. Net energy analysis, though popular during the energy crises of the 1970s, had largely been subsumed in the academic literature by Life Cycle Assessment, until a recent resurgence in biophysical analysis sparked by concerns about oil depletion [3].

In the studies above and others, EROI is represented as a single static ratio of energy output to energy expense over the life of an energy technology. Graphically, this can be represented using an energy flow diagram (Figure 2a). The gray shaded region represents the energy output beginning at time t + c (where c is the period required for construction of facilities) and ending at time t + e (where e is the total number of years with energy gains). The black diagonally-striped section (bottom left) is the initial energy investment needed from the beginning of an energy gathering project. The black section represents ongoing inputs in energy terms through time t + e. Depending on the boundaries, there may also be another energy expense at time T…T + n dealing with decommissioning and waste removal (black dotted section, bottom right).
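The lifecycle flows just described can be sketched as a simple energy ledger (all quantities below are hypothetical): a construction investment, ongoing operating inputs, annual outputs over the producing years, and a final decommissioning expense.

```python
# Toy lifecycle energy ledger following the phases of an energy flow diagram
# (all numbers invented): construction, operation, output, decommissioning.
construction_input = 120.0      # expended during the construction period c
operating_input_per_year = 2.0  # ongoing inputs through t + e
output_per_year = 40.0          # gross output during the producing years
producing_years = 20            # e - c
decommissioning_input = 30.0    # waste removal and teardown at end of life

total_input = (construction_input
               + operating_input_per_year * producing_years
               + decommissioning_input)
total_output = output_per_year * producing_years

net_energy = total_output - total_input   # the "energy gain" statistic
eroi = total_output / total_input         # the output/input ratio
print(net_energy, eroi)
```

Note how widening the boundary to include decommissioning lowers the resulting EROI, which is why boundary choices matter in net energy analysis.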

In traditional net energy analysis, an energy input or output is treated the same irrespective of the volatility of the underlying energy output stream. However, the operational requirements for electrical grids have considerable influence on our energy preferences and planning decisions. Even though 100 continuous kilowatt hours of electricity have the same energy content as 100 sporadically generated kilowatt hours, their usefulness and value are proportionate to their fit with human demand systems. As such, volatility and intermittency become important variables. Variability will refer to a measure of statistical dispersion, either the variance (describing how far measured values lie from the mean) or the standard deviation (the square root of the variance). In finance, variability is usually termed “volatility”. Intermittency refers to the non-continuous, stochastic nature of electricity generation by some sources. A stochastic process is one that is random, or non-deterministic.

Two hypothetical energy retrieval profiles illustrate the relevance of variability (Figure 2). Both technologies offer the same EROI, but the system in Figure 2a returns energy steadily over 20 periods while the system in Figure 2b returns double the energy in some periods and zero in others, at random. The energy costs are identical at the start and during the life of the asset. Provided the quality of the energy retrieved is comparable, societies would prefer the technology that delivers the more stable returns (Figure 2a), as this more closely matches demand. However, nominal EROI analyses treat these two sources as equally preferable.
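The contrast can be shown numerically (the streams below are invented stand-ins for Figure 2, with simple alternation substituted for randomness): both systems have identical EROI but very different dispersion.

```python
import statistics

# Hypothetical streams mirroring Figure 2 (alternation stands in for randomness):
# both deliver 200 units over 20 periods for the same 20-unit input (EROI = 10),
# but stream_b delivers double-or-nothing.
stream_a = [10.0] * 20          # steady output each period
stream_b = [20.0, 0.0] * 10     # same total, feast-or-famine delivery

total_input = 20.0
for stream in (stream_a, stream_b):
    print(sum(stream) / total_input,   # EROI: 10.0 for both
          statistics.pstdev(stream))   # dispersion: 0.0 vs 10.0
```

A nominal EROI calculation sees two identical sources; only the dispersion statistic distinguishes them.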

Risk, a fundamental feature of our natural environment, is typically defined as variance around a mean, although other definitions include the coefficient of variation and unanticipated volatility [18,19]. Risk is generally considered as “the effect of uncertainty on objectives” [20]. Greater variability is associated with higher risk because it increases the chances for unknown outcomes and consequences. Variability risk is a significant aspect of decision-making in both the animal and human world [21,22]. A simple example illustrates the problem with variability from a biological perspective. A pride of lions travels a large distance to a water hole where for years they have found gazelles to feed on. At one point, however, they find the water hole dried up, with no prey (and no water) available. Despite the fact that this location has supported the growth of the pride for years, this single event might decimate the group. In contrast, another pride that regularly travels a smaller distance to a place offering less abundant but steadier hunting opportunities, though averaging a smaller return on its efforts, does not experience such a fatal setback. The genes would survive in the offspring of this second pride, which was disposed behaviorally towards the lower output, lower variability option. This phenomenon is formalized in the ecology literature as “risk sensitive foraging theory”, a body of empirical research observing risk preferences in a variety of situations in the animal world. Whether animals behave as if they were risk-averse or risk-prone depends on the energetic status of the forager (e.g., whether they are starving or sated), the type of variance associated with the feeding options, and the number of feeding options among which the animal has to choose [22]. As a general rule, when the amount of reward is variable, animals almost always exhibit risk-averse behavior.
When the delay to reward is variable, animals universally behave in a risk-prone manner, preferring a chance of immediate reward over certain deferred consumption [23]. In effect, animals prefer stable rewards and immediate results.

Similar preferences exist for human efforts [24]. A farming approach that secures constant average annual returns of 80% to 90% of a possible maximum will be preferred over one averaging 100% but having returns varying widely between 0% and 250%. This is because any shortfall is a significant threat to food security and survival. In the example below (Figure 3), people requiring food to survive would prefer the food producing output of Method 1 over the higher yielding but more volatile output of Method 2 due to the possibility of shortfall (i.e., periods 2, 4 and 5 fall below the minimum survival requirements, assuming no storage).
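The shortfall logic can be sketched numerically (the yields below are invented to mirror Figure 3): the steadier method never breaches the survival floor, while the higher-yielding one does in periods 2, 4 and 5.

```python
# Invented yields mirroring Figure 3: method_2 averages more but dips below a
# minimum survival requirement in some periods (no storage assumed).
minimum_required = 60.0
method_1 = [80.0, 85.0, 82.0, 88.0, 84.0]     # steady, lower mean
method_2 = [150.0, 40.0, 160.0, 30.0, 50.0]   # higher mean, volatile

def shortfall_periods(yields, floor):
    """1-based indices of periods whose output falls below the floor."""
    return [i + 1 for i, y in enumerate(yields) if y < floor]

print(sum(method_1) / len(method_1), shortfall_periods(method_1, minimum_required))
print(sum(method_2) / len(method_2), shortfall_periods(method_2, minimum_required))
```

Method 2's higher average is of little comfort when periods 2, 4 and 5 fall below the survival floor, which is precisely what a mean-only comparison hides.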

1.1. Variability Risk in Finance

Over time, risk and its measurement have become core parts of both economic and financial theory. The behavioral or physical aspect that is optimized for risk varies widely by species (and among academic disciplines) and includes territory, time, caloric value, energy, mating opportunities, reputation effects, fairness, certainty, emotion and mood effects, and property. In economics, the optimized currency is typically described as “utility” (e.g., [23,24]). Bernoulli [25] noted that “expected utility” (expected return modified by risk preferences) differed from “expected value” (the strict payout multiplied by its probability). Von Neumann and Morgenstern [26] further advanced the concept that rational individuals are risk averse and act as though they are maximizing expected utility. More recently, Prospect Theory advanced economists' understanding of how people make choices involving risk by making the theory more psychologically realistic [18]. In essence, its proponents posit that agents facing gains become more risk averse and those facing losses become more risk prone, consistent with the risk-sensitive foraging literature [27]. Finance has developed practical applications of these economic theories. In our financial system, investors can be thought of as optimal foragers; those with consistently high returns have more “energy” with which to buy goods and services, as well as the means to confer this advantage to their offspring. Interestingly, functional magnetic resonance scans of stock traders' brains show the activation of the same prefrontal regions after successful trades as when primates find food like nuts and berries [28,29]. Like any ecosystem, finance is about achieving high rewards with as little risk (and variability) as possible, and has developed multiple methods for risk assessment.
Over the past several decades, researchers in academia and private industry have tested and refined measurements of how investors respond to various financial problems and scenarios (see, for example, [30]). In financial markets, risk is commonly measured by volatility, a statistical measure of the dispersion of returns for a given security or market index, computed as either the standard deviation or the variance of those returns. Essentially, the higher the volatility, relative to itself or to a benchmark, the riskier an investment becomes [31,32]. Modern portfolio theory has formalized investors' preference for lower volatility (given returns of equal expected value) with a measure termed “risk adjusted return”, or the return per unit of standard deviation. In evaluating investment alternatives, risk aversion lies at the core of risk-return models, such as mean-variance portfolio theory. Markowitz formalized the observation that investors are risk averse: given two assets offering the same expected return, investors will prefer the less risky one [33]. Thus, an investor will take on increased risk only if compensated by higher expected returns. Although there are many ways to measure risk adjusted performance, one popular portfolio metric called the “Sharpe Ratio” takes this concept one step further by measuring the amount of return in excess of a “risk free rate” for each unit of risk [34]. The Sharpe Ratio (real or ex ante) is thus the return of a given strategy minus the risk free rate of return (usually U.S. treasury bills) divided by the standard deviation of the return. Specifically:

S = (r̄_p − r_f) / σ_p (1)

where r̄_p = expected portfolio return; r_f = risk free rate; σ_p = portfolio standard deviation. Consider an example with four potential investments (A, B, C and D). The assumed risk free rate is 3% (Table 2).
The portfolio objective is an annual return of 5%, similar to the many pensions and endowments that have minimum return thresholds to pay out to their beneficiaries. As can be seen in Table 3, two-dimensional (mean and volatility) return measures give a much more complete picture of investment success, though other nuances, such as maximum drawdown and the relationship to a minimum accepted return, are also important. The advantage of risk metrics like the Sharpe Ratio is that one statistic generated from return histories (or expectations) gives the investor a meaningful way to compare investments with different means and variances. Given the above options, an investor would likely choose option A, as its risk adjusted expected return is far superior to that of the other three assets. Asset B, while having an overall higher return, has much more volatility, especially when compared to the minimum portfolio return of 5%. This higher volatility suggests a greater chance that future returns could fall short of the minimum required return. The return streams from assets C and D are considerably more volatile, including periods of losses. Their low risk adjusted returns and drawdowns suggest they do not provide much return adjusted for risk. When an investor has a number of low risk investments that meet his minimum return target, the metrics identified above drive the decisions, e.g., he will select the investments with the highest Sharpe Ratio or similar statistic. Only in situations where the investor, in order to meet a minimum return target, has no choice but to accept investments with high risk (and thus relatively low Sharpe Ratios), will he employ additional selection criteria. For example, he will try to create a portfolio mix of those lower quality investments that are least correlated in their fluctuations to eliminate part of the risk in the portfolio.
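As a sketch of the calculation (the return histories below are invented for illustration, not the actual figures from Table 2), the Sharpe Ratio can be computed directly from a return series and the 3% risk free rate:

```python
import statistics

# Hypothetical annual return histories for four assets (invented numbers,
# not those of Table 2); steady A, higher-mean but volatile B, erratic C and D.
risk_free = 0.03
assets = {
    "A": [0.06, 0.07, 0.06, 0.08, 0.07],
    "B": [0.20, -0.02, 0.25, 0.01, 0.11],
    "C": [0.30, -0.15, 0.28, -0.10, 0.12],
    "D": [-0.05, 0.22, -0.12, 0.30, 0.05],
}

def sharpe(returns, rf):
    """(mean return - risk free rate) / standard deviation of returns."""
    return (statistics.mean(returns) - rf) / statistics.stdev(returns)

for name, rets in assets.items():
    print(name, round(sharpe(rets, risk_free), 2))
```

With these numbers the steady asset A earns the highest Sharpe Ratio despite B's higher mean return, mirroring the reasoning in the text.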

1.2. Applying Financial Risk Concepts to Net Energy Analysis

In energy systems, for example in electricity production, a similar need to reflect risk adjusted returns exists. For example, an index of electricity availability can be developed by multiplying the percentage of a country's population with access to electricity by the percentage of hours in a year with uninterrupted electrical service. Figure 4 plots such an “availability index” against GDP/capita (purchasing power parity adjusted) for 99 countries. It shows that stable electricity is key to producing economic activity significantly above 10,000 US$/capita. The fact that no country with electricity availability below 98% exceeds a per capita GDP of US$ 20,000 suggests that electricity is a prerequisite for high output, and not the inverse. The value of electricity that is steadily available at all times far exceeds its value in situations with regular blackouts, irrespective of the total amount of energy available. As we will show later, the electricity grid is a particularly fragile system, susceptible to deviations as small as 0.5% between demand and supply at any given point in time. Recently, fluctuations in the German electrical grid, in which voltage weakened for only milliseconds, led to damage to at least one industrial production line and the idling of that plant for several hours. Companies with sensitive production systems are investing in batteries and generators to prevent similar losses [35]. In general, energy supply technologies offer very different value to societies depending on how controllable they are. However, the importance of variability depends on the type of energy demand system. Storage-based energy sources such as oil, natural gas, or coal (and to some extent hydropower), which are not subject to meaningful degradation, allow suppliers to maintain flows according to demand, with no or short ramp times required.
They thus provide greater value and lower risk on the supply side. For example, oil exporting countries can, in theory, reduce oil production during periods of low demand and low prices. This approach maximizes the value extraction on the supply side, as the stores can be accessed in a largely discretionary way. In stock-based electricity production systems, conversion technologies (e.g., nuclear, coal, oil and gas generators) produce steady output flows, and the inflexibility of supply can be managed. However, flow-based energy sources such as run-of-river hydropower, solar power, and wind energy do not allow for supply-side control without additional investments and storage losses. To a certain extent, the same is true for energy conversion technologies that produce flows from stocks but require long lead times to switch on or off once they are operational. For example, nuclear power plants and some coal based power plants incur significant efficiency reductions when changing their load. For these technologies, flows occur mostly independent of demand or prices. Flow-based inputs with little predictability beyond short time horizons, such as solar and wind power, deliver output stochastically as a function of weather conditions. Once the infrastructure for these technologies has been installed (e.g., a photovoltaic panel, a wind turbine or a solar thermal concentrator), it can produce anything from 0% to 100% of nameplate capacity, relatively independent of demand. This does not necessarily translate to complete (short term) unpredictability, as weather forecasts are able to provide some limited planning guidance and PV panels produce the majority of their electricity during periods of high daytime demand; however, the overall delivery pattern remains stochastic. Deferral of supply of flow-based energy is possible only with storage technologies, which typically involve a significant conversion or entropy loss, and additional upfront investment.
Conversely, gas, coal or oil based fuels can be stored at a high energy density for significant periods of non-use with only limited storage losses (for natural gas) or none (for oil and coal), and then used as needed. Electricity, however, loses that feature once it is produced: it is expensive to store, stored at a lower energy density, and storage always incurs losses. Electric power not used or stored at the time of its production is no longer usable even a few seconds later. In electricity systems, both over- and undersupply are equally detrimental and, if not managed, will lead to grid failures and blackouts. Currently, well over 90% of the U.S. energy supply is “stock based” [39]. A move to more “flow based” resources (run-of-river hydro, solar, wind, etc.) will have large implications for how our energy systems are both structured and used. The largest human-made system that is based fully on short-term flow delivery is our electrical grid, infrastructure delivering electricity on demand using complex and intensively managed combinations of inputs. For these reasons, we have chosen electricity supplying technologies to illustrate the important relationship between intermittency and energy return in this paper. In electricity delivery systems, demand varies significantly throughout the hours of the day, days of the week and seasons of the year. Different generation technologies (driven by different energy sources) meet this intermittent demand in different ways. Below, electricity generation technologies are categorized according to their flow risks.
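The availability index introduced above is a simple product of two fractions; a minimal sketch (the country figures below are hypothetical):

```python
# Sketch of the availability index described in the text: the share of the
# population with electricity access multiplied by the share of hours per
# year with uninterrupted service. All example figures are hypothetical.
def availability_index(access_share, uptime_share):
    """Both arguments are fractions in [0, 1]; the index is their product."""
    return access_share * uptime_share

# e.g., 95% of the population with access, grid up 99% of hours in the year:
print(availability_index(0.95, 0.99))  # 0.9405
```

Because the two factors multiply, a country can score low on the index through either poor coverage or poor reliability, which is the property that makes the index informative when plotted against GDP/capita.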

1.3. Stable Output Technologies

Run-of-river hydropower delivers steady outputs that are typically not easy to alter. This is largely also the case for nuclear and most coal power plants, which convert stocks into flows and cannot be modulated easily. Their outputs vary little and are predictable for extended periods of time when considered in aggregate (i.e., while one power plant might fail, the aggregate supply of multiple plants using one technology typically delivers stable returns to a grid system). However, these technologies cannot move their output either up or down in a timeframe short enough to meet typical demand fluctuations, and such output changes are typically associated with energetic (and thus financial) losses. In situations where they supply electricity grids (as opposed to individual industrial facilities), these technologies are not flexible enough to follow all the peaks and lows demanded by society and are therefore of lower overall value. If they are used only against the portion of demand that is stable, however, their contribution becomes fully valuable and highly predictable in aggregate. We begin with a hypothetical example, depicted in Figure 5, of a day of operations of steady output sources in a network with a large proportion of stable outputs, consistent with typical demand curves for electricity in advanced economies; consider, for example, a country like France with a high share of nuclear power.

1.4. Flexible Technologies

Most stock-based technologies, like gas- or oil-fired power plants, or stored hydropower, can be modulated in a way that directly follows demand patterns as they emerge. As such, they bear no demand shortfall risk in their application. However, because these fuel types are among the most valuable, they produce at relatively high costs (particularly true for oil-based generation, but similarly for natural gas). The example in Figure 6 illustrates an electricity grid composed of a stable base of steady output technologies (such as nuclear, coal or any combination thereof), supplemented with flexible generation capacity (such as stored hydropower or natural gas). Together, these technologies are able to match human demand perfectly.
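The combination illustrated in Figure 6 can be sketched as a toy dispatch calculation (the hourly demand values below are invented): steady output covers the stable base of demand, and flexible capacity ramps hour by hour to cover the remainder.

```python
# Toy dispatch mirroring Figure 6 (demand values invented): a steady base
# supply sized to minimum demand, with flexible capacity filling the gap.
demand = [55.0, 50.0, 48.0, 60.0, 75.0, 90.0, 85.0, 70.0]  # one notional day
base_output = min(demand)   # steady technologies run flat at this level

# Flexible generation covers whatever demand exceeds the steady base.
flexible_output = [max(d - base_output, 0.0) for d in demand]
total_supply = [base_output + f for f in flexible_output]

assert total_supply == demand   # supply matches demand in every period
print(flexible_output)
```

The base never ramps while the flexible component absorbs all the variation, which is why the flexible technologies carry the shortfall risk discussed above.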