Recent U.S. blackouts in Texas and California, along with numerous international blackouts, are prompting a reconsideration of the reliability and resiliency policies of the electric power system. One element under discussion is whether the loss of load probability (LOLP) or loss of load expectation (LOLE) criterion of “one-day-in-ten-years” or “one-time-in-ten-years” should be replaced, perhaps by an expected unserved energy criterion or a set of other reliability metrics.
The deficiencies of the LOLP/LOLE criterion are well known and were raised many years ago, among them that the criterion does not account for the magnitude of power outages. Other significant deficiencies were identified as well. In April 2001, more than twenty years ago, I published the following in the Electricity Journal under the heading “The LOLP Criterion Should Not Be Used for the Basis of Policy”:
The first limitation of the LOLP criterion is that, because it is defined by the assumptions made to calculate the LOLP, it is stricter or more lenient depending upon those assumptions. For example, not considering emergency generation ratings during capacity shortages makes it more difficult for a generation system to meet the one-day-in-10-years criterion than does considering these emergency ratings. There is substantial variation in how NERC regions and subregions calculate LOLP.
Second, the LOLP generation adequacy criterion does not consider the severity of a situation. The contribution to the LOLP is the same whether the system is short 1 MW or 1,000 MW. Other adequacy indices do measure the amount of unserved energy and the frequency and duration of periods in which demand is greater than supply. These indices, however, are not widely used as the basis for policy in North America.
A third limitation of the LOLP criterion is that it is not a useful index of widespread blackouts. Understanding quantitatively the contributors to widespread blackouts is likely to be more important to policymakers than knowing the LOLP. For example, assume that the LOLP is 0.1 and the average amount of the load curtailment is 100 MW out of a 20,000 MW system. Now assume that the probability of a blackout of the entire system is 0.01. On an expected MW basis, the blackout is 200 MW, 20 times greater than for load curtailment, which is 10 MW. Furthermore, recovering from a blackout is likely to take longer than would restoring specific parts of a system that were disconnected in a controlled manner. A blackout is also likely to have a higher cost per MWh of unserved load than a controlled and limited curtailment. For example, riots may be less likely to occur during a controlled and limited curtailment than during a blackout. Many factors contribute to the probability of blackouts in addition to generation adequacy, including operating procedures, operator training, security requirements, and the ability of the system to withstand transients. These factors are not incorporated into existing adequacy models and their LOLP calculations. To the extent that policy formation considers these factors, it does not do so formally, which may not result in consistent and rational policies.
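To make the contrast in the quoted example concrete, the following is a minimal sketch in Python. The probabilities and megawatt figures are taken from the passage above; the hourly shortfall series and function names are hypothetical, used only to illustrate how a magnitude-aware index such as expected unserved energy distinguishes a 1 MW deficit from a 1,000 MW deficit, which an LOLP/LOLE count does not.

```python
# Illustrative comparison of LOLP-style event counting versus magnitude-aware
# adequacy metrics. Numbers come from the quoted example; the hourly shortfall
# series below is hypothetical.

def expected_unserved_mw(probability: float, shortfall_mw: float) -> float:
    """Expected unserved power = probability of the event x its magnitude."""
    return probability * shortfall_mw

# Controlled curtailment: LOLP of 0.1 with an average 100 MW shortfall.
curtailment = expected_unserved_mw(0.1, 100.0)        # 10 MW
# System-wide blackout: probability 0.01 on a 20,000 MW system.
blackout = expected_unserved_mw(0.01, 20_000.0)       # 200 MW

print(f"Expected curtailment: {curtailment:.0f} MW")
print(f"Expected blackout loss: {blackout:.0f} MW "
      f"({blackout / curtailment:.0f}x the curtailment figure)")

# A magnitude-aware index sums the actual shortfall over time, so small and
# large deficits are weighted differently, unlike an LOLP/LOLE event count.
hourly_shortfall_mw = [0, 0, 1, 0, 0, 1000, 0, 0]      # hypothetical hours

lole_hours = sum(1 for s in hourly_shortfall_mw if s > 0)  # counts events only
eue_mwh = sum(hourly_shortfall_mw)                         # weights by size

print(f"Loss-of-load hours: {lole_hours}")
print(f"Expected unserved energy: {eue_mwh} MWh")
```

Both toy series register the same number of loss-of-load hours, but the unserved-energy figure is dominated by the single large deficit, which is the distinction the quoted passage draws between a controlled curtailment and a widespread blackout.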
The LOLP/LOLE criterion deficiencies are just the tip of the iceberg. The resource adequacy paradigm needs to be rethought, partly due to the changing nature of the electric power sector, but also to improve the efficacy of society’s reliability, resiliency, and adaptability policies.