A beginner’s guide to wholesale market modelling
Context – the evolving power system
Power systems all over the world are experiencing rapid and profound change. The last decade has seen an inexorable rise in the penetration of renewable generation technologies, ie, wind and solar farms. In the past, these technologies accounted for only a small fraction of total electricity supply. But now they are a critical part of our power system, and their significance will only continue to grow. These are profound changes that have consequences for every part of the energy sector.
The need for models to help us address new challenges
Against this backdrop, there is now more focus than ever on how we can better understand the outlook for the power system. Despite their many detractors, wholesale market models are in widespread use across the sector – we seek to use these models to answer a wide variety of questions, including:
- What combination of generation and transmission assets will we need to invest in to provide energy at the lowest cost?
- What role will different technologies like wind, solar, storage, and gas plant play in a future system?
- How might changes in the shape of demand alter these outcomes?
- When can we retire existing high-emissions generation without adversely impacting reliability, security and the affordability of electricity?
- What will prices be in each of the worlds that we model above?
Despite their shortcomings, models of the future power system provide us with the power to design, understand, and stress test potential system configurations without having to touch the physical system. Tools such as PLEXOS and Gurobi are now in widespread use across the energy sector.
The wholesale market model as a ‘black box’
Notwithstanding the widespread use of wholesale market models, we have learned through our work in the energy sector that there is sometimes limited understanding of how these models work, their strengths and weaknesses, and how they can be adapted to fit different purposes. People regularly use the term ‘black box’ to describe wholesale market models. This term tends to be associated with results that cannot be explained, poor visibility of the linkage between inputs and outputs, and a lack of consistency across scenarios and sensitivities.
The problem is that all too often, wholesale market models are used to produce a single, highly aggregated answer. An entire study can be boiled down to a single result: a price of $72 per MWh, a capacity factor of 15 per cent, or even in some instances an answer such as ‘Yes’ or ‘No’. Indeed, we have heard about modelling studies where advisors refuse to provide more detailed information, usually on the grounds that such information leads to ‘greater scrutiny’. This type of approach entrenches the black box view of the system in our thinking, disempowers decision-makers, and undermines the role and power of modelling.
Modelling to build intuition
But the ‘black box’ can be opened. One of the powerful elements of the models that are used in the electricity sector is that they provide us with virtually endless data that can allow us to understand outcomes better, to dig deeper into the operation of the system, and to build, challenge, and refine our intuition. Models can help us understand mechanisms that drive the outcomes we care about. For example, over the last few years the industry has developed an increasingly sophisticated understanding of wind droughts, and their interaction with firming needs. This in turn has led us to understand the significance of the availability of gas-fired generation, and the importance of constraints on the availability of gas.
Similarly, models have shown us that as we see increasing penetration of batteries, the short, sharp periods of high demand in summer that were previously the major driver of high prices will give way to longer periods of energy shortfalls in winter. To understand the solutions to these energy shortfalls, we have used models to answer the question: what is the best combination of resources to respond to wind droughts? We have learned that batteries alone cannot provide the energy we need to endure these periods. Modelling has helped us build our intuition about these events, and challenge ‘gut instinct’ which is wrong at least as often as it is right.
Intuition starts with understanding of the process
With this context in mind, we have prepared a three-part series on understanding how energy market modelling works. We seek to open the black box, and describe how these models work at a detailed level. We cannot hope to describe everything, but we will endeavour to shine a light onto some aspects of their operation that are often overlooked, avoided, and (in some cases) hidden from decision-makers.
The series is divided into three parts:
- Building the arena – capacity expansion modelling
- Operational dispatch modelling – creating a more granular picture of the future
- Defence against the dark arts – assumptions and methods that your adviser would rather not talk about.
In this article, we consider (1).
Part (1) Building the arena with capacity expansion modelling
When it comes to long-term modelling of the system, the biggest question is how the fleet of generators that makes up that system will change into the future. Given a starting point, where we have a set of existing generators, we want to answer questions like:
- When will the existing power stations close?
- When will new generation enter the system?
- What types of technologies will make up that new generation?
These models are sometimes referred to as ‘LTs’, short for Long Term models. They work by taking a set of assumptions (described below), formulating the problem as a linear program, and using a mathematical solver to identify the least cost combination of new and existing generation that can satisfy demand.
What is a linear program?
Linear programming is a branch of mathematics that seeks to find the optimal solution to a specific class of problems, ie, those where the objective and all the constraints are linear functions of the variables. It is important to understand what we mean here by optimal – this word has a very precise meaning. In linear programming, we define a function (hereafter the ‘objective function’) that we seek to minimise or maximise. In a capacity expansion problem the objective function is the total cost of operating the power system – a function we seek to minimise. The optimal value is achieved by the lowest cost combination of generation investment, dispatch, and any other variables we might consider that satisfies all of the constraints, such as meeting demand and delivering policy targets. We use a solver, such as Gurobi, to find the combination of variables that yields this least-cost solution. For those who are interested, a good introduction to linear programming can be found in Linear and Integer Optimization by Sierksma and Zwols.
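To make this concrete, here is a deliberately tiny linear program – two generators meeting a single hour of demand – solved with the open-source HiGHS solver via Python’s scipy library. All the numbers are invented for illustration, not drawn from any real system:

```python
from scipy.optimize import linprog

# Illustrative only: dispatch two generators to meet 100 MW of demand.
#   minimise   40*g1 + 90*g2      (variable costs, $/MWh)
#   subject to g1 + g2 == 100     (generation meets demand)
#              0 <= g1 <= 60      (cheap plant capacity)
#              0 <= g2 <= 100     (expensive plant capacity)
result = linprog(
    c=[40, 90],                   # objective: total dispatch cost
    A_eq=[[1, 1]], b_eq=[100],    # supply-demand balance
    bounds=[(0, 60), (0, 100)],   # output limits
    method="highs",
)
print(result.x)    # [60. 40.] - the cheap plant runs flat out
print(result.fun)  # 6000.0    - the minimum total cost ($)
```

The solver finds the cheapest feasible dispatch: it exhausts the $40/MWh plant first, then fills the remaining 40 MW from the $90/MWh plant. Real capacity expansion models have millions of variables, but the structure is the same.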
What is a solver?
A solver is a piece of software that implements a combination of different algorithms to find the combination of variables that yields the optimal value of the objective function. Now here I must make a confession – to me, most solvers have become so sophisticated that they are themselves a black box. Companies like Gurobi have invested decades in making their software faster and faster, and able to solve broader and broader classes of optimisation problems. Open an introductory textbook on optimisation, and you will find a description of solver algorithms that bears almost no resemblance to what commercial solvers are doing.
The critical point is that we do not (in general) need to know how a solver works in order to use one to plan the power system. It is of course highly valuable to understand how these tools work, but it is not essential to decision-making. All we need to remember is that the outputs of the solver represent the least cost way of operating the system. A pilot need not understand the millions of lines of code that help fly a plane, provided they understand the interaction between their actions, the controls, and the effect on the plane itself.
Key assumptions or parameters
So we have decided that we want to find the least cost combination of generation to satisfy demand. What information do we need to do this in the power system? The standard array of inputs is as follows:
- To capture transmission limitations, we need:
- A topological representation of the transmission network, consisting of regions and connections between those regions.
- Line limits that represent the maximum flow between regions.
- Loss equations that relate flows on the lines to losses
- To represent plant we need:
- Capacities for existing plant
- Fuel Costs
- Hydrological data (ie, inflows and reservoir assumptions) for hydro-electric plant
- Thermal efficiencies or heat rates
- Variable and fixed operations and maintenance costs
- Assumed profiles of resource availability (aka, traces) for wind, solar, and run-of-river hydro plant.
- Capital costs for any new plants that might be build candidates
- Potential locations for these prospective plants, and the depth of resources available in each location.
- To represent demand, we need:
- Traces for demand for each region of our topology.
- Assumptions about the amount of solar PV and other behind-the-meter generation (be it batteries, VPPs, or other technology) that can be mobilised to help shift load.
- Finally, to capture other constraints, we may need:
- Assumptions about renewable energy policy targets
- Assumptions about the cost of, or limits on, emissions
- Any other factors that we may deem relevant.
Beyond this there are two additional overarching assumptions that are of critical importance:
- A value of lost load or VOLL – this is the penalty that we assign in the model to a failure to have enough supply to meet demand;
- A discount rate that can be applied uniformly across all time periods in the model, to translate costs in one time period into costs in another time period.
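To make the list above concrete, the sketch below shows how such assumptions might be bundled together as plain data. Every name, file path, and number here is a hypothetical placeholder invented for illustration; real tools such as PLEXOS use far richer input schemas:

```python
# Hypothetical input bundle for a two-region toy model (all values invented).
inputs = {
    "regions": ["North", "South"],
    "lines": [  # topology, flow limits (MW) and a linear loss factor
        {"from": "North", "to": "South", "max_flow": 500, "loss_factor": 0.03},
    ],
    "existing_plant": [
        {"name": "CoalA", "region": "North", "capacity_mw": 700,
         "fuel_cost": 2.5, "heat_rate": 10.0, "vom": 4.0, "fom": 50_000},
    ],
    "build_candidates": [
        {"tech": "wind", "region": "South", "capex_per_mw": 2_000_000,
         "resource_limit_mw": 3_000, "trace": "south_wind.csv"},
    ],
    "demand_traces": {"North": "north_demand.csv", "South": "south_demand.csv"},
    "policy": {"emissions_price": 30.0, "re_target": 0.8},
    "voll": 16_000,         # $/MWh penalty for unserved energy
    "discount_rate": 0.06,  # translates costs between time periods
}
```

However the data is organised, these are the categories of information the model needs before the solver can do its work.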
Decision Variables – what the solver is solving for
At this point, the reader may be asking: why have we made no assumptions about the capacity of new generation, when existing plants will retire, the amount of generation from different sources, or the flows on interconnectors? Aren’t these the very things that we are interested in studying?
Indeed, these variables – these decisions – are our focus. But it is for precisely that reason that we have not assumed values for them. Instead, we leave it up to the solver to search and find the optimal combination of these ‘decision variables’ that yields the optimal outcome. The decision variables are:
- When existing plants retire
- What new plant is built and where
- How all plants – be they new or old – are operated
- The flows on transmission lines between regions
- The amount of unserved energy in each region
Tightening the screws with constraints
Without constraints, a linear program is trivial. But we know that there are constraints on the operation of the power system that cannot be violated, whether it be because of the laws of physics, operational restrictions on plants, or some policy objective.
There are many constraints that we could consider, but for the most part they fall into four broad categories:
Supply-demand constraint: For each region,
Generation + Net Imports = Demand – Unserved Energy
Output limits: For each generator, for every time period
Generation <= Capacity
Transmission limits: For each transmission line
Minimum Flow <= Flow <= Maximum Flow
Intermittent limits: For each wind and solar farm:
Generation <= Max Half-hourly Output for a given trace
Beyond these, there are a wide variety of further constraints that we could add to this list, including storage constraints for batteries and hydro assets, emissions budgets, and limits on the acceptable level of unserved energy. But the vast majority of constraints in a capacity expansion model take the forms shown above.
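The pieces above can be assembled into a deliberately small capacity expansion example: a single region (so no transmission constraints), two time periods, one existing plant, and one build candidate, again solved with the open-source HiGHS solver via Python’s scipy. All numbers are invented for illustration:

```python
from scipy.optimize import linprog

# Variables: x = [build, ge_pk, gn_pk, ue_pk, ge_op, gn_op, ue_op]
#   build      - MW of new plant constructed
#   ge_*, gn_* - dispatch of the existing/new plant in peak/off-peak
#   ue_*       - unserved energy, penalised at VOLL
CAPEX, C_EXIST, C_NEW, VOLL = 60, 50, 10, 10_000
c = [CAPEX, C_EXIST, C_NEW, VOLL, C_EXIST, C_NEW, VOLL]

# Supply-demand constraint: generation + unserved energy == demand
A_eq = [[0, 1, 1, 1, 0, 0, 0],   # peak period:     120 MW
        [0, 0, 0, 0, 1, 1, 1]]   # off-peak period:  70 MW
b_eq = [120, 70]

# Output limits linking new plant dispatch to built capacity: gn <= build
A_ub = [[-1, 0, 1, 0, 0, 0, 0],
        [-1, 0, 0, 0, 0, 1, 0]]
b_ub = [0, 0]

# Bounds: existing plant capped at 80 MW; everything non-negative
bounds = [(0, None), (0, 80), (0, None), (0, None),
          (0, 80), (0, None), (0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print(res.x[0])  # 70.0   - MW of new plant built
print(res.fun)   # 8100.0 - minimised total cost ($)
```

The solver builds just enough new capacity (70 MW) to serve the demand that occurs in both periods, where the new plant’s lower running cost pays back its capital cost, and tops up the peak with the existing plant. That trade-off is exactly what the objective function and constraints encode.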
Outputs – what really matters?
Having defined our objective (ie, to minimise cost), chosen our input parameters, and implemented our constraints, the next step is to hand our problem to our solver to give us the answer. But what is the answer? Many people assume that it is the value of the objective function, ie, the total cost of operating the system over the time horizon. But is this really that helpful? In fact, what we are often more interested in are the values of the decision variables that yield this optimal value. We want to know what we should build, when we should retire existing plant, and how all of these facilities should be operated. Put simply, it is the values of the decision variables that are of most interest.
There is also a trick here – it turns out that when we solve a linear program we get some additional values for free. In particular, every constraint yields a special value, called a dual variable or ‘shadow price’, that represents the effect on the objective of relaxing that constraint by an infinitesimal amount. In the case of the supply-demand constraints, this shadow price has a very particular economic interpretation – it is the additional cost to the system of serving one more megawatt-hour of demand. In microeconomic terms, it is the marginal cost of supply.
In a workably competitive market, this can be thought of as the price yielded by a capacity expansion model. When the marginal cost of building a new plant is higher than the shadow price of the supply-demand constraint (ie, the price), the new plant will remain unbuilt. But when the price exceeds that marginal cost, the new plant will be built. This is a desirable characteristic of the model: new plants are built up to the point where entry no longer earns an economic profit, and not beyond.
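For those who want to see a shadow price concretely, solvers report these duals alongside the solution. A minimal sketch with invented numbers, using scipy’s HiGHS interface: two plants meet one hour of demand, the expensive plant is marginal, and the dual of the balance constraint equals its cost:

```python
from scipy.optimize import linprog

# Illustrative one-hour dispatch: a 70 MW plant at $25/MWh and a larger
# plant at $80/MWh together meet 100 MW of demand.
res = linprog(
    c=[25, 80],
    A_eq=[[1, 1]], b_eq=[100],   # supply-demand balance
    bounds=[(0, 70), (0, 200)],  # output limits
    method="highs",
)
# The dual of the balance constraint is the system cost of serving one
# more MWh. The $80/MWh plant is marginal, so the shadow price is 80.
print(res.eqlin.marginals[0])  # 80.0
```

Note that the shadow price is a by-product of the optimisation, not something we had to model separately: the solver produces it automatically for every constraint.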
Where to from here?
So now we have a means of planning out the system. We have a model that can help us project the new plants that will be built economically, the old plants that will be similarly retired, and how all these plants will work together to ensure supply meets demand. But there have been simplifications along the way – for example, we have not accounted for the following:
- The ability for participants to bid above short run marginal cost.
- Operational factors such as ramp rates, and generator outages
- Other factors such as considering the effect of different traces for demand, wind, and solar.
One might ask why we did not simply account for these factors in our capacity expansion model. The answer is that were we to do so, the computational complexity would be prohibitive – the problem is simply too big to solve in one go, or even in large chunks. It is for this reason that the next step in a standard modelling framework is to simulate real-time dispatch – the subject of Part 2 in this series.