California Emissions and Energy History

California cut its emissions by about 12% from 2000 to 2017. Most of this reduction came from shifting from coal to natural gas.

Coal's share of California's electricity fell by about eight percentage points, from 16% in 2008 to 8%. That cut CO2 emissions by about 30 million tons per year. It took California about four years to replace the clean electricity that the San Onofre nuclear plant had been generating.

In September 2019, Los Angeles households paid 18.8 cents per kWh for electricity, 35.3 percent more than the nationwide average of 13.9 cents per kWh. California’s electricity prices have risen about 40% since 2011.

California invested over $100 billion in solar and wind from 2001 to 2017.

If California had instead ordered 20 OPR1000 reactors from South Korea, the total would have been about $100 billion even at twice South Korea's typical construction cost.

With 20 OPR1000s operating at the current U.S. national average capacity factor of 93%, and with San Onofre and Diablo Canyon still online, California could be producing about 200 terawatt-hours of clean electricity per year: more than total in-state generation in 2016, and 97 percent of in-state generation in 2017.
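The 200 TWh figure can be sanity-checked with back-of-envelope arithmetic, assuming roughly 1 GW net per OPR1000 and about 2.2 GW each for San Onofre and Diablo Canyon at an assumed 90% capacity factor (my figures, not the article's):

```python
# Rough check of the ~200 TWh claim. The unit sizes and the 90% capacity
# factor for the existing plants are assumptions, not from the article.
HOURS_PER_YEAR = 8760

def annual_twh(gw_capacity, capacity_factor):
    """Annual generation in TWh for a given nameplate and capacity factor."""
    return gw_capacity * capacity_factor * HOURS_PER_YEAR / 1000

new_builds = annual_twh(20 * 1.0, 0.93)   # 20 OPR1000s at 93% CF
existing   = annual_twh(2 * 2.2, 0.90)    # SONGS + Diablo Canyon, assumed 90% CF

total = new_builds + existing
print(f"{new_builds:.0f} TWh new + {existing:.0f} TWh existing = {total:.0f} TWh")
# -> 163 TWh new + 35 TWh existing = 198 TWh
```

Under those assumptions the total lands within a couple of percent of the article's 200 TWh figure.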

14 thoughts on “California Emissions and Energy History”

  1. Yes, not everything has to be a conspiracy; the provenance of every electron isn’t easily quantifiable.

  2. Gas peakers with a 20% capacity factor are gonna be off the table by the time climate hysteria has fully developed. (For the record, I’m pretty worried but not yet hysterical.)

    One of the properties of renewables is that the more nameplate capacity you build, the bigger the absolute fluctuations in capacity will be when weather changes things. That means that it’s very hard to rely on anything close to 100% renewables, because the more you overbuild, the more the downward fluctuations grow. You can get things to the point where the downward stuff never drops below the demand curve, but you’d need nameplate capacity that was a large multiple of max demand. It’s not economically feasible.

    Adding some baseload capacity changes things. Even if you’re not building out your baseload to minimum demand (which was kinda the old-timey definition of “baseload”), going from close to 100% renewables down to ~70% renewables, by providing 30% baseload, reduces those gross fluctuations, which means you need less renewable nameplate overcapacity.

    By reducing the gain on the fluctuations, you can fill the dips below demand with some combination of storage (which gets charged using the peaks over demand) and gas peakers. You’ll need considerably fewer peakers, and they can charge through the nose while emitting almost no carbon, because their capacity factors are tiny.

    I’m pretty sure that this is cheaper than over-building your renewables and gas peakers.
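    The overbuild arithmetic in this comment can be sketched with toy numbers; the worst-case weather fraction below is an assumption for illustration, not data:

```python
# Toy numbers for the overbuild argument: if weather can cut renewable
# output to 15% of nameplate (an assumed worst case), the nameplate needed
# to always cover demand shrinks a lot once baseload covers part of demand.
DEMAND = 100.0          # arbitrary demand units
WORST_WEATHER = 0.15    # assumed worst-case fraction of nameplate available

def nameplate_needed(baseload_share):
    """Renewable nameplate that still covers demand in the worst weather."""
    residual = DEMAND * (1 - baseload_share)
    return residual / WORST_WEATHER

print(nameplate_needed(0.0))   # ~667: nearly 7x demand with no baseload
print(nameplate_needed(0.3))   # ~467: 30% baseload cuts the required overbuild
```

    Note also that the absolute size of the weather swings scales with nameplate, which is the commenter's point about downward fluctuations growing as you overbuild.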

  3. This is why gas plants will never drop below 13% of total energy provided (with like a 20% capacity factor) over a 5 year period.
    So the cheapest solution is to plant extra trees to offset and pay the carbon tax and/or use biomethane. Maybe Allam cycle with carbon capture if that turns out to be cheap enough.
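    The 13%-of-energy-at-20%-capacity-factor figures above imply a surprisingly large gas fleet, which is easy to check:

```python
# A fleet supplying 13% of total energy while running at a 20% capacity
# factor must have nameplate capacity equal to 13/20 of average demand.
energy_share = 0.13
capacity_factor = 0.20

nameplate_vs_avg_demand = energy_share / capacity_factor
print(f"{nameplate_vs_avg_demand:.0%}")  # 65%
```

    So gas nameplate equal to about 65% of average demand would have to be kept around even though it supplies only 13% of the energy.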

  4. If all you’re doing is running your gas plants on days with bad weather, then the GHG emissions are negligible.

    But there’s a much tougher problem with the economics of renewables as you push them to be more and more of your total capacity. To get them to be sufficiently reliable you have a limited number of strategies:

    1) Massively overbuild renewable capacity, until it’s sufficiently reliable to handle up to 4-sigma weather situations. But that means that capacity factors plummet for each generator, which means they have to jack up their prices.

    2) Build out enough gas peakers to cover the 4-sigma weather events, and somewhat less renewable capacity. Same problem, though: You want capacity factors on the gas to be low, so you pay through the nose when you need them.

    3) Build out enough storage to cover the 4-sigma weather events, with still less renewable capacity. Storage looks like baseload capacity, and as you raise the amount of baseload in the grid, the variability of the renewables becomes less important. But storage is insanely expensive: currently several times the LCOE of nukes. However, this has a better economic story than #1 or #2.

    4) Build out some other kind of decarbonized baseload capacity. This is almost identical to building out storage, so if it has a lower LCOE than storage, it’s a better deal for the consumer. But all the candidates are either loathed (nukes) or science fiction (fusion, SBSP, high-scale geothermal, etc.).
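    The "pay through the nose" point in options 1 and 2 follows from fixed costs: for a plant whose costs are mostly capital, the break-even price per MWh scales inversely with capacity factor. The dollar figures below are illustrative assumptions, not from the comment:

```python
# Break-even price vs. capacity factor for a fixed-cost-dominated plant.
# The $50M/yr fixed cost and 500 MW size are hypothetical illustrations.
def breakeven_price(annual_fixed_cost, mw, capacity_factor):
    """$/MWh needed to recover fixed costs at a given capacity factor."""
    mwh_delivered = mw * capacity_factor * 8760
    return annual_fixed_cost / mwh_delivered

fixed = 50_000_000  # $/yr for a hypothetical 500 MW plant

print(breakeven_price(fixed, 500, 0.60))  # ~19 $/MWh in baseload duty
print(breakeven_price(fixed, 500, 0.05))  # ~228 $/MWh in peaker duty: 12x
```

    Cutting the capacity factor by 12x raises the required price by exactly 12x, which is why low-duty peakers and underutilized renewables both end up expensive per MWh.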

  5. “Why not just run the natgas plants”

    For the same reason you’d be trying to make solar and wind provide most of your energy – to avoid GHG production.

    Now if you have an efficient way to make a carbon neutral “natural” gas, you get to keep using existing gas power plants, which isn’t nothing. But losses for that (CO2 extraction, H2O splitting, propane or methane production) would be more than 20%. OTOH, the storage tanks and power generation equipment could be cheaper than batteries in some roles. Lots of “it depends” issues.
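    The losses in that chain compound multiplicatively. The stage efficiencies below are rough assumptions on my part (the comment only says losses exceed 20%), but they show why the round trip ends up far below 80%:

```python
# Illustrative power-to-gas-to-power efficiency chain. All three stage
# efficiencies are assumed round numbers, not sourced figures.
electrolysis = 0.70   # electricity -> hydrogen
methanation  = 0.80   # hydrogen -> methane
ccgt         = 0.55   # methane -> electricity in a combined-cycle plant

round_trip = electrolysis * methanation * ccgt
print(f"electricity back per unit in: {round_trip:.0%}")  # ~31%
```

    Under those assumptions you get back only about a third of the electricity you put in, consistent with the comment's claim that losses are well over 20%.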

  6. That $100 billion number for California is pure made-up bullshit. The linked article cherry-picked the most expensive projects ever built in California, multiplied by the total installed capacity, and called it a day.

    Pure bullshit.
    EDIT: and we STILL see fuel, operation, and refurbishment inflation adjusted costs ignored for nuclear

  7. Round trip losses for flow batteries and pumped hydro are about 20%. And of course they’re not free.
    Why not just run the natgas plants instead of buying and using storage? That will always be the question and competition.
    That changes if you implement a revenue neutral GHG tax. Pricing something is the first and necessary step to getting to the lowest cost way of reducing use of something.
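    The 20% round-trip loss mentioned above already puts a floor under storage prices before any hardware costs, since every delivered MWh consumes 1.25 MWh of generation. The $30/MWh charging price below is illustrative:

```python
# Energy cost alone of stored power: charging price divided by
# round-trip efficiency. The charging price is a hypothetical figure.
round_trip_efficiency = 0.80
charge_price = 30.0  # $/MWh, illustrative

energy_cost_delivered = charge_price / round_trip_efficiency
print(energy_cost_delivered)  # 37.5
```

    So stored power starts 25% above the charging price, and the capital cost of the storage itself stacks on top of that, which is the gap a GHG tax would have to close against the gas plants.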

  8. Already happening. Cali utilities have had negative pricing during the middle of the day since at least 2017, and price spikes at peak demand hours. It’s what happens when tons of power is generated when you need it the least, and not enough when you need it the most. Storage might be a way, but then you need a lot of subsidies; it is still extremely expensive to scale up. And it’s complicated to make use of, given there are all sorts of power areas out there with varying connection tech and pricing systems.

  9. “unspecified” is a trick to allow utilities to under-report their emissions. Unspecified is almost all imported power from another state, where the utility can’t figure out (or doesn’t want to say) what generated the power. Given it’s about 10% of the power, you basically need to add back into the energy mix an average of everything else used to make power elsewhere.

    Public Utilities Code Section 398.2(e), as amended by AB 1110, defines unspecified sources as: Electricity that is not traceable to specific generation sources by any auditable contract trail or equivalent, including a tradable commodity system, that provides commercial verification that the electricity source claimed has been sold once, and only once, to a retail consumer.

  10. If California keeps expanding solar and wind per their goals, by about 2030 there will be days in which there are hours where those provide greater than 100% of demand – and competitive bidding might drive marginal pricing to zero or maybe negative. At about that point, further investment probably ceases unless the energy can be time shifted.

    Storage is getting cheaper – but that needs to happen faster if solar/wind are to keep expanding. That can happen through increased volume, same as with solar and wind.

    So perhaps we should look at subsidies for grid battery storage, available to solar, wind and nuclear power. (Yeah, I know, CA wouldn’t do it for nuclear, but maybe at the federal level…) The former to time shift peak generation to other hours, the latter to shift over-production in low demand hours to higher demand hours.

    One cool thing is that the same battery farm could load from solar in the day, dump power during early evening hours, then load from nuclear power in the lowest demand hours of the night and dump that in the hours before solar production ramps up. That’d cut the effective costs of the battery system in half for both modes of generation. It might also encourage implementation of next gen nuclear power to better match renewable generation.
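    The cost-halving claim for dual cycling follows directly from spreading a fixed cost over twice as many delivered MWh. The figures below are illustrative, not sourced:

```python
# Storage cost per delivered MWh vs. cycles per day. The $1M/yr fixed
# cost and 100 MWh battery farm size are hypothetical illustrations.
def storage_cost_per_mwh(annual_fixed_cost, mwh_per_cycle, cycles_per_day):
    """Fixed cost spread over all MWh delivered in a year."""
    delivered = mwh_per_cycle * cycles_per_day * 365
    return annual_fixed_cost / delivered

fixed = 1_000_000   # $/yr for a hypothetical 100 MWh battery farm

print(storage_cost_per_mwh(fixed, 100, 1))  # ~27.4 $/MWh, solar-only duty
print(storage_cost_per_mwh(fixed, 100, 2))  # ~13.7 $/MWh, solar + nuclear duty
```

    This ignores cycle-life degradation, which would claw back some of the savings, but the direction of the effect matches the comment's argument.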

  11. Now superimpose a graph of residential PV penetration and a graph of lost revenues to the utility companies (from just under four percent of residences) and you have the reason for the climbing price of electricity in California. And it’s only going to get worse for them as they piss off more people, and they go solar with batteries. I predict centrally distributed power will be a sliver of what it is today in just a short twenty years.

    Another interesting fact: California households are among the most efficient. Since they use less power, actual out-of-pocket payments are on par with other states.

  12. So, in 2011 renewables were 14.1% and unspecified sources of power were 14.2%.

    The total of all renewables was less than “we don’t know, the power just comes from somewhere.”

  13. If you make a chart with an X axis of year and a Y axis of California’s GWh produced by:

    1. solar + wind
    2. nuclear
    3. all others (hydrocarbons)

    And then look to see where SONGS was taken offline, you will see that almost all of the solar + wind GWh being produced were equivalent to what SONGS had been producing.
