Search This Blog

Friday, August 5, 2011

Hey Grid, What Makes You Think You're So Smart?

On July 1, 2011, Pacific Gas and Electric (PG&E), Southern California Edison (SCE) and San Diego Gas and Electric (SDG&E) submitted their Smart Grid Deployment Plans as required by the California Public Utilities Commission.  At 290, 178 and 391 pages respectively, the plans are definitely weighty.  They also contain lots of attractive graphics, informative statistics, and interesting projections.  They estimate an implementation cost in excess of $6 billion (on top of the more than $1 billion each for PG&E’s and SCE’s Smart Meter deployment) to achieve a set of vaguely defined benefits that they estimate to be only slightly greater than the cost.  The smart grid benefits include:
·      Safe and reliable integration of renewable energy resources.
·      Integrating electric vehicle charging into grid operations.
·      Enhanced demand response.
·      Customer empowerment.
·      Improved grid reliability – reduced outages.
·      Automated and improved grid management.
·      Foundational and cross-cutting utility systems, facilities and programs necessary to continuously improve application of new smart grid technologies.  (Whatever that means.)

Behind the plans is the implication that there are massive future benefits that have not yet been identified – Smart Grid killer apps, if you will – that will do for the electricity industry what advances in cell phone technology have done for the telecommunications industry.  That promise of future market transformation and a wonderful new world brought about by the new and improved Smart Grid is the underlying driver behind this process.  Otherwise, it is just boring utility infrastructure stuff.  The problem is that the platform for developing this exciting future is being crafted by utilities and their regulators, a combination not known for innovation.  Google and Microsoft, entities with a somewhat better reputation for high-tech innovation, have recently terminated their nascent Smart Grid programs.  What’s the deal?

The problem is simply that the utility industry does not and cannot undertake the kind of risk-taking and innovation needed to make the Smart Grid anything more than an incremental improvement in utility operations.  It has nothing to do with the quality, intelligence, and forward thinking of utility management and everything to do with the regulatory compact under which utilities and their regulators operate.  In other words, it’s all Samuel Insull’s[1] fault.  It was Insull who promulgated the idea of natural monopolies to avoid the costly waste of duplicate electric systems and the use of regulated rates to protect consumers.  Under this simulated market model, utility companies have their regulators determine whether investments proposed and expenses incurred are just and reasonable and issue Certificates of Public Convenience and Necessity to approve proposed investments.  Utilities are then allowed to recover their costs and a return on equity for their capital investments.  Thus, utility focus is not on offering consumers the most value or developing competitive advantage through innovative products, but on building ratebase and demonstrating to their regulators that their investments and expenses are reasonable and thus worthy of recovery through rates.  In other words, utility earnings are a function of how much stuff they can convince regulators it is reasonable for them to own.  This is not a recipe for the innovation and risk-taking required to develop new and better products or services or for finding new needs or wants that the Smart Grid can fulfill.  It is, however, a great way to build ratebase in new ways and to pass the cost to consumers based on potentially illusory – but reasonable – anticipated benefits.  Upside may be limited to an authorized rate of return on equity, but even failed investments or massive cost overruns can be charged to consumers provided they are found to be just and reasonable.  Utilities have no incentive to think outside the box, but plenty of incentive to make sure the box is very well built using the finest, most reasonable, materials.  Is it any wonder that Google and Microsoft decided to take a pass?

What can we expect from utility Smart Grid activities?  The most likely thing is capital investments that reduce operating expenses.  This is consistent with the initial Smart Grid investment – the Advanced Metering Infrastructure, aka smart meters.  While utilities wax poetic about empowering customers and facilitating demand response, the primary benefits of the new metering infrastructure are eliminating the need for meter readers and gaining the ability to turn electric service on and off remotely.  Both replace pass-through expense items (employees) with rate-based investments in meters.  On top of that, California utilities will be able to continue to earn a rate-base return on the old – not smart – meters that are being replaced.  They were, after all, a reasonable investment when purchased.  Other Smart Grid investments in distribution and transmission automation will be justified based on things like reduced outage rates and durations – soft benefits, but reasonable.

The bottom line is that as long as the Smart Grid is in the realm of utility companies and regulatory agencies, its impacts will be limited and the transformative “killer apps” will remain nowhere to be found.  Remember, when cell phone licenses were issued in the early 1980s, at least two licensees were allowed in each area and only one could be a regulated telecommunications company.  It was this competition, not the reasonableness of cell phone services, that produced the innovation and amazing success of the industry.  We have a long way to go before that kind of competition comes to the electric utility industry.  But at least now I can find out how much electricity my house uses each hour, information I never realized I needed before, but may soon be able to access from my smartphone.


[1] Insull started as Thomas Edison’s private secretary in 1881, developed the regulated utility industry model into a thirty-state empire, and was indicted and tried for fraud in the 1930s, only to be acquitted by a jury that needed just two hours of deliberation to reach its verdict.

Friday, July 15, 2011

Googling Electricity


Google recently announced that it would be providing a $280 million equity investment in SolarCity to support SolarCity’s residential solar lease program.  SolarCity claims that the cost of its 20-year solar PV system lease will be lower than the electricity bill savings over the life of the lease.  The prospect of a rooftop solar system providing electricity at a lower cost than the local electric company is a potential game changer that I had to investigate.  I was surprised to find out that by putting a leased solar system on my rooftop I can get electricity for 20 years at a fixed price of about 12¢ per kWh.  How could this be?  Does it portend the beginning of the end for the electric utility industry?  Was Jerry Brown aware of this when he made a campaign promise of 12,000 MW of distributed generation in California?  So I decided to sharpen up the spreadsheet and find out.

How Does It Work?
There are currently at least three companies offering residential solar leases or power purchase agreements in my area.  They all offer variations on the same concept:  a zero-down-payment, ten- or 20-year lease with payments (per month or per kWh) that increase by a fixed percentage (2.5 to 4%) each year.  The lease can be converted to a fixed payment for the entire term by making an up-front payment of 5% or more of the net purchase price, or can be entirely prepaid.  The effective annual interest rate for deferring payment is over 13%, which takes a serious bite out of the savings and makes prepayment attractive.  The cost of the prepaid 20-year lease came out to under $3.50/watt DC, including a 20-year equipment warranty and a meaningful[1] annual production guarantee.  The maximum cost of electricity (payment / total guaranteed production) is 13.2¢/kWh, compared to my current average electricity rate of 17.5¢/kWh, a price which has increased an average of 5.3% annually over the last 10 years.  The utility’s net metering program, which credits peak period (daytime) production at higher rates, results in an actual bill reduction of 27.5¢/kWh, making the solar investment look even better.  Should utility rates continue to increase at 5.3%, the savings would average almost 48¢/kWh.
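
For anyone who wants to check the math, here is a quick back-of-the-envelope sketch using the numbers quoted above.  The 4 kW system size is purely an assumption for illustration; the per-kWh results do not depend on it.

```python
# Rough check of the prepaid solar lease economics quoted above.
# The 4 kW (DC) system size is an assumption for illustration only;
# the per-kWh figures do not depend on it.

system_kw_dc = 4.0                 # assumed system size (kW DC)
prepaid_cost_per_watt = 3.50       # $/W DC, prepaid 20-year lease
lease_cost_per_kwh = 0.132         # $/kWh, payment / total guaranteed production
utility_rate_now = 0.175           # $/kWh, current average utility rate
utility_escalation = 0.053         # historical annual rate increase
bill_credit_now = 0.275            # $/kWh, actual bill reduction under net metering
years = 20

prepaid_cost = system_kw_dc * 1000 * prepaid_cost_per_watt    # total lease payment
guaranteed_kwh = prepaid_cost / lease_cost_per_kwh            # implied guaranteed production

# Average utility rate and bill credit over 20 years if the 5.3% escalation continues
avg_rate = sum(utility_rate_now * (1 + utility_escalation) ** t for t in range(years)) / years
avg_credit = sum(bill_credit_now * (1 + utility_escalation) ** t for t in range(years)) / years

print(f"Prepaid lease cost:            ${prepaid_cost:,.0f}")
print(f"Implied guaranteed production: {guaranteed_kwh:,.0f} kWh over {years} years")
print(f"Average utility rate (20 yr):  {avg_rate*100:.1f} cents/kWh")
print(f"Average bill credit (20 yr):   {avg_credit*100:.1f} cents/kWh (vs. the 13.2 cent lease cost)")
```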

Is It Really That Cheap?
This can’t be right; there must be hidden subsidies involved somewhere.  In fact, the difference between the total cost of the system and the prepaid lease cost is about $2.70/watt – over 40% of the total system cost of $6.20/watt.  And this is without attaching any value to the performance guarantee or the 20-year warranty, which includes anticipated replacement of the inverter after a dozen years or so.  The reduction includes the California Solar Initiative (CSI) rebate – $.30/watt, a 30% Federal investment tax credit – $1.76/watt, and lease savings – $.64/watt.  How can it possibly cost less to lease a system for 20 years (after which they either give it to you or uninstall it and return your roof to its pre-solar condition) than to buy it?  Two things – depreciation and RECs.  Google – the equity investor – gets to depreciate its entire cost of the systems it leases in the first year, which translates to a 30% income tax reduction off the top.  Assuming a 100% equity investment and all the other subsidies, Google’s net cost of the system is $2.16/watt.  That leaves $1.34/watt for operating cost and profit.  The value of the Renewable Energy Credits (RECs) is a bit more difficult to estimate.  Values from 0.5¢/kWh up to the California PUC’s price cap of 5.0¢/kWh have been suggested, which over 20 years equates to between $.13 and $1.30/watt.  And, of course, it will be much easier for Google to extract that value from $280 million in solar investment than it would be for anyone with a small residential sized system.  So, Google gets a healthy return on its green investment, I get to save money on electricity while demonstrating my green chops, and SolarCity gets to create jobs and support US-made solar equipment.  Everybody wins.
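
Here is a rough sketch of how the per-watt pieces quoted above stack up.  The lifetime production figure is inferred from the lease numbers rather than taken from anyone's paperwork, so treat it as an assumption.

```python
# How the per-watt figures quoted above fit together (all $/W DC).

retail_cost   = 6.20   # installed cost of the system
prepaid_lease = 3.50   # what the customer pays for the 20-year prepaid lease

csi_rebate    = 0.30   # California Solar Initiative rebate
federal_itc   = 1.76   # 30% federal investment tax credit
lease_savings = 0.64   # remaining reduction attributed to the lease structure

print(f"Total reduction vs. retail: ${retail_cost - prepaid_lease:.2f}/W "
      f"({(retail_cost - prepaid_lease)/retail_cost:.0%} of retail cost)")
print(f"Sum of identified pieces:   ${csi_rebate + federal_itc + lease_savings:.2f}/W")

# Investor's side, using the figures quoted above
google_net_cost = 2.16                        # after subsidies and first-year depreciation
margin = prepaid_lease - google_net_cost      # left over for operating cost and profit
print(f"Margin for O&M and profit:  ${margin:.2f}/W")

# REC value per watt: lifetime production of ~26.5 kWh per watt DC is inferred
# from the $3.50/W prepaid lease and the 13.2 cent/kWh guaranteed cost.
lifetime_kwh_per_watt = prepaid_lease / 0.132
for rec_price in (0.005, 0.05):               # $/kWh, low and high REC value estimates
    print(f"REC value at {rec_price*100:.1f} cents/kWh: "
          f"${rec_price * lifetime_kwh_per_watt:.2f}/W")
```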

So Who Pays?
Obviously, if I’m paying $3.50/watt for a solar system that costs $6.20/watt to install and SolarCity and Google are profiting, someone else is picking up the tab.  For that we can thank the American taxpayers and California utility customers.  Federal tax credits and accelerated depreciation cover more than half of the retail cost of the system.  In a less “climatically correct” environment these would be characterized as tax loopholes that increase the federal deficit.  The CSI rebate, one component of a 1.5¢/kWh “Public Purpose Program” charge on California electric customers, currently covers 30¢/watt (it was once $2.50/watt).  Net metering – the ability to be credited for daytime solar generation at high peak period prices (26.5 to 48.5¢/kWh) while being charged for nighttime usage at off-peak rates (9.3 to 31.6¢/kWh) – means that every kilowatt-hour my leased solar system generates at a cost of 13¢/kWh reduces my utility charges by 27.5¢/kWh, while saving the utility only 6-10¢/kWh in fuel costs.
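
A quick per-kWh tally of who gains and who pays, using the figures above (the avoided fuel cost range comes straight from the text):

```python
# Rough per-kWh view of the net metering shift described above.

bill_credit  = 0.275          # $/kWh reduction in my utility bill per solar kWh
lease_cost   = 0.132          # $/kWh I pay under the prepaid lease (rounded to 13 cents in the text)
avoided_fuel = (0.06, 0.10)   # $/kWh the utility actually avoids in fuel cost

print(f"My net saving per solar kWh: {(bill_credit - lease_cost)*100:.1f} cents")
for fuel in avoided_fuel:
    shifted = bill_credit - fuel   # credit given to me minus what the utility actually saves
    print(f"At {fuel*100:.0f} cents avoided fuel cost, roughly {shifted*100:.1f} cents/kWh "
          f"is shifted to other customers")
```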

Is It Worth It?
Advocates of renewable electricity cite energy independence, climate change mitigation and job creation as reasons for investing in (or subsidizing, depending on your perspective) technologies like wind and solar.  As I pointed out last year here, the United States uses very little oil to make electricity.  In California the marginal fuel for power production is natural gas, little of which is imported.  As a source of GHG reduction, solar costs about $187/tonne of CO2 equivalent.  The renewable industry does appear to create jobs, however, and does so without competing with other non-government industries.  I guess we should just think of it as a 21st century version of public works projects.  Yes, I did indeed decide to buy 20 years of electricity up front.  Just doing my part for the economy.
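
The dollars-per-tonne figure can be sanity-checked with a back-of-the-envelope calculation.  The inputs below – lifetime output per watt and an average grid emission factor – are my own assumptions for illustration, not necessarily the ones behind that $187/tonne figure, but they land in the same ballpark.

```python
# Back-of-the-envelope sanity check of solar's cost per tonne of CO2 avoided.
# The inputs are assumptions for illustration, not necessarily those behind
# the $187/tonne figure in the text.

subsidy_per_watt   = 2.70     # $/W, the subsidy stack identified above
lifetime_kwh_per_w = 26.5     # kWh per W DC over 20 years (implied by the lease figures)
grid_co2_per_kwh   = 0.00055  # tonnes CO2 per kWh, roughly an average grid intensity (assumed)

subsidy_per_kwh = subsidy_per_watt / lifetime_kwh_per_w
cost_per_tonne  = subsidy_per_kwh / grid_co2_per_kwh
print(f"Subsidy per solar kWh:  {subsidy_per_kwh*100:.1f} cents")
print(f"Cost per tonne avoided: ${cost_per_tonne:.0f}")
```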


[1] You are paid 18.9¢/kWh escalating at 3.9% annually for production under the guaranteed level.

Tuesday, June 7, 2011

Know More Nukes


It seems pretty likely that the Fukushima nuclear plant accident is going to send ripples (if not tsunamis) through the nuclear power industry.  Unfortunately, as is so often the case, the ripples are likely to be misguided, focus on the wrong issues and recommend untenable solutions.  Such is the controversy over nuclear power.  Rather than focus on the emotionally-laden issues that tend to drive the nuclear debate, I’d like to look at some of the practical issues and tradeoffs that characterize nuclear power.

Nuclear energy was developed after World War II as a peaceful use for the devastating power of atomic fission.  It was based on the simple idea of using the heat generated by a controlled fission reaction to make steam that could be used to drive a turbine and make electricity.  Because it does not rely on the combustion of any fuel, nuclear power was an ideal energy source for ships – particularly submarines, which can remain submerged almost indefinitely.  When used to operate power plants, nuclear energy was characterized in the 1950s and 1960s as providing electricity too cheap to meter.  It has substantial advantages over other electricity sources, particularly coal.  Characterizing coal-fueled generation as “burning dirt” is not far from the truth.  The table below compares the annual requirements for a 1,000 MW nuclear generator versus the same size coal plant (burning Powder River Basin coal), both running baseload about 90% of the time:

Characteristic                   Coal                    Nuke
Fuel required                    5,000,000 tonnes        27 tonnes
GHG output (CO2 equivalent)      7,884,000 tonnes        0
NOx emissions                    14,000 tonnes           0
Radiation released               490 person-rem/year     4.8 person-rem/year

That’s right, a coal power plant requires over 600 tons of coal per hour, while a nuke needs to be refueled only once every 18-24 months.  It’s no wonder that nuclear power plants might have been seen as an incredible boon to the industry.  But, of course, none of the nuclear opponents are suggesting replacing the nukes with coal plants.  Instead, the solution is usually to build more renewables: they don’t pollute, the fuel is “free,” and they create more jobs.  The renewable resources of choice are primarily wind and solar.  Time for another reality check.
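
The per-hour coal figure follows directly from the annual numbers in the table; a quick check:

```python
# Quick check of the coal plant figures in the table above.

plant_mw        = 1000
capacity_factor = 0.90
hours_per_year  = 8760 * capacity_factor          # ~7,884 hours of operation

coal_tonnes_per_year = 5_000_000
print(f"Operating hours per year: {hours_per_year:,.0f}")
print(f"Coal burned per hour:     {coal_tonnes_per_year / hours_per_year:,.0f} tonnes")

# The CO2 figure in the table works out to about one tonne per MWh generated.
mwh_per_year = plant_mw * hours_per_year
print(f"CO2 per MWh of coal generation: {7_884_000 / mwh_per_year:.2f} tonnes")
```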

Wind Power

Wind turbines are probably the most widely recognized renewable resources.  They must be located in areas where sufficient wind is present and only operate when the wind is blowing.  Even sufficiently windy areas can only support generation about 30% of the time.  As a result, replacing a 1,000 MW baseload nuclear generator would require over 3,000 MW of wind generation, plus 1,000 MW of storage that can operate during the roughly 70% of the time when the wind is not available.  Wind turbines require about 30-50 acres/MW, so 3,000 MW would require about 120,000 acres, or 187.5 square miles.  This is in addition to the storage facility, which, with currently available technology, would need to be a pumped hydro facility with its own substantial footprint and cost.
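
The wind arithmetic, spelled out (using the midpoint of the 30-50 acres/MW range):

```python
# Replacing a 1,000 MW baseload nuclear unit with wind, using the
# capacity factors and land requirements quoted above.

nuke_mw, nuke_cf = 1000, 0.90
wind_cf          = 0.30
acres_per_mw     = 40          # midpoint of the 30-50 acres/MW range
acres_per_sq_mi  = 640

annual_energy_gwh = nuke_mw * 8760 * nuke_cf / 1000
wind_mw_needed    = annual_energy_gwh * 1000 / (8760 * wind_cf)
land_acres        = wind_mw_needed * acres_per_mw

print(f"Energy to replace: {annual_energy_gwh:,.0f} GWh/year")
print(f"Wind capacity:     {wind_mw_needed:,.0f} MW")
print(f"Land required:     {land_acres:,.0f} acres "
      f"({land_acres / acres_per_sq_mi:.1f} square miles)")
```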

Solar, Maybe?

Solar power is another favored alternative to nuclear power.  It has the same kind of limitations as wind, though it is possible to have storage incorporated into a solar resource (by using concentrating solar thermal and molten salt storage rather than photovoltaics).  But here again, many more MW would be needed to produce the same energy as a baseloaded nuke.  Solar can only produce energy when the sun is up and high enough in the sky to be collected.  A 25% capacity factor is typical for middle latitude installations, so about 4,000 MW of solar would be needed to provide the same energy as our 1,000 MW nuclear plant.  Solar plants need about 7 acres per MW, so about 44 square miles of land would have to be covered by collectors to displace a single nuclear unit.  Because of the negative impact of cloud cover, these plants would be best sited in relatively sunny areas like the desert.
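
The same replacement arithmetic for central-station solar:

```python
# Replacing a 1,000 MW baseload nuclear unit with central-station solar.

nuke_mw, nuke_cf = 1000, 0.90
solar_cf         = 0.25
acres_per_mw     = 7

solar_mw_exact = nuke_mw * nuke_cf / solar_cf        # ~3,600 MW; rounded up to 4,000 MW in the text
solar_mw_used  = 4000
land_sq_mi     = solar_mw_used * acres_per_mw / 640

print(f"Solar capacity needed: {solar_mw_exact:,.0f} MW (call it {solar_mw_used:,} MW)")
print(f"Land required:         {land_sq_mi:.0f} square miles of collectors")
```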

Natural Gas Generation

The most likely replacement for nuclear power is natural gas combined cycle generation.  It is a fossil fuel that is combusted in the generation process, but because combined cycle generation is more efficient than steam cycle generation (under 7 MMBtu/MWh versus 10 MMBtu/MWh for coal) and natural gas is less carbon intensive than coal (117 lb CO2/MMBtu versus 213 for coal), it produces about 40% as much CO2 per unit of electricity as coal.  Natural gas is much cleaner burning than coal and is delivered via pipeline rather than unit train, and gas plants are more scalable than coal or nuclear generation.  Thanks to advances in drilling technology that are allowing access to shale gas (yet another anathema for environmentalists), natural gas availability is increasing and prices are remaining fairly stable.  The United States could replace its entire nuclear fleet with gas-fired generation and increase total gas consumption by less than 25%, a significant increase but a potentially viable one.  The catch is that this conversion would increase GHG emissions by about 300 million tonnes per year; if the same gas were used instead to replace coal power plants, it would reduce GHG emissions by about 450 million tonnes per year.
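
For those who want to check the gas numbers, here is the arithmetic.  The total US gas consumption figure and the roughly 1 MMBtu per Mcf conversion are my own approximations; the results land close to the figures quoted above.

```python
# Checking the natural gas numbers: CO2 per MWh and what replacing the
# US nuclear fleet (or coal) with combined cycle gas would mean.

gas_heat_rate,  gas_lb_co2_per_mmbtu  = 7.0, 117     # MMBtu/MWh, lb CO2/MMBtu
coal_heat_rate, coal_lb_co2_per_mmbtu = 10.0, 213
lb_per_tonne = 2204.6

gas_lb_per_mwh  = gas_heat_rate * gas_lb_co2_per_mmbtu      # ~819 lb/MWh
coal_lb_per_mwh = coal_heat_rate * coal_lb_co2_per_mmbtu    # ~2,130 lb/MWh
print(f"Gas CO2 vs coal CO2 per MWh: {gas_lb_per_mwh / coal_lb_per_mwh:.0%}")

# Energy from the ~101,000 MW US nuclear fleet at a 90% capacity factor
nuke_mwh = 101_000 * 8760 * 0.90

gas_increase_tonnes = nuke_mwh * gas_lb_per_mwh / lb_per_tonne
coal_savings_tonnes = nuke_mwh * (coal_lb_per_mwh - gas_lb_per_mwh) / lb_per_tonne
print(f"Replacing nukes with gas adds: {gas_increase_tonnes/1e6:,.0f} million tonnes CO2/year")
print(f"Replacing coal instead saves:  {coal_savings_tonnes/1e6:,.0f} million tonnes CO2/year")

# Gas required, assuming roughly 1 MMBtu per thousand cubic feet (Mcf)
gas_tcf = nuke_mwh * gas_heat_rate / 1e9
print(f"Gas required: about {gas_tcf:.1f} Tcf/year, against total US consumption of roughly 23-24 Tcf")
```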

Distributed Generation

For those who prefer to think outside the central power plant box, distributed generation is the answer.  A combination of rooftop solar with storage for residential loads could have some promise, though it would be a major undertaking.  A fairly large residential solar system would cover about 800 square feet of roof and produce a maximum of 10 kW.  Being fixed panels, these systems would produce less energy than a centralized tracking system, with approximately a 20% capacity factor.  That would mean about 5,000 MW of rooftop capacity to replace a baseloaded nuclear plant, or about 500,000 rooftops.  Local battery storage could be accomplished with a battery pack about the size of one used for an all-electric car, about 70 kWh.  Replacing the entire 101,000 MW of nuclear generation in the US would require solar panels on about 50 million roofs, with a comparable number of electric car batteries.  Should electric car batteries and photovoltaic panels continue to drop in price, this could become a viable option – in a decade or two.  By piggybacking on electric vehicle development, this approach could actually significantly reduce reliance on imported oil.  Should fuel cells ever become a cost-effective alternative, they could also prove to be a game-changer.
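
The rooftop arithmetic, spelled out:

```python
# Rooftop math: replacing baseload nuclear with 10 kW residential systems.

roof_kw, roof_cf   = 10, 0.20
nuke_mw, nuke_cf   = 1000, 0.90
us_nuke_mw         = 101_000

rooftop_mw_needed = nuke_mw * nuke_cf / roof_cf      # 4,500 MW; rounded to 5,000 MW in the text
roofs_per_nuke    = 5000 * 1000 / roof_kw            # using the rounded figure
roofs_for_fleet   = us_nuke_mw / nuke_mw * roofs_per_nuke

print(f"Rooftop capacity per 1,000 MW nuke: {rooftop_mw_needed:,.0f} MW (call it 5,000 MW)")
print(f"Rooftops per nuke:                  {roofs_per_nuke:,.0f}")
print(f"Rooftops to replace the US fleet:   {roofs_for_fleet/1e6:.1f} million")
```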

The Real Problems with Nuclear Power

When they work the way they’re supposed to, nuclear power plants are very impressive.  They don’t pollute the air, don’t create greenhouse gases, require virtually no fuel, can fit in a fairly small space, and like to run flat out all the time.  They put less radiation into the atmosphere than coal and produce vastly smaller quantities of waste.  Their primary problem, when operating as designed, is the large amount of heat that must be removed from the process.  Nuclear plants produce steam at a lower temperature and pressure than generators that rely on combustion.  As a result, more low-temperature heat must be rejected from the steam cycle to achieve efficient operation.  That is why nuclear plants have those huge iconic hyperbolic cooling towers, or are located adjacent to bodies of water into which they transfer heat.  The amount of heat they transfer can impact local ecosystems, not to mention the organisms destroyed in pumps and screens as they are sucked through the cooling system.  The US EPA and at least one state (California) are developing regulations to reduce or mitigate the impacts of this once-through cooling process.
Another problem with nuclear power is what happens when things aren’t working the way they’re supposed to.  Because of the potential consequences when something does go wrong, nuclear power plants are pretty much uninsurable.  Instead, governments legislate liability limits for nuclear plant owners or take responsibility beyond a certain level.  While this has been necessary to make investment in nukes commercially viable, it eliminates, or at least mutes, the price signals that would otherwise encourage engineering changes to reduce the risks associated with something going wrong.
What about the radioactive waste generated by these plants?  While the “preferred” solution of hauling spent nuclear fuel to a geologically stable location where it can be stored for the thousands of years needed for it to reach safe levels of radioactivity has not come to pass, dry cask storage systems make it possible for a nuclear plant to store all its spent fuel on site in a passively safe manner.  According to the World Nuclear Association[1], worldwide, there are about 270,000 tonnes of used nuclear fuel currently in storage with an additional 12,000 tonnes added annually.  Compare this to the 125 million tons of combustion by-products produced annually by coal power plants in the US.  Nuclear plants could actually store all their spent fuel on site for their entire operating life in containers that can be safely shipped to centralized storage or reprocessing facilities when and if they become available.

The Bottom line

Nuclear energy is an attractive base load generating resource that can produce large amounts of electricity without the pollution problems and global warming impact of plants that rely on combustion of fossil fuels.  Nukes require much less real estate than solar or wind generation and provide a much more predictable energy supply than these intermittent resources.  When the smaller scale, passively safe, factory built nuclear generators currently under development are licensed and become available, they may have an important role to play in our energy future.

Monday, March 21, 2011

Subsidizing Electricity Storage


One of the unique characteristics of the electricity grid is that it epitomizes the concept of just in time delivery.  The amount of electricity generated must exactly equal the amount consumed moment to moment.  Basically, whenever a light switch is turned on a generator somewhere has to increase its generation – ramp up – to provide the needed power.  That’s why we have all these fancy control rooms with computerized map boards like the one shown below.



That’s also why the prospect of non-dispatchable variable generators like wind and solar makes system operators nervous.  Sudden shifts in the wind or moving clouds can cause rapid and unanticipated changes in generation, which in turn requires other dispatchable generators to be available to increase or decrease their production to balance the changes blowing in the wind.  As the amount of variable generation increases, the potential magnitude of the balancing challenge increases with it.

One potential solution to this is to develop some kind of advanced storage mechanism that can store excess generation and then release it when it is needed.  On one level there is nothing new here.  Some would argue that fast response gas turbines perform a storage function – storing ancient sunlight in the form of natural gas and releasing the energy in the form of electricity when needed.  Pumped storage hydroelectric facilities serve the same purpose – pumping surplus electricity uphill and then having it flow through turbines when needed to generate power.  Other technologies, like compressed air energy storage, can serve the same purpose.  But are they enough?
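
For the non-operators, here is a toy illustration of what “store the excess and release it when needed” actually means.  The hourly numbers are made up purely for illustration.

```python
# Toy illustration of storage balancing variable generation against load.
# The hourly numbers below are made up purely for illustration.

load = [900, 950, 1000, 1050, 1000, 950]        # MW demanded each hour
wind = [1200, 800, 400, 1100, 300, 1300]        # MW of variable generation each hour

storage_mwh, max_storage_mwh = 500.0, 2000.0    # current and maximum stored energy

for hour, (l, w) in enumerate(zip(load, wind)):
    surplus = w - l                             # positive: charge storage; negative: discharge
    if surplus >= 0:
        charge = min(surplus, max_storage_mwh - storage_mwh)
        storage_mwh += charge
        action = f"store {charge:.0f} MWh (spill {surplus - charge:.0f})"
    else:
        discharge = min(-surplus, storage_mwh)
        storage_mwh -= discharge
        shortfall = -surplus - discharge        # would have to come from dispatchable generation
        action = f"discharge {discharge:.0f} MWh (other generation {shortfall:.0f} MW)"
    print(f"Hour {hour}: load {l} MW, wind {w} MW -> {action}; stored {storage_mwh:.0f} MWh")
```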

Advocates of advanced storage argue that battery- and flywheel-based technologies are the answer because they can respond quickly and be built most anywhere.  Like photovoltaic panels, however, they need special treatment and subsidies to prime the pump and make them cost-effective.  Storage advocates have been successful in getting the California legislature to pass a law (AB 2514) requiring the CPUC to “Consider the Adoption of Procurement Targets for Viable and Cost-Effective Energy Storage Systems,” which has resulted in a rulemaking (R.10-12-007) to do just that.  Advocates claim that the fast turnaround of these devices, which allows for fast ramping, overcomes their energy limitations and somehow provides an improvement over the gas turbines and regulation resources that are currently used.  Others see a blatant attempt to get special treatment and subsidies for a “climatically correct” technology that would otherwise not be competitive.  The reasoned approach would be to identify renewable resource integration needs, specify ancillary services products to meet those needs, and then let the market decide which technologies do the best job of meeting the needs.  If fast ramping is needed and these new technologies are the best way to provide the service, there will be a demand without special carve-outs or subsidies.

Monday, February 21, 2011

Competitive Transmission Providers?


One of the hot topics in California at the moment is the CAISO’s apparent favoring of incumbent utilities in approving new transmission projects.  Proponents of alternative transmission system developers claim that the ISO’s purported preference diminishes competition and ultimately increases cost to consumers.  They claim that they can build transmission facilities better, cheaper and faster than the big incumbent IOUs (Pacific Gas & Electric, Southern California Edison and San Diego Gas and Electric), and that the ISO is unfairly discriminating against the competitive developers just to keep the utilities happy.  If you look back at the development of non-utility generation and the merchant generation business, the competitors have a good point.  Over the last dozen years or so, the competitive market has put downward pressure on generation development and operating costs, shortened the development process, and shifted risk from the utility ratepayer to the developer.  This has been good for the generation business, so surely it would benefit the transmission side of the business.

There is no doubt (except maybe in the “Southern” states and among the APPA) that the competitive generation business has been good for consumers, so we should strive to bring competition to other aspects of the business as well, right?  Maybe not.  The generation business clearly benefited from reduced costs and increased efficiencies brought about by competition.  However, the primary reason was not just increased efficiency compared to bloated vertically integrated utilities (though that certainly helped), but a completely different profit paradigm.  Merchant generators make their profits from selling energy at prices higher than their costs.  The most effective way to increase profits is thus to reduce costs and increase efficiency over both the short and long term.  This encourages practices like hedging gas price risk and minimizing heat rate.  Utilities, on the other hand, were able to pass through “reasonable” expenses and recover a specified return on equity.  In other words, the more they invested in rate base, the more earnings they were able to return to their shareholders.  This peculiar regulated cost-of-service ratemaking was a function of the “natural monopoly” compact that was developed by Samuel Insull and implemented in the early 1900s.  It was very effective for most of the century as electrification spread and marginal costs decreased.  But when competition is an option, it makes little sense to reward companies for convincing regulators that they should build more stuff.

Unfortunately, that’s where we are in the transmission part of the business.  Because of the interconnected and integrated nature of the transmission grid and the variety of impacts a transmission upgrade may have, it is not really feasible to charge for usage.  Also, the operating costs of a transmission line are trivial compared to the capital cost to build it, so reduced operating costs have an insignificant impact.  Then there’s the fact that transmission covers a huge geographic area, making franchise agreements and access to eminent domain important characteristics.  So what exactly is it that “merchant” transmission developers have to offer that makes them better suited to develop transmission projects?  Virtually all rely on the “go to FERC and get a guaranteed rate of return approved and have the ISO include the costs in its transmission rates” model, which bears a very strong resemblance to the utility model they’re proposing to replace.  Some might argue that it’s just a different set of shareholders.

Thursday, February 10, 2011

Solar PV - Dis-economies of Scale?

Some recent announcements from Southern California Edison (SCE) appear to warrant more than a little head scratching.  SCE recently announced contracts for the purchase of energy from two different sets of sources.  On January 31, 2011, SCE filed the contracts it had announced in November for 239 MW (567 GWh/year) of solar PV projects resulting from its Renewable Standard Contract (RSC) program.  The 20 projects are all between 4.7 and 20 MW, are for 20-year terms, are scheduled to come on line between April 2013 and April 2014, and are all priced below the 2009 Market Price Referent (MPR) ($108.98/MWh for contracts starting in 2013 and $112.86/MWh for contracts starting in 2014).  This would certainly appear to support assertions that PV prices are coming down dramatically and could soon be competitive without massive subsidies.
Also in January, SCE announced that it had executed seven contracts totaling 831 MW for solar PV resources between 20 and 325 MW each, coming on line from 2013 through 2016.  These contracts are all priced ABOVE the very same MPRs that the smaller RSC contracts are below.  How could it be that smaller projects using the same technology are less costly on a per unit basis than larger projects?  I’ve no idea, and not being privy to the confidential pricing terms of the contracts, I can only guess.
The first issue is one of scale – PV installations in the multi-MW size range all pretty much use the same panels, inverters, transformers and other equipment.  A larger project that interconnects at transmission voltage (115-230 kV) will require an extra step of transformation and more costly interconnection facilities than a smaller project that interconnects at distribution voltage (12-33 kV).  Review of SCE’s filing of the RSC projects does show that they are almost all interconnected at distribution voltage.  Since distribution facilities cannot handle anything much larger than 20 MW (if that), larger installations would have a “dis-economy” of scale based on interconnection voltage.
Another issue relates to parcel size and permitting.  At roughly 10 acres per MW, a 200 MW project requires a huge tract of land – over three square miles!  Such a project is likely to get much more attention in the permitting process than a 10 MW project that “only” needs about 100 acres.  It is also more likely to have a significant environmental impact and require both more complex permitting and greater mitigation costs.  Here again size is a dis-economy.
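The land comparison is simple arithmetic:

```python
# Land footprint comparison at roughly 10 acres per MW of PV.
acres_per_mw = 10
for mw in (10, 200):
    acres = mw * acres_per_mw
    print(f"{mw:>3} MW project: {acres:,} acres ({acres/640:.2f} square miles)")
```
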
Then there is the cost of capital.  A 200 MW project will cost upwards of a billion dollars.  Raising that much capital probably incurs a higher cost and may also require a higher percentage of equity to secure debt financing.  Both would tend to increase the unit cost of the project compared to smaller projects.
Then there is the PPA negotiation process.  SCE developed the RSC (and the CPUC approved the related Renewable Auction Mechanism, aka RAM) as a standardized contract with limited room for negotiating terms and conditions.  Bidders are required to compete on price alone, head-to-head against other potential projects, for a contract.  They are likely to offer their lowest and best price into the solicitation.  Large RPS contracts, on the other hand, often require months of negotiation and in many cases go through upward pricing adjustments before they come to fruition.  How much this adds to the final price is hard to say.
The conclusion drawn from these interesting submissions is that an increased reliance on distributed PV generation – whether on rooftops, vacant lots, or surplus agricultural land – may not prove to be more costly than the huge projects built in the middle of nowhere.  Of course, the smaller projects do not require the massive transmission projects needed to export power from the middle of nowhere.  That means less utility ratebase – often receiving incentive rates of return – available to benefit IOU shareholders, the very same IOUs that are negotiating PPAs with the massive projects.  Interesting coincidence.

Tuesday, January 4, 2011

2011 Predictions

It's time once again to prognosticate expected changes in the coming year.  With a "new" governor in Sacramento, things could get interesting.  However, Governor Brown can probably be expected to stay the course regarding renewables policy, though he's less likely to veto "California-centric" 33% RPS legislation.  His stated energy policies are virtually identical to his predecessor’s, so there is little reason to expect significant changes, particularly as he focuses on resolving the state’s fiscal crisis.  Brown does have two CPUC Commissioner positions to fill, as well as one CEC Commissioner and two CAISO Governors.  How soon he fills those positions – and who he appoints – should give an idea of whether he intends any early redirection of energy regulation.

CPUC
Departed Commissioners Bohn and Grueneich represented the “extreme” positions on the CPUC (though with most Commission votes at 5 to 0, the distance between the extremes is not that great).  The Governor’s choices to replace them should give an idea of whether he plans a significant shift in CPUC policy.  Since the Commission has generally mirrored the Governor’s stated energy policy, there is no reason to expect any significant changes.  The current flow of events does, however, suggest some potential opportunities and challenges in 2011.

Resource Adequacy 
The CAISO has started the ball rolling toward reconsideration of some kind of centralized capacity market by asking the Commission to consider load following characteristics for resource adequacy.  The need to develop renewable integration products and maintain some level of viability for non-contracted conventional generation resources should provide new life for a capacity payment mechanism.  The key to consideration of a capacity mechanism will be in emphasizing the need for resources available to integrate renewables.  If carefully handled, acknowledging the need for payment mechanisms to offset energy market price reductions could provide the basis for a new capacity market, though it will take 18 to 24 months to get anything adopted.

Long Term Procurement Planning
The first quarter will be the time to establish an out-of-the-box approach for new generation development.  IOU plans are likely to show minimal need for new resources.  The combination of slow economic recovery (less load growth[1]), optimistic RPS and CHP forecasts (2,000 MW of new CHP seems excessive), IOU PV programs, and RAM impacts is likely to focus resource acquisition on replacing OTC units and supporting renewable integration.  Absent an explicit requirement to replace OTC generation with new flexible resources, the IOUs’ bundled customer service plan is likely to call for virtually no new gas-fired generation.  To the extent that OTC resource owners are able to come up with some kind of voluntary OTC replacement plan, they may be able to use it to justify new generation.  Whether the result will be a specific OTC replacement program or some kind of RFO “bonus” for OTC replacement remains to be seen.  This could be the year that a coherent OTC replacement policy gets serious consideration.

Smart Grid
A Smart Grid administrative structure will continue to develop.  The Smart Meter brouhaha will dissipate as actual radio emissions impacts are understood.  Some sort of wired option – using landline telecom, perhaps – may become available as an alternative for complainers, at their cost.  The idea of making smart meter installation optional is not likely to get much traction unless PG&E bungles things yet again.  Expect continued talk of the smart grid as a new killer app without any real consumer impact.  IOUs (particularly SCE) will push for money for grid improvements that improve reliability and operations.  The ISO will continue to implement small changes, and third-party data entities will continue to participate with little in the way of meaningful business opportunity.

Electric Vehicles
The impact of EVs on the grid will remain trivial.  Price point and range limitations are likely to restrain enthusiasm.  The key to EV success may hinge on the development of “battery service providers” that will facilitate battery swapping or other fast “refueling” options and change the equation for EV ownership.  A shakeout will most likely evolve over the next two to four years as winning battery designs and business models become apparent.  Long term, battery standardization and a servicing model will be key to EV success.  Regulatory success will include the development of sub-metering rules that do not rely on utility ownership or control.  So far, the CPUC appears to be headed in the right direction – not attempting to put competitive EV service providers in the same category (and with the same restrictions) as electric utilities.

Renewable Portfolio Standard
This could be the year that the economic wisdom of the 33% RPS requirement begins to be questioned.  The combination of a good hydro year, continued low natural gas prices and PG&E’s negotiated General Rate Case settlement will keep current electricity prices from getting out of hand and igniting a "ratepayer revolt."  However, the next RPS solicitation (expected in the first quarter of 2011) should result in a revised – and reduced – Market Price Referent (MPR) that will make more renewable projects appear uneconomic.  A couple more large RPS projects will fail, causing much hand-wringing and questioning of the RPS paradigm.  Some big solar projects will begin construction, but by the end of the year California will not yet lead the world in installed solar generation capacity (but could by the end of 2012).  PV prices will come down a bit for smaller projects (the sweet spot is likely to be 5-20 MW thanks to IOU purchase programs).  A 33% RPS may get adopted by the legislature, though there will finally be some questioning of the cost, particularly as gas prices – and the MPR – continue to remain low.  Success of the RAM and PV programs combined with failures of large projects may cause some questioning of the reliance on huge projects.  It is possible that the IOUs will finally decide that RPS prices are too high and exercise their right to say so, slowing RPS procurement, but that probably won’t happen this year.

RECs will finally be adopted, though 33% RPS legislation could muddy the waters.

Energy Storage
“New” storage technologies will get a boost from the CPUC’s recent rulemaking (R.10-12-007), but only in terms of how much they are talked about.  Policy development may be driven somewhat by PG&E’s pumped storage application.  Battery storage coordinated with EV development could begin to gain some steam.

Utility Generation Projects
PG&E's Manzana project will get a favorable PD and be approved.  If it is not, the language regarding ratepayer-funded versus developer-funded risk may be a useful stone to hurl at the CPUC's hybrid resource development paradigm.

SCE will get to build a peaker in Oxnard.

CPUC Transmission Authorization
Questions will begin to arise about the need for transmission projects as more huge RPS projects get cancelled; anticipate delays in transmission development as projects look for ways to start building.

CEC
The CEC Infrastructure assessment will turn into a typical CEC activity - interesting but of minimal policy impact.  It will likely make use of information available from other sources and conclude that transmission is needed to access renewables.

The number of AFC cases under consideration will be significantly reduced as the rush for ARRA funding ends.  Expect no more than six or so active projects once moribund projects are removed.  Several solar thermal projects with PPAs will need to either begin the siting process, convert to PV, or fade away.

CAISO
The CAISO is in the process of moving into its new headquarters, which should keep it fairly internally focused during January.  It can expect some pushback on the 2011 Transmission Planning Process, with questions about which transmission projects are included or excluded.  Expect some changes to the transmission plan as RPS projects rise and fall over the year.  It will become more and more obvious that out-of-state RPS projects are likely to be more cost-effective and feasible than reliance on in-state resources.  If the legislature once again passes 33% RPS legislation that focuses on resources in California, it will further hamper the IOUs’ ability to meet the RPS requirement and may kick off the “it’s too expensive” argument.  The issue of a WECC-wide balancing market is likely to increase in importance, though implementation will be slow.


[1] The CEC issued a revised Short-Term Demand Forecast that estimates the CAISO peak demand will be some 2,400 to 2,700 MW below the 2009 forecast.  Current forecast values are comparable to the forecast in the 2006 LTPP decision (D.07-12-052).