Author Archives: Chris Williams

About Chris Williams

Chris Williams works with HeatSpring developing products and managing online content. Chris is a NABCEP Certified Solar PV Installer and an IGSHPA Accredited Geothermal Installer. He has installed over 300 kW of solar PV systems, dozens of residential and commercial solar hot water systems, and 50 tons of geothermal equipment. Chris is the Chairman of the Government Relations Committee of the New England Geothermal Professional Association, and he consults with renewable energy companies on sales, marketing, and design.

Why Performance and Not Price Is the Most Important Factor In Finding an Investor or Buyer for Your Solar Projects


Questions or comments? If you’re a solar developer or investor and have a story to share that relates to this article or a question about the content, please leave it in the comment section below the article.

This is a guest article by Chris Lord of CapIron Inc. Chris also teaches our Solar Executive MBA. The next Solar Executive MBA session starts on September 15th. In the course, students will work a commercial solar project from start to finish with expert guidance from Chris along the way. The class is capped to provide maximum student attention, but there are a limited number of discounted seats. You can get your $500 discounted seat here.

In the Solar Executive MBA, one of the most common topics students ask about is identifying, screening, and closing investors or buyers for their solar projects.

Most commonly, students focus exclusively on finding investors who will pay the highest price per watt for their projects. In the article on the three keys to defining bankability, we discussed why this is not the best strategy. What the investor actually wants is the best return on a project, and the best returns come from projects that are economically strong and reliable.

From the developer’s perspective, choosing the wrong investor carries real risk. This article explains why it’s critical to assess the competence of investors and how developers can screen investors to find the best ones.

Enter Chris Lord from CapIron, Inc

In today’s highly competitive solar PV market, project developers looking for an investor or purchaser for their projects tend to focus almost exclusively on finding those with the lowest project return requirement or willing to pay the highest price for a project.

But is this the best measure to use when locking in your solar project upside?

This article examines the importance of purchaser performance in selecting a project purchaser and outlines ways to collect data that will enable you to assess purchaser performance.

Here is an example of how a developer lost a lot of value in a very short time by ignoring the importance of performance or execution risk when selecting a purchaser for his solar PV projects.

The developer had a mid-sized distributed generation project for sale, largely shovel-ready. The developer asked outside consultants to conduct an auction process among a select group of purchasers. With the bids in, the results were arranged in a matrix to show dollar price against execution risk.

In the matrix, shown below, execution risk was estimated based on a variety of due diligence and market intelligence assessments. The highest execution risk was assigned a ten, and the lowest execution risk assigned a one.


One of the parties added late in the process by the developer offered the highest price at $3.18 a watt, almost $0.35 a watt higher than the average of the other six bidders, and $0.26 a watt higher than the next highest bidder. Based on market experience, the consultants interpreted that as a strong sign that rumors of financial distress at the high bidder were true. The prospective high bidder was desperately trying to bolster a weak pipeline in order to attract a badly needed infusion of capital.

The consultants recommended a bidder offering a price of $2.85 a watt bolstered by the lowest likely execution risk. Focused solely on price, the developer ignored the recommendation and proceeded with the highest bidder.

After thirty days of intense negotiation on an LOI, and days before execution of the LOI, the purchaser’s parent filed for bankruptcy and the purchaser followed suit. Worse yet, when the developer turned back to the other bidders in an effort to salvage value, he found that they knew of his predicament and were inflexible on terms and soft on their original price bids. Ultimately, the developer settled for $2.82 a watt, and even that figure does not account for the legal fees and time lost negotiating a deal that never closed.

1.    Pricing vs. Performance

a.     Why the focus on Pricing?

It is not surprising that project developers zero in on price when selecting a project purchaser. Particularly for small and mid-size developers, finding every possible dollar on the sale price is critical to covering the economic uncertainties inherent in a project’s development and construction phases and generating enough capital to fund continued growth.

The overriding problem facing developers is that there is a complete and natural disconnect between project costs (development and construction) on one side, and the valuation that an investor or purchaser might place on the project on the other.

In the real world, purchasers look solely to the net cash and tax benefits that a project is expected to generate over the 15 to 25 years of its life. By discounting those net cash and tax benefits back to the present using their target return, a purchaser arrives at a price that he or she is willing to pay today for the project and related benefits.
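As a rough sketch, the discounting described above can be expressed in a few lines of Python. All figures and the 9% target return are invented for illustration:

```python
# Hypothetical illustration of how a purchaser prices a project:
# discount each year's projected net cash and tax benefits back to
# the present at the purchaser's target return. All figures invented.

def purchase_price(annual_benefits, target_return):
    """Present value of a stream of annual benefits (years 1..N)."""
    return sum(
        benefit / (1 + target_return) ** year
        for year, benefit in enumerate(annual_benefits, start=1)
    )

# A toy 20-year project: $120,000/yr in combined cash and tax benefits,
# priced at a 9% target return.
benefits = [120_000] * 20
price = purchase_price(benefits, 0.09)
print(f"Purchaser's price today: ${price:,.0f}")
```

Note that a higher target return produces a lower price today, which is why developers chase investors with the lowest return requirements.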

The investor or purchaser is completely indifferent to the capital costs of developing a project. If a developer spent more than the purchase price, the developer will lose money. Any amount of the purchase price over the developer’s costs is the developer’s return on the development capital invested in the project. Either way, the developer’s costs have no impact on the value of a project to investors or purchasers.

This sounds simple enough. But most developers must find an investor or project purchaser before construction begins, and – worse yet – the actual costs of development and construction may not be known at that point. So developers naturally steer toward the highest price offered by a purchaser, because there appears to be no downside. A higher price gives a sense of security – more margin to cover development and construction unknowns – and, should costs come in at or below projections, more profit to fund future growth.

b.    What’s the downside?

By focusing solely or primarily on price, developers overlook other critical factors – including investor or purchaser performance – that can dramatically and sometimes adversely impact price. Sometimes this risk is characterized as “execution” risk. Whatever we call it, we are talking about the likelihood and cost of actually closing the specific, targeted transaction with a particular investor or purchaser on terms and conditions (including price) reasonably close to those the parties originally expected when they executed an LOI or otherwise first “shook hands” on the deal.

Performance is important because the ability of an investor or purchaser to follow through and close a transaction in a timely and cost-effective manner can have a bigger impact on a developer’s realized value than the promise of an incrementally higher purchase price from an investor or purchaser who fails to close.

In any financing, there is always a risk that a closing fails. These risks fall into at least three main classes: market risk, developer risk, and investor risk.

Market Risk

Market risk is the risk that arises from adverse changes in general market conditions. An example of a market failure, well known to most veterans of solar development, occurred in 2008 with Lehman’s collapse that fall and the onset of the Great Recession. Most project purchasers suddenly lost their tax appetite. Almost all major banks took economic hits to income that saddled them with substantial losses, wiping out the very profits that they were counting on to create their tax appetite. As a consequence, there was very little tax appetite among investors nationwide for the balance of 2008 and much of the first half of 2009. Even when investors did return to the market, tax-driven transaction volume in 2008 was substantially below pre-Lehman projections. In fact, Congress created the Treasury’s Cash Grant program in lieu of the ITC precisely to address that issue.

Continue reading


AC Coupling – How to Cost Effectively Add Battery Back-up to Existing Grid-Tied Solar PV systems

This is a guest article by Chris LaForge.

Chris is teaching an in-depth 6-week technical training on designing battery based solar PV systems that starts in September. You can read the full description and get a limited-time discount here. If you need to learn how to design, quote, and commission a battery based solar PV array, this is the best course for you.

In the past three years, three trends have converged to create higher demand for battery-based solar arrays: battery prices are declining, the penetration of grid-tied systems is exploding, and homeowners are becoming more interested in backup power.

Retrofitting existing solar PV arrays to include batteries is becoming an opportunity for added revenue for contractors.

Enter Chris LaForge –

AC Coupling

Since the advent of high-voltage battery-free (HVBF, or grid-direct) solar electric systems, some clients have been frustrated by not being able to use their systems during power outages. The re-work necessary to move to a grid-intertied system with battery backup (GTBB, or DC-coupled system) is costly, inefficient, and, in some cases, unworkable.

AC coupling can be used in both utility-intertied systems and in off-grid applications. This article will discuss the utility-intertied aspects of AC coupling.

With the advent of AC coupling as a means to introduce battery back-up to an existing HVBF system, an efficient and more workable solution has come to the fore.

AC-coupled systems retain the HVBF system while adding a battery-based inverter that works in tandem with the HVBF inverter. The combination maintains the efficient operation of the PV system while the utility is available. During a power outage, the GTBB inverter disconnects from the grid, powers the back-up load panel, and uses the power from the HVBF system to supply the critical loads in that panel and to charge the GTBB inverter’s battery bank. If this sounds a bit complicated, well, it is.


Courtesy of Schneider Electric

AC coupling provides the following advantages over traditional DC-coupled GTBB system designs:

  • Retrofit-able with existing HVBF systems (within manufacturer requirements and limitations)
  • Allows for employing the efficiencies of HVBF equipment while achieving back-up power for utility outages
  • Can reduce the number of components used in DC coupling
  • Can reduce losses due to the low-voltage aspects of DC-coupled systems
  • Can provide for more flexible and efficient wiring configurations
  • Is well suited for designs requiring long distances between the renewable energy resources and the balance-of-system components

As with any innovation, AC coupling has some notable challenges, especially when the design utilizes multiple manufacturers.

For several years, system integrators have completed AC-coupled designs using one manufacturer’s equipment or by using multiple brands of inverters.

SMA pioneered the AC-coupled concept with its “Sunny Island” inverter, initially built to create microgrids on islands and in other non-utility environments. The design lends itself to grid-intertied AC-coupled systems as well.

As shown in the diagram below, SMA’s design allows for multiple HVBF inverter outputs to be combined with the Sunny Island inverter to connect to the utility and have battery back-up.


Courtesy of SMA America

SMA’s design provides for an elegant method of regulating the battery state of charge as long as all the inverters can be networked with cat-5 cable. In this design the HVBF inverters can have their outputs incrementally reduced as the battery reaches a full state of charge. If the distance between the HVBF components and the Sunny Island is too great to network with cat-5 cable, the Sunny Island controls the output by knocking out the output of the HVBF inverters with a shift in the frequency of the inverter’s AC waveform.  The HVBF inverter senses an out-of-spec frequency and disconnects until the frequency is back in spec for five minutes.
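The trip-and-reconnect behavior described above can be sketched in Python. This is a simplified illustration only; the frequency window and the five-minute reconnect delay are assumptions standing in for manufacturer-specific settings:

```python
# Simplified sketch of the frequency-shift interaction described above.
# The 59.3-60.5 Hz window and the 300 s reconnect delay are illustrative
# assumptions, not any manufacturer's actual settings.

FREQ_MIN, FREQ_MAX = 59.3, 60.5   # assumed in-spec window (Hz)
RECONNECT_DELAY = 300             # assumed seconds in-spec before reconnect

class HVBFInverter:
    def __init__(self):
        self.online = True
        self.in_spec_seconds = 0

    def step(self, grid_freq_hz, dt=1):
        """Advance one time step given the AC frequency the inverter sees."""
        in_spec = FREQ_MIN <= grid_freq_hz <= FREQ_MAX
        if self.online and not in_spec:
            self.online = False          # battery inverter shifted frequency: trip off
            self.in_spec_seconds = 0
        elif not self.online:
            self.in_spec_seconds = self.in_spec_seconds + dt if in_spec else 0
            if self.in_spec_seconds >= RECONNECT_DELAY:
                self.online = True       # in spec long enough: reconnect

inv = HVBFInverter()
inv.step(61.0)              # Sunny Island raises frequency to knock PV offline
tripped = inv.online        # now False: the HVBF inverter has disconnected
for _ in range(300):
    inv.step(60.0)          # frequency back in spec for five minutes
print(tripped, inv.online)  # prints: False True
```

The coarseness the article mentions is visible here: the PV output is all-or-nothing, with no way to throttle it gradually as a networked Cat-5 connection allows.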

This frequency shift method of regulating battery state of charge is often used when different manufacturers’ inverters are used to create the AC-coupled design. This has several drawbacks that we will discuss.

Several other battery-based inverter manufacturers have developed designs for using their inverters with other HVBF inverters to create AC-coupled designs. These include OutBack Power, Magnum Energy, and Schneider Electric. Both SMA and Schneider provide for single manufacturer AC-coupled systems because they manufacture both HVBF inverters and GTBB inverters. This presents the basic advantage of having one manufacturer provide and support the entire AC-coupled design.

OutBack Power and Magnum Energy manufacture only battery-based inverters and therefore require the mixing of manufacturers in AC coupling in order to bring in HVBF inverters.

Both companies provide design information and support for AC-coupled designs.

Schneider’s regulation

With Schneider Electric’s AC coupling, the battery is regulated by the frequency shift method. Schneider itself recognizes the drawback of this method in its AC-Coupling Application Note (see appendix): “Unlike its normal three-stage behavior when charging from utility grid, the Conext XW does not tightly regulate charging in a three-stage process when power is back fed through AC inverter output connection to the battery. In this mode charging is a single-stage process, and the absorption charge and float stage are not supported. Charging is terminated when the battery voltage reaches the bulk voltage settings, which prevents overcharging of the batteries. Repeated charging of lead acid batteries in this way is not ideal and could shorten their useful lifetime.”

This can be improved by adding a diversion load controller to the design. The diversion load controller limits the battery voltage by “dumping” excess power into a DC load during times of excess generation from the PV system. While this re-introduces three-stage charge regulation into the design, it negates some of the benefit of AC coupling because it re-introduces the cost of a charge controller and adds the cost of the DC diversion load(s).

Magnum’s regulation

Magnum Energy also provides for frequency shift battery regulation, but in their white paper titled “Using Magnum Energy’s Inverters In AC Coupling Applications” (see appendix) they indicate that frequency shift regulation should be used only as a backup to a diversion load controller. They are developing an innovative addition to their product line, the ACLD-40, which will provide diversion control using AC loads. One drawback of diversion load controllers is that DC loads are often difficult to find and expensive. Magnum intends the ACLD-40 to address this by allowing the use of more common AC loads, such as AC water heaters or air heaters, for diversion control. The product is in beta testing at this time and is due for release in late 2014.


OutBack’s regulation

OutBack Power’s design provides for frequency shift method battery regulation. The disadvantages to this method can again be overcome by the introduction of a diversion load controller and this comes with the same issues as with the other manufacturers.   OutBack Power’s AC coupling white paper discusses both on and off grid applications for AC coupling (see appendix).

Disadvantages to AC coupling:

  • Frequency shift methods of regulating the battery state of charge are coarse and may create significant power loss if a mismatch of equipment leads to nuisance tripping of the HVBF inverter
  • Battery optimization may not be possible without re-introducing a charge controller as a diversion load controller
  • Complexity in systems mixing manufacturers can create systems that are difficult to operate
  • Care must be taken not to void warranties by using equipment that is not designed for this application


In many ways, AC coupling is a good tool for working with both the difficulties of retrofitting battery storage in existing HVBF systems and systems with long distances between resources and loads. As with any innovation in this field, be sure to get the right design and make sure that the application does not void product warranties.






Three Keys to Developing Bankable Solar Projects – Lessons from Developing 150 MW+ of Solar Projects

Thanks to Chris Lord and Keith Cronin for providing all of the insights in this article. Chris and Keith teach our Solar Executive MBA course (the next session starts on September 15th). Together, they have advised investors, owners, and other developers on more than 150 MW worth of distributed generation solar projects. While there is no standard for defining bankable projects, I trust their on-the-ground experience to provide these insights.

Introduction – Why is Bankability Important?

The majority of large-scale US solar projects are done through power purchase agreements. The key to power purchase agreements is having investors who buy into these projects. As SREC prices and policy continue to fluctuate while project IRRs and installed costs continue to drop, project investors are most interested in investing in bankable projects that have good returns and minimal risk.

One of the most common questions we get in our Solar Executive MBA course is, “How can I build a bankable project?”

In other words, “How do I structure a project so it’s very easy for an investor to want to invest in or purchase the project?” For commercial solar, the answer is of course “it depends” because there are so many moving variables that go into projects and everything is negotiable.

In this article, we’ll define and go into three keys to developing bankable projects. Then we’ll cover what developers need to keep in mind to stop wasting time chasing bad projects.

This article will be useful to a professional who needs to get good at or keep up to date with best practices for financing mid-market solar projects. The goal is to go deeper than most articles on the Internet, but it will be impossible to provide the deep dive necessary to make you an expert. If you have any questions about the content, please leave them in the comment section of the article.

More Reading

If you’d like to get more resources on the subject, here they are. We will reference all of these articles in the article, but I wanted to provide a simple list for ease of use.

Article Outline and Learning Objectives

The article will be split into four major sections. After reading this article, you should understand the most important factors that go into developing a bankable solar project. The goal is that you’ll become familiar with these variables and will be able to start screening your existing projects to find the bankable ones and will also be better at screening new potential customers.

Defining Bankability

Technically speaking, “bankable” projects are investment-grade projects. These are the projects that are the economically strongest and most reliable (i.e. low-risk) projects. These projects are able to win the most conservative and lowest-cost capital.

An economically strong project is one that uses reasonable or conservative assumptions and documented facts to create a healthy economic cash and tax flow that comfortably hits or exceeds the investors’ target IRR.

Economic reliability is based in part on the strength of the developer/construction entities and on the confidence in any state subsidy, but it is also heavily based on the quality of the design and construction of the project. Investors want to know that the project was built to a rigorous standard so its economic performance can be reliably predicted to actually generate the forecast numbers in the pro forma over a twenty- or twenty-five-year term. That is a long time by anyone’s standards. Imagine if a project were a car. In twenty years, do you expect to be driving the same car you are driving today? Probably not. A high-quality project is not a project built to meet minimum performance standards at the lowest possible cost.

  • Economic performance is most commonly addressed by establishing production guarantees. Oftentimes, investors will negotiate for the developer or EPC to guarantee a certain level of production; this is especially true if the equipment is being financed under a PPA and not a lease.
  • The other item that impacts economic performance in modeling is the use of P50 vs. P90 production levels. Investors will typically want to use P90 production numbers because they are the most conservative. Read more about production modeling 101 here.

First Key to Bankability – Understand How Investors Evaluate Projects

Solely on a 20-year discounted cash and tax basis.

Investors value a project based solely on the cash and tax benefits that will flow from the project. Similar to how you might value an annuity, they are paying good cash for the right to receive the cash and tax benefits from a project. Project the annual benefits over a twenty- or twenty-five-year term, and discount each year’s value back to the present using your target return rate.

This valuation method creates a problem for developers.

  1. Developers’ first instinct is to cut construction costs to the bone. Why not? After all, the difference between the development/construction cost and the sale price to the investor is all margin for the developer.
  2.  Projects are also judged on their quality – performance and reliability. In fact, successful developers have learned to fight the instinct to indiscriminately attack costs and to focus instead on managing costs intelligently with an eye to longer-term value.
  3. The key learning here is that it’s not the project with the lowest installed costs that wins; it’s the project with the highest returns. This takes into consideration installed costs, the amount of power that an array will provide, and the confidence that the installed costs and power production will be very close to what’s expected.

A Second Key to Bankability – Have a Strong Economic Model

It’s key for commercial solar projects to have a comprehensive economic model. You need to know what kind of return you are really offering your investors before you show them the project.

It’s okay to start with a simple model for initial project screening and early development, but the sooner you move to a comprehensive and robust model the sooner you know where your project’s strengths and weaknesses are so you can then develop the project accordingly.

It is extremely important to use reasonable assumptions on all variables of the project economics. This includes: installed costs, PPA price, sales tax, property tax, interconnection costs and timelines, and SREC prices.

It’s important to lock in the “knowns” or “facts” of a project. This means variables that are documented and that you are close to 100% certain of their value. Be clear about which variables in the project are known and unknown.

Comprehensive and accurate documentation of variables is essential. The fastest way to lose an investor’s trust is to fail to perform your due diligence properly – by not gathering information on all the necessary variables or not accounting for them correctly. For example, interconnection costs and real estate are not eligible for the ITC and MACRS depreciation. Did you remember to exclude them? Not properly discounting the ITC is the single most common modeling mistake, even in large projects.
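A toy calculation shows why excluding ineligible costs matters. All dollar figures here are invented, and the 30% ITC rate is assumed for illustration:

```python
# Toy ITC basis calculation showing why ineligible costs must be
# excluded. All dollar figures are invented; the 30% ITC rate is an
# assumption for illustration and depends on the project's circumstances.

ITC_RATE = 0.30

costs = {
    "modules_and_racking": 1_500_000,
    "inverters": 300_000,
    "epc_labor": 700_000,
    "interconnection": 150_000,   # not ITC/MACRS eligible
    "real_estate": 250_000,       # not ITC/MACRS eligible
}
ineligible = {"interconnection", "real_estate"}

eligible_basis = sum(v for k, v in costs.items() if k not in ineligible)
itc = ITC_RATE * eligible_basis

naive_itc = ITC_RATE * sum(costs.values())   # the common modeling mistake
print(f"Correct ITC: ${itc:,.0f}  vs naive ITC: ${naive_itc:,.0f}")
```

In this invented example, the naive model overstates the tax credit by $120,000 – exactly the kind of error that erodes an investor’s trust during due diligence.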

A Third Key to Bankability – Weighing Capital Costs Against Operating Expenses to Maximize Project Returns

This links directly to having a proper economic model. In your model, you need to know and understand what saving $1 on the operating side means relative to $1 on the capital side. The impact is different and depends on the facts.

  • For example, on a 5 MW (AC) project on the East Coast, cutting the construction cost by $0.10 a watt (or $500,000) raises the project IRR by approximately 0.4%.
  • On the same project, cutting $6,500 of annual expenses by finding a lower-cost property raises the IRR by almost the same amount. The greater impact comes from the recurring impact of the lease rate reduction. In other words, the $6,500 is realized every year over the term, not just the first year.
  • Effective and valuable cost-cutting involves weighing capital cost reductions against operating expenses – and this is where good development and good design can help.
  • Energy efficiency in buildings offers a very easy way to see this trade-off. Imagine a developer looking to build a commercial office space. The developer might consider a highly-insulated and energy-efficient window solution but reject it because the cost is “too high.” Instead the developer goes with a very cheap but not very efficient window solution. After the building is completed, the operating cost of the building with lower efficiency windows is higher because of the additional energy required to warm and cool the interior space. Had the building owner gone the other way, the capital cost would have been higher, but the operating cost would have been lower. The trade-off is never an easy one to make, but in the building example, if the tenant is not the developer/owner, then the trade-off involves shifting costs from capital (owner/developer) to the tenant (who pays the energy bill).
  • A solar project requires the same trade-offs on the design, choice of materials, and construction. And, as we saw in connection with knowing your model, the impact of changes can vary considerably depending on whether you are cutting capital costs or cutting operating costs. In both cases, you need to know how the IRR is impacted and whether cutting the capital cost makes for a lower life cycle cost.
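To make the capex-versus-opex comparison concrete, here is a hedged Python sketch. The cash flows are invented and deliberately simplified (no taxes, financing, or degradation), so the IRR deltas will not match the article’s 0.4% example exactly:

```python
# Hedged sketch of the capex-vs-opex trade-off. Cash flows are invented
# and simplified (no taxes or degradation), so the IRR deltas are
# illustrative only, not the article's exact figures.

def irr(cash_flows, lo=-0.9, hi=1.0):
    """Internal rate of return found by bisection on NPV."""
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

YEARS = 20
capex = 10_000_000          # assumed installed cost for the project
net_revenue = 1_100_000     # assumed annual revenue net of operating expenses

base = [-capex] + [net_revenue] * YEARS
capex_cut = [-(capex - 500_000)] + [net_revenue] * YEARS    # one-time saving
opex_cut = [-capex] + [net_revenue + 6_500] * YEARS         # recurring saving

print(f"base IRR:      {irr(base):.3%}")
print(f"capex cut IRR: {irr(capex_cut):.3%}")
print(f"opex cut IRR:  {irr(opex_cut):.3%}")
```

Running the comparison shows the point in the bullets above: a small recurring saving compounds over the full term, so it can move the IRR far more per dollar than a one-time capital saving.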

The Developer’s Perspective     

After Chris Lord provided this advice on the modeling and legal aspects of developing solar projects, I asked Keith Cronin a simple question: “This advice seems so clear, why are developers not following it? Why are they chasing bad projects? What advice do you have for them?”

Here’s an excerpt from his response:

Developers around the globe want to seize opportunities in the solar industry, with gold in their eyes. This has been happening for almost a decade now in various iterations. Small and large developers are always looking at incentives, pulling out their spreadsheets, and eagerly looking to secure properties on which to park these opportunities.

What developers often overlook is the identification of a good versus a bad opportunity. As Chris Lord points out, determining bankability is essential. Discovering that a project can’t be financed is a large source of disappointment for developers after they’ve invested hours of time chasing deals.

This stems from a host of causes, but these are the most likely primary offenders:

  1. The developer runs into unforeseen conditions at a project’s location, and the additional costs make the investment economically unattractive.
  2. Construction costs and interconnection delays decrease project returns. Projects in the Hawaii market that have been involved in the FIT program have experienced 18-plus month delays from a host of parties involved in a project. For example, if you look at a 500 kW AC PV system producing $23,000 per month in revenue, how many developers can afford to lose $400,000 during that time period, and how many investors have that level of patience? If you look at Chris’s example with capital expenditure versus operational expenditure and how this impacts project returns, any hiccups with construction delays can substantially decrease project returns.
  3. Uncertainty around interconnection makes some projects impossible. If programs become oversubscribed and circuits on the grid become saturated, how do you explain to investors that the project will not only see additional cost overruns, but may also have to wait on the project timeline until the grid infrastructure can be modernized?

As Chris Lord points out, the cost of a project, versus the recurring costs for land, insurance, taxes, leases, etc., should be carefully scrutinized. As developers, we all want to build a project for less than we planned. What is the best strategy for the short term and long term?

  1. It is advisable not to cut corners on solar equipment, because arrays have a useful life of 20 to 30 years. Make the long-term investment and build that into your budget. Be prepared to tell the investment community why your costs are higher; they are usually thankful for the insight and your long-term thinking.
  2. What is your O&M strategy? These numbers fluctuate radically. It can be anywhere from $10 per kW to $25 per kW per year on larger-scale projects. The bigger question is what is included in the service that will be provided and what isn’t. Remember, the PV system’s output will not go up over time; it will only decline with degradation. Plan for the impacts of time on a system, and understand that your investors are looking for less lumpy returns and more stable forecasts.
  3. Bundling other complementary services offers a unique angle on getting your project to the finish line. With capital cheap, investors are looking for low-risk returns, and adding in energy efficiency will stabilize returns for the investment community. This money is easy to access today. In places with high energy costs, it makes the amalgamated deal more attractive and often can give a developer the margin they were originally looking for at the onset.
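The O&M range and degradation point above can be made concrete with a short sketch. The $10–$25 per kW figures come from the text; the 0.5% annual degradation rate is an assumed, typical value, not a quoted figure:

```python
# Sketch of two planning points above: the per-kW O&M cost range and
# the fact that output only declines over time. The 0.5%/yr degradation
# rate is an assumed, typical value, not a figure from the article.

SYSTEM_KW = 500
DEGRADATION = 0.005   # assumed annual output decline

def annual_om_cost(rate_per_kw):
    """Yearly O&M budget at a given $/kW rate."""
    return SYSTEM_KW * rate_per_kw

def output_factor(year):
    """Fraction of year-1 output remaining in a given year."""
    return (1 - DEGRADATION) ** (year - 1)

low, high = annual_om_cost(10), annual_om_cost(25)
print(f"Annual O&M budget range: ${low:,} to ${high:,}")
print(f"Year-20 output vs year 1: {output_factor(20):.1%}")
```

Even at an assumed half-percent per year, roughly a tenth of the array’s output is gone by year 20 – which is why investors insist that pro formas model degradation rather than hold production flat.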

Project risks often burden developers in the early stages of a project’s introduction and inception. Engineering, permitting, site control, and negotiations with landowners all consume a lot of in-house resources. Getting better at selecting projects that have a higher probability of being developed requires experience and knowing the market you’re entering. Finding local partners that can help you traverse the nuances of the market is essential in maintaining your expectations as well as the expectations of the investment community.


This article outlined the knowledge you need to develop and screen existing and new bankable solar projects. You now know what you need to keep in mind to stop wasting time chasing bad projects.

The most important factors that go into developing a bankable solar project:

  • Understanding how investors evaluate projects (solely on a 20-year discounted cash and tax basis)
  • Strong economic modeling
  • Weighing capital costs against operating expenses to maximize project returns

If you have additional knowledge to share, please leave it in the comment section below.

Learn More

If you’d like to get more resources on the subject, please review the following resources (they were also included in the article). Please leave comments in the comment section below if you have questions or additional knowledge to add!

Posted in Solar Photovoltaics | 1 Comment

Modeling Solar Production Risk 101 – An Introduction to P50 and P90 Production Levels



This article is part of a series of interviews, tutorials, and definitions around commercial solar financing that is leading up to the start of our next Solar MBA that starts on Monday September 15th. In the Solar MBA students will complete financial modeling for a commercial solar project from start to finish with expert guidance. The class is limited to 50 students, but there are 30 discounted seats. 

Financing a large commercial solar project is about understanding, controlling, reducing, and communicating risk and uncertainty.

Because solar relies on a variable energy source, the amount of power an array will produce, and thus the value of that power, is highly variable and must be understood before a project can be financed. As projects get larger, more due diligence is required to understand and evaluate the potential solar production of an array.

Solar production estimates are based on a number of factors. Some factors can be controlled and modeled with a high degree of certainty and others are closer to guesses about the future. Because these are guesses, we need to state a confidence level for each guess.

The confidence level of the amount of energy a solar array will produce is measured in P50 and P90 production levels.

This article will be useful to any solar installer who sees commercial solar projects, and specifically the financing of those projects, as critical to the success of their business. For larger projects, PVWatts won’t cut it. You’ll need to understand the amount of potential revenue a solar array can generate, and your confidence in its ability to generate that revenue, in order to get investors to buy into the project.

In this article we will explain:

  1. The general types of production risk and why there is such a huge focus on solar radiation levels.
  2. The definition of P50 and P90 and how they are graphed.
  3. Weather data variability and the relationship between P50 and P90.
  4. The potential impact of P50 and P90 on revenue expectations.
  5. What is critical to understand about P50 and P90.
  6. Further reading.

Let’s dive in.

1. Types of Risk

There are many ways to describe the risks associated with a solar array. In general, you could put them into two buckets: “construction risk” and “operating risk.”

From an investor’s perspective, construction risk is any source of risk that happens before COD, when the system is not operating. This can include site risk, site control, interconnection risk, EPC and construction risk, and more. An entire article could be written on those topics. For the most part, construction risks are about understanding and controlling the cost and time required to build the array. There are some obvious factors during the engineering and installation of the array that can have large impacts on the potential production of the array.

Operating risks are the risks associated with running the facility and generating revenue from the production of energy. These can still include some site and equipment failure or warranty risks, but, assuming those are controlled for, the major risk after a solar array has been constructed is how much power it will produce.

AWS Truepower published a report about reducing uncertainty in solar energy estimates, in which they rank-ordered the factors with the largest potential impact on solar production estimates. That graph is below.



As you can see, “solar resource uncertainty” is the single largest item that can impact total solar power production based on their analysis.

David Park from IEEE published a similar analysis. He rank-ordered the impact that solar radiation, climate, module model, inverter model, aging, and system derate have on expected versus estimated array production, and found that solar radiation, climate, and module model explained the largest share of production variability.


The value of energy produced by a solar array is a function of two items: how much energy is produced and the value of that energy. The value of that energy can be based on a number of factors: the kWh rate it’s offsetting, any net-metering laws that are in place, the negotiated PPA rate, potentially demand charge reductions, any production-based incentives, and more.

Given that we cannot predict with 100% certainty the amount of solar radiation that will hit an array over any given period of time, to understand and communicate the potential solar resource we use P50 and P90 production levels of an array.

While these production estimates rely to some degree on system design and siting, the main variable is weather.

2. The definition of P50 and P90 and how they are graphed

In P50 and P90, the P stands for probability.

P50 means there is a 50% chance in any given year that production will be at least a specific amount. If an array has a P50 production level of 500 kWh, it means that on any given year there is a 50% chance that production will be AT LEAST 500 kWh.

P90 production means that there is a 90% chance that in any given year production will be at least the specified amount, which means there is only a 10% chance that production will be lower than the stated amount. If an array has a P90 production level of 400 kWh per year, it means that in any given year there is a 90% chance that production will be AT LEAST 400 kWh.

Here’s a graph of P50 and P90 production estimates from David Park’s report.


For any statistics geeks reading this, it will look very familiar. He is graphing the distribution of annual production around the mean; in this case one standard deviation is 12.5%. P50 is the mean, and P90 sits about 1.28 standard deviations below the mean (the point that a normally distributed value exceeds 90% of the time).
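That relationship between the mean and the 10th percentile can be sketched numerically. The following assumes annual production is normally distributed (a simplification) and reuses the 12.5% standard deviation from the graph:

```python
# Sketch: deriving a P90 value from a P50 (mean) estimate, assuming
# annual production is normally distributed with a 12.5% standard
# deviation, per the figures discussed in this article.
from statistics import NormalDist

p50 = 32_413                      # kWh, the mean annual production
sigma = 0.125 * p50               # one standard deviation (12.5%)

production = NormalDist(mu=p50, sigma=sigma)
p90 = production.inv_cdf(0.10)    # exceeded in 90% of years

print(round(p90))                 # about 27,221 kWh
```

The result lands within a few kWh of the 27,228 kWh P90 figure used in the revenue example later in the article.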

3. Weather Data Variability and the Relationship Between P50 and P90

Because the variability of solar production, and thus the difference between P50 and P90, is largely based on the variability of weather, extensive weather analysis must be performed to calculate these values.


The above graph comes from the report by David Park and illustrates how weather recordings over a specific period are used to determine the variability of irradiation for a specific location.

What this means for solar production is that areas with less sporadic weather have closer P50 and P90 values. For the statistics geeks: lower variability means a lower standard deviation across the distribution of solar irradiation values.

If you look at the two graphs below, the top graph is an illustration of an array that has highly variable weather characteristics while the bottom graph displays an array with more stable and predictable weather.


4. Potential Impact of P50 and P90 Production Estimates on Revenue Potential

The difference between P50 and P90 production levels in areas with moderately variable weather can have large impacts on the assumed production for an array.

If we want to use the example from our first graph:

P50 production was: 32,413 kWh

P90 production was: 27,228 kWh

That’s a difference of 5,185 kWh: the P90 estimate is roughly 16% lower than the P50 value. 16% is a lot!

Assume each kWh is worth $0.15:

P50 production is then expected to be worth $4,862

P90 production is then expected to be worth $4,084

If we assume each kWh is worth exactly the same amount, the value of the power produced would be roughly 16% lower using P90 rather than P50. However, we can be much more confident that we’ll hit the P90 production level every year. This is why investors whose returns depend on power production, such as those being repaid through a PPA, tend to favor P90 production levels.
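The arithmetic above can be sketched directly, with a flat $0.15/kWh value assumed:

```python
# Sketch of the P50 vs. P90 revenue spread, using the production
# figures from the example above and an assumed flat $0.15/kWh value.
p50_kwh, p90_kwh = 32_413, 27_228
rate = 0.15                          # dollars per kWh

p50_revenue = p50_kwh * rate         # about $4,862
p90_revenue = p90_kwh * rate         # about $4,084
shortfall = 1 - p90_kwh / p50_kwh    # fraction below the P50 estimate

print(f"P90 revenue is {shortfall:.1%} below P50 revenue")
```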

5. What’s critical to understand about P50 and P90?

  • P50 and P90 production levels are hard to determine with software models.
  • For larger projects, an engineering firm will work to perform this analysis.
  • P90 is more conservative, so investors will focus on this amount. P50 is less conservative, so developers tend to focus on this.
  • The greater the variability of weather in a specific area, the greater the variability between P50 and P90 because solar radiation levels explain the majority of the variability in production.
  • Investors will be most concerned with production levels in legal structures where their returns are based on the production of the system. In lease structures, the investors will be less concerned with production because the payments are hell or high water payments.

6. Further Reading on Production and Risk Modeling


Posted in Solar Photovoltaics | 1 Comment

Troubleshooting Condensing Boilers in Hydronic Systems – What is the System Doing?

This a guest post from Roy Collver. Roy is a condensing boiler expert. Here’s what John Siegenthaler, author of “Modern Hydronic Heating,” says about Roy’s work: “When I have a detailed question about the inner operation of a modulating / condensing boiler, Roy Collver is the first person I contact. The investment in Roy’s HeatSpring course is a fraction of the cost of a single mod/con boiler, but it will teach you concepts, procedures, and details that will return that investment many times over.”

Learn from Roy

  • Free. Roy is teaching a two-part free course on how to sell mod-con boilers. The second live lecture is happening on Wednesday, July 30th. Sign up for the free mod-con course here.
  • Paid: Roy Collver teaches an advanced 5-week course on mastering condensing boiler design in hydronic systems with the folks at HeatSpring. If you need to increase your skills and confidence around selling, quoting, designing, setting up controls, or troubleshooting condensing boilers in new construction or retrofit applications, this course is for you. Each session is capped at 50 students, but there are 30 discounted seats. Get your discount and sign up for Condensing Boilers in Hydronic Systems.

Enter Roy…

Understanding the Simple Basics

Cold weather is never too far away in most parts of North America. Be ready when it hits, and review the basics of hydronic system operation so you can quickly locate the problems that always come up. When you approach an operational hydronic system it will exhibit one of the following six states. Quickly understanding what you are dealing with will greatly reduce head-scratching time and point you in the right direction. Standing slack-jawed in front of a boiler with no clear path to determining what is wrong is very uncomfortable and a waste of time. Confidence is a key factor in successful troubleshooting, and to be able to indicate to a customer what the BASIC problem is right away buys you time to be able to work the problem, find out the SPECIFIC cause, and fix it. Using this guide as a quick reference should help speed the troubleshooting process along.

Hydronic systems are all about Delta T (the difference in temperature between the heating fluid, the system components and the surrounding air and objects). Heat always travels to cold, and if heat is not added to the heat transfer fluid (usually water), the fluid and all of the components in the system will eventually cool down to the temperature of the surroundings.



The boiler is on and the hot combustion gases create a large Delta T between the combustion chamber and the water in the surrounding heat exchanger. Because heat travels to cold, the water heats up. The circulation pump moves the hot water through the distribution piping to the terminal units. The terminal units heat up and a Delta T develops between the hot terminal units and the colder air. The air will get warmer at the expense of the water, which cools slightly. The cooler water circulates back through the system back to the boiler where it is heated up again. If the heat going into the boiler is more than the system can use, the water will continue to get hotter until the boiler cycles off on its operating control. The temperature difference between the water leaving the boiler and the water returning to the boiler will be “normal” for the system (usually 15°F to 40°F depending on the load and system design).

The boiler is on, adding heat to the water, but for some reason the hot water is not circulating through the distribution piping to the terminal units. The terminal units will cool down to the temperature of their surroundings and a “no heat” condition will result. The water in the boiler will continue to get hotter until the boiler cycles off on its operating control or internal high limit control. The supply and return piping near the boiler will be close to the same temperature.



The boiler is on, adding heat to the water, but the hot water is not circulating fast enough through the system. The first terminal unit may become warm, but because the water is moving so slowly, all of the usable heat is transferred out of it before it gets very far. The last terminal units do not become warm enough to heat the space and a “not enough heat” condition will result. The water in the boiler will continue to get hotter until the boiler cycles off on its operating control or internal high limit control. There will be a large Delta T between the water leaving the boiler and the water returning to the boiler. (The supply will be a bit hotter than normal, but the return will be much colder than normal.)
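The flow side of this diagnosis can be sketched with the standard hydronics rule of thumb for water, Btu/h = 500 × gpm × ∆T. The load figure below is illustrative:

```python
# Sketch: the rule-of-thumb hydronics formula for water,
# Btu/h = 500 x gpm x Delta-T, rearranged to relate flow and Delta-T.
# The 80,000 Btu/h load is an illustrative number.
def required_gpm(load_btuh: float, delta_t_f: float) -> float:
    """Flow (gpm) needed to move a load at a given temperature drop."""
    return load_btuh / (500 * delta_t_f)

print(required_gpm(80_000, 20))  # 8.0 gpm at a normal 20 F drop
print(required_gpm(80_000, 50))  # 3.2 gpm: starved flow shows up as a big Delta-T
```

Reading the supply and return temperatures and working this formula backward is a quick way to confirm a slow-flow condition.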


Continue reading

Posted in Building Efficiency | Leave a comment

New Massachusetts Solar Bill H.4185 Would Destroy Community Solar Potential In the Commonwealth

This is a guest post from Sam Rust from SRECTrade about new solar legislation in Massachusetts.

(Editor Note: A note about the importance of community solar for lowering customer acquisition costs, something EVERYONE in the solar industry cares about. Everyone is talking about lowering customer acquisition costs and soft costs, and community solar has the potential to instantly drop acquisition costs by 50% to 80% for solar companies offering roof-mounted and community solar projects. Why? It’s simple math. If you had 100 solar leads, a good conversion rate of leads to customers would be 10%. This equals 10 customers. Here’s the thing: in order to find 10 customers that WANT to invest in solar and HAVE a good roof, you must bump into 3 to 4 people that WANT solar but DON’T HAVE the roof space. If those people could become community solar customers, then the conversion rate of those 100 leads would become 30% to 40% instead of 10%. This would then drop acquisition costs because you’re getting more customers with the same marketing spend. Food for thought.)

Enter Sam Rust.

In 2013, Massachusetts ranked 4th, behind California, Arizona, and New Jersey, for most solar installed. Despite this success, legislation officially known as H.4185 (An Act Relative to Net Metering) is pending at the Massachusetts State House that could drastically change the direction of the Massachusetts solar industry. Touted in the media as a successful compromise between regulated utilities and the solar industry, H.4185 may be more of a step back than a step forward. The bill could pass both the Massachusetts House and Senate before the end of the legislative session on July 31st, despite the opposition of many solar owners, installers, and representatives of the community solar movement.

Here’s a short explanation of how we got here and what H.4185 is.

Currently there are limits on how much Massachusetts solar capacity can qualify for net metering in each utility territory. These limits, which only apply to larger solar facilities, are nearly maxed out for each utility and prevent the Commonwealth from meeting Governor Deval Patrick’s 1,600 MW by 2020 solar goal.  H.4185 would remove the net metering limits and put in statute Governor Patrick’s 1,600 MW target in exchange for a radical adjustment in the structure of Massachusetts solar policy of which the primary adjustments are:

  • The removal of annual capacity restrictions on large “solar farm” projects
  • The creation of yet-to-be defined minimum electric bills for all ratepayers
  • The reduction of the virtual net metering rate from compensation at the retail rate to the wholesale rate of electricity
  • A limit on the size of behind-the-meter projects to 100% of the on-site load
  • A transition away from the successful market-based SREC program to an unknown program managed by the Department of Public Utilities
  • Transfer of all of the “environmental” attributes of solar arrays to the utilities

In translation, H.4185, a bill that is ostensibly about net metering, would remove or weaken most of the policies that have made the Massachusetts solar industry so successful. It exchanges a set of known, highly successful policies for a new set of untested ones. The bill has not yet passed, and many stakeholders are calling for amendment language that would remove most of the major policy changes in exchange for an incremental increase in the net metering caps and a formal commission, to be convened next year, to review the more contentious aspects of the legislation. This more cautious approach would stabilize an already jittery Massachusetts solar industry and ensure that all stakeholders are at the table the next time net metering limits need to be addressed.

How this could negatively impact solar installers.

  1. Anybody working to do community solar will be negatively impacted because the VNM credit is being reduced
  2. H.4185 removes the protections in place under the SREC-II program for incentivizing distributed, rooftop, carport, and other behind-the-meter projects
  3. The declining block incentive program will be set at the DPU, rather than at the DOER. This means that installers will need to lawyer up and deal with the regulated utilities’ lawyers in order to argue for favorable incentive targets. Solar in Massachusetts goes from a decentralized system, where anyone can participate in the rulemaking process, to a system where the big players have the negotiating advantage
  4. The utilities receive all of the attributes of the solar, including the RECs and will be able to lead the discussion on monitoring and other equipment requirements. This reduces the possibility for innovation in the solar space regarding capacity markets, battery storage, voltage regulation etc
  5. The minimum bill imposition will hurt anyone with a low electric bill, which means smaller projects will be most affected by the minimum bill
  6. Anybody doing business in Muni territory is left out. Currently the SREC program covers Munis
  7. Above all else this just adds more complication to the system. We just spent a year implementing SREC-II and now we have to work on implementing another program for which installers will need to fight to be part of the process for negotiating the declining block targets and minimum bill. This just adds more uncertainty, which is bad when you are trying to mature an industry.

WANT TO HELP? Contact Sam

First. Here is Sam’s email address:

Send him an email and he’ll figure out how you can help.

Massachusetts voters are encouraged to research this bill further and to contact their state legislators. Here is a link to a site that makes legislators searchable by zip code.

For more information, please read this well-written opinion piece in Commonwealth magazine and visit the Facebook page for the Massachusetts Stakeholders for Competitive Solar.


Posted in Solar Photovoltaics | Leave a comment

Free Solar Design Tool: String Sizing Tool For Commercial Solar Projects That Works with All Inverters

One of the most important aspects of designing a solar array is sizing module strings to operate within the parameters of the selected inverter. This is especially true for commercial and megawatt solar projects. To help in this process, we’re providing a free solar design tool to our readers.

Ryan Mayfield and Renewable Energy Associates have developed a free solar design tool to help in that process. Most inverter manufacturers offer some type of sizing tool, but whether simple or advanced, it is usually limited to that manufacturer’s own products. The REA System Sizing Tool lets you select from a wide variety of products and manufacturers. Ryan is teaching a 10-week advanced solar design class with SolarPro called Megawatt Design.

You can click here to download the string sizing tool.

Key Features of the Solar String Sizing Tool

A quick note: the tool now requires you to enable macros. We understand the security concern, but we cannot guarantee that the tool will work well, accurately, or at all without macros enabled.

  • Thousands of modules.
  • Hundreds of inverters.
  • Add your own module or inverter.
  • 5,000+ worldwide ASHRAE locations.
  • Create your own custom sites.
  • Dual-MPPT configurations.
  • Voltage drop calculator.
  • Performance calculator.
  • Quick Printing features.
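At its core, a string sizing check verifies that a string’s cold-weather open-circuit voltage stays below the inverter’s maximum input voltage. Here is a minimal sketch of that check; all module, site, and inverter values are hypothetical, and a real tool also checks the low-voltage end of the MPPT window:

```python
# Sketch of the core cold-weather string-length check a sizing tool
# performs. All module, site, and inverter values are hypothetical.
import math

voc_stc = 37.8         # module open-circuit voltage at STC, volts
temp_coeff = -0.0031   # Voc temperature coefficient, fraction per deg C
t_min_c = -15.0        # expected record-low temperature at the site
v_inv_max = 600.0      # inverter maximum DC input voltage, volts

# Correct Voc to the coldest expected temperature (Voc rises when cold)
voc_cold = voc_stc * (1 + temp_coeff * (t_min_c - 25.0))
max_modules_in_series = math.floor(v_inv_max / voc_cold)

print(f"Cold Voc: {voc_cold:.1f} V, max string length: {max_modules_in_series}")
```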

Screen Shots of the Solar Design Tool

Inserting Array Characteristics

Inserting Weather Conditions

Logging Other Project-Specific Activities


Download the Solar Design Tool

You can click here to download the string sizing tool.

Posted in Featured Designs, Products, and Suppliers, Solar Photovoltaics | Leave a comment

Designing Wood Gasification Boiler Protection Systems: T & ∆T Pumping: An application that leverages both strategies

This is a guest article from John Siegenthaler. In the fall, John is teaching an advanced design course on Hydronic-Based Biomass Heating Systems. This is the most advanced and technically challenging biomass design course that you’ll find anywhere. The capstone project for the class will be designing a system and getting it reviewed by John. The class is capped at 50 students, and there are 30 discount seats. You can get a discount and sign up for a test drive here. Click here to join our Linkedin group to connect with professionals and share best practices for selling, designing and installing hydronic-based biomass systems

Download: Wood Gasification Protection Tutorial

  • This is the email address where the article will be sent.

Enter John

The market for wood gasification boilers is growing in North America. Most are used in rural areas where natural gas is not available, and thus the cost of firewood is often very competitive against the alternatives of #2 fuel oil or propane.

In my area of upstate New York, fuel oil is currently selling for about $3.80 per gallon, and propane for about $2.80 per gallon. Firewood is available in the range of $65 per face cord. If one assumes a conversion efficiency of 85% on oil, 93% on propane (via a mod/con boiler), and 80% on firewood (burned in a wood gasification boiler), the unit cost of these fuels is as follows:

• #2 oil: $31.92 / MMBtu

• Propane: $33.28 / MMBtu

• Firewood: $12.11 / MMBtu

From the standpoint of fuel cost, it’s not hard to see why the demand for wood gasification boilers is growing.
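The unit costs above follow from a simple formula: delivered cost = fuel price ÷ (heat content × conversion efficiency). A sketch, using typical heat contents as assumptions (140,000 Btu/gal for oil, 90,500 Btu/gal for propane, and roughly 6.7 MMBtu per face cord of hardwood):

```python
# Sketch of the $/MMBtu math above. Heat contents are typical assumed
# values (oil 0.140, propane 0.0905, firewood ~6.7 MMBtu per unit),
# not figures stated in the article.
def cost_per_mmbtu(unit_price: float, mmbtu_per_unit: float,
                   efficiency: float) -> float:
    """Delivered heat cost in dollars per MMBtu."""
    return unit_price / (mmbtu_per_unit * efficiency)

oil = cost_per_mmbtu(3.80, 0.140, 0.85)        # about $31.93/MMBtu
propane = cost_per_mmbtu(2.80, 0.0905, 0.93)   # about $33.27/MMBtu
firewood = cost_per_mmbtu(65.00, 6.7, 0.80)    # about $12.13/MMBtu
print(round(oil, 2), round(propane, 2), round(firewood, 2))
```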

Wood Heating Done Right: When properly applied, wood gasification boilers allow dry firewood to be converted into thermal energy at efficiencies far higher than those of non-gasification type wood burning devices.  These boilers can also be matched up with the latest hydronics hardware such as radiant panels, high efficiency circulators, and even mod/con boilers (in cases where fully automatic heat delivery is required if the wood fired boiler is not operating).

One critical detail that must be addressed with any wood gasification boiler is protection against sustained flue gas condensation.  The situation is very similar to the boiler protection issues we’ve discussed in many past columns.  It can be summarized as follows: If the water temperature entering the boiler allows the fire side of the boiler’s heat exchanger to drop below the dewpoint of the exhaust gases, some of those gases will condense into an acidic liquid that is very corrosive to materials such as steel or cast iron.  In the case of wood fired boilers, the condensate also leads to formation of creosote within the boiler, the vent connector, and chimney.  Think of creosote as solid fuel.  If it’s exposed to suitably high temperature, creosote will combust.  The resulting heat can quickly destroy a steel or masonry chimney, and possibly the building with it.

Dry Fire: There are several ways to provide boiler protection.  Currently, the most common approach uses a “loading unit,” as shown in figure 1.


Figure 1

The loading unit combines a high flow capacity 3-way thermostatic mixing valve, circulator, and flapper check valve. When the boiler is warming up, the 3-way thermostatic valve routes all flow through the bypass, and back to the boiler inlet.  This allows the boiler to warm up quickly, since no heat is being released to the load.  When the water exiting the loading unit reaches a minimum set temperature, such as 130 ºF, the valve modulates to allow some hot water flow to the thermal storage tank.  When the water temperature leaving the valve is several degrees above the minimum temperature setting, all water leaving the boiler goes to the storage tank.  Thus the loading unit acts as a “thermal clutch” between the boiler and tank, smoothly increasing or decreasing the rate of heat transfer, as necessary to keep the boiler inlet at an appropriate temperature.   The loading unit is also internally configured so that it allows thermosiphon flow between the boiler and thermal storage tank during a power outage.  Thus, no external heat dump is required when the system is piped as shown in figure 1.

Alternate Approach: Another method of boiler protection uses a variable speed circulator as the thermal clutch between the boiler and a high thermal mass load such as a thermal storage tank. The water temperature entering the boiler is sensed, and the electronics controlling the circulator slow it, as necessary, so that the rate of heat transfer to the load doesn’t exceed the rate at which heat is being produced in the boiler. This approach has been used for years in the form of variable speed injection mixing pumps between conventional gas-fired and oil-fired boilers and lower temperature / high thermal mass loads such as radiant floor slabs. Figure 2 shows how it can be used to protect a wood gasification boiler.


Figure 2

In this system, circulator (P1) operates, at a fixed speed, whenever the wood gasification boiler is being fired.  The circulator would be sized to provide a nominal 20 to 25 ºF temperature drop across the boiler when it’s operating at maximum output.  If the boiler has relatively low flow resistance (which is typical of most wood gasification boilers), and the piping loop is short, there isn’t much head loss in the boiler loop.  Thus, circulator (P1) could likely be relatively small.

Circulator (P2) is a variable speed circulator that monitors the temperature at sensor (T1), installed at the boiler inlet.  This circulator operates at a very low speed until the boiler’s inlet temperature rises to where flue gas condensate will not continue to form. For dry firewood and a typical air/fuel ratio, this temperature is about 130 ºF.

Once the boiler’s inlet temperature rises above this “dewpoint” temperature, circulator (P2) speeds up, and thus starts transferring heat from the boiler loop to the thermal storage tank. When the temperature at sensor (T1) reaches 5 ºF above the minimum boiler inlet temperature, circulator (P2) will be operating at full speed. As the temperature at sensor (T1) drops back toward the dewpoint, circulator (P2) slows as necessary to prevent condensation in the boiler. Thus, boiler protection always remains in effect.
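That ramp can be sketched in a few lines, using the 130 ºF dewpoint and 5 ºF full-speed band from the text; the 5% minimum-speed floor is an illustrative assumption:

```python
# Sketch of the variable speed circulator (P2) logic described above:
# hold a trickle below the 130 F dewpoint, then ramp linearly to full
# speed 5 F above it. The 5% minimum speed is an illustrative assumption.
def p2_speed(inlet_temp_f: float, dewpoint_f: float = 130.0,
             band_f: float = 5.0, min_speed: float = 0.05) -> float:
    """Circulator speed as a fraction of full speed (0.05 to 1.0)."""
    if inlet_temp_f <= dewpoint_f:
        return min_speed                       # protect: barely move water
    ramp = (inlet_temp_f - dewpoint_f) / band_f
    return min(1.0, max(min_speed, ramp))      # linear ramp, clamped

for t in (120, 131, 133, 140):
    print(t, p2_speed(t))
```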

Another important consideration in the piping design is to allow natural convection between the boiler and thermal storage tank to remove residual heat from the boiler if a power failure occurs while it’s firing.  The piping shown in figure 1 will allow this to occur.  However, it’s crucial that the check valve shown near the upper left connection on the storage tank is a swing check, rather than a spring-loaded or weighted plug flow check.  The forward opening resistance of a swing check is very low, and compatible with natural convective flows.  That’s not the case with either spring-loaded or weighted plug flow checks.

Upping the Offering: The schematic in figure 3 keeps the same piping as figure 2, but adds differential temperature control between a temperature sensor in the upper portion of the storage tank and the outlet of the wood gasification boiler as the means of turning on fixed speed circulator (P1), as well as supplying power for variable speed circulator (P2). The differential temperature controller determines when the boiler is a fixed number of degrees above the temperature in the upper portion of the storage tank, and in effect “enables” the heat transfer process, including boiler protection. It’s essentially the same control action used to start and stop a solar thermal system; just think of the boiler as the heat source rather than a solar collector. Adding this control functionality eliminates the need for someone to manually turn the boiler circulator on and off.
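A sketch of that differential enable logic, with illustrative on/off differentials of 10 ºF and 4 ºF (the actual setpoints would come from the controller settings):

```python
# Sketch of the solar-thermal-style differential control described
# above. The 10 F on / 4 F off differentials are illustrative values.
def update_enable(enabled: bool, boiler_out_f: float, tank_top_f: float,
                  dt_on: float = 10.0, dt_off: float = 4.0) -> bool:
    """Enable or disable heat transfer with hysteresis."""
    delta = boiler_out_f - tank_top_f
    if not enabled and delta >= dt_on:
        return True        # boiler usefully hotter than storage: run pumps
    if enabled and delta <= dt_off:
        return False       # little left to gain: stop the pumps
    return enabled         # inside the hysteresis band: hold state

state = False
for boiler_out in (150, 165, 160, 156):   # tank-top sensor fixed at 152 F
    state = update_enable(state, boiler_out, 152.0)
    print(boiler_out, state)
```

The hysteresis band prevents the circulators from short-cycling when the two temperatures are close together.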


Figure 3

One More Variation: The schematic in figure 4 uses the same logic as the system in figure 3, but adds speed control to circulator (P1). The “∆T” logic controlling circulator (P1) would modulate its speed to maintain a user-set temperature difference between the outlet and inlet of the boiler. The objective is to reduce the power demand of circulator (P1) when boiler output is reduced, and thus reduce overall electrical energy use.


Figure 4

There you have it: modern variable speed circulators matched with state-of-the-art solid fuel boilers. This combination is highly scalable to larger systems using larger circulators controlled by variable frequency drives (VFDs). It’s also applicable to pellet-fired boilers, which also require protection against sustained flue gas condensation. Keep these details in mind if you have a solid fuel boiler project coming up.

© Copyright 2013, J. Siegenthaler, all rights reserved
Posted in Biomass Heating | Leave a comment

How to Normalize Energy Consumption For Weather Influences Using RETScreen ® Plus

This is a guest post by Michael Ross from RER Energy Inc. Michael is teaching a 6-week, 30 hour class on Mastering RETScreen for Clean Energy Project Analysis. The class is capped at 50 students, and there are only 30 discounted seats. Get your discount here.

Download Tutorial

  • This is the email address where the article will be sent.

Article Goal 

This article shows engineers and energy data analysts how to “normalize” energy consumption or production to account for the variation in weather over time. By the end of the article, you should understand why normalizing for weather is important, and how it can be done, either in a spreadsheet or using a free tool called RETScreen® Plus.

Why Normalize for Weather?

The need to “normalize” for weather arises very often. For example, you have a year or two of utility bills for a facility, you plan on improving the energy efficiency of the facility, and you need to estimate what the energy savings will be in the future. One challenge is that the past energy consumption is determined not just by the equipment at the facility, but by the variations in the weather experienced by the facility. What if the winter covered by the utility bills was especially cold, and as a consequence gas consumption was higher than typical? Basing your estimates of savings on a single year, without “normalizing” for weather, or explicitly adjusting the consumption to reflect typical weather conditions, will cause you to overestimate the typical savings in the future.

Normalizing for weather is a good idea whenever an accurate understanding of the current energy consumption of a facility (a “baseline”) is needed. Otherwise, as the previous example suggests, estimates of future savings from improvements to the facility may be too high or too low; a proposed improvement may wrongly appear cost-effective, or a truly cost-effective opportunity may be missed.

The need to normalize may also appear in energy production projects. For example, a photovoltaic system might produce more electricity in one year than in the previous year. Is this merely because there was more sunshine in the second year? If so, did this additional sunshine hide deterioration in the system operation?

Sometimes normalizing for weather is not merely a good idea, but rather a requirement of a client or a utility or government funding program. For example, I recently conducted a study for a client who was seeking funding from the Federation of Canadian Municipalities (FCM). The client needed to show how much connecting his building to a district heating system would reduce overall natural gas consumption (and thereby greenhouse gas emissions). The FCM program stipulated that any study had to first normalize past energy consumption for variation in the weather, and then project savings into the future based on typical weather.

Normalizing for Weather: the Theory

Normalizing for weather is, in principle, straightforward: you “fit” a statistical model (i.e., an equation) that relates your consumption data (e.g., utility bill consumption) to one or more variables that you think exercise an influence on consumption (e.g., heating or cooling degree days). When “fitting” the model to the data, you adjust the coefficients of the equation until the sum of squared differences between the actual consumption data and the modeled consumption data is minimized. Often a linear equation is used for the statistical model, and the process is called “linear regression”.

So, for example, you might produce a scatterplot of daily average gas consumption for each billing period against the average number of heating degree days per day for the billing period, as shown in the figure below.



I’ve superimposed a straight line on the scatterplot to make it evident that there is a linear relationship between the fuel consumption and the heating degree days. That is, I should be able to estimate the fuel consumption with some accuracy using an equation of the form:

fuel consumption per day = a + b × heating degree days per day
This equation has the right form, but what should I use for the coefficients a and b? A common approach is to select a and b in such a way as to minimize the “sum of squared errors”, or SSE.  To do this manually, I start out with a guess for these coefficients, and then I use this equation to estimate the fuel consumption for each billing period. I then compare these estimates with the actual fuel consumption for each billing period. If I square the difference of the two and sum over all billing periods, I’ll have the SSE. This is a measure of how well my choice of coefficients fits this equation to the data; I adjust the coefficients until the SSE is as small as I can make it (unless the line passes through every data point exactly, the SSE will not go to zero).
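The manual exercise above can be sketched in a few lines of code. For a straight line, the coefficients that minimize the SSE have a closed-form least-squares solution, so no iterative guessing is actually needed. The data points here are hypothetical, not the article’s utility bills, and the function names are my own:

```python
def fit_line(hdd, consumption):
    """Return (a, b) minimizing the SSE of consumption ~ a + b * hdd."""
    n = len(hdd)
    mean_x = sum(hdd) / n
    mean_y = sum(consumption) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(hdd, consumption))
    den = sum((x - mean_x) ** 2 for x in hdd)
    b = num / den
    a = mean_y - b * mean_x
    return a, b

def sse(hdd, consumption, a, b):
    """Sum of squared differences between actual and modeled consumption."""
    return sum((y - (a + b * x)) ** 2 for x, y in zip(hdd, consumption))

# Hypothetical billing periods: average HDD per day vs. average GJ of gas per day.
hdd_per_day = [2.0, 5.0, 10.0, 15.0, 20.0, 25.0]
gj_per_day = [1.5, 2.4, 3.9, 5.4, 7.1, 8.4]

a, b = fit_line(hdd_per_day, gj_per_day)
print(f"a = {a:.3f} GJ/day, b = {b:.3f} GJ per degree-day")
print(f"SSE = {sse(hdd_per_day, gj_per_day, a, b):.4f}")
```

Unless the line passes through every point exactly, the SSE stays above zero; the fit simply makes it as small as possible.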



Then I’ve got my equation. For the data from the example above, it would be:


I can then use this equation to estimate the gas consumption based on the heating degree days. So, for example, imagine that for the location of this building, a typical month of March will have 620 heating degree days (°C·day). That works out to 20 heating degree days per day. If I wanted to know what the facility’s gas consumption in a typical March would be, I’d plug this into the equation:


This would tell me that on an average March day, I’d require 6.6 GJ of gas, so over the whole month I’d consume around 206 GJ of gas. To determine the gas consumption in a typical year, I do this same exercise for each month’s typical number of heating degree days.
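The March calculation above can be reproduced with a short sketch. The coefficients a and b here are illustrative only (the article’s actual fitted values appear in an omitted figure); any pair satisfying a + 20b = 6.6 reproduces the example:

```python
a = 1.0   # GJ/day baseline load (hypothetical coefficient)
b = 0.28  # GJ/day per heating degree-day (hypothetical coefficient)

typical_march_hdd = 620        # degree-days (C-day) in a typical March
days_in_march = 31
hdd_per_day = typical_march_hdd / days_in_march  # 20 HDD per day

daily_gj = a + b * hdd_per_day   # gas needed on an average March day
monthly_gj = daily_gj * days_in_march
print(f"{daily_gj:.1f} GJ/day, about {monthly_gj:.0f} GJ for the month")
```

Repeating this for each month’s typical heating degree days gives the normalized consumption for a typical year.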

Normalizing for Weather Using RETScreen® Plus

While this normalization can be done using a spreadsheet, my tool of preference is RETScreen® Plus, a sister program to the better-known but completely different RETScreen® 4. (Both tools are available for download, free of charge, from the Government of Canada.)

RETScreen® Plus is designed precisely for this type of exercise (as well as much more in-depth analyses to be discussed in later articles), and it is consequently much quicker and less error-prone than the manual exercise outlined above. The main program features that make it quicker and easier are:

1) Rapid access to up-to-date daily weather data for locations across the globe
2) Tools for combining and regrouping data sets on different time bases
3) Automatic fitting of equations
4) Optimization of the heating degree day reference temperature

Let’s examine each of these advantages by going through the key steps for normalizing for weather data using RETScreen® Plus.
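Since advantages 1) and 4) both involve heating degree days, it may help to see how they are computed: each day contributes the amount by which its mean temperature falls below a reference (base) temperature. This minimal sketch uses a fixed 18 °C reference; RETScreen® Plus can optimize the reference temperature itself, which this sketch does not attempt:

```python
def heating_degree_days(daily_mean_temps_c, t_ref=18.0):
    """Sum of max(0, t_ref - T) over a list of daily mean temperatures (deg C)."""
    return sum(max(0.0, t_ref - t) for t in daily_mean_temps_c)

# A hypothetical cold week of daily mean temperatures in deg C.
week = [-5.0, -2.0, 0.0, 3.0, 8.0, 12.0, 19.0]
print(heating_degree_days(week))  # the 19 deg C day contributes nothing
```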

I’ll start by asking my client for utility bills. He sends me a spreadsheet for the period of 2012 through 2013, indicating for each bill the billing date and the billed gas consumption (in GJ) for the period:


Note that the “monthly” bills are not all dated on the same day of the month, and the number of days in the billing period changes from bill to bill. Also note that I’m missing the bill for May 23. Such are the complications of the real world.

Next, I open RETScreen® Plus. The first key step is to tell it where my building is located; it will be apparent why we need to do this when we need to get weather data. There are a variety of ways to specify the project location, but the fanciest is through a map interface that lets me indicate the project location with a thumbtack:



Then I import my Excel spreadsheet of utility data into RETScreen® Plus. I tell it that the data I want to investigate is for “Fuel Consumption”, specifically natural gas measured in GJ. It opens a blank table:


I fill this table by “Importing from file…” and selecting my Excel file. A dialog box pops up and I see that it has correctly interpreted the headers in the file, with the exception of the gas consumption, which I have to pick from a drop-down list:



When I click on the green checkmark, I get another dialog box identifying the missing data for May and giving me some choices for dealing with it, such as using the average for the whole data set, interpolating between adjacent data points, deleting the whole row, or repeating the previous value. I choose to simply ignore the missing data for now. RETScreen inserts this data into my table, automatically calculating the number of days in each billing period:
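The fill options that dialog offers can be sketched on a plain list, with None marking the missing May bill. The function and variable names here are my own, not RETScreen’s, and the sketch handles only a single missing point between known neighbours:

```python
def fill_missing(values, method="interpolate"):
    """Fill isolated missing points (None) in a list of billing values."""
    filled = list(values)
    known = [v for v in values if v is not None]
    for i, v in enumerate(filled):
        if v is not None:
            continue
        if method == "average":        # average of the whole data set
            filled[i] = sum(known) / len(known)
        elif method == "repeat":       # repeat the previous value
            filled[i] = filled[i - 1]
        elif method == "interpolate":  # midpoint of the adjacent points
            filled[i] = (filled[i - 1] + filled[i + 1]) / 2
    return filled

bills_gj = [210.0, 150.0, None, 40.0, 25.0]  # one bill missing
print(fill_missing(bills_gj, "interpolate"))
```

Deleting the whole row, the remaining option in the dialog, would simply drop the billing period from the regression.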


With that, half my data is in the tool. But now I need to tell RETScreen what the “factors of influence” in this data are: that is, which variables are likely to exert an influence on the gas consumption. When normalizing for weather, the answer is pretty clear (it’s the weather), but in other applications of the tool it might be factory production, hotel occupancy, or something else.

Thus, I need to get weather data for 2012 and 2013. Ideally, this weather would be on the same time basis as my utility bills. That is, I’d have the average weather conditions for my site for the first, second, etc. billing periods.
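The regrouping RETScreen automates can be sketched by hand: average the daily heating degree days over each (irregular) billing period. Dates and values below are hypothetical, and I assume a billing period runs from the day after the previous bill date through the bill date itself:

```python
from datetime import date, timedelta

def average_hdd_per_period(daily_hdd, bill_dates):
    """daily_hdd: {date: HDD}; bill_dates: sorted list of bill dates.
    Returns the average HDD per day for each billing period."""
    averages = []
    for start, end in zip(bill_dates, bill_dates[1:]):
        total, days = 0.0, 0
        d = start + timedelta(days=1)
        while d <= end:
            total += daily_hdd[d]
            days += 1
            d += timedelta(days=1)
        averages.append(total / days)
    return averages

# Two short hypothetical "billing periods" of three days each.
daily = {date(2013, 1, 1) + timedelta(days=i): h
         for i, h in enumerate([20, 22, 24, 10, 12, 14])}
ends = [date(2012, 12, 31), date(2013, 1, 3), date(2013, 1, 6)]
print(average_hdd_per_period(daily, ends))
```

Real bills, as noted above, have uneven period lengths and varying dates, which is exactly why averaging per day rather than summing per month is the safer basis for the regression.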

Continue reading

Posted in Featured Designs, Products, and Suppliers

[Free Floor Plan] 10 Ways Passive House Design is Different Than Normal Home Design

If you want to download the floor plan, please scroll to the bottom of the article.

This is a guest post by Mike Duclos. Mike is founder of The DEAP Energy Group, a firm providing a wide variety of deep energy retrofit, zero net energy, and Passive House related consulting services. Mike has real-world experience with the design, construction, certification, and delivered performance measurement of Passive House, and is a Certified Passive House Consultant. Mike will be teaching a 6-week course on Passive House Design as part of NESEA’s Building Energy Master Series that will teach builders, architects, and engineers the fundamentals of Passive House design. In the class you’ll design your own passive house and get it reviewed by Mike using “PHPP Lite.” The class is capped at 50 students with 30 discounted seats. Sign up for the Passive House Design training here. 

Passive House Design vs Normal Home Design

Passive House is a hot topic, and we get a lot of questions about how to design and model these homes. Most people are familiar with design principles for “normal” residential homes, so we wanted to provide a sample as-built for an actual Passive House with a number of comments on how its design is different from traditional construction.

A Real Passive House Design

passive house plans

Here are 10 Key Design Features That are Different From Normal Residential Home Design

  1. The long elevation of the home faces close to due South, providing more wall area for windows.
  2. Home is positioned on lot so views are to the South so that the larger South window area is used to advantage for both the view and solar gain.
  3. Room layout centers on a ‘great room’ comprising the living and kitchen/dining areas, suited to entertaining a modest number of people in the 1,152-square-foot home.
  4. Master bedroom receives sun from the East and South; the other front bedroom receives sun from the South and West.
  5. Point-source heating efficacy is optimized by the central great room, in which a single 9 kBtu/hr ductless mini-split provides all space conditioning.
  6. Bathroom door is located immediately below ductless mini-split, for best localized space conditioning.
  7. Mechanicals are located between bathroom and kitchen sink, minimizing delay to hot water and stranding of hot water after a draw. Solar DHW tank can contribute 300-500 BTU/hr next to the bathroom door.
  8. Glazing is maximized on the South elevation, minimized on the East and especially the West to help manage overheating, and minimized on the North to reduce space heating losses.
  9. South elevation has one entry door which is glazed to take advantage of the view and the sun.
  10. Mudroom on the North is the entrance used on a daily basis by occupants.

Download the Sample Passive Home Design

Enter your email to download the sample floor plan.
Posted in Building Efficiency