I just fielded a call from a potential client who was curious about how an appraiser goes about extracting an adjustment from the market, in this case specifically for basement finish. In the discussion I explained that there is no set factor that appraisers use; instead, we turn to the market to try to show us what buyers are paying. Because different markets can act quite differently, I thought putting up a couple of examples of this type of extraction might be useful, both to my potential client and to my audience in general. The following shows two different examples of an extraction for basement finish: one in Ann Arbor involving generally newer houses in the $400,000 or so price range, and the other in the Lincoln school district in the under-$200,000 price range. Both use the same methodology, and both show substantial differences in final results, which is why an appraiser cannot just provide a number. Instead, the appraiser has to look at the market.

For the first sample, I went back two years and narrowed my market data to houses between 2,000 and 3,000 sqft, built between 1990 and 2010, on the west side of Ann Arbor (I used areas 82, 83, and 84). I then downloaded all of these sales to Excel and segmented them between houses with finished basements and those without. The results were 37 sales without finished basements and 62 identified with finished basements. I looked at median and average sales price differences and median and average amounts of basement finish, and came up with between $21,647 and $24,500 in price favoring those with basement finish, and between $24.24 and $27.75 per sqft of basement finish. This provided me with some support for my adjustment. I don't recall what my adjustment was, but I think anywhere between $20,000 and $25,000 is supported based on this data. Additionally, in my experience, basements in this area cost about $40 per sqft to actually finish.
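As a rough sketch of the grouped extraction just described, the computation looks like the following. The sale records here are invented for illustration only; they are not the actual Ann Arbor or Lincoln data sets.

```python
# Hypothetical illustration of extracting a basement-finish adjustment
# by grouping sales with and without finished basements.
from statistics import mean, median

# (sale_price, finished_basement_sqft); 0 means no basement finish
sales = [
    (395_000, 0), (402_000, 0), (388_000, 0), (410_000, 0),
    (418_000, 900), (425_000, 800), (431_000, 1000), (422_000, 850),
]

unfinished = [p for p, sf in sales if sf == 0]
finished = [(p, sf) for p, sf in sales if sf > 0]

# Price differences between the two groups
med_diff = median(p for p, _ in finished) - median(unfinished)
avg_diff = mean(p for p, _ in finished) - mean(unfinished)

# Typical amount of finish among the finished-basement sales
med_sf = median(sf for _, sf in finished)
avg_sf = mean(sf for _, sf in finished)

print(f"median price difference: ${med_diff:,.0f}")
print(f"average price difference: ${avg_diff:,.0f}")
print(f"implied $/SF of finish: ${med_diff / med_sf:.2f} (median basis), "
      f"${avg_diff / avg_sf:.2f} (average basis)")
```

With real MLS exports, the same grouping can be done with a spreadsheet filter; the point is only that the indicated $/SF falls out of simple group differences, not a preset factor.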
Here is what it looks like on a spreadsheet:

The next example uses sales in the Lincoln school district. In this one my isolated properties were between 1,200 and 1,700 sqft in size and built between 1985 and 2010, also going back two years. I had 48 sales without basement finish and 36 with. The median difference in price was $8,953 and the average difference was $14,420; the median size of finish was 625 sqft and the average size was 703 sqft, supporting adjustments of $14.32 to $20.51 per sqft. As you can see, there are differences in price between the areas and the sizes, as would be expected. Cost to complete remains about the same.

Each appraisal may be different, and the numbers found in these two samples could change depending on how far back the appraiser goes in their data research and what they set as the parameters for the data search. I offer this to you, my readers, as a simple study showing how I often go about trying to extract an adjustment from the market. A final word of caution: I would not expect to see an appraiser put this analysis into their appraisal report. They will likely do the analysis and say something in the report about the adjustment being analyzed through market data. This is what they likely mean, but they won't put the actual results into the report; instead they will keep them in their files, be it in the office in general or specific to the appraisal they were working on. Hope you all enjoyed this simple explanation, and if you have questions about appraisals and appraisal processes, please feel free to contact me. The easiest way to reach me is via email at rach mass at comcast dot net.
February 13, 2015
Editorial by Woody Fincham, SRA
Over the next several days I will be posting my thoughts on the recent Fannie Mae Lender Letter LL-2015-02. This is part 2. Part 1 is here.
So let us dive into the Lender Letter. Each of the quoted sections comes from the Lender Letter, which is cited in this piece. Following each are my thoughts on the quoted section.
“CU does not accept or reject appraisal reports or characterize an appraisal as “good” or “bad.” The CU risk score and messages pertain to risk and identify potential defects in the appraisal report. The lender is not obligated to “clear” or “override” the CU messages. The messages are meant to be used as red flag messages that lenders should use to assist with their appraisal analysis and inform their decisions based on a complete analysis and understanding of the appraisal report.”
I think this clarifies some of the biggest concerns about what CU is and is not. Most large lenders and appraisal management companies (AMCs) have been using all sorts of third-party review rule sets and data pools for many years, so this is nothing new under the sun. It really is the first time, however, that they are pulling in data that is verified by appraisers. It has the potential to be a good thing, but it could very well be a bad thing; it will depend on how lenders use it in their respective review processes. It is certainly bad for appraisers if lenders approach it the same way they approached the 15% and 25% adjustment guidelines. I will not get into the old adjustment guidelines too deeply yet, but we all know that many lenders were set in stone about the 15% and 25% guidelines when Fannie Mae was not. Yet lenders and AMCs still required adherence to those guidelines, in some cases as hard and fast rules.
Working in a market as I do in Charlottesville, VA, I understand where some of the concern in the appraisal community would come from. Much of my personal work involves complex residential assignments. From what I gather from those I have spoken to at Fannie Mae, and from what I have read about CU, I imagine many of my reports will score a four or five in their system. I deal with properties that require regional research because they are status homes: essentially unique and custom to the market. I would assume these types of properties represent less than 5% of the MSA and transfer infrequently. The market is also small on the urban side and voluminous in the rural and suburban-to-rural transition properties; acreage varies greatly. Because the MSA is in the Appalachian Mountains, an overwhelming number of homes are built on slopes and have basements, and Fannie Mae requires segregating finished basement area from above-grade living area. I think you can see that unless I am in a planned neighborhood or condominium development, it is unlikely my work product will be seen as conforming. By circumstance, these properties will rate high in risk.
If I felt that my ability to perform work would be affected by the CU risk scores, then I would be up in arms as well. Many of my colleagues believe that the CU risk score will affect them. While I cannot say that it will (or will not), if AMCs and lenders decide to use the information to benchmark appraiser quality, it could be a nightmare for some appraisers. When you get to my thoughts on the 15% and 25% adjustment guidelines later in this piece, you will see more of my perspective on this. I could be wrong, but I am not going to be overly concerned… yet. Until I see things happen contrary to Fannie Mae's stated position, I will hold off on an alarmist attitude.
“CU does not provide an estimate of value to the lender. CU provides a numerical risk score from 1.0 to 5.0, with one indicating the lowest risk and five indicating the highest risk. Risk flags and messages identify risk factors and specific aspects of the appraisal that may require further attention.”
I know many appraisers were convinced that this was not the case. Many were positive that Fannie Mae was going to assign scores to the individual appraisers. It is easy to see why that would be a concern, as the last major Fannie Mae policy change dealt with the Uniform Appraisal Dataset (UAD). Appraisers are directly monitored on consistency of data for comparables with the UAD, but not with CU.
It is easy to mix it all up. If you submit data in violation of the UAD standards, that does affect you; the CU risk score does not. It is relative to the report itself, not to the appraiser. With that stated, AMCs and lenders COULD use consistently high risk scores on reports as a means to separate supposedly problematic appraisers from those who get lower scores. If this type of comparison were made and tracked, it could affect me. I do not compete with the typical mortgage-use appraisers in my market. Plenty stay in homogeneous subdivisions, and I cannot do the work that they do at the price points they do it. If their reports draw lower risk scores by the nature of the conformity those properties present, versus the types of properties that I typically work with, I will be seen as an inferior appraiser. Fannie Mae may state its position on such things, but that does not mean the lenders and AMCs will not distill and extend the information they see further.
This is getting into the realm of conjecture; as such, there is not a whole lot of merit to it at all. It does make one stop and ask questions though. I try not to worry about things that I cannot control so I will leave such thoughts alone for now. But I will come back to the way lenders took the 15% and 25% adjustment guidelines out of context and altered the profession. I will have some more on that later, of course.
“CU’s selection of comparable sales considers the relevance of each potential comparable sale based on physical similarity, time, and distance. The selection process is not based on the relative “risk” or sale price of a comparable sale nor is there a “lower is best” approach. In fact, CU may assign a high risk score to an appraisal when the model identifies alternative sales that are potentially more relevant than the comparable sales used by the appraiser, regardless of whether the alternative sales are higher or lower in price.”
This certainly is concerning for appraisers. Appraisers are paid to perform the research and when we do it, we can get defensive about someone questioning it. Call it professional pride, but this can be a catalyst to incite negativity among us quickly. It is probably a good idea that appraisers write their reports with the above in mind. Part of this may be addressed by including commentary regarding ideal and typical improvements for the shared competing market.
Since the risk flags are triggered by not using properties that are more similar on paper, commentary may need to change to deal with these items. Canned commentary certainly will not work in many situations. I know this means taking the time to write custom commentary in every report, but with enough foresight it is easy enough to build a template set up as a skeleton to which report-specific comments can be added. I have also suggested to a few colleagues, when asked, that they approach this similar to ERC (employee relocation) reporting: possibly embedding a chart of all the comparables surveyed from MLS before distilling them down to the comparables in the actual report. I realize this is more work, but if we start seeing lots of kickbacks on this issue, it might be a way to avoid them.
The way CU is set up, at least as it has been explained thus far, is that data is stratified by census block groups (CBGs), which leads into the next quote…
“CU takes location into account using Census Block Group levels, which are subsets of Census Tracts. This is the most viable proxy for location in the absence of standardized neighborhood definitions, and more effective than use of arbitrary distance guidelines. Fannie Mae is not suggesting that appraisers use Census Block Groups to define comparable search areas, but appraisers remain responsible for indicating when comparables are from outside of the subject neighborhood and for addressing any differences.”
This has caused quite the banter in social media appraisal groups and pages. There are all kinds of issues with this concept. The most obvious one is that appraisers do not normally stratify by CBGs. It would be great if the software companies could add a way to tag which CBG each comparable comes from, or address it in the rule-set reviewer in each of the form packages. This would at least give appraisers a streamlined tool for commenting on this item. It could perhaps even become a data point that appears in MLS data. Most addresses are geocoded now, so it would presumably be easy to do by overlaying the CBG maps onto existing maps.
Obviously, Fannie Mae chose this methodology because neighborhoods vary market to market. My concern with it comes from the staff reviewers at lenders and AMCs (Quality Control, or QC, staff) who are not geographically competent in the area. This gets back to USPAP's requirement to write the report at a level commensurate with the intended users. "Explain away" would be the obvious answer, but that also requires the readers and QC staff to read the reports thoroughly. I often hear from folks involved in QC review that they prefer cogent writing and brevity. From some of the reviews I have personally received from lower-level QC staff (unlicensed appraisers), it seems many struggle with common terms in real property economics. I struggle with dealing with those who gloss over when I use terms such as linkage, commercial zones, obsolescence, etc.
This also still sets up appraisers to deal with non-appraisers applying arbitrary guidelines. While distance guidelines are now going to be relaxed, I dread the likely possibility that QC staff will want CBG differences addressed in their stead. A bit later I will address the sunsetting of the 15% and 25% adjustment guidelines, but one has to see how CBGs could create the same pitfalls that the lender-leveraged adjustment guidelines created. QC staff need to be well trained to deal with this. Nothing prevents individual lenders and AMCs from requiring more than Fannie Mae suggests when they implement CU in their work processes.
“The risk analysis performed by CU is for exclusive use by the lender in their analysis of the appraisal report. After completing a thorough review, a lender should be able to have constructive dialogue with the appraiser to resolve specific appraisal questions or concerns. Although the lender may use output from Collateral Underwriter to inform its dialogue with appraisal management companies and appraisers regarding appraisals they supplied, the CU license terms prohibit providing these entities with copies or displays of Fannie Mae reports that contain CU findings, including without limitation the CU Print Report, the UCDP Submission Summary Report, or any other CU report. The lender must not make demands or provide instructions to the appraiser based solely on automated feedback. Also, the CU license terms prohibit using it “in a manner that interferes with the independent judgment of an appraiser.” Fannie Mae expects the lender to use human due diligence in combination with the CU feedback, and will actively follow up with lenders who are reported to be asking appraisers to change their reports based on CU feedback without any further due diligence.”
Fannie Mae is pretty clear that the impetus is not to strong-arm appraisers with the feedback and analysis done by the system. There is a real desire on Fannie Mae's side to keep human beings in the mix. The possible disconnect I see will be in the competency of the QC staff. The way many AMCs and lenders approach QC reviews is by hiring unlicensed staff and expecting them to understand what valuation professionals do. Each appraisal is different, and finding comments buried in a 20-50 page report is arduous at best; I can struggle with it, and I have performed hundreds if not well over a thousand reviews in my career. The tactic most QC staff use now is simply to kick a report back because they cannot find a comment, or cannot see how the appraiser addressed the issue up front, and to rely on the appraiser to point it out. There are already copious examples of appraisers stating that flagged items were already addressed in the original report. Unless there is some reengineering of the process, this will only get worse now that QC staff will be armed with more data.
One thing we have already seen from CU is copied CU comments being sent to appraisers. I have seen several examples from colleagues where something was flagged in CU and no human review was done; no dialogue was attempted between the QC staff and the affected appraiser. Fannie Mae has made it clear that the CU scores and flags are meant to be dealt with by QC staff actually having dialogue with the appraiser. Instead, what we are seeing so far is many QC staff people simply copying and pasting CU comments and sending them as a standalone engagement for revision or commentary for the appraiser to deal with. That is not creating dialogue; it is asking the appraiser to do the work for the QC staff. One would think, after reading Fannie Mae's letter, that the expectation is for the QC staff to check the report in question before calling on the appraiser to do anything. If the appraiser has reasonably commented on or dealt with the issues of concern, the report should be good to go.
Much of this is going to remain an issue with AMCs and lenders that continue to utilize the services of uneducated and undertrained QC staff. Large lenders and AMCs that process lots of volume expect an awful lot of their QC staff. Each appraisal, if written well, is a stand-alone research project. It should be read and understood with the same care with which it was prepared. Pulling in someone who has never read an appraisal report as an hourly reviewer and expecting them to get through the jargon and concepts summarized in a mortgage-use report is counterintuitive. Either the lenders or the AMCs need to start hiring competent, credentialed valuation professionals, or spend the resources needed to train raw talent. Both are expensive, and neither is an option with the current compensation structures in the overlap between the mortgage and valuation spaces. We will certainly discuss fee levels in depth a bit later.
“Fannie Mae does not instruct or suggest to lenders that they ask the appraiser to address all or any of the 20 comparables that are provided by CU for most appraisals. It is also not Fannie Mae’s expectation that appraisals should contain only CU’s top-ranked comparable sales. In the majority of cases, there may be no material difference between comparable sales utilized by the appraiser and those identified by CU. Before asking the appraiser to consider any alternative sales, it is imperative that the lender analyze the relevance of the sale and determine if the use of such sale would result in any material change to the appraisal report. If the lender determines that there would be no material change, then they should not ask the appraiser to make revisions. Fannie Mae expects CU to enable lenders to accept appraisals “as is” with greater confidence.”
The previous comments I have made are applicable here, too. The disconnect lenders had (again, the adjustment ratio guidelines come to mind immediately) understandably makes appraisers wince at this idea. The biggest concern, here again, is that QC staff must be at a level of competency to understand that suggested comparable sales are just that: suggestions. The way this was handled pre-CU was to send an appraiser comparable sales that were not used and ask the appraiser to comment on not using them, or possibly to include them. Of course, comedy often ensues when the appraiser replies, "Two of the three comparables you sent me to consider are already in the report." This type of real-world scenario proves that where Fannie Mae may need to concentrate some of this reengineering is on those who do review and QC work.
Not to plug the Appraisal Institute (AI), but this may be the very reason the AI created the new review designations, AI-GRS and AI-RRS. Review is a completely different animal from Standards 1 and 2 reporting. I understand that hiring such professionals costs more, which means more cost to the consumer, but let us face it: you get what you pay for. At the very least, if lenders want a positive outcome from the QC side, it should be built around utilizing well-trained professionals, and the review designations are a step in the right direction in my opinion. And it really may not need to be on the consumer's dime so much as it should come from the lenders. Last I looked, the larger lenders have no problem posting profit reports.
I spoke with a chief appraiser at a major AMC last week. He informed me that they have three levels of review. The first level is a combination of using technology to flag potential issues and areas that may need more in-depth analysis. If there is enough need to elevate a report upward, it is then looked at by a non-licensed staff person. At the next and final level, a licensed human being is involved. It is apparently an effective way to do things, but even their internal processes still leave some room for improvement. When so much volume is handled by any given entity, and cost is always the biggest concern, only so much real talent can be hired. I will come back to cost a little later when I discuss fees and compensation.
End of part 2.
Stay tuned; more to come over the next week. If you have any suggestions or want to share some war stories, please send them over to firstname.lastname@example.org.
 “Lender Letter LL-2015-02,” Fannie Mae, https://www.fanniemae.com/content/announcement/ll1502.pdf (February 2015)
Rachel Massey, SRA, AI-RRS
Woody Fincham, SRA
Tim Andersen, MAI, MSc., CDEI, MAA
Originally published at http://www.appraisalbuzz.com/depreciated-cost-test-reasonableness/
With all of the clamor and excitement that Fannie Mae's Collateral Underwriter (CU) is creating, we started working on a new article that addresses some possible solutions. In this one, we expand a bit on using the cost approach as a means to develop and support some adjustments. Each of the three traditional approaches to value can be used to develop a basis of analysis for the others. As such, the cost approach can be a reliable means to develop a gross living area adjustment, or to lend additional support for it. While it does not work every time, it has proven successful for us many times, and as such we urge studying it and putting it into your toolbox of solutions for supporting adjustments.
Quantitative adjustments require some type of support. CU is not changing anything regarding this premise. Appraisers are supposed to have support within the workfile for adjustments made, and then support the adjustments with commentary within the report. This is in harmony with USPAP. Many appraisers do not address specifics on the adjustments made, let alone explain how they were developed and applied. So here is one method that can be relied on as a means to support a gross living area (GLA) adjustment. Sometimes it can be used for other items.
One aspect of Collateral Underwriter (CU) that many have been discussing concerns price per square foot ($/SF). In the example from the CU webinar, it is stated that if an appraiser is using $15/SF for gross living area (GLA) adjustments and the comparable sales indicate $200-$300/SF, then it will probably be flagged as a higher-risk item. Part of the advantage of using this technique is that it helps you address this concern with analysis. Let us look at some improved sales; we will get to site values for the market shortly.
|       | Comp 1   | Comp 2   | Comp 3   | Comp 4   |
|-------|----------|----------|----------|----------|
| Price | $308,300 | $300,000 | $295,000 | $283,000 |
| $/SF  | $127.71  | $129.98  | $119.53  | $122.51  |
In this data set, we have four sales. The range of price/SF is $119.53 to $129.98. The problem with price/SF is that it deals with all attributes of the property. This can be problematic because it is inclusive of the land, which can skew the usefulness of using it as a unit of comparison. Once we get part way through this article, we will start discussing residual improvement value (RIV). RIV can be an effective defense against overall price per square foot concerns.
Simple Depreciated Cost
We are going to walk through a case study of a file that Rachel worked on recently. Obviously, some things have been changed. Some of you will notice that the data set is nothing like what we normally see in classroom case studies; hardly ever do we see perfect sets of data like those in an educational offering. With that said, this may not be something to use if you are new to the profession. This article is written with an experienced residential appraiser in mind.
Depreciated cost can be a test of reasonableness for some adjustments, and here it is used as a basis for the gross living area adjustment, tied to sensitivity analysis. It is not meant as a means of arriving at an adjustment, but as either a place to start or a second or third approach. Because each of us has used it extensively, we felt it would be a great way to help some of you establish a benchmark or test of reason for a gross living area adjustment, in particular because the example is from the real world.
Site value: you really need to get a handle on site values to use this approach (while you can use depreciation factors to get to land values, having a grasp of site values is easier with land sales). Most communities have land sales, even if they are not in the immediate area. For example, the grouping of data presented here was for a property in Michigan, and there have not been a great number of land sales in the immediate area over the past few years; there have been no land sales in the subject neighborhood. There were, however, enough land sales from competing areas to provide some basis for an opinion of the value of the subject site as if vacant.
The following chart shows seven sites that sold and three acreage parcels:
| Sale | Sold date | Sold price | $ to acquire | DOM | Size (SF) | Frontage (FF) | $/SF | $/FF |
|------|-----------|------------|--------------|-----|-----------|---------------|------|------|
| Comp 1 (demo) | 9/17/2014 | $42,050 | $51,050 | 673 | 13,068 | 100 | $3.91 | $510.50 |
*Note: we included a couple of acreage properties because one of the improved comparable sales was an acre property and support was needed for a site adjustment.
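To make the chart's unit prices concrete, the comp 1 figures reduce to simple division. Note that our reading of the roughly $9,000 gap between sold price and cost to acquire as demolition cost is an assumption on our part; the chart itself does not say.

```python
# Land comp 1 (a tear-down) from the chart above.
sold_price = 42_050
cost_to_acquire = 51_050   # sold price plus site prep; the gap is assumed demolition cost
lot_sf = 13_068
frontage_ff = 100

# Unit prices are computed on the full cost to acquire a buildable site
price_per_sf = cost_to_acquire / lot_sf        # matches the chart's $3.91
price_per_ff = cost_to_acquire / frontage_ff   # matches the chart's $510.50
print(f"$/SF: {price_per_sf:.2f}  $/FF: {price_per_ff:.2f}")
```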
In the example, we see that the smaller the lot, the higher the price per square foot (SF), typically. This is known as increasing and decreasing returns; see the definition below. While there are exceptions, this is a general rule. Comparable sale-1 is a tear-down property. Because the data is actual real-world data, it is not as perfect as what we typically see in academic examples, but it does allow a supportable conclusion to be derived.
increasing and decreasing returns
The concept that successive increments of one or more agents of production added to fixed amounts of the other agents will enhance income (in dollars, benefits, or amenities) at an increasing rate until a maximum return is reached. Then, income will decrease until the increment to value becomes increasingly less than the value of the added agent or agents; also called law of increasing returns or law of decreasing returns.
With the data shown above, we can see that price/SF averages $4.19, with a range of $3.00 to $4.97/SF in this market. Front footage (FF) averages $548.42/FF, with a range of $421.69 to $695.88/FF. Establishing an estimate of land value for the comparables used in the sales analysis helps to develop a cost-derived adjustment.
Using comparable sale-1 as an example, the estimated cost looks like this:
| Component | Area (SF) | $/SF | Cost new |
|-----------|-----------|---------|--------------|
| Dwelling | 2,414 | $87.85 | $212,069.90 |
| Basement | 1,142 | $22.17 | $25,318.14 |
| Basement finished | 1,000 | $15.00 | $15,000.00 |
| Garage | 504 | $27.57 | $13,895.28 |
| Cost new estimate | | $114.45 | $276,283.32 |

| | |
|---|---|
| Sales price | $308,300.00 |
| Site value | $(55,000.00) |
| Depreciated value of improvements (or RIV) | $253,300.00 |
| Cost new minus RIV (total depreciation) | $22,983.32 |
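The arithmetic above can be sketched in a few lines, using the figures taken directly from the comparable sale-1 example:

```python
# Depreciated-cost extraction for comparable sale-1, per the tables above.
cost_new = 276_283.32      # Marshall & Swift cost new estimate
sale_price = 308_300.00
site_value = 55_000.00

riv = sale_price - site_value          # residual improvement value
total_depreciation = cost_new - riv    # all forms of depreciation combined
pct_depreciated = total_depreciation / cost_new

print(f"RIV: ${riv:,.2f}")
print(f"Total depreciation: ${total_depreciation:,.2f}")
print(f"Percent depreciated: {pct_depreciated:.2%}")
```

The extracted overall depreciation rate (about 8.32% here) is what gets reused later for items such as the basement finish.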
Below, we have estimated the site value and subtracted it from each of the comparable sales. The resulting unit of comparison is much better than overall price/SF. The $/SF-RIV can be used as an indicator of the highest possible reasonable adjustment for GLA. We like this as a test of reasonableness for any adjustment made for differences in gross living area. The resulting $/SF-RIV is going to be the upper limit of how much you can adjust.
| Sale | Sale price | Land value | RIV | GLA | $/SF-RIV |
|------|------------|------------|-----|-----|----------|
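As a single worked row for this table, comparable sale-1 (2,414 SF of GLA per the cost figures above) comes out as follows:

```python
# $/SF-RIV for comparable sale-1, using figures from the earlier tables.
sale_price = 308_300.00
land_value = 55_000.00
gla = 2_414        # square feet of above-grade living area

riv = sale_price - land_value   # residual improvement value
riv_per_sf = riv / gla          # upper limit for a GLA adjustment
print(f"$/SF-RIV: ${riv_per_sf:.2f}")
```

Note how the roughly $105/SF-RIV sits well below comp 1's overall $127.71/SF, because the land component has been stripped out.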
What about fireplaces, decks, and the like? Is this approach right for those? Items such as decks and outbuildings typically depreciate at a faster rate than the house, so one should steer away from using this methodology with such items. We still believe the approach can be used to measure the top end of the adjustment range, or as a test of reasonableness, but with the caveat that the rates of depreciation may vary.

Depreciated cost may offer the only adjustment you need at all if your comparable sales are all very similar. It can be difficult to support adjustments for additional features like decks and fireplaces; those types of amenities are sometimes best dealt with using qualitative reasoning. If you are looking at sales that all have similar external features and are of the same quality and condition as the subject, it may not be necessary to adjust for them. These items are difficult to extract and may be summed up with qualitative reasoning. It will depend on what you have learned from the market. This is an excellent area to discuss with real estate agents: ask whether such features are strong considerations for buyers. It is also important to understand how sellers look at such items; we find that talking to both agents on a transaction can be beneficial for gleaning this information. In the end, if no adjustments are supportable for such amenities, the appraiser can discuss the additional amenities present for a sale and use that in the final weighting during the reconciliation of the sales comparison approach.
We can apply these figures to the improved sales that we are using in the sales approach to get a residual improvement value (RIV). As mentioned earlier, RIV is a better indication of comparability as it allows us to compare apples to apples. It removes the land component, and other improvements not related specifically to the house itself. Just getting this far into the process with each of the comparables, and looking at the RIV/SF as a metric will assist with the concerns many are having about the CU overall price/square foot metric.
The next step is to take each sale and develop a cost approach using the Marshall & Swift Residential Cost Handbook (disclaimer: huge fans here) for the appropriate quality level. It is important to make adjustments for energy and foundation (at the bottom of the page for the housing type) if they apply, refinements for floor covering, heating and cooling, etc., and to apply the quarterly multipliers for region and location. From there, you compare total cost new to the depreciated remainder to arrive at total depreciation.
You would then do one for each of the sales in the study.
| Sale | Cost new | RIV | Total depreciation | % depreciated | Age | Depreciation/yr |
|------|----------|-----|--------------------|---------------|-----|-----------------|
*Note: This type of approach captures all forms of depreciation. If functional or external obsolescence were present for any of the comparable sales, that would need to be accounted for as well. In this case study, there were no additional forms of depreciation.
This information can be valuable in terms of understanding depreciation, as well as supporting either an age or a condition adjustment (look at how sales 3 and 4, which are older houses, have much more depreciation than the newer houses overall). Since each house is depreciated between ~6 and ~15 percent, you also have supportable adjustments to make for age or condition.
You can also utilize this type of adjustment for amenities such as basements. For example, say comparable sale 1 has a finished basement that is older and not of high quality. The finish costs roughly an additional $15 per square foot over and above the cost of the basement itself; the finish is a recreation room only, and its cost new is around $15,000. The overall rate of depreciation for this property is 8.32%, or about $1,250 (rounded). Logically, then, the basement finish would now contribute about $13,750 to the property value. That may not be sufficient to stand alone, but it does offer a method of support.
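The basement-finish example above can be checked with a few lines of arithmetic, using the figures from the example itself:

```python
# Basement-finish contribution, depreciated at the property's overall rate.
cost_new = 15_000           # cost new of the recreation-room finish
depreciation_rate = 0.0832  # overall depreciation rate for comparable sale 1

# 15,000 x 8.32% = $1,248; the article rounds this to $1,250.
depreciation = round(cost_new * depreciation_rate, -1)
contribution = cost_new - depreciation  # indicated contribution to value

print(f"Depreciation ≈ ${depreciation:,.0f}; contribution ≈ ${contribution:,.0f}")
```

This reproduces the $13,750 indication cited in the text.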
Additional support can come from running simple statistics, such as isolating a group of sales based on common characteristics. For the following sample, we took houses in this particular market built between 1995 and the present, excluding proposed construction, and separated them into houses with and without basement finish. They were further narrowed to 1,800 to 2,800 SF with no walkout basement. A simple version of grouped paired analysis shows a difference of between $14,843 and $15,377 between the two groups, with many of the finished basements including bathrooms in addition to finished rooms. With an indication of $13,750 from comparable sale 1 and the grouped pair analysis showing a range of $14,800 to $15,400, it is easy to deduce a reasonable adjustment amount.
| No Walk Out | # Sales | Avg Price | Median Price | Avg GLA | Med GLA | Avg $/SF | Median $/SF |
|---|---|---|---|---|---|---|---|
| 1800-2800 SF Unfinished Basement | 42 | $340,559 | $329,623 | 2,392 | 2,402 | $142.37 | $137.23 |
| 1800-2800 SF Finished Basement | 89 | $355,402 | $345,000 | 2,399 | 2,408 | $148.15 | $143.27 |
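The grouped-pair arithmetic reduces to two subtractions, sketched below using the figures from the table:

```python
# Grouped-pair analysis: price gap between finished- and unfinished-basement
# groups, using the average and median prices from the table.
unfinished = {"avg": 340_559, "median": 329_623}
finished   = {"avg": 355_402, "median": 345_000}

avg_diff = finished["avg"] - unfinished["avg"]           # average-price gap
median_diff = finished["median"] - unfinished["median"]  # median-price gap

print(f"Indicated range: ${min(avg_diff, median_diff):,} "
      f"to ${max(avg_diff, median_diff):,}")
```

This reproduces the $14,843 to $15,377 range cited in the text.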
Completing a cost approach on each sale is a good exercise in seeing cost in action, as well as in testing depreciation. The greater the depreciation exhibited in the individual sales, the greater the difference in condition or age, or a combination of both. So this methodology can also create support for other types of adjustments, such as the basement finish adjustment shown above. Many will say this takes a lot of time, and our answer is, "Yes, but it uses some common sense and appeals to reasonableness." We would also add that explaining this is much easier than trying to use regression analysis or find those elusive matched paired sales. Most appraisers can reasonably explain cost-based extractions to a jury or licensing board, and it does not require much in the way of additional tools: Excel®, cost estimation software, and appraisal software are all that is really needed.
Depreciated cost does work in many markets, so give it a try and see if it will work for you. Use it in addition to other methods of supporting adjustments. We consider it an excellent test of the reasonableness of both the value conclusion and the elements of comparison within it. We have each successfully used it in lending and non-lending assignments.
Fannie Mae and CU are specifically going to target our size adjustments. In the past, many appraisers used "rules of thumb" as the basis for a size adjustment. As we are all now aware, rules of thumb no longer work, because CU has the ability to calculate size adjustments from market sales data. The model above, while not based on CU's sophisticated algorithm, also functions quite well in isolating the sales price of the improvements. Using this model, appraisers are able to isolate such differences within a reasonable range of values. Even more importantly, this range of values is market-derived, and thus in full compliance with CU's requirements. Be sure, too, to save all of these calculations in the workfile for future reference. Gone are the days when we could justify our adjustments by invoking "my 30 years of appraisal experience." Now we must prove our adjustments, and this model is one of those proofs. Finally, what we have presented here is nothing new: this well-known method has been published in numerous books and courses. We thought presenting a "real-world" example might be helpful in showing that even without perfect results, the results can nonetheless be meaningful.
 Appraisal Institute, The Dictionary of Real Estate Appraisal, 5th ed. (Chicago: Appraisal Institute, 2010)
All three of us worked on this piece. I won't post it in its entirety yet, as it's brand new today and should be given a full look through the publishing site. But if you happen across it here, please click through to read it.
By Timothy C. Anderson, MAI, Msc., CDEI, MAA
In my on-going attempt to unravel some of the mysteries of real estate appraisal, as well as to give appraisers an idea of what it is that CU is and does, I have studied some actual sales in a mid-western state and then summarized those sales data, in graphic form, in the Figures below.
The exhibits below come from the statistical functions in Excel®. There is some rather scary-looking algebra on them, but do not worry: most of it is for comparison purposes. You do NOT have to understand how the computer arrived at those formulae (i.e., the algebra and calculus behind them) to understand the topics in this article. The math behind what those formulae tell us is not really all that difficult, but it belongs in advanced classes, and this is an article, not an advanced class.
To understand this article, you do not even need to understand statistics. Just follow the narrative and the thrust of the charts will become clear to you.
First up is an explanation of the data the charts use. These data are from 2013 and 2014, so they are recent. The appraiser who amassed them knows what s/he is doing, so there is no reason to question his/her professional integrity or ability. These are actual sales data, culled from the MLS. All have closed escrow and transferred title from seller to buyer. The sales prices are all cash equivalent (i.e., adjusted for non-realty concessions as necessary). All of these sales are from the same subdivision, but that subdivision has houses of varying ages, sizes, qualities of construction and maintenance, and so forth. In other words, the houses here are all subject to the same market forces, but clearly differ one from another.
Since the data were not property-specific (i.e., not all of them would be applicable to a hypothetical subject), what we look at in this article are the subdivision’s trends. Specifically, we analyze if there is any correlation between (a) the sales price per square foot and the year built; (b) between sales prices per square foot and total size; (c) between sales prices per square foot and the date of sale; and, finally (d) the correlation (if any) between the absolute sales price and the days on market.
Just to jog your memory about statistics: in any comparison there has to be a basis for that comparison. This basis is called the independent variable. It is always shown on the graph's x-axis (i.e., the horizontal or base line). The dependent variable is always shown on the y-axis, the vertical line.
This article’s topic is the correlation between the dependent and independent variables. On the Figures that follow, you will see lots of blue dots and then lines of various colors. What you are looking for is how well the lines (specifically the red line) track with the blue dots. When the (red) line and the blue dots are close to each other, there is what is called high correlation (as well as low variance). All other things being equal, we look for high correlation, typically above 50% (and really, a correlation close to 90% is more-or-less ideal).
When there is a high correlation, it means the data explain well the relationship between the independent variable and the dependent variable. When that correlation is low, however, it means the two variables really do not explain each other. We will see examples of both relationships as the article progresses.
Another purpose of this article is to illustrate (but not explain; it is too short for that) what it is CU does with all the data with which we have provided it in the past. When CU flags an appraiser's entry in a field, it is because the entry has gone through an analysis such as one of these (although far greater in depth and breadth), and CU has determined that the response did not correlate properly with the other data in its database. This lack of correlation does not mean the appraiser is "wrong". It merely means the appraiser needs to explain how/where s/he derived that particular response. While there are many ways to respond to such a request, a graph such as one of those below goes a long way toward that explanation.
Take a look at Figure 1.
It looks at the relationship between the sales price per square foot of the properties (y-axis) and the year in which a particular house was built (x-axis).
First, look at the red line. Notice its trend is slightly uphill from left to right. This means that newer properties tend to sell for more per square foot than older properties. All other things being equal, you would expect this relationship. However, as you will also notice, relatively few of the blue dots (the sales prices per square foot of the component sales) touch the red line. This means there is a lack of correlation (i.e., a high variance) between the two variables. In fact, the formula at the figure's upper left-hand corner shows a correlation of only 1.85%, which is essentially no correlation at all.
What this statistical analysis tells us is that, for a property constructed between 1999 and 2007 (and all 77 in the sample were), its age at the date of sale really has nothing to do with its sales price per square foot, since prices do not vary all that much with age.
Therefore an age & condition adjustment for a property built within these years is likely not necessary. True, this contradicts the traditional thinking of many appraisers. But are appraisers incapable of change when the need for that change stares them in the face?
Now look at the purple line (ignore the green one, since it is a variation on the red one). While the math behind the purple line is more demanding than the math behind the red line, it is more explanatory, too. What it says is that the market, as of the date of appraisal, was willing to pay more for houses built in 2002 than for houses built much before or after that date. However, it does not explain why this is so.
However, despite the fact that the purple line touches more of the blue dots than the red line does, it shows a correlation of only 13% between year built and sales price per square foot. While this line explains the market better than the red line, it does not explain it all that much better.
This Figure, therefore, indicates that, given solely these data, there really is no compelling reason to make an adjustment based solely on a house's date of construction. Different data, or fewer than 77 sales, might have indicated a different result.
Figure 2, however, tells a different story. Looking solely at the blue dots, it is easy to deduce that as size increases (the x-axis), sales price per square foot (the y-axis) decreases. From the dots alone, the overall decrease is clear, but the rate of decrease is not. Now look at the red line (ignore the other two, since they are essentially the same as the red line). You'll notice that not only does the red line touch a lot of the blue dots, but of those that do not touch it, a whole bunch are really close to it. This indicates that, given this sample of data, there is a high correlation between a house's square footage and its sales price per square foot. In fact, the math behind the red line (not shown here, but included by reference) shows an 82% correlation between the two.
In fact, the formula in the far upper right-hand corner of the Figure quantifies that change in value. It says there is a $0.0302 change in sales price per square foot for every one square foot of variance in size from the average square footage of this sample (in this case, 1,998 square feet). These data indicate that for an average-size house (i.e., 1,998 square feet in this sample), the market recognizes about $91 per square foot [(−0.0302 × 1,998) + 151.25 ≈ $90.91].
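The reported trendline can be evaluated directly. This sketch simply plugs the average size into the equation as the article states it, price/SF = −0.0302·SF + 151.25:

```python
# Evaluate Figure 2's reported trendline at the sample's average size.
# The slope (-0.0302) and intercept (151.25) are taken from the article.
def price_per_sf(sf: float) -> float:
    """Predicted sales price per square foot for a house of `sf` square feet."""
    return -0.0302 * sf + 151.25

avg_sf = 1_998
print(f"Indicated $/SF at {avg_sf:,} SF: ${price_per_sf(avg_sf):.2f}")
```

Plugging in other sizes shows how the market-indicated price per square foot shrinks as houses get larger, which is the basis for the size adjustment discussed next.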
Therefore, were an appraiser to make an adjustment of $15 per square foot for size differences in this market, based on these sample sales transactions, CU would (rightly) flag it. Why? Because the market data clearly indicate this market does not support a $15-per-square-foot adjustment for this difference. This analysis is based on these sales, not on traditional rules of thumb. Obviously, using different sales, or fewer than the 77 sales here, would produce different results.
Now let us consider changes in sales prices per square foot as they relate to changes in sales dates. In other words, as time progressed over the time period these sales covered, how (if at all) did sales prices per square foot change? Since the sales date is fixed, it is the independent variable (the x-axis), whereas the sales price per square foot is the y-axis. See Figure 3. For purposes of this discussion, we ignore the really funky formulae and concentrate on the “simple” one (the one that calculates the red line).
Note in this Figure that lots of the blue dots are relatively far from the red regression line. Again, this indicates the data were all over the place, and thus show a great deal of variance, or error. It also means the data are not really reliable at predicting anything other than a trend (i.e., as time passes, value per square foot increases). The red regression line also shows that the correlation of these data is really low, at 4.2%, which is essentially no correlation at all. So this graph, and the data behind it, are something you would toss into the workfile and forget.
Now move on to Figure 4. It shows the relationship between total sales prices and confirmed days on market. Look at the red regression line. Not a lot of the blue dots touch it, so there is a lot of error there. Its correlation of <1% indicates there is no more linear correlation between these variables than the operation of mere random chance would explain.
However, look at the green regression curve. This is a lot more complex to calculate, but as you can see it touches a lot more of the blue dots (its correlation is approximately 26%, as a matter of fact). What this graph demonstrates is that relatively inexpensive properties (<$150,000) spent a lot of time on the market before going under contract, whereas more expensive properties ($160,000 to $200,000) spent relatively fewer days on the market before they sold. Then, at about $200,000 and up, higher prices meant the houses appealed to a smaller submarket of buyers, so their days on market increased back to between 140 and 160 days. So what does this relationship mean to an appraiser?
On page 1 of the 1004 form, it means the "typical" range of values in the neighborhood is from about $160,000 to $200,000, with sales outside that range as outliers. It also means that, were the appraiser to conclude a value outside the $160,000 to $200,000 range, the appraiser would also be concluding a longer-than-average marketing period (here the average was ±61 days). However, given the low correlation coefficient of 26%, it also means there are reasons other than days on market that explain differences in sales prices. Thus, whatever conclusions the appraiser draws from this graph merit a liberal seasoning of salt.
So what are the take-aways here? The only graph that really tells us anything is Figure 2, given that it shows an 82% correlation. Therefore, the appraiser can confidently conclude that square footage alone accounts for 82% of the price differences. Further, given this high degree of correlation, the appraiser could use the regression formula (−0.0302x + 151.25) as one fairly accurate tool in forming a value conclusion. Note, however, it is no more than a tool.
What does all of this have to do with CU? CU's built-in algorithms do all of the above, plus a whole lot more, and have millions of data points to draw on, not the 77 we had here. It can compare all of these data points with each other one variable at a time, or it can look at the "big picture" and compare them all at once via multi-variable regression analysis. While multi-variable regression is far from infallible, and will not work under some circumstances, if Fannie Mae can use tools such as this one, why should appraisers not use a similar tool, too? (If you have Excel®, activate the Analysis ToolPak add-in, and you will have all of the statistical computing power you will ever need.)
Although some of the regression tools popping up all over the web are appealing, Excel® offers everything you need, right at your fingertips. All you need is a few hours of study time to get up to snuff on it, and then it is virtually free. Courses from various education providers can walk you through it if that is how you learn best, and there are even some inexpensive, accessible online tools not specific to appraisal (think Udemy, as well as Microsoft itself).
On a closing note, the technology to take the appraiser out of the mortgage-lending picture has been in Fannie Mae's hands for at least the last five years (and the math has been available since the late 1700s). The data and technology to do so exist now, and will only become sharper in the future. This article was written with the residential appraiser in mind, to offer a simplified look at how Excel works and a real-world sample of where it applies.
If appraisers do not start to adapt and change, and instead keep to the status quo of three or four sales on a grid without providing support for their analysis, why should Fannie Mae and local lenders continue to pay appraisers millions of dollars per year to do what Fannie Mae can already do essentially for free with a few keystrokes of CU? Algorithms already "grade" our appraisals. They already have the capacity to do (for the most part) everything we do now, but CU can do it much faster, cheaper, and more compliantly. Fannie Mae is well ahead of us in this race. We appraisers can catch up with the technology and thereby show our clients we are the ones who should be doing their appraisals: not brokers, not AVMs, not unlicensed desk-monkeys, and most certainly not Fannie Mae, whose lenders have a vested interest in getting the numbers they need to make the loans.
 The fact that this coefficient is negative means the line slopes downward from upper left to lower right. If this coefficient were positive, the opposite would be true.
 In stats-speak “variance” is also called “error”. This does not mean there is something amiss or the math is wrong somewhere. It means, instead, that when a point falls well above or below the regression line, it is in error by that distance from the regression line.
 In this formula, the “x” is the square footage you want to insert.
 Without going into a lot of calculus or philosophy, an algorithm is a “set of rules that precisely defines a sequence of operations”. A computer program is an algorithm. CU uses algorithms. Fortunately, appraisers do not have to write these algorithms since they are built into Excel®. See http://en.wikipedia.org/wiki/Algorithm.
Given there have not been all that many flat residential real estate markets in the past 10 years, how market-accurate, then, are the published tables? SR2-3 requires appraisers to certify that the statements of fact in an appraisal report are both true and correct. If there have been essentially no flat markets in the last 10 years, how can we certify our depreciation is both true and correct if the published depreciation tables are based on a flat market? If markets are dynamic, but the published tables assume a flat market, how accurate can they be?
Another issue with the published tables is their self-recognized inability to speak to functional obsolescence and external or locational obsolescence. Appraisers know there are three components to accrued depreciation, yet they depend on the published tables to conclude as to all three. These tables do not and cannot estimate the latter two forms of depreciation. In addition, it is a logical fallacy to assume a property has only one form of depreciation (even, sometimes, a new one).
The Comment to SR1-3(a) is very clear about unsupported assumptions. If the appraiser does not engage in the analytics of the Cost approach, how is the appraiser sure there is no functional obsolescence? If the appraiser does not engage in the analytics of the Cost approach, how is the appraiser sure there is no external or locational obsolescence? If the appraiser does not engage in the analytics of the Cost approach, how can the appraiser certify that everything in the Cost Approach is both true and correct? Falling rents and/or falling multipliers may indicate the presence of these other two components of accrued depreciation. However, how many appraisers, via the residential income approach, go to the effort to read the market's tea leaves?
To professional appraisers, then, the issue is to extract accrued depreciation from market data. Published tables may help with depreciation's age-life component, true. But they cannot aid the appraiser with conclusions as to functional or locational/external obsolescence. These tables simply cannot calculate them; the appraiser must extract them from the market evidence. Yet, unfortunately, many do not. And, equally unfortunately, many appraisers do not understand when, where, and how to account for entrepreneurial profit/incentive. Because of this lack of competency, many appraisers do not understand the market, since they are unable to listen to it.