Sales Comparison Model Evaluation
Model Evaluation provides statistics on the accuracy and effectiveness of a Model. The evaluation process works by taking a set of Sales from DataLog and treating them as Subject properties to be valued using the Model under evaluation. The evaluation takes up to three years of Sale history matching the criteria and uses those Sales as candidate comparisons for each Subject.
The evaluation takes each Subject in the set and picks the highest-rated comparisons to use in a simulated Sales Comparison Valuation. The number of rated comparisons used to estimate the value of the Subject is determined by the Model’s configuration. Once these Sales are chosen as comparisons, they are adjusted for Time, Land, and Improvements against the Subject. A value for the Subject is then calculated from the adjusted values of the comparisons. This value is compared to the Subject’s actual sale price to measure the Model’s accuracy. Once this process has completed for every Subject property in the set, a summary of the Evaluation is shown.
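The per-Subject flow described above can be sketched roughly as follows. This is an illustrative sketch, not the product’s actual code: the `Sale` shape and the `rate` and `adjust` callables are assumptions standing in for the Model’s comparison rating and its Time, Land, and Improvement adjustments.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Sale:
    sale_price: float
    acres: float

def evaluate_model(subjects, candidates, comp_count, rate, adjust):
    """Value each simulated Subject from its top-rated comparisons and
    return the percentage error against its actual sale price."""
    errors = []
    for subject in subjects:
        # Rate every candidate Sale against the Subject (excluding the
        # Subject itself) and keep the top N, where N is the Model's
        # configured comparison count.
        rated = sorted((c for c in candidates if c is not subject),
                       key=lambda c: rate(subject, c), reverse=True)
        comps = rated[:comp_count]
        # Adjust each comparison against the Subject, then estimate the
        # Subject's value from the adjusted comparison prices.
        estimate = mean(adjust(subject, c) for c in comps)
        # Accuracy is measured against the Subject's actual sale price.
        errors.append((estimate - subject.sale_price) / subject.sale_price)
    return errors
```

For example, with a toy rating that prefers similar acreage and a per-acre price adjustment, `evaluate_model(sales, sales, comp_count=2, rate=rate, adjust=adjust)` yields one error figure per simulated Subject.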
Model Testing Method
Two methods for Evaluating Models are available: Match Criteria and Testing Set.
Match Criteria is the default and preferred method for Evaluating a Model. This method selects Sales to use as simulated Subjects by using Sales from DataLog that fit the Subject Criteria set in the Model. In order for a Model to be approved, it must be Evaluated using the Match Criteria method.
The Testing Set method takes a preconfigured static set of Sales and uses them as simulated Subjects in the Evaluation. This method is useful for testing smaller sets of known Sales for accuracy using the Model. However, since the Subjects for this method are user-provided and not necessarily representative of Subjects that would be selected using the Model Criteria, it cannot be used to approve the Model for use on actual Subjects.
Evaluation Summary
The Evaluation Summary section provides an overview of the results of the Evaluation. It lists the number of Sales that were used as simulated Subjects, how many were rejected for valuation, and numerical statistics regarding the effectiveness of the Model in estimating the value of the Subjects.
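The specific summary statistics are not itemized here. As an illustration only, ratio-study measures common in mass appraisal, such as the median estimate-to-price ratio and the coefficient of dispersion (COD), could be computed from the Evaluation’s results like this; the function name and the choice of statistics are assumptions, not the product’s actual figures.

```python
from statistics import mean, median

def summary_stats(estimates, sale_prices):
    """Illustrative accuracy statistics for an Evaluation run."""
    # Ratio of each estimated value to the actual sale price; a ratio
    # near 1.0 means the Model tracked that Subject's price closely.
    ratios = [e / p for e, p in zip(estimates, sale_prices)]
    med = median(ratios)
    # Coefficient of dispersion: average absolute deviation from the
    # median ratio, expressed as a percentage of that median. Lower is
    # more consistent.
    cod = 100 * mean(abs(r - med) for r in ratios) / med
    return {"count": len(ratios), "median_ratio": med, "cod": cod}
```

A median ratio close to 1.0 with a low COD would indicate a Model that values Subjects both accurately and consistently.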
Valuations Grid
The Valuations Grid displays the details of each simulated Subject that was successfully valued in the Evaluation.
The Index of each Sale used as a simulated Subject is listed and can be used to view the details of the Sale. Its estimated value, both total and per unit, is shown for comparison with its actual sale price.
Expanding the Subject row provides more details on the results of the Valuation. This shows a list of the Sales that were chosen as comparisons and how they scored when rated against the Subject. Expanding a compared Sale row lists each individual comparison made against the Subject that was used to arrive at its overall rating.
Rejected Valuations
The Rejected Valuations grid lists each simulated Subject that could not be valued using the Model in the Evaluation. Common reasons for rejection are having too few comparable Sales that meet the Model requirements and having an adjusted price spread that exceeds the limit set in the Model.
Additionally, for a Sale to be used as a comparison with a Subject, it must have a Dollar Per Acre value set for each land type that the Subject property has Acres listed for. This is so a Land Adjustment can be done before the final valuation. If these land types do not match, then the Sale cannot be used as a comparison. A Subject property with an abnormal land mix will therefore be difficult to value using Sales Comparison, due to a lack of comparable Sales to use.
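Both rejection checks above can be sketched as simple predicates. These are hypothetical illustrations: the helper names are invented, and the spread limit is assumed here to be a percentage of the lowest adjusted comparison price, which may differ from the Model’s actual definition.

```python
def land_types_comparable(subject_acres_by_type, sale_dollar_per_acre):
    """A Sale qualifies as a comparison only if it has a Dollar Per Acre
    value for every land type the Subject has Acres listed for; otherwise
    the Land Adjustment cannot be computed."""
    needed = {t for t, acres in subject_acres_by_type.items() if acres > 0}
    return needed <= set(sale_dollar_per_acre)

def within_spread_limit(adjusted_prices, limit_pct):
    """Reject the Valuation when the spread of the adjusted comparison
    prices exceeds the limit set in the Model (assumed here to be a
    percentage of the lowest adjusted price)."""
    low, high = min(adjusted_prices), max(adjusted_prices)
    return (high - low) / low * 100 <= limit_pct
```

For example, a Subject with tillable and timber Acres cannot use a comparison Sale that only prices tillable land, and a pair of adjusted prices 40% apart would fail a 25% spread limit.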
Once satisfied with the results of a Model Evaluation, the user can use the Accept Model button to indicate that the Model is ready for use. Using this button will prompt the user to enter their name. Once finished, the Model can then be used to value actual Subject properties that fit its criteria.