Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This action is in response to papers filed on 10/30/2025.
Claims 1 and 39 have been amended.
Claims 2-6, 8, 10, 11, 13, 14, 17-33, and 37 have been cancelled.
No claims have been added.
Claims 1, 7, 9, 12, 15, 16, 34-36, and 38-44 are pending.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 7, 9, 12, 15, 16, 34-36, and 38-44 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1:
The claims are directed to a process (method as introduced in Claim 1) and/or system (Claim 39), thus Claims 1, 7, 9, 12, 15, 16, 34-36, and 38-44 fall within one of the four statutory categories. See MPEP 2106.03.
Step 2A, Prong 1:
The claimed invention recites an abstract idea according to MPEP § 2106.04. The independent claims recite the following claim limitations, which constitute the abstract idea, as set forth below.
Claim 1 recites:
training a home valuation system to determine a value of a home in a geographic area, the home valuation system comprising a photo scene classification model, a photo quality level classification model, and a valuation model, wherein an output of the photo scene classification model and an output of the photo quality level classification model are connected to an input of the valuation model, and
wherein training the home valuation system comprises:
training, using a plurality of photos associated with a plurality of homes in the geographic area, the photo scene classification model, wherein the photo scene classification model is trained to determine, for an input photo, a scene classification from among defined scene classifications including one or more room types;
training, using the plurality of photos associated with the plurality of homes in the geographic area, the photo quality level classification model, wherein the photo quality level classification model is trained to determine, for the input photo, a quality classification; and
after training the photo scene classification model and the photo quality level classification model, training, using scene classifications output by the photo scene classification model and quality classifications output by the photo quality level classification model, the valuation model for predicting the value of a home in the geographic area based on one or more home attribute values of the home, one or more scene classifications associated with one or more photos of the home, and one or more quality classifications associated with the one or more photos of the home; and
periodically obtaining a housing price index for the geographic area at least in part by, for each of substantially every home in the geographic area,
retrieving, from an external data source, one or more home attribute values for the home;
receiving one or more photos depicting the home; and
subjecting the one or more home attribute values and the one or more photos depicting the home to the trained home valuation system to obtain a predicted value of the home, wherein the subjecting comprises:
for each photo of the one or more photos depicting the home:
determining, using the photo scene classification model, a scene classification; and
determining, using the photo quality level classification model, a quality classification;
calculating, based on the scene classification and quality classification for each photo of the one or more photos depicting the home, one or more aggregated quality values, each aggregated quality value being associated with a scene classification determined for the one or more photos; and
determining, using the valuation model, a predicted value of the home based on the one or more home attribute values and the one or more aggregated quality values; and
using the obtained predicted values to obtain the housing price index for the geographic area.
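For reference, the two-stage process recited in claim 1 (classify each photo by scene and quality, aggregate quality per scene, feed the aggregates and home attributes to a valuation model, then aggregate predicted values into an index) can be sketched as follows. This is a minimal illustrative sketch only; the function names, toy linear valuation formula, and data shapes are hypothetical and are not drawn from Applicant's disclosure.

```python
from statistics import mean

# Hypothetical stand-ins for the trained classifiers; in the claim these
# would be models trained on labeled photos of homes in the geographic area.
def scene_model(photo):
    return photo["scene"]       # e.g. "kitchen", "bathroom"

def quality_model(photo):
    return photo["quality"]     # e.g. a numeric quality score

def valuation_model(attributes, aggregated_quality):
    # Toy linear model: a base value from square footage plus a premium
    # derived from the per-scene aggregated quality values.
    return attributes["sqft"] * 200 + sum(aggregated_quality.values()) * 1000

def predict_home_value(attributes, photos):
    # Classify each photo, then aggregate quality scores per scene classification.
    per_scene = {}
    for photo in photos:
        per_scene.setdefault(scene_model(photo), []).append(quality_model(photo))
    aggregated = {scene: mean(scores) for scene, scores in per_scene.items()}
    return valuation_model(attributes, aggregated)

def housing_price_index(homes):
    # Aggregate the predicted values across substantially every home in the area.
    return mean(predict_home_value(attrs, photos) for attrs, photos in homes)
```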
Claim 39 recites:
a computing system including a processor and a memory operatively connected to the processor and storing:
a photo quality level classification model trained using a plurality of quality classification training photos and a plurality of corresponding quality classifications to generate, in response to an input photo, a quality classification of the input photo;
a photo scene classification model trained using a plurality of scene classification training photos and a plurality of corresponding scene classifications, the photo scene classification model being a classifier model configured to determine, for the input photo, a scene classification from among defined scene classifications including one or more room types;
a valuation model communicatively linked to outputs of the photo quality level classification model and the photo scene classification model, the valuation model being trained using training data including the input photo, the scene classification generated for the input photo by the trained photo scene classification model, the quality classification generated for the input photo by the trained photo quality level classification model, one or more attributes of a property corresponding to the input photo, and an actual home value of the property corresponding to the input photo;
wherein the valuation model is trained to generate, in response to receiving a second input photo corresponding to a scene of a home, a quality classification of the second input photo from the photo quality level classification model, a scene classification of the second input photo from among the one or more room types, and one or more attributes of the home retrieved from an external data source, a valuation of the home;
wherein the memory further stores instructions which, when executed, cause display of a user interface including identifying information for the home and the valuation of the home.
The claim limitations identified above, as drafted, recite a process that, under its broadest reasonable interpretation, covers concepts performed in the human mind (including an observation, evaluation, judgment, or opinion). Other than reciting a computer implementation, nothing in the claim elements precludes the steps from encompassing performance in the human mind, which represents the abstract idea of mental processes. But for the recitation of generic computer system components, the claimed invention merely recites a process for determining home values based on information derived from photos and attributes of the home, which could be performed in the human mind or with pen and paper. For example, a user could determine the quality and scene of a home and decide its classification merely by viewing the photo (or multiple photos from various sources) and making a judgment, and in turn use those judgments and additional home attributes to calculate and/or predict the value of a home.
Step 2A, Prong 2:
This judicial exception is not integrated into a practical application. In particular, the claims recite additional elements such as:
a computing system including a processor and a memory operatively connected to the processor;
training a home valuation system comprising a photo scene classification model, a photo quality level classification model, and a valuation model, wherein an output of the photo scene classification model and an output of the photo quality level classification model are connected to an input of the valuation model;
training the photo scene classification model;
training the photo quality level classification model;
after training the photo scene classification model and the photo quality level classification model, training, using outputs of the photo scene classification model and outputs of the photo quality level classification model, the valuation model used for making predictions;
subjecting the received information about the home to the trained home valuation system, wherein the subjecting comprises: using the photo scene classification model, using the photo quality level classification model, and using the valuation model to predict a value; and
an external data source (for storing data for retrieval).
In particular, the additional elements cited above, beyond the abstract idea, are recited at a high level of generality and amount to no more than a generic recitation of basic computer functionality, i.e., mere instructions to apply the judicial exception using generic computer technology components.
Accordingly, since the specification describes the additional elements in general terms, without describing the particulars, the additional elements may be broadly but reasonably construed as generic computing components being used to perform the judicial exception (see specification at [0023]-[0025]). These claimed additional elements merely recite the words "apply it" (or an equivalent) with the judicial exception, or merely include instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f).
Thus, the additional claim elements are not indicative of integration into a practical application, because the claims do not involve improvements to the functioning of a computer, or to any other technology or technical field (MPEP 2106.05(a)), the claims do not apply the abstract idea with, or by use of, a particular machine (MPEP 2106.05(b)), the claims do not effect a transformation or reduction of a particular article to a different state or thing (MPEP 2106.05(c)), and the claims do not apply or use the abstract idea in some other meaningful way beyond generally linking the use of the abstract idea to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception (MPEP 2106.05(e)). Therefore, the claims do not, for example, purport to improve the functioning of a computer. Nor do they effect an improvement in any other technology or technical field. Accordingly, the additional elements do not impose any meaningful limits on practicing the abstract idea and the claims are directed to an abstract idea.
Step 2B:
The claims do not include additional elements, individually or in combination, that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept at Step 2B. Thus, the claims are not patent eligible.
Dependent Claims:
Claims 7, 9, 12, 15, 16, 34-36, 38, and 40-44 recite further elements related to the classification, training, and valuation steps of the parent claims. These activities fail to differentiate the claims from the related activities in the parent claims and fail to provide any material that renders the claimed invention significantly more than the identified abstract ideas, as outlined below.
Claim 7 recites "wherein the photo scene classification model is further trained using information reflecting a selling price per square foot of the home portrayed in each photograph," which further specifies additional types of data to be used for training the model(s), but does not lead toward eligibility. The additional types of data for classifying are part of the abstract idea and merely adding that data to the training data does not integrate the abstract idea into a practical application or provide an inventive concept.
Claim 9 recites “wherein the photo scene classification model is further trained based on a scene classification input on each photo generated by at least one human editor, and wherein the photo quality level classification model is further trained based on a quality classification input on each photo generated by the at least one human editor”, which further specifies additional types of data to be used for training the model(s), but does not lead toward eligibility. The additional types of data for classifying are part of the abstract idea and merely adding that data to the training data does not integrate the abstract idea into a practical application or provide an inventive concept.
Claim 12 recites “determining the aggregated quality values includes applying, to each quality classification associated with a scene classification, a mean, median, mode, maximum, or minimum aggregation function”, which specifies further steps related to determining a predicted value, but does not make the claims any less abstract.
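For reference, the aggregation functions named in claim 12 (mean, median, mode, maximum, or minimum applied to the quality classifications associated with each scene classification) can be sketched as follows. The sketch is illustrative only; the function and variable names are hypothetical and not drawn from Applicant's disclosure.

```python
from statistics import mean, median, mode

# Map each aggregation function named in the claim to a callable.
AGGREGATORS = {"mean": mean, "median": median, "mode": mode, "max": max, "min": min}

def aggregate_quality(quality_by_scene, fn="mean"):
    # Apply the chosen aggregation function to each scene's quality scores.
    agg = AGGREGATORS[fn]
    return {scene: agg(scores) for scene, scores in quality_by_scene.items()}
```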
Claim 15 recites "wherein the photo scene classification model comprises a sequence of layers including one or more convolutional layers, the convolutional layers preceding one or more pooling layers, the pooling layers preceding one or more normalization layers, the normalization layers preceding one or more fully connected layers," which further specifies additional descriptions of the classification model(s), but does not lead toward eligibility. The additional model layers are recited at a high level of generality and do not integrate the abstract idea into a practical application or provide an inventive concept. There is no material in Applicant's disclosure that clearly indicates how these layers would provide a practical application or an inventive concept, such as those described above (see specification at [0031]-[0033], which includes citations to references indicating that useful arrangements of such layers were known dating back to at least 2014, but does not provide evidence of how/why using such layers in Applicant's instant application would provide any significant practical application or inventive concept beyond the recited abstract idea).
Claim 16 recites "wherein the photo scene classification model comprises a sequence of layers including two or more convolution/pooling cycles, each convolution/pooling cycle comprising a convolutional layer preceding one or more pooling layers, the convolution/pooling cycles preceding one or more fully connected layers," which further specifies additional descriptions of the classification model(s), but does not lead toward eligibility. The additional model layers are recited at a high level of generality and do not integrate the abstract idea into a practical application or provide an inventive concept. There is no material in Applicant's disclosure that clearly indicates how these layers would provide a practical application or an inventive concept, such as those described above (see specification at [0031]-[0033], which includes citations to references indicating that useful arrangements of such layers were known dating back to at least 2014, but does not provide evidence of how/why using such layers in Applicant's instant application would provide any significant practical application or inventive concept beyond the recited abstract idea).
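For reference, the layer orderings recited in claims 15 and 16 amount to sequence constraints on layer types, which can be expressed as follows. This sketch is illustrative only; the layer-type names and functions are hypothetical stand-ins and are not drawn from Applicant's disclosure.

```python
import re

def follows_claim_15_order(layers):
    # Convolutional layers precede pooling layers, pooling precedes
    # normalization, and normalization precedes the fully connected layers.
    ranks = {"conv": 0, "pool": 1, "norm": 2, "fc": 3}
    indices = [ranks[t] for t in layers]
    return indices == sorted(indices)

def follows_claim_16_order(layers):
    # Two or more convolution/pooling cycles (a convolutional layer preceding
    # one or more pooling layers), all preceding the fully connected layers.
    s = "".join({"conv": "c", "pool": "p", "fc": "f"}[t] for t in layers)
    return re.fullmatch(r"(cp+){2,}f+", s) is not None
```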
Claim 34 recites “for each photo of the plurality of photos associated with the plurality of homes in the geographic area, prompting a user for a quality score for the photo, receiving, from the user, the quality score for the photo, and adding the photo and the quality score for the photo received from the user to a training set for the photo quality level classification model”, which further specifies additional types of data to be used for training the model(s), but does not lead toward eligibility. The additional types of data for classifying are part of the abstract idea and merely adding that data to the training data does not integrate the abstract idea into a practical application or provide an inventive concept.
Claim 35 recites “for each photo of the plurality of photos associated with the plurality of homes in the geographic area, prompting a user for a classification of a room type depicted in the photo, receiving, from the user, the classification of the room type depicted in the photo, and adding the photo and the classification of the room type depicted in the photo received from the user to a training set for the photo scene classification model”, which further specifies additional types of data to be used for training the model(s), but does not lead toward eligibility. The additional types of data for classifying are part of the abstract idea and merely adding that data to the training data does not integrate the abstract idea into a practical application or provide an inventive concept.
Claim 36 recites “wherein using the obtained predicted values to obtain the housing price index comprises aggregating the obtained predicted values”, which specifies further steps used in determining a predicted value, but does not make the claims any less abstract.
Claim 38 recites “wherein a training set for photo quality level classification model includes a quality score that is determined based on an interior area of the home and a quantile of a selling price within the geographic area”, which further specifies additional types of data to be used for training the model(s), but does not lead toward eligibility. The additional types of data for classifying are part of the abstract idea and merely adding that data to the training data does not integrate the abstract idea into a practical application or provide an inventive concept.
Claim 40 recites “wherein the plurality of corresponding quality classifications includes one or more human annotated quality scores”, which specifies further types of data used in determining a predicted value, but does not make the claims any less abstract.
Claim 41 recites “wherein the plurality of corresponding scene classifications includes one or more human annotated scene classifications”, which specifies further types of data used in determining a predicted value, but does not make the claims any less abstract.
Claim 42 recites “wherein the actual home value is derived from a sale price of the property”, which specifies further steps and types of data used in determining a home value, but does not make the claims any less abstract.
Claim 43 recites “wherein one or more of the plurality of corresponding quality classifications corresponds to an imputed quality classification based on a score imputation technique, the imputed quality classification being determined in response to a determination that a photo of a particular scene is unavailable”, which specifies further steps and types of data used in determining a predicted value, but does not make the claims any less abstract.
Claim 44 recites “wherein the valuation model obtains the valuation of the home based on a plurality of second input photos associated with the home corresponding to different scenes within the home, each scene having an associated scene classification and an associated quality classification”, which further specifies the use of additional photos evaluated by the model(s), but does not lead toward eligibility. The additional photos are processed in the same manner as other photos (as part of the abstract idea) and merely using the model on additional photos does not integrate the abstract idea into a practical application or provide an inventive concept.
The claims do not provide any new additional limitations or meaningful limits beyond the abstract idea that are not addressed above with respect to the independent claims; therefore, they do not integrate the abstract idea into a practical application, nor do they provide significantly more than the abstract idea. Thus, after considering all claim elements, both individually and as a whole, it has been determined that the claims do not integrate the judicial exception into a practical application or provide an inventive concept. Therefore, claims 7, 9, 12, 15, 16, 34-36, 38, and 40-44 are ineligible.
Relevant Prior Art Not Relied Upon
Kim et al. (Pub. No. US 2005/0154657 A1). Kim discloses accessing information about each of a plurality of homes sold in a geographic area during a distinguished period of time, the information including, for each home, a selling price for the home and using statistical models for predicting the value of a home in the geographic area based on information about the home (see [0008], accessing data from a database that includes data regarding a plurality of homes; [0054], includes selling prices of homes; [0038], discusses differences among data and valuations for different regions or cities, this is comparable to focusing on a “geographical area” and discusses reasons for doing so).
Linne et al. (Pub. No. US 2007/0143132 A1). Linne discloses predicting property values in areas based on property characteristics and comparisons to other properties (see at least Abstract; [0003]-[0005]; [0025]; Claim 1, also includes modeling techniques for performing the valuations).
Perkins et al. (Pub. No. US 2016/0171622 A1). Perkins discloses the use of photographs for evaluating real estate property assets for the purpose of insurance pricing and claims processing using convolution neural networks. This includes interior photos and exterior photos which could be used for room identification, quality identification, etc. (see at least [0004]; [0010]; [0072]; [0085]).
Wierkins et al. (Pub. No. US 2012/0303536 A1). Wierkins discloses the use of price indexes and quality values in the scoring of properties. (see at least [0037]; [0040]; Claim 7).
Humphries et al. (Patent No. US 8,676,680 B2). Humphries discloses additional subject matter regarding the use of values for determining home value indexes. Although the Humphries reference applied in previous office actions covers the claim material, this reference (by the same inventors) provides additional detail that should be considered for future amendments (see at least column 11, last paragraph; column 2, DETAILED DESCRIPTION, paragraph 1).
Lammert, JR. et al. (Pub. No. US 2019/0019261 A1). Lammert discloses a system/method for valuating real estate including uploading contemporaneous image data (see at least [0021]; See also, Abstract; [0017]-[0023]).
Humphries et al. (Patent No. US 8,140,421 B1), as applied to the claims in previous office actions.
Gross (Pub. No. US 2016/0048934 A1), as applied to the claims in previous office actions.
Response to Arguments
I. Rejection of Claims under 35 U.S.C. §101:
Applicant’s arguments have been fully considered but are not persuasive.
Applicant asserts that the claimed invention provides an improvement over conventional approaches by using photos in addition to public data sources because public data sources are unreliable. However, the Examiner has reviewed the specification, and there is no substantial background and/or evidence to demonstrate the alleged deficiencies of conventional systems or that the addition of photographic data would improve those systems. For example, Applicant asserts that conventional systems rely solely on public data that is unreliable. First, Applicant fails to provide background/evidence to demonstrate that conventional systems only use public data, beyond asserting so. Second, there is no evidence to demonstrate how/why the public data is unreliable, beyond Applicant’s assertions regarding potential problems. Third, Applicant fails to provide evidence that conventional systems do not or cannot use photographic data in conjunction with public data. Applicant does not demonstrate how using the photographic data would provide the alleged improvement or solution in a meaningful manner beyond the recited abstract ideas.
See MPEP 2106.05(a), Improvements to the Functioning of a Computer or To Any Other Technology or Technical Field (“If it is asserted that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes, a technical explanation as to how to implement the invention should be present in the specification. That is, the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology.”).
Applicant also relies on assertions regarding the configuration of the models. However, as previously discussed, Applicant fails to provide sufficient evidence to demonstrate how/why this particular configuration would provide the alleged improvement. For example, the specification does not explain how/why this particular configuration (particular outputs and inputs between models) would provide an improvement or a different result than other configurations that perform the same/similar tasks and processes. Nor does it demonstrate how/why this particular configuration of models would enable the use of photographic data (i.e., why other configurations or models would or could not enable the use of photographic data, including in the context used in the claims).
Related remarks from the previous office action are provided here for additional detail and reference:
Applicant argues that the “specific configuration for the home valuation system and temporal training relationship between the models in home valuation system…recites more than mere instructions to apply the alleged judicial exception using a generic computer component”, however, it is not clear how the particular configuration and temporal relationship provide more than mere instructions to apply the alleged judicial exception using a generic computer component. The model is used as a tool to perform the observations and determinations used to predict a valuation, however, there is no evidence to demonstrate how/why the models (including use in the recited configuration or temporal relationship) would provide meaningful limitations beyond the abstract ideas. Although Applicant describes the configuration and temporal relationship, Applicant’s remarks do not explain or provide evidence to demonstrate that the use of this particular configuration or temporal relationship would provide a practical application, improvement, inventive concept, etc.
II. Jumbo IDS:
Applicant is reminded of the Content Requirements for Information Disclosure Statement originally set forth in the previous office action, mailed on 3/13/2025 (as well as the related Requirement for Information under 37 CFR 1.105 set forth in the office action mailed on 1/4/2021 and repeated in subsequent office actions). The issue has not been resolved. The IDSs referenced include those filed on 4/11/2018, 8/2/2018, 8/22/2018, 10/24/2018, 11/7/2018, 12/28/2018, 3/21/2019, 7/17/2019, 11/11/2019, 2/5/2020, 5/10/2021, 9/28/2021, and 3/21/2022. The remarks referenced are those filed on 5/10/2021, 9/28/2021, and 3/21/2022. Please see the office actions mailed on 3/13/2025 or 7/30/2025 for the original notice.
Conclusion
THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAUN D SENSENIG whose telephone number is (571)270-5393. The examiner can normally be reached M-F: 10:00am-4:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lynda Jasmin can be reached at 571-272-6872. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.D.S/Examiner, Art Unit 3629 February 5, 2026
/NATHAN C UBER/Supervisory Patent Examiner, Art Unit 3626