Prosecution Insights
Last updated: April 19, 2026
Application No. 17/585,207

METHODS FOR DETECTING PROBLEMS AND RANKING ATTRACTIVENESS OF REAL-ESTATE PROPERTY ASSETS FROM ONLINE ASSET REVIEWS AND SYSTEMS THEREOF

Final Rejection — §101, §103
Filed: Jan 26, 2022
Examiner: TRUONG, BENJAMIN LY
Art Unit: 3626
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Skyline AI Limited
OA Round: 4 (Final)
Grant Probability: 0% (At Risk)
Expected OA Rounds: 5-6
Time to Grant: 3y 0m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 16 resolved; -52.0% vs TC avg)
Interview Lift: +0.0% (minimal lift, based on resolved cases with interview)
Typical Timeline: 3y 0m average prosecution
Career History: 49 total applications across all art units; 33 currently pending

Statute-Specific Performance

§101: 34.0% (-6.0% vs TC avg)
§103: 34.0% (-6.0% vs TC avg)
§102: 16.5% (-23.5% vs TC avg)
§112: 12.4% (-27.6% vs TC avg)

Comparisons are against Tech Center average estimates • Based on career data from 16 resolved cases

Office Action

§101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This communication is in response to application 17/585,207 filed on 1/26/2022. No amendments were made by the applicant. Claims 1-3, 5-9, 11-15, and 17-18 are currently pending. No claims are allowed.

Response to Arguments

Applicant's arguments filed 10/31/2025 have been fully considered but they are not persuasive.

Regarding 35 USC 101: The applicant submits that the abstract idea of “concepts performed in the human mind” is an oversimplification of the technical nature of the claimed invention. Specifically, the applicant cites that the claimed invention could not reasonably be performed mentally because there is a specific technical implementation using a trained machine learning model. However, the invention's use of specifically labeled data sets, unlabeled data sets, and a machine learning model does not disqualify the claims from reciting an abstract idea. The claims still recite an abstract idea and simply use the computers and machine learning models to perform it. There is no improvement to the computer technology or machine learning technology itself. Therefore, it is not a technological solution that goes beyond the abstract idea; it simply uses computers and machine learning to perform the abstract idea.

Additionally, the applicant submits that there is integration into a practical application because the machine learning models are trained using both labeled and unlabeled data sets, which the applicant states is a specific technical implementation. However, training the machine learning model with different data sets is not indicative of a practical application. Describing the data inputs does not describe how the model itself functions to reach the resulting output.
There is no improvement to the machine learning model technology itself, just an “apply it” recitation of a general-purpose machine learning model that now uses different types of inputs (unlabeled and labeled) to perform the abstract idea, see MPEP 2106.05(f).

Further, the applicant submits that the data type being demographic data provides a practical application. The applicant states, “The use of demographic data to adjust category assessment scores based on geographic areas provides a practical application that improves property evaluation technology”. However, a weighted data input changing the resulting output does not provide a practical application by improving technology. Data type is not an additional element that can provide a practical application or amount to significantly more. Giving more weight to certain demographics to change the evaluation of property falls under the abstract idea of mental processes. The difference in evaluation resulting from different data is directed to a mental process, not to an improvement of the machine learning technology itself.

Further, the applicant submits there is a technical improvement because the iterative model uses progressively larger data sets, thereby improving model performance. However, there is no improvement to the function of the machine learning model. The resulting outputs simply become refined by using more input data, which is part of the abstract idea of using data to make evaluations (e.g., a human that receives more inputs and data over time can improve their evaluations based on the additional data). There is no technical improvement to the machine learning model technology itself; rather, it is just an iterative model that is supplied with more data.

Finally, the applicant submits that the additional elements amount to significantly more.
The applicant cites that the “apply it” analysis fails to consider the technical elements that provide significantly more, citing the additional element of a machine learning model. However, as previously stated, the machine learning model is discussed at such a high level of generality that it is described only by its inputs and outputs. Describing the model by outlining the data it is trained upon does not describe how the model operates or show an improvement to the model itself. Therefore, the machine learning model, along with the other general purpose computer components, falls under an “apply it” recitation and the rejection is maintained. Further, the applicant submits the “majority rule” process goes beyond standard computer implementation. However, using “majority rule” is part of the abstract idea of a mental process, not an additional element that can amount to significantly more. Therefore, the examiner respectfully disagrees and the rejection is maintained.

Regarding 35 USC 103: The applicant submits that Christopulos does not teach the two-stage labeling because it instead teaches labeling on two sets of data rather than labeling one data set twice. The applicant is arguing claimed features that are not recited. The claims specifically recite, “one or more machine learning models” and “wherein labelling comprises: at least two stages of the labelling each of the online reviews in the subset of aggregated heterogeneous dataset in one or more pre-defined property asset problem categories”. The broadest reasonable interpretation of the limitation can include multiple models, multiple datasets, and two occurrences of labeling. The claims do not specify one machine learning model labeling the same data set twice; rather, they recite one or more machine learning models. Further, the applicant argues the “majority rule” is different because majority rule is used across multiple models rather than in labeling between two stages. Again, the claims recite “one or more machine learning models”.
Therefore, the prior art meets the claim as there are two instances of labeling and a majority rule. The examiner respectfully disagrees and the rejection is maintained.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) with no practical application and without significantly more.

Step 1: Claims 1-6 are methods, and claims 7-18 are systems. Thus, each claim on its face is directed to one of the statutory categories of 35 USC 101. However, claims 1-18 are rejected under 35 USC 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 2A Prong 1: Under Step 2A Prong 1, the test is to identify claims that are “directed to” a judicial exception. The examiner notes that the claimed invention is directed to an abstract idea in that the instant application is directed to a mental process (see MPEP 2106.04(a)(2)(III)). The independent claims (1, 7, and 13) recite a method and systems to collect and evaluate data to score real estate properties in labeled categories. These claim elements are interpreted as concepts performed in the human mind (including observation, evaluation, judgment, and opinion). Using a dataset of online reviews to assess property can equivalently be done with pen and paper. The claims recite an abstract idea consistent with the “mental process” grouping set forth in MPEP 2106.04(a)(2)(III).
Step 2A Prong 2: The instant application fails to integrate the judicial exception into a practical application because the instant application merely recites an “apply it” (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea. The instant application is directed towards a method and systems to implement the identified abstract idea of receiving information, processing information, and displaying the result of the analysis (i.e., processing review data on real estate properties to evaluate properties) on a generically claimed computer structure. For instance, the additional elements or combination of elements other than the abstract idea itself include elements such as a “processor” recited at a high level of generality. These elements do not themselves amount to an improvement to the interface or computer, to a technology, or to another technical field. The claims do not include additional elements that amount to significantly more than the judicial exception. The independent claims recite the additional elements “computing device”, “machine learning models”, “one or more processors”, and “a non-transitory machine readable medium”. These claim elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a general computer environment. The machines merely act as a modality to implement the abstract idea and are not indicative of integration into a practical application (i.e., the additional elements are simply used as a tool to perform the abstract idea), see MPEP 2106.05(f).

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed in Step 2A Prong Two, the additional elements in the claims amount to no more than mere instructions to apply the exception using generic computer components.
The same analysis applies here in Step 2B and does not provide an inventive concept. Regarding the dependent claims, claims 2-6, 8-12, and 14-18 introduce no new abstract ideas or new additional elements, and do not impact the analysis under 35 USC 101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 5-7, 9, 11-13, 15, 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Christopulos (US 20140270492) in view of Spencer (US 20080177794) in further view of Wu (US 20220188517).
Regarding Claims 1, 7 and 13 (substantially similar in scope and language), Christopulos teaches: A method for automating assessment of real-estate property assets, the method comprising: executing, by a computing device, [(Para 0018)] one or more machine learning models [(Para 0030)] trained in text classification on a heterogeneous dataset of stored online asset reviews [The limitations recite executing a machine learning model trained with a dataset to assess real estate (Para 0030). The data type (i.e., text, image, video, reviews, comments, etc.) is nonfunctional descriptive material that does not carry patentable weight in the claims] wherein the training comprises: randomly aggregating the heterogeneous dataset of stored online asset reviews from a plurality of sources based on one or more search criteria, [(Para 0015) “Before a given property or building is assessed, data describing various building structures may be collected, labeled, and categorized.”] labelling subset of the aggregated heterogeneous dataset in one or more pre-defined property asset problem categories, [(Para 0015) “Before a given property or building is assessed, data describing various building structures may be collected, labeled, and categorized… Categorizing and/or labeling data may involve extracting relevant data and/or features from the collected data.”] wherein the labelling comprises: at least two stages of the labelling each of the online reviews in the subset of the aggregated heterogeneous dataset in one or more pre-defined property asset problem categories; [Christopulos Para (0015) "Categorizing and/or labeling data may involve extracting relevant data and/or features from the collected data. The extracted data may, for example, describe physical characteristics of the property or building to be assessed."
Para (0045) "The building assessment module 112 may then receive a second set of data representative of property or objects to be assessed (e.g., images of a damaged or destroyed building structure, such as a roof) (block 504). Again, if necessary, the data may be further processed so that the received data can be analyzed by building assessment module 112." Figure 5 (501-505)] training the one or more machine learning models […] based on the labelled subset of the aggregated heterogeneous dataset and another unlabeled subset of the aggregated heterogeneous dataset; [The limitations recite machine learning models trained from the labeled and unlabeled data (Para 0030-0032)]

While Christopulos teaches training a machine learning model to label data, it does not explicitly teach calculating an assessment score, a weighting factor, resolving inconsistency based on majority rule, or training a model in text classification: to calculate a category assessment score in each of a plurality of pre-defined property asset problem categories and calculating, by the computing device, a property asset assessment score for each of the one or more property assets based on the calculated category assessment score in each of the pre-defined property asset problem categories, wherein the calculating further comprises analyzing at least one factor with respect to at least one of the pre-defined property asset problem categories based on demographic data in a geographic area of the one or more property assets and adjusting the calculated category assessment score in the one of the pre-defined property asset problem categories.
wherein a weighting factor is applied to the aggregated heterogeneous data set from one or more of the plurality of sources based on an identified concentration in the aggregated heterogeneous dataset from a subset of the plurality of sources higher than for the remaining subset of the plurality of sources; and resolving any inconsistency in the labelling between the at least two stages is resolved based on a majority rule from received input during the at least two stages; in text classification

However, Spencer teaches to calculate a category assessment score in each of a plurality of pre-defined property asset problem categories, [(Para 0019) “receiving from the user a plurality of scores corresponding with said assessment parameters”] wherein a weighting factor is applied to the aggregated heterogeneous data set from one or more of the plurality of sources based on an identified concentration in the aggregated heterogeneous dataset from a subset of the plurality of sources higher than for the remaining subset of the plurality of sources; [The limitations recite the use of a weighting factor that can be applied to the data set. The data source is nonfunctional descriptive material that does not carry patentable weight in the claims.
(Para 0021) “applying an importance weighting factor to the combined score to generate a weighted score for each assessment parameter”] and calculating, by the computing device, a property asset assessment score for each of the one or more property assets based on the calculated category assessment score in each of the pre-defined property asset problem categories, [(Para 0020-0022); discusses calculating final score from parameter scores] wherein the calculating further comprises analyzing at least one factor with respect to at least one of the pre-defined property asset problem categories based on demographic data in a geographic area of the one or more property assets and adjusting the calculated category assessment score in the one of the pre-defined property asset problem categories. [(Para 0130) “The user is also provided with the option of recommending the selected street for various demographics”, (Para 0140-0145) “The adjustment calculations, illustrated in FIG. 14B, are applied by taking the base StreetScore and then iteratively applying each adjustment factor”] Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the machine learning process and labeling taught by Christopulos with the scoring calculations and weighting taught by Spencer. The claimed invention is merely a combination of old elements and in combination each element would have performed the same function as it did separately. One of ordinary skill would have recognized the results of the combination were predictable. 
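As an editorial aside (not part of the office action): the weighted combination of per-category scores that Spencer is cited for can be sketched minimally as below. The category names, scores, and weights are hypothetical, chosen only to illustrate applying importance weighting factors to assessment parameter scores and summing them into a total score.

```python
# Illustrative sketch only (not from the record): combine per-category
# assessment scores into one property score using importance weights.
def property_score(category_scores, weights):
    """Weighted aggregation of category assessment scores.

    category_scores: dict mapping category name -> assessment score
    weights: dict mapping category name -> importance weighting factor
    """
    total_weight = sum(weights[c] for c in category_scores)
    weighted_sum = sum(category_scores[c] * weights[c] for c in category_scores)
    # Normalize by total weight so the result stays on the score scale.
    return weighted_sum / total_weight

# Hypothetical categories and weights for illustration only.
scores = {"crime": 7.0, "noise": 5.0, "pest": 9.0, "parking": 6.0}
weights = {"crime": 0.4, "noise": 0.2, "pest": 0.3, "parking": 0.1}
print(round(property_score(scores, weights), 2))  # 7.1
```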
While Christopulos in view of Spencer teaches training a machine learning model, calculating scores, and weighting parameters, the combination does not explicitly teach text classification or resolving inconsistencies with majority rule: and resolving any inconsistency in the labelling between the at least two stages is resolved based on a majority rule from received input during the at least two stages; in text classification

However, Wu teaches in text classification [(Para 0020) “A processor executing a real-time training program may start to label a data sample (e.g., a word, a sentence) in the training data responsive to each tagging action by the operator directed to the data sample on the user interface. The action may indicate a positive tagging (e.g., selecting a text word) or a negative tagging (e.g., de-selecting or removing the text word)”] and resolving any inconsistency in the labelling between the at least two stages is resolved based on a majority rule from received input during the at least two stages; [(Para 0090) “The majority rule is described in the following. In the majority rule, for an instance, its final class is decided by the majority of all models' classification results.”]

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of text classification and majority rule taught by Wu with the model taught by Christopulos in view of Spencer. The claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately. Utilizing text classification and majority rule with labeling would yield predictable results.
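As a further editorial aside (not part of the office action): the “majority rule” Wu is cited for, where each instance's final class is decided by the majority of all models' (or labeling stages') classification results, can be sketched as follows. The category labels and the three labeling passes are hypothetical.

```python
from collections import Counter

# Illustrative sketch only (not from the record): resolve label
# inconsistencies across labeling stages/models by majority vote.
def resolve_by_majority(stage_labels):
    """stage_labels: list of label lists, one list per labeling stage/model.
    Returns, for each item, the label assigned by the majority of stages."""
    resolved = []
    for labels in zip(*stage_labels):  # labels for one item across all stages
        winner, _count = Counter(labels).most_common(1)[0]
        resolved.append(winner)
    return resolved

# Three hypothetical labeling passes over four reviews.
stage_1 = ["noise", "pest", "crime", "parking"]
stage_2 = ["noise", "crime", "crime", "parking"]
stage_3 = ["pest", "pest", "crime", "noise"]
print(resolve_by_majority([stage_1, stage_2, stage_3]))
# ['noise', 'pest', 'crime', 'parking']
```

With an odd number of stages, every two-way disagreement has a strict majority; ties (possible with an even number of stages) fall back to `Counter`'s first-seen ordering.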
Regarding Claims 3, 9 and 15 (substantially similar in scope and language), Christopulos in view of Spencer in further view of Wu teach the limitations set forth above. Christopulos further teaches: The method as set forth in claim 1 further comprising: executing, by the computing device, additional tuning of the one or more machine learning models based on an additional unlabeled subset of the aggregated heterogeneous dataset which is larger than the another unlabeled subset of the heterogeneous dataset. [(Para 0044) “Based on the extraction and labeling results, building assessment module 112 may perform one or more training calculations (block 503). As discussed above, initial and follow-up training may be performed in addition to training verification”]

Regarding Claims 5, 11 and 17 (substantially similar in scope and language), Christopulos in view of Spencer in further view of Wu teach the limitations set forth above. While Christopulos teaches a method of labeling data and using machine learning models to assess assets, it does not explicitly teach: The method as set forth in claim 1 wherein the one or more pre-defined property asset problem categories comprise a crime issue category, a noise issue category, a pest issue category, and a parking issue category. However, Spencer further teaches the one or more pre-defined property asset problem categories comprise a crime issue category, a noise issue category, a pest issue category, and a parking issue category.
[Para (0024) "These characteristics may be defined by a number of assessment parameters which are associated with identified characteristics"; the "Health and Safety" parameters of Para (0097) comprise "Low noise", "Low crime", and "pest free"] Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of labeled scoring parameters taught by Spencer with the methods of machine learning and labeling taught by Christopulos, in order to help assess property information to make informed decisions.

Regarding Claims 6, 12 and 18 (substantially similar in scope and language), Christopulos in view of Spencer in further view of Wu teach the limitations set forth above. While Christopulos teaches a method of training models and labeling data, it does not explicitly disclose: the one or more pre-defined property asset problem categories further comprise a plurality of the pre-defined property asset problem categories and wherein the calculating the property asset assessment score further comprises: executing, by the computing device, an aggregation formula on the calculated category assessment score for each of the plurality of pre-defined property asset problem categories, wherein a weight is applied to one or more of the calculated category assessment scores for the plurality of pre-defined property asset problem categories and then the calculated category assessment scores are aggregated to calculate the property asset assessment score.
However, Spencer further teaches: the one or more pre-defined property asset problem categories further comprise a plurality of the pre-defined property asset problem categories [Para (0097); the pre-defined asset categories further comprise a plurality of pre-defined asset categories] and wherein the calculating the property asset assessment score further comprises: executing, by the computing device, an aggregation formula on the calculated category assessment score for each of the plurality of pre-defined property asset problem categories, wherein a weight is applied to one or more of the calculated category assessment scores for the plurality of pre-defined property asset problem categories and then the calculated category assessment scores are aggregated to calculate the property asset assessment score. [Para (0125) “These assessment parameters are used in both the weightings and also the scoring model. Within each specified locality, a weighting index, namely a set of importance weighting factors which will be applied to the assessment parameters, is calculated to determine the importance of each of the assessment parameters” Para (0128) “At step 208, the system receives user scores in relation to the assessment parameters” Para (0135) “At step 214, the weighted scores are combined, and in particular the sum of all of the final scores for all assessment parameters is calculated, giving a total score for the street”]

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of scoring parameters, and further using and weighting those scores to calculate a combined score, taught by Spencer with the methods taught by Christopulos in order to analyze information to help make informed decisions.

Claims 2, 8, and 14 are rejected under 35 U.S.C.
103 as being unpatentable over Christopulos (US 20140270492) in view of Spencer (US 20080177794) in view of Wu (US 20220188517) in further view of Bhide (US 20200134493).

Regarding Claims 2, 8 and 14 (substantially similar in scope and language), Christopulos in view of Spencer in view of Wu teach labeling data, training a model, and calculating scores, but do not explicitly teach random sampling: randomly sampling, by the computing device, the aggregated heterogeneous dataset to obtain the subset of the aggregated heterogeneous dataset. However, Bhide teaches: randomly sampling, by the computing device, the aggregated heterogeneous dataset to obtain the subset of the aggregated heterogeneous dataset. [Para (0075) "In an example, step 525 comprises the indirect bias module 410 (or another module of the server 404) performing bias prevention in data by selecting different sampling techniques (e.g., stratified sampling, random sampling, systematic sampling, etc.) from heterogeneous data sets and performing automated bias checking using the selected sampling techniques."] Therefore, it would have been obvious to one of ordinary skill in the art before the filing date of the claimed invention to combine the method of randomly sampling heterogeneous data taught by Bhide with the method taught by Christopulos in view of Spencer in view of Wu, because the claimed random sampling of heterogeneous data is merely an addition of an old element, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Examiner Benjamin Truong, whose telephone number is 703-756-5883. The examiner can normally be reached Monday-Friday from 9 am to 5 pm (EST). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Nathan Uber (SPE), can be reached at 571-270-3923. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/B.L.T/
Examiner, Art Unit 3626

/NATHAN C UBER/
Supervisory Patent Examiner, Art Unit 3626

Prosecution Timeline

Jan 26, 2022
Application Filed
Jul 26, 2024
Non-Final Rejection — §101, §103
Nov 04, 2024
Response Filed
Dec 17, 2024
Final Rejection — §101, §103
May 23, 2025
Request for Continued Examination
May 27, 2025
Response after Non-Final Action
Jul 25, 2025
Non-Final Rejection — §101, §103
Oct 31, 2025
Response Filed
Feb 04, 2026
Final Rejection — §101, §103 (current)


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 0%
With Interview: 0% (+0.0%)
Median Time to Grant: 3y 0m
PTA Risk: High

Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
