Prosecution Insights
Last updated: April 19, 2026
Application No. 18/815,656

METHODS FOR DETERMINING IMAGE CONTENT WHEN GENERATING A PROPERTY LOSS CLAIM THROUGH PREDICTIVE ANALYTICS

Status: Final Rejection (§101)
Filed: Aug 26, 2024
Examiner: KWONG, CHO YIU
Art Unit: 3693
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Mitchell International Inc.
OA Round: 2 (Final)
Grant Probability: 32% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 3y 5m
With Interview: 38%

Examiner Intelligence

Grants only 32% of cases.
Career Allow Rate: 32% (104 granted / 324 resolved; -19.9% vs TC avg)
Interview Lift: +5.9% (moderate, roughly +6%; based on resolved cases with interview)
Typical Timeline: 3y 5m average prosecution; 48 applications currently pending
Career History: 372 total applications across all art units
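The card figures above are simple ratios of the counts shown; a minimal Python sketch, using only the numbers quoted on the cards (variable names are illustrative), reproduces them:

```python
# Reproduce the dashboard ratios from the raw counts shown above.
granted = 104           # examiner's career grants
resolved = 324          # examiner's resolved cases

allow_rate = granted / resolved                  # career allow rate, ~32%
tc_average = allow_rate + 0.199                  # card reports -19.9% vs TC avg

with_interview = 0.38                            # with-interview grant probability
interview_lift = with_interview - allow_rate     # ~+5.9 percentage points

print(f"allow rate:     {allow_rate:.1%}")       # 32.1%
print(f"TC average:     {tc_average:.1%}")       # 52.0%
print(f"interview lift: {interview_lift:+.1%}")  # +5.9%
```

The rounding explains the small mismatch between the headline "32%" and the exact 104/324 = 32.1%.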

Statute-Specific Performance

§101: 37.0% (-3.0% vs TC avg)
§103: 26.9% (-13.1% vs TC avg)
§102: 7.2% (-32.8% vs TC avg)
§112: 25.9% (-14.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 324 resolved cases.

Office Action

§101
DETAILED ACTION

This Final Office Action is in response to the application filed on 08/26/2024 and the Amendment & Remarks filed on 11/25/2025.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. As an initial matter, the claims as a whole are directed to an apparatus, a manufacture, and a process (claims 1, 8 and 15), which fall within one or more statutory categories. (Step 1: YES) The recitation of the claimed invention is then further analyzed as follows, in which the abstract elements are boldfaced.
Claim 1 recites: A system, comprising: one or more hardware processors; and one or more non-transitory machine-readable storage media encoded with instructions that, when executed by the one or more hardware processors, cause the system to perform operations comprising: obtaining, by a computing device, electronic images of a damaged vehicle; obtaining, by the computing device, metadata of image features present in electronic images of previously-identified damaged vehicles, wherein the metadata of image features includes a same type of feature as the electronic images of previously-identified damaged vehicles, and wherein the same type of feature is a vehicle component; training, by the computing device, a first machine learning model using the metadata; wherein the first machine learning model is trained on the metadata of the electronic images the electronic images of previously-identified damaged vehicles, views and components of the previously-identified damaged vehicles represented in the electronic images, and the obtained metadata; generating, by the first machine learning model, metadata for the obtained images of the damaged vehicle based on views and components of the damaged vehicle represented in the obtained electronic images of the damaged vehicle; identifying, by a second machine learning model, a subset of the electronic images of the damaged vehicle based on the generated metadata for the obtained images of the damaged vehicle; providing, by the computing device, the subset of electronic images with the generated metadata to a client computing device; obtaining, by the computing device, feedback data on the identified subset of the electronic images from the client computing device; and training, by the computing device, the machine learning model using the feedback data at a second time after the first time.
Claim 8 recites: One or more non-transitory machine-readable storage media encoded with instructions that, when executed by one or more hardware processors of a computing system, cause the computing system to perform operations comprising: obtaining, by a computing device, electronic images of a damaged vehicle; obtaining, by the computing device, metadata of image features present in electronic images of previously-identified damaged vehicles, wherein the metadata of image features includes a same type of feature as the electronic images of previously-identified damaged vehicles, and wherein the same type of feature is a vehicle component; training, by the computing device, a first machine learning model using the metadata; wherein the first machine learning model is trained on the metadata of the electronic images the electronic images of previously-identified damaged vehicles, views and components of the previously-identified damaged vehicles represented in the electronic images, and the obtained metadata; generating, by the first machine learning model, metadata for the obtained images of the damaged vehicle based on views and components of the damaged vehicle represented in the obtained electronic images of the damaged vehicle; identifying, by a second machine learning model, a subset of the electronic images of the damaged vehicle based on the generated metadata for the obtained images of the damaged vehicle; providing, by the computing device, the subset of electronic images with the generated metadata to a client computing device; obtaining, by the computing device, feedback data on the identified subset of the electronic images from the client computing device; and training, by the computing device, the machine learning model using the feedback data at a second time after the first time. 
Claim 15 recites: A computer-implemented method comprising: obtaining, by a computing device, electronic images of a damaged vehicle; obtaining, by the computing device, metadata of image features present in electronic images of previously-identified damaged vehicles, wherein the metadata of image features includes a same type of feature as the electronic images of previously-identified damaged vehicles, and wherein the same type of feature is a vehicle component; training, by the computing device, a first machine learning model using the metadata; wherein the first machine learning model is trained on the metadata of the electronic images the electronic images of previously-identified damaged vehicles, views and components of the previously-identified damaged vehicles represented in the electronic images, and the obtained metadata; generating, by the first machine learning model, metadata for the obtained images of the damaged vehicle based on views and components of the damaged vehicle represented in the obtained electronic images of the damaged vehicle; identifying, by a second machine learning model, a subset of the electronic images of the damaged vehicle based on the generated metadata for the obtained images of the damaged vehicle; providing, by the computing device, the subset of electronic images with the generated metadata to a client computing device; obtaining, by the computing device, feedback data on the identified subset of the electronic images from the client computing device; and training, by the computing device, the machine learning model using the feedback data at a second time after the first time.

Claims 2, 9 and 16 recite: generating the metadata.
Claims 3, 10 and 17 recite: wherein generating the metadata comprises: providing, by the computing device, the electronic images of the damaged vehicle as input to a metadata machine learning model of the computing device, wherein responsive to the input, the metadata machine learning model provides the metadata as output.

Claims 4, 11 and 18 recite: generating, by the computing device, a training data set comprising the electronic images of the previously identified damaged vehicles; and training, by the computing device, the metadata machine learning model using the training data set prior to providing the electronic images of the damaged vehicle as input to the metadata machine learning model.

Claims 5, 12 and 19 recite: applying, by a metadata machine learning model of the computing device, a Bayesian-type statistical analysis to determine a damage indicator associated with the vehicle component, wherein the damage indicator is associated with a percentage probability that the vehicle component is damaged; and updating, by the computing device, the metadata with the damage indicator prior to training the machine learning model using the metadata.

Claims 6, 13 and 20 recite: wherein the providing the subset of electronic images with the generated metadata comprises: providing, by the computing device, the subset of electronic images with the generated metadata to the client computing device to assess damage to the damaged vehicle.

Claims 7 and 14 recite: wherein the providing the subset of electronic images with the generated metadata comprises: providing, by the computing device, the subset of electronic images with the generated metadata to the client computing device to assess likely causality and relation of one or more reported or treated injuries to the damage.

Based on the limitations above, the claims describe a process that covers iteratively analyzing insurance-related data using models.
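The independent claims recite a two-model loop: a first model tags incoming images with view/component metadata, a second model selects a subset of images from that metadata, and reviewer feedback is used to retrain a model. A minimal sketch of that control flow follows; every class, method, and value here is invented for illustration, since the claims specify neither model architectures nor training algorithms:

```python
from dataclasses import dataclass, field

@dataclass
class MetadataModel:
    """First model (hypothetical): tags each image with view/component metadata."""
    def generate(self, image: str) -> dict:
        # Placeholder: a real model would infer these from pixels.
        view, component = image.split(":")
        return {"view": view, "component": component}

@dataclass
class SelectionModel:
    """Second model (hypothetical): picks the subset of images worth reviewing."""
    wanted_components: set = field(default_factory=lambda: {"bumper", "door"})

    def select(self, tagged: dict) -> dict:
        return {img: md for img, md in tagged.items()
                if md["component"] in self.wanted_components}

    def train(self, feedback: set) -> None:
        # Feedback-retraining step recited at the end of the claim.
        self.wanted_components |= feedback

images = ["front:bumper", "side:door", "rear:trunk"]
first, second = MetadataModel(), SelectionModel()

tagged = {img: first.generate(img) for img in images}  # generate metadata
subset = second.select(tagged)                         # identify subset
second.train({"trunk"})                                # train on client feedback
print(sorted(subset))                                  # ['front:bumper', 'side:door']
```

The sketch only illustrates the claimed data flow; it deliberately mirrors the examiner's point that the claims describe what each model does, not how.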
Analyzing insurance-related data is considered to be a fundamental economic practice, and the iterative communication with the client associated with a damaged vehicle is considered to be a commercial interaction; both fall within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. As such, the claims recite a Judicial Exception. (Step 2A prong one: Yes)

This analysis then evaluates whether the claims as a whole integrate the recited Judicial Exception into a practical application of the exception. In particular, the claims recite the additional elements of “computing device”, “system”, and “one or more hardware processors” as a mere tool to perform the … steps of the Judicial Exception, which encompasses no more than Mere Instruction to Apply. For example:

- the limitation “obtaining, by a computing device, electronic images of a damaged vehicle” encompasses no more than generically invoking a computing device to apply the Judicial Exception step of obtaining images of a damaged vehicle;
- the limitation “obtaining, by the computing device, metadata of image features present in electronic images of previously-identified damaged vehicles, wherein the metadata of image features includes a same type of feature as the electronic images of previously-identified damaged vehicles, and wherein the same type of feature is a vehicle component” encompasses no more than generically invoking a computing device to apply the Judicial Exception step of obtaining metadata of image features present in images of previously-identified damaged vehicles;
- the limitation “training, by the computing device, a first machine learning model using the metadata; wherein the first machine learning model is trained on the metadata of the electronic images the electronic images of previously-identified damaged vehicles, views and components of the previously-identified damaged vehicles represented in the electronic images, and the obtained metadata” encompasses no more than generically invoking a computing device to apply the Judicial Exception step of training a model using the metadata;
- the limitation “generating, by the first machine learning model, metadata for the obtained images of the damaged vehicle based on views and components of the damaged vehicle represented in the obtained electronic images of the damaged vehicle” encompasses no more than generically invoking a computing device to apply the Judicial Exception step of generating metadata using the first model;
- the limitation “identifying, by a second machine learning model, a subset of the electronic images of the damaged vehicle based on the generated metadata for the obtained images of the damaged vehicle” encompasses no more than generically invoking a computing device to apply the Judicial Exception step of executing a second model to identify a subset of the images;
- the limitation “providing, by the computing device, the subset of electronic images with the metadata to a client computing device” encompasses no more than generically invoking a computing device to apply the Judicial Exception step of providing the subset of images with metadata to a client;
- the limitation “obtaining, by the computing device, feedback data on the identified subset of the electronic images from the client computing device” encompasses no more than generically invoking a computing device to apply the Judicial Exception step of obtaining feedback data on the identified subset of images from the client;
- the limitation “training, by the computing device, the machine learning model using the feedback data at a second time after the first time” encompasses no more than generically invoking a computing device to apply the Judicial Exception step of training the model;
- the limitation “generating the metadata” encompasses no more than generically invoking a computing device to apply the Judicial Exception step of generating the metadata;
- the limitation “wherein generating the metadata comprises: providing, by the computing device, the electronic images of the damaged vehicle as input to a metadata machine learning model of the computing device, wherein responsive to the input, the metadata machine learning model provides the metadata as output” encompasses no more than generically invoking a computing device to apply the Judicial Exception step of providing the images of the damaged vehicle as input to a metadata model that provides metadata as output;
- the limitation “generating, by the computing device, a training data set comprising the electronic images of the previously identified damaged vehicles; and training, by the computing device, the metadata machine learning model using the training data set prior to providing the electronic images of the damaged vehicle as input to the metadata machine learning model” encompasses no more than generically invoking a computing device to apply the Judicial Exception step of generating the training data set and training the metadata model on it prior to inputting images of the damaged vehicle to the metadata model;
- the limitation “applying, by a metadata machine learning model of the computing device, a Bayesian-type statistical analysis to determine a damage indicator associated with the vehicle component, wherein the damage indicator is associated with a percentage probability that the vehicle component is damaged; and updating, by the computing device, the metadata with the damage indicator prior to training the machine learning model using the metadata” encompasses no more than generically invoking a computing device to apply the Judicial Exception step of applying a Bayesian-type statistical analysis to determine a damage indicator;
- the limitation “wherein the providing the subset of electronic images with the generated metadata comprises: providing, by the computing device, the subset of electronic images with the generated metadata to the client computing device to assess damage to the damaged vehicle” encompasses no more than generically invoking a computing device to apply the Judicial Exception step of providing the subset of images with the generated metadata to the client;
- the limitation “wherein the providing the subset of electronic images with the generated metadata comprises: providing, by the computing device, the subset of electronic images with the generated metadata to the client computing device to assess likely causality and relation of one or more reported or treated injuries to the damage” encompasses no more than generically invoking a computing device to apply the Judicial Exception step of providing the subset of images with the generated metadata to the client.

Other than being generally linked to the steps of the Judicial Exception, the additional elements in the above steps are recited at a high level of generality, without technological detail of how the particular steps are performed. The additional elements of “memory” and/or “non-transitory storage medium” are generically recited to store data and/or instructions of the Judicial Exception. The additional elements of “machine learning model” and “metadata machine learning model” are generically recited to perform data-generating steps described only by a result-oriented solution, with insufficient technological detail for how the ML model accomplishes it. The “training” steps of the machine learning models are likewise described only by a result-oriented solution, with insufficient technological detail for how the training is accomplished; for example, the training process is described only as inputting training data, with the training process itself undisclosed.
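Claims 5, 12 and 19 recite only a “Bayesian-type statistical analysis” yielding a percentage probability that a component is damaged, without specifying a prior or likelihoods. A minimal sketch of one such update under invented numbers (the prior and likelihoods below are illustrative only, not drawn from the claims or specification):

```python
# Bayes' rule: P(damaged | feature) =
#     P(feature | damaged) * P(damaged) / P(feature)
# All three inputs below are invented for illustration.
prior_damaged = 0.30          # P(component damaged) before seeing the image
p_feature_if_damaged = 0.90   # P(dent-like feature | damaged)
p_feature_if_intact = 0.10    # P(dent-like feature | intact)

# Total probability of observing the feature at all.
evidence = (p_feature_if_damaged * prior_damaged
            + p_feature_if_intact * (1 - prior_damaged))

# Posterior = the "damage indicator" percentage the claims describe.
posterior = p_feature_if_damaged * prior_damaged / evidence
print(f"damage indicator: {posterior:.1%}")   # 79.4%
```

This also illustrates why the examiner calls the limitation result-oriented: the claim names the output (a percentage) but none of these inputs or the update rule.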
The examiner further noted that generic computer affixes such as “electronic”, “computing device” and “machine learning” are appended to abstract elements such as “image of a damaged vehicle”, “client” and “models”, but found that to be mere instructions to implement the Judicial Exception idea on a computer. Indeed, the instant claims (1) attempt to cover a solution to an identified problem with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result; (2) use a computer or other machinery in its ordinary capacity for economic or other tasks, or simply add a general purpose computer or computer components after the fact to the Judicial Exception; and (3) generally apply the Judicial Exception to a generic computing environment without limitations indicative of a practical application (see MPEP 2106.04(d)(I)). Thus, the claims are no more than Mere Instruction to Apply the Judicial Exception (see MPEP 2106.05(f)) or add insignificant extra-solution activity to the judicial exception (see MPEP 2106.05(g)), which does not integrate the cited Judicial Exception into a practical application. (Step 2A prong two: No)

The claims are directed to a Judicial Exception. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a computer and a generic machine learning model to iteratively analyze insurance-related data, such as improving model accuracy by comparing to reported feedback, amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. No additional element currently recited in the claims amounts to significantly more than the cited abstract idea.
(Step 2B: No) Therefore, claims 1-20 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.

Response to Arguments

Applicant's arguments filed on 11/25/2025 have been fully considered but they are not persuasive. Regarding the applicant's argument that the amended claims address the reasons for the rejection or otherwise render the rejection moot, the examiner respectfully disagrees. The claims continue to recite the Judicial Exception of analyzing insurance-related data, while the newly recited additional elements (such as a second machine learning model and what training data the first machine learning model is trained on) do not integrate the Judicial Exception into a practical application. In particular, the newly recited additional elements encompass no more than Mere Instruction to Apply the Judicial Exception in a generically recited computing environment (see the §101 rejection above). As such, the argument is not persuasive.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHO KWONG, whose telephone number is (571) 270-7955. The examiner can normally be reached 9am-5pm EST, M-F.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, MICHAEL W ANDERSON, can be reached at 571-270-0508. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHO YIU KWONG/
Primary Examiner, Art Unit 3693

Prosecution Timeline

Aug 26, 2024
Application Filed
Aug 23, 2025
Non-Final Rejection — §101
Sep 24, 2025
Interview Requested
Oct 01, 2025
Interview Requested
Oct 10, 2025
Applicant Interview (Telephonic)
Oct 11, 2025
Examiner Interview Summary
Nov 25, 2025
Response Filed
Jan 24, 2026
Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561747: PROCESSING INSURED ITEMS HOLISTICALLY WITH MOBILE DAMAGE ASSESSMENT AND CLAIMS PROCESSING (granted Feb 24, 2026; 2y 5m to grant)
Patent 12530718: SYSTEMS AND METHODS FOR LOAN REWARDS PROVISIONING (granted Jan 20, 2026; 2y 5m to grant)
Patent 12530720: TRANSACTION PROCESSING SYSTEM PERFORMANCE EVALUATION (granted Jan 20, 2026; 2y 5m to grant)
Patent 12488340: Address Verification, Seed Splitting and Firmware Extension for Secure Cryptocurrency Key Backup, Restore, and Transaction Signing Platform Apparatuses, Methods and Systems (granted Dec 02, 2025; 2y 5m to grant)
Patent 12488398: SYSTEMS AND METHODS FOR CUSTOM AND REAL-TIME VISUALIZATION, COMPARISON AND ANALYSIS OF INSURANCE AND REINSURANCE STRUCTURES (granted Dec 02, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 32%
With Interview: 38% (+5.9%)
Median Time to Grant: 3y 5m
PTA Risk: Moderate
Based on 324 resolved cases by this examiner. Grant probability is derived from the career allow rate.
