Prosecution Insights
Last updated: April 19, 2026
Application No. 17/930,431

SYSTEMS AND METHODS FOR GENERATING AND UPDATING A VALUE OF PERSONAL POSSESSIONS OF A USER FOR INSURANCE PURPOSES

Non-Final OA §101
Filed
Sep 08, 2022
Examiner
MILLER, JAMES H
Art Unit
3694
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Quanata LLC
OA Round
6 (Non-Final)
40%
Grant Probability
Moderate
6-7
OA Rounds
3y 7m
To Grant
77%
With Interview

Examiner Intelligence

Grants 40% of resolved cases
40%
Career Allow Rate
78 granted / 193 resolved
-11.6% vs TC avg
+36.6%
Interview Lift (allowance with vs. without interview)
Based on resolved cases with an interview
Typical timeline
3y 7m
Avg Prosecution
35 currently pending
Career history
228
Total Applications
across all art units

Statute-Specific Performance

§101
35.7%
-4.3% vs TC avg
§103
33.7%
-6.3% vs TC avg
§102
5.2%
-34.8% vs TC avg
§112
20.4%
-19.6% vs TC avg
Black line = Tech Center average estimate • Based on career data from 193 resolved cases

Office Action

§101
DETAILED ACTION

Acknowledgements

This action is in response to Applicant’s filing on Jan. 28, 2026, and is made Non-Final. This action is being examined by James H. Miller, who is in the eastern time zone (EST) and can be reached by email at James.Miller1@uspto.gov or by telephone at (469) 295-9082.

Interviews

Examiner interviews are available by telephone or, preferably, by video conferencing using the USPTO’s web-based collaboration platform. Applicants are strongly encouraged to schedule via the USPTO Automated Interview Request (AIR) portal at http://www.uspto.gov/interviewpractice. Interviews conducted solely for the purpose of “sounding out” the examiner, including by local counsel acting only as a conduit for another practitioner, are not permitted under MPEP § 713.03. The Office is strictly enforcing established interview practice, and applicants should ensure that every interview request is directed toward advancing prosecution on the merits in compliance with MPEP §§ 713 and 713.03.

For after-final interview requests, supervisory approval is required before an interview may be granted. Each AIR should specifically explain how the after-final interview request will advance prosecution, for example by identifying targeted arguments responsive to the rejection of record, alleged defects in the examiner’s analysis, proposed claim amendments, or another concrete basis for discussion. See MPEP § 713. If the AIR form’s character limits prevent inclusion of all pertinent details, Applicants may send a contemporaneous email to the examiner at James.Miller1@uspto.gov. The examiner is generally available Monday through Friday, 10:00 a.m. to 4:00 p.m. EST. For any GRANTED interview request, Applicant can expect an email within 24 hours confirming an interview slot from the dates/times proposed and providing collaboration tool access instructions. For any DENIED interview request, the record will include a communication explaining the reason for the denial.
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant’s submission filed on Jan. 28, 2026, has been entered.

Claim Status

The status of the claims is as follows: Claims 21–40 remain pending and examined, with Claims 21, 29, and 36 in independent form. Claims 21, 29, and 36 are presently amended. No claims are presently cancelled or added.

Response to Amendment

Applicant’s Amendment has been reviewed against Applicant’s Specification filed Jan. 9, 2023 [“Applicant’s Specification”] and accepted for examination.

Response to Arguments

35 U.S.C. § 101 Argument

Applicant argues the amended claims “recite an improvement to other technology or technical field, and also recite use of the ideas in a meaningful way beyond generally linking to a particular technological environment,” citing MPEP 2106.04(d) and the deep-learning limitations of Claim 21. Applicant’s Reply at 10.
Specifically, Applicant points to the recited steps of “training a deep-learning neural-network machine learning model” with particular data, and “updating the predictive possession value model using a second set of training data in which the first set of training data is updated with one or more item values for the one or more personal property items based on the actual reimbursement amount” by “comparing the one or more predicted values to the actual reimbursement amount to identify prediction errors; and retraining the deep-learning neural-network machine learning model with corrected training examples derived from the prediction errors, as identified,” as integrating any abstract idea into a practical application. Id. at 10–11. Examiner respectfully disagrees. Representative Claim 21 remains directed to the abstract idea of insurance-related valuation and reimbursement (a fundamental economic practice and method of organizing human activity) implemented using mathematical modeling on generic computer components. The steps of “training a deep-learning neural-network machine learning model with a first set of training data” to “recogniz[e] patterns that map the input training data to the output training data through supervised machine learning” and later “updat[e] the predictive possession value model” are recited only at a functional, results-oriented level, reflect conventional supervised machine learning as acknowledged by the Specification (¶¶ 72, 73, 74), and express a mathematical concept applied to a business problem rather than an improvement in the computer. The hardware is generically claimed as “one or more processors in communication with at least one memory device,” which does not amount to an improvement in the functioning of the computer itself. Examiner is unable to locate in the specification, and Applicant does not identify, any technical improvement in the machine learning model, the computer, or any other technology. MPEP § 2106.05(a).
Accordingly, the amendments this round do not transform the abstract idea into a practical application. Applicant argues that Claim 21 is analogous to USPTO Example 39, where a neural network is initially trained with a first training set and then retrained with a second training set to correct false positives. Applicant’s Reply at 11–12. Similarly, Claim 21 requires training a deep-learning model with a first training set of historical policyholder data and then “updating the predictive possession value model using a second set of training data” based on actual reimbursement amounts and feedback, which Applicant asserts “can beneficially be used to correct estimated values that were ultimately incorrect.” Id. at 12. Applicant relies on the limitations that “updating the predictive possession value model” includes “comparing the one or more predicted values to the actual reimbursement amount to identify prediction errors; and retraining the deep-learning neural-network machine learning model with corrected training examples derived from the prediction errors, as identified.” Examiner respectfully disagrees. Example 39 is distinguished. The example addresses a specific technological problem in a particular detection system and recites a solution that improves technology. Claim 21 merely describes generic supervised machine learning applied to insurance valuation at a high level of abstraction. The feedback loop here, comparing predictions to actual outcomes and retraining with corrected examples, is the standard way supervised machine learning models are trained and updated, as taught by Applicant’s own specification, ¶ 72 (“Machine learning may involve identifying and recognizing patterns in existing data in order to facilitate making predictions for subsequent data.”). See also Leskovec, Jure, Anand Rajaraman, and Jeffrey David Ullman, “Mining of Massive Datasets” (2019), p. 465 (Example 12.2) [“NPL Leskovec”] (cited herein on PTO-892).
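For readers outside the machine learning arts, the “standard” supervised feedback loop the Office Action invokes can be sketched in a few lines. The following is an illustrative sketch only, not from the record: a hypothetical one-parameter least-squares model stands in for the claimed deep-learning network, and the data values are made up. It shows only the generic predict, compare-to-actual, derive-corrected-examples, retrain cycle the Examiner describes:

```python
# Illustrative sketch only (not from the record): a one-parameter
# least-squares model stands in for the claimed "predictive possession
# value model" to show the generic supervised feedback loop.

def fit(xs, ys):
    # "Training": least-squares slope through the origin.
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def predict(model, x):
    return model * x

# First set of training data: (feature, item value) pairs.
xs, ys = [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]
model = fit(xs, ys)

# Claim events: (feature, actual reimbursement amount) pairs.
claims = [(4.0, 44.0), (5.0, 56.0)]

# "Comparing the one or more predicted values to the actual
# reimbursement amount to identify prediction errors":
errors = [(x, actual) for x, actual in claims
          if abs(predict(model, x) - actual) > 0.0]

# "Retraining ... with corrected training examples derived from the
# prediction errors": fold the actuals into the training set and refit.
for x, actual in errors:
    xs.append(x)
    ys.append(actual)
model = fit(xs, ys)
```

On this toy data the refit slope moves from 10.0 to roughly 10.8. Nothing in the loop depends on the insurance context, which is the sense in which the action treats the mechanism as conventional.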
Claim 21 does not specify any particular non-conventional network architecture, loss function, optimization technique, or data structure that improves the model’s operation beyond ordinary practice. Spec. ¶¶ 72, 73, 74. Thus, Example 39 does not control, and the claimed feedback loop does not transform the abstract idea into a practical application. Applicant argues the claimed “closed loop feedback system” is “not practically performed in the human mind or with pen and paper” because it processes “the comparison, error identification, example derivation, and network retraining across potentially thousands of parameters,” and therefore should not be treated as a mental process. Applicant’s Reply at 12. Examiner agrees, but this argument is moot with respect to the § 101 analysis. For the reasons explained above, the claims are still directed to an abstract idea. The mental process categorization is but one abstract idea grouping and is not solely determinative of eligibility under § 101. Applicant argues the amended claims recite “a non-conventional and non-routine arrangement of systems and ordered combination of limitations” that adds significantly more than the alleged abstract idea. Applicant emphasizes the “specific feedback mechanism” in which “updating the predictive possession value model comprises ‘comparing the one or more predicted values to the actual reimbursement amount to identify prediction errors; and retraining the deep-learning neural-network machine learning model with corrected training examples derived from the prediction errors, as identified.’” Applicant’s Reply at 13. Applicant asserts that this “specific error-correction retraining process provides an inventive concept” and that the “Patent Office provides no evidence” demonstrating that the specific ordered combination of limitations is “well-understood, routine, or conventional.” Id. Examiner respectfully disagrees.
The additional elements beyond the abstract idea, viewed individually and in combination, are reasonably characterized as well-understood, routine, and conventional supervised machine learning practices implemented on a generic computer based on Applicant’s own specification, ¶ 72 (“Machine learning may involve identifying and recognizing patterns in existing data in order to facilitate making predictions for subsequent data.”); see also ¶¶ 73, 74. The feedback loop here, comparing predictions to actual outcomes and retraining with corrected examples, is the standard way supervised machine learning models are trained and updated. See also NPL Leskovec, p. 465 (Example 12.2) (cited herein on PTO-892). Claim 21 does not specify any particular non-conventional network architecture, loss function, optimization technique, or data structure that improves the model’s operation beyond ordinary practice. Spec. ¶¶ 72, 73, 74. Applicant’s assertions of a “non-conventional and non-routine arrangement” are conclusory and not supported by evidence demonstrating that this feedback-based training and retraining loop was unconventional in the art, nor do the claims recite a specific improvement to the functioning of the computer itself. 37 CFR 1.111(b).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 21–40 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.

Analysis

Step 1: Claims 21–40 are directed to a statutory category.
Claims 21–28 recite a “computing system” and are therefore directed to the statutory category of a “machine.” Claims 29–35 recite a “method” and are therefore directed to the statutory category of a “process.” Claims 36–40 recite “one or more non-transitory computer-readable media” and are therefore directed to the statutory category of an “article of manufacture.”

Representative Claim

Claim 21 is representative [“Rep. Claim 21”] of the subject matter under examination and recites, in part, with emphasis added by Examiner to identify limitations: normal font indicates the abstract idea exception, and bold limitations indicate additional elements. Each limitation is identified by a letter for later use as a shorthand notation in referencing/describing each limitation. Portions of the claim use italics to identify intended use limitations1 and underline, as needed, in further describing the abstract idea exception:

[A] 21. A computing system for generating one or more predicted values of one or more personal property items owned by a user, the computing system including one or more processors in communication with at least one memory device, the one or more processors configured to:

[B] generate a predictive possession value model based at least in part upon a plurality of historical policyholder records associated with a plurality of policyholders, comprising:

[C] training a deep-learning neural-network machine learning model with a first set of training data comprising (i) training input data comprising respective personal data and respective property data for each policyholder of the plurality of policyholders, and

[D] (ii) training output data comprising one or more respective item values of one or more respective items owned by each policyholder of the plurality of policyholders, wherein training the deep-learning neural-network machine learning model comprises recognizing patterns that map the input training data to the output training data through
supervised machine learning;

[E] receive personal data and property data associated with the user;

[F] determine, based at least in part upon the predictive possession value model, as generated, the one or more predicted values of the one or more personal property items owned by the user and one or more confidence estimates for the one or more predicted values of the one or more personal property items based at least in part upon the first personal data and the first property data;

[G] receive a claim associated with the user in response to a claim event, wherein the claim includes a list of lost items and a list of spared items;

[H] estimate a claim value associated with the claim based at least in part on the list of lost items and the list of spared items;

[I] determine an actual reimbursement amount for the user based at least in part upon the one or more predicted values of the one or more personal property items and the claim value, as estimated, associated with the claim; and

[J] updating the predictive possession value model using a second set of training data in which the first set of training data is updated with one or more item values for the one or more personal property items based on the actual reimbursement amount, continually retrieved additional historical policyholder data, and feedback associated with accuracy of predictions made by the predictive possession value model, wherein updating the predictive possession value model comprises:

[K] comparing the one or more predicted values to the actual reimbursement amount to identify prediction errors; and

[L] retraining the deep-learning neural-network machine learning model with corrected training examples derived from the prediction errors, as identified.

Claims are directed to an abstract idea exception.

Step 2A, Prong One: Rep.
Claim 21 recites “generating one or more predicted values of one or more personal property items owned by a user” in the preamble, Limitation A, and “determine an actual reimbursement amount for the user based at least in part upon the one or more predicted values of the one or more personal property items and the claim value, as estimated, associated with the claim” in Limitation I, which recite commercial or legal interactions under the organizing human activity exception because these limitations describe processing an insurance claim. MPEP § 2106.04(a)(2)(II)(B)(ii) (“processing insurance claims for a covered loss or policy event under an insurance policy (i.e., an agreement in the form of a contract)”). Limitations B–H, J, K, and L are the required steps to process the insurance claim and therefore recite the same exception. Id. Alternatively, Limitations B, C, D, F, J, K, and L recite a mathematical relationship under the mathematical concepts exception because “a process that employs mathematical algorithms [“generate a … model,” “training a … model … with training input data … and training output data … through supervised machine learning,” use the model to “determine … predicted values,” “updating the … model” by “comparing the one or more predicted values,” and “retraining the … model”] to manipulate existing information to generate additional information [“an actual reimbursement amount”] is an abstract idea.” Digitech Image Techs., LLC v. Elecs. for Imaging, Inc., 758 F.3d 1344, 1351 (Fed. Cir. 2014); MPEP § 2106.04(a)(2) (citing Digitech (“The patentee in Digitech claimed methods of generating first and second data by taking existing information, manipulating the data using mathematical functions, and organizing this information into a new form.
The court explained that such claims were directed to an abstract idea because they described a process of organizing information through mathematical correlations, like Flook’s method of calculating using a mathematical formula.”)). Alternatively2, Limitations B–J, as drafted, recite the abstract idea exception of mental processes that, under the broadest reasonable interpretation, cover performance in the human mind or with pen and paper, but for the recitation of the generic computer components indicated in bold. MPEP § 2106.04(a)(2)(III). Claims recite a mental process when they contain limitations that can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions. Examples of claims that recite mental processes include:

• a claim to “collecting information, analyzing it, and displaying certain results of the collection and analysis,” where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1353–54, 119 USPQ2d 1739, 1741–42 (Fed. Cir. 2016); . . .

• a claim to collecting and comparing known information (claim 1), which are steps that can be practically performed in the human mind, Classen Immunotherapies, Inc. v. Biogen IDEC, 659 F.3d 1057, 1067, 100 USPQ2d 1492, 1500 (Fed. Cir. 2011).

MPEP § 2106.04(a)(2)(III)(A). For example, but for the generic computer components claim language, Limitations B–J here recite collecting information (Limitations E, G) and analyzing it (Limitations B, C, D, F, H, I, J), where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind.
For example, Limitations B, C, and D are mental processes that are practically performed in the human mind or with pen and paper because they require mere “observation, evaluation, judgment, and/or opinion” to “generate a predictive possession model” using the “historical policyholder records” (Limitation B) and to “train[ ] a deep-learning neural-network machine learning model with training input data … and training output data … through supervised machine learning” (Limitations C, D). When given their broadest reasonable interpretation in light of the disclosure, training “through supervised machine learning” (a specific technique) may be performed by hand. See NPL Leskovec, p. 465 (Example 12.2) (cited herein on PTO-892). The “neural-network” characterization of the machine learning model is merely a label and/or technique “where the outputs of one rank (or layer of nodes) becomes the inputs to nodes at the next layer. The last layer of nodes produces the outputs of the entire neural net,” NPL Leskovec, p. 523, and can be performed by hand. NPL Leskovec, p. 524 (Example 13.1). Likewise, the “deep-learning” characterization of the machine learning model is merely a label and/or technique where “numbers of training examples” are used to train “neural nets with many layers,” NPL Leskovec, p. 523, and may also be performed by hand. NPL Leskovec, p. 524 (Example 13.1). “The use of a physical aid (e.g., pencil and paper or a slide rule) to help perform a mental step (e.g., a mathematical calculation) does not negate the mental nature of the limitation, but simply accounts for variations in memory capacity from one person to another” or a multi-step mental process. MPEP § 2106.04(a)(2)(III)(B).
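The pen-and-paper point can be made concrete. The sketch below (hypothetical weights and inputs, not drawn from NPL Leskovec or from the application) evaluates a two-input, two-hidden-node, one-output network; each step is a handful of multiplications, additions, and a sigmoid evaluation of the kind that could be carried out by hand or with a table:

```python
import math

def sigmoid(z):
    # Standard logistic activation function.
    return 1.0 / (1.0 + math.exp(-z))

# Hand-sized network: 2 inputs -> 2 hidden nodes -> 1 output.
# The weights are arbitrary illustrative values, not from the record.
W1 = [[0.5, -0.5], [0.25, 0.75]]   # hidden-layer weights (one row per node)
W2 = [1.0, -1.0]                   # output-layer weights

def forward(x):
    # Outputs of one layer become the inputs to the next layer.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))

y = forward([1.0, 2.0])
```

Here the hidden activations are sigmoid(-0.5) and sigmoid(1.75), and the output is their weighted difference: a short chain of arithmetic, which is the sense in which the action calls the forward computation hand-performable.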
Likewise, Limitations F, H, and I are mental processes that are practically performed in the human mind or with pen and paper because they require mere “observation, evaluation, judgment, and/or opinion” to “determine … the one or more predicted values … and confidence estimates … based on personal data and the first property data” (Limitation F); “estimate a claim value … based at least in part on the list of lost items and the list of spared items” received in the prior step (Limitation H); and “determine an actual reimbursement amount … based at least in part upon the one or more predicted values of the one or more personal property items and the claim value, as estimated, associated with the claim” (Limitation I). Last, Limitation J is also a mental process that is practically performed in the human mind or with pen and paper because it requires mere “observation, evaluation, judgment, and/or opinion” to update the predictive possession value model in the manner claimed. Because the model may be “generated” by hand, as explained supra, “updating” the model may also be performed by hand and could encompass mere changes to input or output data. If a claim limitation, under BRI, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of the abstract idea exception. MPEP § 2106.04(a)(2)(III). Accordingly, the pending claims recite the combination of these abstract idea exceptions.

Step 2A, Prong Two: Rep. Claim 21 does not contain additional elements that integrate the abstract idea exception into a practical application because the additional elements are mere instructions to apply the abstract idea exception. MPEP § 2106.05(f). The additional elements are limited to the computer components and are indicated in bold, supra.
The additional elements are: a computing system … including one or more processors in communication with at least one memory device; and a predictive possession value model comprising a deep-learning neural-network machine learning model. Regarding the computing system including one or more processors in communication with at least one memory device and the predictive possession value model comprising a deep-learning neural-network machine learning model, Applicant’s Specification does not otherwise describe them, or describes them using exemplary language, as a general-purpose computer, as a part of a general-purpose computer, or as any known and exemplary (generic) computer component known in the prior art. Thus, Applicant takes the position that such hardware/software is so well known to those of ordinary skill in the art that no explanation is needed under 35 U.S.C. § 112(a). Lindemann Maschinenfabrik GmbH v. Am. Hoist & Derrick Co., 730 F.2d 1452, 1463 (Fed. Cir. 1984) (citing In re Meyers, 410 F.2d 420, 424 (CCPA 1969) (“[T]he specification need not disclose what is well known in the art”)). E.g., Spec.
¶ 24 (“The possession value model may be generated based at least in part upon any suitable technique”); ¶¶ 37, 76 (“The methods and systems described herein may be implemented based at least in part upon computer programming or engineering techniques including computer software, firmware, hardware, or any combination thereof”); ¶ 41 (“User computing device 108 may be any device capable of accessing the Internet”); ¶ 42 (“Insurance provider device 110 may be any device capable of accessing the Internet”); ¶ 48 (“Memory 310 may be any device allowing information such as executable instructions and/or transaction data to be stored and retrieved.”); ¶ 48 (“Media output component 315 may be any component capable of conveying information to user 301.”); ¶ 56 (“Storage device 434 may be any computer-operated hardware suitable for storing and/or retrieving data”); ¶ 58 (“Storage interface 420 may be any component capable of providing processor 405 with access to storage device 434.”); ¶ 76 (“any transmitting/receiving medium”); ¶ 77 (“‘machine readable medium’ [and] ‘computer-readable medium’ refer[ ] to any computer program product, apparatus and/or device … used to provide machine instructions and/or data to a programmable processor.”); ¶ 77 (“The term ‘machine-readable signal’ refers to any signal used to provide machine instructions and/or data to a programmable processor”); ¶ 78 (“a processor may include any programmable system”); ¶ 79 (“the terms ‘software’ and ‘firmware’ are interchangeable, and include any computer program stored in memory for execution by a processor”); ¶ 80 (“The application is flexible and designed to run in various different environments without compromising any major functionality”). The generic processor here appears to perform calculations (functions) that are programmed by software. Spec. ¶¶ 79, 48. This is a computer doing what it is designed to do: performing directions it is given to follow.
Regarding the predictive possession value model comprising a deep-learning neural-network machine learning model, Applicant’s Specification describes the model using exemplary language (such that it could be almost anything): it “may be generated based at least in part upon any suitable technique.” Spec. ¶ 24. Under BRI, this could include mental processes. The only requirement specified by the claims is that the possession model uses two pieces of data, (1) personal data and (2) property data (Limitation C), to predict, in part, a value of personal possessions associated with the user (Limitation F). Further, like any model, input is received and output is generated. The possession model and ML model do not pose any meaningful limits on the abstract idea exception and amount to “apply it” with a generic and exemplary computer. MPEP § 2106.05(f). The feedback loop here, comparing predictions to actual outcomes and retraining with corrected examples, is the standard way supervised machine learning models are trained and updated. Spec. ¶¶ 72, 73, 74; see also NPL Leskovec, p. 465 (Example 12.2) (cited herein on PTO-892). Claim 21 does not specify any non-conventional network architecture, loss function, optimization technique, or data structure that improves the model’s operation beyond ordinary practice. Spec. ¶¶ 72, 73, 74. Examiner is unable to locate in the specification, and Applicant does not identify, any technical improvement in the machine learning model, the computer, or any other technology. MPEP § 2106.05(a); Recentive Analytics, Inc. v. Fox Corp., 2025 U.S.P.Q.2d 628 (Fed. Cir. 2025) (“[P]atents that do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied, are patent ineligible under § 101.”).
Limitation A describes the processor communicating with memory to perform the steps of the claimed invention. This takes a generic piece of hardware and describes the functions of receiving, storing, and sending data (instructions) between the processor and memory, which merely invokes computers or other machinery in their ordinary capacity to receive, store, or transmit data. MPEP § 2106.05(f)(2). Limitations B–L describe the processor and memory performing the steps of the claimed invention, which represent the abstract idea exception itself. Performing the steps of the abstract idea exception itself simply adds a general-purpose computer after the fact to an abstract idea exception, MPEP § 2106.05(f)(2), or generically recites an effect of the judicial exception. MPEP § 2106.05(f)(3). The generic processor here appears to perform calculations that are programmed by software. This is a computer doing what it is designed to do: performing directions it is given to follow. Therefore, the claim as a whole, looking at the additional elements individually and in combination, is no more than mere instructions to apply the exception using generic computer components and is not a practical application. MPEP § 2106.05(f). The additional elements do not integrate the abstract idea exception into a practical application because they do not impose any meaningful limits on the abstract idea exception. Accordingly, Rep. Claim 21 is directed to an abstract idea. Rep. Claim 21 is not substantially different from Independent Claims 29 and 36, which include all the limitations of Rep. Claim 21. Independent Claims 29 and 36 contain no additional elements. Therefore, Independent Claims 29 and 36 are also directed to the same abstract idea.

The claims do not provide an inventive concept.

Step 2B: Rep.
Claim 21 fails Step 2B because the claim as a whole, looking at the additional elements individually and in combination, is not sufficient to amount to significantly more than the recited judicial exception. As discussed with respect to Step 2A, Prong Two, the additional elements in the claim amount to no more than mere instructions to apply the exception using a generic computer and/or generic computer components. MPEP § 2106.05(f). The same analysis applies here in Step 2B. Mere instructions to apply an exception using a generic computer and/or generic computer components cannot provide an inventive concept. MPEP § 2106.05(I). The additional elements, taken individually and in combination, do not result in the claim, as a whole, amounting to significantly more than the identified judicial exception. The pending claims’ combination of additional elements is not inventive. First, the claims are directed to an abstract idea. Second, each additional element represents a currently available generic computer technology, used in the way in which it is commonly used (individually generic). Last, Applicant’s Specification discloses that the combination of additional elements is not inventive. Spec. ¶¶ 71, 81 (steps/functions may be performed in any order); ¶¶ 24, 37, 41, 42, 48, 56, 58, 71, 72, 73, 74, 76, 77, 78, 79, 80, 81 (known and generic (exemplary) computer equipment, as explained and cited supra). Thus, Examiner finds the additional elements of Rep. Claim 21 are elements that have been recognized as well-understood, routine, and conventional (“WURC”) activity in the particular field of this invention based on Applicant’s own disclosure3. Spec. ¶¶ 24, 37, 41, 42, 48, 56, 58, 71, 72, 73, 74, 76, 77, 78, 79, 80, 81; MPEP § 2106.05(d).
Specifically, Applicant’s Specification discloses that the recited additional elements (i.e., a computing system … including one or more processors in communication with at least one memory device; a predictive possession value model comprising a deep-learning neural-network machine learning model) are generic computer components. These elements do no more than “apply” the recited abstract idea(s) on a known computer (e.g., processor) and computer-related components (e.g., memory). Examiner finds the machine learning technology and techniques of “supervised machine learning,” “deep-learning,” and “neural-network,” as recited by the claims, are conventional, as the specification demonstrates. Spec. ¶¶ 72, 73, 74. Prior art NPL Leskovec is additional evidence, available to a person of ordinary skill in the machine learning arts, of the conventionality of the claimed machine learning technology and techniques, including the feedback retraining mechanism amended this round. Specifically, “updating the predictive possession value model comprises: comparing the one or more predicted values to the actual reimbursement amount to identify prediction errors; and retraining the deep-learning neural-network machine learning model with corrected training examples derived from the prediction errors, as identified” is well-understood, routine, and conventional in the field of machine learning. NPL Leskovec, Ch. 12, § 12.7 “Summary of Chapter 12” (pp. 519–20) and Ch. 13, §§ 13.1, 13.2, 13.2.7, 13.2.8, 13.2.9 (pp. 523–538) (describing gradient descent, loss functions based on prediction error, and training of neural nets via repeated weight updates driven by those errors). The Examiner also finds the functions of receiving, storing, transmitting, and processing (e.g., performing mathematical operations on) data described in Limitations A–L are all normal programmed functions of a generic computer.
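The cited mechanism, gradient descent driving weight updates from prediction error, is compact enough to sketch. The following is a textbook illustration under assumed data and learning rate, not code from the application or from NPL Leskovec:

```python
# Textbook gradient descent on a squared-error loss: repeated weight
# updates driven by prediction errors (illustrative data and step size).
data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # (feature, actual value)
w, lr = 0.0, 0.05                            # initial weight, learning rate

for _ in range(200):                         # repeated update rounds
    grad = 0.0
    for x, actual in data:
        error = w * x - actual               # prediction error
        grad += 2 * error * x                # d(loss)/dw for this example
    w -= lr * grad / len(data)               # gradient-descent update
```

With these numbers the weight converges to the least-squares slope (about 1.99); the loop does nothing more than minimize prediction error over the data, which is the sense in which the action treats error-driven retraining as routine.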
There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the additional elements in combination adds nothing that is not already present when looking at the elements individually. Their collective functions merely provide conventional computer implementation of the abstract idea at a high level of generality. Thus, Rep. Claim 21 does not provide an inventive concept. Independent Claims 29 and 36 are not substantially different from Rep. Claim 21 and include all the limitations of Rep. Claim 21. Independent Claims 29 and 36 contain no further additional elements. Therefore, Independent Claims 29 and 36 also do not recite an inventive concept.
Dependent Claims Not Significantly More
The dependent claims have been given the full two-part analysis, including analyzing the additional limitations both individually and in combination. The dependent claims, when analyzed both individually and in combination, are also held to be patent ineligible under 35 U.S.C. § 101. The dependent claims depend from the Independent Claims and include all the limitations of the Independent Claims. Therefore, all dependent claims recite the same Abstract Idea. The dependent claims do not contain additional elements that integrate the abstract idea exception into a practical application or recite an inventive concept because the additional elements: (1) are mere instructions to apply the abstract idea exception; and/or (2) further limit the abstract idea exception of the Independent Claims. The abstract idea itself cannot provide the inventive concept or practical application. MPEP §§ 2106.05(I), 2106.04(d)(III).
Dependent Claims 22, 23, 24, 25, 26, 27, 28, 34, 35, 37, 38, 39, and 40 all recite "wherein" clauses or limitations that further limit the abstract idea of the Independent Claims and contain no additional elements. An inventive concept or practical application cannot be furnished by an abstract idea exception itself. MPEP §§ 2106.05(I), 2106.04(d)(III). Dependent Claims 30, 31, 32, and 33 recite additional limitations that form part of the same abstract idea exception as recited in the Independent Claims and contain no additional elements. An inventive concept or practical application cannot be furnished by an abstract idea exception itself. MPEP §§ 2106.05(I), 2106.04(d)(III).
Conclusion
Claims 21–40 are therefore drawn to ineligible subject matter, as they are directed to an abstract idea without significantly more. The analysis above applies to all statutory categories of invention. As such, the presentment of Rep. Claim 21 otherwise styled as another statutory category is subject to the same analysis.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES H MILLER, whose telephone number is (469) 295-9082. The examiner can normally be reached M-F, 10:00 a.m. to 4:00 p.m. (EST). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bennett M Sigmond, can be reached at (303) 297-4411. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAMES H MILLER/ Primary Examiner, Art Unit 3694
1 Statements of intended use fail to limit the scope of the claim under BRI. MPEP § 2103(I)(C).
2 "It should be noted that these groupings are not mutually exclusive, i.e., some claims recite limitations that fall within more than one grouping or sub-grouping. … Accordingly, examiners should identify at least one abstract idea grouping, but preferably identify all groupings to the extent possible, if a claim limitation(s) is determined to fall within multiple groupings and proceed with the analysis in Step 2A Prong Two." MPEP § 2106.04(a).
3 See Changes in Examination Procedure Pertaining to Subject Matter Eligibility, Recent Subject Matter Eligibility Decision (Berkheimer v. HP, Inc.), 3-4, https://www.uspto.gov/sites/default/files/documents/memo-berkheimer-20180419.PDF (Apr. 19, 2018) (That additional elements are well-understood, routine, or conventional may be supported by various forms of evidence, including "[a] citation to an express statement in the specification or to a statement made by an applicant during prosecution that demonstrates the well-understood, routine, conventional nature of the additional element(s).").

Prosecution Timeline

Sep 08, 2022
Application Filed
May 16, 2024
Non-Final Rejection — §101
Jul 16, 2024
Examiner Interview Summary
Aug 21, 2024
Response Filed
Aug 31, 2024
Final Rejection — §101
Dec 02, 2024
Request for Continued Examination
Dec 03, 2024
Response after Non-Final Action
Feb 13, 2025
Non-Final Rejection — §101
Apr 17, 2025
Examiner Interview Summary
Apr 25, 2025
Response Filed
May 08, 2025
Non-Final Rejection — §101
Jul 11, 2025
Interview Requested
Jul 21, 2025
Examiner Interview Summary
Aug 12, 2025
Response Filed
Nov 05, 2025
Final Rejection — §101
Jan 28, 2026
Request for Continued Examination
Feb 15, 2026
Response after Non-Final Action
Mar 06, 2026
Non-Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602690
SYSTEMS AND METHODS FOR TRANSACTION AUTHORIZATION
2y 5m to grant Granted Apr 14, 2026
Patent 12591931
METHODS, APPARATUS, AND SYSTEMS TO FACILITATE TRADES USING DISPLAYED FINANCIAL CURVES
2y 5m to grant Granted Mar 31, 2026
Patent 12561745
Artificial Intelligence Systems and Methods for Efficient Use of Assets
2y 5m to grant Granted Feb 24, 2026
Patent 12547992
CRYPTOGRAPHIC CURRENCY EXCHANGE
2y 5m to grant Granted Feb 10, 2026
Patent 12518279
SYSTEMS AND METHODS FOR PROVIDING MULTI-FACTOR AUTHENTICATION FOR VEHICLE TRANSACTIONS
2y 5m to grant Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

6-7
Expected OA Rounds
40%
Grant Probability
77%
With Interview (+36.6%)
3y 7m
Median Time to Grant
High
PTA Risk
Based on 193 resolved cases by this examiner. Grant probability derived from career allow rate.
