Prosecution Insights
Last updated: April 18, 2026
Application No. 16/728,356

INTELLIGENT SERVICING

Final Rejection: §101, §103, §112
Filed: Dec 27, 2019
Examiner: GREGG, MARY M
Art Unit: 3695
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Lendingclub Bank, National Association
OA Round: 6 (Final)

Grant Probability: 14% (At Risk)
Estimated OA Rounds: 7-8
Estimated Time to Grant: 5y 3m
Grant Probability With Interview: 28%

Examiner Intelligence

Career Allow Rate: 14% (89 granted / 629 resolved; -37.9% vs TC avg)
Interview Lift: +14.3% (moderate lift, measured across resolved cases with interview)
Typical Timeline: 5y 3m average prosecution
Career History: 692 total applications across all art units; 63 currently pending

Statute-Specific Performance

§101: 31.3% (-8.7% vs TC avg)
§103: 37.2% (-2.8% vs TC avg)
§102: 12.2% (-27.8% vs TC avg)
§112: 18.3% (-21.7% vs TC avg)
Comparisons are against the Tech Center average estimate; based on career data from 629 resolved cases.

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The following is a Non-Final Office Action in response to communications received December 16, 2025. Claims 1-25 and 28-29 have been canceled. Claim 26 has been amended. New claims 32-34 have been added. Therefore, claims 26-27 and 30-34 are pending and addressed below.

Priority: Application No. 16/728,356, filing date 12/27/2019.
Applicant Name/Assignee: Lendingclub Bank, National Association
Inventor(s): Liu, Jianju; Han, Jianglan; Vijayakumar, Sandeep; Nazari, Ali; Thukral, Ashish; Raval, Ahutosh

Response to Amendment/Arguments

Claim Rejections - 35 USC § 112(a)

Applicant's amendments are not sufficient to overcome the 112(a) rejection of claims 26-31. The examiner maintains the previous 112(a) rejection of claims 26-31. However, a new 112(a) rejection has been set forth below in response to the amended limitations.

Claim Rejections - 35 USC § 101

Applicant's arguments filed 12/16/2025 have been fully considered but they are not persuasive. In the remarks, applicant argues that the amended limitations reflect an improvement to technology in the architecture and operation of machine learning systems consistent with Ex parte Dejardin. Applicant's argument is not persuasive. The specification discloses training a machine learning engine to evolve knowledge to determine characteristics from inputted data for predicting default (¶ 0012, 0014, 0018, 0021, 0029-0030, 0035, 0042), and trained to select the most effective remedial action based on the predicted likelihood of default (¶ 0020-0021, 0035). The specification does not disclose any indication of a training process directed toward improvement of technology, but rather the application of learning technology to determine characteristics for predicting default.
The specification lacks technical disclosure with respect to the training of the ML model and instead focuses on expected outcomes, where the training nominally mentions "building models," which can be implemented by any known means (¶ 0031), and models "refined based on new data" (¶ 0032, 0040). The machine learning as set forth in the limitations and specification, as per the requirements of Dejardin, merely uses machine learning techniques in a particular environment with no inventive concept or any process directed toward the improvement of machine learning technology or any other underlying technology in the claims or specification. The specification teaches that the machine learning model may be "trained using a set of training data," which can include "historical data from real-time snapshots of events." The machine learning model may "be trained to recognize one or more of the target features (probability of default or recommended remedial action) based on a given set of input data." The current application of machine learning technology does not do more than apply established methods of machine learning to a new data environment. As claimed, the process, in light of the specification, qualifies as an abstract idea for which computers are invoked merely as a tool (see Finjan, Inc. v. Blue Coat Sys., Inc., 879 F.3d 1299, 1303 (Fed. Cir. 2018)). The machine learning technology described in the specification is conventional, as the specification demonstrates, and is used for default prediction and loan servicing features (¶ 0012, 0014, 0030-0032, 0036-0041). See Recentive Analytics, Inc. v. Fox Corp. The rejection is maintained.

In the remarks, applicant argues that amended claim 26 does not reflect the financial risk analysis or credit-related decision-making processes of the abstract idea identified in the previous Office Action; rather, the limitations are directed toward a pipeline technique applying machine learning technology.
Specifically, the limitations include applying different trained versions of the same model to a data snapshot, storing outputs of the different models, assembling a feature vector of the models' outputs, and providing the feature vector as machine-learning input features to a second machine learning model. Accordingly, applicant argues, the limitations focus on how the system manages the model versions, feature construction, and downstream inferences, not on business/financial aspects, and therefore under step 2A prong 1 the claimed limitations are not directed toward the abstract subject matter of mental concepts or methods of organizing human activity.

With respect to mental processes, applicant is arguing a rejection not applied under step 2A prong 1.

Applicant's argument is not persuasive. The claim limitations recite applying the ML model to data snapshots to "produce a ... prediction score," "produce a second prediction," and "providing as input ... to generate a decision output." Applying Ex parte Dejardin, the specification identifies the decision output as a predicted likelihood of default based on time-series snapshots (¶ 0014), where the snapshots include FHA snapshots of a borrower on a time series reflecting the borrower's financial health (payment history, loan terms, etc.) for a specified period (¶ 0015, 0022), and identifies the prediction score as an indication of the likelihood of default of the loans (¶ 0001, 0010, 0014, 0035-0036, 0042), which is explicitly directed toward the analysis of human behavior. The step 2A prong 1 rejection is maintained.

In the remarks, applicant argues that under step 2A prong 2 the claimed limitations integrate any alleged abstract idea into a practical application. Applicant argues that the claimed subject matter addresses a technical problem specific to machine learning systems, which typically overwrite and discard outputs from earlier trained model versions when a model is retrained, preventing downstream models from leveraging information in model evolution.
Applicant argues that "retaining outputs from multiple versions of the same model," encoding outputs as model-specific attributes, "assembl[ing] attributes into a feature vector," and "enabl[ing] downstream machine learning inference using the feature vector" improve the operation of the machine learning model. Applicant's argument is not persuasive. Applicant's argument that deriving multiple generations of a model over time is an improvement over existing technology is not supported by evidence. The examiner provides as evidence that it is known in the art to construct different versions of models over time:

- US Pub. No. 2015/0332165 A1 by Mermoud - FIG. 7;
- US Pub. No. 2019/0385070 A1 by Lee et al. - "with the data having then been processed by daily shutdown predictor 810 and updated by data updater 826, period updater 828 may update data used in the generation, training, and evaluation of new models as well as the re-evaluation and re-training of existing models" (para 0142);
- US Pub. No. 2018/0349769 A1 by Mankovskii et al. - claim 15;
- US Pub. No. 2017/0223036 A1 by Muddu et al. - "method involves training a second version of the machine learning model with the time slice that is being processed through the first version for scoring, in parallel with the processing the time slice";
- CA 2821103 by Padullaparthi et al. - claim 14;
- WO 2018/224669 A1 by Servajean et al. - "model is periodically recorded after each of a plurality of training iterations so that multiple versions of the model are available, each successive model being trained with more iterations than a previous model. Then, in a training phase, time-series data for a device is processed by all the models at once to generate, for each host, a vector of reconstruction error values (one for each model). A derivative of this vector is evaluated to determine a gradient vector which is used to train a machine learning model such as a one-class support vector machine (SVM) (one-class because only positive examples are used in training - i.e. examples reflecting 'normal' traffic)";
- WO 2018/111270 A1 by Garg et al. - "model server 1301 can continuously or periodically update machine learning models to generate new versions of machine learning models based on additional training data."

With respect to the vectoring of input data for a machine learning process: vectoring data for input is well known in the realm of computer technology; therefore its use for inputted data is not an improvement over existing technology. The rejection is maintained.

In the remarks, applicant repeats argument 1 with respect to the USPTO 101 AI guidance for patent-eligible subject matter in light of the Ex parte Dejardin decision. See the response above; the rejection is maintained.

Claim Rejections - 35 USC § 103

Applicant's arguments with respect to claims 26-32 have been considered but are moot because of the new ground of rejection applied to the prior rejection of record in light of the amendments presented and argued. In the remarks, applicant argues the prior art references fail to teach "applying different trained versions of the same machine learning model to the same structured data snapshot"; applicant's argument is moot as a new reference has been applied to address the submitted amendments.
In the remarks, applicant argues the prior art references fail to teach "storing outputs from multiple trained versions concurrently as separate derived attributes within a single snapshot"; applicant's argument is moot as a new reference has been applied in the modification of the prior art rejection to address the submitted amendments.

In the remarks, applicant argues the prior art references fail to teach "assembling a feature vector that intentionally includes multiple model-specific outputs"; applicant's argument is moot as a new reference has been applied to address the submitted amendments.

In the remarks, applicant argues the prior art references fail to teach "providing such a feature vector as machine-learning input features to a downstream machine learning model"; applicant's argument is moot as a new reference has been applied to address the submitted amendments.

In the remarks, applicant argues the prior art references fail to teach "preserve outputs from multiple trained versions of the same model"; applicant's argument is moot as a new reference has been applied to address the submitted amendments.

In the remarks, applicant argues the prior art references fail to teach "encode those outputs as distinct, model-specific attributes"; the examiner respectfully disagrees. The prior art Findanza teaches, at paragraphs 0175-0180, creating input vectors from original input data.

In the remarks, applicant argues the prior art references fail to teach "assemble those attributes into a feature vector for downstream machine learning inference"; applicant's argument is moot as a new reference has been applied to address the submitted amendments.

In the remarks, applicant argues that the dependent claims, based on the alleged allowable subject matter of the independent claims over the prior art references, are also allowable over the prior art references. The examiner respectfully disagrees.
See the response above and the amended rejection below.

Claim Objections

Claim 26 is objected to because of the following informalities: the parenthetical expression recites "Previously Presented" on the amended claims. Appropriate correction is required.

Claim Interpretation

The amended limitations of the independent claims recite "applying a first version of the machine learning model…" and "applying a second version of the machine learning model…", which the specification describes as model generations:

[0025] In addition, attribute derivation unit 106 may include the logic of one or more credit models. For the purpose of explanation, it shall be assumed that attribute derivation unit 106 includes the logic for five generations (G1 to G5) of a credit model. The credit model for each generation takes the values of various raw attributes as input and, based on those values, generates a "credit score" for the borrower. Typically, different generations of a credit model will take different raw attributes as input and/or apply different weights to those raw attributes to derive a credit score for a user. Thus, even for the same user with the same raw attributes, the credit scores generated by each generation of credit model will be different.

[0026] The input attributes and logic of the credit models may vary from generation to generation, so the credit scores generated by the various credit model generations may also differ from generation to generation. In such an embodiment, any given FHA snapshot may include five different derived credit scores, one for each of the five generations of the credit model.

The examiner is interpreting the language "version" to be analogous to the "generation" (G1-GN) as described in the specification. With respect to the terminology "vector", the examiner is interpreting the term by its ordinary meaning: in a machine learning context, vectors are encoded data points representing data.
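For orientation, the interpretation above (a "version" as a "generation" of a credit model, each scoring the same snapshot from different raw attributes and weights) can be sketched in code. This is a minimal hypothetical illustration only; the generation names, attributes, and weights below are invented and do not come from the application's disclosure:

```python
# Hypothetical sketch: several "generations" (versions) of a credit model
# applied to the raw attributes of one structured data snapshot.
# Weights and attribute names are illustrative, not from the specification.
GENERATIONS = {
    "G1": {"payment_history": 0.7, "utilization": 0.3},
    "G2": {"payment_history": 0.5, "utilization": 0.3, "loan_age": 0.2},
    "G3": {"payment_history": 0.4, "utilization": 0.4, "loan_age": 0.2},
}

def score(weights, raw_attributes):
    """Weighted sum over whichever raw attributes this generation uses."""
    return sum(w * raw_attributes.get(attr, 0.0) for attr, w in weights.items())

def derive_scores(snapshot):
    """Store each generation's score as a separate derived attribute."""
    for gen, weights in GENERATIONS.items():
        snapshot["derived"][f"score_{gen}"] = score(weights, snapshot["raw"])
    return snapshot

def feature_vector(snapshot):
    """Assemble the model-specific scores into one vector for a downstream model."""
    return [snapshot["derived"][f"score_{g}"] for g in sorted(GENERATIONS)]

snap = {"raw": {"payment_history": 0.9, "utilization": 0.2, "loan_age": 0.5},
        "derived": {}}
derive_scores(snap)
vec = feature_vector(snap)
```

Under this reading, the same borrower snapshot yields a different score per generation, and all of them coexist as distinct derived attributes within the snapshot.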
Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 26-27 and 30-34 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
In reference to Claims 26-27 and 30-34:

Independent claim 26 recites the limitation "computing a derived feature based on a difference between the first prediction score and the second prediction score". Although the specification has possession of generating prediction scores (¶ 0042) and of scores generated by more than one model (¶ 0017), the original presentation of the written description is silent with respect to this operation. Although the specification has possession of a process to generate derived features, there is no possession of "computing a derived feature based on a difference between the first prediction score and the second prediction score".

With respect to the limitation "storing the first prediction score and the second prediction score as separate derived attributes within the corresponding structured data snapshot": although the specification has possession of "storing information and instructions to be executed by processor… Main memory 306 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 304" (¶ 0046) and a "static storage device …for storing static information and instructions for processor 304…A storage device 310, such as a magnetic disk, optical disk, or solid-state drive is provided and coupled to bus 302 for storing information and instructions" (¶ 0047), the original presentation of the written description is silent with respect to the process of "storing the first prediction score and the second prediction score as separate derived attributes within the corresponding structured data snapshot".

With respect to the limitations "assembling, for the structured data snapshot, a feature vector that includes the first prediction score and the second prediction score as distinct model-specific attributes" and "providing the feature vector together with other derived attributes from the structured data snapshot as input to a second machine learning model configured to generate a decision output based on the feature vector as a set of machine-learning input features for the entity…": the specification is silent with respect to data or features being assembled as vectors. The original description does not recite any vector data features or vectors provided with other derived attributes; the term "vector" is not mentioned in the specification. Accordingly, the limitations "storing the first prediction score and the second prediction score as separate derived attributes within the corresponding structured data snapshot", "assembling, for the structured data snapshot, a feature vector that includes the first prediction score and the second prediction score as distinct model-specific attributes", and "providing the feature vector together with other derived attributes from the structured data snapshot as input to a second machine learning model configured to generate a decision output based on the feature vector as a set of machine-learning input features for the entity…" are new matter.

In reference to Claim 27:

Claim 27 recites the limitation "associating each prediction score with a corresponding model version identifier, and storing the version identifiers as metadata with the structured data snapshots", which is new matter. The specification has possession of: "[0035]…FIG. 2 is described specifically in respect of one of the servers 108, analogous versions of the system 99 can also be used for the user devices 104."

Dependent claims 27 and 30-34 depend upon claim 26 and contain the same deficiencies as discussed above with respect to claim 26. Therefore, claims 26-27 and 30-34 are rejected for failing to comply with the written description requirements of 35 U.S.C. 112(a).

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 26-27 and 30-34 are rejected under 35 U.S.C. § 101 because the instant application is directed to non-patentable subject matter. Specifically, the claims are directed toward at least one judicial exception without reciting additional elements that amount to significantly more than the judicial exception. The rationale for this determination is in accordance with the guidelines of the USPTO, applies to all statutory categories, and is explained in detail below.

In reference to claims 26-27 and 30-34:

STEP 1. Per Step 1 of the two-step analysis, the claims are determined to include a method, as in independent claim 26 and the dependent claims. Such methods fall under the statutory category of "process." Therefore, the claims are directed to a statutory eligibility category.

STEP 2A Prong 1. The claimed invention is directed to an abstract idea without significantly more. Method claim 26 recites the method steps of (1) generating a sequence of data snapshots, (2) producing a first prediction score, (3) producing a second prediction score, (4) storing the first and second prediction scores within the corresponding structured data snapshot, (5) assembling a feature vector that includes the first and second prediction scores as distinct model-specific attributes, and (6) providing the feature vector together with other derived features as input to generate a decision. The claimed limitations, under their broadest reasonable interpretation, cover performance of risk analysis and mitigation, a fundamental economic practice. The claim limitations recite applying the ML model to data snapshots to "produce a ... prediction score," "produce a second prediction," and "providing as input ... to generate a decision output."
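For reference, the six enumerated steps describe a two-stage pipeline. A minimal sketch of steps (1)-(6) follows; every model and value here is a trivial invented stand-in (linear functions on made-up attributes), not the applicant's actual implementation:

```python
# Illustrative-only sketch of the six enumerated method steps of claim 26.
# The "models" are trivial stand-ins, not real trained models.

def model_v1(snapshot):          # first trained version (stand-in)
    return 0.8 * snapshot["payment_history"]

def model_v2(snapshot):          # version trained later (stand-in)
    return 0.6 * snapshot["payment_history"] + 0.4 * snapshot["utilization"]

def decision_model(features):    # downstream second model (stand-in)
    return "flag" if sum(features) / len(features) < 0.5 else "ok"

# (1) generate a time-aligned sequence of structured data snapshots
snapshots = [
    {"payment_history": 0.9, "utilization": 0.8},
    {"payment_history": 0.3, "utilization": 0.2},
]

decisions = []
for snap in snapshots:
    snap["score_v1"] = model_v1(snap)           # (2) first prediction score
    snap["score_v2"] = model_v2(snap)           # (3) second prediction score
    # (4) both scores now sit in the snapshot as separate derived attributes
    vec = [snap["score_v1"], snap["score_v2"]]  # (5) assemble feature vector
    other = [snap["utilization"]]               # other derived attributes
    decisions.append(decision_model(vec + other))  # (6) downstream decision
```

The point of the sketch is structural: both versions' outputs persist side by side in each snapshot and reach the second model together, which is the behavior the claims recite.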
Applying Ex parte Dejardin, the specification identifies the decision output as a predicted likelihood of default based on time-series snapshots (¶ 0014), where the snapshots include FHA snapshots of a borrower on a time series reflecting the borrower's financial health (payment history, loan terms, etc.) for a specified period (¶ 0015, 0022), and identifies the prediction score as an indication of the likelihood of default of the loans (¶ 0001, 0010, 0014, 0035-0036, 0042), which is explicitly directed toward the analysis of human behavior. Therefore, when considered as a whole, the claimed subject matter is directed toward receiving and analyzing financial data in order to determine and compute derived features based on prediction scores in order to output a risk decision. Such concepts fall within the abstract category of financial behavior and fundamental economic practices. These concepts, enumerated in Section I of the 2019 Revised Patent Subject Matter Eligibility Guidance published in the Federal Register (84 FR 50) on January 7, 2019, are directed toward the abstract category of methods of organizing human activity.

STEP 2A Prong 2: The identified judicial exception is not integrated into a practical application because the claims fail to provide indications of patent-eligible subject matter that integrate the alleged abstract idea into a practical application. The additional elements recited in the claim beyond the abstract idea include a first version of a machine learning model, a second version of a machine learning model, and a device including a hardware processor to perform the method. The claim limitations generally link use of the judicial exception to the device including hardware. The additional element "device including hardware" is recited to perform the operations "storing the first and second prediction scores as separate attributes within corresponding snapshot" and "providing feature vector with other derived attributes as input to …model".
According to MPEP 2106.05(d)(II) (see also MPEP 2106.05(g)), the courts have recognized the following computer functions as claimed in a merely generic manner (e.g., at a high level of generality), where technology is merely applied to perform the abstract idea or as insignificant extra-solution activity:

- Receiving or transmitting data over a network, e.g., using the Internet to gather data: Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); but see DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258, 113 USPQ2d 1097, 1106 (Fed. Cir. 2014).
- Storing and retrieving information in memory: Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93.

The claim limitations (storing and providing) are recited at a high level of generality without details of technical implementation and thus are insignificant extra-solution activity. The additional element "device including hardware" is recited to perform the operations "generating …time aligned sequence of data snapshot", "applying …first learning model to produce first prediction score", "applying …second learning model to produce second prediction score", and "assembling a feature vector that includes first and second prediction score", which individually are directed toward analyzing financial data in order to compute prediction scores indicating likelihood of default.
The claim limitations recite applying the ML model to data snapshots to "produce a ... prediction score," "produce a second prediction," and "providing as input ... to generate a decision output." Applying Ex parte Dejardin, the specification identifies the decision output as a predicted likelihood of default based on time-series snapshots (¶ 0014), where the snapshots include FHA snapshots of a borrower on a time series reflecting the borrower's financial health (payment history, loan terms, etc.) for a specified period (¶ 0015, 0022), and identifies the prediction score as an indication of the likelihood of default of the loans (¶ 0001, 0010, 0014, 0035-0036, 0042), which is explicitly directed toward the analysis of human behavior and not toward any of the underlying technology. The claimed hardware processor recited for performing the above limitations is merely acting as a tool to automate the abstract idea of generating data for use in computing a derived feature provided as input to a second ML model.

The claim limitations recite "applying" a first and a second machine learning model to produce a first and a second prediction score, respectively. The functions are recited at a high level of generality such that they amount to no more than applying the exception using generic computer components. Taking the claim elements separately, the operation performed by the processor and the applied first and second machine learning models at each step of the process is purely in terms of results desired and devoid of implementation details. Technology is not integral to the process, as the claimed subject matter is at so high a level that any generic programming could be applied and the functions could be performed by any known means.
When considered as a combination of parts, limitations 1-3 (generating data snapshots, applying the model to data snapshots to produce a score, and applying a second model trained after the first [model] version to produce a second prediction score) are directed toward collecting data and computing values using models. These claimed limitations are not directed toward the learning model itself but rather toward its application for computing prediction scores based on the data applied. The limitation where the second model is trained after the first model lacks technical disclosure, failing to provide any processes directed toward training the second model other than that the training occurs after the first learning model. Accordingly, the combination of steps 1-3 is not directed toward processes which indicate patent eligibility under step 2A prong 2.

The combination of limitations 1-3 and 4-5 is directed toward storing the computed scores and assembling the prediction scores into vectors for input into the learning model for the intended use of generating a decision output based on the scores calculated in limitations 2-3. Accordingly, the claim limitations as a whole are not directed toward improving or changing the way processors, learning models, or corresponding technology operate. Instead, the combination, when considered as a whole, is directed toward applying generated data in order to calculate a first and second score used to compute a derived feature that is provided for use in generating a decision output. The claim limitations and specification lack technical disclosure of the details of technical implementation of the claimed limitations and make clear that technology, or the improvement thereof, is not the focus of the claimed invention. The claim limitations are not directed toward a solution to a technical problem. The specification discloses using snapshots of data over time in order to calculate a credit-risk-related value.
Accordingly, the claimed limitations are not directed toward a technical solution to a technical problem, but rather toward a solution to a problem found in the abstract idea. The integration of the processor and the application of the first and second machine learning models used to calculate credit scores do not improve upon technology or upon computer functionality or capability in how computers carry out one of their basic functions. The integration of elements does not provide a process that allows computers to perform functions that previously could not be performed. The integration of elements does not provide a process which applies a relationship to apply a new way of using an application. The limitations do not recite a specific-use machine or the transformation of an article to a different state or thing. The limitations do not provide other meaningful limits beyond generally linking the use of the abstract idea to a particular technological environment. The resource claimed as performing the steps is merely a "field of use" application of technology. The instant application, therefore, still appears only to implement the abstract idea in a particular technological environment, applying generic computer functionality of the related arts. The steps are still a combination made to perform a financial activity and do not provide any of the indications of patent eligibility set forth in the 2019 USPTO 101 guidance. The additional steps only add to those abstract ideas using generic functions, and the claims do not show improved ways of, for example, a particular technical function for performing the abstract idea that imposes meaningful limits upon the abstract idea.
Moreover, the examiner was not able to identify any specific technological process that goes beyond merely confining the abstract idea to a particular technological environment and that, when considered in ordered combination with the other steps, could have transformed the nature of the abstract idea previously identified. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim provides no technical details regarding how the operations are performed by the "processor", the "first machine learning model", or the "second machine learning model". Instead, similar to the claims at issue in Intellectual Ventures I LLC v. Capital One Financial Corp., 850 F.3d 1332 (Fed. Cir. 2017), "the claim language . . . provides only a result-oriented solution with insufficient detail for how a computer accomplishes it. Our law demands more." Intellectual Ventures, 850 F.3d at 1342 (citing Elec. Power Grp., LLC v. Alstom, S.A., 830 F.3d 1350, 1356 (Fed. Cir. 2016)). The claim is directed to an abstract idea.

STEP 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above, the claims fail to integrate the abstract idea into a practical application. The additional elements recited in the claims beyond the abstract idea include a processor to perform the operations "generating …data", "storing …score", "assembling …feature vector", and "providing input …to generate output", as well as "applying" a "first machine learning model" to "produce [a] score" and "applying …[a] second machine learning model" to "produce [a] score". Taking the claim elements separately, the function performed by the processor at each step of the process is purely conventional.
The recitation of applying machine learning models to produce scores is so high level that the operations of the models do not distinguish their functions over generically programmed models. Furthermore, the limitations "generating," "storing," "applying models to provide scores," "assembling feature vectors of the computed scores," "providing input," and "generate output" are some of the most basic functions of a computer. Limitations in which the additional elements are merely applied to perform the abstract idea, as discussed in Alice, are not enough to qualify as "significantly more." Alice found that limitations amounting to "apply it" (or an equivalent) with an abstract idea, mere instructions to implement the abstract idea on a computer, or a requirement of no more than a generic computer performing generic computer functions that are well-understood activities known to the industry, are insufficient. As a result, none of the hardware recited by the method claims offers a meaningful limitation beyond generally linking the use of the method to a particular technological environment, that is, implementation via computers. The claim limitations do not recite that any of the "devices" perform more than a high-level generic function. None of the limitations recite technological implementation details for any of these steps, but instead recite only results desired to be achieved by any and all possible means. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. When the claims are taken as a whole, as an ordered combination, the combination of steps does not add "significantly more." All of these computer functions are generic, routine, conventional computer activities performed only for their conventional uses. See Elec. Power Grp. v. Alstom S.A., 830 F.3d 1350, 1353 (Fed. Cir. 2016).
See also In re Katz Interactive Call Processing Patent Litigation, 639 F.3d 1303, 1316 (Fed. Cir. 2011). Absent a narrower construction of the terms "generating," "applying models to provide scores," "providing input," and "generate output," these functions can be achieved by any general-purpose computer without special programming. None of these activities are used in an unconventional manner, nor do any produce an unexpected result. In short, each step does no more than require a generic computer to perform generic computer functions. As to the data operated upon, "even if a process of collecting and analyzing information is 'limited to particular content' or a particular 'source,' that limitation does not make the collection and analysis other than abstract." SAP America, Inc. v. InvestPic, LLC, 898 F.3d 1161, 1168 (Fed. Cir. 2018). Considered as an ordered combination, the computer components of Applicant's claimed functions add nothing that is not already present when the steps are considered separately. The sequence of data reception, analysis, modification, and transmission is equally generic and conventional. See Ultramercial, Inc. v. Hulu, LLC, 772 F.3d 709, 715 (Fed. Cir. 2014) (sequence of receiving, selecting, offering for exchange, displaying, allowing access, and receiving payment recited as an abstraction); Inventor Holdings, LLC v. Bed Bath & Beyond, Inc., 876 F.3d 1372, 1378 (Fed. Cir. 2017) (sequence of data retrieval, analysis, modification, generation, display, and transmission); Two-Way Media Ltd. v. Comcast Cable Communications, LLC, 874 F.3d 1329, 1339 (Fed. Cir. 2017) (sequence of processing, routing, controlling, and monitoring). The ordering of the steps is therefore ordinary and conventional. The analysis concludes that the claims do not provide an inventive concept because the additional elements recited in the claims do not provide significantly more than the recited judicial exception.
According to MPEP § 2106.05, well-understood and routine processes used to perform the abstract idea are not sufficient to transform the claim into patent-eligible subject matter. As evidence, the Examiner cites the specification, which discloses: [0032] In view of the above, for semi-cold start problem, the user behavior and usage data 101a within the application 91 is stored to provide better personalization when they return. In addition, the content-to-content filtering algorithm 57a is used to recommend 302 the list of similar services 90a,b,c,d to the one that was chosen previously by the user for better user experience and show the variety of similar available services. For premium existing institution clients (an embodiment of user type), transactions data 101b can be leveraged by the system 99 to build an immersive user experience recommendation by combining with the state-of-the-art machine learning algorithms of the recommendation engine 54b and the life event predictor 54a. Further, based on the extensive user research, it is understood that there is a vacuum when it comes to financial jargons and the need for financial education. Hence, the smart guide 54e is used by the user for providing financial education and recommending the relevant services 90a,b,c,d by the recommendation engine 54b based on the searches performed by the user when using the smart guide 54e. [0036]… In conjunction with manipulation of the application 91 by the user, the system 99 can apply a machine learning recommendation engine 54b (see Figure 3) to generate a recommendation / prediction 302 of the Right Services (e.g. 90a,b,c,d) at the right time from a curated platform of services 90 to meet the identified needs 301…. [0043] Referring to Figure 4, the recommendation engine 54b algorithm can be implemented using technology such as but not limited to Public Python Libraries (e.g.
Scikit Learn, Pandas, Numpy, NLTK, Collections, Matplotlib, Seaborn, radar, boto3, imblearn, XGBoost, tqdm, botocore, pyyaml) and machine learning algorithms such as but not limited to: XGBoost Machine Learning Model with SMOTE to handle the class imbalance and feature importance using Random Forest algorithm to select the most important features and feed to the model 59 to predict a user's life event 300 by the predictor 54a; K-Means Clustering algorithm with NLTK data descriptive analysis along with Google Gensim word2vec model to cluster the institution in-house and third party partnered services 90a,b,c,d into k clusters for content-to-content recommendation via the model 57a in order to predict 302 a selected set (e.g. top 5) relevant services 90a,b,c,d, where K can be chosen with Elbow analysis followed by Silhouette analysis; and Cosine similarity scoring function with NLTK data descriptive analysis along with Gensim word2vec model for an optimized financial glossary search engine for the get smart guide 54e. [0050]… A machine learning (ML) pipeline can be used for generating the life event predictor results 300 as follows. The ML pipeline can consist of components namely Data preprocessing and transformation, Feature selection, Model Training & Evaluation, and Model monitoring and comparison…. [0057]… This can be provided by machine learning (ML) algorithms like XGBoost and Natural Language Processing (NLP) algorithms like Gensim coupled with a cosine similarity cost function. Accordingly, the application 91 can recommend 302 a variety of services 90a,b,c,d based on the previous clients' preferences 101a within the application 91. In terms of diversity, it is facilitated that a range of services 90a,b,c,d can be recommended without tampering the content similarity, as discussed by example above.
[0069] The processor used in the foregoing embodiments may comprise, for example, a processing unit (such as a processor, microprocessor, or programmable logic controller) or a microcontroller (which comprises both a processing unit and a non-transitory computer readable medium). Examples of computer readable media that are non-transitory include disc-based media such as CD-ROMs and DVDs, magnetic media such as hard drives and other forms of magnetic disk storage, semiconductor based media such as flash media, random access memory (including DRAM and SRAM), and read only memory. As an alternative to an implementation that relies on processor-executed computer program code, a hardware-based implementation may be used. For example, an application-specific integrated circuit (ASIC), field programmable gate array (FPGA), system-on-a-chip (SoC), or other suitable type of hardware implementation may be used as an alternative to or to supplement an implementation that relies primarily on a processor executing computer program code stored on a computer medium.
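For orientation, the cosine-similarity scoring function the quoted specification names (¶ 0043) can be sketched in plain Python. This is an illustrative sketch only: the application discloses using NLTK and Gensim word2vec embeddings, which are not reproduced here, and the toy glossary terms and vector values below are invented for illustration.

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0.0 or norm_v == 0.0:
        return 0.0
    return dot / (norm_u * norm_v)

# Toy glossary "embeddings" ranked against a query embedding.
query = [1.0, 0.0, 1.0]
glossary = {"APR": [0.9, 0.1, 0.8], "escrow": [0.0, 1.0, 0.1]}
ranked = sorted(glossary,
                key=lambda term: cosine_similarity(query, glossary[term]),
                reverse=True)
```

As the sketch suggests, such a scoring function is a standard similarity metric applied over precomputed embeddings, which is consistent with the Examiner's characterization of the disclosure as applying known algorithms rather than describing a new training process.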
With respect to the limitation "storing the first prediction score and the second prediction score as separate derived attributes within the corresponding structured data snapshot," although the specification demonstrates possession of "storing information and instructions to be executed by processor… Main memory 306 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 304" (¶ 0046) and "static storage device …for storing static information and instructions for processor 304…A storage device 310, such as a magnetic disk, optical disk, or solid-state drive is provided and coupled to bus 302 for storing information and instructions" (¶ 0047), the original presentation of the written description is silent with respect to the process of "storing the first prediction score and the second prediction score as separate derived attributes within the corresponding structured data snapshot." With respect to the limitations "assembling, for the structured data snapshot, a feature vector that includes the first prediction score and the second prediction score as distinct model-specific attributes" and "providing, the feature vector together with other derived attributes from the structured data snapshot as input to a second machine learning model configured to generate a decision output based on the feature vector as a set of machine-learning input features for the entity…," the specification is silent with respect to data or features being assembled as vectors. The original description does not recite any vector data features or vectors provided with other derived attributes; the term "vector" is not mentioned in the specification. See also Electric Power Group (receiving and analyzing data and outputting the results).
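To make the disputed claim language concrete, the amended limitations can be paraphrased as the following purely hypothetical sketch. Every name below (the snapshot structure, the attribute keys, the score values) is invented for illustration; as noted above, none of this appears in the specification, which is silent on vectors.

```python
# Hypothetical paraphrase of the amended claim language only; not drawn
# from the application's disclosure.
snapshot = {
    "entity_id": "E-001",
    "interval": "2024-Q1",
    "attributes": {"balance": 1200.0, "utilization": 0.42},
}

score_v1 = 0.31  # first trained version of the model applied to the snapshot
score_v2 = 0.27  # later-trained version applied to the same snapshot

# "storing ... as separate derived attributes within the ... snapshot"
snapshot["attributes"]["score_model_v1"] = score_v1
snapshot["attributes"]["score_model_v2"] = score_v2

# "assembling ... a feature vector that includes the first prediction score
# and the second prediction score as distinct model-specific attributes"
feature_vector = [
    snapshot["attributes"]["score_model_v1"],
    snapshot["attributes"]["score_model_v2"],
]

# "providing the feature vector together with other derived attributes ...
# as input to a second machine learning model"
model_input = feature_vector + [snapshot["attributes"]["utilization"]]
```

The sketch illustrates what the claim recites as an ordered result; it supplies none of the implementation detail whose absence from the written description is the basis of the § 112(a) rejection.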
The instant application, therefore, still appears only to implement the abstract ideas in particular technological environments using generic components and functions known in the related arts. The claim is not patent eligible. The remaining dependent claims, which impose additional limitations, also fail to claim patent-eligible subject matter because the limitations cannot be considered statutory. In reference to claims 27 and 30-34, these dependent claims have been reviewed under the same analysis as independent claim 26. Dependent claim 27 is directed toward associating scores with an identifier and storing identifiers, a business practice. Dependent claim 30 is directed toward using feedback data to train a model and incorporating a delay window to account for outcome latency, applying technology in a conventional, routine process lacking technical description. Dependent claim 31 is directed toward applying a pipeline that normalizes, aggregates, or transforms raw data, mere data manipulation. Dependent claim 32 is directed toward retraining the second model based on changes in attributes across data snapshots, a well-understood use and application of technology. Dependent claim 33 is directed toward generating, normalizing, or aggregating input data from a plurality of sources, mere data manipulation. Dependent claim 34 is directed toward generating a decision output based on results of analysis, insignificant extra-solution activity. The dependent claims have been examined individually and in combination with the preceding claims; however, they do not cure the deficiencies of claim 26. Where all claims are directed to the same abstract idea, "addressing each claim of the asserted patents [is] unnecessary." Content Extraction & Transmission LLC v. Wells Fargo Bank, Nat'l Ass'n, 776 F.3d 1343, 1348 (Fed. Cir. 2014).
If applicant believes dependent claims 27 and 30-34 are directed toward patent-eligible subject matter, applicant is invited to point out the specific limitations in the claims that are directed toward patent-eligible subject matter. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim(s) 26-27 and 31 is/are rejected under 35 U.S.C. 103 as being unpatentable over WO 2018/226337/CA 3065807 by Fidanza et al. (Fidanza), in view of US Pub No. 2020/0320428 A1 by Chaloulos et al. (Chaloulos), and further in view of US Pub No.
2019/0138946 A1 by Asher et al. (Asher) In reference to Claim 26: Fidanza teaches: (Currently Amended) A method for generating predictive features from versioned machine learning outputs ((Fidanza) in at least Abstract), the method comprising: generating, for a given entity, a time-aligned sequence of structured data snapshots, each snapshot corresponding to a time interval and comprising a consistent set of attribute values ((Fidanza) in at least FIG. 41 para 0010, para 0089, para 0179); applying a first version of a machine learning model to one or more of the structured data snapshots to produce a first prediction score ((Fidanza) in at least Abstract; para 0008-0009, para 0016-0017, para 0079-0080, para 0101 wherein the prior art teaches user profile generation, para 0104, para 0169, para 0179); applying a … version of the machine learning model, trained after the first version, to the same structured data snapshot(s) to produce a second prediction score ((Fidanza) in at least para 0009, para 0016 wherein the prior art teaches inputting/outputting data to the ML model; para 0098 wherein the prior art teaches schema changes as data is updated and dozens of data input can be created; para 0100 wherein the prior art teaches second phase with acquisition of additional data, para 0143-0144, para 0146 wherein the prior art teaches computing each prediction separately the input data and learning method select the type by weight and voting, para 0152, para 0192-0193);… assembling, for the structured data snapshot, a feature vector that includes the first prediction score and the second prediction score as distinct model-specific attributes, …((Fidanza) in at least FIG. 
41; para 0009 wherein the prior art teaches inputting consumer loan and transaction vectors; para 0016, para 0088-0089, para 0175-0180 wherein the prior art teaches creating input vectors from original input data, para 0182, para 0185, para 0187, para 0194, para 0195, para 0209, para 00215-00216); and providing the feature vector together with other derived attributes from the structured data snapshot as input to a second machine learning model configured to generate a decision output based on the feature vector as a set of machine-learning input features for the entity, wherein the method is performed by at least one device including a hardware processor ((Fidanza) in at least para 0009 wherein the prior art teaches inputting consumer loan and transaction vectors; para 0016, para 0088-0089, para 0175-0180 wherein the prior art teaches creating input vectors from original input data, para 0182, para 0185-0186, para 0192-0195, para 0204, para 0209, para 00215-00216). Fidanza does not explicitly teach: applying a second version of the machine learning model, trained after the first version… storing the first prediction score and the second prediction score as separate derived attributes within the corresponding structured data snapshot …wherein the first prediction score and the second prediction score correspond to different trained versions of the same machine learning model. Chaloulos teaches: applying a second version of the machine learning model, trained after the first version… ((Chaloulos) in at least para 0047, Claim 1); assembling, for the structured data … a feature vector that includes the first prediction score and the second prediction score as distinct model-specific attributes, wherein the first prediction score and the second prediction score correspond to different trained versions of the same machine learning model ((Chaloulos) in at least Fig.
6; para 0028-0029, para 0039, para 0047, para 0049, para 0053, para 0055, para 0060-0068, para 0070-0072, para 0111-0115, para 0136; claim 1). Both Fidanza and Chaloulos are directed toward calculating financial risk prediction scores using machine learning algorithms. Chaloulos teaches the motivation of generating a first and second version of the learning model in order to reduce discriminatory bias in machine learning analysis, improving the fairness of the analysis by calculating a first and second score with the respective first and second model by applying a selected list of hyper-parameters and parameters, where at least one aspect of the model is controlled by adjusting the hyper-parameter and parameter values of the list. Chaloulos further teaches the motivation of applying vectors as representations of points in space so that the machine model may understand separate categories divided by a gap, with data mapped into the same space predicted to belong to a category based on which side of the gap it falls. It would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the training of ML models that are updated using output results of inputs of previous analysis of Fidanza to include a second model and vectoring of data as taught by Chaloulos, since Chaloulos teaches the motivation of generating a first and second version of the learning model in order to reduce discriminatory bias in machine learning analysis, improving the fairness of the analysis by calculating a first and second score with the respective first and second model by applying a selected list of hyper-parameters and parameters, where at least one aspect of the model is controlled by adjusting the hyper-parameter and parameter values of the list.
Chaloulos further teaches the motivation of applying vectors as representations of points in space so that the machine model may understand separate categories divided by a gap, with data mapped into the same space predicted to belong to a category based on which side of the gap it falls. Asher teaches: applying a second version of the machine learning model, trained after the first version… ((Asher) in at least para 0017, para 0037-0039, para 0049-0050, para 0053-0054); storing the first prediction score and the second prediction score as separate derived attributes within the corresponding structured data snapshot… ((Asher) in at least para 0017, para 0039, para 0050, para 0055 wherein the prior art teaches a daily snapshot may be executed for all prediction objects, para 0069, para 0075, para 0079); assembling, for the structured data snapshot a feature … that includes the first prediction score and the second prediction score as distinct model-specific attributes, wherein the first prediction score and the second prediction score correspond to different trained versions of the same machine learning model ((Asher) in at least para 0017-0018, para 0037-0039, para 0050-0051, para 0055-0056, para 0058, para 0069, para 0075, para 0077). Both Fidanza and Asher are directed toward calculating financial risk prediction scores using machine learning algorithms. Asher teaches the motivation of applying two or more machine learning models in the analysis of data and the resulting calculated score in order to determine inaccuracies in the model so that the best model can be selected from several candidate models. Asher further teaches the motivation of storing the scores for the prediction field in order to aid in a determination of the viability of the model, so that it may be determined whether the model contains the requisite information to generate accurate scores.
It would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the training of ML models that are updated using output results of inputs of previous analysis of Fidanza to include a second model and the storing of scores determined by the plurality of models as taught by Asher, since Asher teaches the motivation of applying two or more machine learning models in the analysis of data and the resulting calculated score in order to determine inaccuracies in the model so that the best model can be selected from several candidate models. Asher further teaches the motivation of storing the scores for the prediction field in order to aid in a determination of the viability of the model, so that it may be determined whether the model contains the requisite information to generate accurate scores. In reference to Claim 27: The combination of Fidanza, Chaloulos and Asher discloses the limitations of independent claim 26. Fidanza further discloses the limitations of dependent claim 27. (Previously Presented) The method of claim 26 (see rejection of claim 26 above), further comprising associating each prediction score with a corresponding model version identifier, and storing the version identifiers as metadata with the structured data snapshots. ((Fidanza) in at least para 0007, para 0013, para 0076, para 0079, para 0084, para 0200, para 0217) In reference to Claim 31: The combination of Fidanza, Chaloulos and Asher discloses the limitations of independent claim 26.
Fidanza further discloses the limitations of dependent claim 31. (Previously Presented) The method of claim 26 (see rejection of claim 26 above), wherein the time-aligned sequence of structured data snapshots is generated by applying a preprocessing pipeline that normalizes, aggregates, or transforms raw input data from multiple sources. ((Fidanza) in at least para 0075, para 0140, para 0182) In reference to Claim 33: The combination of Fidanza, Chaloulos and Asher discloses the limitations of independent claim 26. Fidanza further discloses the limitations of dependent claim 33. (New) The method of claim 26 (see rejection of claim 26 above), wherein the structured data snapshots are generated using raw input data obtained from a plurality of external data sources, and wherein the raw input data is normalized or aggregated prior to generation of the derived attributes ((Fidanza) in at least para 0104 wherein the prior art teaches initial data is structured into attribute strings; para 0140 wherein the prior art teaches raw data acquired from a plurality of external sources and then processed, para 0182 wherein the prior art teaches raw data inputs compressed into a suitable vector feature; para 00213, para 00227). In reference to Claim 34: The combination of Fidanza, Chaloulos and Asher discloses the limitations of independent claim 26. Fidanza further discloses the limitations of dependent claim 34. (New) The method of claim 26 (see rejection of claim 26 above), wherein the second machine learning model generates the decision output based on a combination of the feature vector and one or more additional derived attributes representing historical values across multiple structured data snapshots.
((Fidanza) in at least para 0009 wherein the prior art teaches inputting/outputting data comprising a vector for input and outputting a probability; para 0016, para 0088, para 0104 wherein the prior art teaches initial data is structured into attribute strings; para 0171, para 0175-0176, para 0178-0179, para 0182-0185, para 0217) Claim(s) 30 is/are rejected under 35 U.S.C. 103 as being unpatentable over WO 2018/226337/CA 3065807 by Fidanza et al. (Fidanza), in view of US Pub No. 2020/0320428 A1 by Chaloulos et al. (Chaloulos), in view of US Pub No. 2019/0138946 A1 by Asher et al. (Asher), as applied to claim 26, and further in view of US Pub No. 2021/0201208 A1 by Bhole et al. (Bhole). In reference to Claim 30: The combination of Fidanza, Chaloulos and Asher discloses the limitations of independent claim 26. (Previously Presented) The method of claim 26 (see rejection of claim 26 above). Fidanza does not explicitly teach: wherein the second machine learning model is trained using feedback collected after execution of actions based on prior outputs, and wherein the training incorporates a delay window to account for outcome latency. Bhole teaches: wherein the second machine learning model is trained using feedback collected after execution of actions based on prior outputs, and wherein the training incorporates a delay window to account for outcome latency. ((Bhole) in at least para 0035-0037, para 0040, para 0044 wherein the prior art teaches delaying training of the model until additional training data has met a threshold number of target outputs) Both Fidanza and Bhole apply time-series data for behavior analysis to ML models for calculation, where data is received and changes over time.
Bhole teaches the motivation of including customer recommendations, feeds, and messages in the data received and applied in the analysis, in order to allow analysis of content to present to users and to predict content items that users are likely to consume and respond to with activity, and further teaches the motivation of delaying training of the model until sufficient additional training data has met a threshold number of target outputs. It would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the training data applied to the ML models for predicting human behavior of Fidanza to include feedback from customers and a delay of the model analysis as taught by Bhole, since Bhole teaches the motivation of including customer recommendations, feeds, and messages in the analyzed data in order to allow analysis of content to present to users and to predict content items that users are likely to consume and respond to with activity, and teaches delaying training of the model until additional training data has met a threshold number of target outputs. Claim(s) 32 is/are rejected under 35 U.S.C. 103 as being unpatentable over WO 2018/226337/CA 3065807 by Fidanza et al. (Fidanza), in view of US Pub No. 2020/0320428 A1 by Chaloulos et al. (Chaloulos), in view of US Pub No. 2019/0138946 A1 by Asher et al. (Asher), as applied to claim 26, and further in view of US Pub No. 2017/0323216 A1 by Fano (Fano). In reference to Claim 32: The combination of Fidanza, Chaloulos and Asher discloses the limitations of independent claim 26. (New) The method of claim 26 (see rejection of claim 26 above). Fidanza does not explicitly teach: initiating retraining of the second machine learning model based on detected changes in one or more statistical distributions of the derived attributes across a plurality of structured data snapshots.
Fano teaches: initiating retraining of the second machine learning model based on detected changes in one or more statistical distributions of the derived attributes across a plurality of structured data snapshots. ((Fano) in at least Abstract wherein the prior art teaches retraining one or more predictive models; para 0004 wherein the prior art teaches retraining rules due to updated training data after determining the measure of the impact of the change in the data; para 0025-0026 wherein the prior art teaches data received on an ongoing basis over time and monitoring characteristic metrics of the data, including central tendency, average value, mean, median, or an aggregated value from multiple data units, where if the average value of the data element changes, a predictive model that relies on the data may no longer be reliable and may benefit from retraining; para 0028, para 0035-0037 wherein the prior art teaches receiving data used to train the current version of the model, where the retraining component relates to operations that occurred since the rule for retraining was triggered and where the performance of the retrained model differs from that of the current version of the model; para 0043 wherein the prior art teaches applying a threshold related to the change in data based on particular values, a range of values, or other expressions of the extent of change.) Both Fidanza and Fano apply time-series data for behavior analysis to ML models for calculation, where data is received and updated over time. Fano teaches the motivation of retraining versions of one or more models due to updated training data having a significant statistical change in values (central tendency, average value, mean, median, or an aggregated value from multiple data units), where, if the average value of a data element changes, a predictive model that relies on the data may no longer be reliable and may benefit from retraining.
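The drift-triggered retraining Fano describes (monitoring a central-tendency metric across data received over time and retraining when it shifts beyond a threshold) can be sketched minimally as follows. The function name, the relative-change rule, and the threshold value are illustrative assumptions, not drawn from Fano.

```python
from statistics import mean

def should_retrain(baseline_values, current_values, threshold=0.10):
    # Trigger retraining when the mean of a monitored attribute shifts
    # by more than `threshold` (relative) between snapshot windows.
    base = mean(baseline_values)
    cur = mean(current_values)
    if base == 0:
        return cur != 0
    return abs(cur - base) / abs(base) > threshold

# Attribute values drawn from older vs. newer structured data snapshots
# (toy numbers): the large mean shift would trigger retraining.
old = [0.30, 0.32, 0.31, 0.29]
new = [0.45, 0.47, 0.44, 0.46]
retrain = should_retrain(old, new)
```

This kind of threshold test on a distribution statistic is the conventional mechanism the rejection characterizes as a well-understood use of technology.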
It would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the training data applied to the ML models for predicting human behavior of Fidanza to retrain one or more current model versions based on statistical data changes as taught by Fano, since Fano teaches the motivation of retraining versions of one or more models due to updated training data having a significant statistical change in values (central tendency, average value, mean, median, or an aggregated value from multiple data units), where, if the average value of a data element changes, a predictive model that relies on the data may no longer be reliable and may benefit from retraining. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US Pub. No. 2018/0357559 A1 by Truong et al; US Pub No. 2019/0295088 A1 by Jia et al; US Pub No. 2019/0392295 A1 by OI; US Pub. No. 2020/0074401 A1 by Oliveira et al; US Patent No. 10,445,152 B1 by Zhang et al; US Pub No. 2020/0126126 A1 by Briancon et al; US Pub No. 2020/0285737 A1 by Kraus et al; US Pub No. 2020/0074294 A1 by Long et al; US Pub No. 2018/0060205 A1 by Raj; CA 2821103 A1 by Padullaparthi; WO 2018111270 A1 by Garg; WO 2018224669 A1 by Servajean. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARY M GREGG, whose telephone number is (571) 270-5050. The examiner can normally be reached M-F, 9am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Christine Behncke, can be reached at 571-272-8103. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MARY M GREGG/
Examiner, Art Unit 3695

/CHRISTINE M TRAN/
Supervisory Patent Examiner, Art Unit 3695

Prosecution Timeline

Dec 27, 2019
Application Filed
Jul 31, 2021
Non-Final Rejection — §101, §103, §112
Nov 04, 2021
Response Filed
Jan 08, 2022
Final Rejection — §101, §103, §112
Mar 24, 2022
Response after Non-Final Action
Apr 25, 2022
Response after Non-Final Action
May 05, 2022
Request for Continued Examination
May 10, 2022
Response after Non-Final Action
Jun 19, 2022
Non-Final Rejection — §101, §103, §112
Sep 26, 2022
Response Filed
Nov 09, 2022
Final Rejection — §101, §103, §112
Feb 21, 2023
Response after Non-Final Action
Mar 21, 2023
Notice of Allowance
May 18, 2023
Response after Non-Final Action
May 31, 2023
Response after Non-Final Action
Jul 19, 2023
Response after Non-Final Action
Sep 25, 2023
Response after Non-Final Action
Sep 26, 2023
Response after Non-Final Action
Sep 27, 2023
Response after Non-Final Action
Sep 27, 2023
Response after Non-Final Action
Apr 28, 2025
Response after Non-Final Action
Jun 24, 2025
Request for Continued Examination
Jun 30, 2025
Response after Non-Final Action
Sep 21, 2025
Non-Final Rejection — §101, §103, §112
Dec 16, 2025
Response Filed
Feb 28, 2026
Final Rejection — §101, §103, §112
Apr 15, 2026
Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12450653
FIRM TRADE PROCESSING SYSTEM AND METHOD
2y 5m to grant · Granted Oct 21, 2025
Patent 12443991
MINIMIZATION OF THE CONSUMPTION OF DATA PROCESSING RESOURCES IN AN ELECTRONIC TRANSACTION PROCESSING SYSTEM VIA SELECTIVE PREMATURE SETTLEMENT OF PRODUCTS TRANSACTED THEREBY BASED ON A SERIES OF RELATED PRODUCTS
2y 5m to grant · Granted Oct 14, 2025
Patent 12217312
System and Method for Indicating Whether a Vehicle Crash Has Occurred
2y 5m to grant · Granted Feb 04, 2025
Patent 11900469
Point-of-Service Tool for Entering Claim Information
2y 5m to grant · Granted Feb 13, 2024
Patent 11861715
System and Method for Indicating Whether a Vehicle Crash Has Occurred
2y 5m to grant · Granted Jan 02, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

7-8
Expected OA Rounds
14%
Grant Probability
28%
With Interview (+14.3%)
5y 3m
Median Time to Grant
High
PTA Risk
Based on 629 resolved cases by this examiner. Grant probability derived from career allow rate.
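The headline figures above are easy to reproduce from the stated career data: the 14% grant probability is simply 89 grants out of 629 resolved cases, and the 28% with-interview figure is consistent with adding the +14.3 point interview lift directly. A quick check (the additive-lift reading is our assumption, not the tool's documented method):

```python
# Reproduce the projection figures from the examiner's career data shown above.
granted, resolved = 89, 629

allow_rate_pct = granted / resolved * 100   # career allow rate as a percentage
print(round(allow_rate_pct))                # 14  -> "14% Grant Probability"

# Assuming the +14.3 point interview lift is simply additive:
interview_lift = 14.3
print(round(allow_rate_pct + interview_lift))  # 28 -> "28% With Interview"
```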
