Prosecution Insights
Last updated: April 19, 2026
Application No. 17/699,222

COGNITIVE ADVISORY AGENT

Non-Final OA: §101 and §103

Filed: Mar 21, 2022
Examiner: GIROUX, GEORGE
Art Unit: 2128
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 3 (Non-Final)

Grant Probability: 66% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 4y 6m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 66% (above average; +10.5% vs Tech Center average), 401 granted of 612 resolved
Interview Lift: +27.1% (strong), comparing resolved cases with an interview to those without
Typical Timeline: 4y 6m average prosecution; 28 applications currently pending
Career History: 640 total applications across all art units

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§103: 45.5% (+5.5% vs TC avg)
§102: 16.0% (-24.0% vs TC avg)
§112: 15.5% (-24.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 612 resolved cases.

Office Action

Grounds of rejection: §101 and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This Office Action is in response to applicant's communication filed 9 December 2025, in response to the Office Action mailed 12 September 2025. The applicant's remarks and any amendments to the claims or specification have been considered, with the results that follow.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 22 October 2025 has been entered.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite mental processes and/or mathematical concepts. This judicial exception is not integrated into a practical application, and the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, as described below.

Step 1 for all claims: Under the first part of the analysis, claims 1-7 recite a method, claims 8-14 recite a manufacture, and claims 15-20 recite a device. Accordingly, these claims fall within the four statutory categories of invention, and the analysis proceeds to Step 2A, prongs 1 and 2, and Step 2B, as described below.

As per claim 1: Under step 2A, prong 1, the claim recites an abstract idea including the following mental process and/or mathematical concept elements:

analyzing one or more external datasets to identify a set of similar features – a data scientist analyzes external datasets to identify sets of similar features. Alternatively/additionally – comparing feature values of different datasets to determine similarity is a mathematical calculation.

and assessing performance of the updated machine learning model – the data scientist assesses the performance of the updated machine learning model (speed, accuracy, error, etc.). Alternatively/additionally – assessing performance of the updated machine learning model includes calculating an assessment value (e.g., via a loss/cost/error function), which is a mathematical calculation/formula.

responsive to training the updated machine learning model, comparing a performance of the updated machine learning model to a previous performance that occurred prior to the training – comparing performance metric values is a mathematical calculation. Alternatively/additionally – the data scientist compares the performance of the updated machine learning model with the previous performance (of the model before training).
identifying a set of actions, wherein the set of actions are utilized to determine whether to optimize the modified dataset by merging or modifying the existing features based on the comparison of performances and the monitoring – the data scientist determines whether to merge or modify the existing features and identifies a set of actions to take, based upon the comparison.

If a claim, under the broadest reasonable interpretation, covers a mathematical relationship between variables or numbers, a numerical formula or equation, or a mathematical calculation, it will be considered as falling within the "mathematical concepts" grouping of abstract ideas. If a claim, under the broadest reasonable interpretation, covers concepts that can be performed in the human mind, or by a human using a pen and paper, including observation, evaluation, judgment, or opinion, it will be considered as falling within the "mental processes" grouping of abstract ideas. Additionally, performing mathematical calculations using a formula that could be practically performed in the human mind may be considered to fall within both the mathematical concepts grouping and the mental processes grouping. See MPEP § 2106.04(a)(2). Accordingly, at step 2A, prong one, the claim is directed to an abstract idea.

Under step 2A, prong two, the judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of:

a computer implemented method – this amounts to mere instructions to apply the exception using a generic computer component, recited at a high level of generality. See MPEP § 2106.05(f).

comprising: receiving a dataset for use with respect to a current machine learning model – this is recited at a high level of generality and amounts to insignificant extra-solution activity as data gathering/storage. See MPEP § 2106.05(g).

wherein the dataset comprises one or more features – this amounts to generally linking the use of the judicial exception to a particular technological environment or field of use by limiting it to a particular data source or type. See MPEP § 2106.05(h) and Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data).

appending the set of similar features to the received dataset to generate an updated dataset – this is recited at a high level of generality and amounts to insignificant extra-solution activity as data gathering/storage that is limited to a particular type of data, generally linking the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(g) and (h), and Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data).

generating an updated dataset by modifying a received dataset to reflect added feature and updating additional instances of the received dataset to reflect the appended set of similar features – this is recited at a high level of generality and amounts to insignificant extra-solution activity as data gathering/storage that is limited to a particular type of data, generally linking the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(g) and (h), and Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data).
applying the updated dataset to the current machine learning model to generate an updated machine learning model – this amounts to generally linking the use of the judicial exception to a particular technological environment or field of use by limiting it to a particular data source or type. See MPEP § 2106.05(h) and Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data).

wherein assessing performance of the updated machine learning model comprises: training the updated machine learning model according to the modified dataset – this amounts to generally linking the use of the judicial exception to a particular technological environment or field of use by limiting it to a particular data source or type. See MPEP § 2106.05(h) and Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data).

monitoring one or more performance metrics of interest associated with the updated machine learning model – this is recited at a high level of generality and amounts to insignificant extra-solution activity as data gathering/storage that is limited to a particular type of data, generally linking the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(g) and (h), and Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data).

outputting, to a user interface, recommendations for additional actions to improve the performance – this is recited at a high level of generality and amounts to insignificant extra-solution activity as insignificant application (display) of the abstract idea that is limited to a particular type of data, generally linking the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(g) and (h), and Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data).

Accordingly, at step 2A, prong two, these additional elements do not integrate the abstract idea into a practical application for the claim as a whole, because they do not impose any meaningful limits on practicing the abstract idea. See MPEP § 2106.04(d).

Under step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the claim recites the additional elements of:

a computer implemented method – this amounts to mere instructions to apply the exception using a generic computer component, recited at a high level of generality. See MPEP § 2106.05(f).

comprising: receiving a dataset for use with respect to a current machine learning model – this is recited at a high level of generality and amounts to insignificant extra-solution activity as data gathering/storage. The courts have found limitations directed to obtaining and storing information electronically, recited at a high level of generality, to be well-understood, routine, and conventional. See MPEP § 2106.05(d)(II) "receiving or transmitting data over a network," "electronic record keeping," and "storing and retrieving information in memory."

wherein the dataset comprises one or more features – this amounts to generally linking the use of the judicial exception to a particular technological environment or field of use by limiting it to a particular data source or type.
See MPEP § 2106.05(h) and Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data).

appending the set of similar features to the received dataset to generate an updated dataset – this amounts to generally linking the use of the judicial exception to a particular technological environment or field of use by limiting it to a particular data source or type. See MPEP § 2106.05(h) and Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data). The courts have also found limitations directed to obtaining and storing information electronically, recited at a high level of generality, to be well-understood, routine, and conventional. See MPEP § 2106.05(d)(II) "receiving or transmitting data over a network," "electronic record keeping," and "storing and retrieving information in memory."

generating an updated dataset by modifying a received dataset to reflect added feature and updating additional instances of the received dataset to reflect the appended set of similar features – this amounts to generally linking the use of the judicial exception to a particular technological environment or field of use by limiting it to a particular data source or type. See MPEP § 2106.05(h) and Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data). The courts have also found limitations directed to obtaining and storing information electronically, recited at a high level of generality, to be well-understood, routine, and conventional. See MPEP § 2106.05(d)(II) "receiving or transmitting data over a network," "electronic record keeping," and "storing and retrieving information in memory."

applying the updated dataset to the current machine learning model to generate an updated machine learning model – this amounts to generally linking the use of the judicial exception to a particular technological environment or field of use by limiting it to a particular data source or type. See MPEP § 2106.05(h) and Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data).

wherein assessing performance of the updated machine learning model comprises: training the updated machine learning model according to the modified dataset – this amounts to generally linking the use of the judicial exception to a particular technological environment or field of use by limiting it to a particular data source or type. See MPEP § 2106.05(h) and Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data).

monitoring one or more performance metrics of interest associated with the updated machine learning model – this amounts to generally linking the use of the judicial exception to a particular technological environment or field of use by limiting it to a particular data source or type. See MPEP § 2106.05(h) and Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data). The courts have also found limitations directed to obtaining and storing information electronically, recited at a high level of generality, to be well-understood, routine, and conventional.
See MPEP § 2106.05(d)(II) "receiving or transmitting data over a network," "electronic record keeping," and "storing and retrieving information in memory."

outputting, to a user interface, recommendations for additional actions to improve the performance – this amounts to generally linking the use of the judicial exception to a particular technological environment or field of use by limiting it to a particular data source or type. See MPEP § 2106.05(h) and Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data). The courts have also found limitations directed to obtaining, storing, and displaying information electronically, recited at a high level of generality, to be well-understood, routine, and conventional. See MPEP § 2106.05(d)(II) "receiving or transmitting data over a network," "electronic record keeping," and "presenting offers and gathering statistics."

Accordingly, at step 2B, these additional elements, both individually and in combination, do not amount to significantly more than the judicial exception. See MPEP § 2106.05. Therefore, the claim is not eligible subject matter under 35 U.S.C. 101.

As per claim 2: The claim recites the following additional mental process and/or mathematical concept elements:

recommending one or more actions based on the performance assessment of the updated machine learning model – the data scientist recommends one or more actions based on the performance assessment of the updated machine learning model (e.g., perform a recommended action provided by the model, retrain the model, create a new model, etc.).

Accordingly, at step 2A, prong one, the claim is directed to an abstract idea. The claim does not include any additional elements, under step 2A prong two, or step 2B, except those listed above in prior claim(s). Accordingly, at step 2A, prong two, the claim as a whole does not integrate the judicial exception into a practical application. See MPEP § 2106.04(d). Furthermore, at step 2B, the claim elements both individually and in combination do not amount to significantly more than the judicial exception. See MPEP § 2106.05. Therefore, the claim is not eligible subject matter under 35 U.S.C. 101.

As per claim 3: The claim recites the following additional mental process and/or mathematical concept elements:

wherein analyzing one or more external datasets to identify a set of similar features includes: converting the one or more features into numerical feature vectors – converting features into a numerical feature vector is a mathematical function. Alternatively/additionally – the data scientist could create the numerical feature vector from the one or more features.

identifying a set of similar features in the one or more external datasets – the data scientist analyzes external datasets to identify sets of similar features. Alternatively/additionally – comparing feature values of different datasets to determine similarity is a mathematical calculation.

using word embedding on the set of similar features – word embedding is a mathematical function. Alternatively/additionally – the data scientist performs the word embedding.

and identifying a vectoral distance between the one or more features and the set of similar features – identifying the vectoral distance between features is a mathematical function. Alternatively/additionally – the data scientist calculates the vectoral distance.

Accordingly, at step 2A, prong one, the claim is directed to an abstract idea. The claim does not include any additional elements, under step 2A prong two, or step 2B, except those listed above in prior claim(s). Accordingly, at step 2A, prong two, the claim as a whole does not integrate the judicial exception into a practical application. See MPEP § 2106.04(d). Furthermore, at step 2B, the claim elements both individually and in combination do not amount to significantly more than the judicial exception. See MPEP § 2106.05. Therefore, the claim is not eligible subject matter under 35 U.S.C. 101.
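To make the claim 3 limitations concrete: converting feature names to numerical vectors via a word embedding and measuring a vectoral distance between them takes only a few lines of Python. This is a minimal sketch of the technique the claim recites, not code from the application; the embedding table stands in for a trained model such as word2vec, and the feature names and values are invented.

```python
import numpy as np

# Toy embedding table standing in for a trained word-embedding model
# (e.g., word2vec); feature names and vector values are invented.
embed = {
    "salary":      np.array([0.90, 0.10, 0.00]),
    "income":      np.array([0.85, 0.15, 0.05]),
    "zipcode":     np.array([0.00, 0.10, 0.95]),
    "postal_code": np.array([0.05, 0.10, 0.90]),
}

def vectoral_distance(a: str, b: str) -> float:
    """Euclidean distance between two embedded feature names."""
    return float(np.linalg.norm(embed[a] - embed[b]))

# Match each feature of the received dataset to its nearest external feature.
external = ["income", "postal_code"]
for feature in ["salary", "zipcode"]:
    best = min(external, key=lambda c: vectoral_distance(feature, c))
    print(feature, "->", best, round(vectoral_distance(feature, best), 3))
```

Whether carried out by a library or, as the rejection posits, by a data scientist with pen and paper, the arithmetic is the same; that equivalence is the crux of the mental-process characterization.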
As per claim 4: Under step 2A, prong two, the judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of:

wherein the current machine learning model includes a reinforcement learning model – this amounts to generally linking the use of the judicial exception to a particular technological environment or field of use by limiting it to a particular data source or type. See MPEP § 2106.05(h) and Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data).

Accordingly, at step 2A, prong two, these additional elements do not integrate the abstract idea into a practical application for the claim as a whole, because they do not impose any meaningful limits on practicing the abstract idea. See MPEP § 2106.04(d).

Under step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the claim recites the additional elements of:

wherein the current machine learning model includes a reinforcement learning model – this amounts to generally linking the use of the judicial exception to a particular technological environment or field of use by limiting it to a particular data source or type. See MPEP § 2106.05(h) and Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data).

Accordingly, at step 2B, these additional elements, both individually and in combination, do not amount to significantly more than the judicial exception. See MPEP § 2106.05. Therefore, the claim is not eligible subject matter under 35 U.S.C. 101.

As per claim 5: The claim recites the following additional mental process and/or mathematical concept elements:

wherein analyzing one or more external datasets to identify a set of similar features includes using a bag of words technique to find similar features – the bag of words technique is a mathematical function to convert words/counts into a vector/numerical values. Alternatively/additionally – the data scientist could perform the bag of words technique.

Accordingly, at step 2A, prong one, the claim is directed to an abstract idea. The claim does not include any additional elements, under step 2A prong two, or step 2B, except those listed above in prior claim(s). Accordingly, at step 2A, prong two, the claim as a whole does not integrate the judicial exception into a practical application. See MPEP § 2106.04(d). Furthermore, at step 2B, the claim elements both individually and in combination do not amount to significantly more than the judicial exception. See MPEP § 2106.05. Therefore, the claim is not eligible subject matter under 35 U.S.C. 101.
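Claim 5's bag-of-words technique is likewise compact in practice. Below is a hedged sketch using scikit-learn's CountVectorizer (our choice of library; neither the claim nor the application specifies one), with invented feature descriptions:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

descriptions = [
    "employee annual salary amount",  # feature in the received dataset
    "employee yearly income amount",  # candidate external feature
    "home address postal code",       # unrelated external feature
]
bow = CountVectorizer().fit_transform(descriptions)  # token-count vectors
print(cosine_similarity(bow[0], bow[1])[0, 0])  # 0.5: two of four tokens shared
print(cosine_similarity(bow[0], bow[2])[0, 0])  # 0.0: no tokens shared
```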
As per claim 6: The claim recites the following additional mental process and/or mathematical concept elements:

using Pearson correlation to create a correlation between the one or more features and the set of similar features indicating a level of similarity – the Pearson correlation is a mathematical formula which creates the level of similarity value via mathematical calculations. Alternatively/additionally – the data scientist could determine the Pearson correlation value(s).

Accordingly, at step 2A, prong one, the claim is directed to an abstract idea. The claim does not include any additional elements, under step 2A prong two, or step 2B, except those listed above in prior claim(s). Accordingly, at step 2A, prong two, the claim as a whole does not integrate the judicial exception into a practical application. See MPEP § 2106.04(d). Furthermore, at step 2B, the claim elements both individually and in combination do not amount to significantly more than the judicial exception. See MPEP § 2106.05. Therefore, the claim is not eligible subject matter under 35 U.S.C. 101.
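For claim 6, the Pearson correlation the rejection characterizes as a mathematical formula reduces to a single numpy call; the feature values below are invented for illustration:

```python
import numpy as np

# Invented values for a received feature and a candidate similar feature.
salary = np.array([50_000, 62_000, 58_000, 71_000, 45_000])
income = np.array([51_500, 60_800, 59_200, 70_100, 46_300])

r = np.corrcoef(salary, income)[0, 1]  # Pearson correlation coefficient
print(f"Pearson r = {r:.3f}")  # near 1.0 indicates a high level of similarity
```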
As per claim 7: The claim recites the following additional mental process and/or mathematical concept elements:

categorizing the features of the dataset into categorical features and unstructured text features, wherein categorical features are features with corresponding identifying metadata, and unstructured text features are features which lack such metadata – the data scientist looks at the features of the dataset and categorizes them into categorical features and unstructured text features, based on whether the features correspond to metadata or unstructured text without metadata.

Accordingly, at step 2A, prong one, the claim is directed to an abstract idea. The claim does not include any additional elements, under step 2A prong two, or step 2B, except those listed above in prior claim(s). Accordingly, at step 2A, prong two, the claim as a whole does not integrate the judicial exception into a practical application. See MPEP § 2106.04(d). Furthermore, at step 2B, the claim elements both individually and in combination do not amount to significantly more than the judicial exception. See MPEP § 2106.05. Therefore, the claim is not eligible subject matter under 35 U.S.C. 101.

As per claim 8: See the rejection of claim 1 above wherein, under step 2A, prong 1, the claim also includes the following mental process and/or mathematical concept elements:

features whose similarity falls below a threshold – this is a mathematical calculation (comparison). Alternatively/additionally – the data scientist can compare the similarity values to a threshold to determine which features should be added.

Accordingly, at step 2A, prong one, the claim is directed to an abstract idea.

Under step 2A, prong two, the judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of:

a computer program product comprising: one or more computer readable storage media – this amounts to mere instructions to apply the exception using a generic computer component, recited at a high level of generality. See MPEP § 2106.05(f).

and program instructions stored on the one or more computer readable storage media, the program instructions comprising instructions to [perform the method] – this amounts to generally linking the use of the judicial exception to a particular technological environment or field of use by limiting it to a particular data source or type. See MPEP § 2106.05(h) and Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data).

adding features whose similarity falls below a threshold amount to the received dataset – this is recited at a high level of generality and amounts to insignificant extra-solution activity as data gathering/storage that is limited to a particular type of data, generally linking the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(g) and (h), and Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data).

Accordingly, at step 2A, prong two, these additional elements do not integrate the abstract idea into a practical application for the claim as a whole, because they do not impose any meaningful limits on practicing the abstract idea. See MPEP § 2106.04(d).

Under step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the claim recites the additional elements of:

a computer program product comprising: one or more computer readable storage media – this amounts to mere instructions to apply the exception using a generic computer component, recited at a high level of generality. See MPEP § 2106.05(f).

and program instructions stored on the one or more computer readable storage media, the program instructions comprising instructions to [perform the method] – this amounts to generally linking the use of the judicial exception to a particular technological environment or field of use by limiting it to a particular data source or type. See MPEP § 2106.05(h) and Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data).

adding features whose similarity falls below a threshold amount to the received dataset – this amounts to generally linking the use of the judicial exception to a particular technological environment or field of use by limiting it to a particular data source or type. See MPEP § 2106.05(h) and Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data). The courts have also found limitations directed to obtaining and storing information electronically, recited at a high level of generality, to be well-understood, routine, and conventional. See MPEP § 2106.05(d)(II) "receiving or transmitting data over a network," "electronic record keeping," and "storing and retrieving information in memory."

Accordingly, at step 2B, these additional elements, both individually and in combination, do not amount to significantly more than the judicial exception. See MPEP § 2106.05. Therefore, the claim is not eligible subject matter under 35 U.S.C. 101.

As per claim 9, see the rejection of claim 2, above. As per claim 10, see the rejection of claim 3, above. As per claim 11, see the rejection of claim 4, above. As per claim 12, see the rejection of claim 5, above. As per claim 13, see the rejection of claim 6, above. As per claim 14, see the rejection of claim 7, above.
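Claim 8's distinguishing step, "adding features whose similarity falls below a threshold amount," can be sketched as follows. We assume the natural reading that near-duplicate candidates are skipped and only sufficiently dissimilar features are appended; the names, vectors, and the 0.8 threshold are all invented for illustration:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

existing = {"salary": np.array([1.0, 0.0])}
candidates = {
    "income": np.array([0.95, 0.05]),  # near-duplicate of salary: skipped
    "tenure": np.array([0.10, 0.90]),  # dissimilar: appended
}
THRESHOLD = 0.8  # illustrative value; the claim recites no specific number

added = [name for name, vec in candidates.items()
         if all(cosine(vec, e) < THRESHOLD for e in existing.values())]
print(added)  # ['tenure']
```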
As per claim 15: See the rejection of claim 8 above wherein, under step 2A, prong two, the judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of:

a computer system comprising: one or more processors – this amounts to mere instructions to apply the exception using a generic computer component, recited at a high level of generality. See MPEP § 2106.05(f).

one or more computer-readable storage media – this amounts to mere instructions to apply the exception using a generic computer component, recited at a high level of generality. See MPEP § 2106.05(f).

program instructions stored on the computer-readable storage media for execution by at least one of the one or more processors, the program instructions comprising instructions to [perform the method] – this amounts to generally linking the use of the judicial exception to a particular technological environment or field of use by limiting it to a particular data source or type. See MPEP § 2106.05(h) and Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data).

Accordingly, at step 2A, prong two, these additional elements do not integrate the abstract idea into a practical application for the claim as a whole, because they do not impose any meaningful limits on practicing the abstract idea. See MPEP § 2106.04(d).

Under step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the claim recites the additional elements of:

a computer system comprising: one or more processors – this amounts to mere instructions to apply the exception using a generic computer component, recited at a high level of generality. See MPEP § 2106.05(f).

one or more computer-readable storage media – this amounts to mere instructions to apply the exception using a generic computer component, recited at a high level of generality. See MPEP § 2106.05(f).

program instructions stored on the computer-readable storage media for execution by at least one of the one or more processors, the program instructions comprising instructions to [perform the method] – this amounts to generally linking the use of the judicial exception to a particular technological environment or field of use by limiting it to a particular data source or type. See MPEP § 2106.05(h) and Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data).

Accordingly, at step 2B, these additional elements, both individually and in combination, do not amount to significantly more than the judicial exception. See MPEP § 2106.05. Therefore, the claim is not eligible subject matter under 35 U.S.C. 101.

As per claim 16, see the rejection of claim 2, above. As per claim 17, see the rejection of claim 3, above. As per claim 18, see the rejection of claim 4, above. As per claim 19, see the rejection of claim 5, above. As per claim 20, see the rejection of claim 6, above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors.
In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 5, 7-10, 12, 14-17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Korycki (US 2017/0068906), in view of Sokolov (US 10,924,513), and further in view of Grady (US 2023/0045347).

As per claim 1, Korycki teaches a computer implemented method comprising:

receiving a dataset for use with respect to a current machine learning model, wherein the dataset comprises one or more features [a training data set is collected that comprises a record of past communications and feature vector(s) of the communications (abstract, etc.)];

analyzing one or more external datasets to identify a set of similar features [over time, as additional messages are sent/received, the feature vectors of the new messages are determined, and similarity measures are calculated between the features of the new message(s) and features of prior messages in the training dataset (paras. 0050-52, see also: 0094-106, 0134, 0155-172, 0203-204, etc.)];

appending the set of similar features to the received dataset to generate an updated dataset [the additional messages and their feature vectors are added to the training dataset used to train a general model or a user specific model (paras. 0050-52, etc.) including concatenating features of the new messages and their similarity to prior message features (paras. 0094-106; see also: 0134, 0155-172, 0203-204, etc.)];
generating an updated dataset by modifying a received dataset to reflect added feature [the additional messages and their feature vectors are added to the training dataset used to train a general model or a user specific model (paras. 0050-52, etc.) including concatenating features of the new messages and their similarity to prior message features (paras. 0094-106; see also: 0134, 0155-172, 0203-204, etc.); where the messages are received data and the concatenated features are added to the features of the messages, thereby generating an updated dataset];

applying the updated dataset to the current machine learning model to generate an updated machine learning model [the additional messages and their features are added to the training dataset and used to train a general model or a user-specific model (paras. 0050-52, etc.)];

and assessing performance of the updated machine learning model [the performance of multiple models may be evaluated after being trained with the combined training data, to determine prediction accuracy, and to provide confidence/probability of resulting suggestions (paras. 0187-189)], wherein assessing performance of the updated machine learning model comprises:

training the updated machine learning model according to the modified dataset [the additional messages and their features are added to the training dataset and used to train a general model or a user-specific model (paras. 0050-52, etc.)];

responsive to training the updated machine learning model, comparing a performance of the updated machine learning model to a previous performance that occurred prior to the training [the performance of multiple models may be evaluated after being trained with the combined training data, to determine prediction accuracy, and to provide confidence/probability of resulting suggestions (paras. 0187-189)];

monitoring one or more performance metrics of interest associated with the updated machine learning model [the performance of multiple models may be evaluated after being trained with the combined training data, to determine prediction accuracy, and to provide confidence/probability of resulting suggestions (paras. 0187-189)];

and outputting, to a user interface, recommendations for additional actions to improve the performance [based on the predicted probabilities and/or scores of the models, automated suggestions may be provided by the model(s) for new communications (paras. 0025-29, 0032, 0063, 0195-199, etc.)].

While Korycki teaches generating an updated dataset by modifying a received dataset to reflect added feature, as well as monitoring the performance of the updated machine learning model after training (see above), it has not been relied upon for teaching generating an updated dataset by updating additional instances of the received dataset to reflect the appended set of similar features; and wherein assessing performance of the updated machine learning model comprises: identifying a set of actions, wherein the set of actions are utilized to determine whether to optimize the modified dataset by merging or modifying the existing features based on the comparison of performances and the monitoring.
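The limitation at issue, updating additional instances of the received dataset to reflect the appended similar features, is easiest to picture as propagating a new column across every row. A hedged pandas sketch with invented column names (not drawn from Korycki or Sokolov, whose actual mechanisms are the citations in this rejection):

```python
import pandas as pd

# Received dataset: three instances (rows) with one existing feature.
received = pd.DataFrame({"id": [1, 2, 3], "salary": [50, 62, 58]})

# Similar feature identified in an external dataset.
external = pd.DataFrame({"id": [1, 2, 3], "income": [51, 61, 59]})

# Appending the feature updates every existing instance at once.
updated = received.merge(external, on="id", how="left")
print(updated)  # each row now carries the appended 'income' feature
```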
Sokolov teaches generating an updated dataset by modifying a received dataset to reflect added feature and updating additional instances of the received dataset to reflect the appended set of similar features [the network security service trains a machine learning model on previously collected data and, for each device in a plurality of devices, receives activity data collected by a security agent executed at the additional device, wherein the activity data specifies an action type of the previous action and a time at which the previous action was performed; generates an additional set of features based on the previous time-series data; and creates a training instance comprising the additional set of features and a label indicating the action type of the previous action. The network security service trains the machine-learning model based on the training instances (col. 11, lines 5-23; col. 13, lines 5-16; etc.); for the similar features of Korycki, above].

Korycki and Sokolov are analogous art, as they are within the same field of endeavor, namely generating features from messages in a network for training machine learning models for classification. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to generate updated features and add them to multiple training instances, as taught by Sokolov, for the generating of updated features and adding them to the message feature data in the system of Korycki. Sokolov provides motivation as [additional instances of the training data can be used for each additional device in a plurality of devices which may not otherwise be able to provide necessary data and analysis; and allows patterns across devices to be collected/analyzed (col. 2, line 62 to col. 3, line 54; etc.)].

Additionally, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add the similarity/feature data to multiple instances of the training dataset, for the system taught by Korycki, since it has been held that mere duplication of the essential working parts of a device (i.e., instances of a training dataset and associated features) involves only routine skill in the art. St. Regis Paper Co. v. Bemis Co., 193 USPQ 8.
Grady teaches wherein assessing performance of the updated machine learning model comprises:

training the updated machine learning model according to the modified dataset [the modified feature data generated by the function generator are used as features for training the machine learning model and/or used by the model during runtime (paras. 0051-52, etc.), which can also include determining whether retraining on the new set of features decreases or increases the accuracy of the model based on the gathered performance metrics and dynamic generation of new features (paras. 0059-60, 0063, etc.); for the training of Korycki/Sokolov, above];

responsive to training the updated machine learning model, comparing a performance of the updated machine learning model to a previous performance that occurred prior to the training [the system includes a function generator that can calculate metrics based on aggregated and/or filtered feature data (merging or modifying the existing features) and, based upon the metric(s), can sort and aggregate feature data, where the modified feature data generated by the function generator are used as features for training the machine learning model and/or used by the model during runtime (paras. 0051-52, etc.), which can also include determining whether retraining on the new set of features decreases or increases the accuracy of the model based on the gathered performance metrics and dynamic generation of new features (paras. 0059-60, 0063, etc.); for the training of Korycki/Sokolov, above];

monitoring one or more performance metrics of interest associated with the updated machine learning model [the modified feature data generated by the function generator are used as features for training the machine learning model and/or used by the model during runtime (paras. 0051-52, etc.), which can also include determining whether retraining on the new set of features decreases or increases the accuracy of the model based on the gathered performance metrics and dynamic generation of new features (paras. 0059-60, 0063, etc.); for the training of Korycki/Sokolov, above];

identifying a set of actions, wherein the set of actions are utilized to determine whether to optimize the modified dataset by merging or modifying the existing features based on the comparison of performances and the monitoring [the system includes a function generator that can calculate metrics based on aggregated and/or filtered feature data (merging or modifying the existing features) and, based upon the metric(s), can sort and aggregate feature data, where the modified feature data generated by the function generator are used as features for training the machine learning model and/or used by the model during runtime (paras. 0051-52, etc.), which can also include determining whether retraining on the new set of features decreases or increases the accuracy of the model based on the gathered performance metrics and dynamic generation of new features (paras. 0059-60, 0063, etc.); for the training of Korycki/Sokolov, above];

and outputting, to a user interface, recommendations for additional actions to improve the performance [the UI displays the features and other results for review on the client device (paras. 0015, 0025, etc.)].

Korycki/Sokolov and Grady are analogous art, as they are within the same field of endeavor, namely training and retraining machine learning models. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include modifying, aggregating, and/or adding new features for training machine learning model(s), based upon the monitored performance metrics of the model(s), as taught by Grady, in the updating of features and training of the machine learning model(s) of the system taught by Korycki/Sokolov. Grady provides motivation as [the updating of features supports training additional and/or more specific models in the case of decreasing accuracy on the larger feature set, providing flexibility and increased speed and accuracy (paras. 0059-63, etc.)].

As per claim 2, Korycki/Sokolov/Grady teaches recommending one or more actions based on the performance assessment of the updated machine learning model [based on the predicted probabilities and/or scores of the models, automated suggestions may be provided by the model(s) for new communications (Korycki: paras. 0025-29, 0032, 0063, 0195-199, etc.); where the UI displays the features and other results for review on the client device (Grady: paras. 0015, 0025, etc.)].
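The assess/compare/monitor loop mapped onto claim 1 above is the familiar before-and-after retraining comparison. A minimal scikit-learn sketch; the model, metric, synthetic data, and action labels are all our assumptions rather than anything in the claims or references:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in: columns 6-11 play the role of the appended features.
X, y = make_classification(n_samples=400, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

previous = LogisticRegression(max_iter=500).fit(X_tr[:, :6], y_tr)
updated = LogisticRegression(max_iter=500).fit(X_tr, y_tr)

prev_acc = previous.score(X_te[:, :6], y_te)  # performance prior to training
new_acc = updated.score(X_te, y_te)           # performance after training

# Identify a follow-up action based on the comparison and monitoring.
action = ("keep merged features" if new_acc > prev_acc
          else "modify or drop the appended features")
print(f"{prev_acc:.3f} -> {new_acc:.3f}: {action}")
```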
As per claim 3, Korycki/Sokolov/Grady teaches wherein analyzing one or more external datasets to identify a set of similar features includes:

converting the one or more features into numerical feature vectors [over time, as additional messages are sent/received, the feature vectors of the new messages are determined, and similarity measures are calculated between the features of the new message(s) and features of prior messages in the training dataset (Korycki: paras. 0050-52, see also: 0094-106, 0134, 0155-172, 0203-204, etc.)];

identifying a set of similar features in the one or more external datasets [over time, as additional messages are sent/received, the feature vectors of the new messages are determined, and similarity measures are calculated between the features of the new message(s) and features of prior messages in the training dataset (Korycki: paras. 0050-52, see also: 0094-106, 0134, 0155-172, 0203-204, etc.)];

using word embedding on the set of similar features [bag of words and/or word2vec may be used to determine the communication feature vectors (Korycki: paras. 0094-95, etc.); both of which provide word embedding];

and identifying a vectoral distance between the one or more features and the set of similar features [similarity between the feature vectors can be calculated via bag of words models, cosine similarity, tf-idf similarity, latent semantic indexing (LSI), and/or distributed representation models like word2vec, etc. (Korycki: paras. 0094-100, etc.); where these similarity functions, such as cosine similarity, include identifying a vectoral distance between the features].

As per claim 5, Korycki/Sokolov/Grady teaches wherein analyzing one or more external datasets to identify a set of similar features includes using a bag of words technique to find similar features [similarity between the feature vectors can be calculated via bag of words models, cosine similarity, tf-idf similarity, latent semantic indexing (LSI), and/or distributed representation models like word2vec, etc. (Korycki: paras. 0094-100, etc.)].

As per claim 7, Korycki/Sokolov/Grady teaches categorizing the features of the dataset into categorical features and unstructured text features, wherein categorical features are features with corresponding identifying metadata, and unstructured text features are features which lack such metadata [category, text, time, and metadata of the communications may be used to determine separate features (Korycki: paras. 0084-90; see also: 0052, 0061, 0124, 0131-135, 0204, etc.); where the categories of features described include regular (unstructured) text features and metadata (categorical) features].
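Of the similarity measures the rejection attributes to Korycki (bag of words, cosine similarity, tf-idf, LSI, word2vec), the tf-idf-plus-cosine combination is easy to show end to end; the library choice and text are ours, not Korycki's:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "employee annual salary amount",
    "employee yearly income amount",
    "home address postal code",
]
tfidf = TfidfVectorizer().fit_transform(docs)  # tf-idf weighted vectors
print(cosine_similarity(tfidf[0], tfidf))  # similarities of doc 0 to all docs
```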
As per claim 8, Korycki teaches a computer program product comprising: one or more non-transitory computer readable storage media and program instructions stored on the one or more computer readable storage media, the program instructions comprising instructions [the method may be implemented via a computer program product embodied on a computer-readable storage medium and configured so as when run on a processing apparatus, comprising one or more processing units, the processing unit(s) performs the operations (paras. 0214-215; claim 20; etc.)] to:

receive a dataset for use with respect to a current machine learning model, wherein the dataset comprises one or more features [a training data set is collected that comprises a record of past communications and feature vector(s) of the communications (abstract, etc.)];

analyze one or more external datasets to identify a set of similar features [over time, as additional messages are sent/received, the feature vectors of the new messages are determined, and similarity measures are calculated between the features of the new message(s) and features of prior messages in the training dataset (paras. 0050-52, see also: 0094-106, 0134, 0155-172, 0203-204, etc.)];

append the set of similar features to the received dataset to generate an updated dataset [the additional messages and their feature vectors are added to the training dataset used to train a general model or a user specific model (paras. 0050-52, etc.) including concatenating features of the new messages and their similarity to prior message features (paras. 0094-106; see also: 0134, 0155-172, 0203-204, etc.); where the messages are received data and the concatenated features are added to the features of the messages, thereby generating an updated dataset], wherein appending comprises:

generate an updated dataset by modifying the received dataset to reflect the added feature [the additional messages and their feature vectors are added to the training dataset used to train a general model or a user specific model (paras. 0050-52, etc.) including concatenating features of the new messages and their similarity to prior message features (paras. 0094-106; see also: 0134, 0155-172, 0203-204, etc.); where the messages are received data and the concatenated features are added to the features of the messages, thereby generating an updated dataset];

apply the updated dataset to the current machine learning model to generate an updated machine learning model [the additional messages and their features are added to the training dataset and used to train a general model or a user-specific model (paras. 0050-52, etc.)];

and assess performance of the updated machine learning model [the performance of multiple models may be evaluated after being trained with the combined training data, to determine prediction accuracy, and to provide confidence/probability of resulting suggestions (paras. 0187-189)], wherein assessing performance of the updated machine learning model comprises:

training the updated machine learning model according to the modified dataset [the additional messages and their features are added to the training dataset and used to train a general model or a user-specific model (paras. 0050-52, etc.)];

responsive to training the updated machine learning model, comparing a performance of the updated machine learning model to a previous performance that occurred prior to the training [the performance of multiple models may be evaluated after being trained with the combined training data, to determine prediction accuracy, and to provide confidence/probability of resulting suggestions (paras. 0187-189)];
monitoring one or more performance metrics of interest associated with the updated machine learning model [the performance of multiple models may be evaluated after being trained with the combined training data, to determine prediction accuracy, and to provide confidence/probability of resulting suggestions (paras. 0187-189)];

and outputting, to a user interface, recommendations for additional actions to improve the performance [based on the predicted probabilities and/or scores of the models, automated suggestions may be provided by the model(s) for new communications (paras. 0025-29, 0032, 0063, 0195-199, etc.)].

While Korycki teaches generating an updated dataset by modifying a received dataset to reflect added feature and monitoring performance of the model(s) (see above), it has not been relied upon for teaching wherein appending comprises: adding features whose similarity falls below a threshold amount to the received dataset; and generate an updated dataset by updating additional instances of the received dataset to reflect the appended set of similar features; and wherein assessing performance of the updated machine learning model comprises: identifying a set of actions, wherein the set of actions are utilized to determine whether to optimize the modified dataset by merging or modifying the existing features based on the comparison of performances and the monitoring.

Sokolov teaches generate an updated dataset by modifying a received dataset to reflect added feature and updating additional instances of the received dataset to reflect the appended set of similar features [the network security service trains a machine learning model on previously collected data and, for each device in a plurality of devices, receives activity data collected by a security agent executed at the additional device, wherein the activity data specifies an action type of the previous action and a time at which the previous action was performed; generates an additional set of features based on the previous time-series data; and creates a training instance comprising the additional set of features and a label indicating the action type of the previous action. The network security service trains the machine-learning model based on the training instances (col. 11, lines 5-23; col. 13, lines 5-16; etc.); for the similar features of Korycki, above].

Korycki and Sokolov are analogous art, as they are within the same field of endeavor, namely generating features from messages in a network for training machine learning models for classification. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to generate updated features and add them to multiple training instances, as taught by Sokolov, for the generating of updated features and adding them to the message feature data in the system of Korycki. Sokolov provides motivation as [additional instances of the training data can be used for each additional device in a plurality of devices which may not otherwise be able to provide necessary data and analysis; and allows patterns across devices to be collected/analyzed (col. 2, line 62 to col. 3, line 54; etc.)].

Additionally, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add the similarity/feature data to multiple instances of the training dataset, for the system taught by Korycki, since it has been held that mere duplication of the essential working parts of a device (i.e., instances of a training dataset and associated features) involves only routine skill in the art. St. Regis Paper Co. v. Bemis Co., 193 USPQ 8.
Grady teaches wherein appending comprises: adding features whose similarity falls below a threshold amount to the received dataset [if predictions and expected results are within a predefined similarity threshold (for which any similarity level threshold can be used), the predictions are confirmed as a viable indicator and may be stored as a new feature with the dataset (para. 0063, etc.); using the similarity measure of features in Korycki, above];

wherein assessing performance of the updated machine learning model comprises: training the updated machine learning model according to the modified dataset [the modified feature data generated by the function generator are used as features for training the machine learning model and/or used by the model during runtime (paras. 0051-52, etc.), which can also include determining whether retraining on the new set of features decreases or increases the accuracy of the model based on the gathered performance metrics and dynamic generation of new features (paras. 0059-60, 0063, etc.); for the training of Korycki/Sokolov, above];

responsive to training the updated machine learning model, comparing a performance of the updated machine learning model to a previous performance that occurred prior to the training [the system includes a function generator that can calculate metrics based on aggregated and/or filtered feature data (merging or modifying the existing features) and, based upon the metric(s), can sort and aggregate feature data, where the modified feature data generated by the function generator are used as features for training the machine learning model and/or used by the model during runtime (paras. 0051-52, etc.), which can also include determining whether retraining on the new set of features decreases or increases the accuracy of the model based on the gathered performance metrics and dynamic generation of new features (paras. 0059-60, 0063, etc.); for the training of Korycki/Sokolov, above];

monitoring one or more performance metrics of interest associated with the updated machine learning model [the modified feature data generated by the function generator are used as features for training the machine learning model and/or used by the model during runtime (paras. 0051-52, etc.), which can also include determining whether retraining on the new set of features decreases or increases the accuracy of the model based on the gathered performance metrics and dynamic generation of new features (paras. 0059-60, 0063, etc.); for the training of Korycki/Sokolov, above];

identifying a set of actions, wherein the set of actions are utilized to determine whether to optimize the modified dataset by merging or modifying the existing features based on the comparison of performances and the monitoring [the system includes a function generator that can calculate metrics based on aggregated and/or filtered feature data (merging or modifying the existing features) and, based upon the metric(s), can sort and aggregate feature data, where the modified feature data generated by the function generator are used as features for training the machine learning model and/or used by the model during runtime (paras. 0051-52, etc.), which can also include determining whether retraining on the new set of features decreases or increases the accuracy of the model based on the gathered performance metrics and dynamic generation of new features (paras. 0059-60, 0063, etc.); for the training of Korycki/Sokolov, above];
0059-60, 0063, etc.); for the training of Korycki/Sokolov, above]; and outputting, to a user interface, recommendations for additional actions to improve the performance [the UI displays the features and other results for review on the client device (paras. 0015, 0025, etc.)]. Korycki and Grady are analogous art, as they are within the same field of endeavor, namely generating features for classification models from language processing. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to use a similarity measure threshold for determining when to add new features, as well as modifying, aggregating, and/or adding new features for training machine learning model(s), based upon the monitored performance metrics of the model(s), as taught by Grady, in the updating of features and training of the machine learning model(s) of the system taught by Korycki/Sokolov. Because both Korycki and Grady teach systems which determine similarity between text features and add features to the training dataset(s), it would have been obvious to one of ordinary skill in the art to use a similarity measure threshold for determining when to add new features, as taught by Grady, for adding similar features in the system taught by Grady, to achieve the predictable result of adding features which are not too similar to features already being used (based on the similarity threshold), and thus add more information for the model. Grady also provides motivation as [the similarity threshold can be used to determine that the new feature provides a viable indicator for predictions and the updating of features supports training additional and/or more specific models in the case of decreasing accuracy on the larger feature set, providing flexibility and increased speed and accuracy (paras. 0059-63, etc.)]. As per claim 9, see the rejection of claim 2, above. As per claim 10, see the rejection of claim 3, above. As per claim 12, see the rejection of claim 5, above. As per claim 14, see the rejection of claim 7, above. As per claim 15, see the rejection of claim 8, above, wherein Korycki/Sokolov/Grady also teaches a computer system comprising: one or more processors; one or more computer-readable storage media; program instructions stored on the computer-readable storage media for execution by at least one of the one or more processors, the program instructions comprising instructions to [perform the method] [the method may be implemented via a computer program product embodied on a computer-readable storage medium and configured so as when run on a processing apparatus, comprising one or more processing units, the processing unit(s) performs the operations (Korycki: paras. 0214-215; claim 20; etc.)]. As per claim 16, see the rejection of claim 2, above. As per claim 17, see the rejection of claim 3, above. As per claim 19, see the rejection of claim 5, above. Claim(s) 4, 6, 11, 13, 18, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Korycki (US 2017/0068906), in view of Sokolov (US 10,924,513), further in view of Grady (US 2023/0045347), and further in view of Herman-Saffar (US 10,721,266). As per claim 4, Korycki/Sokolov/Grady teaches the computer implemented method of claim 1, as described above. While Korycki/Sokolov/Grady teaches using machine learning models/techniques (see above), it has not been relied upon for teaching wherein the current machine learning model includes a reinforcement learning model. 
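As noted above, here is a compact sketch of the threshold-gated feature addition and before/after performance comparison that the combination describes. It is illustrative only: the cosine-similarity gate, the 0.9 threshold, and the train_and_score callable are assumptions for the sketch, not anything taken from Grady or the claims.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.9  # assumed; Grady notes any similarity level threshold can be used

def max_similarity(candidate: np.ndarray, existing: np.ndarray) -> float:
    """Highest cosine similarity between a candidate feature column and the
    columns of the existing feature matrix (one column per feature)."""
    c = candidate / (np.linalg.norm(candidate) + 1e-12)
    e = existing / (np.linalg.norm(existing, axis=0, keepdims=True) + 1e-12)
    return float(np.max(e.T @ c))

def append_and_reassess(X, y, candidate, train_and_score):
    """Append the candidate feature only when its similarity to existing features
    falls below the threshold, retrain, and compare performance before vs. after."""
    previous = train_and_score(X, y)               # performance prior to the training
    if max_similarity(candidate, X) >= SIMILARITY_THRESHOLD:
        return X, previous                         # too similar: adds little information
    X_updated = np.column_stack([X, candidate])    # updated dataset with appended feature
    updated = train_and_score(X_updated, y)        # performance after retraining
    # The monitored comparison decides whether to keep the modified dataset.
    return (X_updated, updated) if updated >= previous else (X, previous)
```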
Claim(s) 4, 6, 11, 13, 18, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Korycki (US 2017/0068906), in view of Sokolov (US 10,924,513), further in view of Grady (US 2023/0045347), and further in view of Herman-Saffar (US 10,721,266).

As per claim 4, Korycki/Sokolov/Grady teaches the computer implemented method of claim 1, as described above. While Korycki/Sokolov/Grady teaches using machine learning models/techniques (see above), it has not been relied upon for teaching wherein the current machine learning model includes a reinforcement learning model.

Herman-Saffar teaches wherein the current machine learning model includes a reinforcement learning model [a reinforcement learning model can be used to provide content-based recommendations (figs. 1 and 9; see also: related descriptions of the figure elements, etc.)]. Korycki/Sokolov and Herman-Saffar are analogous art, as they are within the same field of endeavor, namely using a machine learning model to make content-based recommendations based on similarity measures and features of current and past content. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to utilize a reinforcement learning model to make the content-based recommendation, as taught by Herman-Saffar, as the machine learning model making the content-based recommendation in the system taught by Korycki/Sokolov. Herman-Saffar provides motivation as [by utilizing expert/team feedback and reinforcement learning, the recommendation model can be improved (col. 11, line 60 to col. 12, line 11; etc.)].

As per claim 6, Korycki/Sokolov/Grady teaches the computer implemented method of claim 1, as described above. While Korycki/Sokolov/Grady teaches determining similarity measure(s) between content features (see above), it has not been relied upon for teaching using Pearson correlation to create a correlation between the one or more features and the set of similar features indicating a level of similarity.

Herman-Saffar teaches using Pearson correlation to create a correlation between the one or more features and the set of similar features indicating a level of similarity [the feature vectors can be compared using one or more similarity measuring algorithms, including Pearson correlation and/or cosine similarity, and the similarity scores may be included in the feature vector(s) (col. 11, lines 51-59; etc.)]. Korycki/Sokolov and Herman-Saffar are analogous art, as they are within the same field of endeavor, namely using a machine learning model to make content-based recommendations based on similarity measures and features of current and past content. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to utilize Pearson correlation in the determination of the similarity measure(s) between feature vectors, as taught by Herman-Saffar, in the determination of the similarity measure(s) between feature vectors in the system taught by Korycki/Sokolov. Because both Korycki/Sokolov and Herman-Saffar teach utilizing one or more similarity measurement algorithms to determine a similarity measurement/score between feature vectors of current and past content, it would have been obvious to one of ordinary skill in the art to utilize Pearson correlation in that determination, to achieve the predictable result of including more accurate similarity measurements by including more methods of measurement, and/or utilizing the measurement algorithm most accurately reflecting the desired features. (A brief Pearson-correlation sketch follows the claim mappings below.)

As per claim 11, see the rejection of claim 4, above. As per claim 13, see the rejection of claim 6, above. As per claim 18, see the rejection of claim 4, above. As per claim 20, see the rejection of claim 6, above.
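For reference, the Pearson-correlation sketch mentioned above: a plain implementation of the similarity measure Herman-Saffar is cited for, applied to two feature vectors. The example vectors are invented for illustration; nothing here is taken from the reference itself.

```python
import numpy as np

def pearson_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Pearson correlation coefficient between two feature vectors, in [-1, 1]."""
    u_c, v_c = u - u.mean(), v - v.mean()
    denom = float(np.linalg.norm(u_c) * np.linalg.norm(v_c))
    return float(u_c @ v_c) / denom if denom else 0.0

# Illustrative comparison of a current item's features against a past item's.
current = np.array([0.2, 1.0, 0.5, 0.0])
past = np.array([0.1, 0.9, 0.6, 0.1])
print(f"Pearson similarity: {pearson_similarity(current, past):.3f}")  # ~0.966
```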
Response to Arguments

The objections to claims 1-7 have been withdrawn due to the amendments filed. The rejections of claims 8-20 under 35 U.S.C. 112(b) have been withdrawn due to the amendments filed.

Applicant's arguments filed 22 October 2025, with respect to the rejections under 35 U.S.C. 101, have been fully considered but they are not persuasive.

Applicant argues that the identified abstract idea does not fall within the subject matter groupings of abstract ideas. However, the abstract ideas have been identified as mental processes and/or mathematical concepts, as described above. If a claim, under the broadest reasonable interpretation, covers a mathematical relationship between variables or numbers, a numerical formula or equation, or a mathematical calculation, it will be considered as falling within the "mathematical concepts" grouping of abstract ideas. If a claim, under the broadest reasonable interpretation, covers concepts that can be performed in the human mind, or by a human using a pen and paper, including observation, evaluation, judgment, or opinion, it will be considered as falling within the "mental processes" grouping of abstract ideas. Additionally, performing mathematical calculations using a formula that could be practically performed in the human mind may be considered to fall within both the mathematical concepts grouping and the mental processes grouping. See MPEP § 2106.04(a)(2).

Applicant also argues that the abstract idea is "integrated into the practical application of [a] performance metric improvement system" by "providing improvements to existing advisory agent system technology." While the examiner has identified (above) what constitutes the abstract idea and what is drawn to conventional components, the Federal Circuit has also indicated that mere automation of manual processes or increasing the speed of a process, where these purported improvements come solely from the capabilities of a general-purpose computer, are not sufficient to show an improvement in computer functionality. FairWarning IP, LLC v. Iatric Sys., 839 F.3d 1089, 1095, 120 USPQ2d 1293, 1296 (Fed. Cir. 2016); Credit Acceptance Corp. v. Westlake Services, 859 F.3d 1044, 1055, 123 USPQ2d 1100, 1108-09 (Fed. Cir. 2017). The Federal Circuit has also indicated that a claim must include more than conventional implementation on generic components or machinery to qualify as an improvement to an existing technology. Affinity Labs of Tex. v. DirecTV, LLC, 838 F.3d 1253, 1264-65, 120 USPQ2d 1201, 1208-09 (Fed. Cir. 2016); TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 612-613, 118 USPQ2d 1744, 1747-48 (Fed. Cir. 2016). Claims must also include more than just instructions to perform the method on a generic component or machinery to qualify as an improvement to an existing technology (MPEP § 2106.05(a)).

Applicant further argues that the improvement is provided in how "embodiments of the present invention dynamically improve performance through reinforcement learning to append new features to existing dataset and reassess the model for its performance." However, applicant has described an improvement to determining whether/when to add new features to the existing dataset. As described above, this is a mental process/mathematical calculation. Therefore, (assuming that the invention provides these advantages) this amounts to an improvement to an abstract idea rather than to a computer or technology. See MPEP 2106.05(a).
It appears that any benefits to the computer itself are based solely on the use of an improvement to the abstract idea(s), using generic computer components to apply the abstract idea(s). Additionally, to find a valid improvement to a computer or technology, the specification must disclose the improvement and the claim must include the necessary components to realize the improvement. MPEP 2106.05(d)(1).

Applicant's arguments, see the remarks, filed 22 October 2025, with respect to the rejection(s) under 35 U.S.C. 103 over Korycki, Sokolov, and Herman-Saffar have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Grady, which has been relied upon for teaching identifying a set of actions, wherein the set of actions are utilized to determine whether to optimize the modified dataset by merging or modifying the existing features based on the comparison of performances and the monitoring, as part of assessing the performance of the updated machine learning model, as described above.

Conclusion

The following is a summary of the treatment and status of all claims in the application as recommended by M.P.E.P. 707.07(i): claims 1-20 are rejected.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Liu (US 11,443,553) – discloses merging subsets of facial images based on similarity scores of feature vectors.
Seifert (US 2021/0141897) – discloses combining features into a similarity training set with pairing of similar files/documents, used for multiple training instance datasets, including for negative sampling based on similarity.
Yagnik (US 7,827,123) – discloses determining feature proximity values, combined feature proximity values, and adding a lowest-proximity-value feature to a training set.
Chang (US 11,636,161) – discloses adding new features, provided by a user, to a training dataset.
Zhang (US 12,165,056) – discloses using a trained model to transform an observed data subset and features to a predicted version of a new feature, and using the new feature and dataset to train an auxiliary model.
Gupta (US 2019/0130304) – discloses a system identifying a new electronic communication (or new features and interactions for a prior communication), then generating an additional training instance based on the new data.
Uchide (US 2021/0312333) – discloses selecting samples for a negative example data set based on similarity level rankings.
Yadav (US 2019/0279618) – discloses identifying a cluster below a threshold level of similarity between latent features of users and, in response, excluding a specific model in generating a new personalized model.
Liu (US 11,443,554) – discloses merging subsets of images based on feature centroid similarity scores, including merging them with the centroid.
Saad (US 2023/0115855) – discloses monitoring performance metrics for particular arrangements of features (or combinations of features) and updating the arrangements/combinations of features in response to the performance metric(s).
Gehler et al. (On Feature Combination for Multiclass Object Classification, Oct 2009, pgs. 221-228) – discloses a system to update weightings of features during training, and combining multiple complementary features.
Zhang et al. (A feature selection and multi-model fusion-based approach of predicting air quality, Dec 2019, pgs. 210-220) – discloses a system/method of fusing models (and associated features) for air quality prediction, including feature selection/extraction.

The examiner requests, in response to this Office action, that support be shown for language added to any original claims on amendment and any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the examiner in prosecuting the application.

When responding to this Office action, Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GEORGE GIROUX, whose telephone number is (571) 272-9769. The examiner can normally be reached M-F 10am-6pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Omar Fernandez Rivas, can be reached on 571-272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GEORGE GIROUX/
Primary Examiner, Art Unit 2128

Prosecution Timeline

Mar 21, 2022 – Application Filed
Mar 07, 2025 – Non-Final Rejection (§101, §103)
Apr 23, 2025 – Interview Requested
May 06, 2025 – Applicant Interview (Telephonic)
May 21, 2025 – Examiner Interview Summary
May 30, 2025 – Response Filed
Sep 09, 2025 – Final Rejection (§101, §103)
Oct 12, 2025 – Interview Requested
Oct 22, 2025 – Response after Non-Final Action
Dec 09, 2025 – Request for Continued Examination
Dec 20, 2025 – Response after Non-Final Action
Mar 21, 2026 – Non-Final Rejection (§101, §103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572807 – Neural Network Methods for Defining System Topology – Granted Mar 10, 2026 (2y 5m to grant)
Patent 12572818 – DEVICE AND METHOD FOR RANDOM WALK SIMULATION – Granted Mar 10, 2026 (2y 5m to grant)
Patent 12554986 – WEIGHT QUANTIZATION IN NEURAL NETWORKS – Granted Feb 17, 2026 (2y 5m to grant)
Patent 12554983 – MACHINE LEARNING-BASED SYSTEMS AND METHODS FOR IDENTIFYING AND RESOLVING CONTENT ANOMALIES IN A TARGET DIGITAL ARTIFACT – Granted Feb 17, 2026 (2y 5m to grant)
Patent 12541696 – ENHANCED VALIDITY MODELING USING MACHINE-LEARNING TECHNIQUES – Granted Feb 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 66%
With Interview: 93% (+27.1%)
Median Time to Grant: 4y 6m
PTA Risk: High
Based on 612 resolved cases by this examiner. Grant probability derived from career allow rate.
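As a sanity check, these projections appear to follow directly from the examiner statistics above; the additive treatment of the interview lift is an assumption inferred from the caption, not a documented formula.

```python
granted, resolved = 401, 612                  # examiner career totals stated above
allow_rate = granted / resolved               # 0.655... -> displayed as 66%
interview_lift = 0.271                        # stated +27.1% lift with an interview
with_interview = allow_rate + interview_lift  # 0.926... -> displayed as 93% (assumed additive)
print(f"grant probability: {allow_rate:.0%}, with interview: {with_interview:.0%}")
```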
