Prosecution Insights
Last updated: April 19, 2026
Application No. 18/343,836

System and method for training an artificial intelligent system based on entity rules, regulation rules, and core values

Non-Final OA • §101, §103, §112
Filed: Jun 29, 2023
Examiner: VAUGHN, RYAN C
Art Unit: 2125
Tech Center: 2100 — Computer Architecture & Software
Assignee: BANK OF AMERICA CORPORATION
OA Round: 1 (Non-Final)
Grant Probability: 62% (Moderate)
Estimated OA Rounds: 1-2
Estimated Time to Grant: 3y 9m
Grant Probability with Interview: 81%

Examiner Intelligence

Career Allow Rate: 62% of resolved cases (145 granted / 235 resolved; +6.7% vs. TC avg)
Interview Lift: +19.4% for resolved cases with an interview (a strong lift)
Average Prosecution: 3y 9m typical timeline; 45 applications currently pending
Total Applications: 280 across all art units (career history)

Statute-Specific Performance

§101: 23.9% (-16.1% vs. TC avg)
§103: 40.1% (+0.1% vs. TC avg)
§102: 7.6% (-32.4% vs. TC avg)
§112: 21.9% (-18.1% vs. TC avg)
Tech Center averages are estimates • Based on career data from 235 resolved cases

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are presented for examination.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on June 29, 2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Specification

The disclosure is objected to because of the following informalities: (a) in the title, “intelligent” should be “intelligence”; (b) on p. 6, ll. 10-11 of the specification as filed, “one or more communication equipment” should be “one or more pieces of communication equipment”; and (c) on p. 12, l. 17 of the specification as filed, “hyper parameters” should be “hyperparameters”. Appropriate correction is required.

Claim Objections

Claims 3, 10, and 17 are objected to because of the following informalities: the first recitation of “one or more entity rules” should be “the one or more entity rules”. Claims 5, 12, and 19 are objected to because of the following informalities: “hyper parameters” should be “hyperparameters”. Claims 6-7, 13-14, and 20 are objected to for dependency on claims 5, 12, and 19, respectively. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 4-7, 11-14, and 18-20 are rejected under 35 U.S.C.
112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claims 4, 11, and 18 recite that “one or more threat scenarios may include one or more vulnerable data objects”. First, the use of the term “may” renders it unclear whether including vulnerable data objects in the first set is or is not required by the claim. Second, the term “vulnerable” is a relative term which renders the claims indefinite. The term “vulnerable” is not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Examiner is unaware of any commonly accepted definition of “vulnerable” in the art, and the specification does not define what the dividing line is between vulnerable and invulnerable data objects.

The term “improving” in claims 5, 12, and 19 is a relative term which renders the claims indefinite. The term “improving” is not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Neither the claims nor the specification defines how the claimed “improvement” is measured or over what baseline the sampling rules are being improved.

All claims dependent on a claim rejected hereunder are also rejected for being dependent on a rejected base claim.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C.
101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims will follow the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 (“2019 PEG”).

Claim 1

Step 1: The claim recites a system comprising a memory and a processor; therefore, it is directed to the statutory category of machines.

Step 2A Prong 1: The claim recites, inter alia:

[A]nalyz[ing] a first set of data objects associated with a first set of data groups to derive a set of logic relationships between the first set of the data objects: This limitation could encompass mentally analyzing the data objects and mentally deriving the logical relationships among them.

[D]etermin[ing] whether the machine learning model is trained to meet one or more entity rules with a desired accuracy: This limitation could encompass mentally determining whether the model is sufficiently trained by visually observing its outputs.

[D]etermin[ing] whether the tested machine learning model meets one or more regulation rules and one or more core values: This limitation could encompass mentally determining whether the model meets regulations and values by observing the model.

[I]n response to determining that the tested machine learning model does not meet the one or more regulation rules or the one or more core values, refin[ing] one or more training rules to generate a second set of data objects associated with a second set of data groups: This limitation could encompass mentally determining that the model does not meet the regulation rules or core values and mentally generating the second data objects by mentally refining the training rules.

Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim further recites “a memory operable to store: a plurality of data groups comprising a plurality of data objects; a plurality of entity rules; a plurality of regulation rules; and one or more core values; and a processor operably coupled to the memory”. However, this is a mere instruction to apply the judicial exception using a generic computer. MPEP § 2106.05(f).

The claim further recites “train[ing], based on a set of training rules, a machine learning model with the first set of the data objects and the set of the logic relationships”; “test[ing] the machine learning model with the first set of the data objects and the set of the logic relationships”; and “retrain[ing] the machine learning model with the second set of the data objects to meet the one or more regulation rules and the one or more core values.” However, these limitations merely restrict the field of use of the judicial exception to model training, testing, and retraining. MPEP § 2106.05(h).

Step 2B: The claim does not contain significantly more than the judicial exception. The analysis at this step mirrors that of step 2A, prong 2. As an ordered whole, the claim is directed to a mentally performable algorithm for analyzing a model to determine whether it meets a set of rules and values and generating additional data if it does not. Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.

Claim 2

Step 1: A machine, as above.

Step 2A Prong 1: The claim recites, inter alia, “determin[ing] that the tested machine learning model meets the one or more regulation rules and the one or more core values”. This limitation could encompass mentally making this determination by observing the model.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that “the processor is configured” to perform the method.
However, this is a mere instruction to apply the judicial exception using a generic computer. MPEP § 2106.05(f). The claim further recites “deploy[ing] the machine learning model into a real-time application.” However, this limitation recites the insignificant extra-solution activity of mere data gathering and output. MPEP § 2106.05(g).

Step 2B: The claim does not contain significantly more than the judicial exception. The claim further recites that “the processor is configured” to perform the method. However, this is a mere instruction to apply the judicial exception using a generic computer. MPEP § 2106.05(f). The claim further recites “deploy[ing] the machine learning model into a real-time application.” However, this limitation recites the well-understood, routine, and conventional activity of receiving and transmitting data over a network. MPEP § 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network).

Claim 3

Step 1: A machine, as above.

Step 2A Prong 1: The claim recites, inter alia, “in response to determining that the machine learning model does not meet one or more entity rules with the desired accuracy, refine one or more training rules to generate a third set of data objects associated with a third set of data groups”. This limitation could encompass mentally determining that the model does not meet the rules by observation of the model, then mentally refining the training rules and mentally generating the set of objects.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “retrain[ing] the machine learning model with the third set of the data objects to meet the one or more entity rules with the desired accuracy.” However, this limitation merely restricts the field of use of the judicial exception to model retraining. MPEP § 2106.05(h).
Step 2B: The claim does not contain significantly more than the judicial exception. The claim further recites “retrain[ing] the machine learning model with the third set of the data objects to meet the one or more entity rules with the desired accuracy.” However, this limitation merely restricts the field of use of the judicial exception to model retraining. MPEP § 2106.05(h).

Claim 4

Step 1: A machine, as above.

Step 2A Prong 1: The claim recites, inter alia:

[P]rocess[ing] … the first set of the data objects and the set of the logic relationships with the one or more regulation rules and the one or more core values: This limitation could encompass mentally processing the data objects and logic relationships with the regulation rules and core values.

[G]enerat[ing] one or more threat scenarios associated with the first set of the data objects, wherein the one or more threat scenarios may include one or more vulnerable data objects in the first set of the data objects: This limitation could encompass mentally generating the threat scenarios that include vulnerable data objects.

[D]etermin[ing] one or more risk levels of the one or more threat scenarios based on a risk matrix: This limitation could encompass mentally determining the risk levels based on a risk matrix.

[D]etermin[ing] whether the first set of the data objects meets the one or more regulation rules and the one or more core values: This limitation could encompass mentally determining whether the set of data objects meets the regulation rules and core values.

Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim further recites that the processing of the first data objects is performed “through a generative network”; that the determination whether the set of data objects meets the rules and values is performed by “test[ing] the machine learning model with the one or more threat scenarios and the one or more risk levels”; and that the “processor is configured” to perform the method. However, these are mere instructions to apply the judicial exception using a generic computer programmed with generic classes of computer algorithms. MPEP § 2106.05(f).

Step 2B: The claim does not contain significantly more than the judicial exception. The claim further recites that the processing of the first data objects is performed “through a generative network”; that the determination whether the set of data objects meets the rules and values is performed by “test[ing] the machine learning model with the one or more threat scenarios and the one or more risk levels”; and that the “processor is configured” to perform the method. However, these are mere instructions to apply the judicial exception using a generic computer programmed with generic classes of computer algorithms. MPEP § 2106.05(f).

Claim 5

Step 1: A machine, as above.

Step 2A Prong 1: The claim recites that “refining the one or more training rules comprises adjusting one or more hyper parameters, improving sampling rules, or collecting additional historical data.” This limitation could encompass mentally making adjustments to the sampling rules.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 1 analysis.

Step 2B: The claim does not contain significantly more than the judicial exception. See claim 1 analysis.

Claim 6

Step 1: A machine, as above.

Step 2A Prong 1: The claim recites that “the sampling rules comprise stratified sampling, cluster sampling or random sampling.” The improvement of the sampling rules remains mentally performable under these further assumptions.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 5 analysis.

Step 2B: The claim does not contain significantly more than the judicial exception. See claim 5 analysis.

Claim 7

Step 1: A machine, as above.

Step 2A Prong 1: The claim recites that “collecting the additional historical data comprises collecting augmented data or rescaling the historical data associated with the plurality of the data groups.” This limitation could encompass mentally rescaling the data.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 5 analysis.

Step 2B: The claim does not contain significantly more than the judicial exception. See claim 5 analysis.

Claims 8-14

Step 1: The claims recite a method; therefore, they are directed to the statutory category of processes.

Step 2A Prong 1: The claims recite the same judicial exceptions as in claims 1-7, respectively.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The analysis at this step mirrors that of claims 1-7, respectively, except insofar as these claims do not recite the hardware recited therein.

Step 2B: The claims do not contain significantly more than the judicial exception. The analysis at this step mirrors that of claims 1-7, respectively, except insofar as these claims do not recite the hardware recited therein.

Claims 15-20

Step 1: The claims recite a non-transitory computer-readable medium; therefore, the claims are directed to the statutory category of articles of manufacture.

Step 2A Prong 1: The claims recite the same judicial exceptions as in claims 1-5 and 6-7 combined, respectively.

Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The analysis at this step mirrors that of claims 1-5 and 6-7 combined, respectively, except insofar as these claims recite a “non-transitory computer-readable medium storing instructions that when executed by a processor causes the processor to [perform the method]”. However, this is a mere instruction to apply the judicial exception using a generic computer. MPEP § 2106.05(f).

Step 2B: The claims do not contain significantly more than the judicial exception. The analysis at this step mirrors that of claims 1-5 and 6-7 combined, respectively, except insofar as these claims recite a “non-transitory computer-readable medium storing instructions that when executed by a processor causes the processor to [perform the method]”. However, this is a mere instruction to apply the judicial exception using a generic computer. MPEP § 2106.05(f).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 5, 7-10, 12, 14-17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Wodetski et al. (US 20220398680) (“Wodetski”) in view of Sharma Mittal et al. (US 20230128548) (“Sharma”).

Regarding claim 1, Wodetski discloses “[a] system comprising: a memory (processor may include memory that stores methods, codes, instructions, and programs – Wodetski, paragraph 160) operable to store: a plurality of data groups comprising a plurality of data objects (Wodetski Fig. 8 shows that sample contracts [data objects] are processed using ML/AI algorithms and pre-classified [analyzed/grouped] into contractual document types); a plurality of entity rules (Wodetski Fig.
15 shows that machine learning training is applied until a performance/accuracy crosses a desired threshold, and high-performing universal contract model rules [entity rules]/models are packaged for deployment); a plurality of regulation rules (industry-specific rules [core values] are optionally packaged to supplement universal rules, and customer-specific rules [regulation rules] are optionally packaged to supplement universal rules – Wodetski, paragraph 131); and one or more core values (industry-specific rules [core values] are optionally packaged to supplement universal rules, and customer-specific rules [regulation rules] are optionally packaged to supplement universal rules – Wodetski, paragraph 131); and a processor operably coupled to the memory (processor may include memory that stores methods, codes, instructions, and programs – Wodetski, paragraph 160), the processor configured to: analyze a first set of data objects associated with a first set of data groups to derive a set of logic relationships between the first set of the data objects (Wodetski Fig. 8 shows that sample contracts [data objects] are processed using ML/AI algorithms and pre-classified [analyzed/grouped] into contractual document types, especially contract transaction types (create contract, create order, amend, review, assign, terminate, etc.) [logic relationships between the data objects = classification in common]); train, based on a set of training rules, a machine learning model with the first set of the data objects and the set of the logic relationships (Wodetski Fig. 8 shows that, once human experts review pre-classified documents, valid classifications are passed to the trained corpus [containing data objects/logic relationships] with classification annotations and the trained corpus is then used to train ML/AI models; Fig. 
15 shows that training is applied to the corpus using multiple techniques [training rules] until performance/accuracy crosses a desired threshold); determine whether the machine learning model is trained to meet one or more entity rules with a desired accuracy (Wodetski Fig. 15 shows that machine learning training is applied until a performance/accuracy crosses a desired threshold, and high-performing universal contract model rules [entity rules]/models are packaged for deployment); … determining that the machine learning model meets one or more entity rules with the desired accuracy (Wodetski Fig. 15 shows that machine learning training is applied until a performance/accuracy crosses a desired threshold, and high-performing universal contract model rules [entity rules]/models are packaged for deployment) … [and using] the first set of the data objects and the set of the logic relationships (see Wodetski Figs. 8 and 15 and note that both are used to train the model); … determine whether the … machine learning model meets one or more regulation rules and one or more core values (high-performing universal contract model rules/models are packaged for deployment; industry-specific rules [core values] are optionally packaged to supplement universal rules, and customer-specific rules [regulation rules] are optionally packaged to supplement universal rules; the industry-specific rules and customer rules are deployed to an AI engine [i.e., the AI system then handles/meets these rules] – Wodetski, paragraphs 131-32); … determining [whether] the tested machine learning model … meet[s] the one or more regulation rules or the one or more core values (see mapping of previous limitation) …; and meet[ing] the one or more regulation rules and the one or more core values (see Wodetski paragraphs 131-32 and note that the AI system is equipped to handle the industry and customer rules once they are deployed).” Wodetski appears not to disclose explicitly the further limitations of the 
claim. However, Sharma discloses “in response to determining that the machine learning model meets one or more … rules with the desired accuracy, test[ing] the machine learning model with the first set of the data objects (generating the training dataset is an iterative process where data are received from data sources, aggregated by a central server, and used to generate a training dataset which is used to train the model; the model is then tested against a validation dataset [first set] to identify an accuracy of the trained machine-learning model – Sharma, paragraph 51) …; … in response to determining that the tested machine learning model does not meet the one or more … rules …, refin[ing] one or more training rules to generate a second set of data objects associated with a second set of data groups (aggregated data are then shared with data sources who use the aggregated data to train a local machine-learning model to generate new data to be provided to a central server; these new data are used to refine the training dataset [refined training set = second set of data objects] which is then used to retrain the machine-learning model; the machine-learning model is tested against the validation dataset and this process continues until the process converges [determining that the process has not converged = determining that the model does not meet the rule that the process should converge; the fact that particular new data should be added to the training dataset may be regarded as a training rule, so that rule is refined when specific new data are added] – Sharma, paragraph 51; training dataset includes labels [data groups] identifying a correct classification for the datapoint – id.
at paragraph 1); and retrain[ing] the machine learning model with the second set of the data objects to meet the one or more rules (aggregated data are then shared with data sources who use the aggregated data to train a local machine-learning model to generate new data to be provided to a central server; these new data are used to refine the training dataset [refined training set = second set of data objects] which is then used to retrain the machine-learning model; the machine-learning model is tested against the validation dataset and this process continues until the process converges [i.e., until the convergence rule is met] – Sharma, paragraph 51) ….”

Sharma and the instant application both relate to machine learning and are analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wodetski to test the model and retrain it with new training data until a criterion is met, as disclosed by Sharma, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would ensure that the model is up-to-date and better geared toward fulfilling its intended purpose, thereby increasing the accuracy of the model on the task. See Sharma, paragraph 51.

Claim 8 is a method claim corresponding to system claim 1 and is rejected for the same reasons as given in the rejection of that claim. Similarly, claim 15 is a non-transitory computer-readable medium claim corresponding to system claim 1 and is rejected for the same reasons as given in the rejection of that claim.
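For orientation, the iterate-until-convergence behavior the examiner maps onto claim 1 (train, test against rules, refine the training set, retrain) can be sketched in a few lines. Everything below (the names, the stand-in trainer, the rule predicates) is a hypothetical illustration, not code from the application or from Wodetski or Sharma:

```python
# Hypothetical sketch of the train/test/refine/retrain loop recited in
# claim 1, as characterized in the rejection. Names, thresholds, and the
# toy "model" are illustrative assumptions only.

def train(model, data):
    # Stand-in trainer: the "model" just remembers the training mean.
    model["mean"] = sum(data) / len(data)
    return model

def meets_rules(model, rules):
    # Stand-in compliance test: every rule predicate must pass
    # (entity rules, regulation rules, or core values alike).
    return all(rule(model) for rule in rules)

def train_until_compliant(data, entity_rules, regulation_rules, max_rounds=20):
    model = {"mean": None}
    for _ in range(max_rounds):
        model = train(model, data)                      # train / retrain
        if meets_rules(model, entity_rules) and meets_rules(model, regulation_rules):
            return model                                # ready to deploy (claim 2)
        # "Refine one or more training rules to generate a second set
        # of data objects": here, crudely, shift the training set.
        data = [x - 2 for x in data]
    return model

model = train_until_compliant(
    data=[12, 14, 16],
    entity_rules=[lambda m: m["mean"] is not None],
    regulation_rules=[lambda m: m["mean"] < 10],
)
print(model["mean"])  # 8.0 after three refinement rounds
```

The point of the sketch is structural: the "second set of data objects" exists only because a compliance test failed, which is the conditional refine/retrain sequence both the §101 analysis and the Sharma mapping describe.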
Regarding claim 2, Wodetski, as modified by Sharma, discloses that “the processor is further configured to: determine that the tested machine learning model meets the one or more regulation rules and the one or more core values (high-performing universal contract model rules/models are packaged for deployment; industry-specific rules [core values] are optionally packaged to supplement universal rules, and customer-specific rules [regulation rules] are optionally packaged to supplement universal rules; the industry-specific rules and customer rules are deployed to an AI engine of a contract AI platform [i.e., the AI system then handles/meets these rules] – Wodetski, paragraphs 131-32); and deploy the machine learning model into a real-time application (high-performing universal contract model rules/models are packaged for deployment; industry-specific rules [core values] are optionally packaged to supplement universal rules, and customer-specific rules [regulation rules] are optionally packaged to supplement universal rules; the industry-specific rules and customer rules are deployed to an AI engine of a contract AI platform – Wodetski, paragraphs 131-32; contract risk scores are updated in real time – id. at paragraph 35).”

Claim 9 is a method claim corresponding to system claim 2 and is rejected for the same reasons as given in the rejection of that claim. Similarly, claim 16 is a non-transitory computer-readable medium claim corresponding to system claim 2 and is rejected for the same reasons as given in the rejection of that claim.

Regarding claim 3, the rejection of claim 1 is incorporated. Wodetski further discloses “determining [whether] the machine learning model … meet[s] one or more entity rules with the desired accuracy (Wodetski Fig.
15 shows that machine learning training is applied until a performance/accuracy crosses a desired threshold, and high-performing universal contract model rules [entity rules]/models are packaged for deployment) …; and … [training] the machine learning model with the … set of the data objects to meet the one or more entity rules with the desired accuracy (Wodetski Fig. 15 shows that machine learning training is applied until a performance/accuracy crosses a desired threshold, and high-performing universal contract model rules [entity rules]/models are packaged for deployment).” Wodetski appears not to disclose explicitly the further limitations of the claim. However, Sharma discloses that “the processor is further configured to: in response to determining that the machine learning model does not meet one or more … rules with the desired accuracy, refine one or more training rules to generate a third set of data objects associated with a third set of data groups (aggregated data are then shared with data sources who use the aggregated data to train a local machine-learning model to generate new data to be provided to a central server; these new data are used to refine the training dataset [another refined training set = third set of data objects] which is then used to retrain the machine-learning model; the machine-learning model is tested against the validation dataset and this process continues until the process converges [determining that the process has not converged = determining that the model does not meet the rule that the process should converge; the fact that particular new data should be added to the training dataset may be regarded as a training rule, so that rule is refined when specific new data are added; note also that the fact that the process is iterative implies that there are multiple training datasets] – Sharma, paragraph 51; training dataset includes labels [data groups] identifying a correct classification for the datapoint – id. 
at paragraph 1); and retrain the machine learning model with the third set of the data objects to meet the one or more … rules with the desired accuracy (aggregated data are then shared with data sources who use the aggregated data to train a local machine-learning model to generate new data to be provided to a central server; these new data are used to refine the training dataset [refined training set = third set of data objects] which is then used to retrain the machine-learning model; the machine-learning model is tested against the validation dataset and this process continues until the process converges [i.e., until the convergence rule/desired accuracy is met] – Sharma, paragraph 51).”

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wodetski to retrain the model with new training data until a criterion is met, as disclosed by Sharma, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would ensure that the model is up-to-date and better geared toward fulfilling its intended purpose, thereby increasing the accuracy of the model on the task. See Sharma, paragraph 51.

Claim 10 is a method claim corresponding to system claim 3 and is rejected for the same reasons as given in the rejection of that claim. Similarly, claim 17 is a non-transitory computer-readable medium claim corresponding to system claim 3 and is rejected for the same reasons as given in the rejection of that claim.
Regarding claim 5, Wodetski, as modified by Sharma, discloses that “refining the one or more training rules comprises adjusting one or more hyper parameters, improving sampling rules, or collecting additional historical data (aggregated data are then shared with data sources who use the aggregated data to train a local machine-learning model to generate new data to be provided to a central server; these new data [additional historical data, “historical” in the sense that they are collected prior to the training] are used to refine the training dataset which is then used to retrain the machine-learning model; the machine-learning model is tested against the validation dataset and this process continues until the process converges – Sharma, paragraph 51).”

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wodetski to collect more training data for retraining the model, as disclosed by Sharma, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would ensure that the model is up-to-date and better geared toward fulfilling its intended purpose, thereby increasing the accuracy of the model on the task. See Sharma, paragraph 51.

Claim 12 is a method claim corresponding to system claim 5 and is rejected for the same reasons as given in the rejection of that claim. Similarly, claim 19 is a non-transitory computer-readable medium claim corresponding to system claim 5 and is rejected for the same reasons as given in the rejection of that claim.
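Claim 6 narrows the "improving sampling rules" option of claim 5 to stratified, cluster, or random sampling, which are standard statistical techniques. As a purely illustrative sketch (the function and field names below are assumptions, not from the record), stratified sampling draws from each data group in proportion to its size rather than uniformly at random:

```python
# Hypothetical sketch of stratified sampling, one of the "sampling
# rules" recited in claims 5-6. Field names ("group") are illustrative.
import random
from collections import defaultdict

def stratified_sample(data_objects, n, key=lambda obj: obj["group"]):
    """Draw about n items, proportionally from each group (stratum)."""
    strata = defaultdict(list)
    for obj in data_objects:
        strata[key(obj)].append(obj)
    sample = []
    for members in strata.values():
        # Each stratum contributes in proportion to its share of the data.
        k = max(1, round(n * len(members) / len(data_objects)))
        sample.extend(random.sample(members, min(k, len(members))))
    return sample

random.seed(1)
data = [{"group": "A", "v": i} for i in range(80)] + \
       [{"group": "B", "v": i} for i in range(20)]
picked = stratified_sample(data, 10)
# A 10-item sample keeps the 80/20 split: 8 from group A, 2 from group B.
print(len(picked), sum(1 for o in picked if o["group"] == "B"))  # 10 2
```

Plain random sampling of the same data could easily miss group B entirely, which is why swapping sampling rules is a plausible way to "refine" a training set toward an under-represented group.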
Regarding claim 7, Wodetski, as modified by Sharma, discloses that “collecting the additional historical data comprises collecting augmented data or rescaling the historical data associated with the plurality of the data groups (aggregated data are then shared with data sources who use the aggregated data to train a local machine-learning model to generate new data to be provided to a central server; these new data [additional/augmented historical data] are used to refine the training dataset which is then used to retrain the machine-learning model; the machine-learning model is tested against the validation dataset and this process continues until the process converges – Sharma, paragraph 51).” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wodetski to collect more training data for retraining the model, as disclosed by Sharma, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would ensure that the model is up-to-date and better geared toward fulfilling its intended purpose, thereby increasing the accuracy of the model on the task. See Sharma, paragraph 51. Claim 14 is a method claim corresponding to system claim 7 and is rejected for the same reasons as given in the rejection of that claim. Claims 4, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Wodetski in view of Sharma and further in view of Hoque et al., “Risk-Ranking Matrix for Security Patching of Exploitable Vulnerabilities,” in 2808.1 Proc. 1st Int’l Conf. Frontier of Digital Tech. Towards a Sustainable Soc’y 050004 (2023) (“Hoque”) and Madani et al. (US 20190197368) (“Madani”). Regarding claim 4, the rejection of claim 1 is incorporated. 
Wodetski further discloses that “the processor is further configured to: process … the first set of the data objects and the set of the logic relationships with the one or more regulation rules and the one or more core values (industry specific rules and customer rules [regulation rules and core values] are deployed from a contract AI developer to an AI engine associated with the contract AI platform – Wodetski, paragraph 132; a contract management application passes new contract documents to the AI platform, and the AI/ML rules [including the industry-specific and customer rules] are applied to the contract document [which may be regarded as part of the first set of data objects] – id. at paragraph 133; see also Fig. 16 (showing that the contract AI platform contains several classifiers, i.e., the document classifications [logic relationships] are also processed)); … and … determine whether the first set of the data objects meets the one or more regulation rules and the one or more core values (industry specific rules and customer rules [regulation rules and core values] are deployed from a contract AI developer to an AI engine associated with the contract AI platform – Wodetski, paragraph 132; a contract management application passes new contract documents to the AI platform, and the AI/ML rules [including the industry-specific and customer rules] are applied to the contract document [i.e., it is determined whether the contract document meets the industry-specific and customer rules, i.e., the regulation rules and core values] – id. at paragraph 133).” Neither Wodetski nor Sharma appears to disclose explicitly the further limitations of the claim. 
However, Hoque discloses “generat[ing] one or more threat scenarios associated with the first set of the data objects, wherein the one or more threat scenarios … include[s] one or more vulnerable data objects in the first set of the data objects (Table 2 of Hoque shows a risk ranking matrix that assigns a Common Vulnerability Scoring System (CVSS) score and a ranking to various threat scenarios considering the gained access type, confidentiality impact, integrity impact, availability impact, access complexity, and authentication [collectively comprising a threat scenario]; second hollow bullet point under “Design of Proposed Risk Ranking Matrix Set Ups” discloses that the impact on the data confidentiality of the system if the vulnerability is exploited is considered [i.e., the scenario includes vulnerable data objects]); determin[ing] one or more risk levels of the one or more threat scenarios based on a risk matrix (Table 2 of Hoque shows a risk ranking matrix that assigns a Common Vulnerability Scoring System (CVSS) score and a ranking [risk level] to various threat scenarios considering the gained access type, confidentiality impact, integrity impact, availability impact, access complexity, and authentication); and … testing the [system] using the one or more threat scenarios and the one or more risk levels (Hoque Table 3 and the section entitled “Application of the Proposed Matrix on Test Data” show that the risk matrix containing the threat scenarios was tested to determine the number of exploitable vulnerabilities in a national vulnerability database based on the ranking) ….” Hoque and the instant application both relate to data security and are analogous.
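As an illustrative aside, a risk-ranking matrix of the general kind Hoque describes maps each threat scenario's attributes (confidentiality, integrity, and availability impact; access complexity; authentication) to a numeric score and a qualitative rank. The weights and rank cutoffs below are invented for illustration and are not Hoque's actual CVSS-based values.

```python
# Hypothetical attribute weightings (NOT Hoque's CVSS values).
IMPACT = {"none": 0, "partial": 1, "complete": 2}
COMPLEXITY = {"low": 2, "medium": 1, "high": 0}   # low complexity = easier to exploit
AUTH = {"none": 2, "single": 1, "multiple": 0}    # fewer auth hurdles = riskier

def score_scenario(conf, integ, avail, complexity, auth):
    """Sum the weighted attribute levels of one threat scenario into a 0-10 score."""
    return (IMPACT[conf] + IMPACT[integ] + IMPACT[avail]
            + COMPLEXITY[complexity] + AUTH[auth])

def rank(score):
    """Bucket a numeric score into a qualitative risk level."""
    if score >= 8:
        return "critical"
    if score >= 5:
        return "high"
    if score >= 3:
        return "medium"
    return "low"
```

For example, a scenario with complete C/I/A impact, low access complexity, and no authentication scores 10 and ranks "critical" under these invented cutoffs.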
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Wodetski and Sharma to analyze a data threat environment using a risk matrix, as disclosed by Hoque, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would save critical assets and objectives by providing an improved vulnerability risk ranking framework. See Hoque, Introduction. Neither Wodetski, Sharma, nor Hoque appears to disclose explicitly the further limitations of the claim. However, Madani discloses “process[ing], through a generative network, the … data (generative adversarial network [generative network] based framework generates medical image data and trains a medical image classifier based on an expanded medical image dataset [data] – Madani, paragraph 22) …; … [and] test[ing] the machine learning model (GAN was trained on Dataset 1 and tested using Dataset 2 – Madani, paragraph 66) ….” Madani and the instant application both relate to generative networks and are analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Wodetski, Sharma, and Hoque to employ a tested generative network, as disclosed by Madani, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would allow the network to generate data rather than merely classifying them, thereby enhancing its performance capabilities. See Madani, paragraph 2. Claim 11 is a method claim corresponding to system claim 4 and is rejected for the same reasons as given in the rejection of that claim. Similarly, claim 18 is a non-transitory computer-readable medium claim corresponding to system claim 4 and is rejected for the same reasons as given in the rejection of that claim. Claims 6, 13, and 20 are rejected under 35 U.S.C. 
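As an illustrative aside, the Madani-style workflow (expand a training dataset with synthetic examples from a generative model, train on the expanded set, then test on a separate held-out dataset) can be sketched with a trivial stand-in generator. Nothing here is an actual GAN; every name and number is a hypothetical placeholder.

```python
import random

def toy_generator(real_examples, n, rng):
    """Stand-in for a trained generative network: emits perturbed copies of real data."""
    return [(x + rng.uniform(-0.1, 0.1), label)
            for x, label in (rng.choice(real_examples) for _ in range(n))]

def train_threshold_classifier(examples):
    """Toy classifier: thresholds at the midpoint between the two class means."""
    zeros = [x for x, y in examples if y == 0]
    ones = [x for x, y in examples if y == 1]
    cut = (sum(zeros) / len(zeros) + sum(ones) / len(ones)) / 2
    return lambda x: int(x > cut)

def accuracy(classifier, test_set):
    """Train-on-Dataset-1, test-on-Dataset-2 evaluation."""
    return sum(classifier(x) == y for x, y in test_set) / len(test_set)
```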
103 as being unpatentable over Wodetski in view of Sharma and further in view of Mazzoleni et al. (US 20200279171) (“Mazzoleni”). Regarding claim 6, neither Wodetski nor Sharma appears to disclose explicitly the further limitations of the claim. However, Mazzoleni discloses that “the sampling rules comprise stratified sampling, cluster sampling or random sampling (various query techniques may be used to identify relevant data and non-relevant data, such as stratified sampling – Mazzoleni, paragraph 33).” Mazzoleni and the instant application both relate to machine learning and are analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Wodetski and Sharma to use stratified sampling, as disclosed by Mazzoleni, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would ensure that each subgroup is properly represented in the sample. See Mazzoleni, paragraph 33. Claim 13 is a method claim corresponding to system claim 6 and is rejected for the same reasons as given in the rejection of that claim. Regarding claim 20, the rejection of claim 19 is incorporated.
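As an illustrative aside, stratified sampling (the first of the claimed sampling rules) draws from each subgroup in proportion to its size, so that every subgroup is represented in the sample. The sketch below is generic; the function names, fraction, and fixed seed are illustrative choices, not anything disclosed by Mazzoleni.

```python
import random
from collections import defaultdict

def stratified_sample(records, stratum_of, fraction, seed=0):
    """Draw a proportional sample from each stratum of `records`."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for record in records:
        strata[stratum_of(record)].append(record)   # partition into subgroups
    sample = []
    for members in strata.values():
        k = max(1, round(len(members) * fraction))  # at least one per stratum
        sample.extend(rng.sample(members, k))
    return sample
```

With a 10% fraction over 10 "a" records and 90 "b" records, the sample contains 1 "a" and 9 "b", preserving the 1:9 subgroup proportions that plain random sampling could miss.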
Sharma further discloses that “collecting the additional historical data comprises collecting augmented data or rescaling the historical data associated with the plurality of the data groups (aggregated data are then shared with data sources who use the aggregated data to train a local machine-learning model to generate new data to be provided to a central server; these new data [additional/augmented historical data] are used to refine the training dataset which is then used to retrain the machine-learning model; the machine-learning model is tested against the validation dataset and this process continues until the process converges – Sharma, paragraph 51).” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wodetski to collect more training data for retraining the model, as disclosed by Sharma, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would ensure that the model is up-to-date and better geared toward fulfilling its intended purpose, thereby increasing the accuracy of the model on the task. See Sharma, paragraph 51. Neither Wodetski nor Sharma appears to disclose explicitly the further limitations of the claim. However, Mazzoleni discloses that “the sampling rules comprise stratified sampling, cluster sampling or random sampling (various query techniques may be used to identify relevant data and non-relevant data, such as stratified sampling – Mazzoleni, paragraph 33) ….” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Wodetski and Sharma to perform stratified sampling, as disclosed by Mazzoleni, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would ensure that each subgroup is properly represented in the sample. See Mazzoleni, paragraph 33. 
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN C VAUGHN, whose telephone number is (571) 272-4849. The examiner can normally be reached M-R 7:00a-5:00p ET.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kamran Afshar, can be reached at 571-272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RYAN C VAUGHN/
Primary Examiner, Art Unit 2125

Footnote 1: This language is somewhat confusing insofar as it suggests that the same objects/relationships are used both to train and to test the model, which is normally not the case in practice. It also recites testing the model after the desired accuracy has already been determined, which is backwards: testing is normally performed in order to determine whether the desired accuracy is met, and it is unclear what purpose testing would serve once the accuracy is already known. The examiner will construe this language as meaning that a different subset of the same dataset of objects/relationships is used for the testing as part of determining whether the accuracy is met.

Prosecution Timeline

Jun 29, 2023
Application Filed
Feb 05, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602448
PROGRESSIVE NEURAL ORDINARY DIFFERENTIAL EQUATIONS
2y 5m to grant Granted Apr 14, 2026
Patent 12602610
CLASSIFICATION BASED ON IMBALANCED DATASET
2y 5m to grant Granted Apr 14, 2026
Patent 12561583
Systems and Methods for Machine Learning in Hyperbolic Space
2y 5m to grant Granted Feb 24, 2026
Patent 12541703
MULTITASKING SCHEME FOR QUANTUM COMPUTERS
2y 5m to grant Granted Feb 03, 2026
Patent 12511526
METHOD FOR PREDICTING A MOLECULAR STRUCTURE
2y 5m to grant Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
62%
Grant Probability
81%
With Interview (+19.4%)
3y 9m
Median Time to Grant
Low
PTA Risk
Based on 235 resolved cases by this examiner. Grant probability derived from career allow rate.
