Prosecution Insights
Last updated: April 19, 2026
Application No. 17/811,229

MACHINE LEARNING TECHNIQUES USING MODEL DEFICIENCY DATA OBJECTS FOR TENSOR-BASED GRAPH PROCESSING MODELS

Final Rejection (§102, §103)
Filed: Jul 07, 2022
Examiner: LEWIS, MATTHEW LEE
Art Unit: 2144
Tech Center: 2100 — Computer Architecture & Software
Assignee: Optum Inc.
OA Round: 2 (Final)
Grant Probability: 0% (At Risk)
OA Rounds: 3-4
To Grant: 3y 3m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (grants only 0% of cases; 0 granted / 3 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; based on resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline); 30 currently pending
Total Applications: 33 (career history; across all art units)

Statute-Specific Performance

§101: 33.9% (-6.1% vs TC avg)
§103: 35.9% (-4.1% vs TC avg)
§102: 20.8% (-19.2% vs TC avg)
§112: 9.4% (-30.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 3 resolved cases

Office Action

§102 §103
Detailed Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Amendments

This action is in response to amendments filed October 8, 2025, in which claims 1-5, 7-12, and 14-19 have been amended. Further, claims 6, 13, and 20 have been cancelled, claims 21-23 have been added, and claims 1-5, 7-12, 14-19, and 21-23 are currently pending.

Response to Arguments

Regarding the applicant’s traversal of the 35 U.S.C. 101 rejections of the previous office action, the applicant’s arguments filed October 8, 2025 have been fully considered and are persuasive. First, applicant argues that claim 1 does not recite a mental process because many of the limitations cannot practically be performed in the human mind, citing multiple amended limitations previously identified as abstract ideas. These include the substitution of the word “identifying” with the word “receiving” in multiple instances, such as “receiving… a positive input set…”, which indeed no longer recites a mental process, as well as the “retraining” limitation. While the examiner agrees that these limitations do not recite mental processes, some remaining limitations still do and were not amended to overcome this ground. However, applicant further argues that claim 1, in view of the specification at [0019-0020] and [0096-0097], recites an improvement to technology tied to machine learning, including techniques for identifying deficiencies in a model and selectively retraining portions of that model based on the identified deficiencies. In view of the claims as amended, the examiner finds this argument to be persuasive.
Since many of the previously cited abstract ideas are no longer abstract, the additional elements of the claim and the claim as a whole seem to relate this improvement to the technology in a way that successfully integrates the abstract ideas into a practical application. As such, the rejections of claims 1-5, 7-12, 14-19, and 21-23 under 35 U.S.C. 101 have been overcome and have subsequently been withdrawn.

Regarding the applicant’s traversal of the 35 U.S.C. 102/103 rejections of the previous office action, the applicant’s arguments filed October 8, 2025 have been fully considered, and are unpersuasive. Applicant argues that the reference, IWAKURA, fails to disclose “wherein a model deficiency data object that indicates one or more portions of the graph representation machine learning model is generated based on the output” as well as “retraining… the one or more portions of the graph representation machine learning model based on the model deficiency data object, wherein the model deficiency data object is configured to indicate the one or more portions of the graph representation machine learning model to increase a speed of retraining operations”. The examiner respectfully submits that IWAKURA does teach these limitations as follows, and as further cited below:

“receiving, by the one or more processors and from a graph representation machine learning model, an output comprising a group of holistic graph links based at least in part on an input comprising the plurality of prediction input feature sets, wherein a model deficiency data object that indicates one or more portions of the graph representation machine learning model is generated based on the output”: ([0042-0045] “The attendance book data (prediction input feature set) can be formed as graph data composed of a plurality of nodes and a plurality of edges connecting the nodes (generating a group of holistic graph links).
The multiple nodes are composed of date nodes, month nodes, and attendance status nodes. The date, month, and attendance status nodes are present corresponding to the numbers of dates, months, and types of attendance status, respectively (The group of links is “holistic” since all the nodes are connected via a plurality of edges). The respective nodes each have a number corresponding to one of the dates, months, and types of attendance status. For example, when the date is “1”, the value is set to “1”, when the attendance status is "absence”, the value is set to “2”, and when the attendance status is “attendance”, the value is set to “1”. The edge connects the related nodes among the date, month, and attendance status nodes. The following describes learning in the deep tensor. FIG. 5 is a diagram explaining an example of learning in the deep tensor. As illustrated in FIG. 5, the learning device 100 produces an input tensor from the attendance book data attached with a teacher label (label A) indicating that the employee had received medical treatment (initial input from above.), for example. The learning device 100 performs the tensor decomposition on the input tensor to produce a core tensor (the generation of the model deficiency data object) so as to be similar to the target core tensor randomly produced initially. The learning device 100 inputs the core tensor to a neural network (NN) to obtain a classification result (label A: 70%, label B: 30%). 
Thereafter, the learning device 100 calculates a classification error between the classification result (label A: 70%, label B: 30%) and the teacher label (label A: 100%, label B: 0%) … …In the prediction after the learning, the input tensor is converted into the core tensor (partial pattern of the input tensor) so as to be similar to the target core tensor by the tensor decomposition, and the core tensor is input to the neural network, resulting in a prediction result being obtained.”) The above cited paragraphs describe the graph tensors being determined for the input by the deep tensor, which is further described in ([0003] “Graph structure learning techniques (hereinafter one configuration of devices performing the graph structure learning is called a “deep tensor (DT)”) have been known that can perform deep learning on graph structured data. The DT uses the graph structure as input and handles the graph structure as tensor data (hereinafter described as a tensor in some cases). The DT extracts a partial structure of a graph (a partial pattern of a tensor) contributing to prediction as a core tensor to achieve highly accurate prediction.” The core tensor is the derived partial structure of the graph identifying the patterns/deficiencies, which is used for highly accurate predictions, much the same as the model deficiency object generated in the claims.) 
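The deep-tensor workflow the cited paragraphs describe (input tensor decomposed into a core tensor, core tensor fed to a neural network, classification error propagated back to the target core tensor) can be sketched as a toy loop. This is an illustrative stand-in, not IWAKURA's actual implementation: the "decomposition" is a simple masking projection, the "neural network" is a fixed mean-activation scorer, and the update is a greedy support flip.

```python
def extract_core(x, mask):
    # stand-in for tensor decomposition: project the input tensor onto the
    # support of the target core tensor (a "partial pattern" of the input)
    return [xi if m else 0.0 for xi, m in zip(x, mask)]

def classify(core):
    # stand-in neural network: mean activation as P(label A)
    return sum(core) / len(core)

def train_step(x, mask, teacher_a):
    # propagate the classification error back toward the target core tensor:
    # keep the single support flip that most reduces the error
    best_mask = mask
    best_err = abs(teacher_a - classify(extract_core(x, mask)))
    for i in range(len(mask)):
        trial = mask[:i] + [not mask[i]] + mask[i + 1:]
        err = abs(teacher_a - classify(extract_core(x, trial)))
        if err < best_err:
            best_mask, best_err = trial, err
    return best_mask, best_err

x = [1.0, 0.0, 1.0, 1.0]            # toy "input tensor" (flattened)
mask = [True, True, False, False]   # randomly initialized target core support
err0 = abs(1.0 - classify(extract_core(x, mask)))  # teacher: label A = 100%
mask, err1 = train_step(x, mask, 1.0)              # err1 < err0
```

One step moves the core's support toward the entries that contribute to the prediction, mirroring the correction of the target core tensor described in the quoted paragraphs.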
Further, IWAKURA teaches “retraining, by the one or more processors, the one or more portions of the graph representation machine learning model based on the model deficiency data object wherein the model deficiency data object is configured to indicate the one or more portions of the graph representation machine learning model …”: ([Abstract] “A machine learning method includes acquiring data including attendance records of employees and information indicating which employee has taken a leave of absence from work, in response to determining that a first employee of the employees has not taken a leave of absence in accordance with the data, generating a first tensor on a basis of an attendance record of the first employee and parameters associated with elements included in the attendance record (generating a tensor-based deficiency object (the tensor (core tensor) made in response to the employee not actually being absent)), in response to determining that a second employee of the employees has taken a leave of absence in accordance with the data, modifying the parameters, and generating a second tensor on a basis of an attendance record of the second employee and the modified parameters, and generating a model by machine learning based on the first tensor and the second tensor (modifying the parameters and generating a second tensor, to generate a model based on the first and second tensor is equivalent to retraining the model based on the model deficiency object since this core tensor is used to retrain it based on these deficiencies).”) Further, the claim cites at the end of the above limitation, “…to increase a speed of retraining operations”: This limitation merely recites intended use, without limiting the scope of what is happening with the invention. 
IWAKURA separates the model deficiency data object, as cited above, and uses the first and second tensors cited above (which include the model deficiency data object) to retrain the model by generating a new model with updated parameters. It does not cite the reasoning of increasing speed of retraining operations, instead citing improved accuracy by isolating deficiencies and only retraining based on those. Regardless of the intent, whether to improve accuracy or to improve speed, the functional language of the claims is taught as cited above. Therefore, claim 1 is still rejected under 35 U.S.C. 102. Further, independent claims 8 & 15 are still rejected as well, as they recite similar amendments in addition to the merits of the previous action. Further, dependent claims incorporate the amendments of these claims and are also rejected under this rationale, in addition to their own rationale, as cited below.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-2, 8-9, 15-16, & 21-22 are rejected under 35 U.S.C. 102(a)(1) as being clearly anticipated by Iwakura, et al.: U.S. PGPUB No.
US2020/0193327 A1 (hereafter, IWAKURA) Regarding claim 1, IWAKURA teaches “A computer-implemented method”: ([Abstract] “A machine learning method (a computer-implemented method) includes…”) Further, IWAKURA teaches “receiving, by one or more processors, a positive input set that is associated with a risk category”: ([Abstract] “A machine learning method includes acquiring data including attendance records of employees and information indicating which employee has taken a leave of absence from work (a risk category), in response to determining that a first employee of the employees has not taken a leave of absence in accordance with the data (positive input data received from the attendance record (the positive input set), as shown in step S201 in figure 18), generating a first tensor on a basis of an attendance record of the first employee and parameters associated with elements included in the attendance record… (here, the input set, including the actual attendance record that defied the incorrect data, is a “positive” input associated with the risk category).”) Further, this step of the process can be seen in step S201 of Figure 18. Further, IWAKURA teaches “wherein: the positive input set comprises a plurality of prediction input data objects that are associated with an affirmative label for the risk category, and a prediction input data object in the positive input set is associated with… a prediction input feature set of a plurality of prediction input feature sets”: ([0042] “The attendance book data (The positive input set) can be formed as graph data composed of a plurality of nodes and a plurality of edges connecting the nodes (a plurality of prediction input data objects). The multiple nodes are composed of date nodes, month nodes, and attendance status nodes. The date, month, and attendance status nodes are present corresponding to the numbers of dates, months, and types of attendance status, respectively (prediction input feature set). 
The respective nodes each have a number corresponding to one of the dates, months, and types of attendance status. For example, when the date is “1”, the value is set to “1”, when the attendance status (the attendance status being the risk category) is "absence”, the value is set to “2”, and when the attendance status is “attendance”, the value is set to “1”. (The value being set to match the input shows that it is positive input labels associated with the risk category, since the model is learning to adjust weights/values from the input) The edge connects the related nodes among the date, month, and attendance status nodes (a plurality of prediction input feature sets).”) Further, IWAKURA teaches “a prediction input data object in the positive input set is associated with… a plurality of risk tensors generated based at least in part on a categorical subset of the prediction input feature set that is associated with an input category of a plurality of input categories”: ([0042] We see in 0042, that the attendance book data (The positive input set) holds a plurality of prediction input data objects and serves as a prediction input feature set. date, absence, attendance, etc., are all categorical subsets of the prediction input data objects.) And further: ([0043-0045] The following describes learning in the deep tensor… As illustrated in FIG. 
5, the learning device 100 produces an input tensor from the attendance book data (risk tensor associated with input) attached with a teacher label (label A) indicating that the employee had received medical treatment… The learning device 100 inputs the core tensor to a neural network (NN) to obtain a classification result… The learning device 100 propagates the classification error to the target core tensor to correct the target core tensor so as to be close to the partial graph structure contributing to prediction, i.e., a feature pattern indicating a feature of the unwell person or a feature pattern indicating a feature of the well person (tensors generated at least partially based on the categorical subsets of input features) ...”) And further: ([0059] “The tensor DB 106 is a database that stores therein respective tensors (tensor data) produced from the pieces of learning data about the respective employees (a plurality of tensors associated with the learning data/input). The tensor DB 106 stores therein tensor data in which each tensor and the label are in association with each other...”) And further: [Figure 11] This figure shows the representation of a tensor. I have encased and labeled the categorized features above, which the tensor is influenced by. Further, IWAKURA teaches “a prediction input data object in the positive input set is associated with… a plurality of tensor-based graph representations generated based at least in part on the plurality of risk tensors for the prediction input data object”: ([0037] “The following describes the learning data input to the deep tensor. FIG. 2 is a diagram explaining a[n] example of the learning data. 
The learning data includes the attendance book data for six months and a label that indicates whether the employee received medical treatment within three months after the six months….”) And further: [Figure 2] shows the input data described above… And further: Figure 12 shows the two tensors (a plurality) generated from the input being compared. Further, IWAKURA teaches “receiving, by the one or more processors, a tensor-based graph representation set for the positive input set”: ([0042] The attendance book data, as cited above, is a tensor-based graph representation set, and is the received data used for the positive input set.) Further, IWAKURA teaches “wherein… the tensor-based graph representation set comprises a set of the plurality of tensor-based graph representations corresponding to the plurality of prediction input data objects”: [Figure 5] The “input tensor” is shown as representing the attendance book data, including the medically treated and untreated individuals and thus, all above cited objects and tensors, within one large tensor-based graph representation set. Further, IWAKURA teaches “the tensor-based graph representation set describes a group of tensor-based graph links”: ([0042] “…For example, when the date is “1”, the value is set to “1”, when the attendance status is "absence”, the value is set to “2”, and when the attendance status is “attendance”, the value is set to “1”. (date, absence, attendance, etc., are all categorical subsets of the prediction input data objects) The edge connects the related nodes among the date, month, and attendance status nodes. 
(The edges connect all related nodes) And further: [Figure 4] The above figure as cited in ([0011] “…is a diagram illustrating an example of extraction of a partial graph structure”) further described in ([0040].”) Further, IWAKURA teaches “receiving, by the one or more processors and from a graph representation machine learning model, an output comprising a group of holistic graph links based at least in part on an input comprising the plurality of prediction input feature sets, wherein a model deficiency data object that indicates one or more portions of the graph representation machine learning model is generated based on the output”: ([0042-0045] “As cited above, the attendance book data is the prediction input feature set, which “can be formed as graph data composed of a plurality of nodes and a plurality of edges connecting the nodes” (generating a group of holistic graph links). Further, The group of links is “holistic” since all the nodes are connected via a plurality of edges) … … The learning device 100 performs the tensor decomposition on the input tensor to produce a core tensor (the generation of the model deficiency data object) so as to be similar to the target core tensor randomly produced initially. … …In the prediction after the learning, the input tensor is converted into the core tensor (partial pattern of the input tensor) so as to be similar to the target core tensor by the tensor decomposition, and the core tensor is input to the neural network, resulting in a prediction result being obtained.”) The above cited paragraphs describe the graph tensors being determined for the input by the deep tensor, which is further described in ([0003] The core tensor is the derived partial structure of the graph identifying the patterns/deficiencies, which is used for highly accurate predictions, much the same as the model deficiency object generated in the claims.) 
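The node-and-edge representation quoted from [0042] can be made concrete with a toy construction. The status values (attendance = 1, absence = 2) follow the quoted example; the record layout, helper names, and the use of a flat adjacency matrix as the "tensor" are illustrative assumptions, not IWAKURA's disclosed data structures.

```python
STATUS_VALUE = {"attendance": 1, "absence": 2}  # values per the quoted paragraph

def attendance_graph(records):
    """records: list of (date, month, status) tuples -> (nodes, edges)."""
    nodes, edges = set(), set()
    for date, month, status in records:
        d = ("date", date)
        m = ("month", month)
        s = ("status", STATUS_VALUE[status])
        nodes |= {d, m, s}
        # the edges connect the related date / month / attendance-status nodes
        edges |= {(d, m), (d, s)}
    return nodes, edges

def to_tensor(nodes, edges):
    """Binary adjacency 'tensor' (here a 2-D matrix) over a fixed node order."""
    order = sorted(nodes)
    index = {n: i for i, n in enumerate(order)}
    t = [[0] * len(order) for _ in order]
    for a, b in edges:
        t[index[a]][index[b]] = t[index[b]][index[a]] = 1
    return order, t

records = [(1, 4, "attendance"), (2, 4, "absence")]
nodes, edges = attendance_graph(records)
order, tensor = to_tensor(nodes, edges)
```

Every node reachable from every other through the shared month node illustrates why the examiner characterizes the resulting group of links as "holistic."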
Further, IWAKURA teaches “generating a plurality of deficiency graph links that is… present in the group of holistic graph links and… absent from the group of tensor-based graph links”: ([0034-0035] describes the “core tensor” is made from the input data to be compared to the “target core tensor” in an effort to make it “similar to the target core tensor.” As such, the “core tensor” in this case is where the deficiencies lie, and can be interpreted as the “deficiency data object”. The two tensors not being the same, means that the deficiency graph links, while present in the holistic graph links by nature, are absent from the other tensor-based graph links. And further: ([0040-0043] describes that the core tensor is produced from the graph links generated from the input (attendance book data) and is used as the deficiency object, as cited above. (This is, to say, that the graph links used in the core tensor (deficiency object) are also in the group of holistic graph links, but not in the “tensor-based graph links” that it is compared to.) The attendance book data (prediction input feature set) “can be formed as graph data composed of a plurality of nodes and a plurality of edges connecting the nodes” (each node an input data object). “The multiple nodes are composed of date nodes, month nodes, and attendance status nodes. The date, month, and attendance status nodes are present corresponding to the numbers of dates, months, and types of attendance status, respectively” (input data objects). “The respective nodes each have a number corresponding to one of the dates, months, and types of attendance status. For example, when the date is “1”, the value is set to “1”, when the attendance status is "absence”, the value is set to “2”, and when the attendance status is “attendance”, the value is set to “1”. 
“(tensor-based graph feature embeddings) Further, IWAKURA teaches “generating the model deficiency data object based on a subset of the plurality of deficiency graph links that is not associated with an existing predictive action” and “retraining, by the one or more processors, the one or more portions of the graph representation machine learning model based on the model deficiency data object wherein the model deficiency data object is configured to indicate the one or more portions of the graph representation machine learning model”: ([Abstract] The abstract describes generating a tensor-based deficiency object (the tensor (core tensor) made in response to the employee not actually being absent), which is not already associated with an existing predictive action and further, modifying the parameters and generating a second tensor, to generate a model based on the first and second tensor, which is equivalent to retraining the model based on the model deficiency object since this core tensor is used to retrain it based on these deficiencies).”) Further, the claim cites at the end of the above limitation, “…to increase a speed of retraining operations”: This limitation merely recites intended use, without limiting the scope of what is happening with the invention. IWAKURA separates the model deficiency data object, as cited above, and uses the first and second tensors cited above (which include the model deficiency data object) to retrain the model by generating a new model with updated parameters. It does not cite the reasoning of increasing speed of retraining operations, instead citing improved accuracy by isolating deficiencies and only retraining based on those. Regardless of the intent, whether to improve accuracy or to improve speed, the functional language of the claims is taught as cited above. Regarding claim 2, IWAKURA teaches the limitations of claim 1. 
Further, IWAKURA teaches “wherein the graph representation machine learning model is configured to: generate a tensor-based graph feature embedding that is associated with a respective input category of a risk tensor that is used to generate a tensor-based graph representation of the plurality of tensor-based graph representations and associated with the prediction input data object”: ([0042-0045] “The attendance book data (prediction input feature set) can be formed as graph data composed of a plurality of nodes and a plurality of edges connecting the nodes (each node an input data object). The multiple nodes are composed of date nodes, month nodes, and attendance status nodes. The date, month, and attendance status nodes are present corresponding to the numbers of dates, months, and types of attendance status, respectively (input data objects). The respective nodes each have a number corresponding to one of the dates, months, and types of attendance status. For example, when the date is “1”, the value is set to “1”, when the attendance status is "absence”, the value is set to “2”, and when the attendance status is “attendance”, the value is set to “1”. (tensor-based graph feature embeddings) The edge connects the related nodes among the date, month, and attendance status nodes… …The learning device 100 learns a prediction model and a method of tensor decomposition using extended backpropagation, which is an extended method of backpropagation. The learning device 100 corrects various parameters in the NN so as to reduce the classification error by propagating the classification error in an input layer, an intermediate layer, and an output layer included in the NN such that the error is propagated toward lower layers. 
The learning device 100 propagates the classification error to the target core tensor to correct the target core tensor so as to be close to the partial graph structure contributing to prediction, i.e., a feature pattern indicating a feature of the unwell person or a feature pattern indicating a feature of the well person. In the prediction after the learning, the input tensor is converted into the core tensor (partial pattern of the input tensor) so as to be similar to the target core tensor by the tensor decomposition, and the core tensor is input to the neural network, resulting in a prediction result being obtained.”) Further, IWAKURA teaches “generate an inferred hybrid risk score for the prediction input data object based at least in part on the tensor-based graph feature embedding”: [Figure 5] The figure is further described in ([0043] “The following describes learning in the deep tensor. FIG. 5 is a diagram explaining an example of learning in the deep tensor. As illustrated in FIG. 5, the learning device 100 produces an input tensor from the attendance book data attached with a teacher label (label A) indicating that the employee had received medical treatment, for example (a tensor-based graph feature embedding). The learning device 100 performs the tensor decomposition on the input tensor to produce a core tensor so as to be similar to the target core tensor randomly produced initially. The learning device 100 inputs the core tensor to a neural network (NN) to obtain a classification result (label A: 70%, label B: 30%). Thereafter, the learning device 100 calculates a classification error between the classification result (label A: 70%, label B: 30%) and the teacher label (label A: 100%, label B: 0%).” (A: 100%, B: 0%) is the cited “inferred hybrid risk score” associated with the input data object. 
IWAKURA is classifying the result of the comparison of the core tensor (deficiency object) with the target core tensor (ideal state) to determine if the employee’s behavior resembles those who eventually require treatment, thus an “inferred hybrid risk score” is given that was based upon the graph feature embeddings.)

Regarding claim 8, IWAKURA teaches “A system comprising: one or more processors; and at least one memory storing processor-executable instructions that, when executed by the one or more processors, cause the one or more processors to…”: [Figure 19] The figure above shows the necessary hardware for performing the method as noted in ([0026] “FIG. 19 is a diagram illustrating an exemplary hardware structure”), which includes a processor (100d) and at least one memory (100c), which executes the machine learning model (program code). And further, ([Abstract] “A machine learning method (a computer-implemented method) includes acquiring data including attendance records of employees and information indicating which employee has taken a leave of absence from work (a risk category), in response to determining that a first employee of the employees has not taken a leave of absence in accordance with the data, generating a first tensor on a basis of an attendance record of the first employee and parameters associated with elements included in the attendance record (generating a tensor-based deficiency object (the tensor made in response to the employee not actually being absent)), in response to determining that a second employee of the employees has taken a leave of absence in accordance with the data, modifying the parameters, and generating a second tensor on a basis of an attendance record of the second employee and the modified parameters, and generating a model by machine learning based on the first tensor and the second tensor.”) Further, claim 8 recites similar limitations as claim 1 and is rejected under the same rationale.
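The classification-error comparison quoted above (a classification result of label A: 70%, label B: 30% against a teacher label of label A: 100%, label B: 0%) can be sketched numerically. IWAKURA's cited paragraphs do not specify the loss function, so mean squared error is an illustrative choice here.

```python
def classification_error(result, teacher):
    # mean squared error between the predicted and teacher label distributions
    return sum((r - t) ** 2 for r, t in zip(result, teacher)) / len(result)

result = [0.70, 0.30]    # (label A, label B), as in the quoted FIG. 5 example
teacher = [1.00, 0.00]   # teacher label: the employee did receive treatment
err = classification_error(result, teacher)   # 0.09
```

It is this scalar error that the learning device propagates back through the network and to the target core tensor during learning.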
Regarding claim 9, IWAKURA teaches the limitations of claim 8. Further, claim 9 comprises similar additional limitations as claim 2 and is rejected under the same rationale.

Regarding claim 15, IWAKURA teaches “One or more non-transitory computer-readable storage media including instructions that, when executed by one or more processors, cause the one or more processors to…”: [Figure 19] The figure above shows the necessary hardware for performing the method as noted in ([0026] “FIG. 19 is a diagram illustrating an exemplary hardware structure”), which includes a processor (100d) and at least one memory (100c), which executes the machine learning model (program code) which makes up a computer, which is itself a non-transitory computer-readable storage medium. And further, ([Abstract] “A machine learning method (a computer-implemented method) includes acquiring data including attendance records of employees and information indicating which employee has taken a leave of absence from work (a risk category), in response to determining that a first employee of the employees has not taken a leave of absence in accordance with the data, generating a first tensor on a basis of an attendance record of the first employee and parameters associated with elements included in the attendance record (generating a tensor-based deficiency object (the tensor made in response to the employee not actually being absent)), in response to determining that a second employee of the employees has taken a leave of absence in accordance with the data, modifying the parameters, and generating a second tensor on a basis of an attendance record of the second employee and the modified parameters, and generating a model by machine learning based on the first tensor and the second tensor.”) Further, claim 15 recites similar limitations as claim 1, and is rejected under the same rationale.

Regarding claim 16, IWAKURA teaches the limitations of claim 15.
Further, claim 16 comprises similar additional limitations as claim 2 and is rejected under the same rationale.

Regarding claim 21, IWAKURA teaches the limitations of claim 1. Further, IWAKURA teaches “following retraining of the graph representation machine learning model, generating, by the one or more processors and using the graph representation machine learning model, a classification for a predictive input”: ([0043] “The learning device 100 inputs the core tensor to a neural network (NN) to obtain a classification result (label A: 70%, label B: 30%). Thereafter, the learning device 100 calculates a classification error between the classification result (label A: 70%, label B: 30%) and the teacher label (label A: 100%, label B: 0%).”) Further, IWAKURA teaches “initiating, by the one or more processors, one or more prediction-based actions based at least in part on the classification”: ([Abstract] “A machine learning method includes acquiring data including attendance records of employees and information indicating which employee has taken a leave of absence from work, in response to determining that a first employee of the employees has not taken a leave of absence in accordance with the data, generating a first tensor on a basis of an attendance record of the first employee and parameters associated with elements included in the attendance record (generating a tensor-based deficiency object (the tensor (core tensor) made in response to the employee not actually being absent)), in response to determining that a second employee of the employees has taken a leave of absence in accordance with the data, modifying the parameters, and generating a second tensor on a basis of an attendance record of the second employee and the modified parameters, and generating a model by machine learning based on the first tensor and the second tensor (one or more prediction-based actions (generating a second tensor/generating a new model from the
tensors) based at least in part on the model deficiency object).”) Regarding claim 22, IWAKURA teaches the limitations of claim 8. Further, claim 22 comprises similar additional limitations as claim 21, and is rejected under the same rationale. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. 
In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 3, 10, & 17 are rejected under 35 U.S.C. 103 as being unpatentable over IWAKURA, as applied to claims above, and further in view of Brownlee, J. et al. “Ensemble Learning Methods for Deep Learning Neural Networks.” on March 5 2019 (hereafter, BROWNLEE) Regarding claim 3, IWAKURA teaches the limitations of claim 2. Further, IWAKURA teaches “generating the inferred hybrid risk score for the prediction input data object comprises: (i) generating, using the graph-based machine learning model and based at least in part on the tensor-based graph feature embedding for the input category, a categorical tensor-based graph feature embedding, and (ii) generating… based at least in part on each categorical tensor-based graph feature embedding, the inferred hybrid risk score”: [Figure 5] And further, ([0042-0045] “The attendance book data (prediction input feature set) can be formed as graph data composed of a plurality of nodes and a plurality of edges connecting the nodes (each node an input data object). The multiple nodes are composed of date nodes, month nodes, and attendance status nodes. The date, month, and attendance status nodes are present corresponding to the numbers of dates, months, and types of attendance status, respectively (input data objects). The respective nodes each have a number corresponding to one of the dates, months, and types of attendance status. 
For example, when the date is “1”, the value is set to “1”, when the attendance status is “absence”, the value is set to “2”, and when the attendance status is “attendance”, the value is set to “1”. (tensor-based graph feature embeddings) The edge connects the related nodes among the date, month, and attendance status nodes. The following describes learning in the deep tensor. FIG. 5 is a diagram explaining an example of learning in the deep tensor. As illustrated in FIG. 5, the learning device 100 produces an input tensor from the attendance book data attached with a teacher label (label A) indicating that the employee had received medical treatment, for example (a categorized tensor-based graph feature embedding). The learning device 100 performs the tensor decomposition on the input tensor to produce a core tensor so as to be similar to the target core tensor (as cited above, the core tensor is the deficiency object being compared to the target core tensor (an ideal state) to determine the “risk score” of the employee attendance data. The employee receiving medical treatment or not is a category.) randomly produced initially. The learning device 100 inputs the core tensor to a neural network (NN) to obtain a classification result (label A: 70%, label B: 30%) (Specific categorization). Thereafter, the learning device 100 calculates a classification error between the classification result (label A: 70%, label B: 30%) and the teacher label (label A: 100%, label B: 0%). (A: 100%, B: 0%) is the cited “inferred hybrid risk score” associated with the input data object. IWAKURA is classifying the result of the comparison of the core tensor (deficiency object) with the target core tensor (ideal state) to determine if the employee’s behavior resembles those who eventually require treatment, thus an “inferred hybrid risk score” is given that was based upon the categorized graph feature embeddings.) 
… The learning device 100 propagates the classification error to the target core tensor to correct the target core tensor so as to be close to the partial graph structure contributing to prediction, i.e., a feature pattern indicating a feature of the unwell person or a feature pattern indicating a feature of the well person. In the prediction after the learning, the input tensor is converted into the core tensor (partial pattern of the input tensor) so as to be similar to the target core tensor by the tensor decomposition, and the core tensor is input to the neural network, resulting in a prediction result being obtained.”) IWAKURA fails to explicitly teach “wherein: the graph representation machine learning model comprises a plurality of graph-based machine learning models and a graph-based machine learning model of the plurality of graph-based machine learning models is associated with a respective input category and an ensemble machine learning model” and “using an ensemble model”. However, analogous art about the benefits of ensemble learning, BROWNLEE, does teach “wherein: the… machine learning model comprises a plurality of… machine learning models and a… machine learning model of the plurality of… machine learning models is associated with… an ensemble machine learning model” and “using an ensemble model”: ([Sentence 1 onward] “Deep learning neural networks are nonlinear methods. They offer increased flexibility and can scale in proportion to the amount of training data available. A downside of this flexibility is that they learn via a stochastic training algorithm which means that they are sensitive to the specifics of the training data and may find a different set of weights each time they are trained, which in turn produce different predictions. Generally, this is referred to as neural networks having a high variance and it can be frustrating when trying to develop a final model to use for making predictions. 
A successful approach to reducing the variance of neural network models is to train multiple models instead of a single model and to combine the predictions from these models. This is called ensemble learning and not only reduces the variance of predictions but also can result in predictions that are better than any single model.”) Further, the combination of IWAKURA and BROWNLEE would naturally produce “a plurality of graph-based machine learning models and a graph-based machine learning model of the plurality of graph-based machine learning models is associated with a respective input category” since IWAKURA, as cited above, teaches a graph-based machine learning model with various input categories (0042-0045), and BROWNLEE teaches the benefits of using multiple machine learning models as an ensemble (Sentence 1 onward, as cited above). And further: (BROWNLEE, [Varying combinations, paragraph 7] “Another combination that is a little bit different is to combine the weights of multiple neural networks with the same structure. The weights of multiple networks can be averaged, to hopefully result in a new single model that has better overall performance than any original model. This approach is called model weight averaging.”) As such, when combined with the primary reference of IWAKURA, the models would be “tensor-based graph processing” models and with the weights being skewed to various inputs on each based on categories, they could be used to combine the predictions as cited above, in an ensemble model. It would be obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to combine the base reference of IWAKURA with the teachings of BROWNLEE because IWAKURA uses machine learning methods and BROWNLEE teaches that machine learning models can be used in unison, in the form of an ensemble. 
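For illustration only, the two combination mechanisms BROWNLEE describes above (averaging the predictions of several models, and separately averaging the weights of same-structure models) can be sketched as follows. The per-category linear scorers, the feature vector, and the category names are hypothetical stand-ins and appear in neither reference; this is a minimal sketch, not an implementation of either disclosure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-category "models": each is a weight vector for a linear
# logistic scorer, standing in for the per-input-category graph models above.
category_models = {cat: rng.normal(size=4) for cat in ("date", "month", "status")}

def predict(weights, x):
    """Logistic score from one model's weights (a stand-in for a trained network)."""
    return 1.0 / (1.0 + np.exp(-weights @ x))

x = np.array([0.5, -1.0, 2.0, 0.25])  # one feature embedding (hypothetical)

# Prediction averaging: combine the outputs of the individual models.
ensemble_pred = np.mean([predict(w, x) for w in category_models.values()])

# Model weight averaging: average same-structure weights into one new model.
averaged_model = np.mean(list(category_models.values()), axis=0)
weight_avg_pred = predict(averaged_model, x)

print(ensemble_pred, weight_avg_pred)
```

Both combined scores remain valid probabilities; the choice between averaging predictions and averaging weights is exactly the design distinction BROWNLEE draws in the passage quoted above.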
One of ordinary skill in the art would be motivated to do so because, as BROWNLEE points out in the previous citation at [Sentence 1, onward], it “not only reduces the variance of predictions but also can result in predictions that are better than any single model”. Regarding claims 10 & 17, IWAKURA teaches the limitations of claims 8 & 16. Further, claims 10 & 17 recite similar additional limitations as claim 3 and are rejected under the same rationale. Claims 4-5, 11-12, & 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over IWAKURA, as applied to claims above, and further in view of Mundhenk, T. et al. “Symbolic Regression via Neural-Guided Genetic Programming Population Seeding.” Available on November 17 2021 (hereafter, MUNDHENK) Regarding claim 4, IWAKURA teaches the limitations of claim 2. Further, IWAKURA teaches “wherein: a plurality of tensor-based graph feature embeddings for a prior prediction input data object of the plurality of prediction input data objects are generated by the graph representation machine learning model to generate the inferred hybrid risk score for the prior prediction input data object”: [Figure 5] Further, IWAKURA teaches “relate the plurality of tensor-based graph feature embeddings for the prior prediction input data object to the inferred hybrid risk score for the prior prediction input data object”: ([0040-0043] describes figure 4 and the extraction of the graph embeddings for the core tensor (model deficiency object), their relation to the attendance data object (input object), and the calculation of the hybrid risk score inferred from them.) Further, IWAKURA teaches “a hybrid risk score generation machine learning model is generated”: [Figure 5] Further, IWAKURA fails to explicitly teach “…using a genetic programming operation”. 
However, analogous art, MUNDHENK, does teach “a… machine learning model… using a genetic programming operation”: ([Abstract] “Symbolic regression is the process of identifying mathematical expressions that fit observed output from a black-box process. It is a discrete optimization problem generally believed to be NP-hard. Prior approaches to solving the problem include neural-guided search (e.g. using reinforcement learning) and genetic programming. In this work, we introduce a hybrid neural-guided/genetic programming approach to symbolic regression and other combinatorial optimization problems. We propose a neural-guided component used to seed the starting population of a random restart genetic programming component, gradually learning better starting populations. On a number of common benchmark tasks to recover underlying expressions from a dataset, our method recovers 65% more expressions than a recently published top-performing model using the same experimental setup. We demonstrate that running many genetic programming generations without interdependence on the neural-guided component performs better for symbolic regression than alternative formulations where the two are more strongly coupled. Finally, we introduce a new set of 22 symbolic regression benchmark problems with increased difficulty over existing benchmarks.”) And further: ([Introduction] “Symbolic regression involves searching the space of mathematical expressions to fit a dataset using equations which are potentially easier to interpret than, for example, neural networks. A key difference compared to polynomial or neural network-based regression is that we seek to illuminate the true underlying process that generated the data. Thus, the process of symbolic regression is analogous to how a physicist may derive a set of fundamental expressions to describe a natural process. 
For example, Tycho Brahe meticulously mapped the motion of planets through the sky, but it was Johannes Kepler who later created the expressions for the laws that described their motion.”) In other words, when combined with IWAKURA, genetic programming operations like this can be used to infer exactly why the data was calculated the way that it was, resulting in a more accurate generation of the hybrid risk score model. It would be obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to combine the base reference of IWAKURA with the teachings of MUNDHENK because MUNDHENK teaches genetic programming operations coupled with neural networks that allow further insight into a model’s reasoning, while IWAKURA focuses on determining risk using reasoning from neural networks. One of ordinary skill in the art would be motivated to do so because, as MUNDHENK points out in its abstract, “We demonstrate that running many genetic programming generations without interdependence on the neural-guided component performs better for symbolic regression than alternative formulations where the two are more strongly coupled.” Regarding claim 5, IWAKURA in view of MUNDHENK teaches the limitations of claim 4. Further, MUNDHENK teaches “wherein the genetic programming operation comprises a symbolic regression operation configured to generate a plurality of refined regressor variables for the… machine learning model”: ([Abstract] “Symbolic regression is the process of identifying mathematical expressions that fit observed output from a black-box process. It is a discrete optimization problem generally believed to be NP-hard. Prior approaches to solving the problem include neural-guided search (e.g. using reinforcement learning) and genetic programming. In this work, we introduce a hybrid neural-guided/genetic programming approach to symbolic regression and other combinatorial optimization problems. 
We propose a neural-guided component used to seed the starting population of a random restart genetic programming component, gradually learning better starting populations. On a number of common benchmark tasks to recover underlying expressions from a dataset, our method recovers 65% more expressions than a recently published top-performing model using the same experimental setup. We demonstrate that running many genetic programming generations without interdependence on the neural-guided component performs better for symbolic regression than alternative formulations where the two are more strongly coupled. Finally, we introduce a new set of 22 symbolic regression benchmark problems with increased difficulty over existing benchmarks.”) And further: ([Introduction] “Symbolic regression involves searching the space of mathematical expressions to fit a dataset using equations which are potentially easier to interpret than, for example, neural networks. A key difference compared to polynomial or neural network-based regression is that we seek to illuminate the true underlying process that generated the data. Thus, the process of symbolic regression is analogous to how a physicist may derive a set of fundamental expressions to describe a natural process. For example, Tycho Brahe meticulously mapped the motion of planets through the sky, but it was Johannes Kepler who later created the expressions for the laws that described their motion.”) In other words, when combined with IWAKURA, genetic programming operations like this can be used to infer exactly why the data was calculated the way that it was, resulting in a more accurate generation of the hybrid risk score model. 
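For illustration only, a genetic programming operation for symbolic regression, of the general class MUNDHENK builds on, can be sketched in heavily simplified form. The expression grammar, population sizes, and target data below are hypothetical, and the sketch deliberately omits the neural-guided population seeding that is MUNDHENK’s actual contribution; it shows only the baseline evolve-expressions-to-fit-data loop:

```python
import math
import random

random.seed(1)

# Hypothetical black-box data the search should recover: y = x**2 + x.
xs = [i / 4 for i in range(-8, 9)]
ys = [x * x + x for x in xs]

OPS = {"add": lambda a, b: a + b, "sub": lambda a, b: a - b, "mul": lambda a, b: a * b}
TERMINALS = ["x", 1.0]

def random_expr(depth=3):
    # An expression is either a terminal or a tuple (op, left, right).
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return (random.choice(list(OPS)), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    if expr == "x":
        return x
    if isinstance(expr, float):
        return expr
    op, left, right = expr
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(expr):
    # Sum of squared errors; non-finite results count as worst possible.
    try:
        err = sum((evaluate(expr, x) - y) ** 2 for x, y in zip(xs, ys))
    except (OverflowError, RecursionError):
        return float("inf")
    return err if math.isfinite(err) else float("inf")

def mutate(expr):
    # Replace a random subtree with a freshly generated one.
    if not isinstance(expr, tuple) or random.random() < 0.3:
        return random_expr(2)
    op, left, right = expr
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def crossover(a, b):
    # Crude subtree crossover: splice b into a random position of a.
    if not isinstance(a, tuple) or random.random() < 0.3:
        return b
    op, left, right = a
    if random.random() < 0.5:
        return (op, crossover(left, b), right)
    return (op, left, crossover(right, b))

population = [random_expr() for _ in range(150)]
for _ in range(25):
    population.sort(key=fitness)
    parents = population[:40]  # truncation selection
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(110)
    ]

best = min(population, key=fitness)
print(best, fitness(best))
```

In MUNDHENK’s hybrid formulation, the initial `population` above would instead be seeded by a neural-guided component that gradually learns better starting populations, rather than drawn uniformly at random.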
Further, the above citation teaches “a plurality of refined input variables for the… machine learning model”: In other words, when combined with IWAKURA, genetic programming operations like this can be used to infer exactly why the data was calculated the way that it was, resulting in a more accurate generation of the hybrid risk score model, as a result of refined input, as shown in [Figure 1] of MUNDHENK, which shows the input being refined through various processes. When combined with IWAKURA, the machine learning model would be “a hybrid risk score generation machine learning model” as cited above by IWAKURA [Figure 5] And further: ([0042-0045] “The attendance book data (plurality of prediction input data objects) can be formed as graph data composed of a plurality of nodes and a plurality of edges connecting the nodes (each node a prior prediction input data object). The multiple nodes are composed of date nodes, month nodes, and attendance status nodes. The date, month, and attendance status nodes are present corresponding to the numbers of dates, months, and types of attendance status, respectively (input data objects). The respective nodes each have a number corresponding to one of the dates, months, and types of attendance status. For example, when the date is “1”, the value is set to “1”, when the attendance status is “absence”, the value is set to “2”, and when the attendance status is “attendance”, the value is set to “1”. (tensor-based graph feature embeddings) The edge connects the related nodes among the date, month, and attendance status nodes. The following describes learning in the deep tensor. FIG. 5 is a diagram explaining an example of learning in the deep tensor. As illustrated in FIG. 
5, the learning device 100 produces an input tensor from the attendance book data attached with a teacher label (label A) indicating that the employee had received medical treatment, for example (a tensor-based graph feature embedding). The learning device 100 performs the tensor decomposition on the input tensor to produce a core tensor so as to be similar to the target core tensor randomly produced initially. The learning device 100 inputs the core tensor to a neural network (NN) to obtain a classification result (label A: 70%, label B: 30%). Thereafter, the learning device 100 calculates a classification error between the classification result (label A: 70%, label B: 30%) and the teacher label (label A: 100%, label B: 0%). (A: 100%, B: 0%) is the cited “inferred hybrid risk score” associated with the input data object.) The learning device 100 learns a prediction model (a hybrid risk score generation machine learning model is generated) and a method of tensor decomposition using extended backpropagation, which is an extended method of backpropagation. The learning device 100 corrects various parameters in the NN so as to reduce the classification error by propagating the classification error in an input layer, an intermediate layer, and an output layer included in the NN such that the error is propagated toward lower layers. The learning device 100 propagates the classification error to the target core tensor to correct the target core tensor so as to be close to the partial graph structure contributing to prediction, i.e., a feature pattern indicating a feature of the unwell person or a feature pattern indicating a feature of the well person. 
In the prediction after the learning, the input tensor is converted into the core tensor (partial pattern of the input tensor) so as to be similar to the target core tensor by the tensor decomposition, and the core tensor is input to the neural network, resulting in a prediction result being obtained.”) Regarding claims 11-12, IWAKURA teaches the limitations of claim 9. Further, claims 11-12 recite similar additional limitations as claims 3-4, respectively, and are rejected under the same rationale. Regarding claims 18-19, IWAKURA teaches the limitations of claim 16. Further, claims 18-19 recite similar additional limitations as claims 3-4, respectively, and are rejected under the same rationale. Claims 7 & 14 are rejected under 35 U.S.C. 103 as being unpatentable over IWAKURA, as applied to claims above, and further in view of Zhang, Q. et al. “DSpin: Detecting Automatically Spun Content on the Web.” Available at https://cseweb.ucsd.edu/~voelker/pubs/dspin-ndss14.pdf in February 2014 (hereafter, ZHANG), and further in view of Nakamura, K. et al. “Health improvement framework for planning actionable treatment process using surrogate Bayesian model.” Available at https://arxiv.org/pdf/2010.16087 on November 13 2020 (hereafter, NAKAMURA), and further in view of Filar et al. U.S. PG Pub No. US2022/0100857 A1, published March 31, 2022 (hereafter, FILAR) Regarding claim 7, IWAKURA teaches the limitations of claim 1. 
Further, IWAKURA fails to explicitly teach “wherein the model deficiency data object comprises a selected subset of the plurality of deficiency graph links that are generated based at least in part on: (i) an immutability score for each deficiency graph link, (ii) an actionability score for each deficiency graph link, and (iii) a prevalence score for each deficiency graph link.” However, analogous art about detecting automatically spun content using graphs, ZHANG, does teach “wherein the model deficiency data object comprises a selected subset of the plurality of deficiency graph links that are generated based at least in part on: (i) an immutability score for each deficiency”: ([Introduction, paragraph 4] “The goal of this paper is to develop effective techniques to detect automatically spun content on the Web. We consider the problem in the context of a search engine crawler. The input is a set of article pages crawled from various Web sites, and the output is a set of pages flagged as automatically spun content.”) This citation shows that a deficiency (automatically spun content) is being detected. Further: ([Page 9, Col. 2, D. Clustering] “The id pairs can be transformed into clusters to convey more information. We cluster duplicate content, near duplicate content and spun content (cluster of spun content is a deficiency data object) as detailed in Section VI both for filtering and also for assessing the behavior of spam campaigns. To transform pairs into clusters, we use a graph representation where each page is a node and each pair has an edge in the graph. Each connected subgraph represents a cluster. We first build the graph using pairs or edges as input and the ids as nodes. For each node, we traverse all reachable nodes using breadth-first search and mark every node traversed as visited. We continually process unvisited nodes until every node has been visited. 
The results are disjoint clusters of ids.”) In this citation, we see the deficiency data object (a cluster of spun content) being made up of a subset of one or more deficiency graph links. And further, this citation about the aforementioned “ids” that became nodes: ([Page 9, C. Inverted Indexing] “A naive pairwise comparison of two documents leads to O(n^2) comparisons, which is infeasible for processing data at scale. Therefore, we implement the immutable method using inverted indexes, similar to the method described in [6]. For every immutable in the text, we generate a pair: < id, immu > (5) The id is a unique index corresponding to an article, and immu is an immutable that occurs in id. We differentiate duplicate immutables by marking each with a unique suffix number to simplify document comparison. Next, we perform a “group by” on the immutables: < immu, group < ids >> (6) Each group represents all document ids that contain the immutable. We decompose each group into a pairwise id key and a “1”. Each pairwise id, idi : idj, indicates a single shared immutable between two documents. Each group with N ids therefore has N^2/2 pairs. The key-value pairs appear as: < idi : idj, 1 > (7) Last, we “group by” idi : idj and count the number of ones, yielding: < idi : idj, count > (8) The count represents the total number of immutables that overlap between idi and idj. This format is also convenient for finding the total number of immutables in the original document. For a document idi, the number of its immutables is given by idi : idi, namely the number of total overlapping immutables an article has with itself. From the list of pairs < idi : idj, count >, we calculate the similarity score between each two pages. We set the threshold for the similarity score (a score made from immutables being an immutability score) to be 75%. We determine this threshold from the training data set, in which the lowest similarity score we found for any cluster was 74.9%.”) 
So, in summary, immutables are found and processed to generate a “similarity score” (immutability score) to detect deficiencies to be clustered into subsets of graph links based, in part, on that score, making up a deficiency data object (the cluster of spun content.) It would be obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to combine the base reference of IWAKURA with the teachings of ZHANG because both references are in relation to using graphs to determine risk levels of input data. One of ordinary skill in the art would be motivated to do so because, as pointed out on page 7, second paragraph, “Applying the immutable method to the training data set, Table II shows that using immutables to compute the Jaccard Coefficient results in ratios well above 90% for most spun content when using recommended spinning parameters. Under the most challenging parameters, spinning every word and/or removing the original mutable words, the immutable method still produces a similarity score as high as 74.9%. Furthermore, unlike the previous methods, it scores spun content with a high value while scoring articles that are different in the control group with a low coefficient of 27.8%. It thus provides a clear separation between spun and non-spun content.” In addition, correctly taking into account the chance that various factors make something less likely or resistant to change (immutable) will positively influence the accuracy of a calculated risk. 
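For illustration only, the pipeline quoted from ZHANG above (an inverted index over immutables, pairwise overlap counts, a thresholded similarity score, and breadth-first clustering of matched pairs) can be sketched as follows. The synonym dictionary, the sample documents, and the min-based normalization of the similarity score are simplifying assumptions for the sketch, not details taken from ZHANG:

```python
from collections import defaultdict, deque

# Hypothetical spintax dictionary: words with known synonyms are "mutable";
# every other word is treated as an immutable, per the method quoted above.
SYNONYM_DICT = {"quick", "fast", "speedy", "happy", "glad"}

docs = {
    1: "the quick brown fox jumps over the lazy dog",
    2: "the fast brown fox jumps over the lazy dog",   # spun variant of doc 1
    3: "entirely unrelated text about patent prosecution",
}

def immutables(text):
    return {w for w in text.split() if w not in SYNONYM_DICT}

# Inverted index: immutable -> set of doc ids containing it (the <id, immu> step).
index = defaultdict(set)
for doc_id, text in docs.items():
    for immu in immutables(text):
        index[immu].add(doc_id)

# Pairwise overlap counts from the index (the <idi:idj, count> step).
overlap = defaultdict(int)
for ids in index.values():
    ids = sorted(ids)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            overlap[(a, b)] += 1

# Similarity score with the quoted 75% threshold, keeping matched pairs as edges.
THRESHOLD = 0.75
edges = defaultdict(set)
for (a, b), count in overlap.items():
    sim = count / min(len(immutables(docs[a])), len(immutables(docs[b])))
    if sim >= THRESHOLD:
        edges[a].add(b)
        edges[b].add(a)

def clusters():
    """Breadth-first search over the pair graph, yielding disjoint id clusters."""
    seen, out = set(), []
    for node in docs:
        if node in seen:
            continue
        queue, comp = deque([node]), set()
        while queue:
            n = queue.popleft()
            if n in comp:
                continue
            comp.add(n)
            queue.extend(edges[n] - comp)
        seen |= comp
        out.append(comp)
    return out

print(clusters())
```

Here documents 1 and 2 share all of their immutables and cluster together, while document 3 remains a singleton, mirroring how ZHANG groups spun content into disjoint clusters of ids.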
IWAKURA in view of ZHANG still fails to explicitly teach “a selected subset of the plurality of deficiency graph links that are generated based at least in part on:… (ii) an actionability score for each deficiency graph link, and (iii) a prevalence score for each deficiency graph link” However, analogous art, NAKAMURA, does teach “a selected subset of the plurality of deficiency graph links that are generated based at least in part on:… (ii) an actionability score for each deficiency graph link”: ([Abstract] “Clinical decision making regarding treatments based on personal characteristics leads to effective health improvements. Machine learning (ML) has been the primary concern of diagnosis support according to comprehensive patient information. However, the remaining prominent issue is the development of objective treatment processes in clinical situations. This study proposes a novel framework to plan treatment processes in a data-driven manner. A key point of the framework is the evaluation of the “actionability” for personal health improvements by using a surrogate Bayesian model in addition to a high-performance nonlinear ML model. We first evaluated the framework from the viewpoint of its methodology using a synthetic dataset. Subsequently, the framework was applied to an actual health checkup dataset comprising data from 3,132 participants, to improve systolic blood pressure values (deficiencies) at the individual level. We confirmed that the computed treatment processes are actionable and consistent with clinical knowledge for lowering blood pressure. These results demonstrate that our framework could contribute toward decision making in the medical field, providing clinicians with deeper insights.”) And further: ([Pages 9-10, Path planning using surrogate model] “We calculated an optimal path for the treatment for each instance (for each deficiency) based on a breadth-first search algorithm. 
The intervention variable space was regarded as a grid graph, and the grid points (nodes) were connected to plan a path (a subset of deficiency graph links). The pseudocode of this algorithm is shown in Fig. 3. The purpose of this algorithm was used to obtain the most actionable path to the node that achieved the most improved predictive value within the search iteration count, L. From the computational perspective, we used the negative logarithm of actionability, defined as the product of node probabilities on a path, as a cost of the path (an actionability score for each deficiency graph link). The goal of path planning is to discover the minimal cost path to each node. We obtained a list of nodes adjacent to the currently selected node in line 3 of the pseudocode. Subsequently, the costs for these nodes were updated in lines 5–7. The following node was selected in line 11. After reaching the predetermined count L, the path to the node with the best regression model prediction value was selected as the optimal path (deficiency data object based on actionability score) in line 13. If multiple nodes with the same predictions existed, the path with the minimum cost was selected.”) It would be obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to combine the base reference of IWAKURA in view of ZHANG with the teachings of NAKAMURA because both references are in relation to using graph-based machine learning calculations to determine risk levels. One of ordinary skill in the art would be motivated to do so because, as NAKAMURA points out in its abstract, “We confirmed that the computed treatment processes are actionable and consistent with clinical knowledge for lowering blood pressure. 
These results demonstrate that our framework could contribute toward decision making in the medical field, providing clinicians with deeper insights.” Furthermore, correctly taking into account the chance that one path is more or less “actionable” will positively influence the accuracy of a calculated risk. IWAKURA in view of ZHANG & NAKAMURA still fails to explicitly teach “a selected subset of the plurality of deficiency graph links that are generated based at least in part on:… (iii) a prevalence score for each deficiency graph link”. However, analogous art for detecting anomalous patterns using graph-based machine learning, FILAR, does teach “a selected subset of the plurality of deficiency graph links that are generated based at least in part on:… (iii) a prevalence score for each deficiency graph link”: ([0005] “According to one example embodiment of the present disclosure, a method includes creating a graph of processes performed by a computer system using edges of the processes and metadata comprising properties or artifacts of the edges or processes, the edges identify a connection between a parent process and a child process; and detecting anomalous parent-child process chains of the processes (deficiency graph links) by: assigning edge weights to the edges of the processes using a supervised learning process that has been trained to identify malicious edges and benign edges to create a weighted graph, the edge weights comprising predicted class probabilities that are indicative of the processes being malicious (a deficiency data object made up of a subset of deficiency graph links); performing community detection on the weighted graph using an unsupervised learning technique to identify the anomalous parent-child process chains and determine a structure of a grouped attack technique of the anomalous parent-child process chains; and generating an anomalous score for a parent-child process chain by combining a predicted class probability with a 
prevalence score that is indicative of how often a child process has been encountered as compared to other child processes relative to a parent process.”) It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the base reference of IWAKURA in view of ZHANG & NAKAMURA with the teachings of FILAR because both references relate to using graph-based machine learning to determine risk levels. One of ordinary skill in the art would have been motivated to do so because, as FILAR points out at [0020], “These systems and methods can reduce a large set of process-related events to a manageable list of rank-ordered rare parent-child process chains, suppressing commonly occurring, but previously unobserved activity. It will be understood that ranking or prioritizing events has been shown to help reduce noise in an environment.” Furthermore, correctly taking into account the prevalence of various features associated with a risk will positively influence the accuracy of the calculated risk. Regarding claim 14, IWAKURA teaches the limitations of claim 13. Further, claim 14 recites additional limitations similar to those of claim 7, and is rejected under the same rationale. Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over IWAKURA, as applied to the claims above, and further in view of Punitha, V. et al., “Traffic classification for efficient load balancing in server cluster using deep learning technique,” available at https://search.ebscohost.com/login.aspx?direct=true&profile=ehost&scope=site&authtype=crawler&jrnl=09208542&AN=151441969&h=yWbrsqiY2fuRp0ZEKly9beGbBA8F%2BAc00K2BGn3hjmVxScRzFVXXaTenLoZ4J40SOf0HIQqXQcoY2Pvy0BOcCg%3D%3D&crl=c on 12 January 2021 (hereafter, PUNITHA). Regarding claim 23, IWAKURA teaches the limitations of claim 21. 
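The NAKAMURA cost formulation quoted above has a convenient property: because the negative log of a product of node probabilities is the sum of the negative logs of each probability, a Dijkstra-style search minimizes it directly. The sketch below is a minimal Python rendering under that assumption; the grid layout, the probability values, and the function name are hypothetical illustrations, not NAKAMURA's actual implementation (which also bounds the search by an iteration count L and tie-breaks by prediction value).

```python
import heapq
import math

def most_actionable_path(grid_probs, start, goal):
    """Dijkstra-style search where stepping onto a node costs -log(p),
    so a path's total cost equals the negative log of the product of
    the node probabilities along it (the actionability cost)."""
    rows, cols = len(grid_probs), len(grid_probs[0])
    dist = {start: 0.0}   # best known cost to each node
    prev = {}             # back-pointers for path reconstruction
    heap = [(0.0, start)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == goal:
            break
        if cost > dist.get(node, math.inf):
            continue  # stale heap entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # grid adjacency
            nxt = (r + dr, c + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols:
                new_cost = cost - math.log(grid_probs[nxt[0]][nxt[1]])
                if new_cost < dist.get(nxt, math.inf):
                    dist[nxt] = new_cost
                    prev[nxt] = node
                    heapq.heappush(heap, (new_cost, nxt))
    # walk the back-pointers to recover the minimal-cost path
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]
```

For example, on a 2x2 grid of probabilities the search routes around a low-probability (high-cost) node, and the returned cost equals the negative log of the product of the probabilities of the nodes entered.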
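FILAR's prevalence score, i.e., how often a child process has been encountered under a given parent relative to that parent's other children, can be read as a simple frequency ratio. The sketch below illustrates that reading; the helper names and the multiplicative combination with the class probability are illustrative assumptions, not FILAR's disclosed formula.

```python
from collections import Counter

def prevalence_scores(events):
    """For each observed (parent, child) pair, the fraction of that
    parent's child launches accounted for by that child."""
    pair_counts = Counter(events)
    parent_totals = Counter(parent for parent, _ in events)
    return {pair: n / parent_totals[pair[0]] for pair, n in pair_counts.items()}

def anomaly_score(malicious_prob, prevalence):
    """One illustrative way to combine the supervised class probability
    with prevalence: rare chains (low prevalence) score higher."""
    return malicious_prob * (1.0 - prevalence)
```

A chain seen once under a parent that usually spawns something else gets a low prevalence and therefore a higher anomaly score than a commonly observed chain with the same class probability.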
Further, IWAKURA fails to explicitly teach “performing one or more operational load balancing for a server system, configured to perform post-prediction processing operations, by allocating one or more computing resources to the server system based on the classification.” However, PUNITHA does teach this limitation: ([Abstract] “Extensive use of multimedia services and Internet Data Center applications demand distributed deployment of these applications. It is implemented using edge computing with server clusters. To increase the availability of the services, applications are deployed redundantly in server clusters. In this situation, an efficient server allocation strategy is essential to improve execution fairness in server cluster. Categorizing the incoming traffic at server cluster is desired for the improvement of QoS. The traditional traffic classification models categorize the incoming traffic according to their applications’ type. They are ineffective in selection of suitable server, as they do not consider the characteristics of the server. Hence this paper proposes a classifier to assist the dispatcher to distribute the requests to appropriate server in server cluster. The proposed deep learning classification model based on incoming traffic characteristics and server status is reinforced with extended labelling using correlation-based approach. The experimental results of the proposed classifier have shown considerable performance enhancement in terms of classification measures and waiting time of the requests compared to existing machine learning models.”) PUNITHA teaches performing load balancing for the server by first classifying the data/traffic and then, post-classification, allocating it to the appropriate server space, i.e., allocating computing resources to the server system based on the classification. 
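The classify-then-allocate pattern attributed to PUNITHA can be sketched as a dispatcher that first labels an incoming request and then places it on a server in the matching pool. The classifier stub, pool layout, and least-loaded tie-break below are hypothetical placeholders standing in for PUNITHA's deep learning classification model and dispatcher, not its actual design.

```python
def dispatch(request, classify, server_pools, loads):
    """Classify the incoming request, then allocate it to the
    least-loaded server in the pool serving that traffic class."""
    traffic_class = classify(request)
    server = min(server_pools[traffic_class], key=lambda s: loads[s])
    loads[server] += 1  # account for the newly allocated request
    return server
```

With a trivial size-based classifier standing in for the model, a large request lands on the least-loaded server of the "video" pool while a small one goes to the "web" pool.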
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the base reference of IWAKURA with the teachings of PUNITHA because both references use machine learning models to optimize a system. One of ordinary skill in the art would have been motivated to do so because, as pointed out in the abstract of PUNITHA, “Categorizing the incoming traffic at server cluster is desired for the improvement of QoS” and “The experimental results of the proposed classifier have shown considerable performance enhancement in terms of classification measures and waiting time of the requests compared to existing machine learning models.”

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW LEE LEWIS, whose telephone number is (571) 272-1906. The examiner can normally be reached Monday, 9:30 AM - 3:30 PM, and Tuesday through Friday, 9:30 AM - 6:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. 
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tamara Kyle, can be reached at (571) 272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Matthew Lee Lewis/
Examiner, Art Unit 2144

/TAMARA T KYLE/
Supervisory Patent Examiner, Art Unit 2144

Prosecution Timeline

Jul 07, 2022
Application Filed
Jul 08, 2025
Non-Final Rejection — §102, §103
Sep 18, 2025
Examiner Interview Summary
Sep 18, 2025
Applicant Interview (Telephonic)
Oct 08, 2025
Response Filed
Jan 27, 2026
Final Rejection — §102, §103 (current)


Prosecution Projections

3-4
Expected OA Rounds
0%
Grant Probability
0%
With Interview (+0.0%)
3y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 3 resolved cases by this examiner. Grant probability derived from career allow rate.
