Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This Office action is responsive to the amendment filed on 08/27/2025. As directed by the amendment, claims 1, 8, and 15 have been amended. Claims 1-3, 5, 8-10, 12, and 15-16 are pending for examination.
Response to Arguments
Regarding objection to the specification:
Applicant’s arguments, see page 8, section "Objection to the Specification," filed 08/27/2025, with respect to the objection to paragraph [0030] of the specification have been fully considered and are persuasive. The objection to the specification has been withdrawn.
Regarding the 35 U.S.C. § 101 Rejection:
Applicant's further arguments, see pages 9-10, filed 08/27/2025, have been fully considered but they are not persuasive.
APPLICANT ARGUMENT:
“Applicant submits that the claims, as amended, are patent eligible under 35 USC 101 for at least the following reasons:
1. The Claims Are Not Directed to an Abstract Idea in the Manner Prohibited by Alice. While the claims involve machine learning processes, they are not directed to mere mathematical modeling or data analysis in the abstract. Instead, they are directed to a specific, structured process for detecting performance degradation in a prediction model, using inference data, not labeled ground truth data, to retrain a candidate model, comparing both models using shared input data, and replacing the deployed model only when statistically better performance is confirmed. This goes well beyond mere "receiving, storing, and processing data," and instead recites a practical solution to a technical problem in the field of machine learning systems.
2. The Claims Are Integrated Into a Practical Application (Step 2A, Prong Two) The amended claims recite additional elements that integrate the abstract idea into a practical application, including: creating inference data groups over time, selecting data using a moving time window, performing comparative testing using a shared evaluation dataset, and model replacement logic based on real-world performance. These features tie the abstract idea to a specific implementation in a computing system, improving the operation of the system itself (i.e., avoiding stale or inaccurate prediction models in real time). This is analogous to the eligible subject matter in:
3. The Claims Recite an Inventive Concept (Step 2B) Even if the claims were found to be directed to an abstract idea, they would still satisfy Step 2B because they recite a non-conventional and non-generic solution, i.e., the use of inference data rather than labeled data to retrain the model is not routine. The replacement of the production model is made only after comparative performance testing using the same input dataset, linked to actual ground truth where available. The system includes modules that perform coordinated tasks of abnormality detection, distribution, retraining, comparison, and replacement, forming a novel combination of operations. None of these steps, either individually or in ordered combination, are routine or well-known in the art as applied in this way. The moving time window for selecting inference data and the adaptive model replacement logic further distinguish the claims.
4. The Claims Are Technological in Nature and Solve a Technological Problem The claims improve the functioning of a machine learning system by enabling it to self- monitor and self-correct in a time-sensitive, data-limited environment. This is a technological improvement rather than an abstract business or mental process.
The present claims are technological enhancements that reduce latency and boost model accuracy without needing new labeled data.
Accordingly, Applicant requests that this rejection be withdrawn”.
EXAMINER RESPONSE: Examiner respectfully disagrees; Applicant's arguments are not persuasive. Regarding the argument that "The Claims Are Not Directed to an Abstract Idea in the Manner Prohibited by Alice," amended independent claim 1, for example, is rejected under 35 U.S.C. § 101 because the claim recites the limitations of:
create groups of the inference data according to a preset unit time period;
select a most recent group of the inference data, based on a moving time window, for use in retraining;
...replacing an earlier portion of training data of the prediction model with the selected group of the inference data;
select a common evaluation input dataset comprising a subset of input data distributed to both the prediction model and the retraining model;
compare, for the evaluation input dataset, a first prediction accuracy of first inference data output from the prediction model and a second prediction accuracy of second inference data output from the retraining model, the comparation being based on a comparison to observed ground truth results for the evaluation input dataset when available; and
determine whether to replace the prediction model with the retraining model based on whether the second prediction accuracy exceed the first prediction accuracy
which are mental processes of evaluation and judgment that can all be performed in the human mind with the aid of pen and paper. For example, a human can create groups of data according to a time period, can draw a time window as shown in Applicant's Figs. 3-4 and select a most recent group of the inference data for retraining, can replace an earlier portion of data with a more recent one, can select a common evaluation input dataset comprising a subset of input data distributed to both models, can compare the prediction accuracies of the models based on the distributed data, and can determine whether to replace the prediction model with the retrained model based on accuracy. Therefore, the additional elements disclosed above, alone or in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are mere insignificant extra-solution activity combined with generic computer functions implemented with generic computer elements at a high level of generality to perform the abstract idea disclosed above.
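For illustration only, the sequence of operations recited in the claim (grouping by a preset unit time period, moving-window selection of the most recent group, accuracy comparison on shared inputs, and conditional replacement) could be sketched in hypothetical code. All function names and data structures below are this sketch's assumptions, not taken from the claims or the cited references:

```python
# Illustrative sketch only; names and structures are assumptions.

def group_by_period(inference_data, unit_period):
    """Create groups of inference data according to a preset unit time period."""
    groups = {}
    for record in inference_data:
        key = record["timestamp"] // unit_period  # bucket index per period
        groups.setdefault(key, []).append(record)
    return [groups[k] for k in sorted(groups)]

def accuracy(predictions, ground_truth):
    """Fraction of predictions matching observed ground-truth results."""
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    return correct / len(predictions)

def should_replace(prediction_model_out, retraining_model_out, ground_truth):
    """Replace the deployed model only if the retrained model's accuracy
    on the shared evaluation inputs exceeds the deployed model's."""
    return (accuracy(retraining_model_out, ground_truth)
            > accuracy(prediction_model_out, ground_truth))

# Example: group ten timestamped records into two 5-unit periods,
# select the most recent group, and decide on replacement.
data = [{"timestamp": t} for t in range(10)]
groups = group_by_period(data, unit_period=5)
most_recent = groups[-1]  # moving-window selection of the newest group
replace = should_replace([1, 0, 1], [1, 1, 1], [1, 1, 1])
```

Each step in this sketch is a simple bookkeeping or comparison operation, which is consistent with the rejection's characterization of the limitations as evaluation and judgment performable with pen and paper.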
Regarding the argument that "The Claims Are Integrated Into a Practical Application (Step 2A, Prong Two)," examiner respectfully disagrees. Amended claim 1, for example, as presented does not integrate the judicial exception into a practical application under the second prong of the two-prong analysis, since the claimed invention does not improve the functioning of a computer or improve another technology or technical field. Rather, the claim recites additional elements that merely recite the words "apply it" (or an equivalent) with the judicial exception, as discussed in MPEP § 2106.05(f), and add insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g); the courts have identified that such limitations do not integrate a judicial exception into a practical application (see MPEP § 2106.04(d)(I)).
Further, regarding the argument that "The Claims Recite an Inventive Concept (Step 2B)," examiner respectfully disagrees. Amended claim 1 specifically recites "receive inference data inferred on from input data by a prediction model" (emphasis added), which the courts have recognized as well-understood, routine, conventional activity in a particular field when claimed in a merely generic manner (e.g., at a high level of generality), or as insignificant extra-solution activity (see MPEP § 2106.05(d)(II)(i)).
Lastly, regarding the argument that "The Claims Are Technological in Nature and Solve a Technological Problem," examiner respectfully disagrees. Applicant asserts that "the claims improve the functioning of a machine learning system by enabling it to self-monitor and self-correct in a time-sensitive, data-limited environment"; however, this is not evident in the claim language. Applicant is also reminded that claim language is read under the broadest reasonable interpretation without incorporating limitations from the specification, and, if Applicant believes that the specification provides details of an improvement, Applicant should amend the claims to reflect such improvement (see MPEP § 2106.05(a)).
Regarding the 35 U.S.C. § 103 Rejection:
Applicant's further arguments, see pages 10-11, filed 08/27/2025, have been fully considered but they are not persuasive.
APPLICANT ARGUMENT:
Applicant argues, “...that claims 1, 8, and 15 are not obvious over Li and Ghanta for at least the following reasons... Li fails to teach or suggest using inference data generated by the deployed model itself as retraining input. Li also lacks any disclosure of replacing only a portion of training data via a moving window, or of comparing a retrained model against an existing one using the same evaluation input. Ghanta does not disclose retraining using inference data, replacing training data in a windowed fashion, or conducting a side-by-side evaluation of model outputs based on shared inputs and ground truth”.
In addition, applicant argues, “Even if Li and Ghanta were combined, the combination fails to teach or suggest the claimed limitations. Specifically: No teaching or suggestion of using inference data for retraining; No use of a moving time window to select and update training data; No parallel input distribution to both models for comparison; No conditional model replacement based on accuracy comparison against ground truth data. Li and Ghanta address different technical problems and do not contemplate the claimed feedback-driven, self-adaptive model replacement architecture. Accordingly, one of ordinary skill in the art would not have been motivated to combine these teachings to arrive at the claimed invention. The remaining claims are allowable at least due to their respective dependencies. Accordingly, Applicant respectfully requests withdrawal of the rejections and allowance of the claims”.
EXAMINER RESPONSE: Examiner respectfully disagrees; Applicant's argument is not persuasive. First, to clarify, amended claim 1 as presented does not recite a "moving time window to ... update training data" (emphasis added) as stated in Applicant's argument; rather, amended claim 1 only recites selecting "a most recent group of the inference data, based on a moving time window, for use in retraining". No update of training data based on a "moving time window" is mentioned in the claim. Nonetheless, the combination of Li and Ghanta does teach the claimed limitations of using inference data for retraining; a moving time window to select and update training data; parallel input distribution to both models for comparison; and conditional model replacement based on accuracy comparison against ground truth data. Specifically, Li [0089] and [0092-0093] teach a "second training sample library" which includes training samples that are from the inference results obtained by a model through the inference computing (see [0091]). In addition, Li teaches using inference data for retraining, such as in paragraphs [0089] and [0092], where the second inference model (retrained model) is retrained when the second library refreshes to include the most recent inference results (i.e., a most recent group of the inference data). Further, Li [0101-0102] teaches simultaneous (parallel) distribution of "the data to be processed sent from the user-side device" (i.e., a common evaluation input dataset comprising a subset of input data) to both the first inference model (prediction model) and the second inference model (retrained model), and teaches the models being replaced based on performance requirements in various paragraphs, such as [0006], [0090], [0118-0119], [0124-0125], and [0127].
While Li does not teach or suggest a moving time window to select and update training data, Ghanta, which is analogous art, overcomes this deficiency and teaches a training data set that includes data collected at different time periods, which can be viewed as the training data set having been selected based on a moving time window (see Ghanta page 31, lines 22-24), and teaches updating the training data at page 34, lines 11-20; page 38, lines 3-8; and page 35, lines 28-32.
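For illustration only, the moving-window update of training data that the rejection attributes to the Li/Ghanta combination, in which the most recent inference results displace the earliest portion of the training data, could be sketched as follows. The window size and all names here are this sketch's assumptions, not taken from either reference:

```python
# Hypothetical sketch: a fixed-size window over training-data groups.
# Appending the newest group of inference results causes the earliest
# group to drop out, i.e., the most recent data replaces the earliest
# portion of the training data. Window size and names are assumptions.
from collections import deque

window = deque(maxlen=3)  # the training set holds at most 3 groups

for group_id in range(5):  # five unit time periods of inference data arrive
    window.append(f"group{group_id}")

training_data = list(window)
# Only the most recent groups remain after all five periods.
```

The fixed-length deque is simply a convenient way to express a window that slides forward in time as each new group arrives.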
Therefore, for the above reasons, claims 1-3, 5, 8-10, 12, and 15-16 remain rejected under 35 U.S.C. § 103.
Information Disclosure Statement
The information disclosure statement filed 02/14/2025 fails to comply with the provisions of 37 CFR 1.97, 1.98 and MPEP § 609 because the cited document, a Korean Office Action issued on January 8, 2025 in corresponding Korean Patent Application No. 10-2020-0132435 (5 pages, in Korean), does not contain a written English language translation.
It has been placed in the application file, but the information referred to therein has not been considered as to the merits. Applicant is advised that the date of any re-submission of any item of information contained in this information disclosure statement or the submission of any missing element(s) will be the date of submission for purposes of determining compliance with the requirements based on the time of filing the statement, including all certification requirements for statements under 37 CFR 1.97(e). See MPEP § 609.05(a).
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-3, 5, 8-10, 12, and 15-16 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Independent claim 1 recites the following limitations:
select a most recent group of the inference data, based on a moving time window, for use in retraining;
train a retraining model using retraining data including the selected most recent group of the inference data by replacing an earlier portion of training data of the prediction model with the selected group of the inference data;
select a common evaluation input dataset comprising a subset of input data distributed to both the prediction model and the retraining model;
distribute the evaluation input dataset comprising a subset of input data distributed to both the prediction model and the retraining model;
compare, for the evaluation input dataset, a first prediction accuracy of first inference data output from the prediction model and a second prediction accuracy of second inference data output from the retraining model, the comparation being based on a comparison to observed ground truth results for the evaluation input dataset when available; and
(emphasis added). However, the specification does not contain support for, teach, or suggest the following limitations:
A “most recent group of the inference data” being selected specifically based on a “moving time window” for use in retraining. For example, paragraph [0035] of Applicant's application only teaches generating the retraining data using a moving window, and Figs. 3-4 show a moving time window; however, neither the specification nor the drawings teaches selecting a most recent group of the inference data, based on a moving time window, for use in retraining.
Training the retraining model using retraining data including the “selected most recent group of the inference data” by replacing the retraining data with the “selected group of inference data”. Paragraph [0005] of the instant application only teaches “a retraining module configured to train a retraining model using retraining data including the inference data”; it does not mention the training being performed using retraining data that comprises a “selected most recent group of the inference data”.
Selecting “a common evaluation input dataset comprising a subset of input data distributed to both the prediction model and the retraining model”. Applicant's representative asserts that support for these amendments to the claims is present in paragraphs [0036] and [0039]; however, support for this limitation is not present in the cited paragraphs or in any other paragraph of the instant application, as paragraphs [0036] and [0039] only teach distributing input data to the prediction model and the retraining model, not a “common evaluation input dataset comprising a subset of input data” as recited in the claim.
Also, the specification does not teach or suggest distributing the “evaluation input dataset comprising a subset of input data distributed” to both the prediction model and the retraining model. Paragraphs [0010] and [0036] of the instant application teach “distributes the input data to the prediction model and the retraining model”; however, the specification does not teach an “evaluation input dataset” being selected, generated, or obtained, let alone such an “evaluation input dataset” comprising a subset of the input data distributed to the models.
Lastly, there is no support or suggestion for how a comparison for the evaluation input dataset is being performed, or for how the comparison is “being based on a comparison to observed ground truth results for the evaluation input dataset when available”. Even though paragraphs [0038-0039] teach the performance of the models being compared, they do not teach or suggest a comparison for the evaluation input dataset, nor such a comparison “being based on a comparison to observed ground truth results for the evaluation input dataset when available”.
For the reasons stated above, the specification does not provide support for each of the emphasized limitations as recited in amended claim 1.
Independent claims 8 and 15 recite similar limitations to those of claim 1; therefore, the rejection of claim 1 applies.
Claims 2-3, 5, 9-10, 12, and 16 are dependent on claims 1, 8, and 15. Therefore, the rejection of claims 1, 8, and 15 applies.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-3, 5, 8-10, 12, and 15-16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites the limitations “the selected group of the inference data” in line 11 and “the evaluation input dataset” in lines 17 and 21. There is insufficient antecedent basis for these limitations in the claim. For examination purposes, the examiner interprets these limitations as “the most recent selected group of the inference data” and “the common evaluation input dataset”, respectively.
Independent claims 8 and 15 recite similar limitations to those of claim 1; therefore, the rejection of claim 1 applies.
Claim 16 recites the limitation “the input data” in lines 1-2. There is insufficient antecedent basis for this limitation in the claim. For examination purposes, the examiner interprets this limitation as “the common evaluation input dataset”.
Claims 2-3, 5, 9-10, 12, and 16 are dependent on claims 1, 8, and 15. Therefore, the rejection of claims 1, 8, and 15 applies.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-3, 5, 8-10, 12, and 15-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1:
Claims 1-3, 5, and 16 are system-type claims. Claims 8-10 and 12 are method claims. Claim 15 is a computer program stored in a non-transitory computer-readable storage medium type claim. Therefore, claims 1-3, 5, 8-10, 12, and 15-16 are each directed to a process, machine, manufacture, or composition of matter.
Regarding claim 1: 2A Prong 1:
create groups of the inference data according to a preset unit time period; (mental process – of creating groups of data according to time period can be performed by the human mind with the help of pen and paper (e.g., evaluation and judgement )).
select a most recent group of the inference data, based on a moving time window, for use in retraining; (mental process – of select a most recent group of the inference data, based on a moving time window, for use in retraining can be performed by the human mind with the help of pen and paper. For example, a human can draw a time window as shown in applicant Fig. 3-4 and select a most recent group of the inference data for retraining (e.g., evaluation and judgement )).
...the selected most recent group of the inference data by replacing an earlier portion of training data of the prediction model with the selected group of the inference data; (mental process – of replacing previous portion of data with a recent one can be performed by the human mind with the help of pen and paper (e.g., evaluation and judgement)).
select a common evaluation input dataset comprising a subset of input data distributed to both the prediction model and the retraining model; (mental process – of selecting a common evaluation input dataset comprising a subset of input data distributed to both models can be performed by the human mind with the help of pen and paper (e.g., evaluation and judgement)).
compare, for the evaluation input dataset, a first prediction accuracy of first inference data output from the prediction model and a second prediction accuracy of second inference data output from the retraining model, the comparation being based on a comparison to observed ground truth results for the evaluation input dataset when available; and (mental process – of comparing predictions accuracy of the models based on the distributed data can be performed by the human mind with the help of pen and paper (e.g., evaluation and judgement)).
determine whether to replace the prediction model with the retraining model based on whether the second prediction accuracy exceed the first prediction accuracy (mental process – of determining to replace the prediction model with the retrained model based on accuracy can be performed by the human mind with the help of pen and paper (e.g., evaluation and judgement)).
2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements:
A system for enhancing a prediction model, comprising: (This is directed to using computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f)).
one or more processors configured to execute instructions; and (This is directed to using computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f)).
a memory storing the instructions, wherein the execution of the instructions by the one or more processors configures the one or more processors to: (This is directed to using computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f)).
receive inference data inferred on from input data by a prediction model; (This is understood to be insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g)).
train a retraining model using retraining data including the selected most recent group of the inference data by... (This is directed to using computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f)).
distribute the evaluation input dataset comprising a subset of input data distributed to both the prediction model and the retraining model; (This is understood to be insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g)).
The additional elements disclosed above, alone or in combination, do not integrate the judicial exception into a practical application, as they are mere insignificant extra-solution activity combined with generic computer functions implemented with generic computer elements at a high level of generality to perform the abstract idea disclosed above.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
A system for enhancing a prediction model, comprising: (This is directed to using computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f)).
one or more processors configured to execute instructions; and (This is directed to using computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f)).
a memory storing the instructions, wherein the execution of the instructions by the one or more processors configures the one or more processors to: (This is directed to using computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f)).
receive inference data inferred on from input data by a prediction model; (This is directed to the well-understood, routine, conventional activity of receiving or transmitting data over a network. See MPEP 2106.05(d)(II)).
train a retraining model using retraining data including the selected most recent group of the inference data by... (This is directed to using computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f)).
distribute the evaluation input dataset comprising a subset of input data distributed to both the prediction model and the retraining model; (This is directed to the well-understood, routine, conventional activity of receiving or transmitting data over a network. See MPEP 2106.05(d)(II)).
The additional elements disclosed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are mere insignificant extra-solution activity combined with generic computer functions implemented with generic computer elements at a high level of generality to perform the abstract idea disclosed above.
Regarding claim 8: 2A Prong 1:
creating groups of the inference data according to a preset unit time period; (mental process – of creating groups of data according to time period can be performed by the human mind with the help of pen and paper (e.g., evaluation and judgement )).
selecting a most recent group of the inference data, based on a moving time window, for use in retraining; (mental process – of selecting a most recent group of the inference data, based on a moving time window, for use in retraining can be performed by the human mind with the help of pen and paper. For example, a human can draw a time window as shown in applicant Fig. 3-4 and select a most recent group of the inference data for retraining (e.g., evaluation and judgement )).
...replacing an earlier portion of training data of the prediction model; (mental process – of replacing previous portion of data with a recent one can be performed by the human mind with the help of pen and paper (e.g., evaluation and judgement)).
selecting a common evaluation input dataset comprising a subset of input data distributed to both the prediction model and the retraining model; (mental process – of selecting a common evaluation input dataset comprising a subset of input data distributed to both models can be performed by the human mind with the help of pen and paper (e.g., evaluation and judgement)).
comparing, for the evaluation input dataset, a first prediction accuracy of first inference data output from the prediction model and a second prediction accuracy of second inference data output from the retraining model, the comparison being based on a comparison to observed ground truth results for the evaluation input dataset when available; and (mental process – of comparing the prediction accuracies of the models based on the distributed data can be performed by the human mind with the help of pen and paper (e.g., evaluation and judgement)).
determining whether to replace the prediction model with the retraining model based on whether the second prediction accuracy exceed the first prediction accuracy (mental process – of determining to replace the prediction model with the retrained model based on accuracy can be performed by the human mind with the help of pen and paper (e.g., evaluation and judgement)).
2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements:
A method of enhancing a prediction model, which is performed in a computing device including one or more processors and a memory in which one or more programs to be executed by the one or more processors are stored, the method comprising: (This is directed to using computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f)).
receiving and storing inference data for input data from a prediction model; (This is understood to be insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g)).
training a retraining model using retraining data including the selected group of the inference data by... (This is directed to using computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f)).
distributing the evaluation input dataset to the prediction model and the retraining model; (This is understood to be insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g)).
The additional elements disclosed above, alone or in combination, do not integrate the judicial exception into a practical application, as they are mere insignificant extra-solution activity combined with generic computer functions implemented with generic computer elements at a high level of generality to perform the abstract idea disclosed above.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
A method of enhancing a prediction model, which is performed in a computing device including one or more processors and a memory in which one or more programs to be executed by the one or more processors are stored, the method comprising: (This is directed to using computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f)).
receiving and storing inference data for input data from a prediction model; (This is directed to the well-understood, routine, conventional activity of receiving or transmitting data over a network. See MPEP 2106.05(d)(II)).
training a retraining model using retraining data including the selected group of the inference data by... (This is directed to using computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f)).
distributing the evaluation input dataset to the prediction model and the retraining model; (This is directed to the well-understood, routine, and conventional activity of receiving or transmitting data over a network. See MPEP 2106.05(d)(II)).
The additional elements disclosed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are mere insignificant extra-solution activity in combination with generic computer functions implemented with generic computer elements at a high level of generality to perform the abstract idea disclosed above.
Regarding claim 15: Claim 15 is rejected under the same rationale as claim 1. Claim 15 only recites the additional elements of a computer program stored in a non-transitory computer-readable storage medium, the computer program comprising one or more instructions that, when executed by a computing device including one or more processors, cause the computing device to perform... which is directed to using computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f).
Regarding claim 2: 2A Prong 1:
detect an abnormality in the inference data; and (mental process – detecting an anomaly from inference/prediction/output data can be performed in the human mind with the help of pen and paper (e.g., evaluation and judgment)).
2A Prong 2 and 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
wherein the one or more processors are further configured to: (This is directed to using computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f)).
train the retraining model when the abnormality is detected (This is directed to using computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f)).
Regarding claim 3: 2A Prong 1: None.
2A Prong 2 and 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
wherein the one or more processors are further configured to: (This is directed to using computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f)).
store the groups of the inference data in the memory (This is understood to be insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g). Further, this is directed to the well-understood, routine, and conventional activity of storing and retrieving information in memory. See MPEP 2106.05(d)(II)).
The additional elements disclosed above, alone or in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are mere insignificant extra-solution activity in combination with generic computer functions implemented with generic computer elements at a high level of generality to perform the abstract idea disclosed above.
Regarding claim 5: 2A Prong 1: None.
2A Prong 2 and 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
wherein a capacity of training data of the prediction model and a capacity of the retraining data are set to be the same (This is directed to restricting the abstract idea to a particular technological environment. See MPEP 2106.05(h)).
Regarding claim 9: See the rejection of claim 2; the same rationale applies.
Regarding claim 10: See the rejection of claim 3; the same rationale applies.
Regarding claim 12: See the rejection of claim 5; the same rationale applies.
Regarding claim 16: 2A Prong 1: None.
2A Prong 2 and 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
wherein the distribution of the input data is performed according to a preset distribution ratio (This is directed to restricting the abstract idea to a particular technological environment. See MPEP 2106.05(h)).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 5, 8-10, 12, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. US 20210209488 A1 (hereinafter Li) in view of Ghanta et al. WO 2020123985 A1 (hereinafter Ghanta).
Regarding claim 1:
Li teaches:
A system for enhancing a prediction model, comprising: one or more processors configured to execute instructions; and (Li Abstract).
a memory storing the instructions, wherein the execution of the instructions by the one or more processors configures the one or more processors to: (Li Abstract).
receive inference data inferred from input data by a prediction model; (Li [0089] and [0092-0093]).
train a retraining model using retraining data including the most recent group of the inference data; (Li [0089] and [0092] teach that the second training library includes training samples that are from the inference result and, since the inference result is continuously generated, that the second training library refreshes to include the new data as training samples so that the second inference model performs better inference computing on the newly-appearing data than the first model. This implies that the second inference model is retrained when the second training library refreshes to include the most recent inference results (i.e., a most recent group of the inference data)).
select a common evaluation input dataset comprising a subset of input data distributed to both the prediction model and the retraining model; (Li [0102] teaches "the data to be processed sent from the user-side device" (i.e., a common evaluation input dataset comprising a subset of input data) being processed by both the first inference model (prediction model) and the second inference model (retraining model). In addition, Li [0005] recites "the data to be processed includes an original product image"; that is, the data to be processed comprises a subset of input data such as the original product image).
distribute the evaluation input dataset to both the prediction model and the retraining model; (Li [0101-0102]).
compare, for the evaluation input dataset, a first prediction accuracy of first inference data output from the prediction model and a second prediction accuracy of second inference data output from the retraining model, the comparison being based on a comparison to observed ground truth results for the evaluation input dataset when available; and (Li [0118] teaches comparing the performance (i.e., accuracy [0105]) of the first inference model (prediction model) with the performance of the third inference model (retrained model), and Li [0103-0104] teaches the comparison being based on "a copy of the original product image of the display panel sent from the user" (ground truth results)).
determine whether to replace the prediction model with the retraining model based on whether the second prediction accuracy exceeds the first prediction accuracy (Li [0006], [0090], and [0127] teach updating/replacing the first inference model (prediction model) with the second inference model (retraining model) based on performance requirements such as accuracy (see [0105])).
Li does not teach: create groups of the inference data according to a preset unit time period; select a most recent group of the inference data, based on a moving time window, for use in retraining; train a retraining model using retraining data including the selected most recent group of the inference data by replacing an earlier portion of training data of the prediction model with the selected group of the inference data.
However, in the analogous art, Ghanta teaches:
create groups of the inference data according to a preset unit time period (Page 34, Lines 11-20 and Page 38, Lines 3-8 teach that data is received in batches according to a predetermined time period, which implies that groups of data are created according to the predetermined time period).
select a most recent group of the inference data, based on a moving time window, for use in retraining; (Ghanta page 31, lines 22-24 teaches that the training data sets (i.e., most recent group of the inference data) can "include different types of data, data collected at different time periods, different amounts of data, and/or other types of variation in data"; this suggests that, when a training data set includes data collected at different time periods, the training data set can be viewed as having been selected based on a moving time window).
train a retraining model using retraining data including the selected most recent group of the inference data by replacing an earlier portion of training data of the prediction model with the selected group of the inference data; (Ghanta Page 34, Lines 11-20, Page 38, Lines 3-8, and Page 35, Lines 28-32 teach that the inference data is batched according to a preset unit time period and that the previous labels (i.e., an earlier portion) of the training data are replaced by the predictions of the primary ML model, which implies that these predictions include the most recently received batch of predictions (i.e., the most recent group of the inference data according to the preset unit time period)).
Ghanta is also in the same field of endeavor as Li (machine learning). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of creating groups of inference data according to a preset unit time period and replacing an earlier portion of the training data of the prediction model with the most recent group of the inference data according to the preset unit time period, as disclosed and taught by Ghanta, in the system taught by Li, to yield the predictable result of improving the accuracy of prediction in real time such that accurate explanations for predictions are consistently sustained (Ghanta pg. 11, lines 26-28).
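For purposes of illustration only (this sketch is not part of the claim mapping or the record, and all function names are hypothetical), the combined grouping, moving-time-window retraining, and conditional replacement flow discussed above might be sketched as:

```python
from collections import deque

def group_by_unit_period(inference_data, period):
    """Group (timestamp, record) pairs into buckets of a preset unit time period."""
    buckets = {}
    for ts, record in inference_data:
        buckets.setdefault(ts // period, []).append(record)
    return [buckets[k] for k in sorted(buckets)]

def retrain_with_moving_window(training_data, groups, window=1):
    """Build retraining data by replacing the earliest portion of the
    training data with the most recent group(s) of inference data
    selected by a moving time window (capacity stays the same)."""
    recent = [r for g in groups[-window:] for r in g]
    data = deque(training_data)
    for _ in range(min(len(recent), len(data))):
        data.popleft()                 # drop the earliest portion
    return list(data) + recent

def should_replace(prediction_accuracy, retraining_accuracy):
    """Replace the deployed model only when the retrained model is better."""
    return retraining_accuracy > prediction_accuracy
```

For example, with a unit period of 10, records stamped 0, 5, and 12 form two groups; the most recent group then displaces an equal-sized earliest slice of the original training data.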
Regarding claim 2:
Li and Ghanta teach The system of claim 1. Li specifically teaches wherein the one or more processors are further configured to: detect an abnormality in the inference data; (Li [0071], [0077-0078], [0083-0087]) and train the retraining model when the abnormality is detected (Li [0087-0089]).
Regarding claim 3:
Li and Ghanta teach The system of claim 1. Li specifically teaches wherein the one or more processors are further configured to: store the inference data in the memory (Li [0089] and [0093]).
Li does not specifically teach storing the groups of inference data in memory. Nevertheless, Ghanta teaches store the groups of the inference data in the memory (Ghanta Page 34, Lines 11-20 teaches that the first prediction module may receive data in batches according to a predetermined time period and may then store the received groups/batches of the inference data locally (i.e., in the memory), which implies that the data is stored in groups).
Regarding claim 5:
Li and Ghanta teach The system of claim 3. Ghanta specifically teaches wherein a capacity of training data of the prediction model and a capacity of the retraining data are set to be the same (Ghanta Page 26, Lines 27-32; Page 27, Lines 1-4; Page 35, Lines 28-32).
Regarding claims 8-10 and 12, they are method claims comprising limitations similar to those of claims 1-3 and 5, respectively, and are therefore rejected for at least the same rationale.
Regarding claim 15, it is a computer program product claim comprising limitations similar to those of claim 8 and is therefore rejected for at least the same rationale. Li further teaches the additional limitations of a computer program stored in a non-transitory computer-readable storage medium, the computer program comprising one or more instructions that, when executed by a computing device including one or more processors, cause the computing device to perform… (Li [Abstract], [0183]).
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Li and Ghanta in further view of Doddi et al. US 10,262,271 B1 (hereinafter Doddi).
Regarding claim 16:
Li and Ghanta teach The system of claim 1. Li teaches distributing the input dataset in paragraphs [0101-0102]. However, neither Li nor Ghanta discloses wherein the distribution of the input data is performed according to a preset distribution ratio.
Nevertheless, Doddi teaches the following:
wherein the distribution of the input data is performed according to a preset distribution ratio (Doddi col. 2, lines 51-52 teaches data (input data) being allocated (distributed) based on designating a percentage of the data to each of the models; this suggests that allocation of the data is done based on a "distribution ratio". In addition, Doddi col. 8 recites: "In other cases, users may allocate a certain percentage (either globally or based on selective parameters) of the dataset being modeled to different models -e.g., 25% of the data sent to model A, 25% to model B, and 50% to model C, and each stream executed in parallel and/or sequential as described above").
Doddi is also in the same field of endeavor as Li and Ghanta (machine learning). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of distributing data based on a designated percentage, as disclosed and taught by Doddi, in the system taught by Li and Ghanta, to yield the predictable result of providing a method to "facilitate the model development process-from feature engineering to production deployment" (Doddi col. 1, lines 51-53).
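As a purely illustrative sketch (not part of the record; the function name is hypothetical and the 25/25/50 split is taken from the Doddi passage quoted above), allocating input data to models according to a preset distribution ratio might look like:

```python
def distribute_by_ratio(input_data, ratios):
    """Allocate input data to models according to a preset distribution
    ratio, e.g. 25% to model A, 25% to model B, 50% to model C."""
    assert abs(sum(ratios.values()) - 1.0) < 1e-9, "ratios must sum to 1"
    allocations, start, n = {}, 0, len(input_data)
    for i, (model, share) in enumerate(ratios.items()):
        # the last model absorbs any rounding remainder
        end = n if i == len(ratios) - 1 else start + round(n * share)
        allocations[model] = input_data[start:end]
        start = end
    return allocations
```

For instance, distributing eight records with the ratios {"A": 0.25, "B": 0.25, "C": 0.5} sends two records each to A and B and the remaining four to C.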
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Chu US 2009/0106178 A1, titled: Computer-Implemented Systems and Methods for Updating Predictive Models.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GISEL G FACCENDA whose telephone number is (703)756-1919. The examiner can normally be reached Monday - Friday 8:00 am - 4:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abdullah Al Kawsar can be reached at (571) 270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/G.G.F./Examiner, Art Unit 2127
/ABDULLAH AL KAWSAR/Supervisory Patent Examiner, Art Unit 2127