Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114 ("RCE"), including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 6, 2026, has been entered.
Status of Claims
Claims 1-19 were previously pending and subject to a Final Office Action having a notification date of November 6, 2025 (“Final Office Action”). Following the Final Office Action, Applicant filed the RCE and an amendment on February 6, 2026 (“Amendment”), canceling claims 1-19 and adding new claims 20-35.
The present non-final Office Action addresses pending claims 20-35 in the Amendment.
Response to Arguments
Response to Applicant's Arguments Regarding Claim Rejections Under 35 USC §101
While the rejection of claims 1-19 under 35 USC 101 has been withdrawn due to their cancelation in the Amendment, new claims 20-35 are rejected under 35 USC 101 as set forth in the rejection below.
Response to Applicant's Arguments Regarding Claim Rejections Under 35 USC §102
While the rejection of claims 1-17 and 19 under 35 USC 102 has been withdrawn due to their cancelation in the Amendment, new rejections under 35 USC 103 are presented herein.
Claim Objections
Claims 20 and 28 are objected to because of the following informalities:
-In the last line of claims 20 and 28, it appears that "selected sample" should be changed to --selected sample data--.
Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: "communication device," "confidence calculation device," and "labeling guide device" in claims 20 and 22-24.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 20-35 are rejected under 35 U.S.C. §101 because the claimed invention is directed to an abstract idea without significantly more:
Subject Matter Eligibility Criteria - Step 1:
Claims 20-27 are directed to a system (i.e., a machine) and claims 28-35 are directed to a non-transitory computer-readable storage medium (i.e., a manufacture). Accordingly, claims 20-35 are all within at least one of the four statutory categories. 35 USC §101.
Subject Matter Eligibility Criteria - Alice/Mayo Test: Step 2A - Prong One:
Regarding Prong One of Step 2A of the Alice/Mayo test (which collectively includes the guidance in the January 7, 2019 Federal Register notice and the October 2019 and July 2024 updates issued by the USPTO as incorporated into the MPEP, as supported by relevant case law), the claim limitations are to be analyzed to determine whether, under their broadest reasonable interpretation, they “recite” a judicial exception or in other words whether a judicial exception is “set forth” or “described” in the claims. MPEP 2106.04(II)(A)(1). An “abstract idea” judicial exception is subject matter that falls within at least one of the following groupings: a) certain methods of organizing human activity, b) mental processes, and/or c) mathematical concepts. MPEP 2106.04(a).
Representative independent claim 20 includes limitations that recite at least one abstract idea. Specifically, independent claim 20 recites:
A system for guiding a labeling of data for retraining an artificial intelligence (AI) model that receives and classifies input data into any one of a plurality of classes, the system comprising:
a communication device that receives the input data input to the AI model and an inference value with which the AI model classifies the input data into the classes, and externally transmits selected data among the input data;
a confidence calculation device that calculates a confidence of the inference value using the inference value received from the AI model, and classifies a feature of the input data from the calculated confidence;
a labeling guide device that selects sample data of the classes that correspond to the input data or various types of data having the same feature as the input data as a sample for guidance; and
a display device that outputs the input data and the selected sample together.
The Examiner submits that the foregoing underlined limitations constitute “mental processes” because they are observations/evaluations/judgments/analyses that can, at the currently claimed high level of generality, be practically performed in the human mind (e.g., with pen and paper). As an example, a person could practically, in their mind or with pen and paper, "guide a labeling of data" for use in retraining an AI model via receiving input data and an inference value used/generated by an AI model, externally transmitting selected parts of the input data (e.g., via writing it down), calculating a confidence of the inference value from the AI model (e.g., via determining how close features of the input data are to a "center" of the features of the particular inferred class), selecting sample data of the classes having the same feature as the input data, and outputting the input data and the selected sample together (e.g., writing them down). These recitations, under their broadest reasonable interpretation, are similar to the concepts of collecting information, analyzing it, and displaying certain results of the collection and analysis in Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 119 USPQ2d 1739 (Fed. Cir. 2016). MPEP 2106.04(a)(2)(III). Claims “directed to collection of information, comprehending the meaning of that collected information, and indication of the results, all on a generic computer network operating in its normal, expected manner,” fail step one of the Alice framework. In re Killian, 45 F.4th 1373, 1380 (Fed. Cir. 2022). Claims directed to “collecting, analyzing, manipulating, and displaying data” are abstract. Univ. of Fla. Research Found., Inc. v. General Elec. Co., 916 F.3d 1363, 1368 (Fed. Cir. 2019). Claims directed to organizing, storing, and transmitting information have been determined to be directed to an abstract idea. Cyberfone Sys., L.L.C. v. CNN Interactive Grp., Inc., 558 F. App’x 988, 992 (Fed. Cir. 2014).
Accordingly, the claim recites at least one abstract idea.
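Purely as an illustrative aside, the notion of deriving a confidence from an inference value at the claimed level of generality (and the "marginal possibility difference" of claims 23 and 31) can be sketched as follows; the function name and the margin-based measure are assumptions for illustration only, not taken from the claims or the specification:

```python
import numpy as np

def confidence_from_inference(inference: np.ndarray) -> dict:
    """Derive simple confidence measures from a model's class-probability
    (inference) vector: the top probability, and the margin between the two
    most likely classes (a small margin suggests ambiguous input data)."""
    probs = np.sort(inference)[::-1]  # probabilities in descending order
    return {
        "predicted_confidence": float(probs[0]),
        "top2_margin": float(probs[0] - probs[1]),
    }

# A clearly classified inference vector vs. an ambiguous one
clear = confidence_from_inference(np.array([0.90, 0.07, 0.03]))
close = confidence_from_inference(np.array([0.48, 0.46, 0.06]))
```

The small `top2_margin` of the second vector would flag its input as a candidate for relabeling guidance under this illustrative measure.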
Furthermore, dependent claims 21-24, 27, 29-32, and 35 further define the at least one abstract idea (and thus fail to make the abstract idea any less abstract) as set forth below:
-Claims 21 and 29 call for classifying the feature of the input data into: a first group of data in which the inference value is located within a radius of a specific class among the classes in a latent space and thus that is fully classified into the specific class only, a second group of data in which the inference value is located outside the radius of each of the classes in the latent space but that is located relatively adjacent to two or more of the classes, and a third group of data in which the inference value is at a distance of a preset reference value or more from all the classes in the latent space. A person can practically perform these steps in their mind with pen and paper ("mental processes").
-Claims 22 and 30 call for selecting the sample data that help relabeling of the input data into the second group or the third group which a person can practically perform in their mind with pen and paper ("mental processes").
-Claims 23 and 31 call for selecting any data that have been classified into the classes with a marginal possibility difference as the sample which a person can practically perform in their mind with pen and paper ("mental processes").
-Claims 24 and 32 call for selecting various types of data having the same feature as the input data as the sample and outputting the sample which a person can practically perform in their mind with pen and paper ("mental processes").
-Claims 27 and 35 recite how the data or the inference value disposed in the latent space has a matrix or a vector value derived from the matrix. A person can practically arrange the data and/or the inference value in a matrix/vector at the claimed high level of generality in their mind with pen and paper ("mental processes").
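The three-group, distance-based classification summarized above for claims 21 and 29 can be sketched, purely for illustration, as follows; the class names, centers, radius, and reference value are hypothetical and not taken from the claims or the specification:

```python
import numpy as np

# Hypothetical class centers in a 2-D latent space and illustrative thresholds
CENTERS = {"benign": np.array([0.0, 0.0]), "malicious": np.array([4.0, 0.0])}
RADIUS = 1.0          # illustrative "radius of a specific class"
FAR_THRESHOLD = 3.0   # illustrative "preset reference value" from all classes

def classify_group(point: np.ndarray) -> int:
    """Assign a latent-space point to one of three groups: (1) inside one
    class radius, (3) at least the reference distance from every class,
    (2) otherwise (outside all radii but between/near classes)."""
    min_dist = min(float(np.linalg.norm(point - c)) for c in CENTERS.values())
    if min_dist <= RADIUS:
        return 1  # fully classified into a single class
    if min_dist >= FAR_THRESHOLD:
        return 3  # far from all classes
    return 2      # near a decision boundary between classes
```

For example, a point near one center falls in group 1, a point midway between the two centers in group 2, and a distant outlier in group 3.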
Subject Matter Eligibility Criteria - Alice/Mayo Test: Step 2A - Prong Two:
Regarding Prong Two of Step 2A of the Alice/Mayo test, it must be determined whether the claim as a whole integrates the abstract idea into a practical application. As noted at MPEP §2106.04(II)(A)(2), it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements such as merely using a computer to implement an abstract idea, adding insignificant extra solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.” MPEP §2106.05(I)(A).
In the present case, the additional limitations beyond the above-noted at least one abstract idea recited in the claim are as follows (where the bolded portions are the “additional limitations” while the underlined portions continue to represent the at least one “abstract idea”):
A system for guiding a labeling of data for retraining an artificial intelligence (AI) model that receives and classifies input data into any one of a plurality of classes, the system comprising:
a communication device that (using computers or machinery as mere tools to perform the abstract idea as noted below, see MPEP § 2106.05(f)) receives the input data input to the AI model and an inference value with which the AI model classifies the input data into the classes, and externally transmits selected data among the input data;
a confidence calculation device that (using computers or machinery as mere tools to perform the abstract idea as noted below, see MPEP § 2106.05(f)) calculates a confidence of the inference value using the inference value received from the AI model, and classifies a feature of the input data from the calculated confidence;
a labeling guide device that (using computers or machinery as mere tools to perform the abstract idea as noted below, see MPEP § 2106.05(f)) selects sample data of the classes that correspond to the input data or various types of data having the same feature as the input data as a sample for guidance; and
a display device that (using computers or machinery as mere tools to perform the abstract idea as noted below, see MPEP § 2106.05(f)) outputs the input data and the selected sample together.
For the following reasons, the Examiner submits that the above-identified additional limitations, when considered as a whole with the limitations reciting the at least one abstract idea, do not integrate the above-noted at least one abstract idea into a practical application.
Regarding the additional limitations of the communication, confidence calculation, labeling guide, and display devices, the Examiner submits that these limitations amount to merely using a computer or other machinery as tools performing their typical functionality in conjunction with performing the above-noted at least one abstract idea (see MPEP § 2106.05(f)).
Thus, taken alone, the additional elements do not integrate the at least one abstract idea into a practical application. Furthermore, looking at the additional limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. MPEP §2106.05(I)(A) and §2106.04(II)(A)(2).
For these reasons, representative independent claim 20 and analogous independent claim 28 do not recite additional elements that integrate the judicial exception into a practical application. Accordingly, representative independent claim 20 and analogous independent claim 28 are directed to at least one abstract idea.
The remaining dependent claim limitations not addressed above fail to integrate the abstract idea into a practical application as set forth below:
-Claims 25, 26, 33, and 34 recite how the AI model is trained to minimize a cost value of a cost function that is proportional to a distance between the inference value and other data of the same class as of the inference value in a latent space which amounts to merely reciting the idea of a solution or outcome without reciting details of how a solution to a problem is accomplished which is equivalent to the words “apply it” (see MPEP § 2106.05(f)). Claims that do no more than apply established methods of machine learning to a new data environment are not patent eligible. Recentive Analytics, Inc. v. Fox Corp., Fox Broadcasting Company, LLC, Fox Sports Productions, LLC, Case No. 23-2437, (Fed. Cir. 2025), pp. 10, 14. An abstract idea does not become nonabstract by limiting the invention to a particular field of use or technological environment. Id. Requirements that the machine learning model be “iteratively trained” or dynamically adjusted do not represent a technological improvement because iterative training using selected training material and dynamic adjustments based on real-time changes are incident to the very nature of machine learning. Recentive Analytics, Inc. v. Fox Corp., Fox Broadcasting Company, LLC, Fox Sports Productions, LLC, Case No. 23-2437, (Fed. Cir. 2025), p. 12. “[T]he way machine learning works is the inputs are defined, the model is trained, and then the algorithm is actually updated and improved over time based on the input.” Id.
When the above additional limitations are considered as a whole along with the limitations directed to the at least one abstract idea, the at least one abstract idea is not integrated into a practical application. Therefore, the claims are directed to at least one abstract idea.
Subject Matter Eligibility Criteria - Alice/Mayo Test: Step 2B:
Regarding Step 2B of the Alice/Mayo test, representative independent claim 20 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception for reasons the same as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application.
Regarding the additional limitations of the communication, confidence calculation, labeling guide, and display devices, the Examiner submits that these limitations amount to merely using a computer or other machinery as tools performing their typical functionality in conjunction with performing the above-noted at least one abstract idea (see MPEP § 2106.05(f)).
The dependent claims also do not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception for the same reasons to those discussed above with respect to determining that the dependent claims do not integrate the at least one abstract idea into a practical application.
-Claims 25, 26, 33, and 34 recite how the AI model is trained to minimize a cost value of a cost function that is proportional to a distance between the inference value and other data of the same class as of the inference value in a latent space which amounts to merely reciting the idea of a solution or outcome without reciting details of how a solution to a problem is accomplished which is equivalent to the words “apply it” (see MPEP § 2106.05(f)). Claims that do no more than apply established methods of machine learning to a new data environment are not patent eligible. Recentive Analytics, Inc. v. Fox Corp., Fox Broadcasting Company, LLC, Fox Sports Productions, LLC, Case No. 23-2437, (Fed. Cir. 2025), pp. 10, 14. An abstract idea does not become nonabstract by limiting the invention to a particular field of use or technological environment. Id. Requirements that the machine learning model be “iteratively trained” or dynamically adjusted do not represent a technological improvement because iterative training using selected training material and dynamic adjustments based on real-time changes are incident to the very nature of machine learning. Recentive Analytics, Inc. v. Fox Corp., Fox Broadcasting Company, LLC, Fox Sports Productions, LLC, Case No. 23-2437, (Fed. Cir. 2025), p. 12. “[T]he way machine learning works is the inputs are defined, the model is trained, and then the algorithm is actually updated and improved over time based on the input.” Id.
Therefore, claims 20-35 are ineligible under 35 USC §101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 20, 21, 25, 26, 28, 29, 33, and 34 are rejected under 35 U.S.C. 103 as being unpatentable over Int'l Pub. No. WO 2020/030913 to Harang et al. ("Harang") in view of U.S. Patent No. 12,524,677 to Chen et al. ("Chen"):
Regarding claim 20, Harang discloses a system for guiding a labeling of data ([1048], [1071] discloses labeling of training data for training/retraining of ML/AI model) for retraining an artificial intelligence (AI) model ([1048], [1053] disclose retraining ML/AI model) that receives and classifies input data into any one of a plurality of classes (the ML/AI model classifies artifacts (classifies input data into classes) per [1030]; for instance, see different classes at end of [1045] and [1056]), the system comprising:
a communication device that receives the input data input to the AI model and an inference value with which the AI model classifies the input data into the classes, and externally transmits selected data among the input data ([1044]-[1045] and [1056] discuss using the ML model (e.g., an NN) to output an output vector/identification (make an inference) corresponding to a classification of maliciousness of an artifact/file (classifying the input data into one of the classes) while [1089] discloses how the processor 110 (which executes the ML model per Figure 1) stores the determined classification in memory 120 which would necessarily be in connection with the corresponding artifact(s)/file(s) (to distinguish from the classifications of other artifacts/files/etc.); in this regard, the processor 110 and corresponding software/instructions amount to a "communication device" that receives the artifact(s)/file(s)/input data and the classification/inference value and "externally" transmits an indication of the specific artifact(s)/file(s) of the input data to which the classification/inference value corresponds to the memory 120);
a confidence calculation device that calculates a confidence of the inference value using the inference value received from the AI model ([1045] discloses how classifier receives output vector (inference value) from the NN (AI model) and determines a confidence value), and classifies a feature of the input data from the calculated confidence ([1038], [1056] disclose how the input file/artifact includes features while [1075]-[1077] and [1095]-[1096] discuss how the ML/AI model is retrained when a confidence value fails to meet a confidence criterion; accordingly, when the confidence value meets the criterion, features of the input data are "classified" according to the inference/classification value and when the confidence value does not meet the criterion, the ML/AI model is retrained such that the features of the input data are not "classified" according to the inference/classification value; in other words, retraining the ML/AI model when the confidence value does not meet the criterion indicates that the inference/classification value generated by the ML/AI model before the retraining is incorrect/inaccurate such that the features of the input data would not be formally classified according to the same and would be classified after the ML/AI model is retrained; also, [1086] and [1088] discuss how the feature vector for each artifact (input data) is analyzed by the ML model to generate a classification and a confidence value while [1073] discusses low, medium, and high confidence buckets; accordingly, the features would be classified as the particular classification for the "high" buckets and less likely to be classified as the particular classification for the "low" buckets; also, [1099] discusses classifying only "high-confidence" artifacts which amounts to classifying the input data features from the calculated confidences (performing the classification for the "high-confidence" artifacts); furthermore, the processor 110 and corresponding software/instructions for performing the above functions amount to a "confidence calculation device" that performs such functions);
a labeling guide device that selects sample data of the classes that correspond to the input data or various types of data having the same feature as the input data as a sample for guidance ([1066] discloses how evaluator 114 can use a set of artifacts in the training data set that are statistically similar to the ones associated with the confidence metrics to retrain the ML/AI model (e.g., for "guidance") to improve performance; the set of artifacts in the training data set that are statistically similar to the ones associated with the confidence metrics map to the classes that correspond to the input data; the processor 110 and corresponding software/instructions for performing the above functions amount to a "labeling guide device" that performs such functions); and
…
While Harang appears to be silent, Chen teaches that it was known in the machine learning art to provide a display device that outputs the input data and the selected sample together (claim 1 of Chen discloses retraining an ML model with an augmented dataset including an example and then displaying both an original sample and an image representing the retraining example (which would necessarily be via a "display device"), which would advantageously allow analysts to more clearly understand the manner in which the ML model is being retrained and thereby more clearly understand subsequent predictions therefrom).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the system of Harang to include a display device that outputs the input data and the selected sample together as taught by Chen to advantageously allow analysts to more clearly understand the manner in which the ML model is being retrained and thereby more clearly understand subsequent predictions therefrom, because a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention, and because there would have been a reasonable expectation of success in doing so. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398 (2007). The courts have made clear that the teaching, suggestion, or motivation test is flexible and an explicit suggestion to combine the prior art is not necessary. The motivation to combine may be implicit and may be found in the knowledge of one of ordinary skill in the art, or, in some cases, from the nature of the problem to be solved. DyStar Textilfarben GmbH & Co. Deutschland KG v. C.H. Patrick Co., 464 F.3d 1356, 1360, 80 USPQ2d 1641, 1645 (Fed. Cir. 2006). Furthermore, all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination yielded nothing more than predictable results to one of ordinary skill in the art. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398 (2007).
Regarding claim 21, the Harang/Chen combination discloses the system of claim 20, further including wherein the feature of the input data is classified into:
a first group of data in which the inference value is located within a radius of a specific class among the classes in a latent space and thus that is fully classified into the specific class only (Figures 4A, 4C, and 4D and [1043] of Harang illustrate/discuss a latent feature space in which first and second features of each of a plurality of artifacts (input data) are classified into one of a plurality of clusters each associated with a respective inference value (e.g., benign, malicious, malware family, etc.; also see end of [1056]); for instance, Figures 4A, 4C, and 4D illustrate how each of the clusters has some radius within which the features and thus a respective artifact would be "fully classified" into the specific class/inference value of the respective cluster and which amounts to a "first group"),
a second group of data in which the inference value is located outside the radius of each of the classes in the latent space but that is located relatively adjacent to two or more of the classes ([1043] and Figure 4C of Harang discuss/illustrate features of artifacts/data in a decision boundary of the latent feature space in which the classification/inference value is outside the radii of all classes but relatively adjacent to two or more classes (the bottom two clusters/classes), which amounts to a "second group" (see annotation below)), and
a third group of data in which the inference value is at a distance of a preset reference value or more from all the classes in the latent space ([1043] and Figure 4C of Harang discuss/illustrate features of artifacts/data in a decision boundary of the latent feature space in which the classification/inference value is at a distance greater than some reference value from all classes in the latent feature space, which amounts to a "third group" (see annotation below)).
[Annotated reproduction of Figure 4C of Harang (media_image1.png, greyscale)]
Regarding claim 25, the Harang/Chen combination discloses the system of claim 20, further including wherein the AI model is trained to minimize a cost value of a cost function ([1067] of Harang discloses error rates (cost values of a cost function), which are known to be minimized during training of AI/ML models; also, [1051] of Harang discloses determining a performance of the NN/ML model on known classifications of the artifacts; such performance is necessarily a comparison of the classifications of artifacts output by the ML model to the known classifications of the artifacts, which amounts to an error/loss/cost function; [1051] also discloses adjusting the weights of the NN/ML model based on feedback on the above-noted performance, which would necessarily involve adjusting the weights so as to reduce/minimize the difference between the classifications of artifacts output by the ML model and the known classifications of the artifacts, thereby improving the accuracy of the ML model).
Regarding claim 26, the Harang/Chen combination discloses the system of claim 25, further including wherein the cost function includes a function that is proportional to a distance between the inference value and other data of the same class as of the inference value ([1051] of Harang discloses determining a performance of the NN/ML model on known classifications of the artifacts; such performance necessarily corresponds to a function that is proportional to a distance between the classifications of artifacts output by the ML model (inference values) and the known classifications of the artifacts (other data of the same class as of the inference value), where greater performance would correlate with reduced distance between the "inference value" and the "other data of the same class as of the inference value") in a latent space (Figures 4A-4D of Harang).
Claims 28, 29, 33, and 34 are rejected in view of the Harang/Chen combination similarly to the rejections of claims 20, 21, 25, and 26 above, respectively.
Claims 27 and 35 are rejected under 35 U.S.C. 103 as being unpatentable over Int'l Pub. No. WO 2020/030913 to Harang et al. ("Harang") in view of U.S. Patent No. 12,524,677 to Chen et al. ("Chen"), and further in view of U.S. Patent No. 11,315,196 to Narayan et al. ("Narayan"):
Regarding claim 27, the Harang/Chen combination discloses the system of claim 26, but appears to be silent regarding wherein the data or the inference value disposed in the latent space has a matrix or a vector value derived from the matrix.
Nevertheless, Narayan teaches (10:44-11:37 and Figure 3B) that it was known in the machine learning art for data in a latent space to have a matrix or a vector derived from the matrix to advantageously provide the basis for determined classifications and to expose an ML model to more context and information than just scores/predictions (12:7-12).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the data or the inference value disposed in the latent space of the Harang/Chen combination to have a matrix or a vector value derived from the matrix, as taught by Narayan, to advantageously provide the basis for determined classifications and to expose an ML model to more context and information than just scores/predictions. A person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention, and there would have been a reasonable expectation of success in doing so. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398 (2007). Furthermore, all of the claimed elements were known in the prior art, one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination yielded nothing more than predictable results to one of ordinary skill in the art. Id.
Claim 35 is rejected in view of the Harang/Chen/Narayan combination as discussed above in relation to claim 27.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The references cited on the attached PTO-892 disclose various systems for training ML models with training data, generating predictions and confidence values with the trained models based on input data, analyzing performance of the ML models, and retraining the ML models with new training data.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHON A. SZUMNY whose telephone number is (303) 297-4376. The examiner can normally be reached Monday-Friday 7-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Dunham, can be reached at 571-272-8109. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JONATHON A. SZUMNY/Primary Examiner, Art Unit 3686