Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This is a Non-Final Office Action in response to the patent application filed 01/23/2023. Claims 1-20 are pending. Claims 1, 8, and 15 are independent.
In addition, in the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Information Disclosure Statement
A signed and dated copy of applicant’s IDS, filed 01/23/2023, is attached to this Office Action.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 fail to recite statutory subject matter, as defined in 35 U.S.C. 101, because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.
Step 1: YES. The claims are directed to a process, machine, manufacture, or composition of matter. Claim 1, for example, recites a method for identifying anomalies in a trained prediction model, the method comprising: receiving an input data set; obtaining a prediction from the trained prediction model based on the input data set; receiving, from a subject matter expert, a determination that the prediction is a false positive; creating a false positive contour based on the input data set; and adding the false positive contour to a false positive feature space for the trained prediction model. The claims therefore fall into one of the four categories of patent-eligible subject matter (process, machine, manufacture, or composition of matter).
Step 2A, Prong One (does the claim recite a judicial exception?): The claims recite a method for identifying anomalies in a trained prediction model, the method comprising: receiving an input data set; obtaining a prediction from the trained prediction model based on the input data set; receiving, from a subject matter expert, a determination that the prediction is a false positive; creating a false positive contour based on the input data set; and adding the false positive contour to a false positive feature space for the trained prediction model. These limitations recite mental processes, since the trained prediction model goes through the entire lifecycle of the model, which enables the trained prediction model to identify false positive predictions, and the false positive feature space is created based on identified false positive predictions through manual processes (intervention of an SME, i.e., a subject matter expert) (see the current specification in USPGPUB 20240249509 A1, Para 34).
Step 2A, Prong Two (do the claims recite additional elements that integrate the judicial exception into a practical application?): The claims recite additional limitations such as a computer-implemented method/system/computer program product for adding the false positive contour to a false positive feature space for the trained prediction model. It is noted that this is an improvement in the abstract idea itself, but the additional limitations do not integrate the judicial exception into a practical application; i.e., the trained prediction model goes through the entire lifecycle of the model, which enables the trained prediction model to evaluate positive predictions and identify false positive predictions. The false positive feature space is created based on identified false positive predictions through manual intervention of the SME; the false positive contour is then created based on the input data set and added to the false positive feature space for the trained prediction model.
These limitations only recite generic computer components that amount to mere instructions to implement the abstract idea on a computer and therefore do not integrate the judicial exception into a practical application (MPEP 2106.04(d), 2106.05(f)).
Step 2B (does the claim amount to significantly more?): The claims recite additional limitations such as a system/computer program product for identifying anomalies in a trained prediction model and adding the false positive contour to a false positive feature space for the trained prediction model. These limitations only recite generic computer components that amount to mere instructions to implement the abstract idea on a computer, and therefore do not amount to significantly more than the abstract idea itself (MPEP 2106.05, 2106.04(d) and 2106.05(f)).
As to dependent claims 2-7, 9-14, and 16-20, these claims further recite additional limitations such as: comparing the prediction to the false positive feature space for the trained prediction model prior to providing the prediction to the subject matter expert; an analysis of the comparison of the prediction to the false positive feature space; a primary prediction and a secondary prediction; a prediction that is a numerical value associated with a binary value; etc. These limitations only amount to mere instructions to implement the abstract idea, do not include elements that amount to significantly more than the abstract idea, and are rejected under the same rationale.
Accordingly, claims 1-20 fail to recite statutory subject matter, as defined in 35 U.S.C. 101.
Claim Rejections - 35 U.S.C. § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Gullikson et al. (US 20230110056 A1, filed 10/12/2022) [hereinafter “Gullikson”], in view of Gandenberger et al. (US 20200103886 A1, filed 09/28/2018) [hereinafter “Gandenberger”].
Independent Claim 1: Gullikson teaches a method for identifying anomalies in a trained prediction model, the method comprising: receiving an input data set; obtaining a prediction from the trained prediction model based on the input data set (Gullikson, Paras 3-5, disclosing a method for detecting an anomaly of a trained model based on input data).
Gullikson further teaches: receiving, from a subject matter expert, a determination that the prediction is a false positive (Gullikson, Paras 3-5, disclosing a method for detecting an anomaly of a trained model based on input data using rules established by a subject matter expert. Also, Gullikson, Para 96, further mentions that after training the models, the models are validated by a model validator 410 (i.e., a subject matter expert) to determine whether each model is able to distinguish normal operational behavior from abnormal operational behavior with sufficient reliability. In this context, sufficient reliability is determined based on specified reliability criteria, such as a false positive rate, a false negative rate, an accurate detection rate, or other metrics indicative of reliability of a model).
It is noted that Gullikson discloses a method for detecting an anomaly of a trained model based on input data; however, Gullikson does not expressly teach: creating a false positive contour based on the input data set; and adding the false positive contour to a false positive feature space for the trained prediction model. The combination of Gullikson and Gandenberger teaches these limitations (Gandenberger, Paras 182-188 and Figs. 7A-D and 8A-D, describing a method for predicting positive and false predictions, in which model outputs are grouped into alerts, such as 1 catch and 0 false flags, and plotted in a graph of catches and false flags. The outputs shown in Figs. 7A-D may be grouped into alerts and then counted for catches and false flags as shown in Figs. 8A-D, each of which comprises a graph having an x-axis 802 that indicates the time of the model's output relative to the actual event occurrence and a y-axis 804 that indicates whether the model's output was a “Yes” prediction or a “No” prediction by an event prediction model).
Under the BRI, Gandenberger’s graph having an x-axis 802 that indicates the time of the model's output relative to the actual event occurrence and a y-axis 804 that indicates whether the model's output was a “Yes” prediction or a “No” prediction by an event prediction model is recognized as the contour (i.e., a shape, line, edge, or two-dimensional graph with x and y axes) as claimed.
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Gullikson’s method for detecting an anomaly of a trained model based on input data to include creating a false positive contour based on the input data set and adding the false positive contour to a false positive feature space for the trained prediction model, as taught by Gandenberger, which provides event prediction models that are configured to predict whether event occurrences are forthcoming and then preemptively notify a user of forthcoming event occurrences sufficiently in advance of when such event occurrences actually happen, so that action can be taken to address the event occurrences before they actually happen. In this way, an event prediction model may help to mitigate the costs that may otherwise result from an unexpected occurrence of an undesirable event like an asset failure, such as an increase in maintenance cost and/or a decrease in productivity, and may thus provide a positive net business value (Gandenberger, Para 4). It is noted that the KSR ruling recommends combining references directed to similar subject matter.
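For illustration of the claimed workflow only, the claim-1 steps (receive input, obtain a prediction, receive an SME false-positive determination, create a contour, and add it to a false positive feature space) can be sketched as follows. This is a minimal hypothetical sketch, not the applicant's or either reference's actual implementation; all names are invented, and the "contour" is assumed, for simplicity, to be a spherical region around the flagged input.

```python
from dataclasses import dataclass, field

@dataclass
class FalsePositiveContour:
    center: tuple   # feature values of the input the SME flagged
    radius: float   # assumed: a simple spherical region around that input

@dataclass
class FalsePositiveFeatureSpace:
    contours: list = field(default_factory=list)

    def add(self, contour: FalsePositiveContour) -> None:
        self.contours.append(contour)

    def contains(self, features) -> bool:
        # a point falls inside the space if any stored contour covers it
        return any(
            sum((a - b) ** 2 for a, b in zip(features, c.center)) ** 0.5 <= c.radius
            for c in self.contours
        )

def handle_prediction(model, input_features, sme_says_false_positive, space, radius=1.0):
    prediction = model(input_features)           # obtain a prediction from the trained model
    if prediction and sme_says_false_positive:   # SME determines it is a false positive
        space.add(FalsePositiveContour(tuple(input_features), radius))
    return prediction

# usage: a stand-in "trained" model and one SME-flagged prediction
space = FalsePositiveFeatureSpace()
model = lambda x: x[0] > 0.5
handle_prediction(model, [0.9, 0.1], sme_says_false_positive=True, space=space)
print(space.contains([0.8, 0.2]))   # True: nearby inputs now fall in the FP space
```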
Claim 2: Gullikson and Gandenberger further teach wherein the false positive contour is created based on one or more features extracted from the data set (Gandenberger, Paras 182-188 and Figs. 7A-D and 8A-D, describing a method for predicting positive and false predictions, in which model outputs are grouped into alerts, such as 1 catch and 0 false flags, and plotted in a graph of catches and false flags. The outputs shown in Figs. 7A-D may be grouped into alerts and then counted for catches and false flags as shown in Figs. 8A-D, each of which comprises a graph having an x-axis 802 that indicates the time of the model's output relative to the actual event occurrence and a y-axis 804 that indicates whether the model's output was a “Yes” prediction or a “No” prediction by an event prediction model).
Under the BRI, Gandenberger’s graph having an x-axis 802 that indicates the time of the model's output relative to the actual event occurrence and a y-axis 804 that indicates whether the model's output was a “Yes” prediction or a “No” prediction by an event prediction model is recognized as the false positive contour (i.e., a shape, line, edge, or two-dimensional graph with x and y axes) as claimed.
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Gullikson’s method for detecting an anomaly of a trained model based on input data to include creating the false positive contour based on one or more features extracted from the data set, as taught by Gandenberger, which provides event prediction models that are configured to predict whether event occurrences are forthcoming and then preemptively notify a user of forthcoming event occurrences sufficiently in advance of when such event occurrences actually happen, so that action can be taken to address the event occurrences before they actually happen. In this way, an event prediction model may help to mitigate the costs that may otherwise result from an unexpected occurrence of an undesirable event like an asset failure, such as an increase in maintenance cost and/or a decrease in productivity, and may thus provide a positive net business value (Gandenberger, Para 4). It is noted that the KSR ruling recommends combining references directed to similar subject matter.
Claim 3: Gullikson and Gandenberger further teach wherein the one or more features are identified by the subject matter expert (Gullikson, Paras 3-5, disclosing a method for detecting an anomaly of a trained model based on input data using rules established by a subject matter expert).
Claim 4: Gullikson and Gandenberger further teach comparing the prediction to the false positive feature space for the trained prediction model prior to providing the prediction to the subject matter expert (Gullikson, Paras 3-5, disclosing a method for detecting an anomaly of a trained model based on input data using rules established by a subject matter expert. Also, Gullikson, Para 96, further mentions that after training the models, the models are validated by a model validator 410 (i.e., a subject matter expert) to determine whether each model is able to distinguish normal operational behavior from abnormal operational behavior with sufficient reliability. In this context, sufficient reliability is determined based on specified reliability criteria, such as a false positive rate, a false negative rate, an accurate detection rate, or other metrics indicative of reliability of a model).
Claim 5: Gullikson and Gandenberger further teach wherein the prediction is provided to the subject matter expert with an analysis of the comparison of the prediction to the false positive feature space (Gullikson, Paras 3-5, disclosing a method for detecting an anomaly of a trained model based on input data using rules established by a subject matter expert. Also, Gullikson, Para 96, further mentions that after training the models, the models are validated by a model validator 410 (i.e., a subject matter expert) to determine whether each model is able to distinguish normal operational behavior from abnormal operational behavior with sufficient reliability. In this context, sufficient reliability is determined based on specified reliability criteria, such as a false positive rate, a false negative rate, an accurate detection rate, or other metrics indicative of reliability of a model).
Claim 6: Gullikson and Gandenberger further teach wherein the prediction includes a primary prediction and a secondary prediction (Gullikson, Paras 3-5, disclosing a method for detecting an anomaly of a trained model based on input data using rules established by a subject matter expert. Also, Gullikson, Para 96, further mentions that after training the models, the models are validated by a model validator 410 (i.e., a subject matter expert) to determine whether each model is able to distinguish normal operational behavior from abnormal operational behavior with sufficient reliability. In this context, sufficient reliability is determined based on specified reliability criteria, such as a false positive rate, a false negative rate, an accurate detection rate, or other metrics indicative of reliability of a model (i.e., a primary prediction and a secondary prediction)).
Claim 7: Gullikson and Gandenberger further teach wherein the primary prediction is a binary value and the secondary prediction is a numerical value associated with the binary value (Gullikson, Paras 3-5, disclosing a method for detecting an anomaly of a trained model based on input data using rules established by a subject matter expert. Also, Gullikson, Para 96, further mentions that after training the models, the models are validated by a model validator 410 (i.e., a subject matter expert) based on specified reliability criteria, such as a false positive rate, a false negative rate, an accurate detection rate, or other metrics indicative of reliability of a model (i.e., a primary prediction and a secondary prediction). Also, Gullikson, Paras 51, 72 and 79, further mentions a binary value and a secondary prediction that is a numerical value associated with the binary value).
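For illustration only, the claim-7 prediction structure (a binary primary prediction paired with an associated numerical secondary value) corresponds to the common classifier pattern of a thresholded label plus its underlying score. The sketch below is hypothetical and not drawn from either reference; the function name and threshold are assumptions.

```python
def predict_with_score(score: float, threshold: float = 0.5):
    primary = score >= threshold   # binary primary prediction
    secondary = score              # numerical secondary value associated with it
    return primary, secondary

print(predict_with_score(0.73))   # (True, 0.73)
```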
Regarding claims 8-14: these claims recite subject matter similar to that of claims 1-7, respectively, as cited above, and are rejected for the same reasons.
Regarding claims 15-20: these claims recite subject matter similar to that of claims 1-6, respectively, as cited above, and are rejected for the same reasons.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Palani (US 20210344695 A1, filed 04/30/2020) relates to techniques for automated anomaly detection, including a technique comprising training an ensemble of deep learning models using clustered time series training data from numerous components in an Information Technology (IT) infrastructure. The technique further comprises inputting aggregated time series data to the ensemble of deep learning models and identifying anomalies in the aggregated time series data based on respective portions of the aggregated time series data that are indicated as anomalous by a majority of deep learning models in the ensemble. The technique further comprises grouping the anomalies according to relationships between the anomalies and performing a mitigation action in response to grouping the anomalies [Abstract].
Kurup et al. (US 8,688,603 B1, filed 11/14/2011) describes a system and method for identifying and correcting marginal false positives in machine learning models, which may include, based on reference data that includes pairs of information items and labels indicating whether the pairs have a specific relationship, generating a first machine learning model for determining whether pairs of information items have that relationship. Embodiments may include identifying one or more false positive pairs (e.g., a pair of information items that the first machine learning model indicates as having the specific relationship but which is labeled within the reference data as not having that relationship). Embodiments may include selecting identified false positive pairs as candidates for correction and, subsequent to a correction of the reference data associated with the selected false positives, generating based on the corrected reference data a new machine learning model for determining whether pairs of information items have the specific relationship [Abstract].
Lee et al. (US 20200050968 A1, filed 10/18/2019) relates to a machine learning service in which an evaluation run of a model is generated for display via an interactive interface. The data set includes a prediction quality metric. A target value of an interpretation threshold associated with the model is determined based on a detection of a particular client's interaction with the interface, and an indication of a change to the prediction quality metric that results from the selection of the target value may be initiated [Abstract].
Breckenridge et al. (US 20150170056 A1, filed 06/04/2014) relates to training predictive models, wherein multiple training data records are received that each include an input data portion and an output data portion. A training data type corresponding to the training data is determined; for example, the training data type can be determined by inputting the output data portions into one or more trained predictive classifiers, or by comparison of the output data portions to data formats. Based on the determined training data type, a set of training functions compatible with training data of that type is identified, and the training data and the identified set of training functions are used to train multiple predictive models [Abstract].
Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUOC A TRAN whose telephone number is (571)272-8664. The examiner can normally be reached Monday-Friday 9am-5pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Cesar Paula can be reached at 571-272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/QUOC A TRAN/ Primary Examiner, Art Unit 2145