DETAILED ACTION
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. This action is responsive to the following communication: Original claims filed 04/14/23. This action is made non-final.
3. Claims 1-20 are pending in the case. Claims 1, 8 and 15 are independent claims.
Claim Rejections - 35 USC § 101
4. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: Claim 1 is a method claim. Claim 8 is a system claim. Claim 15 is a computer readable storage medium claim. Therefore, claims 1-20 are directed to a process, machine, manufacture, or composition of matter.
With respect to claims 1, 8 and 15:
2A Prong 1:
The limitations of "classifying, …, a record with a label …, the machine learning model abstaining from classifying a given record in response to the given record being outside of a scope of an information technology (IT) domain; generating, by the processor, an explanation of a decision by the machine learning model to classify the record with the label;" reflect an abstract idea (mental process).
2A Prong 2: This judicial exception is not integrated into a practical application.
The limitation of "displaying the explanation in a human readable form" reflects an additional element of insignificant extra-solution activity of mere data gathering and therefore does not integrate the judicial exception into a practical application. MPEP 2106.05(g).
The limitations "by the processor" and "using a machine learning model" are additional elements of mere instructions to apply the judicial exception using generic computing devices and therefore do not integrate the exception into a practical application or provide significantly more. MPEP 2106.05(f).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The limitation of "displaying the explanation in a human readable form" can be categorized as the well-understood, routine, and conventional activity of “transmitting or receiving data over a network” and therefore does not provide significantly more. MPEP 2106.05(d)(ii).
The limitations "by the processor" and "using a machine learning model" are additional elements of mere instructions to apply the judicial exception using generic computing devices and therefore do not integrate the exception into a practical application or provide significantly more. MPEP 2106.05(f).
With respect to claims 2, 9 and 16:
2A Prong 2: This judicial exception is not integrated into a practical application.
The limitation of "wherein the human readable form comprises a disjunctive normal form" can be categorized as adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The limitation of "wherein the human readable form comprises a disjunctive normal form" can be categorized as adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).
With respect to claims 3, 10 and 17:
2A Prong 2: This judicial exception is not integrated into a practical application.
The limitation of "wherein the explanation of the decision by the machine learning model is based on a linear classifier formula utilized by the machine learning model" can be categorized as adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The limitation of "wherein the explanation of the decision by the machine learning model is based on a linear classifier formula utilized by the machine learning model" can be categorized as adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).
With respect to claims 4, 11 and 18:
2A Prong 2: This judicial exception is not integrated into a practical application.
The limitation of "wherein: the explanation of the decision by the machine learning model is based on features and respective coefficients corresponding to the features, the features and the respective coefficients being derived from a linear classifier formula of the machine learning model; and the features are extracted from text of the record" can be categorized as adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The limitation of "wherein: the explanation of the decision by the machine learning model is based on features and respective coefficients corresponding to the features, the features and the respective coefficients being derived from a linear classifier formula of the machine learning model; and the features are extracted from text of the record" can be categorized as adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).
With respect to claims 5, 12 and 19:
2A Prong 2: This judicial exception is not integrated into a practical application.
The limitation of "wherein the human readable form comprises a display of pertinent positive features with a measure of respective contributions for each of the pertinent positive features to the decision by the machine learning model" can be categorized as adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The limitation of "wherein the human readable form comprises a display of pertinent positive features with a measure of respective contributions for each of the pertinent positive features to the decision by the machine learning model" can be categorized as adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).
With respect to claims 6, 13 and 20:
2A Prong 2: This judicial exception is not integrated into a practical application.
The limitation of "wherein the machine learning model is trained on training data in the IT domain" can be categorized as adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The limitation of "wherein the machine learning model is trained on training data in the IT domain" can be categorized as adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).
With respect to claims 7 and 14:
2A Prong 2: This judicial exception is not integrated into a practical application.
The limitation of "the machine learning model comprises a linear classifier algorithm; and the record is a ticket of technical problems in an IT environment" can be categorized as adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The limitation of "the machine learning model comprises a linear classifier algorithm; and the record is a ticket of technical problems in an IT environment" can be categorized as adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. See MPEP 2106.05(f).
Claim Rejections - 35 USC § 103
5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6. Claims 1, 3-6, 8, 10-13, 15 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Mhatre (US 20240220579) in view of Kanazawa (US 20240255939).
Regarding claim 1, Mhatre discloses a computer-implemented method comprising:
classifying, by a processor, a record with a label using a machine learning model (with the data records labeled as having the particular characteristic and the data records labeled as not having the particular characteristic, training a machine learning model; with the trained machine learning model, of the set of data records that are not labeled, classifying each data record as either having the particular characteristic or not having the particular characteristic, paragraph 0008),
the machine learning model abstaining from classifying a given record in response to the given record being outside of a scope of an information technology (IT) domain (for at least some data records of the data records that are not labeled: selecting a data record of the at least some data records that matches a decision criterion; with a label propagation algorithm, labeling the selected data record as either having the particular characteristic or not having the particular characteristic, paragraph 0008; see also paragraph 0009 and the decision boundaries determining the domain of a decision, of which IT would be a part);
Mhatre does not disclose generating, by the processor, an explanation of a decision by the machine learning model to classify the record with the label; and displaying the explanation in a human readable form.
However, Kanazawa discloses wherein Decision maker training engine 210 and RUL estimator training engine 216 retrieve historical records/data from the database 212 to generate trained decision maker 102 and RUL estimator 104. The RUL estimator 104 receives processed data as input and generates estimated RUL as input to the decision maker 102. The decision maker 102 receives explanation, processed data, and reward associated with the processed data as inputs and generates a next action, and a confidence score. The explanation is generated by the XAI unit 214. The XAI unit 214 accesses both the database 212 and the decision maker 102 to analyze the inner workings of the decision maker 102 and thereby creating human-readable explanations. After which the explanation is sent from the decision maker 102 to the GUI 218 (paragraph 0037).
The combination of Mhatre and Kanazawa would have resulted in the decision maker of Mhatre further generating an explanation of its decisions and providing it to a user. One would have been motivated to combine these teachings because a user of Mhatre is already involved in the decision-making results, and allowing the user to read those explanations would have provided a superior way to see what the decision maker's results were. As such, the combination of references would have been obvious, as it would have led to a predictable invention.
Regarding claim 8, Mhatre discloses a system comprising: a memory having computer readable instructions; and one or more processors for executing the computer readable instructions, the computer readable instructions controlling the one or more processors to perform operations (see FIG. 22) comprising:
classifying a record with a label using a machine learning model (with the data records labeled as having the particular characteristic and the data records labeled as not having the particular characteristic, training a machine learning model; with the trained machine learning model, of the set of data records that are not labeled, classifying each data record as either having the particular characteristic or not having the particular characteristic, paragraph 0008),
the machine learning model abstaining from classifying a given record in response to the given record being outside of a scope of an information technology (IT) domain (for at least some data records of the data records that are not labeled: selecting a data record of the at least some data records that matches a decision criterion; with a label propagation algorithm, labeling the selected data record as either having the particular characteristic or not having the particular characteristic, paragraph 0008; see also paragraph 0009 and the decision boundaries determining the domain of a decision, of which IT would be a part);
Mhatre does not disclose generating an explanation of a decision by the machine learning model to classify the record with the label; and displaying the explanation in a human readable form.
However, Kanazawa discloses wherein Decision maker training engine 210 and RUL estimator training engine 216 retrieve historical records/data from the database 212 to generate trained decision maker 102 and RUL estimator 104. The RUL estimator 104 receives processed data as input and generates estimated RUL as input to the decision maker 102. The decision maker 102 receives explanation, processed data, and reward associated with the processed data as inputs and generates a next action, and a confidence score. The explanation is generated by the XAI unit 214. The XAI unit 214 accesses both the database 212 and the decision maker 102 to analyze the inner workings of the decision maker 102 and thereby creating human-readable explanations. After which the explanation is sent from the decision maker 102 to the GUI 218 (paragraph 0037).
The combination of Mhatre and Kanazawa would have resulted in the decision maker of Mhatre further generating an explanation of its decisions and providing it to a user. One would have been motivated to combine these teachings because a user of Mhatre is already involved in the decision-making results, and allowing the user to read those explanations would have provided a superior way to see what the decision maker's results were. As such, the combination of references would have been obvious, as it would have led to a predictable invention.
Regarding claim 3, Mhatre discloses wherein the explanation of the decision by the machine learning model is based on a linear classifier formula utilized by the machine learning model (Various querying strategies can affect not only the final accuracy of the model, but also the time required to achieve that accuracy. Querying strategies may include, but are not limited to, pool-based sampling, ranked batch mode sampling, stream-based sampling, active regression, ensemble regression, queries by committee, Keras classifier, etc. In some embodiments, multiple querying strategies are used in parallel, and the highest-performing strategy is used to train the final ML model, paragraph 0200).
Regarding claim 4, Mhatre discloses wherein: the explanation of the decision by the machine learning model is based on features and respective coefficients corresponding to the features, the features and the respective coefficients being derived from a linear classifier formula of the machine learning model (Various querying strategies can affect not only the final accuracy of the model, but also the time required to achieve that accuracy. Querying strategies may include, but are not limited to, pool-based sampling, ranked batch mode sampling, stream-based sampling, active regression, ensemble regression, queries by committee, Keras classifier, etc. In some embodiments, multiple querying strategies are used in parallel, and the highest-performing strategy is used to train the final ML model, paragraph 0200).
Mhatre does not disclose the features are extracted from text of the record.
However, Kanazawa discloses wherein Decision maker training engine 210 and RUL estimator training engine 216 retrieve historical records/data from the database 212 to generate trained decision maker 102 and RUL estimator 104. The RUL estimator 104 receives processed data as input and generates estimated RUL as input to the decision maker 102. The decision maker 102 receives explanation, processed data, and reward associated with the processed data as inputs and generates a next action, and a confidence score. The explanation is generated by the XAI unit 214. The XAI unit 214 accesses both the database 212 and the decision maker 102 to analyze the inner workings of the decision maker 102 and thereby creating human-readable explanations. After which the explanation is sent from the decision maker 102 to the GUI 218 (paragraph 0037).
The combination of Mhatre and Kanazawa would have resulted in the decision maker of Mhatre further generating an explanation of its decisions and providing it to a user. One would have been motivated to combine these teachings because a user of Mhatre is already involved in the decision-making results, and allowing the user to read those explanations would have provided a superior way to see what the decision maker's results were. As such, the combination of references would have been obvious, as it would have led to a predictable invention.
Regarding claim 5, Mhatre discloses wherein the human readable form comprises a display of pertinent positive features with a measure of respective contributions for each of the pertinent positive features to the decision by the machine learning model (The trained classifier 180 is then capable of generating a predictive score 610 for any new records received by the active machine learning labeling system 200 (e.g., indicating the probability that the entity generating a given record is watch-listed). The trained classifier 180 will thus rank all the “hits” (suspected fraudulent transactions), and in some embodiments the active machine learning labeling system 200 can then elevate all positive hits suggested by model and suppress false positives by defining probability scores below a defined threshold value as being false positives. The predictive score 610 can then be used to generate a recommendation 620. Example recommendations include Alert (e.g., forward the transaction to a human investigator for further analysis), No Issues (e.g., allow the transaction), or Block (e.g., automatically prevent the transaction from completing). Other recommendations based on the predictive score 610 may be generated instead or in addition, without departing from the spirit of the present disclosure, paragraph 0168).
Regarding claim 6, Mhatre discloses wherein the machine learning model is trained on training data in the IT domain (Although specific examples for detection of financial fraud have been described herein, a person of ordinary skill in the art will appreciate that the active machine learning labeling system can be applied in any domain where ML models are trained, paragraph 0218).
Regarding claim 10, the subject matter of the claim is substantially similar to claim 3 and as such the same rationale of rejection applies.
Regarding claim 11, the subject matter of the claim is substantially similar to claim 4 and as such the same rationale of rejection applies.
Regarding claim 12, the subject matter of the claim is substantially similar to claim 5 and as such the same rationale of rejection applies.
Regarding claim 13, the subject matter of the claim is substantially similar to claim 6 and as such the same rationale of rejection applies.
Regarding claim 15, the subject matter of the claim is substantially similar to claim 1 and as such the same rationale of rejection applies.
Regarding claim 17, the subject matter of the claim is substantially similar to claim 3 and as such the same rationale of rejection applies.
Regarding claim 18, the subject matter of the claim is substantially similar to claim 4 and as such the same rationale of rejection applies.
Regarding claim 19, the subject matter of the claim is substantially similar to claim 5 and as such the same rationale of rejection applies.
Regarding claim 20, the subject matter of the claim is substantially similar to claim 6 and as such the same rationale of rejection applies.
8. Claims 2, 9 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Mhatre in view of Kanazawa, and further in view of Dalli (US 20210174168).
Regarding claim 2, Mhatre does not disclose wherein the human readable form comprises a disjunctive normal form.
However, Dalli discloses wherein Human knowledge injection is the process of defining new rules, or the related process of editing existing rules. Human rules may be written and represented in a generalized XAI rule-based format, such as in the disjunctive normal form, which allow human knowledge to be injected to XNNs via the conversion methods defined herein. Gradient descent methods make it possible for rules to be refined in a way that it now takes into consideration the human rules within the global scope of the entire model. Additionally, human rules may also be configured to be trainable or non-trainable. In the latter case, only machine-generated rules are refined, and the human rules may remain untouched. This allows for manual control over the resulting XNN model and ensures that there is safe operation of the resulting system that is predictable (paragraph 0109).
The combination of Mhatre and Dalli would have resulted in the decision maker of Mhatre further providing its rules and explanations to a user in a disjunctive normal form. One would have been motivated to combine these teachings because a user of Mhatre is already involved in the decision-making results, and allowing the user to read those explanations would have provided a superior way to see what the decision maker's results were. As such, the combination of references would have been obvious, as it would have led to a predictable invention.
Regarding claim 9, the subject matter of the claim is substantially similar to claim 2 and as such the same rationale of rejection applies.
Regarding claim 16, the subject matter of the claim is substantially similar to claim 2 and as such the same rationale of rejection applies.
9. Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Mhatre in view of Kanazawa, and further in view of Wu (US 20230222358).
Regarding claim 7, Mhatre discloses wherein: the machine learning model comprises a linear classifier algorithm (Various querying strategies can affect not only the final accuracy of the model, but also the time required to achieve that accuracy. Querying strategies may include, but are not limited to, pool-based sampling, ranked batch mode sampling, stream-based sampling, active regression, ensemble regression, queries by committee, Keras classifier, etc. In some embodiments, multiple querying strategies are used in parallel, and the highest-performing strategy is used to train the final ML model, paragraph 0200); and
Mhatre does not disclose wherein the record is a ticket of technical problems in an IT environment.
However, Wu discloses wherein Implementations of the invention address the technical problem of generating desired analytic outputs for large amounts of incoming IT operations data (e.g., big data), such as IT tickets and log records reflecting IT operation events (e.g., errors and system failures). In aspects, special purpose computing tools such as trained classification models and modified event parsers are utilized to provide customized analytics outputs through adaptive learning (paragraph 0019).
The combination of Mhatre and Wu would have resulted in the classification of Mhatre further being applied to records that are tickets of technical problems in an IT environment. One would have been motivated to combine these teachings because a user of Mhatre is already involved in the decision-making results, and applying that classification to IT tickets would have provided a superior way to process such records. As such, the combination of references would have been obvious, as it would have led to a predictable invention.
Regarding claim 14, the subject matter of the claim is substantially similar to claim 7 and as such the same rationale of rejection applies.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID E CHOI whose telephone number is (571) 270-3780. The examiner can normally be reached M-F: 7-2, 7-10 (PST). If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michelle T. Bechtold, can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID E CHOI/Primary Examiner, Art Unit 2148