DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
Claims 1, 3-9, 11-17 and 19-23 are pending for examination.
Claims 1, 9 and 17 are independent Claims.
Claims 1, 3, 5-9, 11, 13-17 and 19-23 are rejected under 35 U.S.C. §102.
Claims 4, 12 and 20 are rejected under 35 U.S.C. §103.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1, 3-9, 11-17 and 19-23 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Independent Claims 1, 9 and 17 recite “the target model is different than the first type of explanatory model”, “first type of output … is different than the second type of output generated by a target model”, and “second type of explanation information is also different from the second type of output generated by the target model”. There is no support for these limitations in the Applicants’ Specification.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1, 3, 5-9, 11, 13-17, 19 and 21-23 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Sharpe et al. (U.S. 2024/0037427 hereinafter Sharpe).
As Claim 1, Sharpe teaches a method comprising:
obtaining, by one or more processors (Sharpe (¶0032 line 5), processor), tuning data used in a tuning process (Sharpe (¶0016 line 1-2, ¶0036 line 17-26), a dataset is used to train the machine learning model; the output of the machine learning model is an indication of whether an action is predicted to be completed by a corresponding deadline) for configuring a first type of explanatory model (Sharpe (¶0023 line 1-5), “the ML explanation system 102 may generate a first performance metric of the machine learning model. The first performance metric may indicate an amount that the performance of the machine learning model changed due to removal of the first feature from the first dataset”; the first type of explanatory model (the ML explanation system with the machine learning model trained with the first feature removed) produces the first performance metric), wherein the first type of explanatory model is configured to generate a first type of output (Sharpe (¶0023 line 1-5), the first type of explanatory model (the ML explanation system with the machine learning model trained with the first feature removed) produces the first performance metric) that is different than a second type of output generated by a target model (Sharpe (¶0016 line 1-2, ¶0036 line 17-26), a dataset is used to train the machine learning model; the output of the machine learning model is an indication of whether an action is predicted to be completed by a corresponding deadline; the second type of output is the prediction), wherein the target model is different (Sharpe (¶0016 line 1-2, ¶0036 line 17-26), a dataset is used to train the machine learning model; the machine learning model is trained on the full-feature dataset) than the first type of explanatory model (Sharpe (¶0023 line 1-5), the first type of explanatory model (the ML explanation system with the machine learning model trained with the first feature removed) produces the first performance metric), and wherein the first type of output includes a first type of explanatory information indicating a reason for an output of the target model (Sharpe (¶0023 line 1-5, ¶0026 line 1-4), the first type of explanatory model (the ML explanation system with the machine learning model trained with the first feature removed) produces the first performance metric (explanatory information indicating the impact of the removed feature on model output)), the tuning data including a mapping of inputs to the target model and corresponding outputs from the target model (Sharpe (¶0016 line 1-2, ¶0036 line 17-26), “During training, an output layer of the machine learning model 342 may correspond to a classification, and an input known to correspond to that classification may be input into an input layer of the machine learning model during training”);
storing, by the one or more processors, the tuning data in a datastore (Sharpe (¶0020 line 1-2), system determines a subset of the first dataset that corresponds to the first feature);
subsequent to storing the tuning data in the datastore, receiving, by the one or more processors, a request for a second type of explanatory information associated with the target model (Sharpe (¶0019 line 1-2), the system determines a first feature corresponding to the first importance metric), the request including a target output from the target model (Sharpe (¶0023 last 6 lines, ¶0035 line 5-15), the overall performance metric is the target output; the first performance metric is subtracted from the overall performance metric in order to determine the model change; the user inputs the accuracy of outputs), wherein the second type of explanatory information is different from the first type of explanatory information generated by the first type of explanatory model (Sharpe (¶0027 line 1-6), “ML explanation system 102 may generate, based on a first plurality of importance metrics, a first plurality of performance metrics” and “based on a second plurality of importance metrics, a second plurality of performance metrics”), wherein the second type of explanatory information (Sharpe (¶0023 line 1-5), the second type of explanatory model (the ML explanation system with the machine learning model trained with the second feature removed) produces the second performance metric) is also different from the second type of output generated by the target model (Sharpe (¶0016 line 1-2, ¶0036 line 17-26), a dataset is used to train the machine learning model; the output of the machine learning model is an indication of whether an action is predicted to be completed by a corresponding deadline; the second type of output is the prediction), and wherein the second type of explanatory information is capable of being obtained by executing a second type of explanatory model that is different than the first type of explanatory model and the target model (Sharpe (¶0027 line 1-6), “ML explanation system 102 may generate, based on a first plurality of importance metrics, a first plurality of performance metrics” and “based on a second plurality of importance metrics, a second plurality of performance metrics”; the second plurality of performance metrics explains a different relationship than the first plurality of performance metrics); and
in response to receiving the request:
determining, by the one or more processors, whether any outputs in the mapping match the target output (Sharpe (¶0022 line 1-5, ¶0023 last 6 lines), model is retrained using the second dataset (mapping). The first performance metric is subtracted from the overall performance metric in order to determine model change);
in response to determining that a particular output in the mapping matches the target output, determining, by the one or more processors, a particular input in the mapping that corresponds to the particular output (Sharpe (¶0024 line 1-2, ¶0025 line 5-17, ¶0026 line 1-4), the performance metric is used to evaluate the effect of removing the feature from the first dataset; if the change of the first metric follows a particular pattern, the first set of performance metrics more accurately reflects an expected change; the ML system determines that the first set of importance metrics should be used to explain the classification); and
providing, by the one or more processors, a response to the request that includes the particular input as the second type of explanatory information (Sharpe (¶0026 line 1-4), the ML system determines that the first set of importance metrics should be used to explain the classification).
As Claim 3, besides Claim 2, Sharpe teaches wherein the response to the request is determined without executing the second type of explanatory model (Sharpe (¶0022 line 1-5, ¶0023 last 6 lines, ¶0036 line 17-23), the model is retrained using the second dataset (mapping); the first performance metric is subtracted from the overall performance metric in order to determine the model change; the second model is retrained and tested, not executed on real data).
As Claim 5, besides Claim 2, Sharpe teaches wherein the first type of explanatory model is a saliency model, the first type of explanatory information includes saliency values (Sharpe (¶0018 line 1-2), the first importance metric is a greatest importance), the second type of explanatory model is a counterfactual explainer model (Sharpe (¶0023 last 6 lines, ¶0035 line 5-15), the overall performance metric is the target output; the first performance metric is subtracted from the overall performance metric in order to determine the model change; the user inputs the accuracy of outputs), and the second type of explanatory information is counterfactual information (Sharpe (¶0023 last 6 lines, ¶0035 line 5-15), the overall performance metric is the target output; the first performance metric is subtracted from the overall performance metric in order to determine the model change; the user inputs the accuracy of outputs).
As Claim 6, besides Claim 1, Sharpe teaches further comprising:
determining that at least two outputs in the mapping match the target output (Sharpe (¶0024 line 12-15), the first and second performance metrics may be part of a first set of performance metrics; Sharpe (¶0024 line 1-2, ¶0025 line 5-17, ¶0026 line 1-4), the performance metric is used to evaluate the effect of removing the feature from the first dataset; if the change of the first metric follows a particular pattern, the first set of performance metrics more accurately reflects an expected change);
determining at least two inputs that correspond in the mapping to the at least two outputs (Sharpe (¶0024 line 1-2, ¶0025 line 5-17, ¶0026 line 1-4), the performance metric is used to evaluate the effect of removing the feature from the first dataset; if the change of the first metric follows a particular pattern, the first set of performance metrics more accurately reflects an expected change; the ML system determines that the first set of importance metrics should be used to explain the classification); and
providing the at least two inputs as the second type of explanatory information (Sharpe (¶0026 line 1-4), the ML system determines that the first set of importance metrics should be used to explain the classification).
As Claim 7, besides Claim 6, Sharpe teaches further comprising:
determining similarity scores corresponding to the at least two outputs, each similarity score indicating a level of similarity between a respective output and the target output (Sharpe (¶0024 line 12-15), the first and second performance metrics may be part of a first set of performance metrics; Sharpe (¶0024 line 1-2, ¶0025 line 5-17, ¶0026 line 1-4), the performance metric is used to evaluate the effect of removing the feature from the first dataset; if the change of the first metric follows a particular pattern, the first set of performance metrics more accurately reflects an expected change);
determining that the at least two outputs in the mapping match the target output based on the similarity scores (Sharpe (¶0024 line 12-15), the first and second performance metrics may be part of a first set of performance metrics); and
generating a graphical visualization that indicates the at least two inputs, the at least two outputs, and the similarity scores (Sharpe (¶0029 line 1-7, figure 2), the graph in figure 2 shows the first and second pluralities of performance metrics; the graph also shows a performance metric of a model after a feature has been removed from a dataset).
As Claim 8, besides Claim 1, Sharpe teaches wherein the inputs comprise synthetic data samples for use in tuning the explanatory model (Sharpe (¶0022 line 1-5, ¶0024 last 7 lines), the model is retrained using the original dataset with the feature removed (synthetic data); the first set of performance metrics may be compared with the second set of performance metrics), and further comprising generating the inputs based on background data and input data (Sharpe (¶0022 line 1-5, ¶0035), the model is retrained using the original dataset with the feature removed; the user also inputs data such as accuracy, labels and feedback).
As Claims 9, 11 and 13-16, the Claims are rejected for the same reasons as Claims 1, 3 and 5-8, respectively.
As Claims 17 and 19, the Claims are rejected for the same reasons as Claims 1 and 3, respectively.
As Claims 21-23, the Claims are rejected for the same reasons as Claims 5-7, respectively.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 4, 12 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sharpe in view of DeCaprio et al. (U.S. 11176471 hereinafter DeCaprio).
As Claim 4, besides Claim 1, Sharpe may not explicitly disclose wherein the particular input is a first version of the second type of explanatory information, and further comprising:
determining a second version of the second type of explanatory information by executing the second type of explanatory model; and
providing the second version of the second type of explanatory information.
DeCaprio teaches:
wherein the particular input is a first version of the second type of explanatory information, and further comprising:
determining a second version of the second type of explanatory information by executing the second type of explanatory model (DeCaprio (col. 16 line 8-15), system provides evidence data that explain the prediction from the model); and
providing the second version of the second type of explanatory information (DeCaprio (col. 16 line 8-15), system provides evidence data that explain the prediction from the model).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the second model of Sharpe to instead be an explainer as taught by DeCaprio, with a reasonable expectation of success. The motivation would be to “enable more efficient allocation of computational resources, e.g., by concentrating the computing resources required to generate explainability data on only certain predictions generated by the machine learning model” (DeCaprio (col. 4 line 61-65)) (Teaching, Suggestion or Motivation).
As Claims 12 and 20, the Claims are rejected for the same reasons as Claim 4.
Response to Arguments
Patentability of the Claims under 35 U.S.C. §101:
Applicants’ arguments are persuasive; therefore, the 35 U.S.C. §101 rejections are respectfully withdrawn.
Patentability of the Claims under 35 U.S.C. §§102, 103:
As Claims 1-3, 5-11 and 13-19, Applicants argue that the limitations “the target model is different than the explanatory model”, “the first type of explanatory model is configured to generate a first type of outputs that are different than a second type of output generated by a target model”, and “the second type of explanation information is different from the first type of explanation information …” are not taught by Sharpe (second and third paragraphs of page 13 of the Remarks).
Applicants’ arguments are not persuasive because Sharpe teaches the limitation(s). The current Office action is updated with the mapping for the amended limitation(s).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NHAT HUY T NGUYEN whose telephone number is (571)270-7333. The examiner can normally be reached M-F: 12:00-8:00 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo can be reached at 571-270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NHAT HUY T NGUYEN/Primary Examiner, Art Unit 2147