DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in response to communications filed on 05/11/2023. Claims 1-20 are pending and have been examined.
Information Disclosure Statement
The information disclosure statement (IDS) filed on 05/11/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
The information disclosure statement (IDS) filed on 08/18/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a method, system, and medium associated with acquiring, processing, and analyzing data.
The limitations “acquiring… processing… analyzing…” as recited in claim 1 are each a process that, under the broadest reasonable interpretation, covers performance of the limitation in the mind or with pen and paper (see Berkheimer v. HP, Inc., 881 F.3d 1360, 1366, 125 USPQ2d 1649 (Fed. Cir. 2018)) but for the recitation of generic computer components. That is, other than reciting “a plurality of runs of the artificial neural network”, the limitation “acquiring data of…” in the context of the claim encompasses the user making observations. The limitation “processing the acquired data using an attributation method to obtain attributation data” in the context of the claim encompasses the user making calculations. Other than reciting “the artificial neural network”, the limitation “analyzing…based on the attributation data” in the context of the claim encompasses the user making evaluations. If a claimed limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “mental processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claim recites additional elements. The claim recites “computer implemented”. The element is recited at a high level of generality, such that it amounts to no more than mere instructions to apply the exception using a generic computer component (e.g. see MPEP 2106.05(f)). The elements “an [“the”] artificial neural network” and “a plurality of runs of the artificial neural network” amount to generally linking the use of the judicial exception to a particular technological environment or field of use (e.g. see MPEP 2106.05(h)). Note that “for analyzing a reinforcement learning agent” in the context of the claim is recited as intended use and is not afforded patentable weight (at best, this also amounts to generally linking the use of the judicial exception to a particular technological environment or field of use (e.g. see MPEP 2106.05(h))). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements are no more than a generic computer component and/or field of use. Therefore, the claims are not patent eligible.
Claims 19 and 20 also recite claim language similar to that of claim 1, and thus have the same issues. It is noted, with respect to claim 19, that the claim recites “a plurality of computer hardware components” to perform the method. The elements are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using a generic computer component (e.g. see MPEP 2106.05(f)). It is noted, with respect to claim 20, that the claim further recites “a non-transitory computer readable medium comprising instructions that, when executed, configure computer hardware components” to perform the method. The elements are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using a generic computer component (e.g. see MPEP 2106.05(f)). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and they are not sufficient to amount to significantly more than the judicial exception.
Regarding claim 2, the claim does not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception. For example, the claim merely further describes the data being acquired in one of particular runs, which amounts to generally linking the use of the judicial exception to a particular technological environment or field of use (e.g. see MPEP 2106.05(h)).
Regarding claim 3, the claim does not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception. For example, the claim merely further describes what is determined in the attribution method, which is part of the mental steps (encompassing a user making calculations) and does not include any additional elements. This similarly applies to claim 4.
Regarding claim 5, the claim does not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception. For example, the claim merely further describes what the baseline represents, which is part of the mental steps (encompassing a user making calculations) and does not include any additional elements.
Regarding claim 6, the claim does not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception. For example, the claim merely further describes what the attribution method comprises, which is part of the mental steps (encompassing a user making calculations) and does not include any additional elements. This similarly applies to claim 7.
Regarding claim 8, the claim does not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception. For example, the claim further describes dividing into groups and analyzing based on the groups, which are mental steps (encompassing a user making calculations/evaluations) and analyzing the ANN amounts to generally linking the use of the judicial exception to a particular technological environment or field of use (e.g. see MPEP 2106.05(h)).
Regarding claim 9, the claim does not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception. For example, the claim describes further determining a correlation and attributation, which are mental steps (encompassing a user making evaluations) and determining an output of the ANN, which amounts to generally linking the use of the judicial exception to a particular technological environment or field of use (e.g. see MPEP 2106.05(h)).
Regarding claim 10, the claim does not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception. For example, the claim merely further describes what the attribution method comprises, which is part of the mental steps (encompassing a user making calculations) and does not include any additional elements.
Regarding claim 11, the claim does not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception. For example, the claim further describes dividing into groups and analyzing based on the groups, which are mental steps (encompassing a user making calculations/evaluations) and analyzing the ANN amounts to generally linking the use of the judicial exception to a particular technological environment or field of use (e.g. see MPEP 2106.05(h)).
Regarding claim 12, the claim does not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception. For example, the claim merely further describes what the method applies to, which is part of the mental steps (encompassing a user making determinations) and does not include any additional elements.
Regarding claim 13, the claim does not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception. For example, the claim describes further determining errors associated with the ANN, which amounts to generally linking the use of the judicial exception to a particular technological environment or field of use (e.g. see MPEP 2106.05(h)).
Regarding claim 14, the claim does not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception. For example, the claim further describes dividing into groups and analyzing based on the groups, which are mental steps (encompassing a user making calculations/evaluations) and analyzing the ANN amounts to generally linking the use of the judicial exception to a particular technological environment or field of use (e.g. see MPEP 2106.05(h)).
Regarding claim 15, the claim does not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception. For example, the claim describes further determining a correlation and attributation, which are mental steps (encompassing a user making evaluations) and determining an output of the ANN, which amounts to generally linking the use of the judicial exception to a particular technological environment or field of use (e.g. see MPEP 2106.05(h)).
Regarding claim 16, the claim does not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception. For example, the claim merely further describes what the attribution method comprises, which is part of the mental steps (encompassing a user making calculations) and does not include any additional elements.
Regarding claim 17, the claim does not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception. For example, the claim describes further dividing into groups and analyzing based on the groups, which are mental steps (encompassing a user making calculations/evaluations) and analyzing the ANN amounts to generally linking the use of the judicial exception to a particular technological environment or field of use (e.g. see MPEP 2106.05(h)), determining a correlation and attributation, which are mental steps (encompassing a user making evaluations) and determining an output of the ANN, which amounts to generally linking the use of the judicial exception to a particular technological environment or field of use (e.g. see MPEP 2106.05(h)), and determining errors associated with the ANN, which amounts to generally linking the use of the judicial exception to a particular technological environment or field of use (e.g. see MPEP 2106.05(h)).
Regarding claim 18, the claim does not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception. For example, the claim describes further aspects of the method for explaining, which is part of the mental steps (which encompasses a user making observations, calculations and evaluations) and the ANN, which amounts to generally linking the use of the judicial exception to a particular technological environment or field of use (e.g. see MPEP 2106.05(h)).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-2, 7-9, 11-15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Gou et al. (US 20190303765 A1) in view of Heuillet et al. (“Explainability in Deep Reinforcement Learning”, 12/18/2020, 24 pages cited in IDS dated 05/11/2023).
As per independent claim 1, Gou teaches a computer implemented method for analyzing a reinforcement learning agent based on an artificial neural network (e.g. in paragraph 4, “reinforcement learning (RL) model may be intended to train an agent (e.g., a programmed actor and/or the like) to perform actions within an environment to achieve a desired goal. For example, a deep Q-network (DQN) model may include a neural network (e.g. a deep convolutional neural network and/or the like)”; note: not an actual brain, i.e. “artificial”), the method comprising:
acquiring data of a plurality of runs of the artificial neural network (e.g. in paragraphs 9, 115, and 161, “for each epoch of a first predetermined number of epochs, performing a second predetermined number of training iterations and a third predetermined number of testing iterations using a first neural network… determining one or more patterns based on segments of iterations (e.g., testing and/or training iterations)… the event sequences over time or at a particular time step (e.g., iteration)”);
processing the acquired data using a method to obtain data (e.g. in paragraphs 115 and 161, “analytic framework to help interpret behavior, enhance understanding, provide insight, and/or the like of a neural network (and/or an agent including and/or using such a neural network)… quantitatively summarize the event sequences over time or at a particular time step (e.g., iteration)”); and
analyzing the artificial neural network based on the data (e.g. in paragraphs 115, 140, 161, and 173, “provide a visual analytic framework to help interpret behavior, enhance understanding, provide insight, and/or the like of a neural network (and/or an agent including and/or using such a neural network)… multiple visual depictions (e.g., charts, graphs, and/or the like) of the iterations (e.g., testing iterations) and/or statistics thereof as well as visual depictions of subsets (e.g., epochs, episodes, segments, and/or the like) of the iterations and/or statistics thereof may be displayed… a user may observe patterns and make adjustments (e.g., to hyperparameters) to improve the neural network (and/or an agent including and/or using such a neural network)… analyses”),
but does not specifically teach wherein the method comprises an attributation method and the data comprises attributation data.
However, Heuillet teaches a method comprising an attributation method to obtain data comprising attributation data (e.g. in pages 1, 5-6, 11, and 17, “explain a deep neural network… provide explanations of an RL algorithm after its training, such as SHAP (SHapley Additive exPlanations)… Shapley Q-values Deep Deterministic Policy Gradient… it is worth noting that all presented methods decompose final prediction into additive components attributed to particular features [109], and thus interaction between features should be accounted for, and included in the explanation elaboration... proposed approaches (such as such as Integrated Gradients [111] based on the continuous extension of Shapley value”; note: “attributation” does not appear to be a word defined in a dictionary; while applicant can act as his or her own lexicographer, it is noted that the specification only generally describes what attributation “may be” [e.g. in page 3], but does not specifically define the term; applicant may be referring to “attribution”, as page 10 describes integrated gradients as an example of “attributation” in the context of the article “Axiomatic Attribution for Deep Networks”; for the purposes of examination, the term “attributation” is broadly interpreted to include any element(s) that is attributed to other element(s), or integrated gradients, etc.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Gou to include the teachings of Heuillet because one of ordinary skill in the art would have recognized the benefit of further facilitating explanation of neural network(s)/agent behavior (also amounts to a simple substitution that yields predictable results [e.g. see KSR Int'l Co. v. Teleflex Inc., 550 US 398, 82 USPQ2d 1385, 1396 (U.S. 2007) and MPEP 2143(B)]).
As per claim 2, the rejection of claim 1 is incorporated and the combination further teaches wherein the data is acquired during at least one of: a real-life run or a simulated run (e.g. Gou, in paragraphs 9, 115, and 161, “for each epoch of a first predetermined number of epochs, performing a second predetermined number of training iterations and a third predetermined number of testing iterations using a first neural network… determining one or more patterns based on segments of iterations (e.g., testing and/or training iterations)… the event sequences over time or at a particular time step (e.g., iteration)”).
As per claim 7, the rejection of claim 1 is incorporated and the combination further teaches wherein the attributation method comprises at least one of: Integrated Gradients; DeepLIFT; Gradient SHAP; or Guided Backpropagation and Deconvolution (e.g. Heuillet, in pages 5-6, 11, and 17, “provide explanations of an RL algorithm after its training, such as SHAP (SHapley Additive exPlanations)… Shapley Q-values Deep Deterministic Policy Gradient… it is worth noting that all presented methods decompose final prediction into additive components attributed to particular features [109], and thus interaction between features should be accounted for, and included in the explanation elaboration... proposed approaches (such as such as Integrated Gradients [111] based on the continuous extension of Shapley value”).
As per claim 8, the rejection of claim 1 is incorporated and the combination further teaches dividing the attributation data into a plurality of groups, wherein the artificial neural network is analyzed based on the plurality of groups (e.g. Gou, in paragraphs 115, 138, and 156, “multiple visual depictions (e.g., charts, graphs, and/or the like) of the iterations (e.g., testing iterations) and/or statistics thereof as well as visual depictions of subsets (e.g., epochs, episodes, segments, and/or the like) of the iterations and/or statistics thereof may be displayed… Data associated with each iteration may be grouped in a tuple” and/or “visual analytic system may be used, e.g., to help a user in understanding the experiences of a DQN agent 520 in multiple levels (e.g., four or five different levels)… overall training level, epoch level, episode level, and segment level”, i.e. different groups; Heuillet, in pages 1, 5-6 and 17, “explain a deep neural network… decompose final prediction into additive components attributed to particular features [109], and thus interaction between features should be accounted for, and included in the explanation elaboration”).
As per claim 9, the rejection of claim 1 is incorporated and the combination further teaches determining a correlation between parameters, attributation related to the parameters, and an output of the artificial neural network (e.g. Heuillet, in pages 1, 5-6 and 17, “explain a deep neural network (DNN) output… decompose final prediction into additive components attributed to particular features [109], and thus interaction between features should be accounted for, and included in the explanation elaboration”).
As per claim 11, the rejection of claim 9 is incorporated and the combination further teaches dividing the attributation data into a plurality of groups, wherein the artificial neural network is analyzed based on the plurality of groups (e.g. Gou, in paragraphs 115, 138, and 156, “multiple visual depictions (e.g., charts, graphs, and/or the like) of the iterations (e.g., testing iterations) and/or statistics thereof as well as visual depictions of subsets (e.g., epochs, episodes, segments, and/or the like) of the iterations and/or statistics thereof may be displayed… Data associated with each iteration may be grouped in a tuple” and/or “visual analytic system may be used, e.g., to help a user in understanding the experiences of a DQN agent 520 in multiple levels (e.g., four or five different levels)… overall training level, epoch level, episode level, and segment level”, i.e. different groups; Heuillet, in pages 1, 5-6 and 17, “explain a deep neural network… decompose final prediction into additive components attributed to particular features [109], and thus interaction between features should be accounted for, and included in the explanation elaboration”).
As per claim 12, the rejection of claim 1 is incorporated and the combination further teaches wherein the computer implemented method is applied to a motion planning module (e.g. Heuillet, in page 11, “navigation task… planning module, used for route planning”).
As per claim 13, the rejection of claim 1 is incorporated and the combination further teaches wherein analyzing the artificial neural network comprises at least one of: detecting errors in the artificial neural network or detecting errors in input data to the artificial neural network (e.g. Gou, in paragraphs 140 and 154, “updating the first set of parameters of the first neural network may include adjusting the first set of parameters to increase (e.g., maximize) a potential score and/or to reduce (e.g., minimize) a loss, error, or difference… Through iterative trainings, the agent 520 may become increasingly intelligent”).
As per claim 14, the rejection of claim 13 is incorporated and the combination further teaches dividing the attributation data into a plurality of groups, wherein the artificial neural network is analyzed based on the plurality of groups (e.g. Gou, in paragraphs 115, 138, and 156, “multiple visual depictions (e.g., charts, graphs, and/or the like) of the iterations (e.g., testing iterations) and/or statistics thereof as well as visual depictions of subsets (e.g., epochs, episodes, segments, and/or the like) of the iterations and/or statistics thereof may be displayed… Data associated with each iteration may be grouped in a tuple” and/or “visual analytic system may be used, e.g., to help a user in understanding the experiences of a DQN agent 520 in multiple levels (e.g., four or five different levels)… overall training level, epoch level, episode level, and segment level”, i.e. different groups; Heuillet, in pages 1, 5-6 and 17, “explain a deep neural network… decompose final prediction into additive components attributed to particular features [109], and thus interaction between features should be accounted for, and included in the explanation elaboration”).
As per claim 15, the rejection of claim 13 is incorporated and the combination further teaches determining a correlation between parameters, attributation related to the parameters, and an output of the artificial neural network (e.g. Heuillet, in pages 1, 5-6 and 17, “explain a deep neural network (DNN) output… decompose final prediction into additive components attributed to particular features [109], and thus interaction between features should be accounted for, and included in the explanation elaboration”).
As per claim 17, the rejection of claim 1 is incorporated and the combination further teaches dividing the attributation data into a plurality of groups, wherein the artificial neural network is analyzed based on the plurality of groups (e.g. Gou, in paragraphs 115, 138, and 156, “multiple visual depictions (e.g., charts, graphs, and/or the like) of the iterations (e.g., testing iterations) and/or statistics thereof as well as visual depictions of subsets (e.g., epochs, episodes, segments, and/or the like) of the iterations and/or statistics thereof may be displayed… Data associated with each iteration may be grouped in a tuple” and/or “visual analytic system may be used, e.g., to help a user in understanding the experiences of a DQN agent 520 in multiple levels (e.g., four or five different levels)… overall training level, epoch level, episode level, and segment level”, i.e. different groups; Heuillet, in pages 1, 5-6 and 17, “explain a deep neural network… decompose final prediction into additive components attributed to particular features [109], and thus interaction between features should be accounted for, and included in the explanation elaboration”); and determining a correlation between parameters, attributation related to the parameters, and an output of the artificial neural network (e.g. Heuillet, in pages 1, 5-6 and 17, “explain a deep neural network (DNN) output… decompose final prediction into additive components attributed to particular features [109], and thus interaction between features should be accounted for, and included in the explanation elaboration”), wherein analyzing the artificial neural network comprises at least one of: detecting errors in the artificial neural network; or detecting errors in input data to the artificial neural network (e.g. Gou, in paragraphs 140 and 154, “updating the first set of parameters of the first neural network may include adjusting the first set of parameters to increase (e.g., maximize) a potential score and/or to reduce (e.g., minimize) a loss, error, or difference… Through iterative trainings, the agent 520 may become increasingly intelligent”).
As per claim 18, the rejection of claim 1 is incorporated and the combination further teaches wherein the computer implemented method provides a local and post-hoc method for explaining the artificial neural network (e.g. Heuillet, pages 4-6 and 13, “local… Post-Hoc explainability”).
Claim 19 is the system claim corresponding to method claim 1 and is rejected under the same reasons set forth, and the combination further teaches a plurality of computer hardware components (e.g. Gou, in paragraphs 125-127, “processor 204, memory 20”, etc.).
Claim 20 is the medium claim corresponding to method claim 1 and is rejected under the same reasons set forth, and the combination further teaches a non-transitory computer readable medium comprising instructions that, when executed, configure computer hardware components (e.g. Gou, in paragraphs 125-127, “processor 204, memory 20…that stores information and/or instructions for use by processor… a hard disk (e.g., a magnetic disk, an optical disk, a magneto-optic disk”, etc.).
Claims 3-6 are rejected under 35 U.S.C. 103 as being unpatentable over Gou et al. (US 20190303765 A1) in view of Heuillet et al. (“Explainability in Deep Reinforcement Learning”, 12/18/2020, 24 pages cited in IDS dated 05/11/2023) and further in view of Sundararajan et al. (“Axiomatic Attribution for Deep Networks”, 06/13/2017, 11 pages cited in IDS dated 05/11/2023).
As per claim 3, the rejection of claim 2 is incorporated, but the combination does not specifically teach wherein the attributation method is based on determining a gradient with respect to input data along a path from a baseline to the input data. However, Sundararajan teaches determining a gradient with respect to input data along a path from a baseline to the input data (e.g. in page 3, “We consider the straightline path (in Rn) from the baseline x' to the input x, and compute the gradients at all points along the path. Integrated gradients are obtained by cumulating these gradients”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Sundararajan because one of ordinary skill in the art would have recognized the benefit of incorporating well-known integrated gradient calculations (also amounts to a simple substitution that yields predictable results [e.g. see KSR Int'l Co. v. Teleflex Inc., 550 US 398, 82 USPQ2d 1385, 1396 (U.S. 2007) and MPEP 2143(B)]).
As per claim 4, the rejection of claim 1 is incorporated, but the combination does not specifically teach wherein the attributation method is based on determining a gradient with respect to input data along a path from a baseline to the input data. However, Sundararajan teaches determining a gradient with respect to input data along a path from a baseline to the input data (e.g. in page 3, “We consider the straightline path (in Rn) from the baseline x' to the input x, and compute the gradients at all points along the path. Integrated gradients are obtained by cumulating these gradients”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Sundararajan because one of ordinary skill in the art would have recognized the benefit of incorporating well-known integrated gradient calculations (also amounts to a simple substitution that yields predictable results [e.g. see KSR Int'l Co. v. Teleflex Inc., 550 US 398, 82 USPQ2d 1385, 1396 (U.S. 2007) and MPEP 2143(B)]).
As per claim 5, the rejection of claim 4 is incorporated and the combination further teaches wherein the baseline represents a general reference to all possible inputs (e.g. Sundararajan, in page 3, “We consider the straightline path (in Rn) from the baseline x' to the input x, and compute the gradients at all points along the path”, i.e. the gradients are computed along the path from a general reference, called the “baseline”, to any possible input x).
As per claim 6, the rejection of claim 4 is incorporated and the combination further teaches wherein the attributation method comprises at least one of: Integrated Gradients; DeepLIFT; Gradient SHAP; or Guided Backpropagation and Deconvolution (e.g. Heuillet, in pages 5-6, 11, and 17, “provide explanations of an RL algorithm after its training, such as SHAP (SHapley Additive exPlanations)… Shapley Q-values Deep Deterministic Policy Gradient… it is worth noting that all presented methods decompose final prediction into additive components attributed to particular features [109], and thus interaction between features should be accounted for, and included in the explanation elaboration... proposed approaches (such as Integrated Gradients [111] based on the continuous extension of Shapley value”; Sundararajan, in page 3, “Integrated Gradients”).
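For illustration only (this sketch is not part of the record and is not relied upon in the rejection), the integrated-gradients computation described in the Sundararajan passage quoted above — gradients accumulated along the straight-line path from the baseline x' to the input x — can be approximated with a simple Riemann sum. The gradient function and example model below are assumptions chosen for demonstration, not taken from any cited reference:

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=1000):
    """Approximate integrated gradients with a Riemann sum over the
    straight-line path from the baseline x' to the input x."""
    alphas = np.linspace(0.0, 1.0, steps + 1)
    # Gradient of the model output w.r.t. the input, at each point on the path
    grads = np.array([grad_f(baseline + a * (x - baseline)) for a in alphas])
    # Cumulate (average) the gradients, then scale by the input difference
    return (x - baseline) * grads.mean(axis=0)

# Hypothetical model f(x) = sum(x**2), so grad f(x) = 2x.
# The exact integrated gradient from a zero baseline is then x**2 per feature.
x = np.array([1.0, 2.0])
baseline = np.zeros_like(x)
attr = integrated_gradients(lambda z: 2.0 * z, x, baseline)
```

With enough steps the approximation converges to the exact per-feature attributions (here, approximately [1.0, 4.0]).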
Claims 10 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Gou et al. (US 20190303765 A1) in view of Heuillet et al. (“Explainability in Deep Reinforcement Learning”, 12/18/2020, 24 pages cited in IDS dated 05/11/2023) and further in view of Stephens et al. (US 20210133742 A1).
As per claim 10, the rejection of claim 9 is incorporated, but the combination does not specifically teach wherein the correlation comprises at least one of: a Pearson correlation coefficient; or a Spearman’s rank correlation coefficient. However, Stephens teaches a correlation comprising at least one of a Pearson correlation coefficient or a Spearman’s rank correlation coefficient (e.g. in page 3, “A feature may be defined as correlated with another when a measure of the correlation exceeds a threshold value... measure may be any known parameter for quantifying correlation, e.g. Pearson product-moment correlation coefficient, Spearman's rank correlation coefficient, etc.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Stephens because one of ordinary skill in the art would have recognized the benefit of incorporating well-known correlation calculations (this also amounts to a simple substitution that yields predictable results [e.g. see KSR Int'l Co. v. Teleflex Inc., 550 US 398, 82 USPQ2d 1385, 1396 (U.S. 2007) and MPEP 2143(B)]).
As per claim 16, the rejection of claim 15 is incorporated, but the combination does not specifically teach wherein the correlation comprises at least one of: a Pearson correlation coefficient; or a Spearman’s rank correlation coefficient. However, Stephens teaches a correlation comprising at least one of a Pearson correlation coefficient or a Spearman’s rank correlation coefficient (e.g. in page 3, “A feature may be defined as correlated with another when a measure of the correlation exceeds a threshold value... measure may be any known parameter for quantifying correlation, e.g. Pearson product-moment correlation coefficient, Spearman's rank correlation coefficient, etc.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Stephens because one of ordinary skill in the art would have recognized the benefit of incorporating well-known correlation calculations (this also amounts to a simple substitution that yields predictable results [e.g. see KSR Int'l Co. v. Teleflex Inc., 550 US 398, 82 USPQ2d 1385, 1396 (U.S. 2007) and MPEP 2143(B)]).
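For illustration only (this sketch is not part of the record and is not relied upon in the rejection), the two well-known correlation measures named in the Stephens passage quoted above can be computed as follows. Spearman’s rank coefficient is simply the Pearson coefficient applied to ranks; this minimal version assumes the data contain no tied values:

```python
import numpy as np

def pearson(x, y):
    # Pearson product-moment correlation: covariance normalized by
    # the product of the standard deviations
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc**2).sum() * (yc**2).sum()))

def spearman(x, y):
    # Spearman's rank correlation: Pearson correlation of the ranks
    # (double argsort yields ranks; assumes no ties in the data)
    rank = lambda v: v.argsort().argsort().astype(float)
    return pearson(rank(x), rank(y))

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])
# A perfectly linear relationship yields 1.0 for both coefficients
```

For production use, library routines such as scipy.stats.pearsonr and scipy.stats.spearmanr also handle ties and report p-values.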
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
For example,
Pai et al. (US 20200279140 A1) teaches “after a first few rounds of training, the model training system 102 may cause a user interface to be displayed to a client device, which graphically illustrates local and/or global explanation scores for each feature of one or more classes or test points. In this way, for example, a machine learning model developer can visually identify any potential biases or other problems in the data such that the machine learning model can be modified if needed” (e.g. in paragraph 51).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM WONG whose telephone number is (571)270-1399. The examiner can normally be reached Monday-Friday 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, TAMARA KYLE can be reached at (571)272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/W.W/Examiner, Art Unit 2144 02/07/2026
/TAMARA T KYLE/Supervisory Patent Examiner, Art Unit 2144