DETAILED ACTION
1. This communication is in response to the arguments/remarks filed on February 9, 2026 for Application No. 17/992,334 in which Claims 1-20 are presented for examination.
Notice of Pre-AIA or AIA Status
2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
3. The information disclosure statement submitted on 03/09/2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments
4. The arguments/remarks filed on February 9, 2026 have been considered. No claims have been amended or cancelled. Thus, Claims 1-20 are pending and presented for examination.
5. Applicant's arguments filed February 9, 2026 with respect to the 35 U.S.C. 101 rejection have been fully considered but they are not persuasive.
Applicant’s Arguments on Pgs. 8-9 of Arguments/Remarks state:
“Claim 1 is like the claim of Example 39. Like the claim of Example 39, claim 1 recites operations associated with the training of a machine learning model. Further, like the claim of Example 39, claim 1 recites various operations for bringing about the training of the model. Further, like the claim of Example 39, claim 1 "does not recite any mathematical relationships, formulas, or calculations" and "mathematical concepts are not recited in the claims." See id.
Also, like the claim of Example 39, claim 1 does not recite a mental process "because the steps are not practically performed in the human mind." See id. Applying transformations to digital facial images, creating various training sets, and training a neural network make the claim of Example 39 impractical to perform in the human mind. In the same way, claim 1 recites: […]
These operations are not capable of mental execution. For example, it is not practical to execute a trained model mentally ("applying an explainer model . . ."). Neither is it practical to mentally modify a "bias-cleared private model based at least in part on the first explanation loss data and the first noise data."
For at least these reasons, Applicant submits that there is no prima facie ineligibility of claim 1 or of claims 2-10 that depend directly or indirectly from claim 1. Further, Applicant submits that independent claims 11 and 20 include the features of claim 1 described herein. Accordingly, Applicant submits that there is no prima facie ineligibility of claim 11, of claims 12-19 that depend directly or indirectly from claim 11, or of claim 20.”
Examiner respectfully disagrees. As an initial matter, Examiner notes that it is unclear what Applicant is referencing with the first citation on Pg. 8 (“In the same way, claim 1 recites: accessing the sensor data stream, the sensor data stream being generated by a sensor […]”), as these limitations are not part of the instant claims, the specification, or Example 39 as referenced by Applicant. As such, this is believed to be a typographical error.
Regarding the 35 U.S.C. 101 abstract idea rejection, Applicant asserts that instant claim 1 is analogous to Example 39 of the subject matter eligibility examples. Examiner respectfully disagrees. It must first be noted that Example 39 recites the specific limitations “applying one or more transformations to each digital facial image including mirroring, rotating, smoothing, or contrast reduction to create a modified set of digital facial images” and “creating a first training set comprising the collected set of digital facial images, the modified set of digital facial images, and a set of digital non-facial images” – these limitations clearly cannot be performed as a mental process. For example, a human user is not capable of mentally applying transformations, such as mirroring, rotating, smoothing, or contrast reduction, to create a modified set of digital facial images. Furthermore, because the facial images, modified facial images, and non-facial images are all digital, it is not feasible for a human to mentally create a training set comprising digital facial images.
In comparison, instant claim 1 recites the limitations “[…] generate first bias-cleared private model explanation data”, “determining a first explanation loss using the first bias-cleared private model explanation data and the first bias-cleared model explanation data”, “determining first noise data to be added to the bias-cleared private model based at least in part on a privacy budget”, and “modifying the bias-cleared private model based at least in part on the first explanation loss and the first noise data” – these limitations are not recited in a technical manner that would preclude them from being performed as a mental process. For example, a user is capable of generating first bias-cleared private model explanation data by observing/analyzing the first bias-cleared private model output data (which is already obtained based on the claim language) and accordingly using judgement/evaluation to generate first bias-cleared private model explanation data, in which the explanation data describes the model output to provide a rationale behind the prediction. Further, a user is capable of “determining a first explanation loss […]” by determining a difference between the first bias-cleared private model explanation data and the first bias-cleared model explanation data – this interpretation is also supported by Applicant’s specification Par. [0033]. Similarly, a user is able to “determine first noise data to be added […]” by observing/analyzing a predetermined privacy budget and accordingly using judgement/evaluation to determine first noise data based on said analysis of the privacy budget. Finally, a user is also capable of “modifying the bias-cleared private model […]” by observing/analyzing the first explanation loss and first noise data and accordingly using judgement/evaluation to modify the model (with the aid of pen and paper) by updating weights or adding noise. Again, there is nothing in the presently drafted claim language that precludes these limitations from being performed as a mental process.
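For purposes of illustration only, the following minimal sketch shows one conventional way the determinations discussed above could be expressed; the function names, example values, and the inverse-budget noise scale are hypothetical assumptions made solely for this discussion and are not drawn from Applicant's disclosure or the cited art.

    import numpy as np

    def explanation_loss(private_explanation, cleared_explanation):
        # Difference-based explanation loss: a mean absolute error between the
        # private model's explanation data and the bias-cleared model's
        # explanation data (cf. the readings of Pars. [0033] and [0042] above).
        return np.mean(np.abs(private_explanation - cleared_explanation))

    def noise_for_privacy_budget(shape, privacy_budget, rng):
        # A lower privacy budget implies more random noise (cf. the reading of
        # Par. [0036] above); here the Gaussian scale is the budget's inverse.
        return rng.normal(loc=0.0, scale=1.0 / privacy_budget, size=shape)

    rng = np.random.default_rng(0)
    private_expl = np.array([0.20, 0.50, 0.10, 0.20])  # hypothetical explanation data (private model)
    cleared_expl = np.array([0.30, 0.40, 0.10, 0.20])  # hypothetical explanation data (bias-cleared model)

    first_explanation_loss = explanation_loss(private_expl, cleared_expl)           # 0.05
    first_noise_data = noise_for_privacy_budget((4,), privacy_budget=2.0, rng=rng)  # noise scale 0.5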
Furthermore, Applicant states that “it is not practical to execute a trained model mentally (‘applying an explainer model . . .’)” – however, Examiner notes that this limitation is not treated as a mental process at Step 2A Prong 1 and is instead addressed at Step 2A Prong 2 and Step 2B. More specifically, “applying an explainer model […]” amounts to merely adding the words “apply it” with the judicial exception or mere instructions to implement an abstract idea on a computer – this cannot provide an inventive concept. Examiner also notes that Applicant merely states that “neither is it practical to mentally modify a ‘bias-cleared private model […]’” without actually addressing how this is impractical or not possible – the argument is not persuasive.
Thus, the 35 U.S.C. 101 rejection is maintained.
6. Applicant's arguments filed February 9, 2026 with respect to the 35 U.S.C. 103 rejection have been fully considered but they are not persuasive.
Applicant’s Arguments on Pg. 11 of Arguments/Remarks state:
“Initially, although Patel does mention training a model, Patel's description of training is superficial and does not include considering both explanation and privacy-related factors during training. Patel mentions that its "differentially private neural networks" are trained using particular data sets, such as the IMDB data set with the Tensorflow differential privacy library, and the Face dataset. See Patel at p. 17. Patel fails, however, to disclose specific details of the training including, for example, specific losses determined during a training epoch.
Further, although Patel does mention differential privacy and explainability, Patel fails to teach or suggest any arrangement that considers both factors during training of the machine learning model. Instead, Patel trains a differentially private model. After training the differentially private model, Patel develops differentially private explanation data. See Patel at p. 5.
Kamkar fails to remedy the defects of Patel. Kamkar does disclose training a model, and particularly adversarial training methods, where an adversarial model is used to attempt to breach the privacy of the training data used to train a main model. See Kamkar at 208. Kamkar discloses the use of model decomposition to generate explanations of the contributions of various input variables to the model. See Kamkar at 61-62, 188 (model decomposition occurs after the initial and new models are generated).
Accordingly, neither cited reference teaches or suggests the features of claim 1 above including, for example, any executing a training epoch that comprises, "modifying the bias-cleared private model based at least in part on the first explanation loss and the first noise data," as recited by claim 1.”
Examiner respectfully disagrees. First, Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references.
Regarding Applicant’s argument that “Patel fails to teach or suggest any arrangement that considers both factors during training of the machine learning model”, specifically with respect to executing a training epoch that comprises “modifying the bias-cleared private model based at least in part on the first explanation loss and the first noise data”, Examiner respectfully disagrees. This limitation is taught by Patel Pg. 13, Section 5, Protecting the Privacy of the Training Dataset, which discloses how Gaussian noise is injected into the gradient of the loss function at each iteration – hence, at each iteration/epoch of the training, the private model is clearly modified based at least in part on the explanation loss and the noise data.
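For illustration only, the following generic sketch of noisy gradient descent (not Patel's algorithm or code; the toy linear model, values, and noise scale are hypothetical assumptions) shows how injecting Gaussian noise into the gradient of a loss function at each iteration modifies the model based on both the loss and the injected noise.

    import numpy as np

    rng = np.random.default_rng(0)

    def loss_and_grad(theta, X, y):
        # Squared-error loss and its gradient for a toy linear model y = X @ theta.
        residual = X @ theta - y
        return 0.5 * np.mean(residual ** 2), X.T @ residual / len(y)

    X = rng.normal(size=(32, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=32)
    theta = np.zeros(3)                  # model parameters to be trained
    learning_rate, noise_scale = 0.1, 0.05

    for iteration in range(100):
        loss, grad = loss_and_grad(theta, X, y)
        noisy_grad = grad + rng.normal(scale=noise_scale, size=grad.shape)  # Gaussian noise injected into the gradient
        theta = theta - learning_rate * noisy_grad  # parameters modified using the loss gradient and the injected noise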
Thus, the 35 U.S.C. 103 rejection is maintained.
Claim Rejections - 35 USC § 101
7. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
8. Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding Claim 1:
Step 1: Claim 1 is a system-type claim (i.e., a machine). Therefore, Claims 1-10 are directed to one of the four statutory categories of invention (a process, machine, manufacture, or composition of matter).
Step 2A Prong 1: If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation by mathematical calculation but for the recitation of generic computer components, then it falls within the “Mathematical Concepts” grouping of abstract ideas.
[…] generate first bias-cleared private model explanation data (mental process – other than reciting “an explainer model”, generating first bias-cleared private model explanation data may be performed manually by a user observing/analyzing the first bias-cleared private model output data and based on said analysis, accordingly using judgement/evaluation to generate first bias-cleared private model explanation data, in which the explanation data describes the model output to provide a rationale behind the private model's prediction, hence making the model more easily understood by a human user)
determining a first explanation loss using the first bias-cleared private model explanation data and the first bias-cleared model explanation data (mental process/mathematical process – determining a first explanation loss may be performed manually by a user observing/analyzing the first bias-cleared private model explanation data and the first bias-cleared model explanation data and accordingly using judgement/evaluation to determine a difference between the first bias-cleared private model explanation data and the first bias-cleared model explanation data. This is similarly supported by Applicant’s specification Par. [0033] and dependent claim 10, which further detail how such an explanation loss may be determined by a difference between both values. Alternatively, the first explanation loss may be calculated by mathematical process utilizing a formula for calculating a mean absolute error value between the private explanation data and the explanation data, as supported by Applicant’s specification Par. [0042])
determining first noise data to be added to the bias-cleared private model based at least in part on a privacy budget (mental process – determining first noise data may be performed manually by a user observing/analyzing the privacy budget and accordingly using judgement/evaluation to determine first noise data to be added to the bias-cleared private model based on said analysis of the privacy budget. For example, Applicant’s specification Par. [0036] states that the privacy budget can describe the amount of random noise that is added to a dataset, such that a lower privacy budget implies a higher level of random noise – hence, a user may observe/analyze the privacy budget to determine the amount of noise to be added)
modifying the bias-cleared private model based at least in part on the first explanation loss and the first noise data (mental process – modifying the bias-cleared private model may be performed manually by a user observing/analyzing the first explanation loss and first noise data and accordingly using judgement/evaluation to modify the model (with the aid of pen and paper), by updating weights or adding noise (See Applicant’s specification Par. [0019-0020]), based on said analysis of the explanation loss and first noise data)
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements:
a computing system […] the computing system comprising: at least one processor programmed to perform operations […] (recited at a high-level of generality (i.e., as a generic system comprising a generic processor already configured to perform the operations of the claim) such that it amounts to no more than mere instructions to apply the exception using generic computer components)
[…] train a machine learning model […] (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high level recitation of training a machine learning model with previously determined data without significantly more)
accessing a bias-cleared model […] (Adding insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g))
[…] the bias-cleared model being a machine learning model trained according to at least one fairness constraint (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high level recitation of training a machine learning model with previously determined data without significantly more)
executing a first training epoch for a bias-cleared private model […] (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high level recitation of training a machine learning model with previously determined data without significantly more)
accessing first bias-cleared private model output data generated by the bias-cleared private model (Adding insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g))
applying an explainer model to first bias-cleared private model output data […] (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high level recitation of applying an already trained and configured machine learning model using previously determined data without significantly more)
accessing first bias-cleared model explanation data describing first bias-cleared model output data generated by the bias-cleared model […] (Adding insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g))
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
a computing system […] the computing system comprising: at least one processor programmed to perform operations […] (mere instructions to apply the exception using generic computer components cannot provide an inventive concept)
[…] train a machine learning model […] (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high level recitation of training a machine learning model with previously determined data without significantly more)
accessing a bias-cleared model […] (MPEP 2106.05(d)(II) indicates that merely “Receiving or transmitting data over a network” (e.g. using the Internet to gather data) is a well-understood, routine, conventional function when it is claimed in a merely generic manner (as it is in the present claim). Thereby, a conclusion that the claimed limitation is well-understood, routine, conventional activity is supported under Berkheimer)
[…] the bias-cleared model being a machine learning model trained according to at least one fairness constraint (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high level recitation of training a machine learning model with previously determined data without significantly more)
executing a first training epoch for a bias-cleared private model […] (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high level recitation of training a machine learning model with previously determined data without significantly more)
accessing first bias-cleared private model output data generated by the bias-cleared private model (MPEP 2106.05(d)(II) indicates that merely “Receiving or transmitting data over a network” (e.g. using the Internet to gather data) is a well-understood, routine, conventional function when it is claimed in a merely generic manner (as it is in the present claim). Thereby, a conclusion that the claimed limitation is well-understood, routine, conventional activity is supported under Berkheimer)
applying an explainer model to first bias-cleared private model output data […] (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high level recitation of applying an already trained and configured machine learning model using previously determined data without significantly more)
accessing first bias-cleared model explanation data describing first bias-cleared model output data generated by the bias-cleared model […] (MPEP 2106.05(d)(II) indicates that merely “Receiving or transmitting data over a network” (e.g. using the Internet to gather data) is a well-understood, routine, conventional function when it is claimed in a merely generic manner (as it is in the present claim). Thereby, a conclusion that the claimed limitation is well-understood, routine, conventional activity is supported under Berkheimer)
For the reasons above, Claim 1 is rejected as being directed to an abstract idea without significantly more. This rejection applies equally to dependent claims 2-10. The additional limitations of the dependent claims are addressed below.
Regarding Claim 2:
Step 2A Prong 1:
See the rejection of Claim 1 above, which Claim 2 depends on.
Step 2A Prong 2 & Step 2B:
accessing model structure data describing a machine learning model structure […] (MPEP 2106.05(d)(II) indicates that merely “Receiving or transmitting data over a network” (e.g. using the Internet to gather data) is a well-understood, routine, conventional function when it is claimed in a merely generic manner (as it is in the present claim). Thereby, a conclusion that the claimed limitation is well-understood, routine, conventional activity is supported under Berkheimer)
[…] the bias-cleared model and the bias-cleared private model being arranged according to the machine learning model structure (Field of Use – limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application; in this case, specifying that the bias-cleared model and bias-cleared private model are arranged according to a generic machine learning model structure does not integrate the exception into a practical application nor amount to significantly more – See MPEP 2106.05(h))
Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of claim 1.
Regarding Claim 3:
Step 2A Prong 1:
See the rejection of Claim 1 above, which Claim 3 depends on.
Step 2A Prong 2 & Step 2B:
the bias-cleared model comprising a first neural network and the bias-cleared private model comprising a second neural network: the first neural network comprising a first input layer, a first number of hidden layers, and a first output layer; and the second neural network comprising a second input layer, the first number of hidden layers, and a second output layer (Field of Use – limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application; in this case, specifying that the models comprise a generic first/second neural network with a plurality of layers does not integrate the exception into a practical application nor amount to significantly more – See MPEP 2106.05(h))
Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of claim 1.
Regarding Claim 4:
Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 4 depends on.
[…] generate the first bias-cleared model output data (mental process – other than reciting “bias-cleared model”, generating the first bias-cleared model output data may be performed manually by a user observing/analyzing the first batch of training data and accordingly using judgement/evaluation to mitigate/clear analyzed biases from the data to generate the first bias-cleared model output data)
[…] generate the first bias-cleared private model output data […] (mental process – other than reciting “bias-cleared private model”, generating the first bias-cleared private model output data may be performed manually by a user observing/analyzing the second batch of training data and accordingly using judgement/evaluation to mitigate/clear analyzed biases from the data to generate first bias-cleared private model output data)
Step 2A Prong 2 & Step 2B:
executing the bias-cleared model on a first batch of training data […] (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner's note: high level recitation of training a machine learning model with previously determined data)
the executing of the first training epoch further comprising executing the bias-cleared private model on a second batch of training data (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner's note: high level recitation of training a machine learning model with previously determined data)
[…] the first batch of training data being different than the second batch of training data (Field of Use – limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application; in this case, specifying that the first batch of training data is different than the second batch of training data does not integrate the exception into a practical application nor amount to significantly more – See MPEP 2106.05(h))
Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of claim 1.
Regarding Claim 5:
Step 2A Prong 1: See the rejection of Claim 4 above, which Claim 5 depends on.
selecting the second batch of training data based at least in part on the first batch of training data (mental process – selecting the second batch of training data may be performed manually by a user observing/analyzing the first batch of training data and accordingly using judgement/evaluation to select the second batch of training data based at least in part on the analysis of said first batch of training data)
Step 2A Prong 2 & Step 2B:
Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of claim 1.
Regarding Claim 6:
Step 2A Prong 1: See the rejection of Claim 5 above, which Claim 6 depends on.
selecting of the second batch of training data comprising comparing first label data describing the first batch of training data and second label data describing the second batch of training data (mental process – selecting of the second batch of training data may be performed manually by a user observing/analyzing the first label data and second label data and accordingly using judgement/evaluation to compare the first label data and second label data to determine which samples to select for the second batch of training data)
Step 2A Prong 2 & Step 2B:
Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of claim 1.
Regarding Claim 7:
Step 2A Prong 1:
See the rejection of Claim 4 above, which Claim 7 depends on.
Step 2A Prong 2 & Step 2B:
training of the bias-cleared model being based at least in part on the first batch of training data (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner's note: high level recitation of training a machine learning model with previously determined data)
Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of claim 1.
Regarding Claim 8:
Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 8 depends on.
determining a utility loss of the bias-cleared private model using the first bias-cleared private model output data (mathematical process – determining a utility loss may be performed by mathematical process using a loss function (for example, as part of a stochastic gradient descent technique, as supported by Applicant's specification Par. [0041]) applied to the first bias-cleared private model output data)
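For illustration only, a minimal sketch of determining a utility loss from model output data; the particular loss (a cross-entropy between predicted class probabilities and labels) and the values below are illustrative assumptions, not Applicant's disclosed implementation.

    import numpy as np

    def utility_loss(output_probs, labels):
        # Average negative log-likelihood of the correct label over the batch.
        eps = 1e-12
        return -np.mean(np.log(output_probs[np.arange(len(labels)), labels] + eps))

    output_probs = np.array([[0.8, 0.2],   # hypothetical private model output data (probabilities)
                             [0.3, 0.7],
                             [0.6, 0.4]])
    labels = np.array([0, 1, 0])
    loss = utility_loss(output_probs, labels)  # approximately 0.364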
Step 2A Prong 2 & Step 2B:
Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of claim 1.
Regarding Claim 9:
Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 9 depends on.
[…] generate the first bias-cleared model output data (mental process – other than reciting “bias-cleared model”, generating the first bias-cleared model output data may be performed manually by a user observing/analyzing the first batch of training data and accordingly using judgement/evaluation to mitigate/clear bias and generate the first bias-cleared model output data based on said analysis)
[…] generate the first bias-cleared private model output data (mental process – other than reciting “bias-cleared private model”, generating the first bias-cleared private model output data may be performed manually by a user observing/analyzing the first batch of training data and accordingly using judgement/evaluation to mitigate/clear bias and generate the first bias-cleared private model output data based on said analysis)
Step 2A Prong 2 & Step 2B:
executing of the first training epoch further comprising: executing the bias-cleared model on a first batch of training data […] (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner's note: high level recitation of training a machine learning model with previously determined data)
executing the bias-cleared private model on the first batch of training data […] (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner's note: high level recitation of training a machine learning model with previously determined data)
Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of claim 1.
Regarding Claim 10:
Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 10 depends on.
determining of the first explanation loss comprising comparing a target explanation to a difference between the first bias-cleared private model explanation data and the first bias-cleared model explanation data (mental process – determining the first explanation loss may be performed manually by a user observing/analyzing the first bias-cleared private model explanation data, the first bias-cleared model explanation data, and the target explanation and accordingly using judgement/evaluation to compare the target explanation to a difference between the first bias-cleared private model explanation data and the first bias-cleared model explanation data)
Step 2A Prong 2 & Step 2B:
Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of claim 1.
Independent Claim 11 recites substantially the same limitations as Claim 1, in the form of a method. The claim is also directed to performing mental processes/mathematical calculations without significantly more; therefore, it is rejected under the same rationale.
For the reasons above, Claim 11 is rejected as being directed to an abstract idea without significantly more. This rejection applies equally to dependent claims 12-19. The additional limitations of the dependent claims are addressed below.
Claim 12 recites substantially the same limitations as Claim 2, in the form of a method, including generic computer components. The claim is also directed to performing mental processes/mathematical calculations without significantly more; therefore, it is rejected under the same rationale.
Claim 13 recites substantially the same limitations as Claim 3, in the form of a method, including generic computer components. The claim is also directed to performing mental processes/mathematical calculations without significantly more; therefore, it is rejected under the same rationale.
Claim 14 recites substantially the same limitations as Claim 4, in the form of a method, including generic computer components. The claim is also directed to performing mental processes/mathematical calculations without significantly more; therefore, it is rejected under the same rationale.
Claim 15 recites substantially the same limitations as Claim 5, in the form of a method, including generic computer components. The claim is also directed to performing mental processes/mathematical calculations without significantly more; therefore, it is rejected under the same rationale.
Claim 16 recites substantially the same limitations as Claim 6, in the form of a method, including generic computer components. The claim is also directed to performing mental processes/mathematical calculations without significantly more; therefore, it is rejected under the same rationale.
Claim 17 recites substantially the same limitations as Claim 7, in the form of a method, including generic computer components. The claim is also directed to performing mental processes/mathematical calculations without significantly more; therefore, it is rejected under the same rationale.
Claim 18 recites substantially the same limitations as Claim 8, in the form of a method, including generic computer components. The claim is also directed to performing mental processes/mathematical calculations without significantly more; therefore, it is rejected under the same rationale.
Claim 19 recites substantially the same limitations as Claim 9, in the form of a method, including generic computer components. The claim is also directed to performing mental processes/mathematical calculations without significantly more; therefore, it is rejected under the same rationale.
Independent Claim 20 recites substantially the same limitations as Claim 1, in the form of a machine-readable medium, including generic computer components. The claim is also directed to performing mental processes/mathematical calculations without significantly more; therefore, it is rejected under the same rationale.
For the reasons above, Claim 20 is rejected as being directed to an abstract idea without significantly more.
Claim Rejections - 35 USC § 103
9. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
10. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Patel et al. (hereinafter Patel) (“Model Explanations with Differential Privacy”), in view of Kamkar et al. (hereinafter Kamkar) (US PG-PUB 20230105547).
Regarding Claim 1, Patel teaches a computing system programmed to train a machine learning model (Patel, Pg. 3, Section 2 Problem Statement, "Consider a black-box decision-making machine learning model f : Rn -> R trained on the training dataset T. The model returns the predicted label, and provides no more information about its decision. Our goal is to generate a model explanation (z, X, f(X)), whose input is a point of interest z, an explanation dataset X = {x1, …, xm} ⊆ Rn (used to generate the explanation), and the output of f on X.", thus, a computing system programmed to train a machine learning model and provide explanations on the model is disclosed. It is further stated in Section 6 Experimental Results on Pg. 14 that a random forest classifier and convolutional neural network are trained across different benchmark datasets), the computing system comprising:
at least one processor programmed (See introduction of Kamkar reference below for explicit teaching of the computing system comprising at least one processor) to perform operations comprising:
accessing a bias-cleared model (Patel, Pg. 17, Section 6.3 Differentially Private Explanation for Differentially Private Models, “We also train a non-private model, f, which achieves a test accuracy of 83.6%.”, thus, a non-private model is accessed), the bias-cleared model being a machine learning model trained according to at least one fairness constraint (See introduction of Kamkar reference below for explicit teaching of a bias-cleared model. Examiner’s Note: This teaching is applied to all subsequent recitations of the term “bias-cleared” in the claim mappings below, as the limitation “bias-cleared” is simply considered a label, with no impact on the subsequent functional steps of the claim); and
executing a first training epoch for a bias-cleared private model (Patel, Pg. 17, Section 6.3 Differentially Private Explanation for Differentially Private Models, “We train differentially private neural networks (all with the same architecture) on the IMDB Dataset [22] using the Tensorflow differential privacy library [4]; we obtain differentially private models of the form fε with different privacy parameters ε = 0.5, 2, 10 and δ = 10^-5. The models are trained on the same training and test dataset, split in a 70 : 30 ratio (35000 training reviews and 15000 test reviews).”, thus, a first training epoch is executed for at least one private model), the executing of the first training epoch comprising:
accessing first bias-cleared private model output data generated by the bias-cleared private model (Patel, Pg. 3, Section 2 Problem Statement, "Consider a black-box decision-making machine learning model f : Rn -> R trained on the training dataset T. The model returns the predicted label, and provides no more information about its decision. Our goal is to generate a model explanation (z, X, f(X)), whose input is a point of interest z, an explanation dataset X = {x1, …, xm} ⊆ Rn (used to generate the explanation), and the output of f on X.", thus, first private model output data, which may be generated by a private model (supported by Patel Pg. 17 Section 6.3, which states that a plurality of differentially private neural networks are trained), is accessed);
applying an explainer model to first bias-cleared private model output data to generate first bias-cleared private model explanation data (Patel, Pg. 3, Section 2 Problem Statement, "Consider a black-box decision-making machine learning model f : Rn -> R trained on the training dataset T. The model returns the predicted label, and provides no more information about its decision. Our goal is to generate a model explanation (z, X, f(X)), whose input is a point of interest z, an explanation dataset X = {x1, …, xm} ⊆ Rn (used to generate the explanation), and the output of f on X.", thus, private model explanation data (See Patel Pg. 5 Section 3 A Basic DP Mechanism for Model Explanation, which specifically recites that the model explanations include differentially private model explanations) is generated by applying an explainer model to first private model output data. Further, it is stated by Patel Pg. 4 Section 2.1 Model Explanations that a Local Interpretable Model-Agnostic Explanations (LIME) model may be applied for generating model explanations – Applicant’s specification similarly supports the use of a LIME model as the explainer model in Par. [0024]);
accessing first bias-cleared model explanation data describing first bias-cleared model output data generated by the bias-cleared model (Patel, Pg. 18, Section 6.3 Differentially Private Explanation for Differentially Private Models, “We compare the private explanation for the POI for the different private models with the base optimal explanation for the non-private model.”, therefore, model explanation data describing first model output data generated by the non-private model is accessed);
determining a first explanation loss using the first bias-cleared private model explanation data and the first bias-cleared model explanation data (Patel, Pg. 5, Section 2.2 Evaluation Metrics, “The quality of a model explanation is measured in terms of the difference between its (expected) loss and the loss of the optimal explanation which is referred as approximation loss of explanation: [See equation 4 on Pg. 5]”, thus, a first explanation loss is determined using the first private model explanation data (expected loss of the private explanation) and the first model explanation data (loss of the optimal explanation));
determining first noise data to be added to the bias-cleared private model based at least in part on a privacy budget (Patel, Pg. 12, Section 4.4 Early Termination and an Enhanced Adaptive Algorithm, “Therefore, in high-dimensional settings, the noise required for privacy dominates the lower norm of the initial gradient and prevents Algorithm 3 from saving more of the privacy budget. During the gradient descent optimization phase, if L( t ) nmin, then Gaussian noise starts dominating L( t ). This domination by random Gaussian noise prevents further improvement in the loss function. Thus, the extra privacy budget spent once L( t ) nmin does not significantly improve explanation quality (approximation error); rather, it starts oscillating around the optimal point, resulting in slower convergence/decrease in approximation loss.”, therefore, first noise data, to be added to the private model, is determined based at least in part on a privacy budget); and
modifying the bias-cleared private model based at least in part on the first explanation loss and the first noise data (Patel, Pg. 13, Section 5 Protecting the Privacy of the Training Dataset, “Our algorithms inject Gaussian noise to the gradient of the loss function at each iteration; fortunately, this noise injection offers some privacy guarantees with respect to the training dataset. We quantify the improvement in the privacy guarantees of the training dataset due to the randomness inserted by our explanation algorithms when the underlying training process is (ε, δ)-differentially private.”, thus, the private model may be modified, at each training iteration, based at least in part on injection of noise to the gradient of the loss function – hence, the private model may be modified based at least in part on the explanation loss and first noise data).
While Patel broadly discloses a computing system programmed to train a machine learning model, Patel does not explicitly disclose at least one processor programmed to perform operations comprising: […]
However, Kamkar teaches at least one processor programmed to perform operations (Kamkar, Par. [0055], “In some variations, one or more of the components of the system are implemented as a hardware device that includes one or more of a processor (e.g., a CPU (central processing unit), GPU (graphics processing unit), NPU (neural processing unit), etc.), a display device, a memory, a storage device, an audible output device, an input device, an output device, and a communication interface”, thus, at least one processor is disclosed) comprising: […]
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the computing system programmed to train a machine learning model, as disclosed by Patel, to include wherein the computing system comprises at least one processor programmed to perform operations, as disclosed by Kamkar. One of ordinary skill in the art would have been motivated to make this modification to enable the use of a processor (e.g. CPU, GPU, NPU) which may receive instructions, from a storage medium, to be executed by said processor to perform efficient training of a machine learning model (Kamkar, Par. [0055], “In some variations, one or more of the components of the system are implemented as a hardware device that includes one or more of a processor (e.g., a CPU (central processing unit), GPU (graphics processing unit), NPU (neural processing unit), etc.), a display device, a memory, a storage device, an audible output device, an input device, an output device, and a communication interface.”).
Patel does not explicitly disclose accessing a bias-cleared model, the bias-cleared model being a machine learning model trained according to at least one fairness constraint;
However, Kamkar teaches accessing a bias-cleared model, the bias-cleared model being a machine learning model trained according to at least one fairness constraint (Kamkar, Abstract, “obtaining data relating to a plurality of potential borrowers; providing the data to the trained machine learning model; obtaining, by the trained machine learning model’s processing of the provided data, the one or more outputs of the trained machine learning model; and automatically generating a report that explains the one or more outputs of the trained machine learning model with respect to one or more fairness metrics and one or more accuracy metrics; and providing the automatically generated report for display on a user device.”, thus, a bias-cleared model is accessed, where the bias-cleared model is a machine learning model trained according to at least one fairness constraint/metric. Further, it is stated by Kamkar Par. [0005] & [0007] that the disclosure relates to mitigating bias/training de-biased machine learning models)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the computing system programmed to train a machine learning model, as disclosed by Patel, to include accessing a bias-cleared model, the bias-cleared model being a machine learning model trained according to at least one fairness constraint, as disclosed by Kamkar. One of ordinary skill in the art would have been motivated to make this modification to enable training, which integrates improved privacy and explainability, for bias-cleared models – hence, enhancing the accuracy, interpretability, and fairness of decisions informed by private and non-private machine learning models (Kamkar, Par. [0002], “This specification generally relates to enhancing the accuracy and fairness of decisions informed by machine learning models, and to automated methods for explaining the outputs of those models, and assessing their suitability for use in decision-making.”).
Regarding Claim 2, Patel in view of Kamkar teaches the computing system of claim 1, the operations further comprising accessing model structure data describing a machine learning model structure, the bias-cleared model (Patel, Pgs. 17-18, Section 6.3 Differentially Private Explanation for Differentially Private Models, “We also train differentially private neural networks (all with the same architecture) on the Face Dataset, with privacy parameters ε = 0.5, 2, 10 and δ = 10^-5. They achieve a test accuracy of 51%, 62%, and 80%, respectively. We also train a non-private model, f, which achieves test accuracy of 84.3%. The models are trained on the same training and test dataset split into 70 : 30 ratio (number of training images = 8510 and test images = 3646)”, thus, model structure data/architecture data is accessed and the model is arranged according to the same architecture. For explicit teaching of a bias-cleared model, see rejection of Claim 1 above) and the bias-cleared private model being arranged according to the machine learning model structure (Patel, Pg. 17, Section 6.3 Differentially Private Explanation for Differentially Private Models, “We train differentially private neural networks (all with the same architecture) on the IMDB Dataset [22] using the Tensorflow differential privacy library [4]; we obtain differentially private models of the form fε with different privacy parameters ε = 0.5, 2, 10 and δ = 10^-5. The models are trained on the same training and test dataset, split in a 70 : 30 ratio (35000 training reviews and 15000 test reviews).”, thus, model structure data/architecture data is accessed and the private model is arranged according to the same architecture. For explicit teaching of a bias-cleared model, see rejection of Claim 1 above).
Regarding Claim 3, Patel in view of Kamkar teaches the computing system of claim 1, the bias-cleared model comprising a first neural network and the bias-cleared private model comprising a second neural network (Patel, Pg. 17, Section 6.3 Differentially Private Explanation for Differentially Private Models, “We train differentially private neural networks (all with the same architecture) on the IMDB Dataset [22] using the Tensorflow differential privacy library [4]; we obtain differentially private models of the form fε with different privacy parameters ε = 0.5, 2, 10 and δ = 10^-5. The models are trained on the same training and test dataset, split in a 70 : 30 ratio (35000 training reviews and 15000 test reviews).”, thus, the private and non-private models may both comprise neural networks. For explicit teaching of a bias-cleared model, see rejection of Claim 1 above):
the first neural network comprising a first input layer, a first number of hidden layers, and a first output layer; and the second neural network comprising a second input layer, the first number of hidden layers, and a second output layer (Patel, Pg. 14, “We train a CNN with two convolution layers with 5 × 5 filters followed by max-pooling and a fully connected layer, achieving training and test accuracy of 86% and 84.3%, respectively. We use this dataset to demonstrate the visualization of our explanations.”, thus, the neural networks (both private and non-private) utilized in the experimental results for the Facial expression/Face dataset may comprise a convolutional neural network, which consists of an input layer, two convolution layers (hidden layers), and an output layer).
Regarding Claim 4, Patel in view of Kamkar teaches the computing system of claim 1, the operations further comprising:
executing the bias-cleared model on a first batch of training data to generate the first bias-cleared model output data (Patel, Pgs. 17-18, Section 6.3 Differentially Private Explanation for Differentially Private Models, “We also train a non-private model, f, which achieves test accuracy of 84.3%. The models are trained on the same training and test dataset split into 70 : 30 ratio (number of training images = 8510 and test images = 3646)”, thus, the model is executed on a first batch of training data (Face dataset) to generate first model output data. For explicit teaching of a bias-cleared model, see rejection of Claim 1 above); and
the executing of the first training epoch further comprising executing the bias-cleared private model on a second batch of training data to generate the first bias-cleared private model output data, the first batch of training data being different than the second batch of training data (Patel, Pg. 17, Section 6.3 Differentially Private Explanation for Differentially Private Models, “We train differentially private neural networks (all with the same architecture) on the IMDB Dataset [22] using the Tensorflow differential privacy library [4]; we obtain differentially private models of the form fε with different privacy parameters ε = 0.5, 2, 10 and δ = 10^-5. The models are trained on the same training and test dataset, split in a 70 : 30 ratio (35000 training reviews and 15000 test reviews). The models f0.5, f2, f10 achieve a test accuracy of 52%, 74%, 81%, respectively.”, therefore, the private model may be executed on a second batch of training data (IMDB Dataset) to generate private model output data, where the second batch is different than the first batch of training data. For explicit teaching of a bias-cleared model, see rejection of Claim 1 above).
Regarding Claim 5, Patel in view of Kamkar teaches the computing system of claim 4, the operations further comprising selecting the second batch of training data based at least in part on the first batch of training data (Patel, Pg. 14, Section 6 Experimental Results, “We evaluate our model explanations on standard machine learning datasets. We use the following benchmark datasets (we use the datasets in their entirety as the explanation dataset X): […]”, therefore, the model explanations are based on multiple batches of training data, including the IMDB and Facial expression/Face dataset. Using the broadest reasonable interpretation of the limitation “selecting the second batch of training data based at least in part on the first batch of training data”, Patel also teaches selecting the second batch of training data (IMDB dataset) based at least in part on the first batch of training data (Facial expression/Face dataset), as it is stated that all datasets in their entirety are used as the explanation dataset X, including both the first/second batches of training data)
Regarding Claim 6, Patel in view of Kamkar teaches the computing system of claim 5, the selecting of the second batch of training data comprising comparing first label data describing the first batch of training data and second label data describing the second batch of training data (Patel, Pg. 14, Section 6.1 Interactive DP Model Explanation, “We use our non-adaptive protocol (Algorithm 1) to generate private model explanations for points in all datasets. We compare the explanations we generate with existing non-private model agnostic explanation methods. In the Text dataset, we generate differentially private model explanations for 1000 randomly sampled datapoints (movie reviews) from the dataset with strong privacy parameters ε = 0.1 and δ = 10^-6. We present a few examples in Table 2. The explanations generated by our protocol agree with well-established non-private model agnostic explanations: using LIME [27] and MIM [33], we extract the top 5 most influential words.”, therefore, the selecting of the second batch of training data may comprise comparing the private model explanations (using the second batch of training data) to the non-private model explanations (using the first batch of training data). Furthermore, Table 2 on Pg. 15 depicts how labels of the training data (in the second batch of training data, IMDB dataset) may be compared).
Regarding Claim 7, Patel in view of Kamkar teaches the computing system of claim 4, the training of the bias-cleared model being based at least in part on the first batch of training data (Patel, Pgs. 17-18, Section 6.3 Differentially Private Explanation for Differentially Private Models, “We also train a non-private model, f, which achieves test accuracy of 84.3%. The models are trained on the same training and test dataset split into 70 : 30 ratio (number of training images = 8510 and test images = 3646)”, thus, the model is trained based at least in part on a first batch of training data (Face dataset). For explicit teaching of a bias-cleared model, see rejection of Claim 1 above).
Regarding Claim 8, Patel in view of Kamkar teaches the computing system of claim 1, the operations further comprising determining a utility loss of the bias-cleared private model using the first bias-cleared private model output data (Patel, Pg. 6, Section 3.2 Privacy and Utility Guarantees, “We bound the privacy and utility loss of model explanations produced by Algorithm 1. The data dependent quantity, in each iteration of Algorithm 1, is L( X). Given Lemma 3.1, we use the Gaussian mechanism to ensure that the update in each round is differentially private. Using the differential privacy composition theorem [19, Theorem-4.3], the output of Algorithm 1 is (ε, δ)-differentially private. We show that the differentially private mechanism for model explanation offers good utility loss guarantees.”, thus, a utility loss of the private model may be determined using the private model output data at each iteration of its training. For explicit teaching of a bias-cleared model, see rejection of Claim 1 above).
Regarding Claim 9, Patel in view of Kamkar teaches the computing system of claim 1, the executing of the first training epoch further comprising:
executing the bias-cleared model on a first batch of training data to generate the first bias-cleared model output data (Patel, Pgs. 17-18, Section 6.3 Differentially Private Explanation for Differentially Private Models, “We also train a non-private model, f, which achieves test accuracy of 84.3%. The models are trained on the same training and test dataset split into 70 : 30 ratio (number of training images = 8510 and test images = 3646)”, thus, the model is executed on a first batch of training data (Face dataset, same as the private model) to generate first model output data. For explicit teaching of a bias-cleared model, see rejection of Claim 1 above); and
executing the bias-cleared private model on the first batch of training data to generate the first bias-cleared private model output data (Patel, Pg. 17, Section 6.3 Differentially Private Explanation for Differentially Private Models, “We also train differentially private neural networks (all with the same architecture) on the Face Dataset, with privacy parameters ε = 0.5, 2, 10 and δ = 10^-5. They achieve a test accuracy of 51%, 62%, and 80% respectively.”, thus, the private model is executed on a first batch of training data (Face dataset) to generate private model output data. For explicit teaching of a bias-cleared model, see rejection of Claim 1 above).
Regarding Claim 10, Patel in view of Kamkar teaches the computing system of claim 1, the determining of the first explanation loss comprising comparing a target explanation to a difference between the first bias-cleared private model explanation data and the first bias-cleared model explanation data (Patel, Pg. 5, Section 2.2 Evaluation Metrics, “The quality of a model explanation is measured in terms of the difference between its (expected) loss and the loss of the optimal explanation, which is referred to as the approximation loss of the explanation: [See Equation 4 on Pg. 5]”, thus, a first explanation loss is determined using a difference between the first private model explanation data (expected loss of the private explanation) and the first model explanation data (loss of the optimal explanation). Further, it is stated on Patel Pg. 18 that “We compare the private explanation for the POI for the different private models with the base optimal explanation for the nonprivate model.” – thus, a target explanation/base optimal explanation is compared to the difference).
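For illustration only, the interpretation applied above, under which the explanation loss is a difference between the two sets of explanation data that is then compared against a target explanation, can be sketched as follows; the function names and the tolerance value are hypothetical and are not drawn from the claims or from Patel:

    import numpy as np

    def explanation_loss(private_explanation, non_private_explanation):
        # Difference between the bias-cleared private model explanation data
        # and the bias-cleared model explanation data.
        return (np.asarray(private_explanation, dtype=float)
                - np.asarray(non_private_explanation, dtype=float))

    def meets_target(loss, target_explanation, tolerance=0.1):
        # Compare a target (e.g., base optimal) explanation to that difference.
        return float(np.linalg.norm(loss - np.asarray(target_explanation, dtype=float))) <= tolerance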
Regarding Claim 11, Patel in view of Kamkar teaches a computer-implemented method for training a machine learning model (Patel, Pg. 3, Section 2 Problem Statement, “Consider a black-box decision-making machine learning model f : R^n → R trained on the training dataset T. The model returns the predicted label, and provides no more information about its decision. Our goal is to generate a model explanation φ(z, X, f(X)), whose input is a point of interest z, an explanation dataset X = {x_1, …, x_m} ⊆ R^n (used to generate the explanation), and the output of f on X.”, thus, methods for training a machine learning model and providing explanations on the model are disclosed), the model comprising: […]
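For illustration only, the black-box explanation setting quoted above can be sketched as follows; the distance-weighted linear surrogate is an assumption chosen for brevity and is not Patel's protocol, and all names below are hypothetical:

    import numpy as np

    def explain(f, z, X):
        # f is a black-box model returning only a predicted label; z is the
        # point of interest; X is the explanation dataset. A distance-weighted
        # linear surrogate is fit on (X, f(X)) and its coefficients are
        # returned as a feature-attribution vector for z.
        X = np.asarray(X, dtype=float)
        z = np.asarray(z, dtype=float)
        y = np.asarray([f(x) for x in X], dtype=float)
        weights = np.exp(-np.linalg.norm(X - z, axis=1))   # closer points weigh more
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])      # add an intercept column
        sqrt_w = np.sqrt(weights)
        coef, *_ = np.linalg.lstsq(Xb * sqrt_w[:, None], y * sqrt_w, rcond=None)
        return coef[:-1]                                   # drop the intercept term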
The rest of the claim language in Claim 11 recites substantially the same limitations as Claim 1, in the form of a method; therefore, it is rejected under the same rationale.
The reasons for obviousness have been noted in the rejection of Claim 1 above and are applicable herein.
Claim 12 recites substantially the same limitations as Claim 2 in the form of a method; therefore, it is rejected under the same rationale.
Claim 13 recites substantially the same limitations as Claim 3 in the form of a method; therefore, it is rejected under the same rationale.
Claim 14 recites substantially the same limitations as Claim 4 in the form of a method; therefore, it is rejected under the same rationale.
Claim 15 recites substantially the same limitations as Claim 5 in the form of a method; therefore, it is rejected under the same rationale.
Claim 16 recites substantially the same limitations as Claim 6 in the form of a method; therefore, it is rejected under the same rationale.
Claim 17 recites substantially the same limitations as Claim 7 in the form of a method; therefore, it is rejected under the same rationale.
Claim 18 recites substantially the same limitations as Claim 8 in the form of a method; therefore, it is rejected under the same rationale.
Claim 19 recites substantially the same limitations as Claim 9 in the form of a method; therefore, it is rejected under the same rationale.
Regarding Claim 20, Patel in view of Kamkar teaches a machine-readable medium comprising instructions (Kamkar, Par. [0041], “In some embodiments, the repository is included in a storage medium. In some embodiments, the storage system is a database or file system and the storage medium is a hard drive.”, thus, a storage medium for storing instructions is disclosed) thereon that, when executed by at least one processor, cause the at least one processor to perform operations (Kamkar, Par. [0055], “In some variations, one or more of the components of the system are implemented as a hardware device that includes one or more of a processor (e.g., a CPU (central processing unit), GPU (graphics processing unit), NPU (neural processing unit), etc.), a display device, a memory, a storage device, an audible output device, an input device, an output device, and a communication interface”, thus, at least one processor for executing instructions is disclosed) comprising: […]
The rest of the claim language in Claim 20 recites substantially the same limitations as Claim 1, in the form of a machine-readable medium; therefore, it is rejected under the same rationale.
The reasons for obviousness have been noted in the rejection of Claim 1 above and are applicable herein.
Conclusion
11. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
12. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Devika S Maharaj whose telephone number is (571)272-0829. The examiner can normally be reached Monday - Thursday 8:30am - 5:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexey Shmatov can be reached at (571)270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DEVIKA S MAHARAJ/Examiner, Art Unit 2123
/ALEXEY SHMATOV/Supervisory Patent Examiner, Art Unit 2123