Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in response to the application and claims filed 06/07/2023. Claims 1-20 are pending and have been examined. Claims 1-20 are rejected.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 06/07/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The present application claims foreign priority based on Korean Patent Application No. KR10-2022-0179589, filed on 12/20/2022. The examiner notes that a certified copy (in Korean) of the above-noted application was retrieved on 07/16/2023. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1
Step 1: The claim recites a method; therefore, it is directed to the statutory category of process.
Step 2A prong 1: The claim recites:
generating a process result … provided input data, where the input data comprises feature values corresponding to a plurality of process features; (a person mentally or with a pen and paper looks at input data and uses their knowledge to come up with a result).
generating sample data by a first modifying of at least a portion of reference data based on dependency between two or more of the plurality of process features, where the reference data comprises a plurality of feature values for a reference process result; and (a person mentally or with a pen and paper considers dependency between different features and then mentally modifies the reference data to come up with different sample data).
identifying an attribution of the plurality of process features based on the generated process result and a sample process result generated … provided the generated sample data. (a person mentally or with a pen and paper compares the process result with the sample result to assign a level of importance (attribution).)
Step 2A prong 2 and Step 2B:
using a first machine learning model (Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)). – EN: Claim recites a generic machine learning model applied to an abstract idea.)
using the first machine learning model, or a second machine learning model related to the first machine learning model, (Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)). – EN: Claim recites a generic machine learning model applied to an abstract idea.)
The claim does not recite any additional elements that integrate the judicial exception into a practical application nor recite any additional elements that amount to significantly more than the judicial exception. Therefore, the claim is not patent eligible.
Claim 2
Step 1: A process, as above
Step 2A prong 1: See the rejection of Claim 1 above, which claim 2 depends on. Claim 2 further recites:
modifying a first feature value of a first process feature of the reference data; and (a person mentally or with a pen and paper modifies a first value).
in response to a determination that a second process feature is dependent on the first process feature, selecting a second feature value of the second process feature from a candidate feature value that is dependent on the modified first feature value, (a person mentally or with a pen and paper can determine whether a second feature depends on the first and, because the first value changed, select a new value for the second feature from a list of candidate values consistent with the updated first feature.)
wherein the sample data is based on the modified first feature value and the selected second feature value. (a person mentally or with a pen and paper can take the aforementioned steps and create “sample data” that contains both the newly modified first and corresponding updated second feature.)
Step 2A prong 2 and Step 2B: The claim does not recite any additional elements that integrate the judicial exception into a practical application nor recite any additional elements that amount to significantly more than the judicial exception. Therefore, the claim is not patent eligible.
Claim 3
Step 1: A process, as above
Step 2A prong 1: See the rejection of Claim 1 above, which claim 3 depends on. Claim 3 further recites:
modifying a first feature value of a first process feature, of the reference data, that is related to first equipment among pieces of the reference data; and (a person mentally or with a pen and paper modifies a first value that is related to first equipment).
modifying a second feature value of a second process feature, which is related to at least one of a chamber or a reticle that are dependent on the first equipment, to be another feature value indicating at least one of a corresponding chamber and a corresponding reticle allocated to another equipment indicated by the modified first feature value, (a person mentally or with a pen and paper recognizes that other equipment is physically dependent on the first equipment and therefore modifies the value of the second value so it correctly indicates that the other equipment belongs to the first equipment.)
wherein the sample data is based on the modified first feature value and the modified second feature value. (a person mentally or with a pen and paper can take the aforementioned steps and create “sample data” that contains both the newly modified first and corresponding updated second feature.)
Step 2A prong 2 and Step 2B: The claim does not recite any additional elements that integrate the judicial exception into a practical application nor recite any additional elements that amount to significantly more than the judicial exception. Therefore, the claim is not patent eligible.
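EN (illustration only): The dependency-based modification annotated above, i.e., changing an equipment feature value and then updating the dependent chamber feature value so it remains allocated to the new equipment, can be sketched as follows. The equipment/chamber map, field names, and values below are the examiner's hypothetical assumptions, not content from the application.

```python
# Hypothetical illustration only (not from the application): generating sample
# data by modifying an equipment feature value and then modifying the dependent
# chamber feature value so it indicates a chamber allocated to the new equipment.

allowed_chambers = {          # assumed dependency: chambers allocated per tool
    "etcher_1": ["ch_1A", "ch_1B"],
    "etcher_2": ["ch_2A"],
}

def make_sample(reference, new_equipment):
    sample = dict(reference)
    sample["equipment"] = new_equipment              # modify first feature value
    if sample["chamber"] not in allowed_chambers[new_equipment]:
        # chamber depends on equipment: pick a chamber valid for the new tool
        sample["chamber"] = allowed_chambers[new_equipment][0]
    return sample

reference = {"equipment": "etcher_1", "chamber": "ch_1B", "recipe": "R7"}
print(make_sample(reference, "etcher_2"))
```

The second feature is only modified when the dependency is violated, mirroring the claim language of selecting a value "dependent on the modified first feature value".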
Claim 4
Step 1: A process, as above
Step 2A prong 1: See the rejection of Claim 1 above, which claim 4 depends on. Claim 4 further recites:
modifying a first feature value of a first process feature, of the reference data, that is related to a first operation stage among pieces of the reference data; and (a person mentally or with a pen and paper modifies a first value that is related to first operation stage).
based on a result of a determination of whether a path, which includes the first operation stage, also includes a second operation stage being that the path also includes the second operation stage, (a person mentally or with a pen and paper looks at the path that includes this first operation stage and makes a determination whether a second operation stage is also required as part of that same path.)
generating the sample data corresponding to the path by modifying a second feature value of a second process feature that is related to the second operation stage. (if the person mentally or with a pen and paper determines that the path does include that second operation stage, they create “sample data” by modifying the value related to that second stage to ensure the whole sequence makes logical sense.)
Step 2A prong 2 and Step 2B: The claim does not recite any additional elements that integrate the judicial exception into a practical application nor recite any additional elements that amount to significantly more than the judicial exception. Therefore, the claim is not patent eligible.
Claim 5
Step 1: A process, as above
Step 2A prong 1: See the rejection of Claim 1 above, which claim 5 depends on. Claim 5 further recites:
wherein the generating of the sample data by the first modifying of the at least portion of the reference data comprises generating the sample data (a person mentally or with a pen and paper can make “sample data” by modifying reference data as explained previously in claims 2-4).
Step 2A prong 2 & Step 2B:
using a sample generation machine learning model. (Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)). – EN: Claim recites a generic machine learning model applied to an abstract idea.)
The claim does not recite any additional elements that integrate the judicial exception into a practical application nor recite any additional elements that amount to significantly more than the judicial exception. Therefore, the claim is not patent eligible.
Claim 6
Step 1: A process, as above
Step 2A prong 1: See the rejection of Claim 1 above, which claim 6 depends on. Claim 6 further recites:
wherein the generating of the sample data comprises generating a respective sample data for each of the plurality of process features (a person mentally or with a pen and paper makes a separate piece of sample data for every single variable involved in the process).
Step 2A prong 2 and Step 2B: The claim does not recite any additional elements that integrate the judicial exception into a practical application nor recite any additional elements that amount to significantly more than the judicial exception. Therefore, the claim is not patent eligible.
Claim 7
Step 1: A process, as above
Step 2A prong 1: See the rejection of Claim 1 above, which claim 7 depends on. Claim 7 further recites:
wherein the identifying of the attribution of the plurality of process features comprises: calculating confidence of respective sample data corresponding to each of the process features based on the generated sample process result; and (a person looks at the “sample process result” and mentally or with a pen and paper computes a confidence level for that specific piece of sample data.)
identifying the attribution based on the calculated confidence. (a person mentally or with a pen and paper uses the confidence level to determine how much credit/blame (attribution) the variable gets.)
Step 2A prong 2 and Step 2B: The claim does not recite any additional elements that integrate the judicial exception into a practical application nor recite any additional elements that amount to significantly more than the judicial exception. Therefore, the claim is not patent eligible.
Claim 8
Step 1: A process, as above
Step 2A prong 1: See the rejection of Claim 1 above, which claim 8 depends on. Claim 8 further recites:
wherein the identifying of the attribution of the plurality of process features comprises: when the sample process result is generated …, generating another sample process result …; (If a person sees that the original test result was generated using a “first machine learning model” then they feed that exact same sample data into a “second machine learning model” to get another result).
when the sample process result is generated … , generating the other sample process result …, different from the second machine learning model, related to the first machine learning model; (If a person sees that the original test result was already generated using a second model, then they feed it into another different model).
calculating confidence of respective sample data corresponding to each of the process features based on the generated sample process result and the generated other sample process result; and (a person looks at the original result and the newly generated other result to mentally or with a pen and paper compute a confidence level. If both models give similar results, confidence is high. If they are different, confidence is low.)
identifying the attribution based on the calculated confidence. (a person mentally or with a pen and paper uses the confidence level to determine how much credit/blame (attribution) the variable gets.)
Step 2A prong 2 and Step 2B:
using the first machine learning model, using the second machine learning model, using another second machine learning model (Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)). – EN: Claim recites generic machine learning models applied to an abstract idea.)
The claim does not recite any additional elements that integrate the judicial exception into a practical application nor recite any additional elements that amount to significantly more than the judicial exception. Therefore, the claim is not patent eligible.
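EN (illustration only): The confidence computation annotated above, in which agreement between two related models' outputs on the same sample yields high confidence and disagreement yields low confidence, can be sketched as follows. The model forms and the confidence formula are the examiner's hypothetical assumptions, not content from the application.

```python
# Hypothetical illustration only: a confidence score for sample data derived
# from the agreement between two related models' outputs on that sample, with
# the attribution then scaled by the confidence.

def model_a(x):               # stand-in "first machine learning model"
    return 2.0 * x[0] + x[1]

def model_b(x):               # stand-in related "second machine learning model"
    return 2.1 * x[0] + 0.9 * x[1]

def confidence(sample):
    # close outputs -> confidence near 1; divergent outputs -> confidence near 0
    return 1.0 / (1.0 + abs(model_a(sample) - model_b(sample)))

def attribution(base_score, sample):
    # credit/blame for the feature, scaled by how much the models agree
    return base_score * confidence(sample)

print(confidence([1.0, 1.0]), confidence([10.0, 0.0]))
```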
Claim 9
Step 1: A process, as above
Step 2A prong 1: See the rejection of Claim 1 above, which claim 9 depends on. Claim 9 further recites:
further comprising generating another sample process result … provided second modified sample data, where the second modified sample data is obtained by performing a second modifying of the sample data that is different from the first modifying of the sample data, (a person mentally or with a pen and paper takes the sample data and modifies it in a different way than it did the first time to create “second modified sample data” and manually feeds it into the first machine learning model to get “another sample process result”.)
wherein the identifying of the attribution of the plurality of process features comprises: calculating a confidence of the sample data based on the sample process result and the other sample process result; (a person looks at the original sample process result and the new test result to mentally or with a pen and paper compute a confidence level.)
and calculating an attribution of a process feature based on the calculated confidence. (a person mentally or with a pen and paper uses the confidence level to determine how much credit/blame (attribution) the variable gets.)
Step 2A prong 2 and Step 2B:
using the first machine learning model (Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)). – EN: Claim recites a generic machine learning model applied to an abstract idea.)
The claim does not recite any additional elements that integrate the judicial exception into a practical application nor recite any additional elements that amount to significantly more than the judicial exception. Therefore, the claim is not patent eligible.
Claim 10
Step 1: A process, as above
Step 2A prong 1: See the rejection of Claim 1 above, which claim 10 depends on. Claim 10 further recites:
wherein the generating of the process result comprises generating process results respectively … provided the input data, (a person can manually feed the input data into a “plurality of different machine learning models” to get process results.)
wherein the identifying of the attribution of the plurality of process features comprises: respectively calculating attributions for a process feature, of the plurality of process features, based on the generated process results; (a person mentally or with a pen and paper computes an attribution score for a specific variable based on the individual process results from the different models.)
and identifying an attribution of the process feature based on the calculated attributions. (a person looks at all those separate attribution scores and evaluates them together to identify one final, overall attribution).
Step 2A prong 2 and Step 2B:
using a plurality of different machine learning models (Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)). – EN: Claim recites generic machine learning models applied to an abstract idea.)
The claim does not recite any additional elements that integrate the judicial exception into a practical application nor recite any additional elements that amount to significantly more than the judicial exception. Therefore, the claim is not patent eligible.
Claim 11
Step 1: A process, as above
Step 2A prong 1: See the rejection of Claim 1 above, which claim 11 depends on. Claim 11 further recites:
for first input data and second input data, respectively of the input data, comprising a same feature value for a target process feature, identifying a representative attribution, as the identified attribution, of the target process feature based on a sum of a first attribution of the target process feature, which is calculated based on the first input data, and a second attribution of the target process feature, which is calculated based on the second input data. (a person looks at different scenarios and identifies two pieces of input data that share the same value for a specific target process feature, then mentally or with a pen and paper computes a "representative attribution" by summing the two attribution scores.)
Step 2A prong 2 and Step 2B: The claim does not recite any additional elements that integrate the judicial exception into a practical application nor recite any additional elements that amount to significantly more than the judicial exception. Therefore, the claim is not patent eligible.
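EN (illustration only): The summation annotated above, forming a representative attribution from the per-input attributions of rows sharing the same target feature value, can be sketched as follows. The rows, field names, and values are the examiner's hypothetical assumptions, not content from the application.

```python
# Hypothetical illustration only: two pieces of input data share the same value
# for a target process feature, so a representative attribution for that value
# is formed from the sum of the two per-input attributions.

attribution_rows = [
    {"target_value": "tool_A", "attribution": 0.30},   # from first input data
    {"target_value": "tool_A", "attribution": 0.45},   # from second input data
    {"target_value": "tool_B", "attribution": 0.10},
]

def representative_attribution(rows, value):
    return sum(r["attribution"] for r in rows if r["target_value"] == value)

print(representative_attribution(attribution_rows, "tool_A"))
```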
Claim 12
Step 1: A process, as above
Step 2A prong 1: See the rejection of Claim 1 above, which claim 12 depends on. Claim 12 further recites:
further comprising sorting at least one of the plurality of process features based on the identified attribution. (a person mentally or with a pen and paper sorts the process features based on identified attribution).
Step 2A prong 2 and Step 2B: The claim does not recite any additional elements that integrate the judicial exception into a practical application nor recite any additional elements that amount to significantly more than the judicial exception. Therefore, the claim is not patent eligible.
Claim 13
Step 1: A process, as above
Step 2A prong 1: See the rejection of Claim 1 above, which claim 13 depends on. Claim 13 further recites:
further comprising adjusting at least one of the plurality of process features based on the identified attribution. (a person mentally or with a pen and paper adjusts process features based on the identified attribution).
Step 2A prong 2 and Step 2B: The claim does not recite any additional elements that integrate the judicial exception into a practical application nor recite any additional elements that amount to significantly more than the judicial exception. Therefore, the claim is not patent eligible.
Claim 14
Step 1: The claim recites a non-transitory computer-readable storage medium; therefore, it is directed to the statutory category of manufacture.
Step 2A prong 1: The claim recites the method of claim 1; therefore, see the rejection of claim 1, where the abstract ideas are listed.
Step 2A prong 2 and Step 2B: The claim recites the method of claim 1; therefore, see the rejection of claim 1, where the additional elements are listed. Claim 14 further recites:
non-transitory computer-readable storage medium storing instructions that, when executed by a processor, cause the processor to perform the method of (Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)). – EN: Claim recites a generic computer-readable medium and processor used to implement the abstract idea.)
The claim does not recite any additional elements that integrate the judicial exception into a practical application nor recite any additional elements that amount to significantly more than the judicial exception. Therefore, the claim is not patent eligible.
Claim 15
Step 1: The claim recites an electronic device; therefore, it is directed to the statutory category of machine.
Step 2A prong 1: The claim recites:
generate a process result … provided input data, where the input data comprises feature values corresponding to a plurality of process features; (a person mentally or with a pen and paper looks at input data and uses their knowledge to come up with a result).
generate sample data by performance of a first modification of at least a portion of reference data based on dependency between two or more of the plurality of process features, where the reference data comprises feature values that are considered when a reference process result is generated; and (a person mentally or with a pen and paper considers dependency between different features and then mentally modifies the reference data to come up with different sample data).
identify an attribution of the plurality of process features based on the generated process result and a sample process result generated …, provided the generated sample data. (a person mentally or with a pen and paper compares the process result with the sample result to assign a level of importance (attribution).
Step 2A prong 2 and Step 2B:
An electronic device, the electronic device comprising: a processor configured to: (Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)). – EN: Claim recites a generic processor used to implement the abstract idea.)
using a first machine learning model (Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)). – EN: Claim recites a generic machine learning model applied to an abstract idea.)
using the first machine learning model, or a second machine learning model related to the first machine learning model (Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)). – EN: Claim recites a generic machine learning model applied to an abstract idea.)
The claim does not recite any additional elements that integrate the judicial exception into a practical application nor recite any additional elements that amount to significantly more than the judicial exception. Therefore, the claim is not patent eligible.
Claim 16 is a machine claim that recites substantially the same limitations as claim 2. Therefore, claim 16 is rejected under the same rationale as claim 2.
Claim 17 is a machine claim that recites substantially the same limitations as claim 3. Therefore, claim 17 is rejected under the same rationale as claim 3.
Claim 18 is a machine claim that recites substantially the same limitations as claim 4. Therefore, claim 18 is rejected under the same rationale as claim 4.
Claim 19 is a machine claim that recites substantially the same limitations as claim 7. Therefore, claim 19 is rejected under the same rationale as claim 7.
Claim 20 is a machine claim that recites substantially the same limitations as claim 13. Therefore, claim 20 is rejected under the same rationale as claim 13.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 2, 6, and 12 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by non-patent literature Aas et al. (“EXPLAINING INDIVIDUAL PREDICTIONS WHEN FEATURES ARE DEPENDENT: MORE ACCURATE APPROXIMATIONS TO SHAPLEY VALUES”, hereinafter “Aas”).
Claim 1
Aas teaches:
A processor-implemented method, the method comprising: (Page 2, “Our methodology has been implemented in an R-package currently available at…”) generating a process result using a first machine learning model provided input data, where the input data comprises feature values corresponding to a plurality of process features; (Page 4, “Consider a classical machine learning scenario where a training set {yi,xi}i=1,…,ntrain of size ntrain has been used to train a predictive model f(x) attempting to resemble a response value y as closely as possible. Assume now that we want to explain the prediction from the model f(x*), for a specific feature vector x = x*.” – Examiner’s Note (EN): this denotes using a predictive machine learning model f(x) to output a specific prediction f(x*) for a specific feature vector x = x*. The model f(x) corresponds to the “first machine learning model”, the feature vector x* corresponds to the “input data” with “plurality of process features”, and the prediction f(x*) corresponds to the “process result”.) 
generating sample data by a first modifying of at least a portion of reference data based on dependency between two or more of the plurality of process features, where the reference data comprises a plurality of feature values for a reference process result; and (Page 6, “We propose four approaches for estimating p(xS̅|xS = x*S); (i) assuming a Gaussian distribution for p(x), (ii) assuming a Gaussian copula distribution for p(x), (iii) approximating p(xS̅|xS = x*S) by an empirical (conditional) distribution and (iv) a combination of the empirical approach and either the Gaussian or the Gaussian copula approach.” Page 8, “Note that we could have used (9), with the xk¯S sampled (with replacement) from the training data with weights wS(xi),i = 1,...,ntrain.” Page 7, “Using the samples xk¯ S , k = 1,...,K from the conditional distribution, the integral in (8) is finally approximated by (9).” – EN: This denotes training data (xi) correspond to the “reference data” The act of estimating the conditional distribution p(x¯S|xS = x* S) and applying weights wS(xi) to the training data based strictly on the values of the known subset of dependent features corresponds to “a first modifying of at least a portion of reference data based on dependency between two or more of the plurality of process features”. The drawn samples (x^kS) generated from this conditional distribution correspond to the “generated sample data”.) identifying an attribution of the plurality of process features based on the generated process result and a sample process result generated using the first machine learning model, or a second machine learning model related to the first machine learning model, provided the generated sample data. 
(Page 4, “That is, the Shapley values explain the difference between the prediction y∗ = f(x*) and the global average prediction.” Page 4, “To be able to compute the Shapley values in the prediction explanation setting, we need to define the contribution function v(S)… we follow Lundberg & Lee (2017a) and use the expected output of the predictive model, conditional on the feature values xS = x* S of this subset: (2) v(S) = E[f(x)|xS = x* S]”, Page 8, “Approximate the integral in (8) with a weighted version of (9):”
v(S) ≈ [Σk=1,…,K wS(xk) f(xk¯S, x∗S)] / [Σk=1,…,K wS(xk)]
EN: The calculated Shapley values (φj) correspond to the “attribution of the plurality of process features”. The output of the predictive machine learning model evaluated on the specific, conditionally generated samples, denoted f(xk¯S, x∗S), corresponds to the “sample process result generated using the first machine learning model… provided the generated sample data”. The overall calculation of the Shapley values, which mathematically compares the actual specific prediction y∗ = f(x∗) against the evaluated outputs of the model on the sample subsets v(S), corresponds to “identifying an attribution of the plurality of process features based on the generated process result and a sample process result”)
Examiner’s Note (EN): Under the broadest reasonable interpretation (BRI), the terms “process feature” and “process result” and its counterparts are not limited to any particular industrial or manufacturing context, as claim 1 contains no such narrowing language nor does the specification explicitly define these terms. Therefore, the independent claim language is broad enough to encompass any process in which input features are provided to a machine learning model to generate an output. Accordingly, under BRI, the general-purpose machine learning features and predictions of Aas read on the claimed “process features” and “process result” and its counterparts.
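EN (illustration only): The Shapley-value attribution scheme mapped above can be sketched as follows. This is the examiner's own hypothetical Python sketch, not code from Aas or the application; the toy model, reference rows, and explained instance are assumptions, and the reference values are drawn marginally rather than conditionally (the conditional sampling is precisely Aas's refinement).

```python
import itertools
import math

# Hypothetical illustration only (not code from Aas or the application).
# A toy "first machine learning model" is attributed via exact Shapley values,
# with the contribution function v(S) estimated by averaging the model over
# reference rows whose features in S are fixed to the instance x_star.

def f(x):
    # toy predictive model standing in for the "first machine learning model"
    return 2.0 * x[0] + 1.0 * x[1] - 3.0 * x[2]

reference_rows = [          # toy "reference data"
    [0.0, 0.0, 0.0],
    [1.0, 1.0, 1.0],
]
x_star = [1.0, 2.0, 3.0]    # "input data" whose prediction is explained
M = len(x_star)

def v(S):
    # v(S) = E[f(x) | x_S = x*_S], estimated over the reference rows; each
    # constructed row is "sample data", each f(...) a "sample process result"
    total = 0.0
    for row in reference_rows:
        sample = [x_star[j] if j in S else row[j] for j in range(M)]
        total += f(sample)
    return total / len(reference_rows)

def shapley(j):
    # phi_j = sum over S not containing j of |S|!(M-|S|-1)!/M! * (v(S+{j}) - v(S))
    others = [k for k in range(M) if k != j]
    phi = 0.0
    for r in range(len(others) + 1):
        for S in itertools.combinations(others, r):
            w = math.factorial(r) * math.factorial(M - r - 1) / math.factorial(M)
            phi += w * (v(set(S) | {j}) - v(set(S)))
    return phi

phis = [shapley(j) for j in range(M)]
print(phis)                              # attributions of the three features
print(sum(phis), f(x_star) - v(set()))   # efficiency: sums to f(x*) - E[f]
```

For a linear model the resulting φj equal βj times (x∗j minus the mean of feature j over the reference rows); the sketch assumes feature independence when drawing reference values, which is the assumption Aas's conditional approaches remove.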
Claim 2
Aas teaches:
wherein the first modifying comprises: modifying a first feature value of a first process feature of the reference data; and (Page 4, “To be able to compute the Shapley values in the prediction explanation setting, we need to define the contribution function v(S) for a certain subset S. This function should resemble the value of f(x∗) when we only know the value of the subset S of these features. To quantify this, we follow Lundberg & Lee (2017a) and use the expected output of the predictive model, conditional on the feature values xS = x∗ S of this subset: (2) v(S) = E[f(x)|xS = x∗ S]” – EN: this denotes the act of setting x_S = x*_S which is taking a portion of reference data (training data) and fixing/modifying the feature values of certain features (Subset S) to the values x*_S from the instance being explained. This corresponds to “modifying a first feature value of a first process feature of the reference data”.) in response to a determination that a second process feature is dependent on the first process feature, selecting a second feature value of the second process feature from a candidate feature value that is dependent on the modified first feature value, (Page 6, “If the features in a given model are highly dependent, the Kernel SHAP method may give a completely wrong answer… This can be done by estimating/approximating p(x¯S|xS = x∗ S) directly and generate samples from this distribution, instead of generating them independently from xS as in Section 2.3.2.” Page 6-7, “If we assume that the feature vector x stems from a multivariate Gaussian distribution with some mean vector µ and covariance matrix Σ, the conditional distribution p(x¯S|xS = x∗ S) is also multivariate Gaussian… Hence, instead of sampling from the marginal distribution of x¯S, we can sample from the Gaussian distribution with expectation vector and covariance matrix given by (10) and (11)” Page 7, “The method, which is motivated by the idea that samples (x¯S,xS) with xS close to x∗ S 
are informative about the conditional distribution p(x¯S|x∗ S), consists of the following steps: (1) Compute the distance between the instance x∗ to be explained and all training instances…” – EN: the entire reference denotes that features may be dependent and this dependency must be accounted for. The four proposed approaches all estimate the conditional distribution p(x¯S|xS = x∗ S), which by definition yields values for the dependent features x¯S that are conditioned on the values of the known features (x∗ S). The sampled values for x¯S from these conditional distributions correspond to “selecting a second feature value of the second process feature from a candidate feature value that is dependent on the modified first feature value”.) wherein the sample data is based on the modified first feature value and the selected second feature value (Page 6, equation (9):
v(S) ≈ (1/K) Σk=1,…,K f(xk¯S, x∗S)
EN: this denotes that each generated sample comprises both the fixed feature values x∗S (the modified first feature value) and the conditionally drawn feature values xk¯S (the selected second feature value), where (xk¯S, x∗S) corresponds to “sample data is based on the modified first feature value and the selected second feature value”.)
Claim 6
Aas teaches:
wherein the generating of the sample data comprises generating a respective sample data for each of the plurality of process features. (Page 3, equation (1)
φj = Σ S⊆M\{j} [|S|!(M − |S| − 1)!/M!] (v(S ∪ {j}) − v(S))
EN: this denotes that computing the Shapley value φj for each individual feature j = 1, …, M requires generating samples to estimate the contribution function v(S) for the subsets associated with that feature; this means that sample data is generated on a per-feature basis.)
Claim 12
Aas teaches:
further comprising sorting at least one of the plurality of process features based on the identified attribution. (Page 18, “One way to then visually present the explanation of a particular prediction could be to rank the absolute Shapley values and present them and their corresponding features in descending order, for example for the ten most important features.”)
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over non-patent literature Aas et al. (“EXPLAINING INDIVIDUAL PREDICTIONS WHEN FEATURES ARE DEPENDENT: MORE ACCURATE APPROXIMATIONS TO SHAPLEY VALUES”, hereinafter “Aas”) in view of non-patent literature Senoner et al. (“Using Explainable Artificial Intelligence to Improve Process Quality: Evidence from Semiconductor Manufacturing”, hereinafter “Senoner”) further in view of US Patent US 11693326 B1 Pan et al., hereinafter “Pan”.
Claim 3
Aas teaches:
wherein the first modifying comprises: modifying a first feature value of a first process feature, of the reference data, (…) (Page 4, equation 2 and quote “conditional on the feature values xS = x∗ S of this subset:”
[Equation (2) of Aas, reproduced as an image in the original]
EN: as mapped to claim 2, this denotes modifying feature values of the reference (training) data by conditioning, setting xS = x∗ S) and modifying a second feature value of a second process feature, which is (…) dependent on the first (…), to be another feature value [dependent on] (…) the modified first feature value, (Page 6, “If the features in a given model are highly dependent, the Kernel SHAP method may give a completely wrong answer… This can be done by estimating/approximating p(x¯S|xS = x∗ S) directly and generate samples from this distribution, instead of generating them independently from xS as in Section 2.3.2.” – EN: also similar to how it’s mapped to claim 2, this denotes when features are dependent, the values for non-conditioned features are sampled from the conditional distribution, thereby modifying the second feature’s value to one that is consistent with (dependent on) the modified first feature value.)
wherein the sample data is based on the modified first feature value and the modified second feature value. (page 6, equation (9)
[Equation (9) of Aas, reproduced as an image in the original]
EN: also similar to claim 2, this denotes that each generated sample comprises both the fixed feature values x*_S (modified first feature value) and the conditionally drawn feature values x̄^k_S̄ (selected second feature value), where (x̄^k_S̄, x*_S) corresponds to “sample data is based on the modified first feature value and the selected second feature value”.
Aas does not explicitly disclose:
“that is related to first equipment among pieces of the reference data;”
“which is related to at least one of a chamber or a reticle that are dependent on the first equipment,”
“to be another feature value indicating at least one of a corresponding chamber and a corresponding reticle allocated to another equipment indicated by the modified first feature value,”
However, Senoner teaches:
“that is related to first equipment among pieces of the reference data;” (Page 5710, “The transistor chip production at Hitachi ABB consists of 200 processes that are carried out in a low-vibration and temperature-constant clean room.” Page 5711, “Process parameters describe machine-related properties (e.g., the average pressure measured in a machine)” – EN: this denotes production parameters that are related to specific production machines (equipment).) “which is related to at least one of a (Page 5713, “The process engineers at Hitachi ABB suggested that the measured production parameters can depend on the production equipment used” Additional context on Page 5711, “Additionally, there may be critical combinations of production parameters that can trigger undesired interaction effects. For example, a machine from a given process may only induce quality issues if it is used in combination with another machine from a different process.”) “to be another feature value indicating at least one of a corresponding chamber and a corresponding reticle allocated to another equipment indicated by the modified first feature value,” (Page 5713, “possible improvement actions involve a change in the material routing. This is achieved by altering the corresponding prioritization of machines so that one machine is preferred over others from the same process.” Page 5717, “In implantation process 25, the worst-performing machines, QI613 and QI614, are prone to particles that induce chip failures during processing. In contrast, machine QI615 does not engender the same amount of contamination.” – EN: this denotes that when the machine (equipment) selection is modified (as when the first feature value related to equipment is changed from one machine to another), the dependent production parameters inherently change to values corresponding to the newly selected machine.)
Aas in view of Senoner does not explicitly disclose:
“chamber” and “reticle”
However, Pan teaches: “chamber” and “reticle” (Col 8, lines 46-48, “The EUV system 400 depicted in FIG. 4 further includes a process chamber 442 wherein the reticle assembly 100, 200, 300 is positioned for processing of the wafer 460.”)
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the feature-dependent machine learning prediction model of Aas with the equipment-dependent production parameter modification and machine routing optimization of Senoner, further with the semiconductor system having process chambers and reticles of Pan. The motivation for doing so would be to achieve more accurate quality prediction and improvement in semiconductor manufacturing by accounting for the physical dependencies between equipment and its corresponding components. See Page 5711 of Senoner, “there may be critical combinations of production parameters that can trigger undesired interaction effects. For example, a machine from a given process may only induce quality issues if it is used in combination with another machine from a different process.” Also Page 5713 of Senoner, “the process engineers at Hitachi ABB suggested that the measured production parameters can depend on the production equipment used, and therefore, possible improvement actions involve a change in the material routing.” As for Pan, it provides the motivation for applying such techniques specifically to lithography equipment with chambers and reticles, explaining that “Each exposure of the reticle during EUV operations causes fluctuations in reticle temperature, which can cause defects in the transferred pattern” (col. 1, lines 28-31), and that the system includes “a process chamber 442 wherein the reticle assembly 100, 200, 300 is positioned for processing of the wafer 460” (col. 8, lines 46-48).
Claims 4 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over non-patent literature Aas et al. (“EXPLAINING INDIVIDUAL PREDICTIONS WHEN FEATURES ARE DEPENDENT: MORE ACCURATE APPROXIMATIONS TO SHAPLEY VALUES”, hereinafter “Aas”) in view of non-patent literature Senoner et al. (“Using Explainable Artificial Intelligence to Improve Process Quality: Evidence from Semiconductor Manufacturing”, hereinafter “Senoner”).
Claim 4
Aas teaches:
wherein the first modifying comprises: modifying a first feature value of a first process feature, of the reference data, (Page 4, equation 2 and quote “conditional on the feature values xS = x∗ S of this subset:”
[Equation (2) of Aas, reproduced as an image in the original]
EN: as mapped to claim 2, this denotes modifying feature values of the reference (training) data by conditioning, setting x_S = x*_S) (…) and based on a result of a determination of [dependency] (…), generating the sample data (…) by modifying a second feature value of a second process feature (…)
(Page 6, “If the features in a given model are highly dependent, the Kernel SHAP method may give a completely wrong answer… This can be done by estimating/approximating p(x¯S|xS = x∗ S) directly and generate samples from this distribution, instead of generating them independently from xS as in Section 2.3.2.” – EN: also similar to how it’s mapped to claim 2, this denotes when features are dependent, the values for non-conditioned features are sampled from the conditional distribution, thereby modifying the second feature’s value to one that is consistent with (dependent on) the modified first feature value.)
Aas does not explicitly disclose:
“that is related to a first operation stage among pieces of the reference data;”
“a determination of whether a path, which includes the first operation stage, also includes a second operation stage being that the path also includes the second operation stage”
“generating the sample data corresponding to the path”
“a second process feature that is related to the second operation stage.”
However Senoner teaches:
“that is related to a first operation stage among pieces of the reference data;” (Page 5706, “We consider a manufacturing system with sequential processes (see Figure 1). Each process is specified by production parameters that potentially influence the performance of the manufacturing system… The production parameters are captured at different processes k = 1, …, K. Here, let the process specification Pk ⊆ {1, …, N} define which specific production parameters belong to a certain process k.” – EN: this denotes a manufacturing system with K = 200 sequential processes, where each process k constitutes a distinct stage in the manufacturing flow. Production parameters are captured at each process and are associated with one process via the process specification Pk.) “a determination of whether a path, which includes the first operation stage, also includes a second operation stage being that the path also includes the second operation stage” (Page 5713, “possible improvement actions involve a change in the material routing. This is achieved by altering the corresponding prioritization of machines so that one machine is preferred over others from the same process.” Page 5714, “The decision model now selects improvement actions in the form of machine prioritizations… As improvement actions, the decision model suggests that selecting machine QI615 in process 25 and machine QP212 in process 166 improves the normalized yield.” Also see Figure 8, which shows the suggested material routing through the two prioritized processes. – EN: this denotes the model dynamically determining an optimal material routing path that includes both a first operation stage (machine QI615 at process 25) and a second operation stage (machine QP212 at process 166). By linking these specific machine prioritizations into a single route, the system executes “a determination of whether a path” includes both stages.)
“generating the sample data corresponding to the path” (page 5711, “Hitachi ABB provided us with historical data on M = 1,197 production batches… The fabrication conditions of each production batch are described by N = 3,614 production parameters from K = 200 different processes.” – EN: this denotes a dataset of batches with parameters spanning every process stage in the manufacturing route, which maps to “sample data corresponding to the path”.)
“a second process feature that is related to the second operation stage.” (page 5706, “Each production parameter is associated with exactly one process; that is, Pk′ ∩ Pk′′ = ∅ for k′ ≠ k′′ and ∪k Pk = {1, …, N}.” Page 5714, table 3
[Table 3 of Senoner, reproduced as an image in the original]
EN: this denotes that every production parameter belongs to exactly one process. Process 166 (etching) is the second operation stage that the decision model prioritized, as shown in table 3. The production parameters x2197 through x2369 are second process features related to this second operation stage. These are distinct from the first process features (x278 to x340) at the first operation stage (process 25).)
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the feature-dependent machine learning prediction model of Aas with the sequential multi-stage manufacturing path determination and stage-dependent production parameters of Senoner. The motivation for doing so would be to improve the accuracy of sample data generation for path- and stage-dependent features in manufacturing. See page 5711 of Senoner, “Additionally, there may be critical combinations of production parameters that can trigger undesired interaction effects. For example, a machine from a given process may only induce quality issues if it is used in combination with another machine from a different process.”
Claim 13
Senoner teaches:
further comprising adjusting at least one of the plurality of process features based on the identified attribution. (Page 5707, “For some production parameters, it may be possible to manipulate the absolute parameter values directly. For example, if the temperature in a certain process is associated with an influence on process quality, an improvement action can be to adjust the temperature levels.” – EN: the entire purpose of this reference is to first compute SHAP-based feature attributions for production parameters across all processes, and then use those attributions to adjust the manufacturing process.)
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the feature-dependent machine learning prediction model of Aas with the attribution-based process feature adjustment of Senoner. The motivation for doing so would be to enable direct modification of the production parameters identified as significant contributors to quality variation, thereby driving process improvements. See page 5707 of Senoner, “For some production parameters, it may be possible to manipulate the absolute parameter values directly. For example, if the temperature in a certain process is associated with an influence on process quality, an improvement action can be to adjust the temperature levels.”
Claims 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over non-patent literature Aas et al. (“EXPLAINING INDIVIDUAL PREDICTIONS WHEN FEATURES ARE DEPENDENT: MORE ACCURATE APPROXIMATIONS TO SHAPLEY VALUES”, hereinafter “Aas”) in view of non-patent literature Covert et al. (“Improving KernelSHAP: Practical Shapley Value Estimation via Linear Regression”, hereinafter “Covert”).
Claim 7
Covert teaches:
wherein the identifying of the attribution of the plurality of process features comprises: calculating confidence of respective sample data corresponding to each of the process features based on the generated sample process result; and (Page 6, Section 4.3, “we suggest estimating the variance by selecting an intermediate value m such that m << n and calculating multiple independent estimates” Page 7, Section 5, “Figure 3: Shapley value-based explanations with 95% uncertainty estimates.” Page 8, Section 6, “Both explanations used a convergence threshold of t = 0.01 and display 95% confidence intervals, which are features not previously offered by KernelSHAP.” – EN: this denotes computing per-feature variance/standard deviation (confidence) based on model evaluations on sampled coalitions (sample process results).) identifying the attribution based on the calculated confidence. (Page 6, section 4.3, “For detection, we propose stopping at the current value n when the largest standard deviation is a sufficiently small portion t (e.g., t = 0.01) of the gap between the largest and smallest Shapley value estimates.” – EN: section 4.3 denotes outputting the final Shapley value attributions once the calculated confidence meets a specified convergence threshold.)
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the feature-dependent machine learning prediction model of Aas with the convergence detection and uncertainty estimation techniques of Covert. The motivation for doing so would be to provide better insight into the accuracy of the feature attributions. See page 1, section 1 of Covert, “KernelSHAP does not provide uncertainty estimates. Furthermore, it provides no guidance on the number of samples required because its convergence properties are not well understood.” Covert further demonstrates that the improved estimators “display 95% confidence intervals, which are features not previously offered by KernelSHAP.”
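EN (illustrative sketch only, not evidence of record): the convergence-detection rule mapped above can be sketched as follows, using synthetic, hypothetical batch estimates in place of real Shapley estimates:

```python
import random
import statistics

def converged(batch_estimates, t=0.01):
    """Stop when the largest per-feature standard deviation across independent
    batch estimates is at most fraction t of the gap between the largest and
    smallest mean Shapley estimates (a sketch of the stopping rule in Covert)."""
    means = [statistics.mean(col) for col in zip(*batch_estimates)]
    stds = [statistics.stdev(col) for col in zip(*batch_estimates)]
    gap = max(means) - min(means)
    return max(stds) <= t * gap

random.seed(0)
# Hypothetical independent batch estimates of three per-feature Shapley values:
# one set with tiny estimator noise, one with large noise.
tight = [[1.0 + random.gauss(0, 0.001), -0.5, 0.1] for _ in range(10)]
loose = [[1.0 + random.gauss(0, 0.5), -0.5, 0.1] for _ in range(10)]
print(converged(tight), converged(loose))  # True False
```

Once the check returns True, the current Shapley estimates are output as the final attributions, which is the “identifying the attribution based on the calculated confidence” step.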
Claim 8
Covert teaches:
wherein the identifying of the attribution of the plurality of process features comprises: when the sample process result is generated using the first machine learning model, generating another sample process result using the second machine learning model; (Page 4, Section 4.1, “we arrive at an alternative to the original KernelSHAP estimator, which we refer to as unbiased KernelSHAP” Page 6, “calculating multiple independent estimates … while accumulating samples” – EN: this denotes using alternative estimator models (like original vs. unbiased KernelSHAP, or multiple independent batch estimators) to generate independent sample results for the same predictive model.) when the sample process result is generated using the second machine learning model, generating the other sample process result using another second machine learning model, different from the second machine learning model, related to the first machine learning model; (Page 5, Section 4.2, “Substituting this into unbiased KernelSHAP … yields a new estimator … that preserves the properties of being both consistent and unbiased” – EN: this denotes utilizing a third distinct estimator variant (like the paired sampling estimator) to generate additional distinct sample results.) calculating confidence of respective sample data corresponding to each of the process features based on the generated sample process result and the generated other sample process result; and (Page 6, section 4.3,
[Variance-estimation excerpt from Section 4.3 of Covert, reproduced as an image in the original]
EN: this denotes empirically calculating the variance (confidence) by comparing the outputs of these multiple independent estimators.) identifying the attribution based on the calculated confidence. (Page 6, section 4.3, “For detection, we propose stopping at the current value n when the largest standard deviation is a sufficiently small portion t (e.g., t = 0.01) of the gap between the largest and smallest Shapley value estimates.” – EN: section 4.3 denotes outputting the final Shapley value attributions once the calculated confidence meets a specified convergence threshold.)
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the feature-dependent machine learning prediction model of Aas with the multiple-independent-estimator variance approximation of Covert. The motivation for doing so would be to obtain a reliable estimate of the covariance of the Shapley value estimator. See Page 6, section 4.3, where Covert explains that the variance of the original KernelSHAP estimator is difficult to characterize, so Covert proposes “calculating multiple independent estimates” at an intermediate sample size to approximate the covariance.
Claim 9
Covert teaches:
generating another sample process result using the first machine learning model provided second modified sample data, where the second modified sample data is obtained by performing a second modifying of the sample data that is different from the first modifying of the sample data, (Page 5, section 4.2, “When sampling n subsets according to the distribution zi ~ p(Z), we suggest a paired sampling strategy where each sample zi is paired with its complement 1 − zi… consider using the following modified estimator that combines zi with 1 − zi:” – EN: this denotes evaluating the model using both the originally modified sample coalition and its complement (a second, distinct modification) to generate two sets of results.) wherein the identifying of the attribution of the plurality of process features comprises: (Page 6, section 4.2, “Theorem 1 shows that [the paired estimator] is a more precise estimator”
[Excerpt from Section 4.2 of Covert (Theorem 1 and confidence ellipsoids), reproduced as an image in the original]
– EN: this denotes computing tighter confidence bounds (variance ellipsoids) by combining the results from both the original and complement samples.) calculating a confidence of the sample data based on the sample process result and the other sample process result; and (Page 6, Section 4.2, “Figure 2 illustrates the result of Theorem 1: empirical 95% confidence ellipsoids for two SHAP values.” – EN: this denotes outputting the converged Shapley value as the final feature attribution once the paired-sampling confidence is established.)
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the feature-dependent machine learning prediction model of Aas with the paired sampling estimation technique of Covert. The motivation for doing so would be to obtain more precise confidence estimates for feature attributions by evaluating the model on both an original sample modification and a second, distinct modification of that same sample. Covert explains that by combining a sample with its complement (a second, different modification), the resulting estimator produces tighter confidence bounds, noting that the paired estimator “is a more precise estimator” and illustrating that the approach yields smaller “95% confidence ellipsoids” when comparing results from both modifications (Page 6, Section 4.2 and Figure 2).
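EN (illustrative sketch only, not evidence of record): the variance-reduction effect of paired (complement) sampling mapped above can be sketched with a hypothetical linear model evaluation f over coalition masks; pairing each mask with its complement yields a markedly lower-variance estimate:

```python
import random
import statistics

def f(z):
    # Hypothetical model evaluation on a coalition mask z (1 = feature present).
    return 3 * sum(z) + 1

def estimates(n, paired, d=8):
    """Monte Carlo values of f over random masks; with paired=True each mask z
    is averaged with its complement 1 - z, as in Covert's paired sampling."""
    vals = []
    for _ in range(n):
        z = [random.randint(0, 1) for _ in range(d)]
        if paired:
            comp = [1 - b for b in z]
            vals.append((f(z) + f(comp)) / 2)
        else:
            vals.append(f(z))
    return vals

random.seed(1)
plain_means = [statistics.mean(estimates(50, paired=False)) for _ in range(100)]
paired_means = [statistics.mean(estimates(50, paired=True)) for _ in range(100)]
print(statistics.stdev(plain_means), statistics.stdev(paired_means))
```

For this linear f the paired average is constant, so its variance collapses to zero; for general models the pairing still tends to cancel symmetric noise, which is consistent with the tighter confidence ellipsoids Covert reports.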
Claims 10 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over non-patent literature Aas et al. (“EXPLAINING INDIVIDUAL PREDICTIONS WHEN FEATURES ARE DEPENDENT: MORE ACCURATE APPROXIMATIONS TO SHAPLEY VALUES”, hereinafter “Aas”) in view of non-patent literature Lundberg et al. (“Explainable AI for Trees: From Local Explanations to Global Understanding”, hereinafter “Lundberg”).
Claim 10
Lundberg teaches:
wherein the generating of the process result comprises generating process results respectively using a plurality of different machine learning models provided the input data, (Page 18, “When explaining an ensemble model made up of a sum of many decision trees, the Saabas values for the ensemble model are defined as the sum of the Saabas values for each tree.” Page 3, “TreeExplainer enables the exact computation of optimal local explanations for tree-based models” applied to (Page 1) “Random forests, gradient boosted trees, and other tree-based models… used in … manufacturing … and many other areas to make predictions based on sets of input features”) wherein the identifying of the attribution of the plurality of process features comprises: respectively calculating attributions for a process feature, of the plurality of process features, based on the generated process results; and (Page 22, “Algorithm 2 reduces the computational complexity of exact SHAP value computation from exponential to low order polynomial for trees and sums of trees (since the SHAP values of a sum of two functions is the sum of the original functions’ SHAP values).” Page 23, Algorithm 2, TreeSHAP. – EN: this denotes algorithm 2, which computes per-feature attribution values (φ) for each individual tree.) identifying an attribution of the process feature based on the calculated attributions. (Page 18, “When explaining an ensemble model made up of a sum of many decision trees, the Saabas values for the ensemble model are defined as the sum of the Saabas values for each tree.”)
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the feature dependent machine learning prediction model of Aas with the ensemble-based attribution computation of Lundberg. The motivation for doing so would be to improve the reliability and accuracy of feature attribution by using multiple models. See page 7 of Lundberg, “Note that, as expected, Saabas becomes a better approximation to the Shapley values (and so a better attribution method) as the number of trees increases (Methods 9)”.
Claim 11
Lundberg teaches:
for first input data and second input data, respectively of the input data, comprising a same feature value for a target process feature, (Page 31, “Many different individuals have a recorded blood pressure of 180 mmHg in the mortality dataset, but the impact that measurement has on their log-hazard ratio varies from 0.2 to 0.6 because of other factors that differ among these individuals.” – EN: this denotes identifying multiple distinct input data samples that share the same feature value for a target feature.) identifying a representative attribution, as the identified attribution, of the target process feature based on a sum of a first attribution of the target process feature, which is calculated based on the first input data, and a second attribution of the target process feature, which is calculated based on the second input data. (Page 30, “By averaging the SHAP values across a dataset, we can get a single global measure of feature importance that retains the theoretical guarantees of SHAP values.” Page 8, “a standard bar-chart based on the average magnitude of the SHAP values” is produced by combining “local explanations from TreeExplainer across an entire dataset” – EN: this denotes that individual SHAP attributions are computed per-sample for each feature and then averaged across the dataset to produce a single representative global attribution value for each feature, where the averaging operation inherently involves summing a first attribution calculated from first input data and a second attribution calculated from second input data.)
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the feature dependent machine learning prediction model of Aas with the attribution aggregation of Lundberg. The motivation for doing so would be to obtain a better measure of a feature’s importance that accounts for variability across samples sharing the same feature value. See page 8 of Lundberg, “Combining local explanations from TreeExplainer across an entire dataset enhances traditional global representations of feature importance by: 1) avoiding the inconsistency problems of current methods (Supplementary Figure 2), 2) increasing the power to detect true feature dependencies in a dataset (Supplementary Figure 7), and 3) enabling us to build SHAP summary plots that succinctly display the magnitude, prevalence, and direction of a feature’s effect.”
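EN (illustrative sketch only, not evidence of record): the aggregation mapped above, in which per-sample attributions are summed and averaged into a representative global attribution, can be sketched as follows (the local attribution values below are invented):

```python
def global_importance(local_shap):
    """Average the absolute per-sample SHAP attributions per feature,
    a sketch of Lundberg's global feature-importance aggregation."""
    n = len(local_shap)
    m = len(local_shap[0])
    return [sum(abs(row[j]) for row in local_shap) / n for j in range(m)]

# Hypothetical local attributions for two samples that share the same value
# for the target feature (column 0) but differ in their other factors.
local = [[0.2, -0.1],
         [0.6, 0.3]]
print(global_importance(local))  # ≈ [0.4, 0.2]
```

The averaging step inherently sums the first sample's attribution with the second sample's attribution before dividing, which is the summation relied on in the mapping of claim 11.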
Claims 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over non-patent literature Aas et al. (“EXPLAINING INDIVIDUAL PREDICTIONS WHEN FEATURES ARE DEPENDENT: MORE ACCURATE APPROXIMATIONS TO SHAPLEY VALUES”, hereinafter “Aas”) in view of US Patent US 11693326 B1 Pan et al., hereinafter “Pan”.
Claim 14
A non-transitory computer-readable storage medium storing instructions that, when executed by a processor, cause the processor to perform (Col 19, line 13-24, “The methods illustrated throughout the specification, may be implemented in a computer program product that may be executed on a computer. The computer program product may comprise a non-transitory computer-readable recording medium on which a control program is recorded, such as a disk, hard drive, or the like. Common forms of non-transitory computer-readable media include, for example, floppy disks, flexible disks, hard disks, magnetic tape, or any other magnetic storage medium, CD-ROM, DVD, or any other optical medium, a RAM, a PROM, an EPROM, a FLASH-EPROM, or other memory chip or cartridge, or any other tangible medium from which a computer can read and use.”) the method of claim 1. (EN: See the rejection for claim 1)
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the feature-dependent machine learning prediction model of Aas with the hardware for running the system of Pan. The motivation for doing so would be to have a tangible system that runs the method. See Col 18, lines 45-50 of Pan, “The exemplary embodiment also relates to an apparatus for performing the operations discussed herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer.”
Claim 15
An electronic device, the electronic device comprising: (Col 10, line 56-60, “The controller 490 may include one or more of a computer server, workstation, personal computer, cellular telephone, tablet computer, pager, combination thereof, or other computing device capable of executing instructions for performing the exemplary method.”) a processor configured to: (Col 17, Line 43-44, “The system further includes a controller that includes a processor in communication with memory.”)
The remaining limitations of claim 15 are substantially the same as claim 1, therefore claim 15 is rejected under the same rationale as claim 1.
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the feature-dependent machine learning prediction model of Aas with the hardware for running the system of Pan. The motivation for doing so would be to have a tangible system that runs the method. See Col 18, lines 45-50 of Pan, “The exemplary embodiment also relates to an apparatus for performing the operations discussed herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer.”
Claim 16 is a machine claim that recites substantially the same as limitations of claim 2. Therefore, claim 16 is rejected under the same rationale as claim 2.
Claim 17 is a machine claim that recites substantially the same as limitations of claim 3. Therefore, claim 17 is rejected under the same rationale as claim 3.
Claim 18 is a machine claim that recites substantially the same as limitations of claim 4. Therefore, claim 18 is rejected under the same rationale as claim 4.
Claim 19 is a machine claim that recites substantially the same as limitations of claim 7. Therefore, claim 19 is rejected under the same rationale as claim 7.
Claim 20 is a machine claim that recites substantially the same as limitations of claim 13. Therefore, claim 20 is rejected under the same rationale as claim 13.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NAYMUR RAHMAN ALI whose telephone number is (571)272-0007. The examiner can normally be reached Mon-Fri, 9:30 am-6:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexey Shmatov can be reached at (571)270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NAYMUR RAHMAN ALI/Examiner, Art Unit 2123
/ALEXEY SHMATOV/Supervisory Patent Examiner, Art Unit 2123