Prosecution Insights
Last updated: April 19, 2026
Application No. 18/070,775

INTERPRETATION METHOD FOR NEURAL NETWORK MODEL, ELECTRONIC DEVICE AND STORAGE MEDIUM

Non-Final OA (§101, §102)
Filed: Nov 29, 2022
Examiner: SUSSMAN MOSS, JACOB ZACHARY
Art Unit: 2122
Tech Center: 2100 — Computer Architecture & Software
Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
OA Round: 2 (Non-Final)
Grant Probability: 14% (At Risk)
OA Rounds: 2-3
To Grant: 3y 3m
With Interview: -6%

Examiner Intelligence

Career Allow Rate: 14% (grants only 14% of cases; 1 granted / 7 resolved; -40.7% vs TC avg)
Interview Lift: -20.0% (minimal lift; resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline; 26 currently pending)
Total Applications: 33 (career history; across all art units)

Statute-Specific Performance

§101: 37.3% (-2.7% vs TC avg)
§103: 35.2% (-4.8% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 15.5% (-24.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 7 resolved cases.

Office Action

§101 §102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

This action is in response to amendments filed December 23, 2025, in which claims 1, 3, 6, 7, 9, 12, 13, 15, and 18 have been amended and claims 2, 8, and 14 have been cancelled. No claims have been added. The amendments have been entered, and claims 1, 3-7, 9-13, and 15-18 are currently pending in the case. Claims 1, 7, and 13 are independent claims.

Regarding Applicant’s arguments that the rejection under 35 U.S.C. § 102(a)(1) was confusing, the examiner agrees that the rejection was unclear because the Office Action mailed September 23, 2025 did not identify the reference used to reject the claims. Accordingly, this action is made NON-FINAL.

Specification

The title of the invention is not descriptive and not idiomatic English. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: METHOD, SYSTEM, AND STORAGE MEDIUM IMPLEMENTING AN INTERPRETABLE NEURAL NETWORK MODEL USING CONCEPT-BASED INFERENCE PATHS.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-7, 9-13, and 15-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1:

Step 1: Claim 1 is directed to “An interpretation method”; therefore, it falls under the statutory category of a process.
Step 2A Prong 1: The claim recites, in part:

“acquiring a key inference path through which the output data is obtained…based on the input data, wherein the key inference path comprises target concepts respectively used by the layers…when the input data is processed…, wherein the target concepts are selected from the plurality of candidate concepts” this encompasses the mental creation of a key inference path from observed input data and selecting target concepts from observed candidate concepts.

“determining interpretation information corresponding to the layers…according to the target concepts corresponding to the layers…, respectively” this encompasses the mental determination of interpretation information corresponding to the observed layers and observed target concepts.

“acquiring a jth layer…corresponding to the output data, wherein j is equal to N, and N is a total number of layers” this encompasses the mental observation of a jth layer of a number of observed layers.

“acquiring a target concept in the jth layer…” this encompasses the mental observation of a target concept.

“acquiring quantitative relationships between candidate concepts in an ith layer…and the target concept, respectively, wherein i is equal to j minus 1” this limitation is a mathematical concept.

“determining a target concept in the ith layer…according to the candidate concepts in the ith layer…and the quantitative relationships” this encompasses the mental determination of a target concept among observed candidate concepts.

“subtracting 1 from j, and executing acquiring the target concept in the jth layer…when j is greater than 2” this encompasses the mental subtraction of 1 from an observed j and the mental observation of a target concept. Further, this limitation is a mathematical concept.

“generating the key inference path according to the target concepts in the layers…when j is equal to 2” this encompasses the mental creation of a key inference path when an observed j is equal to 2.
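For orientation, the backward traversal recited in the limitations above (start at the output layer j = N, select a target concept, relate candidates in the adjacent layer i = j - 1 to it, and subtract 1 from j until j equals 2) can be sketched in Python. This is an illustrative reading of the claim language only, not code from the application; every name here (build_key_inference_path, quantitative_relationship, select_target) is hypothetical.

```python
# Hypothetical sketch of the recited backward traversal; not the applicant's code.
def build_key_inference_path(layers, output_concept, quantitative_relationship, select_target):
    """layers[k] holds the candidate concepts of layer k+1, so layers[-1] is layer N."""
    N = len(layers)
    j = N
    targets = {j: output_concept}  # target concept acquired in the jth (output) layer
    while j > 2:
        i = j - 1                  # adjacent lower layer
        candidates = layers[i - 1]
        # quantitative relationships between each candidate concept and the current target
        rels = {c: quantitative_relationship(c, targets[j]) for c in candidates}
        targets[i] = select_target(candidates, rels)
        j -= 1                     # "subtracting 1 from j" and repeating while j > 2
    # when j equals 2, generate the key inference path from the per-layer targets
    return [targets[k] for k in range(2, N + 1)]
```

A caller would supply the scoring and selection policies, e.g. picking the candidate with the largest relationship value, which matches the claim's "determining a target concept … according to the candidate concepts … and the quantitative relationships."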
Step 2A Prong 2: The judicial exception is not integrated into a practical application; the remaining limitations of the claim are as follows:

“acquiring input data and output data corresponding to the input data”, “the input data for the neural network model is variable values of various observed variables, and the observed variables represent attribute information of the objects” the limitation is an additional element that amounts to adding insignificant extra-solution activity to the judicial exception. See MPEP § 2106.05(g).

“of a neural network model” (line 2), “the neural network model comprises layers of the neural network model connected sequentially, and each layer of the neural network model corresponds to a plurality of candidate concepts”, “in the neural network model” (line 8), “…the neural network model” (throughout the claim), “networks in the neural network model” these limitations are an additional element that generally links the use of the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(h).

“the neural network model is applied to classify objects”, “outputting the key inference path and the interpretation information” the limitations are an additional element that amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. See MPEP § 2106.05(f)(2).
Step 2B: The additional elements, “of a neural network model” (line 2), “the neural network model comprises layers of the neural network model connected sequentially, and each layer of the neural network model corresponds to a plurality of candidate concepts”, “in the neural network model” (line 8), “…the neural network model” (throughout the claim), “networks in the neural network model”, “the neural network model is applied to classify objects”, “outputting the key inference path and the interpretation information”, taken individually and in combination, do not provide an inventive concept of significantly more than the abstract idea itself for the reasons set forth in Step 2A Prong 2 above.

Further, “acquiring input data and output data corresponding to the input data”, “the input data for the neural network model is variable values of various observed variables, and the observed variables represent attribute information of the objects” the limitation is an additional element that amounts to adding insignificant extra-solution activity to the judicial exception. See MPEP § 2106.05(g). Furthermore, the additional element is directed to storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015), as well as receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d. See MPEP § 2106.05(d)(II).

Therefore, the claim is ineligible.

Regarding claim 3, the rejection of claim 1 is incorporated and further:

Step 2A Prong 1: The claim recites, in part:

“acquiring importance values of the quantitative relationships” this encompasses the mental creation of importance values for observed quantitative relationships.
“ranking the quantitative relationships in a descending order of the importance values of the quantitative relationships to obtain a ranking result” this encompasses the mental ranking of observed quantitative relationships.

“taking out the quantitative relationships sequentially according to the ranking result, and acquiring candidate concepts corresponding to the quantitative relationships from the candidate concepts in the ith layer…” this encompasses the mental removal of observed quantitative relationships and observing new candidate concepts.

“accumulating estimated values of the candidate concepts corresponding to the quantitative relationships taken out until an accumulated value is greater than a preset threshold” this encompasses the mental accumulation of estimated values of observed candidate concepts until a threshold is met.

“determining the target concept in the ith layer…from the candidate concepts corresponding to the quantitative relationships taken out from the ranking result” this encompasses the mental determination of a target concept in an observed layer amongst observed candidate concepts.

Step 2A Prong 2: The judicial exception is not integrated into a practical application; the remaining limitations of the claim are as follows: “…the neural network model” (lines 10 and 14 of the claim) these limitations are an additional element that generally links the use of the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(h).

Step 2B: The additional elements, taken individually and in combination, do not provide an inventive concept of significantly more than the abstract idea itself for the reasons set forth in Step 2A Prong 2 above. Therefore, the claim is ineligible.

Regarding claim 4, the rejection of claim 1 is incorporated and further:

Step 2A Prong 1: a continuation of the abstract idea identified in the parent claim.
Step 2A Prong 2: The judicial exception is not integrated into a practical application; the remaining limitations of the claim are as follows: “the interpretation information comprises semantic information of the target concept” the limitation is an additional element that generally links the use of the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(h).

Step 2B: The additional elements, taken individually and in combination, do not provide an inventive concept of significantly more than the abstract idea itself for the reasons set forth in Step 2A Prong 2 above. Therefore, the claim is ineligible.

Regarding claim 5, the rejection of claim 4 is incorporated and further:

Step 2A Prong 1: a continuation of the abstract idea identified in the parent claim.

Step 2A Prong 2: The judicial exception is not integrated into a practical application; the remaining limitations of the claim are as follows: “the interpretation information further comprises sample characteristics of a target sample corresponding to the target concept” the limitation is an additional element that generally links the use of the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(h).

Step 2B: The additional elements, taken individually and in combination, do not provide an inventive concept of significantly more than the abstract idea itself for the reasons set forth in Step 2A Prong 2 above. Therefore, the claim is ineligible.

Regarding claim 6, the rejection of claim 1 is incorporated and further:

Step 2A Prong 1: The claim recites, in part:

“acquiring a quantitative relationship between target concepts in two adjacent layers…according to the target concepts in the two adjacent layers…for any two adjacent layers…in the key inference path” this encompasses the mental creation of a quantitative relationship according to observed target concepts.
“marking the quantitative relationship between the target concepts in the two adjacent layers…in the key inference path” this encompasses the mental marking of observed quantitative relationships in observed layers.

Step 2A Prong 2: The judicial exception is not integrated into a practical application; the remaining limitations of the claim are as follows: “…the neural network model” (lines 3, 4, 5 and 7 of the claim) these limitations are an additional element that generally links the use of the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(h).

Step 2B: The additional elements, taken individually and in combination, do not provide an inventive concept of significantly more than the abstract idea itself for the reasons set forth in Step 2A Prong 2 above. Therefore, the claim is ineligible.

Regarding claim 7:

Step 1: Claim 7 is directed to “An electronic device”; therefore, it falls under the statutory category of a machine.

Step 2A Prong 1: The claim recites, in part:

“acquire a key inference path through which the output data is obtained…based on the input data, wherein the key inference path comprises target concepts respectively used by the layers…when the input data is processed…, wherein the target concepts are selected from the plurality of candidate concepts” this encompasses the mental creation of a key inference path from observed input data and selecting target concepts from observed candidate concepts.

“determine interpretation information corresponding to the layers…according to the target concepts corresponding to the layers…, respectively” this encompasses the mental determination of interpretation information corresponding to the observed layers and observed target concepts.

“acquire a jth layer…corresponding to the output data, wherein j is equal to N, and N is a total number of layers” this encompasses the mental observation of a jth layer of a number of observed layers.
“acquire a target concept in the jth layer…” this encompasses the mental observation of a target concept.

“acquire quantitative relationships between candidate concepts in an ith layer…and the target concept, respectively, wherein i is equal to j minus 1” this limitation is a mathematical concept.

“determine a target concept in the ith layer…according to the candidate concepts in the ith layer…and the quantitative relationships” this encompasses the mental determination of a target concept among observed candidate concepts.

“subtract 1 from j, and executing acquiring the target concept in the jth layer…when j is greater than 2” this encompasses the mental subtraction of 1 from an observed j and the mental observation of a target concept. Further, this limitation is a mathematical concept.

“generate the key inference path according to the target concepts in the layers…when j is equal to 2” this encompasses the mental creation of a key inference path when an observed j is equal to 2.

Step 2A Prong 2: The judicial exception is not integrated into a practical application; the remaining limitations of the claim are as follows:

“acquire input data and output data corresponding to the input data”, “the input data for the neural network model is variable values of various observed variables, and the observed variables represent attribute information of the objects” the limitation is an additional element that amounts to adding insignificant extra-solution activity to the judicial exception. See MPEP § 2106.05(g).
“of a neural network model” (line 2), “the neural network model comprises layers of the neural network model connected sequentially, and each layer of the neural network model corresponds to a plurality of candidate concepts”, “in the neural network model” (line 8), “…the neural network model” (throughout the claim), “networks in the neural network model” these limitations are an additional element that generally links the use of the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(h).

“the neural network model is applied to classify objects”, “output the key inference path and the interpretation information” the limitations are an additional element that amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. See MPEP § 2106.05(f)(2).

Step 2B: The additional elements, “of a neural network model” (line 2), “the neural network model comprises layers of the neural network model connected sequentially, and each layer of the neural network model corresponds to a plurality of candidate concepts”, “in the neural network model” (line 8), “…the neural network model” (throughout the claim), “networks in the neural network model”, “the neural network model is applied to classify objects”, “output the key inference path and the interpretation information”, taken individually and in combination, do not provide an inventive concept of significantly more than the abstract idea itself for the reasons set forth in Step 2A Prong 2 above.

Further, “acquire input data and output data corresponding to the input data”, “the input data for the neural network model is variable values of various observed variables, and the observed variables represent attribute information of the objects” the limitation is an additional element that amounts to adding insignificant extra-solution activity to the judicial exception.
See MPEP § 2106.05(g). Furthermore, the additional element is directed to storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015), as well as receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d. See MPEP § 2106.05(d)(II).

Therefore, the claim is ineligible.

Regarding claims 9-12: The rejection of claim 7 is further incorporated; the rejections of claims 3-6 are applicable to claims 9-12, respectively.

Regarding claim 13:

Step 1: Claim 13 is directed to “An interpretation method”; therefore, it falls under the statutory category of a process.

Step 2A Prong 1: The claim recites, in part:

“acquiring a key inference path through which the output data is obtained…based on the input data, wherein the key inference path comprises target concepts respectively used by the layers…when the input data is processed…, wherein the target concepts are selected from the plurality of candidate concepts” this encompasses the mental creation of a key inference path from observed input data and selecting target concepts from observed candidate concepts.

“determining interpretation information corresponding to the layers…according to the target concepts corresponding to the layers…, respectively” this encompasses the mental determination of interpretation information corresponding to the observed layers and observed target concepts.

Step 2A Prong 2: The judicial exception is not integrated into a practical application; the remaining limitations of the claim are as follows:

“acquiring input data and output data corresponding to the input data” the limitation is an additional element that amounts to adding insignificant extra-solution activity to the judicial exception. See MPEP § 2106.05(g).
“of a neural network model” (line 6), “the neural network model comprises layers of networks connected sequentially, and each layer of network corresponds to a plurality of candidate concepts”, “the neural network model” (line 10), “in the neural network model” (line 12), “…network” (lines 14 and 15) these limitations are an additional element that generally links the use of the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(h).

“outputting the key inference path and the interpretation information” the limitation is an additional element that amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. See MPEP § 2106.05(f)(2).

Step 2B: The additional elements, “of a neural network model” (line 2), “the neural network model comprises layers of the neural network model connected sequentially, and each layer of the neural network model corresponds to a plurality of candidate concepts”, “in the neural network model” (line 8), “…the neural network model” (throughout the claim), “networks in the neural network model”, “the neural network model is applied to classify objects”, “outputting the key inference path and the interpretation information”, taken individually and in combination, do not provide an inventive concept of significantly more than the abstract idea itself for the reasons set forth in Step 2A Prong 2 above.

Further, “acquiring input data and output data corresponding to the input data”, “the input data for the neural network model is variable values of various observed variables, and the observed variables represent attribute information of the objects” the limitation is an additional element that amounts to adding insignificant extra-solution activity to the judicial exception. See MPEP § 2106.05(g).
Furthermore, the additional element is directed to storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015), as well as receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d. See MPEP § 2106.05(d)(II).

Therefore, the claim is ineligible.

Regarding claims 15-18: The rejection of claim 13 is further incorporated; the rejections of claims 3-6 are applicable to claims 15-18, respectively.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 3-7, 9-13, and 15-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Bach et al. (“On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation”, 10 July 2015), hereinafter Bach.
Regarding claim 1: Bach teaches

An interpretation method for a neural network model, wherein the neural network model is applied to classify objects (Bach, page 1, Abstract, ¶1 “This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers.”), the method comprising:

acquiring input data and output data corresponding to the input data of a neural network model, wherein the neural network model comprises layers of the neural network model connected sequentially (Bach, page 10, ¶2 “The latter is independent from the neural network properties in higher layers whereas in our approach we feed the classification score into the top neurons and use quantities computed by using properties of higher layers to obtain a representation at lower layers.”), and each layer of the neural network model corresponds to a plurality of candidate concepts (Bach, pages 3-4, ¶3 “Layer-wise relevance propagation in its general form assumes that the classifier can be decomposed into several layers of computation. Such layers can be parts of the feature extraction from the image or parts of a classification algorithm run on the computed features.” Here, layers for feature extraction and classification of features can be considered layers corresponding to candidate concepts);

acquiring a key inference path through which the output data is obtained by the neural network model based on the input data (Bach, page 2, ¶5 “We are interested to find out the contribution of each input pixel x(d) of an input image x to a particular prediction f(x).” Here, the particular prediction can be considered the output data and the input image can be considered the input data), wherein the key inference path comprises target concepts respectively used by the layers of the neural network model when the input data is processed in the neural network model (Bach, page 7, figure 2B “The layer-wise propagation rule was applied iteratively from the output back to the input, thus, forming another possible pixelwise decomposition.” Here, the pixelwise decomposition can be considered a key inference path. In light of the specification, the key inference path can be obtained by identifying target concepts from the output from top to bottom, and this layer-wise propagation can be considered recursively identifying target concepts from the output from top to bottom. As stated in the specification, page 9, lines 26-30: “In the embodiment, target concepts in the layers of networks are gradually determined from the top to bottom starting from the output, and a key inference path for processing the input data to obtain the output data by the neural network model is accurately generated based on the target concepts in the layers of networks, thus accurately reflecting an inference logic inside the neural network model and improving an interpretability of the model.”), wherein the target concepts are selected from the plurality of candidate concepts (Bach, page 8, ¶2 “One can interpret condition (13) by saying that the messages R_{i←j}^{(l,l+1)} are used to distribute the relevance R_k^{(l+1)} of a neuron k onto its input neurons at layer l.” In light of the spec, the input neurons to k can be considered target concepts amongst candidate concepts. Specification, page 4, lines 2-3: “Candidate concepts in each layer of network correspond to hidden units in each layer of network. In the embodiment, each hidden unit corresponds to one candidate concept.”);

determining interpretation information corresponding to the layers of the neural network model according to the target concepts corresponding to the layers of the neural network model, respectively (Bach, page 23, figure 6 “Pixel-wise decomposition for Bag of Words features over χ²-kernels using the Taylor-type decomposition for the third layer and the layerwise relevance propagation for the subsequent layers.” Here, the decomposition for the third layer and subsequent propagation can be considered interpretation information and target concepts corresponding to layers); and

outputting the key inference path (Bach, page 3, ¶2 “In this paper we propose a novel concept we denote as layer-wise relevance propagation as a general concept for the purpose of achieving a pixel-wise decomposition as in Eq (1)” Here, the pixel-wise decomposition can be considered the key inference path) and the interpretation information (Bach, page 1, Abstract, ¶1 “These pixel contributions can be visualized as heatmaps and are provided to a human expert who can intuitively not only verify the validity of the classification decision, but also focus further analysis on regions of potential interest.” Here, the heatmaps can be considered the outputted interpretation information),

wherein acquiring the key inference path through which the output data is obtained by the neural network model based on the input data comprises:

acquiring a jth layer of the neural network model corresponding to the output data, wherein j is equal to N, and N is a total number of layers of the neural network model in the neural network model (Bach, page 5, ¶2 “The top layer consists of one output neuron, indexed by 7. For each neuron i we would like to compute a relevance Ri” Here, the top layer of an N-layered network would be layer N, and thus the selected top layer can be considered the jth layer);

acquiring a target concept in the jth layer of the neural network model (Bach, page 5, ¶4 “Note that neuron 7 has no incoming messages anyway. Instead its relevance is defined as R_7^{(3)} = f(x).” In light of the specification, the output, f(x), can be considered the target concept. Specification, page 6, lines 19-20: “Specifically, the last layer of network corresponding to the output data can be acquired, and each candidate concept in the last layer can be taken as a target concept”);

acquiring quantitative relationships between candidate concepts in an ith layer of network and the target concept, respectively, wherein i is equal to j minus 1 (Bach, page 5, ¶1 “The underlying Formula (2) can be interpreted as a conservation law for the relevance R in between layers of the feature processing.” In light of the spec, a non-linear univariate function, such as Formula (2), can be considered the quantitative relationships. Specification, page 6, lines 6-8: “It is to be noted that quantitative relationships can be used to quantify relationships between explicit concepts with different physical meanings. In some embodiments, the quantitative relationship in the embodiments can be a univariate nonlinear function.”);

determining a target concept in the ith layer of the neural network model according to the candidate concepts in the ith layer of the neural network model and the quantitative relationships (Bach, page 5, ¶4 “Secondly, we define the relevance of any neuron except neuron 7 as the sum of incoming messages. For example, R_3^{(1)} = R_{3←5}^{(1,2)} + R_{3←6}^{(1,2)}.”);

subtracting 1 from j, and executing acquiring the target concept in the jth layer of the neural network model when j is greater than 2 (Bach, page 4, ¶3 “Iterating Eq (2) from the last layer which is the classifier output f(x) down to the input layer x consisting of image pixels then yields the desired Eq (1).”); and

generating the key inference path according to the target concepts in the layers of the neural network model when j is equal to 2 (Bach, page 42, ¶3 “The second one, coined layer-wise relevance propagation, applies a propagation rule that distributes class relevance found at a given layer onto the previous layer. The layer-wise propagation rule was applied iteratively from the output back to the input, thus, forming another possible pixelwise decomposition” Here, the pixelwise decomposition can be considered the inference path).
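For reference, the layer-wise relevance propagation rule the rejection relies on, as characterized in the passages quoted above (relevance R_k at layer l+1 is redistributed to layer-l neurons in proportion to z_ik = a_i * w_ik, each neuron's relevance is the sum of its incoming messages, and the rule is iterated from the output down to the input), can be sketched as follows. This is a simplified illustration under those stated assumptions, not Bach's published code; the function name and the list-of-lists weight layout are invented for the example.

```python
# Illustrative LRP sketch: redistribute relevance proportionally to z_ik = a_i * w_ik,
# iterating from the output layer back to the input (conservation between layers).
def lrp_backward(activations, weights, output_relevance, eps=1e-9):
    """activations: per-layer activation lists, input first, output last;
    weights[l][i][k]: weight from neuron i in layer l to neuron k in layer l+1."""
    relevances = [None] * len(activations)
    relevances[-1] = list(output_relevance)        # top-layer relevance, e.g. f(x)
    for l in range(len(activations) - 2, -1, -1):  # from output down to input
        a = activations[l]
        R_next = relevances[l + 1]
        R = [0.0] * len(a)
        for k, Rk in enumerate(R_next):
            z = [a[i] * weights[l][i][k] for i in range(len(a))]
            denom = sum(z) or eps                  # messages from k sum to Rk (conservation)
            for i in range(len(a)):
                R[i] += z[i] / denom * Rk          # message R_{i<-k}
        relevances[l] = R
    return relevances[0]                           # pixel-wise relevance at the input
```

Because each neuron k distributes exactly its own relevance, the input relevances sum to the output score, which is the conservation law the examiner equates with the claimed quantitative relationships.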
Regarding claim 3: Bach teaches The method of claim 1, wherein determining the target concepts of the ith layer of the neural network model according to the candidate concepts in the ith layer of network and the quantitative relationships comprises: acquiring importance values of the quantitative relationships (Bach, page 5, ¶1 “The underlying Formula (2) can be interpreted as a conservation law for the relevance R in between layers of the feature processing.” Here, the actual values of the relevance can be considered the importance values); ranking the quantitative relationships in a descending order of the importance values of the quantitative relationships to obtain a ranking result (Bach, page 23, Figure 6 “In the heatmap, based on linearly mapping the interval [−1, +1] to the jet color map available in many visualization packages, green corresponds to scores close to zero, yellow and red to positive scores and blue color to negative scores.” Here, the ordering of the relevance scores from most relevant to least relevant can be considered a ranking result in descending order); taking out the quantitative relationships sequentially according to the ranking result (Bach, pages 36-37, ¶1 “The first difference between Figs 23 and 24 is that in the former the pixels were flipped according to the heatmap for the highest scoring class, and in the latter the pixels were flipped according to the heatmap for a random class which did not yield the highest score.” Here, flipping the highest-scoring pixels according to the heatmap can be considered taking out quantitative relationships according to the ranking result), and acquiring candidate concepts corresponding to the quantitative relationships from the candidate concepts in the ith layer of the neural network model (Bach, page 8, ¶2 “One can interpret condition (13) by saying that the messages R_{i←k}^(l,l+1) are used to distribute the relevance R_k^(l+1) of a neuron k onto its input neurons at layer l.” In light of the specification, the input neurons to k can be considered target concepts amongst candidate concepts. Specification, page 4, lines 2-3 “Candidate concepts in each layer of network correspond to hidden units in each layer of network. In the embodiment, each hidden unit corresponds to one candidate concept.”); accumulating estimated values of the candidate concepts corresponding to the quantitative relationships (Bach, page 5, ¶4 “Secondly, we define the relevance of any neuron except neuron 7 as the sum of incoming messages” Here, a summation of incoming relevance scores can be considered an accumulation of estimated values) taken out until an accumulated value is greater than a preset threshold (Bach, page 7, ¶5 “If a node i has a larger weighted activation z_ik = a_i·w_ik, then, in a qualitative sense, it should also receive a larger fraction of the relevance score R_k^(l+1) of the node k. In particular, for all nodes k satisfying Σ_i z_ik > 0, one can define the constraint 0 < z_ik < z_i′k ⇒ R_{i←k}^(l,l+1) ≤ R_{i′←k}^(l,l+1).” Here, 0 can be considered the preset threshold); and determining the target concept in the ith layer of the neural network model from the candidate concepts corresponding to the quantitative relationships taken out from the ranking result (Bach, page 6, ¶2 “The difference between condition (13) and definition (8) is that in the condition (13) the sum runs over the sources at layer l for a fixed neuron k at layer l+1, while in the definition (8) the sum runs over the sinks at layer l+1 for a fixed neuron i at a layer l.” Here, running over the fixed neuron i can be considered determining the target concept, and the sums over the sinks for the fixed neuron can be considered determining the target concept according to the quantitative relationships).

Regarding claim 4: Bach teaches The method of claim 1, wherein the interpretation information comprises semantic information of the target concept (Bach, page 1, Abstract, ¶1 “These pixel contributions can be visualized as heatmaps and are provided to a human expert who can intuitively not only verify the validity of the classification decision, but also focus further analysis on regions of potential interest.” In light of the specification, the heatmaps can be considered the semantic information of the target concept. Specification, page 7, line 18 “The semantic information of the target concept may be an intuitive interpretation of the concept.”).

Regarding claim 5: Bach teaches The method of claim 4, wherein the interpretation information further comprises sample characteristics of a target sample corresponding to the target concept (Bach, page 23, Figure 6 “Furthermore regions which are far from shapes result in scores close to zero which are colored in green in the heatmap from pixel-wise decomposition of the predictions and in the overlay of the image and the decomposition.” Here, each pixel with a color assigned can be considered sample characteristics of a target sample).
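For orientation only, the selection procedure recited in claim 3 (ranking quantitative relationships by importance and accumulating until a preset threshold) and the top-down iteration recited in the independent claims (start at layer j = N, subtract 1 from j while j > 2, emit the path when j equals 2) can be sketched as follows. This is a minimal hypothetical illustration under assumed synthetic inputs; the function names, the `importance_by_layer` stand-in, and the threshold value are the sketch's own assumptions and appear in neither the claims nor Bach.

```python
# Hypothetical sketch of the claimed procedure; all identifiers are
# illustrative and do not come from the application or from Bach.
import numpy as np

def select_target_concepts(importances, threshold):
    # Claim 3: rank the quantitative relationships in descending order of
    # importance, take them out sequentially, and accumulate their estimated
    # values until the accumulated value exceeds the preset threshold.
    order = np.argsort(importances)[::-1]       # descending ranking result
    taken, accumulated = [], 0.0
    for idx in order:
        taken.append(int(idx))                  # take out the relationship
        accumulated += float(importances[idx])  # accumulate estimated values
        if accumulated > threshold:
            break
    return taken                                # indices of target concepts

def key_inference_path(importance_by_layer, threshold):
    # Claims 1/7/13: acquire the jth layer with j = N, take the output
    # concept as a target, determine targets for layer i = j - 1, subtract
    # 1 from j while j > 2, and generate the path once j equals 2.
    # importance_by_layer[i] holds stand-in importance values for the
    # quantitative relationships between layer i's candidate concepts and
    # the target concept in layer i + 1.
    N = max(importance_by_layer) + 1            # total number of layers
    j = N
    path = {N: [0]}                             # output layer: one concept here
    while j > 2:
        i = j - 1                               # i is equal to j minus 1
        path[i] = select_target_concepts(importance_by_layer[i], threshold)
        j -= 1                                  # subtract 1 from j
    return path                                 # generated when j == 2
```

Run against synthetic importances for a four-layer model, the sketch returns per-layer target-concept indices for layers 2 through N; nothing here is asserted to match the applicant's implementation or Bach's code.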
Regarding claim 6: Bach teaches The method of claim 1, further comprising: acquiring a quantitative relationship between target concepts in two adjacent layers of the neural network model according to the target concepts in the two adjacent layers of the neural network model for any two adjacent layers of the neural network model in the key inference path (Bach, page 5, ¶1 “The underlying Formula (2) can be interpreted as a conservation law for the relevance R in between layers of the feature processing.” Here, Formula (2) can be considered the quantitative relationship between layers, and the features can be considered the target concepts); and marking the quantitative relationship between the target concepts in the two adjacent layers of the neural network model in the key inference path (Bach, page 23, Figure 6 “The decompositions from the overlapping tiles were averaged. In the heatmap, based on linearly mapping the interval [−1, +1] to the jet color map available in many visualization packages, green corresponds to scores close to zero, yellow and red to positive scores and blue color to negative scores.” Here, the colors used for different scores can be considered marking the relationships).

Regarding claim 7: Bach teaches An electronic device, comprising: at least one processor; and a memory communicatively connected with the at least one processor for storing instructions executable by the at least one processor (Bach, page 42, ¶7 “This desirable property is demonstrated in this paper by the heatmapping of images classified by the third-party GPU-trained ImageNet neural network. In particular, our heatmapping procedure was applied to this network without any further training or retraining. Thus, heatmaps for the ImageNet network could be quickly produced using a modest CPU.”); wherein the at least one processor is configured to: acquire input data and output data corresponding to the input data of a neural network model, wherein the neural network model is applied to classify objects (Bach, page 1, Abstract, ¶1 “This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers.”), the neural network model comprises layers of the neural network model connected sequentially (Bach, page 10, ¶2 “The latter is independent from the neural network properties in higher layers whereas in our approach we feed the classification score into the top neurons and use quantities computed by using properties of higher layers to obtain a representation at lower layers.”), and each layer of the neural network model corresponds to a plurality of candidate concepts (Bach, pages 3-4, ¶3 “Layer-wise relevance propagation in its general form assumes that the classifier can be decomposed into several layers of computation. Such layers can be parts of the feature extraction from the image or parts of a classification algorithm run on the computed features.” Here, the layers for feature extraction and classification of features can be considered layers corresponding to candidate concepts); acquire a key inference path through which the output data is obtained by the neural network model based on the input data (Bach, page 2, ¶5 “We are interested to find out the contribution of each input pixel x(d) of an input image x to a particular prediction f(x).” Here, the particular prediction can be considered the output data and the input image can be considered the input data), wherein the key inference path comprises target concepts respectively used by the layers of the neural network model when the input data is processed in the neural network model (Bach, page 7, Figure 2B “The layer-wise propagation rule was applied iteratively from the output back to the input, thus, forming another possible pixelwise decomposition.” Here, the pixelwise decomposition can be considered a key inference path. In light of the specification, the key inference path can be obtained by identifying target concepts from the output from top to bottom, and this layer-wise propagation can be considered recursively identifying target concepts from the output from top to bottom. As stated in the specification, page 9, lines 26-30 “In the embodiment, target concepts in the layers of networks are gradually determined from the top to bottom starting from the output, and a key inference path for processing the input data to obtain the output data by the neural network model is accurately generated based on the target concepts in the layers of networks, thus accurately reflecting an inference logic inside the neural network model and improving an interpretability of the model.”), wherein the target concepts are selected from the plurality of candidate concepts (Bach, page 8, ¶2 “One can interpret condition (13) by saying that the messages R_{i←k}^(l,l+1) are used to distribute the relevance R_k^(l+1) of a neuron k onto its input neurons at layer l.” In light of the specification, the input neurons to k can be considered target concepts amongst candidate concepts. Specification, page 4, lines 2-3 “Candidate concepts in each layer of network correspond to hidden units in each layer of network. In the embodiment, each hidden unit corresponds to one candidate concept.”); determine interpretation information corresponding to the layers of the neural network model according to the target concepts corresponding to the layers of the neural network model, respectively (Bach, page 23, Figure 6 “Pixel-wise decomposition for Bag of Words features over χ²-kernels using the Taylor-type decomposition for the third layer and the layerwise relevance propagation for the subsequent layers.” Here, the decomposition for the third layer and subsequent propagation can be considered interpretation information and target concepts corresponding to layers); and output the key inference path (Bach, page 3, ¶2 “In this paper we propose a novel concept we denote as layer-wise relevance propagation as a general concept for the purpose of achieving a pixel-wise decomposition as in Eq (1)” Here, the pixel-wise decomposition can be considered the key inference path) and the interpretation information (Bach, page 1, Abstract, ¶1 “These pixel contributions can be visualized as heatmaps and are provided to a human expert who can intuitively not only verify the validity of the classification decision, but also focus further analysis on regions of potential interest.” Here, the heatmaps can be considered the outputted interpretation information); wherein the at least one processor is further configured to: acquire a jth layer of the neural network model corresponding to the output data, wherein j is equal to N, and N is a total number of layers of the neural network model in the neural network model (Bach, page 5, ¶2 “The top layer consists of one output neuron, indexed by 7. For each neuron i we would like to compute a relevance Ri” Here, the top layer of an N-layered network would be layer N, and thus the selected top layer can be considered the jth layer); acquire a target concept in the jth layer of the neural network model (Bach, page 5, ¶4 “Note that neuron 7 has no incoming messages anyway. Instead its relevance is defined as R_7^(3) = f(x).” In light of the specification, the output, f(x), can be considered the target concept. Specification, page 6, lines 19-20 “Specifically, the last layer of network corresponding to the output data can be acquired, and each candidate concept in the last layer can be taken as a target concept”); acquire quantitative relationships between candidate concepts in an ith layer of network and the target concept, respectively, wherein i is equal to j minus 1 (Bach, page 5, ¶1 “The underlying Formula (2) can be interpreted as a conservation law for the relevance R in between layers of the feature processing.” In light of the specification, a nonlinear univariate function, such as Formula (2), can be considered the quantitative relationships. Specification, page 6, lines 6-8 “It is to be noted that quantitative relationships can be used to quantify relationships between explicit concepts with different physical meanings. In some embodiments, the quantitative relationship in the embodiments can be a univariate nonlinear function.”); determine a target concept in the ith layer of the neural network model according to the candidate concepts in the ith layer of the neural network model and the quantitative relationships (Bach, page 5, ¶4 “Secondly, we define the relevance of any neuron except neuron 7 as the sum of incoming messages: R_i^(l) = Σ_k R_{i←k}^(l,l+1). For example R_3^(1) = R_{3←5}^(1,2) + R_{3←6}^(1,2).”); subtract 1 from j, and execute acquiring the target concept in the jth layer of the neural network model when j is greater than 2 (Bach, page 4, ¶3 “Iterating Eq (2) from the last layer which is the classifier output f(x) down to the input layer x consisting of image pixels then yields the desired Eq (1).”); and generate the key inference path according to the target concepts in the layers of the neural network model when j is equal to 2 (Bach, page 42, ¶3 “The second one, coined layer-wise relevance propagation, applies a propagation rule that distributes class relevance found at a given layer onto the previous layer. The layer-wise propagation rule was applied iteratively from the output back to the input, thus, forming another possible pixelwise decomposition” Here, the pixelwise decomposition can be considered the inference path).

Regarding claims 9-12: Claims 9-12 are rejected under the same rationale as claims 3-6, respectively.

Regarding claim 13: Bach teaches A non-transitory computer-readable storage medium having stored therein computer instructions (Bach, page 42, ¶7 “This desirable property is demonstrated in this paper by the heatmapping of images classified by the third-party GPU-trained ImageNet neural network. In particular, our heatmapping procedure was applied to this network without any further training or retraining.
Thus, heatmaps for the ImageNet network could be quickly produced using a modest CPU.” Here, the ImageNet network is inherently stored on a non-transitory computer-readable storage medium for use in the system) that, when executed by a computer, cause the computer to perform: acquiring input data and output data corresponding to the input data of a neural network model, wherein the neural network model is applied to classify objects (Bach, page 1, Abstract, ¶1 “This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers.”), the neural network model comprises layers of the neural network model connected sequentially (Bach, page 10, ¶2 “The latter is independent from the neural network properties in higher layers whereas in our approach we feed the classification score into the top neurons and use quantities computed by using properties of higher layers to obtain a representation at lower layers.”), and each layer of the neural network model corresponds to a plurality of candidate concepts (Bach, pages 3-4, ¶3 “Layer-wise relevance propagation in its general form assumes that the classifier can be decomposed into several layers of computation. Such layers can be parts of the feature extraction from the image or parts of a classification algorithm run on the computed features.” Here, the layers for feature extraction and classification of features can be considered layers corresponding to candidate concepts); acquiring a key inference path through which the output data is obtained by the neural network model based on the input data (Bach, page 8, ¶2 “One can interpret condition (13) by saying that the messages R_{i←k}^(l,l+1) are used to distribute the relevance R_k^(l+1) of a neuron k onto its input neurons at layer l.” In light of the specification, the input neurons to k can be considered target concepts amongst candidate concepts. Specification, page 4, lines 2-3 “Candidate concepts in each layer of network correspond to hidden units in each layer of network. In the embodiment, each hidden unit corresponds to one candidate concept.”), wherein the key inference path comprises target concepts respectively used by the layers of the neural network model when the input data is processed in the neural network model (Bach, page 7, Figure 2B “The layer-wise propagation rule was applied iteratively from the output back to the input, thus, forming another possible pixelwise decomposition.” Here, the pixelwise decomposition can be considered a key inference path. In light of the specification, the key inference path can be obtained by identifying target concepts from the output from top to bottom, and this layer-wise propagation can be considered recursively identifying target concepts from the output from top to bottom. As stated in the specification, page 9, lines 26-30 “In the embodiment, target concepts in the layers of networks are gradually determined from the top to bottom starting from the output, and a key inference path for processing the input data to obtain the output data by the neural network model is accurately generated based on the target concepts in the layers of networks, thus accurately reflecting an inference logic inside the neural network model and improving an interpretability of the model.”), wherein the target concepts are selected from the plurality of candidate concepts (Bach, page 8, ¶2 “One can interpret condition (13) by saying that the messages R_{i←k}^(l,l+1) are used to distribute the relevance R_k^(l+1) of a neuron k onto its input neurons at layer l.” In light of the specification, the input neurons to k can be considered target concepts amongst candidate concepts. Specification, page 4, lines 2-3 “Candidate concepts in each layer of network correspond to hidden units in each layer of network. In the embodiment, each hidden unit corresponds to one candidate concept.”); determining interpretation information corresponding to the layers of the neural network model according to the target concepts corresponding to the layers of the neural network model, respectively (Bach, page 23, Figure 6 “Pixel-wise decomposition for Bag of Words features over χ²-kernels using the Taylor-type decomposition for the third layer and the layerwise relevance propagation for the subsequent layers.” Here, the decomposition for the third layer and subsequent propagation can be considered interpretation information and target concepts corresponding to layers); and outputting the key inference path (Bach, page 3, ¶2 “In this paper we propose a novel concept we denote as layer-wise relevance propagation as a general concept for the purpose of achieving a pixel-wise decomposition as in Eq (1)” Here, the pixel-wise decomposition can be considered the key inference path) and the interpretation information (Bach, page 1, Abstract, ¶1 “These pixel contributions can be visualized as heatmaps and are provided to a human expert who can intuitively not only verify the validity of the classification decision, but also focus further analysis on regions of potential interest.” Here, the heatmaps can be considered the outputted interpretation information); wherein the computer instructions, when executed by a computer, further cause the computer to perform: acquiring a jth layer of the neural network model corresponding to the output data, wherein j is equal to N, and N is a total number of layers of the neural network model in the neural network model (Bach, page 5, ¶2 “The top layer consists of one output neuron, indexed by 7. For each neuron i we would like to compute a relevance Ri” Here, the top layer of an N-layered network would be layer N, and thus the selected top layer can be considered the jth layer); acquiring a target concept in the jth layer of the neural network model (Bach, page 5, ¶4 “Note that neuron 7 has no incoming messages anyway. Instead its relevance is defined as R_7^(3) = f(x).” In light of the specification, the output, f(x), can be considered the target concept. Specification, page 6, lines 19-20 “Specifically, the last layer of network corresponding to the output data can be acquired, and each candidate concept in the last layer can be taken as a target concept”); acquiring quantitative relationships between candidate concepts in an ith layer of network and the target concept, respectively, wherein i is equal to j minus 1 (Bach, page 5, ¶1 “The underlying Formula (2) can be interpreted as a conservation law for the relevance R in between layers of the feature processing.” In light of the specification, a nonlinear univariate function, such as Formula (2), can be considered the quantitative relationships. Specification, page 6, lines 6-8 “It is to be noted that quantitative relationships can be used to quantify relationships between explicit concepts with different physical meanings. In some embodiments, the quantitative relationship in the embodiments can be a univariate nonlinear function.”); determining a target concept in the ith layer of the neural network model according to the candidate concepts in the ith layer of the neural network model and the quantitative relationships (Bach, page 5, ¶4 “Secondly, we define the relevance of any neuron except neuron 7 as the sum of incoming messages: R_i^(l) = Σ_k R_{i←k}^(l,l+1). For example R_3^(1) = R_{3←5}^(1,2) + R_{3←6}^(1,2).”); subtracting 1 from j, and executing acquiring the target concept in the jth layer of the neural network model when j is greater than 2 (Bach, page 4, ¶3 “Iterating Eq (2) from the last layer which is the classifier output f(x) down to the input layer x consisting of image pixels then yields the desired Eq (1).”); and generating the key inference path according to the target concepts in the layers of the neural network model when j is equal to 2 (Bach, page 42, ¶3 “The second one, coined layer-wise relevance propagation, applies a propagation rule that distributes class relevance found at a given layer onto the previous layer. The layer-wise propagation rule was applied iteratively from the output back to the input, thus, forming another possible pixelwise decomposition” Here, the pixelwise decomposition can be considered the inference path).

Regarding claims 15-18: Claims 15-18 are rejected under the same rationale as claims 3-6, respectively.

Response to Arguments

Applicant's arguments filed December 23rd, 2025 (hereinafter “Remarks”) have been fully considered but they are not persuasive. Applicant’s arguments regarding the 35 U.S.C. 112(b) rejections of the previous Office Action have been fully considered and are persuasive. The rejections have been withdrawn due to claim amendments.

Regarding the rejections under 35 U.S.C.
§ 101:

Argument 1: “Amended independent claim 1 clearly specifies its practical application scenario as object classification.” (Remarks, page 11, ¶6).

Examiner’s Response: Examiner respectfully disagrees. Applicant’s arguments rely on language recited solely in the preamble of claim 1. When reading the preamble in the context of the entire claim, the recitation “the neural network model is applied to classify objects” is not limiting because the body of the claim describes a complete invention and the language recited solely in the preamble does not provide any distinct definition of any of the claimed invention’s limitations. Thus, the preamble of the claim(s) is not considered a limitation and is of no significance to claim construction. See Pitney Bowes, Inc. v. Hewlett-Packard Co., 182 F.3d 1298, 1305, 51 USPQ2d 1161, 1165 (Fed. Cir. 1999); see MPEP § 2111.02. Further, the classification of objects is a mental process that can practically be performed in the human mind, for instance classifying observed objects into various classes, and the use of a neural network can be considered an additional element that amounts to adding the words “apply it” (or an equivalent) to the judicial exception, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. See MPEP § 2106.05(f)(2).

Argument 2: “Amended independent claim 1 further states that the data input into the neural network model is attribute information of the objects.” (Remarks, page 11, ¶6).

Examiner’s Response: Examiner respectfully disagrees. The MPEP states, “Another consideration when determining whether a claim integrates the judicial exception into a practical application in Step 2A Prong Two or recites significantly more in Step 2B is whether the additional elements add more than insignificant extra-solution activity to the judicial exception. The term “extra-solution activity” can be understood as activities incidental to the primary process or product that are merely a nominal or tangential addition to the claim. Extra-solution activity includes both pre-solution and post-solution activity. An example of pre-solution activity is a step of gathering data for use in a claimed process, e.g., a step of obtaining information about credit card transactions, which is recited as part of a claimed process of analyzing and manipulating the gathered information by a series of steps in order to detect whether the transactions were fraudulent.” See MPEP § 2106.05(g). The limitation as recited is an additional element that amounts to adding insignificant extra-solution activity to the judicial exception. Therefore, the claim remains rejected under 35 U.S.C. § 101.

Regarding the rejections under 35 U.S.C. § 102(a)(1):

Argument 3: “Claims 1-18 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by Bach (US 2018/0018553A1)…Applicant notes that the Office's citations to Bach are somewhat confusing. For example, on page 15 of the Office Action, the Office cites Bach's “page 5, ¶2” but the quoted passage appears to refer to paragraph [0097].” (Remarks, page 13, ¶10).

Examiner’s Response: Claims 1-18 were rejected as being anticipated by Bach et al., “On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation,” 10 July 2015, while Bach et al. (US 2018/0018553 A1) was cited as prior art made of record and not relied upon. Due to a missing header, the rejection under 35 U.S.C. § 102 was unclear, and accordingly this action is made non-final.

Argument 4: “In the present disclosure, “the jth layer” varies with iteration and is not a fixed network layer; it only initially equals the total number of network layers at the start of iteration. Thus, the top layer discussed in Bach's paragraph [0097] cannot be equated with “the jth layer” in the present application. Therefore, Bach does not disclose “acquiring a jth layer of the neural network model corresponding to the output data, wherein j is equal to N, and N is a total number of layers of the neural network model in the neural network model” as recited in amended independent claim 1.” (Remarks, page 13, ¶11-12).

Examiner’s Response: Examiner respectfully disagrees. As claimed, the output layer of an N-layered network would be layer N, and thus the selected top layer can be considered the jth layer. See Bach, page 5, ¶2: “The top layer consists of one output neuron, indexed by 7. For each neuron i we would like to compute a relevance Ri”.

Argument 5: “The cited disclosure only involves “nonlinear functions” but does not relate to control content regarding the initiation of iteration. The present disclosure acquires “quantitative relationships” of the ith layer of the neural network model, where i is equal to j minus 1. This demonstrates that the iterative process of the present disclosure proceeds backwards from the last network layer, whereas Bach does not disclose any information related to such an iterative process. Therefore, Bach does not disclose “acquiring quantitative relationships between candidate concepts in an ith layer of the neural network model and the target concept, respectively, wherein i is equal to j minus 1” as recited in amended independent claim 1.” (Remarks, page 14, ¶2).

Examiner’s Response: Examiner respectfully disagrees. As shown in Equation (2), quantitative relationships are acquired from each layer iteratively, starting at the output layer and continuing backwards. Here, R_d^(l) corresponds to layer i, since layer (l+1) (which corresponds to j) minus 1 is layer l.

Argument 6: “Bach does not disclose the jump step in the iterative process. Therefore, Bach does not disclose “subtracting 1 from j, and executing acquiring the target concept in the jth layer of the neural network model when j is greater than 2” as recited in amended independent claim 1.” (Remarks, page 14, ¶5).

Examiner’s Response: Examiner respectfully disagrees. As shown in Equation (2), 1 is subtracted from j (the current layer) iteratively back to the first layer, thus showing acquiring the target concept in the jth layer of the neural network model when j is greater than 2.

Argument 7: “Bach fails to disclose an iteration stopping condition, let alone the specific iteration stopping condition of stopping when j equals 2. Thus, Bach does not disclose “generating the key inference path according to the target concepts in the layers of the neural network model when j is equal to 2” as recited in amended claim 1.” (Remarks, page 15, ¶1-2).

Examiner’s Response: Examiner respectfully disagrees. As disclosed in Bach, page 42, ¶3 “The second one, coined layer-wise relevance propagation, applies a propagation rule that distributes class relevance found at a given layer onto the previous layer. The layer-wise propagation rule was applied iteratively from the output back to the input, thus, forming another possible pixelwise decomposition” Here, by applying the decomposition back to the input, the path is generated for the layers of the neural network model when j equals 2.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Bach et al. (US 2018/0018553 A1) teaches that relevance score assignment to a set of items, onto which an artificial neural network is applied, is obtained by redistributing an initial relevance score derived from the network output onto the set of items by reversely propagating the initial relevance score through the artificial neural network so as to obtain a relevance score for each item. In particular, this reverse propagation is applicable to a broader set of artificial neural networks and/or at lower computational effort by performing same in a manner such that, for each neuron, preliminarily redistributed relevance scores of a set of downstream neighbor neurons of the respective neuron are distributed on a set of upstream neighbor neurons of the respective neuron according to a distribution function.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACOB Z SUSSMAN MOSS, whose telephone number is (571) 272-1579. The examiner can normally be reached Monday - Friday, 9 a.m. - 5 p.m. ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki, can be reached at (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.S.M./
Examiner, Art Unit 2122

/KAKALI CHAKI/
Supervisory Patent Examiner, Art Unit 2122

Prosecution Timeline

Nov 29, 2022
Application Filed
Sep 18, 2025
Non-Final Rejection — §101, §102
Dec 23, 2025
Response Filed
Feb 20, 2026
Non-Final Rejection — §101, §102 (current)


Prosecution Projections

2-3
Expected OA Rounds
14%
Grant Probability
-6%
With Interview (-20.0%)
3y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 7 resolved cases by this examiner. Grant probability derived from career allow rate.
