Prosecution Insights
Last updated: April 19, 2026
Application No. 17/684,635

ARTIFICIAL INTELLIGENCE WITH EXPLAINABILITY INSIGHTS

Non-Final OA (§101, §103)
Filed
Mar 02, 2022
Examiner
HAN, JOSEP
Art Unit
2122
Tech Center
2100 — Computer Architecture & Software
Assignee
Cisco Technology Inc.
OA Round
3 (Non-Final)
38%
Grant Probability
At Risk
3-4
OA Rounds
3y 11m
To Grant
62%
With Interview

Examiner Intelligence

Grants only 38% of cases
38%
Career Allow Rate
6 granted / 16 resolved
-17.5% vs TC avg
Strong +25% interview lift
Without
With
+25.0%
Interview Lift
resolved cases with interview
Typical timeline
3y 11m
Avg Prosecution
33 currently pending
Career history
49
Total Applications
across all art units
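The headline figures in the panel above follow from simple arithmetic on the examiner's career counts. A minimal sketch (Python), using the counts shown (6 granted of 16 resolved) and treating the Tech Center average as a given input — the 55.0% value below is an assumption chosen to reproduce the displayed gap, not a figure from the panel:

```python
# Reproduce the examiner-intelligence headline figures from raw counts.
# The counts (6 granted / 16 resolved) come from the panel above; the
# Tech Center average used for the delta is an assumed input here.

granted, resolved = 6, 16

# Career allow rate: share of resolved cases that were granted.
allow_rate = 100 * granted / resolved          # 37.5, shown as 38%

# Delta versus the Tech Center average (tc_avg assumed; 55.0% yields
# the -17.5 point gap shown in the panel).
tc_avg = 55.0
delta_vs_tc = allow_rate - tc_avg              # -17.5

print(f"{allow_rate:.0f}% career allow rate ({granted} / {resolved})")
print(f"{delta_vs_tc:+.1f}% vs TC avg")
```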

Statute-Specific Performance

§101
33.4%
-6.6% vs TC avg
§103
37.8%
-2.2% vs TC avg
§102
18.3%
-21.7% vs TC avg
§112
9.9%
-30.1% vs TC avg
Black line = Tech Center average estimate • Based on career data from 16 resolved cases
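The four statute rows above are internally consistent: subtracting each displayed delta from its allow rate recovers the same Tech Center baseline (the "black line"). A quick check in Python, using only the figures shown in the panel:

```python
# Each panel row shows (examiner allow rate, delta vs TC average).
# Recovering the implied baseline rate - delta from each pair should
# give the same number if the panel uses a single TC-average estimate.

panel = {
    "§101": (33.4, -6.6),
    "§103": (37.8, -2.2),
    "§102": (18.3, -21.7),
    "§112": (9.9, -30.1),
}

baselines = {s: round(rate - delta, 1) for s, (rate, delta) in panel.items()}
print(baselines)  # every statute implies the same 40.0% baseline
```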

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Detailed Action

The following action is in response to the communication(s) received on 12/12/2025. As of the claims filed 12/12/2025: Claims 1-20 are pending. Claims 1-11 and 18-20 have been amended. Claims 1, 11, and 20 are independent claims.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114, and Applicant's submission filed 12/12/2025 has been entered.

Response to Arguments

Applicant's arguments filed 12/12/2025 have been fully considered, but are not fully persuasive. With respect to the rejection under 35 USC § 101, Applicant asserts the claims do not recite mathematical concepts (p.10 last ¶). Examiner respectfully submits that the abstract ideas were not identified as mathematical concepts, but rather as evaluations or judgments that can be performed in the human mind. Although Applicant also asserts the limitations require operating within the models' learned representation spaces, such representation spaces are not recited in the claims to warrant such a requirement; thus, this cannot be read into the claims as currently recited. Applicant further asserts that a human cannot perform the claimed steps in the human mind (p.11 1st ¶). Examiner respectfully disagrees. There are not enough details of the construction of the Boolean rules to suggest that the differentiable neural logic model is necessary for the abstract idea. Applicant further asserts that the claim addresses a technical problem.
However, explaining AI, as currently recited, is merely another abstract idea and not a technology that is being improved. Additionally, the capturing and displaying limitations are not the crux of the invention. The capturing of the vectors is not recited to "fix the semantics of the comparison". The displaying also does not include the "generation of human-readable semantics from internal model representations"; currently, this limitation merely requires the method to display the abstract ideas previously performed.

Applicant further asserts that the claims integrate the exceptions into a practical application through capturing the vectors in a specific model component and selecting a representative sample to output the Boolean rules derived from the feature vectors (p.11 ¶3). Examiner respectfully submits that, as currently recited, the Boolean rules are merely constructed and displayed, and explainable AI is merely an abstract idea as mentioned above. Applicant further asserts that the claims recite methods of outputting the Boolean rules that are not well-understood, routine, or conventional (p.11 last ¶). Examiner respectfully disagrees, as there are not enough details in the construction of the Boolean rules to suggest something significantly more than constructing Boolean rules using the vectors. The instant claims therefore remain directed to abstract ideas and are not patent eligible.

With respect to the rejection under 35 USC § 102, Applicant asserts Chen does not teach the trainable, differentiable neural logic model and constructing one or more Boolean rules that represent the one or more feature vectors used to make the inference regarding the input data (p.12 last ¶).
This argument has been considered but is moot in view of the new ground of rejection under 35 USC § 103 over Chen in view of Payani ([p.2 left last ¶] [p.8 right ¶1]), where the differentiable neural-logic ILP solver corresponds to the trainable, differentiable neural logic model. Learning the predicates Color and isCircle using the CNN corresponds to constructing the Boolean rules that represent the one or more feature vectors. Thus, Chen/Payani continues to teach the methods and limitations recited in the claims.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claim 1 recites a…method, thus a process, one of the four statutory categories of patentable subject matter (Step 1). However, Claim 1 further recites: making an inference regarding input data, which is an evaluation or judgement that can be performed in the human mind; selecting, based at least in part on the one or more feature vectors, a representative sample from a training dataset used to train the artificial intelligence model, which is an evaluation or judgement that can be performed in the human mind; wherein selecting comprises comparing the one or more feature vectors to stored feature vectors…, which is an evaluation or judgement that can be performed in the human mind; and constructing…one or more Boolean rules that correspond to the one or more feature vectors, respectively, where the one or more Boolean rules represent the one or more feature vectors used to make the inference regarding the input data, which is an evaluation or judgement that can be performed in the human mind.
Thus, the claim recites an abstract idea under Step 2A Prong 1.

Under Step 2A Prong 2, the claim does not include any additional elements which integrate the abstract idea into a practical application, since the additional elements consist of: …computer-implemented; …using an artificial intelligence model, as the performance of an abstract idea on a computer is not more than instructions to "apply it" on a computer, which by MPEP 2106.05(f) cannot integrate an abstract idea into a practical application; receiving input data from a user, which is merely an insignificant extra-solution activity of data gathering, which by MPEP 2106.05(g) cannot integrate an abstract idea into a practical application; capturing one or more feature vectors... to make the inference, which is merely an insignificant extra-solution activity of data gathering, which by MPEP 2106.05(g) cannot integrate an abstract idea into a practical application; …from a predetermined feature layer of the artificial intelligence model that were used by the artificial intelligence model, which merely specifies the particular field of use or particular technological environment in which the abstract idea is to be performed, which by MPEP 2106.05(h) cannot integrate the abstract idea into a practical application; causing the representative sample, the inference, and the one or more Boolean rules to be concurrently displayed on a display device associated with the user, which is merely an insignificant extra-solution activity of data output, which by MPEP 2106.05(g) cannot integrate an abstract idea into a practical application; associated with respective samples of the training dataset in a feature-vector store generated from the training dataset, which merely specifies the particular field of use or particular technological environment in which the abstract idea is to be performed, which by MPEP 2106.05(h) cannot integrate the abstract idea into a practical application;
and selecting the representative sample based on which of the stored feature vectors the artificial intelligence model considers to be similar to the one or more feature vectors, which merely specifies the particular field of use or particular technological environment in which the abstract idea is to be performed, which by MPEP 2106.05(h) cannot integrate the abstract idea into a practical application; and using a trainable, differentiable neural logic model, as the performance of an abstract idea on a computer is not more than instructions to "apply it" on a computer, which by MPEP 2106.05(f) cannot integrate an abstract idea into a practical application.

Thus, the claim is directed towards an abstract idea. Further, the additional elements, alone or in combination, do not provide significantly more than the abstract idea itself, because the particular field of use or particular technological environment (MPEP 2106.05(h)), implementation on a computer (MPEP 2106.05(f)), and the activity of data gathering/output (MPEP 2106.05(g)) cannot provide significantly more, as storing and retrieving information in memory is well understood, routine, and conventional (MPEP 2106.05(d)(II)(iv)), receiving or transmitting data over a network is well understood, routine, and conventional (MPEP 2106.05(d)(II)(i)), and the combination of additional elements does not provide an inventive concept. Thus, the claim is ineligible.

Claim 2, dependent upon Claim 1, further recites no additional abstract ideas. However: Under Step 2A Prong 2, the claim does not include any additional elements which integrate the abstract idea into a practical application, since the additional elements consist of: the artificial intelligence model comprises a classifier, which is a mere detail of the performance of an abstract idea on a computer, which is not more than instructions to "apply it" on a computer, which by MPEP 2106.05(f) cannot integrate an abstract idea into a practical application.
Thus, the claim is directed towards an abstract idea. Further, under Step 2B, the additional element does not provide significantly more than the abstract idea itself, because implementation on a computer (MPEP 2106.05(f)) cannot provide significantly more. Thus, the claim is ineligible.

Claim 3, dependent upon Claim 1, further recites the input data comprises an image, which is merely a detail of an abstract idea (making an inference regarding input data). Thus, the claim recites an abstract idea under Step 2A Prong 1. Under Step 2A Prong 2 and 2B, the claim does not recite any new additional elements which could integrate the abstract idea into a practical application or provide significantly more than the abstract idea itself. Thus, the claim is ineligible.

Claim 4, dependent upon Claim 1, further recites no additional abstract ideas. However: Under Step 2A Prong 2, the claim does not include any additional elements which integrate the abstract idea into a practical application, since the additional elements consist of: the artificial intelligence model comprises a neural network, which is a mere detail of the performance of an abstract idea on a computer, which is not more than instructions to "apply it" on a computer, which by MPEP 2106.05(f) cannot integrate an abstract idea into a practical application. Thus, the claim is directed towards an abstract idea. Further, under Step 2B, the additional element does not provide significantly more than the abstract idea itself, because implementation on a computer (MPEP 2106.05(f)) cannot provide significantly more. Thus, the claim is ineligible.

Claim 5, dependent upon Claim 1, further recites selecting the representative sample comprises: determining a distance between one or more feature vectors associated with the representative sample to the one or more feature vectors used by the artificial intelligence model to make the inference, which is an evaluation or judgement that can be performed in the human mind.
Thus, the claim recites an abstract idea under Step 2A Prong 1. Under Step 2A Prong 2 and 2B, the claim does not recite any new additional elements which could integrate the abstract idea into a practical application or provide significantly more than the abstract idea itself. Thus, the claim is ineligible.

Claim 6, dependent upon Claim 5, further recites clustering feature vectors associated with the training dataset, which is an evaluation or judgement that can be performed in the human mind. Thus, the claim recites an abstract idea under Step 2A Prong 1. Under Step 2A Prong 2, the claim does not include any additional elements which integrate the abstract idea into a practical application, since the additional elements consist of: the one or more feature vectors associated with the representative sample are captured during training of the artificial intelligence model, which is merely an insignificant extra-solution activity of data gathering, which by MPEP 2106.05(g) cannot integrate an abstract idea into a practical application. Thus, the claim is directed towards an abstract idea. Further, under Step 2B, the additional element does not provide significantly more than the abstract idea itself, because the activity of data gathering (MPEP 2106.05(g)) cannot provide significantly more, as storing and retrieving information in memory is well understood, routine, and conventional (MPEP 2106.05(d)(II)(iv)). Thus, the claim is ineligible.

Claim 7, dependent upon Claim 5, further recites no additional abstract ideas.
However: Under Step 2A Prong 2, the claim does not include any additional elements which integrate the abstract idea into a practical application, since the additional elements consist of: configuring one or more neural network layers of the artificial intelligence model to capture them when the representative sample is used as input to the artificial intelligence model, as the performance of an abstract idea on a computer is not more than instructions to "apply it" on a computer, which by MPEP 2106.05(f) cannot integrate an abstract idea into a practical application; the one or more feature vectors associated with the representative sample are captured during training of the artificial intelligence model, which is merely an insignificant extra-solution activity of data gathering, which by MPEP 2106.05(g) cannot integrate an abstract idea into a practical application. Thus, the claim is directed towards an abstract idea. Further, the additional elements, alone or in combination, do not provide significantly more than the abstract idea itself, because implementation on a computer (MPEP 2106.05(f)) and the activity of data gathering (MPEP 2106.05(g)) cannot provide significantly more, as storing and retrieving information in memory is well understood, routine, and conventional (MPEP 2106.05(d)(II)(iv)) and the combination of additional elements does not provide an inventive concept. Thus, the claim is ineligible.

Claim 8, dependent upon Claim 1, further recites no additional abstract ideas. However: Under Step 2A Prong 2, the claim does not include any additional elements which integrate the abstract idea into a practical application, since the additional elements consist of: the trainable, differentiable neural logic model comprises one or more conjunction neurons, as the performance of an abstract idea on a computer is not more than instructions to "apply it" on a computer, which by MPEP 2106.05(f) cannot integrate an abstract idea into a practical application.
Thus, the claim is directed towards an abstract idea. Further, under Step 2B, the additional element does not provide significantly more than the abstract idea itself, because implementation on a computer (MPEP 2106.05(f)) cannot provide significantly more. Thus, the claim is ineligible.

Claim 9, dependent upon Claim 8, further recites construct a conjunction of a subset of the one or more feature vectors, which is an evaluation or judgement that can be performed in the human mind. Thus, the claim recites an abstract idea under Step 2A Prong 1. Under Step 2A Prong 2, the claim does not include any additional elements which integrate the abstract idea into a practical application, since the additional elements consist of: the one or more conjunction neurons are configured to, as the performance of an abstract idea on a computer is not more than instructions to "apply it" on a computer, which by MPEP 2106.05(f) cannot integrate an abstract idea into a practical application. Thus, the claim is directed towards an abstract idea. Further, under Step 2B, the additional element does not provide significantly more than the abstract idea itself, because implementation on a computer (MPEP 2106.05(f)) cannot provide significantly more. Thus, the claim is ineligible.

Claim 10, dependent upon Claim 8, further recites no additional abstract ideas. However: Under Step 2A Prong 2, the claim does not include any additional elements which integrate the abstract idea into a practical application, since the additional elements consist of: the trainable, differentiable neural logic model further comprises at least one disjunction neuron, as the performance of an abstract idea on a computer is not more than instructions to "apply it" on a computer, which by MPEP 2106.05(f) cannot integrate an abstract idea into a practical application. Thus, the claim is directed towards an abstract idea.
Further, under Step 2B, the additional element does not provide significantly more than the abstract idea itself, because implementation on a computer (MPEP 2106.05(f)) cannot provide significantly more. Thus, the claim is ineligible.

Claims 11-19 recite An apparatus, thus a machine, one of the four statutory categories of patentable subject matter. However, Claims 11-19 recite comprising: one or more network interfaces; a processor coupled to the one or more network interfaces and configured to execute one or more processes; and a memory configured to store a process that is executable by the processor, the process when executed configured to precisely perform the abstract ideas and additional elements of Claims 1-9, respectively. Therefore, the Step 2A Prong 1 analysis remains the same. As for Step 2A Prong 2 and Step 2B: performance on a computer cannot integrate an abstract idea into a practical application (Step 2A Prong 2) nor provide significantly more than the abstract idea itself (Step 2B) (MPEP 2106.05(f)), and thus Claims 11-19 are rejected as subject-matter ineligible for the reasons set forth in the rejections of Claims 1-9, respectively.

Claim 20 recites A tangible, non-transitory, computer-readable medium, thus an article of manufacture, one of the four statutory categories of patentable subject matter. However, Claim 20 recites storing program instructions that cause a device to execute a process comprising precisely the abstract ideas and additional elements of Claim 1. Therefore, the Step 2A Prong 1 analysis remains the same. As for Step 2A Prong 2 and Step 2B: performance on a computer cannot integrate an abstract idea into a practical application (Step 2A Prong 2) nor provide significantly more than the abstract idea itself (Step 2B) (MPEP 2106.05(f)), and thus Claim 20 is rejected as subject-matter ineligible for the reasons set forth in the rejection of Claim 1.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-20 are rejected under 35 U.S.C.
103 as being unpatentable over Chen et al., “This Looks Like That: Deep Learning for Interpretable Image Recognition” (hereinafter Chen), in view of Payani et al., “Incorporating Relational Background Knowledge into Reinforcement Learning via Differentiable Inductive Logic Programming” (hereinafter Payani).

Regarding Claim 1, Chen teaches: A computer-implemented method comprising: receiving input data from a user; making an inference regarding the input data using an artificial intelligence model; (Chen [Abstract] The mounting evidence for each of the classes helps us make our final decision. In this work, we introduce a deep network architecture – prototypical part network (ProtoPNet), that reasons in a similar way: the network dissects the image by finding prototypical parts, and combines evidence from the prototypes to make a final classification. [p.7 4th ¶] Figure 3 shows the reasoning process of our ProtoPNet in reaching a classification decision on a test image of a red-bellied woodpecker at the top of the figure. [image: media_image1.png]) (Note: the image of the bird classified as a red-bellied woodpecker corresponds to the input data received from the user; reaching a classification decision corresponds to making an inference)

capturing one or more feature vectors from a predetermined feature layer of the artificial intelligence model that were used by the artificial intelligence model to make the inference; (Chen [Abstract] The mounting evidence for each of the classes helps us make our final decision. In this work, we introduce a deep network architecture – prototypical part network (ProtoPNet), that reasons in a similar way: the network dissects the image by finding prototypical parts, and combines evidence from the prototypes to make a final classification.)
(Note: the prototypical parts correspond to the feature vectors used by the predetermined feature layers of the artificial intelligence model used to make the inference)

selecting, based at least in part on the one or more feature vectors, a representative sample from a training dataset used to train the artificial intelligence model, (Chen, fig. 3 [image: media_image1.png])

wherein selecting comprises comparing the one or more feature vectors to stored feature vectors, (Chen, p.7 ¶4, “Figure 3 shows the reasoning process of our ProtoPNet in reaching a classification decision on a test image of a red-bellied woodpecker at the top of the figure. Given this test image x, our model compares its latent features f(x) against the learned prototypes.”)

and selecting the representative sample based at least in part on which of the stored feature vectors the artificial intelligence model considers to be similar to the one or more feature vectors; (Chen [image: media_image1.png] [image: media_image2.png] [p.5 ¶1] This activation map preserves the spatial relation of the convolutional output, and can be upsampled to the size of the input image to produce a heat map that identifies which part of the input image is most similar to the learned prototype.)
(Note: the training image where the prototype comes from corresponds to the representative sample from the training dataset; each label in the output logits corresponds to each representative sample (in this case, the clay colored sparrow is the selected representative sample))

Chen does not teach, but Payani further teaches: constructing, using a trainable, differentiable neural logic model, one or more Boolean rules that correspond to the one or more feature vectors, respectively, where the one or more Boolean rules represent the one or more feature vectors used to make the inference regarding the input data; (Payani [p.1 right last ¶] In (Payani & Fekri, 2019) a novel ILP solver was introduced which uses Neural-Logical Network (NLN) (Payani & Fekri, 2018) for constructing a differentiable neural-logic ILP solver (dNL-ILP). [p.2 left last ¶] Each atom is created by applying an n-ary Boolean function called predicate to some constants or variables. A predicate states the relation between some variables or constants in the logic program. [p.8 right ¶1] We use the same CNN network and similar to the GridWorld experiment, we learn the state representation using predicate color(X,Y,C) (the color of each cell in the grid) as well as isCircle(X,Y) which learn if the shape of an object is circle or not.) (Note: learning the predicates color and isCircle using the CNN network corresponds to constructing Boolean rules that correspond to the feature vectors)

Payani and Chen are analogous to the present invention because both are from the same field of endeavor of CNN-based interpretation of feature vectors. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the differentiable neural-logic ILP solver from Payani into Chen’s method of selecting feature identifiers.
The motivation would be to “effectively learn relational information from image and present the state of the environment as first order logic predicates” (Payani [abstract]).

Chen, via Chen/Payani, further teaches: and causing the representative sample, the inference…to be concurrently displayed on a display device associated with the user. (Chen [image: media_image1.png] [image: media_image3.png]) (Note: fig. 4c corresponds to human-interpretable feature identifiers; the training image where the prototype comes from corresponds to the representative sample from the training dataset; the document containing the figure corresponds to the display device)

Payani, via Chen/Payani, further teaches: and the one or more Boolean rules to be concurrently displayed on a display device associated with the user (Payani [p.5, fig.4] [image: media_image4.png]) (Note: each state corresponds to each Boolean rule; the document containing the figure corresponds to the display device)

Regarding Claim 2, Chen/Payani teaches and incorporates the claimed limitations and rejections of Claim 1. Chen, via Chen/Payani, further teaches: The computer-implemented method as in claim 1, wherein the artificial intelligence model comprises a classifier. (Chen [Abstract] The mounting evidence for each of the classes helps us make our final decision. In this work, we introduce a deep network architecture – prototypical part network (ProtoPNet), that reasons in a similar way: the network dissects the image by finding prototypical parts, and combines evidence from the prototypes to make a final classification.) (Note: the ProtoPNet making a final classification corresponds to comprising a classifier.)

Regarding Claim 3, Chen/Payani teaches and incorporates the claimed limitations and rejections of Claim 1.
Chen, via Chen/Payani, further teaches: The computer-implemented method as in claim 1, wherein the input data comprises an image. (Chen [Abstract] The mounting evidence for each of the classes helps us make our final decision. In this work, we introduce a deep network architecture – prototypical part network (ProtoPNet), that reasons in a similar way: the network dissects the image by finding prototypical parts, and combines evidence from the prototypes to make a final classification.)

Regarding Claim 4, Chen/Payani teaches and incorporates the claimed limitations and rejections of Claim 1. Chen, via Chen/Payani, further teaches: The computer-implemented method as in claim 1, wherein the artificial intelligence model comprises a neural network. (Chen [Abstract] The mounting evidence for each of the classes helps us make our final decision. In this work, we introduce a deep network architecture – prototypical part network (ProtoPNet), that reasons in a similar way: the network dissects the image by finding prototypical parts, and combines evidence from the prototypes to make a final classification. [p.2 2nd ¶] Our work relates to (but contrasts with) those that perform posthoc interpretability analysis for a trained convolutional neural network (CNN).)

Regarding Claim 5, Chen/Payani teaches and incorporates the claimed limitations and rejections of Claim 1. Chen, via Chen/Payani, further teaches: The computer-implemented method as in claim 1, wherein selecting the representative sample comprises: determining a distance between one or more feature vectors associated with the representative sample to the one or more feature vectors used by the artificial intelligence model to make the inference.
(Chen [p.3 last ¶] Given a convolutional output z = f(x), the j-th prototype unit g_pj in the prototype layer g_p computes the squared L2 distances between the j-th prototype p_j and all patches of z that have the same shape as p_j, and inverts the distances into similarity scores. The result is an activation map of similarity scores whose value indicates how strong a prototypical part is present in the image.)

Regarding Claim 6, Chen/Payani teaches and incorporates the claimed limitations and rejections of Claim 5. Chen, via Chen/Payani, further teaches: The computer-implemented method as in claim 5, wherein the one or more feature vectors associated with the representative sample are captured during training of the artificial intelligence model in part by clustering feature vectors associated with the training dataset. (Chen [p.5 6th ¶] In the first training stage, we aim to learn a meaningful latent space, where the most important patches for classifying images are clustered (in L2-distance) around semantically similar prototypes of the images’ true classes, and the clusters that are centered at prototypes from different classes are well-separated. To achieve this goal, we jointly optimize the convolutional layers’ parameters w_conv and the prototypes P = {p_j, j = 1..m} in the prototype layer g_p using SGD, while keeping the last layer weight matrix w_h fixed.)

Regarding Claim 7, Chen/Payani teaches and incorporates the claimed limitations and rejections of Claim 5. Chen, via Chen/Payani, further teaches: The computer-implemented method as in claim 5, wherein the one or more feature vectors associated with the representative sample are captured during training of the artificial intelligence model in part by configuring one or more neural network layers of the artificial intelligence model to capture them when the representative sample is used as input to the artificial intelligence model.
(Chen [p.3 last ¶] The network learns m prototypes P = {p_j}_{j=1}^m, whose shape is H1 × W1 × D with H1 ≤ H and W1 ≤ W. In our experiments, we used H1 = W1 = 1. Since the depth of each prototype is the same as that of the convolutional output but the height and the width of each prototype is smaller than those of the whole convolutional output, each prototype will be used to represent some prototypical activation pattern in a patch of the convolutional output, which in turn will correspond to some prototypical image patch in the original pixel space. Hence, each prototype pj can be understood as the latent representation of some prototypical part of some bird image in this case study.) (Note: the prototypical part corresponds to the representative sample.)

Regarding Claim 8, Chen/Payani respectively teaches and incorporates the claimed limitations and rejections of Claim 1. Payani, via Chen/Payani, further teaches: The computer-implemented method as in claim 1, wherein the trainable, differentiable neural logic model comprises one or more conjunction neurons. (Payani [p.3 left ¶1] Likewise, a neural disjunction function fdisj(xn) can be defined using the auxiliary function Fd with the truth table as in Fig. 2b. By cascading a layer of N neural conjunction functions with a layer of N neural disjunction functions, we can construct a differentiable function to be used for representing and learning a Boolean Disjunctive Normal Form (DNF).)

Regarding Claim 9, Chen/Payani respectively teaches and incorporates the claimed limitations and rejections of Claim 8. Payani, via Chen/Payani, further teaches: The computer-implemented method as in claim 8, wherein the one or more conjunction neurons are configured to construct a conjunction of a subset of the one or more feature vectors. (Payani [p.2 right last ¶] In (Payani & Fekri, 2019), a novel approach was introduced to alleviate the above limitation and to allow for learning arbitrary complex predicate formulas.
The main idea behind this approach is to use multiplicative neurons (Payani & Fekri, 2018) that are capable of learning and representing Boolean logic. [p.8 right ¶1] We use the same CNN network and, similar to the GridWorld experiment, we learn the state representation using predicate color(X,Y,C) (the color of each cell in the grid) as well as isCircle(X,Y) which learn if the shape of an object is circle or not.) (Note: learning the predicates color and isCircle using the CNN network corresponds to constructing Boolean rules that correspond to the feature vectors.)

Regarding Claim 10, Chen/Payani respectively teaches and incorporates the claimed limitations and rejections of Claim 8. Payani, via Chen/Payani, further teaches: The computer-implemented method as in claim 8, wherein the trainable, differentiable neural logic model further comprises at least one disjunction neuron. (Payani [p.3 left ¶1] Likewise, a neural disjunction function fdisj(xn) can be defined using the auxiliary function Fd with the truth table as in Fig. 2b. By cascading a layer of N neural conjunction functions with a layer of N neural disjunction functions, we can construct a differentiable function to be used for representing and learning a Boolean Disjunctive Normal Form (DNF).)

Independent Claim 11 recites An apparatus, comprising: one or more network interfaces; a processor coupled to the one or more network interfaces and configured to execute one or more processes; and a memory configured to store a process that is executable by the processor, the process when executed configured to (Chen [p.3 8th line] In contrast, our ProtoPNet uses a specialized neural network architecture for feature extraction and prototype learning, and can be trained in an end-to-end fashion. [p.9 4th to last line] Supplementary Material and Code: The supplementary material and code are available at https://github.com/cfchen-duke/ProtoPNet.) to perform precisely the methods of Claim 1.
Thus, Claim 11 is rejected for reasons set forth in Claim 1. (Note: training ProtoPNet requires a processor and memory; running the code stored on GitHub requires a network interface.)

Claims 12-19, dependent on Claim 11, also recite the apparatus configured to perform precisely the methods of Claims 2-9, respectively, and thus are rejected for reasons set forth in these claims.

Independent Claim 20 recites A tangible, non-transitory, computer-readable medium storing program instructions that cause a device to execute a process comprising (Chen [p.3 8th line] In contrast, our ProtoPNet uses a specialized neural network architecture for feature extraction and prototype learning, and can be trained in an end-to-end fashion.) to perform precisely the methods of Claim 1. Thus, Claim 20 is rejected for reasons set forth in Claim 1. (Note: training ProtoPNet requires a processor and memory.)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEP HAN, whose telephone number is (703) 756-1346. The examiner can normally be reached Mon-Fri 9am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kakali Chaki, can be reached at (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.H./
Examiner, Art Unit 2122

/KAKALI CHAKI/
Supervisory Patent Examiner, Art Unit 2122
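For readers unpacking the Chen citations above, the prototype layer's core operation (computing squared L2 distances between a prototype and every patch of the convolutional output, then inverting those distances into similarity scores) can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the ProtoPNet implementation: the function name `prototype_similarity`, the `eps` constant, and the log-based inversion are assumptions chosen so that smaller distances yield larger similarities, as the quoted passage describes.

```python
import numpy as np

def prototype_similarity(patches, prototype, eps=1e-4):
    """Sketch of one prototype unit g_p_j from the cited Chen reference.

    patches:   (N, D) array, the N flattened patches of z = f(x) that
               share the prototype's shape
    prototype: (D,) array, a single prototype p_j
    Returns (distances, similarities), both of shape (N,).
    """
    # Squared L2 distance from the prototype to each patch.
    d = np.sum((patches - prototype) ** 2, axis=1)
    # Monotone inversion: distance 0 maps to the largest score, and the
    # score decays as the patch moves away from the prototype.  (The exact
    # inversion used by ProtoPNet is an assumption here.)
    sim = np.log((d + 1.0) / (d + eps))
    return d, sim
```

Collecting the similarity scores over all patch positions yields the "activation map" the quote refers to; max-pooling that map would indicate how strongly the prototypical part is present anywhere in the image.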
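The differentiable conjunction and disjunction neurons quoted from Payani can likewise be sketched. The multiplicative membership form below follows the spirit of the dNL construction (a learned membership weight per input decides whether that input participates in the conjunction or disjunction); the helper names `conj_neuron` and `disj_neuron` and the exact parameterization are assumptions for illustration, not Payani's code.

```python
import numpy as np

def _membership(w):
    # Sigmoid maps each raw weight to a soft membership in [0, 1].
    return 1.0 / (1.0 + np.exp(-w))

def conj_neuron(x, w):
    """Differentiable conjunction over fuzzy inputs x in [0, 1].

    With m_i = sigmoid(w_i): output = prod_i (1 - m_i * (1 - x_i)).
    If m_i -> 1 the neuron requires x_i to be true; if m_i -> 0 the
    input is effectively dropped from the conjunction.
    """
    m = _membership(w)
    return float(np.prod(1.0 - m * (1.0 - np.asarray(x))))

def disj_neuron(x, w):
    """Differentiable disjunction: output = 1 - prod_i (1 - m_i * x_i)."""
    m = _membership(w)
    return 1.0 - float(np.prod(1.0 - m * np.asarray(x)))
```

Cascading a layer of conjunction neurons into a layer of disjunction neurons, as the quoted passage states, yields a differentiable representation of a Boolean DNF: with large positive weights on the chosen literals, a conjunction neuron approximates one AND-clause and the disjunction neuron approximates the OR over clauses, while remaining trainable by gradient descent.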

Prosecution Timeline

Mar 02, 2022
Application Filed
Apr 15, 2025
Non-Final Rejection — §101, §103
Jul 14, 2025
Interview Requested
Jul 21, 2025
Applicant Interview (Telephonic)
Jul 21, 2025
Examiner Interview Summary
Aug 21, 2025
Response Filed
Sep 02, 2025
Final Rejection — §101, §103
Oct 28, 2025
Interview Requested
Dec 03, 2025
Applicant Interview (Telephonic)
Dec 03, 2025
Examiner Interview Summary
Dec 12, 2025
Request for Continued Examination
Dec 21, 2025
Response after Non-Final Action
Mar 24, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585965
INTERACTIVE MACHINE-LEARNING FRAMEWORK
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on the most recent grant.


Prosecution Projections

3-4
Expected OA Rounds
38%
Grant Probability
62%
With Interview (+25.0%)
3y 11m
Median Time to Grant
High
PTA Risk
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
