Prosecution Insights
Last updated: April 19, 2026
Application No. 18/214,377

TRAINING A LOGICAL NEURAL NETWORK WITH A PRUNED LIST OF PREDICATES

Status: Non-Final OA (§101, §103)
Filed: Jun 26, 2023
Examiner: SMITH, SEAN THOMAS
Art Unit: 2659
Tech Center: 2600 — Communications
Assignee: International Business Machines Corporation
OA Round: 3 (Non-Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 8m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (5 granted / 6 resolved), +21.3% vs TC avg, above average
Interview Lift: +33.3% across resolved cases with an interview
Typical Timeline: 2y 8m average prosecution; 37 applications currently pending
Career History: 43 total applications across all art units

Statute-Specific Performance

§101: 27.9% (-12.1% vs TC avg)
§103: 50.7% (+10.7% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 8.6% (-31.4% vs TC avg)
Tech Center averages shown are estimates. Figures are based on career data from 6 resolved cases.

Office Action

Grounds of rejection: §101, §103
DETAILED ACTION

This communication is in response to the Amendments and Arguments filed on December 29, 2025. Claims 1, 7-8, 10, 16-17 and 19 have been amended; claims 1-20 are pending and have been examined. All previous objections/rejections not mentioned in this Office Action have been withdrawn by the Examiner.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendments and Arguments

With respect to rejections made under 35 U.S.C. 101, Applicant argues, "Specifically, claim 1 was amended to include limitations of dependent claim 8 which are indicated in the FOA at 8 to be allowable over the prior art and not subject to the 35 U.S.C. section 101 rejection" (page 11 of Remarks). Applicant's argument is not persuasive. The limitations of claim 1 as amended (as well as the similar recitations of claims 10 and 19) still lack recitation of a practical application, and describe a generic computer deployed to carry out steps that may be embodied by a human actor. Without each and every limitation of the previously allowable claim 8 included, the rejections under 35 U.S.C. 101 are maintained.

With respect to rejections made under 35 U.S.C. 103, Applicant argues, "Particularly, claim 1 has been amended to require limitation of dependent claim 8 which are indicated in the FOA at 8 to be allowable over the prior art and not subject to the 35 U.S.C. section 101 rejection" (page 12 of Remarks). As with the argument under 35 U.S.C. 101 above, without each and every limitation of the allowable claim included, claim 1 stands rejected over the prior art. Further details are provided below.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a mental process that may be performed in the human mind or with the aid of pen and paper. This judicial exception is not integrated into a practical application because the recited generic computer elements do not add a meaningful limitation to the abstract idea, and amount to simply applying the idea on a computer. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because none of the elements precludes the performance of a mental process.
Regarding claim 1, the claim recites: "A computer-implemented method, comprising: extracting predicates from a predetermined plurality of sentences; causing an explainer component to analyze the sentences to determine attentions from the predicates of the sentences, wherein the attentions are based on different predetermined classes of words; causing the extracted predicates to be input into a predetermined pruner model, wherein the pruner model is trained to use the attentions to generate a pruned list of predicates from the extracted predicates, wherein a subset of the extracted predicates are not included in the pruned list of predicates; causing a logical neural network to be trained using the pruned list of predicates before deployment of the logical neural network as a service, wherein the subset of the extracted predicates are not used in the training of the logical neural network to thereby reduce a training time and processing workload associated with the training of the logical neural network; and deploying the trained logical neural network as the service in a computing environment, wherein an architecture of the logical neural network includes different logical AND gates for the different predetermined classes of words, wherein the architecture of the logical neural network includes an exclusive logical OR gate for mutually exclusive class(es) of the predetermined classes of words."

The limitations of "extracting predicates…," "causing the extracted predicates… to generate a pruned list of predicates…," "causing a logical neural network to be trained…" and "deploying the trained logical neural network…" as drafted cover mental activities which can be performed in the mind or with the aid of pen and paper. Taken individually, or as a whole, these limitations describe acts which are equivalent to human mental work of identifying and sorting keywords or topics. Under the broadest reasonable interpretation, the neural network and pruner model are carrying out steps that may be performed by an individual, but embodied by generic computer elements. With that interpretation, these limitations may be embodied by an individual picking keywords out of a document, learning patterns, and performing some service in a computing environment, like answering questions or identifying similar keywords. The limitations of "the logical neural network includes different logical AND gates…" and "the logical neural network includes an exclusive logical OR gate…" broadly describe basic implementations of logical functions, which may be carried out by an individual as part of a mental process, or otherwise as well-understood building blocks of logical processes. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the steps of the claimed invention can be performed mentally, and no additional features in the claims would preclude them from being performed as such. Accordingly, the claim is directed to an abstract idea without significantly more. The claim is not patent eligible.
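For orientation, the claim-1 pipeline can be pictured as a short script. The following is a minimal sketch only; every function, variable and lexicon here is invented for illustration and is not taken from the application, the Office Action, or the cited references.

```python
# Hypothetical sketch of the claim-1 pipeline (all names invented for illustration).
CLASS_LEXICON = {"durability": {"last", "crack", "sturdy"}}  # assumed toy word classes

def extract_predicates(sentences):
    # Stand-in for predicate extraction (the claims use an AMR parser for this step).
    return {w.strip(".,").lower() for s in sentences for w in s.split()}

def explainer_attentions(sentences):
    # Stand-in for the explainer component: pick words tied to the word classes.
    words = extract_predicates(sentences)
    return {w for w in words if any(w in lex for lex in CLASS_LEXICON.values())}

def prune_predicates(predicates, attentions):
    # Keep only predicates linked to at least one attention; the rest form the
    # "subset ... not included in the pruned list" and are never trained on.
    return {p for p in predicates if p in attentions}

sentences = ["The case is sturdy.", "Drops never crack the screen."]
pruned = prune_predicates(extract_predicates(sentences), explainer_attentions(sentences))
print(pruned)  # e.g. {'sturdy', 'crack'} -> would feed LNN training before deployment
```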
Regarding claim 2, the claim depends from claim 1, and thus recites the limitations of claim 1: "wherein extracting the predicates from the sentences includes: applying an abstract meaning representation (AMR) parser to the sentences to extract semantics from the sentences, and converting the semantics into a graph, wherein the predicates are determined from the graph." The limitations of "applying an abstract meaning representation (AMR) parser…" and "converting the semantics…" as drafted cover mental activities which can be performed in the mind or with the aid of pen and paper. Taken individually, or as a whole with claim 1, these limitations describe acts which are equivalent to human mental work of organizing information. Accordingly, the claim is directed to an abstract idea without significantly more. The claim is not patent eligible.

Regarding claim 3, the claim depends from claim 2, and thus recites the limitations of claims 1 and 2: "wherein nodes in the graph represent concepts of the sentences, wherein edges in the graph represent relations to the concepts." The limitation of "nodes in the graph represent concepts…" as drafted covers mental activities which can be performed in the mind or with the aid of pen and paper. Taken individually, or as a whole with the preceding claims, these limitations describe acts which are equivalent to human mental work of organizing information. Accordingly, the claim is directed to an abstract idea without significantly more. The claim is not patent eligible.
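The AMR graph recited in claims 2-3 (nodes as concepts, labeled edges as relations, predicates read off the graph) can be illustrated with a toy graph. The node and edge labels below are invented examples, not taken from the record:

```python
# Toy AMR-style graph for claims 2-3: nodes are concepts, labeled edges are relations.
import networkx as nx

g = nx.DiGraph()
g.add_edge("last-01", "case", label=":ARG1")      # predicate "last-01" -> concept "case"
g.add_edge("last-01", "year", label=":duration")  # relation to a duration concept

# AMR predicates conventionally carry a sense suffix such as "-01".
predicates = [n for n in g.nodes if n.endswith("-01")]
print(predicates)  # ['last-01']
```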
Regarding claim 4, the claim depends from claim 1, and thus recites the limitations of claim 1: "wherein local interpretable model-agnostic explanations (LIMEs) are used by the explainer component for analyzing the sentences." The limitation of "local interpretable model-agnostic explanations (LIMEs) are used…" as drafted covers mental activities which can be performed in the mind or with the aid of pen and paper. Taken individually, or as a whole with claim 1, these limitations describe acts which are equivalent to human mental work of analyzing information. Accordingly, the claim is directed to an abstract idea without significantly more. The claim is not patent eligible.

Regarding claim 5, the claim depends from claim 1, and thus recites the limitations of claim 1: "wherein analyzing the sentences to determine the attentions includes: inputting text of the sentences into the explainer component, tokenizing the text to determine a plurality of tokens, feeding the plurality of tokens separately into a predetermined neural network, wherein the neural network generates a probabilistic ranking of words of the sentences in terms of classification of the sentences to predetermined classes, and wherein an output of the neural network includes the attentions determined from the sentences, wherein the attentions are determined, from a plurality of attentions consumed by the neural network, as having relatively highest probabilities for being associated with the predetermined classes." The limitations of "tokenizing the text…," "generates a probabilistic ranking…" and "attentions determined from the sentences" as drafted cover mental activities which can be performed in the mind or with the aid of pen and paper. Taken individually, or as a whole, these limitations describe acts which are equivalent to human mental work of identifying keywords. Accordingly, the claim is directed to an abstract idea without significantly more. The claim is not patent eligible.

Regarding claim 6, the claim depends from claim 5, and thus recites the limitations of claims 1 and 5: "wherein the attentions are at least some of the words of the sentences, wherein the attentions are determined to have at least a predetermined probability for increasing an accuracy of the logical neural network during the training of the logical neural network based on the attentions being determined as having the relatively highest probabilities for being associated with the predetermined classes." Taken individually, or as a whole with the preceding claims, these limitations describe acts which are equivalent to human mental work of evaluating keywords. Accordingly, the claim is directed to an abstract idea without significantly more. The claim is not patent eligible.

Regarding claim 7, the claim depends from claim 6, and thus recites the limitations of claims 1 and 5-6: "wherein the different predetermined classes of words are associated with a predetermined product, wherein at least some of the predetermined plurality of sentences are reviews of the product, wherein a first of the predetermined classes of words includes a durability of the product." Taken individually, or as a whole with the preceding claims, these limitations describe acts which are equivalent to human mental work of sorting keywords. Accordingly, the claim is directed to an abstract idea without significantly more. The claim is not patent eligible.

Regarding claim 8, the claim depends from claim 7, and thus recites the limitations of claims 1 and 5-7: "wherein the deployment of the trained logical neural network includes using the trained logical neural network to determine states for an additional plurality of sentences, wherein the determined states comprise true or false." Taken individually, or as a whole with the preceding claims, these limitations describe acts which are equivalent to human mental work of evaluating logical conditions. Accordingly, the claim is directed to an abstract idea without significantly more. The claim is not patent eligible.

Regarding claim 9, the claim depends from claim 1, and thus recites the limitations of claim 1: "wherein using the attentions to generate the pruned list of predicates from the extracted predicates includes comparing mapped abstract meaning representations (AMRs) to the attentions, wherein the subset of the extracted predicates are the extracted predicates of AMRs that are determined to not contain at least one of the attentions." The limitation of "comparing mapped abstract meaning representations (AMRs) to the attentions" as drafted covers mental activities which can be performed in the mind or with the aid of pen and paper. Taken individually, or as a whole with claim 1, these limitations describe acts which are equivalent to human mental work of identifying and evaluating keywords. Accordingly, the claim is directed to an abstract idea without significantly more. The claim is not patent eligible.
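The claim-9 pruning rule, as the Office Action characterizes it, reduces to a set test: a predicate is dropped when its mapped AMR contains none of the attentions. A minimal sketch with invented data:

```python
# Invented data illustrating the claim-9 rule: prune predicates whose mapped
# AMR contains none of the attention words.
amr_of = {
    "last-01": {"case", "last", "year"},
    "ship-01": {"box", "ship", "monday"},
}
attentions = {"last", "crack", "sturdy"}  # durability-flavoured attention words

pruned = [p for p, amr in amr_of.items() if amr & attentions]
dropped = [p for p in amr_of if p not in pruned]
print(pruned, dropped)  # ['last-01'] ['ship-01']
```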
Regarding claims 10-18, computer-readable medium claims 10-18 and method claims 1-9 are related as method and computer-readable medium for performing the same, with each computer-readable medium element's function corresponding to the method step. Accordingly, claims 10-18 are similarly rejected under the same rationale as applied to claims 1-9.

Regarding claims 19 and 20, system claims 19 and 20 and method claims 1 and 2 are related as a method and system of using the same, with each system element's function corresponding to the method step. Accordingly, claims 19 and 20 are similarly rejected under the same rationale as applied to claims 1 and 2.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 10 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over China Invention Application CN 114662469 to Peng et al. (hereinafter, "Peng") in view of UK Patent Application GB 2609718 to Daguang et al. (hereinafter, "Daguang"), further in view of "Logical Neural Networks" by Riegel et al. (hereinafter, "Riegel").

Regarding claims 1, 10 and 19, Peng teaches a method, computer program product and system comprising: extracting predicates from a predetermined plurality of sentences (page 6, "S203, extracting the main language from the specified sentence, the predicate corresponding to the main language, the modification language of the predicate, the object corresponding to the object and the modification language of the object, as the pruning result information executing step S206;"); causing an explainer component to analyze the sentences to determine attentions from the predicates of the sentences, wherein the attentions are based on different predetermined classes of words (page 3, "It should be noted that the emotional analysis object in the specified sentence in the embodiment can be specified by the user, or can be other device with emotion requirement. For example, when the user is doing research, it can select an emotion analysis object for each specified sentence, analyzing the emotion analysis object emotion in the specified sentence. or it also can adopt the pre-trained emotion analysis object selection model, from the appointed sentence in one filter analysis object, to analyze the emotion analysis object emotion in the specified sentence. In this embodiment, the role of the emotion analysis recognition in the specified sentence, refers to the emotion analysis object in the specified sentence in the role."); and causing a logical neural network to be trained using the pruned list of predicates before deployment of the logical neural network as a service, wherein the subset of the extracted predicates are not used in the training of the logical neural network to thereby reduce a training time and processing workload associated with the training of the logical neural network (page 7, "under the TrimOrigin set, training and measuring the training data set and the data set of the measuring data set model is pruned by the pruning scheme of the present disclosure. under the Attack setting, training the model on the original training data set, and performing measuring measuring [sic] the data set after the attack. under the TrimAttack setting, training and measuring the model on the pruned attack training data set and measuring data set.").
Peng does not explicitly teach "causing the extracted predicates to be input into a predetermined pruner model, wherein the pruner model is trained to use the attentions to generate a pruned list of predicates from the extracted predicates, wherein a subset of the extracted predicates are not included in the pruned list of predicates," or "deploying the trained logical neural network as the service in a computing environment," and thus, Daguang is introduced.

Daguang teaches causing the extracted predicates to be input into a predetermined pruner model, wherein the pruner model is trained to use the attentions to generate a pruned list of predicates from the extracted predicates, wherein a subset of the extracted predicates are not included in the pruned list of predicates (paragraph [0079], "In at least one embodiment, GPU 106 then uses said list to identify a subset of training data (e.g., proxy data) 110. In at least one embodiment, a subset of the training images 110 are selected based on importance scores from said list."); and deploying the trained logical neural network as the service in a computing environment (paragraph [0110], "In at least one embodiment, training framework 904 trains untrained neural network 906 until untrained neural network 906 achieves a desired accuracy. In at least one embodiment, trained neural network 908 can then be deployed to implement any number of machine learning operations.").

Peng and Daguang are considered analogous because they are each concerned with training neural networks. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Peng with the teachings of Daguang for the purpose of improving neural network performance. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

The combination of Peng and Daguang does not teach "wherein an architecture of the logical neural network includes different logical AND gates for the different predetermined classes of words, wherein the architecture of the logical neural network includes an exclusive logical OR gate for mutually exclusive class(es) of the predetermined classes of words," and thus, Riegel is introduced. Riegel teaches wherein an architecture of the logical neural network includes different logical AND gates for the different predetermined classes of words, wherein the architecture of the logical neural network includes an exclusive logical OR gate for mutually exclusive class(es) of the predetermined classes of words (section 1, "The central idea is to create a 1-to-1 correspondence between neurons and the elements of logical formulae, using the observation that the weights of neurons can be constrained to act as, e.g. AND or OR gates.").

Peng, Daguang and Riegel are considered analogous because they are each concerned with using neural networks. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have used a logical neural network as taught by Riegel in the combination of Peng and Daguang with a reasonable expectation of success. A person of ordinary skill has good reason to pursue the known options of neural networks within the field of the invention. If this leads to the anticipated success, it is likely the product not of innovation but of ordinary skill and common sense.

Claims 2-3, 11-12 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Peng, Daguang and Riegel as applied to claims 1, 10 and 19 above, and further in view of "AMR Parsing with Action-Pointer Transformer" by Zhou et al. (hereinafter, "Zhou").

Regarding claims 2, 11 and 20, the combination of Peng, Daguang and Riegel does not teach a method, computer program product or system "wherein extracting the predicates from the sentences includes: applying an abstract meaning representation (AMR) parser to the sentences to extract semantics from the sentences, and converting the semantics into a graph, wherein the predicates are determined from the graph," and thus, Zhou is introduced. Zhou teaches applying an abstract meaning representation (AMR) parser to the sentences to extract semantics from the sentences, and converting the semantics into a graph, wherein the predicates are determined from the graph (section 1, "Abstract Meaning Representation (AMR) (Banarescu et al., 2013) is a sentence level semantic formalism encoding who does what to whom in the form of a rooted directed acyclic graph. Nodes represent concepts such as entities or predicates which are not explicitly aligned to words, and edges represent relations such as subject/object (see Figure 1). AMR parsing, the task of generating the graph from a sentence, is nowadays tackled with sequence to sequence models parameterized with neural networks."). Peng, Daguang, Riegel and Zhou are considered analogous because they are each concerned with using neural networks. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Peng, Daguang and Riegel with the teachings of Zhou for the purpose of improving neural network performance. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Regarding claims 3 and 12, Zhou further teaches nodes in the graph represent concepts of the sentences, wherein edges in the graph represent relations to the concepts (section 1, "Abstract Meaning Representation (AMR) (Banarescu et al., 2013) is a sentence level semantic formalism encoding who does what to whom in the form of a rooted directed acyclic graph. Nodes represent concepts such as entities or predicates which are not explicitly aligned to words, and edges represent relations such as subject/object (see Figure 1).").

Claims 4 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Peng, Daguang and Riegel as applied to claims 1 and 10 above, and further in view of "'Why Should I Trust You?' Explaining the Predictions of Any Classifier" by Ribeiro et al. (hereinafter, "Ribeiro").

Regarding claims 4 and 13, the combination of Peng, Daguang and Riegel does not teach a method or computer program product "wherein local interpretable model-agnostic explanations (LIMEs) are used by the explainer component for analyzing the sentences," and thus, Ribeiro is introduced.
Ribeiro teaches local interpretable model-agnostic explanations (LIMEs) are used by the explainer component for analyzing the sentences (section 1, "LIME, an algorithm that can explain the predictions of any classifier or regressor in a faithful way, by approximating it locally with an interpretable model," and section 2, "By 'explaining a prediction', we mean presenting textual or visual artifacts that provide qualitative understanding of the relationship between the instance's components (e.g. words in text, patches in an image) and the model's prediction.").

Peng, Daguang, Riegel and Ribeiro are considered analogous because they are each concerned with using neural networks. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Peng, Daguang and Riegel with the teachings of Ribeiro for the purpose of improving neural network usability. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Claims 5-7 and 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Peng, Daguang and Riegel as applied to claims 1 and 10 above, and further in view of U.S. Patent Application Publication 2014/0108005 to Kassis et al. (hereinafter, "Kassis").

Regarding claims 5 and 14, the combination of Peng, Daguang and Riegel does not teach a method or computer program product "wherein analyzing the sentences to determine the attentions includes: inputting text of the sentences into the explainer component, tokenizing the text to determine a plurality of tokens, feeding the plurality of tokens separately into a predetermined neural network, wherein the neural network generates a probabilistic ranking of words of the sentences in terms of classification of the sentences to predetermined classes, and wherein an output of the neural network includes the attentions determined from the sentences, wherein the attentions are determined, from a plurality of attentions consumed by the neural network, as having relatively highest probabilities for being associated with the predetermined classes," and thus, Kassis is introduced.

Kassis teaches inputting text of the sentences into the explainer component (paragraph [0018], "With reference to FIG. 3, the Sentence Parser 302 identifies sentences in a provided block of input text. The ULC Noun Parser 304 identifies keyword/tokens/phrases in a provided block of input text."), tokenizing the text to determine a plurality of tokens, feeding the plurality of tokens separately into a predetermined neural network (paragraph [0018], "With reference to FIG. 3, the Sentence Parser 302 identifies sentences in a provided block of input text. The ULC Noun Parser 304 identifies keyword/tokens/phrases in a provided block of input text."), wherein the neural network generates a probabilistic ranking of words of the sentences in terms of classification of the sentences to predetermined classes (paragraph [0012], "As shown in FIG. 1, in a ULC system 100, a universal language classifier 102 takes as input a document 104 and produces one or more outputs 106 including one or more of: sentence(s) 108, keyword(s) 110, abstract(s) 112, and ranked category (categories) 114."), and wherein an output of the neural network includes the attentions determined from the sentences (paragraph [0018], "The ULC Term Scorer 310 processes input text and identifies unique keywords keeping track of the sentences they occur in, the number of times the keyword occurs, and determines a Term Scorer Keyword Score for each keyword in the input document."), wherein the attentions are determined, from a plurality of attentions consumed by the neural network, as having relatively highest probabilities for being associated with the predetermined classes (paragraph [0025], "The Known Words object 306 must be created and a Known Words List 326 must be loaded into memory before any categorization can occur… The category with the highest score determines the most accurate category assignment for each categorization session.").

Peng, Daguang, Riegel and Kassis are considered analogous because they are each concerned with language processing. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Peng, Daguang and Riegel with the teachings of Kassis for the purpose of improving language processing accuracy. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Regarding claims 6 and 15, Kassis further teaches a method and computer program product wherein the attentions are at least some of the words of the sentences, wherein the attentions are determined to have at least a predetermined probability for increasing an accuracy of the logical neural network during the training of the logical neural network based on the attentions being determined as having the relatively highest probabilities for being associated with the predetermined classes (paragraph [0207], "Known Words Base Scores are calculated for each Noun Parser Keyword that represent how rare that Keyword is within each specific Known Words category. Rare terms are given higher scores using the current base score algorithm.").
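The claim-5 flow that the Office Action maps onto Kassis (tokenize the text, score tokens per class, keep the highest-probability words as the attentions) can be sketched as follows. The scoring below is a crude stand-in of our own with an invented lexicon, not Kassis's ULC scorer:

```python
# Stand-in for the claim-5 flow: tokenize, rank words per class, keep top words.
from collections import Counter

CLASS_WORDS = {"durability": {"sturdy", "lasts", "cracked"}, "price": {"cheap", "cost"}}

def rank_tokens(text):
    tokens = [t.strip(".,;!?").lower() for t in text.split()]
    counts = Counter(tokens)
    # Crude probability proxy: relative frequency of each class-bearing token.
    scores = {(tok, cls): counts[tok] / len(tokens)
              for tok in counts
              for cls, vocab in CLASS_WORDS.items() if tok in vocab}
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(rank_tokens("The case is sturdy and it lasts; never cracked."))
# Highest-scoring (token, class) pairs would serve as the "attentions".
```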
Claims 7-8 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Peng, Daguang, Riegel and Kassis as applied to claims 6 and 15 above, and further in view of U.S. Patent 11,699,177 to Alexandrov et al. (hereinafter, "Alexandrov").

Regarding claims 7 and 16, the combination of Peng, Daguang, Riegel and Kassis does not teach a method or computer program product "wherein the different predetermined classes of words are associated with a predetermined product, wherein at least some of the predetermined plurality of sentences are reviews of the product, wherein a first of the predetermined classes of words includes a durability of the product"; however, Alexandrov teaches the different predetermined classes of words are associated with a predetermined product, wherein at least some of the predetermined plurality of sentences are reviews of the product, wherein a first of the predetermined classes of words includes a durability of the product (column 9, line 16, "Attention-based encoder 211 may receive (at 402) one or more filtered reviews that are determined to be of relevance to the quality assessment of a product or service. Attention-based encoder 211 may evaluate the review structure to isolate (at 404) different topics within the review. The different topics may correspond to references to the product or service name as well as features, qualities, attributes, and/or other descriptive characteristics of the product or service that are included as part of the contextual relevance model.").

Peng, Daguang, Riegel, Kassis and Alexandrov are considered analogous because they are each concerned with language processing. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Peng, Daguang, Riegel and Kassis with the teachings of Alexandrov for the purpose of applying the system to a commercial field of endeavor. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Regarding claims 8 and 17, Riegel teaches a method and computer program product wherein the deployment of the trained logical neural network includes using the trained logical neural network to determine states for an additional plurality of sentences, wherein the determined states comprise true or false (section 3, Model structure, "In general, LNNs are described in terms of FOL, but it is useful to discuss LNNs restricted to the scope of propositional logic. Structurally, an LNN is a graph made up of the syntax trees of all represented formulae connected to each other via neurons added for each proposition… To aid interpretability of bounds, we define a threshold of truth 1/2 < α ≤ 1 such that a continuous truth value is considered True if it is greater than α and False if it is less than 1 − α.").

Peng, Daguang, Kassis, Alexandrov and Riegel are considered analogous because they are each concerned with natural language processing. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have used a logical neural network as taught by Riegel in the combination of Peng, Daguang, Kassis and Alexandrov with a reasonable expectation of success. A person of ordinary skill has good reason to pursue the known options of neural networks within the field of the invention. If this leads to the anticipated success, it is likely the product not of innovation but of ordinary skill and common sense.

Claims 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Peng, Daguang and Riegel as applied to claims 1 and 10 above, and further in view of U.S. Patent Application Publication 2023/0082485 to Sengupta et al. (hereinafter, "Sengupta").

Regarding claims 9 and 18, the combination of Peng, Daguang and Riegel does not teach a method or computer program product "wherein using the attentions to generate the pruned list of predicates from the extracted predicates includes comparing mapped abstract meaning representations (AMRs) to the attentions, wherein the subset of the extracted predicates are the extracted predicates of AMRs that are determined to not contain at least one of the attentions," and thus, Sengupta is introduced. Sengupta teaches comparing mapped abstract meaning representations (AMRs) to the attentions, wherein the subset of the extracted predicates are the extracted predicates of AMRs that are determined to not contain at least one of the attentions (paragraph [0021], "Various embodiments of the present invention disclose two different variant solutions for data denoising. In both the solutions, transformers are used as the base architecture. Transformers may use multi-headed self-attention to capture both local and global contexts from texts. Various embodiments of the present invention propose using two primary building blocks: an encoder to identify the noises in the data; and a decoder to correct the identified noises. The encoder may read the incorrect text data as input, extract an abstract representation from the text data, and identify the probability that each token of the text data is contextually incorrect. In some embodiments, a proposed system calculates three probabilities for each word token: a copy probability, a removal probability, and a generation probability. If the copy probability of token is greater than 0.5, the proposed system may copy the exact token from input to the output. For example, proper nouns in the texts can be copied directly to the output without making any changes. Using the removal probability of the token, the encoder decides whether the system should remove the entire token in the output or not.").

Peng, Daguang, Riegel and Sengupta are considered analogous because they are each concerned with language processing. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Peng, Daguang and Riegel with the teachings of Sengupta for the purpose of improving language processing accuracy. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
U.S. Patent Application Publication 2004/0019601 to Gates.
U.S. Patent Application Publication 2020/0212297 to Chatterjee et al.
U.S. Patent Application Publication 2021/0124739 to Karanasos et al.
U.S. Patent Application Publication 2021/0365817 to Riegel et al.
U.S. Patent Application Publication 2022/0100962 to Akhalwaya et al.
U.S. Patent Application Publication 2023/0100508 to Abobakr et al.
U.S. Patent Application Publication 2023/0108135 to Kimura et al.
U.S. Patent Application Publication 2023/0367322 to Hou et al.
U.S. Patent 7,027,974 to Busch et al.
U.S. Patent 11,941,531 to Arik et al.
China Invention Application CN 111581365 to Wu et al.
China Invention Application CN 115081427 to Shi et al.
WIPO Publication WO 2019/045758 to Chen et al.
"QuickFOIL: Scalable Inductive Logic Programming" by Zeng et al.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEAN T SMITH whose telephone number is (571) 272-6643. The examiner can normally be reached Monday - Friday, 8:00am - 5:00pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, PIERRE-LOUIS DESIR, can be reached at (571) 272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SEAN THOMAS SMITH/
Examiner, Art Unit 2659

/PIERRE LOUIS DESIR/
Supervisory Patent Examiner, Art Unit 2659
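As background for the Riegel citations in the Office Action above (neurons constrained to act as AND/OR gates, and a threshold of truth 1/2 < α ≤ 1), one simple real-valued rendering uses Łukasiewicz operators. The XOR composition and the example values below are our own illustration, not code from the cited paper:

```python
# Real-valued logic gates in the spirit of the cited LNN paper; the XOR
# composition and wiring below are our own illustrative choices.
ALPHA = 0.7  # threshold of truth, with 1/2 < ALPHA <= 1

def AND(a, b): return max(0.0, a + b - 1.0)   # Lukasiewicz conjunction
def OR(a, b):  return min(1.0, a + b)         # Lukasiewicz disjunction
def NOT(a):    return 1.0 - a
def XOR(a, b): return OR(AND(a, NOT(b)), AND(NOT(a), b))

def state(v):
    # Per the quoted passage: True above ALPHA, False below 1 - ALPHA, else unknown.
    return "True" if v > ALPHA else "False" if v < 1.0 - ALPHA else "Unknown"

durable, cheap = 0.95, 0.1                    # toy truth values for two word classes
print(state(AND(durable, durable)))           # a class-specific AND gate -> True
print(state(XOR(durable, cheap)))             # mutually exclusive classes -> True
```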

Prosecution Timeline

Jun 26, 2023: Application Filed
May 31, 2025: Non-Final Rejection — §101, §103
Aug 21, 2025: Examiner Interview Summary
Aug 21, 2025: Applicant Interview (Telephonic)
Aug 22, 2025: Response Filed
Sep 25, 2025: Final Rejection — §101, §103
Nov 26, 2025: Response after Non-Final Action
Dec 29, 2025: Request for Continued Examination
Jan 17, 2026: Response after Non-Final Action
Mar 09, 2026: Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications with similar technology that this same examiner has granted

Patent 12602540: LEVERAGING A LARGE LANGUAGE MODEL ENCODER TO EVALUATE PREDICTIVE MODELS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12530534: SYSTEM AND METHOD FOR GENERATING STRUCTURED SEMANTIC ANNOTATIONS FROM UNSTRUCTURED DOCUMENT (granted Jan 20, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 2 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview: 99% (+33.3%)
Median Time to Grant: 2y 8m
PTA Risk: High
Based on 6 resolved cases by this examiner. Grant probability derived from career allow rate.
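On its face, the headline probability is simple arithmetic over the examiner's resolved cases; this is our reading of the page, not a documented formula, and the with-interview figure presumably reflects the examiner's with-interview subset, which the page does not break out:

```python
# Our reading of how the 83% figure is derived: grants over resolved cases.
granted, resolved = 5, 6
print(f"Career allow rate: {granted / resolved:.0%}")  # Career allow rate: 83%
```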
