Prosecution Insights
Last updated: April 19, 2026
Application No. 17/358,148

EXPLAINING A THEOREM PROVING MODEL

Final Rejection §103

Filed: Jun 25, 2021
Examiner: STORK, KYLE R
Art Unit: 2128
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 4 (Final)

Grant Probability: 64% (Moderate)
Expected OA Rounds: 5-6
Time to Grant: 4y 0m
Grant Probability with Interview: 92%

Examiner Intelligence

Career Allow Rate: 64% — grants 64% of resolved cases (554 granted / 865 resolved; +9.0% vs TC avg)
Interview Lift: +28.3% among resolved cases with interview (strong)
Typical Timeline: 4y 0m average prosecution; 51 applications currently pending
Career History: 916 total applications across all art units

Statute-Specific Performance

§101: 14.9% allow rate (-25.1% vs TC avg)
§103: 58.5% allow rate (+18.5% vs TC avg)
§102: 12.1% allow rate (-27.9% vs TC avg)
§112: 6.1% allow rate (-33.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 865 resolved cases
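The per-statute deltas are internally consistent: each statute's allow rate minus its "vs TC avg" delta recovers the same Tech Center baseline. A quick check, using only the numbers shown above:

```python
# Sanity check: allow_rate - delta should yield one common TC baseline.
rates = {"§101": (14.9, -25.1), "§103": (58.5, +18.5),
         "§102": (12.1, -27.9), "§112": (6.1, -33.9)}
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
print(baselines)  # every statute implies a 40.0% TC average
```

Every statute implies the same 40.0% Tech Center average, which matches the note that the baseline is a single estimate rather than a per-statute figure.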

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This final Office action is in response to the amendment filed 2 January 2026. Claims 1, 4-5, 8, 11-12, 15, 18-19, 23-24, 26, 29-31, and 34-36 are pending. Claims 34-36 are newly added. Claims 21-22, 25, 27-28, and 32 are cancelled. Claims 1, 8, and 15 are independent claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 4-5, 8, 11-12, 15, 18-19, 23-24, 26, 29-31, and 34-36 are rejected under 35 U.S.C. 103 as being unpatentable over Abzianidze (LANGPRO: Natural Language Theorem Prover, 2017) and further in view of Culbertson (US 2013/0332458, published 12 December 2013) and further in view of Liang et al. (US 2021/0144107, filed 12 November 2019, hereafter Liang) and further in view of Ponniah et al. (US 2021/0035566, filed 2 August 2019, hereafter Ponniah) and further in view of Gimza et al. (US 11128539, filed 5 May 2020, hereafter Gimza) and further in view of Rockstaschel et al. (End-to-End Differentiable Proving, published 4 December 2017, hereafter Rockstaschel) and further in view of Padmanabhan et al. (US 11138473, filed 15 July 2019, hereafter Padmanabhan).
As per independent claim 1, Abzianidze discloses a computer-implemented method comprising:

predicting a truth value of a query through a pre-trained theorem proving model, based on the query and one or more facts and rules in a knowledge base (Section 5: Here, a truth value may be determined (Section 2) using an inventory of rules (Section 4.2) and a knowledge base of facts (Section 4.3) by a natural language theorem prover);

ranking the one or more facts and rules according to contribution, calculated in a pre-defined scoring method, made to predict the truth value of the query (Section 4.4: Here, the proof engine prioritizes (ranks) facts and rules to be used by their efficiency (score) to be used to predict the truth value (Section 5));

generating a proof of the predicted truth value, wherein the proof is one or more logical steps that demonstrate the predicted truth value in a natural language (Section 5: Here, the theorem prover receives a natural language predicted truth value (theorem) and generates a proof based on a Combinatory Categorial Grammar (CCG) parser, a Lambda Logical Forms generator (LLFgen) (Section 3), and NLogPro (Section 4)), wherein generating the proof of the predicted truth value comprises taking, as input, the one or more ranked facts (Section 4.3) and rules (Section 4.2), the query, and the predicted truth value (Figure 2: Here, the example of receiving the predicted truth value of “no pug is evil” and the queries “several pugs bark.” and “every dog which barks is vicious.” are received); and

outputting the proof (Section 5: Here, in “addition to an entailment judgement, LangPro can output the actual tableau proof trees”).

Abzianidze fails to specifically disclose a processor and inverting, by one or more processors, a meaning of the query when the truth value predicted is false, wherein the meaning of the query is inverted by a lexical negation operator utilizing a sequence to sequence machine translation model trained using synthetic generation training data.
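As a reading aid, the claim-1 pipeline the rejection maps onto Abzianidze (predict a truth value for a query, rank the knowledge-base statements by a scoring method, emit the ranked statements as natural-language proof steps) can be sketched with toy stand-ins. Every function below is hypothetical; neither LangPro nor the claims prescribe this API, and the "prover" and "scorer" are deliberately trivial:

```python
# Toy sketch of the claim-1 flow: predict, rank, then generate a proof.

def predict_truth(query, statements):
    # Stand-in "pre-trained prover": true iff the query appears verbatim.
    return query in statements

def rank(query, statements):
    # Stand-in scoring method: rank by shared-word overlap with the query.
    q = set(query.split())
    return sorted(statements, key=lambda s: len(q & set(s.split())), reverse=True)

def generate_proof(query, statements):
    # Takes the ranked facts/rules, the query, and the predicted truth
    # value as input, and outputs natural-language proof steps.
    truth = predict_truth(query, statements)
    ranked = rank(query, statements)
    steps = [f"step {i + 1}: {s}" for i, s in enumerate(ranked)]
    return truth, steps

truth, proof = generate_proof(
    "every pug barks",
    ["every pug barks", "every dog which barks is vicious"])
print(truth)     # True
print(proof[0])  # step 1: every pug barks
```

The point of the sketch is only the data flow: the prover's verdict and the ranked statements are both inputs to proof generation, which is the structure the rejection reads onto LangPro's Sections 4.2-4.4 and 5.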
However, Culbertson, which is analogous to the claimed invention because it is directed toward using a lexical negation, discloses a processor (Figure 5, item 502) and inverting, by one or more processors, a meaning of the query when the truth value predicted is false, wherein the meaning of the query is inverted by a lexical negation operator utilizing a sequence to sequence machine translation model trained (paragraph 0077: Here, lexical negation is applied to invert a truth value). It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Culbertson with Abzianidze, with a reasonable expectation of success, as it would have allowed for applying a negation operation (Culbertson: paragraph 0077) for creating proofs (Abzianidze: Section 2).

Abzianidze fails to specifically disclose receiving a pre-defined maximum number of generated results. However, Liang, which is analogous to the claimed invention because it is directed toward generating a maximum number of results based upon input, discloses receiving a pre-defined maximum number of generated results (paragraph 0091: Here, an orchestrator service sets a maximum number of results to the query). It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Liang with Abzianidze-Culbertson, with a reasonable expectation of success, as it would have allowed for setting a maximum number of results to avoid overburdening the system. This would have improved user experience by conserving processing resources once a maximum number of candidate results have been generated.

Abzianidze fails to specifically disclose outputting comprises outputting a sorted set of a pre-defined maximum number of proofs of the predicted truth value for the query, wherein each proof is ranked based on a score that represents estimated accuracy of the proof.
However, Ponniah, which is analogous to the claimed invention because it is directed toward sorting candidate items based upon a score representing the likely most accurate candidate, discloses outputting a sorted set of candidate items, wherein each item is ranked based on a score that represents the estimated accuracy of the candidate (paragraph 0054). It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Ponniah with Abzianidze-Culbertson-Liang, with a reasonable expectation of success, as it would have allowed a user to view a sorted list based upon accuracy.

Further, Abzianidze fails to disclose receiving feedback from a user including a quantitative indication of the proof, wherein the pre-trained theorem proving model is a neural network model trained in a supervised manner to improve an accuracy of the proof generated based on the user feedback. However, Gimza, which is analogous to the claimed invention because it is directed toward retraining a machine learning model based upon user feedback, discloses:

receiving feedback from a user based on the data, the feedback including a quantitative indication of the data (column 7, lines 45-58: Here, user feedback of the evaluation of components is provided); and

using, by one or more processors, previously collected user feedback to improve the accuracy of the model (column 7, lines 45-58: Here, based upon the user feedback, the ML model is retrained to improve accuracy of the evaluation).

It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Gimza with Abzianidze-Culbertson-Liang-Ponniah, with a reasonable expectation of success, as it would have allowed for improving the proof generating model based upon user feedback.
Abzianidze fails to specifically disclose replacing, by a proof generator, the query if the truth value predicted for the query is false with a new query, wherein the proof generator is comprised of at least a lexical negation, a query decomposer, and a lexical unifier, wherein the query decomposer is applied to a left-most sub-query until the query decomposer detects no disjunction or conjunction, wherein the lexical unifier is applied between the left-most query and a top ranked facts or rules, and wherein the proof generator replaces the left-most query with an output of the lexical unifier.

However, Rockstaschel, which is analogous to the claimed invention because it is directed toward a proof generator, discloses:

replacing, by a proof generator, the query if the truth value predicted for the query is false with a new query (Sections 3.1 and 3.3: Here, a predicted value for a query is predicted to be false, and a new set of atoms);

wherein the proof generator is comprised of at least a lexical negation, a query decomposer, and a lexical unifier (Section 3: Here, the proof generator includes a negation (Section 3.1), a lexical unifier (Section 3.1), and a query decomposer (Sections 3.2 and 3.3));

applying the query decomposer on the left-most sub-query until the query decomposer detects no disjunctions or conjunctions (Sections 2, 3.2, and 3.3: Here, the queries are decomposed using backward chaining and AND/OR functions);

applying the lexical unifier between the left-most query and a top ranked facts or rules (Section 3.1: Here, the unification of two atoms is used to compare vector representations of symbols in the sequence of terms); and

replacing, by the proof generator, the left-most query with an output of the lexical unifier (Section 3.1: Here, the unification module performs substitutions based upon comparisons).

It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Rockstaschel
with Abzianidze-Culbertson-Liang-Ponniah-Gimza, with a reasonable expectation of success, as it would have allowed for applying the advantages of symbolic theorem provers, multi-hop reasoning, interpretability, and easy integration of domain knowledge, with the ability to reason with vector representations of predicates and constants (Rockstaschel: Section 1 (see: page 2, first full paragraph)).

Finally, Abzianidze fails to specifically disclose receiving feedback from a domain expert user including a quantitative indication of a correctness or usefulness of each proof ranked. However, Padmanabhan, which is analogous to the claimed invention because it is directed toward using a domain expert to classify correctness, discloses receiving feedback from a domain expert user including a quantitative indication of a correctness or usefulness of each classification (column 4, lines 8-34: Here, an expert assisted cascading classifier approach is described. This approach uses an expert classifier to indicate the correctness of the classification of the classifier). It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Padmanabhan with Abzianidze-Culbertson-Liang-Ponniah-Gimza-Rockstaschel, with a reasonable expectation of success, as it would have allowed for improving results by incorporating domain expert feedback (Padmanabhan: column 4, lines 20-21).

As per dependent claim 4, Abzianidze, Culbertson, Liang, Ponniah, Gimza, Rockstaschel, and Padmanabhan disclose the limitations similar to those in claim 1, and the same rejection is incorporated herein.
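The proof-generator loop recited above (apply the decomposer to the left-most sub-query until it detects no conjunction or disjunction, then replace the left-most sub-query with the lexical unifier's output) can be sketched minimally. The string connectives and exact-match "unifier" below are stand-ins for illustration only; they are not drawn from Rockstaschel or the claims:

```python
# Hypothetical sketch of the claimed decompose/unify loop.

def decompose(query):
    # Keep splitting the left-most piece while it still contains a
    # conjunction or disjunction (string connectives stand in for a
    # real parser or neural decomposer).
    parts = [query]
    while " and " in parts[0] or " or " in parts[0]:
        conn = " and " if " and " in parts[0] else " or "
        left, rest = parts[0].split(conn, 1)
        parts = [left, rest] + parts[1:]
    return parts

def unify(sub_query, fact):
    # Stand-in lexical unifier: an exact match resolves to the empty
    # string; otherwise the sub-query is returned unchanged.
    return "" if sub_query == fact else sub_query

parts = decompose("pugs bark and pugs are dogs or pugs are cats")
print(parts[0])                      # 'pugs bark'
print(unify(parts[0], "pugs bark"))  # ''
```

The loop structure is the point: decomposition stops only when the left-most sub-query is atomic, and the unifier's output (a resolvent, or an empty string when the sub-query is fully resolved) then replaces that left-most sub-query.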
Abzianidze discloses wherein the proof includes one or more logical proving paths, wherein the one or more proving paths prove the predicted truth value with a possible explanation of how the pre-trained theorem proving model uses the one or more facts and rules in the knowledge base to prove the query (Section 2: Here, the tableau and Lambda Logical Forms are used to provide two approaches, both having paths, of applying rules to generate proofs. These paths constitute possible explanations of how the pre-trained theorem proving model proves/disproves the query).

As per dependent claim 5, Abzianidze, Culbertson, Liang, Ponniah, Gimza, Rockstaschel, and Padmanabhan disclose the limitations similar to those in claim 1, and the same rejection is incorporated herein. Abzianidze discloses wherein the generating the proof comprises:

decomposing the query (Figures 2-4: Here, the query is decomposed into various terms);

unifying the query with the one or more facts and rules (Section 2: Here, the terms are combined with rules (Section 4.2) and the knowledge base (Section 4.3) to generate the proof); and

negating the query when the predicted truth value is false (Section 3: Here, the LLF attempts to prove the theorem via contradiction. If the theorem can be contradicted, then the theorem is refuted).

With respect to claims 8 and 11-12, the claims recite limitations substantially similar to those in claims 1 and 4-5, respectively. Claims 8 and 11-12 are similarly rejected. Additionally, Culbertson discloses one or more computer readable storage media, and program instructions collectively stored on the one or more computer readable storage media (paragraph 0080: Here, a computer readable storage device is disclosed).
It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Culbertson with Abzianidze, with a reasonable expectation of success, as it would have allowed for storing processing instructions in order to implement the method on a device.

With respect to claims 15 and 18-19, the claims recite limitations substantially similar to those in claims 1 and 4-5, respectively. Claims 15 and 18-19 are similarly rejected. Additionally, Culbertson discloses one or more processors (Figure 5, item 502), one or more computer readable storage media, and program instructions stored on the one or more computer readable storage media for execution by at least one of the one or more computer processors (Figure 5, item 504; paragraph 0079). It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Culbertson with Abzianidze, with a reasonable expectation of success, as it would have allowed for applying a negation operation (Culbertson: paragraph 0077) for creating proofs (Abzianidze: Section 2).

As per dependent claim 23, Abzianidze, Culbertson, Liang, Ponniah, Gimza, Rockstaschel, and Padmanabhan disclose the limitations similar to those in claim 1, and the same rejection is incorporated herein. Rockstaschel discloses wherein the proof generator memorizes a sub-query and a resolvent in an AND/OR computational tree (Figure 2; Sections 3.2 and 3.3: Here, a computation graph (tree) is constructed based upon the unifications and decomposer, using AND/OR functions).
It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Rockstaschel with Abzianidze-Culbertson-Liang-Ponniah-Gimza, with a reasonable expectation of success, as it would have allowed for applying the advantages of symbolic theorem provers, multi-hop reasoning, interpretability, and easy integration of domain knowledge, with the ability to reason with vector representations of predicates and constants (Rockstaschel: Section 1 (see: page 2, first full paragraph)).

As per dependent claim 24, Abzianidze, Culbertson, Liang, Ponniah, Gimza, Rockstaschel, and Padmanabhan disclose the limitations similar to those in claim 1, and the same rejection is incorporated herein. Abzianidze discloses wherein the query, the one or more facts, and the one or more rules are expressed in a natural language (Figure 2; Section 3: Here, the theorem prover receives raw text, as natural language, for creating the query, facts, and rules).

Abzianidze fails to specifically disclose wherein the query decomposer is implemented as a neural network-based technique for natural language processing pre-training able to classify a query into NONE, AND, and OR. However, Rockstaschel, which is analogous to the claimed invention because it is directed toward a proof generator, discloses wherein the query decomposer is implemented as a neural network-based technique (Abstract: Here, a neural network is used in conjunction with a backward chaining algorithm to generate proofs) able to classify a query into NONE (Section 3.1: Here, the determination by the unification module that the rules are not equal is analogous to the NONE operand), AND (Section 3.3), and OR (Section 3.2).
It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Rockstaschel with Abzianidze-Culbertson-Liang-Ponniah-Gimza, with a reasonable expectation of success, as it would have allowed for applying the advantages of symbolic theorem provers, multi-hop reasoning, interpretability, and easy integration of domain knowledge, with the ability to reason with vector representations of predicates and constants (Rockstaschel: Section 1 (see: page 2, first full paragraph)).

As per dependent claim 26, Abzianidze, Culbertson, Liang, Ponniah, Gimza, Rockstaschel, and Padmanabhan disclose the limitations similar to those in claim 21, and the same rejection is incorporated herein. Rockstaschel discloses:

receiving, by a lexical unifier, the query and at least one fact or rule from the one or more facts and rules, wherein the lexical unifier is implemented using a sequence to sequence neural machine translation model (Section 3); and

outputting, by the lexical unifier, a textual unification of the query and the at least one fact or rule, wherein an output is either a resolvent or an empty string (Section 3.5).

It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Rockstaschel with Abzianidze-Culbertson-Liang-Ponniah-Gimza, with a reasonable expectation of success, as it would have allowed for applying the advantages of symbolic theorem provers, multi-hop reasoning, interpretability, and easy integration of domain knowledge, with the ability to reason with vector representations of predicates and constants (Rockstaschel: Section 1 (see: page 2, first full paragraph)).

With respect to claims 29-31, the claims recite limitations substantially similar to those in claims 23-24 and 26, respectively. Claims 29-31 are similarly rejected.
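Claim 23's memoization limitation above (the proof generator memorizes each sub-query and its resolvent as a node in an AND/OR computation tree) can be sketched with a simple cache. The dict-as-tree representation and the `remember` helper are hypothetical stand-ins, not from Rockstaschel:

```python
# Hypothetical sketch of memoizing (sub-query -> resolvent) pairs in
# an AND/OR tree so repeated sub-queries are not re-derived.
tree = {}  # sub_query -> (connective, resolvent)

def remember(sub_query, connective, resolvent):
    # First derivation wins; later calls return the cached node.
    tree.setdefault(sub_query, (connective, resolvent))
    return tree[sub_query]

remember("pugs bark", "AND", "")
remember("pugs bark", "OR", "x")   # already cached; node is unchanged
print(tree["pugs bark"])           # ('AND', '')
```

The cache mirrors what a backward-chaining prover gains from the structure: once a sub-query has been resolved under an AND or OR node, revisiting it is a lookup rather than a fresh proof attempt.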
With respect to claims 34-36, the claims recite limitations substantially similar to those in claims 23-24 and 26, respectively. Claims 34-36 are similarly rejected.

Response to Arguments

Applicant’s arguments have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Abzianidze, Culbertson, Liang, Ponniah, Gimza, Rockstaschel, and Padmanabhan.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Araki (US 2022/0269863): Discloses training a classification model based on an initial set of labeled data, receiving additional labeling by a domain expert, and training the classification model using the domain expert labeled data (paragraph 0026).

Al-Boni (US 2022/0207049): Discloses training a machine learning model, receiving domain expert inputs and/or user feedback, and refining (re-training) the model to improve classification accuracy (paragraph 0032).

Muthuswamy et al. (US 2022/0172211): Discloses an analyst or domain expert reviewing data and providing feedback and, responsive to this feedback, retraining the model (paragraph 0053).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE R STORK whose telephone number is (571) 272-4130. The examiner can normally be reached 8am-2pm and 4pm-6pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Omar Fernandez Rivas, can be reached at (571) 272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KYLE R STORK/
Primary Examiner, Art Unit 2128

Prosecution Timeline

Jun 25, 2021: Application Filed
Mar 03, 2025: Non-Final Rejection (§103)
Apr 25, 2025: Interview Requested
May 08, 2025: Applicant Interview (Telephonic)
May 09, 2025: Examiner Interview Summary
May 09, 2025: Response Filed
Jul 18, 2025: Final Rejection (§103)
Aug 27, 2025: Interview Requested
Sep 11, 2025: Applicant Interview (Telephonic)
Sep 13, 2025: Examiner Interview Summary
Sep 17, 2025: Request for Continued Examination
Sep 23, 2025: Response after Non-Final Action
Sep 29, 2025: Non-Final Rejection (§103)
Nov 19, 2025: Interview Requested
Dec 17, 2025: Applicant Interview (Telephonic)
Dec 20, 2025: Examiner Interview Summary
Jan 02, 2026: Response Filed
Mar 17, 2026: Final Rejection (§103)
Mar 24, 2026: Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585935: EXECUTION BEHAVIOR ANALYSIS TEXT-BASED ENSEMBLE MALWARE DETECTOR
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12585937: SYSTEMS AND METHODS FOR DEEP LEARNING ENHANCED GARBAGE COLLECTION
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12585869: RECOMMENDATION PLATFORM FOR SKILL DEVELOPMENT
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12579454: PROVIDING EXPLAINABLE MACHINE LEARNING MODEL RESULTS USING DISTRIBUTED LEDGERS
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12579412: SPIKE NEURAL NETWORK CIRCUIT INCLUDING SELF-CORRECTING CONTROL CIRCUIT AND METHOD OF OPERATION THEREOF
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 64% (92% with interview, +28.3%)
Median Time to Grant: 4y 0m
PTA Risk: High
Based on 865 resolved cases by this examiner. Grant probability derived from career allow rate.
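The 92% with-interview figure is consistent with treating the +28.3% interview lift as additive percentage points on the 64% career allow rate. That additive reading is an assumption; the tool's methodology is not stated here:

```python
# Assumed model: with_interview = base allow rate + lift (in points),
# capped at 100%. Numbers are taken from the projection panel above.
base, lift = 64.0, 28.3
with_interview = min(base + lift, 100.0)
print(round(with_interview))  # 92
```

If the lift were instead relative (64% x 1.283 ≈ 82%), the displayed 92% would not follow, so the additive-points reading matches the panel.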
