Prosecution Insights
Last updated: April 19, 2026
Application No. 17/538,221

AUTOMATICALLY GENERATING FACTSHEETS FOR ARTIFICIAL INTELLIGENCE-BASED QUESTION ANSWERING SYSTEMS

Status: Non-Final OA (§103)
Filed: Nov 30, 2021
Examiner: SCHALLHORN, TYLER J
Art Unit: 2144
Tech Center: 2100 (Computer Architecture & Software)
Assignee: International Business Machines Corporation
OA Round: 1 (Non-Final)
Grant Probability: 34% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 5y 1m
Grant Probability With Interview: 48%

Examiner Intelligence

Career Allow Rate: 34% (grants only 34% of cases; 89 granted / 262 resolved; -21.0% vs TC avg)
Interview Lift: +13.8% (moderate lift; resolved cases with interview vs. without)
Avg Prosecution: 5y 1m typical timeline; 20 applications currently pending
Total Applications: 282 (career history, across all art units)
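The examiner-intelligence numbers above are simple ratios over resolved cases. As a sanity check, here is a minimal Python sketch that reproduces them from the counts shown (89 granted of 262 resolved, plus the reported +13.8 point interview lift); the function names are illustrative, not the platform's actual code.

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def with_interview(base_rate: float, lift: float) -> float:
    """Projected allowance rate given a percentage-point interview lift."""
    return base_rate + lift

base = allow_rate(89, 262)  # 89 granted / 262 resolved
print(f"Career allow rate: {base:.1f}%")                        # ~34.0%
print(f"With interview:    {with_interview(base, 13.8):.1f}%")  # ~47.8%, shown as 48%
```

Note that the lift is additive in percentage points, which is why 34% + 13.8 points is displayed as the rounded 48% figure.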

Statute-Specific Performance

§101: 14.4% (-25.6% vs TC avg)
§103: 55.7% (+15.7% vs TC avg)
§102: 15.5% (-24.5% vs TC avg)
§112: 9.4% (-30.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 262 resolved cases
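The per-statute deltas above are all consistent with a single Tech Center average near 40%. A short Python sketch of the comparison; the 40.0% TC average is back-computed from the reported deltas and is an estimate, not official USPTO data.

```python
# Examiner's allowance rate when each statute was the rejection basis,
# versus an estimated Tech Center average (back-computed from the
# deltas shown above; illustrative only).
examiner_rate = {"§101": 14.4, "§103": 55.7, "§102": 15.5, "§112": 9.4}
tc_average = 40.0  # estimate implied by the reported deltas

for statute, rate in examiner_rate.items():
    delta = rate - tc_average
    print(f"{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```

The §103 rate (+15.7% vs TC average) stands out: this examiner resolves obviousness disputes in applicants' favor far more often than eligibility, anticipation, or written-description disputes.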

Office Action

§103
DETAILED ACTION

This action is in response to the application filed 30 November 2021. Claims 1–20 are pending. Claims 1, 14, and 20 are independent. Claims 1–20 are rejected.

Notice of Pre-AIA or AIA Status

The present application, filed on or after 16 March 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections—35 U.S.C. § 103

The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 C.F.R.
§ 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. § 102(b)(2)(C) for any potential 35 U.S.C. § 102(a)(2) prior art against the later invention.

Claims 1–3, 9, 11, 14, and 20 are rejected under 35 U.S.C. § 103 as being unpatentable over Ribeiro et al. ("Beyond Accuracy: Behavioral Testing of NLP Models with CheckList") [hereinafter Ribeiro] in view of Gan et al. ("Towards Robustness of Text-to-SQL Models against Synonym Substitution") [hereinafter Gan].

Regarding independent claim 1, Ribeiro teaches:

[a] computer-implemented method comprising: processing at least one given artificial intelligence-based question answering system on tabular data using at least one test engine;
Evaluating natural language processing models, including question answering systems (Ribeiro, p. 4903, § 2.1).

generating, based at least in part on the processing of the at least one given artificial intelligence-based question answering system, one or more accuracy values attributed to the at least one given artificial intelligence-based question answering system […];
The evaluation includes, e.g., failure rates (Ribeiro, p. 4903, figure 1).

generating, based at least in part on the processing of the at least one given artificial intelligence-based question answering system, a set of one or more queries determined to be addressable by the at least one given artificial intelligence-based question answering system […];
Generating test cases (Ribeiro, p. 4904, § 2.3).

generating, based at least in part on the one or more accuracy values and the one or more queries determined to be addressable, at least one human-readable summary of the at least one given artificial intelligence-based question answering system; and
Generating visualizations of the test cases and results (Ribeiro, p. 4902, § 1; p. 4903, figure 1).
[…] wherein the method is carried out by at least one computing device.
[Software inherently requires a computing device; Ribeiro discloses that the system comprises software components in, e.g., § 4.1.]

Ribeiro teaches generating a human-readable summary of a QA system, but does not expressly teach a QA system based on tabular data. However, Gan teaches:

[generating, based at least in part on the processing of the at least one given artificial intelligence-based question answering system, one or more accuracy values attributed to the at least one given artificial intelligence-based question answering system] in connection with particular tabular data
Benchmarking text-to-SQL [tabular question answering] systems (Gan, § 1).

[generating, based at least in part on the processing of the at least one given artificial intelligence-based question answering system, a set of one or more queries determined to be addressable by the at least one given artificial intelligence-based question answering system] on the particular tabular data
Generating a benchmark data set (Gan, § 2.1).

performing one or more automated actions based at least in part on the at least one human-readable summary;
Performing adversarial training (Gan, § 3.2).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine the teachings of Ribeiro with those of Gan. One would have been motivated to do so.

Regarding dependent claim 2, the rejection of claim 1 is incorporated and Ribeiro/Gan further teaches:

wherein processing at least one given artificial intelligence-based question answering system on tabular data using at least one test engine comprises processing multiple artificial intelligence-based question answering systems on the tabular data using the at least one test engine; and
The system can test a plurality of models (Ribeiro, p. 4905, § 3).
wherein generating, based at least in part on the processing of the multiple artificial intelligence-based question answering systems, a set of one or more queries comprises generating a universal test bed of queries determined to be addressable by the multiple artificial intelligence-based question answering systems.
A user can generate large numbers of test cases (Ribeiro, p. 4902, § 1). A benchmark data set is generated by synonym substitution on a data set for evaluating text-to-SQL models (Gan, § 2.1).

Regarding dependent claim 3, the rejection of claim 2 is incorporated and Ribeiro/Gan further teaches:

further comprising: comparing performance of the multiple artificial intelligence-based question answering systems in connection with the universal test bed of queries.
The systems can test a plurality of models (Ribeiro, p. 4905, § 3; Gan, § 4.1). The system can generate a comparison of models (Ribeiro, p. 4903, § 2).

Regarding dependent claim 9, the rejection of claim 1 is incorporated and Ribeiro/Gan further teaches:

wherein processing the at least one given artificial intelligence-based question answering system on tabular data using at least one test engine comprises testing the at least one given artificial intelligence-based question answering system on the particular tabular data using multiple questions of varying complexity.
The data set is generated from the Spider data set, which comprises questions having varying degrees of difficulty¹ (Gan, § 2.1).

Regarding dependent claim 11, the rejection of claim 1 is incorporated and Ribeiro/Gan further teaches:

wherein generating the one or more accuracy values comprises generating at least one accuracy value measured on at least one standardized test set of natural language questions.
Models are evaluated using an accuracy metric (Gan, § 4.1).

Regarding independent claim 14, this claim recites limitations similar to those of claim 1, and is rejected for the same reasons.
Regarding independent claim 20, this claim recites limitations similar to those of claim 1, and is rejected for the same reasons.

Claims 4, 12, and 15 are rejected under 35 U.S.C. § 103 as being unpatentable over Ribeiro et al. ("Beyond Accuracy: Behavioral Testing of NLP Models with CheckList") [hereinafter Ribeiro] in view of Gan et al. ("Towards Robustness of Text-to-SQL Models against Synonym Substitution") [hereinafter Gan], further in view of Lei et al. (US 2020/0167669 A1) [hereinafter Lei].

Regarding dependent claim 4, the rejection of claim 1 is incorporated. Ribeiro/Gan teaches generating a human-readable summary, but does not expressly teach generating an API. However, Lei teaches:

wherein performing the one or more automated actions comprises automatically generating, based at least in part on the at least one human-readable summary, one or more application programming interfaces associated with the at least one given artificial intelligence-based question answering system.
A software service generates an application programming interface for a machine learning model (Lei, ¶ 68).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine the teachings of Ribeiro/Gan with those of Lei. One would have been motivated to do so in order to allow the model to be used with other software tools (Lei, ¶¶ 68–70).

Regarding dependent claim 12, the rejection of claim 1 is incorporated. Ribeiro/Gan teaches generating a human-readable summary, but does not expressly teach updating an API. However, Lei teaches:

wherein performing the one or more automated actions comprises automatically updating, based at least in part on the at least one human-readable summary, one or more existing application programming interfaces associated with the at least one given artificial intelligence-based question answering system.
An API can be updated in response to changes to a machine learning model (Lei, ¶ 79).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine the teachings of Ribeiro/Gan with those of Lei. One would have been motivated to do so in order to allow the model to be used with other software tools (Lei, ¶¶ 68–70) and updated to incorporate changes to the specifications of the model (Lei, ¶ 79).

Regarding dependent claim 15, this claim recites limitations similar to those of claim 4, and is rejected for the same reasons.

Claims 5–7 and 16–18 are rejected under 35 U.S.C. § 103 as being unpatentable over Ribeiro et al. ("Beyond Accuracy: Behavioral Testing of NLP Models with CheckList") [hereinafter Ribeiro] in view of Gan et al. ("Towards Robustness of Text-to-SQL Models against Synonym Substitution") [hereinafter Gan] and Lei et al. (US 2020/0167669 A1) [hereinafter Lei], further in view of Lester et al. (US 2024/0378196 A1) [hereinafter Lester].

Regarding dependent claim 5, the rejection of claim 4 is incorporated. Lei teaches generating an API, but does not expressly teach an API for search operations. However, Lester teaches:

wherein generating one or more application programming interfaces associated with the at least one given artificial intelligence-based question answering system comprises generating at least one application programming interface pertaining to search operations with respect to the at least one given artificial intelligence-based question answering system, the particular tabular data, and one or more natural language queries.
A prompt tuning semantic search API (Lester, ¶ 96).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine the teachings of Ribeiro/Gan/Lei with those of Lester.
One would have been motivated to do so in order to tune the machine learning model (Lester, ¶ 41).

Regarding dependent claim 6, the rejection of claim 4 is incorporated. Lei teaches generating an API, but does not expressly teach an API for obtaining training data. However, Lester teaches:

wherein generating one or more application programming interfaces associated with the at least one given artificial intelligence-based question answering system comprises generating at least one application programming interface pertaining to obtaining training data for at least one of the at least one given artificial intelligence-based question answering system and the particular tabular data.
A training API for inputting training data (Lester, ¶ 109).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine the teachings of Ribeiro/Gan/Lei with those of Lester. One would have been motivated to do so in order to tune the machine learning model (Lester, ¶ 41).

Regarding dependent claim 7, the rejection of claim 4 is incorporated. Lei teaches generating an API, but does not expressly teach an API for modifying the QA system. However, Lester teaches:

wherein generating one or more application programming interfaces associated with the at least one given artificial intelligence-based question answering system comprises generating at least one application programming interface pertaining to modifying the at least one given artificial intelligence-based question answering system with respect to the particular tabular data.
A training API for inputting data for training [modifying] a model (Lester, ¶ 109).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine the teachings of Ribeiro/Gan/Lei with those of Lester. One would have been motivated to do so in order to tune the machine learning model (Lester, ¶ 41).
Regarding dependent claim 16, this claim recites limitations similar to those of claim 5, and is rejected for the same reasons.

Regarding dependent claim 17, this claim recites limitations similar to those of claim 6, and is rejected for the same reasons.

Regarding dependent claim 18, this claim recites limitations similar to those of claim 7, and is rejected for the same reasons.

Claims 8, 10, 13, and 19 are rejected under 35 U.S.C. § 103 as being unpatentable over Ribeiro et al. ("Beyond Accuracy: Behavioral Testing of NLP Models with CheckList") [hereinafter Ribeiro] in view of Gan et al. ("Towards Robustness of Text-to-SQL Models against Synonym Substitution") [hereinafter Gan], further in view of Elprin et al. (US 2021/0133632 A1) [hereinafter Elprin].

Regarding dependent claim 8, the rejection of claim 1 is incorporated. Ribeiro/Gan teaches generating a human-readable summary, but does not expressly teach automatically training the QA system. However, Elprin teaches:

wherein performing the one or more automated actions comprises training the at least one given artificial intelligence-based question answering system based at least in part on at least a portion of the at least one human-readable summary.
A model may be retrained in response to monitoring the model (Elprin, ¶ 73).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine the teachings of Ribeiro/Gan with those of Elprin. One would have been motivated to do so in order to prevent the model from drifting, i.e., having degraded accuracy (Elprin, ¶ 8).

Regarding dependent claim 10, the rejection of claim 1 is incorporated. Ribeiro/Gan teaches generating a human-readable summary, but does not expressly teach outputting suggestions for improving the QA system.
However, Elprin teaches:

wherein automatically generating the at least one human-readable summary comprises determining and outputting one or more suggestions for improving the at least one given artificial intelligence-based question answering system.
Retraining of a model may be suggested based on the model drifting (Elprin, ¶ 49).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine the teachings of Ribeiro/Gan with those of Elprin. One would have been motivated to do so in order to prevent the model from drifting, i.e., having degraded accuracy (Elprin, ¶ 8).

Regarding dependent claim 13, the rejection of claim 1 is incorporated. Ribeiro/Gan teaches generating a human-readable summary, but does not expressly teach providing a cloud service. However, Elprin teaches:

wherein software implementing the method is provided as a service in a cloud environment.
A model monitoring system in communication with machine learning models on a cloud (Elprin, ¶¶ 54, 100, 111).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine the teachings of Ribeiro/Gan with those of Elprin. One would have been motivated to do so in order to, e.g., provide remote access to the system, increase security/reliability of the system, etc.

Regarding dependent claim 19, this claim recites limitations similar to those of claim 8, and is rejected for the same reasons.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tyler Schallhorn, whose telephone number is 571-270-3178. The examiner can normally be reached Monday through Friday, 8:30 a.m. to 6 p.m. (ET).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tamara Kyle, can be reached at 571-272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in the USA or Canada) or 571-272-1000.

/Tyler Schallhorn/
Examiner, Art Unit 2144

/TAMARA T KYLE/
Supervisory Patent Examiner, Art Unit 2144

¹ See Yu et al., "Spider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-SQL Task", p. 3917, § 6 "SQL Hardness Criteria", and figure 3.
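For readers less familiar with the claim language, the method recited in independent claim 1 (run a QA system against tabular data with a test engine, derive accuracy values and addressable queries, and emit a human-readable summary) can be sketched in a few lines of Python. Everything below is a hypothetical toy illustration; neither the application nor the cited references disclose this particular implementation.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    query: str
    expected: str

def run_factsheet(qa_system, table, test_cases):
    """Toy 'test engine': evaluate a QA system on tabular data, compute an
    accuracy value, collect addressable queries, and build a summary."""
    results = [(tc, qa_system(table, tc.query)) for tc in test_cases]
    addressable = [tc.query for tc, ans in results if ans == tc.expected]
    accuracy = len(addressable) / len(results)
    summary = (
        f"Factsheet\n"
        f"  Accuracy: {accuracy:.0%} over {len(results)} test queries\n"
        f"  Addressable queries: {addressable}"
    )
    return accuracy, addressable, summary

# Usage with a trivial lookup-based "QA system" over a one-column table:
table = {"capital of France": "Paris", "capital of Peru": "Lima"}
qa = lambda tbl, q: tbl.get(q, "unknown")
cases = [TestCase("capital of France", "Paris"),
         TestCase("capital of Peru", "Lima"),
         TestCase("capital of Mars", "none")]
acc, ok, summary = run_factsheet(qa, table, cases)
print(summary)  # accuracy 2/3, two addressable queries
```

The rejection maps the first two steps (evaluation and test-case generation) to Ribeiro's CheckList and the tabular-data aspect to Gan's text-to-SQL benchmarking, so the dispute is likely to center on the combination rationale rather than the individual steps.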

Prosecution Timeline

Nov 30, 2021
Application Filed
Feb 21, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572403: AUTOMATICALLY CONVERTING ERROR LOGS HAVING DIFFERENT FORMAT TYPES INTO A STANDARDIZED AND LABELED FORMAT HAVING RELEVANT NATURAL LANGUAGE INFORMATION (granted Mar 10, 2026; 2y 5m to grant)
Patent 12554987: COMPUTER-IMPLEMENTED METHODS AND SYSTEMS FOR DNN WEIGHT PRUNING FOR REAL-TIME EXECUTION ON MOBILE DEVICES (granted Feb 17, 2026; 2y 5m to grant)
Patent 12481824: CONTENT ASSOCIATION IN FILE EDITING (granted Nov 25, 2025; 2y 5m to grant)
Patent 12475176: AUTOMATED SYSTEM AND METHOD FOR CREATING STRUCTURED DATA OBJECTS FOR A MEDIA-BASED ELECTRONIC DOCUMENT (granted Nov 18, 2025; 2y 5m to grant)
Patent 12450420: GENERATION AND OPTIMIZATION OF OUTPUT REPRESENTATION (granted Oct 21, 2025; 2y 5m to grant)
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 34%
With Interview: 48% (+13.8%)
Median Time to Grant: 5y 1m
PTA Risk: Low
Based on 262 resolved cases by this examiner. Grant probability derived from career allow rate.
