Prosecution Insights
Last updated: April 19, 2026
Application No. 17/903,361

Framework for Evaluation of Document Summarization Models

Final Rejection — §103

Filed: Sep 06, 2022
Examiner: ISKENDER, ALVIN ALIK
Art Unit: 2654
Tech Center: 2600 — Communications
Assignee: ServiceNow Inc.
OA Round: 4 (Final)
Grant Probability: 48% (Moderate)
OA Rounds: 5-6
To Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 48% (grants 48% of resolved cases; 12 granted / 25 resolved; -14.0% vs TC avg)
Interview Lift: +60.3% (strong lift in resolved cases with interview)
Avg Prosecution: 3y 4m (typical timeline; 20 currently pending)
Total Applications: 45 (career history, across all art units)

Statute-Specific Performance

§101: 15.6% (-24.4% vs TC avg)
§103: 53.0% (+13.0% vs TC avg)
§102: 25.8% (-14.2% vs TC avg)
§112: 5.4% (-34.6% vs TC avg)

Tech Center averages are estimates • Based on career data from 25 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 03 November 2025 have been fully considered but they are not persuasive. Applicant argues that Durmus et al. does not teach a plurality of summarization models utilizing different natural language processing techniques. However, Durmus’ evaluation method is tested against a variety of models, as seen in Table 4, second column. (In Section 4, FEQA was experimentally compared to the human-annotated scores.)

Applicant’s arguments with respect to claim(s) 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Durmus et al. (“FEQA: A Question Answering Evaluation Framework for Faithfulness Assessment in Abstractive Summarization”) in view of Vadlamani et al. (US 20110218947 A1), Baughman et al. (US 20220129636 A1) and Zhou et al. (US 11100179 B1).

Regarding claim 1, Durmus discloses persistent storage containing: (i) an original document, and (ii) a plurality of summaries of the original document respectively produced by a plurality of summarization models, wherein the original document and each of the plurality of summaries include textual content; (Figure 2: Textual summary provided by a summarization model and a corresponding source document)

one or more processors configured to: provide, to a query answering application, the set of queries, the original document, and the plurality of summaries; (Figure 2: Generated questions, the summaries, and the source documents are inputted to a QA model)

receive, from the query answering application and for the set of queries, a set of document answers corresponding to the original document and sets of summary answers respectively corresponding to each of the plurality of summaries; (Figure 2: Answers from the document and answers from the summary)

provide, to an answer matching application, the set of document answers and the sets of summary answers; (Section 3: Answer Verification)

receive, from the answer matching application, respective scores for each of the plurality of summaries, wherein the respective scores represent accuracies of the sets of summary answers with respect to the set of document answers. (Figure 2, Section 3 Answer Verification, Table 9: Answers collected from the summary and answers collected from the source document are compared.
A faithfulness score is assigned based on how well the summary answer matches the source answer)

Durmus does not disclose the processor configured to provide, to an entity extractor application, the original document; receive, from the entity extractor application, a list of entities found within the textual content of the original document; provide, to a query generator application, the original document and the list of entities; receive, from the query generator application, a set of queries answerable by the textual content of the original document, wherein the set of queries is based on the list of entities.

However, Vadlamani does disclose the processor configured to provide, to an entity extractor application, the original document; ([0025]: Identify and extract entities within the text of a document)

receive, from the entity extractor application, a list of entities found within the textual content of the original document; ([0025]: Identify and extract entities within the text of a document)

provide, to a query generator application, the original document and the list of entities; ([0013]: from the document and list of entities, assertions are identified which are then inverted to create questions)

receive, from the query generator application, a set of queries answerable by the textual content of the original document, wherein the set of queries is based on the list of entities; ([0013]: from the document and list of entities, assertions are identified which are then inverted to create questions)

Durmus largely differs from the claim in that it generates questions from the plurality of summaries, not from the source document. Vadlamani remedies this deficiency by disclosing a method of generating queries by extracting entities from a source document.
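The evaluation loop mapped above (a set of queries answered against both the original document and each model's summary, then scored by agreement) can be sketched as follows. This is a toy illustration only, not the method of Durmus or any other cited reference: `toy_qa` is a stand-in for a real QA model (it simply returns the first sentence containing every query keyword), and all names and data are invented for the sketch.

```python
def toy_qa(text: str, query_keywords: set[str]) -> str:
    """Stand-in QA model: first sentence mentioning every keyword, else ''."""
    for sentence in text.split(". "):
        words = set(sentence.lower().replace(".", "").split())
        if query_keywords <= words:
            return sentence.strip().rstrip(".")
    return ""

def faithfulness_scores(document: str, summaries: dict[str, str],
                        queries: list[set[str]]) -> dict[str, float]:
    """Fraction of queries where a summary's answer matches the document's."""
    doc_answers = [toy_qa(document, q) for q in queries]
    scores = {}
    for model, summary in summaries.items():
        matches = sum(
            1 for q, gold in zip(queries, doc_answers)
            if gold and toy_qa(summary, q) == gold
        )
        scores[model] = matches / len(queries)
    return scores

document = "The plant failed its inspection. The report was filed on Monday"
summaries = {
    "model_a": "The plant failed its inspection",  # faithful summary
    "model_b": "The plant passed its inspection",  # hallucinated summary
}
queries = [{"plant", "inspection"}]
print(faithfulness_scores(document, summaries, queries))
# -> {'model_a': 1.0, 'model_b': 0.0}
```

A real implementation would substitute a trained QA model and a softer answer-matching step than exact string equality.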
It would have been obvious to one of ordinary skill in the art to generate queries from a source document as described by Vadlamani in performing the method of Durmus, as the collection of assertions from which the queries are generated can themselves be considered a summarization of the document’s facts (Vadlamani Abstract).

Durmus and Vadlamani do not disclose: receive, by way of a web-based interface or a mobile application, a request for a search of the persistent storage; provide, in response to the request for the search of the persistent storage, a particular summary of the plurality of summaries that has a score above a threshold value.

However, Baughman does disclose receive, by way of a web-based interface or a mobile application, a request for a search of the persistent storage; ([0038], [0138]: display summary via web interface)

provide, in response to the request for the search of the persistent storage, a particular summary of the plurality of summaries that has a score above a threshold value. ([0037]-[0038]: provide a summary if its factual score satisfies the threshold)

It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to check a summary’s score against a threshold when providing a summary to a user because it ensures the provided summaries are factually sound (see Baughman [0022]).

Durmus, Vadlamani, and Baughman do not disclose: retrieve and present, in response to the selection of the particular summary by way of the web-based interface or the mobile application, the original document.

Zhou et al. does disclose retrieve and present, in response to the selection of the particular summary by way of the web-based interface or the mobile application, the original document. (Column 20, lines 51-58: user may be able to select a summary of a content object to be redirected to the original content object).
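On the front end of this combined pipeline, claim 7 (addressed below) recites that the entity extractor may employ named entity recognition, regular expressions, or dependency parsing. A minimal sketch using the regex option, paired with a template-based query generator; the pattern, the question template, and the example text are all invented here and do not reproduce Vadlamani's assertion-inversion method.

```python
import re

# Crude stand-in for an entity extractor: capitalized phrases as "entities".
# (This also keeps sentence-initial words, which real NER would filter out.)
ENTITY_RE = re.compile(r"\b[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*\b")

def extract_entities(document: str) -> list[str]:
    """Return unique capitalized phrases in order of first appearance."""
    entities: list[str] = []
    for match in ENTITY_RE.findall(document):
        if match not in entities:
            entities.append(match)
    return entities

def generate_queries(entities: list[str]) -> list[str]:
    """One template question per extracted entity."""
    return [f"What does the document say about {entity}?" for entity in entities]

doc = "Acme Corp failed the safety audit. Regulators in Ohio were notified."
print(extract_entities(doc))   # -> ['Acme Corp', 'Regulators', 'Ohio']
print(generate_queries(["Ohio"])[0])
```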
It would have been obvious to one with ordinary skill in the art to incorporate the retrieval of the original document as taught by Zhou with the teachings of Durmus, Vadlamani, and Baughman because it enables the user to interact with the original content object from which the summary is derived (see Zhou Column 20 Lines 51-58).

Regarding claim 2, which depends from claim 1 as addressed above, Durmus further discloses the system wherein the one or more processors are further configured to: identify a particular summarization model of the plurality of summarization models that produced the particular summary of the plurality of summaries that has the highest score out of all of the plurality of summaries; (Table 4, Section 3: gather faithfulness scores from each model, the highest scoring model can be identified)

select the particular summarization model to produce further summaries for a set of further original documents. (Section 1: more faithful models are more desirable to use)

Regarding claim 3, which depends from claim 2 as addressed above, Durmus further discloses the system wherein at least some of the further original documents are knowledgebase articles, incidents, or email threads. (Section 2.2 Datasets: FEQA metric tested on summarization results from articles)

Regarding claim 4, which depends from claim 1 as addressed above, Durmus further discloses the system wherein the persistent storage also contains: (i) a second original document, and (ii) a second plurality of summaries of the second original document respectively produced by the plurality of summarization models, wherein the second original document and each of the second plurality of summaries include textual content, (Figure 2: Textual summary provided by a summarization model and a corresponding source document) and wherein the one or more processors are further configured to: provide, to the query answering application, the second set of queries, the second original document, and the second
plurality of summaries; (Figure 2: Textual summary provided by a summarization model and a corresponding source document)

receive, from the query answering application and for the second set of queries, a second set of document answers corresponding to the second original document and second sets of summary answers respectively corresponding to each of the second plurality of summaries; (Figure 2: Answers from the document and answers from the summary)

provide, to the answer matching application, the second set of document answers and the second sets of summary answers; (Section 3: Answer Verification)

receive, from the answer matching application, second respective scores for each of the second plurality of summaries, wherein the second respective scores represent accuracies of the second sets of summary answers with respect to the second set of document answers; and (Figure 2, Section 3 Answer Verification, Table 9: Answers collected from the summary and answers collected from the source document are compared. A faithfulness score is assigned based on how well the summary answer matches the source answer)

modify the respective scores based on the second respective scores.
(Section 3, 4: summary faithfulness of a model is per-summary, metric scores calculated on a dataset of summaries for data analysis; Table 4: averaging of faithfulness scores between all summaries produced from a dataset for each model)

Vadlamani further discloses the processor configured to provide, to the entity extractor application, the second original document; ([0025]: Identify and extract entities within the text of a document)

receive, from the entity extractor application, a second list of entities found within the textual content of the second original document; ([0025]: Identify and extract entities within the text of a document)

provide, to the query generator application, the second original document and the second list of entities; ([0013]: from the document and list of entities, assertions are identified which are then inverted to create questions)

receive, from the query generator application, a second set of queries answerable by the textual content of the second original document, wherein the second set of queries is based on the second list of entities. ([0013]: from the document and list of entities, assertions are identified which are then inverted to create questions)

Regarding claim 5, which depends from claim 1 as addressed above, Durmus further discloses the system wherein at least some of the set of queries are provided by one or more human users. (Section 5, QA as a proxy: QA-based metrics using human-produced questions)

Regarding claim 6, which depends from claim 1 as addressed above, Vadlamani further discloses the system wherein at least some of the set of queries are directed to a specific sub-topic within the textual content of the original document.
([0011]: questions are clustered around concepts/topics within the document)

Regarding claim 7, which depends from claim 1 as addressed above, Vadlamani further discloses the system wherein the entity extractor application employs at least one of named entity recognition, regular expressions, or dependency parsing. ([0026]: entity and relationship extraction)

Regarding claim 8, which depends from claim 1 as addressed above, Vadlamani further discloses the system wherein the entities in the list of entities are words or phrases that are predicted to represent a semantic meaning of the original document. ([0026]: extract entities from important sentences; [0027]: use of an ontology to select important sentences and extract entities corresponding to semantic patterns in the document)

Regarding claim 9, which depends from claim 1 as addressed above, Vadlamani further discloses the system wherein the query generator application employs at least one of a rule-based algorithm, an expert system, or an encoder-decoder neural network architecture. ([0029]: queries are generated by inverting an assertion through a rule-based transformation)

Regarding claim 10, which depends from claim 1 as addressed above, Durmus further discloses the system wherein the query answering application employs sentiment analysis of the set of queries, the original document, and the plurality of summaries. (Section 3 Answer Verification: QA model based on pre-trained BERT, which can employ sentiment analysis)

Regarding claim 11, which depends from claim 1 as addressed above, Durmus further discloses the system wherein the answer matching application determines, on a question-by-question basis, whether answers from each set of summary answers match corresponding answers from the set of document answers.
(Figure 2: F1 score is 0.5 because only 50% of the answers match)

Regarding claim 12, which depends from claim 11 as addressed above, Durmus further discloses the system wherein the respective scores are based on respective counts of matched answers between each set of summary answers and the set of document answers. (Figure 2: F1 score is a percentage of matching answers)

Regarding claim 13, which depends from claim 1 as addressed above, Durmus further discloses the system wherein the one or more processors are further configured to: provide, to the entity extractor application, the plurality of summaries; (Section 3 Question generation: identify entities from summaries)

receive, from the entity extractor application, respective lists of further entities found within the textual content of the plurality of summaries; (Section 3 Question generation: identify entities from summaries)

determine, for each of the respective lists of further entities, whether the entities therein are a subset of the list of entities; (Section 3 Question generation: metric is equivalent to comparing whether the fact list of the summary is a subset of the fact list of the documents)

modify the respective scores based on extents to which the respective lists of further entities are subsets of the list of entities.
(Figure 2: summary answer A1 is a mismatch because the word ‘inspection’, identified as a relevant entity in the sentence produced by the summary model, isn’t a relevant entity in the source document, causing the F1 score to decrease)

Regarding claim 14, which depends from claim 13 as addressed above, Durmus further discloses the system wherein the one or more processors are further configured to: provide, to the query generator application, the respective lists of further entities; (Section 3 Question Generator: questions are created from entities found in the summary)

receive, from the query generator application, sets of further queries respectively corresponding to each of the respective lists of further entities; (Section 3 Question Generator: questions are created from entities found in the summary)

provide, to the query answering application, the sets of further queries and the original document; (Figure 2: Generated questions and the source document are input to a QA model)

receive, from the query answering application and for the sets of further queries, sets of further answers respectively corresponding to the sets of further queries; (Figure 2: Answers from document)

further modify the respective scores based on extents to which the sets of further answers were found in the original document. (Section 3 Answer Verification: A QA model may be able to hypothesize that a question is unanswerable, which will decrease the faithfulness score)

Regarding claims 15-20, they are composed of limitations analogous to claims 1-2, 5, and 13-14. They are therefore rejected in the same manner as described above for the respective claims.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALVIN ISKENDER whose telephone number is (703)756-4565. The examiner can normally be reached M-F.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, HAI PHAN, can be reached at (571) 272-6338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALVIN ISKENDER/
Examiner, Art Unit 2654

/HAI PHAN/
Supervisory Patent Examiner, Art Unit 2654
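The answer-verification step running through the action above (the F1 scores in Durmus Figure 2, and claims 11-12's counts of matched answers) is conventionally computed as token-level F1 between the summary's answer and the document's answer. The following is a minimal sketch of that standard formula; whether the cited references use exactly this variant is not established by the excerpt.

```python
from collections import Counter

def token_f1(summary_answer: str, document_answer: str) -> float:
    """Token-overlap F1 between two answer strings."""
    pred = summary_answer.lower().split()
    gold = document_answer.lower().split()
    overlap = sum((Counter(pred) & Counter(gold)).values())  # shared tokens
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

print(token_f1("failed the inspection", "failed the inspection"))     # 1.0
print(token_f1("passed review", "failed the inspection"))             # 0.0
print(round(token_f1("the inspection", "failed the inspection"), 2))  # 0.8
```

Averaging this score over all generated questions yields a per-summary faithfulness score of the kind the rejection attributes to Durmus.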

Prosecution Timeline

Sep 06, 2022: Application Filed
Nov 16, 2024: Non-Final Rejection — §103
Jan 30, 2025: Response Filed
May 13, 2025: Final Rejection — §103
Jun 11, 2025: Interview Requested
Jul 07, 2025: Response after Non-Final Action
Jul 07, 2025: Applicant Interview (Telephonic)
Jul 12, 2025: Examiner Interview Summary
Aug 05, 2025: Request for Continued Examination
Aug 06, 2025: Response after Non-Final Action
Aug 23, 2025: Non-Final Rejection — §103
Oct 06, 2025: Interview Requested
Oct 31, 2025: Applicant Interview (Telephonic)
Nov 02, 2025: Examiner Interview Summary
Nov 03, 2025: Response Filed
Feb 21, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12562244: COMBINING DOMAIN-SPECIFIC ONTOLOGIES FOR LANGUAGE PROCESSING (granted Feb 24, 2026; 2y 5m to grant)
Patent 12531078: NOISE SUPPRESSION FOR SPEECH ENHANCEMENT (granted Jan 20, 2026; 2y 5m to grant)
Patent 12505825: SPONTANEOUS TEXT TO SPEECH (TTS) SYNTHESIS (granted Dec 23, 2025; 2y 5m to grant)
Patent 12456457: ALL DEEP LEARNING MINIMUM VARIANCE DISTORTIONLESS RESPONSE BEAMFORMER FOR SPEECH SEPARATION AND ENHANCEMENT (granted Oct 28, 2025; 2y 5m to grant)
Patent 12407783: DOUBLE-MICROPHONE ARRAY ECHO ELIMINATING METHOD, DEVICE AND ELECTRONIC EQUIPMENT (granted Sep 02, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 48% (99% with interview, +60.3%)
Median Time to Grant: 3y 4m
PTA Risk: High

Based on 25 resolved cases by this examiner. Grant probability derived from career allow rate.
