Prosecution Insights
Last updated: April 19, 2026
Application No. 18/484,663

SUMMARIZATION PROCESSOR FOR UNSTRUCTURED HEALTH CARE DOCUMENTATION RECEIVED FROM OVER A COMMUNICATIONS NETWORK

Status: Non-Final OA (§103)
Filed: Oct 11, 2023
Examiner: NAZAR, AHAMED I
Art Unit: 2178
Tech Center: 2100 — Computer Architecture & Software
Assignee: Concord III LLC
OA Round: 1 (Non-Final)
Grant Probability: 53% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 11m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 53% (grants 53% of resolved cases; 202 granted / 378 resolved; -1.6% vs TC avg)
Interview Lift: +35.1% (strong; with vs. without an interview, among resolved cases)
Typical Timeline: 3y 11m avg prosecution; 29 applications currently pending
Career History: 407 total applications across all art units

Statute-Specific Performance

§101: 9.2% (-30.8% vs TC avg)
§103: 59.7% (+19.7% vs TC avg)
§102: 15.3% (-24.7% vs TC avg)
§112: 9.6% (-30.4% vs TC avg)
Baselines are Tech Center average estimates. Based on career data from 378 resolved cases.

Office Action (Non-Final, §103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This communication is responsive to the application filed 10/11/2023. Claims 1-20 are pending, with claims 1, 8, and 14 as independent claims.

Claim Objections

Claims 16-17 (second occurrences) are objected to because of the following informalities: it appears there is a typographical error in the numbering of the second claim 16, with the limitation “a word count limit”. The claim should be numbered claim 18 instead of claim 16. Similarly, the second claim 17, with the limitation “a spoken language preference for the summarization of the text”, should be numbered claim 19 instead of claim 17. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6-7, 14-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Soubbotin (US 12,038,958, filed 4/14/2023) in view of Bathwal et al. (US 2024/0281487, filed 2/17/2023, hereinafter Bathwal).

Claim 1.
A document summarization method comprising: receiving an origin document from over a communications network into memory of a host computing device;

Soubbotin discloses in [16:43-57]: “The search results page may display a user's query for “causes of ocean tides” in query field 101, as well as search results list 103 as links to documents that were found by the search engine while matching the query.” (emphasis added) Examiner note: the search results page may be the received document.

segmenting text of the origin document into different text sections;

Soubbotin discloses in [16:43-67]: “the page may comprise three additions to a conventional display of search results including, without limitation, summary request button 100, text fragments field 105, and checkbox 110 for each search result in search results list 103.” (emphasis added) Examiner note: the checkbox segments the search results page into different text sections, as shown in fig. 1.

[submitting the different text sections over the communications network to a large language model (LLM) through a generative artificial intelligence (AI) engine] with a precursor directive [to prepare for each of the different text sections, a summarization so as to produce a set of summarizations], and further [submitting the summarizations to the LLM of the generative AI engine with the pre-cursor directive to prepare a summarization of the summarizations];

Soubbotin discloses in [22:19-39]: “the page may comprise three additions to a conventional display of search results including, without limitation, summary request button 100, text fragments field 105, and checkbox 110 for each search result in search results list 103… Text fragments field 105 may indicate the number of text fragments or sentences to be included in the summary, and may be specified by the user in text fragments field 105.” (emphasis added) Examiner note: the precursor directive may be the text fragments field 105, such that a user may be allowed to indicate the number of text fragments, i.e., the length of the summary of each search result, as shown in fig. 1.

Soubbotin does not explicitly disclose submitting the different text sections over the communications network to a large language model (LLM) through a generative artificial intelligence (AI) engine… to prepare for each of the different text sections, a summarization so as to produce a set of summarizations… submitting the summarizations to the LLM of the generative AI engine with the pre-cursor directive to prepare a summarization of the summarizations.

However, Bathwal, in an analogous art, discloses in [0093]: “the method 600 receives, from a user device, a search query. In block 604, the method 600 retrieves a plurality of search result documents based on the search query. In block 606, the method 600 generates a summary of each of the plurality of search result documents using distinct per-document summarization machine learning models. According to some examples, the distinct per-document summarization machine learning models have been fine-tuned using a training dataset automatically created by prompting a large language model. In block 608, the method 600 synthesizes the summary of each of the plurality of search result documents into a single-consolidated answer responsive to the received search query from the user device. In block 610, the method 600 formats the consolidated answer to include citations to the plurality of search results documents.” (emphasis added) Examiner note: each document (section) of the search results may be retrieved and summarized separately using a large language model (LLM), wherein the summaries of each search result document may be combined to generate a single summary as an answer to the query.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Soubbotin with the teaching of Bathwal for “using an interface combined with generative artificial intelligence (GenAI or GAI) to generate a single, coherent answer to a user's query or set of queries based on multiple documents… An enhanced summarization system provides a comprehensive summary that combines information from various sources generating output results including citations of sources that not only provide the answer to the user's query, but also show the sources of information.” Bathwal [0017].

inserting the summarization of the summarizations into a summarization document along with each individual summarization of the different text sections;

Further, Soubbotin discloses in [21:49 to 22:18]: “FIG. 2 illustrates an exemplary Graphical User Interface (GUI) of summary 200 on a user's query, produced by a search system… the summary may be displayed in different formats such as, but not limited to, a list of key concepts, a cluster hierarchy, a tag cloud, etc. The sentences or text fragments within summary 200 may originate from various sources, which may be listed below the summary in source list 205. The sources in source list 205 may refer to the documents or Web pages found by the search engine as displayed in a conventional search result, shown by way of example in FIG. 1.” (emphasis added) Examiner note: summary 200 may be the summarization document, in which each individual summarization of the different search results (sections) may be displayed as a line, as shown in fig. 2.

and, persisting the summarization document in the memory.

Soubbotin discloses in [22:19-39]: “The user may also click a Save summary button 220 to, for example, without limitation, save summary 200 in a file on a local or network drive, send the summary as an email, print the summary page, etc. In an alternate embodiment, all original sources may be displayed, allowing the user to request a modified summary that would use the sources specified by the user.” (emphasis added)

Claims 2 and 15.

The rejection of the method of claim 1 is incorporated, wherein the electronic document is a raster image of a document, the method further comprising optical character recognizing the text from the origin document.

Soubbotin discloses in [31:45 to 32:16]: “Objects in the image may be identified by image recognition techniques… each identified element such as but not limited to, a bird, a face, a plane, etc., may be associated with an explanatory text… such explanatory text may be just one word—“bird”, but may also be longer—for example, coming from a Wikipedia article on birds. The explanatory text of such partial image may be further analyzed, and included in the resulting summary as a text fragment.” (emphasis added)

Claims 3 and 16.

The rejection of the method of claim 1 is incorporated, further comprising transforming the text of the origin document into a structured document and submitting the structured document to the LLM of the generative AI engine with the pre-cursor directive.

Soubbotin discloses in [25:40 to 26:5]: “the multi-document summarization engine may receive the query and the links to the documents. Then, the multi-document summarization engine parses each individual document to extract key concepts and corresponding text fragments or sentences and performs multi-document summarization of the documents based on the extracted concepts, producing a text summary representing a digest of the search results.” (emphasis added) Examiner note: the origin document may be the search results page as shown in fig. 1.
The search results page may be transformed into a structured document by incorporating a checkbox for each search result, so that a user has the option of including or excluding each search result segment/section from the summary. Each search result segment/section may be associated with text fragments field 105 as the pre-cursor directive to determine the number of sentences to be included in the summary from each search result segment/section.

Claims 4 and 17.

The rejection of the method of claim 3 is incorporated, wherein the structured document comprises a set of fields and a corresponding value for each of the fields.

Soubbotin discloses in [16:43-67]: “the page may comprise three additions to a conventional display of search results including, without limitation, summary request button 100, text fragments field 105, and checkbox 110 for each search result in search results list 103.” (emphasis added) Examiner note: the checkbox segments the search results page into different text sections, such that each checkbox represents a field with a corresponding value (text) next to each field, as shown in fig. 1.

Claims 6 and 19.

The rejection of the method of claim 1 is incorporated, wherein the pre-cursor directive comprises a spoken language preference for the summarization of the text.

Soubbotin discloses in [36:12-47]: “a user may input a search query and metadata to be part of a search on a user input device such as computer terminal 725… The compiled search results and metadata may then be sent to a summary module 715. Summary module 715 has a processing means such as server 730 and may create a summary of one or more search results. The summaries created by summary module 715 may also factor in the metadata received from search module 715 and create specialized summaries according to the metadata… Search summary metadata may include, without limitation, a tone of language of any line within a search summary, a source material for any line within a search summary, a value for language complexity, search query match metrics, author's viewpoint, the audience's preference or presumed viewpoint, source document format, ranking of sources, rating or degree of authority of the author, language of the source, etc.” (emphasis added) Examiner note: the language of the source may be the spoken language preference for summarizing the search results.

Claims 7 and 20.

The rejection of the method of claim 1 is incorporated, wherein different terms in the summarization document are replaced with reference codes drawn from an index for a contextual domain of the origin document.

Soubbotin discloses in [24:45-56]: “FIG. 3 illustrates an exemplary summary 300 on a user's query, where each sentence or text fragment in summary 300 is followed by a reference 303 to its source… This setting may be defined by a “Show sources” setting, as shown by way of example in FIG. 2… references 303 are listed as clickable URLs to the web pages from which the text fragments are sourced, which may be used to take the user to the corresponding source web page.” (emphasis added) Examiner note: the different terms may be titles associated with each search result section as shown in fig. 1. Each search result section may be replaced with a source reference 303 as shown in fig.

Claim 14.

The claim is directed towards a computing device comprising a non-transitory computer readable storage medium having program instructions stored therein to implement the steps of the method of claim 1; therefore, the claim is rejected similarly to claim 1.

Claims 5, 8-13, and 18 are rejected under 35 U.S.C.
103 as being unpatentable over Soubbotin and Bathwal as applied to claim 1 above, and further in view of Gardner et al. (US 12,008,332, filed 8/18/2023, hereinafter Gardner).

Claims 5, 11, and 18.

The rejection of the method of claim 1 is incorporated. Soubbotin does not explicitly disclose wherein the pre-cursor directive comprises a word count limit. However, Gardner, in an analogous art, discloses in [12:22 to 13:18]: “generating abstractive summaries of content items using a large language model (LLM)… a level of abstraction is determined for the first content item. This determination may be based on user input or an API parameter specifying a compression percentage or other abstraction parameter. The target abstraction level may be specified numerically (e.g. 50% compression), using qualitative descriptors (high-level summary), or combining metrics like word count and percentage of named entities to remove.” (emphasis added) Examiner note: the user may utilize metrics to determine the desired summary length.

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Soubbotin with the teaching of Gardner to “train customized models to construct optimized prompts… These prompts dynamically guide LLMs to generate summaries with the desired abstraction level, length, structure, and style.” Gardner [17:6-10].

Claim 8.
A data processing system adapted for document summarization, the system comprising: a host computing platform comprising one or more computers, each with memory and one or more processing units including one or more processing cores; network communications circuitry and supporting software logic adapted to manage data communications over a computer communications network;

Soubbotin discloses in [29:28 to 30:9]: “The computer system 600 may include any number of processors 602 (also referred to as central processing units, or CPUs) that may be coupled to storage devices including primary storage 606 (typically a random access memory, or RAM), primary storage 604 (typically a read only memory, or ROM)… CPU 602 optionally may be coupled to an external device such as a database or a computer or telecommunications or internet network using an external connection as shown generally at 612, which may be implemented as a hardwired or wireless communications link using suitable conventional technologies.” (emphasis added)

an optical character recognition engine;

Soubbotin discloses in [27:36 to 28:16]: “Language models learn from text and can be used for producing original text, predicting the next word in a text, speech recognition, optical character recognition and handwriting recognition.” And in [31:33 to 32:35]: “Voice recognition techniques may allow an audio such as but not limited to a voicemail or a recorded speech, to be transliterated into text. The translated text may be further split into text fragments… Objects in the image may be identified by image recognition techniques… each identified element such as but not limited to, a bird, a face, a plane, etc., may be associated with an explanatory text… such explanatory text may be just one word—“bird”, but may also be longer—for example, coming from a Wikipedia article on birds… The explanatory text of such partial image may be further analyzed, and included in the resulting summary as a text fragment.” (emphasis added) Examiner note: the language model may use optical character recognition for identifying text associated with an object in an image.

and a summarization module comprising computer program instructions enabled while executing in the memory of at least one of the processing units of the host computing platform to perform:

Soubbotin does not explicitly disclose receiving an image of an origin document through the network circuitry from over the communications network into the memory of the host computing platform; optical character recognizing text of the origin document in the optical character recognition engine. However, Gardner, in an analogous art, discloses in [12:57 to 13:8]: “For image and video inputs, multimedia analysis techniques may be applied to extract salient objects, people, scenes, and text segments from the visual content. This extracted visual metadata provides signals for determining importance and relevance when generating a summary… Optical character recognition (OCR) may be used to extract text from image or video files.” And in [24:10-17]: “Metadata may be extracted to identify content types and characteristics: Headers, filenames, MIME types, and so on may indicate formats like text, images, video, and so on… Structure analysis may identify logical sections and layout; Optical character recognition (OCR) may extract text.” (emphasis added) Examiner note: OCR may be utilized to extract text from a document image.

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Soubbotin with the teaching of Gardner because OCR may be utilized: “For image and video inputs, multimedia analysis techniques may be applied to extract salient objects, people, scenes, and text segments from the visual content. This extracted visual metadata provides signals for determining importance and relevance when generating a summary.” Gardner [12:57 to 13:8].

segmenting the recognized text into different text sections;

Further, Soubbotin discloses in [16:43-67]: “the page may comprise three additions to a conventional display of search results including, without limitation, summary request button 100, text fragments field 105, and checkbox 110 for each search result in search results list 103.” (emphasis added) Examiner note: the checkbox segments the search results page into different text sections, as shown in fig. 1.

[submitting the different text sections through the network circuitry over the communications network to a large language model (LLM) through a generative artificial intelligence (AI) engine] with a pre-cursor directive [to prepare for each of the different text sections, a summarization so as to produce a set of summarizations], and further [submitting the summarizations to the LLM of the generative AI engine with the pre-cursor directive to prepare a summarization of the summarizations];

Soubbotin discloses in [22:19-39]: “the page may comprise three additions to a conventional display of search results including, without limitation, summary request button 100, text fragments field 105, and checkbox 110 for each search result in search results list 103… Text fragments field 105 may indicate the number of text fragments or sentences to be included in the summary, and may be specified by the user in text fragments field 105.” (emphasis added) Examiner note: the precursor directive may be
the text fragments field 105, such that a user may be allowed to indicate the number of text fragments, i.e., the length of the summary of each search result, as shown in fig. 1.

Soubbotin does not explicitly disclose submitting the different text sections over the communications network to a large language model (LLM) through a generative artificial intelligence (AI) engine… to prepare for each of the different text sections, a summarization so as to produce a set of summarizations… submitting the summarizations to the LLM of the generative AI engine with the pre-cursor directive to prepare a summarization of the summarizations.

However, Bathwal, in an analogous art, discloses in [0093]: “the method 600 receives, from a user device, a search query. In block 604, the method 600 retrieves a plurality of search result documents based on the search query. In block 606, the method 600 generates a summary of each of the plurality of search result documents using distinct per-document summarization machine learning models. According to some examples, the distinct per-document summarization machine learning models have been fine-tuned using a training dataset automatically created by prompting a large language model. In block 608, the method 600 synthesizes the summary of each of the plurality of search result documents into a single-consolidated answer responsive to the received search query from the user device. In block 610, the method 600 formats the consolidated answer to include citations to the plurality of search results documents.” (emphasis added) Examiner note: each document (section) of the search results may be retrieved and summarized separately using a large language model (LLM), wherein the summaries of each search result document may be combined to generate a single summary as an answer to the query.

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Soubbotin with the teaching of Bathwal for “using an interface combined with generative artificial intelligence (GenAI or GAI) to generate a single, coherent answer to a user's query or set of queries based on multiple documents… An enhanced summarization system provides a comprehensive summary that combines information from various sources generating output results including citations of sources that not only provide the answer to the user's query, but also show the sources of information.” Bathwal [0017].

inserting the summarization of the summarizations into a summarization document along with each individual summarization of the different text sections;

Further, Soubbotin discloses in [21:49 to 22:18]: “FIG. 2 illustrates an exemplary Graphical User Interface (GUI) of summary 200 on a user's query, produced by a search system… the summary may be displayed in different formats such as, but not limited to, a list of key concepts, a cluster hierarchy, a tag cloud, etc. The sentences or text fragments within summary 200 may originate from various sources, which may be listed below the summary in source list 205. The sources in source list 205 may refer to the documents or Web pages found by the search engine as displayed in a conventional search result, shown by way of example in FIG. 1.” (emphasis added) Examiner note: summary 200 may be the summarization document, in which each individual summarization of the different search results (sections) may be displayed as a line, as shown in fig. 2.

and, persisting the summarization document in the memory.

Soubbotin discloses in [22:19-39]: “The user may also click a Save summary button 220 to, for example, without limitation, save summary 200 in a file on a local or network drive, send the summary as an email, print the summary page, etc. In an alternate embodiment, all original sources may be displayed, allowing the user to request a modified summary that would use the sources specified by the user.” (emphasis added)

Claim 9.

The rejection of the system of claim 8 is incorporated, wherein the program instructions are further enabled to perform transforming the text of the origin document into a structured document and submitting the structured document to the LLM of the generative AI engine with the precursor directive.

Soubbotin discloses in [25:40 to 26:5]: “the multi-document summarization engine may receive the query and the links to the documents. Then, the multi-document summarization engine parses each individual document to extract key concepts and corresponding text fragments or sentences and performs multi-document summarization of the documents based on the extracted concepts, producing a text summary representing a digest of the search results.” (emphasis added) Examiner note: the origin document may be the search results page as shown in fig. 1. The search results page may be transformed into a structured document by incorporating a checkbox for each search result, so that a user has the option of including or excluding each search result segment/section from the summary. Each search result segment/section may be associated with text fragments field 105 as the pre-cursor directive to determine the number of sentences to be included in the summary from each search result segment/section.

Claim 10.

The rejection of the system of claim 9 is incorporated, wherein the structured document comprises a set of fields and a corresponding value for each of the fields.
Soubbotin discloses in [16:43-67]: “the page may comprise three additions to a conventional display of search results including, without limitation, summary request button 100, text fragments field 105, and checkbox 110 for each search result in search results list 103.” (emphasis added) Examiner note: the checkbox segments the search results page into different text sections, such that each checkbox represents a field with a corresponding value (text) next to each field, as shown in fig. 1.

Claim 12.

The rejection of the system of claim 8 is incorporated, wherein the pre-cursor directive comprises a spoken language preference for the summarization of the text.

Soubbotin discloses in [36:12-47]: “a user may input a search query and metadata to be part of a search on a user input device such as computer terminal 725… The compiled search results and metadata may then be sent to a summary module 715. Summary module 715 has a processing means such as server 730 and may create a summary of one or more search results. The summaries created by summary module 715 may also factor in the metadata received from search module 715 and create specialized summaries according to the metadata… Search summary metadata may include, without limitation, a tone of language of any line within a search summary, a source material for any line within a search summary, a value for language complexity, search query match metrics, author's viewpoint, the audience's preference or presumed viewpoint, source document format, ranking of sources, rating or degree of authority of the author, language of the source, etc.” (emphasis added) Examiner note: the language of the source may be the spoken language preference for summarizing the search results.

Claim 13.

The rejection of the system of claim 8 is incorporated, wherein different terms in the summarization document are replaced with reference codes drawn from an index for a contextual domain of the origin document.

Soubbotin discloses in [31:45-67]: “The explanatory text of such partial image may be further analyzed, and included in the resulting summary as a text fragment… full image or a fragment of the image may be inserted in the multi-document summary as opposed to the explanatory text… the full image or the fragment of the bird from the painting may be placed in the multi-document summary alongside the text, audio, or other fragments.” (emphasis added) Examiner note: the explanatory text may be replaced with a reference to a full image or a fragment of the image.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AHAMED I NAZAR, whose telephone number is (571) 270-3174. The examiner can normally be reached 10 am to 7 pm, Mon-Fri.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen Hong, can be reached at 571-272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.

For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/AHAMED I NAZAR/
Examiner, Art Unit 2178
12/4/2025

/STEPHEN S HONG/
Supervisory Patent Examiner, Art Unit 2178
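The pipeline recited in claims 1 and 8 (segment the origin document, summarize each section under a pre-cursor directive, then summarize the summarizations, and assemble both passes into one persisted document) is, in software terms, a two-pass hierarchical summarization. A minimal sketch, with a stand-in `summarize` function in place of any real LLM call (all names below are hypothetical, not from the application):

```python
def summarize(text: str, directive: str) -> str:
    """Stand-in for submitting text to an LLM through a generative AI engine.

    A real implementation would send the directive plus the text to a model;
    for illustration this returns a naive truncation of the input.
    """
    return text[:80]

def summarize_document(origin_text: str, directive: str) -> str:
    # Segment the origin document into different text sections
    # (here, a naive split on blank lines).
    sections = [s for s in origin_text.split("\n\n") if s.strip()]
    # First pass: one summarization per section, under the pre-cursor directive.
    section_summaries = [summarize(s, directive) for s in sections]
    # Second pass: a summarization of the summarizations.
    overall = summarize("\n".join(section_summaries), directive)
    # Assemble the summarization document: the overall summary plus each
    # individual section summary, ready to persist in memory or to disk.
    return "\n".join(
        ["SUMMARY OF SUMMARIES:", overall, "", "SECTION SUMMARIES:"]
        + section_summaries
    )
```

The feature argued over Soubbotin is the second pass: the per-section summaries are themselves re-submitted to the model rather than merely concatenated, which is where the Bathwal combination is relied upon.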

Prosecution Timeline

Oct 11, 2023: Application Filed
Dec 05, 2025: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12564342: METHODS, SYSTEMS, AND DEVICES FOR THE DIAGNOSIS OF BEHAVIORAL DISORDERS, DEVELOPMENTAL DELAYS, AND NEUROLOGIC IMPAIRMENTS (granted Mar 03, 2026; 2y 5m to grant)
Patent 12548333: DYNAMIC NETWORK QUANTIZATION FOR EFFICIENT VIDEO INFERENCE (granted Feb 10, 2026; 2y 5m to grant)
Patent 12549503: INFORMATION INTERACTION METHOD AND APPARATUS, AND ELECTRONIC DEVICE (granted Feb 10, 2026; 2y 5m to grant)
Patent 12539042: Multi-Modal Imaging System and Method Therefor (granted Feb 03, 2026; 2y 5m to grant)
Patent 12541546: LOSSLESS SUMMARIZATION (granted Feb 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 53% (88% with interview, +35.1%)
Median Time to Grant: 3y 11m
PTA Risk: Low
Based on 378 resolved cases by this examiner. Grant probability derived from career allow rate.
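The headline projections reduce to simple arithmetic on the examiner's career data. A quick check (the rounding convention is an assumption; the dashboard's exact formula is not shown):

```python
# Career allow rate: 202 granted of 378 resolved cases.
granted, resolved = 202, 378
allow_rate = round(granted / resolved * 100)  # displayed as 53%

# With-interview projection: the base rate plus the interview lift
# expressed in percentage points (+35.1), as displayed above.
with_interview = round(53.0 + 35.1)           # displayed as 88%

print(allow_rate, with_interview)
```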
