Prosecution Insights
Last updated: April 19, 2026
Application No. 18/883,595

ENHANCED ENTITY IDENTIFICATION FOR AUTOMATIC SOAP NOTE GENERATION

Non-Final Office Action: rejections under §101, §102, §103, and nonstatutory double patenting
Filed: Sep 12, 2024
Examiner: TOKARCZYK, CHRISTOPHER B
Art Unit: 3687
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Oracle International Corporation
OA Round: 1 (Non-Final)

Grant probability: 42% (Moderate)
Expected OA rounds: 1-2
Estimated time to grant: 2y 11m
Grant probability with interview: 65%

Examiner Intelligence

Career allow rate: 42% (grants 42% of resolved cases; 133 granted / 313 resolved; -9.5% vs Tech Center average)
Interview lift: +22.3% (strong; allowance rate among resolved cases with an interview vs without)
Typical timeline: 2y 11m average prosecution; 27 applications currently pending
Career history: 340 total applications across all art units

Statute-Specific Performance

§101: 33.9% (-6.1% vs TC avg)
§103: 32.1% (-7.9% vs TC avg)
§102: 18.0% (-22.0% vs TC avg)
§112: 10.7% (-29.3% vs TC avg)

Tech Center average used as the baseline estimate. Based on career data from 313 resolved cases.
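The figures above are internally consistent, as a quick sanity check shows (a hypothetical sketch; the card values come from this report, and the 40.0% Tech Center baseline is back-computed from the stated deltas, not stated directly in the source):

```python
# Sanity-check the examiner statistics shown in the cards above.
granted, resolved = 133, 313

career_allow_rate = granted / resolved * 100
print(f"Career allow rate: {career_allow_rate:.1f}%")  # displayed as 42%

# Statute-specific overcome rates and their deltas vs the Tech Center average.
statute_rates = {"101": 33.9, "103": 32.1, "102": 18.0, "112": 10.7}
tc_average = 40.0  # assumed baseline implied by the stated deltas

for statute, rate in statute_rates.items():
    delta = rate - tc_average
    print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```

Each printed delta matches the report's "vs TC avg" figures, which suggests the four statute cards share a single baseline estimate.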

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Application

This action is in reply to the correspondence received through February 4, 2025. Claims 1-20 are pending.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq.
for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-24 of copending Application No. 18/830,934 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1-24 of the reference application disclose the features of claims 1-20 of the instant application. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

Claim Rejections - 35 U.S.C. § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to non-statutory subject matter. Claims 1-20 are directed to an abstract idea without significantly more as required by the Alice test, as discussed below.

Step 1

Claims 1-20 are directed to a process, machine, manufacture, or composition of matter.

Step 2A

Claims 1-20 are directed to abstract ideas, as explained below. Prong one of the Step 2A analysis requires identifying the specific limitation(s) in the claim under examination that the examiner believes recites an abstract idea, and determining whether the identified limitation(s) falls within at least one of the groupings of abstract ideas of mathematical concepts, mental processes, and certain methods of organizing human activity. The claims recite the following limitations that are directed to abstract ideas.
Claim 1 recites: accessing a text transcript, the text transcript corresponding to an interaction between a first entity and a second entity; segmenting the text transcript into a plurality of portions; for each respective portion of the plurality of portions: identifying one or more entities included in the respective portion, wherein identifying the one or more entities included in the respective portion comprises: using a first machine-learning model to extract one or more first entities included in the respective portion, using a second machine-learning model to extract one or more second entities included in the respective portion, and combining the one or more first entities with the one or more second entities to result in the one or more entities; and generating a Subjective, Objective, Assessment, and Plan (SOAP) note using a third machine-learning model based at least in-part on the one or more entities.

Claims 8 and 15 recite similar features as claim 1. Claims 2-7, 9-14, and 16-20 further specify features of these identified ideas or characteristics of the data used thereby.

These limitations correspond to concepts the courts have identified as abstract ideas in the grouping of mental processes, because the claimed features identified above are concepts that can be performed in the human mind (including an observation, evaluation, judgment, or opinion).
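Read as an algorithm, the claim 1 pipeline recited above can be sketched roughly as follows (a minimal illustration only; the model calls are stand-in stubs, and none of the function names or heuristics come from the application itself):

```python
# Illustrative sketch of the claimed multi-model SOAP-note pipeline.
# All model implementations are hypothetical placeholders.

def segment(transcript: str) -> list[str]:
    # Segment the transcript into portions; here, naively by blank lines.
    return [p.strip() for p in transcript.split("\n\n") if p.strip()]

def first_model(portion: str) -> set[str]:
    # Stand-in for the first machine-learning model (e.g., an NER model).
    return {w for w in portion.split() if w.istitle()}

def second_model(portion: str) -> set[str]:
    # Stand-in for the second machine-learning model (e.g., a prompted LLM).
    return {w.strip(".,") for w in portion.split() if len(w) > 8}

def third_model(entities: set[str]) -> str:
    # Stand-in for the third machine-learning model that drafts the SOAP note.
    return "SOAP note covering: " + ", ".join(sorted(entities))

def pipeline(transcript: str) -> str:
    entities: set[str] = set()
    for portion in segment(transcript):
        first = first_model(portion)    # one or more first entities
        second = second_model(portion)  # one or more second entities
        entities |= first | second      # combine into the one or more entities
    return third_model(entities)
```

The point of the sketch is only the claimed structure: per-portion extraction by two separate models, a combining step, and a third model that consumes the combined entity set.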
These limitations also correspond to concepts the courts have identified as abstract ideas in the grouping of certain methods of organizing human activity, such as fundamental economic principles or practices (including hedging, insurance, mitigating risk), commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations), and managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions), because the claimed features identified above manage personal behavior or relationships or interactions between people. Thus, the concepts set forth in claims 1-20 recite abstract ideas.

Prong two of Step 2A requires identifying whether there are any additional elements recited in the claim beyond the judicial exception(s), and evaluating those additional elements to determine whether they integrate the exception into a practical application of the exception. “Integration into a practical application” requires an additional element or a combination of additional elements in the claim to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the exception. Further, “integration into a practical application” uses the considerations laid out by the Supreme Court and the Federal Circuit to evaluate whether the judicial exception is integrated into a practical application, such as the considerations discussed in M.P.E.P. § 2106.05(a)-(h).

The claims recite the following additional elements beyond those identified above as being directed to an abstract idea. Claim 1 recites that its method is computer implemented and recites storing the SOAP note in a database.
Claim 8 recites similar features as claim 1 and further recites one or more processing systems and one or more computer-readable media. Claim 15 recites similar features as claim 1 and further recites one or more processors and one or more non-transitory computer-readable media.

The identified judicial exception(s) are not integrated into a practical application for the following reasons. First, evaluated individually, the additional elements do not integrate the identified abstract ideas into a practical application. The additional computer elements identified above (the computer, processing systems, computer-readable media, processors, and non-transitory computer-readable media) are recited at a high level of generality. Inclusion of these elements amounts to mere instructions to implement the identified abstract ideas on a computer. See M.P.E.P. § 2106.05(f). The use of conventional computer elements to store the SOAP note in a database is the insignificant, extra-solution activity of mere data gathering or outputting in conjunction with a law of nature or abstract idea. See M.P.E.P. § 2106.05(g). To the extent that the claims transform data, the mere manipulation of data is not a transformation. See M.P.E.P. § 2106.05(c). Inclusion of computing in the claims amounts to generally linking the use of the judicial exception to a particular technological environment or field of use. See M.P.E.P. § 2106.05(h). Thus, taken alone, the additional elements do not amount to significantly more than a judicial exception.

Second, evaluating the claim limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. See M.P.E.P. § 2106.05(a). Their collective functions merely provide an implementation of the identified abstract ideas on a computer system in the general field of use of organizing medical notes. See M.P.E.P. § 2106.05(h). Thus, claims 1-20 recite mathematical concepts, mental processes, or certain methods of organizing human activity without including additional elements that integrate the exception into a practical application of the exception. Accordingly, claims 1-20 are directed to abstract ideas.

Step 2B

Claims 1-20 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea. The analysis above describes how the claims recite the additional elements beyond those identified above as being directed to an abstract idea, as well as why the identified judicial exception(s) are not integrated into a practical application. These findings are hereby incorporated into the analysis of the additional elements when considered both individually and in combination. Additional features of these analyses are discussed below.

Evaluated individually, the additional elements do not amount to significantly more than a judicial exception. In addition to the factors discussed regarding Step 2A, prong two, these additional computer elements also provide conventional computer functions that do not add meaningful limits to practicing the abstract idea. Generic computer components recited as performing generic computer functions that are well-understood, routine, and conventional activities amount to no more than implementing the abstract idea with a computerized system.
The use of generic computer components to store the SOAP note in a database is the well-understood, routine, and conventional computer function of receiving or transmitting data over a network, e.g., the Internet, and does not impose any meaningful limit on the computer implementation of the identified abstract ideas. See M.P.E.P. § 2106.05(d)(II). Similarly, the use of generic computer components to store the SOAP note in a database is likewise the well-understood, routine, and conventional computer function of receiving, processing, and storing data and does not impose any meaningful limit on the computer implementation of the identified abstract ideas. See M.P.E.P. § 2106.05(d)(II). Thus, taken alone, the additional elements do not amount to significantly more than a judicial exception.

Evaluating the claim limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. In addition to the factors discussed regarding Step 2A, prong two, there is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely amount to mere instructions to implement the identified abstract ideas on a computer. Thus, claims 1-20, taken individually and as an ordered combination of elements, are not directed to eligible subject matter since they are directed to an abstract idea without significantly more.

Claim Rejections - 35 U.S.C. § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. § 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. Claims 1, 5, 6, 8, 12, 13, 15, and 19 are rejected under 35 U.S.C. § 102(a)(2) as being anticipated by Moriarty et al. (U.S. Pub. No. 2025/0005282 A1) (hereinafter “Moriarty”). Claims 1, 8, and 15: Moriarty, as shown, discloses the following limitations: accessing a text transcript, the text transcript corresponding to an interaction between a first entity and a second entity (see at least ¶ [0012]: text analysis tasks may be performed as part of many different natural language or other text processing applications. Text may be obtained, for example, from documents or generated from audio or video transcripts, among other sources; see also at least ¶ [0013]: text analysis tasks performed in the healthcare domain may include tasks to generate medical summaries of doctor-patient conversations from clinical visits; see at least ¶ [0026]: a user, such as a physician, may upload a clinical visit audio between a patient and the physician to the input interface in order to generate a transcript and a summary based on the audio; see also at least ¶ [0027]); segmenting the text transcript into a plurality of portions (see at least ¶ [0015]: domain entity recognition 110 may evaluate the analysis task ground truth 104 b, 106 b, 108 b—i.e., portions—in training data set 102 for corresponding input texts 104 a. 
106 a, and 108 a—i.e., portions—to recognize domain entities in the ground truth data; see also at least ¶ [0029]: a computing instance instantiated as a summarization task processing engine 232 may access respective ones of the models 236 with domain-specific fine-tuning 238 to perform discrete tasks, such as medical entity detection, role identification, and various summarization tasks, such as sectioning, extraction, and abstraction. The summarization task processing engine 232 may merge results from each task into a current version of the transcript that is being updated as the discrete tasks are performed. The currently updated (and merged) version of the transcript may be used as an input to perform respective ones of the subsequent discrete tasks; see also at least ¶ [0043]); for each respective portion of the plurality of portions: identifying one or more entities included in the respective portion, wherein identifying the one or more entities included in the respective portion comprises: using a first machine-learning model to extract one or more first entities included in the respective portion (see at least ¶ [0015]: domain entity recognition 110 may be implemented. Domain entity recognition 110, which may be a locally hosted (e.g., on a same system as text analysis system 140) or remotely hosted machine learning model that is trained to recognize entities in given text for a domain (e.g., a different models for medical, legal, individual scientific disciplines, and so on). Domain entity recognition 110 may evaluate the analysis task ground truth 104 b, 106 b, 108 b in training data set 102 for corresponding input texts 104 a. 106 a, and 108 a, to recognize domain entities in the ground truth data. 
For example, entity recognition machine learning models (e.g., Named Entity Recognition (NER) models) may be implemented as part of domain entity recognition 110 to analyze task ground truth labels to identify key terms or other entities that are significant to the domain; see also at least ¶¶ [0028] and [0030]), using a second machine-learning model to extract one or more second entities included in the respective portion (see at least ¶ [0016]: once the domain entities are identified, then the domain entities may be passed to tuning data set augmentation 120. Tuning data set augmentation 120 may augment training data set 102 to include the domain entities, as indicated at 104 c, 106 c, and 108 c. The augmented training data set 1020 can then be used to perform fine-tuning on a pre-trained large language model, as indicated at 130. For example, fine-tuning techniques may include adding instructions to include the domain entit(ies) (e.g., 104 c, 106 c, and 108 c in training requests) in the response as part of performing the text analysis task; see also at least ¶¶ [0017] and [0030]), and combining the one or more first entities with the one or more second entities to result in the one or more entities (see at least ¶ [0016]: tuning data set augmentation 120 may augment training data set 102 to include the domain entities, as indicated at 104 c, 106 c, and 108 c. The augmented training data set 1020 can then be used to perform fine-tuning on a pre-trained large language model, as indicated at 130. 
For example, fine-tuning techniques may include adding instructions to include the domain entit(ies) (e.g., 104 c, 106 c, and 108 c in training requests) in the response as part of performing the text analysis task; see also at least ¶ [0017]: domain instructions that are generated for input text that have domain entities extracted, as indicated at 144, and then sent, as indicated at 154, to pre-trained model that is fine-tuned to the domain 142; see also at least ¶¶ [0043]-[0045]); generating a Subjective, Objective, Assessment, and Plan (SOAP) note using a third machine-learning model based at least in-part on the one or more entities (see at least ¶ [0014]: techniques for domain entity extraction for performing text analysis tasks reduce hallucination and improve summary completeness by guiding the performance of task analysis with the terms present in the text (e.g., guiding generation of SOAP note summaries with the clinical concepts present in the conversation); see also at least ¶ [0017]: pre-trained large language model 142 that is fine tuned to the domain can then be used to perform text analysis tasks, such as summarization, comparison, question answering, or adding introductory or conclusory sections, among other text analysis tasks, using domain instructions that are generated for input text that have domain entities extracted, as indicated at 144, and then sent, as indicated at 154, to pre-trained model that is fine-tuned to the domain 142, to perform the text analysis tasks and return a result 156 which can be passed back as text analysis 158; see also at least ¶¶ [0013], [0030], and [0044]-[0046]); and storing the SOAP note in a database associated with at least one of the first entity and the second entity (see at least ¶¶ [0014] and [0017] and the analysis above; see also at least ¶ [0033]: once the summary is generated, the summarization task processing engine 232 may provide the generated summary to an output interface. 
The output interface may notify the customer of the completed job request. In some embodiments, the output interface may provide a notification of a completed job to the output API. In some embodiments, the output API may be implemented to provide the summary for upload to an electronic health record (EHR) or may push the summary out to an electronic health record (EHR), in response to a notification of a completed job; see also at least ¶ [0062]: while various items are illustrated as being stored in memory or on storage while being used, these items or portions of them may be transferred between memory and other storage devices for purposes of memory management and data integrity). Moriarty discloses various architectures for implementing the features addressed above (see at least ¶¶ [0051]-[0062]). Claims 5, 12, and 19: Moriarty discloses the limitations as shown in the rejection above. Further, Moriarty, as shown, discloses the following limitations: wherein the first machine-learning model is a named entity recognition model that is configured to recognize named entities in input text (see at least ¶ [0015]: domain entity recognition 110 may be implemented. Domain entity recognition 110, which may be a locally hosted (e.g., on a same system as text analysis system 140) or remotely hosted machine learning model that is trained to recognize entities in given text for a domain (e.g., a different models for medical, legal, individual scientific disciplines, and so on). Domain entity recognition 110 may evaluate the analysis task ground truth 104 b, 106 b, 108 b in training data set 102 for corresponding input texts 104 a. 106 a, and 108 a, to recognize domain entities in the ground truth data. 
For example, entity recognition machine learning models (e.g., Named Entity Recognition (NER) models) may be implemented as part of domain entity recognition 110 to analyze task ground truth labels to identify key terms or other entities that are significant to the domain; see also at least ¶¶ [0028], [0030], [0035]), wherein the second machine-learning model is a pre-trained language model that is configured to generate a result in response to an input prompt (see at least ¶ [0016]: once the domain entities are identified, then the domain entities may be passed to tuning data set augmentation 120. Tuning data set augmentation 120 may augment training data set 102 to include the domain entities, as indicated at 104 c, 106 c, and 108 c. The augmented training data set 1020 can then be used to perform fine-tuning on a pre-trained large language model, as indicated at 130. For example, fine-tuning techniques may include adding instructions to include the domain entit(ies) (e.g., 104 c, 106 c, and 108 c in training requests) in the response as part of performing the text analysis task; see also at least ¶¶ [0017] and [0030]), and wherein the third machine-learning model is a pre-trained language model that is configured to generate a result in response to an input prompt (see at least ¶ [0014]: techniques for domain entity extraction for performing text analysis tasks reduce hallucination and improve summary completeness by guiding the performance of task analysis with the terms present in the text (e.g., guiding generation of SOAP note summaries with the clinical concepts present in the conversation); see also at least ¶ [0017]: pre-trained large language model 142 that is fine tuned to the domain can then be used to perform text analysis tasks, such as summarization, comparison, question answering, or adding introductory or conclusory sections, among other text analysis tasks, using domain instructions that are generated for input text that have domain entities 
extracted, as indicated at 144, and then sent, as indicated at 154, to pre-trained model that is fine-tuned to the domain 142, to perform the text analysis tasks and return a result 156 which can be passed back as text analysis 158; see also at least ¶¶ [0013], [0030], and [0044]-[0046]). Claims 6 and 13: Moriarty discloses the limitations as shown in the rejection above. Further, Moriarty, as shown, discloses the following limitations: for each respective portion of the plurality of portions: using a fourth machine-learning model to extract one or more facts from the respective portion based at least in-part on the one or more entities (see at least ¶ [0044]: as indicated at 630, the one or more domain entities may be inserted as part of generating instructions to perform the text analysis task using a pre-trained large language model fine-tuned to the domain, in some embodiments. For example, as discussed above the domain entities may be included and used to guide generation of the result of the text analysis task. For a summarization task, the instructions could ask for the domain entities to be included in the summary. For a question answering task, the instructions could ask for the answer to use each of the domain entities in generating the answer. For comparison, the instructions could request that the domain entities to be considered in each text and the comparison showing any differences or similarities in their use. Various other possible instructions that use the domain entities in the may be generated according to the performed text analysis task; see also at least ¶¶ [0015]-[0017] and [0030]), and adding the one or more facts to a collection of facts (see also at least ¶¶ [0015]-[0017], [0030], and [0044] and the analysis above. 
The different entities are facts, which are classified according to task (e.g., summarization, question answering, etc.)); and generating a set of classified facts from the collection of facts, wherein each classified fact of the set of classified facts corresponds to a fact in the collection of facts and is associated with a particular category label selected from a set of category labels (see also at least ¶¶ [0015]-[0017], [0030], and [0044] and the analysis above. The different entities are facts, which are classified according to task (e.g., summarization, question answering, etc.)).

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. § 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 2-4, 7, 9-11, 14, 16-18, and 20 are rejected under AIA 35 U.S.C. § 103 as being unpatentable over Moriarty et al. (U.S. Pub. No.
2025/0005282 A1) (hereinafter “Moriarty”) in view of Lipton et al. (U.S. Pub. No. 2022/0375605 A1) (hereinafter “Lipton”). Claims 2, 9, and 16: Moriarty discloses the limitations as shown in the rejection above. Moriarty does not explicitly disclose, but Lipton, as shown, teaches the following limitations: wherein combining the one or more first entities with the one or more second entities to result in the one or more entities comprises removing an entity included in the one or more second entities that are also included in the one or more first entities (see at least ¶ [0082]: given a physician-patient conversation, data processing system 110 may extract the mentioned past and present diagnoses of the patient that are relevant to the primary reason for the patient's visit (called the chief complaint). For each conversation, data processing system 110 may create a list of the chief complaint and related medical problems by using categorical tags associated with Subjective and/or a subsection of the SOAP note. All medical problems in the Subjective: Past Medical History subsection are tagged with “HPI” (e.g., History of Present Illness) to signify that they are related to the chief complaint. The medical problem tags present in the Assessment and Plan: Assessment subsection of the SOAP note. Data processing system 110 may then simplify the medical problem tags by converting everything to lowercase, and removing elaborations given in parentheses. For example, data processing system 110 may simplify “hypertension (moderate to severe)” to “hypertension”. For each of the 20 most frequent tags retrieved after the previous simplifications, data processing system 110 searches among all medical problems and includes tags that previously had the original tag as a substring. For example, “systolic hypertension” was merged into “hypertension”. 
After following the above procedure on the training and validation set, data processing system 110 selects a given number (e.g., ˜15) of the most frequent medical problem tags, as shown in Table 2 below. The data processing system 110 restricts the task to predicting whether each of these medical problems were diagnosed for a patient or not diagnosed for that patient). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the techniques for automatically generating formatted annotations of doctor-patient conversations taught by Lipton with the text analysis systems disclosed by Moriarty, because Lipton teaches at ¶ [0005] that its techniques “overcome a methodological challenge of extracting data of interest from lengthy conversations (e.g., about 1500 words).” See M.P.E.P. § 2143(I)(G). Moreover, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the techniques for automatically generating formatted annotations of doctor-patient conversations taught by Lipton with the text analysis systems disclosed by Moriarty, because the claimed invention is merely a combination of old elements (the techniques for automatically generating formatted annotations of doctor-patient conversations taught by Lipton and the text analysis systems disclosed by Moriarty), in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. See M.P.E.P. § 2143(I)(A). Claims 3, 10, and 17: Moriarty discloses the limitations as shown in the rejection above. 
Moriarty does not explicitly disclose, but Lipton, as shown, teaches the following limitations: wherein combining the one or more first entities with the one or more second entities to result in the one or more entities comprises comparing the one or more second entities to the respective portion to determine whether each second entity in the one or more second entities is included in the respective portion (see at least ¶ [0089]: the data processing system 110 can execute hybrid models to determine noteworthy utterances. The long length of the input sequence makes the task difficult for the neural models. The methods disclosed herein try a variety of strategies to pre-filter the contents of the conversation so that only sentences that are more relevant to the task are fed to classifier 114 (such as noteworthy utterances). Three example processes are described for deciding if a sentence is noteworthy. A first process is called UMLS-noteworthy. In this process, the data processing system 110 designates a sentence as noteworthy if the medical tagger finds an entity relevant to the task (e.g., a diagnosis or symptom) as defined in the medical-entity-matching baseline. The second process is called all-noteworthy. For this process, the data processing system 110 deems a sentence in the conversation noteworthy if it was used as evidence for any line in the annotated SOAP note. The classifier 114 is trained to predict the noteworthy sentences given a conversation. The third process is called diagnosis/RoS-noteworthy. In this process, the data processing system 110 defines noteworthy sentences as being only those sentences that are deemed noteworthy in the second process that were used as evidence for an entry including the ground truth tags (e.g., diagnosis/RoS abnormality) that are being predicted. In addition to trying out these individual filtering strategies, combinations of these three processes can be performed; see also at least ¶ [0082]), and discarding any second entities in the one or more second entities that are not included in the respective portion (see at least ¶¶ [0082] and [0089] and the analysis above).

The rationales to modify/combine the teachings of Moriarty to include the teachings of Lipton are presented above regarding claims 2, 9, and 16 and incorporated herein.

Claims 4, 11, and 18: Moriarty discloses the limitations as shown in the rejection above. Moriarty does not explicitly disclose, but Lipton, as shown, teaches the following limitations: wherein combining the one or more first entities with the one or more second entities to result in the one or more entities comprises processing the one or more entities to remove additional instances of any entity that is included in the one or more entities more than once such that the one or more entities includes a single instance of each entity included in the one or more entities (see at least ¶ [0082]: given a physician-patient conversation, data processing system 110 may extract the mentioned past and present diagnoses of the patient that are relevant to the primary reason for the patient's visit (called the chief complaint). For each conversation, data processing system 110 may create a list of the chief complaint and related medical problems by using categorical tags associated with Subjective and/or a subsection of the SOAP note. All medical problems in the Subjective: Past Medical History subsection are tagged with “HPI” (e.g., History of Present Illness) to signify that they are related to the chief complaint. The medical problem tags present in the Assessment and Plan: Assessment subsection of the SOAP note. Data processing system 110 may then simplify the medical problem tags by converting everything to lowercase, and removing elaborations given in parentheses. For example, data processing system 110 may simplify “hypertension (moderate to severe)” to “hypertension”. For each of the 20 most frequent tags retrieved after the previous simplifications, data processing system 110 searches among all medical problems and includes tags that previously had the original tag as a substring. For example, “systolic hypertension” was merged into “hypertension”. After following the above procedure on the training and validation set, data processing system 110 selects a given number (e.g., ˜15) of the most frequent medical problem tags, as shown in Table 2 below. The data processing system 110 restricts the task to predicting whether each of these medical problems were diagnosed for a patient or not diagnosed for that patient).

The rationales to modify/combine the teachings of Moriarty to include the teachings of Lipton are presented above regarding claims 2, 9, and 16 and incorporated herein.

Claims 7, 14, and 20: Moriarty discloses the limitations as shown in the rejection above (see particularly the rejection of claims 6 and 13). Moriarty does not explicitly disclose, but Lipton, as shown, teaches the following limitations: wherein generating the SOAP note using the third machine-learning model based at least in-part on the one or more entities comprises using the third machine-learning model to generate a set of note sections based at least in-part on the set of classified facts (see at least ¶ [0008]: the method includes accessing a digital resource that includes a plurality of sections. The method includes accessing, from a hardware storage device, a classifier configured to detect contents representing one or more portions of a communication with increased likelihood of being cited as evidence associated with a particular one of the sections, relative to a likelihood of one or more portions of another communication being cited as the evidence.
The method includes receiving, from one or more data sources, a stream of data items representing a communication, with each data item being structured with fields and corresponding values. The method includes generating content for at least one of the sections by: parsing, by the data processing system, one or more fields in one or more of the received data items; extracting, by the data processing system, values from the one or more parsed fields; identifying, by the classifier, that the extracted values are represented in one or more portions of the contents representing the one or more portions of the communication with increased likelihood of being cited as evidence; based on the one or more portions of the contents that represent the extracted values, identifying that the extracted values are associated with a particular section of the digital resource; and based on the extracted values and a proximity of the extracted values to each other in the one or more of the received data items, generating content for that particular section; see also at least ¶¶ [0057] and [0079]-[0080]), wherein each note section of the set of note sections corresponds to a section of the SOAP note (see at least ¶¶ [0008], [0057], and [0079]-[0080] and the analysis above).

The rationales to modify/combine the teachings of Moriarty to include the teachings of Lipton are presented above regarding claims 2, 9, and 16 and incorporated herein.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. The following references have been cited to further show the state of the art with respect to automatic SOAP note generation: Pabolu et al. (U.S. Pub. No. 2024/0127008 A1) (multi-lingual natural language generation); Strader et al. (U.S. Pub. No. 2019/0122766 A1) (interface for patient-provider conversation and auto-generation of note or summary); and Krishna et al. (“Generating SOAP Notes from Doctor-Patient Conversations Using Modular Summarization Techniques.” In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Vol. 1: Long Papers), pp. 4958–4972, Online, 2021. Association for Computational Linguistics).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Christopher Tokarczyk, whose telephone number is 571-272-9594. The examiner can normally be reached Monday-Thursday between 6:00 AM and 4:00 PM Eastern. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mamon Obeid, can be reached at 571-270-1813. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHRISTOPHER B TOKARCZYK/
Primary Examiner, Art Unit 3687
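For readers mapping the rejection to the claim language: the tag-simplification and deduplication procedure quoted above from Lipton ¶ [0082] (lowercase the tags, strip parenthesized elaborations, merge tags that contain a frequent tag as a substring, then keep a single instance of each) can be sketched roughly as follows. All function and variable names here are hypothetical illustrations of the described procedure, not code from either reference.

```python
# Illustrative sketch of the medical-problem-tag simplification procedure
# described in Lipton ¶ [0082]; names are hypothetical.
import re
from collections import Counter

def simplify_tag(tag: str) -> str:
    """Lowercase a tag and strip parenthesized elaborations,
    e.g. 'Hypertension (moderate to severe)' -> 'hypertension'."""
    tag = tag.lower()
    tag = re.sub(r"\s*\([^)]*\)", "", tag)  # drop "(...)" elaborations
    return tag.strip()

def merge_substring_tags(tags: list[str], top_k: int = 20) -> list[str]:
    """Merge any tag containing one of the top_k most frequent simplified
    tags as a substring (e.g. 'systolic hypertension' -> 'hypertension'),
    then deduplicate so each tag appears exactly once."""
    simplified = [simplify_tag(t) for t in tags]
    frequent = [t for t, _ in Counter(simplified).most_common(top_k)]
    merged = []
    for t in simplified:
        for f in frequent:
            if f in t:
                t = f
                break
        merged.append(t)
    # single instance of each tag, preserving first-seen order
    return list(dict.fromkeys(merged))

tags = ["Hypertension (moderate to severe)", "systolic hypertension",
        "hypertension", "Diabetes"]
print(merge_substring_tags(tags))  # ['hypertension', 'diabetes']
```

The final deduplication step is the part of the cited passage that most directly tracks the "single instance of each entity" language at issue in claims 4, 11, and 18.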

Prosecution Timeline

Sep 12, 2024
Application Filed
Dec 11, 2025
Non-Final Rejection — §101, §102, §103
Feb 17, 2026
Applicant Interview (Telephonic)
Feb 17, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12555666
MACHINE LEARNING-BASED EXERCISE RECOMMENDATION ADJUSTMENT BASED ON USER FEEDBACK
2y 5m to grant Granted Feb 17, 2026
Patent 12555687
METHOD FOR IDENTIFYING AND TREATING HEART FAILURE WITH PRESERVED EJECTION FRACTION
2y 5m to grant Granted Feb 17, 2026
Patent 12518300
Serving an Online Advertisement Asynchronously
2y 5m to grant Granted Jan 06, 2026
Patent 12512225
RISK DISPLAY APPARATUS, RISK DISPLAY METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
2y 5m to grant Granted Dec 30, 2025
Patent 12502075
COORDINATED PROCESSING AND SCHEDULING FOR SURGICAL PROCEDURES
2y 5m to grant Granted Dec 23, 2025
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
42%
Grant Probability
65%
With Interview (+22.3%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 313 resolved cases by this examiner. Grant probability derived from career allow rate.
