Prosecution Insights
Last updated: April 17, 2026
Application No. 18/323,063

Healthcare System

Final Rejection: §101, §102, §103, §112
Filed: May 24, 2023
Examiner: LAGOY, KYRA RAND
Art Unit: 3685
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: unknown
OA Round: 2 (Final)
Grant Probability: 0% (At Risk)
OA Rounds: 3-4
To Grant: 3y 0m
With Interview: 0%

Examiner Intelligence

Grants only 0% of cases.
Career Allow Rate: 0% (0 granted / 14 resolved; -52.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; resolved cases with interview)
Avg Prosecution (typical timeline): 3y 0m; 40 currently pending
Total Applications (career history): 54, across all art units

Statute-Specific Performance

§101: 38.8% (-1.2% vs TC avg)
§103: 33.6% (-6.4% vs TC avg)
§102: 15.5% (-24.5% vs TC avg)
§112: 11.3% (-28.7% vs TC avg)
Based on career data from 14 resolved cases; comparisons are against the Tech Center average estimate.

Office Action

Grounds: §101, §102, §103, §112
DETAILED CORRESPONDENCE

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This final Office action on the merits is in response to the communication received on 08/21/2025. Claims 8-11 and 15-18 are withdrawn. Amendments to claims 1-3, 5-7, and 12-14 are acknowledged and have been carefully considered. Claims 1-18 are pending and considered below.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-3, 5-7, and 12-14 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
The specification, while providing a general discussion of audio recording, speech-to-text conversion, and meeting summarization, does not reasonably convey to one of ordinary skill in the art that the inventor had possession of the full scope of the subject matter recited in the claims. The disclosure must describe the claimed invention in sufficient detail that one skilled in the art can reasonably conclude that the inventor had possession of it as of the filing date. The following limitations are not adequately supported by the originally filed disclosure:

“Normalize the text” (claims 1 and 12): The specification discloses removal of stop words and stemming, but does not mention or describe any “normalization” operation or parameters defining such normalization. The term introduces additional processing steps not described, and the disclosure therefore fails to show possession of this limitation.

“An importance score that is a function of … keyword frequency … sentence length within a defined range … and sentence position in the appointment” (claims 1 and 12): Although page 8, lines 17-19 mention identifying “the most important sentences based on criteria such as keyword frequency, sentence length, and position within the document,” there is no disclosure of computing a score or of any formula defining relative weights, thresholds, or a “defined range.” The claimed quantitative scoring function and “defined range” parameters extend beyond the descriptive support of the examples given. Accordingly, the specification does not reasonably convey possession of an “importance score.”

“Extractive evidence” and “trained abstractive model” (claims 1 and 12): The specification describes alternative summarization modes but provides no detail regarding how sentences are selected as “extractive evidence” or how an abstractive model is “trained.” No model architecture, training data, or example abstraction workflow is disclosed.
Consequently, the written description does not demonstrate that the inventor possessed the specific extractive-abstractive pipeline that is claimed.

“Structured summary object having fields” (claims 1 and 12): While the specification broadly references summaries including “treatment plans,” “symptoms,” and “follow-up recommendations,” it does not describe any structured data object or fielded data representation. The claimed “structured summary object” with named fields is therefore not supported by the disclosure.

“Plan items” (claim 1(e)): The specification mentions “treatment plan” and “action items,” but does not identify a discrete data element termed a “plan item.” The scope of “plan items” as a structured field therefore extends beyond the content described.

“An electronic prescription comprising at least drug name, strength, dosage form, route, frequency and instructions” (claims 1, 2, 5, and 12): The disclosure generally states that the system may “generate a prescription” but does not specify any prescription schema or the elements now claimed. There is no example, data field, or output format that identifies “drug name, strength, dosage form, route, frequency, and instructions.” Thus, the specification fails to show that applicant possessed the claimed detailed prescription data structure.

“User-selectable links from each summary field to the one or more supporting sentences in the transcript” (claims 1 and 12): The specification discloses displaying a summary through a user interface but provides no description of interactive “links” or of any mechanism associating summary fields with their underlying transcript sentences. The linking recited in the claim is not described, taught, or suggested in the originally filed specification.
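To make the disputed terms concrete, the following is an invented sketch, not drawn from the application or the cited art, of what a fielded summary object with sentence-level links and a machine-readable prescription record could look like. Every name, field, and value here is an assumption for illustration only.

```python
import json
from dataclasses import dataclass, field

# Invented illustration only: per the rejection, no fielded summary
# object, link mechanism, or prescription schema appears in the record.
@dataclass
class SummaryField:
    text: str
    supporting_sentence_ids: list  # "user-selectable links" resolve these

@dataclass
class StructuredSummary:
    symptoms: list = field(default_factory=list)
    assessments: list = field(default_factory=list)
    plan_items: list = field(default_factory=list)

transcript = ["I keep getting headaches.",
              "Sounds like tension headaches.",
              "Let's start ibuprofen."]
summary = StructuredSummary(
    symptoms=[SummaryField("headaches", [0])],
    assessments=[SummaryField("tension headaches", [1])],
    plan_items=[SummaryField("start ibuprofen", [2])],
)
# Resolving a "link" from a summary field back to its source sentence:
linked = transcript[summary.plan_items[0].supporting_sentence_ids[0]]

# Hypothetical machine-readable prescription record limited to the
# claim's enumerated elements (drug name, strength, dosage form,
# route, frequency, instructions); the values are placeholders.
prescription = {
    "drug_name": "ibuprofen",
    "strength": "200 mg",
    "dosage_form": "tablet",
    "route": "oral",
    "frequency": "every 6 hours as needed",
    "instructions": "take with food",
}
record = json.dumps(prescription)   # retained/transmitted form
restored = json.loads(record)       # e.g., parsed by a pharmacy system
```

The point of the sketch is that the claim language implies this level of structural specificity, while (per the rejection) the specification describes none of it.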
“Is retained as a machine-readable record for transmission to a pharmacy selected by the user” (claim 2): The specification generically mentions providing a prescription to a pharmacy but lacks any description of the prescription being retained in a machine-readable format or of user selection of a pharmacy for later transmission. The concept of a persistent machine-readable record distinct from the summary is not supported.

“Transmitted in a machine-readable form to a pharmacy fulfillment system associated with the selected pharmacy” (claims 5 and 14): The disclosure references sending a prescription to a pharmacy, but does not identify any “machine-readable form.” The claimed data format and integration specifics extend beyond the description provided.

“Navigation travel time from the patient’s location, and the list is filtered by insurance coverage” (claim 7): Although page 4, lines 20-28 mention using navigation data and geolocation to rank pharmacies, no disclosure addresses computing travel time or filtering by insurance coverage. The specification does not describe access to insurance data or any filtering mechanism. Accordingly, these claim features are unsupported.

For the above reasons, the specification fails to provide adequate written description support for the above-identified limitations. As dependent claims 2-3, 5-7, and 13-14 incorporate the unsupported subject matter of claims 1 and 12 and add further unsupported limitations as described above, claims 2-3, 5-7, and 13-14 are rejected under 35 U.S.C. § 112(a). Applicant may overcome this rejection by amending the specification to expressly describe the claimed subject matter, by demonstrating support, or by removing or amending the unsupported limitations in the claims.

The following is a quotation of 35 U.S.C.
112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-3, 5-7, and 12-14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

During examination, claims are given their broadest reasonable interpretation (BRI) consistent with the specification and must inform, with reasonable certainty, those skilled in the art about the scope of the invention. A claim is indefinite if, after applying BRI, it is amenable to two or more plausible constructions or lacks objective boundaries such that the metes and bounds are not reasonably clear to a PHOSITA. The following claim limitations render the scope of claims 1 and 12 indefinite:

“Normalize the text”: “Normalize” is a relative term of degree with no objective boundaries in the specification or claims. Absent identification of which operations constitute “normalizing” and to what extent they must be applied, a PHOSITA cannot determine the scope with reasonable certainty.

“Sentence position in the appointment”: This phrase is ambiguous as to the frame of reference and measurement. Without an objective metric (e.g., an index definition), multiple reasonable interpretations exist, leading to uncertain claim scope.

“Extractive evidence”: The term “evidence” is not a recognized term of art for sentence selection and is susceptible to differing interpretations.
The claim lacks objective boundaries as to what qualifies as “extractive evidence” and how it is represented.

“Trained abstractive model”: The phrase is open-ended as to model type and training sufficiency. Without objective boundaries (architecture category, input or output types, or training criteria), the scope is uncertain.

“A structured summary object”: The specification does not define what constitutes an “object,” and without clarification a PHOSITA could reasonably interpret “object” in materially different ways, leading to different structural and functional boundaries.

For the reasons above, claims 1 and 12 are indefinite under 35 U.S.C. § 112(b). The identified limitations lack objective boundaries and/or are amenable to multiple reasonable interpretations, thereby failing to particularly point out and distinctly claim the subject matter regarded as the invention. Because claims 2-3, 5-7, and 13-14 depend from claims 1 and 12, they incorporate the same indefinite limitations and are likewise rejected under 35 U.S.C. § 112(b). Applicant is required to respond by amending the claims to resolve the ambiguities or by explaining how the terms would be understood with reasonable certainty by a PHOSITA in view of the specification.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3, 5-7, and 12-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1

Under Step 1, the analysis is based on MPEP 2106.03: claims 1-7 are drawn to a method, and claims 12-14 are drawn to a system.
Thus, each claim, on its face, is directed to one of the statutory categories (i.e., a process, machine, manufacture, or composition of matter) of 35 U.S.C. § 101.

Step 2A, Prong One

Claim 1 recites the limitations of processing the audio to generate a text transcript of the healthcare appointment; pre-processing the transcript by removing filler words and stop words and by stemming to normalize the text; segmenting the transcript into sentences and computing, for each sentence, an importance score that is a function of at least (i) keyword frequency within a healthcare terminology lexicon, (ii) sentence length within a defined range, and (iii) sentence position in the appointment; and selecting, based on the importance scores, a subset of sentences as extractive evidence to generate a structured summary object having fields including at least symptoms, diagnoses/assessments, and plan items.

These limitations, as drafted, describe a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind or with pen and paper. But for the “applying a trained abstractive model” language, the claim encompasses a user listening to, identifying, and summarizing key portions of a conversation and composing a summary mentally or with pen and paper. The mere nominal recitation of applying a trained abstractive model does not take the claim limitations out of the mental processes grouping. Thus, the claim recites a mental process, which is an abstract idea.
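The pre-processing and scoring limitations at issue can be made concrete with an invented sketch. The stop-word list, lexicon, naive stemmer, weights, and length range below are all assumptions supplied for illustration; none appear in the application, which is precisely the examiner's §112(a) point about the missing formula.

```python
import re

# Invented sketch of steps (c)-(d) as recited; every parameter here
# (stop words, lexicon, stemmer, weights, length range) is assumed.
STOP_WORDS = {"the", "a", "an", "is", "of", "and", "to", "um", "uh"}
LEXICON = {"headache", "ibuprofen", "hypertension", "dosage"}

def preprocess(text):
    """Remove filler/stop words and apply naive suffix stemming."""
    def stem(w):
        for suf in ("ing", "ed", "es", "s"):
            if w.endswith(suf) and len(w) > len(suf) + 2:
                return w[: -len(suf)]
        return w
    return [stem(t) for t in re.findall(r"[a-z']+", text.lower())
            if t not in STOP_WORDS]

def importance_score(sentence, index, total, length_range=(5, 30)):
    words = sentence.lower().split()
    kw = sum(1 for w in words if w.strip(".,") in LEXICON)   # (i) lexicon hits
    lo, hi = length_range
    length_ok = 1.0 if lo <= len(words) <= hi else 0.0       # (ii) length band
    position = 1.0 - index / max(total, 1)                   # (iii) earlier = higher
    return 2.0 * kw + 1.0 * length_ok + 0.5 * position       # assumed weights

tokens = preprocess("Um, the patient is reporting headaches.")
score = importance_score("Take ibuprofen for the headache as needed.", 0, 10)
```

Note that each operation here is simple enough to perform with pen and paper, which is consistent with the mental-process characterization above.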
Claim 1 as a whole also recites a method of organizing human activity (i.e., managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions) because the claim recites a method that, in response to the structured summary object identifying a medication plan item, automatically generates an electronic prescription comprising at least drug name, strength, dosage form, route, frequency, and instructions. This is a method of managing interactions and communications between a healthcare provider and a patient, including the preparation and transmission of prescriptions as part of a professional or commercial interaction. The mere nominal recitation of displaying the structured summary via a user interface does not take the claim out of the methods of organizing human activity grouping. Thus, the claim recites an abstract idea. The identified types of abstract ideas are considered together as a single abstract idea for analysis purposes.

Independent claim 12 recites identical or nearly identical steps with respect to claim 1 (and therefore also recites limitations that fall within these subject matter groupings of abstract ideas), and claim 12 is therefore determined to recite an abstract idea under the same analysis.
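The “automatically generating an electronic prescription” limitation amounts to a trigger-and-fill rule. As an invented sketch only (the medication table, trigger rule, and default values are assumptions, not from the application):

```python
from typing import Optional

# Invented illustration of the claimed trigger: when a plan item
# mentions a known medication, emit a prescription record.
KNOWN_MEDICATIONS = {"ibuprofen": ("200 mg", "tablet", "oral")}

def generate_prescription(plan_items) -> Optional[dict]:
    for item in plan_items:
        for med, (strength, form, route) in KNOWN_MEDICATIONS.items():
            if med in item.lower():
                return {
                    "drug_name": med,
                    "strength": strength,
                    "dosage_form": form,
                    "route": route,
                    "frequency": "as directed",       # assumed default
                    "instructions": "per provider guidance",
                }
    return None  # no medication plan item identified

rx = generate_prescription(["start ibuprofen for headaches"])
```

A rule of this shape, applied to the provider-patient interaction, is what the analysis characterizes as managing interactions between people with generic computer assistance.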
Step 2A, Prong Two

The claimed limitations, as per method claim 1, include the steps of:

a) recording an audio of the healthcare appointment;
b) processing the audio to generate a text transcript of the healthcare appointment;
c) pre-processing the transcript by removing filler words and stop words and by stemming to normalize the text;
d) segmenting the transcript into sentences and computing, for each sentence, an importance score that is a function of at least (i) keyword frequency within a healthcare terminology lexicon, (ii) sentence length within a defined range, and (iii) sentence position in the appointment;
e) selecting, based on the importance scores, a subset of sentences as extractive evidence and applying a trained abstractive model to generate a structured summary object having fields including at least symptoms, diagnoses/assessments, and plan items;
f) in response to the structured summary object identifying a medication plan item, automatically generating an electronic prescription comprising at least drug name, strength, dosage form, route, frequency and instructions; and
g) displaying the structured summary together with user-selectable links from each summary field to the one or more supporting sentences in the transcript via a user interface.

Examiner Note: underlined elements indicate additional elements of the claimed invention identified as performing the steps of the claimed invention.

The judicial exception expressed in claim 1 is not integrated into a practical application. The claim as a whole merely describes how to generally “apply” the concept of summarizing and documenting a healthcare interaction and generating a prescription based on identified information in a computer environment.
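Steps (a) through (g) form a strictly linear pipeline. The following skeleton, with every body stubbed out, illustrates that linearity; the function names and placeholder return values are invented, since (per the §112(a) discussion) the record supplies no implementation details.

```python
from typing import Optional

# Skeleton of claim 1's steps (a)-(g); every body is an invented stub.
def record_audio() -> bytes: return b"..."                          # (a)
def transcribe(audio: bytes) -> str: return "patient reports pain"  # (b)
def preprocess(text: str) -> list: return text.split()              # (c)
def score_sentences(tokens: list) -> list: return [(tokens, 1.0)]   # (d)
def summarize(scored: list) -> dict:                                # (e)
    return {"symptoms": ["pain"], "assessments": [], "plan_items": []}
def maybe_prescribe(summary: dict) -> Optional[dict]: return None   # (f)
def display(summary: dict) -> dict: return summary                  # (g)

shown = display(summarize(score_sentences(preprocess(
    transcribe(record_audio())))))
```

Only steps (a) and (g) involve anything outside the summarization logic itself, which is why they are treated as data gathering and output display in the analysis.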
The claimed computer components (i.e., the trained abstractive model and the user interface) are recited at a high level of generality and are merely invoked as tools to perform an existing process of analyzing, summarizing, and documenting healthcare information. Simply implementing the abstract idea on a generic computer is not a practical application of the abstract idea. Accordingly, alone and in combination, these additional elements do not integrate the abstract idea into a practical application.

The claim also recites the additional elements of recording an audio of the healthcare appointment and displaying the structured summary together with user-selectable links from each summary field to the one or more supporting sentences in the transcript. These limitations are recited at a high level of generality (i.e., as a general means of collecting input data and presenting output data to a user) and amount to mere data gathering and displaying of a result, which is a form of insignificant extra-solution activity. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application.

Therefore, under Step 2A, the claims are directed to an abstract idea and require further analysis under Step 2B.

Step 2B

Claim 1 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed with respect to Step 2A, the claim as a whole merely describes how to generally “apply” the concept of summarizing and documenting a healthcare interaction and generating a prescription based on identified information in a computer environment. Thus, even when viewed as a whole, nothing in the claim adds significantly more (i.e., an inventive concept) to the abstract idea.
For the limitations that were considered extra-solution activity in Step 2A, these have been re-evaluated in Step 2B and determined to be well-understood, routine, conventional activity in the field. The specification does not provide any indication that collecting input data and presenting output data to a user is anything other than a conventional action that simply comes before or after summarizing and documenting information from a healthcare interaction (see page 1, lines 19-21, and Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354, 119 USPQ2d 1739, 1742 (Fed. Cir. 2016)). For these reasons, there is no inventive concept, and the claim is not patent eligible.

Claims 2-5, 7, and 14 recite no further additional elements and only further narrow the abstract idea. The previously identified additional elements, individually and in combination, do not integrate the narrowed abstract idea into a practical application, and do not amount to significantly more than the narrowed abstract idea, for reasons similar to those explained above.

Claims 6 and 13 recite the additional element of the user interface. However, this additional element amounts to implementing an abstract idea on a generic computing device. As such, this additional element, when considered individually or in combination with the prior elements, does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. Thus, as the dependent claims remain directed to a judicial exception, and as their additional elements do not amount to significantly more, the dependent claims are not patent eligible.
Therefore, the claims fail to contain any additional element, or combination of additional elements, that can be considered significantly more, and the claims are rejected under 35 U.S.C. 101 as lacking eligible subject matter.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 5-6, and 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Konam et al. (U.S. Patent Publication 2023/0223016 A1), referred to hereinafter as Konam ‘016, in view of Konam et al. (U.S.
Patent Publication 2023/0334263 A1), referred to hereinafter as Konam ‘263.

Regarding claim 1, Konam ‘016 teaches a computer-implemented method for summarizing a healthcare appointment between a healthcare provider and a patient (Konam ‘016 [0023] “FIG. 1 illustrates an example environment 100 in which a conversation is taking place, according to embodiments of the present disclosure. As shown in FIG. 1, a first party 110a (generally or collectively, party 110) is holding a conversation 120 with a second party 110b. The conversation 120 is spoken aloud and includes several utterances 122a-e (generally or collectively, utterances 122) spoken by the first party 110a and by the second party 110b in relation to a healthcare visit. As shown in the example scenario, the first party 110a is a patient and the second party 110b is a caregiver (e.g., a doctor, nurse, nurse practitioner, physician's assistant, etc.).” and Konam ‘016 [0166] “generating a display on a user interface that includes the transcript and the plurality of semantic categories, wherein the selected semantic category includes a selectable representation of the key point; and in response to receiving a selection of the selectable representation via the user interface, adjusting display of the transcript in the user interface to highlight the most-semantically-relevant segment.”), comprising:

a) recording an audio of the healthcare appointment (Konam ‘016 [0024] “One or more recording devices 130a-b (generally or collectively, recording device 130) are included in the environment 100 to record the conversation 120. In various embodiments, the recording devices 130 may be any device (e.g., such as the computing device 800 described in relation to FIG.
8) that is capable of recording the audio of the conversation, which may include cellphones, dictation devices, laptops, tablets, personal assistant devices, or the like.” and Konam ‘016 [0023] “The conversation 120 is spoken aloud and includes several utterances 122a-e (generally or collectively, utterances 122) spoken by the first party 110a and by the second party 110b in relation to a healthcare visit.”); b) processing the audio to generate a text transcript of the healthcare appointment (Konam ‘016 [0020] “To create these transcripts and the analyses thereof, the present disclosure describes a Natural Language Processing (NLP) system. As used herein, NLP is the technical field for the interaction between computing devices and unstructured human language for the computing devices to be able to “understand” the contents of the conversation and react accordingly. An NLP system may be divided into a Speech Recognition (SR) system, that generates a transcript from a spoken conversation, and an analysis system, that extracts additional information from the written record.”) and Konam ‘016 [0023] “The conversation 120 is spoken aloud and includes several utterances 122a-e (generally or collectively, utterances 122) spoken by the first party 110a and by the second party 110b in relation to a healthcare visit.”); c) pre-processing the transcript by removing filler words and stop words and by stemming to normalize the text (Konam ‘016 [0024] “In various embodiments, the recording device 130 may pre-process the recording of the conversation 120 to remove or filter out environmental noise, compress the audio, remove undesired sections of the conversation (e.g., silences or user-indicated portions to remove), which may reduce data transmission loads or otherwise increase the speed of transmission of the conversation 120 over a network.” and Konam ‘016 [0114] “In various embodiments, the speech recognition system may clean up verbal miscues, add punctuation to the 
transcript, and divide the conversation into a plurality of segments to provide additional clarity to readers. For example, the speech recognition system may remove verbal fillers (e.g., “um”, “uh”, etc.), expand shorthand terms, replace or supplement jargon terms with more commonplace synonyms, or the like. The speech recognition system may also add punctuation based on grammatical rules, pauses in the conversation, rising or falling tones in the utterances, or the like. In some embodiments, the speech recognition system uses the various sentences (e.g., identified via the added punctuation) to divide the conversation into segments, but may additionally or alternatively use speaker identities, shared topics/intents, and other features of the conversation to divide the conversation into segments.”); d) segmenting the transcript into sentences and computing, for each sentence, an importance score (Konam ‘016 [0046] “When using a shared theme to generate segments, the SR system 220 may use some of the key terms identified by a key term embedder 222 via string matching. For each of the detected key terms identifying a theme, the segment identifying embedder 222 selects a set of nearby sentences to group together as a segment. For example, when a first sentence uses a noun, and a second sentence uses a pronoun for that noun, the two sentences may be grouped together as a sentence. In another example, when a first person provides a question, and a second person provides a responsive answer to that question, the question and the answer may be grouped together as a segment. 
In some embodiments, the SR system 220 may define a segment to include between X and Y sentences, where another key term for another segment (and the proximity to the second key term to the first) may define ab edge between adjacent segments.”, Konam ‘016 [0119] “The analysis system may be configured to analyze various candidate categories to group the key points into, and scores each key point in a vector space with various features related to each candidate category. When a key point has a relevancy score above a relevancy threshold in the associated dimension for a given category, and that category has the highest value for the key point, the analysis system categorizes that key point as being related to the given category.”) that is a function of at least i) keyword frequency within a healthcare terminology lexicon, (Konam ‘016 [0104] “In various embodiments, the NLP system identifies what terms are considered “unfamiliar” based on a user profile, a frequency analysis of a corpus of words, a presence of an unfamiliarity flag on the term in a key word dictionary, and combinations thereof. For example, the individual words “Vertigone” and “vertigo” may be noted in a key word dictionary used by the SR system as a term requiring explanation, may be noted as appearing below a familiarity threshold number of times across a corpus of words identifiable by the SR system, and the user may be noted as not familiar with pharmacological terms, which all can indicate that the terms “Vertigone” and “vertigo” should be considered an unfamiliar term for the user.” and Konam ‘016 [0030] What term is “correct” may vary based on the level of experience of the party, so that the NLP system may substitute synonymous terms as being more “correct” for the user's context. 
For example, when a doctor states correctly the chemical name for the allergy medication “diphenhydramine”, the NLP system can “correct” the transcript to read or include additional definitions to state “your allergy medication”. Similarly, various jargon or shorthand phrases may be removed for the more-accessible versions of those phrases in the transcript. Additionally or alternatively, if the party 110 is identified as attempting to say (and mispronouncing) a difficult to pronounce term, such as a chemical name for the allergy medication “diphenhydramine”, (e.g., as “DIFF-enhy-DRAY-MINE” rather than “di-FEN-hye-DRA-meen”), the NLP system can correct the transcript to remove any misidentified terms based on the mispronounced term and substitute in the correct difficult-to-pronounce term.”) (ii) sentence length within a defined range (Konam ‘016 [0060] “FIG. 3A illustrates a first state of the UI 300, as may be provided to a user after initial analysis of an audio recording of a conversation by an NLP system. The transcript is shown in a transcript window 310, which includes several segments 320a-320e (generally or collectively, segment 320) identified within the conversation. In various embodiments, the segments 320 may represent speaker turns in the conversation, sentences identified in the conversation, topics identified in the conversation, a given length of time in the conversation (e.g., every X seconds), combinations thereof, and other divisions of the conversation), and (iii) sentence position in the appointment (Konam ‘016 [0045] “In another example, a third embedder 222c is trained to recognize segments within a conversation. In various embodiments, the SR system 220 diarizes the conversation into portions that identify the speaker, and provides punctuation for the resulting sentences (e.g., commas at short pauses, periods at longer pauses, question marks at a longer pause preceded by rising intonation) based on the language being spoken. 
The third embedder 222c may then add metadata tags for who is speaking a given sentence (as determined by the second embedder 222b) and group one or more portions of the sentence together into segments based on one or more of a shared theme or shared speaker, question breaks in the conversation, time period (e.g., a segment may be between X and Y minutes long before being joined with another segment or broken into multiple segments), or the like.” and Konam ‘016 [0046] “When using a shared theme to generate segments, the SR system 220 may use some of the key terms identified by a key term embedder 222 via string matching. For each of the detected key terms identifying a theme, the segment identifying embedder 222 selects a set of nearby sentences to group together as a segment. For example, when a first sentence uses a noun, and a second sentence uses a pronoun for that noun, the two sentences may be grouped together as a sentence. In another example, when a first person provides a question, and a second person provides a responsive answer to that question, the question and the answer may be grouped together as a segment. In some embodiments, the SR system 220 may define a segment to include between X and Y sentences, where another key term for another segment (and the proximity to the second key term to the first) may define ab edge between adjacent segments”); e) selecting, based on the importance scores, a subset of sentences as extractive evidence and applying a trained abstractive model to generate a summary including at least symptoms, diagnoses/assessments, and plan items (Konam ‘016 [0046] “When using a shared theme to generate segments, the SR system 220 may use some of the key terms identified by a key term embedder 222 via string matching. For each of the detected key terms identifying a theme, the segment identifying embedder 222 selects a set of nearby sentences to group together as a segment. 
For example, when a first sentence uses a noun, and a second sentence uses a pronoun for that noun, the two sentences may be grouped together as a sentence. In another example, when a first person provides a question, and a second person provides a responsive answer to that question, the question and the answer may be grouped together as a segment. In some embodiments, the SR system 220 may define a segment to include between X and Y sentences, where another key term for another segment (and the proximity to the second key term to the first) may define an edge between adjacent segments.”, Konam ‘016 [0119] “The analysis system may be configured to analyze various candidate categories to group the key points into, and scores each key point in a vector space with various features related to each candidate category. When a key point has a relevancy score above a relevancy threshold in the associated dimension for a given category, and that category has the highest value for the key point, the analysis system categorizes that key point as being related to the given category.”, Konam ‘016 [0154] “Additionally, the memory 820 can include one or more of machine learning models 826 for speech recognition and analysis, as described in the present disclosure. As used herein, the machine learning models 826 may include various algorithms used to provide “artificial intelligence” to the computing device 800, which may include Artificial Neural Networks, decision trees, support vector machines, genetic algorithms, Bayesian networks, or the like. The models may include publicly available services (e.g., via an Application Program Interface with the provider) as well as purpose-trained or proprietary services. One of ordinary skill in the relevant art will recognize that different domains may benefit from the use of different machine learning models 826, which may be continuously or periodically trained based on received feedback. 
Accordingly, the person of ordinary skill in the relevant art will be able to select or design an appropriate machine learning model 826 based on the details provided in the present disclosure.”, Konam ‘016 [0043] “For example, a first embedder 222a is trained to recognize key terms, and may be provided with a set of words, relations between words, or the like to analyze the transcript 225 for. Key terms may be defined to include various terms (and synonyms) of interest to the users. For example, in a medical domain, the names of various medications, therapies, regimens, syndromes, diseases, symptoms, etc., can be set as key terms.”, Konam ‘016 [0065] “Although the UI 300 illustrated in FIGS. 3A-3F displays four categories 330 corresponding to the SOAP (Subjective, Objective, Assessment, Plan) note structure used by many physicians, the analysis window 380 may display more than, fewer than, and different arrangements of the categories 330 shown in FIGS. 3A-3F. Accordingly, for the same conversation, the UI 300 may show different orders and types of the representations 340 based on which categorization scheme is selected by the user.”, Konam ‘016 [0122] “In some embodiments, the analysis system also identifies the most-semantically-relevant and next-most-semantically-relevant segments for one or more categories that the key point was not classified into, but satisfied a certainty threshold for. 
For example, if the term “battery” could be classified into an “assessment” or “plan” category based on satisfying a certainty threshold for each category, but scored higher on the dimensions for the “plan” category, the analysis system identifies the most-semantically-relevant segment for (actual) classification into the “plan” category, but also the most-semantically-relevant segment for (potential) classification into the “assessment” category.”; f) in response to the summary identifying a medication plan item (Konam ‘016 [0043] “For example, a first embedder 222a is trained to recognize key terms, and may be provided with a set of words, relations between words, or the like to analyze the transcript 225 for. Key terms may be defined to include various terms (and synonyms) of interest to the users. For example, in a medical domain, the names of various medications, therapies, regimens, syndromes, diseases, symptoms, etc., can be set as key terms.”, Konam ‘016 [0065] “Although the UI 300 illustrated in FIGS. 3A-3F displays four categories 330 corresponding to the SOAP (Subjective, Objective, Assessment, Plan) note structure used by many physicians, the analysis window 380 may display more than, fewer than, and different arrangements of the categories 330 shown in FIGS. 3A-3F. Accordingly, for the same conversation, the UI 300 may show different orders and types of the representations 340 based on which categorization scheme is selected by the user.”, Konam ‘016 [0122] “In some embodiments, the analysis system also identifies the most-semantically-relevant and next-most-semantically-relevant segments for one or more categories that the key point was not classified into, but satisfied a certainty threshold for. 
For example, if the term “battery” could be classified into an “assessment” or “plan” category based on satisfying a certainty threshold for each category, but scored higher on the dimensions for the “plan” category, the analysis system identifies the most-semantically-relevant segment for (actual) classification into the “plan” category, but also the most-semantically-relevant segment for (potential) classification into the “assessment” category.”, and Konam ‘016 [0090] “For example, under a first category 430a of “conditions discussed”, the UI 400 includes a first representation 440a of a key point classified as related to “conditions discussed” extracted from the conversation. Other key points extracted from the conversation are classified into other categories 430, such that the key points for various medications are classified under the second category 430b for “medications”, and the key points for follow up actions to take after the conversation are under the fourth category 330d for “follow up”.”); and g) displaying the structured summary together with user-selectable links from each summary field to the one or more supporting sentences in the transcript via a user interface (Konam ‘016 [0166] “Clause 10: A method for performing various operations, a system including a processor and a memory device including instructions that when executed by the processor perform various operations, or a memory device that includes instructions that when executed by a processor perform various operations, wherein the operations comprise: receiving a transcript of a conversation between at least a first party and a second party, wherein the transcript includes: a key point classified within a selected semantic category of a plurality of semantic categories identified from the conversation; and a hyperlink between the key point and a most-semantically-relevant segment of a plurality of segments of the transcript; generating a display on a user interface that includes the 
transcript and the plurality of semantic categories, wherein the selected semantic category includes a selectable representation of the key point; and in response to receiving a selection of the selectable representation via the user interface, adjusting display of the transcript in the user interface to highlight the most-semantically-relevant segment.”). Konam ‘016 fails to explicitly teach a structured summary object having fields; and automatically generating an electronic prescription comprising at least drug name, strength, dosage form, route, frequency and instructions. Konam ‘263 teaches a structured summary object having fields (Konam ‘263 [0106] “As shown in FIG. 4F, a human readable element 490 is presented when the fourth representation 450d is selected and the user has selected to send the prescription to the pharmacy via the fourth contextual controls 460d. The human readable element 490 is presented as a confirmation before sending a machine-readable message to the system associated with the pharmacy, and includes the various data extracted from the transcript, local systems, and external systems related to the action item. As illustrated, the “for” and “pharmacy” fields are illustrated with a first indicator 495a, indicating that the data in the fields (e.g., the name of the patient and contact information for the patient's pharmacy of record) has been taken from a system associated with the user (e.g., a locally managed EMR system with patient details and preferences). In contrast, the field for the “medicine” is illustrated with a second indicator 495b, indicating that the data (e.g., “Vertigone”—the medication for which the prescription is being submitted) was extracted from the transcript. 
Similarly, the field for the “quantity” is illustrated with a third indicator 495c, indicating that the data (e.g., 300 mg, 90 day supply) was received from a supplemental data source 370 that is outside of the user's control (e.g., a pharmacy inventory system, a manufacturer's website, a physician's reference system, an insurance carrier's database of approved medications, etc.).”); automatically generating an electronic prescription comprising at least drug name, strength, dosage form, route, frequency and instructions (Konam ‘263 [0066] “The data to include in an action item, and relevant intents behind an action item, may be defined in various templates 315 included in the template database 310. Each template 315 may define a known category of action item and the data used to complete that action item. For example, categories of action items can include “contact other participant,” “contact non-participant party,” “confirm adherence to plan,” or the like that can be further developed based on standard follow-up actions in the user's environment and role in the environment. Various users can develop and specify what data each template 315 specifies to have filled in, when those data need to be provided, and divisions between the various templates 315. For example, a doctor may define templates 315 for referring a patient to another doctor (including data to identify the patient, the condition, and referred-to doctor, etc.), for submitting a prescription to a pharmacy (including data to identify the patient, the medication, the dosage, the amount, etc.),” and Konam ‘263 [0074] “Accordingly, a template 315 for submitting a prescription can include data elements for the name of the medication, dosage of the medication, quantity of the medication (or length of prescription), preferred pharmacist, treatment notes, and the like. 
In contrast, the template 315 for collecting the prescription can include data elements for the preferred pharmacist, medication discount programs, insurance information, and authorized third parties who can collect the prescription. Some of the elements needed to fill out the respective templates may be extracted from the transcript 225, but others may be requested from the user or another supplemental data source 370.”, and Konam ‘263 [0106] “As shown in FIG. 4F, a human readable element 490 is presented when the fourth representation 450d is selected and the user has selected to send the prescription to the pharmacy via the fourth contextual controls 460d. The human readable element 490 is presented as a confirmation before sending a machine-readable message to the system associated with the pharmacy, and includes the various data extracted from the transcript, local systems, and external systems related to the action item. As illustrated, the “for” and “pharmacy” fields are illustrated with a first indicator 495a, indicating that the data in the fields (e.g., the name of the patient and contact information for the patient's pharmacy of record) has been taken from a system associated with the user (e.g., a locally managed EMR system with patient details and preferences). In contrast, the field for the “medicine” is illustrated with a second indicator 495b, indicating that the data (e.g., “Vertigone”—the medication for which the prescription is being submitted) was extracted from the transcript. Similarly, the field for the “quantity” is illustrated with a third indicator 495c, indicating that the data (e.g., 300 mg, 90 day supply) was received from a supplemental data source 370 that is outside of the user's control (e.g., a pharmacy inventory system, a manufacturer's website, a physician's reference system, an insurance carrier's database of approved medications, etc.).”). 
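The template-filling mechanism Konam ‘263 describes (a prescription template whose data elements are populated from the transcript, a local EMR system, or an outside supplemental source, with any still-missing elements requested from the user) can be illustrated with a minimal sketch. The class and field names below are hypothetical, chosen only to mirror the “for”/“pharmacy”/“medicine”/“quantity” fields and source indicators 495a-495c discussed in paragraph [0106]; they do not come from either reference.

```python
from dataclasses import dataclass, field

@dataclass
class PrescriptionField:
    value: str
    source: str  # "transcript" | "local_system" | "supplemental" (cf. indicators 495a-495c)

@dataclass
class PrescriptionTemplate:
    # Data elements the template requires before it can be submitted
    # (illustrative subset, per [0066]/[0074]).
    fields: dict = field(default_factory=dict)
    required = ("for", "pharmacy", "medicine", "quantity")

    def fill(self, name: str, value: str, source: str) -> None:
        self.fields[name] = PrescriptionField(value, source)

    def missing(self):
        # Elements not extractable from the transcript are requested from
        # the user or another supplemental data source ([0074]).
        return [n for n in self.required if n not in self.fields]

rx = PrescriptionTemplate()
rx.fill("for", "patient-of-record", "local_system")        # from local EMR (indicator 495a)
rx.fill("pharmacy", "pharmacy-of-record", "local_system")  # from local EMR (indicator 495a)
rx.fill("medicine", "Vertigone", "transcript")             # extracted key term (indicator 495b)
assert rx.missing() == ["quantity"]                        # still needs a supplemental source
rx.fill("quantity", "300 mg, 90 day supply", "supplemental")  # indicator 495c
assert rx.missing() == []                                  # ready for machine-readable submission
```

Tracking a per-field source, as the sketch does, is what lets a confirmation view like the human readable element 490 label each value with where it came from before the machine-readable message is sent.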
Therefore, it would have been obvious to a person having ordinary skill in the art (PHOSITA) before the effective filing date of the claimed invention to combine the teachings of Konam ’016 and Konam ’263 in order to generate a structured summary of a healthcare conversation and automatically populate and transmit an electronic prescription record. Konam ’016 teaches recording, transcribing, segmenting, and categorizing healthcare conversations into semantic categories for display and interaction, while Konam ’263 teaches populating structured templates (including prescription templates) with extracted data from transcripts for transmission to a pharmacy system. A PHOSITA would have been motivated to combine these references to achieve predictable results, improving clinical documentation efficiency and reducing manual data entry by linking conversation-derived structured summaries to downstream prescription workflows. The combination represents an application of known techniques (structured data extraction and electronic prescription generation) to the same type of healthcare documentation environment, yielding a predictable improvement in automation and accuracy. Regarding claim 2, Konam ‘016 and Konam ‘263 teach the invention in claim 1, as discussed above, and further teach wherein the electronic prescription is created from the structured summary object by populating the drug, strength, dosage form, route, frequency and instructions fields, and is retained as a machine-readable record for transmission to a pharmacy selected by the user (Konam ‘263 [0106] “As shown in FIG. 4F, a human readable element 490 is presented when the fourth representation 450d is selected and the user has selected to send the prescription to the pharmacy via the fourth contextual controls 460d. 
The human readable element 490 is presented as a confirmation before sending a machine-readable message to the system associated with the pharmacy, and includes the various data extracted from the transcript, local systems, and external systems related to the action item. As illustrated, the “for” and “pharmacy” fields are illustrated with a first indicator 495a, indicating that the data in the fields (e.g., the name of the patient and contact information for the patient's pharmacy of record) has been taken from a system associated with the user (e.g., a locally managed EMR system with patient details and preferences). In contrast, the field for the “medicine” is illustrated with a second indicator 495b, indicating that the data (e.g., “Vertigone”—the medication for which the prescription is being submitted) was extracted from the transcript. Similarly, the field for the “quantity” is illustrated with a third indicator 495c, indicating that the data (e.g., 300 mg, 90 day supply) was received from a supplemental data source 370 that is outside of the user's control (e.g., a pharmacy inventory system, a manufacturer's website, a physician's reference system, an insurance carrier's database of approved medications, etc.).”), Konam ‘263 [0066] “The data to include in an action item, and relevant intents behind an action item, may be defined in various templates 315 included in the template database 310. Each template 315 may define a known category of action item and the data used to complete that action item. For example, categories of action items can include “contact other participant,” “contact non-participant party,” “confirm adherence to plan,” or the like that can be further developed based on standard follow-up actions in the user's environment and role in the environment. Various users can develop and specify what data each template 315 specifies to have filled in, when those data need to be provided, and divisions between the various templates 315. 
For example, a doctor may define templates 315 for referring a patient to another doctor (including data to identify the patient, the condition, and referred-to doctor, etc.), for submitting a prescription to a pharmacy (including data to identify the patient, the medication, the dosage, the amount, etc.),” and Konam ‘263 [0074] “Accordingly, a template 315 for submitting a prescription can include data elements for the name of the medication, dosage of the medication, quantity of the medication (or length of prescription), preferred pharmacist, treatment notes, and the like. In contrast, the template 315 for collecting the prescription can include data elements for the preferred pharmacist, medication discount programs, insurance information, and authorized third parties who can collect the prescription.”).

Prosecution Timeline

May 24, 2023: Application Filed
Feb 18, 2025: Non-Final Rejection (§101, §102, §103)
Aug 21, 2025: Response Filed
Oct 10, 2025: Final Rejection (§101, §102, §103) (current)

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 0% (0% with interview, +0.0% lift)
Median Time to Grant: 3y 0m
PTA Risk: Moderate

Based on 14 resolved cases by this examiner. Grant probability derived from career allow rate.
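The projection figures above follow from simple ratios over the examiner's resolved cases. A minimal sketch of that arithmetic, assuming (as the panel states) that grant probability is just the career allow rate plus any observed interview lift; the variable names are illustrative, not from any real system:

```python
# Career data shown in the panel: 0 granted out of 14 resolved cases.
granted, resolved = 0, 14
career_allow_rate = granted / resolved               # 0.0, displayed as "0%"

# No lift was observed among this examiner's resolved cases with interview.
interview_lift = 0.0
grant_prob_with_interview = min(career_allow_rate + interview_lift, 1.0)

print(f"Grant probability: {career_allow_rate:.0%}")
print(f"With interview: {grant_prob_with_interview:.0%} (+{interview_lift:.1%})")
```

With only 14 resolved cases, these point estimates carry wide uncertainty, which is why the panel also reports the sample size.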
