Prosecution Insights
Last updated: April 19, 2026
Application No. 18/231,484

SYSTEM, METHOD AND STORAGE MEDIUM FOR EXTRACTING TARGETED MEDICAL INFORMATION FROM CLINICAL NOTES

Non-Final OA: §101, §103
Filed: Aug 08, 2023
Examiner: AGAHI, DARIOUSH
Art Unit: 2656
Tech Center: 2600 (Communications)
Assignee: Koninklijke Philips N.V.
OA Round: 3 (Non-Final)

Grant Probability: 86% (Favorable)
OA Rounds: 3-4
To Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 86% (above average; 142 granted / 166 resolved; +23.5% vs TC avg)
Interview Lift: +29.0% (strong; measured over resolved cases with interview)
Typical Timeline: 2y 9m avg prosecution; 27 currently pending
Career History: 193 total applications across all art units

Statute-Specific Performance

§101: 25.8% (-14.2% vs TC avg)
§103: 47.8% (+7.8% vs TC avg)
§102: 10.0% (-30.0% vs TC avg)
§112: 12.6% (-27.4% vs TC avg)

Tech Center averages are estimates; based on career data from 166 resolved cases.
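The headline figures above are simple ratios over the examiner's resolved cases. As a quick sanity check, here is a minimal sketch of the arithmetic; only the totals stated on this page are used, and no per-case data is assumed:

```python
# Career allow rate from the stated totals: 142 granted out of 166 resolved.
granted = 142
resolved = 166

career_allow_rate = granted / resolved  # ~0.855, displayed as 86%
print(f"Career allow rate: {career_allow_rate:.1%}")

# The stated "+23.5% vs TC avg" delta implies a Tech Center average near 62%.
implied_tc_avg = career_allow_rate - 0.235
print(f"Implied TC average: {implied_tc_avg:.1%}")
```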

Office Action

Rejections: §101, §103
DETAILED ACTION

This Office action is in response to Applicant's RCE submission filed on 12/22/2025. Claims 1, 6 and 11 were amended; claims 14 and 15 were canceled. Claims 1-13 and 16-18 are pending in the application, of which Claims 1, 6, and 11 are independent and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/22/2025 has been entered.

Response to Arguments

Applicant's arguments filed in the Amendment filed 12/22/2025 (herein "Amendment") with respect to the claim objections raised in the previous Office Action have been fully considered and are persuasive. Therefore, the claim objections are withdrawn.

Applicant's arguments filed in the Amendment with respect to the 112(b) rejection raised in the previous Office Action have been fully considered and are persuasive. Therefore, the 112(b) rejections are withdrawn.

Applicant's arguments in the Amendment with respect to the claim interpretation under 35 U.S.C. 112(f) on various claims have been fully considered and are persuasive. Consequently, the 35 U.S.C. 112(f) claim interpretation is withdrawn.

Applicant's arguments in the Amendment with respect to the 35 U.S.C. §101 rejection raised in the previous Office action have been fully considered but are not persuasive. Consequently, the 35 U.S.C. 101 rejection is maintained.

Applicant on page 7 recites: "… the claimed invention involves artificial intelligence.
With respect to artificial intelligence inventions, the USPTO has recently issued the Memorandum "Reminders on evaluating subject matter eligibility of claims under 35 U.S.C. 101" dated August 4, 2025 and to the recent USPTO Desjardins Rehearing Decision (accessible at: 2024-000567 - Ex Parte Desjardins et al. Rehearing Decision Sep 26 2025), which specifies that AI inventions should not be categorically excluded as abstract ideas."

Applicant continues on page 8: "… Claim limitations that encompass AI in a way that cannot be practically performed in the human mind do not fall within this grouping."

Examiner disagrees, and notes that Ex parte Desjardins turned on the specifics of a particular type of training, not on just any recitation of AI. The following paragraph is added to the end of MPEP § 2106.04(d), subsection III:

In Ex Parte Desjardins, Appeal No. 2024-000567 (PTAB September 26, 2025, Appeals Review Panel Decision) (precedential), the claimed invention was a method of training a machine learning model on a series of tasks. The Appeals Review Panel (ARP) overall credited benefits including reduced storage, reduced system complexity and streamlining, and preservation of performance attributes associated with earlier tasks during subsequent computational tasks as technological improvements that were disclosed in the patent application specification. Specifically, the ARP upheld the Step 2A Prong One finding that the claims recited an abstract idea (i.e., a mathematical concept). In Step 2A Prong Two, the ARP then determined that the specification identified improvements as to how the machine learning model itself operates, including training a machine learning model to learn new tasks while protecting knowledge about previous tasks to overcome the problem of "catastrophic forgetting" encountered in continual learning systems.
Importantly, the ARP evaluated the claims as a whole in discerning that at least the limitation "adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task" reflected the improvement disclosed in the specification. Accordingly, the claims as a whole integrated what would otherwise be a judicial exception into a practical application at Step 2A Prong Two, and therefore the claims were deemed to be outside any specific, enumerated judicial exception (Step 2A: NO). Therefore, Ex Parte Desjardins does not provide automatic eligibility to AI-based claims.

Applicant further continues on page 8 and draws parallels to Examples 39 and 47: the claim limitation "training the neural network in a first stage using the first training set" of Example 39 does not recite a judicial exception, and Example 47 deals with "training, by the computer, the ANN based on the input data and a selected training algorithm to generate a trained ANN, wherein the selected training algorithm includes a backpropagation algorithm and a gradient descent algorithm" in claim 2 of Example 47. Examiner also disagrees, since none of these examples can be applied to the instant application, which only refers to: "using a trained natural language based transformer, … wherein the trained natural language based transformer comprises:". These limitations contain no reference to an AI being trained; they refer only to an already-trained language model, which is not an AI being trained. A trained language model, due to lack of specificity, is a substitution for a generic processor. As such, the 101 rejection is maintained.
Applicant's arguments with respect to the 35 U.S.C. §103 rejection raised in the previous Office action have been fully considered but are moot in view of the new grounds of rejection, which were necessitated by Applicant's amendment. Therefore, the previous rejection has been withdrawn. However, upon further consideration, a new ground of rejection is introduced for the independent claims, further adding Manouchehri (US 20150173522 A1) to the combination of Sheffer, Muller, D'Souza and Letinic. Please see the prior art section below for more detail, including updated citations and obviousness rationale.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-13 and 16-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The flowchart in MPEP 2106, subsection III, is used to determine whether a claim satisfies the criteria for subject matter eligibility. For analysis purposes, one can follow the flowchart for subject matter eligibility.
[Figure: MPEP 2106 subject matter eligibility flowchart]

Step 1: The independent Claims are directed to statutory categories.

Abstract Idea Groupings – MPEP 2106.04(a)(2). The enumerated groupings of abstract ideas are defined as:
1) Mathematical concepts – mathematical relationships, mathematical formulas or equations, mathematical calculations (see MPEP § 2106.04(a)(2), subsection I);
2) Certain methods of organizing human activity – fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions) (see MPEP § 2106.04(a)(2), subsection II); and
3) Mental processes – concepts performed in the human mind (including an observation, evaluation, judgment, opinion) (see MPEP § 2106.04(a)(2), subsection III).

Claim 1 is a method claim and is directed to the process category of patentable subject matter. Claim 6 is a system claim and is directed to the machine or manufacture category of patentable subject matter. Claim 11 is a non-transitory computer readable storage medium claim and is directed to the machine or manufacture category of patentable subject matter.

Step 2A is a two-prong test.

[Figure: Step 2A two-prong analysis flowchart]

Step 2A, Prong One: Does the Claim recite a Judicially Recognized Exception (Abstract Idea)?
Are these Claims nevertheless considered abstract as a Mathematical Concept (mathematical relationships, mathematical formulas or equations, mathematical calculations), a Mental Process (concepts performed in the human mind, including an observation, evaluation, judgment, opinion), or Certain Methods of Organizing Human Activity (1 - fundamental economic principles or practices, including hedging, insurance, mitigating risk; 2 - commercial or legal interactions, including agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, business relations; 3 - managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions), and do they fall under the judicial exception to patentable subject matter?

The rejected Claims recite Mental Processes or Methods of Organizing Human Activity.

Step 2A, Prong Two: Additional Elements that Integrate the Judicial Exception into a Practical Application?

This prong involves identifying whether there are any additional elements recited in the claim beyond the judicial exception(s), and evaluating those additional elements to determine whether they integrate the exception into a practical application of the exception. "Integration into a practical application" requires an additional element, or a combination of additional elements, in the claim to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the exception. The analysis uses the considerations laid out by the Supreme Court and the Federal Circuit to evaluate whether the judicial exception is integrated into a practical application.

The rejected Claims do not include additional limitations that point to integration of the abstract idea into a practical application. Accordingly, the rejected Claims are directed to the abstract idea that they recite.
Claim 11 is a generic automation of a mental process, since a human agent can receive, analyze, generate, evaluate, etc. Other than the mental process under the BRI, there is only the mention of a trained natural language based transformer ("transformer"), which is considered to be a generic processor. The transformer is not invented nor improved by the applicant, and as such it is considered a generic processor/computer due to lack of specificity. With such a generic extra element, one cannot identify anything that can be relied upon as an improvement.

Prong Two of Step 2A in the 101 analysis asks whether the abstract idea is integrated into a practical application. The answer is no in this instance because there is no technological solution in the Claim that "integrates" the abstract idea. The Claim only suggests that the abstract idea be applied; it does not describe an application.

11. A non-transitory computer-readable storage medium encoded with instructions that when executed improve accuracy and efficiency of extracting targeted structured medical information from unstructured clinical notes stored in memory, wherein the targeted medical information is social determinants of health (SDOH) information, comprising: [This merely amounts to a data gathering activity, by way of collecting information related to a social aspect of a patient, which can be collected and documented on a piece of paper; as such, a mental process.]

retrieving from the memory a sequence of clinical texts of electronic health records, the clinical texts comprising unstructured clinical notes, and, [This merely amounts to a data gathering activity, by way of collecting information related to a patient, which can be collected and documented on a piece of paper. Clinical notes could be information that a doctor or medical staff gathers during a patient interaction. As such, it is a mental process.]
tokenizing the sequence of the clinical texts to obtain a sequence of input tokens; [This involves dissecting (splitting) each sentence into the words/tokens which formed the given sentence in a document, which can be carried out by a human with the help of pen and paper. There are no inventive steps involved, and a human can write down the given text/sentence and parse it out to its token/word level.]

transforming, using a trained natural language based transformer, the sequence of input tokens into a sequence of structured output tokens; and [This involves re-representing input sentences from one representation in another. A human can carry out such a transformation by representing the word/token as a set of numbers and subsequently transforming them back to words. The additional element of a trained natural language based transformer does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. Furthermore, a trained natural language based transformer, due to lack of specificity, is considered a generic processor.]

wherein the trained natural language based transformer comprises: (i) an encoder that receives the sequence of input tokens as inputs, and generates a sequence of representations pursuant to the training; and (ii) a decoder that receives the sequence of representations and generates a sequence of structured output tokens; [The internal structure of a generic processor would not impose any meaningful limits on practicing the abstract idea. Having an encoder and decoder that are part of a generic trained natural language based transformer, which has not been invented nor improved upon by the applicant, does not impose any meaningful limits on practicing the abstract idea.]
wherein the decoder is configured to predict a link of a trigger of a medical event represented in the sequence of representations to an argument span of the trigger represented in the sequence of representations; [This merely represents an onset of a medical event based on some precondition; a human can also provide such a trigger, i.e., chest pain that may feel like pressure, tightness, pain, squeezing or aching, cold sweat, fatigue, etc. could lead to (trigger) a heart attack.]

obtain annotated text-label pairs of the clinical texts from the structured output tokens; [This merely represents the extraction of the text with its associated labels from a table format, which can be considered a mental exercise.]

converting the text-label pairs of the clinical texts into a structured table format; and [This merely transforms one format into another. A human can perform such a task with the help of a pen and paper. There are no inventive concepts in such a transformation.]

displaying the structured table of text-label pairs of the clinical text. [This merely amounts to writing down the information on a piece of paper and sharing it with the interested party.]

These limitations, under their broadest reasonable interpretation, cover performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting "transformer", "computer/processing unit", and "memory", nothing in the claim element precludes the step from practically being performed in the mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
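The claim 11 pipeline walked through above (tokenize, transform with an encoder-decoder model, post-process into text-label pairs, tabulate) can be made concrete with a short sketch. The `transform` stub below is a hypothetical stand-in for the claimed trained transformer, and the SDOH labels are illustrative; only the data flow, not the model, is shown:

```python
def tokenize(text: str) -> list[str]:
    # Split the clinical text into input tokens (simplified whitespace tokenizer).
    return text.split()

def transform(tokens: list[str]) -> list[str]:
    # Hypothetical stand-in for the trained seq-to-structure model: emits
    # "token|LABEL" structured output tokens. A real model would predict
    # labels such as SDOH event triggers and argument spans.
    sdoh_terms = {"unemployed": "EMPLOYMENT", "smokes": "TOBACCO"}
    return [f"{t}|{sdoh_terms[t.lower()]}" for t in tokens if t.lower() in sdoh_terms]

def to_table(structured: list[str]) -> list[tuple[str, str]]:
    # Post-process structured output tokens into annotated text-label pairs.
    return [tuple(s.split("|")) for s in structured]

note = "Patient is unemployed and smokes daily"
for text, label in to_table(transform(tokenize(note))):
    print(f"{text}\t{label}")
```

A trained model would replace the dictionary lookup in `transform` with learned predictions; the tabulate/display steps here are just the text-label pairs printed in rows.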
This judicial exception is not integrated into a practical application. In particular, the claim only recites the additional elements of using a "transformer", "computer/processing unit", and "memory" to perform all of the above-mentioned steps. The use of a "transformer" and "computer/processing unit" is recited at a high level of generality (i.e., as a generic computer/processor device performing a generic computer function) such that it amounts to no more than mere instructions to apply the exception using a generic computer component; see MPEP § 2106.05(f), Mere Instructions to Apply an Exception [R-10.2019]. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Step 2B: Search for an Inventive Concept: Additional Elements Do Not Amount to Significantly More

The limitation "transforming, using a trained natural language based transformer, the sequence of input tokens into a sequence of structured output tokens" recites well-understood, routine, and conventional machine components that are being used for their well-understood, routine, conventional and rather generic functions. Additionally, these limitations are expressed parenthetically and lack nexus to the claim language, and as such are a separable and divisible mention of a machine. Merely reciting a transformer, without significantly more, appears to be equivalent to a generic computer/processor processing a task that a human can process in their mind or with the aid of paper/pen.

As mentioned, the only additional element to be considered is the recitation of the transformer. However, the as-filed specification disavows specificity of the Transformer used, referring to a "neural network-based sequence-to-structure" model in Par.
0038, quoting "Such a sequence to structure model could be any sequence-to-sequence models, e.g., RNN-based or transformer-based." in Par. 0045, and referring in Par. 0050 to ANN, CNN, and RNN, which attests that the Transformer is a generic model. Therefore, the cited additional element of the Transformer does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. Accordingly, it is not sufficient to cause the Claim, as a whole, to amount to significantly/substantially more than the underlying abstract idea.

This judicial exception is not integrated into a practical application. In particular, the claim only recites the additional elements of using a "Transformer", "computer/processing unit", and "memory" to perform all of the above-mentioned steps. The use of a "Transformer" and "computer/processing unit" is recited at a high level of generality (i.e., as a generic computer/processor device performing a generic computer function) such that it amounts to no more than mere instructions to apply the exception using a generic computer component; see MPEP § 2106.05(f), Mere Instructions to Apply an Exception [R-10.2019]. Also, in the case of the Transformer, it is described in a broad manner such that it could include techniques that may be performed by a human, like rule-based learning for example.

However, the Transformer is a well-understood, routine, conventional element. For example: US20250140101A1 discloses a transformer-based model for natural language processing tasks. US 20250139362 A1 discloses a Transformer-based machine learning method for pre-learning of natural language processing (NLP). US20250131184A1 discloses a transformer based deep learning natural language understanding model. US20250124228A1 discloses a transformer model that utilizes deep learning in NLP and natural language generation (NLG) tasks.
US20250104127A1 discloses Transformer Based Natural Language Processing for clustering tasks.

The additional elements of a "computer/processing unit" and "memory", as cited in the as-filed specification in paragraphs such as 0031, 0047, 0052, and 0057 of the instant application, appear to disclose general-purpose computer components, which are well-understood, routine and conventional elements. The use of a "computer and/or components of a computer" is recited at a high level of generality (i.e., as a generic computer device performing the generic computer functions of capturing input data, storing data and retrieving data) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

With respect to independent Claim 1, there is no additional component to make the claim as a whole amount to substantially more than the underlying abstract idea. With respect to independent Claim 6, the additional component is a memory, which is not sufficient to make the claim as a whole amount to substantially more than the underlying abstract idea.
The dependent claims do not add limitations that would either integrate the recited abstract idea into a practical application or help the Claim as a whole amount to significantly more than the abstract idea identified for the independent Claims:

Claims 2, 7, and 12 recite: "wherein the trained natural language based transformer is a T5 transformer." As mentioned earlier, the additional element of a trained natural language based transformer does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is directed toward an abstract idea. The claim is not patent eligible.

Claims 3, 8 and 13 recite: "decoder receives the sequence of representations and a previously generated token as inputs to generate one output token at each time step." Similarly, the encoder and decoder are generic components of a generic model, as mentioned before, which do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is directed toward an abstract idea. The claim is not patent eligible.

Claims 4 and 9 recite: "wherein the post-processing/processor further includes converting the text-label pairs into a table format." A human can use pen and paper to carry out such reformatting efforts; this is merely taking a set of texts and their corresponding labels and putting them in a table format. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is directed toward an abstract idea. The claim is not patent eligible.
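Claims 3, 8 and 13 recite token-by-token (autoregressive) decoding. A minimal sketch of that loop, with `decoder_step` as a hypothetical stand-in for a trained decoder (this toy version simply copies the encoder representations, then signals end of sequence):

```python
EOS = "</s>"  # end-of-sequence marker

def decoder_step(representations, prev_token, step):
    # Hypothetical stand-in for a trained decoder. A real decoder would
    # condition on the encoder representations and the previously generated
    # token; this toy version echoes the representation at position `step`.
    return representations[step] if step < len(representations) else EOS

def decode(representations, max_steps=10):
    # One output token per time step, feeding back the previous token,
    # as recited in claims 3, 8 and 13.
    output, prev = [], "<s>"
    for step in range(max_steps):
        token = decoder_step(representations, prev, step)
        if token == EOS:
            break
        output.append(token)
        prev = token
    return output

print(decode(["pain|TRIGGER", "chest pain|ARG"]))
```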
Claims 5 and 10 recite: "wherein the targeted medical information is social determinants of health (SDOH) information." This is merely a data gathering activity, and regardless of the source, a human can retrieve such data. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is directed toward an abstract idea. The claim is not patent eligible.

Claims 16, 17 and 18 recite: "training the natural language based transformer to generate a sequence of structured output tokens from a sequence of input tokens, comprising generating event-based annotations from training clinical texts comprising a plurality of input sentences to create an event tree comprising the event-based annotations, linearizing the event tree, and training the natural language based transformer with the plurality of input sentences and the corresponding linearized event tree." As mentioned before, the natural language based transformer is considered a generic processor. So far as the actual training is concerned, as long as it lacks specificity, it maps to mathematical operations that can be achieved with paper and pen. For the rest of the claim, such as attaching a particular annotation to a particular text, one can argue that under BRI it is a mental process, since a human can perform it with a pen and paper. Furthermore, reformatting it into a textual representation or graphical form is also a mental process. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is directed toward an abstract idea. The claim is not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 6-7, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over D'Souza (US20150356198A1), further in view of Letinic (US20210350915A1), Sheffer (US 20150066537 A1), Muller ("BERT 101 State of The Art NLP Model Explained") and Manouchehri et al. (US 20150173522 A1) (herein "Manouchehri"). D'Souza, Letinic, Sheffer, and Muller were applied in the previous Office Action.
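Before the prior art mapping, one more technical step from the rejected claims is worth making concrete: claims 16-18 recite building an event tree of annotations and "linearizing" it into a sequence the model can be trained to emit. The bracketed format and the event/argument names below are illustrative assumptions; the application's actual linearization scheme is not quoted in this Office Action:

```python
def linearize(event: dict) -> str:
    # Flatten a nested event annotation into a bracketed token sequence.
    # Each node carries an event/argument type, its text span, and children.
    parts = [f"[{event['type']}", event["text"]]
    for arg in event.get("args", []):
        parts.append(linearize(arg))
    parts.append("]")
    return " ".join(parts)

# Hypothetical SDOH event tree for "smokes one pack daily".
tree = {"type": "Tobacco", "text": "smokes",
        "args": [{"type": "Amount", "text": "one pack", "args": []},
                 {"type": "Frequency", "text": "daily", "args": []}]}

print(linearize(tree))
```

The (input sentence, linearized tree) pairs would then serve as the training inputs and targets for the sequence-to-sequence model.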
Regarding claims 1 and 6, D'Souza teaches [A computer-implemented method for improving accuracy and efficiency of extracting targeted structured medical information from unstructured clinical notes stored in memory, comprising: - claim 1], [A system for improving accuracy and efficiency of extracting targeted structured medical information from unstructured clinical notes stored in memory, comprising a processor configured to: - claim 6] (D'Souza, Par. 0009: "… directed to a computer-readable storage medium having instructions that, when executed by a processor, cause performance of a method comprising: …", Par. 0010: "… comprising a processor, and a memory coupled to the processor and storing computer-readable instructions which, when executed by the processor, cause performance of a method comprising: …", and Par. 0037: "… medical facts (e.g., clinical facts) may be automatically extracted from the free-form narration …").

retrieving/retrieve from the memory a sequence of clinical texts of electronic health records, the clinical texts comprising unstructured clinical notes; and (D'Souza, Par. 0044: "In some embodiments, medical transcriptionist 130 may receive the audio recording of the dictation provided by clinician 120, and may transcribe it into a textual representation of the free-form narration (e.g., into a text narrative). Medical transcriptionist 130 may be any human who listens to the audio dictation and writes or types what was spoken into a text document.", and Par. 0109: "In some embodiments, the set of medical facts [clinical texts] corresponding to the current patient encounter (each of which may have been extracted from the text narrative [sequence of events] or provided by the user as a discrete structured data item) may be added to an existing electronic medical record (such as an EHR) for patient 122, or may be used in generating a new electronic medical record for patient 122.
… In some embodiments, when there is a linkage between a fact in the set and a portion of the text narrative, the linkage may be maintained when the fact is included in the electronic medical record. In some embodiments, this linkage may be made viewable by simultaneously displaying the fact within the electronic medical record and the text narrative (or at least the portion of the text narrative from which the fact was extracted), and providing an indication of the linkage in any of the ways described above. Similarly, extracted facts may be included in other types of patient records, and linkages between the facts in the patient records and the portions of text narratives from which they were extracted may be maintained and indicated in any suitable way.") Note: an audio dictation and a text narrative are both considered forms of unstructured text.

tokenizing/to tokenize the sequence of the clinical texts to obtain a sequence of input tokens; (D'Souza, Par. 0066: "… For example, in some embodiments, a tokenizer module may be implemented to designate spans of the text as representing structural/syntactic units such as document sections, paragraphs, sentences, clauses, phrases, individual tokens, words, sub-word units such as affixes, etc.", and Par. 0132: "The CLU system 1006 may generate or provide (as an output) a tokenized document and a document containing annotations. The tokenized document may be, in some embodiments, a tokenized XHTML document. Such a document may be generated by tokenizing words and phrases in the document. Character offsets of the tokens may be returned by the CLU system 1006.
The document containing annotations may be any suitable document, a non-limiting example of which is the illustrated application XML 1008.")

[post-processing the structured output tokens to obtain annotated text-label pairs of the clinical texts, - claim 1], [obtain annotated text-label pairs of the clinical texts from the structure output tokens, - claim 6] (D'Souza, Par. 0134: "The tokenized XHTML document and application XML 1008 may be used to present an annotated XHTML document 1010. That is, the annotations from the application XML 1008 may be applied to the tokenized XHTML document output by the CLU system. In some embodiments, the annotations are applied using an application viewer which presents the annotated document to a user. The annotated XHTML document 1010 therefore represents a richly formatted document including the annotations from CLU system 1006.")

structured output tokens (D'Souza, Par. 0145: "FIG. 12C illustrates an example of a rendered document 1250 which may be presented to a user (e.g., by a CAC application viewer or other suitable viewing application) and which represents annotations displayed with rich formatting.") Note: "displayed with rich formatting" implies outputting structured tokens.

D'Souza does not teach, however, Sheffer teaches improving the accuracy and efficiency of extracting targeted structured medical information from unstructured clinical [[notes stored in memory, comprising a processor configured to: ]] (Sheffer, Par. 0002: "Broadly speaking, clinical documentation improvement (CDI) initiatives seek to improve the quality of provider documentation in order to better reflect the services rendered and more accurately represent the complete patient encounter.", and Par.
0016:” To improve existing CDI programs, this disclosure describes techniques to amplify prior capabilities to find cases that exhibit improvement opportunities, provide structured models of clinical evidence to support consistent CDI decisions, and extend natural language processing (NLP) technology to capture clinical indicators from both unstructured and structured sources. Relevant features include, but are not limited to: (1) accurate extraction of clinical evidence from medical records, including both structured and unstructured sources using an extended NLP engine for automated case-finding; (2) a clinical (CDI) information model that supports consistent query decisions; and (3) a compositional model to fuse together information from different portions of a medical record, in order to recognize and act upon sophisticated CDI scenarios.”) Sheffer is considered to be analogous to the claimed invention because it is in the same field of endeavor. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified D'Souza further in view of Sheffer to improve the accuracy and efficiency of extracting targeted structured medical information from unstructured clinical notes. Motivation to do so would improve the evidence-based analytics scenarios for healthcare management (Sheffer, Par. 0020). D'Souza, as modified above, does not teach, however Letinic teaches transform, using a trained natural language based transformer, the sequence of input tokens into a sequence of structured output tokens, (Letinic, Par. 0075:” … In some embodiments, a model with encoder-decoder architecture may be used. In other embodiments, the T5 encoder-decoder transformer may be used. T5 treats every NLP problem as a text-to-text problem. It is a model with up to 11B parameters trained on a giant data set of 750 GB of clean English web text, or >1T tokens. 
To specify a task, a task specific prefix is added to the input sequence. While Bert adds another output layer on top of the transformer for each specific task, T5 applies the same model and decoding process to every task without changes in architecture. In some embodiments, The Stanford Question Answering Dataset (SQuAD) may be used in pre-training and fine-tuning T5 for a question answering task.”, and Par. 0096:” For example, the platform displays a complete set of physician specialties relevant for a given condition and distinguishes primary (first to be visited) and secondary (potentially required) specialties. It ranks physicians in each relevant specialty.”) Note: Displaying a list of physicians ranked within each relevant specialty implies structured data output to a display; furthermore, what to display is a design choice. Letinic is considered to be analogous to the claimed invention because it is in the same field of endeavor. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified D'Souza further in view of Letinic to transform, using a trained natural language based transformer, the sequence of input tokens into a sequence of structured output tokens. Motivation to do so would allow the system to extract additional relevant information for each condition (Letinic, Par. 0079).
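For context only, the T5-style text-to-text framing described in the cited Letinic passage (a task-specific prefix prepended to the input sequence, input tokens mapped to output tokens) can be sketched in a few lines of Python. This is a hypothetical illustration; the whitespace tokenizer and the "extract sdoh:" prefix are assumptions for the sketch, not the applicant's or any cited reference's actual implementation:

```python
# Toy sketch of T5-style task conditioning: the task itself is expressed
# as a text prefix on the input sequence (Letinic, Par. 0075).

def tokenize(text):
    """Whitespace tokenizer standing in for a real subword tokenizer."""
    return text.lower().split()

def add_task_prefix(tokens, task="extract sdoh:"):
    """Prepend a task-specific prefix, making every problem text-to-text."""
    return tokenize(task) + tokens

note = "Patient admits daily tobacco use"
input_tokens = add_task_prefix(tokenize(note))
print(input_tokens)
# ['extract', 'sdoh:', 'patient', 'admits', 'daily', 'tobacco', 'use']
```

In an actual T5 pipeline the prefixed token sequence would then be fed to the trained encoder-decoder rather than printed.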
D'Souza, as modified above, does not teach, however Muller teaches wherein the trained natural language based transformer comprises: (i) an encoder that receives the sequence of input tokens as inputs, and generates a sequence of representations; and (ii) a decoder that receives the sequence of representations and generates a sequence of [[structured]] output tokens, (Muller, Page 15:” BERT models pre-trained for specific tasks: … Clinical Notes analysis”, and Page 1:” BERT, short for Bidirectional Encoder Representations from Transformers, is a Machine Learning (ML) model for natural language processing.”, and Page 8:” A transformer does this by successively processing an input through a stack of transformer layers, usually called the encoder. If necessary, another stack of transformer layers - the decoder - can be used to predict a target output.”, and Page 10:” A transformer block transforms a sequence of word representations to a sequence of contextualized words (numbered representations).” Muller is considered to be analogous to the claimed invention because it is in the same field of endeavor. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine D'Souza, as modified above, with Muller. As implied by Muller, one of ordinary skill would have been motivated to combine the teachings because it would enhance reliability and machine-readability of the output to adhere to a predefined format that is consistent and predictable. D'Souza, as modified above, does not teach, however, Manouchehri teaches wherein the decoder is configured to predict a link of a trigger of a medical event represented in the sequence of representations to an argument span of the trigger represented in the sequence of representations; (Manouchehri, Par. 
0006:” … The medical field is well aware of several health conditions [event] which may afflict children, and in the case [link] of most health conditions, the medical field has identified a root cause which triggers that particular health condition. “) Manouchehri is considered to be analogous to the claimed invention because it is in the same field of endeavor. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified D'Souza further in view of Manouchehri such that the decoder is configured to predict a link of a trigger of a medical event represented in the sequence of representations to an argument span of the trigger represented in the sequence of representations. Motivation to do so would be to implement targeted, effective, and preventative treatments rather than just managing symptoms. Regarding claims 2 and 7, D'Souza, as modified above, teaches the computer-implemented method and the system of claims 1 and 6, respectively. D'Souza, as modified above, does not teach, however, Letinic further teaches wherein the trained natural language based transformer is a T5 transformer. (Letinic, Par. 0075:” … In some embodiments, a model with encoder-decoder architecture may be used. In other embodiments, the T5 encoder-decoder transformer may be used. T5 treats every NLP problem as a text-to-text problem. It is a model with up to 11B parameters trained on a giant data set of 750 GB of clean English web text, or >1T tokens. To specify a task, a task specific prefix is added to the input sequence. While Bert adds another output layer on top of the transformer for each specific task, T5 applies the same model and decoding process to every task without changes in architecture.
In some embodiments, The Stanford Question Answering Dataset (SQuAD) may be used in pre-training and fine-tuning T5 for a question answering task.”) Regarding claims 16 and 17, D'Souza, as modified above, teaches the computer-implemented method and the system of claims 1 and 6, respectively. D'Souza, as modified above, teaches generating event-based annotations from training clinical texts comprising a plurality of input sentences to create an event tree comprising (D'Souza, Par. 0097:” Category: Social history—Tobacco use [event-based]. Fields: Name, Substance, Form, Status, Qualifier, Frequency, Duration, Quantity, Unit type, Duration measure, Occurrence, SNOMED code, Norm value, Value.”, and Par. 0098:” Category: Social history—Alcohol use [event-based]. Fields: Name, Substance, Form, Status, Qualifier, Frequency, Duration, Quantity, Quantifier, Unit type, Duration measure, Occurrence, SNOMED code, Norm value, Value.”) Note: tobacco use maps to an event-based category, while the components after it, such as name, substance, etc., are the leaves of the tree. the event-based annotations, linearizing the event tree, and training the natural language based transformer with the plurality of input sentences and the corresponding linearized event tree. (D'Souza, Par. 0035:”… Some embodiments involve the automatic extraction of discrete medical facts (e.g., clinical facts), such as could be stored as discrete structured data items in an electronic medical record, from a clinician's free-form narration of a patient encounter.”, and Par. 0066:”In some embodiments, the process of training a statistical entity detection model on labeled training data may involve a number of steps to analyze each training text and probabilistically associate its characteristics with the corresponding entity labels.
In some embodiments, each training text (e.g., free-form clinician narration) may be tokenized to break it down into various levels of syntactic substructure.”) Note: extracting specific facts and storing them in a database maps to linearization. D'Souza, as modified above, does not teach, however, Letinic further teaches training the natural language based transformer to generate a sequence of structured output tokens from a sequence of input tokens, comprising (Letinic, Par. 0075:”… In some embodiments, a model with encoder-decoder architecture may be used. In other embodiments, the T5 encoder-decoder transformer may be used. T5 treats every NLP problem as a text-to-text problem. It is a model with up to 11B parameters trained on a giant data set of 750 GB of clean English web text, or >1T tokens. To specify a task, a task specific prefix is added to the input sequence. While Bert adds another output layer on top of the transformer for each specific task, T5 applies the same model and decoding process to every task without changes in architecture. In some embodiments, The Stanford Question Answering Dataset (SQuAD) may be used in pre-training and fine-tuning T5 for a question answering task.”, and Par. 0096:” For example, the platform displays a complete set of physician specialties relevant for a given condition and distinguishes primary (first to be visited) and secondary (potentially required) specialties. It ranks physicians in each relevant specialty.”) Note: Displaying a list of physicians ranked within each relevant specialty implies structured data output to a display; furthermore, what to display is a design choice. Claims 3 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over D'Souza, Sheffer, Letinic, Muller, Manouchehri and in further view of Li (US20240347064A1). Li was applied in the previous Office Action.
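The event-tree generation and linearization discussed above for claims 16 and 17 (an event category as the root, fields such as name and substance as leaves, flattened into a token sequence for training) can be illustrated with a short Python sketch. The bracketed markup, dictionary layout, and field names here are assumptions made purely for illustration, not the applicant's actual annotation scheme:

```python
# Hypothetical sketch: flatten an event tree into a linear token sequence
# suitable as a sequence-to-sequence training target.

def linearize(event_tree):
    """Depth-first traversal emitting '[Event field= value ... ]' spans."""
    tokens = []
    for event, fields in event_tree.items():
        tokens.append(f"[{event}")          # event category = tree root
        for label, value in fields.items():  # fields = leaves of the tree
            tokens.extend([f"{label}=", value])
        tokens.append("]")
    return tokens

tree = {"Tobacco-use": {"Status": "current", "Frequency": "daily"}}
print(" ".join(linearize(tree)))
# [Tobacco-use Status= current Frequency= daily ]
```

A transformer would then be trained on (input sentence, linearized tree) pairs, as the claims 16 and 17 discussion describes.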
Regarding claims 3 and 8, D'Souza, as modified above, teaches the computer-implemented method and the system of claims 2 and 7, respectively. D'Souza, as modified above, does not teach, however, Li teaches that the decoder receives the sequence of representations and a previously generated token as inputs to generate one output token at each time step. (Li, Par. 0645:” … Stated another way, the sequence-to-sequence model may take, as input, a first sequence of tokens and using a transformer encoder of the sequence-to-sequence model, the transformer encoder may function to convert the first sequence of tokens to a sequence embedding and a transformer decoder of the sequence-to-sequence model may function to convert the sequence embedding to a second sequence of tokens.”) Li is considered to be analogous to the claimed invention because it is in the same field of endeavor. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified D'Souza, as modified above, further in view of Li such that the decoder receives the sequence of representations and a previously generated token as inputs to generate one output token at each time step. Motivation to do so would improve the results of subsequent post-processing text analysis operations (Li, Par. 0531). Claims 4 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over D'Souza, Sheffer, Letinic, Muller, Manouchehri and in further view of Gasperecz (US20200279271A1). Gasperecz was applied in the previous Office Action. Regarding claims 4 and 9, D'Souza, as modified above, teaches the computer-implemented method and the system of claims 1 and 6, respectively.
D'Souza, as modified above, does not teach, however, Gasperecz teaches [wherein the post-processing further includes converting the text-label pairs into a table format – claim 4], [wherein the processor is further configured to convert the text-label pairs into a table format – claim 9]. (Gasperecz, Par. 0092:” At block 404, the parser module 204 can perform parsing on the regulatory text. The parsing can include removing non-relevant information and text editor marks from the regulatory content text. The parsing can also include converting the text into a model that is a table or matrix. The table or matrix can, for example, have every subsection in a separate row in the table and every row containing a subsection identifier column; a text column; a label column; a requirement description column, and a related subsection column.”) Gasperecz is considered to be analogous to the claimed invention because it is in the same field of endeavor. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified D'Souza, as modified above, further in view of Gasperecz such that the processor is further configured to convert the text-label pairs into a table format. Motivation to do so would improve model performance with human-in-the-loop annotation/training (Gasperecz, Par. 0151). Claims 5 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over D'Souza, Sheffer, Letinic, Muller, Manouchehri and in further view of Etienne (US20220319703A1). Etienne was applied in the previous Office Action. Regarding claims 5 and 10, D'Souza, as modified above, teaches the computer-implemented method and the system of claims 1 and 6, respectively. D'Souza, as modified above, does not teach, however, Etienne teaches wherein the targeted medical information is social determinants of health (SDOH) information. (Etienne, Par.
0037:” The processing unit 102 is communicatively coupled to the medical database 110. The processing unit 102 is configured to access the SOC medical records 112 …. The medical records 112 can include electronic medical records, laboratory results, genomic information, biospecimens, pharmacy data, social determinants of health data, or combinations thereof.”) Etienne is considered to be analogous to the claimed invention because it is in the same field of endeavor. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified D'Souza, as modified above, further in view of Etienne such that the targeted medical information is social determinants of health (SDOH) information. Motivation to do so would improve patient health and reduce health resource utilization (Etienne, Par. 0043). Claims 11-12 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over D'Souza, and in further view of Sheffer, Etienne, Letinic, Muller, Manouchehri, and Gasperecz. Regarding claim 11, D'Souza teaches A non-transitory computer-readable storage medium encoded with instructions that when executed [[improve accuracy and efficiency of extracting targeted structured medical information from unstructured clinical ]] notes stored in memory, [[wherein the targeted medical information is social determinants of health (SDOH) information,]] comprising: (D'Souza, Par. 0009:” … directed to a computer-readable storage medium having instructions that, when executed by a processor, cause performance of a method comprising: …”, and Par. 0010:” … comprising a processor, and a memory coupled to the processor and storing computer-readable instructions which, when executed by the processor, cause performance of a method comprising: …”, and Par. 0037:” … medical facts (e.g., clinical facts) may be automatically extracted from the free-form narration … “ and Par.
0038:” … one or more tangible, non-transitory computer-readable storage devices storing processor-executable instructions, and one or more processors that execute the processor-executable instructions to perform the functions described herein. The storage devices may be implemented as computer-readable storage media encoded with the processor-executable instructions; …”). Retrieving from the memory a sequence of clinical texts of electronic health records, the clinical texts comprising unstructured clinical notes; and (D'Souza, Par. 0044:” In some embodiments, medical transcriptionist 130 may receive the audio recording of the dictation provided by clinician 120, and may transcribe it into a textual representation of the free-form narration (e.g., into a text narrative). Medical transcriptionist 130 may be any human who listens to the audio dictation and writes or types what was spoken into a text document.”, and Par. 0109:” In some embodiments, the set of medical facts [clinical texts] corresponding to the current patient encounter (each of which may have been extracted from the text narrative [sequence of events] or provided by the user as a discrete structured data item) may be added to an existing electronic medical record (such as an EHR) for patient 122, or may be used in generating a new electronic medical record for patient 122. … In some embodiments, when there is a linkage between a fact in the set and a portion of the text narrative, the linkage may be maintained when the fact is included in the electronic medical record. In some embodiments, this linkage may be made viewable by simultaneously displaying the fact within the electronic medical record and the text narrative (or at least the portion of the text narrative from which the fact was extracted), and providing an indication of the linkage in any of the ways described above. 
Similarly, extracted facts may be included in other types of patient records, and linkages between the facts in the patient records and the portions of text narratives from which they were extracted may be maintained and indicated in any suitable way.”) Note: An audio dictation and a text narrative are both considered forms of unstructured text. tokenizing the sequence of the clinical texts to obtain a sequence of input tokens; (D'Souza, Par. 0066:” … For example, in some embodiments, a tokenizer module may be implemented to designate spans of the text as representing structural/syntactic units such as document sections, paragraphs, sentences, clauses, phrases, individual tokens, words, sub-word units such as affixes, etc.”, and Par. 0132:” The CLU system 1006 may generate or provide (as an output) a tokenized document and a document containing annotations. The tokenized document may be, in some embodiments, a tokenized XHTML document. Such a document may be generated by tokenizing words and phrases in the document. Character offsets of the tokens may be returned by the CLU system 1006. The document containing annotations may be any suitable document, a non-limiting example of which is the illustrated application XML 1008.”) obtaining annotated text-label pairs of the clinical texts from the structure output tokens; and (D'Souza, Par. 0134:” The tokenized XHTML document and application XML 1008 may be used to present an annotated XHTML document 1010. That is, the annotations from the application XML 1008 may be applied to the tokenized XHTML document output by the CLU system. In some embodiments, the annotations are applied using an application viewer which presents the annotated document to a user. The annotated XHTML document 1010 therefore represents a richly formatted document including the annotations from CLU system 1006.”) structured output tokens (D'Souza, Par. 0145:” FIG.
12C illustrates an example of a rendered document 1250 which may be presented to a user (e.g., by a CAC application viewer or other suitable viewing application) and which represents annotations displayed with rich formatting.”) Note: displaying with rich formatting implies outputting structured tokens. D'Souza does not teach, however, Sheffer teaches improve accuracy and efficiency of extracting targeted structured medical information from unstructured clinical [[notes stored in memory,]] (Sheffer, Par. 0002:” Broadly speaking, clinical documentation improvement (CDI) initiatives seek to improve the quality of provider documentation in order to better reflect the services rendered and more accurately represent the complete patient encounter. “, and Par. 0016:” To improve existing CDI programs, this disclosure describes techniques to amplify prior capabilities to find cases that exhibit improvement opportunities, provide structured models of clinical evidence to support consistent CDI decisions, and extend natural language processing (NLP) technology to capture clinical indicators from both unstructured and structured sources. Relevant features include, but are not limited to: (1) accurate extraction of clinical evidence from medical records, including both structured and unstructured sources using an extended NLP engine for automated case-finding; (2) a clinical (CDI) information model that supports consistent query decisions; and (3) a compositional model to fuse together information from different portions of a medical record, in order to recognize and act upon sophisticated CDI scenarios.”) Sheffer is considered to be analogous to the claimed invention because it is in the same field of endeavor.
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified D'Souza further in view of Sheffer to improve the accuracy and efficiency of extracting targeted structured medical information from unstructured clinical notes. Motivation to do so would improve the evidence-based analytics scenarios for healthcare management (Sheffer, Par. 0020). D'Souza, as modified above, does not teach, however, Etienne teaches wherein the targeted medical information is social determinants of health (SDOH) information, comprising: (Etienne, Par. 0037:” The processing unit 102 is communicatively coupled to the medical database 110. The processing unit 102 is configured to access the SOC medical records 112 …. The medical records 112 can include electronic medical records, laboratory results, genomic information, biospecimens, pharmacy data, social determinants of health data, or combinations thereof.”) Etienne is considered to be analogous to the claimed invention because it is in the same field of endeavor. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified D'Souza further in view of Etienne such that the targeted medical information is social determinants of health (SDOH) information. Motivation to do so would improve patient health and reduce health resource utilization (Etienne, Par. 0043). D'Souza, as modified above, does not teach, however, Letinic teaches transforming, using a trained natural language based transformer, the sequence of input tokens into a sequence of structured output tokens, (Letinic, Par. 0075:” … In some embodiments, a model with encoder-decoder architecture may be used. In other embodiments, the T5 encoder-decoder transformer may be used. T5 treats every NLP problem as a text-to-text problem.
It is a model with up to 11B parameters trained on a giant data set of 750 GB of clean English web text, or >1T tokens. To specify a task, a task specific prefix is added to the input sequence. While Bert adds another output layer on top of the transformer for each specific task, T5 applies the same model and decoding process to every task without changes in architecture. In some embodiments, The Stanford Question Answering Dataset (SQuAD) may be used in pre-training and fine-tuning T5 for a question answering task.”, and Par. 0096:” For example, the platform displays a complete set of physician specialties relevant for a given condition and distinguishes primary (first to be visited) and secondary (potentially required) specialties. It ranks physicians in each relevant specialty.”) Note: Displaying a list of physicians ranked within each relevant specialty implies structured data output to a display; furthermore, what to display is a design choice. Letinic is considered to be analogous to the claimed invention because it is in the same field of endeavor. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified D'Souza, as modified above, further in view of Letinic to transform, using a trained natural language based transformer, the sequence of input tokens into a sequence of structured output tokens. Motivation to do so would allow the system to extract additional relevant information for each condition (Letinic, Par. 0079).
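A minimal sketch may help visualize the recited mapping from input tokens to structured output tokens, using the one-token-per-time-step decoding relied upon elsewhere in this action (Li, Par. 0645): the decoder consumes the encoder's representations plus the previously generated token to emit each next token. The stub step function and canned token table below are purely hypothetical stand-ins for a trained model:

```python
# Toy greedy autoregressive loop: one output token per time step,
# each conditioned on the previously generated token.

def decode(representations, step_fn, max_len=10, eos="</s>"):
    """Run step_fn repeatedly until an end-of-sequence token or max_len."""
    output, prev = [], "<s>"
    for _ in range(max_len):
        token = step_fn(representations, prev)  # conditioned on prev token
        if token == eos:
            break
        output.append(token)
        prev = token
    return output

# Stub "decoder step" returning a canned structured-output sequence.
canned = {"<s>": "[event", "[event": "smoking", "smoking": "]", "]": "</s>"}
print(decode(["h1", "h2"], lambda reps, prev: canned[prev]))
# ['[event', 'smoking', ']']
```

In a real encoder-decoder transformer, step_fn would be a forward pass attending over the encoder representations rather than a table lookup.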
D'Souza, as modified above, does not teach, however Muller teaches wherein the trained natural language based transformer comprises: (i) an encoder that receives the sequence of input tokens as inputs, and generates a sequence of representations; and (ii) a decoder that receives the sequence of representations and generates a sequence of [[structured]] output tokens; (Muller, Page 15:” BERT models pre-trained for specific tasks: … Clinical Notes analysis”, and Page 1:” BERT, short for Bidirectional Encoder Representations from Transformers, is a Machine Learning (ML) model for natural language processing.”, and Page 8:” A transformer does this by successively processing an input through a stack of transformer layers, usually called the encoder. If necessary, another stack of transformer layers - the decoder - can be used to predict a target output.”, and Page 10:” A transformer block transforms a sequence of word representations to a sequence of contextualized words (numbered representations).”). Muller is considered to be analogous to the claimed invention because it is in the same field of endeavor. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine D'Souza, as modified above, with Muller. As implied by Muller, one of ordinary skill would have been motivated to combine the teachings because it would enhance reliability and machine-readability of the output to adhere to a predefined format that is consistent and predictable. D'Souza, as modified above, does not teach, however, Manouchehri teaches wherein the decoder is configured to predict a link of a trigger of a medical event represented in the sequence of representations to an argument span of the trigger represented in the sequence of representations; (Manouchehri, Par. 
0006:” … The medical field is well aware of several health conditions [event] which may afflict children, and in the case [link] of most health conditions, the medical field has identified a root cause which triggers that particular health condition. “) Manouchehri is considered to be analogous to the claimed invention because it is in the same field of endeavor. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified D'Souza further in view of Manouchehri such that the decoder is configured to predict a link of a trigger of a medical event represented in the sequence of representations to an argument span of the trigger represented in the sequence of representations. Motivation to do so would be to implement targeted, effective, and preventative treatments rather than just managing symptoms. D'Souza, as modified above, does not teach, however, Gasperecz teaches converting the text-label pairs of the clinical texts into a structured table format; and (Gasperecz, Par. 0092:” At block 404, the parser module 204 can perform parsing on the regulatory text. The parsing can include removing non-relevant information and text editor marks from the regulatory content text. The parsing can also include converting the text into a model that is a table or matrix. The table or matrix can, for example, have every subsection in a separate row in the table and every row containing a subsection identifier column; a text column; a label column; a requirement description column, and a related subsection column.”) displaying the structured table of text-label pairs of the clinical text. (Gasperecz, Par. 0113:” At block 412, the splitter module 208 can display or print the final table or model to an output device, which can be used to track compliance with the regulatory content.”) Gasperecz is considered to be analogous to the claimed invention because it is in the same field of endeavor.
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified D'Souza, as modified above, further in view of Gasperecz to convert the text-label pairs of the clinical texts into a structured table format and display the structured table of text-label pairs of the clinical text. Motivation to do so would improve model performance with human-in-the-loop annotation/training (Gasperecz, Par. 0151). Regarding claim 12, D'Souza, as modified above, teaches the medium of claim 11. D'Souza, as modified above, does not teach, however, Letinic further teaches wherein the trained natural language based transformer is a T5 transformer. (Letinic, Par. 0075:” … In some embodiments, a model with encoder-decoder architecture may be used. In other embodiments, the T5 encoder-decoder transformer may be used. T5 treats every NLP problem as a text-to-text problem. It is a model with up to 11B parameters trained on a giant data set of 750 GB of clean English web text, or >1T tokens. To specify a task, a task specific prefix is added to the input sequence. While Bert adds another output layer on top of the transformer for each specific task, T5 applies the same model and decoding process to every task without changes in architecture. In some embodiments, The Stanford Question Answering Dataset (SQuAD) may be used in pre-training and fine-tuning T5 for a question answering task.”) Regarding claim 18, D'Souza, as modified above, teaches the medium of claim 11. D'Souza, as modified above, teaches generating event-based annotations from training clinical texts comprising a plurality of input sentences to create an event tree comprising (D'Souza, Par. 0097:” Category: Social history—Tobacco use [event-based]. Fields: Name, Substance, Form, Status, Qualifier, Frequency, Duration, Quantity, Unit type, Duration measure, Occurrence, SNOMED code, Norm value, Value.”, and Par.
0098:” Category: Social history—Alcohol use [event-based]. Fields: Name, Substance, Form, Status, Qualifier, Frequency, Duration, Quantity, Quantifier, Unit type, Duration measure, Occurrence, SNOMED code, Norm value, Value.”) Note: tobacco use maps to an event-based category, while the components after it, such as name, substance, etc., are the leaves of the tree. the event-based annotations, linearizing the event tree, and training the natural language based transformer with the plurality of input sentences and the corresponding linearized event tree. (D'Souza, Par. 0035:”… Some embodiments involve the automatic extraction of discrete medical facts (e.g., clinical facts), such as could be stored as discrete structured data items in an electronic medical record, from a clinician's free-form narration of a patient encounter.”, and Par. 0066:”In some embodiments, the process of training a statistical entity detection model on labeled training data may involve a number of steps to analyze each training text and probabilistically associate its characteristics with the corresponding entity labels. In some embodiments, each training text (e.g., free-form clinician narration) may be tokenized to break it down into various levels of syntactic substructure.”) Note: extracting specific facts and storing them in a database maps to linearization. D'Souza, as modified above, does not teach, however, Letinic further teaches training the natural language based transformer to generate a sequence of structured output tokens from a sequence of input tokens, comprising (Letinic, Par. 0075:”… In some embodiments, a model with encoder-decoder architecture may be used. In other embodiments, the T5 encoder-decoder transformer may be used. T5 treats every NLP problem as a text-to-text problem. It is a model with up to 11B parameters trained on a giant data set of 750 GB of clean English web text, or >1T tokens. To specify a task, a task specific prefix is added to the input sequence.
While Bert adds another output layer on top of the transformer for each specific task, T5 applies the same model and decoding process to every task without changes in architecture. In some embodiments, The Stanford Question Answering Dataset (SQuAD) may be used in pre-training and fine-tuning T5 for a question answering task.", and Par. 0096: "For example, the platform displays a complete set of physician specialties relevant for a given condition and distinguishes primary (first to be visited) and secondary (potentially required) specialties. It ranks physicians in each relevant specialty."). Note: displaying a list of physicians ranked within a relevant specialty implies structured data output on a display; furthermore, what to display is a design choice.

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over D'Souza, Sheffer, Etienne, Letinic, Muller, Manouchehri, and Gasperecz, and in further view of Li.

Regarding claim 13, D'Souza, as modified above, teaches the medium of claim 7. D'Souza, as modified above, does not teach, but Li teaches, a decoder that receives the sequence of representations and a previously generated token as inputs to generate one output token at each time step. (Li, Par. 0645: "... Stated another way, the sequence-to-sequence model may take, as input, a first sequence of tokens and, using a transformer encoder of the sequence-to-sequence model, the transformer encoder may function to convert the first sequence of tokens to a sequence embedding, and a transformer decoder of the sequence-to-sequence model may function to convert the sequence embedding to a second sequence of tokens.") Li is considered to be analogous to the claimed invention because it is in the same field of endeavor.
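The claim 13 limitation recited above describes standard autoregressive (greedy) decoding: at each time step the decoder conditions on the encoder's sequence of representations plus the previously generated token and emits exactly one output token. The toy sketch below illustrates the shape of that loop against a linearized event tree of the kind discussed for claim 18. It is a hedged illustration only, not the applicant's or any cited reference's implementation; all names (`linearize`, `toy_decode`, the bracket token scheme, the stubbed step function) are hypothetical.

```python
def linearize(event, fields):
    """Flatten an event node and its field leaves into a bracketed token sequence."""
    tokens = ["[", event]
    for name, value in fields.items():
        tokens += ["[", name, value, "]"]   # each field becomes a leaf subtree
    tokens.append("]")
    return tokens

def toy_decode(encoder_reprs, step_fn, bos="<s>", eos="</s>", max_steps=32):
    """Greedy decoding loop: one output token per time step, conditioned on the
    encoder representations and the previously generated token."""
    prev, out = bos, []
    for _ in range(max_steps):
        tok = step_fn(encoder_reprs, prev)  # next-token prediction
        if tok == eos:
            break
        out.append(tok)
        prev = tok                          # feed the new token back in
    return out

# A stub "model" that replays a fixed linearized tree, to show the loop's shape.
target = linearize("Tobacco use", {"Status": "current", "Frequency": "daily"})
script = iter(target + ["</s>"])
decoded = toy_decode(encoder_reprs=None, step_fn=lambda reprs, prev: next(script))
assert decoded == target  # reproduced token for token, one token per step
```

A real T5-style decoder would replace the stub step function with a learned next-token distribution over the structured-output vocabulary, but the feedback of the previously generated token is the same.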
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified D'Souza, as modified above, further in view of Li, to have the decoder receive the sequence of representations and a previously generated token as inputs to generate one output token at each time step. The motivation to do so would be to improve the results of subsequent post-processing text analysis operations (Li, Par. 0531).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Riskin et al. (US20140330586A1) teaches in Par. 0031: "... transforming narrative content into structured output that defines where individual information resides within the output may include the steps of receiving narrative content; scanning the narrative content using a natural language processing (NLP) engine to identify a section and at least one clinical assertion within that section; extracting information from the narrative content, wherein the extracted information includes the section, the clinical assertion, and a plurality of elements, wherein the elements may include section elements and clinical assertion elements that annotate the section and clinical assertions respectively; identifying the section elements of the section and assigning a label to at least one section element based on a clinical model; identifying the clinical assertion elements of the clinical assertion and assigning a label to at least one clinical assertion element based on the clinical model; and organizing the section, clinical assertion, section elements, and clinical assertion elements within a schema."

Examiner's Note: Examiner has cited particular columns and line numbers and/or paragraph numbers in the references applied to the claims above for the convenience of the applicant.
Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that the applicant, in preparing responses, fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. In the case of amending the claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for proper interpretation and also to verify and ascertain the metes and bounds of the claimed invention.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DARIOUSH AGAHI, whose telephone number is (408) 918-7689. The examiner can normally be reached Monday-Thursday and alternate Fridays, 7:30-4:30 PT. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bhavesh Mehta, can be reached at 571-272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

DARIOUSH AGAHI, P.E.
Primary Examiner

/DARIOUSH AGAHI/
Primary Examiner, Art Unit 2656

Prosecution Timeline

Aug 08, 2023
Application Filed
May 06, 2025
Non-Final Rejection — §101, §103
Aug 01, 2025
Response Filed
Sep 19, 2025
Final Rejection — §101, §103
Dec 22, 2025
Request for Continued Examination
Jan 16, 2026
Response after Non-Final Action
Feb 08, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596890
SYSTEMS AND METHODS FOR CROSS-LINGUAL TRANSFER LEARNING
2y 5m to grant Granted Apr 07, 2026
Patent 12596876
SYSTEMS AND METHODS FOR IMPROVING TEXTUAL DESCRIPTIONS USING LARGE LANGUAGE MODELS
2y 5m to grant Granted Apr 07, 2026
Patent 12591743
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM FOR EXTRACTING A NAMED ENTITY FROM A DOCUMENT
2y 5m to grant Granted Mar 31, 2026
Patent 12586586
SPEECH RECOGNITION WITH SELECTIVE USE OF DYNAMIC LANGUAGE MODELS
2y 5m to grant Granted Mar 24, 2026
Patent 12579448
TECHNIQUES FOR POSITIVE ENTITY AWARE AUGMENTATION USING TWO-STAGE AUGMENTATION
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
86%
Grant Probability
99%
With Interview (+29.0%)
2y 9m
Median Time to Grant
High
PTA Risk
Based on 166 resolved cases by this examiner. Grant probability derived from career allow rate.
