Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Response to Amendment
In the amendment dated 2/12/2026, the following occurred: Claims 1, 5 and 18 have been amended; and claim 17 was previously canceled.
Claims 1-16 and 18-20 are pending and have been examined.
Priority
Acknowledgement is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. EP22181784.4, filed on 06/29/2022.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-16 and 18-20 are rejected under 35 U.S.C. §101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.
Claims 1 and 18 are rejected under 35 U.S.C. §101 because the claimed invention is directed to an abstract idea without significantly more.
Eligibility Analysis Step 1 (YES): Claims 1 and 18 fall into at least one of the statutory categories (i.e., process, machine or non-transitory CRM).
Eligibility Analysis Step 2A1 (YES): The claims recite an abstract idea. The identified abstract idea is managing a medical structured reporting workflow, as underlined (claim 1 being representative):
a) receiving a structured reporting workflow designator designating a particular medical structured report template belonging to a plurality of medical structured report templates, wherein the particular medical structured report template includes fields to be populated with data representing answers to questions associated with the fields of the particular medical structured report template;
b) receiving a medical report comprising free or unstructured text related to the particular medical structured report template, wherein the free or unstructured text is written or dictated by a clinician through a user interface;
c) extracting the free or unstructured text of the medical report of b) and processing the free or unstructured text together with data that specifies i) a given field of the particular medical structured report template, ii) a context, iii) a question, and iv) answers to the question of iii) as input to a pretrained question-answer machine learning system, which outputs one of the answers of iv) to the question of iii) for the given field of i) of the particular medical structured report template; and
d) using the one answer output by the pretrained question-answer machine learning system in c) to automatically populate the given field of the particular medical structured report template with associated data;
wherein each one of steps a) to d) are implemented on at least one computer.
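For illustration only, recited steps a) to d) can be sketched in Python. The stub model, field names, and report text below are hypothetical stand-ins for the claimed pretrained question-answer machine learning system and form no part of the claims or the cited art:

```python
# Hypothetical sketch of recited steps a)-d). The "model" below is a
# trivial stand-in for the claimed pretrained question-answer system:
# it returns the first candidate answer found in the free-text context.

def stub_qa_model(context: str, question: str, candidates: list[str]) -> str:
    """Stand-in for a pretrained QA system (step c))."""
    lowered = context.lower()
    for answer in candidates:
        if answer.lower() in lowered:
            return answer
    return candidates[0]

def populate_field(template: dict, field: str, free_text: str,
                   question: str, candidates: list[str]) -> dict:
    # c) process the free text together with the field, question,
    #    and candidate answers as input to the QA system
    answer = stub_qa_model(free_text, question, candidates)
    # d) use the one output answer to populate the given field
    template[field] = answer
    return template

# b) free or unstructured text of a medical report (invented example)
report_text = "CT shows a 2 cm nodule in the right upper lobe."
# a) the designated template's field to be populated
template = {"laterality": None}
populate_field(template, "laterality", report_text,
               "Which side is the finding on?", ["left", "right"])
# → template == {"laterality": "right"}
```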
The identified claim elements, as drafted, recite a process that under the broadest reasonable interpretation (BRI) covers a method of organizing human activity (i.e., managing personal behavior or relationships or interactions between people including following rules or instructions) but for the recitation of generic computer component language (discussed below in 2A2). That is, other than reciting the generic computer component language, the claimed invention amounts to managing personal behavior or relationships or interactions between people. For example, but for the generic computer component language, the claims encompass a person storing program instructions and a medical structured reporting workflow, using known datasets of questions and answers, receiving a structured reporting workflow designator, receiving a medical report comprising free or unstructured text, processing the free or unstructured text of the medical report together with various other inputs (i.e., inputting and outputting data, i.e., applying data), outputting one of the answers to the question for the given field, and using the one answer output to populate the given field with associated data, implementing the steps in the manner described in the identified abstract idea, supra. The Examiner notes that certain “method[s] of organizing human activity” include a person’s interaction with a computer (see MPEP § 2106.04(a)(2)(II)). If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or relationships or interactions between people but for the recitation of generic computer component language, then it falls within the “certain methods of organizing human activity” grouping of abstract ideas. See additionally MPEP § 2106. Accordingly, the claims recite an abstract idea.
Eligibility Analysis Step 2A2 (NO):
The judicial exception, the above-identified abstract idea, is not integrated into a practical application. In particular, the claims recite the additional elements of at least one computer (claim 1) including a user interface, a memory (claim 18) and a processor (claim 18) that implement the identified abstract idea. The additional elements aforementioned are not described by the applicant and are recited at a high level of generality (i.e., a generic computer or computer component performing a generic computer or computer component function that facilitates the identified abstract idea) such that they amount to no more than mere instructions to apply the exception using a generic computer component (see Specification, e.g., at para. 0041-0042). See MPEP § 2106.04(d)(I). Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
The claims further recite the additional element of a question-answer machine-learning system that is pretrained and/or using the pretrained question-answer machine learning system to implement the identified abstract idea. The additional element is not described by the Applicant, is recited at a high level of generality and is merely invoked as a tool to perform an existing process (MPEP § 2106.05(f)(2), see the case involving a commonplace business method or mathematical algorithm being applied on a general-purpose computer within the “Other examples”), such that this amounts to no more than mere instructions to apply the abstract idea using a general-purpose computer (see Applicant’s disclosure, e.g., at para. 0071-0074: “More recently, innovative systems for pre-training based natural language models development have been introduced [], together with more efficient architectures based on text comprehension… Some complex models may even perform information extraction tasks [] with very few classified data (few shot learning)… the self-supervision approaches are now the state-of-the-art for NLP tasks… For the implementation of the present system a pre-trained model with T5 [] architecture has been chosen. It has been proven by scientific literature that it is effective for several kinds of tasks”; and also, e.g., at para. 0075, 0077 and 0080). See MPEP § 2106.04(d)(I); and Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 134 S. Ct. 2347, 2357 (2014). Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
Eligibility Analysis Step 2B (NO):
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of at least one computer (claim 1) including a user interface, a memory (claim 18) and a processor (claim 18) to perform the method (represented by claim 1) amount to no more than mere instructions to apply the exception using a generic computer or generic computer component. Mere instructions to apply an exception using generic computer(s) and/or generic computer component(s) cannot provide an inventive concept (“significantly more”). See MPEP § 2106.05(f).
Also, as discussed above with respect to integration of the abstract idea into a practical application, the additional element of a question-answer machine-learning system that is pretrained and/or using the pretrained question-answer machine learning system to perform the method amounts to no more than mere instructions to “apply it” with the exception by invoking an algorithm merely as a tool to perform an existing process (i.e., only recites the algorithm as a tool to apply data to an algorithm and report the results), in this case to receive various input data and output resulting data. The use of a trained machine learning algorithm in its ordinary capacity to perform tasks in the identified abstract idea does not provide an inventive concept (“significantly more”). See MPEP § 2106.05(f). Accordingly, alone or in combination, the additional element does not provide significantly more. Thus, the claims are not patent eligible.
Dependent claims 2-16 and 19-20, when analyzed as a whole, are similarly rejected under 35 U.S.C. §101 because the additional limitation(s) fail(s) to establish that the claim(s) is/are not directed to an abstract idea without significantly more. The claims, when considered alone or as an ordered combination, either (1) merely further define the abstract idea, (2) do not further limit the claim to a practical application, or (3) do not provide an inventive concept such that the claims are subject matter eligible.
Claim(s) 2, 5-8 merely further describe(s) the identified abstract idea (e.g., the plurality of medical structured report templates; selecting data; translating data; subjecting the fields of the particular medical designated structured report template to a review; using the populated field(s) as input to classification algorithms, e.g., a set of rules for grouping data; fields of the plurality of medical structured report templates are associated with entities having different weights (We); building a clinical summary of a patient from free text medical reports related to the patient obtained at different times by providing an ordered sequence of entities starting from the one having the higher weight, ignoring those entities having weights below a pre-determined threshold). See analysis, supra.
Claim 3 merely further recites the additional element that the question-answer machine learning algorithm is (pre)trained, which amounts to no more than mere instructions to apply the abstract idea on a general-purpose computer. See analysis, supra.
Claim 4 further recites the additional element of the question-answer machine learning system employing the UnifiedQA algorithm. The additional element is not described by the Applicant, is recited at a high level of generality and is merely invoked as a tool to perform an existing process (MPEP § 2106.05(f)(2), see the case involving a commonplace business method or mathematical algorithm being applied on a general-purpose computer within the “Other examples”), such that this amounts to no more than mere instructions to apply the abstract idea using a general-purpose computer (see Specification, e.g., at para. 0026-0027: “Such model is, for example, the one called UnifiedQA as disclosed in Khashabi et al. 2020, ‘UNIFIEDQA: Crossing Format Boundaries with a Single QA System’”; see also para. 0036-0037, 0075-0076 and 0081). See MPEP § 2106.04(d)(I); and Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 134 S. Ct. 2347, 2357 (2014). Accordingly, even in combination, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of the question-answer machine learning system employing the UnifiedQA algorithm to perform the method amounts to no more than mere instructions to “apply it” with the exception by invoking an algorithm merely as a tool to perform an existing process (i.e., only recites the algorithm as a tool to apply data to an algorithm and report the results), in this case to receive input data and output resulting data. The use of a trained machine learning algorithm in its ordinary capacity to perform tasks in the identified abstract idea does not provide an inventive concept (“significantly more”). See MPEP § 2106.05(f). Accordingly, alone or in combination, the additional element does not provide significantly more. Thus, the claim is not patent eligible.
In claim 8, the limitations of the fields of the plurality of medical structured report templates being associated with entities having different weights (We) and of building a clinical summary of a patient from free text medical reports related to the patient obtained at different times by providing an ordered sequence of entities starting from the one having the higher weight, ignoring those entities having weights below a pre-determined threshold, as drafted, alternately recite a process that under the broadest reasonable interpretation covers a mathematical concept that includes mathematical relationships, mathematical formulas or equations, and mathematical calculations but for the recitation of generic computer component language (discussed in the independent claim analysis, supra). That is, other than reciting the generic computer component language, the claim recites a procedure for building a clinical summary that encompasses a mathematical concept. For example, but for the generic computer component language, the claim encompasses association of structured report template data with entities having different weights. The Examiner notes that the mathematical concept need not be expressed in mathematical symbols. MPEP § 2106.04(a)(2)(I). If a claim limitation, under its broadest reasonable interpretation, covers mathematical relationships, mathematical formulas or equations, or mathematical calculations but for the recitation of generic computer component language, then it falls within the “Mathematical Concepts” grouping of abstract ideas. Accordingly, the claim recites multiple abstract ideas, which fall in different groupings. See MPEP § 2106.
The types of identified abstract ideas are considered together as a single abstract idea for analysis purposes.
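For illustration only, the weighted ordering recited in claim 8 can be sketched as follows; the entity names, weights, and threshold are invented for this example:

```python
# Hypothetical sketch of the claim 8 limitation: order entities by
# weight (We), highest first, and ignore entities whose weight falls
# below a pre-determined threshold when building a clinical summary.

def build_summary(entity_weights: dict[str, float],
                  threshold: float) -> list[str]:
    # keep only entities at or above the threshold
    kept = {e: w for e, w in entity_weights.items() if w >= threshold}
    # ordered sequence starting from the highest-weight entity
    return sorted(kept, key=kept.get, reverse=True)

weights = {"tumor": 0.9, "nodule": 0.6, "artifact": 0.1}
summary = build_summary(weights, threshold=0.5)
# → ["tumor", "nodule"]  (the 0.1-weight entity is ignored)
```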
Claims 9-16 merely further describe the identified abstract idea (e.g., grouping/categorizing the entities identified by tags having different weights, Wc, and using the Wc to modify the weight of the associated entities to obtain a modified weight, using weights to determine the ordered sequence of entities (claim 9); the weights including a temporal weight or a repetition weight (claim 10); comparing the clinical summary of the patient with sets of information present in a clinical reference database by attributing to each field a weight (claim 11); checking the clinical summary of the patient (claim 12), optionally reporting an inconsistency (claim 13), making available all or part of entities populating the fields of the particular medical structured report template in the form of tags (claim 14), applying a weight of relevance to the tags (claim 15), assigning a priority level to an examination by identifying critical conditions in the free or unstructured text of the medical report of c) (claim 16). See analysis, supra.
Claim 13 further recites the (optional) additional element of a PACS (picture archiving and communication system) in which data was previously stored. Under practical application, the additional element is at most merely generally linking the use of the abstract idea to a particular technological environment or field of use. MPEP § 2106.04(d)(I) indicates that generally linking an abstract idea to a particular technological environment or field of use cannot provide a practical application. Accordingly, even in combination, this additional element does not integrate the abstract idea into a practical application. The claim is directed to an abstract idea.
Also, as discussed above with respect to integration of the abstract idea into a practical application, the additional element of the PACS system is considered generally linking the use of the abstract idea to a particular technological environment or field of use. This has been re-evaluated under the “significantly more” analysis and has also been found insufficient to provide significantly more. MPEP § 2106.05(h) indicates that generally linking an abstract idea to a particular technological environment or field of use cannot provide significantly more. Accordingly, even in combination, this additional element does not provide significantly more; as such, the claim is not patent eligible.
Claims 19-20 merely further describe the identified abstract idea (e.g., the medical structured workflow) and the additional elements of the processor (e.g., populating data, applying optimized visualization protocols) and the PACS (picture archiving and communication system). See analysis, supra.
Also, as recited in claim 19, the PACS is an additional element that collects, transmits or displays data. This additional element is recited at a high-level of generality (i.e., as a general means of collecting, transmitting or outputting data) and amounts to a location from which data is received or to which data is transmitted or outputted, each of which represents an extra-solution activity. See MPEP § 2106.04(d)(I). Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application. As such, claim 19 is directed to an abstract idea.
Also, as discussed above with respect to integration of the abstract idea into a practical application, the additional element of a PACS (i.e., a device that collects, transmits or outputs data) is considered extra-solution activity. This has been re-evaluated under the “significantly more” analysis and determined to be well-understood, routine, conventional activity in the field. MPEP § 2106.05(d)(II) indicates that receiving, transmitting and outputting data over a network, e.g., using the Internet to gather data, has been held by the courts to be well-understood, routine, conventional activity (citing Symantec, TLI Communications, OIP Techs., and buySAFE). See also MPEP § 2106.05(g) (citing CyberSource, Mayo and OIP Techs.). Well-understood, routine, conventional activity cannot provide an inventive concept (“significantly more”). As such, the claim is not patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Sevenster et al. (US 2017/0154156 A1; “Sevenster” herein) in view of Ramanujam (US 2024/0086647 A1) and Khashabi et al. (see IDS document, “UNIFIEDQA…” (2020); “Khashabi” herein).
Re. Claim 1, Sevenster teaches a computer-implemented method for managing a medical structured reporting workflow (Abstract and Fig. 1 teach a system including a radiology workstation and a server computer and a radiology reading support method), the method comprising:
a) receiving a structured reporting workflow designator designating a particular medical structured report template belonging to a plurality of medical structured report templates (Abstract, Fig. 2, [0019], [0045] teach receiving a user input (designator) identifying a radiological (medical) finding using a finding selector 30, and retrieving a structured finding object template (a particular medical structured report template) for the radiological finding from the SFO templates database 34 (belonging to the plurality thereof)), wherein the particular structured report template includes fields to be populated with data representing answers to questions associated with the fields of the particular medical structured report template (Abstract, [0016] teach, starting with the SFO template, the radiologist fills in values for dimensions of the SFO (includes fields)… (or in some cases, one or more values may be assigned automatically). See also [0042]. The Examiner notes that “to be populated with data representing answers to associated questions…” is an intended use of “fields”, which is not required for the claim to be met.);
[…];
c) […] and […] the free or unstructured text together with data that specifies i) a given field of the particular medical structured report template, ii) […] ([0041]-[0042] teach to bridge the gap between the structured format of the completed SFO 60… and a freeform textual format of the radiology report, the illustrative SFO-based tool further includes a natural language generation component 62 (machine learning system) which generates natural language text content expressing the information (the values) (i.e., the data that specifies the given field) of the completed SFO 60… in a format suitable for inclusion in the radiology report (synonymous with processing the free text together with the specified data).), which outputs […] for the given field of i) of the particular medical structured report template (Abstract, [0016] teach, starting with the SFO template, the radiologist fills in values for dimensions of the SFO… (or in some cases, one or more values may be assigned automatically) (which necessarily outputs data for the given field).); and
d) using the […] output […] in c) to automatically populate the given field of the particular medical structured report template with associated data (Abstract, [0016] teach, starting with the SFO template, the radiologist fills in values for dimensions of the SFO… (or in some cases, one or more values may be assigned automatically). See also optional feature at [0023] for populating values of dimensions of the SFO.);
wherein each one of steps a) to d) are implemented on at least one computer (Fig. 1, [0006] teach a radiology reading device comprises a server computer programmed to operate with a radiology workstation to perform a radiology reading task including performing the SFO-based radiology reading support method.).
Sevenster may not teach
(b) receiving a medical report comprising free or unstructured text related to the particular medical structured report template, wherein the free or unstructured text is written or dictated by a clinician through a user interface; or c) extracting the free or unstructured text of the medical report of b).
Ramanujam teaches
(b) receiving a medical report comprising free or unstructured text related to the particular medical structured report template, wherein the free or unstructured text is written or dictated by a clinician through a user interface (Fig. 1, [0046] teach receiving via a graphical user interface (GUI) multiple source documents containing text content. [0033] teaches, e.g., Microsoft® Word® docx format (written source docs).); and c) extracting the free or unstructured text of the medical report of b) (Fig. 1, [0046] teach automatically extract and pre-process content, e.g., text, from the source documents using natural language processing (NLP). See also Fig. 2 elements 201, 202; and Fig. 6 at step 604.)
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to have modified the system and method using structured finding object (SFO) templates of Sevenster to upload source document(s) via a user interface and subsequently extract the text content, and to use this information as part of an artificial intelligence-enabled system and method for authoring a scientific document as taught by Ramanujam (see Abstract), with the motivation of improving clinical document generation technology and machine learning technology (see Ramanujam at para. 0027, 0045, 0070).
Sevenster may not teach
processing… together with data that specifies… ii) a context, iii) a question, and iv) answers to the question of iii) as input to a pretrained question-answer machine learning system, which outputs one of the answers of iv) to the question of iii) for the given field of i) of the particular medical structured report template; or
the one answer output by the pretrained question-answer machine learning system.
Khashabi teaches
processing… together with data that specifies… ii) a context, iii) a question, and iv) answers to the question of iii) as input to a pretrained question-answer machine learning system, which outputs one of the answers of iv) to the question of iii) (Title teaches “UnifiedQA: Crossing Format Boundaries with a Single QA System”. See Applicant’s disclosure at para. 0027, 0037-38, 0075-76, 0080-81. Abstract and pg. 1897, para. 2 teach using the latest advances in language modeling to build a single pre-trained QA model, UnifiedQA… for question answering tasks. Fig. 1 and Table 1 teach task examples in various formats, some of which contain a context (ii), a question (iii), and a gold answer. Figs. 1-2 teach the extractive QA format contains the gold answer as a paragraph substring (iv). See also pg. 1898, last para., teaching that the answer is extracted or abstracted from either the provided context paragraph or candidate answers. Table 1 and pg. 1898, section 3.1, para. 3 teach “examples of the natural input and output encoding for each of these formats, where both input and output representations are raw text” (processing together data that specifies a context, a question, and answers). Fig. 1 and pg. 1897, para. 2 teach taking this natural text as input and applying UnifiedQA as-is to the QA task.); and the one answer output by the pretrained question-answer machine learning system (see previous citations. Fig. 1 and Table 1 teach output representations (the one answer), necessarily output.)
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to have modified the system and method using structured finding object (SFO) templates of Sevenster to pretrain a question-answer machine learning system and apply data to the pretrained question-answer machine learning system and to use this information as part of performing question answering tasks / a question answering method as taught by Khashabi (see Abstract), with the motivation of improving the question answering / reasoning ability / performance of an algorithm across diverse formats / using a format-agnostic QA system, thus, improving QA systems (see Khashabi at Abstract; and pg. 1897, I “Introduction”).
Re. Claim 2, Sevenster/Ramanujam/Khashabi teaches a computer-implemented method according to claim 1, wherein: the plurality of medical structured report templates have fields to be populated by selecting from a list of predefined items, wherein each item of each list is associated with a specific answer to a specific question (see claim 1 prior art rejection. Sevenster [0006]-[0007] teaches each SFO template has annotation data entry fields (the plurality have fields). The Examiner notes that “to be populated…” is an intended use of “fields”. Regardless, Sevenster at Abstract, [0021] teaches building (populating) an SFO 60 representing the radiological finding by annotating the SFO template. Sevenster [0021] teaches the SFO annotation GUI dialog 40 is in the form of annotation reticle 40, which is a compact design in which annotation data entry fields are represented as arc segments (lists of predefined items) of the reticle 40 (or the arc segments may be selected (selecting from the list of predefined items) to bring up the annotation data entry field(s) (each item of the list) corresponding to an arc segment) for inputting various annotation values. Khashabi Figs. 1-2 teach QA datasets, a QA dataset having a particular format, e.g., an extractive format, wherein each extractive question has a gold answer.)
Re. Claim 3, Sevenster/Ramanujam/Khashabi teaches a computer-implemented method according to claim 1, wherein: the question-answer machine learning system is trained on known datasets of questions and answers (Khashabi at Abstract and pg. 1897, para. 2 teaches the UnifiedQA model is pretrained using a set of 20 seed QA datasets of multiple formats, taking natural text as input, without using format-specific prefixes) including one or more of the following formats: Extractive where questions include a context paragraph and require models to extract the answer as a substring from the context (see Khashabi at Fig. 1, teaching for EX that the Gold answer is extracted from the Context), Abstractive where questions require models to produce answers that are not substrings of the provided context paragraph (see Khashabi at Figs. 1-2, teaching for AB that the Gold answer, e.g., “fall in love with themselves”, is produced based on the Context, e.g., including “Grow dotingly enamored of themselves”, and that the paragraph does not contain the answer as a substring), Multiple-choice where questions have a set of candidate answers of which generally exactly one is correct (see Khashabi at Fig. 1, teaching for MC that there are Candidate Answers and a Gold answer among them), Yes/No where questions expect a ‘yes’ or ‘no’ answer as the response (see Khashabi at Fig. 1, teaching for YN that the gold answer is ‘no’) (Khashabi Fig. 1 teaches four formats commonly used for posing questions and answering them: Extractive (EX), Abstractive (AB), Multiple-Choice (MC), and Yes/No (YN). See also Fig. 2.)
Re. Claim 4, Sevenster/Ramanujam/Khashabi teaches a computer-implemented method according to claim 1, wherein: the question-answer machine learning system employs the UnifiedQA algorithm (Khashabi at Abstract teaches we use (employ) the latest advances in language modeling to build a single pre-trained QA model, UNIFIEDQA (the UnifiedQA system employs the UnifiedQA algorithm), that performs well across 20 QA datasets spanning 4 diverse formats… UNIFIEDQA performs surprisingly well, showing strong generalization from its out-of-format training data… Finally, fine-tuning this pre-trained QA model into specialized models results in a new state of the art on 10 factoid and commonsense QA datasets (the pretrained system employs the algorithm).)
Re. Claim 6, Sevenster/Ramanujam/Khashabi teaches a computer-implemented method according to claim 1, wherein: the fields of the particular medical structured report template of a) is subjected to a review for being corrected (Sevenster [0043] teaches, in a sophisticated embodiment, the natural language content is automatically copied into the appropriate section of the radiology report, optionally highlighted so that the radiologist is encouraged to review it before finalizing and filing the report.), the fields as corrected being used for retraining the question-answer machine learning system (The Examiner notes that “for being corrected… as corrected being used…” are intended results of subjecting the populated fields to a review, each of which is not required for the claim to be met. See additionally claim 7 prior art rejection.)
Re. Claim 14, Sevenster/Ramanujam/Khashabi teaches a computer-implemented method according to claim 1, wherein: all or part of entities populating the fields of the particular medical structured report template (Sevenster at Abstract, [0008] teaches a structured finding object template is configured… defining annotated data entry fields with <key, value> pairs (entities populating fields). See also Sevenster [0021]-[0022], [0042], [0044]-[0045].) are made available in the form of tags that can be confirmed or modified by a user to allow the classification for scientific purposes and/or to improve the training of the question-answer machine learning system (The Examiner notes that “made available…” is intended use of all or part of “entities”, which is not required for the claim to be met. Additionally, “made available…”, “that can be confirmed or modified…”, “to allow…” are all optional language.)
Note: The USPTO interprets claim limitations that contain “if, may, might, can, could, when, potentially, possibly” as optional language. As a matter of linguistic precision, optional claim elements do not narrow claim limitations since they can be omitted; “[c]laim scope is not limited by claim language that suggests or makes optional but does not require steps to be performed.” MPEP 2111.04 & 2173.05(d) (see also In re Johnston, 435 F.3d 1381, 77 USPQ2d 1788 (Fed. Cir. 2006)).
Note: US 2021/0202045 A1 to Neumann (see PTO-892) teaches the recitation of “all or part of the extracted entities populating the field… are made available in the form of TAGs” (see Neumann at para. 0093), but the recitation does not distinguish over the prior art because it is an intended use of the data.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Sevenster in view of Ramanujam, Khashabi, and Waibel (US 2013/0238312 A1).
Re. Claim 5, Sevenster/Ramanujam/Khashabi teaches a computer-implemented method according to claim 1, wherein: the free or unstructured text of the medical report of b) is […] being extracted in c) (Sevenster [0042] teaches a radiology report that has a freeform textual format. Ramanujam [0028], [0046] teach extracting the text from the documents received via the GUI.)
Sevenster/Ramanujam/Khashabi does not teach that the free or unstructured text is translated before being extracted.
Waibel teaches that the text is translated before being extracted (Fig. 9, [0013]-[0014] teach text is input to a Machine Translation engine MT1, which translates the text in Language 1 to Language 2. Fig. 1, [0016] teach a form 28 is populated with recognized words from Speaker 1 in Language 1 and/or translated words from Speaker 2 that are translated from Language 2 to Language 1.)
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to have modified the system and method using structured finding object (SFO) templates of Sevenster/Ramanujam/Khashabi to perform machine translation on text input and to use this information as part of computer-implemented systems and methods for extracting information from a human-to-human dialog (recognized speech or the translation thereof) to generate reports automatically as taught by Waibel (see Abstract and para. 0004), with the motivation of improving patient care-related interactions and workflow efficiency (see Waibel at para. 0002-0004).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Sevenster in view of Ramanujam, Khashabi, and Rao et al. (US 2022/0253592 A1; “Rao” herein).
Re. Claim 7, Sevenster/Ramanujam/Khashabi teaches a computer-implemented method according to claim 1, wherein: at least one populated field of the particular structured medical report template is used as […] (see claim 1 prior art rejection. Sevenster [0043] teaches, in a sophisticated embodiment, the natural language content is automatically copied into the appropriate section of the radiology report, optionally highlighted so that the radiologist is encouraged to review it before finalizing and filing the report (used as content).)
Sevenster/Ramanujam/Khashabi does not teach input to classification algorithms to improve the reliability and the performances of an Enterprise Imaging Reporting System (note: “to improve…” is an intended result of “input…”, which is not required for the claim to be met).
Rao teaches input to classification algorithms to improve the reliability and the performances of an Enterprise Imaging Reporting System ([0376]-[0377] teaches a quality assurance process can be performed on an automatic billing process… to ensure the auto-determined billing data was generated properly… The auto-determined billing data is transmitted and displayed in conjunction with the medical report and/or medical scan utilized to generate the auto-determined billing data… The user can input correction information… based on errors identified in generating the auto-determined billing data… The user can send the billing data generator function 4050 into remission and can retrain the medical report natural language analysis function and/or the billing data generator function 4050 itself (classification algorithms) based on new training data (corrected information input). See also [0133]-[0139] esp. [0137], teaching performance-based remediation/retraining.)
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to have modified the system and method using structured finding object (SFO) templates of Sevenster/Ramanujam/Khashabi to perform a remission/retraining/remediation process for underperforming functions using corrected information as new training data and to use this information as part of a system with report analysis and methods for use therewith as taught by Rao, with the motivation of providing support to medical professionals in diagnosing, triaging, classifying, ranking, and/or otherwise reviewing medical data and improving performance of one or more functions (see Rao, e.g., para. 0052-0053, 0123, 0133-0140, 0191).
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Sevenster in view of Ramanujam, Khashabi, and Gupta et al. (WO 2017/098341 A1; “Gupta” herein).
Re. Claim 15, Sevenster/Ramanujam/Khashabi teaches a computer-implemented method according to claim 14, further comprising: […] before receiving a validation input from the user (Sevenster [0035] teaches invoking the PACS navigator to query whether the PACS contains any comparable previous radiology examinations for the patient (before)--if so, this is reported to the radiologist with a question as to whether the radiologist wishes to review these examinations… If the radiologist so indicates (necessarily receiving a validation input), then the radiology report(s) for these past examinations and/or their images are displayed for review by the radiologist.)
Sevenster/Ramanujam/Khashabi does not teach applying a weight of relevance to the tags.
Gupta teaches
applying a weight of relevance to the tags (Fig. 1A, [0003], [0057] teach creating tagged content (tags)… FIG. 4A illustrates blank fields presented to the user while creating a tag and associating one or more attributes with each content block tagged… FIG. 4B illustrates exemplary fields filled by the user after tagging the content or creating a content block to associate one or more parameters, e.g., relative weight 414 and linking with block content 422. See also Fig. 4A element 402 and associated text (before receiving a validation input).)
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to have modified the system and method using structured finding object (SFO) templates of Sevenster/Ramanujam/Khashabi to perform a search and information retrieval process and to use this information as part of systems and methods for weighting a search query result as taught by Gupta, with the motivation of improving upon conventional computer system information search and management (see Gupta at para. 0002-0007).
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Sevenster in view of Vergaro et al. (US 2018/0075221 A1; “Vergaro”), Ramanujam, and Khashabi.
Re. Claim 18, Sevenster teaches a computer system for managing a medical structured reporting workflow (Abstract and Fig. 1 teach a system including a radiology workstation and a server computer and a radiology reading support method), the system comprising:
memory to store program instructions and a medical structured reporting workflow including a plurality of medical structured report templates (34) and a […] machine learning system (62) ([0046] teaches data processing or data interfacing components of the SFO-based tool, e.g. components 26, 28, 30, 32, 56, 62 (machine learning system) may be embodied as a non-transitory storage medium (memory) storing instructions executable by an electronic processor (e.g. the server 10) to perform the disclosed operations. [0045] teaches, with reference now to Fig. 2, an illustrative process using the SFO-based tool in performing a radiology reading using the system of Fig. 1 is described… In an operation 72, the SFO template for the selected finding is retrieved from the SFO templates database 34 (stored).), wherein the plurality of medical structured report templates are associated with corresponding medical […] ([0007], [0015] teach each SFO template is structured data configured to represent a radiological finding, e.g., tumor (associated with corresponding medical findings).), wherein the plurality of medical structured report templates include […] (note [0006]-[0007] teach each SFO template has annotation data entry fields), and wherein […];
a processor (10) configured to execute the program instructions (see [0046]) to: receive a structured reporting workflow designator designating a particular medical structured report template belonging to the plurality of medical structured report templates (Abstract, Fig. 2, [0019], [0045] teach receiving a user input (designator) identifying a radiological (medical) finding using a finding selector 30, and retrieving a structured finding object template (a particular medical structured report template) for the radiological finding from the SFO templates database 34 (belonging to the plurality of structured report templates). See also [0016], [0041].), wherein the particular medical structured report template has fields to be populated with data received in reply to specific questions (Abstract, [0016] teach, starting with the SFO template, the radiologist fills in values for dimensions of the SFO (populates included fields with associated data) via an SFO annotation graphical user interface (GUI) dialog displayed on the radiology workstation (or in some cases, one or more values may be assigned automatically). See also [0042]. 
The Examiner notes that “to be populated…” is an intended use of “fields”, which is not required for the claim to be met.); […] and […] the free or unstructured text together with data that specifies i) a given field of the particular medical structured report template, […] ([0041]-[0042] teaches to bridge the gap between the structured format of the completed SFO 60… and a freeform textual format of the radiology report, the illustrative SFO-based tool further includes a natural language generation component 62 (machine learning system) which generates natural language text content expressing the information (the values) (i.e., the data that specifies the given field) of the completed SFO 60… in a format suitable for inclusion in the radiology report (synonymous with processing the free text together with the specified data), which outputs […] for the given field of i) of the particular medical structured report template (Abstract, [0016] teach, starting with the SFO template, the radiologist fills in values for dimensions of the SFO… (or in some cases, one or more values may be assigned automatically) (which necessarily outputs data for the given field).); and use the […] output […] to automatically populate the given field of the particular medical structured report template with associated data (Abstract, [0016] teach, starting with the SFO template, the radiologist fills in values for dimensions of the SFO… (or in some cases, one or more values may be assigned automatically). See also optional feature at [0023] for populating values of dimensions of the SFO.)
Sevenster may not teach wherein the plurality… are associated with corresponding medical procedures, wherein the plurality… include sets of data entry sheets having a predetermined format and data entry fields uniquely associated with a corresponding aspect of a procedure.
Vergaro teaches
wherein the plurality… are associated with corresponding medical procedures, wherein the plurality… include sets of data entry sheets having a predetermined format and data entry fields uniquely associated with a corresponding aspect of a procedure (Abstract teaches storing multiple structured reports associated with corresponding interventional radiology (IR) procedures (the plurality… are associated with corresponding medical procedures). [0009] teaches the structured reports include sets of data entry sheets, graphical worksheets and text reporting worksheets having a predetermined format and data entry fields uniquely associated with a corresponding aspect of an IR procedure.)
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to have modified the system and method using structured finding object (SFO) templates of Sevenster to store templates related to the above-described structured reports and to use this information as part of a computer-implemented method and system as taught by Vergaro, with the motivation of improving structured reporting workflows (including patient scheduling, resource management, examination performance tracking, examination interpretation, results distribution and procedure billing, diagnosis and treatment), which results in improved patient care (see Vergaro, e.g., at para. 0002-0003 and 0005).
Sevenster/Vergaro may not teach
receive a medical report comprising free or unstructured text related to the particular medical structured report template, wherein the free or unstructured text is written or dictated by a clinician through a user interface; or extract the free or unstructured text of the medical report.
Ramanujam teaches
receive a medical report comprising free or unstructured text related to the particular medical structured report template, wherein the free or unstructured text is written or dictated by a clinician through a user interface (Fig. 1, [0046] teach receiving via a graphical user interface (GUI) multiple source documents containing text content. [0033] teaches Microsoft® Word® docx format (written source docs).); and extract the free or unstructured text of the medical report of b) (Fig. 1, [0046] teach automatically extract and pre-process content, e.g., text, from the source documents using natural language processing (NLP). See also Fig. 2 elements 201, 202; and Fig. 6 at step 604.)
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to have modified the system and method using structured finding object (SFO) templates of Sevenster to upload source document(s) via a user interface and subsequently extract the text content and to use this information as part of artificial intelligence-enabled system and method for authoring a scientific document as taught by Ramanujam (see Abstract), with the motivation of improving clinical document generation technology and machine learning technology (see Ramanujam at para. 0027, 0045, 0070).
Sevenster/Vergaro/Ramanujam may not teach
a question-answer machine learning system, … and wherein the question-answer machine learning system is pretrained using known datasets of questions and answers;
process… together with data that specifies… ii) a context, iii) a question, and iv) answers to the question of iii) as input to the pretrained question-answer machine learning system, which outputs one of the answers of iv) to the question of iii) for the given field of i) of the particular medical structured report template; or
the one answer output by the pretrained question-answer machine learning system.
Khashabi teaches
a question-answer machine learning system, … and wherein the question-answer machine learning system is pretrained using known datasets of questions and answers (Title teaches “UnifiedQA: Crossing Format Boundaries with a Single QA System”. See Applicant’s disclosure at para. 0027, 0037-38, 0075-76, 0080-81. Abstract and pg. 1897, para. 2 teach using the latest advances in language modeling to build a single pre-trained QA model, UnifiedQA (is pretrained), … by training text-to-text models on a set of seed QA datasets of multiple formats (known datasets of questions and answers), taking natural text as input, without using format-specific prefixes. Fig. 1 and Table 1 teach examples of questions input and answers output.);
process… together with data that specifies… ii) a context, iii) a question, and iv) answers to the question of iii) as input to a pretrained question-answer machine learning system, which outputs one of the answers of iv) to the question of iii) (Title teaches “UnifiedQA: Crossing Format Boundaries with a Single QA System”. See Applicant’s disclosure at para. 0027, 0037-38, 0075-76, 0080-81. Abstract and pg. 1897, para. 2 teach using the latest advances in language modeling to build a single pre-trained QA model, UnifiedQA… for question answering tasks. Fig. 1 and Table 1 teach task examples in various formats, some of which contain a context (ii), a question (iii), and a gold answer. Figs. 1-2 teach the extractive QA format contains the gold answer as a paragraph substring (iv). See also pg. 1898, last para., teaching that the answer is extracted or abstracted from either the provided context paragraph or candidate answers. Table 1 and pg. 1898, section 3.1, para. 3 teach “examples of the natural input and output encoding for each of these formats, where both input and output representations are raw text” (process together). Fig. 1 and pg. 1897, para. 2 teach taking this natural text as input and applying UnifiedQA as-is to the QA task.); and the one answer output by the pretrained question-answer machine learning system (see previous citations. Fig. 1 and Table 1 teach output representations (the one answer), necessarily output.)
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to have modified the system and method using structured finding object (SFO) templates of Sevenster to pretrain a question-answer machine learning system and apply data to the pretrained question-answer machine learning system and to use this information as part of performing question answering tasks as taught by Khashabi (see Abstract), with the motivation of improving the question-answering and reasoning performance of an algorithm across diverse formats by using a format-agnostic QA system, thus improving QA systems (see Khashabi at Abstract; and pg. 1897, “Introduction”).
Claims 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Sevenster in view of Vergaro, Ramanujam, Khashabi, and Reiner (US 2010/0145720 A1; “Reiner”).
Re. Claim 19, Sevenster/Vergaro/Ramanujam/Khashabi teaches a computer system according to claim 18, wherein:
the medical structured reporting workflow employs examination images of a patient (See Sevenster at Fig. 1, image processing apps. Sevenster [0035] teaches displaying radiology report(s) of past examinations of the patient and their images.); and
the processor is configured to populate the fields of the particular medical structured report template […] a free text imaging examination report (see [0041]-[0042], freeform radiology report) and use said populated fields to automatically load from a PACS (See Sevenster [0017], PACS 12), either connected, connectible or included in the system, previous examination studies of the patient in the form of raw or processed images of the patient (Sevenster at Abstract, [0016], [0028], [0035] teaches as annotation (populating) proceeds, the SFO-based tool monitors the SFO (template) to determine whether any action rules are met… For example, if an action rule detects annotation (data entry fields) of an SFO having a "temporal" dimension which assume values such as "stable", "interval increase in size", or so forth, then the rule may invoke the PACS navigator to query whether the PACS contains any comparable previous radiology examinations for the patient--if so, this is reported to the radiologist with a question as to whether the radiologist wishes to review these examinations… then the radiology report(s) for these past examinations and/or their images are displayed (necessarily loaded).)
Sevenster/Vergaro/Ramanujam/Khashabi may not teach populating the (structured) report template from the free text imaging examination report.
Reiner teaches populating the structured report template from the free text imaging examination report (Abstract, [0147] teach converting unstructured, free text data contained within medical reports into standardized, structured data (from the free text). Fig. 2A-2B, [0154] teach accepting the structured data elements to record the final report (necessarily populated).)
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to have modified the system and method using structured finding object (SFO) templates of Sevenster/Vergaro/Ramanujam/Khashabi to perform a data extraction process on a free-text report and to use this information as part of a method of extracting real-time structured data and performing data analysis and decision support as taught by Reiner, with the motivation of improving decision support and clinical outcomes (see Reiner at Abstract and para. 0004, 0165, 0296).
Re. Claim 20, Sevenster/Vergaro/Ramanujam/Khashabi/Reiner teaches a computer system according to claim 19, wherein: the processor is further configured to apply optimized visualization protocols to facilitate comparison between the images of the patient (Sevenster at Abstract, [0016], [0028], [0035] teaches invoking the PACS navigator to query whether the PACS contains any comparable previous radiology examinations for the patient--if so, this is reported to the radiologist with a question as to whether the radiologist wishes to review these examinations… then the radiology report(s) for these past examinations and/or their images are displayed. Vergaro [0067], [0073] teaches the IRIS functionality within the IRIS-PACS… includes one or more modules for generating interventional radiology structured reports and advanced image visualization (optimized visualization protocols) … One monitor displays data sheets, worksheets, structured reports and other information… The second monitor displays the corresponding images that are stored in the PACS system and provides a number of tools for visualization and image processing (applies the optimized visualization protocols). Additionally, Vergaro [0079] teaches, for example, the AT management module 152 enables the addition of condition indicators describing (e.g., visually representing) a condition or state of a vascular segment of interest. The Examiner notes that “to facilitate comparison…” is an intended result of applying optimized visualization protocols, which is not required for the claim to be met.)
Subject Matter Free of Prior Art
The cited prior art of record fails to expressly teach or suggest, either alone or in combination, the features found within claim 8 and its dependent claims 9-13 as follows (parent claim 8 being representative):
The fields of the plurality of medical structured report templates are associated with entities having different weights (We), and the method further comprises building a clinical summary of a patient from free text medical reports related to the patient obtained at different time by providing an ordered sequence of entities starting from the one having the higher weight, ignoring those entities having weights below a pre-determined threshold.
The most remarkable prior art of record is as follows:
Sevenster et al. (US 2017/0154156 A1) for teaching structured finding objects for integration of third-party applications in the image interpretation workflow.
Ramanujam (US 2024/0086647 A1) for teaching artificial intelligence-enabled system and method for authoring a scientific document.
Khashabi et al. (“UNIFIEDQA…” (2020)) for teaching UnifiedQA, a single pre-trained QA model that performs question answering tasks using a variety of diverse data formats.
Lawrence et al. (US 2005/0222981 A1) for teaching systems and methods for weighting a search query result.
The cited prior art of record also fails to expressly teach or suggest, either alone or in combination, the features found within claim 16 as follows:
A computer-implemented method according to claim 1, further comprising: assigning a priority level to an examination by identifying critical conditions related to a diagnostic question in the free or unstructured text of the medical report of b).
The most remarkable prior art of record is as follows:
Sevenster et al. (US 2017/0154156 A1) for teaching structured finding objects for integration of third-party applications in the image interpretation workflow.
Ramanujam (US 2024/0086647 A1) for teaching artificial intelligence-enabled system and method for authoring a scientific document.
Khashabi et al. (“UNIFIEDQA…” (2020)) for teaching UnifiedQA, a single pre-trained QA model that performs question answering tasks using a variety of diverse data formats.
Lawrence et al. (US 2005/0222981 A1) for teaching systems and methods for weighting a search query result.
Reiner (US 2010/0145720 A1) for teaching methodology for the conversion of unstructured, free text data (contained within medical reports) into standardized, structured data as relates to decision support for diagnosis and treatment of patients (see Abstract).
Response to Arguments
Rejection under 35 U.S.C. §103
Regarding the rejection of Claims 1-16 and 18-20, the Examiner has considered the Applicant’s arguments but does not find them persuasive for at least the following reasons. Applicant argues:
1. “Sevenster discloses a system that builds a structured finding object (SFO) 60 representing an identified radiological finding defined by a dimension and dimension value which are input by user interaction with an SFO annotation GUI 40… Natural language processing 62 is used to describe the identified radiological finding from the SFO in freeform textual format for inclusion in the radiology report 18… Thus, in Sevenster, the free form text of updated radiology report is generated from the output of the NLP processing… In contrast, the method of amended claim 1 involves… Thus, there is no free or unstructured text written or dictated by a clinician through a user interface nor any extraction of such human-derived free or unstructured text from a medical report… Furthermore, Sevenster does not involve combining the free or unstructured text… with the data that specifies i) … ii) … iii) … iv) … as input to a pretrained question-answer machine learning system… And Sevenster does not involve a pretrained question-answer machine learning system outputting one answer… as specified by the input… And Sevenster does not involve using the one answer output… to automatically populate the given field…” (Remarks, pg. 8-11).
Re. argument 1: The Examiner respectfully submits the basis of rejection as necessitated by amendment. The claims now recite that pre-processing includes extracting text from the report received via a user interface, and processing under the broadest reasonable interpretation (BRI) includes mapping extracted report text to the report template together with the other types of information. Ramanujam is added to teach extracting text from a source document (Sevenster’s freeform textual radiology report) uploaded via a graphical user interface. Sevenster’s information and Ramanujam’s extracted information can be used as inputs to Khashabi’s pre-trained question answering system together with input data of Khashabi to output a gold answer of Khashabi for Sevenster’s SFO template dimension value. Sevenster renders obvious assigning the dimension value automatically, thereby filling in one or more dimensions of the SFO template with respective answer values output by Khashabi’s pretrained question-answering system.
2. “The secondary reference to Khashabi fails to remedy the deficiencies…” (Remarks, pg. 11).
Re. argument 2: See response to arguments in 1. The combination of references renders obvious the claim limitations in claim 1. See also past response to Applicant’s arguments in Office action mailed 11/13/2025 (pgs. 47-48).
3. “Advantageously, these features of amended claim 1 use the output of machine learning system to automatically populate the given field of the particular medical structured report template based on free or written text or unstructured text of the medical report that is written or dictated by a clinician through a user interface, which can reduce the time and costs associated with this task in a manner that is not taught or suggested by the teachings of Sevenster and Khashabi” (Remarks, pg. 12).
Re. argument 3: The Examiner respectfully submits that Sevenster in view of Ramanujam and Khashabi teaches the limitations of claim 1. The cited references teach the claimed features. Whether they provide the same alleged advantages is immaterial to the novelty and/or obviousness determination.
4. “First, one of ordinary skill in the art would not have been motivated to combine Sevenster and Khashabi because these references address dissimilar problems and propose dissimilar solutions… Therefore, the invention of amended claim 1 is not obvious and the Examiner should withdraw this rejection (MPEP § 2143(1)).” (Remarks, pg. 13).
Re. argument 4: In response to applicant’s argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, the motivation to combine Khashabi with Sevenster can be found in Khashabi in its abstract as well as the introduction on pg. 1897. Khashabi provides improvement to the machine learning technology of Sevenster by providing a pre-trained question-answering algorithm that performs across diverse formats.
5. “Second, combining Sevenster and Khashabi as suggested by the Examiner would change the principle of operation of Sevenster, which uses natural language processing to generate freeform text from structured data for inclusion into a medical report…” (Remarks, pg. 13).
Re. argument 5: In response to the Applicant’s argument that the combination of Sevenster and Khashabi does not seem to result in anything operable: The Examiner respectfully submits that the principle of operation of Sevenster is not changed. Per MPEP 2143.01(VI), obviousness may not be present where the principle of operation of a reference is changed in a manner that “would require a substantial reconstruction and redesign of the elements shown in [the primary reference] as well as a change in the basic principle under which the [primary reference] construction was designed to operate.” The Examiner submits that integrating the question-answering algorithm/system and question-answering tasks of Khashabi with the teachings of Sevenster would not require a substantial reconstruction and redesign of the elements shown in Sevenster or a change in the basic principle under which Sevenster’s construction was designed to operate.
The Examiner submits Khashabi is relied on for the specific teaching of a pre-trained question-answering algorithm that performs question-answering tasks across diverse formats. Sevenster is relied on for teaching a system and method using structured finding object templates, the system including a radiology workstation and a server computer programmed to operate with the radiology workstation to perform radiology reading tasks including building an SFO (i.e., radiology report) by annotating the SFO template. As such, the Examiner submits that assigning the answer to a question-answering task of Khashabi into Sevenster's radiology template as an assigned dimension value (see Sevenster at para. 0016) does not render the system and method of Sevenster inoperable.
In addition, the Examiner recognizes that obviousness is not determined by what the references expressly state but by what they would reasonably suggest to one of ordinary skill in the art, as supported by In re DeLisle, 406 F.2d 1386, 160 USPQ 806 (CCPA 1969); In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); and In re Fine, 837 F.2d 1071, 1074, 5 USPQ2d 1596, 1598 (Fed. Cir. 1988) (citing In re Lalu, 747 F.2d 703, 705, 223 USPQ 1257, 1258 (Fed. Cir. 1984)). Further, it was determined in In re Lamberti, 545 F.2d 747, 192 USPQ 278 (CCPA 1976) that:
(i) obviousness does not require absolute predictability;
(ii) non-preferred embodiments of prior art must also be considered; and
(iii) the question is not express teaching of references, but what they would suggest.
6. “Third, one of ordinary skill in the art would not have been motivated to combine Sevenster and Khashabi because Sevenster teaches away from the claimed invention” (Remarks, pg. 14).
Re. argument 6: The Examiner respectfully maintains the basis of rejection, as necessitated by amendment, in which Ramanujam extracts written text from source documents, received via a user interface, to be used downstream. The Examiner also respectfully asserts that Sevenster’s natural language processing can perform multiple tasks, and that Sevenster’s system can accommodate the addition of question-answering tasks using Khashabi’s pre-trained question-answering algorithm for question-answering-specific tasks. As this feature is added by Khashabi with proper motivation to combine, Khashabi does not teach away from the invention of Sevenster, and Sevenster does not teach away from the claimed invention.
Regarding the rejection of Claims 2-16 and 18-20, the Applicant has not offered any arguments with respect to these claims other than to reiterate the argument(s) presented for the claim(s) from which they depend or to which they are analogous. As such, the rejection of these claims is also respectfully maintained.
Rejections under 35 U.S.C. §101
Regarding the rejection of Claims 1-16 and 18-20, the Examiner has considered the Applicant’s arguments but does not find them persuasive for at least the following reasons. Applicant argues:
1. “First… amended claim 1 does not recite an abstract idea; instead, it recites
specific physical operations that are carried out by one or more computer systems” (Remarks, pg. 15-16).
Re. argument 1: The Examiner respectfully maintains the basis of rejection. The recited workflow is a form of human interaction with a computer (for example, a person carrying out steps in a workflow using one or more computer systems), which falls under certain methods of organizing human activity. Given the broadest reasonable interpretation, the claims recite Certain Methods of Organizing Human Activity (managing personal behavior and/or interactions between people, which includes one or more persons following a series of steps or rules, and which includes interaction of a person with a computer). See MPEP 2106.04(a)(2)(II).
The Examiner notes that in multiple CAFC decisions, claims found to recite a method of organizing human activity did not actively recite a person or persons performing the steps of the claims (see, e.g., Electric Power Group, TLI Communications, and Ultramercial). Also, the additional elements are considered in Step 2A Prong 2 and Step 2B.
2. “Second… this combination of features (workflow) is integrated into a practical application, i.e., the practical application of automatically populating the given field of the particular medical structured report template with useful data. This problem is outlined in paragraph [0008] and the advantages summarized in paragraphs [0102] and [0103] of the present application” (Remarks, pg. 16-17).
Re. argument 2: The Examiner respectfully submits that following the workflow is something a person would do on generic computer(s). Mere instructions to apply the workflow on generic computer(s) does not integrate the abstract idea into a practical application.
The computer including the user interface is recited at a high level of generality and is a generic computer component. The use of the computer including the user interface for carrying out the workflow (including population of template data for report generation), as drafted, does not provide an improvement to the functioning of the computer itself; the computer and/or the user interface are not made to physically run faster, utilize fewer resources, or run more efficiently. Utilizing a computer or tool to perform an abstract idea in a faster or more accurate manner is utilizing a computer for what it was designed to do and is insufficient to provide a practical application or significantly more. See Alice Corp. Nor is this a practical application by any measure provided for in the 2019 Patent Eligibility Guidance.
Applicant alleges para. 0008 presents a technical problem. Para. 0008 presents what appears to be a problem of difficulty in deriving valuable information from the analysis of “narrative” and linguistic data. It is unclear how para. 0008 relates to the information presented in paras. 0102-0103.
The Examiner submits that this problem of extracting valuable information from narrative and linguistic data for inclusion in a medical report is not a technical problem; it is at most a medical problem related to patient monitoring or an administrative problem in report generation. Since this is not a technical problem (i.e., one caused by the computer to which the claims are confined), the claimed invention does not provide a practical application or significantly more under this measure or by any measure provided for in the 2019 PEG. Difficulty in monitoring patients by collecting and reporting patient data existed well prior to the advent of computer systems and their installation at medical facilities. The description of the monitoring of patients and collection of patient data as being “difficult” or lacking in some way does not indicate how or why this is a technical problem. It is also unclear whether the Applicant’s claimed invention actually solves this purported problem or exacerbates it; there is no nexus between the data collection or data reporting being slow, inaccurate, unusable, or not visible and the claimed invention.
Applicant alleges paras. 0102-0103 present a technical solution. Paras. 0102-0103 present the following: “NLP techniques are used to extract entities. Semantic validation rules are then applied to some of these entities, pre-configured in the system… as well as checks relating to the mandatory presence of certain information… Optionally, consistency checks can also be performed by matching the data extracted from the reporting system using NLP techniques with the DICOM tags associated with the imaging studies and stored in the PACS system”. The Examiner notes that claim 1 does not positively recite extracting entities using NLP techniques, applying semantic validation rules to extracted entities, checking for the mandatory presence of certain information, storing DICOM tags in a PACS system, or performing consistency checks by matching the extracted data with DICOM tags associated with imaging studies.
First, the specification should be evaluated to determine if the disclosure provides sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology. See MPEP § 2106.04(d)(1). It is not apparent to the Examiner how the computer or the user interface could be improved by mere application of NLP techniques, mere application of pre-configured semantic validation rules, information checks, or matching of extracted data with DICOM tags. It is also unclear how these functions would implement a technical solution to a technical problem.
Note: It is also not apparent how the recited pre-trained question-answering machine learning system is improved, with respect to a technical problem in the field of machine learning, by its mere application.
Note: The MPEP also states that “[s]econd, if the specification sets forth an improvement in technology, the claim must be evaluated to ensure that the claim itself reflects the disclosed improvement. That is, the claim includes the components or steps of the invention that provide the improvement described in the specification. The claim itself does not need to explicitly recite the improvement described in the specification (e.g., ‘thereby increasing the bandwidth of the channel’).” MPEP § 2106.04(d)(1). But the claim does not positively recite the argued features of paras. 0102-0103.
3. “In rejecting the claims, the Examiner has provided no evidence that
this combination of features (workflow) of amended claim 1 is well-understood, routine,
or conventional activity in the field…” (Remarks, pg. 17).
Re. argument 3 (well-understood, routine, conventional (WURC) analysis): The Examiner respectfully submits that none of the additional elements require a WURC analysis at Step 2B, since none of the recited limitations have been identified as reciting well-understood, routine, and conventional activity. See MPEP 2106.05(II). Therefore, the argument is not persuasive. Although the conclusion of whether a claim is eligible at Step 2B requires that all relevant considerations be evaluated, most of these considerations were already evaluated in Step 2A Prong Two. Thus, in Step 2B, examiners should:
• Carry over their identification of the additional element(s) in the claim from Step 2A Prong Two (i.e., a computer including a user interface, a memory, a processor);
• Carry over their conclusions from Step 2A Prong Two on the considerations discussed in MPEP §§ 2106.05(a) (i.e., a device) through (c), (e), (f), and (h) (i.e., a computer including a user interface, a memory, a processor);
• Re-evaluate any additional element or combination of elements that was considered to be insignificant extra-solution activity per MPEP § 2106.05(g) (i.e., N/A), because if such re-evaluation finds that the element is unconventional or otherwise more than what is well-understood, routine, conventional activity in the field, this finding may indicate that the additional element is no longer considered to be insignificant; and
• Evaluate whether any additional element or combination of elements are other than what is well-understood, routine, conventional activity in the field, or simply append well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, per MPEP § 2106.05(d) (i.e., the additional elements).
4. “This combination of features (workflow) of claim 1 represents a technical improvement in the field of intelligent management of clinical/medical information.” (Remarks, pg. 17-18).
Re. argument 4: See response to arguments 2-3.
5. “Furthermore, this combination of features (workflow) uses the output of machine learning system to automatically populate the given field of the particular medical structured report template without human involvement, which can reduce the time and costs associated with this task and justifies the conclusion that the claim amounts to significantly more than the judicial exception under step 2B, which results in the claim being patent eligible under 35 USC. 101” (Remarks, pg. 18).
Re. argument 5: The recited “workflow” is a series of instructions or tasks that a person would follow to generate a medical report derived from valuable information. The Examiner notes that in multiple CAFC decisions, claims found to recite a method of organizing human activity did not actively recite a person or persons performing the steps of the claims (see, e.g., Electric Power Group, TLI Communications, and Ultramercial).
That said, the Examiner respectfully submits that the machine learning algorithm is merely implemented on the general-purpose computer system, such that even in combination, the additional elements do not provide a practical application or an inventive concept. The claims do not contain meaningful limitations beyond mere instructions to apply the abstract idea on a general-purpose computer using generic machine learning technology. See MPEP § 2106.05(f). The claims invoke the computer and the machine learning technology merely as tools to execute the abstract idea. See MPEP 2106.05(f)(2).
Further, "claiming the improved speed or efficiency inherent with applying the abstract idea on a computer" does not integrate a judicial exception into a practical application or provide an inventive concept (emphasis added). MPEP 2106.05(f)(2) (citing Intellectual Ventures). In contrast, a claim that purports to improve computer capabilities or to improve an existing technology may integrate a judicial exception into a practical application or provide significantly more. Id. (citing McRO, Inc.; and Enfish, LLC). Applicant’s disclosure asserts that computers could previously be programmed to perform the abstract idea, such that neither machine learning technology nor computer technology capabilities are improved. For example, the Applicant’s disclosure states at para. 0074: “For the implementation of the present system a pre-trained model with T5 (Text-to-Text Transfer Transformer…) architecture has been chosen. It has been proven by scientific literature that it is effective for several kinds of tasks although it can run on a light workstation, like a desktop PC” (emphasis added). For this additional reason, the additional elements do not integrate the judicial exception into a practical application or provide significantly more.
Regarding the rejection of Claims 2-16 and 18-20, the Applicant has not offered any arguments with respect to these claims other than to reiterate the argument(s) presented for the claim(s) from which they depend or to which they are analogous. As such, the rejection of these claims is also respectfully maintained.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Broverman et al. (US 2004/0249664 A1) for teaching design assistance for clinical trial protocols.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jessica M Webb whose telephone number is (469)295-9173. The examiner can normally be reached Mon-Fri 9:00am-1:00pm CST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert Morgan can be reached on (571) 272-6773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.M.W./Examiner, Art Unit 3683
/CHRISTOPHER L GILLIGAN/Primary Examiner, Art Unit 3683