DETAILED ACTION
Notices to Applicant
This communication is a non-final rejection. Claims 1-20, as filed 12/24/2024, are currently pending and have been considered below.
Foreign priority to KR 10-2024-0060559, filed 05/08/2024, is acknowledged.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon and the rationale supporting the rejection would be the same under either status.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Step 1
The claims recite subject matter within a statutory category as a process, machine, and/or article of manufacture, and recite:
1. A system for generating a radiology report, the system comprising:
a memory; and
a processor for executing instructions stored in the memory,
wherein the processor is configured to: (additional element – merely applying the abstract idea with a computer)
obtain an analysis result for a target medical image using an artificial intelligence analysis model; (additional element – insignificant extra-solution activity, namely, mere data-gathering)
extract at least one similar image to the target medical image from a catalog set comprising medical image-radiology report pairs; (abstract idea – mental process because a person can think about similar images and extract an appropriate one from his memory)
determine at least one radiology report paired with the at least one similar image as a reference report; and (abstract idea – mental process)
generate a radiology report for the target medical image based on the analysis result, using the reference report as a guideline (abstract idea – mental process because a person can create a report based on a reference mentally or with pen and paper).
2. The system of claim 1, wherein the processor is configured to:
determine presence of findings corresponding to predetermined finding labels in the radiology report; and (abstract idea – mental process)
generate a finding label set with finding labels extracted from the radiology report (abstract idea – mental process), and
wherein the finding label set is provided as a separate report distinct from the radiology report, or included in a designated section of the radiology report (abstract idea – mental process).
3. The system of claim 1, wherein the processor is configured to:
obtain clinical information through user input or interworking with a database of a medical institution; and (additional element – insignificant extra-solution activity, namely, mere data-gathering)
revise the radiology report using the clinical information or an analysis result of the clinical information (abstract idea – mental process).
4. The system of claim 1, wherein the processor is configured to store a final radiology report, edited or confirmed for the radiology report by a user, in a designated location (additional element – insignificant extra-solution activity, namely, mere data output and storing data in memory; merely applying the abstract idea with a computer).
5. The system of claim 1, wherein the processor is configured to:
determine whether to add the radiology report to the catalog set; and (abstract idea – mental process)
add a pair of the radiology report and the target medical image to the catalog set based on the determination (additional element – insignificant extra-solution activity, namely, mere data output and storing data in memory; merely applying the abstract idea with a computer).
6. The system of claim 1, wherein the analysis result comprises lesion information detected in the target medical image (abstract idea – mental process).
7. The system of claim 6, wherein the analysis result further comprises additional information extracted from the target medical image, and
the additional information comprises at least one of detailed information on the detected lesion, information on additional findings other than the detected lesion, quality information on the target medical image, or information on metadata for the target medical image (abstract idea – mental process).
8. The system of claim 7, wherein the processor is configured to generate the additional information through visual question answering process, which extracts answers to questions in the target medical image (abstract idea – mental process).
9. The system of claim 8, wherein the processor is configured to:
select a question set related to the target medical image or an analysis result of the target medical image from a question bank having questions; and
extract an answer to each question included in the question set to generate the additional information (abstract idea – mental process).
10. The system of claim 6, wherein the analysis result further comprises quantitative information on an interest object present in the target medical image (abstract idea – mental process).
11. The system of claim 1, wherein the processor is configured to associate a non-text analysis result for the target medical image with the radiology report (abstract idea – mental process).
Claims 1-11 are presented as exemplary claims, but the same analysis applies to the remaining claims. The Examiner notes that claim 20 recites a “computer product” which, according to [0162], “is stored in a non-transitory computer readable storage medium, and instructions cause the processor to execute the operation of the present disclosure” and thus is directed to statutory subject matter (i.e., the scope of claim 20 does not encompass transitory memory or signals per se).
Step 2A Prong One
The broadest reasonable interpretation of these steps includes mental processes such as evaluating medical images and reports to generate a finding. For example, but for the memory and processor language, determining at least one radiology report paired with the at least one similar image as a reference report in the context of this claim is analogous to steps a human would perform by thinking about an appropriate report drawn from memory. Nothing in the claims precludes the limitations identified above as mental processes from practically being performed in the mind.
Dependent claims recite additional subject matter which further narrows or defines the abstract idea embodied in the claims as shown above. For example, selecting a question set related to the target medical image and extracting an answer to each question in claim 9 can be performed in the mind.
Step 2A Prong Two
This judicial exception is not integrated into a practical application. In particular, the additional elements do not integrate the abstract idea into a practical application because they:
amount to mere instructions to apply an exception. For example, the memory and processor of claim 1 amount to invoking computers as a tool to perform the abstract idea (see MPEP 2106.05(f)); and
add insignificant extra-solution activity to the abstract idea. For example, obtaining an analysis result for a target medical image using an artificial intelligence analysis model amounts to mere data gathering and selecting a particular data source or type of data to be manipulated (see MPEP 2106.05(g)).
Dependent claims recite additional subject matter which amounts to limitations consistent with the additional elements in the independent claims, as described in greater detail above. For example, the limitation of claim 4, “wherein the processor is configured to store a final radiology report, edited or confirmed for the radiology report by a user, in a designated location,” amounts to invoking computers as a tool to perform the abstract idea. Other dependent claims, such as claim 3 (obtaining clinical information through user input or interworking with a database of a medical institution), recite additional limitations which add insignificant extra-solution activity to the abstract idea amounting to mere data gathering. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation and do not impose a meaningful limit to integrate the abstract idea into a practical application.
The Examiner notes that the specification asserts that the claimed method can reduce hallucination (e.g., “By using the report of the similar image as the guideline, the system 1 may enhance the reliability and accuracy of the generated radiology report, and reduce errors caused by hallucination, a common issue with the generative artificial intelligence model,” [0055]), but this improvement is set forth in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art) and is thus not sufficient to show an improvement to the functioning of a computer. See MPEP 2106.04(d)(1).
Step 2B
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply an exception, add insignificant extra-solution activity to the abstract idea, and generally link the abstract idea to a particular technological environment or field of use. Additionally, the additional limitations, other than the abstract idea per se, amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields. For example, obtaining an analysis result for a target medical image using an artificial intelligence model amounts to receiving or transmitting data over a network, Symantec, MPEP 2106.05(d)(II)(i); performing repetitive calculations, Flook, MPEP 2106.05(d)(II)(ii); electronic recordkeeping, Alice Corp., MPEP 2106.05(d)(II)(iii); and/or storing and retrieving information in memory, Versata Dev. Group, MPEP 2106.05(d)(II)(iv).
Dependent claims recite additional subject matter which, as discussed above with respect to integration of the abstract idea into a practical application, amounts to invoking computers as a tool to perform the abstract idea and to limitations consistent with the additional elements in the independent claims. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology; their collective functions merely provide conventional computer implementation.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-7, 10-17, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sorenson (US20180137244A1) in view of Chung (US20220084200A1).
Regarding claim 1, Sorenson discloses: A system for generating a radiology report, the system comprising: a memory; and a processor for executing instructions stored in the memory,
wherein the processor is configured (“Referring to FIG. 2, image processing server 110 includes memory 201 (e.g., dynamic random access memory or DRAM) hosting one or more image processing engines 113-115, which may be installed in and loaded from persistent storage device 202 (e.g., hard disks), and executed by one or more processors (not shown),” [0077]) to:
--obtain an analysis result for a target medical image using an artificial intelligence analysis model (“The image processing engines can work independently or in combination with one another when evoked or enabled on a multi-sided platform which utilizes machine learning, deep learning and deterministic statistical methods (engines) running on medical image data, medical metadata and other patient and procedure related content to achieve improved cohorts of images and information that have a higher or lower pre-test probability or confidence of having a specific targeted finding confirmed by the physician or clinician,” [0032]);
--extract at least one similar image to the target medical image from a catalog set comprising medical image-radiology report pairs (“A current image data set can be compared with other similar previously diagnosed patient files to suggest possible findings, similarities or irregularities found in the latest study, in the serial set of current studies or in comparing and analyzing previous related images studies, or in comparing the new study to old studies,” [0003]);
--determine at least one radiology report paired with the at least one similar image as a reference report (“Findings represent any anatomy of interest, measurements, indications, reference materials which may relate, prior studies similar to this study, suggestion of tools, and almost any then-current information or automations tools that are desired to be used in a diagnostic or research interpretation process,” [0033]).
Sorenson suggests a report (e.g., “match these findings with clinical information resources and recommendations to provide assistance and direction to the physician,” [0062]) but does not expressly disclose the report generation as claimed. Chung teaches:
--generate a radiology report for the target medical image based on the analysis result, using the reference report as a guideline (“the processor 120 may input the clinical information to the readout generation model 222 of the diagnosis result generation unit 220 and generate a readout in the form of a sentence as an output value. For example, the processor 120 may generate a readout, such as “a nodule is spread in the middle lobe of the left lung of the lung region, and there is a high possibility in that the nodule is spread to the upper lobe and the lower lobes, which is dangerous,” based on the clinical information that the plurality of nodules is detected for the lung region, the confidence score is 80%, and the nodules are spread in the middle lobe of the left lung, and the degree of risk is high by using the readout generation model 222,” [0119]).
One of ordinary skill in the art before the effective filing date would have been motivated to expand Sorenson’s AI image analysis to include Chung’s structured report generation because using standardized language in a radiology report would make the output more likely to be clinically accurate and understandable by others in the field. See Chung [0098].
Additionally, each element is taught by either Sorenson or Chung. The report-generating features of Chung do not affect the normal functioning of the elements of the claim which are taught by Sorenson, so the results of their combination would have been predictable. Therefore, before the effective filing date of the claimed invention, it would have been obvious to combine the teachings of Chung with those of Sorenson, since the result is merely a combination of old elements yielding predictable results.
Claims 12 and 20 are substantially similar to claim 1 and are rejected with the same reasoning.
Regarding claim 2, Sorenson discloses: wherein the processor is configured to:
--determine presence of findings corresponding to predetermined finding labels in the radiology report; and generate a finding label set with finding labels extracted from the radiology report (“The report may contain status indications for each finding as to whether that particular finding has been adjusted, accepted, replaced or rejected, or made as an original independent finding by the end user. The preferences and actions of the report user can be tracked and measured to improve the quality of the initial report and to require less and less input in order to achieve an accepted report, potentially not needing any input over time,” [0155]).
Sorenson does not expressly disclose but Chung teaches:
--wherein the finding label set is provided as a separate report distinct from the radiology report, or included in a designated section of the radiology report (“the user interface may include: a first area for displaying the detected one or more lesions…a second area for displaying summary information about the detected one or more lesions; and a third area for displaying a readout corresponding to the diagnosis result”).
The motivation to combine is the same as in claim 1.
Regarding claim 3, Sorenson discloses: wherein the processor is configured to: obtain clinical information through user input or interworking with a database of a medical institution; and
revise the radiology report using the clinical information or an analysis result of the clinical information (“The target findings are either held in blind confidence to see if the physician agrees independently, or the findings are presented within the physician interpretation process to evoke responses, and any feedback, adjustments, agreement or disagreement are captured and utilized as performance feedback for the engines which created the suggestions,” [0032]).
Regarding claim 4, Sorenson discloses: wherein the processor is configured to store a final radiology report, edited or confirmed for the radiology report by a user, in a designated location (“When the report is finalized, it can be stored with the resultant labelled images, along with a record of what was looked at and what was not, and which findings were validated by observation and which were not. This report can have the option to show all results, only selected results, no results, or only physician validated results from the system,” [0171]; [0152]).
Regarding claim 5, Sorenson discloses: wherein the processor is configured to: determine whether to add the radiology report to the catalog set; and add a pair of the radiology report and the target medical image to the catalog set based on the determination (“The medical data review system detects agreement or disagreement in the results and findings and sends alerts for further adjudication given the discordant results, or it records the differences and provides these to the owner of the algorithm/engine allowing them to govern whether this feedback is accepted (i.e. whether or not the physician input should be accepted as truth, and whether this study should be included in a new or updated cohort.)” [0047]).
Regarding claim 6, Sorenson discloses: wherein the analysis result comprises lesion information detected in the target medical image (“The image processing engines are configured to perform image processing operations to detect any abnormal findings of the medical images,” [0047]).
Regarding claim 7, Sorenson discloses: wherein the analysis result further comprises additional information extracted from the target medical image, and the additional information comprises at least one of detailed information on the detected lesion, information on additional findings other than the detected lesion, quality information on the target medical image, or information on metadata for the target medical image (“a quality control engine must assess contrast bolus timing and for respiratory motion artifact in order to modify the confidence of the pulmonary embolism detection engine,” [0048]).
Regarding claim 10, Sorenson discloses: wherein the analysis result further comprises quantitative information on an interest object present in the target medical image (“The image quantitative data may be used to manually or semi-automatically determine or measure the size and/or characteristics of a particular body part of the medical image. The image quantitative data may be compared with a corresponding benchmark associated with the type of the image to determine whether a particular medical condition, medical issue, or disease is present or suspected,” [0062]).
Regarding claim 11, Sorenson discloses: wherein the processor is configured to associate a non-text analysis result for the target medical image with the radiology report (“Findings can include but are not limited to derived images, contours, segmentations, overlays, numbers, similarities, quantities and any other values commonly viewed, measured, derived or found in enterprise electronic health records systems, picture archiving and communications systems, content management systems, medical data review systems, laboratory systems and/or advanced visualization systems,” [0038]).
Claims 13-16 are substantially similar to claims 2-5 (respectively) and are rejected with the same reasoning.
Claim 17 is substantially similar to claim 7 and is rejected with the same reasoning.
Claim 19 is substantially similar to claim 11 and is rejected with the same reasoning.
Claims 8, 9, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Sorenson (US20180137244A1) in view of Chung (US20220084200A1) and Bazi (Bazi Y, Rahhal MMA, Bashmal L, Zuair M. Vision–Language Model for Visual Question Answering in Medical Imagery. Bioengineering. 2023; 10(3):380).
Regarding claim 8, Sorenson does not expressly disclose, but Bazi teaches: wherein the processor is configured to generate the additional information through visual question answering process, which extracts answers to questions in the target medical image (“a mature medical visual question answering system (VQA)…we extract image features using the vision transformer (ViT) model, and we embed the question using a textual encoder transformer. Then, we concatenate the resulting visual and textual representations and feed them into a multi-modal decoder for generating the answer in an autoregressive way,” Abstract).
One of ordinary skill in the art before the effective filing date would have been motivated to expand Sorenson and Chung’s AI image analysis with structured report generation to include Bazi’s visual question answering because it would “improve diagnosis by answering clinical questions presented with a medical image” (Abstract).
Additionally, each element is taught by Sorenson, Chung, or Bazi. The VQA features of Bazi do not affect the normal functioning of the elements of the claim which are taught by Sorenson and Chung, so the results of their combination would have been predictable. Therefore, before the effective filing date of the claimed invention, it would have been obvious to combine the teachings of Bazi with those of Sorenson and Chung, since the result is merely a combination of old elements yielding predictable results.
Regarding claim 9, Sorenson does not expressly disclose, but Bazi teaches:
--wherein the processor is configured to: select a question set related to the target medical image or an analysis result of the target medical image from a question bank having questions (question dataset on pages 9-10; “VQA-RAD… The questions are divided into a training set and a test set which contain 3064 and 451 question–answer pairs, respectively. Questions are categorized into 11 categories: abnormality, attribute, modality, organ system, color, counting, object/condition presence, size, plane, positional reasoning, and other. Half of the answers are closed-ended (i.e., yes/no type), while the rest are open-ended with either one-word or short phrase answers,” page 9); and
--extract an answer to each question included in the question set to generate the additional information (“Figure 8 shows four samples of questions answered by our model for images from the PathVQA dataset. The first sample shows that the model correctly predicts the answer and the attention span across the relevant regions in the image. In the second example, although the model cannot provide the correct answer, it can still highlight related regions in the image. The third sample asks about the condition of the “mitral valve”. The question is correctly answered by our model and the corresponding region in the image is highlighted. Finally, the question asked in the fourth example is an open-ended question, regarding the “lumen” present in the image. It can be seen that the model could not obtain the correct answer because open-ended questions are more challenging, and require further developments.” Page 14).
The motivation to combine is the same as in claim 8.
Claim 18 is substantially similar to claim 9 and is rejected with the same reasoning.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Sharma (Sharma, D., Purushotham, S. & Reddy, C.K. MedFuseNet: An attention-based multimodal deep learning model for visual question answering in the medical domain. Sci Rep 11, 19826 (2021). https://doi.org/10.1038/s41598-021-98390-1) discloses that VQA “aim[s] to answer a natural language question associated with an image…our model produces a meaningful sequence of words that answers the input question by utilizing a full-fledged generative decoder,” page 2.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSHUA BLANCHETTE whose telephone number is (571)272-2299. The examiner can normally be reached on Monday - Thursday 7:30AM - 6:00PM, EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shahid Merchant, can be reached on (571) 270-1360. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSHUA B BLANCHETTE/Primary Examiner, Art Unit 3624