Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/8/2025 has been entered.
DETAILED ACTION
Claims 1-5, 7-9, 11-17 and 21 are pending in this application [12/8/2025].
Claims 1, 7-8, 11-14 and 17 have been amended [12/8/2025].
Claims 6, 10, and 19-20 have been cancelled [12/8/2025].
Claim 21 has been newly added [12/8/2025].
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 12/8/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments
Applicant’s arguments, filed 12/8/2025, with respect to the claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument, based on the newly applied reference Yao et al. (US-2018/0342060).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-5, 7-9, 12-13, 17, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Paik et al. (US-2024/0177836) in view of Yao et al. (US-2018/0342060).
As to Claim 1, Paik teaches ‘A computerized method, performed by a computing system having one or more hardware computer processors and one or more non-transitory computer readable storage device storing software instructions executable by the computing system to perform the computerized method comprising: for each of a plurality of medical images: accessing the medical image; applying a segmentation algorithm to the medical image to determine a plurality of anatomical segments indicated in the medical image, wherein each of the anatomical segments is associated with an anatomical component or an anatomical system; displaying the medical image on a display device of a user [Figs 1, 8, 43A-C, par 0008, 0010, 0022, 0080, 0110, 0118, 0123, 0369 – receiving a medical image having AI algorithms that are designed to segment specific anatomic regions/structures, including lungs, kidney, heart, and bone, as well as a part of an organ system, for example, the cardiovascular system, the skeletal system, the gastrointestinal system, the endocrine system, or the nervous system, to be labeled using an AI algorithm, which is visualized to the user]’.
Paik does not disclose expressly ‘for each of the plurality of anatomical segments, receiving user input indicating that either: the anatomical segment shows a finding, or the anatomical segment does not show a finding; storing, for each anatomical segment, metadata comprising: (i) segment boundaries, (ii) anatomical identifiers, and (iii) user-indicated finding status, in a training data set; and for each of the plurality of anatomical segments: training a segment-specific diagnostic model to detect findings in the anatomical segment of medical images not included in the plurality of medical images, wherein the segment-specific diagnostic model: accesses the training data set to identify a first set of medical images with the anatomical segment identified as no finding and a second set of medical images with the anatomical segment identified as finding detected, and trains the segment-specific diagnostic model based on differences between the first and second sets of medical images’.
Yao teaches ‘for each of the plurality of anatomical segments, receiving user input indicating that either: the anatomical segment shows a finding, or the anatomical segment does not show a finding [Figs 8A, 8L-M, 8T-Y, 10Q, par 0050, 0055, 0058-0059, 0082, 0084, 0090, 0117, 0129, 0150-0152, 0154-0158, 0161, 0203, 0268 – user inputting classified data of anatomical regions including indicating a confirmation or denial of findings within the region or inputting a new finding]; storing, for each anatomical segment, metadata comprising: (i) segment boundaries, (ii) anatomical identifiers, and (iii) user-indicated finding status, in a training data set [par 0050, 0055, 0058-0059, 0082, 0094, 0090, 0117, 0129, 0150-0152, 0154-0158, 0203, 0268 – medical scan analysis function including training set data such as metadata of a medical scan taken of anatomical region identified with identifiers (chest or other anatomical region), approved/denied findings of anatomical regions displayed (cardiomegaly for heart, effusion for chest cavities, emphysema for lungs)]’.
Paik in the proposed combination of Yao teaches ‘for each of the plurality of anatomical segments: training a segment-specific diagnostic model to detect findings in the anatomical segment of medical images not included in the plurality of medical images, wherein the segment-specific diagnostic model: accesses the training data set to identify a first set of medical images with the anatomical segment identified as no finding and a second set of medical images with the anatomical segment identified as finding detected, and trains the segment-specific diagnostic model based on differences between the first and second sets of medical images [Fig 8, par 0022, 0126, 0153, 0186 – user selects a portion of a medical image shown on display to identify a feature within the portion of the medical image comprising an anatomical portion of a subject, where a user wishing to analyze the liver, for example, selects the liver so the system extracts clinically relevant information, such as findings or impressions regarding the medical image of the liver, for active learning based on user interaction data fed back into the algorithms’ training set when identifying and labeling anatomic variants, including lesions, with the user accepting or rejecting potential findings or lesions presented to the user]’.
Paik and Yao are analogous art because they are from the same field of endeavor, namely image processing systems for medical images. Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to include a medical scan analysis function, as taught by Yao. The motivation for doing so would have been to properly identify normal and abnormal anatomical regions in a medical review system. Therefore, it would have been obvious to combine Yao with Paik to obtain the invention as specified in claim 1.
As to Claim 2, Paik teaches ‘further comprising: accessing a medical image not included in the plurality of medical images; applying the segmentation algorithm to the medical image to determine the plurality of anatomical segments of patient anatomy indicated in the medical image; and for each of the anatomical segments identified in the medical image: selecting a segment-specific diagnostic model associated with the anatomical segment; applying the segment-specific diagnostic model to at least portions of the medical image associated with the anatomical segment, wherein the segment-specific diagnostic model provides an indication of whether the anatomical segment is more likely normal or abnormal [Fig 8, par 0022, 0080, 0126, 0153, 0172, 0185-0186 – user selects a portion of a medical image shown on display to identify a feature within the portion of the medical image comprising an anatomical portion of a subject, where a user wishing to analyze the liver, for example, selects the liver so the system extracts clinically relevant information, such as findings or impressions regarding the medical image of the liver, for active learning based on user interaction data fed back into the algorithms’ training set when identifying and labeling anatomic variants, including lesions, with the user accepting or rejecting potential findings or lesions presented to the user]’.
As to Claim 3, Paik teaches ‘wherein the anatomical components include one or more of: lungs, vasculature, cardiac, mediastinum, pleura, or bone [par 0110 – anatomy that can be segmented and/or labeled including lung, heart, lymph, thyroid, spleen, adrenal gland, colon, rectum, bladder, ovaries, skin, liver, spine, bone, pancreas, cervix, uterus, and other anatomical regions]’.
As to Claim 4, Paik teaches ‘wherein the anatomical systems include one or more of: digestive system, musculoskeletal system, nervous system, endocrine system, reproductive system, urinary system, or immune system [par 0110 – the image includes an organ system, for example, the cardiovascular system, the skeletal system, the gastrointestinal system, endocrine system, or the nervous system]’.
As to Claim 5, Paik teaches ‘wherein the anatomical segments are associated with corresponding sections of a medical report [Fig. 4, par 0009, 0145 – when user selects an image region of interest, a corresponding AI-generated text in the report can be illuminated or highlighted in a distinctive manner in addition to a medical image evaluated to detect one or more segmented features in the image can be automatically incorporated into a medical report including finding within the segmented features]’.
As to Claim 7, Paik teaches ‘wherein the plurality of anatomical segments are stored in data structure in association with a type of the medical image [par 0252, 0311-0312 – storing and retrieving of baseline datasets, as well as data structures of medical images or medical imaging data of anatomic segmentations, within a database]’.
As to Claim 8, Paik teaches ‘further comprising: wherein the segmentation algorithm accesses a medical report associated with the medical image to determine whether there is a finding or no finding for each of the anatomical segments indicating in the medical report [Fig. 4, par 0009, 0145 – when user selects an image region of interest, a corresponding AI-generated text in the report can be illuminated or highlighted in a distinctive manner in addition to a medical image evaluated to detect one or more segmented features in the image can be automatically incorporated into a medical report including finding within the segmented features]’.
As to Claim 9, Paik teaches ‘wherein said determining whether there is a finding or no finding for each of the anatomical segments indicating in the medical report is based at least partly on natural language processing of textual descriptions associated with respective anatomical segments [par 0010, 0017, 0253, 0369 – quality metric assessment comprises using natural language processing of said report to generate a list of one or more findings and analyzing said list of one or more findings to generate one or more quality metrics]’.
As to Claim 12, Paik teaches ‘wherein the segment-specific diagnostic models are trained using one or more artificial intelligence algorithms to classify items in a medical report as finding or no finding [Fig. 5, par 0153 – a medical image is displayed with AI-assisted interpretation and reporting of findings in a medical image presented for a user to confirm for insertion into the medical report]’.
As to Claim 13, Paik teaches ‘further comprising: displaying, in a user interface, an indication of any anatomical segments with findings [Fig. 5, par 0153 – a medical image is displayed with a finding presented for a user to confirm for insertion into the medical report]’.
As to Claim 17, Paik teaches ‘wherein the segment-specific diagnostic model determining indications of finding vs no finding based on one or more of an indication or a clinical question [par 0123, 0138 – an image query (“what is this?”), including that if a user saw an area of the image with a possible abnormality, he/she can simply point at or look at the region to query a list of possible findings associated with that area of the image]’.
As to Claim 21, Paik teaches ‘A computing system comprising: a hardware computer processor; and a non-transitory computer readable medium having software instructions stored thereon, the software instructions executable by the hardware computer processor to cause the computing system to perform operations [Abstract, Fig 15, par 0314 – system includes processor and memory] comprising: accessing the medical image; applying a segmentation algorithm to the medical image to determine a plurality of anatomical segments indicated in the medical image, wherein each of the anatomical segments is associated with an anatomical component or an anatomical system; displaying the medical image on a display device of a user [Paik et al. (US-2024/0177836): Figs 1, 8, 43A-C, par 0008, 0010, 0022, 0080, 0110, 0118, 0123, 0369 – receiving a medical image having AI algorithms that are designed to segment specific anatomic regions/structures, including lungs, kidney, heart, and bone, as well as a part of an organ system, for example, the cardiovascular system, the skeletal system, the gastrointestinal system, the endocrine system, or the nervous system, to be labeled using an AI algorithm, which is visualized to the user]’.
Paik does not disclose expressly ‘for each of the plurality of anatomical segments, receiving user input indicating that either: the anatomical segment shows a finding, or the anatomical segment does not show a finding; storing, for each anatomical segment, metadata comprising: (i) segment boundaries, (ii) anatomical identifiers, and (iii) user-indicated finding status, in a training data set; and for each of the plurality of anatomical segments: training a segment-specific diagnostic model to detect findings in the anatomical segment of medical images not included in the plurality of medical images, wherein the segment-specific diagnostic model: accesses the training data set to identify a first set of medical images with the anatomical segment identified as no finding and a second set of medical images with the anatomical segment identified as finding detected, and trains the segment-specific diagnostic model based on differences between the first and second sets of medical images’.
Yao teaches ‘for each of the plurality of anatomical segments, receiving user input indicating that either: the anatomical segment shows a finding, or the anatomical segment does not show a finding [Figs 8A, 8L-M, 8T-Y, 10Q, par 0050-0051, 0055, 0058-0059, 0082, 0084, 0090, 0117, 0129, 0150-0152, 0154-0158, 0161, 0203, 0268 – user inputting classified data of anatomical regions including indicating a confirmation or denial of findings within the region or inputting a new finding]; storing, for each anatomical segment, metadata comprising: (i) segment boundaries, (ii) anatomical identifiers, and (iii) user-indicated finding status, in a training data set [par 0050, 0055, 0058-0059, 0082, 0094, 0090, 0117, 0129, 0150-0152, 0154-0158, 0203, 0268 – medical scan analysis function including training set data such as metadata of a medical scan taken of anatomical region identified with identifiers (chest or other anatomical region), approved/denied findings of anatomical regions displayed (cardiomegaly for heart, effusion for chest cavities, emphysema for lungs)]’.
Paik in the proposed combination of Yao teaches ‘for each of the plurality of anatomical segments: training a segment-specific diagnostic model to detect findings in the anatomical segment of medical images not included in the plurality of medical images, wherein the segment-specific diagnostic model: accesses the training data set to identify a first set of medical images with the anatomical segment identified as no finding and a second set of medical images with the anatomical segment identified as finding detected, and trains the segment-specific diagnostic model based on differences between the first and second sets of medical images [Fig 8, par 0022, 0126, 0153, 0186 – user selects a portion of a medical image shown on display to identify a feature within the portion of the medical image comprising an anatomical portion of a subject, where a user wishing to analyze the liver, for example, selects the liver so the system extracts clinically relevant information, such as findings or impressions regarding the medical image of the liver, for active learning based on user interaction data fed back into the algorithms’ training set when identifying and labeling anatomic variants, including lesions, with the user accepting or rejecting potential findings or lesions presented to the user]’.
Paik and Yao are analogous art because they are from the same field of endeavor, namely image processing systems for medical images. Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to include a medical scan analysis function, as taught by Yao. The motivation for doing so would have been to properly identify normal and abnormal anatomical regions in a medical review system. Therefore, it would have been obvious to combine Yao with Paik to obtain the invention as specified in claim 21.
Claim(s) 11 and 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Paik et al. in view of Yao et al. and further in view of Sawarkar et al. (US-2024/0087724).
As to Claim 11, Paik in view of Yao teaches all of the claimed elements/features as recited in independent claim 1. Paik in view of Yao does not expressly disclose ‘wherein the segment-specific diagnostic models are trained using itemized reports wherein at least one report item corresponds to an anatomical segment defined in an image’.
However, Paik does teach that each of said plurality of features corresponds to an anatomical structure, a tissue type, a tumor or tissue abnormality, a contrast agent, or any combination thereof; that said plurality of features comprises one or more of nerve, blood vessel, lymphatic vessel, organ, joint, bone, muscle, cartilage, lymph, blood, adipose, ligament, or tendon; and that said medical report comprises one or more sentences or phrases describing or assessing said at least one feature [par 0010].
Sawarkar, however, teaches that the GUI includes an interactive checklist prepopulated from a checklist database, which is displayed along with the current medical image, such as nodules, tumors, lesions or other abnormalities in anatomy/organ structures, where the interactive checklist provides line items corresponding to items to be considered by the user and associated check boxes that are physically checked by the user via the GUI upon completion of each of the line items [Fig 3 (350), par 0033, 0040-0041, 0049-0051].
Paik in view of Sawarkar teaches ‘wherein the segment-specific diagnostic models are trained using itemized reports wherein at least one report item corresponds to an anatomical segment defined in an image [Paik: par 0010; Sawarkar: Fig 3 (350), par 0033, 0040-0041, 0049-0051]’.
Paik in view of Yao is analogous art with Sawarkar because they are from the same field of endeavor, namely image processing systems for medical images. Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to include an interactive checklist, as taught by Sawarkar. The motivation for doing so would have been to help a user observe and examine organ structure and anatomy contained in a current medical image, to detect one or more regions of interest in the current medical image, such as nodules, tumors, lesions or other abnormalities, to measure various features such as tissue and lesion volume in order to measure growth or decrease in lesion (or other abnormality) size in response to treatment, and to assist with treatment planning prior to radiation therapy, radiation dose calculation and the like. Image segmentation may also identify atypical presentations, rare cases and outliers, for example, which are expected to be monitored more closely, since the prevalence of under-reading is higher in these cases. Therefore, it would have been obvious to combine Sawarkar with Paik in view of Yao to obtain the invention as specified in claim 11.
As to Claim 14, Sawarkar teaches ‘further comprising: prepopulating an itemized report with the indications of findings and associated anatomical segments [Fig 3 (350), par 0033, 0040-0041, 0049-0051 – the GUI includes an interactive checklist prepopulated from a checklist database, which is displayed along with the current medical image, where the interactive checklist provides line items corresponding to items to be considered by the user and associated check boxes that are physically checked by the user via the GUI upon completion of each of the line items]’.
Paik in view of Yao is analogous art with Sawarkar because they are from the same field of endeavor, namely image processing systems for medical images. Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to include an interactive checklist, as taught by Sawarkar. The motivation for doing so would have been to help a user observe and examine organ structure and anatomy contained in a current medical image, to detect one or more regions of interest in the current medical image, such as nodules, tumors, lesions or other abnormalities, to measure various features such as tissue and lesion volume in order to measure growth or decrease in lesion (or other abnormality) size in response to treatment, and to assist with treatment planning prior to radiation therapy, radiation dose calculation and the like. Image segmentation may also identify atypical presentations, rare cases and outliers, for example, which are expected to be monitored more closely, since the prevalence of under-reading is higher in these cases. Therefore, it would have been obvious to combine Sawarkar with Paik in view of Yao to obtain the invention as specified in claim 14.
As to Claim 15, Sawarkar teaches ‘wherein the anatomical segments associated with findings are indicated in the report [par 0041 – image segmentation helps the user to observe and examine organ structure and anatomy contained in the current medical image, to detect one or more regions of interest in the current medical image, such as nodules, tumors, lesions or other abnormalities, to measure various features such as tissue and lesion volume in order to measure growth or decrease in lesion (or other abnormality) size in response to treatment, and to assist with treatment planning prior to radiation therapy, radiation dose calculation and the like]’.
Paik in view of Yao is analogous art with Sawarkar because they are from the same field of endeavor, namely image processing systems for medical images. Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to include an interactive checklist, as taught by Sawarkar. The motivation for doing so would have been to help a user observe and examine organ structure and anatomy contained in a current medical image, to detect one or more regions of interest in the current medical image, such as nodules, tumors, lesions or other abnormalities, to measure various features such as tissue and lesion volume in order to measure growth or decrease in lesion (or other abnormality) size in response to treatment, and to assist with treatment planning prior to radiation therapy, radiation dose calculation and the like. Image segmentation may also identify atypical presentations, rare cases and outliers, for example, which are expected to be monitored more closely, since the prevalence of under-reading is higher in these cases. Therefore, it would have been obvious to combine Sawarkar with Paik in view of Yao to obtain the invention as specified in claim 15.
As to Claim 16, Paik in the proposed combination teaches ‘wherein the anatomical segments associated with findings include a link or reference to a medical image associated with the finding [par 0009, 0080-0081, 0140, 0144-0147 – accepted findings can be automatically incorporated into a medical report, and the user or subsequent consumers of the report may browse the report with the selected finding text linked back to the locations of the finding in the image within a pre-populated database]’.
Conclusion
The prior art made of record includes the following:
a. US Publication No. 2018/0342060
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MIYA J CATO whose telephone number is (571)270-3954. The examiner can normally be reached M-F, 8:30-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Akwasi Sarpong can be reached at 571.270.3438. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MIYA J CATO/Primary Examiner, Art Unit 2681