DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 20, 2026 has been entered.
Response to Amendment
Applicant’s amendment filed January 20, 2026 has been entered and made of record. Claims 1, 11, 12 and 13 are amended. Claim 14 is cancelled. Claims 1-13 are pending.
Applicant’s remarks in view of the newly presented amendments have been considered and are found persuasive. A new ground of rejection is presented below, relying on the secondary reference U.S. Patent Application Publication No. 2023/0169648 to Kasai to teach the added limitation of wherein each feature derivation model is a trained machine learning model.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-13 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of U.S. Patent Application Publication Nos. 2020/0098108 to Huo et al. (“Huo”) and 2023/0169648 to Kasai et al. (“Kasai”).
With regard to claim 1, Huo discloses a diagnosis support device (Fig. 2, computing device 200) comprising:
a processor (210); and
a memory connected to or built in the processor (220),
wherein the processor is configured to:
acquire a medical image (Fig. 4A, acquisition module 412 acquires image data, paragraph [0098]. See also Fig. 5A, step 541);
extract a plurality of anatomical regions of an organ from the medical image (Fig. 4A, segmentation module 414, paragraphs [0098]-[0099], the segmentation module extracts segments of the target region of the target image of an organ. See also Fig. 5A, step 543);
input an image of a Qth anatomical region of images of the plurality of anatomical regions to a Qth feature amount derivation model of a plurality of feature amount derivation models prepared for the Qth anatomical region, and obtain a Qth feature amount set, wherein Q is 1 or more and is Q1 or less, and Q1 is at least three (Fig. 4A, determination module 416 makes determinations about the morphological characteristic values of target regions or feature amounts, paragraph [0100]. See also Fig. 5A, steps 545 and 547. Huo discloses the segmentation of anatomical regions and sub-regions and discloses that morphological characteristic values or “feature amounts” are calculated including volume, thickness and surface area for the various segmented anatomical regions (paragraphs [0132]-[0136]). The “model” for calculating a volume is different from a “model” for calculating a thickness, which is different from a “model” for determining a surface area. The model for generating a volume or surface area is defined by the formula or equation used to generate the result. Each different calculation of a morphological characteristic value is therefore considered a different “model” and is considered equivalent to the models claimed, since each also generates a feature amount or result as broadly recited);
input feature amounts which are included in a first to a Q1 feature amount sets for a first to a Q1 anatomical region output for each of the plurality of anatomical regions to a disease opinion derivation model (Fig. 4A, assessment module 418 assesses the organ based on the determined morphological characteristic values or feature amounts, paragraphs [0101] and [0132]-[0136]. See also Fig. 5A, step 549), and
output a disease opinion from the disease opinion derivation model; and present the opinion (paragraphs [0118]-[0119] and [0124], an assessment of the target object or organ is determined and output indicating the determined condition of the organ, to assist a doctor in making a diagnosis).
Huo discloses the calculation of characteristic values or feature amounts for different regions or subregions of the brain using models or specific calculations as discussed, but does not explicitly disclose wherein each feature derivation model is a trained machine learning model.
Kasai teaches a similar system for imaging and segmenting parts of a patient’s brain in order to make a diagnosis of possible dementia based on evaluations of multiple different brain portions. Specifically, Kasai teaches that machine learning models are used to output an evaluation index X1 for each part (region of interest) of the subject’s brain (paragraph [0114]):
“…The second evaluation unit 12 outputs the first evaluation index X1 for each part (region of interest) of the subject's brain based on the medical image by using a learned model that is machine-learned so as to output the first evaluation index X1 for each part (region of interest) of the subject's brain… Hereinafter, the Z-score value of the gray matter volume value of the region of interest on the anatomical standard space will be described as the first evaluation index X1. However, the first evaluation index X1 is not limited to the Z-score.”
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a trained machine learning model to derive a feature amount, as taught by Kasai, to calculate the feature amounts or characteristic values taught by Huo, in order to derive accurate representative values of the brain regions.
With regard to claim 2, Huo discloses the diagnosis support device according to claim 1, wherein the feature amount derivation model includes at least one of an auto-encoder, a single-task convolutional neural network for class determination, or a multi-task convolutional neural network for class determination (paragraphs [0072], [0131] and [0153]-[0156], Huo discloses using a convolutional neural network for segmenting and classifying brain sub-structures. The segmentation is used in determining the feature amounts or morphological characteristic values disclosed by Huo).
With regard to claim 3, Huo discloses the diagnosis support device according to claim 1, wherein the processor is configured to:
input an image of one anatomical region of the anatomical regions to the plurality of different feature amount derivation models (Fig. 5D and paragraphs [0132]-[0135], the feature amounts or morphological values are determined based on models of the anatomical regions), and
output the feature amounts from each of the plurality of feature amount derivation models (paragraphs [0132]-[0135] and Fig. 5D, step 513, the morphological values are determined and presented in assessing the condition of the organ).
With regard to claim 4, Huo discloses the diagnosis support device according to claim 1, wherein the processor is configured to:
input disease-related information related to the disease to the disease opinion derivation model in addition to the plurality of feature amounts (paragraphs [0004], [0124], [0128], [0138]-[0140], the disease diagnosis parameters are derived from sample images of healthy and non-healthy organs).
With regard to claim 5, Huo discloses the diagnosis support device according to claim 1, wherein the disease opinion derivation model is configured by any one method of a neural network, a support vector machine, or boosting (paragraphs [0072] and [0153], Huo discloses a convolutional neural network for segmenting and classifying brain sub-structures).
With regard to claim 6, Huo discloses the diagnosis support device according to claim 1, wherein the processor is configured to:
perform normalization processing of matching the acquired medical image with a reference medical image prior to extraction of the anatomical regions (paragraphs [0112], [0131] and [0171], Huo discloses a template matching algorithm for segmenting the target image regions, thus matching the target region with a known reference medical image. Huo discloses shape normalization at paragraph [0036] to align the image with the reference image. See also Fig. 4D, where block 406 includes template matching. This is the start of performing shape normalization. Huo further discloses block 408, a morphological determination module, which is where the shape normalization occurs (paragraphs [0114]-[0115], [0133]). Huo effectively describes morphing the brain image to match the reference image, which is shape normalization).
With regard to claim 7, Huo discloses the diagnosis support device according to claim 1, wherein the organ is a brain and the disease is dementia (paragraphs [0003], [0004] and [0124], Huo discloses determining dementia from brain images).
With regard to claim 8, Huo discloses the diagnosis support device according to claim 7, wherein the plurality of anatomical regions include at least one of a hippocampus or a temporal lobe (paragraphs [0113], [0128], [0134], [0172] and [0184], Huo discloses brain sub-regions including both the hippocampus and temporal lobe).
With regard to claim 9, Huo discloses the diagnosis support device according to claim 7, wherein the processor is configured to:
input disease-related information related to the disease to the disease opinion derivation model in addition to the plurality of feature amounts, wherein the disease-related information includes at least one of a volume of the anatomical region, a score of a dementia test, a test result of a genetic test, a test result of a spinal fluid test, or a test result of a blood test (paragraphs [0017]-[0018] and [0133]-[0134], volume of brain regions; paragraph [0005], score of a scale test).
With regard to claim 10, Huo discloses the diagnosis support device according to claim 7, wherein the processor is configured to:
input disease-related information related to the disease to the disease opinion derivation model in addition to the plurality of feature amounts (paragraphs [0004], [0124], [0128], [0138]-[0140], the disease diagnosis parameters are derived from sample images of healthy and non-healthy organs),
wherein the plurality of anatomical regions include at least one of a hippocampus or a temporal lobe (paragraphs [0113], [0128], [0134], [0172] and [0184], Huo discloses brain sub-regions including both the hippocampus and temporal lobe), and
the disease-related information includes at least one of a volume of the anatomical region, a score of a dementia test, a test result of a genetic test, a test result of a spinal fluid test, or a test result of a blood test (paragraphs [0017]-[0018] and [0133]-[0134], volume; paragraph [0005], score of a scale test).
With regard to claim 11, the discussion of claim 1 applies.
With regard to claim 12, the discussion of claim 1 applies. Huo discloses a computer program (paragraphs [0066], [0083]-[0087]).
With regard to claim 13, the discussion of claim 1 applies. Huo discloses dementia diagnosis based on brain images (paragraphs [0003], [0004] and [0124]).
Huo discloses a dementia diagnosis support method causing a computer that includes a processor and a memory connected to or built in the processor (Fig. 2, computing device 200, processor 210 and memory 220) to execute a process comprising:
acquiring a medical image in which a brain appears (Fig. 4A, acquisition module 412 acquires image data, paragraph [0098]. See also Fig. 5A, step 541. See also Figs. 8, 9 and 10);
extracting a plurality of anatomical regions of the brain from the medical image (Fig. 4A, segmentation module 414, paragraphs [0098]-[0099], and [0113]-[0114] the segmentation module extracts segments of the target region of the target image of an organ, specifically a brain and brain segments. See also Fig. 5A, step 543);
inputting an image of a Qth anatomical region of images of the plurality of anatomical regions to a Qth feature amount derivation model of a plurality of feature amount derivation models prepared for the Qth anatomical region, and obtaining a Qth feature amount set, wherein Q is 1 or more and is Q1 or less, and Q1 is at least three (Fig. 4A, determination module 416 makes determinations about the morphological characteristic values of target regions or feature amounts, paragraph [0100]. See also Fig. 5A, steps 545 and 547. Paragraph [0128] describes how brain segments are used to determine brain disease such as dementia. Huo discloses the segmentation of anatomical regions and sub-regions and discloses that morphological characteristic values or “feature amounts” are calculated including volume, thickness and surface area for the various segmented anatomical regions (paragraphs [0132]-[0136]). The “model” for calculating a volume is different from a “model” for calculating a thickness, which is different from a “model” for determining a surface area. The model for generating a volume or surface area is defined by the formula or equation used to generate the result. Each different calculation of a morphological characteristic value is therefore considered a different “model” and is considered equivalent to the models claimed, since each also generates a feature amount or result as broadly recited);
inputting feature amounts which are included in a first to a Q1 feature amount sets for a first to a Q1 anatomical region to a dementia opinion derivation model (Fig. 4A, assessment module 418 assesses the organ based on the determined morphological characteristic values, paragraph [0101]. See also Fig. 5A, step 549. Paragraph [0128] describes how brain segments are used to determine brain disease such as dementia), and
outputting a dementia opinion from the dementia opinion derivation model; and presenting the opinion (paragraphs [0118]-[0119] and [0124], an assessment of the target object or organ is determined and output indicating the determined condition of the organ, to assist a doctor in making a diagnosis. Paragraph [0128] describes how brain segments are used to determine brain disease such as dementia).
Huo discloses the calculation of characteristic values or feature amounts for different regions or subregions of the brain using models or specific calculations as discussed, but does not explicitly disclose wherein each feature derivation model is a trained machine learning model.
Kasai teaches a similar system for imaging and segmenting parts of a patient’s brain in order to make a diagnosis of possible dementia based on evaluations of multiple different brain portions. Specifically, Kasai teaches that machine learning models are used to output an evaluation index X1 for each part (region of interest) of the subject’s brain (paragraph [0114]):
“…The second evaluation unit 12 outputs the first evaluation index X1 for each part (region of interest) of the subject's brain based on the medical image by using a learned model that is machine-learned so as to output the first evaluation index X1 for each part (region of interest) of the subject's brain… Hereinafter, the Z-score value of the gray matter volume value of the region of interest on the anatomical standard space will be described as the first evaluation index X1. However, the first evaluation index X1 is not limited to the Z-score.”
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a trained machine learning model to derive a feature amount, as taught by Kasai, to calculate the feature amounts or characteristic values taught by Huo, in order to derive accurate representative values of the brain regions.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WESLEY J TUCKER whose telephone number is (571)272-7427. The examiner can normally be reached 9AM-5PM Monday-Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JOHN VILLECCO can be reached at 571-272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WESLEY J TUCKER/Primary Examiner, Art Unit 2661