DETAILED ACTION
Contents
Notice of Pre-AIA or AIA Status
Response to Amendment
Response to Arguments
Claim Rejections - 35 USC § 103
Allowable Subject Matter
Conclusion
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This action is responsive to applicant’s amendment and remarks received on 10/27/25. Claims 1-8 are currently pending.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1, 8 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 8 are rejected under 35 U.S.C. 103 as being unpatentable over Park et al (US 9,536,316 B2) in view of Soberanis-Mukul et al (ML: “Uncertainty-based graph convolutional networks for organ segmentation refinement”).
Regarding claim 1, Park discloses a medical image processing apparatus comprising: processing circuitry (see col. 10, line 55 to col. 11, line 17; segmenters can be implemented in hardware components such as processors) configured to generate, from a medical image, a plurality of feature target regions for determining a target of image segmentation (see col. 4, lines 45-67; The image obtainer 110 receives a medical image and transmits the image to the first segmenter 120. The first segmenter 120 generates a candidate lesion list, including one or more candidate lesions of the medical image including a region suspected to include a lesion. The first segmenter 120 collects examination information, including, but not limited to, a name associated with the medical image, such as, a liver cancer examination or a colorectal cancer examination, purpose of acquiring the medical image, such as, surgery or a general health check-up, or analysis to be performed of the medical image. Then, the first segmenter 120 determines at least one candidate lesion based on the examination information to generate the candidate lesion list); and perform, in the target region, the image segmentation on the target (see abstract; apparatus and method are provided including a first segmenter and a second segmenter. The first segmenter is configured to generate a first segmentation result from a medical image using a first segmentation parameter for a candidate lesion. The second segmenter is configured to determine a target lesion to segment from among the candidate lesions based on the first segmentation result, and generate a second segmentation result using a second segmentation parameter to segment the target lesion).
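For orientation only, Park's two-stage arrangement (a first segmenter proposing candidate lesions, a second segmenter selecting and refining a target) can be sketched in outline. The names, the `Candidate` structure, and the highest-score selection rule below are hypothetical illustrations, not Park's disclosed implementation.

```python
from dataclasses import dataclass
from typing import Callable, List
import numpy as np

@dataclass
class Candidate:
    mask: np.ndarray   # coarse first-pass segmentation of one candidate lesion
    score: float       # hypothetical suspicion score for ranking candidates

def two_stage_segment(image: np.ndarray,
                      first_seg: Callable[[np.ndarray], List[Candidate]],
                      second_seg: Callable[[np.ndarray, Candidate], np.ndarray]) -> np.ndarray:
    # First segmenter: generate a candidate lesion list from the medical image.
    candidates = first_seg(image)
    # Second segmenter: determine the target lesion among the candidates
    # (here, simply the highest-scoring one) and refine its segmentation.
    target = max(candidates, key=lambda c: c.score)
    return second_seg(image, target)
```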
Park does not teach calculate, with respect to each of positions in the medical image, a feature reliability degree using the plurality of feature target regions; determine a target region indicating the region in which the target of image segmentation is present, based on the calculated feature reliability degree.
Soberanis-Mukul, in the same field of endeavor, teaches calculate, with respect to each of positions in the medical image, a feature reliability degree using the plurality of feature target regions (see sections 2, 2.1; Overview: "Consider an input volume V with V(x) the intensity value at the voxel position x ∈ R3; consider also, a trained CNN g(V(x); θ) with parameters θ ... The model uncertainty U is given by the entropy, computed as U(x) = H(x) = −Σ_{c=1}^{M} P(x)_c log P(x)_c ... MCDO uses the dropout layers of the network in inference time, and perform T stochastic passes on the network to approximate the output of a Bayesian neural network. Following this method, we get the model's expectation"); determine a target region indicating the region in which the target of image segmentation is present, based on the calculated feature reliability degree (see sections 2, 2.1, 2.2; "we define the potential incorrect elements by applying a binary threshold on the entropy volume Ub(x) = U(x) > τ, where the uncertainty threshold τ controls the entropy necessary to consider a voxel x ∈ Y as uncertain ... The uncertainty is also used to define a 3-D shape-adapted region of interest (ROI) around the organ ... We define our working region as ROI(x) = dilation(Ub(x)) ∪ Eb(x) with Eb the expectation binarized by a threshold of 0.5. Since the entropy is usually high in boundary regions, including the dilated Ub ensures that the ROI is bigger enough to contain the organ. Also, this allows us to include high confidence background predictions (Y = 0) for training the GCN. Including the expectation in the ROI give us high ... The voxels x ∈ ROI define the nodes for G. Each node is represented by a feature vector containing intensity V(x), expectation E(x), and entropy U(x). Finally, we labeled each node in the graph according to its uncertainty level using the next rule: ...").
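The uncertainty computation quoted above can be illustrated with a minimal numerical sketch. The function names are hypothetical, the stochastic passes are assumed to be supplied as precomputed probability volumes, and the paper's dilation of Ub is omitted for brevity; this is not the reference's code.

```python
import numpy as np

def mc_dropout_uncertainty(probs_T: np.ndarray):
    """probs_T: (T, C, ...) class probabilities from T stochastic forward
    passes with dropout kept active at inference (MCDO)."""
    expectation = probs_T.mean(axis=0)   # model expectation E(x) over T passes
    eps = 1e-12                          # numerical safety for the logarithm
    # Entropy U(x) = -sum_c P(x)_c log P(x)_c, summed over the class axis.
    entropy = -np.sum(expectation * np.log(expectation + eps), axis=0)
    return expectation, entropy

def working_region(expectation: np.ndarray, entropy: np.ndarray, tau: float):
    """Ub = uncertain voxels (entropy above tau); Eb = foreground expectation
    binarized at 0.5; the working region is their union (the paper
    additionally dilates Ub, which is omitted in this sketch)."""
    Ub = entropy > tau
    Eb = expectation[1] > 0.5
    return Ub | Eb
```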
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify Park to utilize the cited limitations as suggested by Soberanis-Mukul. The suggestion/motivation for doing so would have been to outperform other refinement methods (see abstract). Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface, and/or programming techniques, without changing a "fundamental" operating principle of Park, while the teaching of Soberanis-Mukul continues to perform the same function as originally taught prior to being combined, in order to produce a repeatable and predictable result. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.
Regarding claim 8, the claim is analyzed as a method that implements the limitations of claim 1 (see rejection of claim 1).
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Park et al (US 9,536,316 B2) in view of Soberanis-Mukul et al (ML: "Uncertainty-based graph convolutional networks for organ segmentation refinement"), and further in view of Hansis et al (Int J Cars: "Landmark constellation models for medical image content identification and localization").
Regarding claim 2, the combination of Park and Soberanis-Mukul teaches all elements as mentioned above in claim 1, but does not expressly teach detect, from the medical image, position information of a plurality of feature points including at least a first feature point and a second feature point; generate a first feature target region based on the first feature point and generate a second feature target region based on the second feature point; and determine the target region, based on the first feature target region and the second feature target region.
Hansis, in the same field of endeavor, teaches detect, from the medical image, position information of a plurality of feature points including at least a first feature point and a second feature point (see abstract, intro; For each anatomical region, we train a constellation model indicating the mean relative locations and location variability of a set of landmarks.); generate a first feature target region based on the first feature point and generate a second feature target region based on the second feature point (see abstract, methods; each landmark is detected and the set of detected points compute a target region through point based registration); and determine the target region, based on the first feature target region and the second feature target region (see abstract, methods; This model is registered to the landmarks detected in a test image via point-based registration, using closed-form solutions; mean weighted residual registration error serves as a confidence measure).
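The closed-form point-based registration on which Hansis relies can be sketched with the standard Kabsch/Procrustes solution, with the mean residual serving as the confidence measure the abstract describes. This is an illustrative reconstruction under that assumption, not Hansis's actual implementation.

```python
import numpy as np

def register_landmarks(model_pts: np.ndarray, detected_pts: np.ndarray):
    """Closed-form rigid registration of a landmark constellation model to
    detected landmarks (Kabsch algorithm); returns rotation R, translation t,
    and the mean residual error, usable as a confidence measure."""
    mu_m = model_pts.mean(axis=0)
    mu_d = detected_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (model_pts - mu_m).T @ (detected_pts - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # Correction factor guards against a reflection instead of a rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (model_pts.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_m
    # Mean residual of the registered model points against the detections.
    residual = np.linalg.norm((model_pts @ R.T + t) - detected_pts, axis=1).mean()
    return R, t, residual
```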
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the combination of Park and Soberanis-Mukul to utilize the cited limitations as suggested by Hansis. The suggestion/motivation for doing so would have been to improve localization performance (see abstract). Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface, and/or programming techniques, without changing a "fundamental" operating principle of Park and Soberanis-Mukul, while the teaching of Hansis continues to perform the same function as originally taught prior to being combined, in order to produce a repeatable and predictable result. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Park et al (US 9,536,316 B2) in view of Soberanis-Mukul et al (ML: "Uncertainty-based graph convolutional networks for organ segmentation refinement") and Hansis et al (Int J Cars: "Landmark constellation models for medical image content identification and localization"), and further in view of De Vos et al (IEEE: "ConvNet-Based Localization of Anatomical Structures in 3-D Medical Images").
Regarding claim 3, the combination of Park, Soberanis-Mukul, and Hansis teaches all elements as mentioned above in claim 2, but does not expressly teach determine, for each of the first feature point and the second feature point, a position, a shape, and a size of the feature target region associated with the feature point, based on a position of the feature point and relative positions between the feature point and the target.
De Vos, in the same field of endeavor, teaches determine, for each of the first feature point and the second feature point, a position, a shape, and a size of the feature target region associated with the feature point, based on a position of the feature point and relative positions between the feature point and the target (see section 2, 3).
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the combination of Park, Soberanis-Mukul, and Hansis to utilize the cited limitations as suggested by De Vos. The suggestion/motivation for doing so would have been to enable a more robust and accurate localization method (see abstract). Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface, and/or programming techniques, without changing a "fundamental" operating principle of Park, Soberanis-Mukul, and Hansis, while the teaching of De Vos continues to perform the same function as originally taught prior to being combined, in order to produce a repeatable and predictable result. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Park et al (US 9,536,316 B2) in view of Soberanis-Mukul et al (ML: "Uncertainty-based graph convolutional networks for organ segmentation refinement") and Hansis et al (Int J Cars: "Landmark constellation models for medical image content identification and localization"), and further in view of McEvoy et al (Genitourinary Imaging: "Preoperative Prostate MRI: A Road Map for Surgery").
Regarding claim 6, the combination of Park, Soberanis-Mukul, and Hansis teaches all elements as mentioned above in claim 2, but does not expressly teach that the target is a prostate, and that the plurality of feature points are a pubic bone point, a femur point, a pelvis point, a urethra entrance point, a prostate center point, and a prostate apex point.
McEvoy, in the same field of endeavor, teaches that the target is a prostate, and that the plurality of feature points are a pubic bone point, a femur point, a pelvis point, a urethra entrance point, a prostate center point, and a prostate apex point (see pg. 383-385).
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the combination of Park, Soberanis-Mukul, and Hansis to utilize the cited limitations as suggested by McEvoy. The suggestion/motivation for doing so would have been to increase radiologists' confidence in reporting relevant imaging findings (see pg. 383). Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface, and/or programming techniques, without changing a "fundamental" operating principle of Park, Soberanis-Mukul, and Hansis, while the teaching of McEvoy continues to perform the same function as originally taught prior to being combined, in order to produce a repeatable and predictable result. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Park et al (US 9,536,316 B2) in view of Soberanis-Mukul et al (ML: "Uncertainty-based graph convolutional networks for organ segmentation refinement") and Hansis et al (Int J Cars: "Landmark constellation models for medical image content identification and localization"), and further in view of Bagur et al (J. Magn. Reson. Imaging: "Pancreas MRI Segmentation Into Head, Body, and Tail Enables Regional Quantitative Analysis of Heterogeneous Disease").
Regarding claim 7, the combination of Park, Soberanis-Mukul, and Hansis teaches all elements as mentioned above in claim 2, but does not expressly teach that the target is a pancreas, and that the plurality of feature points are a head of the pancreas, a tail of the pancreas, and a bottom part of the head of the pancreas.
Bagur, in the same field of endeavor, teaches that the target is a pancreas, and that the plurality of feature points are a head of the pancreas, a tail of the pancreas, and a bottom part of the head of the pancreas (see pg. 997-998).
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the combination of Park, Soberanis-Mukul, and Hansis to utilize the cited limitations as suggested by Bagur. The suggestion/motivation for doing so would have been to enable a more robust segmentation with better performance and annotation efficiency (see pg. 1007). Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface, and/or programming techniques, without changing a "fundamental" operating principle of Park, Soberanis-Mukul, and Hansis, while the teaching of Bagur continues to perform the same function as originally taught prior to being combined, in order to produce a repeatable and predictable result. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.
Allowable Subject Matter
Claims 4-5 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Regarding claims 4-5, none of the references of record alone or in combination suggest or fairly teach wherein for each of the first feature target region and the second feature target region, on a basis of a position reliability coefficient of the feature point and information about the feature target region associated with the feature point, the processing circuitry is configured to calculate, with respect to each of positions in the medical image, a feature reliability degree indicating a probability that the position in the medical image will belong to the target region, and the processing circuitry is configured to calculate, with respect to each of the positions in the medical image, a total reliability degree from a plurality of feature reliability degrees calculated from each of the first feature target region and the second feature target region and to specify the target region on a basis of the total reliability degrees.
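The combination recited in claims 4-5 (per-feature reliability maps weighted by position reliability coefficients, merged into a total reliability per position, then thresholded to specify the target region) can be illustrated schematically. The record does not specify the actual combination rule, so the weighted mean and the 0.5 threshold below are assumptions for illustration only.

```python
import numpy as np

def total_reliability(feature_maps, coeffs):
    """Combine per-feature-point reliability maps (each giving, per position,
    a probability that the position belongs to the target region) into one
    total reliability map, weighting each map by its feature point's position
    reliability coefficient. The weighted mean is an illustrative choice."""
    w = np.asarray(coeffs, dtype=float)
    stacked = np.stack([np.asarray(m, dtype=float) for m in feature_maps])
    # Normalized weights contracted against the stacked maps' feature axis.
    return np.tensordot(w / w.sum(), stacked, axes=1)

def specify_target_region(total, threshold=0.5):
    # Positions whose total reliability exceeds the threshold form the region.
    return total > threshold
```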
Conclusion
Claims 1-3, 6-8 are rejected. Claims 4-5 are objected to as being dependent upon a rejected base claim.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDWARD PARK. The examiner’s contact information is as follows:
Telephone: (571) 270-1576 | Fax: (571) 270-2576 | Edward.Park@uspto.gov
For email communications, please notate MPEP 502.03, which outlines procedures pertaining to communications via the internet and authorization. A sample authorization form is cited within MPEP 502.03, section II.
The examiner can normally be reached on M-F 9-6 CST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Moyer, can be reached on (571) 272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EDWARD PARK/
Primary Examiner, Art Unit 2666