DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Claims 1 and 34 are amended. Claims 1-35 are pending in this application.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-10 and 25-35 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al., “Anatomy-guided Multimodal Registration by Learning Segmentation without Ground Truth: Application to Intraprocedural CBCT/MR Liver Segmentation and Registration,” in view of Vilsmeier et al., US 2023/0087494.
Regarding claim 1, Zhou discloses an image processing device (fig. 1; section 2.1; Anatomy-Preserving domain Adaptation to Segmentation Network (APA2Seg-Net)) comprising:
a processor (a processor is implied),
wherein the processor is configured to execute:
an image acquisition process of acquiring a first image in a first image space and a second image in a second image space different from the first image space (fig. 3; section 2.2; CBCT and MR images);
a first conversion process of converting the first image into an image in a third image space that is characterized by a region of interest, using a first converter (fig. 3; section 2.2; CBCT Segmenter);
a second conversion process of converting the second image into an image in the third image space, using a second converter (fig. 3; section 2.2; MR Segmenter).
Zhou further discloses a created registered CBCT-MR pair, which provides better visualization of the tumor (fig. 3; section 2.2).
Zhou discloses claim 1 as enumerated above, but Zhou does not explicitly disclose wherein the first image and the second image are images of a same patient, or a first similarity calculation process of calculating a first similarity between the first image and the second image in the third image space, as claimed.
However, Vilsmeier discloses patient image comparison data is determined based on the registration data, wherein the patient image comparison data describes a measure of similarity between the first medical image and the second medical image. The patient similarity data describes that the first medical image and the second medical image were taken of the same patient (fig. 2, element S26; para 0015 and 0140, and 0152).
Therefore, taking the combined disclosures of Zhou and Vilsmeier as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate into the invention of Zhou the teaching of Vilsmeier that patient image comparison data is determined based on the registration data, wherein the patient image comparison data describes a measure of similarity between the first medical image and the second medical image, and the patient similarity data describes that the first medical image and the second medical image were taken of the same patient, for the benefit of determining whether two medical images were taken of the same patient (Vilsmeier: para 0008).
Regarding claim 2, the image processing device according to claim 1, Zhou and Vilsmeier in the combination further disclose
wherein, in the first conversion process, the first image is converted into images in a plurality of the third image spaces (Zhou: fig. 5; section 3.2, second paragraph),
in the second conversion process, the second image is converted into images in the plurality of third image spaces to correspond to the first conversion process (Zhou: fig. 6; section 3.2, second paragraph), and
in the first similarity calculation process, the first similarity in each of the plurality of third image spaces is calculated, and a plurality of the calculated first similarities are integrated (Vilsmeier: fig. 2, element S26; para 0015 and 0140).
Regarding claim 3, the image processing device according to claim 2, Zhou in the combination further discloses
wherein, in the first conversion process, the first image is converted by at least two or more first converters into images in the third image spaces which correspond to each of the first converters (fig. 3; section 2.2), and
in the second conversion process, the second image is converted by at least two or more second converters into images in the third image spaces, which correspond to each of the second converters, to correspond to the first conversion process (fig. 3; section 2.2).
Regarding claim 4, the image processing device according to claim 2, Vilsmeier in the combination further discloses wherein, in the first similarity calculation process, the first similarity is calculated for each of the plurality of third image spaces (fig. 2, element S26; para 0015 and 0140).
Regarding claim 5, the image processing device according to claim 1, Zhou in the combination further discloses
wherein the region of interest is an anatomical region of interest, and the third image space is a feature image space that is characterized by the anatomical region of interest (fig. 5; section 3.2, second paragraph).
Regarding claim 6, the image processing device according to claim 1, Zhou in the combination further discloses
wherein the processor is configured to:
execute a region-of-interest receiving process of receiving an input of the region of interest (figs. 3-4; sections 2.2 and 3.1),
the first converter converts the first image on the third image space characterized by the region of interest received in the region-of-interest receiving process (fig. 3; section 2.2), and
the second converter converts the second image on the third image space characterized by the region of interest received in the region-of-interest receiving process (fig. 3; section 2.2).
Regarding claim 7, the image processing device according to claim 1, Zhou and Vilsmeier in the combination further disclose
wherein the first converter outputs a first reliability indicating a reliability of the image after the conversion (Zhou: fig. 5; section 3.2, second paragraph), and
in the first similarity calculation process, the first similarity that has a weight or is to be selected is calculated on the basis of the first reliability (Vilsmeier: fig. 2, element S26; para 0015 and 0140).
Regarding claim 8, the image processing device according to claim 7, Zhou in the combination further discloses
wherein the first converter outputs the first reliability for each pixel of the image after the conversion (fig. 5; section 3.2, second paragraph).
Regarding claim 9, the image processing device according to claim 1, Zhou and Vilsmeier in the combination further disclose
wherein the first converter outputs a first reliability indicating a reliability of the image after the conversion (Zhou: fig. 5; section 3.2, second paragraph),
the second converter outputs a second reliability indicating a reliability of the image after the conversion (Zhou: fig. 6; section 3.2, second paragraph), and
in the first similarity calculation process, the first similarity having a weight is calculated on the basis of the first reliability and the second reliability (Vilsmeier: fig. 2, element S26; para 0015 and 0140).
Regarding claim 10, the image processing device according to claim 9, Zhou in the combination further discloses
wherein the first converter outputs the first reliability for each pixel of the image after the conversion (fig. 5; section 3.2, second paragraph), and
the second converter outputs the second reliability for each pixel of the image after the conversion (fig. 6; section 3.2, second paragraph).
Regarding claim 25, the image processing device according to claim 1, Vilsmeier in the combination further discloses
wherein the processor is configured to:
execute a second similarity calculation process of calculating a second similarity between the first image and the second image (para 0143), and
in the first similarity calculation process, the first similarity weighted on the basis of the second similarity is calculated (para 0143).
Regarding claim 26, the image processing device according to claim 25, Vilsmeier in the combination further discloses
wherein the second similarity is an amount of mutual information between a simple X-ray image and a pseudo X-ray image generated from a three-dimensional X-ray CT image (para 0131 and 0143).
Regarding claim 27, the image processing device according to claim 25, Vilsmeier in the combination further discloses
wherein the second similarity is a gradient correlation between a simple X-ray image and a pseudo X-ray image generated from a three-dimensional X-ray CT image (para 0131 and 0143).
Regarding claim 28, the image processing device according to claim 1, Zhou and Vilsmeier in the combination further disclose
wherein a registration parameter between the first image and the second image is input to the second converter (Zhou: fig. 3; section 2.2), and the processor is configured to:
execute an optimization process of updating the registration parameter (Zhou: fig. 3; section 2.2) on the basis of the first similarity (Vilsmeier: fig. 2, element S26; para 0015 and 0140).
Regarding claim 29, the image processing device according to claim 1, Vilsmeier in the combination further discloses
wherein the first image is a simple X-ray image (para 0131), and
the second image is a three-dimensional X-ray CT image (para 0131).
Regarding claim 30, the image processing device according to claim 29, Zhou in the combination further discloses
wherein the second converter converts the three-dimensional X-ray CT image into an image obtained by correcting an attenuation value of the second image in a projection plane (fig. 3; section 2.2).
Regarding claim 31, the image processing device according to claim 30, Zhou in the combination further discloses
wherein the first converter converts the simple X-ray image on a two-dimensional feature image space with a model constructed using machine learning (fig. 3; section 2.2), and
the second converter converts the three-dimensional X-ray CT image into the image obtained by correcting the attenuation value in the projection plane by masking a label of the region of interest on the three-dimensional X-ray CT image and projecting the masked three-dimensional X-ray CT image onto the two-dimensional feature image space (fig. 3; section 2.2).
Regarding claim 32, this claim recites substantially the same limitations as claim 1 above and is rejected for the same reasons.
Regarding claim 33, this claim recites substantially the same limitations as claim 1 above and is rejected for the same reasons.
Regarding claim 34, this claim recites substantially the same limitations as claim 1 above and is rejected for the same reasons.
Regarding claim 35, this claim recites substantially the same limitations as claim 34 above and is rejected for the same reasons.
Allowable Subject Matter
Claims 11-24 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
The prior art made of record and considered pertinent to the applicant's disclosure, taken individually or in combination, does not teach the claimed invention having the following limitations, in combination with the remaining claimed limitations.
Regarding dependent claim 11, the prior art does not teach or suggest the claimed invention having “wherein the first converter converts the first image into a first shape feature and a first texture feature, the second converter converts the second image into a second shape feature and a second texture feature, and in the first similarity calculation process, the first similarity is calculated on the basis of a similarity between the first shape feature and the second shape feature and a similarity between the first texture feature and the second texture feature”, and a combination of other limitations thereof as recited in the claims.
Regarding claims 12-17, these claims have been found allowable due to their dependency from claim 11 above.
Regarding dependent claim 18, the prior art does not teach or suggest the claimed invention having “wherein the first converter converts the first image into a first anatomical region and a first disease region, the second converter converts the second image into a second anatomical region and a second disease region, and in the first similarity calculation process, the first similarity is calculated on the basis of a similarity between the first anatomical region and the second anatomical region and a similarity between the first disease region and the second disease region”, and a combination of other limitations thereof as recited in the claims.
Regarding claims 19-24, these claims have been found allowable due to their dependency from claim 18 above.
Response to Arguments
Applicant's arguments filed 03/02/2026 have been fully considered but they are not persuasive.
Regarding independent claim 1, Applicant argues that Vilsmeier does not disclose “the first image and the second image are images of a same patient” as claimed. Examiner respectfully disagrees. As stated in the rejection above, Vilsmeier in the combination discloses the patient similarity data describes that the first medical image and the second medical image were taken of the same patient (para 0152).
MPEP 2111 states that the USPTO must employ the “broadest reasonable interpretation” of the claims. Under the broadest reasonable interpretation, Examiner interprets the claimed “the first image and the second image are images of a same patient”, in light of the specification, as meaning that the first medical image and the second medical image were taken or acquired of the same patient.
Therefore, the claimed “the first image and the second image are images of a same patient” reads on the disclosure of Vilsmeier.
In view of the above arguments, the Examiner believes all rejections are proper and should be maintained.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VAN D HUYNH whose telephone number is (571)270-1937. The examiner can normally be reached 8AM-6PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen R Koziol, can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/VAN D HUYNH/Primary Examiner, Art Unit 2665