DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-11 and 13-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by D1.1
With regard to claim 1, D1 teaches a computer-implemented method for generating a registered image based on at least one first image of an object compliant with a source imaging modality and on at least one second image of said object compliant with a target imaging modality (see abstract, ¶ 1), the method comprising: receiving said at least one first image and said at least one second image (see ¶ 20); transforming said at least one first image into at least one first modality transformed image, so that said at least one first modality transformed image is compliant with said target imaging modality, by using a first machine learning model (see ¶ 21: neural network to transform the image to resemble the target modality); implementing a second machine learning model receiving as inputs said at least one second image and at least one first modality transformed image, so as to obtain at least one first registered image being said at least one second image registered on said at least one first modality transformed image (sbZi) (see ¶ 21: neural network to perform rigid transformation); and implementing a third machine learning model, said third machine learning model receiving as inputs said at least one first modality transformed image and said at least one first registered image, so as to obtain said registered image, wherein said registered image is registered on said at least one first modality transformed image (see ¶¶ 21, 28: third neural network to implement deformable transformation following the rigid transformation).
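For illustration of the claimed three-model pipeline as mapped above (modality transfer, then rigid registration, then deformable refinement), the following is a minimal sketch with stand-in functions; the function names and the trivial operations inside them are hypothetical placeholders, not D1's actual neural networks:

```python
import numpy as np

def modality_transfer(first_image):
    # First model: map the source-modality image toward the target modality
    # (stand-in: a simple intensity normalization in place of a trained network).
    return (first_image - first_image.mean()) / (first_image.std() + 1e-8)

def rigid_register(second_image, modality_transformed):
    # Second model: rigidly align the second image onto the transformed image
    # (stand-in: a circular shift estimated from peak positions).
    shift = int(np.argmax(modality_transformed) - np.argmax(second_image))
    return np.roll(second_image, shift)

def deformable_register(modality_transformed, rigid_registered):
    # Third model: deformable refinement toward the transformed image
    # (stand-in: a blend in place of a learned displacement field).
    return 0.5 * (rigid_registered + modality_transformed)

first = np.random.rand(64)   # image in the source modality
second = np.random.rand(64)  # image in the target modality

transformed = modality_transfer(first)                # step 1
rigid = rigid_register(second, transformed)           # step 2
registered = deformable_register(transformed, rigid)  # step 3
```

The sketch only shows the data flow between the three stages; each stand-in would be replaced by a trained model in an actual implementation.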
With regard to claim 2, D1 teaches the method according to claim 1, wherein said object is a human body part (see ¶ 20: medical image of human heart/brain).
With regard to claim 3, D1 teaches the method according to claim 1, wherein the at least one first image comprises at least one 2D image and wherein the at least one second image comprises at least one 3D image, said at least one 3D image comprising a plurality of N 2D slices of said object (see ¶ 20: first and second modality may be 2D or 3D images).
With regard to claim 4, D1 teaches the method according to claim 1, wherein the at least one first image comprises at least one 3D image comprising a plurality of N 2D slices of said object and wherein the at least one second image comprises at least one 2D image (see ¶ 20: first and second modality may be 2D or 3D images).
With regard to claim 5, D1 teaches the method according to claim 3, wherein implementing said second machine learning model comprises iterations, each iteration among said iterations comprising: i) registering said at least one second image onto said at least one first modality transformed image by applying a current global rigid spatial transform to said at least one second image, to obtain at least one current second transformed image, said current second transformed image comprising N current 2D transformed slices of said object (see ¶¶ 20, 22: registering by applying a rigid transformation, implicitly comprising slices when the second image is 3D); ii) matching each first modality transformed image among the at least one first modality transformed images to one corresponding current 2D transformed slice among the N current 2D transformed slices so as to meet a similarity criterion, said similarity criterion being evaluated on the basis of a similarity measure estimated between each first modality transformed image (sbZi) and one corresponding current 2D transformed slice (see ¶¶ 22, 23, 26: similarity criteria, minimizing a dissimilarity measure); wherein step i) and step ii) are repeated on the latest current second transformed image until said similarity criterion is met (see ¶¶ 22, 23, 26: iteratively repeated until the dissimilarity measure is minimized).
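The iterate-transform-then-evaluate loop described in steps i) and ii) can be sketched in one dimension (a hedged illustration with synthetic data; the 1-D circular shift and sum-of-squared-differences measure are stand-ins for the claimed global rigid transform and similarity measure):

```python
import numpy as np

def dissimilarity(a, b):
    # Sum-of-squared-differences stands in for the (dis)similarity measure.
    return float(np.sum((a - b) ** 2))

def iterative_rigid_1d(second, target, max_iters=50):
    # Repeat: (i) apply the current rigid transform (here a 1-D shift),
    # (ii) evaluate the similarity criterion; stop when no neighboring
    # shift improves it (criterion met).
    shift = 0
    best = dissimilarity(np.roll(second, shift), target)
    for _ in range(max_iters):
        improved = False
        for delta in (-1, 1):
            cand = dissimilarity(np.roll(second, shift + delta), target)
            if cand < best:
                best, shift, improved = cand, shift + delta, True
        if not improved:
            break
    return np.roll(second, shift), shift

# Smooth synthetic profile and a copy misaligned by 5 samples.
x = np.arange(32)
target = np.exp(-((x - 16.0) ** 2) / 18.0)
second = np.roll(target, 5)
aligned, found = iterative_rigid_1d(second, target)
```

The smooth profile makes the greedy search well behaved; a real implementation would optimize a full rigid transform (rotation plus translation) with a learned model rather than a unit-step shift.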
With regard to claim 6, D1 teaches the method according to claim 5, further comprising, preliminarily to implementing the second machine learning model: obtaining at least one first segmentation mask corresponding to structures of interest in said at least one first image (see ¶¶ 29, 30, 33: extracting features representing structural information of an anatomical structure, implicit that a structure is segmented); obtaining at least one second segmentation mask corresponding to said structures of interest in said plurality of N 2D slices (see ¶¶ 29-30, 33: extracting features representing structural information of an anatomical structure, implicit that a structure is segmented from the second image comprising N slices when the second image is 3D); and wherein step i) of each iteration comprises implementing said second machine learning model receiving as additional inputs said at least one first segmentation mask and said at least one second segmentation mask, so as to obtain said current global rigid transform (see ¶¶ 21, 23, 28-30: features or structure used to perform registration, rigid transformation performed iteratively).
With regard to claim 7, D1 teaches the method according to claim 6, wherein said object is a human body part, and the structures of interest are stiff regions such as bones, cartilage, or tendons (see ¶ 33: anatomical features of human body).
With regard to claim 8, D1 teaches the method according to claim 5, wherein step ii) of each iteration comprises updating, for each first modality transformed image, a coordinate along a common axis of said plurality of N current 2D transformed slices, by maximizing the similarity measure (see ¶¶ 21-23: minimizing dissimilarity, registration implicitly performed along a common axis).
With regard to claim 9, D1 teaches the method according to claim 1, wherein the first machine learning model is a cycle generative adversarial network, GAN, said first machine learning model being configured to generate, from an input source image compliant with said source imaging modality, a simulated image compliant with said target imaging modality respectively associated with said input source image (see ¶¶ 3, 34-35: generative adversarial network used to simulate an image that resembles the target modality).
With regard to claim 10, D1 teaches the method according to claim 3, wherein: said at least one first modality transformed image corresponds to a first 3D representation of said object (see ¶¶ 20-21: 3D image); the at least one first registered image comprises one 3D first registered image with voxels (see ¶¶ 20-21, 23: 3D registered image); and the third machine learning model outputs, for each voxel, displacement information corresponding to a displacement between said one 3D first registered image and said first 3D representation (see ¶¶ 20-21, 28, 33: third neural network to perform deformable registration comprising displacement).
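The per-voxel displacement output discussed for claim 10 can be illustrated as a dense displacement field applied to a volume (a minimal sketch with synthetic data; the fixed field and nearest-neighbor warp are placeholders for what the claimed third model would predict):

```python
import numpy as np

# Synthetic 3-D volume (standing in for the 3D first registered image) and a
# per-voxel displacement field: one offset vector (dz, dy, dx) per voxel.
vol = np.arange(4 * 4 * 4, dtype=float).reshape(4, 4, 4)
disp = np.zeros(vol.shape + (3,), dtype=int)
disp[..., 2] = 1  # displace every voxel by +1 along the last axis

def warp(volume, displacement):
    # Pull each output voxel from volume[p + displacement[p]]
    # (nearest-neighbor sampling, clamped to the volume bounds).
    idx = np.indices(volume.shape)                      # shape (3, Z, Y, X)
    coords = idx + np.moveaxis(displacement, -1, 0)     # add per-voxel offsets
    bounds = np.array(volume.shape).reshape(3, 1, 1, 1) - 1
    coords = np.clip(coords, 0, bounds)
    return volume[tuple(coords)]

warped = warp(vol, disp)
```

A production implementation would use sub-voxel displacements with interpolation; the integer field here only demonstrates the one-offset-per-voxel output format.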
With regard to claim 11, see discussion of claim 10.
With regard to claim 13, D1 teaches the method according to claim 3, wherein the target imaging modality is computed tomography imaging or magnetic resonance imaging (see ¶ 20: CT/MRI).
With regard to claim 14, see discussion of claim 1.
With regard to claims 15-20, see discussion of corresponding claims above.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over D1.
With regard to claim 12, D1 teaches the method according to claim 3 but fails to explicitly teach wherein the source imaging modality is histopathology, whole slide imaging, or echography. However, Examiner takes Official Notice of the fact that histopathology, whole slide imaging, and echography were well-known imaging modalities before the effective filing date, and that one skilled in the art would have been motivated to implement the image registration teachings of D1 on images of these modalities, yielding predictable results.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AVINASH YENTRAPATI whose telephone number is (571) 270-7982. The examiner can normally be reached 8AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sumati Lefkowitz, can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AVINASH YENTRAPATI/Primary Examiner, Art Unit 2672
1 US Publication No. 2023/0079164.