Prosecution Insights
Last updated: April 19, 2026
Application No. 18/634,088

METHOD FOR GENERATING A REGISTERED IMAGE

Non-Final OA — §102, §103
Filed: Apr 12, 2024
Examiner: YENTRAPATI, AVINASH
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: Centre Leon Berard
OA Round: 1 (Non-Final)

Grant Probability: 74% (Favorable)
OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 69%

Examiner Intelligence

Career Allow Rate: 74% (above average) — 499 granted / 671 resolved; +12.4% vs TC avg
Interview Lift: -5.0% (minimal), based on resolved cases with interview
Avg Prosecution: 2y 11m typical timeline; 27 currently pending
Total Applications: 698 across all art units

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 52.0% (+12.0% vs TC avg)
§102: 23.9% (-16.1% vs TC avg)
§112: 11.2% (-28.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 671 resolved cases.
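A quick consistency check on these figures (assuming the page computes each lift as the examiner's rate minus the Tech Center average, which is not documented): subtracting each delta from the corresponding rate recovers the implied baseline.

```python
# Recover the implied Tech Center baselines from the displayed figures:
# examiner rate minus the "vs TC avg" delta should equal the TC average.
examiner_rate = {"101": 11.1, "103": 52.0, "102": 23.9, "112": 11.2}
delta_vs_tc = {"101": -28.9, "103": 12.0, "102": -16.1, "112": -28.8}

tc_average = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
              for s in examiner_rate}
print(tc_average)
```

Running this, every statute maps back to the same 40.0% baseline, suggesting all four deltas are measured against a single Tech Center estimate rather than per-statute averages.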

Office Action

Grounds: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-11 and 13-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by D1.¹

With regard to claim 1, D1 teaches a computer-implemented method for generating a registered image based on at least one first image of an object compliant with a source imaging modality and on at least one second image of said object compliant with a target imaging modality (see abstract, ¶ 1), the method comprising: receiving said at least one first image and said at least one second image (see ¶ 20); transforming said at least one first image into at least one first modality transformed image, so that said at least one first modality transformed image is compliant with said target imaging modality, by using a first machine learning model (see ¶ 21: neural network to transform the image to resemble the target modality); implementing a second machine learning model receiving as inputs said at least one second image and at least one first modality transformed image, so as to obtain at least one first registered image being said at least one second image registered on said at least one first modality transformed image (sbZi) (see ¶ 21: neural network to perform rigid transformation); and implementing a third machine learning model, said third machine learning model receiving as inputs said at least one first modality transformed image and said at least one first registered image, so as to obtain said registered image, wherein said registered image is registered on said at least one first modality transformed image (see ¶¶ 21, 28: third neural network to implement deformable transformation followed by rigid transformation).

With regard to claim 2, D1 teaches the method according to claim 1, wherein said object is a human body part (see ¶ 20: medical image of human heart/brain).
With regard to claim 3, D1 teaches the method according to claim 1, wherein the at least one first image comprises at least one 2D image and wherein the at least one second image comprises one 3D image, said at least one 3D image comprising a plurality of N 2D slices of said object (see ¶ 20: first and second modality may be 2D or 3D images).

With regard to claim 4, D1 teaches the method according to claim 1, wherein the at least one first image comprises at least one 3D image comprising a plurality of N 2D slices of said object and wherein the at least one second image comprises at least one 2D image (see ¶ 20: first and second modality may be 2D or 3D images).

With regard to claim 5, D1 teaches the method according to claim 3, wherein implementing said second machine learning model comprises iterations, each iteration among said iterations comprising: i) registering said at least one second image onto said at least one first modality transformed image by applying a current global rigid spatial transform to said at least one second image, to obtain at least one current second transformed image, said current second transformed image comprising N current 2D transformed slices of said object (see ¶¶ 20, 22: registering by applying a rigid transformation; implicitly comprises slices when the second image is 3D); ii) matching each first modality transformed image among the at least one first modality transformed images to one corresponding current 2D transformed slice among the N current second transformed slices so as to meet a similarity criterion, said similarity criterion being evaluated on the basis of a similarity measure estimated between each first modality transformed image (sbZi) and one corresponding current 2D transformed slice (see ¶¶ 22, 23, 26: similarity criteria, minimizing a dissimilarity measure); wherein step i) and step ii) are repeated on the latest current second transformed image till said similarity criterion is met (see ¶¶ 22, 23, 26: iteratively repeated until the dissimilarity criterion is minimized).

With regard to claim 6, D1 teaches the method according to claim 5, further comprising, preliminarily to implementing the second machine learning model: obtaining at least one first segmentation mask corresponding to structures of interest in said at least one first image (see ¶¶ 29, 30, 33: extracting features representing structural information of an anatomical structure; implicit that a structure is segmented); obtaining at least one second segmentation mask corresponding to said structures of interest in said plurality of N 2D slices (see ¶¶ 29-30, 33: extracting features representing structural information of an anatomical structure; implicit that a structure is segmented from the second image comprising N slices when the second image is 3D); and wherein step i) of each iteration comprises implementing said second machine learning model receiving as additional inputs said at least one first segmentation mask and said at least one second segmentation mask, so as to obtain said current global rigid transform (see ¶¶ 21, 23, 28-30: features or structure used to perform registration; the rigid transformation is performed iteratively).

With regard to claim 7, D1 teaches the method according to claim 6, wherein said object is a human body part, and the structures of interest are stiff regions such as bones, cartilage, or tendons (see ¶ 33: anatomical features of the human body).

With regard to claim 8, D1 teaches the method according to claim 5, wherein step ii) of each iteration comprises updating, for each first modality transformed image, a coordinate along a common axis of said plurality of N current 2D transformed slices, by maximizing the similarity measure (see ¶¶ 21-23: minimizing dissimilarity; registration implicitly performed along a common axis).
With regard to claim 9, D1 teaches the method according to claim 1, wherein the first machine learning model is a cycle generative adversarial network, GAN, said first machine learning model being configured to generate, from an input source image compliant with said source imaging modality, a simulated image compliant with said target imaging modality respectively associated with said input source image (see ¶¶ 3, 34-35: generative adversarial network used to simulate an image that resembles the target modality).

With regard to claim 10, D1 teaches the method according to claim 3, wherein: said at least one first modality transformed image corresponds to a first 3D representation of said object (see ¶¶ 20-21: 3D image); the at least one first registered image comprises one 3D first registered image with voxels (see ¶¶ 20-21, 23: 3D registered image); and the third machine learning model outputs, for each voxel, displacement information corresponding to a displacement between said one 3D first registered image and said first 3D representation (see ¶¶ 20-21, 33, 28: third neural network to perform deformable registration comprising displacement).

With regard to claim 11, see the discussion of claim 10.

With regard to claim 13, D1 teaches the method according to claim 3, wherein the target imaging source modality is computed tomography imaging or magnetic resonance imaging (see ¶ 20: CT/MRI).

With regard to claim 14, see the discussion of claim 1.

With regard to claims 15-20, see the discussion of the corresponding claims above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over D1.

With regard to claim 12, D1 teaches the method according to claim 3, but fails to explicitly teach wherein the source imaging modality is histopathology, whole slide imaging, or echography. However, the Examiner takes Official Notice of the fact that histopathology, whole slide imaging, and echography were well-known imaging modalities before the effective filing date, and that one skilled in the art would have been motivated to implement the image registration teachings of D1 on images of these modalities, yielding predictable results.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AVINASH YENTRAPATI, whose telephone number is (571) 270-7982. The examiner can normally be reached 8AM-5PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sumati Lefkowitz, can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AVINASH YENTRAPATI/
Primary Examiner, Art Unit 2672

¹ US Publication No. 2023/0079164.
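For orientation, the three-stage pipeline recited in claim 1 (modality transfer, then rigid registration, then deformable refinement) can be sketched as follows. This is a toy illustration only: the three stand-in functions are hypothetical placeholders for the claimed machine learning models, not the application's or D1's actual implementations.

```python
import numpy as np

rng = np.random.default_rng(0)

def modality_transfer(first_img):
    """Stage 1 stand-in: map the source-modality image toward the target
    modality (the claim uses a first ML model, e.g. a cycle GAN)."""
    return (first_img - first_img.mean()) / (first_img.std() + 1e-8)

def rigid_register(second_img, transformed_img):
    """Stage 2 stand-in: globally align the second image onto the
    modality-transformed image (a rigid transform in the claim; here a
    crude intensity shift that removes the mean difference)."""
    return second_img + (transformed_img.mean() - second_img.mean())

def deformable_register(transformed_img, rigid_img):
    """Stage 3 stand-in: non-rigid refinement; a blend toward the
    reference, in place of a learned per-voxel displacement field."""
    return 0.5 * (rigid_img + transformed_img)

first = rng.normal(size=(8, 8))   # "first image", source modality
second = rng.normal(size=(8, 8))  # "second image", target modality

transformed = modality_transfer(first)
rigid = rigid_register(second, transformed)
registered = deformable_register(transformed, rigid)

# Each stage should bring the moving image closer to the reference.
mse = lambda a, b: float(((a - b) ** 2).mean())
assert mse(rigid, transformed) <= mse(second, transformed)
assert mse(registered, transformed) <= mse(rigid, transformed)
```

The structural point the sketch illustrates is the one the §102 rejection maps onto D1: three chained models, with the output of the modality-transfer stage serving as the fixed reference for both registration stages.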

Prosecution Timeline

Apr 12, 2024
Application Filed
Feb 07, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602803 — HEAD-MOUNTED DISPLAY AND METHOD FOR DEPTH PREDICTION
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12579791 — AUTOMATED METHODS FOR GENERATING LABELED BENCHMARK DATA SET OF GEOLOGICAL THIN-SECTION IMAGES FOR MACHINE LEARNING AND GEOSPATIAL ANALYSIS
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12562264 — METHOD FOR THE RECOMPOSITION OF A KIT OF SURGICAL INSTRUMENTS AND CORRESPONDING APPARATUS
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12536646 — STRUCTURE DAMAGE CAUSE ESTIMATION SYSTEM, STRUCTURE DAMAGE CAUSE ESTIMATION METHOD, AND STRUCTURE DAMAGE CAUSE ESTIMATION SERVER
Granted Jan 27, 2026 (2y 5m to grant)

Patent 12536654 — THE SYSTEM AND METHOD FOR STOOL IMAGE ANALYSIS
Granted Jan 27, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 69% (-5.0%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 671 resolved cases by this examiner. Grant probability is derived from the career allow rate.
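The headline figures are consistent with simple ratios over the examiner's career record (an assumption about how this page derives them, not a documented formula):

```python
# 499 grants out of 671 resolved cases reproduces the displayed 74%
# grant probability; applying the -5.0% interview lift gives the 69%
# "with interview" figure.
granted, resolved = 499, 671
grant_probability = round(100 * granted / resolved)  # 74
with_interview = grant_probability - 5.0             # 69.0
print(grant_probability, with_interview)
```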
