Prosecution Insights
Last updated: April 19, 2026
Application No. 18/610,923

DIFFEOMORPHIC MR IMAGE REGISTRATION AND RECONSTRUCTION

Non-Final OA §103
Filed
Mar 20, 2024
Examiner
MILLER, JOHN W
Art Unit
2422
Tech Center
2400 — Computer Networks
Assignee
Hyperfine Operations Inc.
OA Round
1 (Non-Final)
Grant Probability: 38% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 2y 1m
Grant Probability with Interview: 44%

Examiner Intelligence

Grants only 38% of cases.
Career Allow Rate: 38% (11 granted / 29 resolved; -20.1% vs TC avg)
Interview Lift: +6.5% (moderate; among resolved cases with interview)
Avg Prosecution: 2y 1m (fast prosecutor; 7 currently pending)
Total Applications: 36 (career history, across all art units)

Statute-Specific Performance

§101: 2.5% (-37.5% vs TC avg)
§103: 56.3% (+16.3% vs TC avg)
§102: 22.2% (-17.8% vs TC avg)
§112: 14.6% (-25.4% vs TC avg)
Baseline: Tech Center average estimate • Based on career data from 29 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4, 6, 15, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over US 2019/0197662 A1 (Sloan et al.), hereinafter Sloan, in view of US 2015/0154741 A1 (Chen et al.), hereinafter Chen.
As to claims 1, 8, and 15, Sloan discloses a computer-implemented method to train a neural network to perform image registration (para [0072]-[0075]), and likewise a non-transitory computer-readable medium with instructions stored thereon that, when executed by a processor of a server, cause the processor to perform the same operations (para [0028], [0030]-[0032], [0072]-[0075]), the method comprising:

providing as input to the neural network a first image and a second image ("The transformation regressor 36 uses the reference image 30 and synthesized floating image 34 as inputs to its neural network", para [0047]; see also para [0085], [0176]-[0177]);

determining, using the neural network, a dense displacement field based at least on the first image and the second image ("The transformation regressor 36 is a neural network which is configured to learn to predict the non-rigid transformation to align two given images. The non-rigid transformation is described by a displacement field", para [0037]);

obtaining, using the neural network, a transformed image based on the first image and the dense displacement field, wherein the transformed image is aligned with the second image ("The predicted displacement field is applied to the floating image to obtain a transformed floating image, which may also be referred to as a corrected floating image. The objective function is computed between the reference image and the transformed floating image", para [0098]);

computing a registration loss value based on comparison of the transformed image and the second image (e.g., "discriminator feedback 82 is a value for a further function, which may be described as a discriminatory loss function", para [0099]); and

adjusting one or more parameters of the neural network based on the registration loss value (e.g., "weights within the transformation regressor 66 are adjusted whilst training to maximize the error signal of the discriminator 76", para [0100]-[0101]).

Sloan fails to teach wherein the first image and the second image are reconstructed from a fast spin echo (FSE) magnetic resonance (MR) imaging sequence. However, Chen, in an analogous art, teaches this limitation (para [0026], [0033], [0034], [0096], [0117], [0134]). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to have implemented the system of Sloan in the environment of Chen by including this limitation, because the modification would serve to smooth the shot-to-shot motion-induced phase variations and produce information of high signal-to-noise ratio.

As to claims 4 and 17, Chen teaches wherein the first image is reconstructed from a first set of echoes of the FSE MR imaging sequence, and wherein the second image is reconstructed from a second set of echoes of the FSE MR imaging sequence (para [0026], [0033], [0034], [0096], [0117], [0134]).

As to claims 6 and 19, Sloan discloses wherein computing the registration loss value comprises minimizing a local normalized cross correlation value based on the transformed image and the second image (para [0099]-[0101]).

Claims 2-3, 5, 7-14, 16, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sloan in view of Chen and in further view of US 2019/0205766 A1 (Krebs et al.), hereinafter Krebs.
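Claims 6 and 19 above turn on minimizing a local normalized cross correlation (LNCC) value between the transformed image and the second image. As a minimal sketch of what that statistic measures (1-D signals, a simple sliding window, and an illustrative `lncc_loss` helper, not code from any cited reference):

```python
import numpy as np

def local_ncc(a, b, win=9):
    """Mean local normalized cross-correlation over sliding 1-D windows.

    Illustrative only: real registration losses compute the same
    statistic over n-D patches, usually with convolutions.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    vals = []
    for i in range(len(a) - win + 1):
        pa = a[i:i + win] - a[i:i + win].mean()
        pb = b[i:i + win] - b[i:i + win].mean()
        denom = np.sqrt((pa ** 2).sum() * (pb ** 2).sum())
        vals.append((pa * pb).sum() / denom if denom > 0 else 0.0)
    return float(np.mean(vals))

def lncc_loss(a, b, win=9):
    # Perfect alignment gives an LNCC of 1, so the quantity to minimize is 1 - LNCC.
    return 1.0 - local_ncc(a, b, win)

x = np.sin(np.linspace(0.0, 6.0, 64))
print(lncc_loss(x, x))               # identical images: loss ~ 0
print(lncc_loss(x, 2.0 * x + 1.0))   # LNCC ignores affine intensity changes
```

Because each window is mean-centered and variance-normalized, the loss is insensitive to local brightness and contrast differences, which is why this similarity measure is popular for multi-echo MR data.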
As to claims 2 and 16, neither Sloan nor Chen teaches wherein determining the dense displacement field comprises: predicting, using the neural network, a stationary velocity field; and integrating, using the neural network, the stationary velocity field to determine the dense displacement field. However, Krebs, in an analogous art, teaches both elements: predicting a stationary velocity field (para [0044]-[0045], [0048]-[0051]) and integrating it to determine the dense displacement field (para [0044]-[0045], [0048]-[0051]). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to have modified the system of Sloan and Chen by including these elements as taught by Krebs, because the modification would output the corresponding displacements, which are diffeomorphic (Krebs, para [0045]).

As to claim 3, neither Sloan nor Chen teaches wherein the dense displacement field is a diffeomorphic displacement field. However, Krebs, in an analogous art, teaches this limitation (para [0044]-[0045]). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to have modified the system of Sloan and Chen accordingly, because the modification would restrict the space of dense deformations to the subspace of bijective and invertible maps, hence guaranteeing invertibility, smoothness, and no foldings (Krebs, para [0003]).
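Claims 2 and 16 above recite predicting a stationary velocity field (SVF) and integrating it to obtain the dense displacement field. A common way to perform that integration is scaling and squaring; the sketch below is a 1-D illustration under that assumption (unit-spaced grid, linear interpolation), not code from the Krebs reference:

```python
import numpy as np

def integrate_svf(v, steps=6):
    """Integrate a stationary velocity field v by scaling and squaring.

    1-D sketch: returns the displacement u of phi = exp(v), so that
    phi(x) = x + u(x). Assumes a unit-spaced grid.
    """
    grid = np.arange(len(v), dtype=float)
    u = np.asarray(v, dtype=float) / (2 ** steps)   # scaling step
    for _ in range(steps):
        # Squaring step: phi <- phi o phi, i.e. u(x) <- u(x) + u(x + u(x)),
        # with linear interpolation of u at the displaced positions.
        u = u + np.interp(grid + u, grid, u)
    return u

v = np.full(16, 0.5)       # constant velocity field
u = integrate_svf(v)
print(u)                   # a constant field integrates to a pure shift by 0.5
```

Because the final displacement is a composition of many small near-identity maps, the result is invertible and fold-free, which is the diffeomorphism property the claims 2-3 discussion relies on.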
As to claims 5 and 18, neither Sloan nor Chen teaches wherein obtaining the transformed image comprises applying, with a spatial transform network, the dense displacement field to the first image, wherein the spatial transform network outputs the transformed image. However, Krebs, in an analogous art, teaches this limitation (para [0044]-[0046], [0076]). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to have modified the system of Sloan and Chen accordingly, because the modification may be used for segmentation or surgery planning where one or more images showing the segmentation or the surgery plan are output (Krebs, para [0076]).

As to claims 7 and 20, neither Sloan nor Chen explicitly teaches wherein training the neural network is an unsupervised process. However, Krebs, in an analogous art, teaches unsupervised training (para [0017], [0051]). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to have modified the system of Sloan and Chen to train the neural network in an unsupervised process as taught by Krebs, because the modification would be used as a loss function (Krebs, para [0017]).
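Claims 5 and 18 above recite applying the dense displacement field to the first image with a spatial transform network. Operationally this is backward warping with interpolation; the hypothetical 1-D `warp_1d` helper below shows only the sampling step (a real spatial transformer layer performs the same sampling differentiably in 2-D/3-D):

```python
import numpy as np

def warp_1d(image, displacement):
    """Backward-warp a 1-D 'image' by a dense displacement field.

    output[i] samples the input at position i + displacement[i] with
    linear interpolation; np.interp clamps samples outside the grid.
    """
    grid = np.arange(len(image), dtype=float)
    return np.interp(grid + displacement, grid, np.asarray(image, dtype=float))

img = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
# A displacement of +1 everywhere shifts the sampling grid right by one.
print(warp_1d(img, np.ones(5)))   # -> [1. 2. 3. 4. 4.] (edge clamped)
```

Backward warping (sampling the source at displaced positions, rather than scattering source pixels) is what makes the operation well-defined at every output location and differentiable for training.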
As to claim 8, Sloan discloses a device to perform image registration (para [0072]-[0075]), the device comprising: one or more processors (para [0028], [0030]-[0032], [0072]-[0075]); and a memory coupled to the one or more processors, with instructions stored thereon that, when executed by the processor (para [0028], [0030]-[0032], [0072]-[0075]), cause the one or more processors to perform operations comprising:

providing a first image and a second image as input to a trained neural network (e.g., "The transformation regressor 36 uses the reference image 30 and synthetic floating image 34 as inputs to its neural network", para [0047]; see also para [0085], [0176]-[0177]);

obtaining, as output of the trained neural network, a dense displacement field for the first image (e.g., "The transformation regressor 36 is a neural network which is configured to learn to predict the non-rigid transformation to align two given images. The non-rigid transformation is described by a displacement field", para [0037]); and

obtaining a transformed image by applying the dense displacement field to the first image with a spatial transform network, wherein corresponding features of the transformed image and the second image are aligned (e.g., "the predicted displacement field is applied to the floating image to obtain a transformed floating image, which may also be referred to as a corrected floating image. The objective function is computed between the reference image and the transformed floating image", para [0098]).

Sloan fails to disclose wherein the first image and the second image are reconstructed from a fast spin echo (FSE) magnetic resonance (MR) imaging sequence, and outputting the transformed image. However, Chen, in an analogous art, teaches the FSE MR limitation (para [0026], [0033], [0034], [0096], [0117], [0134]).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to have implemented the system of Sloan in the environment of Chen by including this limitation, because the modification would serve to smooth the shot-to-shot motion-induced phase variations and produce information of high signal-to-noise ratio.

Neither Sloan nor Chen explicitly teaches outputting the transformed image. However, Krebs, in an analogous art, teaches outputting the transformed image (para [0076]). It would have been obvious to one of ordinary skill in the art to have modified the system of Sloan and Chen by including outputting the transformed image as taught by Krebs, because the modification would allow for more diagnostically useful comparison of the images from different times and/or modalities (Krebs, para [0076]).

As to claim 9, Krebs teaches wherein obtaining the dense displacement field comprises: obtaining, using the neural network, a stationary velocity field (para [0044], [0048]-[0051]); and integrating, using the neural network, the stationary velocity field to determine the dense displacement field (para [0044], [0048]-[0051]).

As to claim 10, neither Sloan nor Chen teaches wherein the device is a portable low-field MR imaging device having a display device and at least one permanent magnet. However, Krebs, in an analogous art, teaches this limitation (para [0081]-[0084], [0093]).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to have modified the system of Sloan and Chen by including this limitation as taught by Krebs, because the modification would process medical images from scans of a patient (Krebs, para [0084]) and use a smart phone for displaying the output, such as a warped image or other image based on the registration (Krebs, para [0093]).

As to claim 11, Chen teaches wherein the first image is reconstructed from a first set of echoes of the FSE MR imaging sequence, and wherein the second image is reconstructed from a second set of echoes of the FSE MR imaging sequence (para [0026], [0033], [0034], [0096], [0117], [0134]).

As to claim 12, Chen teaches wherein the first set of echoes comprises odd echoes, and wherein the second set of echoes comprises even echoes (para [0117], [0134]).

As to claim 13, Sloan discloses wherein the first image and the second image are of a human tissue or a human organ (para [0134]).

As to claim 14, Krebs teaches wherein outputting the transformed image comprises displaying the transformed image on the display device (para [0076]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN W MILLER, whose telephone number is 571-272-7353. The examiner can normally be reached Monday - Friday, 7:30 AM - 4:00 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Deborah Reynolds, can be reached at 571-272-0734.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOHN W MILLER/
Supervisory Patent Examiner, Art Unit 2422

Prosecution Timeline

Mar 20, 2024
Application Filed
Jan 22, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598347
SIGNAL PROCESSING DEVICE AND IMAGE DISPLAY APPARATUS INCLUDING THE SAME
2y 5m to grant • Granted Apr 07, 2026
Patent 12556756
SYSTEM COMPRISING TV AND REMOTE CONTROL, AND CONTROL METHOD THEREFOR
2y 5m to grant • Granted Feb 17, 2026
Patent 12555179
DYNAMICALLY CONFIGURABLE VIDEO PROCESSING ARCHITECTURE
2y 5m to grant • Granted Feb 17, 2026
Patent 12515524
DISPLAY CONTROL DEVICE AND DISPLAY CONTROL METHOD THEREOF
2y 5m to grant • Granted Jan 06, 2026
Patent 12498782
Machine-Based Classification of Object Motion as Human or Non-Human as Basis to Facilitate Controlling Whether to Trigger Device Action
2y 5m to grant • Granted Dec 16, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 38%
With Interview (+6.5%): 44%
Median Time to Grant: 2y 1m
PTA Risk: Low
Based on 29 resolved cases by this examiner. Grant probability derived from career allow rate.
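The headline projections follow directly from the examiner's career counts; a quick check, assuming the interview lift is added in percentage points to the career allow rate:

```python
granted, resolved = 11, 29            # examiner's career record (above)
base = 100 * granted / resolved       # career allow rate, percent
lift = 6.5                            # interview lift, percentage points
print(round(base), round(base + lift))   # -> 38 44
```

So the 38% grant probability is just 11/29 rounded, and the 44% figure is that rate plus the 6.5-point interview lift.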
