Prosecution Insights
Last updated: April 19, 2026
Application No. 18/618,372

METHODS AND APPARATUS FOR DEEP LEARNING BASED MOTION DETECTION IN NUCLEAR IMAGING SYSTEMS

Non-Final OA: §101, §102, §103, Double Patenting

Filed: Mar 27, 2024
Examiner: HELCO, NICHOLAS JOHN
Art Unit: 2667
Tech Center: 2600 — Communications
Assignee: Siemens Healthcare
OA Round: 1 (Non-Final)

Grant Probability: 72% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% (26 granted / 36 resolved; +10.2% vs TC avg, above average)
Interview Lift: +44.4% (strong; allowance rate with vs. without an interview among resolved cases)
Typical Timeline: 3y 1m average prosecution; 24 applications currently pending
Career History: 60 total applications across all art units
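The headline figures above are simple arithmetic on the counts shown. A minimal sketch in Python, assuming the interview lift is the plain difference between with-interview and without-interview allowance rates; the dashboard's exact methodology is not published, so the variable meanings here are inferred from the labels:

```python
# Sanity-checking the dashboard's examiner stats. Assumption: the
# "interview lift" is the difference between allowance rates for
# resolved cases with vs. without an interview; the tool's exact
# methodology is not published.

granted, resolved = 26, 36                # "26 granted / 36 resolved"
career_allow_rate = granted / resolved    # -> 0.7222, the 72% figure

with_interview = 99.0                     # "99% With Interview"
interview_lift = 44.4                     # "+44.4% Interview Lift"
without_interview = with_interview - interview_lift  # implied ~54.6%

print(f"career allow rate:         {career_allow_rate:.1%}")   # 72.2%
print(f"implied without interview: {without_interview:.1f}%")  # 54.6%
```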

Statute-Specific Performance

§101: 19.6% (-20.4% vs TC avg)
§102: 16.8% (-23.2% vs TC avg)
§103: 47.1% (+7.1% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 36 resolved cases.
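One quirk worth noting: every printed delta is consistent with a flat 40.0% Tech Center average for each statute (19.6 + 20.4, 16.8 + 23.2, 47.1 - 7.1, and 11.0 + 29.0 all equal 40.0). A quick check in Python, treating the 40.0% average as an inference from the deltas rather than a sourced value:

```python
# Reproducing the "vs TC avg" deltas for each statute. The 40.0% Tech
# Center average is inferred from the printed deltas, not a sourced value.

examiner_rates = {"101": 19.6, "102": 16.8, "103": 47.1, "112": 11.0}
tc_avg = 40.0

for statute, rate in examiner_rates.items():
    delta = rate - tc_avg
    print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
# §101: 19.6% (-20.4% vs TC avg)
# §102: 16.8% (-23.2% vs TC avg)
# §103: 47.1% (+7.1% vs TC avg)
# §112: 11.0% (-29.0% vs TC avg)
```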

Office Action

Rejections: §101, §102, §103, and nonstatutory double patenting (§DP)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice to Applicants

This action is in response to the Application filed on 03/27/2024. Claims 1-20 are pending.

Information Disclosure Statement

The Information Disclosure Statement (IDS) submitted on 03/27/2024 has been fully considered by the examiner.

Claim Objections

Claim 17 is objected to. The examiner believes claim 17 was intended to read "storing parameters characterizing the second trained neural network in a data repository" (emphasis added via underline). Claims 15, 16, and 17 are otherwise mirrored versions of claims 12, 13, and 14, respectively, with claims 12-14 directed to the first trained network and claims 15-17 directed to the second trained network. However, if the current version of the claim was Applicant's intention, it would not raise any issues under 35 U.S.C. 112. In the interest of compact prosecution, the above suggested change to claim 17 is used in the following claim rejections.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration.
See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines which form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-6, 10-11, and 18-20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 7, 9, and 19 of copending Application No. 18/918,677 in view of Chatterjee et al. (U.S. Publ. US-2023/0260142-A1).

Regarding claim 1, the claim language of present claim 1 and reference claim 7 is substantially similar, as described below:

Row 1
  Present claim 1 (App. 18/618,372): A computer-implemented method comprising:
  Reference claim 7 (App. 18/918,677): A method for image registration, comprising:
  Notes: Present claim 1's preamble is broader.

Rows 2-3
  Present claim 1: receiving positron emission tomography (PET) measurement data and co-modality measurement data from an image scanning system; generating a PET image based on the PET measurement data and a co-modality image based on the co-modality measurement data;
  Reference claim 7: receiving an anatomical image and a functional image of a structure of interest, and first and second trained convolutional neural networks,
  Notes: The reference Application only claims "receiving" the two images, not additionally generating them from respective measurement data as in present claim 1. However, paragraph 0003 of the reference Application's publication states that PET images can be an instance of "functional images," and that CT images can be an instance of "anatomical images." The present application makes clear that CT images are also an example of "co-modality images." Thus, the examiner considers PET images analogous to functional images, and co-modality images analogous to anatomical images.

Row 4
  Present claim 1: (no corresponding limitation)
  Reference claim 7: wherein the anatomical image and the functional image are acquired at different scan times or there is movement of the structure of interest between the different scan times;
  Notes: Present claim 1 is broader in that it does not recite a similar limitation.

Row 5
  Present claim 1: inputting the PET image and the co-modality image to a first trained neural network and, based on inputting the PET image and the co-modality image to the first trained neural network, generating first features of the PET image and second features of the co-modality image;
  Reference claim 7: extracting features by applying the anatomical image and the functional image as input to the first trained convolutional neural network;
  Notes: Essentially the same.

Row 6
  Present claim 1: inputting the first features and the second features to a second trained neural network and, based on inputting the first output data to the second trained neural network, generating displacement data characterizing a displacement between the first features and the second features;
  Reference claim 7: estimating a deformation field by applying the features as input to the second trained convolutional neural network;
  Notes: The examiner interprets reference claim 7's "deformation field" as a narrower instance of present claim 1's "displacement data."

Row 7
  Present claim 1: and generating display data based on the displacement data,
  Reference claim 7: and applying the deformation field to the anatomical image to generate a registered anatomical image.
  Notes: The examiner interprets reference claim 7's "registered anatomical image" to be a narrower instance of present claim 1's "display data."

Row 8
  Present claim 1: and transmitting the display data for display.
  Reference claim 7: (no corresponding limitation)
  Notes: The reference application does not claim any form of transmitting output data.

As seen above, the only limitations of present claim 1 not present in reference claim 7 are those of rows 2-3 and 8. In other words, present claim 1 is only narrower than reference claim 7 in that present claim 1 additionally recites first generating the images from their respective measurement data before the image processing, and finally transmitting the display/output data.

Pertaining to the same field of endeavor, Chatterjee discloses receiving positron emission tomography (PET) measurement data and co-modality measurement data from an image scanning system; generating a PET image based on the PET measurement data and a co-modality image based on the co-modality measurement data (see figure 5, movable image 104, fixed image 106, and paragraphs 0061-0062, where the movable image can be a CT scan/co-modality image generated from an image scanning system, and the fixed image can be a PET image generated from an image scanning system); and transmitting the display data for display (see paragraph 0107).

The reference Application and Chatterjee are considered analogous art, as they are both directed to neural networks for generating displacement fields between medical images. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have integrated the teachings of Chatterjee into the reference Application because doing so allows for obtaining medical images of different modalities (see Chatterjee paragraphs 0061-0062), and because doing so allows for rendering the output data on any suitable computer screen and/or display as desired (see Chatterjee paragraph 0107).

Regarding claim 2, present claim 2 is rejected in view of reference claim 9, with similar analysis to the rejection of present claim 1 in view of reference claim 7 above. The only difference here is that present claim 2 narrows the co-modality images to be CT images, which reference claim 9 also does by narrowing the anatomical images to be CT images.

Regarding claim 3, present claim 3 is rejected in view of reference claim 7, with similar analysis to the rejection of present claim 1 in view of reference claim 7 above. The only difference here is that present claim 3 narrows the first trained network to be a CNN, which reference claim 7 also does in row 5 above.

Regarding claim 4, present claim 4 is rejected in view of reference claim 7, with similar analysis to the rejection of present claim 1 in view of reference claim 7 above. The only difference here is that present claim 4 narrows the second trained network to be a CNN, which reference claim 7 also does in row 6 above.

Regarding claim 5, present claim 5 is rejected in view of reference claim 7, with similar analysis to the rejection of present claim 1 in view of reference claim 7 above. The only difference here is that present claim 5 states that the images share common features, which is implied by reference claim 7 in rows 4 and 6-7 above.

Regarding claim 6, present claim 6 is rejected in view of reference claim 7, with similar analysis to the rejection of present claim 1 in view of reference claim 7 above. The only difference here is that present claim 6 narrows the displacement data to comprise at least one displacement value for each of a plurality of pixels of the PET image and the co-modality image, which is within the broadest reasonable interpretation of the "deformation field" recited by reference claim 7.

Regarding claim 10, present claim 10 is rejected in view of reference claim 7, with similar analysis to the rejection of present claim 1 in view of reference claim 7 above. The only difference here is that present claim 10 narrows the displacement data to comprise displacement values identifying pixel offsets between the PET image and the co-modality image, which is within the broadest reasonable interpretation of the "deformation field" recited by reference claim 7.

Regarding claim 11, present claim 11 is rejected in view of reference claim 7, with similar analysis to the rejection of present claim 1 in view of reference claim 7 above. The only difference here is that present claim 11 narrows the measurement data to be based on corresponding scans of a same subject, which is implied by reference claim 7 in rows 4 and 6-7 above.

Regarding claim 18, present claim 18 is rejected in view of reference claim 19, with similar analysis to the rejection of present claim 1 in view of reference claim 7 above. The only difference here is that present claim 18 and reference claim 19 are instead directed to non-transitory computer readable mediums that perform the same method.

Regarding claim 19, present claim 19 is rejected in view of reference claim 9, with similar analysis to the rejection of present claim 2 in view of reference claim 9 above.

Regarding claim 20, present claim 20 is rejected in view of reference claim 1, with similar analysis to the rejection of present claim 1 in view of reference claim 7 above. The only difference here is that present claim 20 and reference claim 1 are instead directed to systems that perform the same method.

This is a provisional nonstatutory double patenting rejection.

Claims 1-6, 10-11, and 18-20 are rejected on the grounds of nonstatutory obviousness-type double patenting as being unpatentable over claims 1, 6-7, and 18 of U.S. Patent No. US-12154285-B2 in view of Chatterjee et al. (U.S. Publ. US-2023/0260142-A1).

Regarding claim 1, the claim language of present claim 1 and reference claim 6 is substantially similar, as described below:

Row 1
  Present claim 1 (App. 18/618,372): A computer-implemented method comprising:
  Reference claim 6 (US-12154285-B2): A method for image registration, comprising:
  Notes: Present Application's preamble is broader.

Rows 2-3
  Present claim 1: receiving positron emission tomography (PET) measurement data and co-modality measurement data from an image scanning system; generating a PET image based on the PET measurement data and a co-modality image based on the co-modality measurement data;
  Reference claim 6: receiving an anatomical image and a functional image of a structure of interest, and first and second trained convolutional neural networks;
  Notes: The reference patent only claims "receiving" the two images, not additionally generating them from respective measurement data as in present claim 1. However, column 1, lines 12-26 of the reference patent state that PET images can be an instance of "functional images," and that CT images can be an instance of "anatomical images." The present application makes clear that CT images are also an example of "co-modality images." Thus, the examiner considers PET images analogous to functional images, and co-modality images analogous to anatomical images.

Row 4
  Present claim 1: inputting the PET image and the co-modality image to a first trained neural network and, based on inputting the PET image and the co-modality image to the first trained neural network, generating first features of the PET image and second features of the co-modality image;
  Reference claim 6: extracting features by applying the anatomical image and the functional image as input to the first trained convolutional neural network;
  Notes: Essentially the same.

Row 5
  Present claim 1: inputting the first features and the second features to a second trained neural network and, based on inputting the first output data to the second trained neural network, generating displacement data characterizing a displacement between the first features and the second features;
  Reference claim 6: estimating a deformation field by applying the features as input to the second trained convolutional neural network, wherein the second trained convolutional neural network comprises a deformation vector field regressor that regresses a relative motion displacement matrix between the anatomical image and the functional image;
  Notes: The examiner interprets reference claim 6's "deformation field" as a narrower instance of present claim 1's "displacement data."

Row 6
  Present claim 1: and generating display data based on the displacement data,
  Reference claim 6: and applying the deformation field to the anatomical image to generate a registered anatomical image.
  Notes: The examiner interprets reference claim 6's "registered anatomical image" as a narrower instance of present claim 1's "display data."

Row 7
  Present claim 1: and transmitting the display data for display.
  Reference claim 6: (no corresponding limitation)
  Notes: The reference patent does not claim any form of transmitting output data.

As seen above, the only limitations of present claim 1 not present in reference claim 6 are those of rows 2-3 and 7. In other words, present claim 1 is only narrower than reference claim 6 in that present claim 1 additionally recites first generating the images from their respective measurement data before the image processing, and finally transmitting the display/output data.
Pertaining to the same field of endeavor, Chatterjee discloses receiving positron emission tomography (PET) measurement data and co-modality measurement data from an image scanning system; generating a PET image based on the PET measurement data and a co-modality image based on the co-modality measurement data (see figure 5, movable image 104, fixed image 106, and paragraphs 0061-0062, where the movable image can be a CT scan/co-modality image generated from an image scanning system, and the fixed image can be a PET image generated from an image scanning system); and transmitting the display data for display (see paragraph 0107).

The reference patent and Chatterjee are considered analogous art, as they are both directed to neural networks for generating displacement fields between medical images. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have integrated the teachings of Chatterjee into the reference patent because doing so allows for obtaining medical images of different modalities (see Chatterjee paragraphs 0061-0062), and because doing so allows for rendering the output data on any suitable computer screen and/or display as desired (see Chatterjee paragraph 0107).

Regarding claim 2, present claim 2 is rejected in view of reference claim 7, with similar analysis to the rejection of present claim 1 in view of reference claim 6 above. The only difference here is that present claim 2 narrows the co-modality images to be CT images, which reference claim 7 also does by narrowing the anatomical images to be CT images.

Regarding claim 3, present claim 3 is rejected in view of reference claim 6, with similar analysis to the rejection of present claim 1 in view of reference claim 6 above. The only difference here is that present claim 3 narrows the first trained network to be a CNN, which reference claim 6 does in row 4 above.

Regarding claim 4, present claim 4 is rejected in view of reference claim 6, with similar analysis to the rejection of present claim 1 in view of reference claim 6 above. The only difference here is that present claim 4 narrows the second trained network to be a CNN, which reference claim 6 does in row 5 above.

Regarding claim 5, present claim 5 is rejected in view of reference claim 6, with similar analysis to the rejection of present claim 1 in view of reference claim 6 above. The only difference here is that present claim 5 states that the images share common features, which is implied by reference claim 6 in rows 5-6 above.

Regarding claim 6, present claim 6 is rejected in view of reference claim 6, with similar analysis to the rejection of present claim 1 in view of reference claim 6 above. The only difference here is that present claim 6 narrows the displacement data to comprise at least one displacement value for each of a plurality of pixels of the PET image and the co-modality image, which is within the broadest reasonable interpretation of the "deformation field" recited by reference claim 6.

Regarding claim 10, present claim 10 is rejected in view of reference claim 6, with similar analysis to the rejection of present claim 1 in view of reference claim 6 above. The only difference here is that present claim 10 narrows the displacement data to comprise displacement values identifying pixel offsets between the PET image and the co-modality image, which is within the broadest reasonable interpretation of the "deformation field" recited by reference claim 6.
Regarding claim 11, present claim 11 is rejected in view of reference claim 6, with similar analysis to the rejection of present claim 1 in view of reference claim 6 above. The only difference here is that present claim 11 narrows the measurement data to be based on corresponding scans of a same subject, which is implied by reference claim 6 in rows 5-6 above.

Regarding claim 18, present claim 18 is rejected in view of reference claim 18, with similar analysis to the rejection of present claim 1 in view of reference claim 6 above. The only difference here is that present claim 18 and reference claim 18 are instead directed to non-transitory computer readable mediums that perform the same method.

Regarding claim 19, present claim 19 is rejected in view of reference claim 7, with similar analysis to the rejection of present claim 2 in view of reference claim 7 above.

Regarding claim 20, present claim 20 is rejected in view of reference claim 1, with similar analysis to the rejection of present claim 1 in view of reference claim 6 above. The only difference here is that present claim 20 and reference claim 1 are instead directed to systems that perform the same method.

Claim Rejections – 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-6 and 10-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to abstract ideas without significantly more. Analysis for claim 1 is provided in the following. Claim 1 is reproduced in the following (annotation added):

A computer-implemented method comprising:
(a) receiving positron emission tomography (PET) measurement data and co-modality measurement data from an image scanning system;
(b) generating a PET image based on the PET measurement data and (c) a co-modality image based on the co-modality measurement data;
(d) inputting the PET image and the co-modality image to a first trained neural network and, based on inputting the PET image and the co-modality image to the first trained neural network, generating first features of the PET image and second features of the co-modality image;
(e) inputting the first features and the second features to a second trained neural network and, based on inputting the first output data to the second trained neural network, generating displacement data characterizing a displacement between the first features and the second features; and
(f) generating display data based on the displacement data, and (g) transmitting the display data for display.

Step 1: Does the claim belong to one of the statutory categories? Claim 1 is directed to a process, which is a statutory category of invention (YES).

Step 2A, Prong One: Does the claim recite a judicial exception? Step (e) can be regarded as reciting mental processes such as observations, evaluations, or judgments that can be practically performed in the human mind, or by a human using pen and paper. Other than "based on inputting the first output data to the second trained neural network", the claim does not further limit how the displacement data is generated. Any kind of determination of displacement information, such as displacement vectors, between any features of the two images would read on this claim limitation, and a human could easily do so in their mind, or with pen and paper.
Note that the courts do not distinguish between claims that recite mental processes performed by humans and claims that recite mental processes performed on a computer (see MPEP 2106.04(a)(2).III) (YES).

Step 2A, Prong Two: Does the claim recite additional elements that integrate the judicial exception into a practical application? Steps (a)-(c) recite limitations involving obtaining the measurement data and generating the images based on said measurement data. Step (d) recites limitations involving inputting the images into a first trained neural network to generate features from the images, which are then processed during the mental processes within step (e). In the context of the claim, steps (a)-(d) amount to mere data gathering to obtain the data necessary to perform the mental processes within step (e). Steps (f) and (g) recite the generation of generic display data and transmitting the data, which amounts to generic data outputting (NO).

Step 2B: Does the claim as a whole amount to significantly more than the recited exception? The claim as a whole appends well-understood, routine, conventional ("WURC") activities previously known to the industry (such as generating PET and co-modality images and using a neural network to extract features from them) to mental processes that can be practically performed in the human mind, or by a human using pen and paper. This is then followed by further WURC activities of generating generic output data that represents the result of performing the mental processes, and transmitting said data (NO).

Claim 1 is not eligible.

Similar analysis is applicable to independent claims 18 and 20. Claims 18 and 20 both recite additional elements of computerized systems at a high level of generality, which do not integrate the judicial exceptions into a practical application. Claims 18 and 20 are not eligible.

Claims 2 and 19 narrow the co-modality data and images to the specific species of CT data and CT images, respectively, which does not integrate the judicial exceptions into a practical application. Claims 2 and 19 are not eligible.

Claims 3 and 4 narrow the first and second trained neural networks, respectively, to the specific species of convolutional neural networks, which does not integrate the judicial exceptions into a practical application. Claims 3 and 4 are not eligible.

Claims 5 and 11 recite limitations involving the features/images depicting the same subject, which does not integrate the judicial exceptions into a practical application. More specifically, the generation of the displacement data in claim 1 already suggests that the images share common features. Claims 5 and 11 are not eligible.

Claim 6 narrows the displacement data to comprise at least one displacement value for each of the pixels of the images, which can still be generated in the human mind as applied to claim 1 above. Claim 6 is not eligible.

Claim 7 further narrows the displacement data to comprise three displacement values for three directions for each of the pixels of the images, which cannot be practically performed in the human mind, or by a human using pen and paper. Claim 7 is eligible.

Claim 8 recites determining a magnitude value based on the three displacement values for each pixel, which also cannot be practically performed in the human mind. Claim 8 is eligible.

Claim 9 narrows the display data to the specific species of a heat map, which does not introduce any new judicial exceptions. Claim 9 is eligible based on its dependence on claims 7 and 8 above.
Claim 10 recites that the displacement data comprises displacement values identifying pixel offsets between the images, which can still be generated in the human mind as applied to claim 1 above. Claim 10 is not eligible.

Claims 12 and 15 recite training the first/second neural networks, which does not integrate the judicial exceptions into a practical application. Claims 12 and 15 further recite using the first/second neural networks to generate output data characterizing features of the images and to generate output data characterizing displacement values between the labeled features, respectively, both of which are mental processes that can be practically performed in the human mind. Claims 12 and 15 further recite determining that the first/second neural networks are trained based on their respective output data, both of which are also mental processes that can be practically performed in the human mind. Claims 12 and 15 are not eligible.

Claims 13 and 16 recite determining at least one metric value based on the output data of the first/second models, and determining that the first/second models are trained based on the metric value, which are all mental processes that can be practically performed in the human mind. Claims 13 and 16 are not eligible.

Claims 14 and 17 recite storing parameters characterizing each of the trained neural networks in a data repository, which amounts to mere data output or storage. Claims 14 and 17 are not eligible.

Claim Rejections – 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 5-7, 10-11, and 15-20 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Chatterjee et al. (U.S. Publ. US-2023/0260142-A1).
Regarding claim 1, Chatterjee discloses a computer-implemented method (see figures 5 and 9) comprising: receiving positron emission tomography (PET) measurement data and co-modality measurement data from an image scanning system; generating a PET image based on the PET measurement data and a co-modality image based on the co-modality measurement data (see figure 5, movable image 104, fixed image 106, and paragraphs 0061-0062, where the movable image can be a CT scan/co-modality image generated from an image scanning system, and the fixed image can be a PET image generated from an image scanning system; both images can be two- or three-dimensional; paragraph 0063 specifies that the two images depict the same subject); inputting the PET image and the co-modality image to a first trained neural network (see figure 5, where the movable and fixed images are input to a machine learning model 402) and, based on inputting the PET image and the co-modality image to the first trained neural network, generating first features of the PET image and second features of the co-modality image (see figure 5, modality-neutral movable image 404, modality-neutral image 406, and paragraphs 0078-0083, where the machine learning model outputs the modality-neutral images; paragraph 0040 specifies that this can involve applying a convolutional neural network that generates features of the images); inputting the first features and the second features to a second trained neural network (see figure 9, scenario 904, where the modality-neutral images 404 and 406 are input to a deep learning registration model 906) and, based on inputting the first output data to the second trained neural network, generating displacement data characterizing a displacement between the first features and the second features (see figure 9, where the deep learning model 906 generates the registration field 802; paragraphs 0093 and 0096 specify that the registration field can include a vector field/displacement data that maps the movement of pixels/voxels/features from one image to the other); and generating display data based on the displacement data (see paragraphs 0103-0106, where the movable and fixed images can be registered/aligned with each other via the registration field to generate a registered image/display data), and transmitting the display data for display (see paragraph 0107).

Regarding claim 2, Chatterjee discloses wherein the co-modality measurement data is computed tomography (CT) measurement data and the co-modality images are CT images (see paragraphs 0061-0062, where the movable image can be a CT scan/co-modality image generated from an image scanning system).

Regarding claim 3, Chatterjee discloses wherein the first trained neural network is a convolutional neural network (CNN) (see paragraphs 0040-0041).

Regarding claim 5, Chatterjee discloses wherein the first features of the PET images and the second features of the co-modality images include common features (see paragraph 0063, where the two images depict common features of the same subject).

Regarding claim 6, Chatterjee discloses wherein the displacement data comprises at least one displacement value for each of a plurality of pixels of the PET image and the co-modality image (paragraphs 0093 and 0096 specify that the registration field can include a vector field/displacement data that maps the movement of pixels/voxels/features from one image to the other).
Regarding claim 7, Chatterjee discloses wherein the at least one displacement value for each of the plurality of pixels comprises a first displacement value for a first direction, a second displacement value for a second direction, and a third displacement value for a third direction (see paragraph 0093, where the vector field can also apply to voxels if the image is three-dimensional, in which case the vectors would have three direction components).

Regarding claim 10, Chatterjee discloses wherein the displacement data comprises displacement values identifying pixel offsets between the PET image and the co-modality image (paragraphs 0093 and 0096 specify that the registration field can include a vector field/displacement data that maps the movement of pixels/voxels/features from one image to the other).

Regarding claim 11, Chatterjee discloses wherein the PET measurement data and the co-modality measurement data are based on corresponding scans of a same subject (see paragraph 0063, where the two images depict common features of the same subject).

Regarding claim 15, Chatterjee discloses training the second trained neural network (paragraphs 0097-0099 provide an overview of training the deep learning registration model 906), the training comprising: inputting labelled PET features and labelled CT features to a neural network (see paragraph 0098, where training movable and fixed images can be labeled and input to the model) and, based on inputting the labelled PET features and the labelled CT features to the neural network, generating output data characterizing displacement values between the labelled PET features and labelled CT features (see paragraph 0099, where the model generates output registration fields representing displacement values between the training images); and determining the neural network is trained based on the output data (see paragraph 0099, where the output can be used for training termination criteria, such as error, loss, or objective functions, that determine when the model is appropriately trained).

Regarding claim 16, Chatterjee discloses determining at least one metric value based on the output data; and determining the neural network is trained based on the at least one metric value (see paragraph 0099, where the output can be used for training termination criteria, such as error, loss, or objective functions, that determine when the model is appropriately trained).

Regarding claim 17, Chatterjee discloses storing parameters characterizing the second trained neural network in a data repository (see paragraph 0096, where the field component can electronically store the deep learning registration model).

Regarding claim 18, Chatterjee discloses a non-transitory computer readable medium (see figure 18, system memory 1806, RAM 1812, ROM 1810) storing instructions that, when executed by at least one processor, cause the at least one processor to perform operations comprising (see figure 18, processing unit 1804). The remainder of claim 18 recites steps identical to those of claim 1. Therefore, Chatterjee anticipates claim 18 as applied to claim 1 above.

Regarding claim 19, Chatterjee discloses claim 19 as applied to claim 2 above.
Regarding claim 20, Chatterjee discloses a system (see figure 18) comprising: a memory device storing instructions (see figure 18, system memory 1806, RAM 1812, ROM 1810); and at least one processor communicatively coupled to the memory device, the at least one processor configured to execute the instructions to (see figure 18, processing unit 1804). The remainder of claim 20 recites steps identical to those of claim 1. Therefore, Chatterjee anticipates claim 20 as applied to claim 1 above.

Claim Rejections – 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Chatterjee et al. (U.S. Publ. US-2023/0260142-A1) in view of Paragios et al. (U.S. Publ. US-2024/0087270-A1).

Regarding claim 4, Chatterjee fails to disclose the limitations of claim 4. Pertaining to the same field of endeavor, Paragios discloses wherein the second trained neural network is a convolutional neural network (CNN) (see figure 5 and paragraphs 0109-0113, where a second machine learning architecture uses a CNN to generate a displacement field between sets of medical images). Chatterjee and Paragios are considered analogous art, as they are both directed to neural networks for generating displacement fields between medical images. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have integrated the teachings of Paragios into Chatterjee because CNNs are well-known tools in the art for performing image registration (see Paragios paragraph 0112).

Claims 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Chatterjee et al. (U.S. Publ. US-2023/0260142-A1) in view of Madabhushi et al. (U.S. Publ. US-2022/0012902-A1).

Regarding claim 8, Chatterjee discloses determining, for each of the plurality of pixels, a magnitude value (see paragraph 0093, where the registration field is a matrix with each element having a vector with a calculated direction and magnitude) and generating the display data based on the magnitude values (see paragraphs 0103-0106, where the movable and fixed images can be registered/aligned with each other via the vectors in the registration field to generate a registered image/display data). Chatterjee fails to disclose determining, for each of the plurality of pixels, a magnitude value based on the first displacement value, the second displacement value, and the third displacement value (emphasis added via underline). Pertaining to the same field of endeavor, Madabhushi discloses determining, for each of the plurality of pixels, a magnitude value based on the first displacement value, the second displacement value, and the third displacement value (see paragraph 0066, where the formula for the magnitude of a 3D vector, √(x² + y² + z²), is used to find the magnitudes "D(c)" of displacement vectors in medical imaging data).
Chatterjee and Madabhushi are considered analogous art, as they are both directed to neural networks for generating displacement fields in medical images. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have integrated the teachings of Madabhushi into Chatterjee because doing so enables extraction of structural deformation information in medical images (see Madabhushi paragraphs 0065-0066).

Regarding claim 9, Chatterjee fails to disclose the limitations of claim 9. Pertaining to the same field of endeavor, Madabhushi discloses wherein the display data characterizes a heat map (see figures 5-6 and paragraphs 0076-0077, where the magnitude values "D(c)" are visualized using heatmaps). Chatterjee and Madabhushi are considered analogous art, as they are both directed to neural networks for generating displacement fields in medical images. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have integrated the teachings of Madabhushi into Chatterjee because doing so allows for visualization of the deformation of biological structures in imaging data (see Madabhushi figures 5-6 and paragraphs 0076-0077).

Claims 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Chatterjee et al. (U.S. Publ. US-2023/0260142-A1) in view of Laaksonen et al. (U.S. Publ. US-2021/0192719-A1).

Regarding claim 12, Chatterjee discloses training the first trained neural network (see paragraphs 0111-0118, where the machine learning model is trained only with unsupervised learning), the training comprising: inputting (see paragraphs 0111 and 0118, where the model outputs predicted modality-neutral images based on the unlabeled input images); and determining the neural network is trained based on the output data (see paragraphs 0123-0124, where any training termination criteria can be used, such as loss functions, for determining that the model is appropriately trained).

Chatterjee fails to disclose inputting labelled PET images and labelled CT images to a neural network and, based on inputting the labelled PET images and the labelled CT images to the neural network, generating output data characterizing PET features and CT features (emphasis added via underline). In other words, Chatterjee only discloses training the first network via unsupervised learning, not via supervised learning as required by the claim. Pertaining to the same field of endeavor, Laaksonen discloses inputting labelled PET images and labelled CT images to a neural network and, based on inputting the labelled PET images and the labelled CT images to the neural network, generating output data characterizing PET features and CT features (see figure 4 and paragraphs 0038-0044, where both labeled and unlabeled training data for multiple modalities, including PET and CT, can be input to a model that generates features/output data from said images).

Chatterjee and Laaksonen are considered analogous art, as they are both directed to neural networks for processing co-modality image sets. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have integrated the teachings of Laaksonen into Chatterjee because using semi-supervised approaches is useful when training data is scarce and/or expensive (see Laaksonen paragraph 0044).
Regarding claim 13, Chatterjee in view of Laaksonen discloses determining at least one metric value based on the output data; and determining the neural network is trained based on the at least one metric value (see Chatterjee paragraphs 0123-0124, where any training termination criteria can be used, such as loss functions, for determining that the model is appropriately trained).

Regarding claim 14, Chatterjee in view of Laaksonen discloses storing parameters characterizing the first trained neural network in a data repository (see Chatterjee paragraph 0040, where the modality-neutral component can electronically store the machine learning model).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS JOHN HELCO, whose telephone number is (703) 756-5539. The examiner can normally be reached Monday-Friday from 9:00 AM to 5:00 PM.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Bella, can be reached at 571-272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center. Status information for published applications is available to the public; status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/NICHOLAS JOHN HELCO/
Examiner, Art Unit 2667

/MATTHEW C BELLA/
Supervisory Patent Examiner, Art Unit 2667
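To make the claimed data flow concrete, below is a minimal, hypothetical sketch of the two-network pipeline that claim 1 recites and that the examiner maps onto Chatterjee: a first trained network extracts features from the PET and co-modality images, a second trained network regresses per-pixel displacement data from those features, and a magnitude map (claim 8's √(x² + y² + z²), shown in 2-D here) can be derived for display. This is not the applicant's or any cited reference's actual implementation; the architecture, layer sizes, and names are illustrative assumptions only, and PyTorch is assumed.

```python
# A minimal, hypothetical sketch of the two-network pipeline recited in
# claim 1, NOT the applicant's or any cited reference's implementation.
# Architecture, layer sizes, and names are illustrative assumptions;
# PyTorch is assumed. 2-D images are used for brevity (claim 7's three
# displacement directions would make this 3-D with a 3-channel output).

import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """First trained network: image in, feature map out (claims 1 and 3)."""
    def __init__(self, in_ch: int = 1, feat_ch: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, kernel_size=3, padding=1), nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class DisplacementRegressor(nn.Module):
    """Second trained network: both feature maps in, per-pixel
    displacement values out (claims 1, 4, and 6)."""
    def __init__(self, feat_ch: int = 16, n_dirs: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * feat_ch, feat_ch, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, n_dirs, kernel_size=3, padding=1),
        )

    def forward(self, f_pet: torch.Tensor, f_co: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([f_pet, f_co], dim=1))

# The claimed flow: images -> features -> displacement data -> display data.
pet_img = torch.randn(1, 1, 64, 64)  # stand-in for a reconstructed PET image
co_img = torch.randn(1, 1, 64, 64)   # stand-in for the co-modality (e.g. CT) image

extractor, regressor = FeatureExtractor(), DisplacementRegressor()
f_pet, f_co = extractor(pet_img), extractor(co_img)  # first trained network
displacement = regressor(f_pet, f_co)                # second trained network

# Per-pixel magnitude (claim 8's square root of summed squared components,
# here in 2-D); a heat map of these values would be claim 9's display data.
magnitude = displacement.norm(dim=1)
print(displacement.shape, magnitude.shape)  # (1, 2, 64, 64), (1, 64, 64)
```

Under the examiner's reading, the displacement tensor here plays the role of the claimed "displacement data," and the magnitude map is the kind of value from which "display data" such as a heat map could be generated.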

Prosecution Timeline

Mar 27, 2024
Application Filed
Feb 04, 2026
Non-Final Rejection — §101, §102, §103, Double Patenting (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602867
METHOD FOR AUTONOMOUSLY SCANNING AND CONSTRUCTING A REPRESENTATION OF A STAND OF TREES
2y 5m to grant; granted Apr 14, 2026
Patent 12597092
Systems and Methods for Altering Images
2y 5m to grant; granted Apr 07, 2026
Patent 12586370
VEHICLE IMAGE ANALYSIS SYSTEM FOR A PERIPHERAL CAMERA
2y 5m to grant; granted Mar 24, 2026
Patent 12573018
DEFECT ANALYSIS DEVICE, DEFECT ANALYSIS METHOD, NON-TRANSITORY COMPUTER-READABLE MEDIUM, AND LEARNING DEVICE
2y 5m to grant; granted Mar 10, 2026
Patent 12561754
METHOD AND SYSTEM FOR PROCESSING IMAGE BASED ON WEIGHTED MULTIPLE KERNELS
2y 5m to grant; granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72%
Grant Probability With Interview: 99% (+44.4%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 36 resolved cases by this examiner. Grant probability derived from career allow rate.
