Prosecution Insights
Last updated: April 19, 2026
Application No. 18/437,354

AI-DRIVEN MOTION CORRECTION OF PET DATA

Non-Final OA §103
Filed: Feb 09, 2024
Examiner: DARDANO, STEFANO ANTHONY
Art Unit: 2663
Tech Center: 2600 — Communications
Assignee: Siemens Healthcare
OA Round: 1 (Non-Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 77% (57 granted / 74 resolved; +15.0% vs TC avg, above average)
Interview Lift: +33.0% (allow rate among resolved cases with interview vs. without)
Avg Prosecution: 3y 2m (22 applications currently pending)
Total Applications: 96 (across all art units)

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§103: 49.3% (+9.3% vs TC avg)
§102: 18.0% (-22.0% vs TC avg)
§112: 18.8% (-21.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 74 resolved cases
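Each statute row pairs the examiner's rate with a delta against the Tech Center average. As a sanity check, the implied TC baseline (rate minus delta) can be recomputed from the figures above; a minimal Python sketch (all numbers taken from the rows above):

```python
# Statute-specific rates and "vs TC avg" deltas, as listed above (percent).
rows = {
    "§101": (11.0, -29.0),
    "§103": (49.3, +9.3),
    "§102": (18.0, -22.0),
    "§112": (18.8, -21.2),
}

# Implied Tech Center baseline for each statute: rate - delta.
implied_tc_avg = {k: round(rate - delta, 1) for k, (rate, delta) in rows.items()}
print(implied_tc_avg)  # every statute implies 40.0
```

Every row implies the same 40.0% baseline, consistent with the page using a single Tech Center average estimate for all four comparisons.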

Office Action

§103
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claim Status Claims 1-18 are pending. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-3, 7-9, 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over CHAN et al. (US 20240008832 A1, hereinafter “CHAN”) in view of Schaefferkoetter (US 20230009528 A1, hereinafter “Schaefferkoetter”). Regarding claim 1, CHAN teaches a molecular imaging scanner comprising: a plurality of photon detectors ([0125]: “FIG. 14B shows an example of the arrangement of the PET scanner 800, in which the object OBJ to be imaged rests on a table 816 and the GRD modules GRD1 through GRDN are arranged circumferentially around the object OBJ and the table 816”. The PET scanner contains the plurality of photon detectors); and a processing unit to: determine an anatomical image of an object (Fig. 1, [0048]: “In another example for a PET/CT scanner, the other medical images 158 can be computed tomography (CT) images”. The CT image acts as the anatomical image. The CT image is acquired for the process as seen in Fig. 1 for the training (#158) and the denoising (#258)); acquire molecular imaging data of the object at the plurality of photon detectors (Fig. 
2, [0051]: “In method 200, the PET emission data 251 is corrected in step 202, and then, in step 204, a PET image 255 is reconstructed from the corrected emission data using a PET image reconstruction process”. The PET data is the molecular imaging data and for it to be processed it must be acquired); reconstruct a functional image based on the molecular imaging data (Fig. 2, [0051]: “In method 200, the PET emission data 251 is corrected in step 202, and then, in step 204, a PET image 255 is reconstructed from the corrected emission data using a PET image reconstruction process”. The reconstructed PET image is the functional image); and input the anatomical image and the functional image to a trained neural network to generate a second functional image ([0084]: “FIG. 9 shows a flow diagram of a modified implementation of methods 100 and 200, in which the step 110″ trains a modified network 162″ and step 210″ applies a low-quality PET image 255 and low-quality CT image 258 to the modified network 162″. That is, steps 110″ and 210″ use a combination of a low-quality PET image and a low-quality CT image (or other non-PET image) to generate a high-quality PET image with PVC 253″. The high quality PET image acts as the second functional image); and a display (Fig. 14B: Processor #870 (which is presumed to be a computer) can be seen containing a display which is the computer screen). While it is presumed CHAN would use their display to present the high quality PET image, CHAN does not expressly disclose using their display to present the second functional image. However, Schaefferkoetter teaches using a display to present a processed image (Fig. 1, [0057]: “For example, image volume reconstruction engine 118 applies the attenuation map 105 to the PET image 115 to generate the final image volume 191. Final image volume 191 can include image data that can be provided for display and analysis, for example”). 
At the time the invention was made, it would have been obvious to one of ordinary skill in the art to modify CHAN’s display to include Schaefferkoetter’s ability to display processed imaging results because such a modification is the result of applying a known technique to a known device ready for improvement to yield predictable results. More specifically, Schaefferkoetter’s ability to display processed imaging results permits the ability to display the processed results to a physician for evaluation and diagnosis ([0032]: “The computing device can also display the image volume to a physician for evaluation and diagnosis, for example”). This known benefit in Schaefferkoetter is applicable to CHAN’s display as they both share characteristics and capabilities, namely, they are directed to using CT and PET imaging data to generate an improved version of medical data. Therefore, it would have been recognized that modifying CHAN’s display to include Schaefferkoetter’s ability to display processed imaging results would have yielded predictable results because (i) the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate Schaefferkoetter’s ability to display processed imaging results in using CT and PET imaging data to generate an improved version of medical data and (ii) the benefits of such a combination would have been recognized by those of ordinary skill in the art. Regarding claim 2, the combination of CHAN and Schaefferkoetter teaches a scanner according to Claim 1, in addition, CHAN further teaches the processing unit to: determine a linear attenuation ([0046]: “The other medical scan can advantageously be used to provide an attenuation model and provide enhanced resolution”. If the other medical image is a CT image, the attenuation model would be a linear attenuation model because CT images fundamentally represent linear attenuation. 
A map of this attenuation can be provided: “Alternatively, an attenuation map used in PET reconstruction can be used to approximate the resolution degraded CT image 158” [0084]), wherein reconstruction of the functional image is based on the linear attenuation ([0084]: “Alternatively, an attenuation map used in PET reconstruction can be used to approximate the resolution degraded CT image 158”. If used in PET image reconstruction it would contain the molecular imaging data). Schaefferkoetter further teaches using a correction map for the attenuation correction of PET images ([0024]: “Furthermore, the machine learning model may be trained to generate attenuation correction maps for any imaging domain, yielding, for example, a single attenuation correction approach for both PET/CT and PET/MR systems”. This attenuation correction is used for the PET images which generally require attenuation correction “Quantitative Positron Emission Tomography (PET) generally requires an attenuation map to correct for a number of photons that have either been lost for a sinogram bin (i.e., attenuation correction) or wrongly assigned to another sinogram bin (i.e., scatter correction). The corrections generally depend on an accurate knowledge of photon values within a subject. The attenuation map characterizing the corrections (e.g., mu map) is calculated or estimated using an accompanying anatomical modality, such as computed tomography (CT) or magnetic resonance (MR)” [0022]). At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to modify CHAN’s attenuation correction map to include Schaefferkoetter’s attenuation correction map because such a modification is taught, suggested, or motivated by the art. 
More specifically, the motivation to modify CHAN to include Schaefferkoetter is expressly provided by Schaefferkoetter, stating that PET scans generally require attenuation correction ([0022]: “Quantitative Positron Emission Tomography (PET) generally requires an attenuation map to correct for a number of photons that have either been lost for a sinogram bin (i.e., attenuation correction)”). Therefore, it would have been obvious to one of ordinary skill in the art at the time of the invention to modify CHAN’s attenuation correction map to include Schaefferkoetter’s attenuation correction map with the motivation of attenuation correction. The person of ordinary skill in the art would have recognized the benefit of corrected attenuation. Regarding claim 3, the combination of CHAN and Schaefferkoetter teaches a scanner according to Claim 1; in addition, CHAN further teaches wherein the neural network is trained based on a plurality of sets of training data, each of the plurality of sets of training data comprising: a training anatomical image ([0046]: “Optionally, the training method 100 can also incorporate other medical images 158 that are generated from a medical imaging scan performed using another medical imaging modality (e.g., X-ray computed tomography (CT) or magnetic resonance imaging (MIR))”. The CT image acts as the anatomical image); a training functional image exhibiting motion artifacts ([0032]: “The image quality can often be further degraded by other confounding factors, such as positron range, photon pair non-collinearity, limited intrinsic system resolution, finite reconstruction voxel sizes, patient motion etc.”. Fig. 6 shows using both low quality and high quality PET images (functional images). 
These low quality images contain noise which can arise from the patient motion as stated above “That is, the approach in step 110 in which the network 162 is trained using low-quality images having a wide range of noise levels can reduce the dependence of the image quality of the high-quality PET image 253 on the noise level of the low-quality PET image 255” [0049]); and a ground truth functional image exhibiting less motion artifacts than the training functional image ([0096]: “These predictions are then input into the loss function, by which they are compared to the corresponding ground truth labels (i.e., the high quality image 153)”. These high quality PET images can also be seen being input into the DL-CNN for training. These PET images are also functional images). Regarding claim 7, the content of claim 7 is similar to the content of claim 1, but the words “anatomical image” and “functional image” have been replaced with “computed tomography” and “positron emission tomography”. CHAN also discloses these types of data for the process being claimed in the rejection of claim 1 (the anatomical image is mapped using the CT image and the functional image is mapped using the PET image in claim 1, any mapping for claim 7 is going to be the same for claim 1). Therefore, claim 7 is rejected for the same reasons of obviousness as claim 1, along with the additional teachings above. Regarding claim 8, the content of claim 8 is similar to the content of claim 2, therefore it is rejected for the same reasons of obviousness as claim 2. Regarding claim 9, the content of claim 9 is similar to the content of claim 3, therefore it is rejected for the same reasons of obviousness as claim 3. Regarding claim 13, the content of claim 13 is similar to the content of claim 1, with the additional teachings of a non-transitory medium and processing unit. 
CHAN also discloses this information ([0128]: “Alternatively, the CPU in the processor 870 can execute a computer program including a set of computer-readable instructions that perform various steps of method 100 and/or method 200, the program being stored in any of the above-described non-transitory electronic memories and/or a hard disk drive, CD, DVD, FLASH drive or any other known storage media”). Therefore, claim 13 is rejected for the same reasons of obviousness as claim 1, along with the additional teachings above. Regarding claim 14, the content of claim 14 is similar to the content of claim 2, therefore it is rejected for the same reasons of obviousness as claim 2. Regarding claim 15, the content of claim 15 is similar to the content of claim 3, therefore it is rejected for the same reasons of obviousness as claim 3. Claims 4, 10, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over CHAN et al. (US 20240008832 A1, hereinafter “CHAN”) in view of Schaefferkoetter (US 20230009528 A1, hereinafter “Schaefferkoetter”) in further view of SOMMER et al. (US 20210181287 A1, hereinafter “SOMMER”). Regarding claim 4, the combination of CHAN and Schaefferkoetter teaches a scanner according to Claim 3, in addition, CHAN further teaches wherein a first set of the plurality of sets of training data is generated by: acquiring a first training anatomical image ([0046]: “Optionally, the training method 100 can also incorporate other medical images 158 that are generated from a medical imaging scan performed using another medical imaging modality (e.g., X-ray computed tomography (CT) or magnetic resonance imaging (MIR))”. The CT image acts as the anatomical image) and a first training functional image ([0032]: “The image quality can often be further degraded by other confounding factors, such as positron range, photon pair non-collinearity, limited intrinsic system resolution, finite reconstruction voxel sizes, patient motion etc.”. Fig. 
6 shows using both low quality and high quality PET images (functional images). These low quality images contain noise which can arise from the patient motion as stated above “That is, the approach in step 110 in which the network 162 is trained using low-quality images having a wide range of noise levels can reduce the dependence of the image quality of the high-quality PET image 253 on the noise level of the low-quality PET image 255” [0049]); and The combination of CHAN and Schaefferkoetter does not expressly disclose applying motion correction to obtain a ground truth image. However, SOMMER teaches applying motion correction to obtain a ground truth image ([0046]: “In order to generate a motion-artifact-corrected magnetic resonance imaging data set, the motion-artifact-only magnetic resonance imaging data set may be subtracted from the received magnetic resonance imaging data set, i.e. the imaging data set structures identified as resulting from motion artifacts are subtracted such that only motion-artifact-free imaging data sets structures remain”. These data sets are described as being used for training a model to remove motion, “Embodiments may have the beneficial effect that using magnetic resonance imaging training data sets with different motion artifacts levels an efficient and effective training of the deep learning network may be ensured. Magnetic resonance imaging training data sets may e.g. be clinical imaging data sets with and without motion artifacts” [0055]). At the time the invention was made, it would have been obvious to one of ordinary skill in the art to modify the combination of CHAN and Schaefferkoetter’s training data to include SOMMER’s ground truth generated data because such a modification is the result of applying a known technique to a known device ready for improvement to yield predictable results. 
More specifically, SOMMER’s ground truth generated data permits generation of more training data for training the model to improve medical images. This known benefit in SOMMER is applicable to the combination of CHAN and Schaefferkoetter’s training data as they both share characteristics and capabilities, namely, they are directed to training models to improve medical images. Therefore, it would have been recognized that modifying the combination of CHAN and Schaefferkoetter’s training data to include SOMMER’s ground truth generated data would have yielded predictable results because (i) the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate SOMMER’s ground truth generated data in training models to improve medical images and (ii) the benefits of such a combination would have been recognized by those of ordinary skill in the art. Regarding claim 10, the content of claim 10 is similar to the content of claim 4, therefore it is rejected for the same reasons of obviousness as claim 4. Regarding claim 16, the content of claim 16 is similar to the content of claim 4, therefore it is rejected for the same reasons of obviousness as claim 4. Claims 5-6, 11-12, 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over CHAN et al. (US 20240008832 A1, hereinafter “CHAN”) in view of Schaefferkoetter (US 20230009528 A1, hereinafter “Schaefferkoetter”) in further view of SOMMER et al. (US 20210181287 A1, hereinafter “SOMMER”) in further view of ITO et al. (US 20100026880 A1, hereinafter “ITO”). Regarding claim 5, the combination of CHAN, Schaefferkoetter, and SOMMER teaches a scanner according to Claim 4, in addition, CHAN further teaches wherein a second set of the plurality of sets of training data is generated by: acquiring a second training anatomical image and a second ground truth functional image (Fig. 
1: A second set of anatomical images (158(2)) and ground truth functional images (153(2)) can be seen used to train the DL-CNN); and SOMMER further teaches the benefit of generating additional images for training ([0055]: “Embodiments may have the beneficial effect that using magnetic resonance imaging training data sets with different motion artifacts levels an efficient and effective training of the deep learning network may be ensured. Magnetic resonance imaging training data sets may e.g. be clinical imaging data sets with and without motion artifacts or artificially generated imaging data sets based on motion-artifact-free clinical imaging data sets to which artificially motion artifacts have been introduced”). At the time the invention was made, it would have been obvious to one of ordinary skill in the art to modify the combination of CHAN and Schaefferkoetter’s training data to include SOMMER’s generated additional images for training because such a modification is the result of applying a known technique to a known device ready for improvement to yield predictable results. More specifically, SOMMER’s generated additional images for training permits generation of more training data for training the model to improve medical images. This known benefit in SOMMER is applicable to the combination of CHAN and Schaefferkoetter’s training data as they both share characteristics and capabilities, namely, they are directed to training models to improve medical images. 
Therefore, it would have been recognized that modifying the combination of CHAN and Schaefferkoetter’s training data to include SOMMER’s generated additional images for training would have yielded predictable results because (i) the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate SOMMER’s generated additional images for training in training models to improve medical images and (ii) the benefits of such a combination would have been recognized by those of ordinary skill in the art. Another art, ITO additionally teaches that data can be generated by using motion vectors (Fig. 3, [0097]: “The motion-blur addition processing section 11 adaptively performs filter processing on the input moving image data ID (for each frame or for each divided area) using the motion vector VD so that the motion-blur addition processing is performed”). At the time the invention was made, it would have been obvious to one of ordinary skill in the art to modify the combination of CHAN, Schaefferkoetter, and SOMMER’s generated training data to include ITO’s motion simulation using vectors because such a modification is based on the use of known techniques to improve similar devices in the same way. More specifically, ITO’s motion simulation using vectors is comparable to the combination of CHAN, Schaefferkoetter, and SOMMER’s generated training data because the generated training data can be artificially generated as taught by SOMMER; by adding simulated motion to the motion-free images, ITO provides a method to perform that very simulation. 
Therefore, it would be obvious to one of ordinary skill in the art to modify the combination of CHAN, Schaefferkoetter, and SOMMER’s generated training data to include ITO’s motion simulation using vectors in order to obtain the predictable result of using motion vectors on ground truth images to simulate motion artifacts to generate additional functional images to train the model to improve medical image data. Regarding claim 6, the combination of CHAN and Schaefferkoetter teaches a scanner according to Claim 1, in addition, CHAN further teaches wherein a first set of the plurality of sets of training data is generated by: acquiring a first training anatomical image and a first ground truth functional image (Fig. 1: A first set of anatomical images (158(1)) and ground truth functional images (153(1)) can be seen used to train the DL-CNN); and The combination of CHAN and Schaefferkoetter does not expressly disclose applying motion vectors to ground truth images to generate additional images for training. However, SOMMER teaches the benefit of generating additional images for training ([0055]: “Embodiments may have the beneficial effect that using magnetic resonance imaging training data sets with different motion artifacts levels an efficient and effective training of the deep learning network may be ensured. Magnetic resonance imaging training data sets may e.g. be clinical imaging data sets with and without motion artifacts or artificially generated imaging data sets based on motion-artifact-free clinical imaging data sets to which artificially motion artifacts have been introduced”). At the time the invention was made, it would have been obvious to one of ordinary skill in the art to modify the combination of CHAN and Schaefferkoetter’s training data to include SOMMER’s generated additional images for training because such a modification is the result of applying a known technique to a known device ready for improvement to yield predictable results. 
More specifically, SOMMER’s generated additional images for training permits generation of more training data for training the model to improve medical images. This known benefit in SOMMER is applicable to the combination of CHAN and Schaefferkoetter’s training data as they both share characteristics and capabilities, namely, they are directed to training models to improve medical images. Therefore, it would have been recognized that modifying the combination of CHAN and Schaefferkoetter’s training data to include SOMMER’s generated additional images for training would have yielded predictable results because (i) the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate SOMMER’s generated additional images for training in training models to improve medical images and (ii) the benefits of such a combination would have been recognized by those of ordinary skill in the art. ITO additionally teaches that data can be generated by using motion vectors (Fig. 3, [0097]: “The motion-blur addition processing section 11 adaptively performs filter processing on the input moving image data ID (for each frame or for each divided area) using the motion vector VD so that the motion-blur addition processing is performed”). At the time the invention was made, it would have been obvious to one of ordinary skill in the art to modify the combination of CHAN, Schaefferkoetter, and SOMMER’s generated training data to include ITO’s motion simulation using vectors because such a modification is based on the use of known techniques to improve similar devices in the same way. More specifically, ITO’s motion simulation using vectors is comparable to the combination of CHAN, Schaefferkoetter, and SOMMER’s generated training data because the generated training data can be artificially generated as taught by SOMMER; by adding simulated motion to the motion-free images, ITO provides a method to perform that very simulation. 
Therefore, it would be obvious to one of ordinary skill in the art to modify the combination of CHAN, Schaefferkoetter, and SOMMER’s generated training data to include ITO’s motion simulation using vectors in order to obtain the predictable result of using motion vectors on ground truth images to simulate motion artifacts to generate additional functional images to train the model to improve medical image data. Regarding claim 11, the content of claim 11 is similar to the content of claim 5, therefore it is rejected for the same reasons of obviousness as claim 5. Regarding claim 12, the content of claim 12 is similar to the content of claim 6, therefore it is rejected for the same reasons of obviousness as claim 6. Regarding claim 17, the content of claim 17 is similar to the content of claim 5, therefore it is rejected for the same reasons of obviousness as claim 5. Regarding claim 18, the content of claim 18 is similar to the content of claim 6, therefore it is rejected for the same reasons of obviousness as claim 6. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: QI et al. (US 20210304457 A1) teaches Neural network motion vector estimation for PET data. Bharkhada et al. (US 20220215599 A1) teaches motion field vector application. Jin et al. (US 20190340793 A1) teaches PET image and CT images combination for PET generation. Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEFANO A DARDANO whose telephone number is (703)756-4543. The examiner can normally be reached Monday - Friday 11:00 - 7:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. 
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Greg Morse can be reached at (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /STEFANO ANTHONY DARDANO/ Examiner, Art Unit 2663 /GREGORY A MORSE/Supervisory Patent Examiner, Art Unit 2698

Prosecution Timeline

Feb 09, 2024
Application Filed
Jan 26, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586207
SYSTEM AND METHOD FOR AI SEGMENTATION-BASED REGISTRATION FOR MULTI-FRAME PROCESSING
2y 5m to grant • Granted Mar 24, 2026
Patent 12573227
METHOD AND SYSTEM FOR EXTRACTION OF DATA FROM DOCUMENTS FOR ROBOTIC PROCESS AUTOMATION
2y 5m to grant • Granted Mar 10, 2026
Patent 12573030
PROCESSING OF TRACTOGRAPHY RESULTS USING AN AUTOENCODER
2y 5m to grant • Granted Mar 10, 2026
Patent 12548353
IMAGE PROCESSING APPARATUS SUPPORTING OBSERVATION OF OBJECT USING MICROSCOPE, CONTROL METHOD THEREFOR, AND STORAGE MEDIUM STORING CONTROL PROGRAM THEREFOR
2y 5m to grant • Granted Feb 10, 2026
Patent 12536689
MINING UNLABELED IMAGES WITH VISION AND LANGUAGE MODELS FOR IMPROVING OBJECT DETECTION
2y 5m to grant • Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 77%
With Interview: 99% (+33.0%)
Median Time to Grant: 3y 2m
PTA Risk: Low
Based on 74 resolved cases by this examiner. Grant probability derived from career allow rate.
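The headline projections can be reproduced from the career counts shown on this page. A minimal sketch; note the interview handling is an assumption on my part (treating the +33.0% lift as additive on the base rate, capped at the displayed 99%):

```python
# Career counts shown above for this examiner.
granted, resolved = 57, 74

# Base grant probability: career allow rate, rounded to a whole percent.
base = round(100 * granted / resolved)  # 77

# Interview-adjusted probability: assumed additive +33-point lift, capped at 99
# to match the displayed "With Interview" figure.
with_interview = min(base + 33, 99)  # 99

print(f"{base}% base, {with_interview}% with interview")
```

The cap matters here: 77 + 33 would exceed 100, so the displayed 99% is best read as a ceiling rather than a raw sum.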
