Prosecution Insights
Last updated: April 19, 2026
Application No. 18/969,251

NUCLEAR MEDICINE DIAGNOSIS DEVICE, REGION-OF-INTEREST MOVEMENT STATE DETECTION METHOD, AND STORAGE MEDIUM

Non-Final OA (§102, §103)
Filed: Dec 04, 2024
Examiner: MAYNARD, JOHNATHAN A
Art Unit: 3798
Tech Center: 3700 (Mechanical Engineering & Manufacturing)
Assignee: Canon Medical Systems Corporation
OA Round: 1 (Non-Final)
Grant Probability: 39% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 3y 10m
With Interview: 46%

Examiner Intelligence

Career Allow Rate: 39% (74 granted / 189 resolved; -30.8% vs TC avg)
Interview Lift: +6.9% for resolved cases with interview (moderate lift)
Avg Prosecution: 3y 10m typical timeline (31 applications currently pending)
Total Applications: 220 across all art units (career history)

Statute-Specific Performance

§101: 7.0% (-33.0% vs TC avg)
§102: 16.8% (-23.2% vs TC avg)
§103: 50.8% (+10.8% vs TC avg)
§112: 20.8% (-19.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 189 resolved cases.

Office Action

Grounds: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-4 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Li et al.
(“Unsupervised deep learning framework for data-driven gating in positron emission tomography,” August 2023), hereinafter “Li.”

Regarding claim 1, Li discloses a nuclear medicine diagnosis device (“Canon Celestion whole-body TOF PET/CT scanner,” P.6050, ¶5 – P.6051, ¶1) comprising: processing circuitry configured to collect radiation data obtained by detecting radiation based on radioactive medicine administrated to a subject (Canon Celestion whole-body TOF PET/CT scanner comprises processing circuitry to collect list-mode data obtained by detecting radiation based on 216.0-249.9 MBq F-FDG administration to a patient, P.6050, ¶5 – P.6051, ¶1), divide the radiation data into a plurality of radiation data segments with predetermined time widths (list-mode data is divided into 500 ms frames, P.6050, ¶5 – P.6051, ¶1), and detect a movement state of a region of interest in a body of the subject based on the radiation data segments (respiratory motion, motion signal, displacement, respiratory phase, and motion field of a ROI in the body of a patient based on the divided list-mode data, P.6050, ¶ 1-2, P.6052, ¶2, P.6052, ¶5 – P.6053, ¶2, P.6058, ¶4).
Regarding claim 2, Li discloses the processing circuitry generates reconstructed images indicating positions of the region of interest based on the radiation data segments (Canon Celestion whole-body TOF PET/CT scanner and computer software implementation using Keras 2.2.4 with Tensorflow backend and NVIDIA GTX 1080TI GPU comprise processing circuitry to generate reconstructed images indicating positions of the region of interest based on the divided list mode data, P.6049, ¶ 4 – P.6050, ¶ 1, P.6050, ¶5 – P.6051, ¶1, P.6052, ¶ 2, P.6052, ¶ 5), rearranges generated reconstructed images in order of time of respiration of the subject (reconstructed images are sorted in order of respiratory gates, Abstract, P.6048, ¶5, P.6048, ¶6 – P.6049, ¶1, P.6049, ¶4 – P.6050, ¶3, P.6050, ¶5 – P.6051, ¶1, P.6052, ¶5 – P.6053, ¶ 2), compares the positions of the region of interest between at least two rearranged reconstructed images (respiratory motion, motion signal, displacement, respiratory phase, and motion field of a ROI between at least two of the sorted respiratory gate reconstructed images, P.6050, ¶ 1-2, P.6052, ¶2, P.6052, ¶5 – P.6053, ¶2, P.6058, ¶4), and detects the movement state of the region of interest based on a comparison result of performing a comparison process (respiratory motion, motion signal, displacement, respiratory phase, and motion field of a ROI determined based on a comparison between at least two of the sorted respiratory gate reconstructed images, P.6050, ¶ 1-2, P.6052, ¶2, P.6052, ¶5 – P.6053, ¶2, P.6058, ¶4). 
Regarding claim 3, Li discloses the processing circuitry rearranges the reconstructed images based on feature quantities of the generated reconstructed images (Canon Celestion whole-body TOF PET/CT scanner and NVIDIA GTX 1080TI GPU comprise processing circuitry to sort reconstructed images in order of respiratory gates based on latent features of the generated reconstructed images, Abstract, P.6048, ¶5, P.6048, ¶6 – P.6049, ¶1, P.6049, ¶4 – P.6050, ¶3, P.6050, ¶5 – P.6051, ¶1, P.6052, ¶5 – P.6053, ¶ 2).

Regarding claim 4, Li discloses the processing circuitry extracts the feature quantities including at least movement components in the region of interest indicated in the reconstructed images by inputting the reconstructed images to a learning network and rearranges the reconstructed images based on a degree of similarity between the feature quantities (Canon Celestion whole-body TOF PET/CT scanner and computer software implementation using Keras 2.2.4 with Tensorflow backend and NVIDIA GTX 1080TI GPU comprise processing circuitry to extract the latent features including at least respiratory movement components in the ROI indicated in the reconstructed images by inputting the reconstructed images to an unsupervised feature learning network and sorts the reconstructed images based on similarity between the latent features, Abstract, P.6048, ¶5, P.6048, ¶6 – P.6049, ¶1, P.6049, ¶4 – P.6050, ¶3, P.6050, ¶5 – P.6051, ¶1, P.6052, ¶5 – P.6053, ¶ 2).

Claim 7 is rejected under 35 U.S.C. 102(a)(1) as being anticipated by Li.
Regarding claim 7, Li discloses a region-of-interest movement state detection method (a method for detecting respiratory motion, Abstract; region of interest movement analysis, P.6052, ¶2) comprising: collecting, by a computer, radiation data obtained by detecting radiation based on radioactive medicine administrated to a subject (Canon Celestion whole-body TOF PET/CT scanner comprises a workstation computer to collect list-mode data obtained by detecting radiation based on 216.0-249.9 MBq F-FDG administration to a patient, P.6050, ¶5 – P.6051, ¶1); dividing, by the computer, the radiation data into a plurality of radiation data segments with predetermined time widths (list-mode data is divided into 500 ms frames, P.6050, ¶5 – P.6051, ¶1; Canon Celestion whole-body TOF PET/CT scanner comprises a workstation computer, P.6050, ¶5 – P.6051, ¶1; computer software implementation using Keras 2.2.4 with Tensorflow backend and NVIDIA GTX 1080TI GPU, P.6049, ¶5 – P.6050, ¶1); and detecting, by the computer, a movement state of a region of interest in a body of the subject based on the radiation data segments (respiratory motion, motion signal, displacement, respiratory phase, and motion field of a ROI in the body of a patient based on the divided list-mode data, P.6050, ¶ 1-2, P.6052, ¶2, P.6052, ¶5 – P.6053, ¶2, P.6058, ¶4; Canon Celestion whole-body TOF PET/CT scanner comprises a workstation computer, P.6050, ¶5 – P.6051, ¶1; computer software implementation using Keras 2.2.4 with Tensorflow backend and NVIDIA GTX 1080TI GPU, P.6049, ¶5 – P.6050, ¶1).

Claim 8 is rejected under 35 U.S.C. 102(a)(1) as being anticipated by Li.
Regarding claim 8, Li discloses a non-transitory computer-readable storage medium storing a program for causing a computer to (a method implemented on a Canon Celestion whole-body TOF PET/CT scanner comprises a workstation computer and computer software implementation using Keras 2.2.4 with Tensorflow backend and NVIDIA GTX 1080TI GPU, P.6049, ¶5 – P.6050, ¶1, P.6050, ¶5 – P.6051, ¶1): collect radiation data obtained by detecting radiation based on radioactive medicine administrated to a subject (collect list-mode data obtained by detecting radiation based on 216.0-249.9 MBq F-FDG administration to a patient, P.6050, ¶5 – P.6051, ¶1); divide the radiation data into a plurality of radiation data segments with predetermined time widths (list-mode data is divided into 500 ms frames, P.6050, ¶5 – P.6051, ¶1); and detect a movement state of a region of interest in a body of the subject based on the radiation data segments (respiratory motion, motion signal, displacement, respiratory phase, and motion field of a ROI in the body of a patient based on the divided list-mode data, P.6050, ¶ 1-2, P.6052, ¶2, P.6052, ¶5 – P.6053, ¶2, P.6058, ¶4).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Li as in claim 2 above, and further in view of Ren et al.
(“Data-driven event-by-event respiratory motion correction using TOF PET list-mode centroid of distribution,” 2017), hereinafter “Ren.”

Regarding claim 5, Li discloses the processing circuitry compares positions of the region of interest between a first reconstructed image that is the reconstructed image indicating a state in the respiration of the subject and a second reconstructed image that is the reconstructed image indicating a state in the respiration of the subject (Canon Celestion whole-body TOF PET/CT scanner and computer software implementation using Keras 2.2.4 with Tensorflow backend and NVIDIA GTX 1080TI GPU comprise processing circuitry to calculate the displacement and motion fields of the ROI between a first reconstructed image that is the reconstructed image indicating a state in the respiration of the patient and a second reconstructed image that is the reconstructed image indicating a state in the respiration of the patient, Abstract, P.6050, ¶ 1-2, P.6052, ¶2, P.6052, ¶5 – P.6053, ¶2, P.6058, ¶4). However, Li does not appear to explicitly disclose the first reconstructed image indicating a maximum expiration state and the second reconstructed image indicating a minimum inspiration state.

In the same field of endeavor of PET imaging, Ren teaches comparing positions of the region of interest between a first reconstructed image that is the reconstructed image indicating a maximum expiration state in the respiration of the subject and a second reconstructed image that is the reconstructed image indicating a maximum inspiration state in the respiration of the subject (determining the respiratory displacement of the ROI, pancreas, between end-expiration and end-inspiration from gated reconstructions, Abstract, P.4744, ¶5 – P.4745, ¶1, P.4747, ¶1, Fig. 4, P.4752, ¶ 2).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have applied Ren’s known technique of calculating the displacement of the ROI between a first end-expiration reconstructed image and a second end-inspiration reconstructed image to Li’s known apparatus for calculating the displacement and motion fields of the ROI between a first and second reconstructed image to achieve the predictable result that this allows for quantitative analysis of the reconstruction results to assess reliability. See, e.g., Ren, P.4744, ¶5 – P.4745, ¶1 and P.4752, ¶2.

Regarding claim 6, Li discloses the processing circuitry obtains a difference between a position of a first region of interest that is the region of interest indicated in the first reconstructed image and a position of a second region of interest that is the region of interest indicated in the second reconstructed image and outputs information indicating the obtained difference as the comparison result (Canon Celestion whole-body TOF PET/CT scanner and computer software implementation using Keras 2.2.4 with Tensorflow backend and NVIDIA GTX 1080TI GPU comprise processing circuitry to calculate the displacement and motion fields of the ROI between a first reconstructed image that is the reconstructed image indicating a state in the respiration of the patient and a second reconstructed image that is the reconstructed image indicating a state in the respiration of the patient, Abstract, P.6050, ¶ 1-2, P.6052, ¶2, P.6052, ¶5 – P.6053, ¶2, P.6058, ¶4). However, Li does not appear to explicitly disclose the first reconstructed image indicating a maximum expiration state and the second reconstructed image indicating a minimum inspiration state.
In the same field of endeavor of PET imaging, Ren teaches comparing positions of the region of interest between a first reconstructed image that is the reconstructed image indicating a maximum expiration state in the respiration of the subject and a second reconstructed image that is the reconstructed image indicating a maximum inspiration state in the respiration of the subject (determining the respiratory displacement of the ROI, pancreas, between end-expiration and end-inspiration from gated reconstructions, Abstract, P.4744, ¶5 – P.4745, ¶1, P.4747, ¶1, Fig. 4, P.4752, ¶ 2).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have applied Ren’s known technique of calculating the displacement of the ROI between a first end-expiration reconstructed image and a second end-inspiration reconstructed image to Li’s known apparatus for calculating the displacement and motion fields of the ROI between a first and second reconstructed image to achieve the predictable result that this allows for quantitative analysis of the reconstruction results to assess reliability. See, e.g., Ren, P.4744, ¶5 – P.4745, ¶1 and P.4752, ¶2.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Chen et al. (“Respiratory signal estimation for cardiac perfusion SPECT using deep learning,” August 2023) discloses a deep learning network for generating reconstructed images from divided list-mode data, rearranging the data based on respiratory motion, and determining a displacement between the end-expiration and end-inspiration.

Lassen et al. (“Gating approaches in cardiac PET imaging,” 2019) discloses a learning network for generating reconstructed images from divided list-mode data and rearranging the data based on respiratory motion.

Messerli et al.
(“Clinical evaluation of data-driven respiratory gating for PET/CT in an oncological cohort of 149 patients: impact on image quality and patient management,” 2021) discloses a learning network for generating reconstructed images from divided list-mode data and rearranging the data based on respiratory motion.

Prevrhal et al. (U.S. Pub. No. 2023/0022425) discloses generating reconstructed images from divided list-mode data and rearranging the data based on respiratory motion.

Buther et al. (U.S. Pub. No. 2010/0067765) discloses generating reconstructed images from divided list-mode data and rearranging the data based on respiratory motion.

Thomas et al. (U.S. Pub. No. 2008/0107229) discloses generating reconstructed images from divided list-mode data and rearranging the data based on respiratory motion.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Johnathan Maynard, whose telephone number is (571) 272-7977. The examiner can normally be reached 10 AM - 6 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Keith Raymond, can be reached at 571-270-1790. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.M./
Examiner, Art Unit 3798

/KEITH M RAYMOND/
Supervisory Patent Examiner, Art Unit 3798

Prosecution Timeline

Dec 04, 2024: Application Filed
Dec 19, 2025: Non-Final Rejection under §102 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594084: Ultrasound Device for Use with Synthetic Cavitation Nuclei (2y 5m to grant; granted Apr 07, 2026)
Patent 12588817: SYSTEMS AND METHODS FOR GENERATING DIAGNOSTIC SCAN PARAMETERS FROM CALIBRATION IMAGES (2y 5m to grant; granted Mar 31, 2026)
Patent 12575734: DEVICES AND RELATED ASPECTS FOR MAGNETIC RESONANCE IMAGING-BASED IN-SITU TISSUE CHARACTERIZATION (2y 5m to grant; granted Mar 17, 2026)
Patent 12571862: B1 FIELD MAP WITH CONTRAST MEDIUM INJECTION (2y 5m to grant; granted Mar 10, 2026)
Patent 12544142: Method and System for Associating Pre-Operative Plan with Position Data of Surgical Instrument (2y 5m to grant; granted Feb 10, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 39%
With Interview: 46% (+6.9%)
Median Time to Grant: 3y 10m
PTA Risk: Low
Based on 189 resolved cases by this examiner. Grant probability derived from career allow rate.
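The note above says the grant probability is derived from the examiner's career allow rate. A minimal Python sketch of that arithmetic, assuming the dashboard simply rounds 74/189 and adds the +6.9% interview lift (the variable names and the additive model are assumptions for illustration, not the tool's documented method):

```python
# Hypothetical derivation of the dashboard's headline figures
# from the examiner stats shown above.

granted, resolved = 74, 189     # career totals: 74 granted of 189 resolved
interview_lift = 0.069          # +6.9% lift for cases with an interview

allow_rate = granted / resolved             # career allow rate
with_interview = allow_rate + interview_lift

print(f"{allow_rate:.0%}")       # 39%
print(f"{with_interview:.0%}")   # 46%
```

The rounded figures match the dashboard's 39% base and 46% with-interview projections, consistent with a simple additive lift on the career allow rate.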
