Prosecution Insights
Last updated: April 19, 2026
Application No. 17/680,873

ARTIFICIAL INTELLIGENCE BASED 3D RECONSTRUCTION

Status: Final Rejection (§103)
Filed: Feb 25, 2022
Examiner: KY, KEVIN
Art Unit: 2671
Tech Center: 2600 — Communications
Assignee: Canon Medical Systems Corporation
OA Round: 4 (Final)

Grant Probability: 76% (Favorable)
Expected OA Rounds: 5-6
Median Time to Grant: 2y 6m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (above average; +14.5% vs TC avg), based on 420 granted out of 549 resolved applications
Interview Lift: +25.3% (strong), comparing resolved cases with and without an examiner interview
Avg Prosecution: 2y 6m, with 33 applications currently pending
Career History: 582 total applications across all art units
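The headline figures follow directly from the raw counts above; a minimal sketch of the arithmetic (the tool's exact rounding is an assumption):

```python
# Reproduce the headline examiner statistics from the raw counts shown above.
# Rounding to whole percentages is an assumption about how the tool displays them.
granted, resolved = 420, 549

allow_rate = granted / resolved      # career allow rate
tc_average = allow_rate - 0.145      # "+14.5% vs TC avg", read as percentage points

print(f"Career allow rate: {allow_rate:.1%}")   # 76.5% -> displayed as 76%
print(f"Implied TC average: {tc_average:.1%}")  # ~62.0%
```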

Statute-Specific Performance

§101: 17.6% (-22.4% vs TC avg)
§103: 46.5% (+6.5% vs TC avg)
§102: 20.8% (-19.2% vs TC avg)
§112: 9.9% (-30.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 549 resolved cases.
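One consistency check on the table above: subtracting each statute's delta from the examiner's rate recovers the same Tech Center baseline in every row. A short sketch, assuming the deltas are percentage points:

```python
# Per-statute rates and deltas, taken directly from the table above.
rates  = {"§101": 17.6, "§103": 46.5, "§102": 20.8, "§112": 9.9}
deltas = {"§101": -22.4, "§103": +6.5, "§102": -19.2, "§112": -30.1}

for statute, rate in rates.items():
    tc_avg = rate - deltas[statute]  # remove the examiner's delta
    print(f"{statute}: examiner {rate:.1f}%, implied TC average {tc_avg:.1f}%")

# Every row yields the same 40.0% baseline, consistent with a single
# Tech Center average estimate underlying all four deltas.
```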

Office Action (§103)

DETAILED ACTION

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 1, 3-5, 7-8 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Morgas et al. (US 20210304402) in view of Petschke et al. (US 20170224299), and further in view of Lu et al. (US 20200234471).

Regarding claim 1, Morgas discloses a method for producing a trained machine learning model (abstract: systems and methods for augmenting a training data set with annotated pseudo images for training machine learning models), the method comprising: generating, based on 3D training data of an object acquired by using a first radiation imaging apparatus (¶127: the image(s) of the patient can be obtained using any target medical imaging modality; the target imaging modality can be different from the first imaging modality of the original training data set 402; ¶136: to generate the projection images 608, a 3D array of voxels 405 (i.e., volume image data) from the set of training images 404 of the original training data set 402 is obtained; the 3D array of voxels 405 represents the original 3D images or the 2D image slices 404 viewed as a three-dimensional array), simulated projection data representative of when the object is imaged by a second radiation apparatus (¶136: an algorithm can be applied to simulate a hypothetical radiation beam 604, such as, but not limited to, X-rays, from a virtual radiation source 602 centered on a point a distance away from the volume 405 centroid, passing through the volume 405 at different source positions (A-N) and corresponding projection angles (α1-αn), and captured on a plane of a virtual detector 607, as shown in FIGS. 14-15, also positioned at a distance away from the volume 405 centroid opposite the source 602; the generated 2D projection images 608 (i.e., DRRs) represent projections of the anatomical structures and ground truths (contours and/or anatomical features) 406 of the 3D volume image 405; ¶156: trained to automatically generate pseudo image(s) of a second imaging modality from images of a first imaging modality); and training an untrained machine learning model to produce the trained machine learning model by using the generated simulated projection data (¶151: the projection image data obtained in S1102 can be used as input for training a segmentation DNN model 315A to generate an output, such as contours 414E for the anatomical structures in the projection images 608, as shown in FIG. 18; ¶152: for the training of the segmentation DNN model 315A, the projection data, including the projection images 608 obtained using a forward projection algorithm, and the ground truth contours 406 contained therein, is provided as input data to the input layer of the segmentation DNN model 315A).

Morgas further discloses that the method comprises generating, based on the 3D training data, simulated scatter data representative of when the object is imaged by the second radiation apparatus (¶136, quoted above; ¶139: in order to generate a pseudo 3D volumetric image 412A (and/or 2D image scans of pseudo images 412) that provides a realistic model of the interaction of image generating signals (X-rays, for example) with the patient in the target imaging modality, while also providing a realistic patient model, the forward and backward projection algorithms applied to generate and reconstruct the projection images 608 accurately model the behavior of photons as they pass and scatter 606 through the volumetric images 405, accurately calculate the radiation dose on the virtual detector 607 pixels, and establish an accurate estimate of the actual imaging geometry corresponding to the generated projection images 608).

Morgas fails to teach the simulated projection data being generated from energy-integrated projection data. Petschke teaches this feature (¶16: FIG. 7B shows a plot of 1000 simulations of a material decomposition using a combined cost function including both singles-count projection data and energy-integrated projection data).

Morgas also fails to teach that the step of training the untrained machine learning model comprises training the untrained machine learning model to produce the trained machine learning model by using the simulated scatter data as ground truth data and the simulated projection data as input to the untrained machine learning model. Lu teaches this feature (¶46: in step 223 of process 220, the training data for training the 3D CNN is generated from the training projection data; ¶52: at step 224, the training data, including the target images paired with respective sets of input images, are used for training and optimization of a 3D CNN; ¶33: the machine learning-based approach can include a neural network trained to minimize a loss function between a scatter estimation result from the neural network and a target value representing ground truth for the X-ray scatter; for example, in certain implementations, the ground truth can be generated using scatter simulations produced via an RTE-based approach that applies spherical harmonic expansion in the integrand of the RTE integral equation to simplify and accelerate the calculations; ¶43: in a training phase, at process 220, a training dataset can be generated using ground truth scatter profiles, and the ground truth scatter profiles can be generated from a model-based scatter estimation method; ¶45: in step 221 of process 220, training projection data can be acquired; a large training projection database includes a plurality of training projection datasets; ¶50: reference scatter profiles provide the target images/data (e.g., a ground truth scatter profile)).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have implemented Petschke's teaching of generating the simulated projection data from energy-integrated projection data, and Lu's teaching of training the untrained machine learning model by using the simulated scatter data as ground truth data and the simulated projection data as input, in the method disclosed by Morgas. The motivation for doing so is to yield correct material decomposition and to improve methods that can quickly simulate X-ray scatter without significantly sacrificing accuracy.

Regarding claim 3, Morgas discloses the method according to claim 1, wherein the first radiation imaging apparatus is a computed tomography imaging apparatus and the simulated projection data is simulated computed tomography projection data (¶72 & ¶127: to generate the medical images (whether 2-D or 3-D) of the training sets 10 and/or of the patient set 30, any suitable medical imaging modality or modalities can be used, such as, but not limited to, X-ray, computer tomography (CT), cone beam computed tomography (CBCT), spiral CT, positron emission tomography (PET), magnetic resonance imaging (MRI), functional MRI, single photon emission computed tomography (SPECT), optical tomography, ultrasound imaging, fluorescence imaging, radiotherapy portal imaging, or any combinations thereof; the target imaging modality can be different from the first imaging modality of the original training data set 402; the target imaging modality can be the same as the imaging modality of the generated pseudo images 412 of the pseudo training data set 410 and/or the generated pseudo images 412′ of the pseudo training data set 410′).

Regarding claim 4, Morgas discloses the method according to claim 3, wherein the second radiation imaging apparatus is a cone beam computed tomography imaging apparatus and the simulated computed tomography projection data is simulated cone beam computed tomography projection data (¶72 & ¶127, quoted above).
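For orientation, the forward-projection step the rejection repeatedly cites from Morgas ¶136 (simulating rays through a 3D voxel volume onto a virtual detector at several projection angles, producing DRR-style projection images) can be illustrated with a minimal sketch. This is not code from any cited reference: a simple parallel-beam geometry is assumed (Morgas describes a cone beam diverging from a virtual point source), and simulate_projections and the toy phantom are hypothetical.

```python
# Minimal parallel-beam forward projection: line integrals of a 3D voxel
# volume at several projection angles, yielding DRR-style 2D projections.
import numpy as np
from scipy.ndimage import rotate

def simulate_projections(volume: np.ndarray, angles_deg) -> np.ndarray:
    """Integrate attenuation along rays for each projection angle.

    volume: 3D array of voxels (z, y, x), e.g. CT attenuation values.
    Returns an array of shape (n_angles, z, detector_columns).
    """
    projections = []
    for angle in angles_deg:
        # Rotate the volume about its z-axis so rays always travel along x.
        rotated = rotate(volume, angle, axes=(1, 2), reshape=False, order=1)
        # Parallel-beam line integral: sum attenuation along the ray axis.
        projections.append(rotated.sum(axis=2))
    return np.stack(projections)

# Toy phantom: a denser cube embedded in an empty background volume.
phantom = np.zeros((64, 64, 64), dtype=np.float32)
phantom[24:40, 24:40, 24:40] = 1.0

drrs = simulate_projections(phantom, angles_deg=range(0, 180, 30))
print(drrs.shape)  # (6, 64, 64)
```

A cone-beam version of the same idea would trace diverging rays from the virtual source to each detector pixel rather than summing along a single axis, and a scatter model (as in Morgas ¶139) would add a photon-scatter term on top of these primary line integrals.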
Regarding claims 5 and 7-8 (drawn to an apparatus): the proposed combination of Morgas, Petschke, and Lu, explained in the rejection of method claims 1 and 3-4, renders obvious the operations of the apparatus of claims 5 and 7-8, because these operations occur in the operation of the proposed combination as discussed above. The arguments presented above for claims 1 and 3-4 are thus equally applicable to claims 5 and 7-8. See further Morgas ¶88 (computer system 312 having processor 313).

Regarding claim 23, Morgas discloses a non-transitory computer-readable storage medium storing computer-readable instructions that, when executed by a computer, cause the computer to perform the method of claim 1 (¶183-185).

Response to Arguments

Applicant's arguments filed 10/28/2025 have been fully considered but are not persuasive. Applicant first argues that the prior art of record does not teach the step of training the untrained machine learning model comprising training it to produce the trained machine learning model by using the simulated scatter data as ground truth data and the simulated projection data as input to the untrained machine learning model. The examiner disagrees: Lu (US 20200234471) teaches this feature in ¶¶33, 43, 45-46, 50 and 52, quoted in the rejection of claim 1 above. In those paragraphs it is clear that the untrained machine learning model is trained using simulated scatter data as ground truth data (e.g., the ground truth scatter profiles) and the simulated projection data as input to the untrained machine learning model (e.g., the training data for training the 3D CNN is generated from the training projection data). Clarity has been provided in the rejection above.

Applicant's additional arguments with respect to claims 1, 3-5, 7-8 and 23 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KEVIN KY, whose telephone number is (571) 272-7648. The examiner can normally be reached Monday-Friday, 9AM-5PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vincent Rudolph, can be reached at 571-272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KEVIN KY/
Primary Examiner, Art Unit 2671
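The training step the rejection attributes to Lu (simulated scatter data as ground truth, simulated projection data as network input) can likewise be sketched in a few lines. ScatterNet, its layer sizes, and the random stand-in tensors below are illustrative assumptions, not the architecture of the application or of Lu (US 20200234471).

```python
# Toy training loop: a CNN learns to predict a scatter profile from a
# projection image, with simulated scatter as the ground truth target.
import torch
import torch.nn as nn

class ScatterNet(nn.Module):
    """Hypothetical CNN mapping a projection image to a scatter estimate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# Stand-ins for the simulated data; in the claimed method these would come
# from forward-projecting and scatter-modeling the 3D training volumes.
sim_projections = torch.rand(8, 1, 64, 64)  # input: simulated projection data
sim_scatter = torch.rand(8, 1, 64, 64)      # target: simulated scatter data

model = ScatterNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(sim_projections), sim_scatter)  # scatter as ground truth
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

Minimizing the loss against the scatter target is exactly the arrangement the rejection maps onto Lu ¶33: a network trained to minimize a loss function between its scatter estimate and a ground-truth scatter value.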

Prosecution Timeline

Feb 25, 2022: Application Filed
Dec 09, 2024: Non-Final Rejection (§103)
Mar 13, 2025: Response Filed
Apr 03, 2025: Final Rejection (§103)
Jul 08, 2025: Request for Continued Examination
Jul 11, 2025: Response after Non-Final Action
Jul 24, 2025: Non-Final Rejection (§103)
Oct 28, 2025: Response Filed
Nov 07, 2025: Final Rejection (§103) (current)

Precedent Cases

Applications granted by the same examiner involving similar technology:

Patent 12597158: POSE ESTIMATION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12597291: IMAGE ANALYSIS FOR PERSONAL INTERACTION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586393: KNOWLEDGE-DRIVEN SCENE PRIORS FOR SEMANTIC AUDIO-VISUAL EMBODIED NAVIGATION (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586559: METHOD AND APPARATUS FOR GENERATING SPEECH OUTPUTS IN A VEHICLE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579382: NATURAL LANGUAGE GENERATION USING KNOWLEDGE GRAPH INCORPORATING TEXTUAL SUMMARIES (granted Mar 17, 2026; 2y 5m to grant)

Study what changed to get these applications past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 76%
Grant Probability With Interview: 99% (+25.3%)
Median Time to Grant: 2y 6m
PTA Risk: High

Based on 549 resolved cases by this examiner. Grant probability is derived from the career allow rate.
