DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-5, 7-16, and 19-27 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Xu et al. (US 10,573,032 B2).
1. A computer-implemented method for training a regression model for cone-beam computed tomography (CBCT) data processing (lines 65-67, column 1 bridging lines 1-49, column 2), the method comprising: obtaining a reference medical image of an anatomical area (lines 9-11, column 2 mention an image collected from a patient); generating, from the reference medical image, a plurality of variation images, wherein the plurality of variation images provide variation in representations of the anatomical area (Figure 10D shows variation images representing anatomical areas); identifying projection viewpoints, in a CBCT projection space, for each of the plurality of variation images; generating, at each of the projection viewpoints, a set of CBCT projections and a corresponding set of simulated aspects of the CBCT projections; and training the regression model, using the corresponding sets of the CBCT projections and the simulated aspects of the CBCT projections (Fig. 8A, corresponding to the discussion in lines 27-55, column 18, shows using a deep convolutional neural network to create an error map for updating parameters of the neural network, thus removing image artifacts. Lines 19-22, column 2, mention training a deep convolutional neural network for regression so as to reduce one or more artifacts in the projection space image).
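For exposition only, the general training arrangement characterized above (a regression model fitted on pairs of artifact-contaminated and artifact-free projection data) can be sketched as follows. This is a minimal illustration under assumed simplifications, not the method of the Xu reference or of the claims: a plain linear least-squares model stands in for the deep convolutional neural network, and "scatter" is modeled as a hypothetical smooth additive bias.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate paired training data: clean 1-D "projections" and versions
# contaminated with a smooth additive bias standing in for scatter
# (a hypothetical deficiency model, not the reference's physics).
n_pixels, n_samples = 32, 200
clean = rng.uniform(0.5, 1.5, size=(n_samples, n_pixels))
scatter = 0.3 * np.sin(np.linspace(0.0, np.pi, n_pixels))  # smooth bias
contaminated = clean + scatter

# "Train" a regression model: fit a linear map (with bias term) from
# contaminated projections to their clean counterparts.
X = np.hstack([contaminated, np.ones((n_samples, 1))])
W, *_ = np.linalg.lstsq(X, clean, rcond=None)

# Apply the trained model to a newly simulated contaminated projection.
new_clean = rng.uniform(0.5, 1.5, size=n_pixels)
new_contam = new_clean + scatter
corrected = np.hstack([new_contam, 1.0]) @ W
```

Because the simulated deficiency here is exactly additive, the linear fit recovers the correction; a learned network would be needed for realistic, nonlinear artifacts.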
2. The method of claim 1, wherein training the regression model includes training with pairs of generated CBCT projections that include simulated deficiencies and generated CBCT projections that do not include the simulated deficiencies; wherein the trained regression model is configured to receive a newly captured CBCT projection that includes one or more deficiencies as input, and wherein the trained regression model is configured to provide a corrected CBCT projection as output (lines 11-15, column 2 say to reduce one or more artifacts in a central one of a plurality of CT projection space images).
3. The method of claim 2, wherein the trained regression model is adapted to correct one or more deficiencies in the newly captured CBCT projection caused by scatter, and wherein the pairs of generated CBCT projections for training comprise CBCT projections that include simulated scatter and CBCT projections that do not include the simulated scatter (line 64, column 2 mentions at least one artifact being a scattering artifact).
4. The method of claim 2, wherein the trained regression model is adapted to correct one or more deficiencies in the newly captured CBCT projection caused by a foreign material, and wherein the pairs of generated CBCT projections for training comprise CBCT projections that include simulated artifacts caused by the foreign material and CBCT projections that remove the simulated artifacts caused by the foreign material (line 64, column 2 mentions at least one artifact being a scattering artifact).
5. The method of claim 4, wherein the foreign material is metal, and wherein the CBCT projections that remove the simulated artifacts are produced using at least one metal artifact reduction algorithm (lines 15-20, column 5 discuss using a physical phantom and iterative algorithms to reduce artifacts. A physical phantom typically would inherently include artifacts caused by metal).
7. The method of claim 1, wherein the plurality of variation images comprise a first plurality of CBCT projections generated with a first field of view, and wherein the trained regression model is configured to receive as input a second plurality of CBCT projections having a second field of view that differs from the first field of view (Figure 9c).
8. The method of claim 1, wherein the plurality of variation images are generated by geometrical augmentations or changes to the representations of the anatomical area, and wherein the projection viewpoints correspond to a plurality of projection angles for capturing CBCT projections (Figure 2 illustrates this capability).
9. The method of claim 1, wherein the reference medical image is a 3D image provided from a computed tomography (CT) scan, and wherein the method further includes training of the regression model using a plurality of reference medical images from the CT scan (Figure 9c).
10. The method of claim 1, wherein the reference medical image is from a human patient, and wherein the trained regression model is used for radiotherapy treatment of the human patient (Lines 24-36, column 1).
11. The method of claim 10, wherein the method further includes training of the regression model using a plurality of reference medical images from one or more prior computed tomography (CT) scans or one or more prior CBCT scans of the human patient (Figure 9c).
12. The method of claim 1, wherein the reference medical image is provided from one of a plurality of human subjects, and wherein the method further includes training of the model using a plurality of reference medical images from each of the plurality of human subjects (Lines 24-36, column 1).
13. A computer-implemented method for using a trained regression model for cone-beam computed tomography (CBCT) data processing (Xu teaches a system and method for training a deep convolutional neural network [abstract]), the method comprising: accessing a trained regression model configured for removing deficiencies in CBCT projections (Fig. 8A, corresponding to the discussion in lines 27-55, column 18, shows using a deep convolutional neural network to create an error map for updating parameters of the neural network, thus removing image artifacts. Lines 19-22, column 2, mention training a deep convolutional neural network for regression so as to reduce one or more artifacts in the projection space image), wherein the trained regression model is trained using corresponding sets of simulated deficiencies (this would be the estimated result mentioned in line 40, column 18; see also Fig. 9C, where 2D CBCT artifact-contaminated projection data at step 980 is sent to the CNN model at step 892; see also Fig. 9B, where at step 940 sets of 2D CBCT artifact-contaminated projection data are received at different view angles along with corresponding artifact-reduced 2D CBCT projection data) and CBCT projections at each of a plurality of projection viewpoints in a CBCT projection space, wherein the sets of simulated deficiencies and CBCT projections are generated based on a reference medical image (lines 28-56, column 18 also mention generating the estimated result from inputs 804, which corresponds to a reference medical image); providing a first plurality of CBCT projections as an input to the trained regression model, wherein the first plurality of CBCT projections include one or more deficiencies; and obtaining a second plurality of CBCT projections as an output of the trained regression model, wherein the second plurality of CBCT projections include corrections to the one or more deficiencies (this discussion corresponds to Fig. 8).
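For exposition only, the error-map training loop referenced above (Fig. 8A: the model produces an estimated artifact-free result, and the error map against the expected result drives a parameter update) can be illustrated with a deliberately tiny stand-in. All names and the single-bias "model" are assumptions for illustration; the reference uses a deep convolutional neural network, not this toy update rule.

```python
import numpy as np

rng = np.random.default_rng(1)

# Expected (artifact-reduced) projection and a contaminated version with a
# hypothetical uniform additive artifact of magnitude 0.2.
n_pixels = 16
expected = rng.uniform(size=n_pixels)
contaminated = expected + 0.2

# A one-parameter-per-pixel "model": subtract a learnable bias. The error
# map (estimate minus expected result) updates the parameters each pass.
bias = np.zeros(n_pixels)
lr = 0.5
for _ in range(100):
    estimate = contaminated - bias      # model's estimated clean projection
    error_map = estimate - expected     # per-pixel error map
    bias += lr * error_map              # parameter update driven by the map

corrected = contaminated - bias         # inference with trained parameters
```

The bias converges to the artifact magnitude, so the corrected output matches the expected result; the same estimate/error-map/update cycle generalizes to gradient descent on network weights.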
14. The method of claim 13, wherein training of the trained regression model includes training with pairs of generated CBCT projections that include the simulated deficiencies and generated CBCT projection images that do not include the simulated deficiencies (this limitation corresponds to the teaching in lines 32-37, column 18 as the teaching specifically calls for training the deep learning model using medical images contaminated with artifacts and expected results to train the deep learning model).
15. The method of claim 14, wherein the one or more deficiencies in the first plurality of CBCT projections are caused by scatter, and wherein the pairs of generated CBCT projections used for training comprise CBCT projections that include simulated scatter and CBCT projections that do not include the simulated scatter (line 64, column 2 mentions at least one artifact being a scattering artifact).
16. The method of claim 14, wherein the one or more deficiencies in the first plurality of CBCT projections are caused by a foreign material, and wherein the pairs of generated CBCT projections used for training include CBCT projections that include simulated artifacts caused by the foreign material and CBCT projections that remove the simulated artifacts caused by the foreign material (a scattering artifact is considered foreign matter relative to the collected image; as such, describing it as foreign matter is an inherent characterization of the artifact).
19. The method of claim 13, wherein the first plurality of CBCT projections are captured with a first field of view, and wherein the CBCT projections used for training are generated with a second field of view that differs from the first field of view (Figure 2 illustrates this capability).
20. The method of claim 13, wherein the reference medical image used for training is one of a plurality of 3D images provided from a computed tomography (CT) scan (Fig. 9c).
21. The method of claim 13, further comprising: performing reconstruction of a 3D CBCT image from the second plurality of CBCT projections (Fig. 9c).
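For exposition only, the 3D reconstruction step recited in claim 21 rests on standard tomographic backprojection of corrected projections. The sketch below is a generic, unfiltered backprojection on a tiny 2D grid restricted to 90-degree view angles (so no interpolation is needed); it is an illustrative assumption, not the reconstruction method of the Xu reference.

```python
import numpy as np

# A tiny 8x8 "object": a bright square in the center of the grid.
image = np.zeros((8, 8))
image[3:5, 3:5] = 1.0

# Forward-project at 0, 90, 180, and 270 degrees by rotating and summing.
angles = range(4)
projections = [np.rot90(image, k).sum(axis=0) for k in angles]

# Unfiltered backprojection: smear each 1-D projection across the grid,
# rotate it back to its view angle, and average the contributions.
recon = np.zeros_like(image)
for k, p in zip(angles, projections):
    smear = np.tile(p, (8, 1))
    recon += np.rot90(smear, -k)
recon /= len(projections)
```

Even this crude average peaks at the true object location; practical CBCT reconstruction uses filtered backprojection or iterative methods over many cone-beam view angles.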
22. The method of claim 21, wherein the reference medical image used for training is from a human patient, and wherein the 3D CBCT image is used for radiotherapy treatment of the human patient (Lines 24-36, column 1).
23. The method of claim 13, wherein the trained regression model is trained based on reference images captured from a plurality of human subjects (Lines 1-23, column 2).
24. A non-transitory computer-readable storage medium comprising computer-readable instructions for training a regression model to process cone-beam computed tomography (CBCT) data, wherein the instructions, when executed, cause a computing machine to perform operations comprising: obtaining a reference medical image of an anatomical area; generating, from the reference medical image, a plurality of variation images, wherein the plurality of variation images provide variation in representations of the anatomical area; identifying projection viewpoints, in a CBCT projection space, for each of the plurality of variation images; generating, at each of the projection viewpoints, a set of CBCT projections and a corresponding set of simulated aspects of the CBCT projections; and training the regression model, using the corresponding sets of the CBCT projections and the simulated aspects of the CBCT projections (Lines 41-67, column 9 bridging lines 1-17, column 10).
25. The computer-readable storage medium of claim 24, wherein training the regression model includes training with pairs of generated CBCT projections that include simulated deficiencies and generated CBCT projections that do not include the simulated deficiencies; wherein the trained regression model is configured to receive a newly captured CBCT projection that includes one or more deficiencies as input, and wherein the trained regression model is configured to provide a corrected CBCT projection as output (the analysis set forth above for claim 2 equally applies to this claim).
26. A non-transitory computer-readable storage medium comprising computer-readable instructions for using a trained regression model to process cone-beam computed tomography (CBCT) data, wherein the instructions, when executed, cause a computing machine to perform operations comprising: accessing a trained regression model configured for removing deficiencies in CBCT projections, wherein the trained regression model is trained using corresponding sets of simulated deficiencies and CBCT projections at each of a plurality of projection viewpoints in a CBCT projection space, wherein the sets of simulated deficiencies and CBCT projections are generated based on a reference medical image; providing a first plurality of CBCT projections as an input to the trained regression model, wherein the first plurality of CBCT projections include one or more deficiencies; and obtaining a second plurality of CBCT projections as an output of the trained regression model, wherein the second plurality of CBCT projections include corrections to the one or more deficiencies (the analysis set forth above for claim 13 equally applies to this claim).
27. The computer-readable storage medium of claim 26, wherein training of the trained regression model includes training with pairs of generated CBCT projections that include the simulated deficiencies and generated CBCT projection images that do not include the simulated deficiencies (the analysis set forth above for claim 14 equally applies to this claim).
Allowable Subject Matter
Claims 6, 17 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: the prior art of record does not teach the limitations of claims 6 and 17.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DON KITSUN WONG whose telephone number is (571)272-1834. The examiner can normally be reached on Monday – Friday 9:00am – 6:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Uzma Alam, can be reached at 571-272-3995. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DON K WONG/ Primary Examiner, Art Unit 2884