Prosecution Insights
Last updated: April 19, 2026
Application No. 18/811,465

APPARATUS AND METHOD FOR DENOISING OF MEDICAL IMAGE

Non-Final OA (§102, §103)
Filed
Aug 21, 2024
Examiner
TRAN, JENNY NGAN
Art Unit
2615
Tech Center
2600 — Communications
Assignee
Claripi Inc.
OA Round
1 (Non-Final)
Grant Probability: 20% (At Risk)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 6m
Grant Probability With Interview: 70%

Examiner Intelligence

Grants only 20% of cases
Career Allow Rate: 20% (1 granted / 5 resolved), -42.0% vs TC avg
Interview Lift: +50.0% in resolved cases with interview
Avg Prosecution: 2y 6m (typical timeline)
Career History: 36 total applications across all art units, 31 currently pending
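The headline numbers above are simple arithmetic over the examiner's five resolved cases; a quick sketch, assuming (as the projection figures suggest) that the interview lift is additive in percentage points:

```python
# Career allow rate from the examiner's resolved cases.
granted, resolved = 1, 5
allow_rate = 100 * granted / resolved          # 20.0%

# The dashboard reports the interview lift as +50.0 percentage points.
interview_lift = 50.0
with_interview = allow_rate + interview_lift   # 70.0%

print(f"{allow_rate:.0f}% baseline, {with_interview:.0f}% with interview")
```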

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§103: 49.0% (+9.0% vs TC avg)
§102: 21.8% (-18.2% vs TC avg)
§112: 18.3% (-21.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 5 resolved cases.

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

Claims 1-16 are currently pending in the present application, with claims 1 and 9 being independent.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 08/21/2024 has been considered by the examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-2, 5-10, and 13-16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Niu et al., "Noise entangled GAN for low-dose CT simulation," arXiv preprint arXiv:2102.09615 (2021), hereinafter "Niu".
Regarding claim 1, Niu discloses an apparatus for denoising a medical image (NE-GAN, Section 2.3 Implementation details), comprising: an image processing module configured to extract a noise component from a medical image for processing by inputting the medical image for the processing to a noise extraction deep learning model (Figs. 1-2 and Section 2.1 Generation of high-dose noise image; For the denoising scheme, the HDCT image is first forwarded into a denoising model to obtain a clean CT image, which is then subtracted from the input HDCT image to generate the high-dose noise image. The denoising model is trained by directly mapping high-dose CT image to the low-dose CT image, as shown in Fig. 2) trained in advance (Section 2.2, Right Column; After training, only the generator G is retained to simulate different levels of LDCT images given the clean image, the high-dose noise image, and the specific noise factor…Examiner's note: "after training" to simulate different images clarifies that the model is trained in advance), and generate a noise-removed image by subtracting the noise component from the medical image for the processing (Figs. 1-2 and Section 2.1 Generation of high-dose noise image; the HDCT image is first forwarded into a denoising model to obtain a clean CT image, which is then subtracted from the input HDCT image to generate the high-dose noise image), wherein the noise extraction deep learning model is trained using a simulation noise component image generated by a noise simulator and a simulation low-quality image generated based on the simulation noise component image as a pair (Figs. 1-2 and Section 2.1 Generation of high-dose noise image; the trained model can generate the denoised image…The denoising scheme can extract the real prior information from the real HDCT images, which are then transformed to LDCT images with specific noise level by NE-GAN…NE-GAN takes the simulated noise and the HDCT image as inputs to generate a set of LDCT images with different levels of noise…).

Regarding claim 2, Niu discloses the apparatus of claim 1, and further discloses wherein the simulation low-quality image is generated by combining the simulation noise component image and a normal-quality medical image (Fig. 3 and Section 2.2 Noise Entangled GAN, Right Column; NE-GAN consists of generator G and a set of discriminators…the generator G is an encoder-decoder network that takes a clean CT image and a noise image scaled with a noise factor as inputs, and outputs a LDCT image corresponding to the input noise factor…).

Regarding claim 5, Niu discloses the apparatus of claim 1, and further discloses wherein the noise simulator is configured to generate the simulation noise component image by inputting a set of the normal-quality medical images to a generative adversarial model trained in advance (Section 2 Methods; we propose to simulate an LDCT image through two steps: the first step is to generate a clean image…and the second step is to generate different levels of LDCT images by entangling the high-dose noise component scaled with a specific noise factor into the clean CT image.
Section 2.2 Noise Entangled GAN, Right Column; NE-GAN consists of generator G and a set of discriminators…the generator G is an encoder-decoder network that takes a clean CT image and a noise image scaled with a noise factor as inputs, and outputs a LDCT image corresponding to the input noise factor).

Regarding claim 6, Niu discloses the apparatus of claim 5, and further discloses wherein the generative adversarial model is repeatedly trained (Section 3.1 Dataset; we used a multi-dose real CT image dataset from [6], in which the CT images were collected from anonymous cadavers and each of them was repeatedly scanned…sub-dataset that contains 261 groups of CT images for training and 251 groups of CT images for testing…Section 3.2 Results on simulated dataset; we repeatedly generated HDCT noise image with CatSim and simulated the LDCT images with NE-GAN by 50 times) to minimize a loss due to difference between a noise component image for training and a simulation noise component image generated by inputting a set of normal-quality medical images for training to the generative adversarial model (Section 2.2 Noise Entangled GAN; The loss function is:…the first two items in the loss function are the adversarial losses…and train G to minimize the probability of assigning the correct label for D, the third item is a data fidelity loss…and the fourth item is a reconstruction loss…).

Regarding claim 7, Niu discloses the apparatus of claim 1, and further discloses wherein the noise simulator is configured to generate a set of low-quality medical images by inputting a set of normal-quality medical images to a generative adversarial model trained in advance (Section 2 Methods; we propose to simulate an LDCT image through two steps: the first step is to generate a clean image…and the second step is to generate different levels of LDCT images by entangling the high-dose noise component scaled with a specific noise factor into the clean CT image. Section 2.2 Noise Entangled GAN, Right Column; NE-GAN consists of generator G and a set of discriminators…the generator G is an encoder-decoder network that takes a clean CT image and a noise image scaled with a noise factor as inputs, and outputs a LDCT image corresponding to the input noise factor…To train NE-GAN, we need a set of training samples…where xi^0 and ni^0 denote the clean CT image and high-dose image respectively…), and generate the simulation noise component image by subtracting the set of low-quality medical images from the set of normal-quality medical images (Section 2.1 Generation of high-dose noise image; For the denoising scheme, the HDCT image is first forwarded into a denoising model to obtain a clean CT image, which is then subtracted from the input HDCT image to generate the high-dose noise image. The denoising model is trained by directly mapping high-dose CT image to the low-dose CT image, as shown in Fig. 2).

Regarding claim 8, Niu discloses the apparatus of claim 7, and further discloses wherein the generative adversarial model is repeatedly trained to minimize a loss due to difference between a low-quality medical image for training and a simulation low-quality medical image generated by inputting a set of normal-quality medical images for training to the generative adversarial model (Section 2.2 Noise Entangled GAN; To train NE-GAN, we need a set of training samples…where xi^0 and ni^0 denote the clean CT image and high-dose image respectively…The loss function is:…the first two items in the loss function are the adversarial losses that train D to maximize the probability of assigning the correct label to both real LDCT images and the generated ones from G…).

Regarding claim 9, claim 9 is the method claim corresponding to apparatus claim 1, and is accordingly rejected using substantially similar rationale as that set forth with respect to claim 1.
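The denoise-and-subtract scheme the rejection cites repeatedly (Niu Section 2.1) reduces to: run the image through a denoiser, then subtract the result from the input to isolate the noise component. A minimal NumPy sketch of that decomposition, with a simple mean filter standing in for the trained deep model (the filter is purely illustrative, not Niu's actual network):

```python
import numpy as np

def extract_noise(image, denoiser):
    """Isolate the noise component by subtracting the denoised
    image from the input, as in the cited denoise-and-subtract
    scheme."""
    clean = denoiser(image)
    noise = image - clean
    return clean, noise

def mean_filter(img):
    """Stand-in 'denoiser': a 3x3 mean filter with edge padding.
    A real system would use a trained deep model here."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return out / 9.0

rng = np.random.default_rng(0)
hdct = np.ones((8, 8)) + 0.1 * rng.standard_normal((8, 8))
clean, noise = extract_noise(hdct, mean_filter)
assert np.allclose(clean + noise, hdct)  # decomposition is exact
```

By construction the clean image and the extracted noise always sum back to the input, which is exactly why the scheme lets a simulator re-inject scaled noise into clean images later.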
Regarding claims 10, 13, 14, 15, and 16, these claims have similar limitations as claims 2, 5, 6, 7, and 8, respectively, except that they are method claims; they are therefore rejected under the same rationale as the corresponding apparatus claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3-4 and 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Niu et al., "Noise entangled GAN for low-dose CT simulation," arXiv preprint arXiv:2102.09615 (2021), hereinafter "Niu", in view of Yang et al., "Low-dose CT denoising via sinogram inner-structure transformer," IEEE Transactions on Medical Imaging 42, no. 4 (2022): 910-921, hereinafter "Yang".

Regarding claim 3, Niu discloses the apparatus of claim 1, and further discloses wherein the noise simulator is configured to generate the simulation noise component image by inputting a set of the normal-quality medical images (Section 2 Methods; to generate a clean image. Section 2.2 Noise Entangled GAN, Right Column; NE-GAN consists of generator G and a set of discriminators…the generator G is an encoder-decoder network that takes a clean CT image and a noise image scaled with a noise factor as inputs) (Section 2.2 Noise Entangled GAN; To train NE-GAN, we need a set of training samples…S is the number of noise levels or discriminators, kj denotes the noise factor that is a positive real number and a larger value corresponds to a higher noise level or lower image quality).

Niu does not disclose "of which a domain is converted into a sinogram domain." In the same art of LDCT techniques, Yang discloses this limitation (Fig. 2; sinogram noise and image noise, and Section III.B; we propose the sinogram transformer module for sinogram domain denoising).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to apply the sinogram-domain processing techniques as taught by Yang to the CT image processing framework of Niu. Doing so preserves structural information related to the CT acquisition process and improves noise modeling and denoising performance (Yang Section I; Considering that noise appears randomly (e.g., Poisson distribution) in the whole imaging procedure, the conjugate pairs obtaining different noise would evidently break this structure.
By maintaining this sinogram inner-structure, we can effectively restrain the noise and improve the image quality. Therefore, exploring the inner-structure of sinogram is of great importance for sinogram domain denoising…Designing a loss function based on sinogram inner-structure for network training in the sinogram domain can effectively maintain this structure and restrain noise). Incorporating such projection-domain representations into noise simulation techniques would have yielded predictable results of improved accuracy and realism in the generated CT images.

Regarding claim 4, Niu in view of Yang discloses the apparatus of claim 3, and Niu further discloses wherein the noise generation model is provided as a model of which parameters are varied depending on training, and repeatedly trained (Section 3.1 Dataset; we used a multi-dose real CT image dataset from [6], in which the CT images were collected from anonymous cadavers and each of them was repeatedly scanned…sub-dataset that contains 261 groups of CT images for training and 251 groups of CT images for testing…Section 3.2 Results on simulated dataset; we repeatedly generated HDCT noise image with CatSim and simulated the LDCT images with NE-GAN by 50 times) to minimize a loss due to difference between a noise component image for training and the generated simulation noise component image (Section 2.2 Noise Entangled GAN; The loss function is:…the first two items in the loss function are the adversarial losses…and train G to minimize the probability of assigning the correct label for D, the third item is a data fidelity loss…and the fourth item is a reconstruction loss…After training, only the generator G is retained to simulate different levels of LDCT images given the clean image, the high-dose noise image, and the specific noise factor…). Niu and Yang are combined for the reason set forth above with respect to claim 3.
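The sinogram domain Yang supplies to the combination is the projection (Radon) domain: each row of a sinogram holds the line integrals of the image at one view angle. A toy two-angle illustration in NumPy (a real CT sinogram samples many angles, typically with a fan- or cone-beam geometry; this sketch only shows the domain conversion idea):

```python
import numpy as np

def two_angle_sinogram(image):
    """Toy projection-domain representation: line-integral
    projections at 0 and 90 degrees (column sums of the image
    and of its 90-degree rotation)."""
    return np.stack([
        image.sum(axis=0),            # rays along columns (0 deg)
        np.rot90(image).sum(axis=0),  # rays along rows (90 deg)
    ])

phantom = np.zeros((32, 32))
phantom[12:20, 12:20] = 1.0           # 8x8 square, total mass 64
sino = two_angle_sinogram(phantom)
# Every projection conserves the image's total mass.
assert np.allclose(sino.sum(axis=1), phantom.sum())
```

Because each projection is a sum over full ray paths, noise that is independent per detector reading has a very different structure in this domain than in the reconstructed image, which is the property Yang's sinogram-domain denoising exploits.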
Regarding claims 11 and 12, these claims have similar limitations as claims 3 and 4, respectively, except that they are method claims; they are therefore rejected under the same rationale as claims 3 and 4.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JENNY NGAN TRAN, whose telephone number is (571) 272-6888. The examiner can normally be reached Mon-Thurs, 8am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JENNY N TRAN/
Examiner, Art Unit 2615

/ALICIA M HARRINGTON/
Supervisory Patent Examiner, Art Unit 2615

Prosecution Timeline

Aug 21, 2024
Application Filed
Mar 06, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12499589
SYSTEMS AND METHODS FOR IMAGE GENERATION VIA DIFFUSION
2y 5m to grant Granted Dec 16, 2025
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

1-2
Expected OA Rounds
20%
Grant Probability
70%
With Interview (+50.0%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 5 resolved cases by this examiner. Grant probability derived from career allow rate.
