Prosecution Insights
Last updated: April 19, 2026
Application No. 18/412,887

MEDICAL IMAGE DENOISING BASED ON LAYER SEPARATION

Non-Final OA (§102, §103)

Filed: Jan 15, 2024
Examiner: TSAI, TSUNG YIN
Art Unit: 2656
Tech Center: 2600 — Communications
Assignee: Shanghai United Imaging Intelligence Co., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 11m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 82% (above average; 804 granted / 984 resolved; +19.7% vs TC avg)
Interview Lift: +10.9% (moderate) — allow rate among resolved cases with an interview vs. without
Typical Timeline: 2y 11m average prosecution; 31 applications currently pending
Career History: 1,015 total applications across all art units
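The headline figures above follow from simple arithmetic on the raw career counts. A minimal sketch (variable names are illustrative; the 62% Tech Center baseline is back-calculated from the +19.7% delta, not a published figure):

```python
# Recompute the examiner-level statistics from the career counts above.
granted = 804        # career grants
resolved = 984       # resolved cases (grants + abandonments)
pending = 31         # currently pending applications
tc_avg_allow = 0.62  # assumed TC baseline implied by the +19.7% delta

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")                  # 81.7%, displayed as 82%
print(f"Delta vs TC avg:   {allow_rate - tc_avg_allow:+.1%}")  # +19.7%
print(f"Total applications: {resolved + pending}")             # 1015
```

Note that the resolved and pending counts sum exactly to the 1,015 total applications shown, so the career history figures are internally consistent.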

Statute-Specific Performance

§101: 3.6% (-36.4% vs TC avg)
§103: 58.5% (+18.5% vs TC avg)
§102: 22.8% (-17.2% vs TC avg)
§112: 4.3% (-35.7% vs TC avg)

Deltas are measured against a Tech Center average estimate. Based on career data from 984 resolved cases.
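All four deltas point at the same baseline: each statute's share minus its delta comes out to 40%, which appears to be the flat Tech Center average estimate the chart used. A quick check (the 40% baseline is inferred from the numbers shown, not documented):

```python
# Each entry: (examiner's rejection share by statute, delta vs TC average).
stats = {"101": (3.6, -36.4), "103": (58.5, 18.5),
         "102": (22.8, -17.2), "112": (4.3, -35.7)}

for statute, (share, delta) in stats.items():
    baseline = share - delta  # back out the TC average estimate
    print(f"§{statute}: implied TC baseline = {baseline:.1f}%")  # 40.0% in every case
```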

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Status of claims: claims 1-20 are examined below.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 4/2/2024 was filed and considered. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-2, 4-5, 7-8, 12, 14-15, 17 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Mustikovela et al. (US 2023/0004760).
Claim 1, similarly claim 12: Mustikovela et al. (US 2023/0004760) anticipates the following subject matter: An apparatus, comprising: one or more processors configured to (figure 7 and 0111 detail using one or more processors to perform operations):

obtain a medical image that depicts an object (0531-0532 detail uses such as image processing for medical applications and imaging; 0555 further details the application of medical imaging and diagnostic functions with AI-assisted annotation and labeling (objects and objects of interest); 0608 details one or more neural networks including a first network that generates a representation of an object (foreground) and a second network that generates a background image, and a combination (merge/combine) of the background image and the representation of the object);

separate the medical image into a background layer and a foreground layer (the above teaches medical imaging and applications, where 0059 details the use of a neural network such as a GAN for separate representation of foreground and background objects; figure 5 and 0075);

denoise the background layer using a first neural network (0205-0206 detail enhanced noise reduction, both spatial and temporal, in the weights (layers) of monitoring neural networks across frames; figure 5 and 0075 detail the separated background passing through convolutional layers (first neural network));

denoise the foreground layer using a second neural network, wherein the second neural network differs from the first neural network with respect to at least one of a neural network architecture or a number of neural network parameters (figure 5 and 0075 show the use of convolutional layers (3D or 2D) for background processing, and paragraph 0077 teaches the use of a multi-layer perceptron (MLP) for foreground processing in blocks 508, 510, 512, where 0078 details further separate branches for object/scene feature discrimination); and

merge the denoised background layer and the denoised foreground layer into a denoised medical image that depicts the object (0608 details one or more neural networks including a first network that generates a representation of an object and a second network that generates a background image, and a combination of the background image and the representation of the object; 0618, 0628, 0638 detail combining the neural-network-generated background with the object (foreground)).

Regarding claim 12, Mustikovela et al. teaches the corresponding flowchart (method) in figure 7.

Claim 2: The apparatus of claim 1, wherein the one or more processors are configured to separate the medical image into the background layer and the foreground layer using a third neural network (0058-0059 detail the use of a neural network such as a GAN (Generative Adversarial Network) for separate representation of foreground and background objects).

Claim 4, similarly claim 14: The apparatus of claim 1, wherein the first neural network comprises a convolutional neural network (figure 5 and 0075 detail the separated background passing through convolutional layers) and wherein the second neural network comprises a multi-layer perceptron (MLP) neural network (paragraph 0077 teaches the use of a multi-layer perceptron (MLP) for foreground processing in blocks 508, 510, 512).

Claim 5, similarly claim 15: The apparatus of claim 4, wherein the one or more processors being configured to denoise the foreground layer using the second neural network comprises the one or more processors being configured to dissect the foreground layer into multiple patches and denoise the multiple patches using the MLP neural network (the above teaches noise reduction, where 0086 further details patch-based processing of the foreground with a class probability map and a realism score).
Claim 7: The apparatus of claim 1, wherein the one or more processors are configured to merge the denoised background layer and the denoised foreground layer using a third neural network (the above teaches noise reduction, where 0618, 0628 detail combining the representation of the object (foreground, using the MLP as taught above) and the background (using the convolutional neural network as taught above) by means of an object detection neural network (third neural network)).

Claim 8, similarly claim 17: The apparatus of claim 7, wherein the first neural network, the second neural network, and the third neural network are trained jointly (0545 teaches leveraging applications by a model training service for machine learning models on a parallel (joint) computing platform; 0107 also details the concept of joint training).

Claim 20: A non-transitory computer-readable medium comprising instructions that, when executed by one or more processors included in a computing device, cause the one or more processors to implement the method of claim 12 (0653 teaches a non-transitory computer-readable medium).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Mustikovela et al. (US 2023/0004760) in view of FONTANARAVA et al. (US 2019/0298204).
Claim 6, similarly claim 16: Mustikovela et al. (US 2023/0004760) teaches all the subject matter above but does not teach the following subject matter: The apparatus of claim 1, wherein the first neural network comprises a smaller number of neural network parameters than the second neural network. FONTANARAVA et al. (US 2019/0298204) teaches this subject matter (0073-0074 teach fewer (smaller) parameters for a convolutional network (first neural network) than for a multi-layer neural network (second neural network), where 0038-0042 detail application to filtering pixels in medical monitoring of disease). Mustikovela et al. and FONTANARAVA et al. are both in the field of image processing, especially medical image processing with different neural networks, such that the combined outcome is predictable. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Mustikovela et al. in view of FONTANARAVA et al., since a smaller number of parameters has the advantage of reduced calculation time, as disclosed by FONTANARAVA et al. in 0074.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Mustikovela et al. (US 2023/0004760) in view of HOLT et al. (US 2014/0282624).

Claim 11: Mustikovela et al. (US 2023/0004760) teaches all the subject matter above but does not teach the following subject matter: The apparatus of claim 1, wherein the medical image is an X-ray image acquired via fluoroscopy imaging. HOLT et al. (US 2014/0282624) teaches this subject matter (0008 teaches imaging of fluoroscopy data from x-ray video data).

Mustikovela et al. and HOLT et al. are both in the field of image processing, especially medical image processing of foreground and background datasets (HOLT et al. at 0243 teaches background and foreground data, and at 0286 teaches denoising of data), such that the combined outcome is predictable. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Mustikovela et al. in view of HOLT et al., where processing of fluoroscopy data from x-ray video data would provide a unified set of facet controls for controlling the processing steps and arrangements in the processing pipeline architectures, as disclosed by HOLT et al. in paragraph 0008.

Allowable Subject Matter

Claim 3, similarly claim 13, is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. At the time of examination, no prior art was found teaching claim 3 reciting "…wherein the third neural network is trained using training data generated via recursive projected compressive sensing."

Claims 9 and 18, and dependent claims 10 and 19, are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
At the time of examination, no prior art was found teaching claim 9 reciting "…wherein the first neural network and the second neural network are trained jointly via a training process during which: the first neural network is used to denoise a background training image comprising first synthetic noise; the second neural network is used to denoise a foreground training image comprising second synthetic noise; parameters of the first neural network are adjusted based on a difference between the denoised background training image and a clean ground truth background image; and parameters of the second neural network are adjusted based on a difference between the denoised foreground training image and a clean ground truth foreground image."

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. S et al. (US 2020/0311877), SYSTEMS AND METHODS FOR BACKGROUND NOISE REDUCTION IN MAGNETIC RESONANCE IMAGES: 0006 teaches segmenting the MR image/map into foreground (the region of anatomy of interest in the MR image/map) and background (outside of the region of anatomy of interest). In some embodiments, the segmentation is performed by a neural network that has been trained to classify each pixel/voxel of the MR image/map as belonging to either background or foreground. An intensity threshold is then applied selectively to the background, and not to the foreground. This may enable greater background noise reduction for medical images of the inside of the human body. Paragraph 0026 details this further.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TSUNG-YIN TSAI, whose telephone number is (571) 270-1671. The examiner can normally be reached 7am-4pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.

To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bhavesh Mehta, can be reached at (571) 272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TSUNG YIN TSAI/
Primary Examiner, Art Unit 2656

Prosecution Timeline

Jan 15, 2024: Application Filed
Jan 16, 2026: Non-Final Rejection under §102 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597118: IMAGE INSPECTION APPARATUS, IMAGE INSPECTION METHOD, AND IMAGE INSPECTION PROGRAM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12597237: INFERENCE LEARNING DEVICE AND INFERENCE LEARNING METHOD (granted Apr 07, 2026; 2y 5m to grant)
Patent 12579797: VIDEO PROCESSING METHOD AND APPARATUS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12573029: IMAGE ANNOTATION USING ONE OR MORE NEURAL NETWORKS (granted Mar 10, 2026; 2y 5m to grant)
Patent 12567235: Visual Explanation of Classification (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 93% (+10.9%)
Median Time to Grant: 2y 11m
PTA Risk: Low

Based on 984 resolved cases by this examiner. Grant probability is derived from the career allow rate.
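The 93% figure is the base grant probability plus the interview lift; a sketch of the arithmetic as it appears to be applied (a simplification, since the tool's actual model is not documented here):

```python
base_grant = 804 / 984     # career allow rate, about 81.7% (displayed as 82%)
interview_lift = 0.109     # +10.9 percentage points with an examiner interview

with_interview = base_grant + interview_lift
print(f"Base grant probability: {base_grant:.0%}")      # 82%
print(f"With interview:         {with_interview:.0%}")  # 93%
```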
