Prosecution Insights
Last updated: April 19, 2026
Application No. 18/607,804

ULTRA-HIGH RESOLUTION CT RECONSTRUCTION USING GRADIENT GUIDANCE

Non-Final OA §103
Filed: Mar 18, 2024
Examiner: ALFONSO, DENISE G
Art Unit: 2662
Tech Center: 2600 — Communications
Assignee: Subtle Medical, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
Grant Probability With Interview: 94%

Examiner Intelligence

Career Allow Rate: 74% (76 granted / 103 resolved; +11.8% vs TC avg), above average
Interview Lift: +19.8% among resolved cases with interview (strong, roughly +20%)
Typical Timeline: 3y 1m average prosecution; 31 applications currently pending
Career History: 134 total applications across all art units

Statute-Specific Performance

§101: 8.3% (-31.7% vs TC avg)
§103: 59.8% (+19.8% vs TC avg)
§102: 19.4% (-20.6% vs TC avg)
§112: 8.1% (-31.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 103 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim that this application is a continuation of Application No. PCT/CN2022/120184, filed on September 21, 2022, with benefit of foreign priority from Chinese Patent Application No. PCT/CN2021/122318, filed on September 30, 2021.

Information Disclosure Statement

The information disclosure statement ("IDS") filed on 07/11/2024 was reviewed and the listed references were noted.

Drawings

The 5-page drawings have been considered and placed on record in the file.

Status of Claims

Claims 1-20 are pending.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-7 and 10-18 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (US 2022/0092739 A1), hereinafter referred to as Li, in view of Ma et al., "Structure-Preserving Super Resolution with Gradient Guidance" (2020), hereinafter referred to as Ma.

Claim 1

Li discloses a computer-implemented method for ultra-high resolution computed tomography (Li, Fig. 5, [0098], "the target resolution level may be an ultra-high resolution level which is higher than the current resolution level of the image. The ultra-high resolution level may be higher than a preset resolution level threshold, or the ultra-high resolution level may be higher than the current resolution level multiplied by a certain multiple") comprising:

(a) acquiring (Li, Fig. 5, step 510, obtaining an image), using computed tomography (CT) (Li, [0039], "the systems may include a magnetic resonance imaging (MRI) system, a radiotherapy (RT) system, a computed tomography (CT) system, an emission computed tomography (ECT) system, an X-ray photography system, a positron emission tomography (PET) system, or the like, or any combination thereof."), a medical image of a subject (Li, [0040], "The term 'image' in the present disclosure is used to collectively refer to image data (e.g., scan data, projection data) and/or images of various forms, including a two-dimensional (2D) image, a three-dimensional (3D) image, a four-dimensional (4D), etc."; "The term 'region,' 'location,' and 'area' in the present disclosure may refer to a location of an anatomical structure shown in the image or an actual location of the anatomical structure existing in or on the subject's body"), wherein the medical image has a lower resolution (Li, [0083], "the current resolution level of the image obtained by the processing device 120A described in operation 510 may be lower than a resolution level threshold. Accordingly, the image quality of the obtained image may be relatively low"); and

(b) processing the medical image (Li, Fig. 5, step 550, processing the image using the target processing model), with aid of a deep learning network model (Li, [0095], "the target processing model may include a neural network model, such as a convolutional neural network (CNN) model (e.g., a full CNN model, V-net model, a U-net model, an AlexNet model, an Oxford Visual Geometry Group (VGG) model, a ResNet model), a generative adversarial network (GAN) model, or the like, or any combination thereof"), to reconstruct an ultra-high resolution medical image (Li, Fig. 13, step 1360, reconstructing a target image based on the target k-space data; [0099], "the processed image with the target resolution level may be an ultra-resolution reconstructed image"), wherein the deep learning network model is trained using a generative adversarial network (GAN)-based framework (Li, [0095], cited above) with a gradient guidance.

Li does not explicitly disclose wherein the deep learning network model is trained using a generative adversarial network (GAN)-based framework with a gradient guidance. However, Ma teaches this limitation (Ma, Fig. 2, "Our architecture consists of two branches, the SR branch and the gradient branch. The gradient branch aims to super-resolve LR gradient maps to the HR counterparts. It incorporates multi-level representations from the SR branch to reduce parameters and outputs gradient information to guide the SR process by a fusion block in turn. The final SR outputs are optimized by not only conventional image-space losses, but also the proposed gradient-space objectives"; Section 3.2.1, "Once we get the SR gradient maps by the gradient branch, we are able to integrate the obtained gradient features into the SR branch to guide SR reconstruction in turn"; Section 4.1, "We use the architecture of ESRGAN as the backbone of our SR branch and the RRDB block as the gradient block. We randomly sample 15 32 × 32 patches from LR images for each input mini-batch. Therefore the ground-truth HR patches have a size of 128 × 128. We initialize the generator with the parameters of a pre-trained PSNR-oriented model. The pixelwise loss, perceptual loss, adversarial loss and gradient loss are used as the optimizing objectives").

[Image: media_image1.png]

Li and Ma are both considered to be analogous to the claimed invention because they are in the same field of super resolution using GAN. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method as taught by Li to incorporate the teachings of Ma wherein the deep learning network model is trained using a generative adversarial network (GAN)-based framework with a gradient guidance. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to optimize the final super resolution outputs by not only conventional image-space losses, but also the proposed gradient-space objectives (Ma, Fig. 2).

Claim 2

The combination of Li in view of Ma discloses the computer-implemented method of claim 1 (Li, Fig. 5, [0098], cited above), wherein the GAN-based framework comprises a first branch for improving a resolution of a medical image (Ma, Fig. 2, SR branch), and a second branch for generating a predicted gradient map (Ma, Fig. 2, gradient branch). The proposed combination, as well as the motivation for combining the Li and Ma references presented in the rejection of Claim 1, applies to Claim 2 and is incorporated herein by reference. Thus, the method recited in Claim 2 is met by Li and Ma.

Claim 3

The combination of Li in view of Ma discloses the computer-implemented method of claim 2 (Li, Fig. 5, [0098], cited above), wherein the predicted gradient map is used to guide the training of the first branch (Ma, Fig. 2, "outputs gradient information to guide the SR process by a fusion block in turn"). The proposed combination, as well as the motivation for combining the Li and Ma references presented in the rejection of Claim 1, applies to Claim 3 and is incorporated herein by reference. Thus, the method recited in Claim 3 is met by Li and Ma.

Claim 4

The combination of Li in view of Ma discloses the computer-implemented method of claim 3 (Li, Fig. 5, [0098], cited above), wherein the predicted gradient map is concatenated with a feature map of the first branch and is supplied to a residual block (Ma, Fig. 2, fusion block; Section 3.2.1, "we feed the feature maps produced by the next-to-last layer of gradient branch to the SR branch"; Section 3.2.2, "We fuse the structure information by a fusion block which fuses the features from two branches together. Specifically, we concatenate the two features and then use another RRDB block and convolutional layer to reconstruct the final SR features."). The proposed combination, as well as the motivation for combining the Li and Ma references presented in the rejection of Claim 1, applies to Claim 4 and is incorporated herein by reference. Thus, the method recited in Claim 4 is met by Li and Ma.

Claim 5

The combination of Li in view of Ma discloses the computer-implemented method of claim 2 (Li, Fig. 5, [0098], cited above), wherein the second branch uses a pixel-wise loss in a training process (Ma, Section 3.3, "we design two terms of loss to penalize the difference in the gradient maps (GM) of the SR and HR images. One is based on the pixelwise loss"). The proposed combination, as well as the motivation for combining the Li and Ma references presented in the rejection of Claim 1, applies to Claim 5 and is incorporated herein by reference. Thus, the method recited in Claim 5 is met by Li and Ma.

Claim 6

The combination of Li in view of Ma discloses the computer-implemented method of claim 2 (Li, Fig. 5, [0098], cited above), wherein the first branch uses a combination of pixel-wise loss and a GAN loss in a training process (Ma, Section 3.3, "Among these, β_{I_SR}, β_{GM_SR} and β_{GM_GB} are the weights of the pixel losses for SR images, gradient maps of SR images and SR gradient maps respectively. γ_{I_SR} and γ_{GM_SR} are the weights of the adversarial losses for SR image and their gradient maps."). The proposed combination, as well as the motivation for combining the Li and Ma references presented in the rejection of Claim 1, applies to Claim 6 and is incorporated herein by reference. Thus, the method recited in Claim 6 is met by Li and Ma.

Claim 7

The combination of Li in view of Ma discloses the computer-implemented method of claim 2 (Li, Fig. 5, [0098], cited above), wherein the second branch incorporates one or more intermediate feature maps generated by the first branch (Ma, Section 3.2.1, "As shown in Figure 2, the gradient branch incorporates several intermediate-level representations from the SR branch."). The proposed combination, as well as the motivation for combining the Li and Ma references presented in the rejection of Claim 1, applies to Claim 7 and is incorporated herein by reference. Thus, the method recited in Claim 7 is met by Li and Ma.

Claim 10

The combination of Li in view of Ma discloses the computer-implemented method of claim 2 (Li, Fig. 5, [0098], cited above), wherein an input to the second branch includes a gradient map of the medical image acquired in (a) (Ma, Fig. 2, the input to the gradient branch is the gradient map of the low-resolution image; Li discloses that the low-resolution original image is a medical image). The proposed combination, as well as the motivation for combining the Li and Ma references presented in the rejection of Claim 1, applies to Claim 10 and is incorporated herein by reference. Thus, the method recited in Claim 10 is met by Li and Ma.

Claim 11

The combination of Li in view of Ma discloses the computer-implemented method of claim 1 (Li, Fig. 5, [0098], cited above), wherein the deep learning network model is trained using a loss function comprising a combination of at least pixel-wise loss, adversarial loss, and perceptual loss (Ma, Section 4.1, "We initialize the generator with the parameters of a pre-trained PSNR-oriented model. The pixelwise loss, perceptual loss, adversarial loss and gradient loss are used as the optimizing objectives"). The proposed combination, as well as the motivation for combining the Li and Ma references presented in the rejection of Claim 1, applies to Claim 11 and is incorporated herein by reference. Thus, the method recited in Claim 11 is met by Li and Ma.

Claims 12-18

Claims 12-18 are rejected for similar reasons as those described in claims 1-7. The additional elements of Claims 12-18 are disclosed by Li and Ma, including: a non-transitory computer-readable storage medium including instructions (Li, [0008], "The non-transitory computer readable medium may include at least one set of instructions for image processing") that, when executed by one or more processors, cause the one or more processors to perform operations (Li, [0008], "When executed by one or more processors of a computing device, the at least one set of instructions may cause the computing device to perform a method"). The proposed combination, as well as the motivation for combining the Li and Ma references presented in the rejection of Claim 1, applies to Claims 12-18 and is incorporated herein by reference. Thus, the medium recited in Claims 12-18 is met by Li and Ma.

Claims 8-9 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Ma, and further in view of Mu et al., "Integration of gradient guidance and edge enhancement into super-resolution for small object detection in aerial images" (April 2021), hereinafter referred to as Mu.

Claim 8

The combination of Li in view of Ma discloses the computer-implemented method of claim 7 (Li, Fig. 5, [0098], cited above). The combination of Li in view of Ma does not explicitly disclose wherein the first branch comprises a set of residual blocks and the one or more intermediate feature maps are generated by one or more residual blocks selected from the set of residual blocks. However, Mu teaches wherein the first branch comprises a set of residual blocks (Mu, Fig. 2, the SR branch has a set of RRRDB blocks; Section 3.3, "This branch constitutes two parts. The first part is a regular SR network which is similar to the generator of ESRGAN. Compared to ESRGAN, we replace the RRDB with the proposed residual-in-residual residual dense block (RRRDB). The RRDB has a residual-in-residual structure with dense blocks in the main path, as presented in Figure 3(a). We add an additional level of residual learning inside the dense blocks, as presented in Figure 3(b), to augment the network capacity without increasing its complexity."), and the one or more intermediate feature maps are generated by one or more residual blocks selected from the set of residual blocks (Mu, Section 3.3, "Since we use 23 RRRDB blocks in the SR branch, we use the feature from the 5th, 10th, 15th, and 20th blocks into the gradient branch to enhance the GM. The second part of the SR branch fuses the feature of GM. We fuse the structure information by matrix multiplication. Finally, we use two convolutional layers to reconstruct the final ISR features, as shown in Figure 2.").

[Image: media_image2.png]

Li, Ma, and Mu are all considered to be analogous to the claimed invention because they are in the same field of super resolution using GAN. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method as taught by Li and Ma to incorporate the teachings of Mu wherein the first branch comprises a set of residual blocks and the one or more intermediate feature maps are generated by one or more residual blocks selected from the set of residual blocks. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to augment the network capacity without increasing its complexity (Mu, Section 3.3).

Claim 9

The combination of Li in view of Ma discloses the computer-implemented method of claim 2 (Li, Fig. 5, [0098], cited above). The combination of Li in view of Ma does not explicitly disclose wherein the first branch comprises a first set of residual blocks and wherein the second branch comprises a second set of residual blocks. However, Mu teaches wherein the first branch comprises a first set of residual blocks (Mu, Fig. 2, the SR branch has its own set of RRRDB blocks; Section 4.1, "we use 23 RRRDB blocks for the SR branch, 3 RRRDB blocks for the edge-enhanced branch, and 4 RRRDB blocks for the gradient branch") and wherein the second branch comprises a second set of residual blocks (Mu, Fig. 2, the gradient branch has its own set of RRRDB blocks; Section 4.1, cited above). Li, Ma, and Mu are all considered to be analogous to the claimed invention because they are in the same field of super resolution using GAN. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method as taught by Li and Ma to incorporate the teachings of Mu wherein the first branch comprises a first set of residual blocks and wherein the second branch comprises a second set of residual blocks. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to augment the network capacity without increasing its complexity (Mu, Section 3.3).

Claims 19-20

Claims 19-20 are rejected for similar reasons as those described in claims 8-9. The additional elements of Claims 19-20 are disclosed by Li, Ma, and Mu, including: a non-transitory computer-readable storage medium including instructions (Li, [0008], cited above) that, when executed by one or more processors, cause the one or more processors to perform operations (Li, [0008], cited above). The proposed combination, as well as the motivation for combining the Li, Ma, and Mu references presented in the rejection of Claims 8 and 9, applies to Claims 19 and 20 and is incorporated herein by reference. Thus, the medium recited in Claims 19-20 is met by Li, Ma, and Mu.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DENISE G ALFONSO, whose telephone number is (571) 272-1360. The examiner can normally be reached Monday - Friday, 7:30 - 5:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amandeep Saini, can be reached at (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DENISE G ALFONSO/
Examiner, Art Unit 2662

/AMANDEEP SAINI/
Supervisory Patent Examiner, Art Unit 2662
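For readers unfamiliar with the gradient-guidance technique the Ma reference describes, the sketch below illustrates two of its ingredients: the gradient-intensity map computed from an image (the input to Ma's gradient branch) and a weighted sum of the four training objectives the reference names. This is illustrative only, not Ma's code; the function names and the weight values are placeholders, not values from the paper or the claims.

```python
import numpy as np

def gradient_map(img: np.ndarray) -> np.ndarray:
    """Gradient-intensity map sqrt(dx^2 + dy^2), the kind of map an
    SPSR-style gradient branch takes as input (illustrative)."""
    dx = np.zeros(img.shape, dtype=float)
    dy = np.zeros(img.shape, dtype=float)
    # central differences in the interior; boundaries left at zero
    dx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    dy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0
    return np.sqrt(dx**2 + dy**2)

def total_loss(l_pix, l_perc, l_adv, l_grad, w=(1.0, 1.0, 5e-3, 1.0)):
    """Weighted combination of the four objectives Ma's Section 4.1 names
    (pixel-wise, perceptual, adversarial, gradient). The weights here are
    placeholders, not the paper's values."""
    return w[0] * l_pix + w[1] * l_perc + w[2] * l_adv + w[3] * l_grad

# A vertical step edge yields nonzero gradient magnitude along the boundary.
img = np.zeros((4, 6))
img[:, 3:] = 1.0
gm = gradient_map(img)   # gm[1, 2] == 0.5 at the edge, 0.0 far from it
```

The gradient map makes edges explicit, which is why supervising it (and feeding it back through a fusion block) preserves structure that a purely pixel-wise loss tends to blur.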

Prosecution Timeline

Mar 18, 2024
Application Filed
Jan 27, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586352
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND STORAGE MEDIUM
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579693
ELECTRONIC SHELF LABEL MANAGING SERVER, DISPLAY DEVICE AND CONTROLLING METHOD THEREOF
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12555371
VISION TRANSFORMER FOR MOBILENET SIZE AND SPEED
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12541980
METHOD FOR DETERMINING OBJECT INFORMATION RELATING TO AN OBJECT IN A VEHICLE ENVIRONMENT, CONTROL UNIT AND VEHICLE
Granted Feb 03, 2026 (2y 5m to grant)
Patent 12541941
A Method for Testing an Embedded System of a Device, a Method for Identifying a State of the Device and a System for These Methods
Granted Feb 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 94% (+19.8%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 103 resolved cases by this examiner. Grant probability derived from career allow rate.
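The headline projections follow from the counts stated above; a quick sketch of the arithmetic (the dashboard's exact rounding rule is an assumption):

```python
# Reproduce the dashboard's headline figures from its stated counts.
granted, resolved = 76, 103
career_allow_rate = granted / resolved               # ~0.738, shown as 74%
interview_lift = 0.198                               # +19.8 percentage points
with_interview = career_allow_rate + interview_lift  # ~0.936, shown as 94%
print(round(career_allow_rate * 100), round(with_interview * 100))  # 74 94
```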
