Prosecution Insights
Last updated: April 19, 2026
Application No. 18/535,223

METHODS FOR GENERATING IMAGE SUPER-RESOLUTION DATA SET, IMAGE SUPER-RESOLUTION MODEL AND TRAINING METHOD

Non-Final OA: §101, §102, §103, §112
Filed: Dec 11, 2023
Examiner: SUMMERS, GEOFFREY E
Art Unit: 2669
Tech Center: 2600 — Communications
Assignee: BEIJING WEILING TIMES TECHNOLOGY CO., LTD.
OA Round: 1 (Non-Final)
Grant Probability: 72% (Favorable)
OA Rounds: 1-2
To Grant: 2y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% — above average (249 granted / 348 resolved; +9.6% vs TC avg)
Interview Lift: +35.4% — strong (allow rate among resolved cases with vs. without interview)
Typical Timeline: 2y 5m average prosecution; 27 currently pending
Career History: 375 total applications across all art units

Statute-Specific Performance

§101: 9.6% (-30.4% vs TC avg)
§102: 16.3% (-23.7% vs TC avg)
§103: 41.0% (+1.0% vs TC avg)
§112: 28.6% (-11.4% vs TC avg)
Black line = Tech Center average estimate. Based on career data from 348 resolved cases.

Office Action

DETAILED ACTION

Response to Amendment

The preliminary amendment filed January 28, 2026, has been entered in full. Claims 1-10 are pending.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The listing of references in the specification is not a proper information disclosure statement. 37 CFR 1.98(b) requires a list of all patents, publications, or other information submitted for consideration by the Office, and MPEP § 609.04(a) states, "the list may not be incorporated into the specification but must be submitted in a separate paper." Therefore, unless the references have been cited by the examiner on form PTO-892, they have not been considered.

Claim Interpretation

Claims are given their broadest reasonable interpretation (BRI) during examination. MPEP 2111. Under BRI, the words of a claim are given their plain meaning, unless such meaning is inconsistent with the specification. MPEP 2111.01, Subsection I. The plain meaning of a term is the ordinary and customary meaning given to the term by those of ordinary skill in the art at the relevant time. Id.

Claim 5 recites "ESRGAN model". This is interpreted to be a term of art referring to the type of model described in 'Wang' ("ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks," 2018).

Claim 5 recites "SwinIR model". This is interpreted to be a term of art referring to the type of model described in 'Liang' ("SwinIR: Image Restoration Using Swin Transformer," 2021).

Claim 5 recites "HAT model". This is interpreted to be a term of art referring to the type of model described in 'Chen' ("Activating More Pixels in Image Super-Resolution Transformer," 19 March 2023).

Claim 9 recites "GAN". This is interpreted to be a term of art referring to a Generative Adversarial Network.

Claim 10 recites "ECBSR model". This is interpreted to be a term of art referring to the type of model described in 'Zhang-X' ("Edge-oriented Convolution Block for Real-time Super Resolution on Mobile Devices," 2021).

Claim Objections

Claim 8 is objected to because of the following informalities: in claim 8, fourth line, "the training methods" should be "the training method". Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 6 and 9 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 6 recites the limitation "the k groups of sub-model parameter" in step S105. There is insufficient antecedent basis for this limitation in the claim. Step S103 uses the variable n instead of k.

Claim 9 recites the limitation "the preset weight" in the last two lines. There is insufficient antecedent basis for this limitation in the claim.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 10 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claim 10 is directed to "An image super-resolution model". Such a model can be embodied as data (e.g., a set of parameter values) and/or a computer program (e.g., a set of instructions for how to perform image super-resolution). The claim does not include any structural limitations. The scope of the claim covers embodiments of data per se and/or software per se that are not directed to any of the categories of inventions eligible for patenting under 35 U.S.C. 101 (MPEP 2106.03, Subsection I, first non-limiting example), and the claim is therefore patent-ineligible (MPEP 2106.03, Subsection II).

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1 and 3 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by 'Zhang' ("Designing a Practical Degradation Model for Deep Blind Image Super-Resolution," 2021).

Regarding claim 1, Zhang discloses a method for generating an image super-resolution data set (see, e.g., Figure 1 and further mapping below), comprising steps of:

S101: constructing a high-resolution image set (e.g., Section 6.1, 2nd paragraph, "100 DIV2K validation images"; these correspond to the HR image in Fig. 1);

S102: performing image blind degradation processing on high-resolution image HR1 in the high-resolution image set to obtain the corresponding low-resolution image LR1 and thereby an LR1-HR1 data pair (e.g., Fig. 1, blind degradation processing includes a random sequence of degradations that result in an output LR image paired with the input HR image), and performing image blind degradation processing on all high-resolution images HR1 in the high-resolution image set to obtain an LR1-HR1 data set (e.g., Sec. 6.1, 2nd par. (which spans pages 4776-4777), subset of DIV2K4D images generated with degradation type IV, which are paired with corresponding original/high-res images);

S103: training a first model with the LR1-HR1 data set to obtain a model parameter of the first model and saving the model parameter, wherein the first model is an image super-resolution model (e.g., Sec. 5, 2nd par., training of BSRGAN);

S104: constructing a low-resolution image set (e.g., Sec. 6.1, 2nd par., RealSRSet includes 20 low-resolution images; Fig. 2(b) shows examples); and

S105: inputting low-resolution image LR2 in the low-resolution image set into the first model with the model parameter to obtain super-resolution image SR2 and thereby an LR2-SR2 data pair, and inputting all low-resolution images LR2 in the low-resolution image set into the first model with the model parameter to obtain an LR2-SR2 data set (e.g., Sec. 6.4, Fig. 4(a) and (f), each of the low-res images from RealSRSet is input to the trained BSRGAN model to obtain a corresponding super-resolution image, thereby obtaining a dataset of LR2-SR2 pairs).

Regarding claim 3, Zhang discloses the method according to claim 1, wherein the image blind degradation processing, based on a random selection method (e.g., Sec. 3.4, random shuffle), comprises performing on the high-resolution image HR1 any one or more of (the examiner notes that the claim requires only one of the following options, but has included mapping for multiple options to promote compact prosecution):

blurring operation: based on the random selection method, selecting one or both of Gaussian blur and sinc filter blur for operation (Sec. 3.1);

scaling operation: based on the random selection method, selecting one or more of bilinear interpolation, bicubic interpolation, and regional interpolation for operation (Sec. 3.2);

noise superposition operation: based on the random selection method, selecting one or both of Gaussian noise and Poisson noise for operation (Sec. 3.3, Gaussian noise); and

image compression operation: compressing the image with a compression factor of 30%-95% (Sec. 3.3, JPEG compression noise);

and wherein the image blind degradation processing is performed once (e.g., Sec. 3.4, Fig. 1, blind degradation is performed once, following a random sequence of degradations), or the image blind degradation processing is iteratively performed twice.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang in view of 'Reddy' (US 2020/0226718 A1).

Regarding claim 2, Zhang teaches the method according to claim 1. Zhang further teaches that step S103 comprises:

S1031: inputting the low-resolution image LR1 into the first model, and the first model outputs super-resolution image SR1 (e.g., Fig. 3, LR image (a) is input to BSRGAN/first model, which outputs SR image (f));

S1032: calculating a loss function using the high-resolution image HR1 and the super-resolution image SR1 (e.g., Sec. 5, last par., loss function including L1 loss);

S1033: in the case where the loss function is less than a first preset threshold, saving the model parameter (see Note Regarding Stopping Criterion below); and

S1034: repeating steps S1031 to S1033 for each LR1-HR1 data pair in the LR1-HR1 data set to obtain the model parameter of the first model (e.g., Sec. 5, last par., iterative Adam minimization; also see Note Regarding Stopping Criterion below).

Note Regarding Stopping Criterion. Zhang teaches using an iterative Adam optimization to minimize a loss function (e.g., Sec. 5, last par.), but does not explicitly teach any stopping criteria for the optimization. In particular, Zhang does not explicitly teach, in the case where the loss function is less than a first preset threshold, saving the model parameter, and repeating this step to obtain the model parameter. However, Reddy does teach iterative super-resolution model training (e.g., Figs. 1-2) that does include, in the case where the loss function is less than a first preset threshold, saving the model parameter (e.g., [0038], [0031]-[0032], if the loss is less than the threshold, then it is determined to have an acceptable value and training is terminated, thus saving the current model parameter), and repeating this step to obtain the model parameter (e.g., [0038], "The process 200 can then be repeated"). Reddy teaches that "The threshold conditions can be chosen based on accuracy and computing efficiency considerations." ([0031]). The threshold can be set to ensure that a desired accuracy level is achieved, while avoiding the consumption of additional computational resources that would be needed to continue training beyond that desired accuracy level.
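The loss-threshold stopping criterion attributed to Reddy above can be sketched as a minimal training loop. This is an illustration only, not the applicant's or Reddy's implementation: the scalar "model", the `toy_step` update rule, and the threshold value are all hypothetical.

```python
def train_until_threshold(pairs, model_step, loss_fn, threshold=0.01, max_iters=10_000):
    """Iterate over (LR, HR) training pairs, updating the model until the
    loss falls below a preset threshold, then return ("save") the parameters.

    model_step: callable(params, lr_img) -> (new_params, sr_img)
    loss_fn:    callable(sr_img, hr_img) -> float
    """
    params = {"w": 0.0}  # placeholder parameter set (hypothetical)
    for i in range(max_iters):
        lr_img, hr_img = pairs[i % len(pairs)]
        params, sr_img = model_step(params, lr_img)
        loss = loss_fn(sr_img, hr_img)
        if loss < threshold:   # stopping criterion: loss under the preset threshold
            return params, i   # save the current model parameter and stop
    return params, max_iters   # iteration budget exhausted without convergence

# Toy demonstration: the "model" is one scalar w that should converge to 1.0.
def toy_step(params, lr_img):
    w = params["w"] + 0.1 * (1.0 - params["w"])  # gradient-descent-like update
    return {"w": w}, w * lr_img

params, iters = train_until_threshold([(1.0, 1.0)], toy_step,
                                      lambda sr, hr: abs(sr - hr))
```

As Reddy's [0031] is characterized above, the `threshold` argument is where the accuracy/computation tradeoff would be set: a smaller value buys accuracy at the cost of more iterations.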
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the method of Zhang with the loss threshold stopping criterion of Reddy in order to improve the method, with the reasonable expectation that this would result in a method that could advantageously balance the tradeoff between accuracy and computation, ensuring that a desired accuracy was achieved without expending additional computational resources to continue training beyond that point. This technique for improving the method of Zhang was within the ordinary ability of one of ordinary skill in the art based on the teachings of Reddy. Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Zhang and Reddy to obtain the invention as specified in claim 2.

Regarding claim 9, Zhang in view of Reddy teaches the method according to claim 2, and Zhang further teaches that the loss function is calculated by: calculating the L1 loss function (Sec. 5, last par., L1 loss), the GAN loss function (Sec. 5, last par., PatchGAN loss), and the perceptual loss function (Sec. 5, last par., VGG perceptual loss) respectively; and performing a weighted calculation on the above calculated results according to the preset weight to obtain the loss function (note the § 112(b) rejection; Sec. 5, last par., "minimizing a weighted combination of" the losses, with preset "weights 1, 1 and 0.1, respectively").

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang in view of 'Dieckmann' ("Ensemble learning: Bagging and Boosting," 23 Feb. 2023).

Regarding claim 6, Zhang teaches the method according to claim 1. Zhang teaches a neural network model that accepts a low-resolution image as input and outputs a super-resolution image (e.g., Fig. 3, (a) and (f)). Zhang does not teach using bagging. In particular, Zhang does not explicitly teach that: in steps S101-S102, the LR1-HR1 data set comprises n types of sub-training sets; in step S103, training the first model comprises training the first model with the n types of sub-training sets respectively to obtain n groups of sub-model parameter; and in step S105, for the k groups of sub-model parameter, selecting one group of sub-model parameter in sequence as the model parameter of the first model, inputting the low-resolution image LR2 to obtain an output result, and performing weighted fusion of all output results according to a preset weight to obtain the super-resolution image SR2.

However, Dieckmann does teach a bagging technique where a data set is divided into n sub-training sets (e.g., ninth page, random subsets of the original training data), a model is trained on each sub-training set to obtain n groups of sub-model parameter (e.g., ninth page, "Each of those datasets is then used to fit an individual model"), and inputs are passed through each of the sub-model parameters (e.g., ninth page, the individual models "process individual predictions for the given data"), with the resulting outputs fused according to a preset weight to obtain an output (e.g., ninth and tenth pages, outputs are fused by simple average; i.e., if the number of data subsets is n, then the preset weight is 1/n). If the bagging technique taught by Dieckmann were applied to the super-resolution neural network of Zhang, then it would result in the claimed invention. Dieckmann teaches that bagging is advantageous because it reduces the variance of a model to avoid overfitting (eighth page).
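The bagging scheme characterized above (n bootstrap subsets, one sub-model fitted per subset, outputs fused with the preset weight 1/n) can be sketched as follows. The scalar "sub-models" and the `fit`/`predict` lambdas are hypothetical stand-ins for illustration, not Dieckmann's or the applicant's code.

```python
import random

def bag_train(dataset, n, fit):
    """Draw n bootstrap subsets of the data and fit one sub-model per subset."""
    subsets = [[random.choice(dataset) for _ in dataset] for _ in range(n)]
    return [fit(s) for s in subsets]          # n groups of sub-model parameter

def bag_predict(sub_models, x, predict):
    """Pass x through every sub-model, then fuse outputs with preset weight 1/n."""
    n = len(sub_models)
    return sum((1.0 / n) * predict(m, x) for m in sub_models)

# Toy demonstration: each "sub-model" is just the mean of its bootstrap subset,
# and prediction scales the input by that mean.
random.seed(0)
data = [1.0, 2.0, 3.0, 4.0]
sub_models = bag_train(data, n=3, fit=lambda s: sum(s) / len(s))
fused = bag_predict(sub_models, 2.0, predict=lambda m, x: m * x)
```

The uniform-average fusion is what makes the "preset weight" equal to 1/n; a non-uniform weight vector would simply replace the `1.0 / n` factor.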
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the method of Zhang with the bagging of Dieckmann in order to improve the method, with the reasonable expectation that this would result in a method that advantageously trained a super-resolution model with reduced variance to avoid overfitting. This technique for improving the method of Zhang was within the ordinary ability of one of ordinary skill in the art based on the teachings of Dieckmann. Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Zhang and Dieckmann to obtain the invention as specified in claim 6.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang in view of 'Araujo' ("Ensembles of Single Image Super-Resolution Generative Adversarial Networks," 2021).

Regarding claim 7, Zhang teaches the method according to claim 1. Zhang teaches a neural network model that accepts a low-resolution image as input and outputs a super-resolution image (e.g., Fig. 3, (a) and (f)). The neural network model is trained based on an LR1-HR1 data set (e.g., Sec. 6.1, 2nd par.). Zhang does not teach using an ensemble. In particular, Zhang does not explicitly teach that: in steps S101-S102, the LR1-HR1 data set comprises one basic training set and k types of sub-training sets; in step S103, training the first model with the basic training set to obtain a basic model parameter, and training the first model with the basic model parameter with the k types of sub-training sets to obtain k groups of sub-model parameter; and in step S105, for the k groups of sub-model parameter, selecting one group of sub-model parameter in sequence as the model parameter of the first model, inputting the low-resolution image LR2 to obtain an output result, and performing weighted fusion of all output results according to a preset weight to obtain the super-resolution image SR2.

However, Araujo does teach an ensemble technique for performing image super-resolution, where the LR1-HR1 data set (e.g., Fig. 9, Universal Dataset) comprises one basic training set (e.g., Sec. 3.3.2, Bucket 0) and k types of sub-training sets (e.g., Sec. 3.3.2, Buckets 1-4; k = 4); training the first model with the basic training set to obtain a basic model parameter (e.g., Secs. 4.1 and 3.3.2, Fig. 7, the seed model [basic model] is trained using Bucket 0 [basic training set]); training the first model with the basic model parameter with the k types of sub-training sets to obtain k groups of sub-model parameter (e.g., Secs. 4.1 and 3.3.2, Fig. 7, individual models A-D [sub-models] are trained using Buckets 1-4 [k types of sub-training sets]); and, for the k groups of sub-model parameter, selecting one group of sub-model parameter in sequence as the model parameter of the first model, inputting the low-resolution image LR2 to obtain an output result, and performing weighted fusion of all output results according to a preset weight to obtain the super-resolution image SR2 (e.g., Fig. 11, output results of the individual/sub-model parameters are combined through weighted fusion according to preset weights ω to obtain output SR images). If the ensemble technique taught by Araujo were applied to the super-resolution neural network of Zhang, then it would result in the claimed invention. Araujo teaches that ensemble techniques can obtain better performance than individual models (e.g., Sec. 2.5, 1st par.) and that its ensemble is capable of producing better output images than ESRGAN (e.g., Fig. 18).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the method of Zhang with the ensemble of Araujo in order to improve the method, with the reasonable expectation that this would result in a method that could obtain better performance.
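The seed-then-fine-tune ensemble characterized above (a basic model trained on a basic set, k sub-models fine-tuned from it on k sub-training sets, outputs combined by weighted fusion with preset weights ω) can be sketched as a toy. The scalar "models", the `fit`/`fine_tune` rules, and the weight vector are hypothetical illustrations under stated assumptions, not Araujo's implementation.

```python
def train_ensemble(basic_set, sub_sets, fit, fine_tune):
    """Train a basic (seed) model on the basic training set, then fine-tune
    one sub-model from that basic parameter on each of the k sub-training sets."""
    basic = fit(basic_set)                          # basic model parameter
    return [fine_tune(basic, s) for s in sub_sets]  # k groups of sub-model parameter

def weighted_fusion(sub_models, x, predict, weights):
    """Run x through each sub-model in sequence and weight-fuse the outputs."""
    assert len(weights) == len(sub_models)
    return sum(w * predict(m, x) for m, w in zip(sub_models, weights))

# Toy demonstration with scalar "models": fit = mean of the set, and
# fine-tuning averages the basic parameter with the sub-set mean.
mean = lambda s: sum(s) / len(s)
sub_models = train_ensemble([2.0, 4.0], [[1.0], [3.0], [5.0]],
                            fit=mean,
                            fine_tune=lambda basic, s: (basic + mean(s)) / 2)
sr = weighted_fusion(sub_models, 1.0, predict=lambda m, x: m * x,
                     weights=[0.5, 0.25, 0.25])
```

This differs from plain bagging in that every sub-model starts from the shared basic parameter rather than from scratch, which is the distinction the claim 7 mapping turns on.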
This technique for improving the method of Zhang was within the ordinary ability of one of ordinary skill in the art based on the teachings of Araujo. Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Zhang and Araujo to obtain the invention as specified in claim 7.

Claims 8 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang in view of Zhang-X and Reddy.

Regarding claim 8, Zhang teaches the method of claim 1, which obtains an LR2-SR2 data set of paired low- and high-resolution images (e.g., Sec. 6.4, Fig. 4(a) and (f), each of the low-res images from RealSRSet is input to the trained BSRGAN model to obtain a corresponding super-resolution image, thereby obtaining a dataset of LR2-SR2 pairs). Zhang does not explicitly teach training a second model with the LR2-SR2 data set.

However, Zhang-X does teach training a second model with an LR2-SR2 (low- and high-resolution image pair) data set, wherein the second model is an image super-resolution model (ECBSR), and the training method comprises steps of:

S201: inputting the LR2 image into the second model, and the second model outputs the SR2' image (e.g., Sec. 4.1, LR images are input to the model during training and the L1 loss is calculated; the L1 loss is based on the difference between the actual model output [i.e., SR2'] and the expected model output [i.e., SR2]);

S202: calculating a loss function using the super-resolution image SR2 and the super-resolution image SR2' (e.g., Sec. 4.1, LR images are input to the model during training and the L1 loss is calculated; the L1 loss is based on the difference between the actual model output [i.e., SR2'] and the expected model output [i.e., SR2]), and in the case where the loss function is less than a third preset threshold, saving the model parameter (see Note Regarding Stopping Criterion below); and

S203: repeating steps S201 to S202 for each LR2-SR2 data pair in the LR2-SR2 data set to obtain the model parameter of the second model (e.g., Sec. 4.1, iterative Adam optimization of the model; also see Note Regarding Stopping Criterion below).

Zhang-X's model can be trained on any set of corresponding low- and high-resolution image pairs, which would include the LR2-SR2 dataset produced by Zhang. The ECBSR super-resolution model trained by Zhang-X has some advantages over the BSRGAN super-resolution model trained by Zhang. For example, BSRGAN is a modified version of ESRGAN (e.g., Zhang: Sec. 5, 1st par.), and Zhang-X recognizes that the topology of ESRGAN "result[s] in much higher memory access cost (MAC) and sacrifice[s] the parallelism degree, which severely reduces the inference speed" (Sec. 3.1, Neat Topology; note that reference [49] describes ESRGAN). In contrast, the topology of ECBSR is plainer, which keeps "the MAC of our model as low as possible" (id.), thereby advantageously improving inference speeds, especially on mobile devices.

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the method of Zhang to train the ECBSR model of Zhang-X in order to improve the method, with the reasonable expectation that this would result in a method that obtained a model that could advantageously perform faster inference, especially on mobile devices. This technique for improving the method of Zhang was within the ordinary ability of one of ordinary skill in the art based on the teachings of Zhang-X.

Note Regarding Stopping Criterion. Zhang teaches using an iterative Adam optimization to minimize a loss function (e.g., Sec. 5, last par.), but does not explicitly teach any stopping criteria for the optimization. In particular, Zhang does not explicitly teach, in the case where the loss function is less than a third preset threshold, saving the model parameter, and repeating this step to obtain the model parameter. Zhang-X also does not teach this feature. However, Reddy does teach iterative super-resolution model training (e.g., Figs. 1-2) that does include, in the case where the loss function is less than a third preset threshold, saving the model parameter (e.g., [0038], [0031]-[0032], if the loss is less than the threshold, then it is determined to have an acceptable value and training is terminated, thus saving the current model parameter), and repeating this step to obtain the model parameter (e.g., [0038], "The process 200 can then be repeated"). Reddy teaches that "The threshold conditions can be chosen based on accuracy and computing efficiency considerations." ([0031]). The threshold can be set to ensure that a desired accuracy level is achieved, while avoiding the consumption of additional computational resources that would be needed to continue training beyond that desired accuracy level.

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the method of Zhang in view of Zhang-X as applied above with the loss threshold stopping criterion of Reddy in order to improve the method, with the reasonable expectation that this would result in a method that could advantageously balance the tradeoff between accuracy and computation, ensuring that a desired accuracy was achieved without expending additional computational resources to continue training beyond that point. This technique for improving the method of Zhang in view of Zhang-X was within the ordinary ability of one of ordinary skill in the art based on the teachings of Reddy. Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Zhang, Zhang-X and Reddy to obtain the invention as specified in claim 8.

Regarding claim 10, Zhang in view of Zhang-X and Reddy teaches the method according to claim 8. Zhang-X further teaches that the obtained image super-resolution model is an ECBSR model (throughout).

Allowable Subject Matter

Claims 4 and 5 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The following prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

'Delbracio' (WO 2024/058804 A1): trains separate models to restore each of multiple specific image degradations, then uses them to train a single image transformation model that can restore multiple image degradations (e.g., Figs. 1A and 3).

'S' ("Everything You Need To Know About Knowledge Distillation, aka Teacher-Student Model," 20 April 2023): gives background about knowledge distillation techniques, including response-based knowledge distillation, where a student model is trained based on the difference between its output and a teacher model's output given the same input data.

'Wang-21' ("Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data," 2021): describes a second-order degradation model, where random degradations are applied twice (e.g., Fig. 2), and uses a sinc filter (Sec. 3.3).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GEOFFREY E SUMMERS, whose telephone number is (571) 272-9915. The examiner can normally be reached Monday-Friday, 7:00 AM to 3:30 PM ET.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chan Park, can be reached at (571) 272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GEOFFREY E SUMMERS/
Examiner, Art Unit 2669

Prosecution Timeline

Dec 11, 2023
Application Filed
Jan 28, 2026
Preliminary Amendment
Feb 11, 2026
Non-Final Rejection — §101, §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586379
SYSTEM FOR DETECTING OCCURRENCE PERIOD OF CYCLICAL EVENT
2y 5m to grant; granted Mar 24, 2026
Patent 12561755
System and Method for Image Super-Resolution
2y 5m to grant; granted Feb 24, 2026
Patent 12555205
METHOD AND APPARATUS WITH IMAGE DEBLURRING
2y 5m to grant; granted Feb 17, 2026
Patent 12541838
INSPECTION APPARATUS AND REFERENCE IMAGE GENERATION METHOD
2y 5m to grant; granted Feb 03, 2026
Patent 12536682
METHOD AND SYSTEM FOR GENERATING A DEPTH MAP
2y 5m to grant; granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72%
With Interview: 99% (+35.4%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 348 resolved cases by this examiner. Grant probability derived from career allow rate.
