Prosecution Insights
Last updated: April 19, 2026
Application No. 17/425,715

IMAGE PROCESSING METHOD, IMAGE PROCESSING DEVICE, ELECTRONIC DEVICE AND COMPUTER-READABLE STORAGE MEDIUM

Final Rejection — §102, §103, §112
Filed: May 22, 2023
Examiner: SUMMERS, GEOFFREY E
Art Unit: 2669
Tech Center: 2600 — Communications
Assignee: BOE TECHNOLOGY GROUP CO., LTD.
OA Round: 2 (Final)

Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 5m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% — above average (249 granted / 348 resolved; +9.6% vs TC avg)
Interview Lift: +35.4% — strong (allow rate among resolved cases with an interview vs. without)
Typical Timeline: 2y 5m average prosecution; 27 applications currently pending
Career History: 375 total applications across all art units

Statute-Specific Performance

§101: 9.6% (-30.4% vs TC avg)
§102: 16.3% (-23.7% vs TC avg)
§103: 41.0% (+1.0% vs TC avg)
§112: 28.6% (-11.4% vs TC avg)
Tech Center average (comparison baseline) is an estimate. Based on career data from 348 resolved cases.
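Assuming the "vs TC avg" figures are percentage-point deltas, the implied Tech Center baseline can be recovered by subtraction; notably, every statute implies the same ~40% estimate, consistent with a single baseline being used for comparison:

```python
# Recover the implied Tech Center baseline from each examiner rate and its
# delta (assumes the deltas are percentage-point differences, as displayed).
rates = {"101": (9.6, -30.4), "102": (16.3, -23.7),
         "103": (41.0, +1.0), "112": (28.6, -11.4)}
tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
print(tc_avg)  # {'101': 40.0, '102': 40.0, '103': 40.0, '112': 40.0}
```

All four statutes back out to the same 40.0% baseline, which suggests the tool compares against one overall Tech Center estimate rather than per-statute averages.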

Office Action

§102 §103 §112
DETAILED ACTION

Response to Amendment

Claims 1-2, 4-5, 8, 10, 12, 16-18, 20, 22, 24, 26, 29-31, 33-34, and 37 were previously pending, with claims 2, 4-5, 31 and 33-34 withdrawn from consideration. Applicant’s amendment filed March 9, 2026, has been entered in full. The specification is amended. Claims 1, 10, 12, 16-18, 20, 24, 26, 29, 33 and 37 are amended. Claim 8 is cancelled. No new claims are added. Claims 2, 4-5, 31 and 33-34 remain withdrawn. Accordingly, claims 1-2, 4-5, 10, 12, 16-18, 20, 22, 24, 26, 29-31, 33-34, and 37 are now pending, with claims 1, 10, 12, 16-18, 20, 22, 24, 26, 29-30, and 37 remaining under consideration.

Response to Arguments

Applicant argues that amendments to the specification have overcome the previous objection to the title (Remarks filed March 9, 2026, hereinafter Remarks: Page 22). Examiner agrees. The previous objection to the title is withdrawn.

Applicant argues that amendments to claim 29 have overcome a previous objection for informalities (Remarks: Pages 22-23). Examiner agrees. The previous objection to claim 29 is withdrawn.

Applicant has amended the claims to further define the term “definition” in accordance with the BRI of that term presented in the Non-Final Rejection (Remarks: Page 23). Examiner agrees that the term is being interpreted as it has been explicitly defined in the amended claims.

The Non-Final Rejection included a rejection under 35 U.S.C. 112(b) of all the claims under consideration (Pages 5-8). In summary, the rejection explained that the claims were indefinite because it was unclear to what extent, if any, the claims were limited by the generator training recited in claim 1 and the further dependent claims, including claim 8. Applicant’s Remarks include the following (Pages 23-24):

[Two greyscale images reproduced from the Remarks are omitted here.]

To the extent that these Remarks assert that the previous rejection under 35 U.S.C. 112(b) has been overcome, they are respectfully non-persuasive. Amendment to include features of claim 8, which was included in the previous rejection, does not address or overcome the previous grounds of rejection under 35 U.S.C. 112(b). The previous rejection is maintained.

Applicant argues that further rejections specific to claims 8 and 24 under 35 U.S.C. 112(b) have been overcome by amendment (Remarks: Pages 24-25). Examiner agrees that these further rejections have been overcome and they are withdrawn.

Applicant traverses the previous rejections under 35 U.S.C. 102 and 103 (Remarks: Pages 25-29). Applicant first argues that the image repair in Can does not fall within the scope of the claimed improvement in clarity or perceptibility of an image (Remarks: Page 27). Examiner respectfully disagrees. The generative image shown as an example in Applicant’s Remarks (Page 27) is plainly clearer and more perceptible than the masked input version at least because it includes a complete nose and mouth, so the face is more clearly seen and perceived. Furthermore, as explicitly recited in the amended preamble of claim 1, the improved definition may include deblurring, and Can explicitly states that image repair may be applied to blurred image regions (Sec. 1, 1st par.); i.e., it may be used for deblurring as in the claimed invention.

Applicant next argues that Can does not disclose the discriminator configuration recited in amended claim 1 (Remarks: Pages 27-28, second and third points).
Examiner respectfully notes that, as explained in the Non-Final Rejection (‘112(b) rejection at pages 5-8 and ‘102 rejections of claims 1, 8 and 12 at pages 10-13), as best understood in view of the issues of indefiniteness, the claim is to a method that apparently does not require the structure of the discriminators or use of the discriminators during training, so Can’s disclosure can apparently fall within the apparent scope of the claim even if it does not describe the specific training discriminators recited in the amended claims.

Applicant requests rejoinder of the withdrawn claims based on their arguments that claim 1 is allowable (Remarks: Page 29). As explained above and in the rejections below, claim 1 is not in condition for allowance, so rejoinder is not appropriate at this time. MPEP 821.04.

Admitted Prior Art

In the Office Action dated December 12, 2025, Examiner took Official Notice of facts in the following instance(s):

At Page 18: “However, Examiner takes Official Notice that it is old and well-known in the art of image analysis to implement an image processing method as an electronic device, comprising a processor, a memory, and a program or instruction stored in the memory and executed by the processor, wherein the processor is configured to execute the program or instruction so as to implement the method. Such implementation in an electronic device (e.g., a computer) advantageously allows the image processing method to be performed quickly and efficiently.”

Regarding Official Notice, MPEP 2144.03(C) includes the following instructions:

“To adequately traverse such a finding, an applicant must specifically point out the supposed errors in the examiner’s action, which would include stating why the noticed fact is not considered to be common knowledge or well-known in the art.”

“A general allegation that the claims define a patentable invention without any reference to the examiner’s assertion of official notice would be inadequate.”

“If applicant does not traverse the examiner’s assertion of official notice or applicant’s traverse is not adequate, the examiner should clearly indicate in the next Office action that the common knowledge or well-known in the art statement is taken to be admitted prior art because applicant either failed to traverse the examiner’s assertion of official notice or that the traverse was inadequate. If the traverse was inadequate, the examiner should include an explanation as to why it was inadequate.”

In the reply filed March 9, 2026, Applicant generally alleges that the claims define a patentable invention without any reference to Examiner’s assertion of Official Notice, which is an inadequate traverse. Therefore, as required by the MPEP, Examiner clearly indicates that the Official Notice statement(s) noted above is/are taken to be admitted prior art because Applicant either failed to traverse it/them or inadequately traversed it/them.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 10, 12, 16-18, 20, 22, 24, 26, 29-30, and 37 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 recites a method and includes a clause “wherein the first generator is acquired through training a to-be-trained generator using a plurality of discriminators.” The claim goes on to recite further details of the training and the discriminators. It is unclear whether the recited training is required to be performed as part of the method, and this ambiguity makes the scope of the claim unclear and renders the claim indefinite.

The claim recites the method “comprising: receiving an input image; and processing the input image through a first generator to acquire an output image with definition higher than the input image” (emphasis added), which appears to indicate that these are the two steps of the method. The “wherein” clause appears to merely identify the source of the generator used to process the input image. Furthermore, “wherein” clauses may not require steps to be performed. MPEP 2111.04, Subsection I. These factors weigh in favor of an interpretation where training is not required to be performed as part of the claimed method.

However, the “wherein” clause describes a step of “training a to-be-trained generator using at least two discriminators” and it follows “comprising:” in the preamble.
Furthermore, the process of using a generator generally includes first training that generator before applying the trained generator (i.e., inference), so one of ordinary skill in the art would expect that a method of processing an image with a trained generator may include acquiring the generator through training a to-be-trained generator. These factors weigh in favor of an interpretation where training is required to be performed as part of the claimed method.

As explained above, it is unclear whether the method of claim 1 should be understood to include the recited step of training a to-be-trained generator, or not. This ambiguity makes the scope of the claim unclear and renders the claim indefinite.

If the training step is not required to be performed, then the scope of the claim is further indefinite because it is unclear to what degree the details of the “training a to-be-trained generator” recited in claim 1 and the dependent claims limit the scope of the method. A generator is typically a neural network including a series of layers, each layer including weights and filters that are applied to an input image to produce an output image. A generator is often initialized with random values for weights and filters, and then those values are updated during training, the final values defining a trained generator that is ready for use in inference. While training changes the values of the weights, filters, and other parameters, it does not change how the act of processing an image through the generator is performed. I.e., the architecture of the generator remains the same, the same number and sizes of filters are convolved, the feature maps have the same dimensionality, etc., with only the specific values of its weights being changed. Therefore, it is unclear to what extent, if any, a requirement that the first generator has been trained using a plurality of discriminators has any effect on how a step of “processing the image through a first generator” is performed. I.e., the architecture and processing steps performed by a given generator are unchanged regardless of how that generator was trained.

It is possible to define a product in terms of the process by which it is made. See, e.g., MPEP 2173.05(p). It is conceivable that the structure of a generator could be defined in terms of the process by which it is trained (i.e., made). However, claim 1 is not to a product or to the first generator itself. Instead, claim 1 is to a method that includes a step of processing an input image through a generator. As discussed above, it is unclear whether or to what degree the step of processing the input image through the first generator would be limited by the manner in which the generator was trained. For example, if the same generator architecture were trained using only a single discriminator, then processing an input image through it would still require processing the image through the same layers, filters, etc. (albeit with different specific weight values). The same multiplication, addition, etc. operations would be performed either way. This is analogous to how a generator is applied in the same way to different input images. The values of individual pixels in the different input images may change, and this changes the values of the feature maps being multiplied, added and otherwise propagated through the generator, but the process of applying the generator remains the same for all input images.

For at least these reasons, claim 1 is further indefinite because it is unclear to what degree the details of the “training a to-be-trained generator” recited in claim 1 and the dependent claims limit the scope of the method.

The issues discussed above can be summarized as follows: claim 1 is to a method of inference, but the noted “wherein” clause of claim 1 (and almost all of the further wherein clauses of claim 1 and the further elements recited in the dependent claims) are directed to how training is performed.
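The examiner's training-versus-inference point can be illustrated with a minimal sketch. The "architecture" and weight values below are hypothetical (not from Can or the claims); they only show that the forward pass is the identical sequence of operations no matter how the weights were obtained:

```python
# Minimal illustration: inference through a fixed architecture is the same
# sequence of operations regardless of how its weights were trained.
# The two-parameter "generator" below is hypothetical.

def forward(weights, image):
    """Apply a fixed pipeline: scale each pixel, then shift it."""
    w1, w2 = weights
    return [pixel * w1 + w2 for pixel in image]  # same ops for any weights

image = [0.2, 0.5, 0.9]

# Weight sets from two hypothetical training runs
# (e.g., trained with one discriminator vs. several):
weights_single_disc = (1.1, 0.05)
weights_multi_disc = (0.9, -0.02)

# Output values differ, but the processing step itself is unchanged.
out_a = forward(weights_single_disc, image)
out_b = forward(weights_multi_disc, image)
```

The same `forward` function handles both weight sets, mirroring the rejection's observation that the training provenance of the weights does not alter how the processing step is performed.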
Applicant may wish to amend the claims to recite a method of training, rather than inference.

Claims 10, 12, 16-18, 20, 22, 24, 26, and 29-30 are also indefinite at least because they include the indefinite limitations of claim 1. Claim 37 recites similar limitations and is also indefinite for substantially the same reasons as claim 1. Note that while claim 37 is to a device, it is defined in terms of a method.

For purposes of practicing compact prosecution, the claims are interpreted to not be limited by any of the recited training procedures. MPEP 2173.06, Subsection II. Nevertheless, some training details may be mapped in the rejections below in an effort to advance prosecution.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1, 10, 12, 16-18, 20, 22, 24, 26, and 29 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by ‘Can’ (“A method of face repair based on encoder-decoder and dual discrimination network,” July 2020).

Regarding claim 1, Can discloses an image processing method for denoising and/or deblurring an input image (e.g., Figure 1, the method includes performing image repair; e.g., Section 1, 1st paragraph, the repair may be applied to a blurred part of the image, i.e., the repair may include deblurring), comprising: receiving an input image (e.g., Figure 1, top-left, input image with mask), the input image comprising a facial image (e.g., Fig. 2, left two columns); and processing the input image through a first generator (e.g., Fig. 1, Completion Network) to acquire an output image (e.g., Fig. 1, top-right, generative image) with definition higher than the input image, the definition is clarity or perceptibility of an image (e.g., Fig. 1, the output image has been repaired to recover missing details; it has higher definition at least because the face can be seen and perceived more clearly than the defective input image), wherein the first generator is acquired through training a to-be-trained generator using a plurality of discriminators (Note the ‘112(b) rejection and associated claim interpretation; this limitation is directed to the training procedure, and the claim apparently does not require performing it, so this limitation apparently does not limit the scope of the claimed invention); wherein when training the to-be-trained generator using the plurality of discriminators to acquire the first generator, the to-be-trained generator and the plurality of discriminators are trained alternately in accordance with a training image and an authentication image to acquire the first generator, wherein the authentication image has definition higher than the training image, and when training the to-be-trained generator, a total loss of the to-be-trained generator comprises at least one of a first loss and a total adversarial loss of the plurality of discriminators (Note the ‘112(b) rejection and associated claim interpretation; this limitation is directed to the training procedure, and the claim apparently does not require performing it, so this limitation apparently does not limit the scope of the claimed invention); wherein the first generator comprises N repair modules, where N is an integer greater than or equal to 2 (Many different aspects of Can’s generator can be considered a “repair module”; for example, each layer of the encoder, each layer of the decoder, corresponding pairs of encoder and decoder layers, etc. can all be considered repair modules), wherein the plurality of discriminators comprise N discriminators of a first type with different network structures, respectively corresponding to the N repair modules, and discriminators of a second type configured to improve the local repairing of the definition of a face in the training image by the first generator (Note the ‘112(b) rejection and associated claim interpretation; this limitation is directed to the training procedure, and the claim apparently does not require performing it, so this limitation apparently does not limit the scope of the claimed invention); wherein the plurality of discriminators further comprise X discriminators of a third type, where X is a positive integer greater than or equal to 1, and each discriminator of the third type is configured to improve the repairing of details of a facial component in the training image by the first generator (Note the ‘112(b) rejection and associated claim interpretation; this limitation is directed to the training procedure, and the claim apparently does not require performing it, so this limitation apparently does not limit the scope of the claimed invention).

Regarding claim 10, Examiner notes that all of the limitations recited in the claim further define the training of the to-be-trained generator, but such limitations have been interpreted as not limiting the scope of the claimed method. See ‘112(b) rejection above. Accordingly, claim 10 is also rejected under 35 U.S.C. 102(a)(1) as being anticipated by Can for substantially the same reasons as claim 1.

Regarding claim 12, Examiner notes that all of the limitations recited in the claim further define the training of the to-be-trained generator, but such limitations have been interpreted as not limiting the scope of the claimed method. See ‘112(b) rejection above. Accordingly, claim 12 is also rejected under 35 U.S.C. 102(a)(1) as being anticipated by Can for substantially the same reasons as claim 1.
Regarding claim 16, Examiner notes that all of the limitations recited in the claim further define the training of the to-be-trained generator, but such limitations have been interpreted as not limiting the scope of the claimed method. See ‘112(b) rejection above. Accordingly, claim 16 is also rejected under 35 U.S.C. 102(a)(1) as being anticipated by Can for substantially the same reasons as claim 1.

Regarding claim 17, Examiner notes that all of the limitations recited in the claim further define the training of the to-be-trained generator, but such limitations have been interpreted as not limiting the scope of the claimed method. See ‘112(b) rejection above. Accordingly, claim 17 is also rejected under 35 U.S.C. 102(a)(1) as being anticipated by Can for substantially the same reasons as claim 1.

Regarding claim 18, Can discloses the image processing method according to claim 1, wherein the first generator comprises N repair modules having a same network structure, where N is an integer greater than or equal to 2 (See Fig. 1 and Table 1; if the N repair modules are layers of the encoder, then all N=7 modules have a same convolutional structure; if the N repair modules are layers of the decoder, then all N=7 modules have a same deconvolutional structure; if the N repair modules are each corresponding pair of encoder and decoder layers with a skip connection between them, then all N=7 modules have a same structure of a convolutional layer and a deconvolutional layer with a skip connection between). Examiner notes that all of the further limitations recited in the claim further define the training of the to-be-trained generator, but such limitations have been interpreted as not limiting the scope of the claimed method. See ‘112(b) rejection above. Accordingly, claim 18 is also rejected under 35 U.S.C. 102(a)(1) as being anticipated by Can for the reasons presented in the mapping above and substantially the same reasons as claim 1.
Regarding claim 20, Examiner notes that all of the limitations recited in the claim further define the training of the to-be-trained generator, but such limitations have been interpreted as not limiting the scope of the claimed method. See ‘112(b) rejection above. Accordingly, claim 20 is also rejected under 35 U.S.C. 102(a)(1) as being anticipated by Can for substantially the same reasons as claim 1.

Regarding claim 22, Examiner notes that all of the limitations recited in the claim further define the training of the to-be-trained generator, but such limitations have been interpreted as not limiting the scope of the claimed method. See ‘112(b) rejection above. Accordingly, claim 22 is also rejected under 35 U.S.C. 102(a)(1) as being anticipated by Can for substantially the same reasons as claims 20 and 1.

Regarding claim 24, Examiner notes that all of the limitations recited in the claim further define the training of the to-be-trained generator, but such limitations have been interpreted as not limiting the scope of the claimed method. See ‘112(b) rejection above. Accordingly, claim 24 is also rejected under 35 U.S.C. 102(a)(1) as being anticipated by Can for substantially the same reasons as claim 1.

Regarding claim 26, Examiner notes that all of the limitations recited in the claim further define the training of the to-be-trained generator, but such limitations have been interpreted as not limiting the scope of the claimed method. See ‘112(b) rejection above. Accordingly, claim 26 is also rejected under 35 U.S.C. 102(a)(1) as being anticipated by Can for substantially the same reasons as claim 1.

Regarding claim 29, Examiner notes that all of the limitations recited in the claim further define the training of the to-be-trained generator, but such limitations have been interpreted as not limiting the scope of the claimed method. See ‘112(b) rejection above. Accordingly, claim 29 is also rejected under 35 U.S.C. 102(a)(1) as being anticipated by Can for substantially the same reasons as claim 1.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 30 and 37 is/are rejected under 35 U.S.C. 103 as being unpatentable over Can.

Regarding claim 30, Can teaches the image processing method of claim 1. Can further teaches that the first generator comprises repair modules with scales of 64*64, 128*128, and 256*256 (e.g., Fig. 1, each encoder layer can be considered a repair module; e.g., Table 1, conv_3, conv_2 and conv_1 have input sizes/scales of 64*64, 128*128, and 256*256, respectively). Can does not explicitly teach a repair module with a scale of 512*512. However, Can does teach rules that were used to set the scales of its repair modules. Specifically, “The size of the convolution kernel in the encoder is 4x4, the step size is 2, the padding operation is 1, and after each convolution operation, the image is reduced by half, the number of convolution kernels is doubled …” (Sec. 2.1). Following this same rule, one of ordinary skill in the art would have been able to add an additional convolutional layer (i.e., an additional repair module) that would have a scale of 512*512 (i.e., 256 is half of 512, as 64 is half of 128 and so on). A corresponding decoder layer would also be added, per the rules described by Can.
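Can's quoted halving rule (4x4 kernel, stride 2, padding 1, image halved per layer) can be sanity-checked with a short calculation. Starting the sequence at 512 reflects the hypothetical modification posited in the rejection, not anything shown in Can's Table 1:

```python
# Layer scales under Can's stated rule: each encoder convolution
# (4x4 kernel, stride 2, padding 1) halves the spatial size.
# A starting scale of 512 models the additional layer proposed above.

def encoder_scales(input_size, num_layers):
    scales = []
    size = input_size
    for _ in range(num_layers):
        scales.append(size)
        size = size // 2  # image reduced by half after each convolution
    return scales

print(encoder_scales(512, 4))  # [512, 256, 128, 64]
```

The output shows that prepending one layer at 512*512 extends the existing 256/128/64 progression without disturbing it, which is the rejection's point about following Can's own rule.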
Adding such an additional layer would be desirable for at least one of two reasons. First, in general, the performance of a neural network is expected to increase with increased depth (i.e., an increased number of layers), so one of ordinary skill in the art would expect that adding an additional layer to the generator would allow for improved performance (i.e., better repair) to be achieved. Second, higher-resolution images are generally considered to be of higher quality, and adding a 512x512 input layer would lead to the modified generator producing a higher-resolution 512x512-pixel output image (relative to the 256x256-pixel resolution used in Can).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the method of Can to use an additional repair module of scale 512*512 in order to improve the method with the reasonable expectation that this would result in a method whose generator achieved higher repair performance and/or produced higher-resolution images considered to have higher quality. This technique for improving the method of Can was within the ordinary ability of one of ordinary skill in the art based on the teachings of Can. Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Can to obtain the invention as specified in claim 30.

Regarding claim 37, Examiner notes that the claim recites an electronic device, comprising a processor, a memory, and a program or instruction stored in the memory and executed by the processor, wherein the processor is configured to execute the program or instruction so as to implement an image processing method that is substantially the same as the method of claim 1. Can’s teachings fall within the scope of the method of claim 1 (see above). Can certainly suggests some sort of computer implementation (e.g., Sec. 3.1, first paragraph), but does not teach details of an electronic device used to implement the method. In particular, Can does not explicitly teach implementing its image processing method as an electronic device, comprising a processor, a memory, and a program or instruction stored in the memory and executed by the processor, wherein the processor is configured to execute the program or instruction so as to implement the method.

However, it has been taken as admitted prior art that it is old and well-known in the art of image analysis to implement an image processing method as an electronic device, comprising a processor, a memory, and a program or instruction stored in the memory and executed by the processor, wherein the processor is configured to execute the program or instruction so as to implement the method. Such implementation in an electronic device (e.g., a computer) advantageously allows the image processing method to be performed quickly and efficiently.

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to implement the image processing method of Can as an electronic device, comprising a processor, a memory, and a program or instruction stored in the memory and executed by the processor, wherein the processor is configured to execute the program or instruction so as to implement the method in order to improve the method with the reasonable expectation that this would result in a method that could be performed quickly and efficiently. Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Can to obtain the invention as specified in claim 37.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GEOFFREY E SUMMERS whose telephone number is (571) 272-9915. The examiner can normally be reached Monday-Friday, 7:00 AM to 3:30 PM ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chan Park, can be reached at (571) 272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GEOFFREY E SUMMERS/
Examiner, Art Unit 2669

Prosecution Timeline

May 22, 2023
Application Filed
Dec 04, 2025
Non-Final Rejection — §102, §103, §112
Mar 09, 2026
Response Filed
Mar 26, 2026
Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586379: SYSTEM FOR DETECTING OCCURRENCE PERIOD OF CYCLICAL EVENT (granted Mar 24, 2026; 2y 5m to grant)
Patent 12561755: System and Method for Image Super-Resolution (granted Feb 24, 2026; 2y 5m to grant)
Patent 12555205: METHOD AND APPARATUS WITH IMAGE DEBLURRING (granted Feb 17, 2026; 2y 5m to grant)
Patent 12541838: INSPECTION APPARATUS AND REFERENCE IMAGE GENERATION METHOD (granted Feb 03, 2026; 2y 5m to grant)
Patent 12536682: METHOD AND SYSTEM FOR GENERATING A DEPTH MAP (granted Jan 27, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
72%
Grant Probability
99%
With Interview (+35.4%)
2y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 348 resolved cases by this examiner. Grant probability derived from career allow rate.
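The 72% grant probability is consistent with the examiner's career counts reported earlier (249 granted of 348 resolved cases); a quick check of the rounding:

```python
# Grant probability as career allow rate: 249 granted / 348 resolved.
granted, resolved = 249, 348
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 71.6%, displayed as 72% when rounded
```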
