Prosecution Insights
Last updated: April 19, 2026
Application No. 18/616,953

IMAGE PROCESSING METHOD AND IMAGE PROCESSING DEVICE BASED ON NEURAL NETWORK

Non-Final OA • §101, §103, §112
Filed: Mar 26, 2024
Examiner: KEUP, AIDAN JAMES
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 80% (48 granted / 60 resolved) • above average, +18.0% vs TC avg
Interview Lift: +12.0% (moderate lift; resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline) • 22 currently pending
Total Applications: 82 across all art units (career history)

Statute-Specific Performance

§101: 18.7% (-21.3% vs TC avg)
§103: 45.8% (+5.8% vs TC avg)
§102: 14.7% (-25.3% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 60 resolved cases
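As a sanity check, the "vs TC avg" deltas above can be subtracted from the examiner's per-statute rates to recover the Tech Center baseline the dashboard is comparing against. The sketch below just replays that arithmetic on the values copied from the table; nothing here comes from the underlying dataset itself:

```python
# Examiner allowance rate (%) and delta vs Tech Center average (%),
# per statute, copied from the table above.
stats = {
    "101": (18.7, -21.3),
    "103": (45.8, +5.8),
    "102": (14.7, -25.3),
    "112": (17.9, -22.1),
}

for statute, (rate, delta) in stats.items():
    tc_avg = round(rate - delta, 1)  # baseline implied by the delta
    print(f"Sec. {statute}: examiner {rate}% vs TC avg {tc_avg}%")
```

Every implied baseline comes out to 40.0%, which is consistent with the note that a single Tech Center average estimate is applied across statutes.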

Office Action

Rejections: §101, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1-15 are pending.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 03/26/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the IDS is being considered by the examiner, except for the foreign patent document that is crossed out because it was not received by the examiner.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Claim Objections

Claim 2 is objected to because of the following informality: there are two periods at the end of the claim. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

The term "suitable" in claims 1 and 12 is a relative term which renders the claims indefinite.
The term "suitable" is not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. The term "suitable" renders the "second DNN" and the "third DNN" indefinite, as it is unclear what would make a model "suitable" for their purposes. Claims 2-11 and 13-15 are rejected as depending from claims 1 and 12, respectively.

Claims 4 and 15 recite the limitation "wherein the distribution model is a Gaussian distribution model" in line 2 of the claims. There is insufficient antecedent basis for this limitation in the claims.

Claim 8 recites the limitation "the distribution model is applied to each object existing in the low-resolution input image" in lines 2-3 of the claim. There is insufficient antecedent basis for this limitation in the claim.

Claim 9 recites the limitation "by nonlinearly transforming a depth value of the depth map" in lines 2-3 of the claim. There is insufficient antecedent basis for this limitation in the claim.

Claim 10 recites the limitation "the depth map is obtained through a fourth DNN trained to extract depth information of an image" in lines 2-3 of the claim. There is insufficient antecedent basis for this limitation in the claim.

Claim 11 recites the limitation "the fourth DNN comprises a U-shaped neural network" in line 2 of the claim. There is insufficient antecedent basis for this limitation in the claim.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

35 U.S.C. 101 requires that a claimed invention must fall within one of the four eligible categories of invention (i.e., process, machine, manufacture, or composition of matter) and must not be directed to subject matter encompassing a judicially recognized exception as interpreted by the courts. MPEP 2106. Three categories of subject matter are judicially recognized exceptions to 35 U.S.C. § 101 (i.e., patent ineligible): (1) laws of nature, (2) physical phenomena, and (3) abstract ideas. MPEP 2106(II). To be patent-eligible, a claim directed to a judicial exception must, as a whole, be integrated into a practical application or directed to significantly more than the exception itself (MPEP 2106). Hence, the claim must describe a process or product that applies the exception in a meaningful way, such that it is more than a drafting effort designed to monopolize the exception.

Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. In the analysis below, the method of independent claim 1 is considered representative of independent claim 12. Each of independent claims 1 and 12 is directed to one of the four statutory categories of eligible subject matter; thus, the claims pass Step 1 of the Subject Matter Eligibility Test (see flowchart in MPEP 2106).

Step 2A, Prong 1 Analysis

Independent claims 1 and 12 are directed to: obtaining a feature map distinguishing between a near object and a distant object of a low-resolution input image; obtaining a composited weight map for the low-resolution input image by inputting the feature map to a first Deep Neural Network (DNN); obtaining a first image by inputting the low-resolution input image to a second DNN suitable for restoring a distant object; obtaining a second image by inputting the low-resolution input image to a third DNN suitable for restoring a near object; and obtaining a high-resolution image for the low-resolution input image by performing weighted averaging on the first image and the second image using the composited weight map.
These steps amount to generic data gathering by generic computing devices performing mathematical concepts. Accordingly, the analysis under Prong 1 of Step 2A of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (see flowchart in MPEP 2106).

Additional Elements

Independent claim 1 claims a first, second, and third DNN. Independent claim 12 claims a memory; a processor comprising processing circuitry; and a first, second, and third DNN.

Step 2A, Prong 2 Analysis

The above-identified elements do not integrate the judicial exception into a practical application, nor do they suggest an improvement. The additional elements of a memory, a processor comprising processing circuitry, and a first, second, and third DNN amount to merely using generic computer hardware or components as a tool to perform the claimed mental process. The DNNs, recited this broadly, are considered generic computer hardware, as they are not differentiated from any other DNN. Using a general-purpose computer to apply a judicial exception does not qualify as a particular machine and therefore does not integrate the judicial exception into a practical application (see MPEP 2106.05(b)). Furthermore, implementing an abstract idea on a computer does not integrate a judicial exception into a practical application (see MPEP 2106.05(f)). Moreover, the additional elements of the claims do not recite an improvement in the functioning of a computer or another technology or technical field, the claimed steps do not effect a transformation, and the claims do not apply the judicial exception in any meaningful way beyond generically linking its use to a particular technological environment (see MPEP 2106.04(d)). Further, the act of acquiring data is mere data gathering, which amounts to insignificant extra-solution activity (see MPEP 2106.05(g)).

Therefore, the analysis under Prong 2 of Step 2A of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (see flowchart in MPEP 2106).

Step 2B

Finally, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Regarding independent claims 1 and 12, as noted above, the additional elements are generic computer features which perform generic computer functions that are well-understood, routine, and conventional, and do not amount to more than implementing the abstract idea with a computerized system. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements individually. There is no indication that the combination of elements improves the functioning of a computer or any other technology. Their collective functions merely provide conventional computer implementation, and mere implementation on a generic computer does not add significantly more to the claims. Accordingly, the analysis under Step 2B of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (see flowchart in MPEP 2106).

For all the foregoing reasons, independent claims 1 and 12 do not recite eligible subject matter under 35 U.S.C. 101.

Claims 2 and 13 recite wherein the second DNN comprises a DNN using one of an L1 loss model or an L2 loss model, and the third DNN comprises a DNN using a Generative Adversarial Network (GAN) model. The features of claims 2 and 13 are directed towards the mathematical concepts and generic computer hardware recited in claims 1 and 12. Accordingly, claims 2 and 13 do not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.
Claims 3 and 14 recite wherein the feature map is obtained by applying a distribution model to a depth map of the low-resolution image. The features of claims 3 and 14 are directed towards the mathematical concepts recited in claims 1 and 12. Accordingly, claims 3 and 14 do not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

Claims 4 and 15 recite wherein the distribution model is a Gaussian distribution model. The features of claims 4 and 15 are directed towards the mathematical concepts recited in claims 1 and 12. Accordingly, claims 4 and 15 do not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

Claim 5 recites wherein the depth map is obtained from distance information included in the low-resolution input image. The features of claim 5 are directed towards the mathematical concepts recited in claim 1. Accordingly, claim 5 does not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

Claim 6 recites wherein the depth map is obtained through a three-dimensional (3D) restoration method. The features of claim 6 are directed towards the mathematical concepts recited in claim 1. Accordingly, claim 6 does not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

Claim 7 recites wherein the depth map is obtained from distance information obtained in a graphics rendering process. The features of claim 7 are directed towards the mathematical concepts recited in claim 1. Accordingly, claim 7 does not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

Claim 8 recites wherein the distribution model is applied to each object existing in the low-resolution input image. The features of claim 8 are directed towards the mathematical concepts recited in claim 1. Accordingly, claim 8 does not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

Claim 9 recites wherein the first DNN distinguishes at least one object in the low-resolution input image by nonlinearly transforming a depth value of the depth map. The features of claim 9 are directed towards the mathematical concepts recited in claim 1. Accordingly, claim 9 does not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

Claim 10 recites wherein the depth map is obtained through a fourth DNN trained to extract depth information of an image. The features of claim 10 are directed towards the mathematical concepts and generic computer hardware recited in claim 1. Accordingly, claim 10 does not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

Claim 11 recites wherein the fourth DNN comprises a U-shaped neural network. The features of claim 11 are directed towards the generic computer hardware recited in claim 1. Accordingly, claim 11 does not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2 and 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (U.S. Patent Publication No. 2021/0073945, listed in the IDS received 03/26/2024, hereinafter "Kim") in view of Chen et al. (U.S. Patent Publication No. 2023/0288910, hereinafter "Chen") and Liu et al. (CN 113609889 A, using the translation provided herein, hereinafter "Liu").

Regarding claim 1, Kim discloses an image processing method based on a neural network, the image processing method comprising:

obtaining a first image by inputting the low-resolution input image to a second DNN suitable for restoring a distant object (Kim [0196]: "The processor may select a super resolution model suitable for the recognized object from the super resolution model groups according to object based on the fact that the recognized image is a person (S130). The super resolution model group according to object may include a super resolution model for text, a super resolution model for a person, and the like"; the models are trained for certain objects, both close and distant. Kim [0077]: "For example, the super resolution models 130 to which the artificial intelligence technology is applied may be or include various learning models such as a deep neural network or other types of machine learning models"); and

obtaining a second image by inputting the low-resolution input image to a third DNN suitable for restoring a near object (Kim [0196] and [0077], quoted above).

Kim does not explicitly disclose the method comprising: obtaining a feature map distinguishing between a near object and a distant object of a low-resolution input image; obtaining a composited weight map for the low-resolution input image by inputting the feature map to a first Deep Neural Network (DNN); and obtaining a high-resolution image for the low-resolution input image by performing weighted averaging on the first image and the second image using the composited weight map.

However, Chen teaches the method comprising:

obtaining a feature map distinguishing between a near object and a distant object of a low-resolution input image (Chen [0040]: "A score map is a set of values. For example, a score map may be a set (e.g., array, matrix, etc.) of weights, where each weight expresses a relevance of a corresponding feature of the first features. For instance, higher weights of the score map may indicate higher relevance of corresponding first features for determining a high-resolution thermal image. In some examples, the score map may be determined by a neural network or a portion of a neural network (e.g., node(s), layer(s), gating neural network, etc.). For instance, a neural network or a portion of a neural network may be trained to determine weights for the first features. In some examples, a neural network or portion of a neural network may be used to determine feature relevance and adaptively fuse the first features from the model and the simulated thermal image (e.g., low-resolution thermal image). For instance, the neural network or portion of a neural network may utilize the first features (based on a slice or slices, for example), the second features (from residual neural network(s), for example), and the simulated thermal image (e.g., the original simulated thermal image), and may infer the score map as a weight map of the first features"); and

obtaining a composited weight map for the low-resolution input image by inputting the feature map to a first Deep Neural Network (DNN) (Chen [0040], quoted above).

It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature map and weight map as taught by Chen with the method of Kim, because doing so would improve the method by allowing it to determine which details of the image should be given more weight when improving the resolution of the image. This motivation for the combination of Kim and Chen is supported by KSR exemplary rationale (D), applying a known technique to a known device (method, or product) ready for improvement to yield predictable results.

The Kim and Chen combination does not explicitly disclose the method comprising: obtaining a high-resolution image for the low-resolution input image by performing weighted averaging on the first image and the second image using the composited weight map.

However, Liu teaches this limitation (Liu, page 7: "using weighted average strategy for splicing, recovering the resolution before the area cutting, eliminating the joint seam effect; firstly, selecting the prediction probability map of each small image plaque, and obtaining the probability prediction result of the weighted average according to the voting strategy, as the final prediction result; each parameter of the corresponding position of the weight matrix is filled by the overlapping times of each statistical pixel, so as to eliminate the boundary effect caused by the inconsistency of the prediction result of the adjacent patch image").

It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the weighted averaging as taught by Liu with the method of Kim and Chen, because it would improve the method by allowing the two images from the different networks to be fused based on the strengths of their resolution improvement. This motivation for the combination of Kim, Chen, and Liu is supported by KSR exemplary rationale (D), applying a known technique to a known device (method, or product) ready for improvement to yield predictable results.

Regarding claim 12, it is rejected under the same analysis as claim 1 above, along with Kim's disclosure of a device comprising a memory and at least one processor comprising processing circuitry (Kim [0042]: "An apparatus for enhancing image resolution according to an embodiment of the present disclosure may include: a processor; and a memory connected to the processor, in which the memory stores instructions").

Regarding claim 2, Kim discloses the method, wherein the second DNN comprises a DNN using one of an L1 loss model or an L2 loss model (Kim [0134]: "An artificial neural network is characterized by features of its model, the features including an activation function, a loss function or cost function, a learning algorithm, an optimization algorithm, and so forth. Also, the hyperparameters are set before learning, and model parameters can be set through learning to specify the architecture of the artificial neural network"; Kim [0135]: "Loss functions typically use means squared error (MSE) or cross entropy error (CEE), but the present disclosure is not limited thereto").

Regarding claim 13, it is rejected under the same analysis as claim 2 above.
Regarding claim 11, the Kim and Chen combination does not explicitly disclose the method, wherein the fourth DNN comprises a U-shaped neural network. However, Liu teaches this limitation (Liu, page 5: "selecting the used semantic segmentation network model (embodiment adopts Deeplab v3 + network model), specifically implementation can select U-net and semantic segmentation network, selecting the used backbone network, specifically implementation can be selected resnet, hrnet and so on"). It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the U-net of Liu with the method of Kim and Chen, because it is another type of network structure known in the art and because it is a substitution of one type of network structure for another. This motivation for the combination of Kim, Chen, and Liu is supported by KSR exemplary rationale (A), combining prior art elements according to known methods to yield predictable results, and rationale (B), simple substitution of one known element for another to obtain predictable results.

Claims 3-4 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over the Kim, Chen, and Liu combination in view of Jaber et al. (U.S. Patent Publication No. 2022/0375602, hereinafter "Jaber").

Regarding claim 3, the Kim, Chen, and Liu combination does not explicitly disclose the method, wherein the feature map is obtained by applying a distribution model to a depth map of the low-resolution image. However, Jaber teaches this limitation (Jaber [0134]: "As yet another example, the distributions may be determined in a parametric manner, such as according to a selected distribution type or kernel that a machine learning model may fit to the distribution of the feature in the input (e.g., a Gaussian mixture model may be applied to determine Gaussian distributions of subsets of the feature). Other distribution models may be applied, including parametric distribution models such as chi-square fit, a Poisson distribution, and a beta distribution and non-parametric distribution models such as histograms, binning, and kernel methods. It may be appreciated by persons of ordinary skill in the art that each type of distribution may have different distribution features (e.g., mean, deviation, higher order moments, etc.), which may be used within the classifiers"). It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the distribution model of Jaber with the method of Kim, Chen, and Liu, because it is a known method of finding a feature map from an image. This motivation for the combination of Kim, Chen, Liu, and Jaber is supported by KSR exemplary rationale (A), combining prior art elements according to known methods to yield predictable results.

Regarding claim 14, it is rejected under the same analysis as claim 3 above.

Regarding claim 4, the Kim, Chen, and Liu combination does not explicitly disclose the method, wherein the distribution model is a Gaussian distribution model. However, Jaber teaches a Gaussian distribution model (Jaber [0134], quoted above). It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the Gaussian distribution model of Jaber with the method of Kim, Chen, and Liu, because it is a known method of finding a feature map from an image. This motivation for the combination of Kim, Chen, Liu, and Jaber is supported by KSR exemplary rationale (A), combining prior art elements according to known methods to yield predictable results.

Regarding claim 15, it is rejected under the same analysis as claim 4 above.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over the Kim, Chen, and Liu combination in view of Chen et al. (CN 113344997 A, using the translation provided herein, hereinafter "Chen 997").

Regarding claim 5, the Kim, Chen, and Liu combination does not explicitly disclose the method, wherein the depth map is obtained from distance information included in the low-resolution input image.
However, Chen 997 teaches this limitation (Chen 997, page 2: "The invention claims a method and system for quickly obtaining a target object-only high definition object, obtaining a depth map with extremely small loss rate by optimizing neural network model, and performing depth interception on the depth map"). It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the use of distance information as taught by Chen 997 with the method of Kim, Chen, and Liu, because using distance data would improve the accuracy of the depth map. This motivation for the combination of Kim, Chen, Liu, and Chen 997 is supported by KSR exemplary rationale (D), applying a known technique to a known device (method, or product) ready for improvement to yield predictable results.

Allowable Subject Matter

Claims 6-10 would be allowable if rewritten to overcome the rejections under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, and the rejections under 35 U.S.C. 101 set forth in this Office action, and to include all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AIDAN KEUP, whose telephone number is (703) 756-4578. The examiner can normally be reached Monday-Friday, 8:00-4:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571) 270-3717.

The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/AIDAN KEUP/
Examiner, Art Unit 2666

/Molly Wilburn/
Primary Examiner, Art Unit 2666
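For readers following the technical dispute, the data flow recited in claim 1 (and rejected above as an abstract idea) can be sketched in a few lines. This is a hedged illustration only: the three "DNNs" below are trivial stand-in functions, not the applicant's trained models, and the Gaussian depth transform is borrowed from dependent claims 3-4; only the pipeline shape (feature map, then weight map, then weighted average of two restorations) tracks the claim language.

```python
import numpy as np

def feature_map_from_depth(depth, sigma=2.0):
    """Gaussian distribution model over a depth map (cf. claims 3-4):
    near objects (small depth) map toward 1, distant objects toward 0."""
    return np.exp(-(depth ** 2) / (2 * sigma ** 2))

def first_dnn(fmap):
    # Stand-in for the first DNN: squash the feature map into a
    # per-pixel composited weight map in (0, 1).
    return 1.0 / (1.0 + np.exp(-4.0 * (fmap - 0.5)))

def second_dnn(lr):  # stand-in for the DNN "suitable for restoring a distant object"
    return lr * 0.9

def third_dnn(lr):   # stand-in for the DNN "suitable for restoring a near object"
    return lr * 1.1

def super_resolve(lr_image, depth_map):
    fmap = feature_map_from_depth(depth_map)  # distinguishes near vs. distant
    w = first_dnn(fmap)                       # composited weight map
    first_image = second_dnn(lr_image)        # distant-object restoration
    second_image = third_dnn(lr_image)        # near-object restoration
    # Weighted averaging of the two restorations using the weight map
    # (the step the Office Action maps to Liu's splicing strategy):
    return w * second_image + (1.0 - w) * first_image

lr = np.ones((4, 4))
depth = np.linspace(0.0, 10.0, 16).reshape(4, 4)
hr = super_resolve(lr, depth)  # near pixels lean on third_dnn, far pixels on second_dnn
```

The per-pixel blend is the point the §103 rejection turns on: Kim supplies the two object-specific restorations, Chen the weight map, and Liu the weighted-average fusion.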

Prosecution Timeline

Mar 26, 2024 — Application Filed
Jan 29, 2026 — Non-Final Rejection (§101, §103, §112)
Apr 10, 2026 — Applicant Interview (Telephonic)
Apr 10, 2026 — Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602774
Regional Pulmonary V/Q via image registration and Multi-Energy CT
2y 5m to grant • Granted Apr 14, 2026
Patent 12597140
METHOD, SYSTEM AND DEVICE OF IMAGE SEGMENTATION
2y 5m to grant • Granted Apr 07, 2026
Patent 12597168
METHOD FOR CONVERTING NEAR INFRARED IMAGE TO RGB IMAGE AND APPARATUS FOR SAME
2y 5m to grant • Granted Apr 07, 2026
Patent 12592082
DEVICE AND METHOD FOR PROVIDING INFORMATION FOR VEHICLE USING ROAD SURFACE
2y 5m to grant • Granted Mar 31, 2026
Patent 12586182
Multi-Prong Multitask Convolutional Neural Network for Biomedical Image Inference
2y 5m to grant • Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview (+12.0%): 92%
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 60 resolved cases by this examiner. Grant probability derived from career allow rate.
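The headline projections follow directly from the career data shown earlier; a minimal reproduction is below. The additive interview-lift model is an assumption about how the dashboard combines its numbers, not a documented formula:

```python
# Reproducing the dashboard's projections from the examiner's career data.
granted, resolved = 48, 60                    # from the career history above
allow_rate = granted / resolved               # 0.80 -> "80% Grant Probability"
interview_lift = 0.12                         # "+12.0% Interview Lift"
with_interview = allow_rate + interview_lift  # 0.92 -> "92% With Interview"

print(f"base {allow_rate:.0%}, with interview {with_interview:.0%}")
```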
