Prosecution Insights
Last updated: April 19, 2026
Application No. 18/543,075

METHOD FOR LEARNING ARTIFICIAL NEURAL NETWORK BASED KNOWLEDGE DISTILLATION AND COMPUTING DEVICE FOR EXECUTING THE SAME

Non-Final OA: §103, §112

Filed: Dec 18, 2023
Examiner: HANSEN, CONNOR LEVI
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: UNIVERSITY-INDUSTRY COOPERATION GROUP OF KYUNG HEE UNIVERSITY
OA Round: 1 (Non-Final)

Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 75% (21 granted / 28 resolved) — above average, +13.0% vs TC avg
Interview Lift: +29.2% for resolved cases with interview
Typical Timeline: 2y 10m avg prosecution; 32 currently pending
Career History: 60 total applications across all art units

Statute-Specific Performance

§101: 19.1% (-20.9% vs TC avg)
§103: 39.9% (-0.1% vs TC avg)
§102: 16.8% (-23.2% vs TC avg)
§112: 23.7% (-16.3% vs TC avg)
Deltas are against a Tech Center average estimate • Based on career data from 28 resolved cases
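The per-statute figures above are simple point deltas against the Tech Center average. A minimal sketch of that arithmetic (the TC averages here are back-calculated from the displayed deltas, each of which works out to 40.0; they are estimates, not published figures):

```python
# Examiner's per-statute rates, as shown above (percent)
examiner_rate = {"101": 19.1, "103": 39.9, "102": 16.8, "112": 23.7}

# Tech Center averages back-calculated from the displayed deltas (assumption)
tc_avg = {"101": 40.0, "103": 40.0, "102": 40.0, "112": 40.0}

def delta_vs_tc(statute):
    """Signed difference vs the TC average, in percentage points."""
    return round(examiner_rate[statute] - tc_avg[statute], 1)

for s in ("101", "103", "102", "112"):
    print(f"§{s}: {examiner_rate[s]}% ({delta_vs_tc(s):+}% vs TC avg)")
```

That every statute implies the same 40.0% TC baseline suggests the dashboard uses one aggregate Tech Center figure rather than per-statute averages.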

Office Action

Rejections: §103, §112
Detailed Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “output unit” in claims 1, 6, 7, 10, 15, 16, and 19.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification (pg. 17, lines 8-23) as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2-8 and 11-17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 2 recites “arranging the plurality of divided blocks in order in respective feature extractors of the plurality of sub-student models arranged in parallel”, which is indefinite. It is unclear what specific order the plurality of divided blocks is required to be in. For example, it is unclear if the order merely corresponds to the sequence of the respective feature extractors of the plurality of sub-student models, or whether the divided blocks are required to be arranged in a specific order within each of the respective feature extractors of the plurality of sub-student models. Thus, one of ordinary skill in the art cannot ascertain the scope of the claims. For examination purposes, the limitation will be interpreted as arranging the plurality of divided blocks in an order that corresponds to the respective feature extractors of the plurality of sub-student models arranged in parallel.

Claim 11 contains limitations found analogous to that of claim 2. Therefore, claim 11 is rejected for the same reason as claim 2.

Claim 3 recites “the same input image”, which lacks antecedent basis. It is unclear if the element is meant to refer to the previously recited “input image” of claim 1 or if it is meant to refer to a new element. For examination purposes, the element will be interpreted as corresponding to the same input image of the teacher model.

Claim 12 contains limitations found analogous to claim 3.
Therefore, claim 12 is rejected for the same reason as claim 3.

Claims 4-8 and 13-17 are rejected as being dependent on a rejected base claim.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 9-13, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Sridhar et al. (US 20210279595 A1) (hereinafter Sridhar) in view of MathWorks (“Preprocess Images for Deep Learning,” MATLAB Help Center, 2022).

Regarding claim 1, Sridhar teaches a method for learning an artificial neural network performed on a computing device including one or more processors and a memory that stores one or more programs executed by the one or more processors (Sridhar, see Fig. 7, Device 700, Processor 702, Memory 704, and Executable instructions 710), the method comprising:

training a teacher model including a feature extractor including a plurality of sequential blocks for extracting a feature map from an input image, and an output unit that outputs a predicted value based on the feature map (Sridhar, “With reference to FIG. 2A, in a first training or pre-training step, the teacher network 102 is trained conventionally using a training dataset that includes a plurality of training samples, each training sample being labelled training data. The training sample in the training dataset is provided as input data 106, and the teacher network 102 generates teacher inference data 108 based on the input data 106 (e.g., the training sample). The ground truth label associated with each respective training sample in the training dataset is used as a hard target by a loss function that calculates a loss between the generated teacher inference data 108 and the ground truth label, which is back-propagated through the teacher network 102 to adjust the parameters of the neural network (e.g., weights of the neural network).”, pg. 7, paragraph 0080; see Figs. 2A-2B, teacher inference data 108, and Figs. 3A-3B, teacher sub-networks 302, 310, 312, 314, and 316. A teacher model is trained to generate predictions for input images. This teacher model consists of sequential convolution blocks which determine a feature map used to make predictions.);

dividing the plurality of sequential blocks of the teacher model into separate blocks, respectively; generating a plurality of sub-student models by arranging the plurality of divided blocks in parallel; and training each of the plurality of sub-student models (Sridhar, “Example embodiments will now be described with reference to an integrated teacher-student system (or “integrated system”) 100 in FIG. 1B. The integrated system 100 includes multiple integrated teacher-student modules 104. As described below with reference to FIG. 3, each integrated teacher-student module 104 comprises a portion of the teacher network 102, and a student sub-network.”, pg. 4, paragraph 0047; “With reference to FIG. 2B, a second training step or knowledge distillation (KD) step is shown. The teacher network 102 continues to be trained conventionally using the labelled training data. The training data is provided as input data 106, and the teacher network 102 generates teacher inference data 108. The ground truth labels of the training data are used as hard targets for the teacher as described above. During this step, the teacher-student modules 104 are also trained using the ground truth labels, with a loss function calculating a student loss for back-propagation through the teacher-student module 104, starting at the student sub-network 322, 324, 326, 328 and propagating back through all its upstream teacher sub-networks… Also during this step, the teacher-student modules undergo knowledge distillation training using soft targets. The integrated system 100 uses a cascading training framework whereby the high-level, downstream teacher-student modules 104 (such as TS3 126 and TS4 128) are trained using the teacher inference data 108 generated by the teacher network 102 based on input data 106 received by the teacher network 102, and the outputs of these downstream teacher-student modules 104 are used to train upstream teacher-student modules 104 (such as TS1 122 and TS2 124).”, pg. 7, paragraphs 0082-0083. The teacher model is integrated with student models by dividing the teacher model according to its convolution blocks and arranging teacher-student branches in parallel. Each branch can reasonably be considered a student model generated by arranging the blocks in parallel because each divided teacher block is paired with a corresponding student sub-network and this pairing is arranged side-by-side as parallel branches (e.g., TS1-TS4 in Fig. 1) for training and inference.).

Sridhar does not teach a teacher model including a preprocessor that preprocesses an input image to a certain size. However, MathWorks teaches a teacher model including a preprocessor that preprocesses an input image to a certain size (MathWorks, “To train a network and make predictions on new data, your images must match the input size of the network. If you need to adjust the size of your images to match the network, then you can rescale or crop your data to the required size.”, pg. 1, 1st paragraph; see table on pgs. 1 and 2, under the Resize Images Using Rescaling and Cropping section).

Sridhar teaches training a teacher model which includes inputting image data (Sridhar, see Fig. 1B, input data 106), but Sridhar does not teach cropping, rescaling, or resizing input images prior to being provided to the teacher model. MathWorks teaches sizing input images to match an input size required by a neural network (see above). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the teacher model of Sridhar to include an image-sizing preprocessing step as taught by MathWorks (MathWorks, pg. 1, 1st paragraph; see table on pgs. 1 and 2, under the Resize Images Using Rescaling and Cropping section). The motivation for doing so would have been to standardize training images to the model’s required size, thereby increasing the amount of usable training data. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine the teachings of Sridhar with MathWorks to obtain the invention as specified in claim 1.
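The claim-1 mapping combines two pieces: a teacher whose feature extractor is a chain of sequential blocks, later split so each block seeds a parallel sub-student branch, and MathWorks-style resizing of the input to the network's required size. A toy pure-Python sketch of that structure (all class and function names here are illustrative stand-ins, not from Sridhar, MathWorks, or the application; the "blocks" just add constants so the data flow is easy to follow):

```python
def resize_nearest(img, h, w):
    """Nearest-neighbor resize of a 2D list-of-lists image: a stand-in
    for the rescale/crop preprocessing MathWorks describes."""
    H, W = len(img), len(img[0])
    return [[img[r * H // h][c * W // w] for c in range(w)] for r in range(h)]

class Block:
    """Toy stand-in for one convolution block: adds a constant offset."""
    def __init__(self, offset):
        self.offset = offset
    def __call__(self, x):
        return [[v + self.offset for v in row] for row in x]

class Teacher:
    """Feature extractor = sequential blocks; output unit = mean of map."""
    def __init__(self, blocks, size=(4, 4)):
        self.blocks, self.size = blocks, size
    def predict(self, img):
        x = resize_nearest(img, *self.size)   # preprocessor: fix input size
        for b in self.blocks:                 # sequential feature extraction
            x = b(x)
        return sum(map(sum, x)) / (self.size[0] * self.size[1])

teacher = Teacher([Block(1.0), Block(2.0), Block(3.0)])

# Divide the teacher's sequential blocks and arrange them in parallel:
# sub-student i reuses block i as its own single-block feature extractor.
sub_students = [Teacher([b]) for b in teacher.blocks]

img = [[0.0] * 8 for _ in range(8)]
print(teacher.predict(img))                    # sequential: 1 + 2 + 3 = 6.0
print([s.predict(img) for s in sub_students])  # parallel: [1.0, 2.0, 3.0]
```

The point of the sketch is the structural claim element: the same pool of blocks is used once in series (teacher) and once side by side (sub-students), with a sizing preprocessor ahead of every feature extractor.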
Regarding claim 2, Sridhar in view of MathWorks teaches the method of claim 1, wherein the generating of the plurality of sub-student models includes: arranging the plurality of divided blocks in order in respective feature extractors of the plurality of sub-student models arranged in parallel (Sridhar, “The first integrated system 300 also includes a plurality of student sub-networks: Student sub-network 1 (352) that receives Intermediate Feature Map 1 (340) from the initial teacher sub-network 302… The first integrated system 300 comprises a plurality of integrated teacher-student modules 104 as identified in FIG. 1B.”, pg. 4, paragraphs 0051-0052; see Fig. 1B. The teacher model is divided according to its convolution blocks, and these blocks are arranged in an order corresponding to the respective student models arranged in parallel. By doing so, each student model can receive feature maps from different ordered blocks of the teacher for processing through its feature extractor.).

Regarding claim 3, Sridhar in view of MathWorks teaches the method of claim 2, wherein each of the plurality of sub-student models receives the same input image as input (Sridhar, “Each student sub-network 352, 354, 356, 358 includes one or more convolution blocks such that the student sub-networks 352, 354, 356, 358 can be trained to generate inference data based on the feature map 340, 342, 344, 346 received as input data. Each student sub-network thus operates as a smaller alternative to the subsequent teacher sub-networks that it bypasses: for example, student sub-network 1 (352) can be trained to generate inference data using essentially the same input as the series of teacher sub-networks it bypasses, namely the first intermediate teacher sub-network 310 through the final teacher sub-network 316.”, pg. 5, paragraph 0056, lines 6-17. The student models are integrated as branches from convolution blocks of the teacher model. This architecture allows both the teacher and student to be trained using the same input image. The teacher receives the original input image as input, and each student receives that same image, in the form of an intermediate feature map, for knowledge distillation learning.).

Regarding claim 4, Sridhar in view of MathWorks teaches the method of claim 3, wherein the generating of the plurality of sub-student models includes: arranging a preprocessor that preprocess the input image to a different size at a front stage of each feature extractor in each sub-student model (Sridhar, “The compression block 308 may be used as part of an integrated system to effect efficient compression of the intermediate feature maps in the integrated system. In some embodiments, the intermediate feature maps are compressed by the compression block 308 as low-dimensional embedding vectors that can be easily synchronized across devices for shared computations.”, pg. 8, paragraph 0090, lines 4-11; see Fig. 3A, compression block 308). Note the combination of Sridhar in view of MathWorks teaches an image-sizing preprocessing step (see analysis of claim 1). This preprocessing step includes a preprocessor which resizes input images to a required size of the model and would be arranged prior to the initial convolution block of Sridhar (Fig. 3, convolution block 302). Sridhar further teaches arranging processing components prior to downstream convolution blocks (Fig. 3, convolution blocks 310, 312, 314, and 316; compression block 308). Sridhar describes a teacher-student module (Sridhar, “The first integrated system 300 comprises a plurality of integrated teacher-student modules 104 as identified in FIG. 1B. Each integrated teacher-student module includes one or more teacher sub-networks and a student sub-network.”, pg. 4, paragraph 0052, lines 1-5; see Fig. 1B), where each individual student model follows convolution blocks of the teacher model. This arrangement (a preprocessor and compression blocks placed before convolution blocks of the teacher model) can reasonably be interpreted as teaching “arranging a preprocessor that preprocess the input image to a different size at a front stage of each feature extractor in each sub-student model” because the preprocessors are at an initial (front) stage of the teacher-student module, prior to the student model performing feature extraction. Thus, Sridhar in view of MathWorks teaches the limitations of claim 4.

Regarding claim 9, Sridhar in view of MathWorks teaches the method of claim 1, further comprising: calculating, if the input image is input to each of the plurality of sub-student models when training of the plurality of sub-student models is completed, a final predicted value based on predicted values output from the plurality of sub-student models (Sridhar, “At step 816, after the teacher network and the first student sub-network have been trained, the first teacher sub-network and the first student sub-network are jointly operated in an inference mode to perform an inference task.”, pg. 10, paragraph 0116, lines 1-4; “Each student sub-network 352, 354, 356, 358 generates student logits 353, 355, 357, 359 which have a SoftMax function 320 applied to them to generate student inference data 322, 324, 326, 328.”, pg. 5, paragraph 0057, lines 1-4; see Fig. 3A, student inference data 322, 324, 326, and 328. Each student model, when run individually at inference time, generates classification prediction values and then uses those values in its SoftMax layer to produce final prediction outputs for that student model.).

Claim 10 corresponds to claim 1, additionally reciting a computing device comprising one or more processors, a memory, and one or more programs stored in memory and configured to be executed by the one or more processors, the one or more programs including the steps according to claim 1.
Sridhar in view of MathWorks teaches the addition of a computing device comprising one or more processors, a memory, and one or more programs stored in memory and configured to be executed by the one or more processors (Sridhar, see Fig. 7, Device 700, Processor 702, Memory 704, and Executable instructions 710), the one or more programs including the steps according to claim 1. As indicated in the analysis of claim 1, Sridhar in view of MathWorks teaches all the limitations according to claim 1. Therefore, claim 10 is rejected for the same reasons of obviousness as claim 1.

Claim 11 corresponds to claim 2, additionally reciting a computing device comprising one or more processors, a memory, and one or more programs stored in memory and configured to be executed by the one or more processors, the one or more programs including the steps according to claim 2. Sridhar in view of MathWorks teaches the addition of a computing device comprising one or more processors, a memory, and one or more programs stored in memory and configured to be executed by the one or more processors (see analysis of claim 10), the one or more programs including the steps according to claim 2. As indicated in the analysis of claim 2, Sridhar in view of MathWorks teaches all the limitations according to claim 2. Therefore, claim 11 is rejected for the same reasons of obviousness as claim 2.

Claim 12 corresponds to claim 3, additionally reciting a computing device comprising one or more processors, a memory, and one or more programs stored in memory and configured to be executed by the one or more processors, the one or more programs including the steps according to claim 3. Sridhar in view of MathWorks teaches the addition of a computing device comprising one or more processors, a memory, and one or more programs stored in memory and configured to be executed by the one or more processors (see analysis of claim 10), the one or more programs including the steps according to claim 3. As indicated in the analysis of claim 3, Sridhar in view of MathWorks teaches all the limitations according to claim 3. Therefore, claim 12 is rejected for the same reasons of obviousness as claim 3.

Claim 13 corresponds to claim 4, additionally reciting a computing device comprising one or more processors, a memory, and one or more programs stored in memory and configured to be executed by the one or more processors, the one or more programs including the steps according to claim 4. Sridhar in view of MathWorks teaches the addition of a computing device comprising one or more processors, a memory, and one or more programs stored in memory and configured to be executed by the one or more processors (see analysis of claim 10), the one or more programs including the steps according to claim 4. As indicated in the analysis of claim 4, Sridhar in view of MathWorks teaches all the limitations according to claim 4. Therefore, claim 13 is rejected for the same reasons of obviousness as claim 4.

Claim 18 corresponds to claim 9, additionally reciting a computing device comprising one or more processors, a memory, and one or more programs stored in memory and configured to be executed by the one or more processors, the one or more programs including the steps according to claim 9. Sridhar in view of MathWorks teaches the addition of a computing device comprising one or more processors, a memory, and one or more programs stored in memory and configured to be executed by the one or more processors (see analysis of claim 10), the one or more programs including the steps according to claim 9. As indicated in the analysis of claim 9, Sridhar in view of MathWorks teaches all the limitations according to claim 9. Therefore, claim 18 is rejected for the same reasons of obviousness as claim 9.
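The claim-9 mapping turns on computing a final predicted value from the predicted values output by the parallel sub-students, with Sridhar's SoftMax applied to each sub-student's logits. A minimal sketch of one plausible reading (the mean-ensemble step is an illustrative assumption; the claim only requires a final value "based on" the sub-student outputs, and the application's actual combination rule is not reproduced here):

```python
import math

def softmax(logits):
    """Numerically stable SoftMax, as applied to each sub-student's logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def final_prediction(per_student_logits):
    """Average per-sub-student SoftMax outputs into one final predicted
    value per class (simple mean ensemble; an assumed combination rule)."""
    probs = [softmax(l) for l in per_student_logits]
    n = len(probs)
    return [sum(p[c] for p in probs) / n for c in range(len(probs[0]))]

# Three parallel sub-students, three classes:
logits = [[2.0, 1.0, 0.1], [1.5, 1.4, 0.2], [2.2, 0.3, 0.3]]
final = final_prediction(logits)
print(final, "-> predicted class", final.index(max(final)))
```

Because each sub-student's SoftMax output already sums to 1, the averaged final vector is itself a valid probability distribution.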
Claim 19 corresponds to claim 1, additionally reciting a non-transitory computer readable storage medium storing a computer program including one or more instructions that, when executed by a computing device including one or more processors, cause the computing device to perform the steps of claim 1. Sridhar in view of MathWorks teaches the addition of a non-transitory computer readable storage medium storing a computer program including one or more instructions that, when executed by a computing device including one or more processors, cause the computing device to perform the steps of claim 1. As indicated in the analysis of claim 1, Sridhar in view of MathWorks teaches all the limitations according to claim 1. Therefore, claim 19 is rejected for the same reasons of obviousness as claim 1.

Allowable Subject Matter

Claims 5-8 and 14-17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims and by overcoming the outstanding 35 U.S.C. 112(b) rejection outlined above.

The following is a statement of reasons for the indication of allowable subject matter: With respect to claims 5 and 14, in addition to other limitations in the claims, the prior art of record fails to teach, disclose, or render obvious the applicant’s invention as claimed, in particular: “wherein the arranging of the preprocessor includes: arranging the preprocessor capable of preprocessing the input image to the same size as the preprocessor of the teacher model in a first sub-student model; and arranging the preprocessor capable of preprocessing the input image to the same size as a feature map output from an (n - 1)-th block of the teacher model, in an n-th sub-student model, wherein n is a natural number greater than or equal to 2 and less than or equal to the total number of blocks in the teacher model.”, as recited in claims 5 and 14. These limitations render the claims allowable.

Sridhar teaches a knowledge distillation method for training a teacher-student network which provides intermediate feature maps of the teacher to student sub-networks for inferences (Sridhar, “The first integrated system 300 also includes a plurality of student sub-networks: Student sub-network 1 (352) that receives Intermediate Feature Map 1 (340) from the initial teacher sub-network 302”, pg. 4, paragraph 0051, lines 1-4; see Fig. 3A). Sridhar further teaches applying preprocessing compression blocks prior to each convolutional layer of the teacher model (Sridhar, see Fig. 3A, compression block 308). However, Sridhar does not teach arranging preprocessors in the student sub-networks which are capable of preprocessing input images to the same size as feature maps output by each convolutional layer of the teacher model.

MathWorks teaches arranging a preprocessor in a neural network that processes input images to a certain size to match requirements of that neural network (MathWorks, pg. 1, 1st paragraph; see table on pgs. 1 and 2, under the Resize Images Using Rescaling and Cropping section). However, MathWorks does not teach arranging a preprocessor for input images in each student sub-network which is capable of preprocessing input images to the same size as feature maps output by each convolutional layer of the teacher model.

Claims 6-8 and 15-17 would be allowable as they are dependent on claims 5 and 14, respectively.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CONNOR LEVI HANSEN whose telephone number is (703) 756-5533. The examiner can normally be reached Monday-Friday, 9:00-5:00 (ET).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz, can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CONNOR L HANSEN/
Examiner, Art Unit 2672

/SUMATI LEFKOWITZ/
Supervisory Patent Examiner, Art Unit 2672

Prosecution Timeline

Dec 18, 2023 — Application Filed
Dec 04, 2025 — Examiner Interview Summary
Dec 04, 2025 — Applicant Interview (Telephonic)
Jan 08, 2026 — Non-Final Rejection, §103 and §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12530785 — TRACKING DEVICE, TRACKING METHOD, AND RECORDING MEDIUM — granted Jan 20, 2026 (2y 5m to grant)
Patent 12524984 — HISTOGRAM OF GRADIENT GENERATION — granted Jan 13, 2026 (2y 5m to grant)
Patent 12518363 — IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, IMAGE PROCESSING SYSTEM, AND STORAGE MEDIUM WITH PIECEWISE LINEAR FUNCTION FOR TONE CONVERSION ON IMAGE — granted Jan 06, 2026 (2y 5m to grant)
Patent 12499648 — IMAGE PROCESSING APPARATUS, IMAGE CAPTURING APPARATUS, CONTROL METHOD, AND STORAGE MEDIUM FOR DETECTING SUBJECT IN CAPTURED IMAGE — granted Dec 16, 2025 (2y 5m to grant)
Patent 12482257 — REDUCING ENVIRONMENTAL INTERFERENCE FROM IMAGES — granted Nov 25, 2025 (2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75%
With Interview: 99% (+29.2%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 28 resolved cases by this examiner. Grant probability derived from career allow rate.
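The headline grant probability reduces to a simple ratio over the examiner's 28 resolved cases. A sketch of that arithmetic, with a caveat about the interview figure (the with-interview case counts are not shown on this page, so the 99% figure is taken as reported rather than recomputed):

```python
# Figures from the examiner's career data shown above
granted, resolved = 21, 28
career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.0%}")   # 75%

# Reported interview lift, in rate points. Naively adding it to the base
# rate overshoots 100% (0.75 + 0.292 = 1.042), so the 99% with-interview
# figure presumably comes from the allow rate within the with-interview
# subset of the 28 resolved cases, whose counts are not shown here.
interview_lift = 0.292
print(f"Base rate + reported lift: {career_allow_rate + interview_lift:.1%}")
```

The overshoot is worth noticing when reading the dashboard: the lift describes a subgroup comparison, not an additive adjustment to the headline rate.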
