Prosecution Insights
Last updated: April 19, 2026
Application No. 18/311,258

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM

Non-Final OA — §101, §102, §103, §112
Filed
May 03, 2023
Examiner
GORMLEY, AARON PATRICK
Art Unit
2148
Tech Center
2100 — Computer Architecture & Software
Assignee
Canon Kabushiki Kaisha
OA Round
1 (Non-Final)
Grant Probability
60% (Moderate)
OA Rounds
1-2
To Grant
4y 4m
With Interview
0%

Examiner Intelligence

Grants 60% of resolved cases.

Career Allow Rate
60% (3 granted / 5 resolved; +5.0% vs TC avg)
Interview Lift
-60.0% (without interview: 60%; with interview: 0%; minimal interview data among resolved cases)
Avg Prosecution
4y 4m (typical timeline)
Total Applications
35 across all art units (30 currently pending)

Statute-Specific Performance

§101: 30.2% (-9.8% vs TC avg)
§103: 36.0% (-4.0% vs TC avg)
§102: 8.4% (-31.6% vs TC avg)
§112: 21.5% (-18.5% vs TC avg)

Tech Center averages are estimates • Based on career data from 5 resolved cases

Office Action

§101, §102, §103, §112
DETAILED ACTION

This action is in response to the application filed 05/03/2023. Claims 1-11 are pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority based on an application filed in Japan on 05/12/2022. It is noted, however, that applicant has not filed a certified copy of the JP2022-078954 application as required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 05/03/2023, 05/23/2023, and 09/26/2023 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:

Claim 1: Limitation 1: “an obtaining unit”; Limitation 2: “a control unit”
Claim 4: Preamble: “the control unit”
Claim 6: Limitation 1: “a first evaluation unit”; Limitation 2: “a quantization unit”; Limitation 3: “a second evaluation unit”; Limitation 4: “a correction unit”
Claim 7: Limitation 1: “a third evaluation unit” & “the first evaluation unit” & “the second evaluation unit”; Limitation 2: “the correction unit”
Claim 8: Limitation 1: “the control unit”
Claim 9: Limitation 1: “the control unit”

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Specification

The disclosure is objected to because of the following informalities:

[0028]: “a set from the NN to the activation function is assumed to be” is improper grammar.
[0043]: It’s stated that “λ may be implemented, for example, as in the following formula (5)”, and formula 5 is shown only as an image [formula (5), image]. It’s unclear whether λ and ë are intended to be equivalent, as ë isn’t referenced elsewhere. If they are equivalent, it’s requested that the Applicant rewrite the specification to consistently use the same symbol for this.
[0053]: “and are values updated by obtaining a moving average” is improper grammar.
[0060]: “the detection target each of before and after the quantization” is improper grammar.
[0069]: “does not need to be executed each time update processing for the weight of the NN” and “each time the learning is performed for predetermined number of times” are improper grammar.

Appropriate correction is required.

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The title of the invention contains no mention of quantization, regularization based on quantized model accuracy, or adjusting loss relative to a quantization parameter, all core concepts of the claimed invention.

Claim Objections

Claim 11 is objected to because of the following informalities: The first verb in each limitation is in the present participle form, but should be in the infinitive. “causes the computer to: obtaining information … and controlling the first operation” should be “causes the computer to: obtain information … and control the first operation”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-9 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Limitations reciting the use of a “means” or equivalent generic placeholder that is modified by functional language, and not modified by sufficient structure within the claim, are interpreted as means-plus-function limitations under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (MPEP 2181(I) A.). For limitations interpreted under 35 U.S.C. 112(f) using means-plus-function language, the structure of the “means” or the equivalent generic placeholder substitute must be disclosed in the specification itself in a way that one skilled in the art will understand what structure will perform the recited function (MPEP 2181 (II.) A.). Additionally, for a computer-implemented means-plus-function limitation interpreted under 35 U.S.C. 112(f), the specification must disclose an algorithm for performing the claimed specific computer function (MPEP 2181 (II.) A.). Failure to adequately disclose either the structure or algorithm in sufficient detail in the specification for a computer-implemented means-plus-function limitation renders the claim indefinite under 35 U.S.C. 112(b).

As noted in the claim interpretation section above, claims 1, 4, and 6-9 recite computer-implemented means-plus-function limitations incorporating the use of “an obtaining unit”, “a control unit”, “a first evaluation unit”, “a quantization unit”, “a second evaluation unit”, “a correction unit”, and “a third evaluation unit”, generic placeholders substituting for “means”.
The instant specification discloses no structure whatsoever for any of these units, and would be insufficient for one of ordinary skill in the art to understand what structures could perform the recited functions. Additionally, no specific algorithms are given for the claimed functions of the obtaining unit or the control unit, thus the instant specification would be insufficient for one of ordinary skill in the art to understand what algorithms could perform the recited functions. Thus, claims 1, 4, and 6-9 are considered indefinite and are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. These deficiencies are inherited by dependent claims 2-3 and 5.

Additionally, claim 6 recites “a correction unit configured to correct the regularization item included in the loss function, based on the recognition accuracy evaluated by the first evaluation unit and the recognition accuracy evaluated by the second evaluation unit” in its fourth limitation. The specification refers to two different “correction units”: a “weight correction unit” and a “regularization item correction unit”. It’s unclear which of these is being referred to by claim 6, thus rendering the scope of the claim indefinite. This deficiency is inherited by dependent claim 7.

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-9 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Limitations reciting the use of a “means” or equivalent generic placeholder that is modified by functional language, and not modified by sufficient structure within the claim, are interpreted as means-plus-function limitations under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (MPEP 2181(I) A.). For limitations interpreted under 35 U.S.C. 112(f) using means-plus-function language, the written description under 35 U.S.C. 112(a) must adequately link or associate particular structure, material, or acts to perform the function or it must be clear based on the facts of the application that one skilled in the art would have known what structure, material, or acts disclosed in the specification perform the recited function (MPEP 2163(II) A. (3)).
Claims 1, 4, and 6-9 recite computer-implemented means-plus-function limitations incorporating the use of “an obtaining unit”, “a control unit”, “a first evaluation unit”, “a quantization unit”, “a second evaluation unit”, “a correction unit”, and “a third evaluation unit”, generic placeholders substituting for “means”. As noted above, these claims are rejected under 35 U.S.C. 112(b) as being indefinite for failing to adequately disclose the corresponding structures or algorithms in sufficient detail in the specification. When a claim containing a computer-implemented 35 U.S.C. 112(f) claim limitation is found to be indefinite under 35 U.S.C. 112(b) for failure to disclose sufficient corresponding structure in the specification that performs the entire claimed function, it will also lack written description under 35 U.S.C. 112(a). See MPEP § 2163.03, subsection VI. Thus, these claims are rejected under 35 U.S.C. 112(a) for lack of written description. This deficiency is inherited by dependent claims 2-3 and 5.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 5-7 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter without significantly more.

Claim 5

Step 1: The claim’s ancestor, claim 1, recites “An information processing apparatus”. Therefore, claim 5 is directed to the statutory category of machine.

Step 2A Prong 1: The claim recites the following judicial exception(s):

“wherein the loss is calculated by a loss function including a regularization item that is large when the size of the output exceeds the quantization parameter”: This recites a mathematical concept in the form of an equation with a term that scales directly with the size of the output exceeding the quantization parameter.

Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the following additional element(s):

Ancestral claim 1: “an obtaining unit configured to obtain information indicating a size of an output as a result of a first operation in a neural network that performs the first operation using a weight coefficient for input data and a second operation of quantizing a result of the first operation, in order to obtain data of an intermediate layer”: This is directed to mere reception of data and is insignificant extra-solution activity (MPEP 2106.05(g)).

“a control unit configured to control the first operation in the neural network to adjust the size of the output based on the information and a quantization parameter used for the quantization”: This recites conventional quantization and is insignificant extra-solution activity (MPEP 2106.05(g)).

Ancestral claim 4: “wherein the control unit controls the first operation by controlling a weight coefficient of the neural network, through learning with which the size of the output exceeding the quantization parameter results in a large loss”: This amounts to updating data. In other words, mere reception of data updates, and is insignificant extra-solution activity (MPEP 2106.05(g)).
Step 2B: The following additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s):

Ancestral claim 1: “an obtaining unit configured to obtain information indicating a size of an output as a result of a first operation in a neural network that performs the first operation using a weight coefficient for input data and a second operation of quantizing a result of the first operation, in order to obtain data of an intermediate layer”: This is an instance of retrieving information from memory, a limitation known to be well-understood, routine, and conventional (MPEP 2106.05(d) II. iv.).

“a control unit configured to control the first operation in the neural network to adjust the size of the output based on the information and a quantization parameter used for the quantization”: This is an instance of using quantization to clip values within a range, a conventional technique in quantized neural networks, as noted by Choi (METHOD AND DEVICE FOR DETERMINING SATURATION RATIO-BASED QUANTIZATION RANGE FOR QUANTIZATION OF NEURAL NETWORK, filed 7/22/2022, US 20240320464 A1): “Here, quantization means mapping tensor values from a dimension with a wide data representation range to a dimension with a narrow data representation range. In other words, quantization means that a processor that processes neural network operations maps high-precision tensors to low-precision values. In artificial neural networks, quantization can be applied to tensors including activations, weights, and biases of a layer.” (Choi, [0006]); “A conventional method of determining a quantization range analyzes the distribution of activation through histogram generation and determines a quantization range based on the distribution of activation” (Choi, [0027]).

Ancestral claim 4: “wherein the control unit controls the first operation by controlling a weight coefficient of the neural network, through learning with which the size of the output exceeding the quantization parameter results in a large loss”: This is an instance of updating a value in memory. In other words, of retrieving information from memory, a well-understood, routine, and conventional limitation (MPEP 2106.05(d) II. iv.).

Claim 6

Step 1: The claim recites a machine, as in claim 5.

Step 2A Prong 1: The claim recites the following further judicial exception(s):

“a first evaluation unit configured to evaluate a recognition accuracy of the neural network for a detection target”: This can be performed as a mental process. One can merely gauge the accuracy of the network’s results.

“a second evaluation unit configured to evaluate the recognition accuracy of the neural network for the detection target after the weight coefficient has been quantized”: This can be performed as a mental process. One can merely gauge the accuracy of the network’s results after quantization.

Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the further additional element(s):

“a first evaluation unit configured to evaluate a recognition accuracy of the neural network for a detection target”: This is mere instruction to apply a judicial exception with a generic computing component (MPEP 2106.05(f)).

“a quantization unit configured to quantize the weight coefficient of the neural network”: This is an instance of conventional quantization and amounts to insignificant extra-solution activity (MPEP 2106.05(g)).
“a second evaluation unit configured to evaluate the recognition accuracy of the neural network for the detection target after the weight coefficient has been quantized”: This is mere instruction to apply a judicial exception with a generic computing component (MPEP 2106.05(f)).

“a correction unit configured to correct the regularization item included in the loss function, based on the recognition accuracy evaluated by the first evaluation unit and the recognition accuracy evaluated by the second evaluation unit”: This is mere instruction to apply the judicial exceptions to the loss in a generic manner (MPEP 2106.05(f)).

Step 2B: The further additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s):

“a first evaluation unit configured to evaluate a recognition accuracy of the neural network for a detection target”: This is mere instruction to apply a judicial exception with a generic computing component (MPEP 2106.05(f)).

“a quantization unit configured to quantize the weight coefficient of the neural network”: This is an instance of using quantization on weights of a neural network, a standard technique in quantized neural networks, as noted by Choi (METHOD AND DEVICE FOR DETERMINING SATURATION RATIO-BASED QUANTIZATION RANGE FOR QUANTIZATION OF NEURAL NETWORK, filed 7/22/2022, US 20240320464 A1): “Here, quantization means mapping tensor values from a dimension with a wide data representation range to a dimension with a narrow data representation range. In other words, quantization means that a processor that processes neural network operations maps high-precision tensors to low-precision values. In artificial neural networks, quantization can be applied to tensors including activations, weights, and biases of a layer.” (Choi, [0006])

“a second evaluation unit configured to evaluate the recognition accuracy of the neural network for the detection target after the weight coefficient has been quantized”: This is mere instruction to apply a judicial exception with a generic computing component (MPEP 2106.05(f)).

“a correction unit configured to correct the regularization item included in the loss function, based on the recognition accuracy evaluated by the first evaluation unit and the recognition accuracy evaluated by the second evaluation unit”: This is mere instruction to apply the judicial exceptions to the loss in a generic manner (MPEP 2106.05(f)).

Claim 7

Step 1: The claim recites a machine, as in claim 6.

Step 2A Prong 1: The claim recites the following further judicial exception(s):

“a third evaluation unit configured to evaluate a deterioration degree of the recognition accuracy of the neural network for the detection target due to the quantization of the weight coefficient using the recognition accuracy evaluated by the first evaluation unit and the recognition accuracy evaluated by the second evaluation unit”: This can be performed as a mental process. One can merely subtract the second accuracy from the first to measure a difference caused by quantization.
Step 2A Prong 2: The judicial exception(s) are not integrated into a practical application through the following further additional element(s):

“a third evaluation unit configured to evaluate a deterioration degree of the recognition accuracy of the neural network for the detection target due to the quantization of the weight coefficient using the recognition accuracy evaluated by the first evaluation unit and the recognition accuracy evaluated by the second evaluation unit”: This is mere instruction to execute a judicial exception with a generic computing component (MPEP 2106.05(f)).

“wherein the correction unit corrects the regularization item using the deterioration degree”: This is mere instruction to correct a regularization item in a generic manner based on a judicial exception (MPEP 2106.05(f)).

Step 2B: The following additional element(s) of the claim, taken alone or in combination, do not amount to significantly more than the recited judicial exception(s):

“a third evaluation unit configured to evaluate a deterioration degree of the recognition accuracy of the neural network for the detection target due to the quantization of the weight coefficient using the recognition accuracy evaluated by the first evaluation unit and the recognition accuracy evaluated by the second evaluation unit”: This is mere instruction to execute a judicial exception with a generic computing component (MPEP 2106.05(f)).

“wherein the correction unit corrects the regularization item using the deterioration degree”: This is mere instruction to correct a regularization item in a generic manner based on a judicial exception (MPEP 2106.05(f)).

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1 & 10 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Banner et al. (ACIQ: Analytical Clipping for Integer Quantization of neural networks, published 9/27/2018, ICLR 2019 Conference Blind Submission, retrieved from https://openreview.net/forum?id=B1x33sC9KQ), hereafter referred to as ‘Banner’.

Regarding claim 1, Banner discloses [a]n information processing apparatus comprising:

an obtaining unit configured to obtain information indicating a size of an output as a result of a first operation in a neural network that performs the first operation using a weight coefficient for input data and a second operation of quantizing a result of the first operation, in order to obtain data of an intermediate layer:

“In Section 4, we provide a rigorous formulation to optimize the quantization effect of activation tensors (output[s]) using clipping by analyzing both the Gaussian and the Laplace priors. This formulation is henceforth refered [sic] to as Analytical Clipping for Integer Quantization (ACIQ).” (Banner, page 2, paragraph 2)

“Commonly (e.g., in GEMMLOWP), integer tensors are uniformly quantized in the range [-α, α], where α is determined by the tensor maximal absolute value. In the following we show that the this [sic] choice of α is suboptimal, and suggest a model where the tensor values (output[s]) are clipped to reduce quantization noise. For any x ∈ R (size of an output), we define the clipping function clip(x, α) as follows [clipping function definition, image]” (Banner, page 4, paragraph 4). To clip an activation tensor, the size of it must be measured and compared against alpha.

Examiner’s note: As one of ordinary skill in the art would know, each activation is calculated as a weighted sum (first operation) of weights (weight coefficient[s]) and inputs from the previous layer.

“For each method, we select N layers (tensors) to be quantized (second operation) to 4 bits using our optimized clipping method and compare it against the standard GEMMLOWP approach. In figure 3 we present this accuracy-quantization tradeoff.” (Banner, page 7, paragraph 3). The size of each activation must be measured for activations in N layers to be input into the clipping function. [Figure 3: accuracy-quantization tradeoff, image] (Banner, page 8, Figure 3). For N > 2, N - 2 intermediate layer activations are being clipped and quantized.

a control unit configured to control the first operation in the neural network to adjust the size of the output based on the information and a quantization parameter used for the quantization:

“Commonly (e.g., in GEMMLOWP), integer tensors are uniformly quantized in the range [-α, α], where α (quantization parameter) is determined by the tensor maximal absolute value. In the following we show that the this [sic] choice of α is suboptimal, and suggest a model where the tensor values (output[s]) are clipped to reduce quantization noise. For any x ∈ R (the information), we define the clipping function clip(x, α) as follows [clipping function definition, image]” (Banner, page 4, paragraph 4).

Banner relates to quantizing neural networks and is analogous to the claimed invention.

Regarding claim 10, Banner discloses [a]n information processing method comprising:

obtaining information indicating a size of an output as a result of a first operation in a neural network that performs the first operation using a weight coefficient for input data and a second operation of quantizing a result of the first operation, in order to obtain data of an intermediate layer:

“In Section 4, we provide a rigorous formulation to optimize the quantization effect of activation tensors (output[s]) using clipping by analyzing both the Gaussian and the Laplace priors. This formulation is henceforth refered [sic] to as Analytical Clipping for Integer Quantization (ACIQ).” (Banner, page 2, paragraph 2)

“Commonly (e.g., in GEMMLOWP), integer tensors are uniformly quantized in the range [-α, α], where α is determined by the tensor maximal absolute value. In the following we show that the this [sic] choice of α is suboptimal, and suggest a model where the tensor values (output[s]) are clipped to reduce quantization noise. For any x ∈ R (size of an output), we define the clipping function clip(x, α) as follows [clipping function definition, image]” (Banner, page 4, paragraph 4). To clip an activation tensor, the size of it must be measured and compared against alpha.

Examiner’s note: As one of ordinary skill in the art would know, each activation is calculated as a weighted sum (first operation) of weights (weight coefficient[s]) and inputs from the previous layer.

“For each method, we select N layers (tensors) to be quantized (second operation) to 4 bits using our optimized clipping method and compare it against the standard GEMMLOWP approach. In figure 3 we present this accuracy-quantization tradeoff.” (Banner, page 7, paragraph 3). The size of each activation must be measured for activations in N layers to be input into the clipping function. [Figure 3: accuracy-quantization tradeoff, image] (Banner, page 8, Figure 3). For N > 2, N - 2 intermediate layer activations are being clipped and quantized.

controlling the first operation in the neural network to adjust the size of the output based on the information and a quantization parameter used for the quantization:

“Commonly (e.g., in GEMMLOWP), integer tensors are uniformly quantized in the range [-α, α], where α (quantization parameter) is determined by the tensor maximal absolute value. In the following we show that the this [sic] choice of α is suboptimal, and suggest a model where the tensor values (output[s]) are clipped to reduce quantization noise. For any x ∈ R (the information), we define the clipping function clip(x, α) as follows [clipping function definition, image]” (Banner, page 4, paragraph 4).

Banner relates to quantizing neural networks and is analogous to the claimed invention.
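Editor's note: the §102 analysis above turns on clipping an activation tensor to [-α, α] and then uniformly quantizing it. The sketch below is illustrative only — an assumed NumPy rendering of that general clip-then-quantize technique, not code from Banner, the application, or any cited reference; the symmetric grid and 4-bit default are assumptions.

```python
import numpy as np

def clip(x: np.ndarray, alpha: float) -> np.ndarray:
    """Pass values through inside [-alpha, alpha]; saturate outside it."""
    return np.clip(x, -alpha, alpha)

def quantize(x: np.ndarray, alpha: float, num_bits: int = 4) -> np.ndarray:
    """Uniform quantization of the clipped tensor over [-alpha, alpha]."""
    x = clip(x, alpha)                   # bound the range first
    step = 2 * alpha / (2 ** num_bits - 1)  # grid spacing for num_bits
    return np.round(x / step) * step     # snap to the nearest grid point

# The "first operation" (weighted sum) followed by the "second operation":
rng = np.random.default_rng(0)
w, a_prev = rng.normal(size=(16, 32)), rng.normal(size=32)
pre_activation = w @ a_prev              # weights x inputs from previous layer
quantized = quantize(pre_activation, alpha=3.0)  # intermediate-layer data
```

Clipping spends the limited grid levels where most values fall, at the cost of distorting the tail — the trade-off the examiner reads Banner's Figure 3 as measuring.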
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 2-3 & 11 are rejected under 35 U.S.C. 103 as being unpatentable over Banner et al. (ACIQ: Analytical Clipping for Integer Quantization of neural networks, published 9/27/2018, ICLR 2019 Conference Blind Submission, retrieved from https://openreview.net/forum?id=B1x33sC9KQ), hereafter referred to as ‘Banner’, in view of Diril et al. (LAYER-LEVEL QUANTIZATION IN NEURAL NETWORKS, published 6/6/2019, US 2019/0171927 A1), hereafter referred to as ‘Diril’.

Regarding claim 2, the rejection of claim 1 in view of Banner is incorporated. While Banner fails to disclose the further limitations of the claim, Diril discloses an apparatus, wherein the information indicating the size of the output is information calculated based on a distribution of values of the output:

“This second limit value may correspond to a maximum value for the activation layer, such as an absolute maximum weight or filter value (e.g., the highest value of an activation layer (distribution of values of the output), which may be identified by passing output values through a min-max unit) or an estimated maximum weight (information indicating the size of the output) or filter value (e.g., an approximate maximum that discards outliers, a maximum within a predetermined standard deviation of values for a particular layer, etc.).” (Diril, [0045]).

Diril relates to quantization of neural networks and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Banner to calculate approximate maximum activation values, as disclosed by Diril. Such maximum values can be used to determine an upper bound for quantization that preserves as much accuracy as possible while quantizing values. See Diril, [0046].

Regarding claim 3, the rejection of claim 2 in view of Banner and Diril is incorporated. Diril further discloses an apparatus, wherein the information indicating the size of the output is information indicating an upper limit excluding an outlier of the output:

“This second limit value may correspond to a maximum value for the activation layer, such as an absolute maximum weight or filter value (e.g., the highest value of an activation layer, which may be identified by passing output values through a min-max unit) or an estimated maximum weight (information indicating the size of the output) or filter value (e.g., an approximate maximum that discards outliers, a maximum within a predetermined standard deviation of values for a particular layer, etc.).” (Diril, [0045]).

Diril relates to quantization of neural networks and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Banner to calculate approximate maximum activation values, as disclosed by Diril. Such maximum values can be used to determine an upper bound for quantization that preserves as much accuracy as possible while quantizing values. See Diril, [0046].
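Editor's note: claims 2-3 concern deriving the "information indicating the size of the output" from the output's value distribution while excluding outliers. Diril is only quoted as describing "an approximate maximum that discards outliers"; a percentile-based limit, as sketched below, is one common way to do that and is my assumption, not the application's or Diril's stated method.

```python
import numpy as np

def robust_output_max(outputs: np.ndarray, pct: float = 99.9) -> float:
    """Upper limit of the output distribution, excluding the top outliers."""
    return float(np.percentile(np.abs(outputs), pct))

acts = np.random.default_rng(1).normal(size=100_000)
acts[:10] = 50.0                    # inject a few extreme outliers
print(float(np.abs(acts).max()))    # 50.0 — the raw max tracks the outliers
print(robust_output_max(acts))      # ~3.3 — the percentile limit ignores them
```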
Regarding claim 11, Banner discloses a program that causes the computer to:

obtaining information indicating a size of an output as a result of a first operation in a neural network that performs the first operation using a weight coefficient for input data and a second operation of quantizing a result of the first operation, in order to obtain data of an intermediate layer:

“In Section 4, we provide a rigorous formulation to optimize the quantization effect of activation tensors (output[s]) using clipping by analyzing both the Gaussian and the Laplace priors. This formulation is henceforth refered [sic] to as Analytical Clipping for Integer Quantization (ACIQ).” (Banner, page 2, paragraph 2)

“Commonly (e.g., in GEMMLOWP), integer tensors are uniformly quantized in the range [-α, α], where α is determined by the tensor maximal absolute value. In the following we show that the this [sic] choice of α is suboptimal, and suggest a model where the tensor values (output[s]) are clipped to reduce quantization noise. For any x ∈ R (size of an output), we define the clipping function clip(x, α) as follows [clipping function definition, image]” (Banner, page 4, paragraph 4). To clip an activation tensor, the size of it must be measured and compared against alpha.

Examiner’s note: As one of ordinary skill in the art would know, each activation is calculated as a weighted sum (first operation) of weights (weight coefficient[s]) and inputs from the previous layer.

“For each method, we select N layers (tensors) to be quantized (second operation) to 4 bits using our optimized clipping method and compare it against the standard GEMMLOWP approach. In figure 3 we present this accuracy-quantization tradeoff.” (Banner, page 7, paragraph 3). The size of each activation must be measured for activations in N layers to be input into the clipping function. [Figure 3: accuracy-quantization tradeoff, image] (Banner, page 8, Figure 3). For N > 2, N - 2 intermediate layer activations are being clipped and quantized.

controlling the first operation in the neural network to adjust the size of the output based on the information and a quantization parameter used for the quantization:

“Commonly (e.g., in GEMMLOWP), integer tensors are uniformly quantized in the range [-α, α], where α (quantization parameter) is determined by the tensor maximal absolute value. In the following we show that the this [sic] choice of α is suboptimal, and suggest a model where the tensor values (output[s]) are clipped to reduce quantization noise. For any x ∈ R (the information), we define the clipping function clip(x, α) as follows [clipping function definition, image]” (Banner, page 4, paragraph 4).

Banner relates to quantizing neural networks and is analogous to the claimed invention.

While Banner fails to disclose the further limitations of the claim, Diril discloses [a] non-transitory computer-readable storage medium storing a program which, when executed by a computer comprising a processor and memory, causes the computer to:

“The computer-readable medium containing the computer program may be loaded into computing system 810. All or a portion of the computer program stored on the computer-readable medium may then be stored in system memory 816 and/or various portions of storage devices 832 and 833. When executed by processor 814, a computer program loaded into computing system 810 may cause processor 814 to perform and/or be a means for performing the functions of one or more of the example embodiments described and/or illustrated herein.” (Diril, [0073]); “Examples of computer-readable media include, without limitation, transmission-type media, such as carrier waves, and non-transitory-type media, such as magnetic storage media (e.g., hard disk drives, tape drives, and floppy disks), optical-storage media (e.g., Compact Disks (CDs), Digital Video Disks (DVDs), and BLU-RAY disks), electronic-storage media (e.g., solid-state drives and flash media), and other distribution systems.” (Diril, [0080])

Diril relates to quantizing neural networks and is analogous to the claimed invention. Banner teaches a method of quantizing neural networks. The claimed invention improves upon this method by storing it in the form of instructions on computer hardware. Diril teaches standard computer hardware, applicable to Banner.
A person of ordinary skill in the art would have recognized that storing Banner’s method as computer instructions on Diril’s hardware would lead to the predictable result of the method being executable by a computing system, and would improve the known device by allowing it to be performed with real data (MPEP 2143 I. (D) Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results).

Claims 4-9 are rejected under 35 U.S.C. 103 as being unpatentable over Banner et al. (ACIQ: Analytical Clipping for Integer Quantization of neural networks, published 9/27/2018, ICLR 2019 Conference Blind Submission, retrieved from https://openreview.net/forum?id=B1x33sC9KQ), hereafter referred to as ‘Banner’, in view of Sasagawa (Neural Network Derivation Method, published 8/12/2021, US 2021/0248463 A1).

Regarding claim 4, the rejection of claim 1 in view of Banner is incorporated. Banner discloses an information processing apparatus, wherein the control unit controls the first operation by controlling a weight coefficient of the neural network, through learning with which the size of the output exceeding the quantization parameter results in a large loss:

“Commonly (e.g., in GEMMLOWP), integer tensors are uniformly quantized in the range [-α, α], where α (quantization parameter) is determined by the tensor maximal absolute value. In the following we show that the this [sic] choice of α is suboptimal, and suggest a model where the tensor values (output[s]) are clipped to reduce quantization noise. For any x ∈ R (size of the output), we define the clipping function clip(x, α) as follows [clipping function definition, image]” (Banner, page 4, paragraph 4)

“the expected mean-square-error between X and its quantized version Q(X) can be written as follows: [expected mean-square-error formula, image]” (Banner, page 4, paragraph 6). As can be seen in the formula above, the mean square error depends partly on (x - α). In other words, the value of the loss increases as the difference between the activation tensor size and the quantization threshold increases.

While Banner fails to disclose the further limitations of the claim, Sasagawa discloses an apparatus, wherein the control unit controls the first operation by controlling a weight coefficient of the neural network, through learning with which the size of the output exceeding the quantization parameter results in a large loss:

“in the present disclosure, when an inference model is generated, training is performed using a loss function for optimization to which a regularization term is added. This regularization term prevents a weight parameter used by a neural network from becoming a weight parameter likely to change the accuracy of an inferred value. For example, in the present disclosure, when a neural network is trained, a weight parameter is updated so that a value of “loss function+regularization term (loss)” becomes smaller, and an inference model is generated. Accordingly, even when a weight parameter is quantized at the time of mounting, it is possible to reduce a significant decrease in accuracy of an inferred value.” (Sasagawa, [0037])

Sasagawa relates to quantized neural networks and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Banner to adjust the weights based on loss during learning, as disclosed by Sasagawa. Doing so would avoid reductions in accuracy, even for quantized weights. See Sasagawa, [0037].
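Editor's note: the Office Action reproduces Banner's expected mean-square-error only as an image. As a rough reconstruction from the standard uniform-quantization analysis (my sketch, not the paper's exact equation), the error for a symmetric density f, clipping threshold α, and M-bit uniform quantization splits into a clipping term that grows with (x − α) plus the usual rounding term:

```latex
\mathbb{E}\!\left[(X - Q(X))^2\right] \;\approx\;
\underbrace{2\int_{\alpha}^{\infty} f(x)\,(x-\alpha)^2\,dx}_{\text{clipping distortion}}
\;+\;
\underbrace{\frac{\Delta^2}{12}}_{\text{rounding distortion}},
\qquad \Delta = \frac{2\alpha}{2^{M}}.
```

Under this reading, the clipping term is what the examiner points to: it grows with the gap between the activation size x and the threshold α.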
Regarding claim 5, the rejection of claim 4 in view of Banner and Sasagawa is incorporated. Banner further discloses an apparatus, wherein the loss is calculated by a loss function including a regularization item that is large when the size of the output exceeds the quantization parameter:

“the expected mean-square-error between X and its quantized version Q(X) can be written as follows: [expected mean-square-error formula, image]” (Banner, page 4, paragraph 6).
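Editor's note: claim 5's loss structure — a loss function plus a regularization item that becomes large when the output size exceeds the quantization parameter — can be made concrete with a short sketch. This is my reading of the claim language, not code from the application; the squared-excess penalty and the weighting factor lam are assumptions.

```python
import numpy as np

def regularization_item(outputs: np.ndarray, alpha: float) -> float:
    """Zero while outputs stay inside [-alpha, alpha]; grows with the excess."""
    excess = np.maximum(np.abs(outputs) - alpha, 0.0)
    return float(np.mean(excess ** 2))

def total_loss(task_loss: float, outputs: np.ndarray,
               alpha: float, lam: float = 0.1) -> float:
    """Loss function = task loss + weighted regularization item."""
    return task_loss + lam * regularization_item(outputs, alpha)
```

Training against such a loss pushes the weights so intermediate outputs stay within the quantization range, which matches claim 4's "learning with which the size of the output exceeding the quantization parameter results in a large loss."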
Regarding claim 6, the rejection of claim 5 in view of Banner and Sasagawa is incorporated. Sasagawa further discloses an apparatus, comprising:

a first evaluation unit configured to evaluate a recognition accuracy of the neural network for a detection target: “Discrimination training model 40 is a model for training discriminator 41 that determines the accuracy of an inferred value” (Sasagawa, [0048]); “Pre-quantization weight parameter w+Δw and quantized weight parameter w.sup.q are inputted to discriminator 41. Discriminator 41 outputs inferred value D(w+Δw) in response to weight parameter w+Δw, and outputs inferred value D(w.sup.q) in response to weight parameter w.sup.q.” (Sasagawa, [0049]); “Inferred value (first inferred value) x of pre-quantization model 20 (the neural network) and inferred value (second inferred value) G(z) of quantized model 30 are inputted to discrimination training model 40. Discrimination training model 40 contrasts inputted inferred value x and inferred value G(z) with above-described inferred values D(w+Δw) and D(w.sup.q), and trains discriminator 41 by performing backpropagation.” (Sasagawa, [0050])

a quantization unit configured to quantize the weight coefficient of the neural network: “Quantized model 30 includes a second neural network having weight parameter (second parameter) w.sup.q. Weight parameter w.sup.q is obtained by converting weight parameter w of pre-quantization model 20 into a second numeric representation different from the above-described first numeric representation … Specifically, weight parameter w.sup.q is obtained by quantizing weight parameter w+Δw obtained by adding Δw to weight parameter w.” (Sasagawa, [0047])

a second evaluation unit configured to evaluate the recognition accuracy of the neural network for the detection target after the weight coefficient has been quantized: “Inferred value (first inferred value) x of pre-quantization model 20 and inferred value (second inferred value) G(z) of quantized model 30 (neural network for the detection target after the weight coefficient has been quantized) are inputted to discrimination training model 40. Discrimination training model 40 contrasts inputted inferred value x and inferred value G(z) with above-described inferred values D(w+Δw) and D(w.sup.q), and trains discriminator 41 by performing backpropagation.” (Sasagawa, [0050])

a correction unit configured to correct the regularization item included in the loss function, based on the recognition accuracy evaluated by the first evaluation unit and the recognition accuracy evaluated by the second evaluation unit: “The regularization term in above “loss function+regularization term” is determined to be larger when the accuracy of an inferred value decreases, and is determined to be smaller when the accuracy of an inferred value increases.” (Sasagawa, [0038]); “FIG. 2 is a schematic diagram illustrating a function of a discriminator that determines the accuracy of an inferred value. FIG. 2 shows a state in which weight parameters that decrease the accuracy of an inferred value when the weight parameters are quantized (the lower region of FIG. 2) and weight parameters that are less likely to decrease the accuracy of an inferred value even when the weight parameters are quantized (the upper region of FIG. 2) are classified by the function of the discriminator. When such a discriminator can be generated, it is possible to determine whether an unknown weight parameter is to decrease the accuracy of an inferred value, and it is possible to decide whether to increase or decrease a regularization term based on the determination result.” (Sasagawa, [0039])

Sasagawa relates to quantizing neural networks and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Banner to add a regularization term proportional to accuracy discrepancies in quantized weights to the loss function, as disclosed by Sasagawa. Doing so would reduce loss of accuracy due to quantized weights. See Sasagawa, [0037].

Regarding claim 7, the rejection of claim 6 in view of Banner and Sasagawa is incorporated. Sasagawa further discloses an apparatus, comprising:

a third evaluation unit configured to evaluate a deterioration degree of the recognition accuracy of the neural network for the detection target due to the quantization of the weight coefficient using the recognition accuracy evaluated by the first evaluation unit and the recognition accuracy evaluated by the second evaluation unit: “FIG. 2 is a schematic diagram illustrating a function of a discriminator that determines the accuracy of an inferred value. FIG. 2 shows a state in which weight parameters that decrease the accuracy of an inferred value when the weight parameters are quantized (the lower region of FIG. 2) and weight parameters that are less likely to decrease the accuracy of an inferred value even when the weight parameters are quantized (the upper region of FIG. 2) are classified by the function of the discriminator. When such a discriminator can be generated, it is possible to determine whether an unknown weight parameter is to decrease the accuracy of an inferred value, and it is possible to decide whether to increase or decrease a regularization term based on the determination result.” (Sasagawa, [0039])

wherein the correction unit corrects the regularization item using the deterioration degree: “The regularization term in above “loss function+regularization term” is determined to be larger when the accuracy of an inferred value decreases, and is determined to be smaller when the accuracy of an inferred value increases.” (Sasagawa, [0038])

Sasagawa relates to quantizing neural networks and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Banner to add a regularization term proportional to accuracy discrepancies in quantized weights to the loss function, as disclosed by Sasagawa. Doing so would reduce loss of accuracy due to quantized weights. See Sasagawa, [0037].
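Editor's note: claims 6-7 describe evaluating recognition accuracy before and after weight quantization and correcting the regularization item from the resulting "deterioration degree". A minimal sketch of that loop, assuming a hypothetical evaluate() accuracy helper and a simple linear correction — both my assumptions, not the application's or Sasagawa's disclosed method:

```python
def corrected_lambda(evaluate, model_fp, model_q, lam: float) -> float:
    """Scale the regularization weight by how much quantization hurt accuracy."""
    acc_fp = evaluate(model_fp)        # first evaluation unit: pre-quantization
    acc_q = evaluate(model_q)          # second evaluation unit: post-quantization
    deterioration = acc_fp - acc_q     # third evaluation unit: degradation degree
    # correction unit: a larger penalty when quantization hurt accuracy more
    return lam * (1.0 + max(deterioration, 0.0))
```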
Regarding claim 8, the rejection of claim 1 in view of Banner is incorporated. While Banner fails to disclose the further limitations of the claim, Sasagawa discloses an apparatus, wherein the control unit adjusts the size of the output by correcting the weight coefficient of the intermediate layer:

“These models each have a multi-layer structure and include an input layer, an intermediate layer, and an output layer, etc. Each of the layers includes nodes (not shown) corresponding to neurons. The strength of a connection between neurons is represented by a weight parameter. Although a neural network has weight parameters, in order to facilitate understanding, a weight parameter will be described below as an example of weight parameters.” (Sasagawa, [0045]). The weight parameters of a network are described as a single ‘weight parameter’.

“in the present disclosure, when a neural network is trained, a weight parameter is updated so that a value of “loss function+regularization term” becomes smaller, and an inference model is generated.” (Sasagawa, [0037]). Weights are adjusted to minimize the regularization term.

“The regularization term in above “loss function+regularization term” is determined to be larger when the accuracy of an inferred value decreases, and is determined to be smaller when the accuracy of an inferred value increases. The following describes how to determine whether the accuracy of an inferred value is to increase or decrease.” (Sasagawa, [0038]). Weight changes cause changes to the output, and consequently the accuracy of a model.

Sasagawa relates to quantized neural networks and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Banner to adjust the weights based on loss during learning, as disclosed by Sasagawa. Doing so would avoid reductions in accuracy, even for quantized weights. See Sasagawa, [0037].

Regarding claim 9, the rejection of claim 8 in view of Banner and Sasagawa is incorporated. Banner further discloses an apparatus, wherein the control unit controls the first operation in the neural network when the size of the output exceeds a predetermined value:

“Commonly (e.g., in GEMMLOWP), integer tensors are uniformly quantized in the range [-α, α], where α is determined by the tensor maximal absolute value. In the following we show that the this [sic] choice of α (predetermined value) is suboptimal, and suggest a model where the tensor values are clipped to reduce quantization noise. For any x ∈ R (size of the output), we define the clipping function clip(x, α) as follows [clipping function definition, image]” (Banner, page 4, paragraph 4). As noted in the rejection for claim 1, the first operation is the weighted sum comprising the activation. The result is clipped depending on its size relative to alpha.
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Zhao et al. (Improving Neural Network Quantization without Retraining using Outlier Channel Splitting, published 2019, arXiv:1901.09504v1) discloses methods of quantizing neural networks using various value clipping techniques.
Jantscher et al. (ERROR COMPENSATION IN ANALOG NEURAL NETWORKS, filed 2020, US 20220309331 A1) discloses a method of constructing a neural network loss based on activation clipping thresholds.
Nagel et al. (A White Paper on Neural Network Quantization, published 2021, arXiv:2106.08295v1) discloses a myriad of methods for quantizing neural networks, including clipping errors.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aaron P Gormley whose telephone number is (571) 272-1372. The examiner can normally be reached Monday - Friday, 12:00 PM - 8:00 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle T Bechtold, can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AG/
Examiner, Art Unit 2148

/MICHELLE T BECHTOLD/
Supervisory Patent Examiner, Art Unit 2148

Prosecution Timeline

May 03, 2023
Application Filed
Feb 05, 2026
Non-Final Rejection — §101, §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585955
Minimal Trust Data Sharing
2y 5m to grant Granted Mar 24, 2026
Patent 12579440
Training Artificial Neural Networks Using Context-Dependent Gating with Weight Stabilization
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
60%
Grant Probability
0%
With Interview (-60.0%)
4y 4m
Median Time to Grant
Low
PTA Risk
Based on 5 resolved cases by this examiner. Grant probability derived from career allow rate.
