Prosecution Insights
Last updated: April 19, 2026
Application No. 18/163,902

COMPUTER-READABLE RECORDING MEDIUM STORING LEARNING MODEL QUANTIZATION PROGRAM AND LEARNING MODEL QUANTIZATION METHOD

Non-Final OA: §101, §103, §112
Filed: Feb 03, 2023
Examiner: PHAM, JESSICA THUY
Art Unit: 2121
Tech Center: 2100 — Computer Architecture & Software
Assignee: Fujitsu Limited
OA Round: 1 (Non-Final)
Grant Probability: 33% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
Grant Probability With Interview: 0%

Examiner Intelligence

Career Allow Rate: 33% (1 granted / 3 resolved; -21.7% vs Tech Center average). This examiner grants only 33% of cases.
Interview Lift: -33.3% for resolved cases with interview (minimal lift).
Avg Prosecution (typical timeline): 3y 3m
Total Applications: 41 across all art units (38 currently pending)

Statute-Specific Performance

§101: 26.8% (-13.2% vs TC avg)
§103: 35.5% (-4.5% vs TC avg)
§102: 11.0% (-29.0% vs TC avg)
§112: 22.7% (-17.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 3 resolved cases.

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-20 are pending and examined herein. Claims 2-6, 9-13, and 16-19 are rejected under 35 U.S.C. 112(b). Claims 1-20 are rejected under 35 U.S.C. 101. Claims 1-20 are rejected under 35 U.S.C. 103.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) filed on 02/03/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2-6, 9-13, and 16-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 2, 9, and 16 recite the limitation "the specific gravity is set such that the specific gravity in each step decreases stepwise as the step proceeds."
The limitation "as the step proceeds" suggests that the same step includes multiple specific gravities, because the gravity decreases as the step proceeds ("step proceeds" suggests that the one step spans multiple time periods during which the gravity decreases). This is indefinite because the specification does not appear to support this interpretation. Fig. 5 describes the relationship recited in claim 2. The figure appears to show a sequence of steps (i.e., a number of steps) in the process, and the specific gravity for each step in the sequence decreases as the number of steps increases. There should be one specific gravity for each step, and the limitation "as the step proceeds" is unclear. Dependent claims 3-6, 10-13, and 16-19 fail to resolve the issue and are rejected with the same rationale.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. MPEP § 2106(III) sets out steps for evaluating whether a claim is drawn to patent-eligible subject matter. The analysis of claims 1-20, in accordance with these steps, follows.

Step 1 Analysis: Step 1 is to determine whether the claim is directed to a statutory category (process, machine, manufacture, or composition of matter). Claims 1-7 are directed to an article of manufacture, claims 8-14 are directed to a process, and claims 15-20 are directed to a machine. All claims are directed to statutory categories and the analysis proceeds.

Step 2A Prong One, Step 2A Prong Two, and Step 2B Analysis: Step 2A Prong One asks if the claim recites a judicial exception (abstract idea, law of nature, or natural phenomenon).
If the claim recites a judicial exception, analysis proceeds to Step 2A Prong Two, which asks if the claim recites additional elements that integrate the abstract idea into a practical application. If the claim does not integrate the judicial exception, analysis proceeds to Step 2B, which asks if the claim amounts to significantly more than the judicial exception. If the claim does not amount to significantly more than the judicial exception, the claim is not eligible subject matter under 35 U.S.C. 101. None of the claims represent an improvement to technology.

Regarding claim 1, the following claim elements are abstract ideas:

"in an objective function for searching for a combination of layers in which parameters of a machine-learned model using a neural network are quantized, the objective function including inference accuracy of the quantized model and an index related to a compression ratio of the model, setting a specific gravity such that the specific gravity of the index related to the compression ratio with respect to the inference accuracy decreases as the compression ratio increases;" (This limitation describes the objective function, which is a mathematical formula, which is a mathematical concept. Setting a specific gravity, interpreted as a hyperparameter, can be practically performed in the human mind, and is a mental process. Additionally, the limitation recites a mathematical relationship between the compression ratio and the inference accuracy, which is a mathematical concept.)

"selecting a layer in which the objective function is optimized, as a layer in which the parameters are quantized; and" (Selecting a layer in which the function is optimized is a mental process of evaluation.)

"… quantizing the parameters of the selected layer …" (Quantization can be practically performed in the human mind, i.e., assigning a value to a range of numbers. This is a mental process.)
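As an editorial illustration of the select-then-quantize loop the claim recites, the sketch below combines inference accuracy and a compression index under a "specific gravity" weight and picks the layer that maximizes the result. It is hypothetical: the additive objective form, layer names, and numbers are invented, not taken from the application or the cited art.

```python
def objective(accuracy, compression_index, gravity):
    # Claimed objective: inference accuracy combined with a compression
    # index, weighted by the "specific gravity" (the form is an assumption).
    return accuracy + gravity * compression_index

def select_layer(candidates, gravity):
    # candidates maps layer name -> (accuracy if quantized, compression index).
    # The claim's "selecting a layer in which the objective function is
    # optimized" becomes an argmax over candidate layers.
    return max(candidates, key=lambda name: objective(*candidates[name], gravity))

candidates = {
    "conv1": (0.90, 0.10),
    "conv2": (0.88, 0.40),
    "fc":    (0.85, 0.60),
}
print(select_layer(candidates, gravity=1.0))  # compression counts -> "fc"
print(select_layer(candidates, gravity=0.0))  # accuracy only -> "conv1"
```

Note how the gravity setting alone flips which layer is selected, which is the trade-off the claimed objective function is meant to tune.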
The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

"A non-transitory computer-readable recording medium storing a learning model quantization program for causing a computer to execute a process comprising:" (This limitation recites generic computer parts and functions; this amounts to mere instructions to apply an exception.)

"outputting a relationship between the inference accuracy for the model obtained by … and the index related to the compression ratio." (Outputting data is an insignificant extra-solution activity. See MPEP § 2106.05(g)(3) and MPEP § 2106.05(g), "Mere Data Gathering," ex. iii.)

Regarding claim 2, the rejection of claim 1 is incorporated herein. The following are abstract ideas:

"wherein in the selecting a layer, a process of selecting a predetermined number of the layers at a time is set as one step," (Selecting a predetermined number of layers in which the function is optimized is a mental process of evaluation.)

"… obtained by quantizing the parameters of the layer selected in a previous step, and" (Quantization can be practically performed in the human mind, i.e., assigning a value to a range of numbers. This is a mental process.)

"in the setting a specific gravity, the specific gravity is set such that the specific gravity in each step decreases stepwise as the step proceeds." (This recites a mathematical relationship between the specific gravities in each step, which is a mathematical concept.)
The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

"and a next step is executed on the model" (This recites generic machine learning components and processes; this amounts to mere instructions to apply an exception.)

Regarding claim 3, the rejection of claim 2 is incorporated herein. The following are abstract ideas:

"wherein in the setting a specific gravity, the specific gravity is set such that the specific gravity in each step at a final stage from a predetermined step to an end step with respect to each step at an early stage from a start step to the predetermined step is less than or equal to a predetermined ratio." (Setting a specific gravity, interpreted as a hyperparameter, can be practically performed in the human mind, and is a mental process. Additionally, this recites a mathematical relationship between a final stage step and an early stage step, which is a mathematical concept.)

Regarding claim 4, the rejection of claim 2 is incorporated herein. The following are abstract ideas:

"wherein in the setting a specific gravity, a hyper parameter that corresponds to the specific gravity is changed in accordance with a predetermined function in which the index related to the compression ratio is a variable." (Setting a specific gravity, interpreted as a hyperparameter, can be practically performed in the human mind, and is a mental process. Additionally, this recites a mathematical formula, which is a mathematical concept.)

Regarding claim 5, the rejection of claim 4 is incorporated herein. The following are abstract ideas:

"wherein the function is a function based on a sigmoid function, a step function, or a hyperbolic tangent function." (This recites mathematical formulas, which are mathematical concepts.)
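Claims 4 and 5 describe the specific gravity as a hyperparameter driven by a predetermined function of the compression index, based on a sigmoid, step, or hyperbolic tangent function. A minimal sketch of what such decreasing schedules could look like, assuming the three function families named in claim 5 (the steepness constant, midpoint, and function shapes are invented for illustration):

```python
import math

def gravity_schedule(compression_index, kind="sigmoid", k=10.0, midpoint=0.5):
    # Hypothetical decreasing schedules: each falls toward zero as the
    # compression index grows, per the claim 4/5 language.
    x = compression_index - midpoint
    if kind == "sigmoid":
        return 1.0 - 1.0 / (1.0 + math.exp(-k * x))   # smooth decay ~1 -> ~0
    if kind == "step":
        return 1.0 if compression_index < midpoint else 0.0  # hard cutoff
    if kind == "tanh":
        return 0.5 * (1.0 - math.tanh(k * x))         # smooth step down
    raise ValueError(f"unknown schedule: {kind}")

# Each schedule is non-increasing in the compression index:
for kind in ("sigmoid", "step", "tanh"):
    values = [gravity_schedule(r, kind) for r in (0.1, 0.5, 0.9)]
    assert values[0] >= values[1] >= values[2]
```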
Regarding claim 6, the rejection of claim 2 is incorporated herein. The following are abstract ideas:

"wherein the predetermined number is 1." (Following claim 2, the predetermined number is used to select a number of layers to be quantized. The selection of one layer to be quantized is a mental process of evaluation.)

Regarding claim 7, the rejection of claim 1 is incorporated herein. Further, the following are abstract ideas:

"wherein the index related to the compression ratio is a size of the model after the quantization, the number of the quantized parameters, or a ratio of the number of the quantized parameters to the number of all the parameters included in the model before the quantization." (Following claim 1, the index related to the compression ratio is part of the description of the mathematical formula of the objective function. Therefore, this limitation further describes the mathematical formula of the objective function, a mathematical concept.)

Regarding claim 8, the following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

"A learning model quantization method comprising:" (This describes generic machine learning concepts. This amounts to mere instructions to apply an exception.)

The remainder of claim 8 recites substantially similar subject matter to claim 1 and is rejected with the same rationale, mutatis mutandis. Claims 9-14 recite substantially similar subject matter to claims 2-7 respectively and are rejected with the same rationale, mutatis mutandis.
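Claim 7 enumerates three alternative forms for the "index related to the compression ratio." The three options can be sketched as below; the function name, arguments, and numbers are hypothetical, introduced only to make the alternatives concrete.

```python
def compression_index(n_quantized, n_total, quantized_size_bytes, kind="ratio"):
    # Claim 7's three alternatives for the index related to the compression ratio.
    if kind == "size":   # size of the model after the quantization
        return quantized_size_bytes
    if kind == "count":  # the number of the quantized parameters
        return n_quantized
    if kind == "ratio":  # quantized parameters / all parameters before quantization
        return n_quantized / n_total
    raise ValueError(f"unknown index: {kind}")

print(compression_index(3_000_000, 10_000_000, 4_200_000, "ratio"))  # 0.3
```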
Regarding claim 15, the following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

"A learning model quantization device comprising:" (This describes generic machine learning concepts. This amounts to mere instructions to apply an exception.)

"a memory; and a processor coupled to the memory and configured to:" (This describes generic computer components and processes. This amounts to mere instructions to apply an exception.)

The remainder of claim 15 recites substantially similar subject matter to claim 1 and is rejected with the same rationale, mutatis mutandis. Claims 16-20 recite substantially similar subject matter to claims 2-6 respectively and are rejected with the same rationale, mutatis mutandis.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-6, 8-13, and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Tsuji ("GPQ: Greedy Partial Quantization of Convolutional Neural Networks Inspired by Submodular Optimization", November 2020) and Chen ("Joint Neural Architecture Search and Optimization", 2018).

Regarding claim 1, Tsuji teaches "A non-transitory computer-readable recording medium storing a learning model quantization program for causing a computer to execute a process comprising:" (Page 106 states "Based on this background, we propose Greedy Partial Quantization (GPQ), which can determine combinations of quantization layers that minimize accuracy loss, with O(N^2) computational complexity." One of ordinary skill in the art would realize that this quantization method is performed on a computer. A computer necessarily includes a non-transitory computer-readable recording medium with a program for the method in order to execute the method on the computer.)

"in an objective function for searching for a combination of layers in which parameters of a machine-learned model using a neural network are quantized," (Page 107 states "With regard to solving problem (1), this study aims to search for efficient (small model size and high inference accuracy) combinations of quantization layers."
Page 107 further states "For more efficient partial quantization, we redefine the objective function in (1) by introducing a factor #PARAM(S) that denotes the number of quantized parameters as argmax_S Acc(S) × log(#PARAM(S))^β (3), where β is a coefficient to tune the relative importance of the quantized parameters.")

"the objective function including inference accuracy of the quantized model and an index related to a compression ratio of the model," (Page 107 states "For more efficient partial quantization, we redefine the objective function in (1) by introducing a factor #PARAM(S) that denotes the number of quantized parameters as argmax_S Acc(S) × log(#PARAM(S))^β (3), where β is a coefficient to tune the relative importance of the quantized parameters." Page 107 further states "S denotes the set of quantization layers and Acc(S) is a set function that denotes the inference accuracy when the layers S are quantized." Therefore, Acc(S) is interpreted as the inference accuracy of the quantized model and #PARAM(S) is interpreted as the index related to a compression ratio of the model.)

"selecting a layer in which the objective function is optimized, as a layer in which the parameters are quantized; and" (Page 107 states "The greedy search algorithm that maximizes the objective function in (3) is shown in Algorithm 1. We started with a pretrained full-precision (FP32) model. For all unquantized layers, we individually calculated the objective function, then we selected and quantized the layers to maximize the objective function." Algorithm 1 shows that the best layer is chosen to be pushed to the set, which, as stated above, is quantized. Page 107 states "We quantized both weights and activations." Therefore, the parameters of the selected layers are quantized.)

"outputting a relationship between the inference accuracy for the model obtained by quantizing the parameters of the selected layer and the index related to the compression ratio."
(Pages 107-108 state "We quantized multiple CNNs according to the order of quantization layers (blocks) searched by Algorithm 1 and plotted the trade-off between inference accuracy and model size calculated based on the number of quantized parameters." The number of quantized parameters, as above, is interpreted as the index related to the compression ratio. As the relationship is plotted, the relationship is outputted.)

Tsuji does not appear to explicitly teach "setting a specific gravity such that the specific gravity of the index related to the compression ratio with respect to the inference accuracy decreases as the compression ratio increases;"

However, Chen, directed to analogous art, teaches "setting a specific gravity such that the specific gravity of the index related to the compression ratio with respect to the inference accuracy decreases as the compression ratio increases;" (Page 4323 states "A quantized model Θ can be constructed by its neural network architecture A and its quantization policy P. After the model is quantized, we can obtain its validation accuracy α(Θ) and its model size S(Θ). In this paper, we define the search problem as a multi-objective function F(Θ) as follows: max_Θ F(Θ) = max_Θ α(Θ) · (S(Θ)/T_S)^γ (1), where T_S is the target for the model size and γ in the formulation above is defined as follows: γ = 0 if S(Θ) ≤ T_S, and γ = -1 otherwise (2). It means that if the model size meets the target, we simply use accuracy as the objective function. It degrades to a single objective problem. Otherwise, the objective value is penalized sharply to discourage the excessive model size." Therefore, as the compression ratio increases (size decreases), the specific gravity, γ, decreases in absolute value to zero. Thus, the specific gravity has an effect when the compression ratio is small.)
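The Chen objective quoted above, F(Θ) = α(Θ) · (S(Θ)/T_S)^γ with γ switching from -1 to 0 once the size target is met, can be sketched as follows (the function and argument names are invented; the formula follows the quoted equations (1) and (2)):

```python
def chen_objective(accuracy, model_size, target_size):
    # gamma = 0 when the size target is met: the objective is accuracy alone.
    # gamma = -1 otherwise: accuracy is divided by the overshoot ratio,
    # sharply penalizing models that exceed the target size.
    gamma = 0.0 if model_size <= target_size else -1.0
    return accuracy * (model_size / target_size) ** gamma

print(chen_objective(0.90, model_size=8.0, target_size=10.0))   # 0.9 (target met)
print(chen_objective(0.90, model_size=20.0, target_size=10.0))  # 0.45 (penalized)
```

This makes the examiner's reading concrete: once the model is small enough (high compression), γ drops to zero and the size term stops contributing to the objective.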
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Tsuji with the teachings of Chen because, as stated on page 4321, "A Pareto optimal model is constructed in the evolutionary algorithm to achieve good trade-offs between accuracy and model size. By adjusting the multi-objective function, our search strategy can output suitable models for different accuracy or model size demands." Additionally, as Tsuji states on page 107, "By tuning β, it is possible to search for the optimum balance between quantization and accuracy loss."

Regarding claim 2, the rejection of claim 1 is incorporated herein. Tsuji teaches "wherein in the selecting a layer, a process of selecting a predetermined number of the layers at a time is set as one step, and a next step is executed on the model obtained by quantizing the parameters of the layer selected in a previous step, and" (Page 106, Fig. 1 states that step 2 of the method is "Calculate the sensitivity of each layer and quantize the most insensitive layer". As the selection of the layer/calculation of the sensitivity must occur before the quantization of the selected layer (most insensitive), one step is selection, and a next step is quantization of the layer. As the most insensitive layer is selected, the predetermined number is 1.)

Tsuji does not appear to explicitly teach "in the setting a specific gravity, the specific gravity is set such that the specific gravity in each step decreases stepwise as the step proceeds."

However, Chen, directed to analogous art, teaches "in the setting a specific gravity, the specific gravity is set such that the specific gravity in each step decreases stepwise as the step proceeds." (Page 4323 states "A quantized model Θ can be constructed by its neural network architecture A and its quantization policy P. After the model is quantized, we can obtain its validation accuracy α(Θ) and its model size S(Θ).
In this paper, we define the search problem as a multi-objective function F(Θ) as follows: max_Θ F(Θ) = max_Θ α(Θ) · (S(Θ)/T_S)^γ (1), where T_S is the target for the model size and γ in the formulation above is defined as follows: γ = 0 if S(Θ) ≤ T_S, and γ = -1 otherwise (2). It means that if the model size meets the target, we simply use accuracy as the objective function. It degrades to a single objective problem. Otherwise, the objective value is penalized sharply to discourage the excessive model size." Thus, as the search proceeds, γ will eventually go from -1 to 0 (a step), meaning that the absolute value of the specific gravity decreases stepwise, and the objective function will no longer depend on the compression ratio part of the objective function.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Tsuji with the teachings of Chen for the reasons given above in regard to claim 1.

Regarding claim 3, the rejection of claim 2 is incorporated herein. Tsuji does not appear to explicitly teach "wherein in the setting a specific gravity, the specific gravity is set such that the specific gravity in each step at a final stage from a predetermined step to an end step with respect to each step at an early stage from a start step to the predetermined step is less than or equal to a predetermined ratio."

However, Chen, directed to analogous art, teaches "wherein in the setting a specific gravity, the specific gravity is set such that the specific gravity in each step at a final stage from a predetermined step to an end step with respect to each step at an early stage from a start step to the predetermined step is less than or equal to a predetermined ratio." (Page 4323 states "A quantized model Θ can be constructed by its neural network architecture A and its quantization policy P.
After the model is quantized, we can obtain its validation accuracy α(Θ) and its model size S(Θ). In this paper, we define the search problem as a multi-objective function F(Θ) as follows: max_Θ F(Θ) = max_Θ α(Θ) · (S(Θ)/T_S)^γ (1), where T_S is the target for the model size and γ in the formulation above is defined as follows: γ = 0 if S(Θ) ≤ T_S, and γ = -1 otherwise (2). It means that if the model size meets the target, we simply use accuracy as the objective function. It degrades to a single objective problem. Otherwise, the objective value is penalized sharply to discourage the excessive model size." Page 4324, section 3.3 states that an evolutionary algorithm is used to search for the optimized quantization. Page 4324 states "For any model Θ, we need to optimize its architecture A and quantization policy P. Each individual model Θ of P is first trained on the training set D

Prosecution Timeline

Feb 03, 2023
Application Filed
Nov 19, 2025
Non-Final Rejection — §101, §103, §112 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 33%
With Interview: 0% (-33.3%)
Median Time to Grant: 3y 3m
PTA Risk: Low

Based on 3 resolved cases by this examiner. Grant probability derived from career allow rate.
