DETAILED ACTION
This Office Action is in response to Applicant's Response filed on 11/24/2025 for the above-identified application.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment filed on 11/24/2025 has been entered.
Claims 1-6, 8-11, 13, 15-16, and 20 are amended. Claims 7 and 17 are canceled. Claims 1-6, 8-16, and 18-20 are pending in the application.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-6, 8-16, and 18-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1
Claims 1-6 and 8-10 are directed to a method, Claims 11-16 and 18-19 are directed to an apparatus, and Claim 20 is directed to a medium. Thus, the claims fall within one of the statutory categories (process, machine, article of manufacture) and are eligible under Step 1.
Step 2A Prong 1
Independent Claims
Claims 1, 11, and 20 recite:
generating a value distribution by counting each of values of a parameter type, the value distribution is a statistical distribution of a plurality of values of the parameter type, and the parameter type is one of a weight, an input activation, an output activation, an input feature value, and an output feature value; searching at least one breaking point in a range of the value distribution by comparing a variance between value before quantization and a corresponding value after quantization of a search point with another variance of another search point in the range of the value distribution, wherein the range is divided into a plurality of sections by the at least one breaking point - these limitations encompass writing down a value distribution by counting values of a parameter type and performing statistical analysis of the value distribution of the parameter to determine a breaking point (mathematical calculations and relationships).
quantizing on a part of the values of the parameter type in a first section among the sections using a first quantization parameter and the other part of the values of the parameter type in a second section among the sections using a second quantization parameter comprising performing dynamic fixed-point quantization combined with a clipping method, wherein the first quantization parameter is different from the second quantization parameter, the first quantization parameter and the second quantization parameter have different integer lengths in the dynamic fixed-point quantization, the integer length of the first quantization parameter is determined according to an absolute value of a maximum value and an absolute value of a minimum value in the value distribution, and the maximum value and the minimum value are determined by the clipping method - these limitations encompass performing quantization on parameter values using different quantization parameters (mathematical calculations) and mathematical relationships in determining the integer length of the quantization parameters.
Accordingly, these claims recite an abstract idea that falls under the “mental processes” and “mathematical concepts” groupings.
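By way of illustration only, the dynamic fixed-point quantization with clipping characterized above may be sketched as follows. This is a hypothetical Python sketch, not Applicant's disclosed implementation; the function names, the 8-bit width, and the clipping threshold are illustrative assumptions:

```python
import numpy as np

def int_length(max_abs):
    # Integer bits needed to represent max_abs in fixed point (sign bit excluded).
    return max(0, int(np.ceil(np.log2(max_abs + 1e-12))))

def fixed_point_quantize(x, bit_width, il):
    # Dynamic fixed point: fraction length = bit_width - 1 (sign bit) - integer length.
    fl = bit_width - 1 - il
    step = 2.0 ** -fl
    qmax = 2.0 ** (bit_width - 1) - 1
    return np.clip(np.round(x / step), -qmax - 1, qmax) * step

def sectioned_quantize(values, breaking_point, bit_width=8, clip_max=None):
    # Quantize the section below the breaking point and the section above it
    # with different integer lengths (i.e., different quantization parameters).
    v = np.asarray(values, dtype=float)
    if clip_max is not None:          # clipping method: saturate outliers first
        v = np.clip(v, -clip_max, clip_max)
    small = np.abs(v) <= breaking_point
    out = np.empty_like(v)
    out[small] = fixed_point_quantize(v[small], bit_width, int_length(breaking_point))
    out[~small] = fixed_point_quantize(v[~small], bit_width, int_length(np.abs(v).max()))
    return out
```

The sketch merely mirrors the characterization above: each section of the clipped value distribution receives its own integer length, which is a mathematical operation on the values themselves.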
Step 2A Prong 2
Independent Claims
Additional elements
Claims 1, 11, and 20:
a pre-trained model, wherein the pre-trained model is a model trained by inputting training samples into the deep learning network; at each layer in the deep learning network - these limitations are recited at a high level of generality such that they amount to no more than generally linking the judicial exception to the technological environment of deep learning networks and neural network models (see MPEP § 2106.05(h)). Additionally, the high-level training step merely recites the idea of training the model without providing the details of how the training is accomplished (see MPEP § 2106.05(f)).
Claim 1:
processor-implemented method performed by a processor for a deep learning network - these limitations are recited at a high level of generality such that they amount to no more than mere instructions to apply the abstract idea on a generic computer and deep learning networks (see MPEP § 2106.05(f)). These limitations can also be viewed as generally linking the use of a judicial exception to the field of generic computers and deep learning networks (see MPEP § 2106.05(h)).
Claim 11:
a computing apparatus for a deep learning network, comprising: a memory, for storing a code; and a processor, coupled to the memory, for loading and executing the code - these limitations are recited at a high level of generality such that they amount to no more than mere instructions to apply the abstract idea on a generic computer (see MPEP § 2106.05(f)). These limitations can also be viewed as generally linking the use of a judicial exception to the field of generic computers (see MPEP § 2106.05(h)).
Claim 20:
non-transitory computer-readable storage medium, for storing a code, wherein a processor loads the code to execute the method - these limitations are recited at a high level of generality such that they amount to no more than mere instructions to apply the abstract idea on a generic computer (see MPEP § 2106.05(f)). These limitations can also be viewed as generally linking the use of a judicial exception to the field of generic computers (see MPEP § 2106.05(h)).
Accordingly, these additional elements do not integrate the judicial exception into a practical application because they do not impose any meaningful limits on practicing the abstract idea. These claims are directed to the abstract idea.
Step 2B
Independent Claims
Additional elements
Claims 1, 11, and 20:
a pre-trained model, wherein the pre-trained model is a model trained by inputting training samples into the deep learning network; at each layer in the deep learning network - these limitations are recited at a high level of generality such that they amount to no more than generally linking the judicial exception to the technological environment of deep learning networks and neural network models (see MPEP § 2106.05(h)). Additionally, the high-level training step merely recites the idea of training the model without providing the details of how the training is accomplished (see MPEP § 2106.05(f)).
Claim 1:
processor-implemented method performed by a processor for a deep learning network - these limitations are recited at a high level of generality such that they amount to no more than mere instructions to apply the abstract idea on a generic computer and deep learning networks (see MPEP § 2106.05(f)). These limitations can also be viewed as generally linking the use of a judicial exception to the field of generic computers and deep learning networks (see MPEP § 2106.05(h)).
Claim 11:
a computing apparatus for a deep learning network, comprising: a memory, for storing a code; and a processor, coupled to the memory, for loading and executing the code - these limitations are recited at a high level of generality such that they amount to no more than mere instructions to apply the abstract idea on a generic computer (see MPEP § 2106.05(f)). These limitations can also be viewed as generally linking the use of a judicial exception to the field of generic computers (see MPEP § 2106.05(h)).
Claim 20:
non-transitory computer-readable storage medium, for storing a code, wherein a processor loads the code to execute the method - these limitations are recited at a high level of generality such that they amount to no more than mere instructions to apply the abstract idea on a generic computer (see MPEP § 2106.05(f)). These limitations can also be viewed as generally linking the use of a judicial exception to the field of generic computers (see MPEP § 2106.05(h)).
Accordingly, these additional elements do not amount to significantly more than the judicial exception. As such, these claims are patent ineligible.
Step 2A Prong 1
Dependent Claims
Claims 2 and 12:
the step of determining the at least one breaking point in the range of the value distribution comprises: determining a plurality of first search points in the range;
respectively dividing the range according to the first search points for forming a plurality of evaluation sections, and each of the evaluation sections corresponding to each of the first search points; respectively performing quantization on the evaluation sections of each of the first search points according to different quantization parameters for obtaining a quantized value corresponding to each of the first search points; and
comparing a plurality of variance amounts of the first search points for obtaining the at least one breaking point, wherein each of the variance amounts corresponding to one of the first search points comprises a variance between a quantized value and a corresponding unquantized value - these limitations encompass statistical analysis/mathematical calculations to determine a breaking point in the range of the value distribution.
Claims 3 and 13:
the step of comparing the variance amounts of the first search points for obtaining the at least one breaking point comprises: using one of the first search points with a first variance amount as the at least one breaking point, wherein the first variance amount has variance amount that is smaller than others of the variance amounts of the first search points - these limitations encompass mathematical calculations and mathematical relationships involved in determining a breaking point in the range of the value distribution.
Claims 4 and 14:
the step of determining the first search points in the range comprises: determining a first search space in the range, wherein the first search space is equally divided into the evaluation sections by the first search points - these limitations encompass mathematical calculations and mathematical relationships involved in determining a breaking point in the range of the value distribution.
Claims 5 and 15:
the step of comparing the variance amounts of the first search points for obtaining the at least one breaking point comprises: determining a second search space according to one of the first search points with a first variance amount, wherein the second search space is less than the first search space, and the first variance amount has variance amount that is smaller than others of the variance amounts of the first search points; determining a plurality of second search points in the second search space, wherein a distance between adjacent two of the second search points is less than a distance between adjacent two of the first search points; and comparing a plurality of variance amounts of the second search points for obtaining the at least one breaking point, wherein each of the variance amounts corresponding to one of the second search points comprises a variance between a quantized value and a corresponding unquantized value - these limitations encompass mathematical calculations and mathematical relationships involved in determining a breaking point in the range of the value distribution.
Claims 6 and 16:
the step of determining the second search space according to one of the first search points with the first variance amount comprises: determining a breaking point ratio according to one of the first search points with the first variance amount, wherein the breaking point ratio is a ratio of the one of the first search points with the first variance amount to a maximum absolute value in the value distribution; and determining the second search space according to the breaking point ratio, wherein the first variance amount is located in the second search space - these limitations encompass mathematical calculations and mathematical relationships involved in determining a breaking point in the range of the value distribution.
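By way of illustration only, the coarse-to-fine breaking-point search recited in Claims 2-6 and 12-16 may be sketched as follows. This is a hypothetical Python sketch; the quantizer, the squared-error "variance amount," and the search granularity are illustrative assumptions, not Applicant's implementation:

```python
import numpy as np

def quantize_fixed(x, bit_width, max_abs):
    # Fixed-point quantizer whose integer length follows max_abs.
    il = max(0, int(np.ceil(np.log2(max_abs + 1e-12))))
    step = 2.0 ** -(bit_width - 1 - il)
    qmax = 2.0 ** (bit_width - 1) - 1
    return np.clip(np.round(x / step), -qmax, qmax) * step

def variance_at(values, bp, bit_width=8):
    # Quantization error ("variance amount") when the range is split at bp.
    small = np.abs(values) <= bp
    q = np.empty_like(values)
    q[small] = quantize_fixed(values[small], bit_width, bp)
    q[~small] = quantize_fixed(values[~small], bit_width, np.abs(values).max())
    return float(np.sum((q - values) ** 2))

def search_breaking_point(values, n_points=16, refinements=2):
    # Equally spaced first search points, then a narrower second search space
    # around the candidate with the smallest variance amount.
    values = np.asarray(values, dtype=float)
    lo, hi = 0.0, float(np.abs(values).max())
    best = hi
    for _ in range(refinements):
        cands = np.linspace(lo, hi, n_points + 2)[1:-1]   # interior search points
        errs = [variance_at(values, c) for c in cands]
        best = float(cands[int(np.argmin(errs))])
        spacing = cands[1] - cands[0]
        lo, hi = max(0.0, best - spacing), best + spacing  # finer second search space
    return best
```

The second pass searches a smaller space with more closely spaced candidates, mirroring the recited relationship between the first and second search points.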
Claims 8 and 18:
determining a gradient for quantization of weight by using a straight through estimator (STE) with boundary constraint, wherein the straight through estimator determines an input gradient between an upper limit and a bottom limit is equal to an output gradient - these limitations encompass mathematical calculations involved in determining a gradient.
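A minimal sketch of the recited straight-through estimator with boundary constraint follows (illustrative only; the function name and array framing are assumptions, not Applicant's disclosure):

```python
import numpy as np

def ste_grad(x, grad_out, lower, upper):
    # Straight-through estimator with boundary constraint: the input gradient
    # equals the output gradient where lower <= x <= upper, and is zero outside.
    mask = (x >= lower) & (x <= upper)
    return np.where(mask, grad_out, 0.0)
```

This captures the mathematical relationship recited: inside the limits the gradient passes through unchanged; outside the limits it is suppressed.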
Claims 9 and 19:
quantizing a value of a weight or an input activation value of the parameter type; and
quantizing a value of an output activation value of the parameter type output of the computing layer - these limitations encompass mathematical calculations involved in performing quantization.
Claim 10:
determining an integer length of a weight of each of a plurality of quantization layers in the quantized model; and determining a fraction length of each of the quantization layers according to a bit width limit of each of the quantization layers - these limitations encompass mathematical calculations and relationships involved in determining integer length and fraction length.
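The recited integer-length/fraction-length relationship can be illustrated as follows (a hypothetical sketch assuming a sign bit plus integer and fraction bits summing to the bit-width limit; not Applicant's implementation):

```python
import numpy as np

def layer_lengths(weights, bit_width=8):
    # Integer length from the layer's largest magnitude; fraction length is
    # whatever remains of the bit-width limit after the sign and integer bits.
    max_abs = float(np.max(np.abs(weights)))
    il = max(0, int(np.ceil(np.log2(max_abs + 1e-12))))
    fl = bit_width - 1 - il          # 1 sign bit + il + fl = bit_width
    return il, fl
```

Under this assumed convention, a layer whose largest weight magnitude is 1.7 would receive one integer bit and six fraction bits at an 8-bit width.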
Thus, the claims recite the abstract idea.
Step 2A Prong 2
Dependent Claims
Additional elements
Claims 8 and 18:
post-training a quantized model for obtaining a trained quantized model; and tuning the trained quantized model - these limitations are recited at a high-level of generality such that it amounts to no more than generally linking the use of a judicial exception to the field of machine learning models (see MPEP § 2106.05(h)). Additionally, the high level training step merely recites the idea of training the model without providing the details of how the training is accomplished (see MPEP § 2106.05(f)).
Claims 9 and 19:
inputting a quantized value into a computing layer - these limitations are recited at a high level of generality such that they amount to no more than generally linking the use of a judicial exception to the field of machine learning models (see MPEP § 2106.05(h)).
Claim 10:
the step of post-training the quantized model comprises: inferring a plurality of calibration samples according to the quantized model to determine an integer length of an activation/feature value in each of the quantization layers in the quantized model - these limitations merely recite an idea of a solution or outcome with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result (see MPEP § 2106.05(f)). These limitations can also be viewed as generally linking the use of a judicial exception to the field of machine learning models (see MPEP § 2106.05(h)).
Accordingly, these additional elements do not integrate the judicial exception into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to the abstract idea.
Step 2B
Dependent Claims
Additional elements
Claims 8 and 18:
post-training a quantized model for obtaining a trained quantized model; and tuning the trained quantized model - these limitations are recited at a high-level of generality such that it amounts to no more than generally linking the use of a judicial exception to the field of machine learning models (see MPEP § 2106.05(h)). Additionally, the high level training step merely recites the idea of training the model without providing the details of how the training is accomplished (see MPEP § 2106.05(f)).
Claims 9 and 19:
inputting a quantized value into a computing layer - these limitations are recited at a high level of generality such that they amount to no more than generally linking the use of a judicial exception to the field of machine learning models (see MPEP § 2106.05(h)).
Claim 10:
the step of post-training the quantized model comprises: inferring a plurality of calibration samples according to the quantized model to determine an integer length of an activation/feature value in each of the quantization layers in the quantized model - these limitations merely recite an idea of a solution or outcome with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result (see MPEP § 2106.05(f)). These limitations can also be viewed as generally linking the use of a judicial exception to the field of machine learning models (see MPEP § 2106.05(h)).
Accordingly, these additional elements do not amount to significantly more than the judicial exception. As such, the claims are patent ineligible.
Response to Arguments
35 U.S.C. §112: Applicant’s amendments have overcome the 112 rejections previously set forth.
35 U.S.C. §101: In the remarks, Applicant argues that:
(a) Step 2A prong 2: The combination of limitations provides a technical solution to improve the prediction accuracy of image classification or object detection using the pre-trained model with the quantized values and reduce the computing and storage requirements for the pre-trained model.
(b) Step 2B: None of the existing arts has discussed quantizing the values of the digital data with different quantization parameters in integer length for generating a quantized model, and therefore Applicant respectfully submits that Claims 1, 11, and 20 as a whole are not well-understood, routine, and conventional functions in the field of image classification or object detection and constitute an inventive concept.
(c) Claims 1, 11, and 20 should be thus allowable under 35 U.S.C. 101, and claims 2-6, 8-10, 12-16, and 18-19 should be allowable for the same rationale.
Examiner respectfully disagrees with applicant’s arguments.
As to point (a), as analyzed in detail under the Step 2A, Prong 1 portion of the 35 U.S.C. 101 rejections above, Claims 1, 11, and 20 recite abstract ideas that fall under the “mental processes” and “mathematical concepts” groupings. Also, as analyzed under the Step 2A, Prong 2 portion of the 35 U.S.C. 101 rejections above, the additional elements amount to no more than: generally linking the judicial exception to the technological environment of deep learning networks and neural network models (see MPEP § 2106.05(h)); merely reciting the idea of training the model without providing the details of how the training is accomplished (see MPEP § 2106.05(f)); and mere instructions to apply the abstract idea on a generic computer and deep learning networks (see MPEP § 2106.05(f)). The claims do not recite using the pre-trained model with the quantized values to improve the prediction accuracy of image classification or object detection or to reduce the computing and storage requirements for the pre-trained model. Any purported improvement is based on the statistical analysis of the value distribution of the parameter type and performing quantization on parameter values using different quantization parameters, which are abstract ideas (mental processes and/or mathematical concepts) as analyzed in detail under the Step 2A, Prong 1 portion of the 35 U.S.C. 101 rejections above. Examiner notes that “It is important to note, the judicial exception alone cannot provide the improvement” (see MPEP § 2106.05(a)). Further, the recited claims merely involve, at most, an improvement to the abstract idea itself with the aid of generic computer components and/or deep learning networks/neural network models. Examiner notes that “It is important to keep in mind that an improvement in the abstract idea itself is not an improvement in technology” (see MPEP § 2106.05(a)(II)).
The claims recite abstract ideas and the additional elements do not integrate the judicial exception into a practical application because they do not impose any meaningful limits on practicing the abstract idea. See the detailed analysis under 35 U.S.C. 101 rejections above.
As to point (b), first, the Examiner notes that “The question of whether a particular claimed invention is novel or obvious is ‘fully apart’ from the question of whether it is eligible. Diamond v. Diehr, 450 U.S. 175, 190, 209 USPQ 1, 9 (1981)” (see MPEP § 2106.05(d)(I)). Second, the additional elements do not integrate the judicial exception into a practical application because they do not impose any meaningful limits on practicing the abstract idea and do not amount to significantly more than the judicial exception, as analyzed in detail under the 35 U.S.C. 101 rejections above. Examiner notes that an inventive concept "cannot be furnished by the unpatentable law of nature (or natural phenomenon or abstract idea) itself” (see MPEP § 2106.05(I)).
As to point (c), dependent claims 2-6, 8-10, 12-16, and 18-19, which depend from independent claims 1, 11, and 20, are patent ineligible for at least the reasons stated above and the detailed analysis under the 35 U.S.C. 101 rejections.
Accordingly, Applicant’s arguments concerning the §101 rejections are not persuasive.
35 U.S.C. §102/103: Applicant’s amendments and arguments with respect to the 102/103 rejections have been fully considered and are persuasive. The 103 rejections are withdrawn.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 CFR § 1.111(c) to consider these references fully when responding to this action.
Hanumante et al. (US 2023/0214639 A1) teaches: training a learned clipping-level a with fixed-point quantization using a parameterized clipping activation (PACT) process for the at least two computational layers of the neural network; quantizing on an effective weight that fuses a weight of the at least one convolution layer of the neural network with a weight and running variance from the at least one BN layer; determining a fractional length for weight of the at least two computational layers of the neural network from a current value of weight using the determined optimal fractional length for the weight of the at least two computational layers of the neural network; relating a fixed-point activation between two adjacent computational layers of the at least two computational layers of the neural network using a PACT quantization of the clipping-level a and an activation fractional length (FL) from at least one node in a following computational layer of the neural network; and storing resulting fixed-point weights and activation values as a compressed representation of the respective computational layers of the neural network (see [0027]).
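For context, the PACT process referenced in Hanumante is described in the literature as clipping activations with a learned level alpha, via y = 0.5 * (|x| - |x - alpha| + alpha), which is equivalent to clip(x, 0, alpha). The following is a minimal illustrative sketch of that clipping and a uniform quantization of the clipped value; it is drawn from the general PACT formulation, not from Hanumante's disclosure, and the names and bit width are assumptions:

```python
import numpy as np

def pact_activation(x, alpha):
    # PACT clipping: algebraically equal to np.clip(x, 0, alpha).
    return 0.5 * (np.abs(x) - np.abs(x - alpha) + alpha)

def pact_quantize(x, alpha, bits=8):
    # Uniformly quantize the clipped activation into 2**bits - 1 levels.
    y = pact_activation(x, alpha)
    scale = (2 ** bits - 1) / alpha
    return np.round(y * scale) / scale
```

Because the clipping level alpha is a parameter, it can be learned jointly with the network weights, which is the feature Hanumante's fixed-point training relies on.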
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to REJI KARTHOLY whose telephone number is (571)272-3432. The examiner can normally be reached on Monday - Thursday 7:30 am - 3:30 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Welch, can be reached at telephone number (571)272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/REJI KARTHOLY/Primary Examiner, Art Unit 2143