Prosecution Insights
Last updated: April 19, 2026
Application No. 18/074,166

METHOD FOR TRAINING AN ARTIFICIAL NEURAL NETWORK COMPRISING QUANTIZED PARAMETERS

Final Rejection (§101, §103)
Filed: Dec 02, 2022
Examiner: MAUNI, HUMAIRA ZAHIN
Art Unit: 2141
Tech Center: 2100 — Computer Architecture & Software
Assignee: Huawei Technologies Co., Ltd.
OA Round: 2 (Final)
Grant Probability: 38% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 4y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 38% (6 granted / 16 resolved), -17.5% vs TC avg
Interview Lift: strong, +66.7% higher allowance across resolved cases with an interview
Typical Timeline: 4y 6m average prosecution; 39 applications currently pending
Career History: 55 total applications across all art units

Statute-Specific Performance

§101: 35.9% (-4.1% vs TC avg)
§103: 40.2% (+0.2% vs TC avg)
§102: 10.9% (-29.1% vs TC avg)
§112: 13.0% (-27.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 16 resolved cases.

Office Action

Grounds of rejection: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendments filed 01/15/2026 have been entered. Claims 1-19 remain pending within the application. The amendments filed 01/15/2026 are NOT sufficient to overcome the objection previously set forth in the Non-Final Office Action mailed 10/20/2025. See objection below.

Specification

The disclosure is objected to because of the following informalities: the spelling of terms in the specification is inconsistent and does not follow either the American-English or the British-English standard. Both American-English and British-English spellings are used, such as “regularizer”, “regularisation”, “quantization”, “quantisation”, etc. While the applicant has amended the one term pointed out as an example in the previous office action, the inconsistencies in spelling persist throughout the entire disclosure, beyond the terms the examiner cites as examples. The disclosure must be amended such that spelling is consistent throughout the entire disclosure and follows either the American-English or the British-English standard, but not both. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 11-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because the network of independent claim 11 is directed to values and at best encompasses software per se. Similarly, the network of independent claim 16 is directed to values and at best encompasses software per se. By virtue of their dependence, claims 12-15 and 17-19 at best encompass software per se. Therefore, the claims are ineligible subject matter under 35 U.S.C. 101. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-19 are rejected under 35 U.S.C. 103 as being unpatentable over Naumov et al. (Pub. No.: US 11,468,313 B1), hereafter Naumov, in view of Choi et al.
("PACT: PARAMETERIZED CLIPPING ACTIVATION FOR QUANTIZED NEURAL NETWORKS"), hereafter Choi. Regarding claim 1, Naumov discloses: A method for training an artificial neural network, comprising (Naumov, Fig. 1, Fig. 2, and Fig. 3): minimizing a loss function, the loss function comprising a scalable regularization factor defined by a differentiable periodic function configured to provide a finite number of minima selected based on a quantization scheme for the artificial neural network, whereby to constrain a connection weight value to one of a predetermined number of values of the quantization scheme (Naumov, Fig. 2, Fig. 3, col 2, lines 59-67 and col. 3 lines 1-41 teaches minimizing a loss function comprising a scalable regularization factor defined by a differentiable periodic function, i.e. scaling factor for periodic regularization function, and providing a finite number of minima selected based on the quantized neural network to constrain a connection weight value to one of a predetermined number of values of the quantization scheme), wherein the artificial neural network comprises multiple nodes …, wherein the multiple nodes are arranged in multiple layers, and wherein nodes in adjacent layers of the multiple layers are connected by connections each defining a quantized connection weight function configured to output a quantized connection weight value (Naumov, Fig. 4, col. 9, lines 63-67 and col. 10, lines 1-19 teaches the artificial neural network to comprise multiple layers of interconnected nodes wherein nodes in adjacent layers of the multiple layers are connected by connections each defining a quantized connection weight function configured to output a quantized connection weight value), wherein minimizing the loss function comprises using the scalable regularization factor as part of a regularization term in a loss calculation to push values of weights … to a set of discrete points during training, thereby constraining a quantized … value to one of a predetermined number of values of the quantization scheme (Naumov, Fig. 3, col. 3 lines 15-41 teaches minimizing a loss function using the scalable regularization factor as a part of a regularization term in a loss calculation to push values of weights of connections to a set of discrete points for quantization during training). While Naumov teaches wherein the artificial neural network comprises multiple nodes …, wherein the multiple nodes are arranged in multiple layers, and wherein nodes in adjacent layers of the multiple layers are connected by connections each defining a quantized connection weight function configured to output a quantized connection weight value, they do not explicitly teach defining a quantized activation function configured to output a quantized activation value. Choi discloses: wherein the artificial neural network comprises … defining a quantized activation function configured to output a quantized activation value (Choi, page 2, point 1, lines 1-5 “PACT: A new activation quantization scheme for finding the optimal quantization scale during training. We introduce a new parameter α that is used to represent the clipping level in the activation function and is learned via back-propagation. 
α sets the quantization scale smaller than ReLU to reduce the quantization error, but larger than a conventional clipping activation function (used in previous schemes) to allow gradients to flow more effectively” teaches the neural network to comprise defining a quantized activation function configured to output a quantized activation value). Naumov and Choi are analogous art because they are from the same field of endeavor, quantization and neural network. It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Naumov to include wherein the artificial neural network comprises … defining a quantized activation function configured to output a quantized activation value, based on the teachings of Choi. One of ordinary skill in the art would have been motivated to make this modification in order to preserve model accuracy, as suggested by Choi (Choi, page 2, point 1, lines 7-8). While Naumov discloses wherein minimizing the loss function comprises using the scalable regularization factor as part of a regularization term in a loss calculation to push values of weights … to a set of discrete points during training, thereby constraining a quantized … value to one of a predetermined number of values of the quantization scheme, they do not explicitly disclose the loss calculation to push values of weights and activation … thereby constraining a quantized activation value…. Choi discloses: …loss calculation to push values of weights and activation … thereby constraining a quantized activation value…(Choi, page 2, point 1, lines 1-6 “PACT: A new activation quantization scheme for finding the optimal quantization scale during training. … α sets the quantization scale smaller than ReLU to reduce the quantization error, … In addition, regularization is applied to α in the loss function to enable faster convergence.” And page 4, equations 2 and 3, and paragraph 3, last 2 lines “α converges to values much smaller than the initial value as the training epochs proceed, thereby limiting the dynamic range of activations and minimizing quantization loss.” Teaches a loss calculation that pushes values of weights and activations of a neural network during training that constrain a quantized activation value). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Naumov to include loss calculation to push values of weights and activation … thereby constraining a quantized activation value, based on the teachings of Choi. One of ordinary skill in the art would have been motivated to make this modification in order to preserve model accuracy, limit the dynamic range of activations and minimize quantization loss, as suggested by Choi (Choi, page 2, point 1, lines 7-8 and page 4, paragraph 3, last 2 lines). Regarding claim 2, Naumov, in view of Choi, discloses the method as claimed in claim 1 (and thus the rejection of claim 1 is incorporated). Naumov further discloses: wherein each of the finite number of minima of the differentiable periodic function coincide with a value of the quantization scheme (Naumov, Fig. 6, and col. 12, lines 31-46 teaches a finite number of possible minimum values of the differentiable periodic function to coincide with a value of the quantization scheme). 
Regarding claim 3, Naumov, in view of Choi, discloses the method as claimed in claim 1 (and thus the rejection of claim 1 is incorporated). Naumov further discloses: wherein the quantization scheme defines a quantity of integer bits (Naumov, col. 2, lines 52-58 teaches a defined quantity of integer bits). Regarding claim 4, Naumov, in view of Choi, discloses the method as claimed in claim 1 (and thus the rejection of claim 1 is incorporated). Naumov further discloses: constraining, by using the loss function, a quantized … value to one of a predetermined number of values of the quantization scheme (Naumov, col. 12, lines 47-67 and col. 13, lines 1-5 teaches constraining, by using the loss function, a quantized value to one of a predetermined number of values of the quantization scheme). While Naumov discloses constraining, by using the loss function, a quantized … value to one of a predetermined number of values of the quantization scheme, they do not disclose constraining … a quantized activation value. Choi teaches: constraining… a quantized activation value (Choi, equation (1) and page 2, point 1, lines 1-6 “PACT: A new activation quantization scheme for finding the optimal quantization scale during training. We introduce a new parameter α that is used to represent the clipping level in the activation function and is learned via back-propagation. α sets the quantization scale smaller than ReLU to reduce the quantization error, but larger than a conventional clipping activation function (used in previous schemes) to allow gradients to flow more effectively. In addition, regularization is applied to in the loss function to enable faster convergence.” Teaches constraining, by using the loss function, a quantized activation value). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Naumov to include constraining… a quantized activation value, based on the teachings of Choi. One of ordinary skill in the art would have been motivated to make this modification in order to preserve model accuracy, as suggested by Choi (Choi, page 2, point 1, lines 7-8). Regarding claim 5, Naumov, in view of Choi, discloses the method as claimed in claim 1 (and thus the rejection of claim 1 is incorporated). Naumov further discloses: tuning a quantized connection weight value (Naumov, Fig. 2, col. 7, lines 37-43 teaches tuning a quantized connection weight value during training), minimizing the loss function using a gradient descent mechanism (Naumov, col. 15, lines 23-36 teaches minimizing the loss function using a gradient descent mechanism). Regarding claim 6, Naumov discloses: A non-transitory machine-readable storage medium encoded with instructions for training an artificial neural network, the instructions executable by a processor to (Naumov, Figs. 1 – 3 and col. 3, lines 51- 67), minimize a loss function comprising a scalable regularization factor defined by a differentiable periodic function configured to provide a finite number of minima selected based on a quantization scheme for a neural network, whereby to constrain a connection weight value to one of a predetermined number of values of the quantization scheme (Naumov, Fig. 2, Fig. 3, col 2, lines 59-67 and col. 3 lines 1-41 teaches minimizing a loss function comprising a scalable regularization factor defined by a differentiable periodic function, i.e. 
scaling factor for periodic regularization function, and providing a finite number of minima selected based on the quantized neural network to constrain a connection weight value to one of a predetermined number of values of the quantization scheme), wherein the instructions are further executable to apply the scalable regularization factor as part of a regularization term in a loss calculation to push values … to a set of discrete points during training, thereby constraining a quantized … value to one of a predetermined number of values of the quantization scheme (Naumov, Fig. 3, col. 3 lines 15-41 teaches applying the scalable regularization factor as part of a regularization term in a loss calculation to push values of weights of connections to a set of discrete points for quantization during training). While Naumov discloses applying the scalable regularization factor as part of a regularization term in a loss calculation to push values … to a set of discrete points during training, thereby constraining a quantized … value to one of a predetermined number of values of the quantization scheme, they do not explicitly disclose the loss calculation to push values of activations … thereby constraining a quantized activation value…. Choi discloses: …loss calculation to push values of activations … thereby constraining a quantized activation value…(Choi, page 2, point 1, lines 1-6 “PACT: A new activation quantization scheme for finding the optimal quantization scale during training. … α sets the quantization scale smaller than ReLU to reduce the quantization error, … In addition, regularization is applied to α in the loss function to enable faster convergence.” And page 4, equations 2 and 3, and paragraph 3, last 2 lines “α converges to values much smaller than the initial value as the training epochs proceed, thereby limiting the dynamic range of activations and minimizing quantization loss.” Teaches a loss calculation that pushes values of weights and activations of a neural network during training that constrain a quantized activation value). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Naumov to include loss calculation to push values of activations … thereby constraining a quantized activation value, based on the teachings of Choi. One of ordinary skill in the art would have been motivated to make this modification in order to preserve model accuracy, limit the dynamic range of activations and minimize quantization loss, as suggested by Choi (Choi, page 2, point 1, lines 7-8 and page 4, paragraph 3, last 2 lines). Regarding claim 7, Naumov, in view of Choi, discloses the non-transitory machine-readable storage medium as claimed in claim 6 (and thus the rejection of claim 6 is incorporated). Naumov further discloses: adjust a weight scale parameter of the differentiable periodic function, the weight scale parameter representing a scale factor for a weight value of a weight function defining a connection between nodes of the neural network (Naumov, col. 3, lines 23-41 teaches adjusting a weight scale parameter of the differentiable periodic function, the weight scale parameter representing a scale factor for a weight value of a weight function defining a connection between nodes of the neural network), compute a value for the loss function based on the adjusted weight scale parameter (Naumov, col. 
3, lines 23-41 teaches computing a value for the loss function based on the adjusted weight scale parameter). Regarding claim 8, Naumov, in view of Choi, discloses the non-transitory machine-readable storage medium as claimed in claim 6 (and thus the rejection of claim 6 is incorporated). Naumov further discloses: adjust a… scale parameter of the differentiable periodic function… of a node of the neural network (Naumov, col. 3, lines 23-41 teaches adjusting a scale parameter of the differentiable periodic function of a node of the neural network). While Naumov discloses adjust a… scale parameter of the differentiable periodic function… of a node of the neural network, they do not disclose: adjust an activation scale parameter … the activation scale parameter representing a scale factor for an activation value of an activation function …, compute a value for the loss function based on the adjusted activation scale parameter. Choi discloses: adjust an activation scale parameter … the activation scale parameter representing a scale factor for an activation value of an activation function … (Choi, page 2, point 1, lines 1-5 “PACT: A new activation quantization scheme for finding the optimal quantization scale during training. We introduce a new parameter α that is used to represent the clipping level in the activation function and is learned via back-propagation. α sets the quantization scale smaller than ReLU to reduce the quantization error, but larger than a conventional clipping activation function (used in previous schemes) to allow gradients to flow more effectively” teaches adjusting an activation scale parameter … the activation scale parameter representing a scale factor for an activation value of an activation function), compute a value for the loss function based on the adjusted activation scale parameter (Choi, equation (1) and page 2, point 1, lines 1-6 “PACT: A new activation quantization scheme for finding the optimal quantization scale during training. We introduce a new parameter α that is used to represent the clipping level in the activation function and is learned via back-propagation. α sets the quantization scale smaller than ReLU to reduce the quantization error, but larger than a conventional clipping activation function (used in previous schemes) to allow gradients to flow more effectively. In addition, regularization is applied to in the loss function to enable faster convergence.” Teaches computing a value for the loss function based on the adjusted activation scale parameter). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Naumov to include adjust an activation scale parameter … the activation scale parameter representing a scale factor for an activation value of an activation function …, and compute a value for the loss function based on the adjusted activation scale parameter, based on the teachings of Choi. One of ordinary skill in the art would have been motivated to make this modification in order to preserve model accuracy, as suggested by Choi (Choi, page 2, point 1, lines 7-8). Regarding claim 9, Naumov, in view of Choi, discloses the non-transitory machine-readable storage medium as claimed in claim 6 (and thus the rejection of claim 6 is incorporated). Naumov further discloses: compute a value of the loss function by performing a gradient descent calculation (Naumov, col. 
15, lines 23-36 teaches computing a value for the loss function using a gradient descent calculation). Regarding claim 10, Naumov discloses: A quantization method comprising: iteratively minimizing a loss function by adjusting a quantized parameter value as part of a gradient descent mechanism to constrain a parameter for a neural network to one of a number of integer bits defining a selected quantization scheme as part of a regularization process, wherein the loss function comprises a scalable regularization factor defined by a differentiable periodic function configured to provide a finite number of minima selected based on a quantization scheme for the neural network is minimized (Naumov, Fig. 2, Fig. 3, col 2, lines 59-67 and col. 3 lines 1-41 teaches a quantization method comprising iteratively minimizing a loss function, through training, by adjusting a quantized parameter value as part of a gradient descent mechanism to constrain a parameter for a neural network to one of a number of integer bits defining a selected quantization scheme as part of a regularization process wherein the loss function comprises a scalable regularization factor defined by a differentiable periodic function configured to provide a finite number of minima selected based on a quantization scheme for the neural network is minimized), wherein iteratively minimizing comprises applying the scalable regularization factor as part of a regularization term in a loss calculation to push values … to a set of discrete points during training, thereby constraining a quantized … value to one of a predetermined number of values of the selected quantization scheme (Naumov, Fig. 3, col. 3 lines 15-41 teaches applying the scalable regularization factor as part of a regularization term in a loss calculation to push values of weights of connections to a set of discrete points for quantization during training). While Naumov discloses applying the scalable regularization factor as part of a regularization term in a loss calculation to push values … to a set of discrete points during training, thereby constraining a quantized … value to one of a predetermined number of values of the selected quantization scheme, they do not explicitly disclose the loss calculation to push values of activations … thereby constraining a quantized activation value…. Choi discloses: …loss calculation to push values of activations … thereby constraining a quantized activation value…(Choi, page 2, point 1, lines 1-6 “PACT: A new activation quantization scheme for finding the optimal quantization scale during training. … α sets the quantization scale smaller than ReLU to reduce the quantization error, … In addition, regularization is applied to α in the loss function to enable faster convergence.” And page 4, equations 2 and 3, and paragraph 3, last 2 lines “α converges to values much smaller than the initial value as the training epochs proceed, thereby limiting the dynamic range of activations and minimizing quantization loss.” Teaches a loss calculation that pushes values of weights and activations of a neural network during training that constrain a quantized activation value). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Naumov to include loss calculation to push values of activations … thereby constraining a quantized activation value, based on the teachings of Choi. 
One of ordinary skill in the art would have been motivated to make this modification in order to preserve model accuracy, as suggested by Choi (Choi, page 2, point 1, lines 7-8). Claims 11-15 are substantially similar to claims 1-5, and thus are rejected on the same basis as claims 1-5. Claims 16 is substantially similar to claim 10, and thus is rejected on the same basis as claim 10. Regarding claim 17, Naumov, in view of Choi, discloses a neural network as claimed in claim 11 (and thus the rejection of claim 11 is incorporated). Naumov further discloses: wherein the neural network is initialized using sample statistics from training data (Naumov, col. 15, lines 1-13 teaches initializing the neural network using sample statistics from training data to initialize weights). Regarding claim 18, Naumov, in view of Choi, discloses a neural network as claimed in claim 17 (and thus the rejection of claim 17 is incorporated). Naumov further discloses: wherein the set of parameters comprise scale factors … (Naumov, col. 13, lines 6-17 teaches the set of parameters to comprise scale factors). While Naumov discloses wherein the set of parameters comprise scale factors, they do not disclose scale factors for activations. Choi discloses: scale factors for activations (Choi, page 2, point 1, lines 1-5 “PACT: A new activation quantization scheme for finding the optimal quantization scale during training. We introduce a new parameter α that is used to represent the clipping level in the activation function and is learned via back-propagation. α sets the quantization scale smaller than ReLU to reduce the quantization error, but larger than a conventional clipping activation function (used in previous schemes) to allow gradients to flow more effectively” teaches scale factors for activations), It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Naumov to include scale factors for activations, based on the teachings of Choi. One of ordinary skill in the art would have been motivated to make this modification in order to preserve model accuracy, as suggested by Choi (Choi, page 2, point 1, lines 7-8). Regarding claim 19, Naumov, in view of Choi, discloses a neural network as claimed in claim 11 (and thus the rejection of claim 11 is incorporated). Naumov further discloses: wherein scale factors for weights are initialized using current maximum absolute value of weights (Naumov, col. 3 lines 22 - 41 and col. 15, line 52 – col. 16 lines 5 teaches scale factors for weights are initialized using current maximum absolute value of weights). The amendments and remarks filed 01/15/2026 do not address the 35 USC § 101 software per se rejections for claims 11-19 set forth in the Non-Final Office Action mailed 10/20/2025. The 35 U.S.C. 101 software per se rejections for claims 11-19 are maintained. Applicant's arguments filed 01/15/2026 have been fully considered with regards to the 35 U.S.C. 101 abstract idea rejection, and they are persuasive. The rejections have been withdrawn. Applicant's arguments filed 01/15/2026 have been fully considered with regards to the 35 U.S.C. 102/103 rejection, but they are not persuasive. The applicant asserts on page 13 of the remarks that Choi’s “parameterized clipping activation approach” is fundamentally different than “Applicant’s approach of using a periodic regularizer with finite minima”. 
However, office action relies on Naumov to teach “…scalable regularization factor defined by a differentiable periodic function configured to provide a finite number of minima selected based on a quantization scheme…” (Naumov, Fig. 2, Fig. 3, col 2, lines 59-67 and col. 3 lines 1-41 teaches minimizing a loss function comprising a scalable regularization factor defined by a differentiable periodic function, i.e. scaling factor for periodic regularization function, and providing a finite number of minima selected based on the quantized neural network to constrain a connection weight value to one of a predetermined number of values of the quantization scheme), thus, it is not relevant whether Choi’s parameterized clipping activation approach is fundamentally different than Applicant’s approach of using a periodic regularizer with finite minima. The applicant asserts on page 13 of the remarks “The Office Action states it would be obvious to combine Naumov and Choi "to preserve model accuracy" (OA p. 32, 34). However, this general motivation does not provide a sufficient rationale for the specific technical modification required by the amended claims. Combining Naumov's periodic-minima regularizer with Choi's activation quantization would require fundamental changes to both approaches… This goes beyond the predictable use of known elements according to their established functions, and amounts to a substantial redesign that is not suggested by the prior art.”. The examiner respectfully disagrees, as preserving model accuracy in the field of quantization and neural network is not a general motivation. Furthermore, limiting the dynamic range of activations and minimizing quantization loss, as suggested by Choi (page 4, paragraph 3, last 2 lines “α converges to values much smaller than the initial value as the training epochs proceed, thereby limiting the dynamic range of activations and minimizing quantization loss.”), further establishes a sufficient rationale for the specific technical modification required by the amended claims to include loss calculations to push values of weights and activation … thereby constraining a quantized activation value. The examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. 
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUMAIRA ZAHIN MAUNI whose telephone number is (703)756-5654. The examiner can normally be reached Monday - Friday, 9 am - 5 pm (ET). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, MATT ELL can be reached at (571) 270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /H.Z.M./Examiner, Art Unit 2141 /MATTHEW ELL/Supervisory Patent Examiner, Art Unit 2141
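
Technology note: the §103 mapping above characterizes the claims as training-time quantization in which a differentiable periodic regularization term, whose minima sit on the quantization grid, is added to the loss so that ordinary gradient descent pushes connection weights toward representable quantized values. The sketch below is only an illustration of that general idea, not the applicant's or Naumov's actual formulation; the function and parameter names (periodic_quantization_penalty, quant_step, strength) are hypothetical.

```python
# Illustrative sketch only (hypothetical names, not the claimed or cited
# formulation): a sine-squared penalty whose minima fall exactly on a uniform
# quantization grid, so adding it to the task loss nudges weights toward
# representable quantized values during ordinary gradient-descent training.
import math
import torch

def periodic_quantization_penalty(weights: torch.Tensor,
                                  quant_step: float,
                                  strength: float = 1e-3) -> torch.Tensor:
    # sin(pi * w / quant_step) ** 2 is zero whenever w is an integer multiple
    # of quant_step and positive everywhere else; weights are assumed to be
    # kept in a bounded range so only a finite set of grid points can act as
    # minima of the penalty.
    return strength * torch.sin(math.pi * weights / quant_step).pow(2).sum()

# Typical use inside a training step (task_loss computed elsewhere):
#   reg = sum(periodic_quantization_penalty(p, quant_step=2 ** -4)
#             for p in model.parameters())
#   loss = task_loss + reg
#   loss.backward()
#   optimizer.step()
```

Adjusting the penalty weight during training appears to be the role the claims assign to the "scalable regularization factor": while the factor is small the task loss dominates, and as it grows the penalty forces weights onto the quantization grid.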
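The activation side comes from the cited Choi (PACT) reference: activations are clipped at a learnable level α that is trained by back-propagation, the clipped range is uniformly quantized, and a regularization term on α in the loss keeps the dynamic range (and hence the quantization error) small. The sketch below follows the commonly described PACT-style formulation; the class name, bit width, and straight-through rounding details are assumptions rather than anything taken from the application.

```python
# Illustrative PACT-style sketch (assumed names and details, not reproduced
# from the application or from Choi): a learnable clipping level alpha bounds
# the activation, and the clipped range is uniformly quantized with a
# straight-through estimator so gradients still flow during training.
import torch
import torch.nn as nn

class ClippedQuantizedActivation(nn.Module):
    def __init__(self, num_bits: int = 4, alpha_init: float = 6.0):
        super().__init__()
        self.num_bits = num_bits
        # alpha is learned via back-propagation; adding a regularization
        # penalty on alpha to the loss encourages a tight dynamic range,
        # per the passage of Choi quoted in the rejection.
        self.alpha = nn.Parameter(torch.tensor(alpha_init))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Equivalent to clamp(x, 0, alpha), written so the gradient with
        # respect to alpha is nonzero for inputs above the clipping level.
        clipped = 0.5 * (x.abs() - (x - self.alpha).abs() + self.alpha)
        # Uniform quantization of [0, alpha] into 2**num_bits - 1 steps,
        # with the rounding detached so training sees a straight-through
        # gradient.
        scale = (2 ** self.num_bits - 1) / self.alpha
        quantized = torch.round(clipped * scale) / scale
        return clipped + (quantized - clipped).detach()
```

In a network this module would slot in where a ReLU would otherwise sit, e.g. y = ClippedQuantizedActivation(num_bits=4)(x), alongside whatever weight-side regularizer is in use.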

Prosecution Timeline

Dec 02, 2022: Application Filed
Jan 18, 2023: Response after Non-Final Action
Oct 15, 2025: Non-Final Rejection (§101, §103)
Jan 15, 2026: Response Filed
Feb 06, 2026: Final Rejection (§101, §103), current action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585969: GENERATING CONFIDENCE SCORES FOR MACHINE LEARNING MODEL PREDICTIONS
Granted Mar 24, 2026; 2y 5m to grant
Study what changed to get past this examiner, based on the 1 most recent grant.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 38%
With Interview: 99% (+66.7%)
Median Time to Grant: 4y 6m
PTA Risk: Moderate
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
