Prosecution Insights
Last updated: April 19, 2026
Application No. 17/950,009

GENERATING NEURAL NETWORKS

Status: Final Rejection (§103)
Filed: Sep 21, 2022
Examiner: HOOVER, BRENT JOHNSTON
Art Unit: 2127
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nvidia Corporation
OA Round: 2 (Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 3y 5m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (297 granted / 359 resolved), +27.7% vs TC avg, above average
Interview Lift: +22.7% among resolved cases with interview
Avg Prosecution: 3y 5m typical timeline
Career History: 383 total applications across all art units, 24 currently pending
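
As a rough illustration of how the cohort statistics above could be derived (the record layout and field names are assumptions, not the vendor's actual schema), in Python:

from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # the application issued as a patent
    had_interview: bool  # an examiner interview was held during prosecution

def allow_rate(cases):
    # fraction of resolved cases that ended in a grant
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases):
    # allow-rate gap between the with-interview and no-interview cohorts
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# 297 grants out of 359 resolved cases: 297 / 359 = 0.827, displayed as 83%.
# The +22.7% interview lift is the same allow-rate calculation restricted to
# each cohort, then differenced.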

Statute-Specific Performance

§101: 31.4% (-8.6% vs TC avg)
§103: 33.3% (-6.7% vs TC avg)
§102: 9.8% (-30.2% vs TC avg)
§112: 16.8% (-23.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 359 resolved cases.
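
Each delta above is simply the examiner's per-statute rate minus the Tech Center average; the implied TC average works out to 40.0% in every row (for example, 31.4% + 8.6% = 40.0%), consistent with its being an estimate. A trivial sketch of that comparison follows; exactly what the rates measure (for instance, how often each rejection type is ultimately overcome) is not stated above, so the sketch leaves it abstract.

# Rates from the table above; the TC average implied by every row is 40.0%.
examiner_rates = {"§101": 0.314, "§103": 0.333, "§102": 0.098, "§112": 0.168}
TC_AVG = 0.400  # estimate, per the footnote above

for statute, rate in examiner_rates.items():
    print(f"{statute}: {rate:.1%} ({rate - TC_AVG:+.1%} vs TC avg)")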

Office Action (§103)
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the original application filed on 9/21/2022 and the Remarks and Amendments filed on 2/2/2026.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. § 103 as being obvious over Liu et al. (US 20210264278 A1, hereinafter “Liu”) in view of Ji et al. (US 20180232640 A1, hereinafter “Ji”).

Regarding claim 1, Liu discloses [a] processor, comprising: ([0149]; “executable by processors of one or more computing devices”) one or more circuits to iteratively train candidate neural network portions ([0026]; “progressively prune a neural network until one or more training conditions are satisfied. For example, the neural network pruning system can jointly train and prune a neural network for a set number of iterations or for a set time amount”, which discloses updating the scaling parameters during training. The scaling parameter is an iteratively updated portion of a neural network used to decide which layers or network portions remain; and [0149]; the hardware components include a circuit).

Liu fails to explicitly disclose, but Ji discloses, selectively identify one or more of the candidate neural network portions to be pruned based, at least in part, on a performance threshold ([0004]; “pruning a layer of a neural network having multiple layers using a threshold; and repeating the pruning of the layer of the neural network using a different threshold until a pruning error of the pruned layer reaches a pruning error allowance”, the threshold being related to a pruning error; and [0020-0021]; “a threshold for pruning a layer of a neural network is initialized. In some embodiments, the threshold may be initialized to an extreme end of a range of values, such as 0 or 1 … the layer of the neural network is pruned using the threshold. For example, the threshold is used to set some weights of the layer to be zero. In 104, the pruning error is calculated for the pruned layer”) and increase the performance threshold for a subsequent training iteration ([0023]; “If the pruning error has not reached the pruning error allowance, the threshold is changed in 108. Changing the threshold may be performed in a variety of ways. For example, the threshold may be changed by a fixed amount. In other embodiments, the threshold may be changed by an amount based on the difference of the pruning error and the pruning error allowance. In other embodiments, the threshold may be changed by an amount based on the current threshold”, wherein the performance threshold is based on a pruning error and is changed or increased for subsequent iterations; and [0043]; “the threshold Tl for the next iteration may be initialized to a value based on the past threshold Tl, but adjusted in a direction expected to reduce a number of pruning iterations to reach the new pruning error allowance ε”).

Liu and Ji are analogous art because both are concerned with neural network pruning. Before the effective filing date of the claimed invention, it would have been obvious to one skilled in neural network pruning to combine the performance thresholds and training iterations of Ji with the method of Liu to yield the predictable result of selectively identifying one or more of the candidate neural network portions to be pruned based, at least in part, on a performance threshold and increasing the performance threshold for a subsequent training iteration. The motivation for doing so would be to prune and retrain neural networks using automatic thresholds (Ji; [0002]).

Regarding claim 2, the rejection of claim 1 is incorporated and Liu discloses wherein the one or more circuits are to increase the performance metrics based on training progress for a current training epoch ([0026]; “For example, the neural network pruning system can jointly train and prune a neural network for a set number of iterations or for a set time amount. In another example, the neural network pruning system can jointly train and prune a neural network until the neural network converges and/or a minimum amount of network loss is achieved”, wherein the scaling parameter is updated after each training iteration or epoch, so the metric is calculated with respect to the current training epoch).

Regarding claim 3, the rejection of claim 1 is incorporated and Liu discloses calculate one or more weights corresponding to the one or more candidate neural network portions; and select the one or more candidate neural network portions based, at least in part, on the one or more weights ([0027]; “the neural network pruning system can jointly learn network weights and scaling parameters for each portion (e.g., layers or channels within those layers) of the neural network, determine a total loss, and back-propagate the loss to reduce total loss in the next iteration”, which discloses that each network portion or layer has associated weights and that pruning can consider those weights together).

Regarding claim 4, the rejection of claim 1 is incorporated and Liu fails to explicitly disclose, but Ji discloses, remove a subset of a first set of neural network portions based on the performance threshold; and obtain a first neural network layer from the first set of candidate neural network portions ([0023]; and [0043]). The motivation to combine Liu and Ji is the same as discussed above with respect to claim 1.

Regarding claim 5, the rejection of claim 1 is incorporated and Liu fails to explicitly disclose, but Ji discloses, wherein the one or more circuits are to linearly increase the performance threshold ([0023]; and [0043]). The motivation to combine Liu and Ji is the same as discussed above with respect to claim 1.
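
To make the Ji mechanism the rejection leans on concrete: the cited paragraphs describe pruning a layer with a threshold, measuring the pruning error, and raising the threshold until the error reaches an allowance that itself grows across training iterations. A minimal NumPy sketch of that loop follows; the relative-L2 error metric, the fixed step size, and the allowance schedule are illustrative assumptions, not values from Ji.

import numpy as np

def prune_layer(weights, error_allowance, step=0.01):
    # Threshold starts at an extreme end of its range (Ji [0020]) and is
    # raised by a fixed amount (one of the options listed in Ji [0023])
    # until the pruning error of the pruned layer reaches the allowance.
    threshold = 0.0
    while True:
        pruned = np.where(np.abs(weights) > threshold, weights, 0.0)
        error = np.linalg.norm(weights - pruned) / np.linalg.norm(weights)
        if error >= error_allowance:
            return pruned, threshold
        threshold += step

# Across training iterations the allowance (and hence the threshold that
# satisfies it) increases, mirroring the claimed "increase the performance
# threshold for a subsequent training iteration".
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
for allowance in (0.05, 0.10, 0.15):  # hypothetical schedule
    _, t = prune_layer(w, allowance)
    print(f"allowance={allowance:.2f} -> threshold reached {t:.3f}")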
Regarding claim 6, the rejection of claim 1 is incorporated and Liu discloses wherein the processor is part of one or more graphics processing units (GPUs) ([0149]; [0168]; [0174]; [0180]).

Regarding claim 7, the rejection of claim 1 is incorporated and Liu discloses generate a neural network comprising one or more of the candidate neural network portions ([0028-0029]).

Regarding claim 8, it is a system claim corresponding to the steps of claim 1, and is rejected for the same reasons as claim 1.

Regarding claim 9, the rejection of claim 8 is incorporated and Liu discloses obtain the set of candidate neural network portions; iteratively reduce the set of candidate neural network layers based, at least in part, on the one or more iteratively increasing neural network performance metrics; and select one or more neural network layers from the set of candidate neural network portions ([0028]; “the neural network pruning system can remove one or more portions associated with the lowest scaling parameters after a training iteration”).

Regarding claim 10, the rejection of claim 8 is incorporated and Liu fails to explicitly disclose, but Ji discloses, at each training epoch, calculate a value of the one or more performance thresholds ([0023]; and [0043]). The motivation to combine Liu and Ji is the same as discussed above with respect to claim 1.

Regarding claim 11, the rejection of claim 8 is incorporated and Liu discloses wherein the one or more processors are to selectively identify the one or more of the candidate neural network portions to be pruned based, at least in part, on one or more latency constraints ([0030]; “Accordingly, in many implementations, the neural network pruning system can gradually and automatically prune and morph a deep and wide neural network into a shallow and thin neural network that is tailored to a particular task and dataset while also maintaining overall accuracy and increasing efficiency”).

Regarding claim 12, the rejection of claim 8 is incorporated and Liu discloses iteratively update a set of weights based, at least in part, on training data; and selectively identify the one or more of the candidate neural network portions to be pruned based, at least in part, on the set of weights ([0027]; “For example, as described below, the neural network pruning system can jointly learn network weights and scaling parameters for each portion (e.g., layers or channels within those layers) of the neural network, determine a total loss, and back-propagate the loss to reduce total loss in the next iteration”, wherein the weights are updated during training and used with the scaling parameter for pruning decisions).

Regarding claim 13, the rejection of claim 8 is incorporated and Liu discloses wherein the one or more processors are to train the candidate neural network portions to perform one or more computer vision tasks ([0030]; “tailored to a particular task and dataset while also maintaining overall accuracy and increasing efficiency”, the particular task being computer vision; and [0006]).

Regarding claim 14, the rejection of claim 8 is incorporated and Liu fails to explicitly disclose, but Ji discloses, wherein the performance threshold is iteratively increased based on training progress ([0023]; and [0043]). The motivation to combine Liu and Ji is the same as discussed above with respect to claim 1.

Regarding claim 15, it is a method claim corresponding to the steps of claim 1, and is rejected for the same reasons as claim 1.
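
On the Liu side of the combination (see claims 9 and 12 above), network weights and per-portion scaling parameters are learned jointly, and the portions with the lowest scaling parameters are removed after a training iteration. A toy sketch follows, assuming a plain L1 shrinkage step as a stand-in for back-propagating Liu's total loss (the quoted passages do not specify the loss or update rule):

import numpy as np

rng = np.random.default_rng(0)
n_portions = 8                              # e.g., channels or layers
scales = rng.uniform(0.5, 1.5, n_portions)  # one scaling parameter per portion
active = np.ones(n_portions, dtype=bool)

for iteration in range(3):
    # Stand-in for "jointly learn network weights and scaling parameters,
    # determine a total loss, and back-propagate": an L1 sparsity penalty
    # shrinks every active scale toward zero.
    scales[active] -= 0.1 * np.sign(scales[active])
    # "Remove one or more portions associated with the lowest scaling
    # parameters after a training iteration" (Liu [0028]).
    candidate = np.argmin(np.where(active, scales, np.inf))
    active[candidate] = False

print("surviving portions:", np.flatnonzero(active))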
Regarding claim 16, the rejection of claim 15 is incorporated and Liu fails to explicitly disclose, but Ji discloses, calculating a set of values corresponding to a set of the candidate neural network portions; reducing the set of candidate neural network layers based, at least in part, on the set of values and an iteratively increasing neural network performance threshold; and selecting one or more neural network layers from the set of candidate neural network portions ([0023]; and [0043]). The motivation to combine Liu and Ji is the same as discussed above with respect to claim 1.

Regarding claim 17, the rejection of claim 15 is incorporated and Liu discloses training the candidate neural network portions to perform one or more natural language processing (NLP) tasks ([0030]; “tailored to a particular task and dataset while also maintaining overall accuracy and increasing efficiency”, the particular task being NLP; and [0006]).

Regarding claim 18, the rejection of claim 15 is incorporated and Liu discloses wherein the one or more candidate neural network portions correspond to one or more blocks of a data structure ([0024-0026]; describes layers/channels/blocks of the network data structure subjected to pruning).

Regarding claim 19, the rejection of claim 15 is incorporated and Liu discloses calculating the one or more candidate neural network portions by at least, at one or more times during training, reducing a set of the candidate neural network portions ([0028]).

Regarding claim 20, Liu discloses [a] non-transitory computer readable medium having stored thereon a set of instructions, which if performed by one or more processors, cause the one or more processors to at least perform the method of claim 15 (Liu; [0149]; and [0026-0028]; and Ji; [0023]; and [0043]). The motivation to combine Liu and Ji is the same as discussed above with respect to claim 1.

Response to Arguments

Applicant’s arguments and amendments, filed on 2/2/2026, with respect to the 35 USC § 102(a)(1) rejection of the pending claim have been fully considered but are moot because the arguments do not apply to the references used to reject the amended claims of the present application. Liu and Ji are now being used to render the independent claims obvious under 35 USC § 103.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Brent Hoover, whose telephone number is (303) 297-4403. The examiner can normally be reached Monday - Friday, 9-5 MST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abdullah Kawsar, can be reached on 571-270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BRENT JOHNSTON HOOVER/
Primary Examiner, Art Unit 2127

Prosecution Timeline

Sep 21, 2022: Application Filed
Sep 28, 2025: Non-Final Rejection (§103)
Feb 02, 2026: Response Filed
Mar 13, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602613 — PRIVACY-ENHANCED TRAINING AND DEPLOYMENT OF MACHINE LEARNING MODELS USING CLIENT-SIDE AND SERVER-SIDE DATA
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12603147 — PREDICTING PROTEIN STRUCTURES USING AUXILIARY FOLDING NETWORKS
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12585926 — ADJUSTING PRECISION AND TOPOLOGY PARAMETERS FOR NEURAL NETWORK TRAINING BASED ON A PERFORMANCE METRIC
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12585934 — COMPRESSING TOKENS BASED ON POSITIONS FOR TRANSFORMER MODELS
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12579215 — LEARNING ORDINAL REGRESSION MODEL VIA DIVIDE-AND-CONQUER TECHNIQUE
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview: 99% (+22.7%)
Median Time to Grant: 3y 5m
PTA Risk: Moderate

Based on 359 resolved cases by this examiner. Grant probability is derived from the career allow rate.
