Prosecution Insights
Last updated: April 19, 2026
Application No. 17/879,794

OPTIMIZING METHOD AND COMPUTER SYSTEM FOR NEURAL NETWORK AND COMPUTER-READABLE STORAGE MEDIUM

Status: Final Rejection (§101, §103)
Filed: Aug 03, 2022
Examiner: BENOURAIDA, AMINA MORENO
Art Unit: 2129
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Wistron Corporation
OA Round: 2 (Final)

Grant Probability: 0% (At Risk)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 3y 3m
Grant Probability With Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 2 resolved; -55.0% vs Tech Center average)
Interview Lift: +0.0% (minimal lift; based on resolved cases with an interview)
Typical Timeline: 3y 3m average prosecution; 16 applications currently pending
Career History: 18 total applications across all art units

Statute-Specific Performance

§101: 28.1% (-11.9% vs TC avg)
§103: 51.7% (+11.7% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§112: 6.7% (-33.3% vs TC avg)

Tech Center averages are estimates; figures are based on career data from 2 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d), based on an application filed in TAIWAN on 04/27/2022. The certified copy has been filed in parent Application No. 17/879,794, filed on 08/03/2022. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Response to Amendment

The amendment filed on December 17, 2025 has been entered and claims 1-20 are pending. Claims 7 and 17 have been withdrawn from consideration.

Response to Arguments

Applicant's arguments filed on December 17, 2025 have been fully considered but are not persuasive. Regarding the 35 U.S.C. 101 rejection, applicant argues that because independent claims 1 and 11 are not rejected under 35 U.S.C. 101, the dependent claims 5-7, 9, 15-17, and 19 must also be eligible. This argument is not persuasive. In the previous Non-Final Office Action mailed on 10/02/2025, claim 7 was rejected under 35 U.S.C. 101 as being directed to an abstract idea (i.e., a mathematical concept), specifically "converging a scaling factor of at least one batch normalization layer of the neural network." Applicant has amended independent claim 1 to incorporate the limitations of claim 7. Accordingly, claim 1 now recites the abstract idea (i.e., mathematical concept) previously rejected under 35 U.S.C. 101.

Applicant's arguments with respect to claims 1 and 11 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-6, 8-16, and 18-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1 and analogous claims 11 and 20:

Step 1 (whether the claim is to a statutory category): Yes, the claim is within the four statutory categories (a process, machine, manufacture, or composition of matter). Claim 1 recites a method and therefore falls within the process category.

Step 2A, Prong 1 (whether the claim is directed to a judicial exception): Yes. "Converging a scaling factor of at least one batch normalization layer of the neural network…wherein the batch normalization is normalization of individual mini-batches until a normal distribution is formed with a mean of 0 and a standard deviation of 1" recites a mathematical concept, as it involves mathematical relationships, mathematical formulas or equations, and mathematical calculations (see MPEP 2106.04(a)(2)(I)).

Step 2A, Prong 2 (whether the claim recites additional elements that integrate the exception into a practical application): No. As to "then sequentially pruning the neural network using two different pruning algorithms; and retraining a pruned neural network in response to each of the pruning algorithms pruning the neural network," the claim does not recite additional elements that integrate the judicial exception into a practical application beyond the words "apply it" (or an equivalent), i.e., mere instructions to implement an abstract idea on a computer (see MPEP 2106.05(f)).
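As context for the batch-normalization limitation quoted above, the operation the claim describes (normalizing each mini-batch per channel toward a mean of 0 and a standard deviation of 1, then applying the trainable scaling factor) can be sketched in NumPy. This is a minimal illustration, not the applicant's implementation; the names `gamma` and `beta` follow the conventional batch-normalization formulation rather than the claims.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize a mini-batch x of shape (batch, channels) per channel to
    roughly zero mean and unit standard deviation, then apply the trainable
    scaling factor (gamma) and shift (beta)."""
    mu = x.mean(axis=0)               # per-channel mini-batch mean
    sigma = x.std(axis=0)             # per-channel mini-batch std deviation
    x_hat = (x - mu) / (sigma + eps)  # approximately N(0, 1) per channel
    return gamma * x_hat + beta       # gamma near zero => channel output vanishes

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(64, 4))  # 64-sample mini-batch, 4 channels
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
```

With `gamma = 1` and `beta = 0` the output has per-channel mean near 0 and standard deviation near 1; a scaling factor converged toward zero, as in the channel-pruning criterion attributed to Liu below, makes a channel's output negligible.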
Step 2B (inventive concept): No, the claim does not add significantly more, since the intended practical application is well-understood, routine, and conventional and is stated at a generic level (i.e., "apply it"; see MPEP 2106.05(f)). Therefore, claims 1 and 11 are ineligible.

Regarding claim 2 and analogous claim 12: Further modifies the abstract idea of claim 1. Step 2A, Prong 1: Yes. "Wherein one of the two pruning algorithms is a channel pruning algorithm" recites a mathematical concept, as it involves mathematical relationships, mathematical formulas or equations, and mathematical calculations (see MPEP 2106.04(a)(2)(I)). Therefore, claims 2 and 12 are ineligible.

Regarding claim 3 and analogous claim 13: Further modifies the abstract idea of claim 2. Step 2A, Prong 1: Yes. "Wherein the other one of the two pruning algorithms is a weight pruning algorithm" recites a mathematical concept (see MPEP 2106.04(a)(2)(I)). Therefore, claims 3 and 13 are ineligible.

Regarding claim 4 and analogous claim 14: Further modifies the abstract idea of claim 3. Step 2A, Prong 1: Yes. "Wherein sequentially pruning the neural network using the two different pruning algorithms comprises: pruning the neural network using the weight pruning algorithm in response to pruning the neural network using the channel pruning algorithm" recites a mathematical concept (see MPEP 2106.04(a)(2)(I)). Therefore, claims 4 and 14 are ineligible.

Regarding claim 5 and analogous claim 15: Further modifies the abstract idea of claim 2.

Step 2A, Prong 1: Yes. "Determining at least one redundant channel to be pruned according to the first pruning set and the second pruning set," wherein by observing the sets one can judge redundancy, describes a mental process that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper (including an observation or judgment). See MPEP 2106.04(a)(2)(III).

Step 2A, Prong 2: No. As to "obtaining a first pruning set according to the first channel pruning algorithm, wherein the first pruning set comprises at least one channel to be pruned selected by the first channel pruning algorithm" and "obtaining a second pruning set according to the second channel pruning algorithm, wherein the second pruning set is at least one channel to be pruned selected by the second channel pruning algorithm," the claim does not recite additional elements that integrate the judicial exception into a practical application; these limitations amount to insignificant extra-solution activity (i.e., selecting a particular data source or type of data to be manipulated) to implement an abstract idea on a computer (see MPEP 2106.05(g)).

Step 2B (inventive concept): No, the claim does not add significantly more, since the intended practical application is well-understood, routine, and conventional and is stated at a generic level (i.e., "apply it"; see MPEP 2106.05(f)). Therefore, claims 5 and 15 are ineligible.
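The limitation addressed for claims 5 and 15, determining at least one redundant channel from a first and a second pruning set, reduces to simple set arithmetic; claim 6 narrows the determination to the sets' intersection. A minimal sketch (the channel indices and the selection criteria named in the comments are hypothetical, not taken from the claims or references):

```python
def redundant_channels(first_pruning_set, second_pruning_set):
    """Treat channels selected by both channel-pruning criteria as redundant
    (the intersection of the two pruning sets)."""
    return first_pruning_set & second_pruning_set

# Hypothetical selections from two different channel-pruning criteria:
first_set = {0, 2, 5, 7}   # e.g., channels with the smallest weight norms
second_set = {2, 3, 7, 9}  # e.g., channels with near-zero BN scaling factors
redundant = redundant_channels(first_set, second_set)  # {2, 7}
```

The examiner's mental-process characterization tracks this: with two small sets written out, the intersection can be found by observation.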
(US20190362235A1) in view of Liu et al. (US20210264278A1).

Regarding claim 1 and analogous claims 11 and 20:

Xu teaches: then sequentially pruning the neural network using two different pruning algorithms; and ([0053], "In the case of hybrid pruning, an additional fine-grained pruning step may be performed to further reduce the size and complexity of the model following the completion of coarse-grained channel pruning (i.e., wherein this is identified as one pruning algorithm). In one example, fine-grained pruning may be performed in connection with the training or fine-tuning of the pruned network resulting from a preceding coarse-grained prune. In one example, statistics-aware weight pruning may be performed for fine-grained weight pruning (i.e., wherein this is identified as the second pruning algorithm). For instance, a layer-wise weight threshold may be computed based on the statistical distribution of full dense weights in each channel-pruned layer and weight pruning may be performed to mask out those weights that are less than the corresponding layer-specific threshold." [sequentially pruning the neural network using two different pruning algorithms] (i.e., wherein the first step prunes channels, the 'first pruning algorithm,' and the next step prunes individual weights within the pruned neural network, the 'second pruning algorithm'; the two are therefore performed in sequence))

retraining a pruned neural network in response to each of the pruning algorithms pruning the neural network ([0052], "a pruned version of a neural network from coarse-grained pruning [a pruned neural network] performed by a pruner tool. For instance, beginning with an initial, unpruned or dense version of a neural network model 505, sensitivity-test-based coarse-grained pruning may be performed 560 (such as described in the example of FIG. 5A). With all of the target layers now pruned (e.g., on a channel-, kernel-, or neuron-wise basis), a pruned version of the neural network may be formed from the determined layers and may be fine-tuned and retrained [retrain] 565. Fine-tuning 565 is used to restore, to the degree possible, the pruned model to original model performance. When this fine-tuning 565 is completed, the pruned (or thinned or sparse) neural network model is ready for use and deployment on target computing systems (e.g., resource-constrained computing systems). In the case of hybrid pruning, an additional fine-grained pruning step may be performed to further reduce the size and complexity of the model following the completion of coarse-grained channel pruning." (i.e., wherein the pruned neural network is pruned by additional fine-grained pruning following the coarse-grained pruning and is therefore pruned in 'response'))

Xu does not explicitly teach: converging a scaling factor of at least one batch normalization layer of the neural network, wherein the batch normalization is normalization of individual mini-batches until a normal distribution is formed with a mean of 0 and a standard deviation of 1.

Liu teaches: converging a scaling factor of at least one batch normalization layer of the neural network ([0076], "In one or more implementations, channel pruning can include an algorithm to remove inactive or less active channels of a neural network by monitoring the channel scaling parameter (e.g., γ) in the batch-normalization layers. If the channel scaling parameter value is close to zero during a training iteration, mathematically that channel is not outputting an impactful level of information. Indeed, when the channel scaling parameter value is close to zero, the channel is inactive and removing the channel will have an insignificant effect on the final performance of the neural network")

wherein the batch normalization is normalization of individual mini-batches until a normal distribution is formed with a mean of 0 and a standard deviation of 1 ([0082], "Additionally, μB can represent the mean values of input activations over mini-batch β [mini-batches] while σB can represent the standard deviation values of input activations over mini-batch β. Further, in some implementations, γ and β are trainable parameters, such as trainable affine transformation parameters." (i.e., wherein under the broadest reasonable interpretation batch normalization is interpreted as forming a distribution with a mean of 0 and a standard deviation of 1, i.e., batch normalization using the batch mean and standard deviation))

Liu and Xu are both related to the same field of endeavor (i.e., pruning a neural network). In view of the teachings of Liu, it would have been obvious for a person of ordinary skill in the art to apply the teachings of Liu to Xu before the effective filing date of the claimed invention in order to improve the optimization of pruning a neural network (Liu, Abstract, "the neural network architecture pruning system jointly trains portions of a neural network while progressively pruning redundant subsets of the neural network at each training iteration. In many instances, the neural network architecture pruning system increases the accuracy of the neural network by progressively removing excess or redundant portions (e.g., channels or layers) of the neural network.")

Regarding claim 11, Xu further teaches: a memory configured to store a code; and ([0027], "The preprocessing system 105 may further include one or more computer memory elements 215 to store software code (e.g., to implement all or a portion of the network pruner tool 205 and other tools (e.g., 230, 235) of the preprocessing system) as well as data (e.g., 230 b, 240, etc.) used in operation of the preprocessing system 105 generally, including the network pruner tool 205 specifically.")

a processor coupled to the memory and configured to load and execute the code to: ([0027], "a system 105 for use in performing preprocessing on existing neural network models (e.g., 230 a-b) to adapt and prepare the models for distribution to and use by resource-constrained devices (e.g., 125) and other computing systems, where it is desired to utilize lean, or sparse, versions of a neural network model. In one example, a pre-processing system 105 may implement a network pruner tool 205, implemented in hardware- and/or software-based logic on the preprocessing system 105. The preprocessing system 105 may include one or more data processing devices (e.g., central processing units (CPUs), graphics processing units (GPUs), tensor processing units (TPUs)) and corresponding hardware accelerators (e.g., machine learning accelerators, matrix arithmetic accelerators, etc.) co-functioning with the processors 210.")

Regarding claim 9 and analogous claim 19:

Xu further teaches: comparing an accuracy loss of the pruned neural network with a quality threshold; and ([0044], "Sensitivity based channel pruning may seek to identify the upper bound in the percentage of the number of intact output channels in each layer with acceptable accuracy loss (i.e., wherein the threshold is met), with the potential loss in accuracy recoverable with fine-tuning of the percentage of pruned channels [comparing an accuracy loss]. When the neural network model is damaged too dramatically through pruning, it becomes more difficult to recover its accuracy. To define how much accuracy loss is too much, an accuracy tolerance [quality threshold] may be defined for the neural network (which may be based on the data set on which the original, dense version of the neural network was trained (e.g., an accuracy tolerance of 3 to 5% for ResNet50 on ImageNet).")

changing a pruning ratio of at least one of the two pruning algorithms according to a comparison result with the quality threshold ([0047], "Continuing with the foregoing example, if it is determined during the test (at 535) that the initial prune of a layer allows the neural network to still retain sufficient accuracy, the initial pruning percentage may be increased (e.g., incremented (e.g., by 5%, 10%, etc.)) [changing a pruning ratio] by the pruner tool and a new mask may be created (at 525) to prune an additional number of channels from the layer according to the incremented percentage. For instance, in one example, the pruning percentage may be incremented by 10% to bring the pruning percentage to 40%. A corresponding mask may be created and the next 10% of the sorted channels may be pruned to bring the number of pruned channels to 40%. A version of the neural network may then be generated and tested that includes the 40%-pruned version of the particular layer. If the accuracy of the pruned neural network again remained above the threshold, the pruning percentage may be again incremented and the steps repeated until the observed accuracy of the test falls below the accuracy threshold [quality threshold].")

Claims 2-5, 8, 10, 12-15, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Xu et al. in view of Liu et al., and further in view of Zhao et al. (non-patent literature, "Joint Channel and Weight Pruning for Model Acceleration on Mobile Devices").

Regarding claim 2 and analogous claim 12:

Xu, as modified by Liu, teaches the method of claim 1. Xu, as modified by Liu, does not explicitly teach: wherein one of the two pruning algorithms is a channel pruning algorithm.

Zhao teaches: wherein one of the two pruning algorithms is a channel pruning algorithm (Page 1, Introduction, paragraph 2, "Pruning has been one of the predominant approaches to accelerating large deep neural networks. The pruning methods can be roughly divided into two categories, channel pruning which removes parameters in a channel-wise manner" (i.e., wherein one of the algorithms is a channel pruning algorithm))

Zhao and Xu are both related to the same field of endeavor (i.e., pruning a neural network). In view of the teachings of Zhao, it would have been obvious for a person of ordinary skill in the art to apply the teachings of Zhao to Xu before the effective filing date of the claimed invention in order to improve the optimization of pruning a neural network (Zhao, Abstract, "Among deep network acceleration related approaches, pruning is a widely adopted practice to balance the computational resource consumption and the accuracy, where unimportant connections can be removed either channel-wisely or randomly with a minimal impact on model accuracy.
The channel pruning instantly results in a significant latency reduction, while the random weight pruning is more flexible to balance the latency and accuracy.")

Regarding claim 3 and analogous claim 13:

Xu, as modified by Liu and Zhao, teaches the method of claim 2. Xu, as modified by Liu, does not explicitly teach: wherein the other one of the two pruning algorithms is a weight pruning algorithm.

Zhao further teaches: wherein the other one of the two pruning algorithms is a weight pruning algorithm (Page 1, Introduction, paragraph 2, "Pruning has been one of the predominant approaches to accelerating large deep neural networks. The pruning methods can be roughly divided into two categories…and weight pruning which prunes parameters randomly" (i.e., wherein the other of the two algorithms is a weight pruning algorithm))

The motivation for claims 3 and 13 is the same as the motivation for claim 2.

Regarding claim 4 and analogous claim 14:

Xu, as modified by Liu and Zhao, teaches the method of claim 3. Xu further teaches: wherein sequentially pruning the neural network using the two different pruning algorithms comprises: ([0053], "In the case of hybrid pruning, an additional fine-grained pruning step may be performed to further reduce the size and complexity of the model following the completion of coarse-grained channel pruning (i.e., wherein this is identified as one pruning algorithm). In one example, fine-grained pruning may be performed in connection with the training or fine-tuning of the pruned network resulting from a preceding coarse-grained prune. In one example, statistics-aware weight pruning may be performed for fine-grained weight pruning (i.e., wherein this is identified as the second pruning algorithm). For instance, a layer-wise weight threshold may be computed based on the statistical distribution of full dense weights in each channel-pruned layer and weight pruning may be performed to mask out those weights that are less than the corresponding layer-specific threshold." [sequentially pruning the neural network using two different pruning algorithms] (i.e., wherein the first step prunes channels, the 'first pruning algorithm,' and the next step prunes individual weights within the pruned neural network, the 'second pruning algorithm'; the steps are therefore performed in sequence))

pruning the neural network using the weight pruning algorithm in response to pruning the neural network using the channel pruning algorithm ([0052], "a pruned version of a neural network from coarse-grained pruning [a pruned neural network] performed by a pruner tool. For instance, beginning with an initial, unpruned or dense version of a neural network model 505, sensitivity-test-based coarse-grained pruning may be performed 560 (such as described in the example of FIG. 5A). With all of the target layers now pruned (e.g., on a channel-, kernel-, or neuron-wise basis), a pruned version of the neural network may be formed from the determined layers and may be fine-tuned and retrained [retrain] 565. Fine-tuning 565 is used to restore, to the degree possible, the pruned model to original model performance. When this fine-tuning 565 is completed, the pruned (or thinned or sparse) neural network model is ready for use and deployment on target computing systems (e.g., resource-constrained computing systems). In the case of hybrid pruning, an additional fine-grained pruning step may be performed to further reduce the size and complexity of the model following the completion of coarse-grained channel pruning." (i.e., wherein the pruned neural network is pruned by additional fine-grained pruning following the coarse-grained pruning and therefore in 'response'))

The motivation for claims 4 and 14 is the same as the motivation for claim 2.

Regarding claim 5 and analogous claim 15:

Xu, as modified by Liu and Zhao, teaches the method of claim 2. Xu further teaches: wherein the channel pruning algorithm comprises a first channel pruning algorithm and a second channel pruning algorithm, and ([0046], "Continuing with the example of FIG. 5A, an initial prune may be defined such that a particular starting percentage of channels is identified for pruning. Any non-zero starting percentage may be set during configuration of the pruner tool. For instance, in an initial prune, 30% of the lowest ranked channels (e.g., those with the lowest aggregate weights) may be selected for pruning and a mask may be generated 525 [first channel] (i.e., 'a mask' is interpreted as a first channel) based on this pruning percentage and the sorting 520. The channels may then be pruned 530 according to the mask to generate a pruned version of the layer." [0047], "Continuing with the foregoing example, if it is determined during the test (at 535) that the initial prune of a layer allows the neural network to still retain sufficient accuracy, the initial pruning percentage may be increased (e.g., incremented (e.g., by 5%, 10%, etc.)) by the pruner tool and a new mask may be created (at 525) [second channel] (i.e., wherein 'a new mask' is interpreted as the second channel) to prune an additional number of channels from the layer according to the incremented percentage.")

sequentially pruning the neural network using the two different pruning algorithms comprises: ([0053], "In the case of hybrid pruning, an additional fine-grained pruning step may be performed to further reduce the size and complexity of the model following the completion of coarse-grained channel pruning (i.e., wherein this is identified as one pruning algorithm). In one example, fine-grained pruning may be performed in connection with the training or fine-tuning of the pruned network resulting from a preceding coarse-grained prune. In one example, statistics-aware weight pruning may be performed for fine-grained weight pruning (i.e., wherein this is identified as the second pruning algorithm). For instance, a layer-wise weight threshold may be computed based on the statistical distribution of full dense weights in each channel-pruned layer and weight pruning may be performed to mask out those weights that are less than the corresponding layer-specific threshold." [sequentially pruning the neural network using two different pruning algorithms] (i.e., wherein the steps are performed in sequence, channels first and then individual weights))

Xu, as modified by Zhao, does not explicitly teach: obtaining a first pruning set according to the first channel pruning algorithm, wherein the first pruning set comprises at least one channel to be pruned selected by the first channel pruning algorithm; obtaining a second pruning set according to the second channel pruning algorithm, wherein the second pruning set is at least one channel to be pruned selected by the second channel pruning algorithm; and determining at least one redundant channel to be pruned according to the first pruning set and the second pruning set.

Liu teaches: obtaining a first pruning set according to the first channel pruning algorithm, wherein the first pruning set comprises at least one channel to be pruned selected by the first channel pruning algorithm; ([0023]-[0024], "Further, by removing portions of a neural network, the neural network pruning system can increase the efficiency of the neural network. To illustrate, in one or more implementations, the neural network pruning system can initialize a convolutional neural network that includes multiple layers (i.e., batch-normalization layers) and multiple network weights [first channel algorithm].
Next, the neural network pruning system can prune the convolutional neural network based on a pruning parameter across multiple iterations while jointly learning the neural network weights and scaling parameters”…“In particular, the neural network pruning system can iteratively update the neural network weights and scaling parameters for each portion (e.g., channel or layer) of the neural network [a first pruning set], determine portions of the neural network that generate a scaling parameter not satisfying the pruning parameter, and modify the architecture of the neural network by removing the determined portions.” (i.e., wherein the selected channels are pruned based on the first channel algo)) obtaining a second pruning set according to the second channel pruning algorithm, wherein the second pruning set is at least one channel to be pruned selected by the second channel pruning algorithm; and ([0023]-[0024], “Further, by removing portions of a neural network, the neural network pruning system can increase the efficiency of the neural network. To illustrate, in one or more implementations, the neural network pruning system can initialize a convolutional neural network that includes multiple layers (i.e., batch-normalization layers) and multiple network weights [second channel algorithm] (i.e., wherein ‘multiple’ is interpreted as being more than one). 
Next, the neural network pruning system can prune the convolutional neural network based on a pruning parameter across multiple iterations while jointly learning the neural network weights and scaling parameters”…“In particular, the neural network pruning system can iteratively update the neural network weights and scaling parameters for each portion (e.g., channel or layer) of the neural network [a second pruning set], determine portions of the neural network that generate a scaling parameter not satisfying the pruning parameter, and modify the architecture of the neural network by removing the determined portions.” (i.e., wherein the selected channels are pruned based on the second channel algo)) [at least one channel to be pruned selected].”) determining at least one redundant channel to be pruned according to the first pruning set and the second pruning set ([0027]. “Further, the neural network pruning system can iteratively train and prune a convolutional neural network. For example, as described below, the neural network pruning system can jointly learn network weights and scaling parameters for each portion (e.g., layers or channels within those layers) [first pruning set and the second pruning set] of the neural network, determine a total loss, and back-propagate the loss to reduce total loss in the next iteration. (i.e., wherein for each iteration there is a pruning set)”) Liu and Xu are both related to the same field of endeavor (i.e., pruning a neural network). In view of the teachings of Liu it would have been obvious for a person of ordinary skill in the art to apply the teachings of Liu to Xu before the effective filing date of the claimed invention in order to improve the optimization of pruning a neural network (Liu, Abstract, “the neural network architecture pruning system jointly trains portions of a neural network while progressively pruning redundant subsets of the neural network at each training iteration. 
In many instances, the neural network architecture pruning system increases the accuracy of the neural network by progressively removing excess or redundant portions (e.g., channels or layers) of the neural network.”)

Regarding claim 8: Xu, as modified by Liu, teaches the method of claim 1. Xu, as modified by Zhao, does not explicitly teach: receiving an input operation, wherein the input operation is used to set a pruning ratio, and at least one of the two pruning algorithms prunes according to the pruning ratio. Liu further teaches: receiving an input operation ([0056], “In various implementations, the size parameter is provided via user input.” (i.e., wherein the input is received from a user)); wherein the input operation is used to set a pruning ratio, and at least one of the two pruning algorithms prunes according to the pruning ratio ([0056], “The neural network pruning system can identify the size parameter in a variety of ways. For example, in one or more implementations, the size parameter is based on a default value (e.g., reduce the neural network by 30%, 50%, 70%, 90%). In some implementations, the size parameter is based on the hardware constraints of a computing device. For instance, when pruning a neural network to be implemented on a particular computing device (e.g., a mobile client device), the neural network pruning system can identify the size parameter based on the memory capacity and/or availability of the computing device. In various implementations, the size parameter is provided via user input. For instance, the neural network pruning system receives user input indicating the size parameter and/or other tuning parameters to apply when pruning the neural network.” (i.e., wherein the user input is used to set size parameters and the pruning is based on the ratio set by the user)) The motivation for claim 8 is the same motivation for claim 5.

Regarding claim 10: Xu, as modified by Liu and Zhao, teaches the method of claim 8.
Xu, as modified by Zhao, does not explicitly teach: providing a user interface; and receiving a determination of the pruning ratio or a quality threshold through the user interface. Liu further teaches: providing a user interface ([0056], “In various implementations, the size parameter is provided via user input. For instance, the neural network pruning system receives user input indicating the size parameter and/or other tuning parameters to apply when pruning the neural network.” (i.e., wherein a user interface is provided to receive user input)); and receiving a determination of the pruning ratio or a quality threshold through the user interface ([0042], “As used herein, the term “pruning parameter” refers to a factor indicating how to prune a neural network. For instance, a pruning parameter can correspond to a network size pruning parameter (or simply “size parameter”) that indicates a size of the pruned neural network. For example, the size parameter can indicate an amount to remove from the neural network or the final size of the neural network [determination of the pruning ratio]. In another instance, the pruning parameter can correspond to a relative condition pruning parameter (or simply “relative parameter”). For instance, the relative condition pruning parameter can indicate a pruning sensitivity ratio/rate or a threshold relevance condition for one or more portions of the neural network.”… [0056], “In various implementations, the size parameter is provided via user input. For instance, the neural network pruning system receives user input indicating the size parameter and/or other tuning parameters to apply when pruning the neural network.” (i.e., wherein a user interface is provided to receive user input)) The motivation for claim 10 is the same motivation for claim 5.

Regarding claim 18: Xu, as modified by Liu, teaches the method of claim 11.
Xu further teaches: a display coupled to the processor, ([0099], “In some optional examples, various input/output (I/O) devices may be present within, or connected to, the IoT device 1150. For example, a display or other output device 1184 may be included to show information,”) wherein the processor is further configured to: provide a user interface through the display; and ([0115], “Chipset 1390 may be in communication with a bus 1320 via an interface circuit 1396. Bus 1320 may have one or more devices that communicate over it, such as a bus bridge 1318 and I/O devices 1316. Via a bus 1310, bus bridge 1318 may be in communication with other devices such as a user interface 1312 (such as a keyboard, mouse, touchscreen, or other input devices), communication devices 1326 (such as modems, network interface devices, or other types of communication devices that may communicate through a computer network 1360), audio I/O devices 1314, and/or a data storage device 1328. Data storage device 1328 may store code 1330, which may be executed by processors 1370”) wherein at least one of the two pruning algorithms prunes according to the pruning ratio, and the quality threshold is used to change the pruning ratio ([0047], “Continuing with the foregoing example, if it is determined during the test (at 535) that the initial prune of a layer allows the neural network to still retain sufficient accuracy, the initial pruning percentage may be increased (e.g., incremented (e.g., by 5%, 10%, etc.)) [changing a pruning ratio] by the pruner tool and a new mask may be created (at 525) to prune an additional number of channels from the layer according to the incremented percentage. For instance, in one example, the pruning percentage may be incremented by 10% to bring the pruning percentage to 40%. A corresponding mask may be created and the next 10% of the sorted channels may be pruned to bring the number of pruned channels to 40%. 
A version of the neural network may then be generated and tested that includes the 40%-pruned version of the particular layer. If the accuracy of the pruned neural network again remained above the threshold, the pruning percentage may be again incremented and the steps repeated until the observed accuracy of the test falls below the accuracy threshold. [quality threshold]”) Xu does not explicitly teach: receive determination of a pruning ratio or a quality threshold through the user interface Liu further teaches: receive determination of a pruning ratio or a quality threshold through the user interface ([0042], “As used herein, the term “pruning parameter” refers to a factor indicating how to prune a neural network. For instance, a pruning parameter can correspond to a network size pruning parameter (or simply “size parameter”) that indicates a size of the pruned neural network. For example, the size parameter can indicate an amount to remove from the neural network or the final size of the neural network [determination of the pruning ratio]. In another instance, the pruning parameter can correspond to a relative condition pruning parameter (or simply “relative parameter”). For instance, the relative condition pruning parameter can indicate a pruning sensitivity ratio/rate or a threshold relevance condition for one or more portions of the neural network.”… ([0056], “In various implementations, the size parameter is provided via user input. For instance, the neural network pruning system receives user input indicating the size parameter and/or other tuning parameters to apply when pruning the neural network.” (i.e., wherein a user interface is provided to receive user input))) The motivation for claim 18 is the same motivation for claim 5. Claim(s) 6 and 16 is/are rejected under 35 U.S.C. 
103 as being unpatentable over Xu et al., in view of Zhao and Liu, and further in view of Kuang et al., Non-Patent Literature (“Network pruning via probing the importance of filters”).

Regarding claim 6 and analogous claim 16: Xu, as modified by Zhao and Liu, teaches the method of claim 5. Xu, as modified by Zhao, does not explicitly teach: wherein determining the at least one redundant channel to be pruned according to the first pruning set and the second pruning set comprises: Liu further teaches: wherein determining the at least one redundant channel to be pruned according to the first pruning set and the second pruning set comprises: ([0027], “Further, the neural network pruning system can iteratively train and prune a convolutional neural network. For example, as described below, the neural network pruning system can jointly learn network weights and scaling parameters for each portion (e.g., layers or channels within those layers) [first pruning set and the second pruning set] of the neural network, determine a total loss, and back-propagate the loss to reduce total loss in the next iteration.” (i.e., wherein for each iteration there is a pruning set)) Kuang teaches: determining the at least one redundant channel according to an intersection of the first pruning set and the second pruning set (Section 4.4.3, paragraphs 1-2, “We analyze the stability of our algorithm from two aspects: the consistency of the filters marked as unimportant and the structural similarity of the pruned models. To verify consistency, we performed a single pruning iteration for ResNet-32 on CIFAR-10 and recorded the filters marked as unimportant. We repeated the experiment five times with different random seeds, and then calculated the IOU (Intersection over Union) of filters marked as unimportant between different experiments. The results are shown in Fig.
4a, we can see that the filters marked as unimportant in different experiments have high consistency (IOU is greater than 0.89).”) Kuang and Xu are both related to the same field of endeavor (i.e., pruning a neural network). In view of the teachings of Kuang, it would have been obvious for a person of ordinary skill in the art to apply the teachings of Kuang to Xu before the effective filing date of the claimed invention in order to improve the optimization of pruning a neural network by evaluating the importance of each filter to prune (Kuang, Abstract, “Filter pruning is one of the most effective approaches to reduce the storage and computational cost of convolutional neural networks. How to measure the importance of each filter is the key problem for filter pruning.”)

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMINA BENOURAIDA whose telephone number is (571)272-4340. The examiner can normally be reached Monday-Friday 8:30am-5pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael J. Huntley, can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /AMINA MORENO BENOURAIDA/Examiner, Art Unit 2129 /MICHAEL J HUNTLEY/Supervisory Patent Examiner, Art Unit 2129
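The two techniques the rejection maps onto the claims can be sketched in code: determining redundant channels as the intersection of two pruning sets (the claim 6/16 limitation mapped to Kuang), and incrementally raising a pruning ratio while accuracy stays above a quality threshold (the claim 18 limitation mapped to Xu [0047]). This is an illustrative sketch only, not code from the application or any cited reference; the two channel-scoring criteria (L1 weight norms and batch-normalization scaling factors) and the `evaluate` callback are hypothetical stand-ins.

```python
import numpy as np

def l1_norm_candidates(weights, ratio):
    """Hypothetical 'first channel pruning algorithm': nominate the channels
    with the smallest L1 weight norms. weights has shape (out_channels, ...)."""
    scores = np.abs(weights).sum(axis=tuple(range(1, weights.ndim)))
    k = int(len(scores) * ratio)
    return set(int(i) for i in np.argsort(scores)[:k])

def bn_scale_candidates(gammas, ratio):
    """Hypothetical 'second channel pruning algorithm': nominate the channels
    with the smallest batch-normalization scaling factors (cf. Liu's jointly
    learned scaling parameters)."""
    k = int(len(gammas) * ratio)
    return set(int(i) for i in np.argsort(np.abs(gammas))[:k])

def redundant_channels(weights, gammas, ratio):
    """Redundant channels = intersection of the first and second pruning sets,
    as in the claim 6/16 limitation."""
    first_set = l1_norm_candidates(weights, ratio)
    second_set = bn_scale_candidates(gammas, ratio)
    return first_set & second_set

def incremental_prune_ratio(evaluate, start=0.3, step=0.1, threshold=0.9):
    """Hypothetical version of Xu's loop ([0047]): keep incrementing the
    pruning percentage while the pruned network's accuracy (reported by the
    caller-supplied evaluate(ratio) callback) stays at or above the quality
    threshold; return the last ratio that passed, or None if none did."""
    ratio, best = start, None
    while ratio <= 1.0 and evaluate(ratio) >= threshold:
        best = ratio
        ratio = round(ratio + step, 10)  # avoid float drift on repeated adds
    return best
```

For example, with an 8-channel layer and `ratio=0.5`, each criterion nominates its 4 lowest-scoring channels, and only channels nominated by both are treated as redundant; a stricter quality threshold in `incremental_prune_ratio` stops the loop at a lower ratio, mirroring how the threshold "is used to change the pruning ratio" in claim 18.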

Prosecution Timeline

Aug 03, 2022: Application Filed
Sep 30, 2025: Non-Final Rejection (§101, §103)
Dec 17, 2025: Response Filed
Mar 21, 2026: Final Rejection (§101, §103) (current)


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 0%
With Interview: 0% (+0.0%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate

Based on 2 resolved cases by this examiner. Grant probability derived from career allow rate.
