Prosecution Insights
Last updated: April 19, 2026
Application No. 18/073,269

CONTROL METHOD AND SYSTEM BASED ON LAYER-WISE ADAPTIVE CHANNEL PRUNING

Final Rejection: §101 and §103
Filed: Dec 01, 2022
Examiner: SACKALOSKY, COREY MATTHEW
Art Unit: 2128
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics Co., Ltd.
OA Round: 2 (Final)
Grant Probability: 64% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 4y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 64% of resolved cases (16 granted / 25 resolved; +9.0% vs TC avg)
Interview Lift: +49.4% (allow rate in resolved cases with an interview vs. without)
Typical Timeline: 4y 2m average prosecution; 39 applications currently pending
Career History: 64 total applications across all art units

Statute-Specific Performance

§101: 42.0% (+2.0% vs TC avg)
§103: 38.0% (-2.0% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 25 resolved cases.

Office Action

§101 §103
DETAILED ACTION

This Office Action is in response to the amendments filed on 12/23/2025. Claims 1, 2, 10, 11, 19 and 20 are currently amended. Claims 1-20 are currently pending in this application and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

In reference to Applicant's arguments on pages 12-15 regarding rejections made under 35 U.S.C. 101:

Claims 1-20 are rejected under 35 U.S.C. § 101 as allegedly being directed to an abstract idea without significantly more. Applicant respectfully traverses the rejection. In the rejection, the Office Action alleges independent claim 1 recites features that can be categorized as a mental process. Applicant respectfully disagrees. Applicant respectfully submits that performing channel pruning to obtain a channel-pruned model could not be practically performed as a mental process. Applicant also respectfully submits these features are not directed to certain methods of organizing human activity or mathematical concepts.

Applicant respectfully submits these additional features integrate any alleged abstract idea into a practical application. Specifically, as discussed in paragraph [0045] of the specification, these features allow for "the influences of the resource memory occupancy amount reduction and the computation amount reduction on the throughput increase may be analyzed and then the channel pruning is performed based on the factor having greater influence." In this regard, the claimed features provide a technical improvement. Additionally, Applicant respectfully submits these features do not merely indicate a field of use or technological environment in which to apply a judicial exception. Rather, the features perform channel pruning to obtain a channel-pruned model, which is then monitored with respect to a certain model analysis accuracy level, and employed in deep-learning model computation acceleration. Accordingly, amended claim 1 integrates any possible judicial exception into a practical application and is patent eligible under Prong Two of revised Step 2A of the Alice test.

Examiner's response: Applicant's arguments have been fully considered but are found to be not persuasive. Applicant argues that the claims as amended do not recite mental processes and, specifically, that channel pruning cannot be performed in the human mind. Examiner agrees that channel pruning cannot be performed in the human mind; however, the claims as amended recite actions that can reasonably be performed in the human mind. The claims recite actions of profiling an accuracy pattern curve, comparing the influence of different techniques, and determining whether to execute a certain technique based on the comparisons. Profiling a graph, comparing techniques, and deciding which technique to use are all actions that can be performed in the human mind, and as such they recite mental processes.

Applicant argues that the claims integrate any alleged abstract idea into a practical application since they allow for "the influences of the resource memory occupancy amount reduction and the computation amount reduction on the throughput increase may be analyzed and then the channel pruning is performed based on the factor having greater influence." Examiner disagrees. The decision to use either technique does not constitute a technological improvement over the state of the art, and even if an improvement were presented, a technological improvement cannot arise from the abstract idea. In light of the amendments made to the claims, the rejections made under 35 U.S.C. 101 are maintained and updated below.

In reference to Applicant's arguments on pages 15-19 regarding rejections made under 35 U.S.C. 103:

Claims 1, 2, 7, 10, 11, 16 and 19 are rejected under 35 U.S.C. § 103 as allegedly being unpatentable over Lym et al. (S. Lym, E. Choukse, S. Zangeneh, W. Wen, S. Sanghavi and M. Erez, "PruneTrain: Fast Neural Network Training by Dynamic Sparse Model Reconfiguration," SC19: International Conference for High Performance Computing, Networking, Storage and Analysis, Denver, CO, USA, 2019, pp. 1-13; hereinafter Lym), in view of Naik et al. (S. V. Naik et al., "Survey on Comparative Study of Pruning Mechanism on MobileNetV3 Model," 2021 International Conference on Intelligent Technologies (CONIT), Hubli, India, 2021, pp. 1-8, doi: 10.1109/CONIT51480.2021.9498400; hereinafter Naik). Claims 3, 5, 12 and 14 are rejected under 35 U.S.C. § 103 as allegedly being unpatentable over Lym and Naik, and further in view of Li et al. (Li, H., Kadav, A., Durdanovic, I., Samet, H., & Graf, H. P. (2016). Pruning Filters for Efficient ConvNets. arXiv, abs/1608.08710; hereinafter Li).

Applicant respectfully traverses the rejections. Applicant respectfully submits independent claim 1 is patentable over the cited references because the cited references, whether considered alone or in combination, do not teach or suggest every feature that is claimed. For example, Applicant respectfully submits the cited references do not teach or suggest "comparing an influence of a resource memory occupancy reduction on a throughput of an accelerator resource with an influence of a computation amount reduction on the throughput of the accelerator resource; [and] determining, based on a result of the comparing, whether to perform channel pruning based on a model layer-wise resource memory occupancy characteristic of the original deep-learning model or based on a model layer-wise computation amount characteristic of the original deep-learning model," as claimed.

In addressing independent claim 1, the Office Action acknowledges Lym does not disclose comparing an influence of a resource memory occupancy reduction on a throughput of an accelerator resource with an influence of a computation amount reduction on the throughput of the accelerator resource. Instead, Naik is cited at Section I, paragraphs 2 and 3, as well as Table 1, as allegedly disclosing these features. Applicant respectfully disagrees. The cited portion of Naik generally discloses that optimization aspires to obtain a minimized or maximized model and also to find a "solution for the problem towards achieving better performance with limited resources." Naik discloses pruning "reduces the computational operation and the complexity of the networks by removing the unnecessary weights in the convolutional neural network layers." Naik discloses pruning can be done iteratively or in one shot, and Table I of Naik discloses different pruning techniques. However, Naik does not teach or suggest comparing, with respect to throughput of an accelerator resource, an influence of a resource memory occupancy reduction on the throughput and an influence of a computation amount reduction on the throughput. Therefore, Naik does not teach or suggest these features.

Without any admissions and solely in an effort to expedite prosecution of the present application, amended claim 1 additionally recites "determining, based on a result of the comparing, whether to perform channel pruning based on a model layer-wise resource memory occupancy characteristic of the original deep-learning model or based on a model layer-wise computation amount characteristic of the original deep-learning model." Applicant respectfully submits Naik does not teach or suggest these features. Specifically, as noted above, Naik does not teach or suggest comparing influences on throughput of an accelerator resource. Naik also does not teach or suggest determining whether to perform channel pruning based on a model layer-wise resource memory occupancy characteristic of the original deep-learning model or based on a model layer-wise computation amount characteristic of the original deep-learning model, or that such a determination is made based on a result of such comparing. Li also does not teach or suggest these features.

Therefore, the cited references do not teach or suggest "comparing an influence of a resource memory occupancy reduction on a throughput of an accelerator resource with an influence of a computation amount reduction on the throughput of the accelerator resource; [and] determining, based on a result of the comparing, whether to perform channel pruning based on a model layer-wise resource memory occupancy characteristic of the original deep-learning model or based on a model layer-wise computation amount characteristic of the original deep-learning model," as claimed. Accordingly, independent claim 1 is patentable over the cited references. To the extent independent claims 10 and 19 recite features similar to those discussed above with respect to independent claim 1, Applicant respectfully submits independent claims 10 and 19 are patentable over the cited references for at least reasons similar to those discussed above with respect to independent claim 1. Claims 2, 3, 5, 7, 11, 12, 14 and 16 are patentable over the cited references for at least the reasons discussed above due to their respective dependencies.

Examiner's response: Applicant's arguments have been fully considered and are found to be persuasive. Applicant argues that the prior art reference of Naik does not teach the claim limitation related to comparing a resource memory occupancy reduction with a computation amount reduction. Examiner agrees. Upon further review of Naik, the cited table of technique comparisons merely compares different pruning techniques and their advantages/disadvantages.
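As a purely illustrative aid for readers (not part of the prosecution record), the comparing-and-determining limitation at issue can be sketched in a few lines; the function name and inputs below are hypothetical and are not drawn from the claims, the specification, or the cited references:

```python
# Illustrative sketch only. Names are hypothetical, not from the record.

def choose_pruning_basis(mem_influence: float, compute_influence: float) -> str:
    """Compare the influence of a resource-memory-occupancy reduction on
    accelerator throughput with the influence of a computation-amount
    reduction, and determine which layer-wise characteristic to prune on."""
    if mem_influence > compute_influence:
        return "memory_occupancy"
    return "computation_amount"
```

The claim recites this choice as the gate that selects between two distinct layer-wise pruning strategies, which is the limitation the examiner found absent from Naik's technique-comparison table.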
The cited table does not compare resource memory occupancy reduction or computation amount reduction, and it is clear to the Examiner that Naik is deficient in the teaching of that limitation. The additional prior art references of Lym and Li do not serve to cure the deficiencies. In light of the arguments presented and the amendments made to the claims, the rejections made under 35 U.S.C. 103 are withdrawn.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because they are directed to an abstract idea without significantly more.

Step 1 analysis: Independent claim 1 recites, in part, a control method, therefore falling into the statutory category of process. Independent claim 10 recites, in part, a control system, therefore falling into the statutory category of manufacture. Independent claim 19 recites, in part, a non-transitory computer-readable recording medium storing therein a program for performing a control method, therefore falling into the statutory category of machine.

Regarding Claim 1:

Step 2A: Prong 1 analysis: Claim 1 recites in part: "profiling an accuracy pattern curve to obtain a layer-wise pruning sensitivity of an original deep-learning model". As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgement, or opinion) or with the aid of pencil and paper but for the recitation of generic computer components. For example, this limitation encompasses checking the viability of a pruning method for a deep learning model.

"comparing an influence of a resource memory occupancy reduction on a throughput of an accelerator resource with an influence of a computation amount reduction on the throughput of the accelerator resource". As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgement, or opinion) or with the aid of pencil and paper but for the recitation of generic computer components. For example, this limitation encompasses comparing the results of two different methods of increasing throughput on a processor.

"determining, based on a result of the comparing, whether to perform channel pruning based on a model layer-wise resource memory occupancy characteristic of the original deep-learning model or based on a model layer-wise computation amount characteristic of the original deep-learning model". As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgement, or opinion) or with the aid of pencil and paper but for the recitation of generic computer components. For example, this limitation encompasses deciding which technique to use based on the results of comparing two different methods of increasing throughput on a processor.

Accordingly, at Step 2A: Prong 1, the claim is directed to an abstract idea.

Step 2A: Prong 2 analysis: The judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of: "performing, based on a result of the determining, the channel pruning based on the model layer-wise resource memory occupancy characteristic of the original deep-learning model to obtain a channel-pruned model or based on the model layer-wise computation amount characteristic of the original deep-learning model to obtain the channel-pruned model". This additional element is directed to a particular field of use (channel pruning).

"in response to the channel-pruned model satisfying a certain model analysis accuracy level, determining a batch size for the accelerator resource". This additional element is directed to a particular field of use (batch processing).

"in response to a throughput of the channel-pruned model based on the determined batch size being greater than a throughput of the original deep-learning model, employing the channel-pruned model in the deep-learning model computation acceleration". This additional element is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (deep learning model) (see MPEP 2106.05(f)).

Accordingly, at Step 2A: Prong 2, the additional elements individually or in combination do not integrate the judicial exception into a practical application.

Step 2B analysis: In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional element of "performing, based on a result of the determining, the channel pruning based on the model layer-wise resource memory occupancy characteristic of the original deep-learning model to obtain a channel-pruned model or based on the model layer-wise computation amount characteristic of the original deep-learning model to obtain the channel-pruned model" is directed to a particular field of use (channel pruning and batch processing) (MPEP 2106.05(h)) and therefore does not provide significantly more than the abstract idea; thus the claim is subject-matter ineligible. As discussed above, the additional element of "in response to a throughput of the channel-pruned model based on the determined batch size being greater than a throughput of the original deep-learning model, employing the channel-pruned model in the deep-learning model computation acceleration" is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components (see MPEP 2106.05(f)). Accordingly, at Step 2B, the additional elements individually or in combination do not amount to significantly more than the judicial exception.

Regarding Claim 2:

Step 2A: Prong 2 analysis: The judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of: "based on the influence of the resource memory occupancy reduction being greater than the influence of the computation amount reduction, determining to perform the channel pruning based on the model layer-wise resource memory occupancy characteristic". This additional element is directed to a particular field of use (channel pruning). "or based on the influence of the resource memory occupancy reduction being not greater than the influence of the computation amount reduction, determining to perform the channel pruning based on the model layer-wise computation amount characteristic". This additional element is directed to a particular field of use (channel pruning). Accordingly, at Step 2A: Prong 2, the additional elements individually or in combination do not integrate the judicial exception into a practical application.

Step 2B analysis: In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The additional elements of "based on the influence of the resource memory occupancy reduction being greater than the influence of the computation amount reduction, determining to perform the channel pruning based on the model layer-wise resource memory occupancy characteristic" and "or based on the influence of the resource memory occupancy reduction being not greater than the influence of the computation amount reduction, determining to perform the channel pruning based on the model layer-wise computation amount characteristic" are directed to a particular field of use (channel pruning) (MPEP 2106.05(h)) and therefore do not provide significantly more than the abstract idea; thus the claim is subject-matter ineligible. Accordingly, at Step 2B, the additional elements individually or in combination do not amount to significantly more than the judicial exception.

Regarding Claim 3:

Step 2A: Prong 1 analysis: Claim 3 recites in part: "setting a reference value to an initial value". As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgement, or opinion) or with the aid of pencil and paper but for the recitation of generic computer components. For example, this limitation encompasses initializing a value.

"deriving a layer-wise pruning level satisfying a specific condition". As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgement, or opinion) or with the aid of pencil and paper but for the recitation of generic computer components. For example, this limitation encompasses creating a pruning condition.

"based on the derived layer-wise pruning level satisfying an available batch size increase condition, deriving a final pruning policy, wherein the available batch size increase condition is a condition to increase an available batch size increase level via the resource memory occupancy reduction by a target value". As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgement, or opinion) or with the aid of pencil and paper but for the recitation of generic computer components. For example, this limitation encompasses creating a pruning policy based on a processing increase condition.

Accordingly, at Step 2A: Prong 1, the claim is directed to an abstract idea.

Step 2A: Prong 2 analysis: The judicial exception is not integrated into a practical application. In particular, the claim recites the additional element of: "performing the channel pruning based on the model layer-wise resource memory occupancy characteristic under the final pruning policy". This additional element is directed to a particular field of use (channel pruning). Accordingly, at Step 2A: Prong 2, the additional elements individually or in combination do not integrate the judicial exception into a practical application.

Step 2B analysis: In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional element of "performing the channel pruning based on the model layer-wise resource memory occupancy characteristic under the final pruning policy" is directed to a particular field of use (channel pruning) (MPEP 2106.05(h)) and therefore does not provide significantly more than the abstract idea; thus the claim is subject-matter ineligible. Accordingly, at Step 2B, the additional elements individually or in combination do not amount to significantly more than the judicial exception.

Regarding Claim 4:

Step 2A: Prong 1 analysis: Claim 4 recites in part: "based on the derived layer-wise pruning level not satisfying the available batch size increase condition, increasing the reference value and performing the deriving based on the increased reference value". As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgement, or opinion) or with the aid of pencil and paper but for the recitation of generic computer components. For example, this limitation encompasses creating a pruning policy based on a new reference value. Accordingly, at Step 2A: Prong 1, the claim is directed to an abstract idea.

Step 2A: Prong 2 analysis: The claim does not recite any additional elements that integrate the judicial exception into a practical application.

Step 2B analysis: In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Regarding Claim 5:

Step 2A: Prong 1 analysis: Claim 5 recites in part: "setting a reference value to an initial value". As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgement, or opinion) or with the aid of pencil and paper but for the recitation of generic computer components. For example, this limitation encompasses initializing a value.

"deriving a layer-wise pruning level satisfying a specific condition". As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgement, or opinion) or with the aid of pencil and paper but for the recitation of generic computer components. For example, this limitation encompasses creating a pruning condition.

"based on the derived layer-wise pruning level satisfying a model inference computation acceleration condition, deriving a final pruning policy, wherein the model inference computation acceleration condition is a condition to increase a model inference computation latency acceleration level via the computation amount reduction by a target value". As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgement, or opinion) or with the aid of pencil and paper but for the recitation of generic computer components. For example, this limitation encompasses creating a pruning policy based on a processing increase condition.

Accordingly, at Step 2A: Prong 1, the claim is directed to an abstract idea.

Step 2A: Prong 2 analysis: The judicial exception is not integrated into a practical application. In particular, the claim recites the additional element of: "performing the channel pruning based on the model layer-wise computation amount characteristic under the final pruning policy". This additional element is directed to a particular field of use (channel pruning). Accordingly, at Step 2A: Prong 2, the additional elements individually or in combination do not integrate the judicial exception into a practical application.

Step 2B analysis: In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional element of "performing the channel pruning based on the model layer-wise computation amount characteristic under the final pruning policy" is directed to a particular field of use (channel pruning) (MPEP 2106.05(h)) and therefore does not provide significantly more than the abstract idea; thus the claim is subject-matter ineligible. Accordingly, at Step 2B, the additional elements individually or in combination do not amount to significantly more than the judicial exception.

Regarding Claim 6:

Step 2A: Prong 1 analysis: Claim 6 recites in part: "based on the derived layer-wise pruning level not satisfying the model inference computation acceleration condition, increasing the reference value and deriving the layer-wise pruning level based on the increased reference value". As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgement, or opinion) or with the aid of pencil and paper but for the recitation of generic computer components. For example, this limitation encompasses creating a pruning policy based on a new reference value. Accordingly, at Step 2A: Prong 1, the claim is directed to an abstract idea.

Step 2A: Prong 2 analysis: The claim does not recite any additional elements that integrate the judicial exception into a practical application.

Step 2B analysis: In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Regarding Claim 7:

Step 2A: Prong 2 analysis: The judicial exception is not integrated into a practical application. In particular, the claim recites the additional element of: "performing an additional training on the channel-pruned model".
This additional elements is recited at a high level of generality such that the claim recites only the idea of a solution or outcome (training a model) i.e., the claim fails to recite details of how a solution to a problem is accomplished. Accordingly at Step 2A: Prong 2, the additional elements individually or in combination do not integrate the judicial exception into a practical application. Step 2B analysis: In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more that the judicial exception. As discussed above, the additional element(s) of “performing an additional training on the channel-pruned model” is/are recited at a high-level of generality such that the claim recites only the idea of a solution or outcome (training a model) i.e., the claim fails to recite details of how a solution to a problem is accomplished (See MPEP 2106.05(f)). Accordingly, at Step 2B, the additional elements individually or in combination do not amount to significantly more than the judicial exception. Regarding Claim 8: Step 2A: Prong 1 analysis: Claim 8 recites in part: “based on the channel-pruned model not satisfying the certain model analysis accuracy level, decreasing a reduction amount in the resource memory occupancy reduction or in the computation amount reduction”. As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgement, or opinion) or with the aid of pencil and paper but for the recitation of generic computer components. For example, this limitation encompasses creating a pruning policy based on a new reference value. Accordingly, at Step 2A: Prong 1, the claim is directed to an abstract idea. Step 2A: Prong 2 analysis: The claim does not recite any additional elements that integrate the judicial exception into a practical application. 
Step 2B analysis: In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Regarding Claim 9:

Step 2A: Prong 1 analysis: Claim 9 recites in part: "in response to the throughput of the channel-pruned model based on the determined batch size being not greater than the throughput of the original deep-learning model, increasing a reduction amount in the resource memory occupancy reduction or in the computation amount reduction". As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper but for the recitation of generic computer components. For example, this limitation encompasses creating a pruning policy based on a new reference value. Accordingly, at Step 2A: Prong 1, the claim is directed to an abstract idea.

Step 2A: Prong 2 analysis: The claim does not recite any additional elements that integrate the judicial exception into a practical application.

Step 2B analysis: In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Regarding Claim 10:

Due to claim language similar to that of claim 1, claim 10 is rejected for the same reasons as presented above in the rejection of claim 1, with the exception of the limitation(s) covered below.

Step 2A: Prong 2 analysis: The judicial exception is not integrated into a practical application. In particular, the claim recites the following additional elements:

"at least one processor". This additional element is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (a processor) (see MPEP 2106.05(f)).

"at least one memory configured to store instructions therein". This additional element is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (a memory) (see MPEP 2106.05(f)).

Accordingly, at Step 2A: Prong 2, the additional elements, individually or in combination, do not integrate the judicial exception into a practical application.

Step 2B analysis: In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional elements of "at least one processor" and "at least one memory configured to store instructions therein" are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components (see MPEP 2106.05(f)). Accordingly, at Step 2B, the additional elements, individually or in combination, do not amount to significantly more than the judicial exception.

Regarding Claim 11: Due to claim language similar to that of claim 2, claim 11 is rejected for the same reasons as presented above in the rejection of claim 2.

Regarding Claim 12: Due to claim language similar to that of claim 3, claim 12 is rejected for the same reasons as presented above in the rejection of claim 3.

Regarding Claim 13: Due to claim language similar to that of claim 4, claim 13 is rejected for the same reasons as presented above in the rejection of claim 4.

Regarding Claim 14: Due to claim language similar to that of claim 5, claim 14 is rejected for the same reasons as presented above in the rejection of claim 5.

Regarding Claim 15: Due to claim language similar to that of claim 6, claim 15 is rejected for the same reasons as presented above in the rejection of claim 6.

Regarding Claim 16: Due to claim language similar to that of claim 7, claim 16 is rejected for the same reasons as presented above in the rejection of claim 7.

Regarding Claim 17: Due to claim language similar to that of claim 8, claim 17 is rejected for the same reasons as presented above in the rejection of claim 8.

Regarding Claim 18: Due to claim language similar to that of claim 9, claim 18 is rejected for the same reasons as presented above in the rejection of claim 9.

Regarding Claim 19: Due to claim language similar to that of claims 1 and 10, claim 19 is rejected for the same reasons as presented above in the rejections of claims 1 and 10.

Regarding Claim 20: Due to claim language similar to that of claims 8, 9, 17, and 18, claim 20 is rejected for the same reasons as presented above in the rejections of claims 8, 9, 17, and 18.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

S. Lym, E. Choukse, S. Zangeneh, W. Wen, S. Sanghavi and M. Erez, "PruneTrain: Fast Neural Network Training by Dynamic Sparse Model Reconfiguration," SC19: International Conference for High Performance Computing, Networking, Storage and Analysis, Denver, CO, USA, 2019, pp. 1-13. – PruneTrain, a cost-efficient mechanism that gradually reduces the training cost during training.

S. V. Naik et al., "Survey on Comparative Study of Pruning Mechanism on MobileNetV3 Model," 2021 International Conference on Intelligent Technologies (CONIT), Hubli, India, 2021, pp. 1-8, doi: 10.1109/CONIT51480.2021.9498400. – a summary of recent work on optimizing DNNs using various pruning techniques, with comparisons.

Li, H., Kadav, A., Durdanovic, I., Samet, H., & Graf, H. P. (2016). "Pruning Filters for Efficient ConvNets," arXiv:1608.08710. – an acceleration method for CNNs that prunes filters identified as having a small effect on the output accuracy.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a).
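For readers outside the pruning literature, the claim 9 limitation quoted in the rejection above describes a simple feedback loop: prune, measure throughput at the chosen batch size, and if the pruned model is not faster than the original, increase the reduction amount and try again. A minimal toy sketch of that loop follows; the `throughput()` function is a made-up stand-in, not the applicant's actual model or method.

```python
def throughput(reduction: float) -> float:
    # Hypothetical stand-in: assume freeing more memory/compute
    # yields proportionally higher inference throughput.
    return 100.0 * (1.0 + reduction)

def tune_reduction(original_throughput: float, step: float = 0.1) -> float:
    """Increase the reduction amount until the channel-pruned model's
    throughput exceeds the original deep-learning model's throughput,
    mirroring the conditional step recited in claim 9."""
    reduction = 0.0
    while throughput(reduction) <= original_throughput:
        reduction += step
    return reduction

print(tune_reduction(100.0))  # -> 0.1 under this toy throughput model
```

Whether such a loop is "practically performable in the mind" is exactly the Step 2A dispute; the sketch only illustrates the mechanics, not either side's legal position.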
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to COREY M SACKALOSKY, whose telephone number is (703) 756-1590. The examiner can normally be reached M-F, 7:30am-3:30pm EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Omar Fernandez Rivas, can be reached at (571) 272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/COREY M SACKALOSKY/
Examiner, Art Unit 2128

/OMAR F FERNANDEZ RIVAS/
Supervisory Patent Examiner, Art Unit 2128

Prosecution Timeline

Dec 01, 2022
Application Filed
Sep 12, 2025
Non-Final Rejection — §101, §103
Nov 13, 2025
Examiner Interview Summary
Nov 13, 2025
Applicant Interview (Telephonic)
Dec 23, 2025
Response Filed
Feb 27, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596932
METHOD AND SYSTEM FOR DEPLOYMENT OF PREDICTION MODELS USING SKETCHES GENERATED THROUGH DISTRIBUTED DATA DISTILLATION
2y 5m to grant · Granted Apr 07, 2026
Patent 12591759
PARALLEL AND DISTRIBUTED PROCESSING OF PROPOSITIONAL LOGICAL NEURAL NETWORKS
2y 5m to grant · Granted Mar 31, 2026
Patent 12572441
FULLY UNSUPERVISED PIPELINE FOR CLUSTERING ANOMALIES DETECTED IN COMPUTERIZED SYSTEMS
2y 5m to grant · Granted Mar 10, 2026
Patent 12518197
INCREMENTAL LEARNING WITHOUT FORGETTING FOR CLASSIFICATION AND DETECTION MODELS
2y 5m to grant · Granted Jan 06, 2026
Patent 12487763
METHOD AND APPARATUS WITH MEMORY MANAGEMENT AND NEURAL NETWORK OPERATION
2y 5m to grant · Granted Dec 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
64%
Grant Probability
99%
With Interview (+49.4%)
4y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 25 resolved cases by this examiner. Grant probability derived from career allow rate.
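As a sanity check on the headline number, the 64% grant probability above is simply the examiner's career allow rate from the counts reported earlier on this page (16 granted of 25 resolved cases):

```python
# Reproduce the 64% grant probability from the examiner's reported
# career record: 16 granted out of 25 resolved cases.
granted, resolved = 16, 25
allow_rate = granted / resolved
print(f"{allow_rate:.0%}")  # -> 64%
```

The interview-adjusted 99% figure is a separate model output and is not reproducible from the counts shown here.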
