Prosecution Insights
Last updated: April 19, 2026
Application No. 18/356,415

METHOD AND APPARATUS FOR SEARCHING FOR LIGHT-WEIGHT MODEL THROUGH REPLACEMENT OF SUBNETWORK OF TRAINED NEURAL NETWORK MODEL

Non-Final OA §103
Filed: Jul 21, 2023
Examiner: BROWN, CHRISTOPHER J
Art Unit: 2439
Tech Center: 2400 (Computer Networks)
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 6m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 75% (533 granted / 707 resolved), +17.4% vs TC average, above average
Interview Lift: +12.6% for resolved cases with an interview (moderate, roughly +13%)
Typical Timeline: 3y 6m average prosecution; 36 applications currently pending
Career History: 743 total applications across all art units
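The headline figures above reduce to simple arithmetic: the career allow rate is granted over resolved cases, and the with-interview probability adds the interview lift to the base rate. A minimal sketch (the function names are illustrative, not from any real analytics API; the numbers are the ones shown on this page):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def with_interview(base_rate: float, lift: float) -> float:
    """Grant probability after adding the interview lift, capped at 100%."""
    return min(base_rate + lift, 100.0)

base = allow_rate(533, 707)           # about 75.4, rounded to 75% on the page
boosted = with_interview(75.0, 12.6)  # 87.6, shown as 88% on the page

print(f"Career allow rate: {base:.1f}%")
print(f"With interview: {boosted:.1f}%")
```

The small mismatches (75.4% vs 75%, 87.6% vs 88%) are consistent with the page rounding each card to whole percentage points.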

Statute-Specific Performance

§101: 12.7% (-27.3% vs TC avg)
§103: 54.6% (+14.6% vs TC avg)
§102: 10.4% (-29.6% vs TC avg)
§112: 11.1% (-28.9% vs TC avg)
Tech Center averages are estimates; figures are based on career data from 707 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-19 are rejected under 35 U.S.C. 103 as being unpatentable over Saniee (US 2022/0318631) in view of En (US 2021/030296).

As per claim 1. Saniee teaches a method of searching for a light-weight model through the replacement of a subnetwork of a trained neural network model, the method comprising: a preprocessing step of extracting a subnetwork from an original neural network model, constructing a mapping relation between the subnetwork and an alternative block corresponding to the subnetwork by extracting the alternative block from a pre-trained neural network model, and generating profiling information comprising performance information relating to the subnetwork and the alternative block. [0004]-[0008], [0055][0060][0062][0063][0068][0072][0073] (teaches subnetworks extracted from a neural network and an alternative block extracted from the subnetwork through pruning, generating profiling information comprising performance (accuracy) relating to the blocks)

En teaches a query processing step of receiving a query, extracting a constraint that is included in the query through query parsing, and generating a final model based on the constraint, the original neural network model, the alternative block, the mapping relation, and the profiling information. [0036][0037][0046][0053] (teaches creating a lightweight model from subnetworks of a neural network based on the goals of accuracy and prediction time in order to recognize text in the best and fastest lightweight manner)

It would have been obvious to one of ordinary skill in the art before the priority date of the current application to combine En with Saniee because it provides a more efficient neural network.

As per claim 2. Saniee teaches the method of claim 1, wherein the preprocessing step comprises steps of: extracting the subnetwork from the original neural network model; constructing the mapping relation between the subnetwork and the alternative block by extracting the alternative block corresponding to the subnetwork from the pre-trained neural network model; and generating the profiling information based on the subnetwork and the alternative block, wherein the subnetwork is one connected neural network. [0072]-[0074][0081] (teaches extracting the subnetwork from the NN, and a pruned subnetwork as an alternative block, profiling the accuracy of the subnetwork and the alternative block)

As per claim 3. For the method of claim 1, Saniee teaches wherein the query processing step comprises steps of: receiving the query and extracting the constraint through query parsing; generating a candidate neural network model based on the original neural network model, the alternative block, and the mapping relation; and evaluating the candidate neural network model based on the constraint and the profiling information and selecting the final model from the candidate neural network model based on results of the evaluation. [0004][0072]-[0074][0081][0083] (training the subnetwork and then pruning in an iterative manner until the performance of the alternative block reaches the desired threshold)

As per claim 4. For the method of claim 2, En teaches wherein the step of constructing the mapping relation between the subnetwork and the alternative block comprises determining compatibility between the subnetwork and the alternative block and constructing the mapping relation based on the compatibility, and the compatibility means that each of an input and output of the subnetwork and each of an input and output of the alternative block have an identical number of dimensions and an identical number of channels, and that a change in a spatial dimension of data when the data passes through the subnetwork and a change in a spatial dimension of the data when the data passes through the alternative block are identical with each other. [0053] (teaches that the parallel subnetworks are identical in structure)

As per claim 5. For the method of claim 4, Saniee teaches wherein the step of constructing the mapping relation between the subnetwork and the alternative block comprises: determining the compatibility between the subnetwork and the alternative block, and adjusting the number of channels of the alternative block by using at least any one of schemes comprising pruning and an addition of a projection layer, if the compatibility is not satisfied because at least any one of the number of input channels and the number of output channels of the alternative block is different from at least any one of the number of input channels and the number of output channels of the subnetwork. [0004][0023]-[0025][0072]-[0074][0081][0083] (training the subnetwork and adding or pruning connections for the subnetwork depending on need; pruning in an iterative manner until the performance of the alternative block reaches the desired threshold) En additionally teaches that the subnetworks can be chosen to have the same structures. [0053]

As per claim 6. For the method of claim 1, Saniee teaches wherein the preprocessing step comprises: after constructing the mapping relation, training the alternative block by using a knowledge distillation scheme based on data for training the alternative block, the original neural network model, and the mapping relation, and generating the profiling information comprising performance information relating to the subnetwork and the trained alternative block. [0004][0072]-[0074][0081][0083] (training the subnetwork and then pruning in an iterative manner until the performance of the alternative block reaches the desired threshold)

As per claim 7. For the method of claim 1, Saniee teaches wherein the profiling information comprises at least any one of accuracy of the original neural network model before and after replacement of the subnetwork with the alternative block, inference time and memory usage of the subnetwork and the alternative block, or any combination of the inference time, the memory usage and the accuracy. [0072]-[0074][0083] (determining accuracy with the subnetwork and the alternative block)

As per claim 8. For the method of claim 1, En teaches wherein the constraint comprises at least any one of a target platform, target latency, and target memory usage, or a combination of the target platform, the target latency, and the target memory usage. [0032][0049][0053] (optimizing constraints of accuracy and time consumption/latency)

As per claim 9. For the method of claim 1, Saniee teaches wherein the query processing step comprises: training the final model by using a knowledge distillation scheme based on data for training the final model and the original neural network model, and outputting the trained final model. [0059][0083] (training the neural network to a subnetwork and ultimately a final pruned network)

As per claim 10.
Saniee teaches an apparatus for searching for a light-weight model, comprising: a preprocessing module configured to extract a subnetwork from an original neural network model, construct a mapping relation between the subnetwork and an alternative block corresponding to the subnetwork by extracting the alternative block from a pre-trained neural network model, and generate profiling information comprising performance information relating to the subnetwork and the alternative block. [0004]-[0008], [0055][0060][0062][0063][0068][0072][0073] (teaches subnetworks extracted from a neural network and an alternative block extracted from the subnetwork through pruning, generating profiling information comprising performance (accuracy) relating to the blocks)

En teaches a query processing module configured to receive a query, extract a constraint that is included in the query through query parsing, and generate a final model based on the constraint, the original neural network model, the alternative block, the mapping relation, and the profiling information. [0036][0037][0046][0053] (teaches creating a lightweight model from subnetworks of a neural network based on the goals of accuracy and prediction time in order to recognize text in the best and fastest lightweight manner)

It would have been obvious to one of ordinary skill in the art before the priority date of the current application to combine En with Saniee because it provides a more efficient neural network.

As per claim 11. For the apparatus of claim 10, Saniee teaches wherein the preprocessing module comprises: a subnetwork generation unit configured to extract the subnetwork from the original neural network model; an alternative block generation unit configured to construct the mapping relation between the subnetwork and the alternative block by extracting the alternative block corresponding to the subnetwork from the pre-trained neural network model; and a profiling unit configured to generate the profiling information based on the subnetwork and the alternative block, wherein the subnetwork is one connected neural network. [0072]-[0074][0081] (teaches extracting the subnetwork from the NN, and a pruned subnetwork as an alternative block, profiling the accuracy of the subnetwork and the alternative block)

As per claim 12. For the apparatus of claim 10, Saniee teaches wherein the query processing module comprises: a query parsing unit configured to receive the query and extract the constraint through query parsing; a candidate model generation unit configured to generate a candidate neural network model based on the original neural network model, the alternative block, and the mapping relation; and a candidate model evaluation unit configured to evaluate the candidate neural network model based on the constraint and the profiling information and to select the final model from the candidate neural network model based on results of the evaluation. [0004][0072]-[0074][0081][0083] (training the subnetwork and then pruning in an iterative manner until the performance of the alternative block reaches the desired threshold)

As per claim 13. For the apparatus of claim 11, En teaches wherein the alternative block generation unit determines compatibility between the subnetwork and the alternative block and constructs the mapping relation based on the compatibility, and the compatibility means that each of an input and output of the subnetwork and each of an input and output of the alternative block have an identical number of dimensions and an identical number of channels, and that a change in a spatial dimension of data when the data passes through the subnetwork and a change in a spatial dimension of the data when the data passes through the alternative block are identical with each other. [0053] (teaches that the parallel subnetworks are identical in structure)

As per claim 14. For the apparatus of claim 13, Saniee teaches wherein the alternative block generation unit determines the compatibility between the subnetwork and the alternative block, and adjusts the number of channels of the alternative block by using at least any one of schemes comprising pruning and an addition of a projection layer, if the compatibility is not satisfied because at least any one of the number of input channels and the number of output channels of the alternative block is different from at least any one of the number of input channels and the number of output channels of the subnetwork. [0004][0023]-[0025][0072]-[0074][0081][0083] (training the subnetwork and adding or pruning connections for the subnetwork depending on need; pruning in an iterative manner until the performance of the alternative block reaches the desired threshold) En additionally teaches that the subnetworks can be chosen to have the same structures. [0053]

As per claim 15. For the apparatus of claim 10, Saniee teaches wherein after constructing the mapping relation, the preprocessing module trains the alternative block by using a knowledge distillation scheme based on data for training the alternative block, the original neural network model, and the mapping relation, and generates the profiling information comprising performance information relating to the subnetwork and the trained alternative block. [0004][0072]-[0074][0081][0083] (training the subnetwork and then pruning in an iterative manner until the performance of the alternative block reaches the desired threshold)

As per claim 16. For the apparatus of claim 10, Saniee teaches wherein the profiling information comprises at least any one of accuracy of the original neural network model before and after replacement of the subnetwork with the alternative block, inference time and memory usage of each of the subnetwork and the alternative block, or any combination of the inference time, the memory usage and the accuracy. [0072]-[0074][0083] (determining accuracy with the subnetwork and the alternative block)

As per claim 17. For the apparatus of claim 10, En teaches wherein the constraint comprises at least any one of a target platform, target latency, and target memory usage, or a combination of the target platform, the target latency, and the target memory usage. [0032][0049][0053] (optimizing constraints of accuracy and time consumption/latency)

As per claim 18. For the apparatus of claim 10, Saniee teaches wherein the query processing module trains the final model by using a knowledge distillation scheme based on data for training the final model and the original neural network model, and outputs the trained final model. [0059][0083] (training the neural network to a subnetwork and ultimately a final pruned network)

As per claim 19. For the apparatus of claim 13, Saniee teaches wherein: the alternative block generation unit constructs the mapping relation by extracting, from the pre-trained neural network model, the alternative block having the compatibility but having a structure different from a structure of the subnetwork, and the different structure means that at least any one of criteria comprising a parameter, a number of layers, an arrangement of the layers, a connection structure between the layers, or a conversion function, or a combination of the criteria, is different. [0004][0055][0059][0072]-[0074][0081][0083] (teaches that the alternative block is a pruning of the subnetwork and therefore connections and arrangements are a different structure, and that the network is a series of layers and connections)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER BROWN, whose telephone number is (571) 272-3833. The examiner can normally be reached M-F, 8-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Luu Pham, can be reached at (571) 270-5002. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHRISTOPHER J BROWN/
Primary Examiner, Art Unit 2439

Prosecution Timeline

Jul 21, 2023
Application Filed
Feb 20, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603822: SOFTWARE AS A SERVICE (SaaS) USER INTERFACE (UI) FOR DISPLAYING USER ACTIVITIES IN AN ARTIFICIAL INTELLIGENCE (AI)-BASED CYBER THREAT DEFENSE SYSTEM (granted Apr 14, 2026; 2y 5m to grant)
Patent 12574725: METHODS, APPARATUSES, COMPUTER PROGRAMS AND CARRIERS FOR SECURITY MANAGEMENT BEFORE HANDOVER FROM 5G TO 4G SYSTEM (granted Mar 10, 2026; 2y 5m to grant)
Patent 12563390: AUTHENTICATING A DEVICE IN A COMMUNICATION NETWORK OF AN AUTOMATION INSTALLATION (granted Feb 24, 2026; 2y 5m to grant)
Patent 12563056: SYSTEM AND METHOD FOR MONITORING AND MANAGING COMPUTING ENVIRONMENT (granted Feb 24, 2026; 2y 5m to grant)
Patent 12537828: ON-DEMAND SOFTWARE-DEFINED SECURITY SERVICE ORCHESTRATION FOR A 5G WIRELESS NETWORK (granted Jan 27, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75%
With Interview: 88% (+12.6%)
Median Time to Grant: 3y 6m
PTA Risk: Low

Based on 707 resolved cases by this examiner. Grant probability derived from the career allow rate.
