Prosecution Insights
Last updated: April 19, 2026
Application No. 17/863,433

COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN MACHINE LEARNING PROGRAM, METHOD FOR MACHINE LEARNING, AND INFORMATION PROCESSING APPARATUS

Non-Final OA • §101 §103 §112
Filed
Jul 13, 2022
Examiner
PHUNG, QUOC LY PHU
Art Unit
2143
Tech Center
2100 — Computer Architecture & Software
Assignee
Fujitsu Limited
OA Round
1 (Non-Final)
32%
Grant Probability
At Risk
1-2
OA Rounds
3y 3m
To Grant
99%
With Interview

Examiner Intelligence

Grants only 32% of cases
32%
Career Allow Rate
6 granted / 19 resolved
-23.4% vs TC avg
Strong +100% interview lift
+100.0%
Interview Lift
with vs. without interview, across resolved cases with an interview
Typical timeline
3y 3m
Avg Prosecution
25 currently pending
Career history
44
Total Applications
across all art units

Statute-Specific Performance

§101
31.5%
-8.5% vs TC avg
§103
41.2%
+1.2% vs TC avg
§102
5.4%
-34.6% vs TC avg
§112
20.5%
-19.5% vs TC avg
Black line = Tech Center average estimate • Based on career data from 19 resolved cases

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are presented for examination.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

With respect to claim 1, it is unclear how the limitation “calculating thresholds of errors in tensors between before and after reduction one for each element of a plurality of layers” [line 4] is structured. The phrase “between before and after reduction” is confusing and lacks the prepositions or formatting needed to clarify whether it refers to a comparison between a state before reduction and a state after reduction. In the phrase “one for each element,” it is unclear whether the calculating happens once per element or whether “one” refers to a single threshold.
For the purposes of examination, the examiner will interpret this limitation as “calculating thresholds of errors in tensors by comparing a first state before reduction to a second state after reduction, wherein the calculating is performed for each element of a plurality of layers.”

It is unclear how the limitation “selecting reduction ratio candidates to be applied one to each of the plurality of layers based on a plurality of the thresholds and errors in tensors between before and after reduction” [line 10] is structured. In the phrase “errors in tensors between before and after reduction,” it is unclear whether this refers to a difference in error values, a comparison of tensors, or a specific time-based state. For the purposes of examination, the examiner will interpret this limitation as “selecting reduction ratio candidates to be applied, one to each of the plurality of layers, based on the plurality of thresholds and a comparison of errors in tensors calculated before and after a reduction...”

It is unclear how the limitation “determining reduction ratios to be applied one to each of the plurality of layers based on inference accuracy of the trained model and inference accuracy of a reduced model after machine learning” [line 17] is structured. The phrase “after machine learning” is vague, and it is unclear whether it refers to the end of an initial training phase or to a fine-tuning process after reduction. For the purposes of examination, the examiner will interpret this limitation as “determining reduction ratios to be applied one to each of the plurality of layers based on inference accuracy of the trained model and inference accuracy of a fine-tuned version of the reduced model…”

With respect to claim 9, it is unclear how the limitation “calculating thresholds of errors in tensors between before and after reduction one for each element of a plurality of layers” [line 3] is structured.
The phrase “between before and after reduction” is confusing and lacks the prepositions or formatting needed to clarify whether it refers to a comparison between a state before reduction and a state after reduction. In the phrase “one for each element,” it is unclear whether the calculating happens once per element or whether “one” refers to a single threshold. For the purposes of examination, the examiner will interpret this limitation as “calculating thresholds of errors in tensors by comparing a first state before reduction to a second state after reduction, wherein the calculating is performed for each element of a plurality of layers.”

It is unclear how the limitation “selecting reduction ratio candidates to be applied one to each of the plurality of layers based on a plurality of the thresholds and errors in tensors between before and after reduction” [line 9] is structured. In the phrase “errors in tensors between before and after reduction,” it is unclear whether this refers to a difference in error values, a comparison of tensors, or a specific time-based state. For the purposes of examination, the examiner will interpret this limitation as “selecting reduction ratio candidates to be applied, one to each of the plurality of layers, based on the plurality of thresholds and a comparison of errors in tensors calculated before and after a reduction...”

It is unclear how the limitation “determining reduction ratios to be applied one to each of the plurality of layers based on inference accuracy of the trained model and inference accuracy of a reduced model after machine learning” [line 16] is structured. The phrase “after machine learning” is vague, and it is unclear whether it refers to the end of an initial training phase or to a fine-tuning process after reduction.
For the purposes of examination, the examiner will interpret this limitation as “determining reduction ratios to be applied one to each of the plurality of layers based on inference accuracy of the trained model and inference accuracy of a fine-tuned version of the reduced model…”

With respect to claim 17, it is unclear how the limitation “calculating thresholds of errors in tensors between before and after reduction one for each element of a plurality of layers” [line 5] is structured. The phrase “between before and after reduction” is confusing and lacks the prepositions or formatting needed to clarify whether it refers to a comparison between a state before reduction and a state after reduction. In the phrase “one for each element,” it is unclear whether the calculating happens once per element or whether “one” refers to a single threshold. For the purposes of examination, the examiner will interpret this limitation as “calculating thresholds of errors in tensors by comparing a first state before reduction to a second state after reduction, wherein the calculating is performed for each element of a plurality of layers.”

It is unclear how the limitation “selecting reduction ratio candidates to be applied one to each of the plurality of layers based on a plurality of the thresholds and errors in tensors between before and after reduction” [line 11] is structured. In the phrase “errors in tensors between before and after reduction,” it is unclear whether this refers to a difference in error values, a comparison of tensors, or a specific time-based state.
For the purposes of examination, the examiner will interpret this limitation as “selecting reduction ratio candidates to be applied, one to each of the plurality of layers, based on the plurality of thresholds and a comparison of errors in tensors calculated before and after a reduction...”

It is unclear how the limitation “determining reduction ratios to be applied one to each of the plurality of layers based on inference accuracy of the trained model and inference accuracy of a reduced model after machine learning” [line 18] is structured. The phrase “after machine learning” is vague, and it is unclear whether it refers to the end of an initial training phase or to a fine-tuning process after reduction. For the purposes of examination, the examiner will interpret this limitation as “determining reduction ratios to be applied one to each of the plurality of layers based on inference accuracy of the trained model and inference accuracy of a fine-tuned version of the reduced model…”

With respect to claims 2-8, 10-16 and 18-20, they are rejected based on their dependency from claims 1, 9 and 17.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Independent claims

Step 1

Claim 1 is drawn to a non-transitory computer-readable recording medium, claim 9 is drawn to a computer-implemented method, and claim 17 is drawn to an information processing apparatus comprising a memory and a processor configured to execute the process and to perform the method of claim 9.
Therefore, each of these groups falls under one of the four categories of statutory subject matter (processes/methods, machines/apparatuses, manufactures, and compositions of matter).

Step 2A – Prong 1

Claims 1, 9 and 17 are directed to a judicially recognized exception, an abstract idea, without significantly more. Claims 1, 9 and 17 recite calculating thresholds of errors in tensors between before and after reduction, one for each element of a plurality of layers in a trained model of a neural network including the plurality of layers, which under its broadest reasonable interpretation enumerates a mathematical concept. A human can perform the calculation using words or mathematical symbols to calculate the thresholds in tensors. Therefore, the step of calculating thresholds of errors in tensors is nothing more than a mathematical concept (MPEP 2106.04(a)(2)(I)). Claims 1, 9 and 17 further recite selecting reduction ratio candidates to be applied one to each of the plurality of layers based on a plurality of the thresholds and errors in tensors between before and after reduction in cases where the elements are reduced by each of a plurality of reduction ratio candidates in each of the plurality of layers, which under its broadest reasonable interpretation enumerates a mathematical concept. A human can perform the calculation using words or mathematical symbols to select specific values (reduction ratios) to evaluate numerical data (thresholds and errors). Therefore, the step of selecting reduction ratio candidates based on a plurality of the thresholds and errors is nothing more than a mathematical concept (MPEP 2106.04(a)(2)(I)).
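As a concrete illustration of the two steps characterized above as mathematical concepts, the claimed threshold calculation and candidate selection can be sketched in Python. This is a minimal sketch under stated assumptions: the function names, the use of a maximum-absolute-difference error metric, and the dictionary layout are all hypothetical, since the claims do not specify a particular formula.

```python
import numpy as np

def layer_error_thresholds(before, after):
    """Per-layer error between a tensor before and after reduction
    (one hypothetical reading of 'thresholds of errors in tensors
    between before and after reduction')."""
    return {name: float(np.abs(before[name] - after[name]).max())
            for name in before}

def select_ratio_candidates(thresholds, errors_by_ratio):
    """For each layer, keep the largest candidate reduction ratio whose
    reduction error stays within that layer's threshold (illustrative)."""
    selected = {}
    for layer, thr in thresholds.items():
        ok = [r for r, err in errors_by_ratio[layer].items() if err <= thr]
        selected[layer] = max(ok) if ok else 0.0
    return selected
```

For example, a layer whose tensor moves from [1.0, -2.0] to [0.9, -1.8] yields a threshold of 0.2, and only candidate ratios whose measured error stays at or below 0.2 survive selection.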
Step 2A – Prong 2

Claims 1, 9 and 17 further recite determining reduction ratios to be applied one to each of the plurality of layers based on inference accuracy of the trained model and inference accuracy of a reduced model after machine learning, the reduced model being obtained by reducing each element of the plurality of layers in the trained model according to the reduction ratio candidates to be applied; this limitation fails to integrate the abstract idea into a practical application. The step of determining reduction ratios to be applied to each of the plurality of layers is a form of insignificant input and output extra-solution activity, where determining reduction ratios for each of the plurality of layers based on inference accuracy is necessary for all uses of the judicial exception. This additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea (MPEP 2106.05(g)).

Step 2B

The additional elements identified in Step 2A – Prong 2, which are forms of insignificant extra-solution activity, do not amount to significantly more than an abstract idea because court decisions have determined the additional element of determining reduction ratios for each of the plurality of layers based on inference accuracy to be well-understood, routine, and conventional when claimed in a merely generic manner (MPEP 2106.05(d)(II)). As such, claims 1, 9 and 17 are not patent eligible.

Dependent claims

Claims 2-8, 10-16 and 18-20 merely narrow the previously recited abstract idea limitations. For the reasons described above with respect to claims 1, 9 and 17, this judicial exception is not meaningfully integrated into a practical application and does not amount to significantly more than the abstract idea.
The claims recite limitations similar to those described for the independent claims above and do not provide anything more than mental processes that are practically capable of being performed in the human mind with the assistance of pen and paper. Therefore, claims 2-8, 10-16 and 18-20 also recite abstract ideas that are not integrated into a practical application and do not amount to significantly more than the judicial exception, and are rejected under 35 U.S.C. 101.

Step 1

Claims 2-8 are drawn to a non-transitory computer-readable recording medium, claims 10-16 are drawn to a computer-implemented method, and claims 18-20 are drawn to an information processing apparatus comprising a memory and a processor configured to execute the process and to perform the method of claims 10-16. Therefore, each of these groups falls under one of the four categories of statutory subject matter (processes/methods, machines/apparatuses, manufactures, and compositions of matter).

Step 2A – Prong 1

Dependent claims 2, 10 and 18 further recite the mathematical concept wherein the calculating the thresholds includes calculating the thresholds based on values of loss functions of the trained model at a time of reducing elements of each of the plurality of layers and weight gradients of each of the plurality of layers, which is based on one or more features of the ML project (MPEP 2106.04(a)(2)(I)).
Dependent claims 3, 11 and 19 further recite the mathematical concept of discarding a plurality of the selected reduction ratio candidates when a sum of the inference accuracy of the reduced model after machine learning and a margin is lower than the inference accuracy of the trained model, and determining to adopt a plurality of the selected reduction ratio candidates as the reduction ratios to be applied one to each of the plurality of layers when the sum of the inference accuracy of the reduced model after machine learning and the margin is equal to or higher than the inference accuracy of the trained model, both of which are based on one or more features of the ML project (MPEP 2106.04(a)(2)(I)).

Dependent claims 4, 12 and 20 further recite the mathematical concept wherein the calculating the thresholds includes scaling the thresholds such that an L2 norm of thresholds of the plurality of layers becomes equal to or smaller than a threshold upper limit, which is based on one or more features of the ML project (MPEP 2106.04(a)(2)(I)).

Dependent claims 5 and 13 further recite the mathematical concept of decreasing the threshold upper limit when the sum of the inference accuracy of the reduced model after machine learning and the margin is lower than the inference accuracy of the trained model, and increasing the threshold upper limit when the sum of the inference accuracy of the reduced model after machine learning and the margin is equal to or higher than the inference accuracy of the trained model, both of which are based on one or more features of the ML project (MPEP 2106.04(a)(2)(I)).

Dependent claims 6 and 14 further recite the mathematical concept wherein the calculating the thresholds includes updating the threshold upper limit such that combinations of reduction ratio candidates of the plurality of layers differ in each execution of selecting the reduction ratio candidates, which is based on one or more features of the ML project (MPEP 2106.04(a)(2)(I)).
Dependent claims 7 and 15 further recite the mathematical concept wherein the calculating the thresholds includes setting an initial value of the threshold upper limit so as to calculate thresholds that cause, among the plurality of layers, an element of a layer in which the threshold is maximum to be reduced and that cause an element of a layer other than the layer in which the threshold is maximum not to be reduced, which is based on one or more features of the ML project (MPEP 2106.04(a)(2)(I)).

Step 2A – Prong 2

Dependent claims 8 and 16 further recite the insignificant extra-solution activities of repeating execution of the calculating the thresholds, the selecting the reduction ratio candidates, and the determining the reduction ratios until execution times or the reduction ratios satisfy a predetermined condition, and outputting the reduction ratios determined when the predetermined condition is satisfied. These additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (MPEP 2106.05(g)). As such, dependent claims 2-8, 10-16 and 18-20 are not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C.
103 as being unpatentable over Pravendra Singh et al. (“Play and Prune: Adaptive Filter Pruning for Deep Model Compression”), hereafter Singh, and further in view of Cho et al. (US 20210081798 A1), hereafter Cho. Singh was cited in the IDS filed on 03/22/2023.

With respect to claim 1, Singh teaches a non-transitory computer-readable recording medium having stored therein a machine learning program for causing a computer to execute a process (a Play & Prune (PP) framework configured to prune and fine-tune CNN model parameters with an adaptive pruning rate while maintaining the model’s predictive performance [page 1, Abstract]) comprising: calculating thresholds of errors in tensors between before and after reduction one for each element of a plurality of layers in a trained model of a neural network including the plurality of layers (an L1 regularization constant is employed in the cost function. An adaptive weight threshold is chosen for each layer Li such that removal results in a negligible accuracy drop. Adaptive Filter Pruning (AFP) is configured to minimize the number of filters in the model, wherein the weight thresholds W are calculated initially. The pruning rate is changed using the pruning rate controller (PRC). Equation (6) is used to calculate the adaptive thresholds WA [page 3, 3.3. Weight Threshold Initialization – page 4, 3.5. Pruning Rate Controller (PRC)]); selecting reduction ratio candidates to be applied one to each of the plurality of layers based on a plurality of the thresholds and errors in tensors between before and after reduction in cases where the elements are reduced by each of a plurality of reduction ratio candidates in each of the plurality of layers (the pruning module P needs to identify the candidates to be pruned. U and I are the sets of unimportant and important filters, respectively. Equations (2) and (3) indicate that the approach of calculating filter importance uses the L1 norm.
U is treated as a candidate set of filters to be pruned, a subset that will eventually be pruned. Parameter α is treated as the reduction ratio in Equation (3) [page 3, 3.2. Convolutional Filter Partitioning – 3.3. Weight Threshold Initialization]); and determining reduction ratios to be applied one to each of the plurality of layers based on (the AFP minimizes the number of filters in the network, and the PRC optimizes the accuracy given that number of filters. Figure 1 shows how the AFP minimizes the number of filters while the PRC maximizes the accuracy during training. Equation (7) is given, where C(#w) is the accuracy with #w remaining filters, ε is the accuracy of the unpruned network, and C(#w) - (ε - ϵ) indicates the gap between the accuracy and the tolerance error level [page 2, 2. Related Work – page 4, 3. Proposed Approach and FIG. 1]). However, Singh does not explicitly teach determining reduction ratios to be applied one to each of the plurality of layers based on inference accuracy of the trained model and inference accuracy of a reduced model. In the same field of endeavor, Cho teaches determining reduction ratios to be applied one to each of the plurality of layers based on inference accuracy of the trained model and inference accuracy of a reduced model (determining the change in inference accuracy may include calculating sensitivity for each of the plural layers based on the difference between an inference accuracy before pruning on each layer is performed and an inference accuracy after pruning on each of the plural layers is performed. A compression method is performed to reduce the size of a neural network, reduce system costs, and reduce the amount of computation in the implementation of a neural network. The pruning process of a neural network may comprise the compression or removal of the connectivity between nodes [par. 0010-0014, 0066-0071]).
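The filter partitioning summarized above splits each layer's filters into unimportant (U) and important (I) sets by comparing filter L1 norms against a per-layer weight threshold. A minimal sketch of that idea follows; the function name and list-of-indices return format are illustrative, and Singh's exact equations are not reproduced here.

```python
import numpy as np

def partition_filters(filters, threshold):
    """Split one layer's filters into unimportant (U) and important (I)
    index sets by comparing each filter's L1 norm against the layer's
    weight threshold (illustrative sketch of L1-norm partitioning)."""
    U, I = [], []
    for idx, f in enumerate(filters):
        # a filter whose absolute-weight sum falls below the threshold
        # is a pruning candidate
        (U if np.abs(f).sum() < threshold else I).append(idx)
    return U, I
```

With a threshold of 0.5, a filter with weights [0.1, -0.1] (L1 norm 0.2) lands in U, while a filter with weights [1.0, 1.0] (L1 norm 2.0) lands in I.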
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated the concept of pruning a neural network by setting a weight threshold value based on a weight distribution of the layers included in the neural network, as suggested by Cho, into the concept of using the Play and Prune framework to prune and fine-tune CNN parameters, as suggested by Singh, because both of these systems address the process of pruning neurons/parameters in neural networks. Doing so would be desirable because the concept of Singh would be made more efficient by predicting a change in inference accuracy, calculating a sensitivity for each of the plurality of layers based on the difference between an inference accuracy before pruning and an inference accuracy after pruning, while adjusting the weight threshold value (Cho, [par. 0010-0015]).

With respect to claim 2, the combination of Singh and Cho teaches wherein the calculating the thresholds includes calculating the thresholds based on values of loss functions of the trained model at a time of reducing elements of each of the plurality of layers and weight gradients of each of the plurality of layers (Singh, an objective function is used as the original cost function along with the L1 regularization constant. W indicates the initial weight thresholds for the K layers, and the adaptive thresholds are calculated in Equation (6). The AFP is used to minimize the number of filters in the model. The weight thresholds are updated dynamically by the PRC module [page 3, 3.3. Weight Threshold Initialization – page 4, 3.5. Pruning Rate Controller]).
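Claim 2 ties the thresholds to two inputs: the loss-function value observed when a layer's elements are reduced, and that layer's weight gradients. One hypothetical way those two inputs could combine into a per-layer threshold is sketched below; neither the claim nor the cited references mandates this particular formula, and every name here is an assumption for illustration.

```python
import numpy as np

def gradient_based_thresholds(loss_deltas, weight_grads):
    """Illustrative only: derive a per-layer error threshold from (a) the
    loss change observed when the layer's elements are reduced and (b) the
    mean magnitude of the layer's weight gradients -- the two inputs that
    claim 2 recites. The combining formula is hypothetical."""
    thresholds = {}
    for layer in loss_deltas:
        grad_scale = float(np.abs(weight_grads[layer]).mean()) + 1e-12
        # a layer whose reduction barely moves the loss, relative to its
        # gradient scale, tolerates a larger error threshold
        thresholds[layer] = 1.0 / (abs(loss_deltas[layer]) / grad_scale + 1.0)
    return thresholds
```

Under this sketch, a layer whose reduction does not change the loss gets the maximum threshold of 1.0, and layers with larger loss changes get proportionally smaller thresholds.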
With respect to claim 3, the combination of Singh and Cho teaches wherein the determining the reduction ratios includes: discarding a plurality of the selected reduction ratio candidates when a sum of the inference accuracy of the reduced model after machine learning and a margin is lower than the inference accuracy of the trained model (Cho, operation 704 measures inference accuracy of a neural network with respect to the pruning data set. Operation 706 is performed when the measured inference accuracy is lower than the threshold accuracy. The processor updates the weight threshold value by increasing the weight threshold value. When the measured inference accuracy is less than the threshold accuracy, the processor terminates the pruning on the current layer [par. 0119-0128]); and determining to adopt a plurality of the selected reduction ratio candidates as the reduction ratios to be applied one to each of the plurality of layers when the sum of the inference accuracy of the reduced model after machine learning and the margin is equal to or higher than the inference accuracy of the trained model (Cho, operation 704 measures inference accuracy of a neural network with respect to the pruning data set. Operation 707 is performed when the measured inference accuracy is greater than the threshold accuracy. The processor determines whether the pruning is completed for all layers of the neural network [par. 0119-0128]). With respect to claim 4, the combination of Singh and Cho teaches wherein the calculating the thresholds includes scaling the thresholds such that an L2 norm of thresholds of the plurality of layers becomes equal to or smaller than a threshold upper limit (Singh, the initial weight thresholds for the K layers are denoted W=[W1, W2, …, Wk]. w is the collection of all the filters from each layer that have a sum of absolute values greater than W. The adaptive thresholds are calculated in Equation (6).
From Equation (6), the adaptive threshold WA depends on the performance of the system over the w filters in the model. By controlling the pruning rate, a balance between filter pruning and accuracy is also maintained [pages 3-4, 3.5. Pruning Rate Controller]). With respect to claim 5, the combination of Singh and Cho teaches wherein the calculating the thresholds includes: decreasing the threshold upper limit when the sum of the inference accuracy of the reduced model after machine learning and the margin is lower than the inference accuracy of the trained model (Cho, the processor may adjust the weight threshold value when a decrease in the inference accuracy is less than or equal to a certain level. Pruning may be performed on the current layer. The processor updates the weight threshold value until the inference accuracy of a neural network with respect to the pruning data set decreases to the threshold accuracy [par. 0119-0128]); and increasing the threshold upper limit when the sum of the inference accuracy of the reduced model after machine learning and the margin is equal to or higher than the inference accuracy of the trained model (Cho, the processor may update the weight threshold value by increasing the weight threshold value by δ, wherein δ may be a value that is arbitrarily set based on various factors such as the weight distribution of a neural network, the pruning rate of the current layer, and the like [par. 0119-0128]). With respect to claim 6, the combination of Singh and Cho teaches wherein the calculating the thresholds includes updating the threshold upper limit such that combinations of reduction ratio candidates of the plurality of layers differ in each execution of selecting the reduction ratio candidates (Cho, operations 702 to 707, which update the weight threshold value, are repeatedly performed from the current layer to the next layer based on the result of operation 707, in which the processor determines whether the pruning is completed.
The retraining of a neural network is repeatedly performed to reduce the decrease in accuracy from pruning. When the pruning of a current layer is completed, the processor repeatedly performs pruning on another layer of the neural network [par. 0119-0130, 0137]). With respect to claim 7, the combination of Singh and Cho teaches wherein the calculating the thresholds includes setting an initial value of the threshold upper limit so as to calculate thresholds that causes, among the plurality of layers, an element of a layer in which the threshold is maximum to be reduced and that causes an element of a layer other than the layer in which the threshold is maximum not to be reduced (Singh, the initial weight threshold is calculated for each layer with an initial regularization constant in Figure 2, which creates two clusters of filters. The left cluster uses a binary search to find the maximum threshold for a layer such that the accuracy drop is nearly zero [page 3, 3.3. Weight Threshold Initialization]). With respect to claim 8, the combination of Singh and Cho teaches wherein the process further includes: repeating execution of the calculating the thresholds, the selecting the reduction ratio candidates, and the determining the reduction ratios until execution times or the reduction ratios satisfy a predetermined condition (Singh, the AFP iteratively minimizes the number of filters in the model, and the PRC iteratively maximizes the accuracy with the set of filters retained by the AFP. The AFP will prune a filter only when the accuracy drop is within the tolerance limit. After each pruning step, the controller C checks the accuracy drop. If the drop is more than the tolerance limit, then the pruning rate is reset to zero, and the controller C tries to recover the system performance. In such cases, the pruning is rolled back [page 2, 3.1. Overview – page 3, 3.2.
Convolutional Filter Partitioning]); and outputting the reduction ratios determined when the predetermined condition is satisfied (Singh, the reduction ratio α is selected as the filter of the lowest importance from each layer, partitioned into U and I. Equation (4) indicates the optimization that transfers the knowledge of the important filters to the rest of the network [page 3, 3.4. Adaptive Filter Pruning]). With respect to claim 9, it is a computer-implemented method claim corresponding to the non-transitory computer-readable recording medium of claim 1. Therefore, it is rejected for the same reasons as claim 1 above. With respect to claim 10, it is a computer-implemented method claim corresponding to the non-transitory computer-readable recording medium of claim 2. Therefore, it is rejected for the same reasons as claim 2 above. With respect to claim 11, it is a computer-implemented method claim corresponding to the non-transitory computer-readable recording medium of claim 3. Therefore, it is rejected for the same reasons as claim 3 above. With respect to claim 12, it is a computer-implemented method claim corresponding to the non-transitory computer-readable recording medium of claim 4. Therefore, it is rejected for the same reasons as claim 4 above. With respect to claim 13, it is a computer-implemented method claim corresponding to the non-transitory computer-readable recording medium of claim 5. Therefore, it is rejected for the same reasons as claim 5 above. With respect to claim 14, it is a computer-implemented method claim corresponding to the non-transitory computer-readable recording medium of claim 6. Therefore, it is rejected for the same reasons as claim 6 above. With respect to claim 15, it is a computer-implemented method claim corresponding to the non-transitory computer-readable recording medium of claim 7.
Therefore, it is rejected for the same reasons as claim 7 above. With respect to claim 16, it is a computer-implemented method claim corresponding to the non-transitory computer-readable recording medium of claim 8. Therefore, it is rejected for the same reasons as claim 8 above. With respect to claim 17, it is an information processing apparatus claim corresponding to the non-transitory computer-readable recording medium of claim 1. Therefore, it is rejected for the same reasons as claim 1 above. With respect to claim 18, it is an information processing apparatus claim corresponding to the non-transitory computer-readable recording medium of claim 2. Therefore, it is rejected for the same reasons as claim 2 above. With respect to claim 19, it is an information processing apparatus claim corresponding to the non-transitory computer-readable recording medium of claim 3. Therefore, it is rejected for the same reasons as claim 3 above. With respect to claim 20, it is an information processing apparatus claim corresponding to the non-transitory computer-readable recording medium of claim 4. Therefore, it is rejected for the same reasons as claim 4 above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Lee et al. (US 12327191 B2) discloses a neural network pruning method that includes: acquiring a first task accuracy of an inference task processed by a pretrained neural network; pruning the neural network on a channel basis by adjusting weights between nodes of channels based on a preset learning weight and on a channel-by-channel pruning parameter corresponding to a channel of each of a plurality of layers of the pretrained neural network; updating the learning weight based on the first task accuracy and a task accuracy of the pruned neural network; updating the channel-by-channel pruning parameter based on the updated learning weight and the task accuracy of the pruned neural network; and repruning the pruned neural network on a channel basis based on the updated learning weight and the updated channel-by-channel pruning parameter.

Kim et al. (US 20230168921 A1) discloses a neural network processing unit (NPU) that includes a processing element array, an NPU memory system configured to store at least a portion of the data of an artificial neural network model processed in the processing element array, and an NPU scheduler configured to control the processing element array and the NPU memory system based on artificial neural network model structure data or artificial neural network data locality information.

Arikawa et al. (US 20230297856 A1) discloses an inference processing device that uses a learned neural network to infer a feature of input data, the device including: a first storage unit that stores the input data; a second storage unit that stores a weight of the learned neural network; a data filtering unit that extracts only specific input data from pieces of the input data; and an inference operation unit that takes the extracted specific input data and the weight as inputs, performs the inference operation of the learned neural network, and infers the feature of the input data.
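For readers unfamiliar with the pruning techniques at issue, the importance-based filter pruning that the Singh and Lee references describe can be illustrated with a minimal, hypothetical sketch (not taken from any cited reference): a layer's filters are ranked by L1 norm, a common importance proxy, and a chosen reduction ratio of the least important filters is zeroed out. The function name and shapes below are illustrative assumptions, not claim language.

```python
import numpy as np

def prune_filters_by_l1(weights: np.ndarray, reduction_ratio: float) -> np.ndarray:
    """Zero out the convolutional filters with the smallest L1 norms.

    weights: array of shape (num_filters, in_channels, kh, kw).
    reduction_ratio: fraction of this layer's filters to prune
    (e.g. 0.25 removes the least-important quarter).
    """
    num_filters = weights.shape[0]
    num_prune = int(num_filters * reduction_ratio)
    if num_prune == 0:
        return weights.copy()
    # L1 norm of each filter serves as its importance score.
    importance = np.abs(weights).reshape(num_filters, -1).sum(axis=1)
    # Indices of the least-important filters come first after argsort.
    prune_idx = np.argsort(importance)[:num_prune]
    pruned = weights.copy()
    pruned[prune_idx] = 0.0
    return pruned
```

In iterative schemes such as the one Lee describes, a step like this would alternate with accuracy measurements that update the per-channel pruning parameters before repruning.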
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Quoc Phung, whose telephone number is (703) 756-1330. The examiner can normally be reached Monday through Friday, 9am to 5pm PT.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Welch, can be reached at 571-272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/Q.L.P./
Examiner, Art Unit 2143

/JENNIFER N WELCH/
Supervisory Patent Examiner, Art Unit 2143

Prosecution Timeline

Jul 13, 2022
Application Filed
Jan 16, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12554998
DATA ANALYTICS FOR MORE-INFORMED REPAIR OF A MECHANICAL OR ELECTROMECHANICAL SYSTEM
2y 5m to grant Granted Feb 17, 2026
Patent 12415528
COMPLEX NETWORK COGNITION-BASED FEDERATED REINFORCEMENT LEARNING END-TO-END AUTONOMOUS DRIVING CONTROL SYSTEM, METHOD, AND VEHICULAR DEVICE
2y 5m to grant Granted Sep 16, 2025
Patent 12353983
AN INFERENCE DEVICE AND METHOD FOR REDUCING THE MEMORY USAGE IN A WEIGHT MATRIX
2y 5m to grant Granted Jul 08, 2025
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
32%
Grant Probability
99%
With Interview (+100.0%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 19 resolved cases by this examiner. Grant probability derived from career allow rate.
