Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Status of Claims
This action is in reply to the amendments filed January 8, 2026.
Claims 1-10 are currently pending.
Claims 1-10 have been amended.
Information Disclosure Statement
The information disclosure statement, filed January 8, 2026, has been considered.
Specification
The amended title is accepted.
Claim Interpretation
The amended claims no longer invoke a 35 U.S.C. 112(f) interpretation.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-10 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the enablement requirement. The claims contain subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention. Claim 1 recites “narrow down candidates for one or more contraction patterns from a plurality of contraction patterns to which a contraction rate of the neural network is set, based on the state of ignition of the neurons.” It is not clear how the state of ignition of the neurons relates to the contraction patterns or the contraction rate, and the specification does not describe this step in sufficient detail to enable one of ordinary skill to make and/or use the invention. Considering the Wands factors:
(A) The breadth of the claims – the claims are broad and lacking in detail.
(B) The nature of the invention – the invention is directed to neural networks and artificial intelligence.
(C) The state of the prior art – pruning neural networks is generally known.
(D) The level of one of ordinary skill – the level of ordinary skill would be high; however, the specification provides no guidance, and one of ordinary skill would be unable to make and/or use the claimed invention.
(E) The level of predictability in the art – the level of predictability is low as there are many ways to implement neural networks and determine how to prune/contract neural networks.
(F) The amount of direction provided by the inventor – no direction is provided regarding how the ignition state relates to the contraction patterns and contraction rate.
(G) The existence of working examples – no evidence of working examples has been provided beyond the specification.
(H) The quantity of experimentation needed to make or use the invention based on the content of the disclosure – the level of experimentation would be high, as there are many ways to implement neural networks and to determine how to contract/prune neural networks; one such hypothetical implementation is sketched below for illustration.
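For illustration of factors (E) and (H), one of the many possible implementations left open by the disclosure is sketched below. The selection heuristic, the quantile threshold, and the representation of a contraction pattern as a set of neuron indices are assumptions of the examiner, not teachings of the specification.

    import numpy as np

    def narrow_candidates(ignition_freq, patterns, keep_quantile=0.8):
        # Hypothetical heuristic: keep only contraction patterns that avoid
        # pruning frequently ignited (assumed high-sensitivity) neurons.
        # The quantile threshold is an assumption; the specification states
        # no such rule.
        threshold = np.quantile(ignition_freq, keep_quantile)
        protected = set(np.flatnonzero(ignition_freq >= threshold))
        # A "pattern" is assumed here to be a set of neuron indices to prune.
        return [p for p in patterns if not (set(p) & protected)]

Countless variants (different thresholds, per-layer rules, weighting schemes) would equally satisfy the claim language, underscoring the undue experimentation that would be required.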
Additionally, claims 1-10 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Independent claim 1 has been amended to recite “calculate a state of ignition of neurons of the neural network by the input data, the state of ignition including a frequency with which a neuron is ignited.” The specification does not describe calculating a state of ignition that includes an ignition frequency. Paragraph [0041] of the specification indicates that frequently ignited neurons may be determined to be large in sensitivity, but does not discuss calculating a state of ignition that includes a frequency of ignition.
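For context, a minimal sketch of one plausible reading of the amended limitation is given below, under the assumption that a neuron “ignites” when its ReLU activation is positive and that its ignition frequency is the fraction of input samples on which it ignites. This reading, and every detail of the sketch, is an assumption of the examiner; the specification does not describe such a calculation.

    import numpy as np

    def ignition_frequency(inputs, weights, bias):
        # Assumed reading: a neuron "ignites" when its ReLU activation is
        # positive; its ignition frequency is the fraction of input samples
        # on which it ignites. Not described in the specification.
        pre_act = inputs @ weights + bias   # shape: (n_samples, n_neurons)
        ignited = pre_act > 0               # boolean ignition events
        return ignited.mean(axis=0)         # per-neuron ignition frequency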
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
The rejection of claims 1-10 under 35 U.S.C. 112(b), due to the invocation of 35 U.S.C. 112(f), is withdrawn in view of Applicant’s amendments to the claims, which remove the 35 U.S.C. 112(f) invocation.
Claims 1-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 1 recites: “calculate a state of ignition of neurons of the neural network by the input data, the state of ignition including a frequency with which a neuron is ignited.” It is not clear which neuron, of the plurality of neurons, the recited ignition frequency refers to.
Claim 3 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 3 recites the limitation "the one or more contraction patterns minimized in the inference error" in line two. There is insufficient antecedent basis for this limitation in the claim.
The previous rejection of claim 8, under 35 U.S.C. 112(b), for "the application destination of the neural network" is withdrawn in view of Applicant’s amendments.
The previous rejection of claim 8, under 35 U.S.C. 112(b), for “contraction execution parts different in contraction method” is withdrawn in view of Applicant’s amendments.
The previous rejection of claim 10, under 35 U.S.C. 112(b), for "the neural networks" is withdrawn in view of Applicant’s amendments.
The previous rejection of claim 10, under 35 U.S.C. 112(b), for “using a probabilistic search set in advance” is withdrawn in view of Applicant’s amendments.
The previous rejection of claims 7, 9, and 10, under 35 U.S.C. 112(b), for “optimal” is withdrawn in view of Applicant’s amendments.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-10 are rejected under 35 U.S.C. 103 as being unpatentable over Yao et al., U.S. Patent Application Publication 2019/0197407 (“Yao”), in view of Seibold et al., U.S. Patent Application Publication 2018/0181867 (“Seibold”).
With respect to independent claim 1, Yao teaches:
A computing device having input data and a neural network which performs an operation using a weighting factor (Yao teaches a DNN comprising parameters including weights and biases; see [0129].), comprising a processor configured to:
calculate a state of ignition of neurons of the neural network by the input data (Yao teaches measuring the importance of parameters, in part, by using feature maps of the input in [0130]. The instant specification indicates in [0035] that the calculated feature amount is the ignition state.),
narrow down candidates for one or more contraction patterns from a plurality of contraction patterns to which a contraction rate of the neural network is set (Yao teaches a method for pruning (contracting) a specified set of parameters from each layer of a dense neural network model based, in part, on a sparsity rate; see abstract, figure 16, and [0139]. The fixed portion of parameters removed in [0139] is considered a pattern. Yao further teaches pruning until a final sparsity rate is reached in [0126].), based on the state of ignition of the neurons, and
execute contraction of the neural network, based on the narrowed-down candidates for the one or more contraction patterns to generate a post-contraction neural network (Yao performs pruning in figure 16 and [0139] and outputs a final DNN model.).
Yao does not explicitly disclose:
the state of ignition including a frequency with which a neuron is ignited;
However, Seibold teaches this limitation:
Seibold teaches pruning a neural network by removing low-performing neurons, where low-performing neurons are identified by studying activation frequency (i.e., the frequency with which a neuron is ignited); see [0040]-[0041].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yao such that the contraction is determined, in part, based on an ignition frequency, as similarly taught by Seibold, because pruning a neural network will improve performance (see at least [0002] of Seibold).
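By way of illustration only, the proposed combination, pruning a fixed fraction of neurons per Yao’s sparsity rate while ranking neurons by Seibold’s activation frequency, might look like the sketch below. The code is the examiner’s hypothetical; it is not a disclosure of either reference.

    import numpy as np

    def prune_by_frequency(weights, ignition_freq, sparsity_rate=0.5):
        # Hypothetical combination: remove the fraction of neurons set by
        # the sparsity rate (cf. Yao [0139]), choosing the least frequently
        # ignited neurons (cf. Seibold [0040]-[0041]).
        n_prune = int(sparsity_rate * len(ignition_freq))
        prune_idx = np.argsort(ignition_freq)[:n_prune]  # least active first
        mask = np.ones(len(ignition_freq), dtype=bool)
        mask[prune_idx] = False
        return weights[:, mask], mask  # drop the pruned neurons' columns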
With respect to claim 2, the rejection of claim 1 is incorporated. Further, Yao teaches:
perform inference on the post-contraction neural network to calculate an inference error and extract the one or more contraction pattern based on the inference error from among the plurality of contraction patterns (Yao teaches a unified framework that minimizes errors from feed-forward approximation and backward propagation to achieve a more powerful pruning technique in [0124]. Yao also teaches performing pruning in figure 16 and [0139] and outputting a final DNN model that is used for inference.).
With respect to claim 3, the rejection of claim 2 is incorporated. Further, Yao teaches:
extract the one or more contraction pattern minimized in the inference error (Yao teaches a unified framework that minimizes errors from feed-forward approximation and backward propagation to achieve a more powerful pruning technique in [0124]. Yao also teaches performing pruning in figure 16 and [0139] and outputting a final DNN model that is used for inference.).
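As a hypothetical illustration of extracting the pattern minimized in the inference error, each candidate contraction pattern could be scored on held-out data and the pattern with the minimum error selected, as sketched below. The masked single-layer error is a stand-in for full-network inference, and every name and detail is an assumption of the examiner.

    import numpy as np

    def inference_error(inputs, targets, weights, bias, mask):
        # Mean-squared inference error of one masked (contracted) ReLU
        # layer; a stand-in for inference on the full network.
        w = weights * mask                           # zero pruned weights
        preds = np.maximum(inputs @ w + bias, 0.0)   # ReLU forward pass
        return float(np.mean((preds - targets) ** 2))

    def extract_min_error_pattern(patterns, inputs, targets, weights, bias):
        # Score every candidate mask and keep the one minimizing the error.
        errors = [inference_error(inputs, targets, weights, bias, m)
                  for m in patterns]
        return patterns[int(np.argmin(errors))]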
With respect to claim 4, the rejection of claim 1 is incorporated. Further, Yao teaches:
learning again on the post-contraction neural network in accordance with the input data (Yao teaches retraining the pruned neural network; see abstract, figure 16, and [0139]. Claim 1 of Yao teaches a retraining module.).
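For context, a minimal sketch of “learning again” on a post-contraction network, here plain gradient descent on the surviving weights of a single pruned linear layer under a mean-squared-error loss, is given below. The training recipe is an assumption of the examiner, not Yao’s disclosed retraining module.

    import numpy as np

    def retrain(inputs, targets, weights, mask, lr=0.01, epochs=100):
        # Hypothetical retraining: gradient descent on the unpruned weights
        # of a contracted linear layer, keeping pruned weights at zero.
        w = weights * mask
        n = len(inputs)
        for _ in range(epochs):
            preds = inputs @ w                        # forward pass
            grad = inputs.T @ (preds - targets) / n   # MSE gradient w.r.t. w
            w -= lr * grad * mask                     # update survivors only
        return w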
With respect to claim 5, the rejection of claim 2 is incorporated. Further, Yao teaches:
perform learning again on the post-contraction neural network in accordance with the input data (Yao teaches retraining the pruned neural network; see abstract, figure 16, and [0139].), and
further including:
a memory configured to store intermediate data in the middle of operation of the computing device (Yao teaches implementation on a computer device disclosed in at least figures 1-3 that includes a memory controller, a memory device (comprising instructions and data), and a memory interface connected to various engines and sub-systems. Yao further teaches that data generated by threads can be output to memory in a unified return buffer; see [0054].),
a scheduler configured to serve as a master configured to control the processor and the memory as slaves (Yao teaches various controller hubs in at least figure 1, with interconnects to processors, memory, input/output, etc. The memory controller hub facilitates communication between a memory device and other components, which is considered a master/slave relationship.), and
an interconnect configured to connect the master with the slaves (Yao teaches various controller hubs and busses in at least figure 1, with interconnects to processors, memory, input/output, etc.).
With respect to claim 6, the rejection of claim 1 is incorporated. Further, Yao teaches:
wherein the processor is further configured to receive the input data corresponding to the neural network and a destination for application of the post-contraction neural network (Yao teaches implementation on a computer device disclosed in at least figures 1-3 that includes a memory controller, a memory device (comprising instructions and data), and a memory interface connected to various engines and sub-systems. Yao further teaches an input/output (I/O) controller hub in figure 1.), calculate a feature amount obtained by estimating and digitalizing the state of ignition of each neuron of the neural network, and output the feature amount as an analysis result including a feature specific to an application destination (Digitalizing the ignition state is Applicant-admitted prior art; see [0039] of the instant specification. Yao also teaches implementation of the system on a computer device, which implies digitization. Further, Yao teaches measuring the importance of parameters, in part, by using feature maps of the input in [0130]. The instant specification indicates in [0035] that the calculated feature amount is the ignition state.).
With respect to claim 7, the rejection of claim 6 is incorporated. Further, Yao teaches:
wherein the processor is further configured to receive the analysis result (Yao teaches implementation on a computer device disclosed in at least figures 1-3 that includes a memory controller, a memory device (comprising instructions and data), and a memory interface connected to various engines and sub-systems. Yao further teaches that data generated by threads can be output to memory in a unified return buffer; see [0054].), execute the contraction of the neural network, based on the feature amount digitalized in the analysis result, and output a plurality of solution candidates for the post-contraction neural network and the weighting factor (Yao teaches pruning is done through optimizing a well-defined Joint Feed-forward and Backward Propagation Approximation in [0123].).
With respect to claim 8, the rejection of claim 1 is incorporated. Further, Yao teaches:
wherein the processor is further configured to execute the contraction based at least on pruning, low rank approximation, weight sharing or low bit converting, according to an application destination of the neural network (Yao teaches a method for pruning (contracting) a specified set of parameters from each layer of a dense neural network model based, in part, on a sparsity rate; see abstract, figure 16, and [0139]. Pruning each layer is considered to address a different part, and thus a different application destination, of the neural network.).
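For general technical context on the recited alternatives, a standard low rank approximation of a weight matrix by truncated SVD, one of the contraction methods listed in claim 8, is sketched below. This is textbook background, not a disclosure of Yao.

    import numpy as np

    def low_rank_approx(weights, rank):
        # Truncated SVD: replace one weight matrix with the product of two
        # thinner factors, cutting parameters when the rank is small enough.
        u, s, vt = np.linalg.svd(weights, full_matrices=False)
        a = u[:, :rank] * s[:rank]   # (d_out, rank); singular values folded in
        b = vt[:rank, :]             # (rank, d_in)
        return a, b                  # weights is approximated by a @ b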
With respect to claim 9, the rejection of claim 1 is incorporated. Further, Yao teaches:
wherein the processor is further configured to perform learning again on the post-contraction neural network in accordance with the input data (Yao teaches retraining the pruned neural network; see abstract, figure 16, and [0139]. Claim 1 of Yao teaches a retraining module.),
receive the plurality of solution candidates for the neural network and the weighting factor as inputs and perform learning again with the neural network and the weighting factor as initial values to thereby output the relearned neural network and the relearned weighting factor (Yao teaches retraining the pruned neural network; see abstract, figure 16, and [0139]. Retraining of the neural network would result in weights being updated.).
With respect to claim 10, the rejection of claim 1 is incorporated. Further, Yao teaches:
wherein the processor is further configured to perform inference on the post-contraction neural network on which the contraction is performed to calculate an inference error, extract the one or more contraction pattern from among the plural contraction patterns, based on the inference error (Yao teaches a unified framework that minimizes errors from feed-forward approximation and backward propagation to achieve a more powerful pruning technique in [0124]. Yao also teaches performing pruning in figure 16 and [0139] and outputting a final DNN model that is used for inference.),
receive a plurality of neural networks and the relearned weighting factors as inputs and calculate the one or more contraction pattern (Yao teaches a method for pruning (contracting) a specified set of parameters from each layer of a dense neural network model based, in part, on a sparsity rate; see abstract, figure 16, and [0139]. The fixed portion of parameters removed in [0139] is considered a pattern. Yao further teaches pruning until a final sparsity rate is reached in [0126].).
Response to Arguments
Applicant’s arguments, filed January 7, 2026, have been fully considered, but they are not persuasive.
Beginning on page 7 of the remarks, Applicant argues that the claims are enabled and traverses the previous rejection of the claims under 35 U.S.C. 112(a). Examiner respectfully disagrees. Applicant cites numerous paragraphs from the specification as purportedly supporting the claims; however, it is not clear how these paragraphs relate to the claims or offer support. Applicant first cites [0035], which indicates that the calculated feature amount is in fact the ignition state: “calculate a feature amount (ignition state)”. Paragraphs [0042], [0049], [0050], [0051], [0062], and [0063] are then cited; these relate to contraction based on the feature amount and merely indicate that the feature amount is based on the state of ignition of the neural network (see [0042]) or the ignition of neurons (see [0062]). The cited paragraphs do not teach one of ordinary skill how to narrow candidates for contraction based on the state of ignition. The cited portions may, at best, support determining candidates for contraction based on a feature amount, where the feature amount is based on the ignition state; the disclosure does not, however, support determining a contraction from the state of ignition alone. Additional explanation as to how the cited portions of the specification support the claimed features would be helpful in understanding Applicant’s argument.
On page 8, Applicant argues that the amendments overcome the rejections of claims 3, 7, 8, 9, and 10 under 35 U.S.C. 112(b). The rejection of claim 3 is maintained and a new rejection of claims 1-10 has been made; see above. The rejection of claim 3 is maintained because there is no prior recitation of “the one or more contraction patterns minimized in the inference error.”
On page 9, Applicant argues that the prior art rejection has been overcome in view of the amendments. Specifically, Applicant argues that Yao does not teach the newly claimed “state of ignition including a frequency with which a neuron is ignited.” The rejection above has been modified, as necessitated by Applicant’s amendments, and now relies on Seibold to teach the newly claimed feature; see the rejection of claim 1 above.
Conclusion
Claims 1-10 are rejected.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL T PELLETT whose telephone number is (571)270-7156. The examiner can normally be reached Monday - Friday 9-5 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li Zhen, can be reached at 571-272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL T PELLETT/Primary Examiner, Art Unit 2121