Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
No Art for claims 4-8 and 14-18
Claims 4 and 14 have no art rejection because they claim, “a plurality of sets of second weight values to the neuron circuits, generate a second processing result by utilizing the sets of second weight values to process the N detection results, and generate the sets of first weight values those are assigned to the neuron circuits by adjusting the sets of second weight values according to the second processing result.” The art of record does not teach generating an updated first set of weights based on the second set of weights and a second processing result. The claims are unclear, but if Applicant is merely claiming an iterative training process, that is plainly taught by the prior art, if not the prior art of record; see A Survey of Actor-Critic Reinforcement Learning: Standard and Natural Policy Gradients by Grondman et al., cited by Idgunji para 111. However, when claim 4 is read in light of claims 6 and 8, it becomes clearer that two of the plurality of sets of weights are selected for different neural network operations and saved separately.
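For illustration only (not part of the record), the iterative-training reading of claims 4 and 14 discussed above can be sketched as follows. All function names, values, and the learning rate are hypothetical; this is a minimal sketch of one gradient-style update, not any reference's actual implementation.

```python
# Sketch of the claimed reading: second weight values process the N
# detection results to yield a second processing result, and the first
# weight values are generated by adjusting the second weight values
# according to that result. Names and the learning rate are hypothetical.

def train_step(second_weights, detections, target, lr=0.01):
    # Second processing result: weighted sum of the N detection results.
    second_result = sum(w * d for w, d in zip(second_weights, detections))
    error = second_result - target
    # Generate the first weight values by adjusting the second weight
    # values according to the second processing result (one gradient step).
    first_weights = [w - lr * error * d
                     for w, d in zip(second_weights, detections)]
    return first_weights, second_result
```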
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 4-8 and 14-18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
In claims 4 and 14, Applicant recites “generate the sets of first weight values those are assigned to the neuron circuits by adjusting the sets of second weight values according to the second processing result.” Emphasis added. This clause is grammatically unclear. It will be interpreted as: generate the sets of first weight values assigned to the neuron circuits. Further, it is unclear how a first set of weights can be generated when they were already created in Figure 3. For purposes of examination, this will be interpreted as a second, separate set of weights that is used alternatively to the first set of weights to yield different processing results.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 9-13 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 20200050920 A1 to Idgunji et al. and US 20170160338 A1 to Connor et al.
Idgunji teaches claims 1 and 11. A processor circuit, comprising: a processor, configured to provide a control signal, wherein the control signal indicates an operational status of the processor; (Idgunji para 86 “At operation 304, each of the parallel processors 116 may process received instructions to provide predictive power signals 142 for pending (soon-to-be-executed) instructions…” The predictive power signals are the control signal that indicates operational status.)
N detection circuits, configured to detect (Idgunji para 85 “Operation 302 monitors the current power used by the GPU….” Power affects voltage.)
a neural network circuit, coupled to the processor and the N detection circuits, the neural network circuit being configured to determine the operating voltage of the processor according to the control signal and the N detection results. (Idgunji para 85-86 “Operation 302 monitors the current power used by the GPU….[0086] At operation 304, each of the parallel processors 116 may process received instructions to provide predictive power signals 142 for pending (soon-to-be-executed) instructions…” The current power is the detection results and the predictive power signals are the control signal. Idgunji para 90 “the deep learning block 312 can be performed over a number of such individual processing cycles and produce outputs that are used to more occasionally update the hardware-maintained voltage and/or frequency parameters.” Idgunji para 110 “The learning system 460 includes one or more deep neural networks…”) Idgunji does not teach monitoring several different types of variation factors.
However, Connor teaches N different types of variation factors affecting an operating voltage of the processor respectively, and accordingly generate N detection results respectively, N being an integer greater than one; (Connor para 41 “the reliability rack scale architecture may use memory to store aggregate characteristics regarding workloads, voltage, and temperature for every discretized portion of a given component, allowing for autonomous analytics and warranty verification in addition to cumulative reliability lifetime calculation.”)
Connor, Idgunji and the claims teach variable operating conditions in integrated circuits. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to keep track of more variation factors, as taught in Connor, because “[w]ith no method to assess reliability in real time with respect to actual product use and environmental conditions, extra reliability that may be in the form of additional product lifetime and/or performance may be unused, translating to additional product cost over time.” Connor para 3.
Connor teaches claims 2 and 12. The processor circuit of claim 1, wherein the N different types of variation factors comprise at least one of a process variation factor, a voltage variation factor, a temperature variation factor, and an aging variation factor. (Connor para 41 “the reliability rack scale architecture may use memory to store aggregate characteristics regarding workloads, voltage, and temperature for every discretized portion of a given component, allowing for autonomous analytics and warranty verification in addition to cumulative reliability lifetime calculation.”)
Idgunji teaches claims 3 and 13. The processor circuit of claim 1, wherein the neural network circuit comprises a plurality of neuron circuits; the neural network circuit is configured to determine a set of first weight values that is assigned to each neuron circuit according to the control signal and the N detection results, process the N detection results (Idgunji para 85-86 “Operation 302 monitors the current power used by the GPU….[0086] At operation 304, each of the parallel processors 116 may process received instructions to provide predictive power signals 142 for pending (soon-to-be-executed) instructions…” The current power is the detection results and the predictive power signals are the control signal. Idgunji para 90 “the deep learning block 312 can be performed over a number of such individual processing cycles and produce outputs that are used to more occasionally update the hardware-maintained voltage and/or frequency parameters.” Idgunji para 110 “The learning system 460 includes one or more deep neural networks…” Idgunji para 117 “If the neural network does not correctly label the input, then errors between the correct label and the predicted label are analyzed, and the weights are adjusted for each feature during a backward propagation phase until the DNN correctly labels the input and other inputs in a training dataset.” The inputs to the learning system are the current power and the predictive power signals, and the output is an update to the voltage. Therefore the DNN is assigning its weights according to the inputs due to the backpropagation taught in para 117.) according to a plurality of sets of first weight values those are assigned to the neuron circuits and accordingly generate a first processing result, (A plurality of sets of weights includes any teaching of two or more weights, because each weight can be treated as a set of one within a group of two weights. Therefore, any teaching of two or more weights teaches Applicant’s plurality of sets of weights.
Idgunji para 117 “If the neural network does not correctly label the input, then errors between the correct label and the predicted label are analyzed, and the weights are adjusted for each feature during a backward propagation phase until the DNN correctly labels the input and other inputs in a training dataset.”) and determine the operating voltage of the processor at least according to the first processing result. (Idgunji para 90 “the deep learning block 312 can be performed over a number of such individual processing cycles and produce outputs that are used to more occasionally update the hardware-maintained voltage and/or frequency parameters.”)
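For illustration only (not part of the record), the claim 3 mapping above, in which each neuron circuit is assigned its own set of first weight values and the N detection results are processed through the neurons to produce a first processing result, can be sketched as follows. All names and values are hypothetical.

```python
# Sketch of a plurality of sets of first weight values, one set per
# neuron circuit: each neuron computes a weighted sum of the N
# detection results, and the first processing result (here, the
# maximum neuron output) would be used to select an operating voltage.

def first_processing_result(weight_sets, detections):
    # One output per neuron circuit: dot product of its assigned
    # weight set with the N detection results.
    neuron_outputs = [sum(w * d for w, d in zip(ws, detections))
                      for ws in weight_sets]
    return max(neuron_outputs)  # winning neuron's output

weight_sets = [[0.2, 0.8], [0.6, 0.4]]  # plurality of sets of first weights
detections = [1.0, 0.5]                 # N = 2 detection results
```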
Idgunji teaches claims 9 and 19. The processor circuit of claim 3, wherein the neural network circuit is configured to (Idgunji para 117 “If the neural network does not correctly label the input, then errors between the correct label and the predicted label are analyzed, and the weights are adjusted for each feature during a backward propagation phase until the DNN correctly labels the input and other inputs in a training dataset.” The second set of weights is the updated weights during training. Because of the iterative nature of weights and training, the weights are updated several times.) Idgunji does not teach normalizing data.
However, Bobra teaches how to normalize the N detection results to N input signals having signal values within a predetermined range. (Bobra para 111 “receive a temperature measurement of the strip from the temperature sensor of the control module; normalize the voltage measurement according to the temperature measurement to generate a normalized voltage measurement…”)
Idgunji, Bobra and the claims all monitor voltage. It would have been obvious to a person having ordinary skill in the art, at the time of filing, to normalize the input data because different inputs have different units and scales; if they are input without normalization, certain features will have undue influence on the output merely because their values are larger, on average, than those of another, potentially more important, feature.
Idgunji teaches claims 10 and 20. The processor circuit of claim 3, wherein when the neural network circuit is configured to output the first processing result and accordingly use the voltage indicated by the first processing result as the operating voltage, the neural network circuit is further configured to store the sets of first weight values in a storage unit of the processor. (Idgunji para 117 “If the neural network does not correctly label the input, then errors between the correct label and the predicted label are analyzed, and the weights are adjusted for each feature during a backward propagation phase until the DNN correctly labels the input and other inputs in a training dataset.” This shows that during training, the DNN is not used to control voltage. However, when the weights are trained, the first set of weights are used to control voltage, see Idgunji para 90 “the deep learning block 312 can be performed over a number of such individual processing cycles and produce outputs that are used to more occasionally update the hardware-maintained voltage and/or frequency parameters.” The trained DNN will have saved the weights to memory so that it does not have to relearn weights every time the DNN operates on inputs.)
Notice of References cited
US 20210224643 A1 abstract teaches “the input module is configured to transmit an operating voltage according to the convolution result in the convolutional neural network…”
US 20200202215 A1 para 17 “A voltage controller 124 controls voltages provided to the CUs through suitable control signals 126 as known in the art. The neural network CU remapping logic 116 in one example issues voltage control information 128 to the voltage controller to change the operating voltage of one or more CUs as re-mapping of CUs will lead to extended life of the chip.”
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Austin Hicks whose telephone number is (571)270-3377. The examiner can normally be reached Monday - Thursday 8-4 PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Miranda Huang can be reached at (571) 270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AUSTIN HICKS/ Primary Examiner, Art Unit 2124