DETAILED ACTION
Claims 1-24 are pending. Claims 1-24 are considered in this Office action.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 3/18/2024 has been acknowledged. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner. The initialed and dated copy of Applicant’s IDS form 1449 is attached to the instant Office action.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO internet Web site contains terminal disclaimer forms which may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1 and 20 of the current application (hereinafter ‘749) are rejected on the ground of provisional nonstatutory double patenting as being unpatentable over claims 1 and 20 of copending U.S. Application No. 18/355,689 (hereinafter ‘689). Although the claims at issue are not identical, they are not patentably distinct from each other because:
Regarding Claims 1 and 20, claims 1 and 20 of the current application (‘749) recite steps substantially similar to those recited in claims 1 and 20, respectively, of ‘689.
Claims 1 and 20 of ‘749 recite:
an instruction buffer;
an input data register;
a parameter buffer configured to store first post-processing parameters for a particular neural network layer;
a weights register;
an intermediate output data register;
an output data register;
a configuration register configured to store a first indication of a particular output precision, a second indication of a particular weight precision, and second post-processing parameters;
a computing engine coupled to the intermediate output data register;
a post-processing engine coupled to the intermediate output data register, the post-processing engine configurable to perform different post-processing operations for a range of output precisions and a range of weight precisions; and
a controller configured to:
receive the first indication of the particular output precision, the second indication of the particular weight precision, and the second post-processing parameters from the configuration register;
receive the first post-processing parameters from the parameter buffer;
configure the post-processing engine based on the first and second indications and the first post-processing parameters and the second post-processing parameters;
receive a first instruction from the instruction buffer;
responsive to the first instruction: fetch input data elements and weight elements from, respectively, the input data register and the weights register to the computing engine;
perform, using the computing engine, multiplication and accumulation operations between the input data elements and the weight elements to generate intermediate data elements; and
store the intermediate data elements at the intermediate output data register;
receive a second instruction from the instruction buffer;
responsive to the second instruction:
fetch the intermediate data elements from the intermediate output data register to the post-processing engine;
perform, using the post-processing engine configured based on the first and second indications, the first post-processing parameters, and the second post-processing parameters, post-processing operations on the intermediate data elements to generate output data elements; and
store the output data elements at the output data register.
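For orientation only, the two-instruction dataflow recited above (a multiply-accumulate pass producing intermediate data elements, followed by a configurable post-processing pass governed by precision indications and parameters) can be sketched in illustrative code. This sketch is not part of the claims or the record; all names (mac_pass, post_process) and the specific rescaling/saturation scheme are hypothetical.

```python
def mac_pass(input_data, weights):
    """First instruction: multiply-and-accumulate the input data
    elements against the weight elements to produce one intermediate
    data element (hypothetical model of the computing engine)."""
    return sum(x * w for x, w in zip(input_data, weights))

def post_process(intermediate, output_precision, scale, shift):
    """Second instruction: apply post-processing parameters (here a
    hypothetical scale and shift) and saturate the result to the
    indicated signed output precision, in bits."""
    value = (intermediate * scale) >> shift          # hypothetical rescaling
    limit = (1 << (output_precision - 1)) - 1        # signed saturation bound
    return max(-limit - 1, min(limit, value))

# One element flowing through both stages:
intermediate = mac_pass([1, 2, 3], [4, 5, 6])        # 1*4 + 2*5 + 3*6 = 32
output = post_process(intermediate, output_precision=8, scale=2, shift=1)
print(intermediate, output)                          # 32 32
```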
Whereas claims 1 and 20 of ‘689 state:
a memory interface;
an instruction buffer;
an input data address register;
a weights buffer;
an input data register;
a weights register;
an output data register;
a computing engine; and
a controller configurable to:
receive a first instruction from the instruction buffer, the first instruction referring to the input data address register, and including sub-instructions for fetching input data elements and fetching weight elements;
responsive to the first instruction and based on content of the input data address register, access the memory interface directly to fetch the input data elements via the memory interface to the input data register, and fetch the weight elements from the weights buffer to the weights register;
receive a second instruction from the instruction buffer; and
responsive to the second instruction:
fetch the input data elements and the weight elements from, respectively, the input data register and the weights register to the computing engine;
perform, using the computing engine, computation operations between the input data elements and the weight elements to generate output data elements; and
store the output data elements at the output data register.
These are obvious variants of each other as both recite substantially the same limitations. Further, elimination of an element or its functions is deemed to be obvious in light of prior art teachings of at least the recited element or its functions (see In re Karlson, 136 USPQ 184, 186; 311 F2d 581 (CCPA 1963)), thereby rendering the elimination of any elements recited in the claims of the related patent (that are not recited in the instant claims) obvious.
Thus, Claims 1 and 20 of the current application are obvious variants of claims 1 and 20 in ‘689.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1 and 20 recite the following limitations:
to store a first indication of a particular output precision, a second indication of a particular weight precision, and second post-processing parameters (Collecting and Storing Information, an Observation, a Mental Process; Managing Human Behavior, i.e., observing surroundings, a Certain Method of Organizing Human Activity);
receive the first indication of the particular output precision, the second indication of the particular weight precision, and the second post-processing parameters from the configuration register (Collecting Information, an Observation, a Mental Process; Managing Human Behavior, i.e., observing surroundings, a Certain Method of Organizing Human Activity);
receive the first post-processing parameters from the parameter buffer (Collecting Information, an Observation, a Mental Process; Managing Human Behavior, i.e., observing surroundings, a Certain Method of Organizing Human Activity);
configure the post-processing engine based on the first and second indications, the first post-processing parameters, and the second post-processing parameters (Analyzing the Information, an Evaluation, a Mental Process; Managing Human Behavior, i.e., observing surroundings, a Certain Method of Organizing Human Activity);
receive a first instruction from the instruction buffer (Collecting Information, an Observation, a Mental Process; Managing Human Behavior, i.e., observing surroundings, a Certain Method of Organizing Human Activity);
responsive to the first instruction: fetch input data elements and weight elements from, respectively, the input data register and the weights register to the computing engine (Collecting Information, an Observation, a Mental Process; Managing Human Behavior, i.e., observing surroundings, a Certain Method of Organizing Human Activity);
perform, using the computing engine, multiplication and accumulation operations between the input data elements and the weight elements to generate intermediate data elements (Analyzing the Information, an Evaluation, a Mental Process; Managing Human Behavior, i.e., observing surroundings, a Certain Method of Organizing Human Activity);
store the intermediate data elements at the intermediate output data register (Transmitting and Storing the Information, a Judgment, a Mental Process; Managing Human Behavior, i.e., observing surroundings, a Certain Method of Organizing Human Activity);
receive a second instruction from the instruction buffer (Collecting Information, an Observation, a Mental Process; Managing Human Behavior, i.e., observing surroundings, a Certain Method of Organizing Human Activity);
responsive to the second instruction: fetch the intermediate data elements from the intermediate output data register to the post-processing engine (Analyzing the Information, an Evaluation, a Mental Process; Managing Human Behavior, i.e., observing surroundings, a Certain Method of Organizing Human Activity);
perform, based on the first and second indications, the first post-processing parameters, and the second post-processing parameters, post-processing operations on the intermediate data elements to generate output data elements (Analyzing the Information, an Evaluation, a Mental Process; Managing Human Behavior, i.e., observing surroundings, a Certain Method of Organizing Human Activity); and
store the output data elements at the output data register (Transmitting and Storing the Information, a Judgment, a Mental Process; Managing Human Behavior, i.e., observing surroundings, a Certain Method of Organizing Human Activity).
Under their broadest reasonable interpretation, these limitations cover performance in the mind but for the recitation of generic computer components. That is, other than reciting an instruction buffer; an input data register; a parameter buffer configured to store first post-processing parameters for a particular neural network layer; a weights register; an intermediate output data register; an output data register; a configuration register; a computing engine coupled to the intermediate output data register; a post-processing engine coupled to the intermediate output data register, the post-processing engine configurable to perform different post-processing operations for a range of output precisions and a range of weight precisions; and a controller, nothing in the claims precludes the steps from practically being performed in the mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas (an observation, evaluation, and judgment). Further, as described above, the claims recite limitations for Managing Human Behavior, a “Certain Method of Organizing Human Activity.” Accordingly, the claims recite an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements stated above to perform the abstract limitations. The buffers, registers, engines, controller, and processor are recited at a high level of generality (i.e., as generic software/modules performing the generic computer functions of storing, retrieving, sending, and processing data), such that they amount to no more than mere instructions to apply the exception using generic computer components. Even if taken as additional elements, the collecting, storing, and transmitting steps above are at best insignificant extra-solution activity, as they merely receive, store, and transmit data per MPEP 2106.05(d). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, when considered both individually and as an ordered combination. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements used to perform the abstract limitations stated above amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claims are not patent eligible. Applicant’s Specification states:
“[0030] FIG. 1 is a schematic diagram illustrating a system 100. System 100 can include multiple electronic devices 102, including electronic devices 102a, 102b, and 102c, and a cloud network 103. Each electronic device 102 can include a sensor 104 and a data processor 106. For example, electronic device 102a includes sensor 104a and data processor 106a, electronic device 102b includes sensor 104b and data processor 106b, and electronic device 102c includes sensor 104c and data processor 106c.”
This shows that any generic processor with a sensor and data processor, such as a laptop, phone, or desktop, can be used to perform the abstract limitations. From this interpretation, one would reasonably deduce that the aforementioned steps are all functions that can be performed on generic components, and thus that the claims merely apply an abstract idea on a generic computer per the Alice decision, not requiring further analysis under Berkheimer; but for edification, Applicant’s specification has been used as above, satisfying any such requirement. This is “Applying It” by utilizing current technologies. The collecting, storing, and transmitting steps that were considered extra-solution activity in Step 2A above, if considered additional elements, have been re-evaluated in Step 2B and determined to be well-understood, routine, and conventional activity in the field. The background does not provide any indication that the additional elements, such as the computer system, medium, or product, or the collecting, storing, and transmitting steps above, are anything other than generic, and MPEP 2106.05(d) indicates that mere collection or receipt, storing, or transmission of data is a well-understood, routine, and conventional function when it is claimed in a merely generic manner (as it is here). For these reasons, there is no inventive concept. The claims are not patent eligible.
Claims 2-19 and 21-24 contain and further narrow the identified abstract ideas, with the additional elements of adders and controllers, which are highly generic. Considered under Step 2A, Prong Two, these elements do not integrate the abstract ideas into a practical application, nor do they amount to significantly more, for the same reasons and rationale as above.
After considering all claim elements, both individually and in combination, the Examiner has determined that the claims are directed to the above abstract ideas and do not amount to significantly more. Therefore, the claims and their dependent claims are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter. See Alice Corp. Pty. Ltd. v. CLS Bank International, 573 U.S. 208 (2014).
Allowable Subject Matter
Claims 1-24 have overcome the prior art and would be allowable if amended to overcome the 35 U.S.C. 101 rejection.
The closest prior art of record is Da Costa (U.S. Publication No. 2023/0185880), Li (U.S. Publication No. 2023/0168894), Raha (U.S. Publication No. 2022/0292366), and Rangachar (U.S. Publication No. 2022/0319162). Da Costa, a system and method for data processing in a machine learning computer, teaches multiple registers being used in the processes, both pre- and post-processing of multimedia data, use of weights and parameters in machine learning, use of a deep neural network with a weighting matrix, use of pruning techniques with a neural network, and precision of weights in deep neural networks. However, Da Costa does not explicitly teach performing, using post-processing based on first and second indications, the first post-processing parameters, and the second post-processing parameters, post-processing operations on the intermediate data elements to generate output data elements, nor does it teach performing multiplication and accumulation operations between the input data elements and the weight elements to generate intermediate data elements. Li, a system and method enabling one-hot neural networks on a machine learning compute platform, also teaches multiple registers being used in the processes, albeit in a different manner, both pre- and post-processing of multimedia data, and precision of weights in deep neural networks, but Li likewise does not explicitly teach those limitations.
Raha, a method and apparatus to perform low-overhead sparsity acceleration logic for multi-precision dataflow in deep neural network accelerators, teaches a register structured to store four 2-byte precision activations and the corresponding 2-byte precision weights. When the FIFO is full, the FIFO outputs the four 2-byte precision activations and the four 2-byte precision weights to the MAC PE to perform an 8-byte operation; the queue may include a 4-byte-based FIFO structured to store two 4-byte precision activations and corresponding weights, a single 8-byte-based FIFO structured to store one 8-byte precision activation and corresponding weight, etc. In this manner, the MAC PE can perform a particular precision operation (e.g., an 8-byte operation) on activations and corresponding weights of any type of precision. However, Raha does not teach the explicit manner in which the post-processing is performed as above, and neither does Rangachar, a Bayesian compute unit with reconfigurable sampler methods and apparatus, which teaches use of registers, parameterization, and weights in a neural network with use in AI. None of the above prior art explicitly teaches performing, using post-processing based on first and second indications, the first post-processing parameters, and the second post-processing parameters, post-processing operations on the intermediate data elements to generate output data elements, nor does it teach performing multiplication and accumulation operations between the input data elements and the weight elements to generate intermediate data elements, along with the other limitations of the claims. These are the reasons which adequately reflect the Examiner's opinion as to why Claims 1-24 are allowable over the prior art of record.
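For orientation only, the multi-precision queue arrangement attributed to Raha above (a fixed-width FIFO that accumulates operand pairs until 8 bytes of activations are queued, then releases them as one MAC operation) can be illustrated with a short sketch. The class and method names are hypothetical and the sketch is not drawn from any reference of record.

```python
from collections import deque

class PrecisionFIFO:
    """Hypothetical model of the described multi-precision queue:
    2-byte elements queue four pairs per operation, 4-byte elements
    two pairs, and 8-byte elements one pair."""

    def __init__(self, element_bytes):
        self.element_bytes = element_bytes           # 2, 4, or 8 bytes per element
        self.capacity = 8 // element_bytes           # pairs per 8-byte operation
        self.queue = deque()

    def push(self, activation, weight):
        """Queue one activation/weight pair; once the FIFO holds 8 bytes
        of activations, release the whole batch for a single MAC op."""
        self.queue.append((activation, weight))
        if len(self.queue) == self.capacity:
            batch = list(self.queue)
            self.queue.clear()
            return batch                             # handed to the MAC PE
        return None

fifo = PrecisionFIFO(element_bytes=2)                # four 2-byte pairs per op
for pair in [(1, 10), (2, 20), (3, 30)]:
    assert fifo.push(*pair) is None                  # still filling
batch = fifo.push(4, 40)                             # fourth pair fills 8 bytes
print(batch)                                         # [(1, 10), (2, 20), (3, 30), (4, 40)]
```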
Conclusion
The prior art made of record is considered pertinent to applicant's disclosure.
US 20230325656 A1
Li; Rundong et al.
ADJUSTING PRECISION OF NEURAL NETWORK WEIGHT PARAMETERS
US 20230185880 A1
Da Costa; Godfrey et al.
Data Processing in a Machine Learning Computer
US 20230168894 A1
Li; Jianguo et al.
SYSTEM AND METHOD ENABLING ONE-HOT NEURAL NETWORKS ON A MACHINE LEARNING COMPUTE PLATFORM
US 20220319162 A1
Rangachar Srinivasa; Srivatsa et al.
BAYESIAN COMPUTE UNIT WITH RECONFIGURABLE SAMPLER AND METHODS AND APPARATUS TO OPERATE THE SAME
US 20220292366 A1
Raha; Arnab et al.
METHODS AND APPARATUS TO PERFORM LOW OVERHEAD SPARSITY ACCELERATION LOGIC FOR MULTI-PRECISION DATAFLOW IN DEEP NEURAL NETWORK ACCELERATORS
US 20220284294 A1
Keller; Alexander et al.
ARTIFICIAL NEURAL NETWORKS GENERATED BY LOW DISCREPANCY SEQUENCES
US 20220261650 A1
Zhao; Jiawei et al.
MACHINE LEARNING TRAINING IN LOGARITHMIC NUMBER SYSTEM
US 20220138586 A1
KIM; Lok Won
MEMORY SYSTEM OF AN ARTIFICIAL NEURAL NETWORK BASED ON A DATA LOCALITY OF AN ARTIFICIAL NEURAL NETWORK
US 20220103186 A1
Kaminitz; Guy et al.
Weights Safety Mechanism In An Artificial Neural Network Processor
US 20210397974 A1
Hoang; Tung Thanh et al.
MULTI-PRECISION DIGITAL COMPUTE-IN-MEMORY DEEP NEURAL NETWORK ENGINE FOR FLEXIBLE AND ENERGY EFFICIENT INFERENCING
US 20210174214 A1
Venkatesan; Vaidehi et al.
SYSTEMS AND METHODS FOR QUANTIZING A NEURAL NETWORK
US 20210133583 A1
Chetlur; Sharan et al.
DISTRIBUTED WEIGHT UPDATE FOR BACKPROPAGATION OF A NEURAL NETWORK
US 20200380357 A1
YAO; ANBANG et al.
INCREMENTAL NETWORK QUANTIZATION
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH M WAESCO, whose telephone number is (571) 272-9913. The examiner can normally be reached Monday through Friday, 8 AM to 5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, BETH BOSWELL can be reached on (571) 272-6737. The fax phone number for the organization where this application or proceeding is assigned is 571-273-1348.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSEPH M WAESCO/Primary Examiner, Art Unit 3625B 2/28/2026