Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The referenced prior art includes the documents cited in the rejections below.
Claims 1-20 remain pending in the application and have been examined.
In response to this Office action, the Examiner respectfully requests that support be shown for language added to any amended original claims and for any new claims. That is, indicate support for newly added claim language by specifically pointing to the page(s) and line number(s) in the specification and/or the drawing figure(s). This will assist the Examiner in prosecuting this application.
The Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or disclosed by the Examiner.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 20230078569 A1 (WANG et al.).
With respect to claims 1, 11, and 20, WANG teaches a method executed by a data processing apparatus, the data processing apparatus comprising: a memory; and at least one processor communicatively coupled to the memory and configured to perform operations
(artificial neural network system comprising neural network inference engine and memory storage storing computer readable program code) [Fig. 10-11] comprising:
obtaining, at a rectified linear unit-activated neuron of a neural network, a set of input elements based on input data at the neuron
(the artificial neural network including a plurality of node layers comprising an input layer, one or more hidden layers, and an output layer implementing rectified linear unit (bm-ReLU) activation function) [Fig. 2; Par. 0046-0049];
generating a first group of input elements based on the set of input elements
(an input layer or node featuring a group of input elements) [Fig. 2; Par. 0046-0047], wherein the first group of input elements is associated with first weight elements, each first weight element having a first sign, each input element of the first group of input elements being associated with a respective first weight element
(each individual node or neuron may be viewed as implementing a linear model, which is composed of input data, weights) [Fig. 2; Par. 0046-0049];
generating a second group of input elements based on the set of input elements, wherein the second group of input elements is associated with second weight elements, each second weight element having a second sign different from the first sign
(weighted sum input viewed in succession across a number of nodes or artificial neurons and viewed as an input for nodes or artificial neurons in a succeeding layer, from a preceding layer of the artificial neural network having associated respective weights, each a product of an output from a respective one of the preceding-layer nodes or artificial neurons) [Par. 0046-0049; Par. 0053-0054],
each input element of the second group of input elements being associated with a respective second weight element
(activation function generating an output of the inverse of the weight scaling factor) [Fig. 5; Par. 0054];
generating, by a first accumulator, a first value based on the first group of input elements and the first weight elements
(CIM accelerator configured to receive input values combined with the weight associated with the input values that are output from the nodes or neurons of a preceding layer) [Fig. 2; Fig. 4; Fig. 5; Par. 0054-0055];
generating, by a second accumulator, a second value based on the second group of input elements and the second weight elements
(CIM accelerator configured to receive a next set of input values combined with the weight associated with the input values that are output from the nodes or neurons of a preceding layer) [Fig. 2; Fig. 4; Par. 0054-0055];
generating a third value based on a first operation on the first value and the second value
(CIM accelerator configured to receive an additional set of input values combined with the weight associated with the input values that are output from the nodes or neurons of a preceding layer) [Fig. 2; Fig. 4; Par. 0054-0055];
generating a fourth value based on a second operation on the first value and the second value; and generating an output of the neuron based on the third value and the fourth value
(generating an output that is proportional to an input value comprising the weight, of a product of the output of the bit activation function and the weight bit value at each weight bit position or neurons of a preceding layer, a summation value across a plurality of nodes or artificial neurons of a preceding layer of the artificial neural network having a plurality of weights associated therewith, respectively, of a product of an output from a respective one of the plurality of nodes or artificial neurons of the preceding layer) [Fig. 2; Fig. 4; Par. 0054-0057].
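For clarity of the record, the sequence of operations recited in claims 1, 11, and 20 may be sketched in plain (unencrypted) arithmetic as follows. The function and variable names are illustrative only and appear neither in the claims nor in WANG; the sketch simply restates the claimed steps.

```python
# Illustrative sketch of the claimed neuron computation (names are hypothetical).
# The inputs are split by the sign of their associated weights, accumulated
# separately, and recombined, which reproduces a ReLU activation.

def neuron_output(inputs, weights):
    # First group: input elements whose weights share a first (non-negative) sign.
    first_group = [(x, w) for x, w in zip(inputs, weights) if w >= 0]
    # Second group: input elements whose weights have the opposite (negative) sign.
    second_group = [(x, w) for x, w in zip(inputs, weights) if w < 0]

    # First accumulator: weighted sum of the first group.
    first_value = sum(x * w for x, w in first_group)
    # Second accumulator: weighted sum of the second group, each weight negated
    # so the accumulated value is non-negative (claims 6 and 15).
    second_value = sum(x * (-w) for x, w in second_group)

    # First operation: subtraction (claims 8 and 17).
    third_value = first_value - second_value
    # Second operation: comparison yielding an indicator (claims 9 and 18).
    fourth_value = 1 if first_value >= second_value else 0
    # Output: product of the third and fourth values (claims 10 and 19),
    # which equals max(0, sum(x * w)), i.e., ReLU of the weighted sum.
    return third_value * fourth_value
```

For example, with inputs (2, 3) and weights (1, -2), the first value is 2, the second value is 6, the third value is -4, the fourth value is 0, and the output is 0, matching ReLU(-4) = 0.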
With respect to claims 2 and 12, WANG teaches a method executed by a data processing apparatus, wherein the input data comprises homomorphically-encrypted input data (input values to identify independent variables used to make inferences or categorizations) [Par. 0044-0045].
With respect to claim 3, WANG teaches a method executed by a data processing apparatus, wherein the input data is an element of a finite extension field, and obtaining the set of input elements based on the input data comprises decomposing the input data into the set of input elements based on a set of linear maps on the finite extension field (bitwise modified rectified linear unit activation function comprising a bit activation function, which is configured to generate an output that is proportional to an input) [Fig. 3; Par. 0040].
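As context for the claim 3 limitation, an element of a finite extension field GF(p^k) can be represented as a vector of k coefficients over the base field GF(p), and the coordinate projections onto those coefficients are linear maps on the field. The following minimal sketch shows that decomposition; the parameter values and names are hypothetical and are not drawn from the claims or from WANG.

```python
# Illustrative sketch only: decomposing an extension-field element into a set
# of input elements via coordinate projections, which are linear maps on the
# field. The characteristic p and extension degree k are example values.

p, k = 7, 3  # base-field characteristic and extension degree (hypothetical)

def decompose(element_coeffs):
    # Apply the i-th coordinate projection (a linear map) to the element,
    # reducing each coefficient modulo the base-field characteristic.
    return [element_coeffs[i] % p for i in range(k)]

# An element of GF(7^3) written as coefficients of 1, t, t^2:
element = [5, 2, 6]
parts = decompose(element)  # the "set of input elements" for the neuron
```

Each entry of `parts` is the image of the element under one linear map, so the set of projections plays the role of the claimed set of linear maps.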
With respect to claims 4 and 13, WANG teaches a method executed by a data processing apparatus, wherein generating, by the first accumulator, the first value based on the first group of input elements and the first weight elements comprises generating a weighted sum of the first group of input elements, each input element of the first group of input elements being weighted by its respective first weight element (plurality of node layers comprising an input layer, one or more hidden layers, and an output layer, wherein a node or neuron connects to another having an associated weight, and wherein the input comprises a sum, across a second plurality of artificial neurons of a preceding layer of the artificial neural network having a plurality of weights associated therewith, the weight being a product of an output from a respective one of the second plurality of artificial neurons and one bit of a respective one of the plurality of weights) [Fig. 2; Par. 0046; Par. 0052-0054].
With respect to claims 5 and 14, WANG teaches a method executed by a data processing apparatus, wherein each of the first weight elements is non-negative (the weights and the inputs received from a preceding layer are non-negative) [Par. 0049; Par. 0017].
With respect to claims 6 and 15, WANG teaches a method executed by a data processing apparatus, wherein generating, by the second accumulator, the second value based on the second group of input elements and the second weight elements comprises generating a weighted sum of the second group of input elements, each input element of the second group of input elements being weighted by a negation of its respective second weight element (weighted sum being a summation across a number of nodes or artificial neurons in a preceding layer of the artificial neural network having associated respective weights of a product of an output from a respective one of the preceding-layer nodes or artificial neurons) [Par. 0049-0054].
With respect to claims 7 and 16, WANG teaches a method executed by a data processing apparatus, wherein each of the second weight elements is negative (training the artificial neural network using a bitwise modified rectified linear unit activation function for ones of the first plurality of artificial neurons, the bitwise modified rectified linear unit activation function comprising a bit activation function, which is configured to generate an output) [Par. 0049-0054].
With respect to claims 8 and 17, WANG teaches a method executed by a data processing apparatus, wherein generating the third value based on the first operation on the first value and the second value comprises subtracting the second value from the first value to obtain the third value (generating an output that is proportional to an input value comprising the weight, of a product of the output of the bit activation function and the weight bit value at each weight bit position or neurons of a preceding layer, which is a summation value across a plurality of nodes or artificial neurons of a preceding layer of the artificial neural network having a plurality of weights associated therewith) [Fig. 2; Fig. 4; Par. 0054-0057].
With respect to claims 9 and 18, WANG teaches a method executed by a data processing apparatus, wherein generating the fourth value based on the second operation on the first value and the second value comprises: equating the fourth value to one in response to the first value being greater than or equal to the second value; and equating the fourth value to zero in response to the first value being less than the second value (input comprises a sum, across a second plurality of artificial neurons of a preceding layer of the artificial neural network having a plurality of weights associated therewith, the weight being a product of an output from a respective one of the second plurality of artificial neurons and one bit of a respective one of the plurality of weights) [Fig. 2; Par. 0046-0054].
With respect to claims 10 and 19, WANG teaches a method executed by a data processing apparatus, wherein generating the output of the neuron based on the third value and the fourth value comprises: obtaining a product of the third value and the fourth value; and providing the product of the third value and the fourth value as the output of the neuron (summation value across a plurality of nodes or artificial neurons of two preceding layers of the artificial neural network having weights associated therewith, respectively, of a product of an output from a respective one of the plurality of artificial neurons of the preceding layer) [Fig. 2; Fig. 4; Par. 0054-0057].
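For completeness of the record, the product recited in claims 10 and 19, taken together with the subtraction of claims 8 and 17 and the comparison of claims 9 and 18, reduces to a standard ReLU of the difference of the two accumulator values. The short check below illustrates this identity over representative values; the variable names are illustrative only.

```python
# Check that (first - second) * [first >= second] equals max(0, first - second)
# for representative accumulator values, i.e., the claimed output is a ReLU.
for first_value, second_value in [(5, 2), (2, 5), (3, 3), (0, 7)]:
    third_value = first_value - second_value                 # claims 8 and 17
    fourth_value = 1 if first_value >= second_value else 0   # claims 9 and 18
    output = third_value * fourth_value                      # claims 10 and 19
    assert output == max(0, first_value - second_value)
```

The equality holds because the indicator zeroes the product exactly when the difference is negative and leaves it unchanged otherwise.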
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20190377976 A1 (MARKRAM et al.), teaching a method for identifying decision moments in a recurrent artificial neural network, including determining a complexity of patterns of activity in the recurrent artificial neural network, wherein the activity is responsive to input into the recurrent artificial neural network.
I. Tsmots, M. Medykovskyy and O. Skorokhoda, "Synthesis of hardware components for vertical-group parallel neural networks," 2015 Xth International Scientific and Technical Conference "Computer Sciences and Information Technologies" (CSIT), Lviv, Ukraine, 2015, pp. 1-4.
T. S. Sidhu, L. Mital and M. S. Sachdev, "Rule extraction from an artificial neural network based fault direction discriminator," 2000 Canadian Conference on Electrical and Computer Engineering. Conference Proceedings. Navigating to a New Era (Cat. No.00TH8492), Halifax, NS, Canada, 2000, pp. 692-696 vol.2.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PIERRE MICHEL BATAILLE whose telephone number is (571)272-4178. The examiner can normally be reached Monday - Thursday 7-6 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, TIM VO, can be reached at (571) 272-3642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PIERRE MICHEL BATAILLE/ Primary Examiner, Art Unit 2138