CTNF 18/226,658 DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the original application filed on 7/26/2023. Acknowledgment is made with respect to a claim of priority to Provisional Application 63/392,432, filed on 7/26/2022.

Claim Objections

Claims 1-20 are objected to because of the following informalities:

Claim 1 recites the limitation “wherein each spike generated by the plurality of liquid layer spiking neurons for the duration of time that is generated is indicative of the temporal signal for the duration of time” (emphasis added). The redundant phrasing in this limitation creates grammatical ambiguity. It is unclear whether (1) the spike itself has a duration or (2) the spike is counted during the duration. Please explain. For examination purposes, this limitation will be interpreted to mean “wherein each spike generated by the plurality of liquid layer spiking neurons for the duration of time [[that is generated]] is indicative of the temporal signal for the duration of time” (emphasis added). Independent claims 8 and 15 contain the same grammatically ambiguous limitation as claim 1 and are objected to for the same reasons as claim 1. Dependent claims 2-7, 9-14, and 16-20 depend on objected claims 1, 8, and 15 and are also objected to by virtue of this dependency. Appropriate correction is required.

Claim 10 recites the limitation “propagate the plurality of input layer spiking neuron voltages as a corresponding negative inverted value of each input layer spiking neuron voltages” (emphasis added), which should read “propagate the plurality of input layer spiking neuron voltages as a corresponding negative inverted value of each input layer spiking neuron voltage[[s]]” (emphasis added) for consistent pluralization.
Dependent claims 11-14 depend on objected claim 10 and are also objected to by virtue of this dependency. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 recites the limitation “wherein the encoding signal applied to each input layer spiking neuron that is increased above the input spiking threshold” (emphasis added). There is insufficient antecedent basis for the claimed “the input spiking threshold”. For examination purposes, the limitation will be interpreted to mean “wherein the encoding signal applied to each input layer spiking neuron that is increased above the input layer spiking voltage threshold” (emphasis added) to maintain consistent terminology throughout the claim. Dependent claims 2-7 depend on indefinite claim 1, and are also rejected under 35 USC § 112(b) by virtue of this dependency. Appropriate correction is required.

Claim 2 recites the limitation “The analog neuromorphic circuit of claim 2” (emphasis added). Claim 2 depends on itself.
For examination purposes, the limitation will be interpreted to mean “The analog neuromorphic circuit of claim [[2]] 1” (emphasis added).

Claim 2 further recites the limitation “wherein each weight of each input layer resistive memory is adjusted to generate the plurality of encoding signals that is indicative of the temporal pattern for the duration of time” (emphasis added). There is insufficient antecedent basis for the claimed “each weight” and “the temporal pattern”. For examination purposes, this limitation will be interpreted to mean “wherein [[each]] a weight of each input layer resistive memory is adjusted to generate the plurality of encoding signals that is indicative of [[the]] a temporal pattern for the duration of time” (emphasis added).

Claim 2 further recites the limitation “when the corresponding encoding signal applied to each corresponding input layer spiking neuron is increased above the input layer spiking threshold” (emphasis added). There is insufficient antecedent basis for the claimed “the input layer spiking threshold”. For examination purposes, this limitation will be interpreted to mean “when the corresponding encoding signal applied to each corresponding input layer spiking neuron is increased above the input layer spiking voltage threshold” (emphasis added). Appropriate correction is required.

Claim 3 recites the limitation “to generate the plurality of liquid layer signals that is indicative of the temporal pattern for the duration of time” (emphasis added). There is insufficient antecedent basis for the claimed “the temporal pattern”. For examination purposes, this limitation will be interpreted to mean “to generate the plurality of liquid layer signals that is indicative of [[the]] a temporal pattern for the duration of time” (emphasis added). Dependent claims 4-7 depend on indefinite claim 3, and are also rejected under 35 USC § 112(b) by virtue of this dependency. Appropriate correction is required.
Claim 6 recites the limitation “thereby generating the plurality of output voltages that is indicative of the temporal pattern for the duration of time, wherein each output voltage is generated at each output of each column included in the output resistive memory crossbar configuration” (emphasis added). There is insufficient antecedent basis for the claimed “the temporal pattern” and “each column”. For examination purposes, this limitation will be interpreted to mean “thereby generating the plurality of output voltages that is indicative of [[the]] a temporal pattern for the duration of time, wherein each output voltage is generated at each output of [[each]] a respective column included in the output resistive memory crossbar configuration” (emphasis added). Appropriate correction is required.

Claim 8 recites the limitation “wherein the encoding signal applied to each input layer spiking neuron that is increased above the input spiking threshold” (emphasis added). There is insufficient antecedent basis for the claimed “the input spiking threshold”. For examination purposes, the limitation will be interpreted to mean “wherein the encoding signal applied to each input layer spiking neuron that is increased above the input layer spiking voltage threshold” (emphasis added) to maintain consistent terminology throughout the claim. Dependent claims 9-14 depend on indefinite claim 8, and are also rejected under 35 USC § 112(b) by virtue of this dependency. Appropriate correction is required.

Claim 9 recites the limitation “wherein each weight of each input layer resistive memory is adjusted to generate the plurality of encoding signals that is indicative of the temporal pattern for the duration of time” (emphasis added). There is insufficient antecedent basis for the claimed “each weight” and “the temporal pattern”.
For examination purposes, this limitation will be interpreted to mean “wherein [[each]] a weight of each input layer resistive memory is adjusted to generate the plurality of encoding signals that is indicative of [[the]] a temporal pattern for the duration of time” (emphasis added). Appropriate correction is required.

Claim 10 recites the limitation “wherein each weight of the liquid layer resistive memory crossbar configuration is adjusted to generate the plurality of liquid layer signals that is indicative of the temporal pattern for the duration of time” (emphasis added). There is insufficient antecedent basis for the claimed “each weight” and “the temporal pattern”. For examination purposes, this limitation will be interpreted to mean “wherein [[each]] a weight of the liquid layer resistive memory crossbar configuration is adjusted to generate the plurality of liquid layer signals that is indicative of [[the]] a temporal pattern for the duration of time” (emphasis added). Dependent claims 11-14 depend on indefinite claim 10, and are also rejected under 35 USC § 112(b) by virtue of this dependency. Appropriate correction is required.

Claim 12 recites the limitation “thereby providing the count of spikes by each liquid layer spiking neuron during the duration of them via the binary signals generated by each corresponding spike counter” (emphasis added). The claimed “them” lacks an antecedent basis and appears to be a typographical error for “time”. For examination purposes, this limitation will be interpreted to mean “thereby providing the count of spikes by each liquid layer spiking neuron during the duration of [[them]] time via the binary signals generated by each corresponding spike counter” (emphasis added). Dependent claims 13-14 depend on indefinite claim 12, and are also rejected under 35 USC § 112(b) by virtue of this dependency. Appropriate correction is required.
Claim 15 recites the limitation “wherein the encoding signal is applied to each input layer spiking neuron that is increased above the input spiking neuron threshold is indicative of a temporal signal for a duration of time” (emphasis added). There is insufficient antecedent basis for the claimed “the input spiking neuron threshold”. Further, the phrase is grammatically unclear because of the double “is” construction. It is unclear whether the signal is applied and then increased, or the increase occurs before application. For examination purposes, this limitation will be interpreted to mean “wherein the encoding signal [[is]] applied to each input layer spiking neuron that is increased above the input layer spiking [[neuron]] voltage threshold is indicative of a temporal signal for a duration of time” (emphasis added).

Claim 15 further recites the limitation “wherein the identified temporal signal converted from the plurality of output voltages generated from the output resistive memory crossbar configuration is an attempt to identify the temporal signal for the duration of time” (emphasis added). It is not clear what constitutes an “attempt to identify” a signal. This limitation does not define what qualifies as an “attempt” or what happens when the attempt fails. For examination purposes, this limitation will be interpreted to mean “wherein the identified temporal signal converted from the plurality of output voltages generated from the output resistive memory crossbar configuration [[is an attempt to identify]] identifies the temporal signal for the duration of time” (emphasis added). Dependent claims 16-20 depend on indefinite claim 15, and are also rejected under 35 USC § 112(b) by virtue of this dependency. Appropriate correction is required.
Claim 16 recites the limitation “compress each output voltage generated at each output of each column included in the output resistive memory crossbar configuration from the propagation of the counting voltages through the resistive memory crossbar configuration” (emphasis added). It is not clear which resistive memory crossbar configuration this limitation refers to. Is it the output resistive memory crossbar configuration, the input layer resistive memory crossbar configuration, or something else? Please explain. For examination purposes, this limitation will be interpreted to mean “compress each output voltage generated at each output of each column included in the output resistive memory crossbar configuration from the propagation of the counting voltages through the output resistive memory crossbar configuration” (emphasis added). Dependent claims 17-20 depend on indefinite claim 16, and are also rejected under 35 USC § 112(b) by virtue of this dependency. Appropriate correction is required.

Claim 18 recites the limitation “update each weight associated with each output layer resistive memory included in the output resistive memory crossbar configuration” (emphasis added). There is insufficient antecedent basis for the claimed “each weight”. For examination purposes, this limitation will be interpreted to mean “update [[each]] a weight associated with each output layer resistive memory included in the output resistive memory crossbar configuration” (emphasis added). Dependent claims 19-20 depend on indefinite claim 18, and are also rejected under 35 USC § 112(b) by virtue of this dependency. Appropriate correction is required.
Claim 19 recites the limitation “update each weight associated with each output layer resistive memory included in the output resistive memory crossbar configuration thereby reducing the error between the identified temporal signal and the correct identification of the temporal signal for the duration of time for each iteration of input voltages applied to the input layer thereby propagating through the input layer and liquid layer” (emphasis added). There is insufficient antecedent basis for the claimed “each weight” and “each iteration”. For examination purposes, this limitation will be interpreted to mean “update [[each]] a weight associated with each output layer resistive memory included in the output resistive memory crossbar configuration thereby reducing the error between the identified temporal signal and the correct identification of the temporal signal for the duration of time for [[each]] an iteration of input voltages applied to the input layer thereby propagating through the input layer and liquid layer” (emphasis added). Dependent claim 20 depends on indefinite claim 19, and is also rejected under 35 USC § 112(b) by virtue of this dependency. Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Henderson et al. (Henderson et al., “Memristor Based Circuit Design for Liquid State Machine Verified with Temporal Classification”, Jul. 23, 2022, 2022 International Joint Conference on Neural Networks (IJCNN), pp. 1-9, hereinafter “Henderson”).

Regarding claim 1, Henderson discloses [a]n analog neuromorphic circuit that implements a plurality of resistive memories, comprising: (Abstract; “In this paper we combine these ideas and present a memristor crossbar based implementation of a liquid state machine based on spiking neurons … The proposed circuit implementing this LSM is designed using SPICE to ensure accuracy at the device level, which aids in detailed analysis and circuit optimizations.
Liquid layer activity is converted to state vectors using custom hardware that does not bottleneck throughput” ) an input layer configured to encode a plurality of input layer spiking neurons based on a plurality of encoding signals generated from a plurality of input voltages applied to the input layer via an input layer resistive memory crossbar configuration thereby encoding each input layer spiking neuron to spike when the corresponding encoding signal applied to each input layer spiking neuron is increased above an input layer spiking voltage threshold, wherein the encoding signal applied to each input layer spiking neuron that is increased above the input spiking threshold is indicative of a temporal signal for a duration of time; (Figures 2 and 4-12; and §V.A, Input Layer) a liquid layer configured to count each spike generated by a plurality of liquid layer spiking neurons for the duration of time based on a plurality of liquid layer signals generated from a plurality of input layer spiking neuron voltages generated from each input layer spiking neuron based on the plurality of input layer spiking neuron voltages applied to the liquid layer via a liquid layer resistive memory crossbar configuration thereby triggering each liquid layer spiking neuron to spike when the corresponding liquid layer signal is increased above a liquid layer spiking voltage threshold, wherein each spike generated by the plurality of liquid layer spiking neurons for the duration of time that is generated is indicative of the temporal signal for the duration of time; and (Figures 2 and 4-12; and §V.B&C, Liquid Layer and State Vector Collection) an output layer configured to identify the temporal signal for the duration of time based on a plurality of output voltages generated from a plurality of counting voltages generated from the count of each spike generated by the plurality of liquid layer spiking neurons for the duration of time applied to the output layer via an output 
resistive memory crossbar configuration, wherein the plurality of output voltages generated from the output resistive memory crossbar configuration is indicative of the temporal signal for the duration of time (Figures 2 and 4-12; and §V.C&D, State Vector Collection and Output Layer).

Regarding claim 8, it is a method claim corresponding to the steps of claim 1, and is rejected for the same reasons as claim 1.

Regarding claims 2 and 9, the rejections of claims 1 and 8 (per the 35 USC § 112(b) interpretations above) are incorporated and Henderson further discloses wherein the input layer is further configured to: propagate the plurality of input voltages applied to the input layer resistive memory crossbar configuration through a plurality of input resistive memories positioned in the input layer resistive memory crossbar configuration thereby generating the plurality of encoding signals, wherein each weight of each input layer resistive memory is adjusted to generate the plurality of encoding signals that is indicative of the temporal pattern for the duration of time; and classify each encoding signal based on the weights of each input layer resistive memory as applied to each corresponding input layer spiking neuron as the plurality of input voltages propagate through the plurality of input resistive memories thereby triggering each corresponding input layer spiking neuron to spike when the corresponding encoding signal applied to each corresponding input layer spiking neuron is increased above the input layer spiking threshold, wherein a combination of input layer spiking neurons that spike generates a plurality of input layer spiking neuron voltages that is indicative of the temporal signal for the duration of time (Figures 2 and 4-12; and §V.A, Input Layer).
Regarding claims 3 and 10, the rejections of claims 1 and 8 are incorporated and Henderson further discloses wherein the liquid layer is further configured to: propagate the plurality of input layer spiking neuron voltages generated from the plurality of input layer spiking neurons and propagate the plurality of input layer spiking neuron voltages as a corresponding negative inverted value of each input layer spiking neuron voltage as applied to the liquid layer resistive memory crossbar configuration through a plurality of liquid layer resistive memories positioned in the liquid layer resistive memory crossbar configuration thereby generating the plurality of liquid layer signals, wherein each weight of the liquid layer resistive memory crossbar configuration is adjusted to generate the plurality of liquid layer signals that is indicative of the temporal pattern for the duration of time; and classify each liquid layer signal based on the weights of each liquid layer resistive memory as applied to each corresponding liquid layer spiking neuron as the plurality of input layer spiking neuron voltages and each corresponding negative inverted value of each input layer spiking neuron propagate through the plurality of liquid layer resistive memories thereby triggering each corresponding liquid layer spiking neuron to spike when the corresponding liquid layer signal applied to each corresponding liquid layer spiking neuron is increased above the liquid layer spiking voltage threshold, wherein a count of liquid layer spiking neurons that spike for the duration of time generates the plurality of counting voltages that is indicative of the temporal signal for the duration of time (Figures 2 and 4-12; and §V.B, Liquid Layer).
Regarding claims 4 and 11, the rejections of claims 1, 3, 8, and 10 are incorporated and Henderson further discloses wherein the liquid layer is further configured to: generate the count of liquid layer spiking neurons that spike for the duration of time when a plurality of spike counters associated with each corresponding liquid layer spiking neuron detects a liquid layer spiking neuron voltage from each corresponding liquid layer spiking neuron that spikes for the duration of time thereby triggering each spike counter to generate each corresponding counting voltage that is indicative of each time each liquid layer spiking neuron spiked, wherein the counting voltages generated by each spike counter is indicative of the temporal signal for the duration of time (Figures 2 and 4-12; and §V.B, Liquid Layer).

Regarding claims 5 and 12, the rejections of claims 1, 3, 4, 8, 10, and 11 are incorporated and Henderson further discloses wherein the liquid layer is further configured to: output the counting voltages generated by each corresponding spike counter as a corresponding binary signal generated by each corresponding spike counter that is indicative of each time each liquid layer spiking neuron that is associated with each corresponding spike counter spiked, wherein each counting voltage associated with each spike counter is a binary bit that is converted into the corresponding binary signal generated by each spike counter thereby providing the count of spikes by each liquid layer spiking neuron during the duration of time via the binary signals generated by each corresponding spike counter (Figures 2 and 4-12; and §V.B&C, Liquid Layer and State Vector Collection).
Regarding claims 6 and 13, the rejections of claims 1, 3, 4, 5, 8, 10, 11, and 12 are incorporated and Henderson further discloses wherein the output layer is further configured to: propagate the plurality of counting voltages generated from the spike counters that provides the count of each time each liquid layer spiking neuron spiked during the duration of time and propagate the plurality of counting voltages as a corresponding negative inverted value of each counting voltage as applied to the output resistive memory crossbar configuration through a plurality of output layer resistive memories positioned in the output resistive memory crossbar configuration thereby generating the plurality of output voltages that is indicative of the temporal pattern for the duration of time, wherein each output voltage is generated at each output of each column included in the output resistive memory crossbar configuration (Figures 2 and 4-12; and §V.B&C&D, Liquid Layer and State Vector Collection and Output Layer).

Regarding claims 7 and 14, the rejections of claims 1, 3, 4, 5, 6, 8, 10, 11, 12, and 13 are incorporated and Henderson further discloses wherein the output layer is further configured to: compress each output voltage generated at each output of each column included in the output resistive memory crossbar configuration from the propagation of the counting voltages through the resistive memory crossbar configuration to a compressed output signal, wherein each compressed output signal is a binary voltage value that represents the output voltage; and identify the temporal signal for the duration of time based on a combination of each compressed output signal, wherein the combination of compressed output signals is indicative of the temporal signal for the duration of time (Figures 2 and 4-12; and §V.B&C&D, Liquid Layer and State Vector Collection and Output Layer).
Regarding claim 15 , Henderson discloses [a]n analog neuromorphic circuit that implements a plurality of resistive memories, comprising: (Abstract; “In this paper we combine these ideas and present a memristor crossbar based implementation of a liquid state machine based on spiking neurons … The proposed circuit implementing this LSM is designed using SPICE to ensure accuracy at the device level, which aids in detailed analysis and circuit optimizations. Liquid layer activity is converted to state vectors using custom hardware that does not bottleneck throughput” ) an input layer configured to encode a plurality of input layer spiking neurons based on a plurality of encoding signals generated from a plurality of input voltages applied to the input layer via an input layer resistive memory crossbar configuration thereby encoding each input layer spiking neuron to spike when the corresponding encoding signal is applied to each input layer spiking neuron is increased above an input layer spiking voltage threshold, wherein the encoding signal is applied to each input layer spiking neuron that is increased above the input spiking neuron threshold is indicative of a temporal signal for a duration of time; (Figures 2 and 4-12; and §V.A, Input Layer) a liquid layer configured to count each spike generated by a plurality of liquid layer spiking neurons for the duration of time based on a plurality of liquid layer signals generated from a plurality of input layer spiking neuron voltages generated from each input layer spiking neuron based on the plurality of input layer spiking neuron voltages applied to the liquid layer via a liquid layer resistive memory crossbar configuration thereby triggering each liquid layer spiking neuron to spike when the corresponding liquid layer signal is increased above a liquid layer spiking voltage threshold, wherein each spike generated by the plurality of liquid layer spiking neurons for the duration of time that is generated is indicative 
of the temporal signal for the duration of time (Figures 2 and 4-12; and §V.B&C, Liquid Layer and State Vector Collection) an output layer configured to generate an identified temporal signal converted from a plurality of output voltages generated from a plurality of counting voltages generated from the count of each spike generated by the plurality of liquid layer spiking neurons for the duration of time applied to the output layer via an output resistive memory crossbar configuration, wherein the identified temporal signal converted from the plurality of output voltages generated from the output resistive memory crossbar configuration is an attempt to identify the temporal signal for the duration of time; and (Figures 2 and 4-12; and §V.C&D, State Vector Collection and Output Layer). a training layer configured to train the output resistive memory crossbar configuration based on a difference in the identified temporal signal converted from the output voltages of the output resistive memory crossbar configuration as compared to the temporal signal for the duration of time, wherein the output resistive memory crossbar configuration is trained to reduce the difference between the identified temporal signal and the temporal signal for the duration of time (Page 2, Column 2; “Once the input layer and liquid weight matrices are initialized, they are kept fixed during the training process. Alternatively, the readout layer is the only trained layer, and it is used to perform classification on the information contained within the liquid states. Unlike the other layers, the readout layer consists of fully connected artificial neurons and implements a single layer perceptron. Training is performed in a supervised manner through the optimization of an output weight matrix initialized in a random distribution” ; and §V.D, Output Layer; and §VII). 
Regarding claim 16, the rejection of claim 15 is incorporated and Henderson further discloses wherein the output layer is further configured to: compress each output voltage generated at each output of each column included in the output resistive memory crossbar configuration from the propagation of the counting voltages through the resistive memory crossbar configuration to a compressed output signal, wherein each compressed output signal is a binary voltage value that represents each corresponding output voltage and is included in the identified temporal signal provided to the training layer (Figures 2 and 4-12; and §V.C&D, State Vector Collection and Output Layer).

Regarding claim 17, the rejections of claims 15 and 16 are incorporated and Henderson further discloses wherein the training layer is further configured to: determine each actual binary voltage value that corresponds to each binary voltage value included in each corresponding compressed output signal generated from each compressed output voltage at each output of each column included in the output resistive memory crossbar configuration and included in the identified temporal signal, wherein each actual binary voltage value corresponds to a correct identification of the temporal signal for the duration of time (Page 2, Column 2; “Once the input layer and liquid weight matrices are initialized, they are kept fixed during the training process. Alternatively, the readout layer is the only trained layer, and it is used to perform classification on the information contained within the liquid states. Unlike the other layers, the readout layer consists of fully connected artificial neurons and implements a single layer perceptron. Training is performed in a supervised manner through the optimization of an output weight matrix initialized in a random distribution”; and Figures 2 and 4-12; and §V.C&D, State Vector Collection and Output Layer).
Regarding claim 18, the rejections of claims 15, 16, and 17 are incorporated and Henderson further discloses wherein the training layer is further configured to: compare each binary voltage value that represents each corresponding output voltage and is included in the identified temporal signal provided to the training layer to each corresponding actual binary voltage value that corresponds to the correct identification of the temporal signal for the duration of time; determine a deviation between each binary voltage value and each corresponding actual binary voltage value, wherein the deviation between each binary voltage value and each corresponding actual binary voltage value is indicative of an error between the identified temporal signal and the correct identification of the temporal signal for the duration of time; and update each weight associated with each output layer resistive memory included in the output resistive memory crossbar configuration based on the deviation between each binary voltage value and each corresponding actual binary voltage value to train the output resistive memory crossbar configuration (Page 2, Column 2; “Once the input layer and liquid weight matrices are initialized, they are kept fixed during the training process. Alternatively, the readout layer is the only trained layer, and it is used to perform classification on the information contained within the liquid states. Unlike the other layers, the readout layer consists of fully connected artificial neurons and implements a single layer perceptron. Training is performed in a supervised manner through the optimization of an output weight matrix initialized in a random distribution”; and Figures 2 and 4-12; and §V.C&D, State Vector Collection and Output Layer).
Regarding claim 19, the rejections of claims 15, 16, 17, and 18 are incorporated, and Henderson further discloses wherein the training layer is further configured to: update each weight associated with each output layer resistive memory included in the output resistive memory crossbar configuration thereby reducing the error between the identified temporal signal and the correct identification of the temporal signal for the duration of time for each iteration of input voltages applied to the input layer and propagated through the input layer and liquid layer (Page 2, Column 2; “Once the input layer and liquid weight matrices are initialized, they are kept fixed during the training process. Alternatively, the readout layer is the only trained layer, and it is used to perform classification on the information contained within the liquid states. Unlike the other layers, the readout layer consists of fully connected artificial neurons and implements a single layer perceptron. Training is performed in a supervised manner through the optimization of an output weight matrix initialized in a random distribution”; and Figures 2 and 4-12; and §V.C&D, State Vector Collection and Output Layer).

Regarding claim 20, the rejections of claims 15, 16, 17, 18, and 19 are incorporated, and Henderson further discloses wherein each resistive memory is a memristor (Abstract; “we combine these ideas and present a memristor crossbar-based implementation of a liquid state machine based on spiking neurons”).
Conclusion

07-96 AIA The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Qiu et al., “Enhancing the Performance of Liquid State Machine with Time-Division Sampling and Reservoir Reconstructing”, May 30, 2022, 2021 IEEE 23rd Int. Conf. on High Performance Computing & Communications; 7th Int. Conf. on Data Science & Systems; 19th Int. Conf. on Smart City; 7th Int. Conf. on Dependability in Sensor, Cloud & Big Data Systems & Application (HPCC/DSS/SmartCity/DependSys), pp. 1108-1115.

Soures et al., “Robustness of a memristor based liquid state machine”, Jul. 3, 2017, 2017 International Joint Conference on Neural Networks (IJCNN), pp. 2414-2420.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Brent Hoover, whose telephone number is (303) 297-4403. The examiner can normally be reached Monday - Friday, 9-5 MST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abdullah Kawsar, can be reached on 571-270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BRENT JOHNSTON HOOVER/
Primary Examiner, Art Unit 2127

Application/Control Number: 18/226,658 Art Unit: 2127

1 Note that this reference qualifies under 35 U.S.C. § 102(a)(1) as prior art because the application names fewer joint inventors (Henderson, Yakopcic, and Taha) than the prior art reference (Henderson, Yakopcic, Harbour, and Taha) and the reference was publicly disclosed by the joint inventors of the reference at the IJCNN conference that took place July 18-23, 2022. See MPEP § 2153.01 for more details.