DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-8 and 10-14 are pending. (Note claim objection below.)

Claim Objections

Claims 1-8 and 10-14 are objected to because of the following informality: the numbering of the claims is not in accordance with 37 CFR 1.126, given that claim 9 has been omitted. When new claims are presented, they must be numbered consecutively beginning with the number next following the highest numbered claim previously presented. Appropriate correction is required. (The rejections below use applicant's claim numbering.)

Claim Rejections - 35 USC § 112(b)

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 3 and 13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 3 and 13 recite "wherein the [memory] is a resistive random-access memory." However, either the claim itself or a parent claim limits the type of memory to "volatile memory." Resistive random-access memory (ReRAM or RRAM) is typically considered non-volatile. This inconsistency renders the claims indefinite, since it cannot be determined which type of memory, volatile or non-volatile, applicant is attempting to claim. Applicant's specification (see at least [0063]) does not provide a clarifying explanation. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-8 and 10-14 are rejected under 35 U.S.C. 103 as being unpatentable over Hoang (US 20210110235) in view of Hou (US 20220164630).
Claim 1

Hoang discloses:

A neural network accelerator architecture for multiple task adaptation {[0098] The embodiments presented above provide storage class memory array, or sub-array, for in-memory computing architectures to accelerate convolution neural network inference, i.e., tasks.}, comprising:

a memory comprising a plurality of subarrays, each subarray comprising M rows and N columns of memory cells {[0063] FIG. 14 is an embodiment for an architecture that can leverage all-zero columns of a storage class memory sub-array to reduce the number of bit line accesses to improve performance and energy efficiency. FIG. 14 illustrates an array, or portion of an array, 1401 of resistive non-volatile memory cells and peripheral elements, similar to the portions of the array shown in FIG. 9, but with the memory cells represented as blocks at the intersection of word lines and bit lines. N bit lines, running from BL.sup.0 to BL.sup.N−1, and M word lines, running from WL.sup.0 to WL.sup.M−1, are shown and can represent the whole of an array or a compact portion of a larger array.};

a source line driver connected to a plurality of N source lines, each source line corresponding to a column in the subarray {[0067] The use of the ZCI values can provide energy savings as, when ZCI=0, there is no need to access the corresponding bit line. This is illustrated schematically with multiplex circuit MUX 1411 that receives the bit line addresses and also the ZCI values from the ZCI register 1420. If the selected bit line address matches a bit line with ZCI=0, the MUX 1411 can notify the bit line activation circuit, along with the ADC 1407, i.e., a source line driver, and shift and add 1409 so that the corresponding column can just be skipped in the sensing operation.};

a binary mask buffer memory having size at least N bits, each bit corresponding to a column in the subarray, where a 0 corresponds to turning off the column for a convolution operation and a 1 corresponds to turning on the column for the convolution operation {[0065] For this purpose, a Zero Column Index (ZCI) 1420 is added at the array or sub-array level, adding one vector per array or sub-array. The size of the ZCI 1420 is the number of bit lines in the array or subarray 1401, which can be the same as the corresponding row buffer. Each bit line BL.sup.i has an entry in ZCI 1420 where, in this embodiment, ZCI.sup.i=‘0’ indicates the i.sup.th column having all-zero weights and ZCI.sup.i=‘1’ indicates the i.sup.th column having at least one non-zero weight, i.e., a binary mask buffer.};

and a controller configured to selectively drive each of the N source lines with a corresponding value from the mask buffer {[0067] This is illustrated schematically with multiplex circuit MUX 1411, i.e., a controller, that receives the bit line addresses and also the ZCI values from the ZCI register 1420. If the selected bit line address matches a bit line with ZCI=0, the MUX 1411 can notify the bit line activation circuit, along with the ADC 1407 and shift and add 1409 so that the corresponding column can just be skipped in the sensing operation.};

wherein each column in the subarray is configured to store a convolution kernel {[0063] The memory cells have been written with a set of weights of a CNN to form a filter for use in a convolution, i.e., a convolution kernel. Depending on the embodiment, the weights can be binary, multi-level, or even analog. Similarly, the inputs can be binary, multi-level, or even analog. In any case, the weights are shown to have a fairly high degree of sparsity, where zero weights are represented as having a "0" in the corresponding block and non-zero weights have their corresponding block shaded. In FIG. 14, the columns corresponding to BL.sup.1 and BL.sup.N−2 are all-zero columns, so that for any input values, the output along the bit line will be zero.}.

Hoang, while disclosing non-volatile memory, doesn't explicitly disclose volatile memory; however, Hou, in a similar field of endeavor directed to deep convolutional neural networks, teaches:

volatile memory {[0126] The memory 1004 may be implemented by any type of volatile or non-volatile storage devices or a combination thereof, and the memory 1004 may be a Static Random Access Memory (SRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), a magnetic memory, a flash memory, a magnetic disk or a compact disk.}.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Hoang to include the features of Hou, in order to provide the high-speed, low-latency read/write access necessary for real-time inference and training.

Claims 2 and 12

Hou further teaches:

random access memory {[0126] memory 1004 may be a Static Random Access Memory (SRAM).}.

The motivation and rationale to include the additional features of Hou are the same as set forth previously.

Claims 3 and 13

Hoang further discloses:

resistive random access memory {[0040] Other examples of suitable technologies for memory cells of the memory structure 326 include ReRAM memories (resistive random access memories).}.

Claims 4 and 11

Hou further teaches:

a real-valued mask buffer configured to store a calculated real-valued mask / calculating real-valued masks to correspond to each task in the set of tasks {[0044] The present disclosure provides a MIMO strategy in the 3D separable CNN. While existing networks are SISO, MISO, or two-input two-output, the MIMO network provided in the present disclosure can take multiple input frames and output multiple binary masks using temporal-dimension in each sample.};

and a sigmoid element configured to convert the real-valued mask into a binary mask for storage in the binary mask buffer memory / calculating the corresponding binary masks from the real-valued masks with a sigmoid function {[0086] Finally in block 8, the feature maps are projected to a 4D output of size 6×H×W×1, and a sigmoid activation function is appended to generate the probability masks for 6 successive frames.}.

The motivation and rationale to include the additional features of Hou are the same as set forth previously.

Claim 5

Hou further teaches:

wherein the real-valued mask buffer comprises floating-point values and the sigmoid element is a thresholding element having a threshold of 0.5 {[0086] A threshold of 0.5 is applied to convert the probability masks to binary masks that indicate the detected moving objects.}.

The motivation and rationale to include the additional features of Hou are the same as set forth previously.
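For context on the mask-binarization limitations mapped above (claims 4, 5, and 11), the following is a minimal illustrative sketch of converting a real-valued mask to a binary column mask with a sigmoid followed by a 0.5 threshold. It is offered only to clarify the claimed operation; it is not drawn from Hoang or Hou, and the function names, shapes, and example values are hypothetical.

```python
# Illustrative sketch only (not from either reference): convert a real-valued
# task mask to a binary column mask via a sigmoid followed by a 0.5 threshold,
# as recited in claims 4, 5, and 11. All names, shapes, and values are hypothetical.
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    """Map real-valued mask scores into the open interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def binarize_mask(real_mask: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Apply the sigmoid, then threshold at 0.5 to produce the binary mask
    stored in the binary mask buffer (one bit per subarray column)."""
    probs = sigmoid(real_mask.astype(np.float64))
    return (probs >= threshold).astype(np.uint8)

# Example: a real-valued mask for N = 8 columns.
real_mask = np.array([-2.1, 0.3, 1.7, -0.4, 0.0, 2.5, -1.2, 0.9])
print(binarize_mask(real_mask))  # -> [0 1 1 0 1 1 0 1]
```

Note that because sigmoid(0) = 0.5, thresholding the sigmoid output at 0.5 is equivalent to testing the sign of the raw real-valued mask entry.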
Claim 6

Hoang further discloses:

wherein each memory cell stores 2 bits {[0098] The use of the ZCI and ZRI bits allows for the elimination of unnecessary accesses to bit lines or word lines that contain all-zero weight values by deactivating their associated input/out, improving both performance and energy efficiency of CNN inference with sparse matrix multiplication.}.

Claim 7

Hoang further discloses:

a plurality of N/2 shift-adders, each configured to combine two 2-bit weights from adjacent columns of the subarray into a 4-bit partial sum activation {[0054] A shift and add circuit 909 is used to perform accumulation operations from the values received from the ADC 907. Depending on the embodiment, the input and weight values can be binary or multi-state.}.

Claim 8

Hoang further discloses:

wherein the binary mask buffer memory has a size of at least 2N bits, and is configured to store two separate masks of size N, each bit of each mask corresponding to a column in the subarray {[0065] For this purpose, a Zero Column Index (ZCI) 1420 is added at the array or sub-array level, adding one vector per array or sub-array. The size of the ZCI 1420 is the number of bit lines in the array or subarray 1401, which can be the same as the corresponding row buffer. Each bit line BL.sup.i has an entry in ZCI 1420 where, in this embodiment, ZCI.sup.i=‘0’ indicates the i.sup.th column having all-zero weights and ZCI.sup.i=‘1’ indicates the i.sup.th column having at least one non-zero weight, i.e., the binary mask buffer memory has a size of at least 2N bits.}.

Claim 10

Hoang discloses:

A method of machine learning for multiple task adaptation {[0022] When a convolution neural network (CNN) performs an inference operation, i.e., machine learning for multiple task adaptation, the most time consuming parts of the inference are the convolution operations as these are very computationally intensive matrix multiplication operations using large amounts of data. The convolutions, or matrix multiplications, are performed using sets of weights, referred to as filters, determined during a training process for the CNN; method described in [0101].}, comprising:

loading a backbone model into a memory, the memory comprising a plurality of subarrays, each subarray comprising M rows and N columns of memory cells, wherein each column of the N columns is configured to store a convolution kernel of the backbone model {[0063] FIG. 14 is an embodiment for an architecture that can leverage all-zero columns of a storage class memory sub-array to reduce the number of bit line accesses to improve performance and energy efficiency. FIG. 14 illustrates an array, or portion of an array, 1401 of resistive non-volatile memory cells and peripheral elements, similar to the portions of the array shown in FIG. 9, but with the memory cells represented as blocks at the intersection of word lines and bit lines. N bit lines, running from BL.sup.0 to BL.sup.N−1, and M word lines, running from WL.sup.0 to WL.sup.M−1, are shown and can represent the whole of an array or a compact portion of a larger array, i.e., loading a backbone model into a memory, the memory comprising a plurality of subarrays, each subarray comprising M rows and N columns of memory cells, wherein each column of the N columns is configured to store a convolution kernel of the backbone model.};

selecting a set of tasks to run on the backbone model, each task having a corresponding binary mask configured to enable or disable each of the N columns of the subarray {[0065] For this purpose, a Zero Column Index (ZCI) 1420 is added at the array or sub-array level, adding one vector per array or sub-array. The size of the ZCI 1420 is the number of bit lines in the array or subarray 1401, which can be the same as the corresponding row buffer. Each bit line BL.sup.i has an entry in ZCI 1420 where, in this embodiment, ZCI.sup.i=‘0’ indicates the i.sup.th column having all-zero weights and ZCI.sup.i=‘1’ indicates the i.sup.th column having at least one non-zero weight, i.e., selecting a set of tasks to run on the backbone model, each task having a corresponding binary mask configured to enable or disable each of the N columns of the subarray.};

selecting one task of the set of tasks and applying the binary mask corresponding to the task to the N columns of the subarray, disabling at least one column of the subarray {[0067] This is illustrated schematically with multiplex circuit MUX 1411 that receives the bit line addresses and also the ZCI values from the ZCI register 1420. If the selected bit line address matches a bit line with ZCI=0, the MUX 1411 can notify the bit line activation circuit, along with the ADC 1407 and shift and add 1409 so that the corresponding column can just be skipped in the sensing operation, i.e., selecting one task of the set of tasks and applying the binary mask corresponding to the task to the N columns of the subarray, disabling at least one column of the subarray.};

and executing the task on the backbone model, ignoring the disabled convolution kernel to calculate a result {See [0067].}.

Hoang, while disclosing non-volatile memory, doesn't explicitly disclose volatile memory; however, Hou, in a similar field of endeavor directed to deep convolutional neural networks, teaches:

volatile memory {[0126] The memory 1004 may be implemented by any type of volatile or non-volatile storage devices or a combination thereof, and the memory 1004 may be a Static Random Access Memory (SRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), a magnetic memory, a flash memory, a magnetic disk or a compact disk.}.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Hoang to include the features of Hou, in order to provide the high-speed, low-latency read/write access necessary for real-time inference and training.
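For context on the method of claim 10 as mapped to Hoang's ZCI-based column skipping, the following is a minimal illustrative sketch of applying a per-task binary mask to the N columns of a subarray so that disabled columns are skipped when computing per-column partial sums. It is not taken from either reference; the function signature, shapes, and example values are hypothetical.

```python
# Illustrative sketch only (assumptions, not Hoang's or Hou's disclosure):
# apply a per-task binary mask to the N columns of a subarray so that
# masked-off columns (mask bit = 0) are skipped during the convolution,
# mirroring Hoang's ZCI-based bit line skipping. Names/shapes are hypothetical.
import numpy as np

def masked_subarray_convolution(
    inputs: np.ndarray,     # shape (M,): input activations on the M word lines
    weights: np.ndarray,    # shape (M, N): one convolution kernel per column
    task_mask: np.ndarray,  # shape (N,): binary mask from the mask buffer
) -> np.ndarray:
    """Return per-column partial sums, computing only the enabled columns."""
    M, N = weights.shape
    partial_sums = np.zeros(N)
    for col in range(N):
        if task_mask[col] == 0:
            continue  # column disabled for this task: skip the bit line access
        partial_sums[col] = inputs @ weights[:, col]
    return partial_sums

# Example: the selected task enables columns 0 and 2 of a 4x3 subarray.
inputs = np.array([1.0, 0.0, 1.0, 1.0])
weights = np.arange(12, dtype=float).reshape(4, 3)
mask = np.array([1, 0, 1])
print(masked_subarray_convolution(inputs, weights, mask))  # -> [15.  0. 21.]
```

In Hoang's disclosure the skip happens in hardware (the MUX 1411 bypasses the bit line, ADC 1407, and shift and add 1409 for ZCI=0 columns); the loop above merely models that behavior in software.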
Claim 14

Hou further teaches:

calculating a first partial sum in a first subarray of the plurality of subarrays, and a second partial sum in a second subarray of the plurality of subarrays {[0047] For example, the model in the running average background dynamically updates the background image to adapt to the scene changes by computing the weighted sum of the current frame and the previously estimated background image.};

and combining the first and second partial sums to calculate an activation {See [0047].}.

The motivation and rationale to include the additional features of Hou are the same as set forth previously.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

"Distributed Inference Acceleration with Adaptive DNN Partitioning and Offloading" (NPL attached), which teaches: Deep neural networks (DNN) are the de-facto solution behind many intelligent applications of today, ranging from machine translation to autonomous driving. DNNs are accurate but resource-intensive, especially for embedded devices such as mobile phones and smart objects in the Internet of Things. The proposed scheme includes both an adaptive DNN partitioning scheme and a distributed algorithm to offload computations based on a matching game approach. Results obtained by using a self-driving car dataset and several DNN benchmarks show that the proposed solution significantly reduces the total latency for DNN inference compared to other distributed approaches and is 2.6 to 4.2 times faster than the state of the art.

US 20180373975, which teaches: Subject matter disclosed herein may relate to storage and/or processing of signals and/or states representative of neural network parameters in a computing device, and may relate more particularly to compressing signals and/or states representative of neural network nodes in a computing device.

US 20210004668, which teaches: Described is a neural network accelerator tile for exploiting input sparsity. The tile includes a weight memory to supply each weight lane with a weight and a weight selection metadata, an activation selection unit to receive a set of input activation values and rearrange the set of input activation values to supply each activation lane with a set of rearranged activation values, a set of multiplexers including at least one multiplexer per pair of activation and weight lanes, where each multiplexer is configured to select a combination activation value for the activation lane from the activation lane set of rearranged activation values based on the weight lane weight selection metadata, and a set of combination units including at least one combination unit per multiplexer, where each combination unit is configured to combine the activation lane combination value with the weight lane weight to output a weight lane product.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN SAMUEL WASAFF, whose telephone number is (571) 270-5091. The examiner can normally be reached Monday through Friday, 8:00 am to 6:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, SARAH MONFELDT, can be reached at (571) 270-1833. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOHN S. WASAFF/
Primary Examiner, Art Unit 3629