Prosecution Insights
Last updated: April 19, 2026
Application No. 19/034,331

SYSTEM AND METHOD FOR EFFICIENTLY SCALING AND CONTROLLING INTEGRATED IN-MEMORY COMPUTE

Non-Final OA: §103, §112

Filed: Jan 22, 2025
Examiner: WADDY JR, EDWARD
Art Unit: 2135
Tech Center: 2100 — Computer Architecture & Software
Assignee: Rain Neuromorphics Inc.
OA Round: 1 (Non-Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82%, above average (278 granted / 337 resolved; +27.5% vs TC avg)
Interview Lift: +23.1% across resolved cases with interview (a strong lift)
Typical Timeline: 2y 11m average prosecution; 13 applications currently pending
Career History: 350 total applications across all art units

Statute-Specific Performance

§101: 5.4% (-34.6% vs TC avg)
§102: 1.9% (-38.1% vs TC avg)
§103: 60.1% (+20.1% vs TC avg)
§112: 26.1% (-13.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 337 resolved cases.
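Reading the figures above back numerically, each delta appears to be the examiner's rate minus the Tech Center average; the short sketch below assumes that interpretation (the tool's exact methodology is not stated) and recovers the implied TC baseline for each statute:

```python
# Examiner rejection rates by statute and their deltas vs the Tech Center
# average, as shown above. Assumes delta = examiner_rate - tc_average.
stats = {
    "101": (5.4, -34.6),
    "102": (1.9, -38.1),
    "103": (60.1, +20.1),
    "112": (26.1, -13.9),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # implied Tech Center average for this statute
    print(f"§{statute}: examiner {rate:.1f}%, implied TC avg {tc_avg:.1f}%")
```

Notably, every implied baseline works out to 40.0%, which suggests the deltas in the chart are all measured against a single common reference rather than per-statute averages.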

Office Action

Rejections: §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

DETAILED ACTION

This Office Action is sent in response to Applicant's Communication received on 22 January 2025 for application number 19/034,331. The Office hereby acknowledges receipt of the following, which have been placed of record in the file: Oath/Declaration, Abstract, Specification, Drawings, and Claims. Claims 1-20 are presented for examination.

Priority

As required by M.P.E.P. 201.14(c), acknowledgement is made of applicant's claim for priority based on the applications filed on 24 February 2024 (Provisionals 63/624,491, 63/624,479, and 63/624,487).

Drawings

The applicant's drawings as submitted are acceptable for examination purposes.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 6, 14, and 19 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.

Claim 6 recites in the last limitation "…the input vector driving circuitry configured to drive the input vector to the second bank if the second bank stores a portion of the plurality of weights." With respect to the use of "if" the second bank stores a portion of the plurality of weights, it is unclear what happens when the second bank does not store a portion of the plurality of weights. It is unclear whether the claim implies that nothing happens, or that the alternative result occurs. The claim language is therefore indefinite. Examiner suggests changing "if the second bank stores a portion…" to "[[if]] in response to the second bank storing a portion…" to use definite language and overcome the rejection.
Claims 14 and 19 recite similar language and are rejected with like reasoning.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Chou et al. [hereafter as Chou], US Pub. No. 2022/0351032 A1, in view of Mori et al. [hereafter as Mori], US Pub. No. 2025/0239281 A1, in view of Gupta et al. [hereafter as Gupta], US Pub. No. 2019/0205741 A1, and further in view of Guzy et al. [hereafter as Guzy], US Patent No. 10,802,735 B1.

As per claim 1, Chou discloses a compute engine ["ANN accelerator 170 is configured to execute machine learning models, such as, for example, ANNs, CNNs, RNNs, etc., in support of various applications embodied by software modules 134. Generally, ANN accelerator 170 may include one or more processing engines (PEs) 180."] [para. 0152], comprising: a compute-in-memory (CIM) hardware module, the CIM hardware module including an array of storage cells for storing a plurality of weights corresponding to a matrix ["In another embodiment of the CIM array module, the voltage levels received by the CIM array are provided by a plurality of digital-to-analog converters (DACs); the conductance of each cell of the CIM array is programmed to represent one element of a sparse weight matrix"] [para. 0180]; wherein the array of storage cells includes a portion of the array of storage cells corresponding to a plurality of rows and a particular number of columns corresponding to a portion of the matrix ["CIM array including a plurality of selectable row signal lines, a plurality of column signal lines and a plurality of cells, each selectable row signal line configured to receive one of the voltage levels, each cell located at an intersection of a row signal line and a column signal line, each cell having a programmable conductance"] [para. 0161] [para. 0180].

However, Chou does not explicitly disclose an input buffer; and a compute-in-memory (CIM) hardware module coupled with the input buffer, the input buffer being configured to provide an input vector to the CIM hardware module, and compute logic configured to perform a vector-matrix multiplication (VMM) for the matrix and the input vector; wherein the array of storage cells includes a plurality of storage blocks, each storage block including a portion of the array of storage cells; and the compute logic includes a plurality of compute logic blocks, each compute logic block corresponding to a storage block of the plurality of storage blocks, the compute logic block performing a portion of the VMM for the portion of the matrix.

Mori teaches an input buffer ["The input buffer 103 is configured to receive input data to perform a CIM operation with the weight data or activation data stored in the CIM array 104."] [para. 0022]; and a compute-in-memory (CIM) hardware module coupled with the input buffer [id., para. 0022], the input buffer being configured to provide an input vector to the CIM hardware module ["The CIM array includes bit cells arranged in columns, in which the CIM array generates, in response to an input vector and a stored vector in the bit cells, accumulation results."] [Abstract] ["the input buffer 103 forwards the first bits of the input vector to the CIM array 104"] [para. 0103].

Chou and Mori are analogous art, both aimed at improving memory performance in storage systems. It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine Chou with Mori in order to modify Chou for "an input buffer; and a compute-in-memory (CIM) hardware module coupled with the input buffer" as taught by Mori. One of ordinary skill in the art would be motivated to combine Chou with Mori before the effective filing date of the claimed invention to improve a system by providing the ability to "shorten the time for computation," where "compute-in-memory (CIM) devices are used to process dot product multiplications based on performing multiply-accumulate (MAC) operations." [Mori, para. 0001].

However, Chou and Mori do not explicitly disclose compute logic configured to perform a vector-matrix multiplication (VMM) for the matrix and the input vector; wherein the array of storage cells includes a plurality of storage blocks, each storage block including a portion of the array of storage cells; and the compute logic includes a plurality of compute logic blocks, each compute logic block corresponding to a storage block of the plurality of storage blocks, the compute logic block performing a portion of the VMM for the portion of the matrix.
Gupta teaches compute logic configured to perform a vector-matrix multiplication (VMM) for the matrix and the input vector ["The vector and matrix computations are executed through memristor crossbar arrays. As shown in FIG. 2, input voltages V^1 corresponding to an input vector are applied along the rows of an N×M array, which has been programmed according to an N×M matrix input 210. The output currents are collected through the columns by measuring output voltage V^O. At each column, every input voltage is weighted by the corresponding memristance (1/G_{i,j}) and the weighted summation appears at the output voltage. Thus, the relation between the input and output voltages can be represented in a vector matrix multiplication form V^O = −V^1 G R_s (negative feedback of op-amp), where G is an N×M matrix determined by conductances of the memristor crossbar array."] [para. 0050]; the compute logic block performing a portion of the VMM for the portion of the matrix ["wherein the input to the VMM processor includes a VMM signal-processing chain, the chain comprising digital logic blocks that perform a set of sequential functions to prepare floating point data for VMM,"] [claim 2].

Chou, Mori, and Gupta are analogous art, all aimed at improving memory performance in storage systems. It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine Chou and Mori with Gupta in order to modify Chou and Mori for "compute logic configured to perform a vector-matrix multiplication (VMM) for the matrix and the input vector; the compute logic block performing a portion of the VMM for the portion of the matrix" as taught by Gupta.
One of ordinary skill in the art would be motivated to combine Chou and Mori with Gupta before the effective filing date of the claimed invention to improve a system by providing the ability where a "digital controller … can implement several features for interfacing with the host processor … and the VMM engines …, to minimize memory access and improve computational efficiency." [Gupta, para. 0088].

However, Chou, Mori, and Gupta do not explicitly disclose wherein the array of storage cells includes a plurality of storage blocks, each storage block including a portion of the array of storage cells; and the compute logic includes a plurality of compute logic blocks, each compute logic block corresponding to a storage block of the plurality of storage blocks.

Guzy teaches wherein the array of storage cells includes a plurality of storage blocks, each storage block including a portion of the array of storage cells ["In step 602, a processing system (e.g., processing system 102) generates a first control instruction. For example, the first control instruction comprises an output instruction (e.g., signal) to set storage memory functionality of a block (e.g., a sub-array) of storage memory elements (e.g., storage memory elements 106) for one or more programmable cells of at least reconfigurable dual-function cell array (e.g., reconfigurable dual-function cell array 102). In some embodiments, a control logic circuit (e.g., control logic circuit 108) generates the first control instruction."] [col. 7, lines 33-42]; and the compute logic includes a plurality of compute logic blocks, each compute logic block corresponding to a storage block of the plurality of storage blocks [id., col. 7, lines 33-42].

Chou, Mori, Gupta, and Guzy are analogous art, all aimed at improving memory performance in storage systems. It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine Chou, Mori, and Gupta with Guzy in order to modify Chou, Mori, and Gupta "wherein the array of storage cells includes a plurality of storage blocks, each storage block including a portion of the array of storage cells; and the compute logic includes a plurality of compute logic blocks, each compute logic block corresponding to a storage block of the plurality of storage blocks" as taught by Guzy. One of ordinary skill in the art would be motivated to combine Chou, Mori, and Gupta with Guzy before the effective filing date of the claimed invention to improve a system by providing the ability where "a processing system includes …reconfigurable dual-function cell arrays. …This can improve system performance and/or consume less energy than traditional systems." [Guzy, col. 4, lines 32-50].

Claim 17 is rejected with like reasoning.

Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Chou et al. [hereafter as Chou], US Pub. No. 2022/0351032 A1, in view of Mori et al. [hereafter as Mori], US Pub. No. 2025/0239281 A1, in view of Gupta et al. [hereafter as Gupta], US Pub. No. 2019/0205741 A1, and further in view of Guzy et al. [hereafter as Guzy], US Patent No. 10,802,735 B1, as applied to claim 1 above, and further in view of Lee et al. [hereafter as Lee], US Pub. No. 2023/0030605 A1.

As per claim 2, Chou in view of Mori, Gupta, and Guzy discloses the compute engine of claim 1; however, Chou, Mori, Gupta, and Guzy do not explicitly disclose wherein each compute logic block includes an adder tree and an accumulator.

Lee teaches wherein each compute logic block includes an adder tree and an accumulator ["adder tree circuit 200T includes one more circuit elements (not shown), e.g., an accumulator circuit, configured to generate some or all of output signal OUT based on the summation data element."] [para. 0072].

Chou, Mori, Gupta, Guzy, and Lee are analogous art, all aimed at improving memory performance in storage systems. It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine Chou, Mori, Gupta, and Guzy with Lee in order to modify them "wherein each compute logic block includes an adder tree and an accumulator" as taught by Lee. One of ordinary skill in the art would be motivated to combine Chou, Mori, Gupta, and Guzy with Lee before the effective filing date of the claimed invention to improve a system where it is "configured to be capable of performing computation-in-memory (CIM) operations based on weight data elements stored in the DRAM array. Compared to other approaches, such memory circuits are capable of performing CIM operations based on high memory capacity using a smaller area and lower power level. In various applications, e.g., convolutional neural network (CNN) applications, the memory circuit embodiments enable the weight data elements to be efficiently applied to sets of input data elements in multiply-and-accumulate (MAC) and other operations." [Lee, para. 0012].
As per claim 3, Chou in view of Mori, Gupta, Guzy, and Lee discloses the compute engine of claim 2. Lee further teaches wherein each storage block corresponds to a base precision of the plurality of weights ["As the size of the weight data element increases, weight data precision increases along with complexity and execution time of the one or more operations performed by computation circuit 100B."] [para. 0035].

Claims 4 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Chou et al. [hereafter as Chou], US Pub. No. 2022/0351032 A1, in view of Mori et al. [hereafter as Mori], US Pub. No. 2025/0239281 A1, in view of Gupta et al. [hereafter as Gupta], US Pub. No. 2019/0205741 A1, and further in view of Guzy et al. [hereafter as Guzy], US Patent No. 10,802,735 B1, as applied to claim 17 above, and further in view of Lee et al. [hereafter as Lee], US Pub. No. 2023/0030605 A1, as applied to claim 3 above, and further in view of Brown [hereafter as Brown], US Pub. No. 2021/0303574 A1.

As per claim 4, Chou in view of Mori, Gupta, Guzy, and Lee discloses the compute engine of claim 3; however, Chou, Mori, Gupta, Guzy, and Lee do not explicitly disclose wherein the plurality of compute logic blocks includes compute logic block pairs and wherein the plurality of storage blocks includes storage block pairs, each of the compute logic block pairs includes a first compute logic block and a second compute logic block, wherein each of the storage block pairs includes a first storage block and a second storage block, the first compute logic block corresponding to the first storage block and the second compute logic block corresponding to the second storage block, each of the compute logic block pairs further including merge logic for merging a first resultant of the first compute logic block with a second resultant of the second compute logic block.
Brown teaches wherein the plurality of compute logic blocks includes compute logic block pairs and wherein the plurality of storage blocks includes storage block pairs, each of the compute logic block pairs includes a first compute logic block and a second compute logic block, wherein each of the storage block pairs includes a first storage block and a second storage block, the first compute logic block corresponding to the first storage block and the second compute logic block corresponding to the second storage block, each of the compute logic block pairs further including merge logic for merging a first resultant of the first compute logic block with a second resultant of the second compute logic block ["The data blocks can represent the data sets determined based on partitioning a matrix as noted above. By way of example, all of data blocks of first and second input matrices can be distributed to a single processor of multiple processors of a database management system where a final result is to be computed. In addition, copies of all of the data blocks for the first input matrix can be created on all of the processors the database management system while distributing a subset of data blocks of the second input matrix to different processors of the multiple processors to compute a per-processor local result that is a portion of the final result before combining per-processor portions into the final result..."] [para. 0069].

Chou, Mori, Gupta, Guzy, Lee, and Brown are analogous art, all aimed at improving memory performance in storage systems.
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine Chou, Mori, Gupta, Guzy, and Lee with Brown in order to modify them "wherein the plurality of compute logic blocks includes compute logic block pairs and wherein the plurality of storage blocks includes storage block pairs, each of the compute logic block pairs includes a first compute logic block and a second compute logic block, wherein each of the storage block pairs includes a first storage block and a second storage block, the first compute logic block corresponding to the first storage block and the second compute logic block corresponding to the second storage block, each of the compute logic block pairs further including merge logic for merging a first resultant of the first compute logic block with a second resultant of the second compute logic block" as taught by Brown. One of ordinary skill in the art would be motivated to combine Chou, Mori, Gupta, Guzy, and Lee with Brown before the effective filing date of the claimed invention to improve a system by providing for "organizing matrix data in storage in a manner that lends itself to efficiently computing multiply is another area where a DBMS can be extended to improve the computation of matrix multiply results." [Brown, para. 0070].

Claim 18 is rejected with like reasoning.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Chou et al. [hereafter as Chou], US Pub. No. 2022/0351032 A1, in view of Mori et al. [hereafter as Mori], US Pub. No. 2025/0239281 A1, in view of Gupta et al. [hereafter as Gupta], US Pub. No. 2019/0205741 A1, and further in view of Guzy et al. [hereafter as Guzy], US Patent No. 10,802,735 B1, and further in view of Vasilopoulos et al. [hereafter as Vasilopoulos], US Pub. No. 2025/0165769 A1.
Claim 9 is rejected with like reasoning as claim 1 above, except for the following remaining claim limitations: a compute tile, comprising: at least one general-purpose (GP) processor; and a plurality of compute engines coupled with the at least one GP processor, each of the plurality of compute engines. However, Chou, Mori, Gupta, and Guzy do not explicitly disclose these limitations.

Vasilopoulos teaches a compute tile ["a computing device configured to balance utilization of tiles in an analog in-memory computing system includes a processor operating an analog in-memory computing engine in the analog in-memory computing system and a memory coupled to the processor."] [para. 0008], comprising: at least one general-purpose (GP) processor [id., para. 0008]; and a plurality of compute engines coupled with the at least one GP processor, each of the plurality of compute engines [id., para. 0008] ["In one embodiment, a method may include mapping a plurality of layers belonging to a plurality of neural networks to one or more two-dimensional tiers on one or more tiles in a heterogeneous three-dimensional compute-in-memory accelerator with one or more digital processing units, such that all units (digital processing units (DPUs) or AIMC tiles) in the system present equal processing load and units are spatially placed such that communication costs are minimized."] [para. 0076].

Chou, Mori, Gupta, Guzy, and Vasilopoulos are analogous art, all aimed at improving memory performance in storage systems. It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine Chou, Mori, Gupta, and Guzy with Vasilopoulos in order to modify them for "a compute tile, comprising: at least one general-purpose (GP) processor; and a plurality of compute engines coupled with the at least one GP processor, each of the plurality of compute engines" as taught by Vasilopoulos. One of ordinary skill in the art would be motivated to combine Chou, Mori, Gupta, and Guzy with Vasilopoulos before the effective filing date of the claimed invention to improve a system by providing the ability where "all units (digital processing units (DPUs) or AIMC tiles) in the system present equal processing load and units are spatially placed such that communication costs are minimized." [Vasilopoulos, para. 0076].

Claims 10 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Chou et al. [hereafter as Chou], US Pub. No. 2022/0351032 A1, in view of Mori et al. [hereafter as Mori], US Pub. No. 2025/0239281 A1, in view of Gupta et al. [hereafter as Gupta], US Pub. No. 2019/0205741 A1, and further in view of Guzy et al. [hereafter as Guzy], US Patent No. 10,802,735 B1, and further in view of Vasilopoulos et al. [hereafter as Vasilopoulos], US Pub. No. 2025/0165769 A1, as applied to claim 9 above, and further in view of Lee et al. [hereafter as Lee], US Pub. No. 2023/0030605 A1.

As per claim 10, Chou in view of Mori, Gupta, Guzy, and Vasilopoulos discloses the compute tile of claim 9, wherein the remaining claim limitations are rejected with like reasoning as claim 2 above. Chou, Mori, Gupta, Guzy, Vasilopoulos, and Lee are analogous art, all aimed at improving memory performance in storage systems. It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine Chou, Mori, Gupta, Guzy, and Vasilopoulos with Lee in order to modify them "wherein each compute logic block includes an adder tree and an accumulator" as taught by Lee. One of ordinary skill in the art would be motivated to combine Chou, Mori, Gupta, Guzy, and Vasilopoulos with Lee before the effective filing date of the claimed invention for the same reasons given for claim 2 above [Lee, para. 0012].

As per claim 11, Chou in view of Mori, Gupta, Guzy, Vasilopoulos, and Lee discloses the compute tile of claim 10, wherein the remaining claim limitations are rejected with like reasoning as claim 3 above.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Chou et al. [hereafter as Chou], US Pub. No. 2022/0351032 A1, in view of Mori et al. [hereafter as Mori], US Pub. No. 2025/0239281 A1, in view of Gupta et al. [hereafter as Gupta], US Pub. No. 2019/0205741 A1, and further in view of Guzy et al. [hereafter as Guzy], US Patent No. 10,802,735 B1, and further in view of Vasilopoulos et al. [hereafter as Vasilopoulos], US Pub. No. 2025/0165769 A1, and further in view of Lee et al. [hereafter as Lee], US Pub. No. 2023/0030605 A1, as applied to claim 11 above, and further in view of Brown [hereafter as Brown], US Pub. No. 2021/0303574 A1.

As per claim 12, Chou in view of Mori, Gupta, Guzy, Vasilopoulos, and Lee discloses the compute tile of claim 11, wherein the remaining claim limitations are rejected with like reasoning as claim 4 above. Chou, Mori, Gupta, Guzy, Vasilopoulos, Lee, and Brown are analogous art, all aimed at improving memory performance in storage systems. It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine Chou, Mori, Gupta, Guzy, Vasilopoulos, and Lee with Brown in order to modify them "wherein the plurality of compute logic blocks includes compute logic block pairs and wherein the plurality of storage blocks includes storage block pairs … each of the compute logic block pairs further including merge logic for merging a first resultant of the first compute logic block with a second resultant of the second compute logic block" as taught by Brown.
One of ordinary skill in the art would be motivated to combine Chou, Mori, Gupta, Guzy, Vasilopoulos, and Lee with Brown before the effective filing date of the claimed invention to improve a system by providing for "organizing matrix data in storage in a manner that lends itself to efficiently computing multiply is another area where a DBMS can be extended to improve the computation of matrix multiply results." [Brown, para. 0070].

Conclusion

STATUS OF CLAIMS IN THE APPLICATION

CLAIMS REJECTED IN THE APPLICATION

Per the instant Office Action, claims 1-20 have received a first action on the merits and are the subject of a first-action non-final rejection. Claims 6, 14, and 19 are rejected under 35 U.S.C. 112(b). Claims 1-4, 9-12, 17, and 18 are rejected under 35 U.S.C. 103.

Allowable Subject Matter

Claims 5-7, 13-15, 19, and 20 are objected to as being dependent upon a rejected base claim, but are considered as containing allowable subject matter. These claims would be allowable if rewritten or amended to overcome the rejection(s) under 35 U.S.C. 112(b) set forth in this Office Action and to include all of the limitations of the base claim and any intervening claims in independent form. Claims 8 and 16 depend from claims 7 and 15 and are objected to as containing allowable subject matter based upon their dependency.

The following is a statement of reasons for the indication of allowable subject matter: for dependent claims 5 and 13, the prior art of record neither anticipates nor renders obvious that the base precision of the weights of each storage block is four bits and the weights are stored across a storage block pair.
The following is a statement of reasons for the indication of allowable subject matter: for dependent claims 6, 14, and 19, the prior art of record neither anticipates nor renders obvious a CIM with a first bank of storage blocks and a second bank of storage blocks, and driving the input vector to the second bank of the first and second banks in response to the second bank storing a portion of the weights for the storage blocks.

The following is a statement of reasons for the indication of allowable subject matter: for dependent claims 7, 15, and 20, the prior art of record neither anticipates nor renders obvious weight update circuitry to store data to the storage cell array, where the compute logic, storage cell array, and weight update circuitry are configured to selectively receive power.

Claims 8 and 16 depend from claims 7 and 15 and would be allowable based upon their dependency.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Reisser et al., US Pub. No. 2021/0073650 A1, teaches ["performing vector-matrix multiplication of the input vector with the probabilistic binary weight matrix,"] [Abstract].

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDWARD WADDY JR, whose telephone number is (571) 272-5156. The examiner can normally be reached M-Th, 8am-5pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jared Rutz, can be reached at (571) 272-5535. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EW/ Examiner, Art Unit 2135
/JARED I RUTZ/ Supervisory Patent Examiner, Art Unit 2135
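For orientation, the blocked, bit-sliced VMM scheme at the heart of the rejected and allowable claims (each storage block holding weights at a base precision, wider weights split across a storage block pair, per-block partial results combined by merge logic) can be modeled in a few lines. This is an illustrative sketch only, not code from the application; the 4-bit base precision follows the allowable subject matter the examiner noted for claims 5 and 13:

```python
def vmm(x, W):
    """Vector-matrix multiply: y[j] = sum_i x[i] * W[i][j]."""
    return [sum(x[i] * W[i][j] for i in range(len(x))) for j in range(len(W[0]))]

x = [3, 1, 4, 1]                                              # input vector
W = [[0x5A, 0xC3], [0x1F, 0x08], [0x77, 0xE0], [0x02, 0x99]]  # 8-bit weights

# Split each 8-bit weight across a storage-block pair of 4-bit cells
# (high nibble in one storage block, low nibble in the other).
W_hi = [[w >> 4 for w in row] for row in W]
W_lo = [[w & 0xF for w in row] for row in W]

# Each compute logic block performs its portion of the VMM
# (modeling the per-block adder tree and accumulator).
y_hi = vmm(x, W_hi)
y_lo = vmm(x, W_lo)

# Merge logic: shift-and-add the paired partial resultants.
y = [(h << 4) + l for h, l in zip(y_hi, y_lo)]

assert y == vmm(x, W)  # matches a direct full-precision VMM, by linearity
```

The shift-and-add merge works because VMM is linear in the weights: W = 16·W_hi + W_lo implies x·W = 16·(x·W_hi) + x·W_lo, so splitting weights across a block pair costs nothing in exactness.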

Prosecution Timeline

Jan 22, 2025
Application Filed
Mar 25, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

- Patent 12596652, "MANAGING NAMESPACE MAPPING, TRUSTED COMPUTING GROUP RANGES, AND ENCRYPTIONS IN A MEMORY SUB-SYSTEM" (granted Apr 07, 2026; 2y 5m to grant)
- Patent 12585585, "MEMORY DEVICE CONTROL METHOD AND ASSOCIATED APPARATUS" (granted Mar 24, 2026; 2y 5m to grant)
- Patent 12579064, "NON-VOLATILE MEMORY CONTROLLER AND CONTROL METHOD, AND COMPUTER PROGRAM PRODUCTS" (granted Mar 17, 2026; 2y 5m to grant)
- Patent 12541454, "VARIABLE DISPATCH WALK FOR SUCCESSIVE CACHE ACCESSES" (granted Feb 03, 2026; 2y 5m to grant)
- Patent 12541462, "STORAGE DEVICE FOR MAINTAINING PREFETCH DATA, ELECTRONIC DEVICE INCLUDING THE SAME, AND OPERATING METHOD THEREOF" (granted Feb 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 99% (+23.1%)
Median Time to Grant: 2y 11m
PTA Risk: Low

Based on 337 resolved cases by this examiner. Grant probability derived from career allow rate.
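As a sanity check, the headline projections are internally consistent. The sketch below assumes the interview lift is a simple percentage-point difference in allow rate between interviewed and non-interviewed resolved cases; the tool's exact methodology is not stated:

```python
granted, resolved = 278, 337
career_rate = granted / resolved           # shown rounded down as "82%"

with_interview = 0.99                      # allow rate shown for interviewed cases
lift = 0.231                               # "+23.1%" interview lift
without_interview = with_interview - lift  # implied rate without an interview

print(f"career: {career_rate:.1%}, implied without interview: {without_interview:.1%}")
```

Under that reading, 278/337 ≈ 82.5% career-wide, and the 99% figure implies roughly a 76% allow rate for cases prosecuted without an interview, which is why the dashboard flags the interview as high-value here.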
