Prosecution Insights
Last updated: April 19, 2026
Application No. 19/033,296

TIME MULTIPLEXING AND WEIGHT DUPLICATION IN EFFICIENT IN-MEMORY COMPUTING

Non-Final OA (§102, §103)

Filed: Jan 21, 2025
Examiner: YEW, CHIE W
Art Unit: 2139
Tech Center: 2100 — Computer Architecture & Software
Assignee: OpenAI Opco LLC
OA Round: 1 (Non-Final)

Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 5m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 75% (210 granted / 281 resolved); +19.7% vs TC avg, above average
Interview Lift: +26.7% on resolved cases with interview (strong)
Typical Timeline: 2y 5m average prosecution; 18 applications currently pending
Career History: 299 total applications across all art units

Statute-Specific Performance

§101: 5.3% (-34.7% vs TC avg)
§103: 44.2% (+4.2% vs TC avg)
§102: 14.1% (-25.9% vs TC avg)
§112: 25.7% (-14.3% vs TC avg)

Deltas are measured against a Tech Center average estimate. Based on career data from 281 resolved cases.
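The headline allowance figures above are internally consistent; a minimal sketch reproducing them from the career totals (the figures come from this report, and reading the TC delta as percentage points is an assumption):

```python
# Sanity check of the examiner statistics quoted in this report.
# Inputs (210 granted, 281 resolved, +19.7% vs TC average) are taken from
# the report itself; the percentage-point interpretation is an assumption.
granted, resolved = 210, 281

allow_rate = granted / resolved      # career allowance rate
tc_avg = allow_rate - 0.197          # implied Tech Center average

print(f"Career allow rate:  {allow_rate:.1%}")   # 74.7%, displayed as 75%
print(f"Implied TC average: {tc_avg:.1%}")       # 55.0%
```

Note that the displayed 75% is the rounded value of 210/281 ≈ 74.7%.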

Office Action

§102 / §103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1 – 20 are pending.

Specification

The disclosure is objected to because of the following informalities. Appropriate correction is required.

¶[29] should be amended to “Mesh stop [[172]] 170 provides an interface between compute tile 150 and the fabric of a mesh network that includes compute tile 150. Thus, mesh stop [[172]] 170 may be used to communicate with remote DRAM [[190]] 192. Mesh stop [[172]] 170 may also be used to communicate with other compute tiles (not shown) with which compute tile 150 may be used”. Mesh stop is labeled 170 and DRAM is labeled 172 (see Fig. 1A).

¶[30] should be amended to “Compute engines 100 may also include local update (LU) module(s) (shown in FIG. [[1A]] 1B)”. Note that it is Fig. 1B (and not 1A) that has LU 140.

¶[45] should be amended to “Control unit [[240]] 208 is configured to provide control signals to CIM hardware module 230 and LU module 1549”. Control unit is labeled “208” in Fig. 2.

¶[61] should be amended to “The second set of blocks may include blocks [[614-4]] 614-3, 614-4, 614-7, and 614-8 for a third matrix and blocks 614-9, 614-10, 614-13, and 614-14 for a fourth matrix”. This is a typo.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(4) because i) reference character “172” has been used to designate both mesh stop (see spec ¶[29]) and remote memory (see spec Fig. 1A), and ii) reference character “240” has been used to designate both control unit (see spec ¶[45]) and LU module (see spec ¶[44], Fig. 2).

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference sign(s) mentioned in the description: the drawings fail to disclose remote DRAM labeled as “190”.
It is noted that in the event that the instant specification is amended as noted supra, these drawing objections would be withdrawn. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Objections

Claims 5, 8 – 10 and 15 – 20 are objected to because of the following informalities. Appropriate correction is required.

Claim 5 should be amended to “weight of the first block being replicated in weight of the second block”. Claim 1 already recites that weights are stored among plural blocks. Therefore, it does not make sense for the same weights to now be stored in one block.

Claims 8 and 15 should be amended to “a second output of the operations for the second block being output at a second time different from the first time”. This is so that the first and second outputs are output at different (and not the same) times (see spec Fig. 5-6 and corresponding paragraphs). There is no support for said first and second outputs being output at the same time.

Claims 9 and 16 should be amended to “the adders providing [[a]] the first output at the first time and the second output at the second time”. This maintains consistency with the previously recited first output being output at the same first time (see claim 8).
Claim 17 should be amended to “outputting, from the CIM hardware module, a second output of the operations for the second block being output at a second time different from the first time”. This is so that the first and second outputs are output at different (and not the same) times (see spec Fig. 5-6 and corresponding paragraphs). There is no support for said first and second outputs being output at the same time.

Claims dependent upon the above-identified claims are also objected to on the same grounds as said above-identified claims.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1 – 3 and 6 – 7 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Di Febbo (US 20230059200).

Regarding claim 1, Di Febbo teaches A hardware compute-in-memory (CIM) module (CIM module = Fig. 4 in-memory compute circuit 101), comprising: storage sites; and compute logic (compute logic = multiple transconductance + multiple ADCs) coupled with the storage sites for performing, in parallel, operations on data (data = weight) stored in the storage sites; (Di Febbo teaches memory cells and multiple transconductance (compute logic), and input values are fed (in parallel (parallel) to said memory cells) to be (performing) multiplied (operations) with weight (data) (stored in each of said memory cells) using respective transconductance in (coupled to) each of said memory cells (see Fig. 7, ¶[65-67]). Di Febbo also teaches said memory cells are in sets of rows (storage sites) that are part of memory circuit 120 in in-memory compute circuit 101 (see Fig. 4, ¶[23-24]). Di Febbo also teaches said multiplications can also occur concurrently (parallel) (see ¶[86]).)

wherein the CIM hardware module is configured to store weights in blocks of the storage sites, to utilize the blocks and portions of the compute logic corresponding to the blocks to selectively provide outputs of the operations for the weights stored in the blocks, and to read the outputs of the operations corresponding to the blocks at different times (Di Febbo teaches in-memory compute circuit 101 (CIM hardware) is configured to receive weights (w00 to w117) (weights) to be stored (store) in memory cells (blocks) for at least a portion of said sets of rows (storage sites) (see Fig. 4, ¶[37]). Di Febbo also teaches said memory cells, each respectively (portions corresponding to the blocks), has transconductance (compute logic) that multiplies (operations) input value with weight (stored in said respective memory cell) (see Fig. 7, ¶[65-67]) to generate a product (outputs) that is accumulated (read) (see ¶[43]) where there are plural input values (see Fig. 4, ¶[43]). Note that each memory cell provides a product that is multiplication of input and weight specific (selective) to each memory cell. Di Febbo further teaches that said input values originate from respective rows of memory ranges 265a-c (see Fig. 2, ¶[38]) where at different times, different input values are stored in said respective rows of said memory ranges 265a-c (see Fig. 5). As such, said memory cells, each respectively, will generate a product (outputs) that is accumulated (read) at different times (different times).)

Regarding claim 2, Di Febbo teaches the CIM hardware module of claim 1 where Di Febbo also teaches wherein to utilize the blocks and the portions of the compute logic the CIM hardware module is configured to route an input to a portion of the compute logic (compute logic = multiple transconductance) for a block of the blocks (Di Febbo teaches input value (input) being input to a memory cell (block) of memory cells (blocks) wherein said memory cell has respective transconductance (portion of the compute logic) that multiplies said input value with weight in said memory cell (see Fig. 7, ¶[66-67]) wherein said input data is routed (route) to said memory cell (see ¶[38]).)

Regarding claim 3, Di Febbo teaches the CIM hardware module of claim 2 where Di Febbo also teaches wherein to read the outputs the CIM hardware module reads an output corresponding to the block (Di Febbo also teaches memory cells where a memory cell (block) has (corresponding to) respective transconductance that multiplies input value with weight (stored in said respective memory cell) (see Fig. 7, ¶[65-67]) to generate a product (output) that is accumulated (read) (see ¶[43]).)

Regarding claim 6, Di Febbo teaches the CIM hardware module of claim 1 where Di Febbo also teaches wherein the hardware CIM is configured to store the weights in the blocks based on an optimization of throughput and utilization of the CIM hardware module (Di Febbo teaches storing weights (weights) in memory cells (blocks) (see ¶[37]). Di Febbo teaches that, to reduce time (throughput) for processing (utilization) data in the memory buffer circuit, said stored weight remains constant (see ¶[58]) wherein in-memory compute circuit 101 (CIM hardware module) retrieves said data from said memory buffer circuit (see ¶[33]).)

Regarding claim 7, Di Febbo teaches the CIM hardware module of claim 1 where Di Febbo also teaches wherein the operations comprise vector-matrix multiplication operations (Di Febbo teaches memory cells each including transconductance that multiplies (operations) input value with weight (see Fig. 7, ¶[66-67]) wherein said input values originate from respective rows (vector) of memory ranges 265a-c (see Fig. 2, ¶[38]), and said respective row originates from a matrix (matrix) (see Fig. 5).)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Di Febbo in view of New (US 20090161442).
Regarding claim 4, Di Febbo teaches the CIM hardware module of claim 3 where Di Febbo also teaches wherein the CIM hardware module further includes: a demultiplexer configured for routing the input to the portion of the compute logic for the block (Di Febbo teaches input value (input) being input to a memory cell (block) of memory cells (blocks) wherein said memory cell has respective transconductance (portion of the compute logic) that multiplies said input value with weight in said memory cell (see Fig. 7, ¶[66-67]) wherein said input data is routed (route) to said memory cell using routing circuit (demultiplexer) in in-memory compute circuit 101 (CIM hardware module) (see Fig. 2-4, ¶[38]).)

Di Febbo teaches a base CIM module that reads an output corresponding to a block (see claim 3). The claimed invention improves upon said base module by using a multiplexer to select said output. This improvement to said base module is an application of known technique from New – using multiplexers (multiplexer) to select a read value (output) from an associated memory cell (block) (see New Fig. 2, ¶[47]). One of ordinary skill in the art would recognize that this known technique of using a multiplexer to select which memory cell to read can also be applied to select Di Febbo’s output, and the result would have been predictable. In this instance, said block’s output is selected using the multiplexer. It would have been obvious to one of ordinary skill in the art at the time of filing to recognize that applying New’s known technique would have yielded i) the predictable result of said block’s output being selected using the multiplexer, and ii) the improved claimed invention (see MPEP 2143(I)(D)).

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Di Febbo in view of Kale (US 20240086696).

Regarding claim 5, Di Febbo teaches the CIM hardware module of claim 1 where Di Febbo also teaches wherein the blocks include a first block and a second block, [the weights of the first block being replicated in the weights of the second block] (Di Febbo teaches memory cells (blocks) comprising memory cell 727aa (first block) and memory cell 727ab (second block) (see Fig. 7).)

As noted in claim 5, Di Febbo teaches first and second blocks but does not appear to explicitly teach the weights of the first block being replicated in the weights of the second block. However, Kale teaches the weights of [the] first block being replicated in the weights of [the] second block (claim objection: this limitation should read “weight of the first block being replicated in weight of the second block”.) (Kale teaches layer 371’s memory cells (first block) storing (replicated) the same weight matrices as layer 373’s memory cells (second block) (see ¶[217]).)

In view of Kale, Di Febbo is modified such that said first and second blocks store the same weights. Di Febbo and Kale are analogous art to the claimed invention because they are in the same field of endeavor, storage management. It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which said subject matter pertains to modify Di Febbo in the manner described supra because replicating the weight matrix in multiple memory cells of different layers allows for error detection that reduces erroneous results of multiplication and accumulation (Kale, ¶[210]).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Di Febbo in view of Lee (US 20240177751) and Ting (US 9653148).
Regarding claim 8, Di Febbo teaches the CIM hardware module of claim 1 where Di Febbo also teaches wherein the blocks include a first block and a second block (Di Febbo teaches memory cells (blocks) comprising memory cell 727aa (first block) and memory cell 727ba (second block) (see Fig. 7).) a first output of the operations on the first block [being output at a first time], and a second output of the operations for the second block [being output at a second time] (Di Febbo teaches memory cells 727aa and memory cells 727ba, each of which multiplies (operations) input value with weight (see Fig. 7, ¶[67]) to generate a product (first and second outputs) (see ¶[43]).)

Di Febbo teaches a base CIM hardware module with compute logic and first and second blocks (see claim 8). The claimed invention improves upon said base module by having said first and second blocks share a portion of said compute logic. This improvement to said base module is an application of known technique from Lee – memory cells sharing an accumulator (see Lee Fig. 3 and corresponding paragraphs). One of ordinary skill in the art would recognize that this known technique of sharing an accumulator among memory cells can also be applied to the first and second blocks of Di Febbo, and the result would have been predictable. In this instance, said compute logic is modified to include an accumulator that is shared between said first and second blocks. It would have been obvious to one of ordinary skill in the art at the time of filing to recognize that applying Lee’s known technique would have yielded i) the predictable result of said compute logic being modified to include an accumulator (portion) that is shared between said first and second blocks, and ii) the improved claimed invention (see MPEP 2143(I)(D)).

Modified Di Febbo teaches a base CIM module that outputs i) a first output from the first block, and ii) a second output from the second block (see claim 8). The claimed invention improves upon said base module by outputting said first output at a first time and said second output at a second time. This improvement to said base module is an application of known technique from Ting – reading (being output) data D0-D3 (first output) from memory cell array 106_1 (first block) at time period 5-8 (first time), and reading (being output) data D4-D5 (second output) from memory cell array 106_2 (second block) at time period 9-10 (second time) (see Ting Fig. 2, col 1 ln 36-54). One of ordinary skill in the art would recognize that this known technique of outputting data at different time periods can also be applied to output the first and second outputs of modified Di Febbo, and the result would have been predictable. In this instance, said first and second outputs are output at different time periods. It would have been obvious to one of ordinary skill in the art at the time of filing to recognize that applying Ting’s known technique would have yielded i) the predictable result of said first and second outputs being output at different time periods, and ii) the improved claimed invention (see MPEP 2143(I)(D)).

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Di Febbo in view of Lee and Ting, and further in view of Chih (US 20220019407).

Regarding claim 9, Di Febbo in view of Lee and Ting teach the CIM hardware module of claim 8 where Di Febbo also teaches wherein the compute logic includes a first plurality of logic gates corresponding to the first block, a second plurality of logic gates corresponding to the second block (Di Febbo teaches memory cell 727aa (first block) coupled to ADC 285a (compute logic) and memory cell 727ba (second block) coupled to ADC 285b (compute logic) (see Fig. 7) wherein said ADCs are implemented using circuits (see ¶[87]) that are combinatorial logic circuitry such as flops, registers and latches (first plurality of logic gates and second plurality of logic gates) (see ¶[116]).)

Modified Di Febbo teaches a base CIM module that has ADCs (first and second plurality of logic gates) each coupled respectively to memory cells 727aa, 727ba (first and second blocks) (see claim 9) where said memory cell 727aa (first block) outputs a first output at a different time from the second output of said memory cell 727ba (second block) (see claim 8). The claimed invention improves upon said base module by having adders coupled to said ADCs (first and second plurality of logic gates). This improvement to said base module is an application of known technique from Lee – coupling an adder to an ADC that is coupled to memory cells (first and second blocks) wherein outputs (first and second outputs) (from said memory cells) are passed from said memory cells to said ADC to said adder to (providing) said accumulator (see Lee Fig. 3, ¶[83-85]). One of ordinary skill in the art would recognize that this known technique of coupling an adder to an ADC can also be applied to couple modified Di Febbo’s ADCs, and the result would have been predictable. In this instance, said ADCs are coupled to an adder wherein said first and second outputs (from said memory cells 727aa and 727ba) are passed from said memory cells 727aa and 727ba to said adder to said accumulator. Note that since said first and second outputs are output at different times, said first and second outputs would be passed from said adder at different times. It would have been obvious to one of ordinary skill in the art at the time of filing to recognize that applying Lee’s known technique would have yielded i) the predictable result of said ADCs (first and second plurality of logic gates) being coupled to an adder wherein said first and second outputs (output at different times (first and second times)) are passed from said memory cells 727aa and 727ba to said adder to (providing) said accumulator, and ii) the improved claimed invention (see MPEP 2143(I)(D)).

Modified Di Febbo teaches a base CIM module that has an adder connected to an accumulator (see claim 9). The claimed invention improves upon said base module by having plural adders. This improvement to said base module is an application of known technique from Chih – adder trees 122 (adders) connected to accumulator M 140 (see Chih Fig. 1A). One of ordinary skill in the art would recognize that this known technique of using plural adder trees to connect to an accumulator can also be applied to the adder of modified Di Febbo, and the result would have been predictable. In this instance, said adder is modified to be plural adder trees 122 that are connected to said accumulator. It would have been obvious to one of ordinary skill in the art at the time of filing to recognize that applying Chih’s known technique would have yielded i) the predictable result of said adder being modified to be plural adder trees 122 that are connected to said accumulator, and ii) the improved claimed invention (see MPEP 2143(I)(D)).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Di Febbo in view of Lee and Ting, and further in view of Edso (US 20170357570).

Regarding claim 10, Di Febbo in view of Lee and Ting teach the CIM hardware module of claim 8 where the first and second blocks output their respective outputs at different times but do not appear to explicitly teach turning off a block at said different times (see also the limitation below). wherein the second block is not powered on during the first time and the first block is not powered on during the second time However, Edso teaches wherein [the] second block is not powered on during [the] first time and [the] first block is not powered on during [the] second time (Edso teaches for cycles (first and second times) where rows/columns (of a bank (first block or second block) (see ¶[22])) are read/written, the unused bank (second block or first block) is turned off (not powered on) (see ¶[162]).)

In view of Edso, modified Di Febbo is further modified such that when said first block is outputting the first output at said first time, said second block is turned off, and when said second block is outputting said second output at said second time, said first block is turned off. Di Febbo, Lee, Ting and Edso are analogous art to the claimed invention because they are in the same field of endeavor, storage management. It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which said subject matter pertains to modify modified Di Febbo in the manner described supra because turning off the unused bank would save power (Edso, ¶[162]).

Claims 11 and 13 – 14 are rejected under 35 U.S.C. 103 as being unpatentable over Garde (US 20090177867) in view of Di Febbo.

Regarding claim 11, Garde teaches A compute tile (compute tile = Fig. 2 processor 20), comprising: a plurality of compute engines (plurality of compute engines = Fig. 2 compute engines 50-57), each of the plurality of compute engines including a memory module [hardware compute-in-memory (CIM) module, the CIM hardware memory module including a plurality of storage sites and compute logic coupled with the plurality of storage sites for performing, in parallel, operations on data stored in the storage sites]; and (Garde teaches each compute engine includes memory 78-80 (memory module) (see ¶[45]).) a general-purpose (GP) processor (GP processor = Fig. 2 control block 30) coupled with the plurality of compute engines and configured to provide control instructions and data to the plurality of compute engines; (Garde teaches instructions (control instructions) issued (provide) by control block 30 (GP processor) (with control logic (processor)) and corresponding data (data) flow through compute engines (compute engines) and are executed by said compute engines (see Fig. 2, ¶[41], [42]).
Note that said compute engines and said control block 30 are functionally coupled (coupled with) via flow of said instructions.) Garde teaches each compute engine includes a memory module but does not appear to explicitly teach said memory module is hardware CIM module performing the functions outlined in the limitations below. each of the plurality of compute engines including a hardware compute-in-memory (CIM) module, the CIM hardware module including a plurality of storage sites and compute logic coupled with the plurality of storage sites for performing, in parallel, operations on data stored in the storage sites; and wherein the CIM hardware module is configured to store weights in blocks of the storage sites, to utilize the blocks and portions of the compute logic corresponding to the blocks to selectively provide outputs of the operations for the weights stored in the blocks, and to read the outputs of the operations corresponding to the blocks at different times However, Di Febbo teaches [each of the plurality of compute engines including a hardware compute-in-memory (CIM) module, the] CIM hardware module including a plurality of storage sites and compute logic coupled with the plurality of storage sites for performing, in parallel, operations on data stored in the storage sites; and wherein the CIM hardware module is configured to store weights in blocks of the storage sites, to utilize the blocks and portions of the compute logic corresponding to the blocks to selectively provide outputs of the operations for the weights stored in the blocks, and to read the outputs of the operations corresponding to the blocks at different times (Note that claim 1 teaches a CIM hardware module as described here. As such, Di Febbo’s mapping in claim 1 also applies here.) In view of Di Febbo, Garde is modified such that said memory module (in each of said plurality of compute engines) is a CIM hardware module as described here. 
Garde and Di Febbo are analogous art to the claimed invention because they are in the same field of endeavor, storage management. It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which said subject matter pertains to modify Garde in the manner described supra because use of im-memory compute arrays optimize execution of MAC operations in CNN accelerators (Di Febbo, ¶[3],[20]). Regarding claim 13, Garde in view of Di Febbo teach the compute tile of claim 11 where Di Febbo also teaches wherein the hardware CIM is configured to store the weights in the blocks based on an optimization of throughput and utilization of the CIM hardware module (see Di Febbo mapping in claim 6 supra) Regarding claim 14, Garde in view of Di Febbo teach the compute tile of claim 11 where Di Febbo also teaches wherein the operations comprise vector-matrix multiplication operations (see Di Febbo mapping in claim 7 supra) Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Garde in view of Di Febbo, and further in view of New. Regarding claim 12, Garde in view of Di Febbo teaches the compute tile of claim 11 where Di Febbo also teaches wherein the CIM hardware module further includes: a demultiplexer configured for routing the input to the portion of the compute logic for the block (see Di Febbo mapping in claim 4 supra) Garde in view of Di Febbo teach a base compute tile that reads outputs of blocks (block) (see claim 11). The claimed invention improves upon said base compute tile by using a multiplexer to select output of said block. This improvement to said base compute tile is an application of known technique from New – using multiplexers (multiplexer) to select a read value (output) from an associated memory cell (block) (see New mapping in claim 4 supra). 
One of ordinary skill in the art would recognize that this known technique of using multiplexer to select which memory cell to read can also be applied to select modified Garde’s output, and the result would have been predictable. In this instance, said block’s output is selected using multiplexer. It would have been obvious to one of ordinary skill in the art at the time of filing to recognize that applying New’s known technique would have yielded i) predictable result of said block’s output being selected using multiplexer, and ii) the improved claimed invention (see MPEP 2143(I)(D)). Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Garde in view of Di Febbo, and further in view of Lee and Ting. Regarding claim 15, Di Febbo teaches the compute tile of claim 11 where Di Febbo also teaches wherein the blocks include a first block and a second block a first output of the operations on the first block [being output at a first time], and a second output of the operations for the second block [being output at a second time] (see Di Febbo mapping in claim 8 supra) Garde in view of Di Febbo teaches a base compute tile with compute logic and first and second blocks (see claim 15). The claimed invention improves upon said compute tile by having said first and second blocks share a portion of said compute logic. This improvement to said base compute tile is an application of known technique from Lee – memory cells sharing accumulator (see Lee mapping in claim 8 supra). One of ordinary skill in the art would recognize that this known technique of sharing accumulator among memory cells can also be applied to first and second blocks of modified Garde, and the result would have been predictable. In this instance, said compute logic is modified to include an accumulator that is shared between said first and second blocks. 
It would have been obvious to one of ordinary skill in the art at the time of filing to recognize that applying Lee’s known technique would have yielded i) predictable result of said compute logic being modified to include an accumulator (portion) that is shared between said first and second blocks, and ii) the improved claimed invention (see MPEP 2143(I)(D)). As noted in claim 8, Garde in view of Di Febbo and Lee teach first and second outputs from respective operations on first and second blocks but do not appear to explicitly teach outputting said first and second outputs at first and second time. Modified Garde teaches a base compute tile that outputs i) first output from first block, and ii) second output from second block (see claim 15). The claimed invention improves upon said base compute tile by outputting said first output at first time and said second output at second time. This improvement to said base compute tile is an application of known technique from Ting – reading (being output) data D0-D3 (first output) from memory cell array 106_1 (first block) at time period 5-8 (first time), and reading (being output) data D4-D5 (second output) from memory cell array 106_2 (second block) at time period 9-10 (second time) (see Ting mapping in claim 8). One of ordinary skill in the art would recognize that this known technique of outputting data at different time periods can also be applied to output first and second outputs of modified Garde, and the result would have been predictable. In this instance, said first and second outputs are outputted at different time periods. It would have been obvious to one of ordinary skill in the art at the time of filing to recognize that applying Ting’s known technique would have yielded i) predictable result of said first and second outputs are outputted at different time periods, and ii) the improved claimed invention (see MPEP 2143(I)(D)). Claim 16 is rejected under 35 U.S.C. 
103 as being unpatentable over Garde in view of Di Febbo, Lee and Ting, and further in view of Chih. Regarding claim 16, Garde in view of Di Febbo, Lee and Ting teach the compute tile of claim 15, where Di Febbo also teaches wherein the compute logic includes a first plurality of logic gates corresponding to the first block and a second plurality of logic gates corresponding to the second block (see Di Febbo mapping in claim 9). Modified Garde teaches a base compute tile that has ADCs (first and second plurality of logic gates) each coupled respectively to memory cells 727aa, 727ba (first and second blocks) (see claim 16), where said memory cell 727aa (first block) outputs a first output at a different time from the second output of said memory cell 727ba (second block) (see claim 15). The claimed invention improves upon said base compute tile by having adders coupled to said ADCs (first and second plurality of logic gates). This improvement to said base compute tile is an application of a known technique from Lee – coupling an adder to an ADC that is coupled to memory cells (first and second blocks), wherein outputs (first and second outputs) are passed from said memory cells to said ADC to said adder to (providing) said accumulator (see Lee mapping in claim 9). One of ordinary skill in the art would recognize that this known technique of coupling an adder to an ADC can also be applied to couple modified Garde’s ADCs, and the result would have been predictable. In this instance, said ADCs are coupled to an adder, wherein said first and second outputs (from said memory cells 727aa and 727ba) are passed from said memory cells 727aa and 727ba to said adder to said accumulator. Note that since said first and second outputs are outputted at different times, said first and second outputs would be passed from said adder at different times.
It would have been obvious to one of ordinary skill in the art at the time of filing to recognize that applying Lee’s known technique would have yielded i) the predictable result of said ADCs (first and second plurality of logic gates) being coupled to an adder, wherein said first and second outputs (outputted at different times (first and second times)) are passed from said memory cells 727aa and 727ba to said adder to (providing) said accumulator, and ii) the improved claimed invention (see MPEP 2143(I)(D)). Modified Garde teaches a base compute tile that has an adder connected to an accumulator (see claim 16). The claimed invention improves upon said base compute tile by having plural adders. This improvement to said base compute tile is an application of a known technique from Chih – adder trees 122 (adders) connected to accumulator M 140 (see Chih mapping in claim 9). One of ordinary skill in the art would recognize that this known technique of using plural adder trees connected to an accumulator can also be applied to the adder of modified Garde, and the result would have been predictable. In this instance, said adder is modified to be plural adder trees 122 that are connected to said accumulator. It would have been obvious to one of ordinary skill in the art at the time of filing to recognize that applying Chih’s known technique would have yielded i) the predictable result of said adder being modified to be plural adder trees 122 that are connected to said accumulator, and ii) the improved claimed invention (see MPEP 2143(I)(D)). Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Di Febbo in view of Ting. Regarding claim 17, Di Febbo teaches A method, comprising: performing in parallel a first plurality of operations on a first set of weights (first set of weights = Fig. 7 w00+w01) stored in a first block (first block = Fig. 7 memory cells 727aa+727ab) of a plurality of blocks (plurality of blocks = Fig.
7 memory cells 727aa-727bb) of storage sites of a hardware compute-in-memory (CIM) module (CIM hardware module = Fig. 4 in-memory compute circuit 101), the CIM hardware module including compute logic coupled with the storage sites, the compute logic configured to selectively perform in parallel, operations for the plurality of blocks and provide outputs for the operations, the operations including the first plurality of operations; outputting, from the CIM hardware module, a first output for the first plurality of operations corresponding to the first block [at a first time]; performing in parallel a second plurality of operations on a second set of weights (second set of weights = Fig. 7 w10+w11) stored in a second block (second block = Fig. 7 memory cells 727ba+727bb) of the plurality of blocks, the operations including the second plurality of operations; and outputting, from the CIM hardware module, a second output for the second plurality of operations corresponding to the second block [at a second time] (Di Febbo teaches memory cells 727aa-727bb (first and second blocks) each have (coupled with) a transconductance (compute logic) that multiplies (operations, first and second plurality of operations) a respective input value with a respective weight stored in the respective memory cell (see Fig. 7, ¶[66-67]). Di Febbo also teaches i) multiplications (first plurality of operations) in memory cells 727aa-727ab are accumulated to generate (provide) accumulated voltage 775a (first output) and ii) multiplications (second plurality of operations) in memory cells 727ba-727bb are accumulated to generate (provide) accumulated voltage 775b (second output).
Note the parallel (parallel) arrangement of i) memory cells 727aa and 727ab, in which each multiplies (first plurality of operations) a respective input value with a respective weight stored in said respective memory cell, and ii) memory cells 727ba and 727bb, in which each multiplies (second plurality of operations) a respective input value with a respective weight stored in said respective memory cell, which results in parallel multiplication. Also note that each memory cell performs a multiplication (operations, first and second plurality of operations) for an input and weight specific to (selectively) each memory cell (see Fig. 7, ¶[66-67]). Di Febbo also teaches memory cells 727aa-727bb (plurality of blocks) are in sets of rows (storage sites) that are part of memory circuit 120 within (including) in-memory compute circuit 101 (CIM hardware module) (see Fig. 4, ¶[23-24]).) Modified Di Febbo teaches a base method that outputs i) a first output from the first block and ii) a second output from the second block (see claim 17). The claimed invention improves upon said base method by outputting said first output at a first time, and said second output at a second time. This improvement to said base method is an application of a known technique from Ting – reading (being output) data D0-D3 (first output) from memory cell array 106_1 (first block) at time periods 5-8 (first time), and reading (being output) data D4-D5 (second output) from memory cell array 106_2 (second block) at time periods 9-10 (second time) (see Ting mapping in claim 8). One of ordinary skill in the art would recognize that this known technique of outputting data at different time periods can also be applied to output the first and second outputs of modified Di Febbo, and the result would have been predictable. In this instance, said first and second outputs are outputted at different time periods.
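The Ting-style time multiplexing mapped above can be sketched as a small scheduler that assigns each block's output to its own span of time periods, mirroring the cited reads of D0-D3 during periods 5-8 and D4-D5 during periods 9-10. The function name and the use of plain integers for time periods are illustrative assumptions, not Ting's circuitry.

```python
# Illustrative sketch (hypothetical names): each block's output occupies
# a distinct window of time periods, per the Ting-style mapping above.

def schedule_outputs(block_outputs, window_lengths, start=1):
    """Assign each block's output its own contiguous span of time periods."""
    schedule = []
    t = start
    for out, length in zip(block_outputs, window_lengths):
        schedule.append((out, range(t, t + length)))  # (output, its time window)
        t += length                                   # next block starts afterward
    return schedule


# First output over four periods starting at 5, second over the next two,
# echoing Ting's D0-D3 at periods 5-8 and D4-D5 at periods 9-10.
sched = schedule_outputs(["D0-D3", "D4-D5"], [4, 2], start=5)
```

Because the windows are disjoint by construction, the first and second outputs are necessarily produced at different times, which is the point the rejection draws from Ting.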
It would have been obvious to one of ordinary skill in the art at the time of filing to recognize that applying Ting’s known technique would have yielded i) the predictable result of said first and second outputs being outputted at different time periods, and ii) the improved claimed invention (see MPEP 2143(I)(D)). Claims 18 – 19 are rejected under 35 U.S.C. 103 as being unpatentable over Di Febbo in view of Ting, and further in view of Kale. Regarding claim 18, Di Febbo in view of Ting teach the method of claim 17, where the first set of weights is stored in the first block, and the second set of weights is stored in the second block, but do not appear to explicitly teach storing said first set of weights in said second block, and storing said second set of weights in said first block (see also the limitation below): storing, in the first block and the second block, the first set of weights and the second set of weights. However, Kale teaches storing, in [the] first block and [the] second block, the first set of weights and the second set of weights (Kale teaches configuring synapse memory cells 207-227 (first block) and synapse memory cells 206-226 (second block) to store the same set of weights (first and second set of weights) (see Fig. 5, ¶[204], [208]).) In view of Kale, modified Di Febbo is modified such that i) said first block (storing said first set of weights) would also store said second set of weights, and ii) said second block (storing said second set of weights) would also store said first set of weights. Di Febbo, Ting and Kale are analogous art to the claimed invention because they are in the same field of endeavor, storage management.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which said subject matter pertains to modify modified Di Febbo in the manner described supra because storing the same weights across multiple synapse memory cells would allow for detection of errors and correction of memory cells with corrupted weights (Kale, ¶[208-209]). Regarding claim 19, Di Febbo in view of Ting and Kale teach the method of claim 18, where Kale also teaches storing of the first set of weights and the second set of weights in the first block and the second block based on an optimization of throughput and utilization of the CIM hardware module (Kale teaches configuring synapse memory cells 207-227 (first block) and synapse memory cells 206-226 (second block) to store the same set of weights (first and second set of weights) in order to (based on) i) improve reliability (utilization) of computation results generated by the synapse memory cells and ii) select the same result generated (throughput) by most of the memory cell sets (see Fig. 5, ¶[204], [208]), wherein the memory cells are part of array 273 (see Fig. 5) in integrated circuit die B (CIM hardware module).) In view of Kale, modified Di Febbo is modified such that i) said first block (storing said first set of weights) would also store said second set of weights, and ii) said second block (storing said second set of weights) would also store said first set of weights, in order to i) improve reliability (utilization) of computation results generated by memory cells of said CIM hardware module, and ii) select the same result generated (throughput) by most of the memory cell sets of said CIM hardware module.
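Kale's rationale of duplicating the same weights across blocks so that a corrupted copy can be detected and outvoted can be illustrated with a majority vote over the duplicated blocks' results. This is a software sketch of the idea only; `majority_result` is a hypothetical name, and Kale's actual selection circuitry may differ.

```python
# Illustrative sketch (hypothetical names): duplicated weight copies let
# the result produced by most blocks win, masking a corrupted copy.
from collections import Counter


def majority_result(results):
    """Select the result generated by most of the duplicated memory cell sets."""
    value, _count = Counter(results).most_common(1)[0]
    return value


# Three blocks store the same weights; one copy has been corrupted,
# so its block computes 41 instead of 42. The majority outvotes it.
block_results = [42, 42, 41]
selected = majority_result(block_results)
```

The "throughput and utilization" framing in the rejection maps onto this shape loosely: redundant copies improve confidence in the selected result at the cost of extra storage.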
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which said subject matter pertains to modify modified Di Febbo in the manner described supra because storing the same weights across multiple synapse memory cells would allow for detection of errors and correction of memory cells with corrupted weights (Kale, ¶[208-209]). Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Di Febbo in view of Ting, and further in view of New. Regarding claim 20, Di Febbo in view of Ting teach the method of claim 17, where Di Febbo also teaches wherein the performing in parallel the first plurality of operations further includes: routing a first input to a first portion of the compute logic for the first block (Di Febbo teaches i) memory cell 727aa (first block) having a transconductance (first portion of compute logic) that multiplies (first plurality of operations) input value 718a (first input) that is routed (routing) to memory cell 727aa (see ¶[38]) and ii) memory cell 727ab (first block) having a transconductance (first portion of compute logic) that multiplies (first plurality of operations) input value 718b (first input) (see Fig. 7, ¶[67-68]) that is routed to memory cell 727ab (see ¶[38]).), and wherein the performing in parallel the second plurality of operations further includes routing a second input to a second portion of the compute logic for the second block (Di Febbo teaches i) memory cell 727ba (second block) having a transconductance (second portion of compute logic) that multiplies (second plurality of operations) input value 718a (second input) that is routed (routing) to memory cell 727ba (see ¶[38]) and ii) memory cell 727bb (second block) having a transconductance (second portion of compute logic) that multiplies (second plurality of operations) input value 718b (second input) (see Fig. 7, ¶[67-68]) that is routed to memory cell 727bb (see ¶[38]).)
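The routing mapped above, where the same input values fan out to each block's portion of the compute logic (the per-cell transconductance multipliers in Di Febbo's mapping) and are multiplied by each block's stored weights, can be sketched as follows. The function name and numeric values are illustrative assumptions, not from Di Febbo.

```python
# Illustrative sketch (hypothetical names): inputs fan out to every block;
# each block multiplies them against its own stored weight row in parallel.

def route_and_multiply(inputs, block_weights):
    """Route each input to each block's compute-logic portion and multiply."""
    return [
        [w * x for w, x in zip(weights, inputs)]  # per-cell products for one block
        for weights in block_weights              # each block has its own weight row
    ]


# Two shared inputs (cf. 718a/718b-style values) routed to a first block
# storing weights [1, 1] and a second block storing weights [4, 5].
products = route_and_multiply([2, 3], [[1, 1], [4, 5]])
# products[0] holds the first block's products, products[1] the second's.
```

Each inner list would then be accumulated within its block (cf. the accumulated voltages 775a and 775b in the claim 17 mapping) to yield the first and second outputs.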
Modified Di Febbo teaches a base method that outputs i) a first output from the first block and ii) a second output from the second block (see claim 17). The claimed invention improves upon said base method by using a multiplexer to select said first and second outputs. This improvement to said base method is an application of a known technique from New – using multiplexers (multiplexer) to select a read value (first/second output) from an associated memory cell (first/second block) (see New Fig. 2, ¶[47]). One of ordinary skill in the art would recognize that this known technique of using a multiplexer to select which memory cell to read can also be applied to select modified Di Febbo’s first and second outputs, and the result would have been predictable. In this instance, a multiplexer is used to select i) said first output from said first block, and ii) said second output from said second block. It would have been obvious to one of ordinary skill in the art at the time of filing to recognize that applying New’s known technique would have yielded i) the predictable result of a multiplexer being used to select said first output from said first block and said second output from said second block, and ii) the improved claimed invention (see MPEP 2143(I)(D)). Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHIE YEW whose telephone number is (571)270-5282. The examiner can normally be reached Monday - Thursday and alternate Fridays. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Reginald Bragdon, can be reached at (571) 272-4204.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /CHIE YEW/ Primary Examiner, Art Unit 2139

Prosecution Timeline

Jan 21, 2025
Application Filed
Mar 23, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602320
DYNAMIC PAGE MAPPING USING HEADER TO IDENTIFY COMPRESSED DATA
2y 5m to grant Granted Apr 14, 2026
Patent 12602330
Method and Apparatus, Storage Medium, and Computer Program Product of Using Routing Information in Page Table Entry
2y 5m to grant Granted Apr 14, 2026
Patent 12602321
ADDRESS TRANSLATION
2y 5m to grant Granted Apr 14, 2026
Patent 12602331
MEMORY SHARING METHOD AND APPARATUS
2y 5m to grant Granted Apr 14, 2026
Patent 12596645
METHOD AND DEVICE TO UPDATE ONE OR MORE L2P MAPPING TABLES BASED ON JOURNAL REPOSITORY
2y 5m to grant Granted Apr 07, 2026
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
75%
Grant Probability
99%
With Interview (+26.7%)
2y 5m
Median Time to Grant
Low
PTA Risk
Based on 281 resolved cases by this examiner. Grant probability derived from career allow rate.
