Prosecution Insights
Last updated: April 19, 2026
Application No. 18/911,459

OPTIMIZING DATA TRANSFERS BETWEEN A PARALLEL PROCESSING PROCESSOR AND A MEMORY

Non-Final OA (§102, §112)
Filed: Oct 10, 2024
Examiner: BATAILLE, PIERRE MICHEL
Art Unit: 2138
Tech Center: 2100 — Computer Architecture & Software
Assignee: COMMISSARIAT À L'ÉNERGIE ATOMIQUE ET AUX ÉNERGIES ALTERNATIVES
OA Round: 1 (Non-Final)
Grant Probability: 93% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 93% (above average): 1100 granted / 1186 resolved, +37.7% vs TC avg
Interview Lift: +6.2% (moderate), measured over resolved cases with an interview
Typical Timeline: 2y 7m avg prosecution; 26 applications currently pending
Career History: 1212 total applications across all art units
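The headline figure is simply grants divided by resolved cases; it can be reproduced from the counts shown above (the helper name here is ours):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

# Counts from the card above: 1100 granted out of 1186 resolved.
print(f"{allow_rate(1100, 1186):.1f}%")  # 92.7%, shown rounded as 93%
```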

Statute-Specific Performance

§101: 5.4% (-34.6% vs TC avg)
§102: 31.1% (-8.9% vs TC avg)
§103: 38.3% (-1.7% vs TC avg)
§112: 7.5% (-32.5% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 1186 resolved cases

Office Action

§102, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-11 are now pending in the application under prosecution and have been examined.

The specification has not been checked to the extent necessary to determine the presence of all possible minor errors. The specification should be amended to reflect the status of all related applications, whether patented or abandoned; applications noted by their serial number and/or attorney docket number should be updated with the correct serial number, and with the patent number if patented.

Drawing [Fig. 1] is objected to because Fig. 1 should be labeled "PRIOR ART", and because statements throughout the specification (pages 1 and 7) state "[Fig. 1] a schematic representation of a conventional computing system". The first instance of each acronym or abbreviation should be spelled out for clarity, whether or not it is considered well known in the art.

In the response to this Office action, the Examiner respectfully requests that support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the Examiner in prosecuting this application.

The Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well.
It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. 37 C.F.R. § 1.83(a) requires the drawings to illustrate or show all claimed features. Applicant must clearly point out the patentable novelty that they think the claims present, in view of the state of the art disclosed by the references cited or the objections made, and must also explain how the amendments avoid the references or objections. See 37 C.F.R. § 1.111(c).

Claim Objections

The claims are objected to because they include reference characters which are not enclosed within parentheses. Reference characters corresponding to elements recited in the detailed description of the drawings, and used in conjunction with the recitation of the same element or group of elements in the claims, should be enclosed within parentheses so as to avoid confusion with other numbers or characters which may appear in the claims. See MPEP § 608.01(m). Claim 1 recites "memory, referred to as 'memory A'" and "memory, referred to as 'memory B'". The clause "memory, referred to as 'memory A'" should be replaced with "memory (Mem A)"; similarly, "memory, referred to as 'memory B'" should be replaced with "memory (Mem B)".

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-11 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claim 1: the claim recites the limitation "the computing system is characterized in that". It is unclear whether "characterized in that" refers solely to the computing system rather than to the structure of the computing system. For examination purposes, the limitation is interpreted as "being an apparatus comprising ...". The claim also recites "referred to as 'memory A'" and "referred to as 'memory B'". It is unclear whether "referred to" serves solely to name the memory. For examination purposes, the limitation is interpreted as "being an apparatus comprising ...".

Regarding claim 2: the claim recites the limitation "the first data transfer makes it possible to transfer". It is unclear whether "makes it possible to transfer" recites an actual action. For examination purposes, the limitation "makes it possible to transfer" is interpreted as "transfers ... (data from the partition of the memory A associated with the first column to the memories B) ...".

Regarding claim 3: the claim recites the limitation "the memory access control module is adapted to configure the connection module". It is unclear whether "is adapted to configure" recites an actual configurative action. For examination purposes, the limitation "is adapted to configure" is interpreted as "configures ...".

Regarding claims 4 and 5: the claims recite (similarly to claim 3) "the memory access control module is adapted to configure the connection module". It is unclear whether "is adapted to configure" recites an actual configurative action; for examination purposes, the limitation "is adapted to configure" is interpreted as "configures ...". The claims also recite "data transfers involving a plurality of columns". It is unclear how "involving" relates "the data transfers" to "the plurality of columns".

Claims 8 and 11, repeating the features "being adapted" and "transfer involving" addressed with respect to claims 3, 4, and 5, are rejected based on the same interpretation.

Regarding claims 6 and 9-10: these claims are also rejected under 35 U.S.C. 112(b), or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for depending on an indefinite parent claim.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 20180005059 (CHANG et al.).

With respect to claim 1, CHANG teaches the invention: a computing system (image processor architecture comprising a data computation unit) [Fig. 3; Fig. 4; Fig. 13] comprising a memory "memory A" (RAM 407_1-407_R, Fig. 4, and 1307_1-1307_2, Fig. 13), a memory access control module (program controller 309 within the scalar processor providing load and store control, Fig. 3), and a parallel processing processor comprising a plurality of computing units (array of hardware execution lanes, Fig. 3, Fig. 4), each computing unit comprising one or more elementary processors (comprising a plurality of execution-logic ALUs) [Par. 0068-0070] and a memory "memory B" shared by said elementary processors (memory unit ("M") in each execution lane) [Par. 0072-0073]; the computing units of the parallel processing processor are arranged into a plurality of columns (the hardware execution lanes arranged in columns, Fig. 4, Fig. 13) and, in each column, the computing units are ordered from a first computing unit to a last computing unit, with zero, one, or more intermediate computing units between the first computing unit and the last computing unit, the first computing unit corresponding to the last computing unit when the column comprises only one computing unit (array of execution lanes positioned logically in rows of execution units) [Par. 0056-0059]; the computing system is characterized in that the memory A is partitioned so as to associate a partition of the memory A with each column of computing units (the memory being logically structured or partitioned into blocks (307, Fig. 3; 407, Fig. 4; 1307, Fig. 13), with each structured block being associated with an execution lane, and each execution lane residing along a row coupled to the same random access memory) [Fig. 3; Fig. 4; Fig. 13; Par. 0050; Par. 0059; Par. 0099-0100] and, for each column: the computing system comprises connection modules ordered from a first connection module connected to the partition of the memory A (307, Fig. 3; 407, Fig. 4; 1307, Fig. 13) to a last connection module connected to the memory B (shift register providing a local memory function, 306, Fig. 3; Fig. 4; Fig. 13) of the last computing unit, with zero, one, or more intermediate connection modules between the first connection module and the last connection module, each intermediate connection module being connected to the memory B of a computing unit (each memory block or partition being coupled to an execution lane array, i.e., a first subset of execution lanes is coupled to random access memory 407_1 and a second subset of execution lanes may be coupled to random access memory 407_2, where execution lanes that reside along the same row are coupled to the same random access memory and to a shift register providing a local memory function) [Fig. 4 and Fig. 13; Par. 0099-0100]; the first connection module has a dedicated interface link with the next connection module, each intermediate connection module has a dedicated interface link with the previous connection module on the one hand and with the next connection module on the other, and the last connection module has a dedicated interface link with the previous connection module (a first subset of execution lanes coupled to random access memory 407_1, a second subset of execution lanes coupled to random access memory 407_2, ..., where execution lanes that reside along the same row are coupled to the same random access memory) [Fig. 4; Fig. 13; Fig. 15; Par. 0099-0102]; the memory access control module is adapted to configure the connection modules to carry out a first data transfer, for a first column, between the partition of the memory A (307, Fig. 3; 407, Fig. 4; 1307, Fig. 13) associated with said first column and a memory B (shift register providing a local memory function, 306, Fig. 3; Fig. 4; Fig. 13) of at least one computing unit of said first column (group of shift generators including the program controller 309 within the scalar processor fetching/storing data from/to the associated random access memory and execution-lane memory register) [Par. 0072; Par. 0052-0053] and, simultaneously with the first transfer, to carry out at least one second data transfer, for a second column, between the partition of the memory A associated with said second column and a memory B of at least one computing unit of said second column, the first and second data transfers being carried out via dedicated interface links (61) connecting the connection modules (60) one with another (repetitively performing fetch/store operations, during the same cycle, in separate columns from/to the memory partition with the associated shift register, i.e., a transfer operation allocating execution-lane memory space to the associated shift register in a particular column in parallel with operations on a particular corresponding column of the execution lane array) [Fig. 16; Fig. 17; Par. 0105-0107; Par. 0088; Par. 0126; Par. 0006].

With respect to claim 2, CHANG teaches the computing system, wherein the first data transfer makes it possible to transfer data from the partition of the memory A associated with the first column to the memories B of a plurality of different computing units of the first column, passing at most once through the connection module of each computing unit of the first column (shift register structure permitting, during a single cycle, the contents of any of the registers associated with a column lane to be shifted "out" to one of its neighbor's register files through an output multiplexer, and having the contents of one of its registers replaced with content shifted "in" from a corresponding one of its neighbors through input multiplexers) [Par. 0069].
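As a reading aid only (this is neither the applicant's nor CHANG's implementation, and every name below is invented here), the claim-1 topology can be sketched in Python: one memory-A partition per column, each column's computing units carrying their own memory B, and a claim-2-style transfer walking the column's chain of connection modules at most once per unit:

```python
from dataclasses import dataclass, field

@dataclass
class ComputeUnit:
    """A computing unit with its shared local memory ("memory B")."""
    mem_b: dict = field(default_factory=dict)

@dataclass
class Column:
    """A column of computing units plus its dedicated memory-A partition.

    The claim's chain of connection modules (first module at the partition,
    last at the final unit's memory B) is implicit in the ordering of `units`.
    """
    mem_a_partition: dict
    units: list

def broadcast(column: Column, key: str) -> None:
    """Claim-2-style first data transfer: move one word from the column's
    memory-A partition into every unit's memory B, passing through each
    unit's connection module at most once (a single walk down the chain)."""
    value = column.mem_a_partition[key]
    for unit in column.units:
        unit.mem_b[key] = value

# Two columns; per claim 1, transfers for different columns may run
# simultaneously because each column has its own dedicated interface links.
cols = [
    Column({"x": 1}, [ComputeUnit(), ComputeUnit()]),
    Column({"x": 2}, [ComputeUnit()]),
]
for c in cols:  # sequential here; independent (hence parallel) in hardware
    broadcast(c, "x")
print([u.mem_b["x"] for c in cols for u in c.units])  # [1, 1, 2]
```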
With respect to claim 3, CHANG teaches the computing system, wherein the memory access control module configures the connection modules to carry out a data transfer, for at least one column, from a memory B of a computing unit of said column to a memory B of at least one other computing unit of said column (rows/columns of data transferred into the two-dimensional shift register structure or the respective random access memories, with rows/columns of the execution lane array each executing a data transfer to form rows/columns in the two-dimensional shift structure) [Par. 0050-0051].

With respect to claim 4, CHANG teaches the computing system, wherein the memory access control module configures the connection modules to carry out simultaneous data transfers for a plurality of columns with, for each column involved, a data transfer from a region of the partition located at a local source address identical for all the columns involved to a region of a memory B of at least one computing unit, said region being located at a local destination address identical for all the columns (Fig. 5b shows at least two execution lanes executing a data transfer simultaneously, causing data located at the locations respective to the execution lanes to be loaded into a different shift register) [Par. 0063-0064].

With respect to claim 5, CHANG teaches the computing system, wherein the memory access control module configures the connection modules to carry out simultaneous data transfers for a plurality of columns with, for each column, a data transfer from a region of a memory B of a computing unit, said region being located at a local source address identical for all the columns, to a region of the partition located at a local destination address identical for all the columns (Figs. 5a-5k show the shift register hardware executing shifting operations from a designated address to a designated address location corresponding to the execution lane) [Par. 0063-0067].
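Claims 4-5 describe a SIMD-style transfer: every column moves data between the same local source offset and the same local destination offset within its own partition and memory B. A toy illustration under that reading (all names here are ours):

```python
def simultaneous_transfer(partitions, mem_bs, src_off, dst_off):
    """Claims 4-5 style transfer: every column copies from the same local
    source offset in its own memory-A partition to the same local
    destination offset in its own memory B (one step per column, which the
    hardware would perform simultaneously across columns)."""
    for part, mem_b in zip(partitions, mem_bs):
        mem_b[dst_off] = part[src_off]

# Three columns, each with its own partition and a two-word memory B.
partitions = [[10, 11], [20, 21], [30, 31]]
mem_bs = [[0, 0], [0, 0], [0, 0]]
simultaneous_transfer(partitions, mem_bs, src_off=1, dst_off=0)
print(mem_bs)  # [[11, 0], [21, 0], [31, 0]]
```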
With respect to claim 6, CHANG teaches the computing system, wherein each intermediate connection module comprises an upper routing block and a lower routing block. The upper routing block can be configured in the following modes: "Read", for reading data in the memory B to which the intermediate connection module is connected and transmitting the data read to the previous connection module; "Write", for receiving data from the previous connection module and writing the data received in the memory B to which the intermediate connection module is connected; or "Default", for receiving data from the previous connection module and transferring the data received to the lower routing block of the intermediate connection module. The lower routing block can be configured in the following modes: "Read", for reading data in the memory B to which the intermediate connection module is connected and transmitting the data read to the next connection module; "Write", for receiving data from the next connection module and writing the data received in the memory B to which the intermediate connection module is connected; or "Default", for receiving data from the next connection module and transferring the data received to the upper routing block of the intermediate connection module (neighboring execution lanes with horizontal shift connections and registers with vertical shift connections; registers on either side (right, bottom) have both horizontal and vertical connections, shifting to the right off the right edge or shifting vertically; the memory unit (M) in each execution lane is used to load/store data from/to a neighbor lane; with the execution lane's row and/or column within the execution lane array shifting content out from its register file to each of its left, right, top, and bottom neighbors in a shift sequence, the execution lane will also shift content into its register file from a particular one of its left, right, top, and bottom neighbors) [Par. 0069-0072; Par. 0058-0059].

With respect to claim 7, CHANG teaches the computing system, wherein: the first connection module comprises a lower routing block that can be configured in the following modes: "Read", for reading data in the memory A and transmitting the data read to the next connection module; "Write", for receiving data from the next connection module and writing the data received in the memory A; and the last connection module comprises an upper routing block that can be configured in the following modes: "Read", for reading data in the memory B of the last connection module and transmitting the data read to the previous connection module; "Write", for receiving data from the previous connection module and writing the data received in the memory B of the last connection module (same mapping as applied to claim 6: neighboring execution lanes with horizontal and vertical shift connections, and the memory unit (M) in each execution lane used to load/store data from/to a neighbor lane) [Par. 0069-0072; Par. 0058-0059].

With respect to claim 8, CHANG teaches the computing system, wherein the memory access control module comprises at least two control modules, each control module configuring identically all the connection modules having the same order rank in the various columns (execution lane array with program controller and corresponding two-dimensional shift register structure providing fetch/store of data from/to the associated random access memory and execution-lane memory register) [Par. 0072; Par. 0052-0053].

With respect to claim 9, CHANG teaches the computing system, wherein the connection modules are all implemented identically, and the control modules are all implemented identically (execution lane array with program controller and corresponding two-dimensional shift register structure providing fetch/store of data from/to the associated random access memory and execution-lane memory register) [Par. 0072; Par. 0052-0053].

With respect to claim 10, CHANG teaches the computing system comprising a host processor and an interconnection bus, the partitions of the memory A being defined with a contiguous address mapping, the memory A comprising program code instructions to configure the host processor, the program code instructions being stored in a region of the memory A distinct from the partitions (execution lane array with program controller and corresponding two-dimensional shift register structure providing fetch/store of data from/to the associated random access memory and execution-lane memory register; the memory being logically structured or partitioned into blocks, with each structured block being associated with an execution lane) [Fig. 3; Fig. 4; Fig. 13; Par. 0050; Par. 0056-0059; Par. 0099-0100].
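Claims 6-7 describe each connection module as one or two routing blocks, each set to "Read", "Write", or "Default". A minimal toy model (our own naming, not from the application or CHANG) of how a downward transfer terminates at the one module set to "Write" while "Default" modules pass the data through:

```python
from enum import Enum

class Mode(Enum):
    READ = "read"        # read local memory B and emit toward a neighbor
    WRITE = "write"      # accept incoming data and write it to local memory B
    DEFAULT = "default"  # pass incoming data through to the next module

def run_chain(modes: list, value: int) -> dict:
    """Push `value` from the memory-A end down the chain of connection
    modules: DEFAULT modules forward it, and the first WRITE module stores
    it. Returns {module index: stored value}."""
    stored = {}
    for i, mode in enumerate(modes):
        if mode is Mode.WRITE:
            stored[i] = value
            break  # the transfer terminates at the writing module
        # DEFAULT: keep forwarding toward the next module
    return stored

# Memory A feeds module 0; modules 0-1 pass through, module 2 writes.
print(run_chain([Mode.DEFAULT, Mode.DEFAULT, Mode.WRITE], 42))  # {2: 42}
```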
With respect to claim 11, CHANG teaches the computing system comprising a host processor and an interconnection bus, the host processor being adapted to configure the memory access control module to exchange data with the memory A, or with a memory B of at least one computing unit of the parallel processing processor, by passing through the interconnection bus without passing through a dedicated interface link connecting two neighboring connection modules, each connection module comprising an arbitration module for managing an access priority to the memory to which the connection module is connected, between a "dedicated" transfer involving a neighboring connection module and a "bus" transfer involving the interconnection bus (neighboring execution lanes within the execution lane array, each execution lane including a register that can accept data from the shift register, accept data from an ALU output, or write output data into a neighbor's execution register, i.e., execution lanes shifting data within the shift register array vertically or horizontally one unit to the left, which causes the value to the right of each execution lane's respective position to be shifted into each execution lane's position) [Fig. 4; Par. 0060-0064].

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

D. Kim et al., "An Overview of Processing-in-Memory Circuits for Artificial Intelligence and Machine Learning," IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 12, no. 2, pp. 338-353, June 2022.

P. R. Sutradhar, S. Bavikadi, S. M. P. Dinakarrao, M. A. Indovina and A. Ganguly, "3DL-PIM: A Look-Up Table Oriented Programmable Processing in Memory Architecture Based on the 3-D Stacked Memory for Data-Intensive Applications," IEEE Transactions on Emerging Topics in Computing, vol. 12, no. 1, pp. 60-72, Jan.-March 2024.
US 20120216017 A1 (INADA), teaching computational-unit-area selecting units, each provided in individual multiple cores, that sequentially select uncomputed computational unit areas in a computational area; computing units, each provided in the individual multiple cores, perform computation for the selected computational unit areas and write computational results to a memory device accessible from each of the multiple cores; a computational-result transmitting unit of the core performs computational-result acquisition and transmission processing in a different time period for each of multiple computational-result transmission areas.

US 20180157966 A1 (HENRY et al.), teaching a neural network unit that convolves an H×W×C input with F R×S×C filters to generate F Q×P outputs; N processing units (PUs) each have a register receiving a memory word and a multiplexed register selectively receiving a memory word or a word rotated from an adjacent PU's multiplexed register, the N PUs being logically partitioned as G blocks each of B PUs; the PUs convolve in column-channel-row order; for each filter column, the N registers read a memory row, each PU multiplies the register and the multiplexed register to generate a product to accumulate, and the multiplexed registers are rotated by one; the multiplexed registers are rotated to align the input blocks with the adjacent PU block for each channel; for each filter row, the N multiplexed registers read a memory row for the multiply-accumulations; F column-channel-row sums are generated and written to the memory, and all steps are then performed for each output row.

US 20220129463 A1 (ARNOLD et al.), teaching a system having a set of computing devices (18-1-18-n) comprising a computing-device controller hub and a set of parallelized nodes coupled to the controller hub; each node of the set of parallelized nodes comprises a central processing module, a main memory, and a disk memory; the set of computing devices collectively executes query requests against a database table stored by the set of computing devices, based on each node of each computing device performing corresponding operations independently from the other nodes.

WO 2024064538 A1 (TAN et al.), teaching an integrated circuit that combines transpose and compute operations, including a transpose circuit coupled to a set of compute channels, each compute channel including multiple arithmetic logic unit (ALU) circuits coupled in series; the transpose circuit is operable to receive an input tensor, transpose it, and output the transposed tensor to the set of compute channels, which generate outputs in parallel, each output being generated from a corresponding vector of the transposed tensor.

WO 2020264197 A1 (LI et al.), teaching an accelerator processing-engine array to perform complex computations such as matrix-multiply computations, the accelerator including a memory subsystem that stores data, neural network weights, and data to be processed by the processing-engine array.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PIERRE MICHEL BATAILLE, whose telephone number is (571) 272-4178. The examiner can normally be reached Monday-Thursday, 7-6 ET. Examiner interviews are available via telephone, in person, and by video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kenneth Lo, can be reached at (571) 272-9774.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PIERRE MICHEL BATAILLE/
Primary Examiner, Art Unit 2136

Prosecution Timeline

Oct 10, 2024
Application Filed
Dec 12, 2025
Non-Final Rejection — §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602175
Charge Domain Compute-in-DRAM for Binary Neural Network
2y 5m to grant • Granted Apr 14, 2026
Patent 12596655
SYSTEMS AND METHODS FOR TRANSFORMING LARGE DATA INTO A SMALLER REPRESENTATION AND FOR RE-TRANSFORMING THE SMALLER REPRESENTATION BACK TO THE ORIGINAL LARGE DATA
2y 5m to grant • Granted Apr 07, 2026
Patent 12596649
MEMORY ACCESS DEVICE AND OPERATING METHOD THEREOF
2y 5m to grant • Granted Apr 07, 2026
Patent 12591523
PRIORITY-BASED CACHE EVICTION POLICY GOVERNED BY LATENCY CRITICAL CENTRAL PROCESSING UNIT (CPU) CORES
2y 5m to grant • Granted Mar 31, 2026
Patent 12579082
Automated Participation of Solid State Drives in Activities Involving Proof of Space
2y 5m to grant • Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 93%
With Interview: 99% (+6.2%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 1186 resolved cases by this examiner. Grant probability derived from career allow rate.
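The "with interview" projection appears to be the career allow rate plus the observed interview lift, capped at 100%; a one-line reproduction (the helper name is ours, and the additive model is an assumption inferred from the figures shown):

```python
def with_interview(base_pct: float, lift_pct: float) -> float:
    """Grant probability after adding the interview lift, capped at 100%."""
    return min(base_pct + lift_pct, 100.0)

print(round(with_interview(93.0, 6.2)))  # 99
```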
