Prosecution Insights
Last updated: April 19, 2026
Application No. 18/124,115

AUTONOMOUS COMPUTE ELEMENT OPERATION USING BUFFERS

Final Rejection §102
Filed: Mar 21, 2023
Examiner: HUISMAN, DAVID J
Art Unit: 2183
Tech Center: 2100 — Computer Architecture & Software
Assignee: Ascenium, Inc.
OA Round: 2 (Final)

Grant Probability: 58% (Moderate)
Expected OA Rounds: 3-4
To Grant: 4y 8m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 58% of resolved cases (389 granted / 670 resolved; +3.1% vs TC avg)
Interview Lift: strong, +33.8% (resolved cases with interview)
Avg Prosecution: 4y 8m typical timeline; 88 currently pending
Total Applications: 758 career history, across all art units

Statute-Specific Performance

§101: 6.1% (-33.9% vs TC avg)
§103: 33.6% (-6.4% vs TC avg)
§102: 21.5% (-18.5% vs TC avg)
§112: 31.7% (-8.3% vs TC avg)
Deltas are relative to a Tech Center average estimate • Based on career data from 670 resolved cases
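As a consistency check on the table above, each statute's rate minus its delta should recover the Tech Center baseline (assuming the convention that delta = examiner rate minus TC average; the variable names below are illustrative, not from the dashboard):

```python
# statute: (examiner allowance rate %, delta vs Tech Center average %)
stats = {
    "101": (6.1, -33.9),
    "103": (33.6, -6.4),
    "102": (21.5, -18.5),
    "112": (31.7, -8.3),
}

# recover the implied TC average from each row: rate - delta
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied_tc_avg)  # every statute implies the same 40.0% baseline
```

All four rows back out the same 40.0% estimate, which suggests the deltas were computed against a single Tech-Center-wide figure rather than per-statute baselines.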

Office Action

§102
DETAILED ACTION

Claims 1-17, 19-20, and 22-23 have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant's cooperation is requested in correcting any errors of which applicant may become aware in the specification. This is a reminder to insert patent numbers, where they exist, for any application listed in paragraphs 1-4 of the specification.

Terminal Disclaimer

The terminal disclaimer filed on September 8, 2025, disclaiming the terminal portion of any patent granted on this application which would extend beyond the expiration date of a patent resulting from application 17/963,226, has been reviewed and is accepted. The terminal disclaimer has been recorded.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-17, 19-20, and 22-23 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chen et al., U.S. Patent Application Publication No. 2017/0123795 A1.
Referring to claim 1, Chen has taught a processor-implemented method for task processing, the method comprising: accessing a two-dimensional (2D) array of compute elements (FIG.1, array 110, with n x n compute elements (PPEs)), wherein each compute element within the array of compute elements is known to a compiler (see at least paragraphs 59, 62, and 64, where a compiler maps a program to the PPEs and is thus aware of the PPEs) and is coupled to its neighboring compute elements within the array of compute elements (see FIG.3); providing control for the array of compute elements, wherein the control is enabled by a stream of wide control words generated by the compiler (see paragraphs 59, 62, and 64-66; the compiler distributes instructions (control words) to PPEs (e.g. as seen in FIG.14C), and the instructions are of the forms shown in FIG.15), wherein the wide control words comprise variable length control words (see FIG.15 and note that the individual control words of the overall wide control words have different lengths (e.g. opcode is 8 bits, input_0 is 5 bits, output is 9 bits, etc.)); loading an autonomous operation buffer with at least two operations contained in one or more control words, wherein the autonomous operation buffer is integrated in a compute element (see FIG.2, which shows a compute element (PPE) that contains an instruction buffer (autonomous operation buffer) that stores multiple instructions (multiple control words) (see paragraphs 50-51)); setting a compute element operation counter, coupled to the autonomous operation buffer, wherein the compute element operation counter is integrated in the compute element (see paragraph 52; each PPE has a program counter (PC) to sequence through the instructions in the buffer); and executing the at least two operations, using the autonomous operation buffer and the compute element operation counter, wherein the operations complete autonomously from direct compiler control (see FIG.2 and paragraph 50; the PC will select the next operation from the buffer and send it for execution by execution unit(s) 62. At this point (during runtime), the compiler is not involved and, thus, the execution/completion of operations occurs independently of direct compiler control).

Referring to claim 2, Chen has taught the method of claim 1 further comprising grouping a subset of compute elements within the array of compute elements (FIG.3 shows four subsets, one in each quadrant, separated from another given quadrant by components 150/155. Alternatively, see FIG.14C, where the PPEs that are assigned instructions form a subset that is different from an idle subset).

Referring to claim 3, Chen has taught the method of claim 2 wherein the subset comprises compute elements that are adjacent to at least two other compute elements within the array of compute elements (see FIGs.3 and 14C. For instance, in FIG.3, PPE 1:1 is adjacent to PPEs 2:1 and 1:2. Each PPE is adjacent to at least two other PPEs).

Referring to claim 4, Chen has taught the method of claim 3 further comprising loading additional autonomous operation buffers with additional operations contained in the one or more control words (each PPE has its own buffer that buffers its own set of control words among all the control words).

Referring to claim 5, Chen has taught the method of claim 4 further comprising setting additional compute element operation counters, each coupled to an autonomous operation buffer of the additional operation buffers (each PPE will have its own program counter so as to sequence through its respective buffer).

Referring to claim 6, Chen has taught the method of claim 5 further comprising executing the additional operations cooperatively among the subset of compute elements (see FIG.14C, which shows cooperative execution of the loop of FIGs.14A-B. The subset of elements pass data to one another (as shown by the arrows) to carry out the operation of the loop).
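The mechanism the examiner maps onto Chen's per-PPE instruction buffer and program counter can be sketched in a few lines. This is a hypothetical illustration of the claimed "autonomous operation buffer" plus "compute element operation counter" as characterized in the rejection, not code from Chen or from the application; all names are invented:

```python
class ComputeElement:
    """Toy model of one compute element with an autonomous operation buffer.

    The compiler-side step loads operations into the buffer up front; at
    runtime the operation counter sequences through them with no further
    compiler involvement (cf. the claim 1 and claim 12 mappings above).
    """

    def __init__(self):
        self.buffer = []   # autonomous operation buffer
        self.counter = 0   # compute element operation counter

    def load(self, operations):
        # compiler-side: load at least two operations taken from control words
        self.buffer = list(operations)
        self.counter = 0   # set the counter coupled to the buffer

    def step(self, state):
        # runtime: the counter selects the next buffered operation and cycles
        # through the buffer, independent of direct compiler control
        op = self.buffer[self.counter]
        self.counter = (self.counter + 1) % len(self.buffer)
        return op(state)


ce = ComputeElement()
ce.load([lambda x: x + 1, lambda x: x * 2])  # two operations, e.g. FIG.14A's y = x * 2
state = 3
for _ in range(2):
    state = ce.step(state)
print(state)  # (3 + 1) * 2 = 8
```

The point of contention in the rejection is exactly this split: `load` is the only compiler-controlled step, while `step` runs autonomously off the counter, which cycles back to the start of the buffer as the claim 12 mapping describes.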
Referring to claim 7, Chen has taught the method of claim 6 wherein the additional operations complete autonomously from direct compiler control (for similar reasoning as above: during runtime, the compiler is not operating, and thus the execution/completion of operations is not dependent on direct compiler control).

Referring to claim 8, Chen has taught the method of claim 1 wherein the control words include control word bunches (see FIG.15. Each control word (instruction) includes bunches of smaller control words (opcode, input, etc.)).

Referring to claim 9, Chen has taught the method of claim 8 wherein the control word bunches provide operational control of a particular compute element (the instruction fields of FIG.15 control the compute element receiving the fields. For instance, the opcode indicates the type of operation the compute element is to perform. Also, see paragraphs 65-66).

Referring to claim 10, Chen has taught the method of claim 9 wherein the operational control specifies arithmetic logic unit (ALU) connections (see paragraphs 65-66. In a given PPE, the ALU may receive a result from another PPE's ALU. For instance, this would happen in FIG.14C, where PPE 102 sends result x down and to the right to obtain result y by performing y = x * 2 (as shown in FIG.14A)).

Referring to claim 11, Chen has taught the method of claim 9 wherein the operational control specifies compute element memory addresses and/or control (the operational control, by name, specifies control).

Referring to claim 12, Chen has taught the method of claim 1 wherein the compute element operation counter tracks cycling through the autonomous operation buffer (this is the purpose of the PC: to cycle through instructions in an instruction buffer).

Referring to claim 13, Chen has taught the method of claim 12 further comprising generating a task completion signal (this is overly broad and could encompass any of a number of signals in Chen. For instance, a result generated by an ALU is a signal that is indicative of completion of a task (since completing a task generates a result). Alternatively, any of the signals shown in FIG.3A may also be a task completion signal).

Referring to claim 14, Chen has taught the method of claim 13 wherein the task completion signal is based on a value in the compute element operation counter (if the program counter points to an instruction in the buffer that is to be executed by an ALU, then the result of the ALU (task completion signal) is based on the program counter pointing to the ALU instruction).

Referring to claim 15, Chen has taught the method of claim 13 wherein the task completion signal is based on a decision calculation within a compute element (when the ALU performs a decision calculation, the task completion signal (result) of that calculation is based on that calculation. For instance, from FIGs.14A-C, a PPE decides to calculate the product of x and 2 to output result/signal y).

Referring to claim 16, Chen has taught the method of claim 1 wherein a control word in the stream of control words includes a data dependent branch operation (as described above, the control words may implement a loop such as that in FIGs.14A-14C, which is a data dependent branch operation, because a loop requires a backward-taken branch to occur (e.g. paragraph 120). Also, there are data dependencies shown in FIG.14C, where data from one PPE must flow to another dependent PPE to carry out a next operation).

Referring to claim 17, Chen has taught the method of claim 16 wherein the compiler calculates a latency for the data dependent branch operation (again, the compiler schedules the operations, so it must know the latency of operations. For instance, see paragraph 66, which states that a result can come from a previous cycle. Thus, the compiler knows that an operation depending on this result must have at least a 1-cycle latency so that the result can be passed to the relevant PPE. Also, see FIG.14C, which shows that any dependent instructions have to be scheduled based on a latency of a previous instruction, e.g. because PPE 102 will have some latency, the PPE generating result y will have to wait for that latency to expire before executing. That is why the operation generating y cannot execute earlier).

Referring to claim 19, Chen has taught the method of claim 1 wherein the autonomous operation buffer contains sixteen operational entries (the instructions are 64 bits wide (e.g. paragraphs 62 and 65). Thus, to store one 64-bit instruction (paragraph 50 states that the buffer holds multiple instructions), there must be 64 1-bit entries. Each 1-bit entry is an operational entry. Thus, there are 16 operational entries (individual bit storage locations in the buffer) and many more beyond that).

Referring to claim 20, Chen has taught the method of claim 19 wherein the operational entries comprise compute element operations (FIG.15: the instructions in their entireties are stored in the entries. Alternatively, just the opcode may indicate the operations), compute element data paths (from FIG.14C, the entries storing instructions store values that dictate flow of data on a particular path to another PPE), compute element ALU control (FIG.15, opcode), and compute element memory control (FIG.15, when the entries contain load/store (LSU) instructions, the various fields control memory).

Claim 22 is rejected for similar reasoning as claim 1. Furthermore, Chen has taught a computer program product embodied in a non-transitory computer readable medium for task processing, the computer program product comprising code which causes one or more processors to perform the claimed operations (see paragraphs 233-240).

Claim 23 is rejected for similar reasoning as claim 1.
Furthermore, Chen has taught a computer system for task processing comprising: a memory which stores instructions (again, see the storage medium of paragraphs 233-240; alternatively, the instruction cache of paragraph 51 or the memory 16 of FIG.18 may be such a memory); and one or more processors coupled to the memory (see paragraphs 233-240, or the processors of the array 110, or processor 12 of FIG.16, which may operate in conjunction with the array (paragraphs 225-238)), wherein the one or more processors, when executing the instructions which are stored, are configured to perform the claimed operations.

Response to Arguments

Applicant's argument regarding the §112 rejection on page 9 of applicant's response is persuasive. The examiner has withdrawn the rejection.

On page 10 of applicant's response, applicant argues that the instruction formats in FIG.15 of Chen are each 64 bits long in total, even if they include different opcode lengths. This is not persuasive. While the examiner agrees that FIG.15 only shows 64-bit formats, these formats correspond to the claim's wide control words. However, the variable length control words comprised by the wide control words are something different. The variable length control words correspond to the individual fields within the 64-bit formats, as each of these individual fields contributes to some portion of the overall control. These individual fields, from FIG.15, are clearly variable in length.

On pages 10-11 of applicant's response, applicant argues how the invention is different from Chen by pointing to paragraph 27 of the specification and explaining how the control word is not an ISA instruction. This is not persuasive. Applicant is arguing limitations not in the claim. The claim is broad enough to encompass 64-bit instructions (control words) that include variable-length control words.
Conclusion

The following prior art previously made of record and not relied upon is considered pertinent to applicant's disclosure:

Cambonie, WO 2007/071795 A1, has taught a 2D array in which each node includes a cyclic program counter 106 that cycles through instructions in memory 104. Clusters are formed, and a compiler assigns tasks to the clusters. At least this document is deemed particularly relevant (at least to applicant's independent claims), and applicant is encouraged to review it and ensure any amendments also distinguish from this document.

Lee et al., US 2015/0149747 A1, has taught an array that can be used in VLIW mode and CGRA mode. The array's schedule is determined at compile time and mapped to various nodes of the array.

Vorbach et al., U.S. Patent No. 7,996,827, has taught translating programs for mapping to a 2D array of elements.

Mitu et al., US 2008/0059762 A1, has taught an array divided into multiple class clusters. A sequencer with instruction memory for each class sequences through respectively buffered instructions.

Rabinovitch et al., US 2013/0298129 A1, has taught a loop buffer for each execution unit in a VLIW machine.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to David J. Huisman, whose telephone number is 571-272-4168. The examiner can normally be reached Monday-Friday, 9:00 am-5:30 pm.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jyoti Mehta, can be reached at 571-270-3995. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/David J. Huisman/
Primary Examiner, Art Unit 2183

Prosecution Timeline

Mar 21, 2023
Application Filed
Apr 03, 2025
Non-Final Rejection — §102
Sep 08, 2025
Response Filed
Nov 10, 2025
Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602229
NEURAL NETWORK ACCELERATOR FOR OPERATING A CONSUMER PIPELINE STAGE USING A START FLAG SET BY A PRODUCER PIPELINE STAGE
2y 5m to grant · Granted Apr 14, 2026
Patent 12530199
SYSTEMS AND METHODS FOR LOAD-DEPENDENT-BRANCH PRE-RESOLUTION
2y 5m to grant · Granted Jan 20, 2026
Patent 12499078
IMAGE PROCESSOR AND METHODS FOR PROCESSING AN IMAGE
2y 5m to grant · Granted Dec 16, 2025
Patent 12468540
TECHNOLOGIES FOR PREDICTION-BASED REGISTER RENAMING
2y 5m to grant · Granted Nov 11, 2025
Patent 12399722
MEMORY DEVICE AND METHOD INCLUDING PROCESSOR-IN-MEMORY WITH CIRCULAR INSTRUCTION MEMORY QUEUE
2y 5m to grant · Granted Aug 26, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 58%
With Interview: 92% (+33.8%)
Median Time to Grant: 4y 8m
PTA Risk: Moderate

Based on 670 resolved cases by this examiner. Grant probability derived from career allow rate.
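The with-interview figure appears to be the career allow rate plus the interview lift applied additively; a quick arithmetic check, assuming that simple additive model (which is how the dashboard's numbers line up):

```python
career_allow_rate = 389 / 670   # 389 granted of 670 resolved ≈ 58%
interview_lift = 0.338          # +33.8% observed lift with interview

with_interview = 0.58 + interview_lift   # additive model on the rounded 58%
print(f"{career_allow_rate:.0%} base, {with_interview:.0%} with interview")
```

Note this treats the lift as percentage points added to the base rate, not a relative multiplier; a relative lift would give a different (lower) figure.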
