Prosecution Insights
Last updated: April 19, 2026
Application No. 18/971,323

DATA-FLOW-DRIVEN RECONFIGURABLE PROCESSOR CHIP AND RECONFIGURABLE PROCESSOR CLUSTER

Non-Final OA: §102, §103, §112
Filed: Dec 06, 2024
Examiner: DOMAN, SHAWN
Art Unit: 2183
Tech Center: 2100 — Computer Architecture & Software
Assignee: Beijing Tsingmicro Intelligent Technology Co. Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 66% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability With Interview: 90%

Examiner Intelligence

Career Allow Rate: 66% (183 granted / 275 resolved), +11.5% vs TC avg, above average
Interview Lift: strong, +23.4% for resolved cases with interview
Typical Timeline: 2y 9m average prosecution; 47 currently pending
Career History: 322 total applications across all art units
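The headline figures are simple ratios over the examiner's resolved cases. As a sketch of the arithmetic (the counts come from the report; treating the +23.4% interview lift as a straight additive adjustment is an assumption about how the tool combines the numbers):

```python
# Reproduce the dashboard's headline rates from the reported counts.
granted = 183        # granted cases (from the report)
resolved = 275       # resolved cases (from the report)

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # ~66.5%, displayed as 66%

interview_lift = 0.234                          # reported lift for cases with an interview
print(f"With interview: {allow_rate + interview_lift:.0%}")   # ~90%
```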

Statute-Specific Performance

§101: 2.8% (-37.2% vs TC avg)
§103: 47.2% (+7.2% vs TC avg)
§102: 18.0% (-22.0% vs TC avg)
§112: 26.3% (-13.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 275 resolved cases.
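A cross-check on the table above: each examiner rate minus its reported delta backs out the same Tech Center baseline, suggesting the tool compares every statute against a single ~40% average estimate. A sketch of that arithmetic (interpreting each delta as examiner rate minus TC average is an assumption):

```python
# Back out the implied Tech Center average from each statute's delta.
# (examiner_rate, delta_vs_tc) in percent, taken from the table above.
stats = {
    "§101": (2.8, -37.2),
    "§103": (47.2, +7.2),
    "§102": (18.0, -22.0),
    "§112": (26.3, -13.7),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # implied TC average for this statute
    print(f"{statute}: examiner {rate}% vs implied TC avg {tc_avg:.1f}%")
```

Every row resolves to the same implied baseline, which is consistent with the single "Tech Center average estimate" noted under the table.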

Office Action

Rejections: §102, §103, §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-15 have been examined.

Priority

Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119(a)-(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. The instant application claims priority to Chinese Application 202310047127.8, filed January 31, 2023.

Information Disclosure Statement

The Applicant's submission of the Information Disclosure Statements dated December 6, 2024 and July 16, 2025 is acknowledged by the Examiner, and the cited references have been considered in the examination of the claims now pending. Copies of the PTOL-1449s initialed and dated by the Examiner are attached to the instant office action.

Drawings

The drawings are objected to because of the following informalities.

The sheet numbering is improperly formatted. The figures therefore fail to comply with 37 CFR 1.84(t), which states, "The drawing sheet numbering must be clear and larger than the numbers used as reference characters to avoid confusion. The number of each sheet should be shown by two Arabic numerals placed on either side of an oblique line, with the first being the sheet number and the second being the total number of sheets of drawings, with no other marking." The sheet numbering is too small and includes extraneous text.

Figures 1-4 include text that is too small. The figures therefore fail to comply with 37 CFR 1.84(p)(3), which states, "Numbers, letters, and reference characters must measure at least .32 cm. (1/8 inch) in height."

In Figures 2 and 4, the shaded arrows obscure text. The figures therefore fail to comply with 37 CFR 1.84(p)(3), which states, "Numbers, letters, and reference characters ... should not cross or mingle with the lines."

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as "amended." If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the Applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 6-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the Applicant) regards as the invention.

Claim 6 recites, at line 2, "the reconfigurable processor chip." There is insufficient antecedent basis for this limitation in the claim. For purposes of examination, the limitation is interpreted as "[[the]]a reconfigurable processor chip of the plurality of reconfigurable processor chips."

Claim 7 recites, at line 2, "the reconfigurable processing element among the plurality of reconfigurable processor chips." There is insufficient antecedent basis for this limitation in the claim. For purposes of examination, the limitation is interpreted as "[[the]]a reconfigurable processing element among the plurality of reconfigurable processor chips."

Claims 7-15 are rejected as depending from rejected base claims and failing to cure the indefiniteness of those base claims.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-5 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US Publication No. 2023/0281156 by Zhang et al. (hereinafter referred to as "Zhang").
Regarding claim 1, Zhang discloses: a reconfigurable processor chip, comprising: a plurality of reconfigurable processing elements based on distributed storage, components of the reconfigurable processing elements being logically interconnected, wherein the components comprise (Zhang discloses, at Figure 1C and related description, a reconfigurable data processor that includes an array of configurable units. As disclosed at ¶ [0049], the processor uses distributed memory. As shown at Figure 3A, components of the configurable units are logically interconnected.):

a reconfigurable computing component configured to calculate data (Zhang discloses, at Figure 4 and related description, a pattern compute unit, which discloses a reconfigurable computing component configured to calculate data.);

a data flow controller using a data flow driving mode, the data flow driving mode is configured to control start and end of a computing task and a data transmission task based on data flow information about the computing task and message transferring of upstream and downstream reconfigurable processing elements (Zhang discloses, at Figure 4 and related description, a control block that controls starting and ending processing. See, e.g., ¶ [0085]. Zhang also discloses, at Figure 4 and related description, transmitting data. The starting, ending, and data transmission with adjacent units are understood to be based on data flow information about computing tasks. See, e.g., ¶ [0102].);

a distributed memory configured to implement data storage of a corresponding reconfigurable processing element (Zhang discloses, at Figure 5 and related description, pattern memory units, which discloses a distributed memory configured to implement data storage of a corresponding reconfigurable processing element.); and

a programmable data routing element configured to implement communication between the plurality of reconfigurable processing elements to control a direction of a data packet, and implement flexible transmission of the data packet (Zhang discloses, at Figure 3B and related description, switches that link the configurable units, which discloses a programmable data routing element configured to implement communication between the plurality of reconfigurable processing elements to control a direction of a data packet, and implement flexible transmission of the data packet.).

Regarding claim 2, Zhang discloses the elements of claim 1, as discussed above. Zhang also discloses: the programmable data routing element is configured to change a routing direction and a routing destination of the data packet in real time by software configuration using a software programmable routing policy (Zhang discloses, at Figure 3B and related description, switches that link the configurable units and can transmit data during operation, which discloses the programmable data routing element is configured to change a routing direction and a routing destination of the data packet in real time by software configuration using a software programmable routing policy.).

Regarding claim 3, Zhang discloses the elements of claim 1, as discussed above. Zhang also discloses: the reconfigurable processing element is configured to exchange data over a network-on-chip, an inter-chip interface and a network cable within a storage capacity range of a storage space (Zhang discloses, at ¶ [0047] et seq., an array level network (ALN), which discloses exchanging data over a network on chip. Zhang also discloses, at ¶ [0083], sending requests off-chip, which discloses an inter-chip interface, as off-chip memory is understood to be implemented using chips. Zhang also discloses, at ¶ [0048], the ALN uses wires to transmit information, which discloses a network cable within a storage capacity range of a storage space.).

Regarding claim 4, Zhang discloses the elements of claim 1, as discussed above. Zhang also discloses: the plurality of reconfigurable processing elements are divided into a plurality of computing areas based on algorithmic mapping requirements (Zhang discloses, at Figure 1A and related description, dividing the processing elements based on the algorithm to be performed. See also Figure 7 and related description, which discloses partitioning code and corresponding assignment of resources.), wherein a communication connection relationship of the programmable data routing element is changed in real time by changing configuration of an execution graph in the reconfigurable processing elements in the data flow driving mode of the data flow controller, and a division of the computing areas is changed based on the communication connection relationship (Zhang discloses, at Figure 7 and related description, runtime determination of placement and routing for assigning a physical dataflow graph to processing and memory resources, which discloses a communication connection relationship of the programmable data routing element is changed in real time by changing configuration of an execution graph in the reconfigurable processing elements in the data flow driving mode of the data flow controller, and a division of the computing areas is changed based on the communication connection relationship.).

Regarding claim 5, Zhang discloses the elements of claim 4, as discussed above. Zhang also discloses: the plurality of computing areas perform pipeline computing or perform different assigned computing tasks (Zhang discloses, at ¶ [0035], performing pipelined processing.).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 6-15 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang in view of US Publication No. 2010/0158023 by Mukhopadhyay et al. (hereinafter referred to as "Mukhopadhyay").

Regarding claim 6, Zhang discloses: a reconfigurable processor cluster, comprising: a plurality of reconfigurable processor…[tiles], wherein the reconfigurable processor chip is composed of a plurality of reconfigurable processing elements based on distributed storage, and components of the reconfigurable processing elements are logically interconnected and comprise (Zhang discloses, at Figure 2, a plurality of tiles each having an array of configurable units. Zhang discloses, at Figure 1C and related description, a reconfigurable data processor that includes an array of configurable units. As disclosed at ¶ [0049], the processor uses distributed memory. As shown at Figure 3A, components of the configurable units are logically interconnected.):

a reconfigurable computing component configured to calculate data (Zhang discloses, at Figure 4 and related description, a pattern compute unit, which discloses a reconfigurable computing component configured to calculate data.);

a data flow controller using a data flow driving mode, the data flow driving mode is configured to control start and end of a computing task and a data transmission task based on data flow information about the computing task and message transferring of upstream and downstream reconfigurable processing elements (Zhang discloses, at Figure 4 and related description, a control block that controls starting and ending processing. See, e.g., ¶ [0085]. Zhang also discloses, at Figure 4 and related description, transmitting data. The starting, ending, and data transmission with adjacent units are understood to be based on data flow information about computing tasks. See, e.g., ¶ [0102].);

a distributed memory configured to implement data storage of a corresponding reconfigurable processing element (Zhang discloses, at Figure 5 and related description, pattern memory units, which discloses a distributed memory configured to implement data storage of a corresponding reconfigurable processing element.); and

a programmable data routing element configured to implement communication between the plurality of reconfigurable processing elements to control a direction of a data packet, and implement flexible transmission of the data packet (Zhang discloses, at Figure 3B and related description, switches that link the configurable units, which discloses a programmable data routing element configured to implement communication between the plurality of reconfigurable processing elements to control a direction of a data packet, and implement flexible transmission of the data packet.).

Zhang does not explicitly disclose the aforementioned tiles are implemented using a plurality of chips. However, in the same field of endeavor (e.g., parallel processing), Mukhopadhyay discloses: multi-chip modules (Mukhopadhyay discloses, at the Abstract, multi-chip modules.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Zhang to implement the plurality of tiles using a plurality of chips, as disclosed by Mukhopadhyay, because whether to implement a plurality of components on one chip or on separate chips is an obvious design choice that would be determined based on circumstances, each option having well-known benefits and drawbacks.

Regarding claim 7, Zhang, as modified, discloses the elements of claim 6, as discussed above. Zhang also discloses: a routing control module is configured to implement data communication among the plurality of reconfigurable processor…[tiles] (Zhang discloses, at Figure 2, the tiles are communicatively coupled via top level switches, which discloses a routing control module.), wherein the reconfigurable processing element among the plurality of reconfigurable processor…[tiles] is configured to perform the data communication via a network by the programmable data routing element and the routing control module, and the programmable data routing element and the routing control module on the reconfigurable processor chip are connected by the network on the reconfigurable processor chip (Zhang discloses, at ¶ [0047] et seq., an array level network (ALN), which discloses the reconfigurable processing element among the plurality of reconfigurable processor…[tiles] is configured to perform the data communication via a network by the programmable data routing element and the routing control module, and the programmable data routing element and the routing control module on the reconfigurable processor chip are connected by the network on the reconfigurable processor chip.); and the routing control module is configured to receive or send a network data packet among the reconfigurable processor…[tiles] (Zhang discloses.).

Zhang does not explicitly disclose the aforementioned tiles are implemented using a plurality of chips. However, in the same field of endeavor (e.g., parallel processing), Mukhopadhyay discloses: multi-chip modules (Mukhopadhyay discloses, at the Abstract, multi-chip modules.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Zhang to implement the plurality of tiles using a plurality of chips, as disclosed by Mukhopadhyay, because whether to implement a plurality of components on one chip or on separate chips is an obvious design choice that would be determined based on circumstances, each option having well-known benefits and drawbacks.

Regarding claim 8, Zhang, as modified, discloses the elements of claim 7, as discussed above. Zhang also discloses: the routing control module has a bidirectional …function to send read request, write request, read response and write response control information (Zhang discloses, at Figure 2 and related description, sending data between the tiles, which discloses bidirectional functionality. Zhang also discloses, at ¶ [0047] et seq., an array level network (ALN), which discloses sending read request, write request, read response and write response control information.).

Zhang does not explicitly disclose Ethernet data transceiving and a flow control mechanism, and has functions of sending buffer back pressure and receiving buffer back pressure to control data transmission at a receiving end and a sending end. However, in the same field of endeavor (e.g., parallel processing), Mukhopadhyay discloses: bidirectional Ethernet ports (Mukhopadhyay discloses, at ¶ [0049], bidirectional Ethernet ports.); and back pressure flow control messages (Mukhopadhyay discloses, at ¶ [0087], back-pressure flow control messages.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Zhang to include Ethernet and back-pressure flow control, as disclosed by Mukhopadhyay, in order to improve performance by providing stable communication capabilities.

Regarding claim 9, Zhang, as modified, discloses the elements of claim 6, as discussed above.
Zhang also discloses: the plurality of reconfigurable processing elements on the reconfigurable processor chip are divided into a plurality of computing areas based on algorithm mapping requirements (Zhang discloses, at Figure 1A and related description, dividing the processing elements based on the algorithm to be performed. See also Figure 7 and related description, which discloses partitioning code and corresponding assignment of resources.); and the reconfigurable processor cluster is configured to support flexible division of the computing areas, and support asynchronous parallel computing on the computing areas (Zhang discloses, at ¶ [0034], reconfigurable units, which discloses flexible division of the computing areas, that support dataflow computing, which discloses asynchronous parallel computing.).

Regarding claim 10, Zhang, as modified, discloses the elements of claim 6, as discussed above. Zhang also discloses: the reconfigurable processor cluster is configured to support a plurality of computing modes, a data parallel computing mode, a pipeline parallel computing mode or a model parallel computing mode (Zhang discloses, at ¶ [0035], supporting both parallel and pipelined modes.).

Regarding claim 11, Zhang, as modified, discloses the elements of claim 6, as discussed above. Zhang also discloses: resources of the reconfigurable processor cluster are allocated to a plurality of tasks for parallel computing (Zhang discloses, at Figure 1A and related description, dividing the processing elements based on the algorithm to be performed. See also Figure 7 and related description, which discloses partitioning code and corresponding assignment of resources.).

Regarding claim 12, Zhang, as modified, discloses the elements of claim 6, as discussed above. Zhang also discloses: the programmable data routing element is configured to change a routing direction and a routing destination of the data packet in real time by software configuration using a software programmable routing policy (Zhang discloses, at Figure 3B and related description, switches that link the configurable units and can transmit data during operation, which discloses the programmable data routing element is configured to change a routing direction and a routing destination of the data packet in real time by software configuration using a software programmable routing policy.).

Regarding claim 13, Zhang, as modified, discloses the elements of claim 6, as discussed above. Zhang also discloses: the reconfigurable processing element is configured to exchange data over a network-on-chip, an inter-chip interface and a network cable within a storage capacity range of a storage space (Zhang discloses, at ¶ [0047] et seq., an array level network (ALN), which discloses exchanging data over a network on chip. Zhang also discloses, at ¶ [0083], sending requests off-chip, which discloses an inter-chip interface, as off-chip memory is understood to be implemented using chips. Zhang also discloses, at ¶ [0048], the ALN uses wires to transmit information, which discloses a network cable within a storage capacity range of a storage space.).

Regarding claim 14, Zhang, as modified, discloses the elements of claim 6, as discussed above. Zhang also discloses: the plurality of reconfigurable processing elements are divided into a plurality of computing areas based on algorithmic mapping requirements (Zhang discloses, at Figure 1A and related description, dividing the processing elements based on the algorithm to be performed. See also Figure 7 and related description, which discloses partitioning code and corresponding assignment of resources.), wherein a communication connection relationship of the programmable data routing element is changed in real time by changing configuration of an execution graph in the reconfigurable processing elements in the data flow driving mode of the data flow controller, and a division of the computing areas is changed based on the communication connection relationship (Zhang discloses, at Figure 7 and related description, runtime determination of placement and routing for assigning a physical dataflow graph to processing and memory resources, which discloses a communication connection relationship of the programmable data routing element is changed in real time by changing configuration of an execution graph in the reconfigurable processing elements in the data flow driving mode of the data flow controller, and a division of the computing areas is changed based on the communication connection relationship.).

Regarding claim 15, Zhang, as modified, discloses the elements of claim 14, as discussed above. Zhang also discloses: the plurality of computing areas perform pipeline computing or perform different assigned computing tasks (Zhang discloses, at ¶ [0035], performing pipelined processing.).

Conclusion

The following prior art made of record and not relied upon is considered pertinent to Applicant's disclosure.

US 20230229612 by Luttrell discloses multiple chips and an inter-chip link.
US 20230237013 by Dykema discloses data parallel, pipelines, partitioning into sub-arrays, and Ethernet.
US 20220198117 by Rauman discloses backpressure and Ethernet.
US 9607355 by Zou discloses model parallel.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAWN DOMAN, whose telephone number is (571) 270-5677. The examiner can normally be reached on Monday through Friday, 8:30am-6pm Eastern Time.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jyoti Mehta, can be reached on 571-270-3995. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHAWN DOMAN/
Primary Examiner, Art Unit 2183

Prosecution Timeline

Dec 06, 2024
Application Filed
Feb 12, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585469: Trace Cache Access Prediction and Read Enable. Granted Mar 24, 2026 (2y 5m to grant).
Patent 12572358: System, Apparatus And Methods For Minimum Serialization In Response To Non-Serializing Register Write Instruction. Granted Mar 10, 2026 (2y 5m to grant).
Patent 12561142: METHOD AND SYSTEM FOR PREVENTING PREFETCHING A NEXT INSTRUCTION LINE BASED ON A COMPARISON OF INSTRUCTIONS OF A CURRENT INSTRUCTION LINE WITH A BRANCH INSTRUCTION. Granted Feb 24, 2026 (2y 5m to grant).
Patent 12554498: QUANTUM COMPUTER WITH A PRACTICAL-SCALE INSTRUCTION HIERARCHY. Granted Feb 17, 2026 (2y 5m to grant).
Patent 12541368: LOOP EXECUTION IN A RECONFIGURABLE COMPUTE FABRIC USING FLOW CONTROLLERS FOR RESPECTIVE SYNCHRONOUS FLOWS. Granted Feb 03, 2026 (2y 5m to grant).
Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 66%
With Interview: 90% (+23.4%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 275 resolved cases by this examiner. Grant probability derived from career allow rate.
