Prosecution Insights
Last updated: April 19, 2026
Application No. 17/893,993

METHOD AND SYSTEM FOR REPLICATING CORE CONFIGURATIONS

Final Rejection — §103, §112

Filed: Aug 23, 2022
Examiner: ZHAO, BING
Art Unit: 2151
Tech Center: 2100 — Computer Architecture & Software
Assignee: Cornami Inc.
OA Round: 2 (Final)
Grant Probability: 90% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 90%, above average (420 granted / 468 resolved; +34.7% vs TC avg)
Interview Lift: +46.5% for resolved cases with an interview (strong)
Typical Timeline: 3y 0m average prosecution; 16 applications currently pending
Career History: 484 total applications across all art units
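
For readers who want to sanity-check the headline figures, the sketch below shows one way numbers like the career allow rate and the interview lift could be computed from per-examiner counts. The split of resolved cases by interview history and the definition of "lift" as a percentage-point difference are assumptions made for illustration; the dashboard's exact methodology is not stated on this page.

    # Minimal sketch, assuming "allow rate" = granted / resolved and
    # "interview lift" = percentage-point gap in allow rate between cases
    # with and without an examiner interview. The interview/no-interview
    # counts below are hypothetical and will not reproduce the exact
    # +46.5% figure shown above.

    def allow_rate(granted: int, resolved: int) -> float:
        """Share of resolved applications that ended in a grant."""
        return granted / resolved

    career = allow_rate(420, 468)             # ~0.897, displayed as 90%
    tc_average = 0.55                         # implied: 89.7% is +34.7 pts vs TC avg

    with_interview = allow_rate(95, 100)      # hypothetical split of the 468 cases
    without_interview = allow_rate(325, 368)  # hypothetical split of the 468 cases

    lift_pts = (with_interview - without_interview) * 100
    print(f"career: {career:.1%} ({(career - tc_average) * 100:+.1f} pts vs TC avg)")
    print(f"interview lift: {lift_pts:+.1f} pts")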

Statute-Specific Performance

§101: 13.9% (-26.1% vs TC avg)
§103: 39.0% (-1.0% vs TC avg)
§102: 6.8% (-33.2% vs TC avg)
§112: 32.4% (-7.6% vs TC avg)
Comparisons are against an estimated Tech Center average • Based on career data from 468 resolved cases
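
A quick consistency check on these deltas: each "vs TC avg" value above matches a simple percentage-point difference against an estimated Tech Center rate of 40% for every statute. The snippet below is a sketch under that assumption, not a statement of the dashboard's actual methodology.

    # Sketch: reproduce the "vs TC avg" deltas assuming they are plain
    # percentage-point differences against an estimated TC rate of 40%.
    examiner_rates = {"101": 13.9, "103": 39.0, "102": 6.8, "112": 32.4}
    TC_ESTIMATE = 40.0  # implied by the figures above; an assumption, not published data

    for statute, rate in examiner_rates.items():
        delta = rate - TC_ESTIMATE
        print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")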

Office Action

§103, §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA and is in response to the amendments filed on 12/12/2025. Claims 1-20 are pending.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which applicant regards as the invention. The following claim language is unclear and indefinite:

As per claims 1, 11 and 14, it is not clear what the "configuration" can entail (e.g., whether it entails activation of some fixed-function processing cores to perform "one or more operations", or selection of the "some of the processing cores" out of the "first subset" to perform "one or more operations").

The dependent claims do not cure the 112(b) issues of their respective parent claims. Therefore, they are rejected for the same reasons as those presented for their respective parent claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Fleming et al. (U.S. Pat. 10,445,451) in view of Vassiliev (U.S. Pat. 9,294,097). The Fleming reference has been previously cited.

As per claim 1, Fleming teaches the invention as substantially claimed, including a die comprising: a plurality of processing cores; an interconnection network coupling the plurality of processing cores together (Figs. 1, 6 and 8, col 5 lines 39-60, col 11 lines 36-40, col 15 lines 4-56: the CSA, which is an accelerator tile, is an accelerator of a processor with a core; it contains a plurality of PEs that are connected together by an inter-PE network); a configuration of a first subset of at least some of the plurality of processing cores to perform a function, wherein the configuration includes configuring some of the processing cores of the first subset to perform one or more operations of the plurality of operations to execute the function and activating the interconnection network to connect some of the processing cores of the first subset of processing cores to other processing cores of the first subset of processing cores to route data for performance of the function (col 15 lines 4-65, col 16 lines 1-4, 32-49, col 17 lines 30-44, col 18 lines 8-27, col 18 line 55 – col 19 line 18: processing elements (PEs) may be configured (e.g., programmed) to implement a particular dataflow operation from among the set that the PE supports; the configuration of the PEs also involves configuration of the network that selectively connects different PEs, via data paths, together to perform operations according to a dataflow graph; Fig. 25, col 38 lines 3-15, 25-30, col 39 lines 9-31, col 41 lines 20-24, 55-67, col 43 lines 35-67: each accelerator tile comprising an array of PEs contains one or more domains, each of which is a subset of the CSA that contains a subset of processing and networking elements; each domain can be managed by its corresponding LCC, which is responsible for (re)configuring its domain to perform different operations of different portions of an application); and a duplicate configuration of at least some of the other plurality of processing cores allocated to a second subset of the plurality of processing cores to perform the function independently of the first subset of the plurality of cores performing the function (Fig. 25, col 38 lines 3-15, 25-30, col 39 lines 9-31, col 41 lines 20-24, 55-67; col 44 lines 15-21, col 35 lines 21-23: each domain is configured by its own LCC, and a single configuration request can be multicast to multiple LCCs, so that the single configuration can be replicated across the domains, such as configurations for certain key HPC operations that may be both replicated and pipelined; col 12 lines 48-59: each domain can be for a different application domain).

Fleming does not explicitly teach that the plurality of processing cores can be homogeneous processing cores, where each processing core is programmed to perform a plurality of operations. However, Vassiliev explicitly teaches that the plurality of processing cores can be homogeneous processing cores, where each processing core is programmed to perform a plurality of operations (Fig. 2, col 2 lines 64-67, col 3 lines 49-63, col 4 lines 34-53, col 5 lines 29-50: heterogeneous computing systems can be composed of multiple of the same FPGA devices, each of which can be programmed with different kernels). It would have been obvious to one with ordinary skill in the art, prior to the effective filing date of the invention, to combine the teachings of Fleming and Vassiliev because both are directed towards configuration and management of reconfigurable accelerators. One with ordinary skill in the art would be motivated to incorporate the teachings of Vassiliev into those of Fleming because Vassiliev provides a way to effectively configure and manage a plurality of reconfigurable accelerators that can lead to an increase in the performance of parallel task execution (col 2 lines 45-57, col 3 lines 17-37).

As per claim 11, Fleming as modified by Vassiliev teaches a system of compiling a program in source code having at least one function to be executed on a plurality of homogeneous processing cores, and an interconnection network coupling the plurality of homogeneous processing cores together, each processing core being programmed to perform a plurality of operations, the system (Fleming Figs. 3A-C, col 6 lines 43-67, col 7 lines 1-8, col 12 lines 13-18: CSA supports compiler-produced programs; col 16 lines 32-67: programs are used to configure different processing elements (PEs) to perform different operations, and the PEs are connected through a network; Vassiliev Fig. 2, col 2 lines 64-67, col 3 lines 49-63, col 4 lines 34-53, col 5 lines 29-50: the PEs can be homogeneous) comprising: a compiler operable to convert the at least one function to a configuration of a first subset of processing cores in the plurality of processing cores, wherein the configuration includes configuring some the processing cores of the first subset to perform one or more operations of the plurality of operations to execute the function and activating the interconnection network to connect some of the processing cores of the first subset of processing cores to other processing cores of the first subset of processing cores to route data for performance of the function and lay out the configuration of processing cores on a first subset of the array of processing cores (Fleming col 8 lines 58-64, col 12 lines 8-11, col 13 lines 4-8; see more details in cols. 31-33: the compiler converts code into a dataflow graph containing operations that are configured on one or more PEs of the CSA; col 15 lines 4-65, col 16 lines 1-4, 32-49, col 17 lines 30-44, col 18 lines 8-27, col 18 line 55 – col 19 line 18: processing elements (PEs) may be configured (e.g., programmed) to implement a particular dataflow operation from among the set that the PE supports; the configuration of the PEs also involves configuration of the network that selectively connects different PEs, via data paths, together to perform operations according to a dataflow graph; Fig. 25, col 38 lines 3-15, 25-30, col 39 lines 9-31, col 41 lines 20-24, 55-67, col 43 lines 35-67: each accelerator tile comprising an array of PEs contains one or more domains, each of which is a subset of the CSA that contains a subset of processing and networking elements; each domain can be managed by its corresponding LCC, which is responsible for (re)configuring its domain to perform different operations of different portions of an application); and a structured memory to store the configuration of processing cores (Fleming col 41 lines 58-67: configurations are stored in configuration caches), wherein the compiler replicates the stored configuration of processing cores on a second subset of the array of processing cores to perform the function independently of the first subset of the plurality of cores performing the function (Fleming Fig. 25, col 38 lines 3-15, 25-30, col 39 lines 9-31, col 41 lines 20-24, 55-67; col 44 lines 15-21, col 35 lines 21-23: each domain is configured by its own LCC, and a single configuration request can be multicast to multiple LCCs, so that the single configuration can be replicated, such as configurations for certain key HPC operations that may be both replicated and pipelined; the configurations are produced by compilers: col 30 lines 25-34, col 12 lines 34-39, col 13 lines 4-8, col 8 lines 58-64; col 12 lines 48-59: each domain can be for a different application domain).

As per claim 14, Fleming as modified by Vassiliev teaches a method of configuring an array of homogeneous processing cores to perform functions of a program written in source code, wherein each of the array of homogeneous processing cores are coupled together via an interconnection network, and wherein each processing core is programmed to perform a plurality of operations (Fleming Figs. 3A-C, col 6 lines 43-67, col 7 lines 1-8, col 12 lines 13-18: CSA supports compiler-produced programs; col 16 lines 32-67: programs are used to configure different processing elements (PEs) to perform different operations, and the PEs are connected through a network; Vassiliev Fig. 2, col 2 lines 64-67, col 3 lines 49-63, col 4 lines 34-53, col 5 lines 29-50: the PEs can be homogeneous), the method comprising: converting the source code performing a function of the program to a configuration of a first subset of the array of processing cores; configuring the first subset of the array of processing cores according to the configuration, wherein the configuration includes configuring some the processing cores of the first subset to perform one or more operations of the plurality of operations to execute the function and activating the interconnection network to connect some of the processing cores of the first subset of processing cores to other processing cores of the first subset of processing cores to route data for performance of the function (Fleming col 8 lines 58-64, col 12 lines 8-11, col 13 lines 4-8; see more details in cols. 31-33: the compiler converts code into a dataflow graph containing operations that are configured on one or more PEs of the CSA; col 15 lines 4-65, col 16 lines 1-4, 32-49, col 17 lines 30-44, col 18 lines 8-27, col 18 line 55 – col 19 line 18: processing elements (PEs) may be configured (e.g., programmed) to implement a particular dataflow operation from among the set that the PE supports; the configuration of the PEs also involves configuration of the network that selectively connects different PEs, via data paths, together to perform operations according to a dataflow graph; Fig. 25, col 38 lines 3-15, 25-30, col 39 lines 9-31, col 41 lines 20-24, 55-67, col 43 lines 35-67: each accelerator tile comprising an array of PEs contains one or more domains, each of which is a subset of the CSA that contains a subset of processing and networking elements; each domain can be managed by its corresponding LCC, which is responsible for (re)configuring its domain to perform different operations of different portions of an application); storing the configuration along with an identifier of the configuration, wherein the identifier is associated with characteristics of the configuration to index the configuration in a memory structure (Fleming col 41 lines 58-67: configurations are stored in configuration caches, and the configurations are associated with reference IDs: col 44 lines 5-13; col 42 lines 3-29: configuration caches are pre-loaded with configuration data that can be referenced by the IDs); and replicating the configuration to perform the function on a second subset of the array of cores to perform the function independently of the first subset of the plurality of cores performing the function; and performing the function via the configured first subset of the array of processing cores (Fleming Fig. 25, col 38 lines 3-15, 25-30, col 39 lines 9-31, col 41 lines 20-24, 55-67; col 44 lines 15-21, col 35 lines 21-23: each domain is configured by its own LCC, and a single configuration request can be multicast to multiple LCCs, so that the single configuration can be replicated, such as configurations for certain key HPC operations that may be both replicated and pipelined; the configurations are produced by compilers: col 30 lines 25-34, col 12 lines 34-39, col 13 lines 4-8, col 8 lines 58-64; col 12 lines 48-59: each domain can be for execution of a different application domain).

As per claim 2, Fleming teaches wherein the plurality of processing cores are arranged in a grid (Figs. 2, 22, 25, 26, 28, 29 and 31).

As per claim 3, Fleming teaches wherein the configuration includes a topology and interconnection of the first subset of some of the plurality of processing cores, and wherein the configuration is stored in on-die memory of the second subset of the plurality of processing cores to create the duplicate configuration (Figs. 25, 26, 28, 29 and 31, col 41 lines 53-67).

As per claim 4, Fleming teaches further comprising: a third subset of at least some of the plurality of processing cores to perform a second function on the plurality of processing cores; and a duplicate configuration of at least some of the other plurality of processing cores allocated to a fourth subset of the plurality of processing cores performing the second function (Fig. 25, col 38 lines 3-15, 25-30, col 39 lines 9-31, col 41 lines 20-24, 55-67; col 44 lines 15-21, col 35 lines 21-23: each domain, of a plurality of domains, is configured by its own LCC to perform particular operations of a dataflow graph).

As per claim 5, Fleming teaches wherein each of the processing cores includes a memory, an arithmetic logic unit, and a set of interfaces interconnected to neighboring cores of the plurality of processing cores (Fig. 9A, col 20 lines 45-67, col 21 lines, col 22).

As per claim 6, Fleming teaches wherein each of the processing cores are configurable to perform at least one of numeric, logic and math operations, data routing operations, conditional branching operations, input processing, and output processing (col 16 lines 32-49, col 8 lines 1-64).

As per claim 7, Fleming teaches wherein the processing cores in the first subset are configured as wires connecting other processing cores in the first subset (col 8 lines 5-9, 29-31, 48-55: each PE can be configured to perform a different operation in a dataflow graph, such as a channel demultiplexer operation).

As per claim 8, Fleming teaches wherein the configuration is produced by a compiler compiling source code to produce the configuration (col 8 lines 58-64, col 12 lines 8-11, col 13 lines 4-8; see more details in cols. 31-33: the compiler converts code into a dataflow graph containing operations that are configured on one or more PEs of the CSA).

As per claim 9, Fleming teaches wherein the configuration is stored in a memory, wherein the memory is one of a host server memory, an integrated circuit high bandwidth memory, or an on-die memory (col 41 line 54 – col 42 line 25).

As per claim 10, Fleming as modified by Vassiliev teaches wherein each of the second set of the plurality of processing cores are on a single die including an on-die memory (Fleming Fig. 8, col 18 lines 13-53: the CSA containing PEs can be on a same die with its own on-die memory and interconnect, which means that in some configurations a second set of PEs can be on its own die; Vassiliev col 5 lines 36-43: FPGA devices can be on their own cards, and each of the cards can obviously be considered to be its own die with its own memory), wherein the duplicate configuration is configured in the second subset of the plurality of processing cores by copying the stored configuration from the memory to on-die memory of the second subset of the plurality of processing cores (Fleming col 41 lines 35-67, col 42 lines 18-28, col 44 lines 15-21, col 38 lines 3-35: each domain is managed by its own LCC and has its own configuration cache that is used to store configurations, which can be a replicated configuration).

As per claims 12 and 13, they are reworded system versions of claims 9 and 3. Therefore, they are rejected for the same reasons, mutatis mutandis, as those presented for claims 9 and 3, respectively.

As per claims 15-20, they are reworded method versions of claims 3-6 and 8-10, where claim 16 is a reworded version of claim 4, claim 17 corresponds to claims 5 and 6, and claims 18-20 correspond to claims 8-10. Therefore, they are rejected for the same reasons, mutatis mutandis, as those presented for claims 3-6 and 8-10, respectively.

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. However, in the interest of compact prosecution, the examiner also respectfully submits that the following arguments on the 35 U.S.C. 103 issues are not persuasive.

Response to arguments on the 35 U.S.C. 103 issues: With regard to applicant's argument for claim 1 that "the configuration in Fleming relies on having the correct PEs that perform required functions, not selecting from different operations performed by an individual processing core… Moreover, Fleming does not disclose that each of the processing cores has a plurality of operations and that the configuration involves selecting one or more of the plurality of operations," the examiner respectfully disagrees.

Firstly, the claims themselves do not require that the configuration involve selecting one or more of the plurality of operations. Instead, the claims state "wherein the configuration includes configuring some the processing cores of the first subset to perform one or more operations of the plurality of operations to execute the function," which, under BRI, does not require that the configuration involve selecting one or more of the plurality of operations; the only requirement of the configuration is that "some of the processing cores" are configured to "perform one or more operations", and this configuration can simply be activation of some number of pre-programmed processing cores.

Secondly, even if one were to interpret the claims to mean that the configuration involves selecting one or more of the plurality of operations, this interpretation is still taught by Fleming in col 16 lines 32-49, col 18 line 55 – col 19 line 8: PEs can be created in order to meet the complexity or function of programs, and each PE may be configured (e.g., programmed) before the beginning of execution to implement a particular dataflow operation from among the set that the PE supports. As such, Fleming teaches "that each of the processing cores has a plurality of operations and that the configuration involves selecting one or more of the plurality of operations." Therefore, the above-cited argument of the applicant is not persuasive.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BING ZHAO, whose telephone number is (571) 270-1745. The examiner can normally be reached 9:30am - 6pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, James Trujillo, can be reached at (571) 272-3677. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BING ZHAO/
Primary Examiner, Art Unit 2151
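
The §112(b) rejection turns on what the claimed "configuration" entails: activating pre-programmed cores, or selecting, per core, which of its supported operations to perform. The sketch below is purely illustrative and models the second reading of the claim 14 steps (convert a function to a configuration, store it under an identifier, then replicate it onto a second subset of cores). All names and structures are hypothetical; this is not Fleming's CSA/LCC mechanism and not the applicant's actual implementation.

    # Hypothetical sketch of one reading of claim 14: each homogeneous core
    # supports several operations, a "configuration" selects an operation per
    # core plus the links between cores, and the stored configuration is
    # replicated onto a second, independent subset of cores.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Configuration:
        identifier: str                       # indexes the configuration in memory
        operations: dict[int, str]            # core index -> selected operation
        links: tuple[tuple[int, int], ...]    # activated interconnect routes

    config_store: dict[str, Configuration] = {}

    def compile_function(name: str, first_subset: list[int]) -> Configuration:
        """Toy stand-in for converting source code into a core configuration."""
        ops = {core: "mul" if i % 2 == 0 else "add" for i, core in enumerate(first_subset)}
        links = tuple(zip(first_subset, first_subset[1:]))
        cfg = Configuration(identifier=f"cfg:{name}", operations=ops, links=links)
        config_store[cfg.identifier] = cfg    # store, indexed by identifier
        return cfg

    def replicate(identifier: str, second_subset: list[int]) -> Configuration:
        """Copy a stored configuration onto a different subset of cores."""
        src = config_store[identifier]
        remap = dict(zip(sorted(src.operations), second_subset))
        ops = {remap[c]: op for c, op in src.operations.items()}
        links = tuple((remap[a], remap[b]) for a, b in src.links)
        return Configuration(identifier=f"{identifier}:replica", operations=ops, links=links)

    original = compile_function("dot_product", first_subset=[0, 1, 2, 3])
    replica = replicate(original.identifier, second_subset=[8, 9, 10, 11])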

Prosecution Timeline

Aug 23, 2022: Application Filed
Sep 25, 2023: Response after Non-Final Action
Dec 02, 2024: Response after Non-Final Action
Jun 13, 2025: Non-Final Rejection — §103, §112
Dec 12, 2025: Response Filed
Jan 14, 2026: Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585488: DYNAMIC SCRIPT GENERATION AND EXECUTION IN CONTAINERS. Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579009: SUSTAINABILITY MODES IN SOFTWARE ENGINEERING. Granted Mar 17, 2026 (2y 5m to grant)
Patent 12572341: COMPILING TENSOR OPERATORS FOR NEURAL NETWORK MODELS BASED ON TENSOR TILE CONFIGURATIONS. Granted Mar 10, 2026 (2y 5m to grant)
Patent 12566641: SYSTEMS AND METHODS FOR OFFLOADING COMPUTATION TO A STORAGE DEVICE. Granted Mar 03, 2026 (2y 5m to grant)
Patent 12561180: SYSTEM AND METHOD FOR SUBSCRIPTION MANAGEMENT BY INSTANTIATING AND/OR RETIRING COMPOSED SYSTEMS OF A MANAGED SYSTEM. Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 90%
With Interview: 99% (+46.5%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate
Based on 468 resolved cases by this examiner. Grant probability derived from career allow rate.
