DETAILED ACTION
Claims 1-20 are pending.
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 9/10/2025 has been entered.
The Office acknowledges the following papers:
Claims and remarks filed on 8/5/2025; and
IDS filed on 11/14/2025, 7/18/2025, and 7/7/2025.
New Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 8-13, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Gupta et al. (U.S. 2018/0225124), in view of Hutchings et al. (U.S. 2007/0257702).
As per claim 1:
Gupta disclosed a processor comprising:
a plurality of processor cores (Gupta: Figure 1 element 110, paragraph 41), each of the plurality of processor cores including a data cache for reading and writing data and an instruction cache for reading instructions, the instruction cache being separate from the data cache (Gupta: Figures 1-2 elements 110-111, 227, and 235, paragraphs 41 and 58-59); and
a distributor communicatively coupled to the plurality of processor cores (Gupta: Figure 1 elements 110, 120, and 140, paragraphs 41-43)(The core interconnect and memory interface (i.e. distributor) distribute data and instructions from the L2 cache to the cores for instruction block processing by the cores.) and configured to:
receive configuration information indicating at least an association between to-be-processed data and instructions (Gupta: Figures 1-3 and 5 elements 110-111, 120, 140, 160, 311-315, and 510, paragraphs 41-43, 46, 49, 52, 67, 75, 79, 88-92)(The control unit allocates and assigns instruction blocks to processor cores for processing. The core interconnect and memory interface (i.e. distributor) receives control signals and instruction blocks (e.g. configuration information) to transmit to the processor cores. The instruction block header includes information regarding an execution mode, block size, block exit types, pointing to next blocks, information about the instructions within the block, etc. (e.g. configuration information). The instructions within the instruction block provide an association between instructions and to-be-processed data (i.e. data referred by the instructions themselves for processing, such as source operands).).
Gupta failed to teach a distributor configured to: based at least in part on the association, distribute the to-be-processed data to a respective data cache of at least one processor core of the plurality of processor cores; and based at least in part on the association, distribute the instructions associated with the to-be-processed data to a respective instruction cache of the at least one processor core for processing.
However, Hutchings combined with Gupta disclosed a distributor configured to:
based at least in part on the association, distribute the to-be-processed data to a respective data cache of at least one processor core of the plurality of processor cores (Hutchings: Figure 3 element 320, paragraphs 61-64)(Gupta: Figures 1-2 and 5 elements 111, 152, 277, and 540, paragraphs 41-43, 68, 75, and 96)(Hutchings disclosed a configurable interconnect circuit receiving configuration data. The combination allows for the multiplexers/switches/routing components of the interconnect of Gupta to be configurable by receiving configuration data. The core interconnect and memory interface (i.e. distributor) distribute data and instructions from the L2 cache to the cores for instruction block processing by the cores. This distribution is done by the configuration data configuring the interconnect elements. Load instructions in the instruction block request operand data that is sent from the L2/main memory when not present in the data cache. The memory interface and interconnect send this operand data from the L2/main memory back to the data cache when requested.); and
based at least in part on the association, distribute the instructions associated with the to-be-processed data to a respective instruction cache of the at least one processor core for processing (Hutchings: Figure 3 element 320, paragraphs 61-64)(Gupta: Figures 1-3 and 5 elements 111, 152, 160, 227, 321, and 540, paragraphs 41-43, 46, 58, 75, and 96)(Hutchings disclosed a configurable interconnect circuit receiving configuration data. The combination allows for the multiplexers/switches/routing components of the interconnect of Gupta to be configurable by receiving configuration data. The core interconnect and memory interface (i.e. distributor) distribute data and instructions from the L2 cache to the cores for instruction block processing by the cores. This distribution is done by the configuration data configuring the interconnect elements. The control unit allocates instruction blocks to processor cores for processing. Each instruction block includes a header followed by a number of instructions to be processed (e.g. instruction block A includes seventy instructions). The memory interface and interconnect send the instructions of the instruction block from the L2/main memory to the L1 instruction cache of an allocated core for processing by the control unit.).
The advantage of implementing a configurable interconnect is that the routing of data can be configured based on processor status. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the configurable interconnect circuits of Hutchings into the interconnect of Gupta for the above advantage.
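For illustration only, the distribution scheme mapped above (an interconnect routing each instruction block and its associated operand data to the separate instruction and data caches of an assigned core) can be sketched as follows. All names (Core, Distributor, the sample blocks) are hypothetical and do not appear in Gupta or Hutchings:

```python
# Hypothetical sketch of the mapped distribution scheme; not from either reference.
from dataclasses import dataclass, field

@dataclass
class Core:
    """A processor core with separate instruction and data caches."""
    instruction_cache: list = field(default_factory=list)
    data_cache: list = field(default_factory=list)

class Distributor:
    """Routes each instruction block and its associated operand data
    to the caches of the core that the configuration assigns it to."""
    def __init__(self, cores):
        self.cores = cores

    def distribute(self, config):
        # config: (core_index, instructions, operands) associations,
        # standing in for the instruction-block/control-signal information.
        for core_idx, instructions, operands in config:
            core = self.cores[core_idx]
            core.instruction_cache.extend(instructions)  # to L1 I-cache
            core.data_cache.extend(operands)             # to L1 D-cache

cores = [Core(), Core()]
Distributor(cores).distribute([
    (0, ["load r1", "add r1, r2"], [42]),  # block A -> first core
    (1, ["load r3", "mul r3, r4"], [7]),   # block E -> second core
])
```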
As per claim 2:
Gupta and Hutchings disclosed the processor of claim 1, wherein distributing the instructions to the at least one processor core comprises:
broadcasting the instructions to the at least one processor core (Gupta: Figures 1-2 and 5 elements 111, 152, 160, 227, 321, and 540, paragraphs 41-43, 46, 58, 75, and 96)(The broadest reasonable interpretation of broadcasting in the context of the claim language is sending the instructions to the at least one processor core. The memory interface and interconnect send the instructions of the instruction block from the L2/main memory to the L1 instruction cache of an allocated core for processing by the control unit.).
As per claim 3:
Gupta and Hutchings disclosed the processor of claim 2, wherein distributing the to-be-processed data to the at least one processor core comprises:
sending first data in the to-be-processed data to a first processor core of the at least one processor core for processing (Gupta: Figures 1-3 and 5 elements 111, 152, 160, 277, 321, and 540, paragraphs 41-43, 46, 68, 75, and 96)(Load instructions in the instruction block request operand data that is sent from the L2/main memory when not present in the data cache. The memory interface and interconnect send this operand data from the L2/main memory back to the data cache when requested. The control unit scheduler allocates instruction blocks to processing cores. For example, load instructions in instruction block A, assigned to a first processing core, send a first set of operand data to the first processing core.); and
sending second data in the to-be-processed data to a second processor core of the at least one processor core for processing, the second data being different from the first data (Gupta: Figures 1-3 and 5 elements 111, 152, 160, 277, 321, and 540, paragraphs 41-43, 46, 68, 75, and 96)(Load instructions in the instruction block request operand data that is sent from the L2/main memory when not present in the data cache. The memory interface and interconnect send this operand data from the L2/main memory back to the data cache when requested. The control unit scheduler allocates instruction blocks to processing cores. For example, load instructions in instruction block E, assigned to a second processing core, send a second set of operand data to the second processing core. It would have been obvious to one of ordinary skill in the art that different instruction blocks load different operand data for different processing operations.).
As per claim 4:
Gupta and Hutchings disclosed the processor of claim 1, wherein distributing the to-be-processed data to the at least one processor core comprises:
broadcasting the to-be-processed data to the at least one processor core (Gupta: Figures 1-2 and 5 elements 111, 152, 277, and 540, paragraphs 41-43, 68, 75, and 96)(The broadest reasonable interpretation of broadcasting in the context of the claim language is sending the data to the at least one processor core. The memory interface and interconnect send operand data from the L2/main memory back to the data cache when requested for load instructions.).
As per claim 5:
Gupta and Hutchings disclosed the processor of claim 4, wherein distributing the instructions to the at least one processor core comprises:
sending a first instruction to a first processor core of the at least one processor core, so that the first processor core processes the to-be-processed data based on the first instruction (Gupta: Figures 1-3 and 5 elements 111, 152, 160, 227, 321, and 540, paragraphs 41-43, 46, 58, 75, and 96)(Each instruction block includes a header followed by a number of instructions to be processed (e.g. instruction block A includes seventy instructions), any of which reads upon the first instruction. The control unit allocates instruction blocks to processor cores for processing. For example, instructions in block A are assigned to a first processing core. The memory interface and interconnect send the instructions of the instruction block from the L2/main memory to the L1 instruction cache of the allocated first processing core for processing by the control unit.); and
sending a second instruction to a second processor core of the at least one processor core, so that the second processor core processes the to-be-processed data based on the second instruction, the first instruction being different from the second instruction (Gupta: Figures 1-3 and 5 elements 111, 152, 160, 227, 321, and 540, paragraphs 41-43, 46, 58, 75, and 96)(Each instruction block includes a header followed by a number of instructions to be processed (e.g. instruction block E includes one hundred twenty-eight instructions), any of which reads upon the second instruction. The control unit allocates instruction blocks to processor cores for processing. For example, instructions in block E are assigned to a second processing core. The memory interface and interconnect send the instructions of the instruction block from the L2/main memory to the L1 instruction cache of the allocated second processing core for processing by the control unit. It would have been obvious to one of ordinary skill in the art that a given instruction in instruction block A is different than a given instruction in instruction block E.).
As per claim 8:
Gupta and Hutchings disclosed the processor of claim 1, wherein the distributor is further configured to:
receive a set of data and a set of instructions to be processed by the processor (Gupta: Figure 1 elements 110, 120, and 140, paragraphs 41-43)(The core interconnect and memory interface (i.e. distributor) receives data and instructions from the L2 cache to distribute to the cores for instruction block processing.), wherein the configuration information indicates at least an association between the to-be-processed data in the set of data and the instructions in the set of instructions (Hutchings: Figure 3 element 320, paragraphs 61-64)(Gupta: Figure 1 element 120, paragraph 42)(The combination allows for the multiplexers/switches/routing components of the interconnect of Gupta to be configurable by receiving configuration data. The configuration allows for providing where the incoming/outgoing data/instructions are to be next sent to (i.e. association). Additionally, the instructions within the instruction block provide an association between instructions and to-be-processed data (i.e. data referred by the instructions themselves for processing, such as source operands).).
As per claim 9:
Claim 9 essentially recites the same limitations of claim 1. Therefore, claim 9 is rejected for the same reasons as claim 1.
As per claim 10:
The additional limitation(s) of claim 10 basically recite the additional limitation(s) of claim 2. Therefore, claim 10 is rejected for the same reason(s) as claim 2.
As per claim 11:
The additional limitation(s) of claim 11 basically recite the additional limitation(s) of claim 3. Therefore, claim 11 is rejected for the same reason(s) as claim 3.
As per claim 12:
The additional limitation(s) of claim 12 basically recite the additional limitation(s) of claim 4. Therefore, claim 12 is rejected for the same reason(s) as claim 4.
As per claim 13:
The additional limitation(s) of claim 13 basically recite the additional limitation(s) of claim 5. Therefore, claim 13 is rejected for the same reason(s) as claim 5.
As per claim 16:
The additional limitation(s) of claim 16 basically recite the additional limitation(s) of claim 8. Therefore, claim 16 is rejected for the same reason(s) as claim 8.
As per claim 17:
Claim 17 essentially recites the same limitations of claim 1. Therefore, claim 17 is rejected for the same reasons as claim 1.
As per claim 18:
The additional limitation(s) of claim 18 basically recite the additional limitation(s) of claim 2. Therefore, claim 18 is rejected for the same reason(s) as claim 2.
As per claim 19:
The additional limitation(s) of claim 19 basically recite the additional limitation(s) of claim 3. Therefore, claim 19 is rejected for the same reason(s) as claim 3.
As per claim 20:
The additional limitation(s) of claim 20 basically recite the additional limitation(s) of claim 4. Therefore, claim 20 is rejected for the same reason(s) as claim 4.
Claims 6-7 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Gupta et al. (U.S. 2018/0225124), in view of Hutchings et al. (U.S. 2007/0257702), in view of Official Notice.
As per claim 6:
Gupta and Hutchings disclosed the processor of claim 1, wherein the distributor is further configured to:
receive a processed result from a corresponding data cache of the at least one processor core, the processed result being obtained by respectively processing the received to-be-processed data based on the instruction by the at least one processor core (Gupta: Figures 1-2 and 5 elements 111, 120, 140, 152, 277, and 540, paragraphs 41-43, 68, and 96)(Store operations write execution results to the data cache. Official notice is given that operand data in data caches can be evicted to external shared memory for the advantage of freeing entries to be used on new operand data for processing. Thus, it would have been obvious to one of ordinary skill in the art to implement evicting processing results in the data cache to the external L2 cache. In view of the official notice, the interconnect and memory interface receive the evicted processing result data.).
As per claim 7:
Gupta and Hutchings disclosed the processor of claim 6, wherein distributing the to-be-processed data to the at least one processor core comprises:
distributing third data in the to-be-processed data to a first processor core of the at least one processor core for processing (Gupta: Figures 1-3 and 5 elements 111, 152, 160, 277, 321, and 540, paragraphs 41-43, 46, 68, 75, and 96)(Load instructions in the instruction block request operand data that is sent from the L2/main memory when not present in the data cache. The memory interface and interconnect send this operand data from the L2/main memory back to the data cache when requested. The control unit scheduler allocates instruction blocks to processing cores. For example, load instructions in instruction block A, assigned to a first processing core, send a first and a third set of operand data to the first processing core (e.g. first load is first data and second load is third data). It would have been obvious to one of ordinary skill in the art that instruction block A includes multiple load instructions.); and
in response to receiving a first result obtained by processing the third data from the first processor core, distributing fourth data in the to-be-processed data to the first processor core for processing, the third data being different from the fourth data (Gupta: Figures 1-3 and 5 elements 111, 152, 160, 277, 321, and 540, paragraphs 41-43, 46, 68, 75, and 96)(Load instructions in the instruction block request operand data that is sent from the L2/main memory when not present in the data cache. The memory interface and interconnect send this operand data from the L2/main memory back to the data cache when requested. The control unit scheduler allocates instruction blocks to processing cores. For example, load instructions in instruction block E, assigned to a second processing core, send a second and a fourth set of operand data to the second processing core (e.g. first load is second data and second load is fourth data). It would have been obvious to one of ordinary skill in the art that instruction block E includes multiple load instructions. It would have been obvious to one of ordinary skill in the art that different instruction blocks load different operand data for different processing operations.).
As per claim 14:
The additional limitation(s) of claim 14 basically recite the additional limitation(s) of claim 6. Therefore, claim 14 is rejected for the same reason(s) as claim 6.
As per claim 15:
The additional limitation(s) of claim 15 basically recite the additional limitation(s) of claim 7. Therefore, claim 15 is rejected for the same reason(s) as claim 7.
Response to Arguments
The arguments presented by Applicant in the response, received on 8/5/2025, are not considered persuasive.
Applicant argues for claims 1, 9, and 17:
“In Gupta, the processor cores are connected to each other via core interconnect 120. The memory interface 140 is used to connect to additional memory located on another integrated circuit besides the processor 100. The core interconnect 120 and the memory interface 140 are simply pathways used by the core to transmit with external memory and other cores. In paragraph [0102] of Gupta, the instruction block scheduler can assign which instruction block will execute on which processor core and when the instruction block will be executed.
Gupta at most discloses distributing the instructions to a specified processor core. Gupta is silent about an association between to-be-processed data and instructions and distributing respectively data and instruction according to configuration information while data and instructions are associated through configuration information.”
This argument is not found to be persuasive for the following reason. Gupta disclosed a core interconnect and memory interface (i.e. distributor) that receives control signals and instruction blocks (e.g. configuration information) to transmit to the processor cores. The instruction blocks themselves include a set of instructions to be processed by an assigned processor core. Each individual instruction provides an association between the data to be processed by the instruction (e.g. source operands, memory operands, etc.) and the instruction itself. Thus, Gupta reads upon the claim limitation at hand.
Applicant argues for claims 1, 9, and 17:
“In view of this, Applicant respectfully submits that Gupta at least fails to disclose or suggest the above underlined features. Other cited documents such as Hutchings also fail to disclose or suggest the above underlined features. Furthermore, the technical objectives and application scenarios of Gupta and Hutchings are fundamentally different and cannot teach the above underlined features. Therefore, taking the combined teachings of Gupta and Hutchings as a whole, it would not have been obvious before the effective filing date of the claimed invention to incorporate this feature (user) into the system of Gupta as taught by Hutchings.”
This argument is not found to be persuasive for the following reason. The combination with Hutchings additionally allows for the added configuration information of the routing multiplexers to provide an association between instructions and processed data from the L2 cache and main memory. This association is provided by the configuration data sending instructions and load data for a given instruction block to the same processor core. Thus, the combination reads upon the claimed limitation.
Conclusion
The following is text cited from 37 CFR 1.111(c): In amending in reply to a rejection of claims in an application or patent under reexamination, the applicant or patent owner must clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made. The applicant or patent owner must also show how the amendments avoid such references or objections.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACOB A. PETRANEK whose telephone number is (571)272-5988. The examiner can normally be reached on M-F 8:00-4:30.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jyoti Mehta can be reached on (571) 270-3995. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JACOB PETRANEK/Primary Examiner, Art Unit 2183