DETAILED ACTION
Claims 1-21 are pending.
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 7/17/2025 has been entered.
The Office acknowledges the following papers:
Claims and remarks filed on 7/17/2025.
Allowable Subject Matter
Claim 21 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
New Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (U.S. 2019/0171448) in view of Tran (U.S. 2022/0382547) and Official Notice.
As per claim 16:
Chen and Tran disclosed a method, comprising:
receiving an instruction for execution using a vector processor (Tran: Figure 1 elements 11 and 13, paragraphs 28-29)(Chen: Figures 1 and 3 elements 110, 115, and 300, paragraphs 22 and 31)(Tran disclosed an instruction cache that stores instructions that are sent to the decode/issue unit. The combination implements both elements in the processor of Chen to receive instructions for execution by the stream processors.);
testing for hazards in issuing the instruction using a plurality of hazard trackers (Tran: Figures 1, 3A, 7, and 9 elements 13, 19E, and 151, paragraphs 32-34, 37-39, 60-61, 64, and 69-71)(Chen: Figures 1 and 3 elements 110, 115, and 300, paragraphs 22 and 31)(Tran disclosed a decode/issue unit to check and resolve all possible conflicts, including data dependency, resource availability, and memory hazards (i.e. plurality of hazard trackers), prior to issuing an instruction. The combination implements the decode/issue unit of Tran into the processing system of Chen to detect and control for hazards prior to processing of matrix and FMA operations.), wherein the plurality of hazard trackers include a structure availability hazard tracker that detects structural hazards, a data available hazard tracker that detects data hazards, and a memory hazard tracker that detects memory hazards and each hazard tracker of the plurality of hazard trackers is configured to track and detect a different type of hazard (Tran: Figures 1, 3A, 7, and 9 elements 13, 19E, and 151, paragraphs 32-34, 37-39, 60-61, 64, and 69-71)(Chen: Figures 1 and 3 elements 110, 115, and 300, paragraphs 22 and 31)(Tran disclosed a decode/issue unit to check and resolve all possible conflicts, including data dependency (i.e. data available hazard), resource availability (i.e. structure availability), and memory hazards, prior to issuing an instruction. The combination implements the decode/issue unit of Tran into the processing system of Chen to detect and control for hazards prior to processing of matrix and FMA operations. The logic to track each hazard is different and the types of hazards are different.);
issuing the instruction in response to the plurality of hazard trackers agreeing that no hazards exist during the testing (Tran: Figures 1, 3A, 7, and 9 elements 13, 19E, and 151, paragraphs 32-34, 37-39, 60-61, 64, and 69-71)(Chen: Figures 1 and 3 elements 110, 115, and 300, paragraphs 22 and 31)(The combination implements the decode/issue unit of Tran into the processing system of Chen to detect and control for hazards prior to processing of matrix and FMA operations. Once all hazards are removed, the combination allows for issuing the matrix and FMA operations to their corresponding execution units.);
providing, to each vector processor lane of a plurality of vector processor lanes in the vector processor, control signals for the instruction in response to determining to issue the instruction (Tran: Figure 1 elements 11 and 13, paragraphs 28-29 and 32-33)(Chen: Figure 1 element 110 and 115, paragraphs 22 and 25)(The combination implements the decode/issue unit of Tran into the processing system of Chen. Official notice is given that SIMD instructions can be implemented on processing lanes and decoders generate control signals from received instructions for the advantage of increased performance and providing the required control to perform the instruction. Thus, it would have been obvious to one of ordinary skill in the art to implement SIMD instructions in Chen and to allow for control signals to be sent to each stream processor for parallel matrix multiplication processing.), wherein each vector processor lane performs operations on a subset of data in parallel (Chen: Figures 1 and 3 elements 115 and 300, paragraphs 22 and 31)(Each stream processor can perform parallel matrix multiplication on VGPR data, which is a subset of total VGPR data spread across all stream processors.); and
performing, using a plurality of execution units in each vector processor lane, the operations on the data in a plurality of register file banks within each vector processor lane (Chen: Figure 3 element 300, 308, 324, and 330, paragraphs 25 and 31-33)(The DOT4x4 and FMA execution units in each stream processor execute operations using vector data stored in the vector general purpose register file. Each stream processor includes a vector general purpose register file with multiple register banks. Separate input matrices can be stored in any register bank.).
The advantage of data dependency checks, resource availability checks, and memory instruction ordering checks is that they allow programs to execute properly without errors from executing instructions with known hazards. Thus, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date to implement the decode/issue unit of Tran into the processor of Chen for the above advantage.
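For illustration only, the issue-gating behavior mapped above (an instruction issues only when every hazard tracker agrees that no hazard of its type exists, and is otherwise held) can be sketched as follows. This is a minimal model with hypothetical names; it is not code from any cited reference.

```python
# Illustrative sketch (hypothetical names): each tracker watches one hazard
# type; an instruction issues only when all trackers report no hazard.
class HazardTracker:
    """Tracks one type of hazard (e.g. structural, data, or memory)."""
    def __init__(self, name):
        self.name = name
        self.active_hazards = set()  # resources currently in a hazard state

    def has_hazard(self, instruction):
        # A hazard exists if the instruction touches a tracked resource.
        return bool(self.active_hazards & instruction["resources"])

def try_issue(instruction, trackers):
    """Issue only if every tracker agrees no hazard exists; otherwise hold."""
    if any(t.has_hazard(instruction) for t in trackers):
        return "hold"
    return "issue"

trackers = [HazardTracker("structural"),
            HazardTracker("data"),
            HazardTracker("memory")]
inst = {"resources": {"r1", "alu0"}}
print(try_issue(inst, trackers))  # "issue" when no tracker reports a hazard
```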
As per claim 17:
Chen and Tran disclosed the method of claim 16, further comprising:
determining to hold the instruction in response to one hazard tracker of the plurality of hazard trackers identifying a hazard during the testing for hazards until the hazard is resolved (Tran: Figures 1, 3A, 7, and 9 elements 13, 19E, and 151, paragraphs 32-34, 37-39, 60-61, 64, and 69-71)(Chen: Figures 1 and 3 elements 110, 115, and 300, paragraphs 22 and 31)(Tran disclosed a decode/issue unit to check and resolve all possible conflicts, including data dependency, resource availability, and memory hazards (i.e. plurality of hazard trackers), prior to issuing an instruction. The combination implements the decode/issue unit of Tran into the processing system of Chen to detect and control for hazards prior to processing of matrix and FMA operations. While hazards exist, instructions are held in various queues until all of the hazards are removed. Once all hazards are removed, the combination allows for issuing the matrix and FMA operations to their corresponding execution units.).
As per claim 18:
Chen and Tran disclosed the method of claim 16, wherein performing, using the plurality of execution units, the operation further includes:
reading, by one execution unit of the plurality of execution units at a time, a portion of data in each register file bank in sequential order of the plurality of register file banks for a fixed number of cycles and performing an operation on the portion of data in each register file bank during the fixed number of cycles until the data is read from each of the plurality of register file banks (Chen: Figures 3 and 11 element 300, 308, 324, 330, and 1108, paragraphs 25, 31-33, and 52)(The DOT4x4 and FMA execution units in each stream processor execute operations using vector data stored in the vector general purpose register file. Each stream processor includes a vector general purpose register file with multiple register banks. Separate input matrices can be stored in any register bank. An embodiment allows for writing results back to the VGPR where source data was loaded from. It would have been obvious to one of ordinary skill in the art that separate matrix instructions can sequentially read source matrices from bank 0 to bank 3 and sequentially write destination matrices to bank 0 to bank 3. Each reading of source data and performing matrix multiplication is a fixed number of cycles.).
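For illustration only, the access pattern mapped above (one execution unit reads a portion of data from each register file bank in sequential order, operating on each portion for a fixed number of cycles, until every bank has been read) can be sketched as follows. This is a minimal model assuming four banks and a hypothetical fixed cycle count; it is not code from any cited reference.

```python
# Illustrative sketch (hypothetical names): read banks 0..3 in order,
# spending a fixed number of cycles per bank read-and-operate step.
CYCLES_PER_BANK = 4  # assumed fixed cycle count per bank

def process_banks(banks):
    """Read each bank sequentially and apply an operation to its portion."""
    results, total_cycles = [], 0
    for bank_id, portion in enumerate(banks):
        total_cycles += CYCLES_PER_BANK  # fixed cycles per bank
        results.append(sum(portion))     # stand-in for the matrix operation
    return results, total_cycles

banks = [[1, 2], [3, 4], [5, 6], [7, 8]]  # portions in banks 0 through 3
print(process_banks(banks))  # ([3, 7, 11, 15], 16)
```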
As per claim 19:
The additional limitation(s) of claim 19 substantially recite the additional limitation(s) of claim 3. Therefore, claim 19 is rejected for the same reason(s) as claim 3.
As per claim 20:
The additional limitation(s) of claim 20 substantially recite the additional limitation(s) of claim 6. Therefore, claim 20 is rejected for the same reason(s) as claim 6.
Claims 1-15 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (U.S. 2019/0171448) in view of Gonzalez et al. (U.S. 2019/0108031), Tran (U.S. 2022/0382547), and Official Notice.
As per claim 1:
Chen and Gonzalez disclosed a processor implemented on programmable hardware, comprising:
the plurality of vector processor lanes (Chen: Figures 1 and 3 elements 115 and 300, paragraphs 22 and 31)(Each stream processor is a processing lane that processes vector data from the vector general purpose register file.), wherein each vector processor lane of the plurality of vector processor lanes includes:
a vector register file with a plurality of register file banks to store data (Chen: Figure 3 elements 300 and 308, paragraphs 25 and 31)(Each stream processor includes a vector general purpose register file with multiple register banks.);
a load unit with a plurality of first in first out queues that loads the data into the plurality of register file banks (Gonzalez: Figures 2-3 elements 218 and 238-240, paragraphs 51-52 and 64-65)(Chen: Figure 3 element 318)(Chen disclosed an export unit connected to memory, but does not detail how operands are moved between the register file and memory. Gonzalez disclosed an LSU unit using load reorder queues to load data into the processor. The combination implements the LSU unit and load reorder queues into each stream processor for loading source data from memory to the VGPR files.);
a plurality of execution units configured to perform operations on the data in each of the plurality of register file banks (Chen: Figure 3 element 324 and 330, paragraphs 31-33)(The DOT4x4 and FMA execution units in each stream processor execute operations using vector data stored in the vector general purpose register file.) and execute the instructions from an instruction issue queue (Chen: Figures 1 and 3 elements 115 and 300, paragraphs 22 and 31)(Official notice is given that issue queues can be used for the advantage of holding instructions until they are ready to issue for execution. Thus, it would have been obvious to one of ordinary skill in the art to implement issue queues within the stream processors of Chen.); and
a store unit with a plurality of first in first out queues that reads the data from each of the plurality of register file banks (Gonzalez: Figures 2-3 elements 220 and 238-240, paragraphs 51-52 and 72-76)(Chen: Figure 3 element 318)(Chen disclosed an export unit connected to memory, but does not detail how operands are moved between the register file and memory. Gonzalez disclosed an LSU unit using store reorder queues to store data out of the processor. The combination implements the LSU unit and store reorder queues into each stream processor for storing execution results from the VGPR files to memory.).
The advantage of implementing LSU units and reorder load/store queues is that operand data can be brought into and out of processors as needed in an out-of-order fashion for increased memory access performance. Thus, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date to implement the LSU and reorder queues of Gonzalez into the stream processors of Chen for the above advantage.
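For illustration only, the load arrangement mapped above, with first-in-first-out queues delivering data into the register file banks, can be sketched as follows. This is a minimal model assuming one queue per bank (as recited in claims 10-11) with hypothetical names; it is not code from any cited reference.

```python
# Illustrative sketch (hypothetical names): a load unit with one FIFO queue
# per register file bank; draining preserves per-queue load order.
from collections import deque

NUM_BANKS = 4
load_queues = [deque() for _ in range(NUM_BANKS)]  # one FIFO per bank
banks = [[] for _ in range(NUM_BANKS)]             # register file banks

def enqueue_load(bank_id, value):
    load_queues[bank_id].append(value)             # FIFO order preserved

def drain():
    """Move all queued loads into their corresponding banks, oldest first."""
    for bank_id, queue in enumerate(load_queues):
        while queue:
            banks[bank_id].append(queue.popleft())

enqueue_load(0, 10)
enqueue_load(0, 20)
enqueue_load(3, 30)
drain()
print(banks)  # [[10, 20], [], [], [30]]
```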
Chen and Gonzalez failed to teach a vector controller configured to receive instructions for execution on the processor, test for hazards in issuing the instructions using a plurality of hazard trackers, and provide control signals to a plurality of vector lanes issuing the instructions in response to the plurality of hazard trackers agreeing that no hazards exist during the testing, wherein the plurality of hazard trackers include a structure availability hazard tracker that detects structural hazards, a data available hazard tracker that detects data hazards, and a memory hazard tracker that detects memory hazards and each hazard tracker of the plurality of hazard trackers is configured to track and detect a different type of hazard.
However, Tran combined with Chen and Gonzalez disclosed a vector controller configured to receive instructions for execution on the processor (Tran: Figure 1 elements 11 and 13, paragraphs 28-29 and 32-34)(Chen: Figures 1 and 3 elements 110, 115, and 300, paragraphs 22 and 31)(Tran disclosed a decode/issue unit to decode received instructions and to control dispatching and processing of the instruction. The combination implements the decode/issue unit of Tran into the processing system of Chen to detect and control for hazards prior to processing of matrix and FMA operations.), test for hazards in issuing the instructions using a plurality of hazard trackers (Tran: Figures 1, 3A, 7, and 9 elements 13, 19E, and 151, paragraphs 32-34, 37-39, 60-61, 64, and 69-71)(Chen: Figures 1 and 3 elements 110, 115, and 300, paragraphs 22 and 31)(Tran disclosed a decode/issue unit to check and resolve all possible conflicts, including data dependency, resource availability, and memory hazards (i.e. plurality of hazard trackers), prior to issuing an instruction. The combination implements the decode/issue unit of Tran into the processing system of Chen to detect and control for hazards prior to processing of matrix and FMA operations.), and provide control signals to a plurality of vector lanes issuing the instructions in response to the plurality of hazard trackers agreeing that no hazards exist during the testing (Tran: Figures 1, 3A, 7, and 9 elements 13, 19E, and 151, paragraphs 32-34, 37-39, 60-61, 64, and 69-71)(Chen: Figures 1 and 3 elements 110, 115, and 300, paragraphs 22 and 31)(The combination implements the decode/issue unit of Tran into the processing system of Chen to detect and control for hazards prior to processing of matrix and FMA operations. Once all hazards are removed, the combination allows for issuing the matrix and FMA operations to their corresponding execution units. 
Official notice is given that decoders generate control signals from received instructions for the advantage of providing the required control to perform the instruction. Thus, it would have been obvious to one of ordinary skill in the art to have the decoder of Tran, as added to the processor of Chen, generate control signals to send to the stream processors for execution.), wherein the plurality of hazard trackers include a structure availability hazard tracker that detects structural hazards, a data available hazard tracker that detects data hazards, and a memory hazard tracker that detects memory hazards and each hazard tracker of the plurality of hazard trackers is configured to track and detect a different type of hazard (Tran: Figures 1, 3A, 7, and 9 elements 13, 19E, and 151, paragraphs 32-34, 37-39, 60-61, 64, and 69-71)(Chen: Figures 1 and 3 elements 110, 115, and 300, paragraphs 22 and 31)(Tran disclosed a decode/issue unit to check and resolve all possible conflicts, including data dependency (i.e. data available hazard), resource availability (i.e. structure availability), and memory hazards, prior to issuing an instruction. The combination implements the decode/issue unit of Tran into the processing system of Chen to detect and control for hazards prior to processing of matrix and FMA operations. The logic to track each hazard is different and the types of hazards are different.).
The advantage of data dependency checks, resource availability checks, and memory instruction ordering checks is that they allow programs to execute properly without errors from executing instructions with known hazards. Thus, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date to implement the decode/issue unit of Tran into the processor of Chen for the above advantage.
As per claim 2:
Chen, Gonzalez, and Tran disclosed the processor of claim 1, wherein each vector processor lane of the plurality of vector processor lanes receives a subset of the data (Chen: Figures 1 and 3 elements 115 and 300, paragraphs 22 and 31)(Each stream processor can perform parallel matrix multiplication on VGPR data, which is a subset of total VGPR data spread across all stream processors.) and each vector processor lane performs a same operation on the subset of the data in parallel (Chen: Figure 1 elements 110 and 115, paragraphs 22 and 25)(Official notice is given that SIMD instructions can be implemented on processing lanes and decoded to generate control signals for the advantage of increased performance. Thus, it would have been obvious to one of ordinary skill in the art to implement SIMD instructions in Chen to allow for control signals to be sent to each stream processor for parallel matrix multiplication processing.).
As per claim 3:
Chen, Gonzalez, and Tran disclosed the processor of claim 1, wherein the control signals and an availability of the plurality of execution units determines an order for each execution unit of the plurality of execution units to read from the plurality of register file banks (Chen: Figure 1 elements 110 and 115, paragraphs 22, 25, and 28)(Official notice is given that SIMD instructions can be implemented on processing lanes and decoded to generate control signals for the advantage of increased performance. Thus, it would have been obvious to one of ordinary skill in the art to implement SIMD instructions in Chen to allow for control signals to be sent to each stream processor for parallel matrix multiplication processing. Execution unit availability determines when instructions are processed in each stream processor. For example, a stream processor executing a 32-cycle matrix op determines when reads for subsequent matrix ops occur.).
As per claim 4:
Chen, Gonzalez, and Tran disclosed the processor of claim 1, wherein each register file bank of the plurality of register file banks includes a portion of the data (Chen: Figure 3 element 308, paragraph 31) and each register file bank stores the portion of the data in sequential order (Chen: Figure 3 element 308, paragraph 31)(It would have been obvious to one of ordinary skill in the art that data elements can be either sequentially stored or sequentially accessed by instructions executed in a stream processor.).
As per claim 5:
Chen, Gonzalez, and Tran disclosed the processor of claim 1, wherein each execution unit includes a multiplexer to select a register file bank of the plurality of register file banks to read the data from during a cycle (Chen: Figure 3 element 312, 324, and 330, paragraph 31).
As per claim 6:
Chen, Gonzalez, and Tran disclosed the processor of claim 1, wherein the plurality of execution units read the data from the plurality of register file banks in a sequential order starting with a first register file bank and writes the data to the plurality of register file banks in the sequential order starting with the first register file bank (Chen: Figures 3 and 11 element 300, 308, 324, 330, and 1108, paragraphs 25, 31-33, and 52)(The DOT4x4 and FMA execution units in each stream processor execute operations using vector data stored in the vector general purpose register file. Each stream processor includes a vector general purpose register file with multiple register banks. Separate input matrices can be stored in any register bank. An embodiment allows for writing results back to the VGPR where source data was loaded from. It would have been obvious to one of ordinary skill in the art that separate matrix instructions can sequentially read source matrices from bank 0 to bank 3 and sequentially write destination matrices to bank 0 to bank 3.).
As per claim 7:
Chen, Gonzalez, and Tran disclosed the processor of claim 1, wherein a first execution unit of the plurality of execution units reads the data from the plurality of register file banks in a sequential order starting with a first register file bank at a first cycle (Chen: Figures 3 and 11 elements 300, 308, 324, 330, and 1108, paragraphs 25, 31-33, and 52)(The DOT4x4 (i.e. first execution unit) and FMA execution units in each stream processor execute operations using vector data stored in the vector general purpose register file. Each stream processor includes a vector general purpose register file with multiple register banks. Separate input matrices can be stored in any register bank.) and a subsequent execution unit of the plurality of execution units reads the data for a subsequent instruction from the plurality of register file banks in a sequential order starting with the first register file bank in response to the subsequent instruction being issued (Chen: Figures 3 and 11 elements 300, 308, 324, 330, and 1108, paragraphs 25, 31-33, and 52)(The DOT4x4 and FMA (i.e. subsequent) execution units in each stream processor execute operations using vector data stored in the vector general purpose register file. Each stream processor includes a vector general purpose register file with multiple register banks. Separate inputs can be stored in any register bank. It would have been obvious to one of ordinary skill in the art that separate matrix multiplication and FMA instructions store their source inputs in the same register bank.), and
wherein execution units of the plurality of execution units continue to read the data from the plurality of register file banks during subsequent cycles until an instruction stream is processed (Chen: Figures 3 and 11 element 300, 308, 324, 330, and 1108, paragraphs 25, 31-33, and 52)(The DOT4x4 and FMA execution units in each stream processor execute operations using vector data stored in the vector general purpose register file. Each stream processor includes a vector general purpose register file with multiple register banks. Separate input matrices can be stored in any register bank. An embodiment allows for writing results back to the VGPR where source data was loaded from. It would have been obvious to one of ordinary skill in the art that separate matrix and FMA instructions can sequentially read source matrices from bank 0 to bank 3.).
As per claim 8:
Chen, Gonzalez, and Tran disclosed the processor of claim 1, wherein each execution unit performs different operations on the data (Chen: Figure 3 elements 324 and 330, paragraphs 31-33).
As per claim 9:
Chen, Gonzalez, and Tran disclosed the processor of claim 1, wherein a subset of the execution units perform a same operation on the data (Chen: Figure 3 element 330A-H, paragraphs 31-33).
As per claim 10:
Chen, Gonzalez, and Tran disclosed the processor of claim 1, wherein a number of the plurality of first in first out queues of the load unit equals a number of the plurality of register file banks (Gonzalez: Figures 2-3 elements 218 and 238-240, paragraphs 51-52 and 64-65)(Chen: Figure 3 elements 304, 308, and 318, paragraphs 31 and 33)(The combination implements the LSU unit and load reorder queues into each stream processor for loading source data from memory to the VGPR files. Gonzalez disclosed 2 load buffer queues. Chen disclosed 4 banks in each register file and stated that each register file can include a different number of banks. It would have been obvious to one of ordinary skill in the art that the load buffer queues can be increased for performance benefits or the register file banks can be decreased for cost savings such that they are the same number. In addition, according to In re Rose (105 USPQ 237 (CCPA 1955)), changes in size or range do not confer patentability over the prior art.).
As per claim 11:
Chen, Gonzalez, and Tran disclosed the processor of claim 1, wherein a number of the plurality of first in first out queues of the store unit equals a number of the plurality of register file banks (Gonzalez: Figures 2-3 elements 220 and 238-240, paragraphs 51-52 and 72-76)(Chen: Figure 3 elements 304, 308, and 318, paragraphs 31 and 33)(The combination implements the LSU unit and store reorder queues into each stream processor for storing execution results from the VGPR files to memory. Gonzalez disclosed 2 store buffer queues. Chen disclosed 4 banks in each register file and stated that each register file can include a different number of banks. It would have been obvious to one of ordinary skill in the art that the store buffer queues can be increased for performance benefits or the register file banks can be decreased for cost savings such that they are the same number. In addition, according to In re Rose (105 USPQ 237 (CCPA 1955)), changes in size or range do not confer patentability over the prior art.).
As per claim 12:
Chen, Gonzalez, and Tran disclosed the processor of claim 1, wherein a number of the plurality of execution units is different from a number of the plurality of register file banks (Chen: Figure 3 elements 304, 308, 324, and 330, paragraphs 31-33).
As per claim 13:
Chen, Gonzalez, and Tran disclosed the processor of claim 1, wherein a number of plurality of register file banks is selected in response to a length of a vector (Chen: Figures 2-3 elements 202-204, 300, 308, 324, and 330, paragraphs 25 and 31-33)(Each stream processor includes a vector general purpose register file with multiple register banks. Separate input matrices can be stored in any register bank. A given register bank is read from based on the matrix multiplication selected source, which has a given vector length by default.) and a size of each register file bank is determined using the number of plurality of register file banks and the length of the vector (Chen: Figures 2-3 elements 202-204, 300, 308, 324, and 330, paragraphs 25 and 31-33)(Each stream processor includes a vector general purpose register file with multiple register banks. Separate input matrices can be stored in any register bank. It would have been obvious to one of ordinary skill in the art that the size of the register bank is based on the sizes of input source matrices.).
As per claim 14:
Chen, Gonzalez, and Tran disclosed the processor of claim 1, wherein a number of execution units is selected in response to types of operations needed for the instructions (Chen: Figures 3 and 11 element 300, 308, 324, 330, and 1108, paragraphs 25, 31-33, and 52)(The DOT4x4 and FMA execution units in each stream processor execute operations using vector data stored in the vector general purpose register file. The execution units are chosen based on instructions needed to be processed.) and a number of vector processor lanes is selected in response to performance requirements of the programmable hardware and available space on the programmable hardware (Chen: Figure 1 element 110, paragraph 22)(Official notice is given that vector processors can be implemented on FPGAs for the advantage of allowing faster chip build times. Thus, it would have been obvious to one of ordinary skill in the art to implement the processors of Chen on FPGAs. In view of the official notice, the number of stream processors added to the FPGA is based on both performance needs and space on the FPGA.).
As per claim 15:
Chen, Gonzalez, and Tran disclosed the processor of claim 1, wherein the programmable hardware is a field programmable gate array (FPGA) device and the processor is a vector processor implemented as an overlay on the FPGA device (Chen: Figure 1 element 110, paragraph 22)(Official notice is given that vector processors can be implemented on FPGAs for the advantage of allowing faster chip build times. Thus, it would have been obvious to one of ordinary skill in the art to implement the processors of Chen on FPGAs.).
Response to Arguments
The arguments presented by Applicant in the response, received on 7/17/2025, are not considered persuasive.
Applicant argues regarding claims 1 and 16:
“The Applicant respectfully submits that the cited portions of Tran do not disclose or suggest at least "testing for hazards in issuing the instruction using a plurality of hazard trackers, wherein the plurality of hazard trackers include a structure availability hazard tracker that detects structural hazards, a data available hazard tracker that detects data hazards, and a memory hazard tracker that detects memory hazards and each hazard tracker of the plurality of hazard trackers is configured to track and detect a different type of hazard; [and] issuing the instruction in response to the plurality of hazard trackers agreeing that no hazards exist during the testing," as recited in amended independent claim 16. Instead, in Tran, the decode/issue unit checks and resolves the possible conflicts before issuing the instruction.”
This argument is not found to be persuasive for the following reason. Tran disclosed checking for all possible conflicts as part of issuing instructions. Instructions can only be issued once all hazards have been correctly addressed. For example, an instruction that does not have data dependencies or memory hazards will still check for register port and execution unit availability as part of determining whether the instruction can be issued. Thus, Tran reads upon the claimed limitations.
Applicant argues regarding claims 1 and 17:
“The Applicant respectfully requests that the Examiner provide references supporting the teachings officially noticed, as well as provide the required motivation or suggestion to combine reference with the other art of record.”
This argument is partially found to be persuasive for the following reason. MPEP 2144.03 C states "To adequately traverse such a finding, an applicant must specifically point out the supposed errors in the examiner’s action, which would include stating why the noticed fact is not considered to be common knowledge or well-known in the art … A general allegation that the claims define a patentable invention without any reference to the examiner’s assertion of official notice would be inadequate." Applicant's response has not stated why the noticed facts are not considered to be well known in the art. Thus, a subset of the previous official notices taken are maintained. The previous official notice taken regarding structural hazards for claim 17 has been withdrawn and replaced with a prior art reference.
The remaining official notice in independent claim 1 has been restructured to detail what is very well known to one of ordinary skill in the art: that decoders produce control signals in processors. This fact can likely be found in any college-level text that serves as an introduction to computer architecture.
Conclusion
The following is text cited from 37 CFR 1.111(c): In amending in reply to a rejection of claims in an application or patent under reexamination, the applicant or patent owner must clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made. The applicant or patent owner must also show how the amendments avoid such references or objections.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACOB A. PETRANEK whose telephone number is (571)272-5988. The examiner can normally be reached on M-F 8:00-4:30.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jyoti Mehta can be reached on (571) 270-3995. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JACOB PETRANEK/Primary Examiner, Art Unit 2183