DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1, 11, and 12 have been amended.
Claims 6 and 7 have been cancelled.
Claims 18-21 have been added.
Claims 1-5 and 8-21 have been examined.
The specification and claim objections in the previous Office Action have been addressed and are withdrawn, except as otherwise indicated below.
The § 112 rejections in the previous Office Action have been addressed and are withdrawn.
Information Disclosure Statement
The applicant's submission of the Information Disclosure Statement dated September 18, 2025 is acknowledged by the examiner and the cited references have been considered in the examination of the claims now pending. A copy of the PTOL-1449 initialed and dated by the examiner is attached to the instant office action.
Specification
The disclosure is objected to because of the following informalities.
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
Appropriate correction is required. The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.
Claim Objections
Claims 18-21 are objected to because of the following informalities.
Claims 18 and 19 are improperly ordered. The claims fail to comply with MPEP § 608.01(n)(IV), which states, "A claim which depends from a dependent claim should not be separated therefrom by any claim which does not also depend from said dependent claim." Claims 18 and 19 depend from dependent claim 2 and are separated therefrom by claims 3-5, which do not depend from claim 2.
The claim numbering of claims 19 and 20 is incorrect. There are two claims numbered 19. This makes it impossible to determine the dependency of claim 20. For purposes of examination, the second claim 19 is considered claim 20 and claim 20 is considered claim 21.
Claim 21 recites, at lines 4-5, “a fourth plurality of instructions that containing a greater number of instructions.” This appears to be a typographical error. Applicant may have intended, “a fourth plurality of instructions that contains a greater number of instructions.”
Claims 19 and 20 are objected to as depending from objected-to base claims and failing to remedy the deficiencies of those claims. Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5, 8-10, 12, 13, 16, 17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over US Publication No. 2007/0113059 by Tran (hereinafter referred to as “Tran”) in view of US Publication No. 2021/0208891 by Wen et al. (hereinafter referred to as “Wen”).
Regarding claim 1, Tran discloses:
a processor comprising: a branch target buffer (BTB) …the BTB including a plurality of BTB entries addressable by an entry address …the loop type is one of a plurality of loop types that are classified as a function of number of instructions in the loop (Tran discloses, at Figure 2 and related description, a BTB, which discloses a plurality of entries addressable by an entry address. Tran also discloses, at ¶ [0036], determining if a loop fits in an instruction queue, which discloses a plurality of loop types classified as a function of a number of instructions, i.e., those that have few enough instructions to fit and those that have too many instructions to fit.);
a first instruction queue that processes a first plurality of iterations of the loop in response to the loop being classified as a first loop type (Tran discloses, at Figure 2 and related description, an instruction queue that stores instructions of a loop if the loop fits therein, which discloses processing a plurality of iterations of the loop.); and
a second instruction queue, in a stage of a pipeline of the processor that precedes the first instruction queue, that processes a second plurality of iterations of the loop in response to the loop being classified as a second loop type, wherein the second plurality of instructions comprises more instructions than the first plurality of instructions, and wherein the second instruction queue provides instructions to the first instruction queue (Tran discloses, at Figure 2 and related description, an instruction cache that stores loop instructions when the loop includes too many instructions to fit completely in the instruction queue, that precedes the instruction queue in the pipeline, and that provides instructions to the instruction queue, which discloses a second instruction queue that processes a plurality of iterations of the loop. As disclosed in Figure 1 and related description, the instruction cache, i.e., second instruction queue, stores more instructions than the instruction queue, i.e., first instruction queue.); and
wherein the second instruction queue receives instructions from the instruction cache or external memory, and wherein the first instruction queue provides instructions to an instruction decode unit. (Tran discloses, at Figure 2 and related description, the instruction cache receives instructions from memory and provides instructions to the instruction queue, which provides the instructions to the decode unit, which discloses wherein the second instruction queue receives instructions from the instruction cache or external memory, and wherein the first instruction queue provides instructions to an instruction decode unit.).
Tran does not explicitly disclose the aforementioned BTB stores a predicted loop type of a loop and each of the aforementioned BTB entries includes a branch type comprising a loop type and a loop count.
However, Tran discloses, at ¶ [0022], the branch predictor stores a loop count. It would have been obvious to store the loop count in BTB entries in order to enable simple access to the count values.
Also in the same field of endeavor (e.g., loops) Wen discloses:
storing an identifier that indicates whether a loop is a long or short loop (Wen discloses, at ¶ [0070], storing an identifier indicating whether a loop is a long loop or a short loop, which discloses a predicted loop type.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Tran’s BTB to include storing the loop type identifier disclosed by Wen in order to enable simple access to the loop type.
Regarding claim 2, Tran, as modified, discloses the elements of claim 1, as discussed above. Tran also discloses:
the first loop type is classified as a function of a number of instructions that fit into the first instruction queue (Tran discloses, at ¶ [0036], storing a loop in an instruction queue if the loop fits therein, which discloses classifying the loop as a function of the number of instructions.).
Regarding claim 3, Tran, as modified, discloses the elements of claim 1, as discussed above. Tran also discloses:
the second instruction queue comprises: a plurality of instruction cache line addresses and wherein the second loop type is classified as a function of a number of cache lines required for loop instructions; and wherein the number of cache lines fit into the second instruction queue (Tran discloses, at ¶ [0036], storing loops that have too many instructions for the instruction queue in the instruction cache, which discloses classifying as a function of the number of cache lines and that the number of cache lines fits in the second instruction queue.).
Regarding claim 5, Tran, as modified, discloses the elements of claim 1, as discussed above. Tran also discloses:
instructions of the loop form a basic block (Tran discloses, at Figure 1 and related description, an instruction set, which discloses a basic block.).
Regarding claim 8, Tran, as modified, discloses the elements of claim 1, as discussed above. Tran also discloses:
the loop type of the loop corresponds to a nested loop and wherein the first instruction queue processes iterations of an inner loop of the loop and …processes iterations of an outer loop of the loop (Tran discloses, at Figure 4 and related description, processing iterations of an inner loop from a first instruction queue and processing iterations of an outer loop, which discloses a nested loop.).
Tran does not explicitly disclose the aforementioned outer loop iterations are processed from the second instruction queue. However, it would have been obvious to modify Tran such that the instruction cache, i.e., the second queue, was used for iterations of the outer loop. Since selection of the queue is based on size of the loop, utilizing the second queue would enable larger loops to be used, thereby increasing flexibility.
Regarding claim 9, Tran, as modified, discloses the elements of claim 8, as discussed above. Tran also discloses:
…identify entry into the inner loop and exiting from the inner loop of the nested loop; and a branch target address field that identifies an entry address for the inner loop (Tran discloses, at Figure 4 and related description, processing nested loops, which discloses identifying entry, exit, and a branch target address for the inner loop.).
Tran does not explicitly disclose the aforementioned information is stored in each entry of the BTB. However, Tran discloses, at ¶ [0022], using bits, i.e., encoding, to represent loop indications and counters. It would have been obvious to store the information as encoded bits in BTB entries in order to enable simple access to the count values.
Regarding claim 10, Tran, as modified, discloses the elements of claim 1, as discussed above. Tran also discloses:
a branch execution unit detects a loop type based on the number of instructions in the loop and the loop count to write to an entry in the BTB (Tran discloses, at ¶ [0036], an instruction queue that stores instructions of a loop if the loop fits therein, which discloses detecting loop type based on number of instructions in the loop. Tran also discloses, at Figure 3 and related description, the loop count and the branch prediction module writing an entry to the BTB.).
Regarding claim 12, Tran discloses:
…a method that is executable by a processor, the method detecting a loop type in a series of instructions and generating a predicted loop count, the method comprising: identifying in the series of instructions, a basic block of instructions and classifying the basic block of instructions as a loop (Tran discloses, at Figure 2 and related description, detecting a loop and loop count, which discloses identifying a basic block of instructions and classifying the basic block as a loop. Tran also discloses, at ¶ [0036], determining if a loop fits in an instruction queue, which discloses a plurality of loop types classified as a function of a number of instructions, i.e., those that have few enough instructions to fit and those that have too many instructions to fit.);
classifying the loop into one of a plurality of loop types based on a number of instructions in the loop (Tran discloses, at ¶ [0036], determining if a loop fits in an instruction queue, which discloses a plurality of loop types classified as a function of a number of instructions, i.e., those that have few enough instructions to fit and those that have too many instructions to fit.);
sending the basic block of instructions to a first instruction queue that stores a first number of instructions if the loop type comprises a first loop type from a second instruction queue (Tran discloses, at Figure 2 and related description, storing a loop in an instruction queue if the loop fits therein, where the instructions are received from an instruction cache, which discloses sending the basic block of instructions to a first instruction queue that stores a first number of instructions if the loop type comprises a first loop type from a second instruction queue.); and
sending the basic block of instructions to a second instruction queue that stores a second number of instructions, wherein the second number of instructions is greater than the first number of instructions, if the loop type comprises a second loop type, and wherein the second instruction queue provides the basic block of instructions to the first instruction queue (Tran discloses, at Figure 2 and related description, storing a loop in an instruction cache if the loop is too big to fit in the instruction queue and providing instructions from the instruction cache to the instruction queue and, as disclosed at Figure 1, that the instruction cache stores more instructions than the instruction queue, which discloses sending the basic block of instructions to a second instruction queue that stores a second number of instructions, wherein the second number of instructions is greater than the first number of instructions, if the loop type comprises a second loop type, and wherein the second instruction queue provides the basic block of instructions to the first instruction queue.); and
responding to receiving the basic block of instructions in the second instruction queue by processing the basic block of instructions and providing instructions to the first instruction queue which is in a subsequent stage of the processor (Tran discloses, at Figure 2 and related description, providing instructions from the instruction cache to the instruction queue, which discloses responding to receiving the basic block of instructions in the second instruction queue by processing the basic block of instructions and providing instructions to the first instruction queue which is in a subsequent stage of the processor.).
Tran does not explicitly disclose a computer program product stored on a non-transitory computer readable storage medium and including computer system instructions.
However, in the same field of endeavor (e.g., loops) Wen discloses:
a computer program product stored on a non-transitory computer readable storage medium and including computer system instructions for causing a computer system to execute (Wen discloses, at claim 21, a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the method.)
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Tran to include the computer program product implementation disclosed by Wen in order to increase the value of the system.
Regarding claim 13, Tran, as modified, discloses the elements of claim 12, as discussed above. Tran also discloses:
generating the predicted loop count for the loop; and generating a program counter calculation as a function of the predicted loop count (Tran discloses, at Figure 3 and related description, generating a predicted loop count and a PC calculation based thereon.).
Regarding claim 16, Tran, as modified, discloses the elements of claim 12, as discussed above. Tran also discloses:
the basic block of instructions comprises an inner loop, wherein the method further comprises: predicting the inner loop; classifying the inner loop as the first loop type; and classifying an outer loop of the loop … (Tran discloses, at Figure 4 and related description, processing iterations of an inner loop from a first instruction queue and processing iterations of an outer loop, which discloses a nested loop.).
Tran does not explicitly disclose the aforementioned outer loop iterations are the second type. However, it would have been obvious to modify Tran such that the outer loop was of the second type. Since classification is based on size of the loop, utilizing the loop classified as the second type would enable larger loops to be used, thereby increasing flexibility.
Regarding claim 17, Tran, as modified, discloses the elements of claim 16, as discussed above. Tran also discloses:
…cause the BTB to use a target address field of the BTB to access a basic block in the BTB that comprises the inner loop and access another basic block in the BTB to exit the inner loop (Tran discloses, at Figure 4 and related description, processing nested loops, which discloses identifying entry, exit, and a branch target address for the inner loop.).
Tran does not explicitly disclose writing prediction bits associated with the inner loop to an entry of a branch target buffer (BTB). However, Tran discloses, at ¶ [0022], using bits, i.e., encoding, to represent loop indications and counters. It would have been obvious to store the information as encoded bits in BTB entries in order to enable simple access to the count values.
Regarding claim 20, Tran, as modified, discloses the elements of claim 1, as discussed above. Tran also discloses:
the first plurality of instructions …an instruction decode unit of the processor (Tran discloses, at Figure 2 and related description, the first plurality of instructions and a decode unit.); and
the second plurality of instructions fit within an instruction cache queue of the processor (Tran discloses, at Figure 2 and related description, the second plurality of instructions fit within an instruction cache of the processor.).
Tran does not explicitly disclose the aforementioned plurality of instructions fit within the decode unit. However, the sizing of the decode unit is an obvious design choice based on well-known tradeoffs, such as chip area versus capacity. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Tran such that the decode unit was large enough to include a particular number of instructions.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Tran in view of Wen in view of US Publication No. 2016/0092230 by Chen et al. (hereinafter referred to as “Chen”).
Regarding claim 4, Tran, as modified, discloses the elements of claim 1, as discussed above. Tran also discloses:
an instruction issue unit that dispatches instructions to one or more execution queues (Tran discloses, at Figure 2 and related description, a decode unit that provides instructions to an instruction execution module, which discloses dispatching instruction to execution queues.);
a branch execution unit that detects the loop and …stores for the loop…the branch type…the loop count with the entry address for the loop to the BTB, the branch execution unit …tracks branch predictions including predicting loops (Tran discloses, at Figure 2 and related description, a branch prediction module and loop detection unit, which discloses storing an entry address and tracking predictions including predicting loops. Tran also discloses, at Figure 4 and related description, storing information indicating whether a loop is an inner loop or outer loop, which discloses a branch type. Tran also discloses, at ¶ [0022], the branch predictor storing a loop count.); and
tracks a predicted loop count in the branch execution unit and the instruction issue unit for instruction address calculation (Tran discloses, at Figure 2 and related description, using the program counter and loop count to identify and predict loops, which discloses tracking a loop count and using it for address calculation.).
Tran does not explicitly disclose the aforementioned branch execution unit generates for the loop the predicted loop type, stores the loop type, and comprises a branch prediction queue.
However, in the same field of endeavor (e.g., loops) Wen discloses:
generating and storing an identifier that indicates whether a loop is a long or short loop (Wen discloses, at ¶ [0070], generating and storing an identifier indicating whether a loop is a long loop or a short loop, which discloses generating and storing a predicted loop type.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Tran to include generating and storing the loop type identifier, as disclosed by Wen, in order to enable simple access to the loop type.
Also in the same field of endeavor (e.g., loops) Chen discloses:
a branch prediction queue (Chen discloses, at Figure 1 and related description, a branch history table, which discloses a branch prediction queue.)
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Tran to include a branch prediction queue, as disclosed by Chen, to improve performance by ensuring accurate branch prediction.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Tran in view of Wen in view of Chen in view of US Patent No. 6,591,359 by Hass et al. (hereinafter referred to as “Hass”).
Regarding claim 11, Tran discloses:
a processor comprising: a branch execution unit that identifies a loop and classifies the loop in accordance with a number of instructions in the loop into one of a plurality of loop types (Tran discloses, at Figure 2 and related description, a branch prediction module, which discloses a branch execution unit. Tran also discloses, at ¶ [0036], determining if a loop fits in an instruction queue, which discloses a plurality of loop types classified as a function of a number of instructions.);
a branch target buffer (BTB), including a plurality of BTB entries addressable by an entry address, the BTB receiving from the branch execution unit, an entry address for the loop… (Tran discloses, at Figure 2 and related description, a BTB, which discloses a plurality of entries addressable by an entry address and receiving entry addresses.);
…tracks all branch predictions and tracks the predicted loop count in the branch execution unit and the instruction issue unit for program counter calculation (Tran discloses, at Figure 2 and related description, using the program counter and loop count to identify and predict loops, which discloses tracking a loop count and using it for address calculation.);
an instruction queue that processes a plurality of iterations of the loop based on a first loop type and wherein the first loop type is based on the number of instructions that fit into the instruction queue …and wherein the loop instructions from the plurality of loop iterations can be read and sent to a next pipeline stage of the processor that stores a fewer number of instructions than a preceding stage (Tran discloses, at Figures 1 and 2 and related description, an instruction queue that stores instructions of a loop if the loop fits therein and that stores fewer instructions than the instruction cache at the preceding stage, which discloses reading and sending loop instructions to a next pipeline stage.); and
an instruction address queue that processes the plurality of iterations of the loop based on a second loop type and wherein the instruction address queue comprises one or more instruction cache line addresses and wherein the second loop type is based on a number of cache lines required for loop instructions and wherein the number of cache lines fit into the instruction address queue and wherein the instruction cache line addresses in the loop are part of the instruction address queue …and wherein the cache line addresses from the plurality of loop iterations can be read and sent to the next pipeline stage of the processor (Tran discloses, at ¶ [0036], an instruction cache that stores loop instructions when the loop includes too many instructions to fit completely in the instruction queue, which discloses an instruction address queue that processes a plurality of iterations of the loop and that comprises cache lines and reading and sending the instructions to the next pipeline stage.).
Tran does not explicitly disclose the aforementioned BTB stores a loop type for the loop, and a predicted loop count for the loop, a branch prediction queue, the loop instructions are virtually unrolled, and the cache line addresses in the loop are virtually unrolled.
However, Tran discloses, at ¶ [0022], the branch predictor stores a loop count. It would have been obvious to store the loop count in BTB entries in order to enable simple access to the count values.
Also in the same field of endeavor (e.g., loops) Wen discloses:
storing an identifier that indicates whether a loop is a long or short loop (Wen discloses, at ¶ [0070], storing an identifier indicating whether a loop is a long loop or a short loop, which discloses a predicted loop type.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Tran’s BTB to include storing the loop type identifier disclosed by Wen in order to enable simple access to the loop type.
Also in the same field of endeavor (e.g., loops) Chen discloses:
a branch prediction queue (Chen discloses, at Figure 1 and related description, a branch history table, which discloses a branch prediction queue.)
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Tran to include a branch prediction queue, as disclosed by Chen, to improve performance by ensuring accurate branch prediction.
Also in the same field of endeavor (e.g., loops) Hass discloses:
virtually unrolling loops (Hass discloses, at col. 4, lines 12-13, virtual unrolling of loops.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Tran to include virtual unrolling, as disclosed by Hass, in order to enable faster execution of loops. See Hass, col. 1, lines 31-33.
Claims 14, 15, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Tran in view of Wen in view of Hass.
Regarding claim 14, Tran, as modified, discloses the elements of claim 12, as discussed above. Tran also discloses:
if the loop comprises the first loop type …sending instructions of the basic block of instructions from a plurality of iterations of the loop to a next pipeline stage (Tran discloses, at Figure 2 and related description, sending instructions to the next stage in the pipeline.)
Tran does not explicitly disclose virtually unrolling instructions.
However, in the same field of endeavor (e.g., loops) Hass discloses:
virtually unrolling loops (Hass discloses, at col. 4, lines 12-13, virtual unrolling of loops.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Tran to include virtual unrolling, as disclosed by Hass, in order to enable faster execution of loops. See Hass, col. 1, lines 31-33.
Regarding claim 15, Tran, as modified, discloses the elements of claim 14, as discussed above. Tran also discloses:
writing sequential instructions after the loop into the first instruction queue (Tran discloses, at ¶ [0022], processing the next sequential instruction after the loop, which discloses writing the instruction into the first instruction queue.).
Regarding claim 18, Tran, as modified, discloses the elements of claim 2, as discussed above. Tran also discloses:
the first instruction queue and the second instruction queue each operate to …[send] instructions in the corresponding plurality of iterations to a next pipeline stage of the processor (Tran discloses, at Figure 2 and related description, sending instructions to the next stage in the pipeline.)
Tran does not explicitly disclose virtually unrolling instructions.
However, in the same field of endeavor (e.g., loops) Hass discloses:
virtually unrolling loops (Hass discloses, at col. 4, lines 12-13, virtual unrolling of loops.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Tran to include virtual unrolling, as disclosed by Hass, in order to enable faster execution of loops. See Hass, col. 1, lines 31-33.
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Tran in view of Wen in view of Hass in view of US Publication No. 2007/0113058 by Tran et al. (hereinafter referred to as “Tran ‘058”).
Regarding claim 19, Tran, as modified, discloses the elements of claim 18, as discussed above. Tran also discloses:
one or more of the first instruction queue and the second instruction queue process sequential instructions after the loop …(Tran discloses, at ¶ [0022], processing the next sequential instruction after the loop.).
Tran does not explicitly disclose the aforementioned executing is performed concurrently during execution of the loop.
However, in the same field of endeavor (e.g., loops) Tran ‘058 discloses:
executing following instructions during execution of a loop (Tran ‘058 discloses, at ¶ [0028], executing next sequential instructions concurrently with execution of the loop.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Tran to include concurrent execution as disclosed by Tran ‘058 in order to improve performance by increasing parallelism.
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Tran in view of Wen in view of US Publication No. 2020/0125498 by Betts et al. (previously cited and hereinafter referred to as “Betts”) in view of US Publication No. 2015/0089141 by Chen et al. (previously cited and hereinafter referred to as “Chen ‘141”).
Regarding claim 21, Tran, as modified, discloses the elements of claim 20, as discussed above. Tran does not explicitly disclose a third instruction queue that processes a third plurality of instructions that fit within an instruction tag queue of the processor; and a fourth instruction queue that processes a fourth plurality of instructions that containing a greater number of instructions than capacity of the instruction tag queue; wherein the fourth instruction queue provides instructions to the third instruction queue, and the third instruction queue provides instructions to the second instruction queue.
However, in the same field of endeavor (e.g., loops) Betts discloses:
a third instruction queue that processes a third plurality of instructions…; and a fourth instruction queue that processes a fourth plurality of instructions that containing a greater number of instructions than a capacity…; wherein the fourth instruction queue provides instructions to the third instruction queue, and the third instruction queue provides instructions to the second instruction queue (Betts discloses, at Figure 1 and related description, a cache hierarchy that discloses multiple levels of cache beyond two where the levels provide instructions to one another, which discloses third and fourth queues that process third and fourth plurality of instructions including more instructions than a particular number of instructions and the fourth instruction queue provides instructions to the third instruction queue, and the third instruction queue provides instructions to the second instruction queue.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Tran to include additional levels of memory, as disclosed by Betts, in order to provide quicker access to content than if the content were stored in main memory according to well-known principles of cache operation involving tradeoffs between size and speed. See Betts, ¶ [0002].
However, in the same field of endeavor (e.g., loops), Chen ‘141 discloses:
instruction tag storage that stores a particular number of instructions (Chen ‘141 discloses, at Figure 3 and related description, tag storage, which discloses storing a particular number of instructions.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Tran to include instruction tag storage, as disclosed by Chen ‘141, in order to improve performance by speeding up access to storage, which is a well-known purpose of instruction tags.
Response to Arguments
On page 9 of the response filed September 18, 2025 (“response”), the Applicant argues that the title has been amended to better reflect the claimed subject matter.
Though fully considered, the Examiner respectfully maintains that the title is insufficiently descriptive of the claimed invention. The amended title is, “PROCESSOR WITH LOOP BUFFERS OF DIFFERENT SIZES AND CONCURRENT AND SEQUENTIAL PIPELINED INSTRUCTION QUEUES.” The independent claims do not recite loop buffers. Also, the meaning of “concurrent” queues is unclear. The Examiner proposes for consideration the following, which reflects the Examiner’s understanding of inventive aspects of the claimed invention: “Classifying loops based on number of instructions and storing different classes of loops in different queues.”
On page 10 of the response, the Applicant argues, “the art of record fails to disclose the staged queues of claim 1. Independent claim 12 recites a first instruction queue and a second instruction queue and the operation of ‘responding to receiving the basic block of instructions in the second instruction queue by processing the basic block of instructions and providing instructions to the first instruction queue which is in a subsequent stage of the processor.’ This operation also recites a staged queue operation which is not disclosed in the art of record. Independent claim 11 recites ‘wherein the loop instructions from the plurality of loop iterations can be read and sent to a next pipeline stage of the processor that stores a fewer number of instructions than a preceding stage.’ This novel arrangement is also not taught or suggested by the art of record.”
Though fully considered, the Examiner respectfully disagrees. Tran discloses, e.g., at Figure 2 and related description, a pipeline. Different stages of the pipeline include data storage, i.e., queues. Accordingly, the Applicant’s arguments are deemed unpersuasive.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAWN DOMAN whose telephone number is (571)270-5677. The examiner can normally be reached on Monday through Friday 8:30am-6pm Eastern Time.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jyoti Mehta can be reached on 571-270-3995. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHAWN DOMAN/
Primary Examiner, Art Unit 2183