DETAILED ACTION
This action is responsive to the application filed on 11/29/2023. Claims 1-3 are pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Drawings
Figs. 1-6 are objected to because they fail to comply with 37 CFR 1.84(l), (m), and (p), which require that all drawings be made by a process which will give them satisfactory reproduction characteristics. Every line, number, and letter must be durable, clean, solid black (except for color drawings), sufficiently dense and dark, and uniformly thick and well-defined. The weight of all lines and letters must be heavy enough to permit adequate reproduction. This requirement applies to all lines however fine, to shading, and to lines representing cut surfaces in sectional views.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 3 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
In regards to claim 3, line 12, the limitation reciting “the system” lacks clarity because it lacks proper antecedent basis; there is no prior recitation of “a system”. The examiner suggests amending line 1 of claim 3 to state “A system of a direct jump comprising…” to correct the issue.
In regards to claim 3, line 15, the limitation reciting “the processor” lacks clarity because it lacks proper antecedent basis; there is no prior recitation of “a processor”.
In regards to claim 3, line 16, the limitation reciting “the second level pipeline stage” lacks clarity because it lacks proper antecedent basis; there is no prior recitation of “a second level pipeline stage”. The examiner suggests amending the limitation to state “a second level pipeline stage”.
In regards to claim 3, line 18, the limitation reciting “the processor” lacks clarity because it lacks proper antecedent basis; there is no prior recitation of “a processor”.
In regards to claim 3, line 23, the limitation reciting “the instruction-fetching source” lacks clarity because there is no prior recitation of “an instruction-fetching source”. The examiner believes the applicant is referring to “a next instruction-fetching source” of claim 3, lines 21-22, and for purposes of examination will interpret the limitation as such. The examiner further suggests amending line 23 to state “the next instruction-fetching source” to correct the issue.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang, CN 109783143 (cited on the IDS filed on 11/29/2023; note the examiner has attached a machine translation of this document, which is used in the citations of the rejections below), in view of Nagao, PGPUB No. 2014/0019722, and further in view of Ishii, PGPUB No. 2020/0310811.
The examiner has provided two different 103 rejections below in regards to claim 1. The first rejection interprets the claim under broadest reasonable interpretation, in which a method claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the condition(s) precedent are not met (See MPEP 2111.04). Thus, in the case of claim 1 which states “…step 2: determining whether a current instruction of the instructions fetched from a system cache is a branch instruction; if the current instruction is not a branch instruction, sequentially fetching instructions in order; if the current instruction is a branch instruction, then proceeding to step 3; step 3: determining whether the fetched current branch instruction hits a branch target instruction trace cache; if not hitting, taking a jumping address of the branch instruction as a next instruction-fetching address, and establishing a corresponding ahead predictable branch instruction trace cache entry; if hitting, proceeding to step 4; step 4: fetching a next instruction to be executed directly from the branch target instruction trace cache, and predicting a jump of the next instruction to be executed according to the ahead branch instruction trace cache prediction table; if the next instruction to be executed is a branch instruction and a jump is predicted, taking a jumping address of a target instruction of the branch instruction stored in the branch target instruction trace cache as a next instruction-fetching address; if the next instruction to be executed is not a branch instruction or a jump is not predicted, taking a sequential address of the target instruction of the branch instruction stored in the branch target instruction trace cache as the next instruction-fetching address”, all limitations of steps 3-4 are not required if it is determined at step 2 that the current instruction is not a branch instruction. 
Therefore, the first interpretation rejects the claim under the condition that the current instruction is not a branch instruction.
The second claim interpretation rejects all limitations of claim 1, despite the claim limitations not being required based on contingency, in light of compact prosecution.
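For illustration of the contingent structure discussed above, the control flow of claim 1, steps 2-4, may be sketched as follows. This is a hypothetical sketch only: all names, the dictionary-based cache model, and the assumed 4-byte instruction spacing are assumptions of this illustration and are not drawn from the claims or the cited references.

```python
from dataclasses import dataclass

@dataclass
class Instr:
    address: int
    is_branch: bool
    jump_address: int = 0  # meaningful only for branch instructions

@dataclass
class BTICEntry:
    target_is_branch: bool   # is the stored target instruction itself a branch?
    seq_address: int         # sequential address of the target instruction
    jump_address: int        # jumping address of the target instruction

def next_fetch_address(current, btic, ahead_predicts_jump, seq_addr):
    """Return the next instruction-fetching address per steps 2-4 of the sketch."""
    # Step 2: a non-branch instruction short-circuits the method here, so the
    # contingent steps 3-4 are never performed.
    if not current.is_branch:
        return seq_addr
    # Step 3: on a trace-cache miss, take the branch's jumping address and
    # establish a new (here, placeholder) entry for the next encounter.
    entry = btic.get(current.address)
    if entry is None:
        btic[current.address] = BTICEntry(
            target_is_branch=False,
            seq_address=current.jump_address + 4,  # assumed instruction width
            jump_address=0,
        )
        return current.jump_address
    # Step 4: the next instruction comes from the trace cache; the ahead
    # prediction table chooses between its jumping and sequential addresses.
    if entry.target_is_branch and ahead_predicts_jump(entry):
        return entry.jump_address
    return entry.seq_address
```

Under the first interpretation above, only the `if not current.is_branch` path is exercised; the remainder of the function corresponds to the contingent steps.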
In regards to claim 1, Zhang discloses A method for ahead predicting a direct jump ([0049-0053]: wherein look-ahead or advanced predicting of a direct jump is performed), comprising the steps of: step 1: accessing a prediction table based on branch history information and a branch target instruction trace cache ([0049, 0051, 0053 and 0055]: wherein a prediction table (interpreted as the combination of a current branch prediction table and a look-ahead/advanced prediction table) based on branch history information (numerical values indicating whether branch jumps or not based on previous branch execution) and a branch instruction tracking cache are accessed (also see [0064] for clarity regarding branch history information as it states that updates to the prediction table are made after branch execution, thus the prediction table stores previous or historical branch information)) wherein the prediction table based on the branch history information comprises a current branch prediction table and an ahead branch instruction trace cache prediction table ([0049, 0051, 0053 and 0055]: wherein the prediction table based on branch history information (numerical values indicating whether branch jumps or not based on previous branch execution) comprises a current branch prediction table and a look-ahead/advanced prediction table (e.g. 
ahead branch instruction trace cache prediction table) (also see [0064] for clarity regarding branch history information as it states that updates to the prediction table are made after branch execution, thus the prediction table stores previous or historical branch information)) and fetching contents of the ahead branch instruction trace cache prediction table and a branch target instruction trace cache and instructions in a system memory ([0049-0055, 0058 and 0065]: wherein numerical values, indicating whether branch jump target instructions will jump or not, of the look-ahead/advanced branch prediction table and contents of the branch instruction tracking cache are obtained (i.e. fetched). Wherein the processor fetches program code from a system memory) step 2: determining whether a current instruction of the instructions fetched from a system memory is a branch instruction ([0049 and 0059]: wherein it is determined whether a current instruction of instructions fetched from a system memory is a branch instruction or not) if the current instruction is not a branch instruction ([0049 and 0059]: wherein it is determined that a current instruction is not a branch instruction) if the current instruction is a branch instruction, then proceeding to step 3; step 3: determining whether the fetched current branch instruction hits a branch target instruction trace cache; if not hitting, taking a jumping address of the branch instruction as a next instruction-fetching address, and establishing a corresponding ahead predictable branch instruction trace cache entry; if hitting, proceeding to step 4; step 4: fetching a next instruction to be executed directly from the branch target instruction trace cache, and predicting a jump of the next instruction to be executed according to the ahead branch instruction trace cache prediction table; if the next instruction to be executed is a branch instruction and a jump is predicted, taking a jumping address of a target instruction of the
branch instruction stored in the branch target instruction trace cache as a next instruction-fetching address; if the next instruction to be executed is not a branch instruction or a jump is not predicted, taking a sequential address of the target instruction of the branch instruction stored in the branch target instruction trace cache as the next instruction-fetching address (Note: The following limitations are interpreted under broadest reasonable interpretation and are thus not required because they fall under a contingent limitation that is not required to occur. The examiner interprets the claim such that the condition of “a current instruction not being a branch instruction occurs”, thus steps 3-4 of claim 1 are not required as they are only required if a current instruction is a branch instruction)
Zhang does not disclose accessing a prediction table based on branch history information and a branch target instruction trace cache in a first level pipeline stage,
fetching contents of the ahead branch instruction trace cache prediction table and a branch target instruction trace cache and instructions in a system cache in a second level pipeline stage nor if the current instruction is not a branch instruction, sequentially fetching instructions in order. Zhang discloses accessing a prediction table and a branch target instruction trace cache, as well as fetching contents of an ahead branch instruction trace cache prediction table, branch target instruction trace cache and instructions in a system memory. However, Zhang does not disclose first or second level pipeline stages; nor does Zhang explicitly disclose a cache memory.
Nagao discloses accessing a prediction table based on branch history information and a branch target instruction cache in a first level pipeline stage ([0011-0013]: wherein a branch history table and a branch target address cache in a first fetch stage are accessed simultaneously (See Figs. 13-14)) fetching contents of the prediction table and a branch target instruction cache and instructions in a system cache in a second level pipeline stage ([0011-0013]: wherein contents of the prediction table and branch target instruction cache and instructions in a system instruction cache are fetched in a second fetch stage (See Figs. 13-14))
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify accessing and fetching of contents of prediction tables, branch instruction trace cache, and an instruction cache of Zhang to be performed in first and second fetch stages as the accessing and fetching performed in Nagao. It would have been obvious to one of ordinary skill in the art because including multiple fetch stages can enable fetch operations across multiple stages and allow different CPU units to engage in fetch operations simultaneously, thus optimizing utilization of CPU fetch resources and increasing instruction-level parallelism.
It would have been further obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the system memory of Zhang to be a system cache memory as in Nagao. It would have been obvious to one of ordinary skill in the art because it would have been the simple substitution of one known element (using a system cache as taught in Nagao) for another (using a generic system memory as taught in Zhang) to obtain predictable results (fetching instructions from a system cache) (MPEP 2143, Example B).
The combination of Zhang and Nagao does not explicitly disclose if the current instruction is not a branch instruction, sequentially fetching instructions in order. Zhang discloses determining that a current instruction is not a branch instruction, and if the instruction is not a branch instruction, the control method ends. Thus, Zhang does not explicitly disclose fetching of instructions after the control method ends.
Ishii discloses if the current instruction is not a branch instruction, sequentially fetching instructions in order ([0074-0077]: wherein if no instruction flow changing instructions (e.g. no branches) are identified, sequential addresses in memory are used to fetch instructions in order)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify fetching of instructions of Zhang to sequentially fetch instructions in order when it is determined that the current instruction is not a branch instruction as taught in Ishii. It would have been obvious to one of ordinary skill in the art because it would have been applying a known technique (fetching instructions from sequential addresses when an instruction is not a branch instruction as taught in Ishii) to a known device (method and system of Zhang that fetches instructions that include non-branch instructions) ready for improvement to yield predictable results (a system/method that sequentially fetches instructions in order when a current instruction is not a branch instruction) for the benefit of efficient instruction fetching in a system (MPEP 2143, Example D).
In regards to claim 1, Zhang discloses A method for ahead predicting a direct jump ([0049-0053]: wherein look-ahead or advanced predicting of a direct jump is performed), comprising the steps of: step 1: accessing a prediction table based on branch history information and a branch target instruction trace cache ([0049, 0051, 0053 and 0055]: wherein a prediction table (interpreted as the combination of a current branch prediction table and a look-ahead/advanced prediction table) based on branch history information (numerical values indicating whether branch jumps or not based on previous branch execution) and a branch instruction tracking cache are accessed (also see [0064] for clarity regarding branch history information as it states that updates to the prediction table are made after branch execution, thus the prediction table stores previous or historical branch information)) wherein the prediction table based on the branch history information comprises a current branch prediction table and an ahead branch instruction trace cache prediction table ([0049, 0051, 0053 and 0055]: wherein the prediction table based on branch history information (numerical values indicating whether branch jumps or not based on previous branch execution) comprises a current branch prediction table and a look-ahead/advanced prediction table (e.g. 
ahead branch instruction trace cache prediction table) (also see [0064] for clarity regarding branch history information as it states that updates to the prediction table are made after branch execution, thus the prediction table stores previous or historical branch information)) and fetching contents of the ahead branch instruction trace cache prediction table and a branch target instruction trace cache and instructions in a system memory ([0049-0055, 0058 and 0065]: wherein numerical values, indicating whether branch jump target instructions will jump or not, of the look-ahead/advanced branch prediction table and contents of the branch instruction tracking cache are obtained (i.e. fetched). Wherein the processor fetches program code from a system memory) step 2: determining whether a current instruction of the instructions fetched from a system memory is a branch instruction ([0049 and 0059]: wherein it is determined whether a current instruction of instructions fetched from a system memory is a branch instruction or not) if the current instruction is not a branch instruction ([0049 and 0059]: wherein it is determined that a current instruction is not a branch instruction, the control method ends) if the current instruction is a branch instruction, then proceeding to step 3 ([0049 and 0061]: wherein it is determined that the current instruction is a branch instruction, then the method proceeds) step 3: determining whether the fetched current branch instruction hits a branch target instruction trace cache ([0049, 0053, 0055 and 0061]: wherein it is determined if the current branch instruction includes information related to the branch instruction stored in a branch instruction tracking cache (e.g.
determined whether the branch hits in the branch instruction tracking cache)) if not hitting, taking a jumping address of the branch instruction as a next instruction-fetching address, and establishing a corresponding ahead predictable branch instruction trace cache entry ([0050-0051 and 0061-0064]: wherein if the current branch instruction includes no entry in the branch instruction tracking cache, an entry is established/updated in the advanced/look-ahead branch prediction table. Wherein the current branch instruction is determined to jump at step 230, and thus a jumping address of the branch instruction is used as a next instruction-fetching address) if hitting, proceeding to step 4; step 4: fetching a next instruction to be executed directly from the branch target instruction trace cache, and predicting a jump of the next instruction to be executed according to the ahead branch instruction trace cache prediction table ([0053-0056]: wherein if the current branch instruction includes a related entry in the branch instruction tracking cache, a next instruction to control execution is directly obtained from the branch instruction tracking cache, and a jump of the next instruction to be executed is predicted according to the advanced/look-ahead branch prediction table) if the next instruction to be executed is a branch instruction and a jump is predicted, taking a jumping address of a target instruction of the branch instruction stored in the branch target instruction trace cache as a next instruction-fetching address ([0053-0056]: wherein if the next instruction is a branch instruction where a jump is predicted, a jumping address of the target instruction of the branch instruction stored in the branch instruction tracking cache is used as the next instruction-fetching address) if the next instruction to be executed is not a branch instruction or a jump is not predicted, taking a sequential address of the target instruction of the branch instruction stored in the branch target instruction trace cache as the next instruction-fetching address ([0053-0056]: wherein if the next instruction is a branch instruction where a jump is not predicted, a sequential address of the target instruction of the branch instruction stored in the branch instruction tracking cache is used as the next instruction-fetching address)
Zhang does not disclose accessing a prediction table based on branch history information and a branch target instruction trace cache in a first level pipeline stage,
fetching contents of the ahead branch instruction trace cache prediction table and a branch target instruction trace cache and instructions in a system cache in a second level pipeline stage nor if the current instruction is not a branch instruction, sequentially fetching instructions in order. Zhang discloses accessing a prediction table and a branch target instruction trace cache, as well as fetching contents of an ahead branch instruction trace cache prediction table, branch target instruction trace cache and instructions in a system memory. However, Zhang does not disclose first or second level pipeline stages; nor does Zhang explicitly disclose a cache memory.
Nagao discloses accessing a prediction table based on branch history information and a branch target instruction cache in a first level pipeline stage ([0011-0013]: wherein a branch history table and a branch target address cache in a first fetch stage are accessed simultaneously (See Figs. 13-14)) fetching contents of the prediction table and a branch target instruction cache and instructions in a system cache in a second level pipeline stage ([0011-0013]: wherein contents of prediction table and branch target instruction cache and instructions in a system instruction cache are fetched in a second fetch stage (See Figs. 13-14))
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify accessing and fetching of contents of prediction tables, branch instruction trace cache, and an instruction cache of Zhang to be performed in first and second fetch stages as the accessing and fetching performed in Nagao. It would have been obvious to one of ordinary skill in the art because including multiple fetch stages can enable fetch operations across multiple stages and allow different CPU units to engage in fetch operations simultaneously, thus optimizing utilization of CPU fetch resources and increasing instruction-level parallelism.
It would have been further obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the system memory of Zhang to be a system cache memory as in Nagao. It would have been obvious to one of ordinary skill in the art because it would have been the simple substitution of one known element (using a system cache as taught in Nagao) for another (using a generic system memory as taught in Zhang) to obtain predictable results (fetching instructions from a system cache) (MPEP 2143, Example B).
The combination of Zhang and Nagao does not explicitly disclose if the current instruction is not a branch instruction, sequentially fetching instructions in order. Zhang discloses determining that a current instruction is not a branch instruction, and if the instruction is not a branch instruction, the control method ends. Thus, Zhang does not explicitly disclose fetching of instructions after the control method ends.
Ishii discloses if the current instruction is not a branch instruction, sequentially fetching instructions in order ([0074-0077]: wherein if no instruction flow changing instructions (e.g. no branches) are identified, sequential addresses in memory are used to fetch instructions in order)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify fetching of instructions of Zhang to sequentially fetch instructions in order when it is determined that the current instruction is not a branch instruction as taught in Ishii. It would have been obvious to one of ordinary skill in the art because it would have been applying a known technique (fetching instructions from sequential addresses when an instruction is not a branch instruction as taught in Ishii) to a known device (method and system of Zhang that fetches instructions that include non-branch instructions) ready for improvement to yield predictable results (a system/method that sequentially fetches instructions in order when a current instruction is not a branch instruction) for the benefit of efficient instruction fetching (MPEP 2143, Example D).
In regards to claim 2, the combination of Zhang, Nagao and Ishii discloses The method according to claim 1 (see rejection of claim 1 above) wherein in step 3, determining whether the branch target instruction hits the branch instruction trace cache (Zhang [0049, 0053, 0055 and 0061]: wherein it is determined if the current branch instruction includes information related to the branch instruction stored in a branch instruction tracking cache (e.g. determined whether the branch hits in the branch instruction tracking cache))
The combination of Zhang, Nagao and Ishii thus far does not disclose further comprises: determining whether the branch instruction matches a corresponding tag in the branch target instruction trace cache; and if matching, the branch target instruction trace cache is hit; if not matching, the branch target instruction trace cache is not hit. Zhang does disclose determining whether a branch instruction includes a related entry in a branch instruction tracking cache, and one of ordinary skill in the art would understand that a cache structure includes a tag. However, Zhang does not explicitly disclose determining whether a branch instruction includes a related entry in a branch instruction tracking cache by matching a tag in the branch instruction tracking cache.
Nagao discloses further comprises: determining whether the branch instruction matches a corresponding tag in the branch target instruction cache; and if matching, the branch target instruction trace cache is hit; if not matching, the branch target instruction cache is not hit. ([0011-0013]: wherein it is determined whether a branch instruction matches a corresponding instruction address (tag) in the branch target cache, and if matching a hit occurs. If not matching the branch target cache is not hit)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the system of Zhang which includes a branch instruction tracking cache to determine if a hit occurs in the branch instruction tracking cache by matching an instruction address of a branch instruction to an address (tag) of a branch cache as taught in Nagao. It would have been obvious to one of ordinary skill in the art because it would have been applying a known technique (determining if a hit occurs in a branch cache by matching a branch instruction address to an entry of a branch cache as taught in Nagao) to a known device (method and system of Zhang that determines if a hit occurs in a branch instruction tracking cache) ready for improvement to yield predictable results (a system/method that determines if a branch instruction hits an entry of a branch instruction tracking cache by matching an instruction address (i.e. tag)) for the benefit of efficiently accessing/indexing a cache data structure (e.g. branch instruction tracking cache) (MPEP 2143, Example D).
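The tag-match hit determination discussed above may be sketched as follows. The direct-mapped organization, the entry count, and all names are assumptions of this illustration only; they are not details taken from Zhang or Nagao.

```python
NUM_ENTRIES = 16  # illustrative cache size (assumed, not from any reference)

def btic_lookup(btic, branch_address):
    """Return the entry on a tag match (hit), or None on a miss."""
    index = branch_address % NUM_ENTRIES   # low-order bits select the entry
    tag = branch_address // NUM_ENTRIES    # remaining bits form the stored tag
    entry = btic[index]
    if entry is not None and entry["tag"] == tag:
        return entry                       # hit: the tags match
    return None                            # miss: no entry, or tags differ
```

Two addresses that share an index but differ in their tag illustrate why the tag compare, rather than the index alone, determines a hit.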
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang, CN 109783143 (cited on the IDS filed on 11/29/2023; note the examiner has attached a machine translation of this document, which is used in the citations of the rejections below), in view of Nagao, PGPUB No. 2014/0019722, and further in view of Choudhary, PGPUB No. 2017/0083333.
In regards to claim 3, Zhang discloses A branch instruction trace cache for ahead predicting a direct jump ([0050 and 0053-0054]: wherein a branch instruction tracking cache for ahead predicting of a direct jump is disclosed) comprising: a prediction table based on branch history information for achieving two-jump predictions ([0049 and 0051-0055]: wherein the prediction table based on branch history information (numerical values indicating whether branch jumps or not based on previous branch execution) comprises a current branch prediction table and a look-ahead/advanced prediction table (e.g. ahead branch instruction trace cache prediction table) for achieving continuous jump (two-jump) predictions (also see [0064] for clarity regarding branch history information as it states that updates to the prediction table are made after branch execution, thus the prediction table stores previous or historical branch information)) wherein the prediction table based on the branch history information comprises a current branch prediction table and an ahead branch instruction trace cache prediction table ([0049, 0051, 0053 and 0055]: wherein the prediction table based on branch history information (numerical values indicating whether branch jumps or not based on previous branch execution) comprises a current branch prediction table and a look-ahead/advanced prediction table (e.g. ahead branch instruction trace cache prediction table) (also see [0064] for clarity regarding branch history information as it states that updates to the prediction table are made after branch execution, thus the prediction table stores previous or historical branch information))
a branch target instruction trace cache comprising an entry, wherein the entry as a branch target instruction trace cache entry comprises: a target instruction of the branch instruction, a sequential address of the target instruction of the branch instruction, a jumping address of the target instruction of the branch instruction ([0052 and 0062]: wherein the branch instruction tracking cache comprises at least one entry, wherein the entry comprises a first associated instruction (e.g. target instruction of the branch instruction), sequential instruction address of the first associated instruction and a jump address of the target instruction of the branch instruction)
and a system memory ([0065]) the system accesses the branch target instruction trace cache and simultaneously accesses the current branch prediction table and the ahead branch instruction trace cache prediction table via a current instruction address ([0067-0069]: wherein a branch target instruction trace cache is accessed. Wherein the current branch prediction table and a look-ahead/advanced prediction table are accessed simultaneously using an instruction address) and if the current branch prediction table predicts that the instruction does not jump, then the processor sequentially fetches instructions in order in ([0045, 0049-0051 and 0066]: wherein if the current prediction table predicts that the instruction does not jump, then the processor sequentially fetches instructions in order starting at instr_1)
if the current branch prediction table predicts an instruction jump and the branch target instruction trace cache is hit, the processor fetches a next instruction to be executed from the branch target instruction trace cache ([0053 and 0065]: wherein if the current branch prediction table predicts an instruction jump and the branch instruction tracking cache includes a related entry (e.g. an entry is hit) the processor fetches a next instruction to be executed from the branch instruction tracking cache) at the same time, the ahead branch instruction trace cache prediction table predicts an instruction stored in the branch target instruction trace cache, and if a jump is predicted, a next instruction-fetching source is a jumping address of a target instruction of the branch instruction ([0067-0071]: wherein at the same time the look-ahead/advanced prediction table predicts an instruction stored in the branch instruction tracking cache and if a jump is predicted a next-instruction fetching source is a jumping address (Va_t) of a target instruction of the branch instruction) if a jump is not predicted, the instruction-fetching source is the sequential address of the target instruction of the branch instruction. ([0067-0071]: if a jump is not predicted the instruction-fetching source is the sequential address (Va_x+1) of the target instruction of the branch instruction)
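The two-table selection described above (a current branch prediction table for the branch being fetched, and an ahead table for the target instruction already stored in the trace cache) may be sketched as follows. All names and structures are illustrative assumptions, not details taken from Zhang.

```python
def choose_fetch_source(predicts_current_jump, btic_entry, ahead_predicts_jump, seq_addr):
    """Pick the next fetch source from the two predictions (illustrative)."""
    # Current table says "no jump": fetch sequentially in order.
    if not predicts_current_jump:
        return seq_addr
    # Jump predicted and trace cache hit: the next instruction comes from the
    # trace cache, and the ahead table simultaneously predicts that stored
    # instruction to select the fetch source that follows it.
    if btic_entry["target_is_branch"] and ahead_predicts_jump:
        return btic_entry["target_jump_addr"]   # jump predicted one branch ahead
    return btic_entry["target_seq_addr"]        # sequential address of the target
```

Because the ahead table predicts the instruction already stored in the trace cache, the selection covers both jumps (the "two-jump" prediction) in one pass.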
Zhang does not disclose a branch target instruction trace cache comprising a plurality of entries, each entry storing a plurality of consecutive instructions containing a branch instruction, wherein each entry as a branch target instruction trace cache entry comprises: a tag; a system cache; that when in the first level pipeline stage the system simultaneously accesses the branch target instruction trace cache, the current branch prediction table, and the ahead branch instruction trace cache prediction table via a current instruction address, and if the current branch prediction table predicts that the instruction does not jump, the processor sequentially fetches instructions in order in the second level pipeline stage; or that the processor fetches a next instruction to be executed from the branch target instruction trace cache in a second level pipeline stage. Zhang discloses accessing a prediction table and a branch target instruction trace cache, as well as fetching contents of an ahead branch instruction trace cache prediction table, the branch target instruction trace cache, and instructions in a system memory. However, Zhang does not explicitly disclose first or second level pipeline stages, nor does Zhang explicitly disclose a cache memory.
Nagao discloses a system cache ([0013 and Fig. 15]); wherein, when in the first level pipeline stage, the system simultaneously accesses the branch target instruction cache and a prediction table via a current instruction address ([0011-0013]: wherein a branch history table and a branch target address cache are accessed simultaneously in a first fetch stage via a current instruction address (see Figs. 13-14)); wherein the processor fetches instructions in the second level pipeline stage ([0011-0013]: wherein the processor fetches instructions in a second fetch stage (see Figs. 13-14)); and wherein the processor fetches a next instruction to be executed in a second level pipeline stage ([0011-0013]: wherein a processor fetches next instructions to be executed in a second fetch stage).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the accessing and fetching of the contents of the prediction tables and branch instruction tracking cache of Zhang to be performed in first and second fetch stages, as the accessing and fetching are performed in Nagao. It would have been obvious to one of ordinary skill in the art because including multiple fetch stages enables fetch operations across multiple stages and allows different CPU units to engage in fetch operations simultaneously, thus optimizing utilization of CPU fetch resources and increasing instruction-level parallelism.
It would have been further obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the system memory of Zhang to be a system cache memory as in Nagao. It would have been obvious to one of ordinary skill in the art because it would have been the simple substitution of one known element (using a system cache as taught in Nagao) for another (using a system memory as taught in Zhang) to obtain predictable results (fetching instructions from a system cache) (MPEP 2143, Example B).
The combination of Zhang and Nagao does not disclose a branch target instruction trace cache comprising a plurality of entries, each entry storing a plurality of consecutive instructions containing a branch instruction, wherein each entry as a branch target instruction trace cache entry comprises: a tag. Zhang does disclose a branch instruction tracking cache including an entry comprising a target instruction of the branch instruction, a sequential address of the target instruction of the branch instruction and a jumping address of the target instruction of the branch instruction. However, Zhang does not disclose the cache storing a plurality of entries, each entry storing a plurality of consecutive instructions.
Choudhary discloses a branch target instruction cache comprising a plurality of entries, each entry storing a plurality of consecutive instructions containing a branch instruction, wherein each entry as a branch target instruction trace cache entry comprises: a tag ([0024-0029]: wherein a BTIC (element 102) comprises a plurality of entries, each entry storing a plurality of consecutive instructions containing a branch instruction, and wherein each entry comprises a tag (element 104t) (see Fig. 1A)).
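For illustration only (the entry layout below is the examiner's hypothetical paraphrase of the BTIC structure Choudhary describes, not a reproduction of Choudhary's implementation; the names `BTICEntry`, `btic_lookup`, and the tag width are assumptions):

```python
# Hypothetical layout of a branch target instruction cache (BTIC)
# entry: a tag plus a plurality of consecutive instructions, at least
# one of which is a branch instruction.

from dataclasses import dataclass

@dataclass
class BTICEntry:
    tag: int                  # address tag used for lookup
    instructions: list        # consecutive instructions incl. a branch

def btic_lookup(btic, fetch_addr, tag_bits=20):
    """Return the cached instruction group whose tag matches fetch_addr."""
    tag = fetch_addr >> tag_bits   # assumed tag derivation for the sketch
    for entry in btic:
        if entry.tag == tag:
            return entry.instructions   # hit: fetch from the BTIC entry
    return None                         # miss: fetch elsewhere
```

On a hit, the processor can fetch the stored consecutive instructions directly from the matching entry rather than from memory, which is consistent with the throughput rationale cited above.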
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the branch instruction tracking cache of Zhang to include a plurality of entries storing consecutive instructions, including branches and tags, as in the branch target instruction cache of Choudhary. It would have been obvious to one of ordinary skill in the art because it would allow the cache prediction structure to efficiently handle storage of conditional branches and minimize bubbles during processing of BTIC-hitting branches. Additionally, it would increase the throughput, or number of instructions that can be fetched and processed, in each cycle. (Choudhary [0023 and 0041])
Examiner Notes
12. The examiner suggests that the applicant amend claim 1 to remove the contingent limitations reciting “whether” or “if” to overcome any broad interpretations based on contingent limitations. The applicant should amend the claims such that each step is positively recited.
Conclusion
13. Any inquiry concerning this communication or earlier communications from the examiner should be directed to COURTNEY P SPANN whose telephone number is (571)431-0692. The examiner can normally be reached M-F, 9am-6pm, EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jyoti Mehta can be reached at 571-270-3995. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/COURTNEY P SPANN/ Primary Examiner, Art Unit 2183