Prosecution Insights
Last updated: April 19, 2026
Application No. 18/626,929

Hierarchical Trace Cache

Final Rejection — §103, §112, §DP
Filed
Apr 04, 2024
Examiner
DOMAN, SHAWN
Art Unit
2183
Tech Center
2100 — Computer Architecture & Software
Assignee
Apple Inc.
OA Round
2 (Final)
Grant Probability: 66% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 66% (183 granted / 275 resolved; +11.5% vs TC avg, above average)
Interview Lift: +23.4% (strong) higher allowance rate across resolved cases with an interview
Typical Timeline: 2y 9m average prosecution; 47 applications currently pending
Career History: 322 total applications across all art units
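The headline rates above are simple ratios over the examiner's docket history. A minimal sketch of the arithmetic, with the caveat that the implied Tech Center average is back-calculated here from the stated +11.5% delta (an assumption about how the tool defines it):

```python
# Reproducing the examiner stats quoted above from the raw counts.
# The implied Tech Center average is back-calculated from the stated
# "+11.5% vs TC avg" delta, an assumption about the dashboard's definition.

granted, resolved = 183, 275
delta_vs_tc = 11.5  # percentage points, as stated above

allow_rate = granted / resolved * 100   # ~66.5, displayed as 66%
tc_average = allow_rate - delta_vs_tc   # ~55.0, the implied TC baseline
```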

Statute-Specific Performance

§101: 2.8% (-37.2% vs TC avg)
§103: 47.2% (+7.2% vs TC avg)
§102: 18.0% (-22.0% vs TC avg)
§112: 26.3% (-13.7% vs TC avg)
Tech Center averages are estimates, based on career data from 275 resolved cases.

Office Action

Rejections: §103, §112, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claims 1, 2, 5, 14-17, and 20 have been amended. Claims 1-6, 14-17, and 20 have been examined. The specification objections in the previous Office Action have been addressed and are withdrawn. The double patenting rejections in the previous Office Action have been addressed and are withdrawn.

Claim Objections

Claim 5 is objected to because of the following informalities. Claim 5 recites, at lines 5-6, “the first-level trace cache storage circuitry of the second-level trace cache storage circuitry.” This appears to be a typographical error. Applicant may have intended “the first-level trace cache storage circuitry [[of]]or the second-level trace cache storage circuitry.”

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-6, 14-17, and 20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claim 1 is amended to recite tracking addresses of instructions in an execution path and predicting based on the tracked addresses. The Applicant has not indicated where support for this limitation may be found in the written description, nor is such support evident to the Examiner. The specification does not demonstrate that Applicant has made an invention that achieves the claimed function because the invention is not described with sufficient detail such that one of ordinary skill in the art can reasonably conclude that the inventor had possession of the claimed invention. Claims 14 and 20 include similar language and are similarly rejected. Claims 2-6 and 15-17 are rejected as depending from rejected base claims and failing to cure the deficiencies of those base claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent No. 7,555,633 by Smaus et al. (as cited by Applicant and hereinafter referred to as “Smaus”).

Regarding claims 1, 14, and 20, taking claim 1 as representative, Smaus discloses: an apparatus, comprising: processor circuitry configured to execute control transfer instructions (Smaus discloses, at Figure 1 and related description, a microprocessor having a branch prediction unit, which discloses an apparatus, comprising: processor circuitry configured to execute control transfer instructions.); prediction circuitry configured to predict directions of control transfer instructions (Smaus discloses, at Figure 1 and related description, a microprocessor having a branch prediction unit, which discloses prediction circuitry configured to predict directions of control transfer instructions.); trace cache circuitry that includes trace cache storage circuitry that includes first and second levels (Smaus discloses, at Figure 1 and related description, a trace cache, which discloses trace cache storage circuitry. As disclosed at col. 8, lines 59-61, the trace cache can be implemented in multiple tiers, which discloses first and second levels.), the trace cache circuitry configured to: identify traces of instructions that satisfy one or more criteria, wherein one or more of the identified traces include at least one internal taken control transfer instruction (Smaus discloses, at Figure 1 and related description, a trace cache, which discloses identifying traces of instructions that satisfy one or more criteria. Smaus also discloses, at col. 7, lines 64-67, traces that include predicted taken branches.); and store identified traces in one or both of: first-level trace cache storage circuitry; and second-level trace cache storage circuitry (Smaus discloses, at Figure 1 and related description, a trace cache, which discloses storing traces in the levels of the trace cache storage circuitry.); and prefetch circuitry configured to: track addresses of instructions in an execution path (Smaus discloses, at col. 7, lines 7-50, trace cache entries include tags that include some or all of the address bits identifying operations in the entry, i.e., instructions, which discloses tracking addresses of instructions in an execution path.); predict, based on the tracked addresses, that a trace will be executed by the processor circuitry (Smaus discloses, at Figure 1 and related description, a prefetch unit, which discloses prefetch circuitry. Smaus also discloses, at col. 6, lines 45-51, determining that it is likely that instructions will be needed in the near future, which discloses predicting that a trace will be executed. See also col. 8, lines 48-50, which discloses performing an analysis as to the likelihood of instructions being needed. Smaus discloses, at col. 7, lines 7-50, the trace cache entries include tags, i.e., addresses, which discloses predictions are based on the tracked addresses.); and prefetch the predicted trace from the second-level trace cache storage circuitry…in response to the prediction (Smaus discloses, at col. 6, lines 45-51, prefetching instructions determined as likely to be needed in the near future in response to the determination.).

Smaus does not explicitly disclose that the aforementioned prefetching is into the first level of trace cache. Instead, Smaus discloses prefetching into the instruction cache. However, Smaus discloses examples in which there are multiple levels of trace cache. See, e.g., col. 8, lines 59-61.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Smaus to include prefetching from the second-level trace cache to the first-level trace cache in order to improve performance by retaining instructions likely to be needed in faster memory, i.e., cache, thereby reducing access time.

Claims 2-4, 6, 15, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Smaus in view of US Publication No. 2023/0023860 by Holman et al. (hereinafter referred to as “Holman”).

Regarding claims 2 and 15, taking claim 2 as representative, Smaus, as modified, discloses the elements of claim 1, as discussed above. Smaus also discloses: the prefetch circuitry is configured to predict whether traces will be executed by the processor circuitry (Smaus discloses, at col. 6, lines 45-51, prefetching instructions determined as likely to be needed in the near future.). Smaus does not explicitly disclose that the aforementioned prefetching is based on signature information that is generated based on the addresses of the instructions in the execution path. However, in the same field of endeavor (e.g., prefetch), Holman discloses: prefetching using signatures generated based on addresses of instructions (Holman discloses, at ¶ [0021], prefetching using signatures generated based on addresses of instructions.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Smaus to include signature-based prefetching, as disclosed by Holman, in order to improve prefetch performance for instruction streams with branches. See, e.g., ¶ [0024].

Regarding claim 3, Smaus, as modified, discloses the elements of claim 2, as discussed above. Smaus does not explicitly disclose that the aforementioned signature information is generated by an exclusive-OR operation between a given address and a previous signature. However, in the same field of endeavor (e.g., prefetch), Holman discloses: signature information is generated by an exclusive-OR operation between a given address and a previous signature (Holman discloses, at ¶ [0021], generating signatures using an XOR of the current address and previous signatures.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Smaus to use XORs, as disclosed by Holman, because selecting a particular logic function is an obvious design choice and XORs are relatively simple and commonly used.

Regarding claim 4, Smaus, as modified, discloses the elements of claim 2, as discussed above. Smaus does not explicitly disclose the prediction is a lookahead prediction that predicts multiple cycles ahead of an address used to generate the signature information upon which the prediction is based. However, in the same field of endeavor (e.g., prefetch), Holman discloses: attempting to prefetch various distances ahead in the instruction stream (Holman discloses, at ¶ [0022], different lead times, which discloses predicting multiple cycles ahead of an address used to generate the signature information.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Smaus to use lookahead prediction, as disclosed by Holman, in order to improve performance by increasing the likelihood that needed instructions have been prefetched by the time they are needed.

Regarding claims 6 and 17, taking claim 6 as representative, Smaus, as modified, discloses the elements of claim 1, as discussed above. Smaus does not explicitly disclose next fetch predictor circuitry configured to predict a next fetch address for the processor circuitry; wherein the prefetch circuitry is further configured to prefetch a target address associated with the predicted trace into the next fetch predictor circuitry. However, in the same field of endeavor (e.g., prefetch), Holman discloses: a next fetch predictor (Holman discloses, at Figure 5 and related description, a next fetch predictor, which discloses next fetch predictor circuitry configured to predict a next fetch address for the processor circuitry.); and prefetching into the next fetch predictor (Holman discloses, at Figure 5 and related description, fetching into the next fetch predictor, which discloses prefetching a target address associated with the predicted trace into the next fetch predictor circuitry.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Smaus to include using a next fetch predictor, as disclosed by Holman, in order to improve performance by providing relatively fast prediction results. See, e.g., ¶ [0046].

Claims 5 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Smaus in view of US Publication No. 2017/0286304 by Peled et al. (hereinafter referred to as “Peled”).

Regarding claims 5 and 16, taking claim 5 as representative, Smaus, as modified, discloses the elements of claim 1, as discussed above. Smaus also discloses: the trace cache circuitry …[is] configured to store a prefetched predicted trace from the second-level trace cache storage circuitry and provide the prefetched predicted trace for execution… (Smaus discloses, at Figure 1 and related description, a trace cache, which discloses trace cache storage circuitry and providing prefetched traces for execution. As disclosed at col. 8, lines 59-61, the trace cache can be implemented in multiple tiers, which discloses first and second levels.).
Smaus does not explicitly disclose the aforementioned storing is by prefetch buffer circuitry included in the aforementioned trace cache circuitry, wherein the prefetch buffer circuitry includes a smaller number of entries configured to store traces than the first-level trace cache storage circuitry of the second-level trace cache storage circuitry, and to prefetch the predicted trace to the first-level trace cache storage circuitry, the prefetch is configured to promote the prefetched predicted trace from the prefetch buffer circuitry based on one or more execution metrics corresponding to execution of the prefetch predicted trace from the prefetch buffer circuitry.

However, in the same field of endeavor (e.g., prefetch), Peled discloses: a prefetch buffer and promoting the prefetched data from the mid-level cache circuitry based on one or more execution metrics corresponding to execution of the prefetched data from the prefetch buffer circuitry (Peled discloses, at Figure 20 and related description, a mid-level cache, which discloses a prefetch buffer. Peled also discloses, Id., promoting the data to the first-level cache based on a request to read the data, which discloses prefetch buffer circuitry configured to store a prefetched predicted trace from the second-level trace cache storage circuitry and provide the prefetched predicted trace for execution, and promoting the prefetched predicted trace from the prefetch buffer circuitry based on one or more execution metrics corresponding to execution of the prefetch predicted trace.).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Smaus to use a three-level hierarchical cache arrangement, as disclosed by Peled, to improve performance by allowing greater flexibility in tuning cache performance based on the obvious design choices implicit in cache design, such as the tradeoff between size and latency.

Peled does not explicitly disclose that the mid-level cache, i.e., prefetch buffer, has fewer entries than the first-level or second-level caches. However, selection of the number of entries is an obvious design choice based on well-known tradeoffs, such as cost versus speed and size versus power. A person having ordinary skill in the art would have considered the various options and selected relative sizes according to circumstances. Therefore, it would have been obvious to have the prefetch buffer have fewer entries.

Response to Arguments

On pages 9-10 of the response filed December 22, 2025 (“response”), the Applicant argues, “the cited references, taken singly or in combination, do not teach or suggest to ‘prefetch [a] predicted trace from the second-level trace cache storage circuitry to the first-level trace cache storage circuitry in response to the prediction’ ‘that [the] trace will be executed by the processor circuitry’ as recited in claim 1. Applicant has also amended claim 1 to clarify that the prediction is ‘based on’ ‘tracked addresses’ ‘in an execution path.’” In support of this position the Applicant argues, “Smaus’ prefetching into an instruction cache is not in response to a ‘prediction’ ‘that a trace will be executed by the processor circuitry,’ particularly where the prediction is ‘based on’ ‘tracked addresses’ ‘in an execution path’ as recited in amended claim 1.”

Though fully considered, the Examiner respectfully disagrees. As an initial matter, the Applicant has not indicated where support for this amendment may be found in the written description, nor is such support evident to the Examiner. The broadest reasonable interpretation is broad indeed. The plain meaning of tracking addresses of instructions in an execution path is clear, and is interpreted as reading on any recordation of instruction addresses. The meaning of basing prediction on the tracked addresses is less clear. In what way is the prediction based on the tracked addresses? For purposes of examination, the limitation is interpreted to read on any prediction that utilizes the recorded addresses. As discussed above, Smaus discloses, e.g., at col. 7, lines 7-19, trace cache entries include tags, which include all or some of the address bits identifying an operation. This discloses the claimed tracking addresses of instructions in an execution path. Smaus also discloses, Id., searching the trace caches using the tags to determine whether or not a particular trace is found. Whether or not the trace is found impacts the decision of whether or not to prefetch the entry. This discloses predicting based on the tracked addresses. Accordingly, the Applicant’s arguments are deemed unpersuasive.

On page 10 of the response the Applicant argues, “As discussed during the phone interview, Smaus states that when ‘a trace is evicted from the trace cache 160, there may be a fairly high probability that the instructions included in the evicted trace will be re-executed in the near term.’ Smaus at 6:45-48 (emphasis added). Eviction of a trace, however, is essentially the opposite of prediction that the trace will be executed. Rather, traces are typically evicted when they are less likely to be executed (e.g., to make room for traces that are more likely to be executed according to a replacement scheme such as least-recently-used (LRU)). Therefore, while Smaus stores the instructions from an evicted trace in a trace cache, the eviction is not a prediction that the trace will be executed. Rather, the eviction seems to be a prediction that the trace is less likely to be executed than other traces and has the result that the trace cannot be executed as a trace (although individual instructions may be fetched and executed from Smaus’ instruction cache).”

Though fully considered, the Examiner respectfully disagrees. The reason a trace cache entry is evicted is because a new trace is generated and the trace cache does not have room for the new trace. See, e.g., Smaus, col. 7, lines 40-50. This in and of itself has nothing to do with likelihood of execution. It is a simple matter: the trace cache is full, space is required for a new entry, and one of the current entries must go. In practice, which must go is a decision that is made not randomly, but intelligently, to attempt to minimize performance impacts due to cache misses. So, algorithms like LRU are commonly employed. Again, identifying that an entry was least recently used is not a prediction that the entry will not be used. In any event, Smaus discloses that after an entry is selected for eviction, a prediction is made as to whether the entry is likely to be used. See, e.g., Smaus, col. 6, lines 45-51. If the prediction indicates that the instructions of the trace will be executed, the trace can be prefetched. Id. If the prediction indicates that the instructions of the trace are not likely to be executed, prefetching of the trace is inhibited. See, e.g., Smaus, col. 8, lines 48-58. To reiterate, the Examiner agrees that the eviction is not a prediction that a trace will be executed. Neither is the eviction a prediction that the trace will not, or is less likely to be, executed. Instead, the eviction is simply a function of limited memory capacity. However, after the eviction, a prediction is made as to whether or not the trace will be executed, and a prefetch decision is made based on that prediction. Accordingly, the Applicant’s arguments are deemed unpersuasive.

On pages 10-11 of the response the Applicant argues, “Second, the Examiner noted on the phone call that Smaus’ instructions from the trace are prefetched into the instruction cache because they may be ‘re-executed in the near term,’ which seems similar to prediction that the instructions will be executed. Given that Smaus’ prefetch into the instruction cache is based on an eviction from the trace cache, however, any such prediction would not be ‘based on’ ‘tracked addresses’ ‘in an execution path’ as recited in amended claim 1. To the extent that Smaus or the other references discuss prefetching based on addresses in an execution path, the Office Action appears to admit that none of this prefetching is into a trace cache.”

Though fully considered, the Examiner respectfully disagrees. As discussed above, Smaus discloses that the trace entries include tags, which disclose tracked addresses, and that the tags are used in operations contributing to prefetch decisions. This discloses that extremely broad and unsupported limitation of predicting “based on” tracked addresses. Accordingly, the Applicant’s arguments are deemed unpersuasive.

On page 11 of the response the Applicant argues, “Third, the Examiner’s proposed reason to combine to ‘improve performance’ does not appear to be relevant, given that Smaus’ prefetching in response to an eviction does not actually improve performance. Rather, Smaus’ prefetch appears to ameliorate potential negative performance effects of the eviction from the trace cache, but it seems that executing from the trace cache (instead of evicting) would have better performance than from the instruction cache.”

Though fully considered, the Examiner respectfully disagrees. The Examiner agrees that executing from the trace cache instead of evicting would have better performance. However, the trace cache has limited size, and not all traces can be stored therein. So, rather than have to take the performance hit associated with waiting for the instructions to be needed again, Smaus discloses prefetching them in response to predicting that they are likely to be executed. Accordingly, the Applicant’s arguments are deemed unpersuasive.
On page 11 of the response the Applicant argues, “Finally, the proposed modification does not seem to make sense, as it would appear to modify Smaus to prefetch into a trace cache (instead of an instruction cache) based on an eviction from a trace cache. To the extent that the office action is generally asserting that prefetching between trace cache levels would have been obvious, none of the references suggests such prefetching. Therefore, this reasoning would appear to involve impermissible hindsight in view of claim 1.”

Though fully considered, the Examiner respectfully disagrees. As discussed above, Smaus discloses that an eviction triggers a prediction about whether the evicted trace is likely to be executed. Smaus discloses prefetching based on that prediction. Accordingly, the Applicant’s arguments are deemed unpersuasive.

On page 12 of the response the Applicant argues, “The Office Action cites U.S. 2017/0286304 (Peled) for claim 5 prior to amendment. Office Action at 9-10. Peled discusses a last-level cache, a mid-level cache, and a first-level cache. Applicant submits that none of these caches corresponds to a ‘prefetch buffer’ with a ‘smaller number of entries’ than the caches between which it resides. To facilitate discussion, Applicant notes that non-limiting example embodiments covered by claim 5 may advantageously maintain efficiency in the first-level trace cache by promoting traces from the prefetch buffer into the first-level trace cache only if they are actually used, thereby preventing pollution of the first-level trace cache if the prediction was incorrect.”

These remarks have been fully considered and, in light of the claim amendments presented in the response, are deemed persuasive, in part. Please see above for new grounds of rejection of the amended claims. While Peled is silent about the relative number of entries, selecting the number of entries of a cache is not considered inventive. Instead, selecting the number of entries is dictated by circumstances and based on well-known design considerations. Therefore, the newly added limitations regarding relative size are obvious in view of the cited references. Regarding promotion only if a trace is actually used, the Examiner notes that these features are not recited in the claims. The Examiner also notes that the arguments appear to be describing a demand fetch or promotion. Demand fetching is well-known and is disclosed throughout Peled. Accordingly, the Applicant’s arguments are deemed unpersuasive.

Conclusion

Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAWN DOMAN, whose telephone number is (571) 270-5677. The examiner can normally be reached Monday through Friday, 8:30am-6pm Eastern Time. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jyoti Mehta, can be reached at 571-270-3995. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHAWN DOMAN/
Primary Examiner, Art Unit 2183
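The signature mechanism the §103 rejection attributes to Holman (generating a path signature by XOR-ing each instruction address into the previous signature, then predicting the next trace from that signature) can be sketched as follows. This is an illustrative model only: the 16-bit mask, table structure, and training policy are assumptions of this sketch, not taken from Holman, Smaus, or the claims.

```python
# Illustrative model of signature-based trace prefetch prediction.
# The running signature is the XOR of the previous signature with the
# current trace address; a table maps each signature to the trace address
# observed next. Mask width, table layout, and training policy are
# hypothetical choices for this sketch.

MASK = 0xFFFF  # keep signatures to a fixed 16-bit width


def update_signature(prev_sig, addr):
    """Fold the current address into the running path signature via XOR."""
    return (prev_sig ^ addr) & MASK


class SignaturePrefetcher:
    def __init__(self):
        self.sig = 0
        self.table = {}  # signature -> trace address predicted to run next

    def observe(self, addr):
        """Record an executed trace address; return a prefetch candidate."""
        prediction = self.table.get(self.sig)  # look up before training
        self.table[self.sig] = addr            # train: prior signature -> addr
        self.sig = update_signature(self.sig, addr)
        return prediction


pf = SignaturePrefetcher()
# A 3-trace loop executed three times. Because a pure XOR signature carries
# the whole execution history, the loop's signatures only repeat after two
# iterations (six addresses XOR to zero), so predictions appear on pass three.
path = [0x1000, 0x2040, 0x3380] * 3
hits = [pf.observe(a) for a in path]
```

Real designs typically age or shift the signature so that old history drains out; the unbounded XOR here is the simplest form consistent with the rejection's description.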

Prosecution Timeline

Apr 04, 2024
Application Filed
Sep 24, 2025
Non-Final Rejection — §103, §112, §DP
Nov 14, 2025
Examiner Interview Summary
Nov 14, 2025
Applicant Interview (Telephonic)
Dec 22, 2025
Response Filed
Jan 16, 2026
Final Rejection — §103, §112, §DP (current)

Precedent Cases

Applications granted by the same examiner involving similar technology

Patent 12585469: Trace Cache Access Prediction and Read Enable. Granted Mar 24, 2026 (2y 5m to grant).
Patent 12572358: System, Apparatus and Methods for Minimum Serialization in Response to Non-Serializing Register Write Instruction. Granted Mar 10, 2026 (2y 5m to grant).
Patent 12561142: Method and System for Preventing Prefetching a Next Instruction Line Based on a Comparison of Instructions of a Current Instruction Line with a Branch Instruction. Granted Feb 24, 2026 (2y 5m to grant).
Patent 12554498: Quantum Computer with a Practical-Scale Instruction Hierarchy. Granted Feb 17, 2026 (2y 5m to grant).
Patent 12541368: Loop Execution in a Reconfigurable Compute Fabric Using Flow Controllers for Respective Synchronous Flows. Granted Feb 03, 2026 (2y 5m to grant).
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 66%
With Interview: 90% (+23.4%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate
Based on 275 resolved cases by this examiner. Grant probability derived from career allow rate.
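The with-interview figure follows from the examiner's career counts and the stated interview lift. A minimal sketch, assuming the lift is additive in percentage points on top of the raw allow rate and that the display rounds the result (the formula is an assumption, not documented by the tool):

```python
# Sketch of the projection arithmetic using the figures quoted above.
# Treating the interview lift as additive percentage points on the raw
# career allow rate is an assumption about how the dashboard derives 90%.

granted, resolved = 183, 275
interview_lift = 23.4  # percentage points, per the examiner stats

base = granted / resolved * 100                     # ~66.5, shown as 66%
with_interview = min(base + interview_lift, 100.0)  # ~89.9, shown as 90%
```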
