Prosecution Insights
Last updated: April 19, 2026
Application No. 17/699,062

CACHE STREAMING APPARATUS AND METHOD FOR DEEP LEARNING OPERATIONS

Status: Final Rejection (§103)
Filed: Mar 18, 2022
Examiner: KROFCHECK, MICHAEL C
Art Unit: 2138
Tech Center: 2100 — Computer Architecture & Software
Assignee: Intel Corporation
OA Round: 2 (Final)
Grant Probability: 81% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 81% (above average): 530 granted / 652 resolved, +26.3% vs TC average
Interview Lift: +17.1% among resolved cases with an interview (strong)
Typical Timeline: 2y 11m average prosecution; 20 applications currently pending
Career History: 672 total applications across all art units
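The headline numbers in this panel follow directly from the raw counts. As a quick arithmetic check (variable names are mine; the counts come from the card above):

```python
granted, resolved = 530, 652          # career counts from the card above
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # 81.3%, displayed as 81%

# The card reports this rate as +26.3 points vs the Tech Center average,
# which implies a TC average of roughly:
tc_avg = allow_rate - 0.263
print(f"Implied TC average: {tc_avg:.1%}")      # ~55.0%
```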

Statute-Specific Performance

§101: 5.3% (-34.7% vs TC avg)
§103: 50.6% (+10.6% vs TC avg)
§102: 15.7% (-24.3% vs TC avg)
§112: 17.8% (-22.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 652 resolved cases.

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This office action is in response to the amendment filed on 12/17/2025. The specification, figures 32-33, and claims 1, 3-5, 9, 12-13, and 17-24 have been amended. The objections and rejections from the prior correspondence that are not restated herein are withdrawn.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/17/2025 was filed after the mailing date of the non-final rejection on 9/18/2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Drawings

The drawings were received on 12/17/2025. These drawings are acceptable.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art.
The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “first logic” in claims 1, 3-5, 9, 12-13, 17, and 20-21 and “second logic” in claims 4-5, 12-13, and 20-21.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 1-6, 9-14, and 17-22 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US 2023/0289291), Gu et al. (US 2021/0397934), and O’Connor et al. (US 2007/0204121).

With respect to claim 1, Lee teaches of an apparatus comprising: a plurality of compute units to perform machine learning operations (fig. 3, items 314; paragraphs 49-50; where each of the neural engines performs machine learning operations); a cache subsystem comprising a hierarchy of cache levels, at least some of the cache levels shared by two or more of the plurality of compute units (fig. 2-3, items 240, 334; paragraphs 35, 50, 53, 55-56, 61; buffer 334 and the cache circuit make up the claimed hierarchy of cache levels. The cache circuit caches data from the system memory for faster access by the neural processor circuit. The buffer stores data input and output to/from the neural engines. As the neural engines perform operations on the data, the buffer and cache can be considered to be shared by the neural engines); and first logic to stream machine learning data in and out of the cache subsystem based on the machine learning operations (fig. 3, 6; paragraphs 55-56, 61, 85-94; where the data processor circuit, cache access circuit, and system memory access circuit prefetch large datasets from the system memory into the cache to more efficiently utilize the available bandwidth, which are then loaded into the buffer to be input to the neural engines, and output data from the neural engines to the buffer), the first logic to load data into the cache subsystem from memory before the data is needed by a first portion of the machine learning operations (fig. 3, 6; paragraphs 61, 85-94; where the cache access circuit and system memory access circuit prefetch large datasets from the system memory into the cache to more efficiently utilize the available bandwidth).

Lee fails to explicitly teach of ensuring that results produced by the first portion of machine learning operations are maintained in the cache subsystem until used by a second portion of the machine learning operations. However, Gu teaches of ensuring that results produced by the first portion of machine learning operations are maintained in the cache subsystem until used by a second portion of the machine learning operations (fig. 2; paragraphs 25-26; where the results of the first layer calculations are retained in the cache memory until after second layer calculations on those results are performed).

The combination of Lee and Gu fails to explicitly teach of wherein ensuring the results being maintained comprises tagging, with one or more cacheline bits of corresponding cache lines in which the results are stored, to indicate that the results are to be stored in the cache subsystem until being used by the second portion of the machine learning operations. However, O’Connor teaches of wherein ensuring the results being maintained comprises tagging, with one or more cacheline bits of corresponding cache lines in which the results are stored, to indicate that the results are to be stored in the cache subsystem (paragraphs 14-16; where the lock indication bit is stored in the cache line’s tag and indicates that the cache line is to remain within the cache memory hierarchy).
The combination of Lee, Gu, and O’Connor teaches of wherein ensuring the results being maintained comprises tagging, with one or more cacheline bits of corresponding cache lines in which the results are stored, to indicate that the results are to be stored in the cache subsystem until being used by the second portion of the machine learning operations (Gu, paragraphs 25-26; O’Connor, paragraphs 14-16; where, in the combination, Gu’s retaining of the calculation results in the cache until the second layer’s calculations are completed occurs via the cache tag lock bit of O’Connor).

Lee and Gu are analogous art because they are from the same field of endeavor, as they are directed to caching data used in machine learning. It would have been obvious to one of ordinary skill in the art having the teachings of Lee and Gu before the time of the effective filing of the claimed invention to retain the results of the operation in the cache until they are used by a later operation in Lee, as taught in Gu. Their motivation would have been to more efficiently utilize the cache and processing resources by minimizing data transmission between the cache and main memory (Gu; paragraphs 5-6, 9).

Lee, Gu, and O’Connor are analogous art because they are from the same field of endeavor, as they are directed to caching data. It would have been obvious to one of ordinary skill in the art having the teachings of Lee, Gu, and O’Connor before the time of the effective filing of the claimed invention to use the tag lock bits of O’Connor to lock the data in the cache in the combination of Lee and Gu. Their motivation would have been to more efficiently manage the cache lines.

With respect to claim 9, the combination of Lee, Gu, and O’Connor teaches of the limitations cited and described above with respect to claim 1, for the same reasoning as recited with respect to claim 1.
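For readers skimming the mechanics rather than the legal argument: the retention scheme the combination relies on (layer results kept resident, in the style of Gu, by a per-line tag lock bit in the style of O’Connor) can be modeled in a few lines. This is an illustrative sketch only; the class names and the FIFO replacement policy are invented, not taken from any cited reference:

```python
class CacheLine:
    """Minimal model of a cache line whose tag carries a lock bit."""
    def __init__(self, addr, data):
        self.addr = addr
        self.data = data
        self.locked = False   # stands in for the claimed "cacheline bits"

class LockingCache:
    """FIFO-replacement cache that never evicts a locked line."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = []       # oldest first

    def fill(self, addr, data, lock=False):
        if len(self.lines) >= self.capacity:
            self._evict_one()
        line = CacheLine(addr, data)
        line.locked = lock    # tag results so they survive replacement traffic
        self.lines.append(line)

    def _evict_one(self):
        # Locked lines are skipped: tagged results stay resident until consumed.
        for i, line in enumerate(self.lines):
            if not line.locked:
                del self.lines[i]
                return
        raise RuntimeError("all lines locked; nothing evictable")

    def consume(self, addr):
        # The second portion of the computation reads a result and clears its
        # lock, making the line an ordinary eviction candidate again.
        for line in self.lines:
            if line.addr == addr:
                line.locked = False
                return line.data
        return None           # miss

# The first portion of the computation locks its result; later fills evict
# around it until the second portion consumes it.
cache = LockingCache(capacity=2)
cache.fill(0x100, "layer-1 result", lock=True)
cache.fill(0x200, "scratch")
cache.fill(0x300, "scratch")   # evicts 0x200, not the locked 0x100
result = cache.consume(0x100)
```

In this toy model, replacement traffic between the locking fill and the consuming read can never displace the tagged result, which is the behavior the claim language describes.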
With respect to claim 17, the combination of Lee, Gu, and O’Connor teaches of the limitations cited and described above with respect to claim 1, for the same reasoning as recited with respect to claim 1. The combination of Lee, Gu, and O’Connor also teaches of a non-transitory machine-readable medium having program code stored thereon which, when executed by a machine, causes the machine to perform the operations of claim 1 (Lee, paragraphs 27, 87, 105-106; Gu, paragraphs 47-48; where code is compiled and executed to carry out the operations. Since the code is compiled and executed, it must be stored in a memory in order for compilation and execution to occur). The reasons for obviousness are the same as indicated above with respect to claim 1.

With respect to claims 2, 10, and 18, Lee teaches of wherein the first portion of the machine learning operations comprise a forward-propagation sequence of operations to produce activation results (paragraphs 42-48; where forward propagation occurs with an activation function that weights the output of the node) and the second portion of the machine learning operations comprise a back-propagation sequence of operations which are to use the activation results (paragraphs 42-48; where back propagation is performed on the results to adjust the coefficients in order to improve the value of the loss function).

With respect to claims 3, 11, and 19, the combination of Lee, Gu, and O’Connor teaches of wherein the data streaming hardware logic is to cause the activation results to be flushed from the cache subsystem following use by the back-propagation sequence of operations (Gu, fig. 2; paragraphs 25-26; where after the second calculation is completed (the back propagation of Lee), the first calculation result retained in the cache is invalidated). The reasons for obviousness are the same as indicated above with respect to claim 1.
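The activation lifetime that claims 2-3 and their counterparts describe (forward propagation produces and retains activation results; back-propagation uses them once; they are then flushed) can likewise be sketched. This toy model is mine, not from Lee or Gu, and it stands in for the cache subsystem with a plain dictionary:

```python
class ActivationStore:
    """Stand-in for the cache subsystem: pinned activations keyed by layer."""
    def __init__(self):
        self._pinned = {}

    def pin(self, layer, tensor):
        # Forward pass: keep this layer's activation results resident.
        self._pinned[layer] = tensor

    def use_and_flush(self, layer):
        # Backward pass: read the activations once, then drop (flush) them.
        return self._pinned.pop(layer)

def train_step(layers, x, store):
    # Forward-propagation sequence: produce and pin activation results.
    for i, f in enumerate(layers):
        x = f(x)
        store.pin(i, x)
    # Back-propagation sequence: consume activations in reverse order; each
    # one is flushed from the store as soon as it has been used.
    for i in reversed(range(len(layers))):
        _ = store.use_and_flush(i)
    return x

store = ActivationStore()
out = train_step([lambda v: v + 1, lambda v: v * 2], 3, store)
# After the backward pass the store is empty: nothing lingers in the cache.
```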
With respect to claims 4, 12, and 20, Lee teaches of wherein the first logic is to be programmed to stream the machine learning data in and out of the cache subsystem by second logic of the plurality of compute units (fig. 3, 6; paragraphs 53, 55-56, 61, 85-94; where the neural task manager, data processor circuit, cache access circuit, and system memory access circuit prefetch large datasets from the system memory into the cache to more efficiently utilize the available bandwidth, which are then loaded into the buffer to be input to the neural engines, and output data from the neural engines to the buffer).

With respect to claims 5, 13, and 21, Lee teaches of wherein the second logic is to issue one or more commands to the first logic to cause the machine learning hardware logic to stream the machine learning data in and out of the cache subsystem (fig. 3, 6; paragraphs 53, 55-56, 61, 85-94; where the neural task manager, data processor circuit, cache access circuit, and system memory access circuit prefetch large datasets from the system memory into the cache to more efficiently utilize the available bandwidth, which are then loaded into the buffer to be input to the neural engines, and output data from the neural engines to the buffer. The neural task manager issues commands for the prefetching and the other operations that are to occur within the neural processor circuit).

With respect to claims 6, 14, and 22, Lee teaches of wherein the one or more commands are to indicate a particular set of data to be prefetched or maintained in the cache subsystem (fig. 6; paragraphs 53, 85-90; where the instructions indicate to prefetch the second input data of the second task).

Claim(s) 7, 15, and 23 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lee, Gu, and O’Connor as applied to claims 6, 14, and 22 above, and further in view of Moyer (US 2021/0182214).
With respect to claims 7, 15, and 23, the combination of Lee, Gu, and O’Connor fails to explicitly teach of wherein the one or more commands are to further indicate a particular cache level in which to prefetch or maintain the particular set of data. However, Moyer teaches of wherein the one or more commands are to further indicate a particular cache level in which to prefetch or maintain the particular set of data (paragraph 23; where the instructions can include explicit instructions to prefetch certain data to a particular specified level of cache).

Lee, Gu, O’Connor, and Moyer are analogous art because they are from the same field of endeavor, as they are directed to caching. It would have been obvious to one of ordinary skill in the art having the teachings of Lee, Gu, O’Connor, and Moyer before the time of the effective filing of the claimed invention to incorporate specifying the cache level into which data is prefetched in the combination of Lee, Gu, and O’Connor, as taught in Moyer. Their motivation would have been to increase the flexibility and control over the caching of data.

Claim(s) 8, 16, and 24 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lee, Gu, O’Connor, and Moyer as applied to claims 7, 15, and 23 above, and further in view of Kumar et al. (US 6,237,064).

With respect to claims 8, 16, and 24, the combination of Lee, Gu, O’Connor, and Moyer fails to explicitly teach of wherein the cache subsystem comprises a Level 2 (L2) cache, a Level 1 (L1) cache, and a Level 0 (L0) cache. However, Kumar teaches of wherein the cache subsystem comprises a Level 2 (L2) cache, a Level 1 (L1) cache, and a Level 0 (L0) cache (fig. 1; column 3, lines 1-29; where the cache system includes L0, L1, and L2 caches).

Lee, Gu, O’Connor, Moyer, and Kumar are analogous art because they are from the same field of endeavor, as they are directed to caching.
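One hypothetical shape for the command interface recited in claims 5-8 and their counterparts: the compute-side "second logic" issues descriptors telling the streaming "first logic" which dataset to prefetch or keep resident, and, per the Moyer-style extension, at which level of a Kumar-style L0/L1/L2 hierarchy. All field and type names here are invented for illustration:

```python
from dataclasses import dataclass
from enum import Enum

class Op(Enum):
    PREFETCH = "prefetch"    # load ahead of first use
    MAINTAIN = "maintain"    # keep resident until explicitly released

class Level(Enum):
    L0 = 0
    L1 = 1
    L2 = 2

@dataclass(frozen=True)
class StreamCommand:
    op: Op
    dataset: str    # which set of data (e.g. a layer's weights)
    level: Level    # which cache level to place or keep it in

def handle(cmd: StreamCommand, cached: dict, pinned: dict) -> None:
    """Toy 'first logic': apply one command to per-level bookkeeping sets."""
    if cmd.op is Op.PREFETCH:
        cached.setdefault(cmd.level, set()).add(cmd.dataset)
    else:  # Op.MAINTAIN: mark the dataset non-evictable at that level
        pinned.setdefault(cmd.level, set()).add(cmd.dataset)

# Example: prefetch weights into L1, pin activations in L0.
cached, pinned = {}, {}
handle(StreamCommand(Op.PREFETCH, "layer0.weights", Level.L1), cached, pinned)
handle(StreamCommand(Op.MAINTAIN, "layer0.activations", Level.L0), cached, pinned)
```

Separating the dataset field from the level field mirrors the claim structure: claims 6/14/22 require only the former, while claims 7/15/23 add the latter.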
It would have been obvious to one of ordinary skill in the art having the teachings of Lee, Gu, O’Connor, Moyer, and Kumar before the time of the effective filing of the claimed invention to incorporate the L0, L1, and L2 caches of Kumar into the cache system of the combination of Lee, Gu, O’Connor, and Moyer. Their motivation would have been to reduce the latency of the system (Kumar, column 2, lines 5-10).

Response to Arguments

Applicant's arguments with respect to independent claims 1, 9, and 17 have been considered but are moot because of the new reference(s) being applied, in light of the amendment, to the particular limitations the arguments are referencing. Thereby, the arguments no longer apply to the rejection.

Applicant's arguments filed 12/17/2025 have been fully considered but they are not persuasive. Applicant argues with respect to the examiner’s application of 35 U.S.C. 112(f) to the claimed “first logic” and “second logic,” as indicated above, alleging that the term “logic” is not a generic placeholder but is instead comparable to the terms “circuit” and “computing unit” and is sufficient structure to perform the claimed functions. The examiner disagrees. On page 10 of the applicant’s remarks, the applicant directs the examiner to paragraphs 313, 315, and 353 as evidence that the term “logic” defines sufficient structure to preclude the application of 35 U.S.C. 112(f). Paragraph 313 specifies that ML data management logic can be implemented in hardware, software, or any combination thereof. Paragraph 315 details some of the operations carried out by the ML data management logic. Paragraph 353 indicates that the steps disclosed may be embodied in machine-executable instructions or may be performed by specific hardware components or by any combination of them.
Equating the ML data management logic with a software implementation that must then be executed on compute units shows that, in this embodiment, the ML data management logic itself is not sufficient structure to perform the claimed operations, as it needs the structure of the compute units in order to realize its functionality. Thus, the term “logic” has been properly interpreted as a generic placeholder. The examiner encourages the applicant to use either “compute unit” or “circuit” to replace the term “logic” in the claims to provide sufficient structure in the claims to preclude the above application of 35 U.S.C. 112(f).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL C KROFCHECK, whose telephone number is (571) 272-8193. The examiner can normally be reached Monday - Friday, 8am-5pm, first Friday off. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tim Vo, can be reached at (571) 272-3642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

MICHAEL C. KROFCHECK
Primary Examiner
Art Unit 2138

/Michael Krofcheck/
Primary Examiner, Art Unit 2138

Prosecution Timeline

Mar 18, 2022
Application Filed
May 13, 2022
Response after Non-Final Action
Sep 15, 2025
Non-Final Rejection — §103
Dec 17, 2025
Response Filed
Mar 07, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591514
TRAFFIC AWARE SMART CACHING IN FABRIC SWITCHES
2y 5m to grant; granted Mar 31, 2026
Patent 12591509
RECONFIGURABLE PARTITIONING OF HIGH BANDWIDTH MEMORY
2y 5m to grant; granted Mar 31, 2026
Patent 12585579
LOCKED RAID WITH COMPRESSION FOR MEMORY INTERCONNECT APPLICATIONS
2y 5m to grant; granted Mar 24, 2026
Patent 12563242
UTILIZING A SINGLE BUFFER FOR A DYNAMIC NUMBER OF PLAYERS, EACH USING A DYNAMICALLY SIZED BUFFER
2y 5m to grant; granted Feb 24, 2026
Patent 12561090
NAND-BASED STORAGE DEVICE WITH PARTITIONED NONVOLATILE WRITE BUFFER
2y 5m to grant; granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 81%
With Interview: 98% (+17.1%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 652 resolved cases by this examiner. Grant probability derived from career allow rate.
